The Boss You Never Met
Somewhere in a Sydney warehouse right now, a worker is being told what to pick, how fast to pick it, and exactly when they've paused for too long — all by software. No human supervisor made those decisions. The algorithm did.
Across Australia, millions of workers already report to AI systems they've never seen and can't question. Algorithms assign shifts, monitor keystrokes, route deliveries, score phone calls, and flag workers who fall below automated performance thresholds. In some workplaces, the software can trigger a written warning before any human manager has reviewed the situation.
This isn't a future scenario. It's happening in Australian warehouses, call centres, delivery networks, and offices right now. And in February 2026, NSW became the first state in Australia to say this is a workplace safety problem.
How AI Manages Workers Across Australian Jobs
Algorithmic management looks different depending on where you work, but the pattern is the same: software makes decisions that used to belong to people.
Call centres are among the most intensively monitored workplaces in the country. AI systems route calls based on predicted complexity, analyse tone and sentiment in real time, coach agents through suggested responses, and score every interaction against KPI dashboards. Australia has around 29,400 call and contact centre workers with an AI exposure score of 7.4 out of 10 — one of the highest of any occupation.
Warehouses and distribution centres use management systems that direct workers through handheld scanners, tracking pick rates, items processed per hour, and "time off task" — the minutes a worker spends not scanning items. These systems can automatically generate performance warnings. There are 164,200 storepersons in Australia, and while their overall AI score of 4.2 reflects the physical nature of the work, the management layer above them is almost entirely automated.
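To make the mechanism concrete, here is a minimal sketch of how an automated flagging rule of this kind could work. The thresholds, field names, and logic are illustrative assumptions, not any vendor's actual system: real warehouse management software uses proprietary metrics and values.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real systems use proprietary, tuned values.
PICK_RATE_FLOOR = 100       # items per hour
TIME_OFF_TASK_LIMIT = 30    # minutes per shift

@dataclass
class ShiftRecord:
    worker_id: str
    items_picked: int
    hours_worked: float
    minutes_off_task: float  # gaps between scan events, inferred by the system

def flag_for_review(shift: ShiftRecord) -> list[str]:
    """Return the automated flags a rule-based system might raise.

    Note there is no human judgment anywhere in this path: the flags
    can feed straight into a warning workflow.
    """
    flags = []
    if shift.items_picked / shift.hours_worked < PICK_RATE_FLOOR:
        flags.append("pick rate below threshold")
    if shift.minutes_off_task > TIME_OFF_TASK_LIMIT:
        flags.append("excessive time off task")
    return flags

# A worker averaging 95 picks/hour with 41 minutes off task gets both flags.
print(flag_for_review(ShiftRecord("W1042", 760, 8.0, 41.0)))
```

The point of the sketch is how little context the rule sees: a toilet break, a jammed scanner, or helping a colleague all look identical to "time off task".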
Delivery drivers work under routing algorithms that determine which jobs they receive, what order to complete them, and how long each delivery should take. GPS tracking means the platform knows exactly where they are at every moment. Gig platforms like Uber Eats and DoorDash use algorithms to adjust pay in real time — the same delivery at the same time can pay different drivers different amounts. Australia has 84,900 delivery drivers and another 44,200 couriers and postal deliverers, most of whom interact more with an app than a human manager on any given shift.
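No platform publishes its pricing model, but a toy sketch shows how per-driver dynamic pay can diverge for an identical job. Everything here is a labelled assumption: the discount rule, the multiplier, and the idea of conditioning on acceptance rate are illustrative, not Uber Eats' or DoorDash's actual logic.

```python
def offer_pay(base_fee: float, demand_multiplier: float,
              driver_acceptance_rate: float) -> float:
    """Toy dynamic-pay rule: same job, different offers.

    Hypothetical logic -- real platforms' models are proprietary.
    The assumption illustrated: a driver who accepts almost everything
    may be offered less, because the algorithm predicts they will
    take the job anyway.
    """
    discount = 0.15 if driver_acceptance_rate > 0.9 else 0.0
    return round(base_fee * demand_multiplier * (1 - discount), 2)

# The same $8 delivery during a 1.2x demand surge:
print(offer_pay(8.00, 1.2, driver_acceptance_rate=0.95))  # frequent accepter
print(offer_pay(8.00, 1.2, driver_acceptance_rate=0.60))  # selective driver
```

Two drivers, one delivery, two prices: the frequent accepter is offered less than the selective one, and neither can see why.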
Office and admin workers face a quieter version of the same trend. Workflow allocation, email monitoring, productivity scoring, and automated task assignment are spreading through corporate environments. With 286,600 general clerks carrying an AI score of 7.0 and 126,900 purchasing and supply logistics clerks at 6.2, a significant share of Australia's administrative workforce is already working alongside AI management systems — whether they realise it or not.
NSW Just Called This a Safety Risk
On 12 February 2026, the NSW Parliament passed the Work Health and Safety Amendment (Digital Work Systems) Act — making NSW the first Australian state to explicitly recognise that AI can be a workplace health and safety hazard.
The law defines "digital work systems" as any algorithm, AI system, automation, or online platform used in the workplace. Under the amended WHS Act, employers who use these systems must ensure they don't create risks to worker health and safety. That includes risks from excessive workloads generated by automated rostering, unreasonable performance metrics set by algorithmic KPI systems, constant surveillance and monitoring, and discriminatory outcomes from automated decisions.
The Act hasn't commenced yet — it's waiting on proclamation and guidelines from SafeWork NSW. But the signal is clear. Algorithmic management is no longer just a technology issue. Under NSW law, it will carry the same weight as any other workplace hazard.
Unions NSW backed the bill, and it gives WHS entry permit holders new powers to inspect digital work systems with 48 hours' notice where a breach is suspected.
The Fair Work Commission Is Dealing With the Fallout
While NSW tackles AI management of workers, the Fair Work Commission is dealing with a different AI problem: the flood of AI-generated workplace claims.
The FWC's caseload has grown by about 70 per cent in three years, from roughly 30,000 matters a year before 2023 to 45,000 in 2024–25, with a projected 55,000 this financial year. As ACS Information Age reported, FWC president Justice Adam Hatcher ran his own test: he asked ChatGPT to generate an unfair dismissal claim. In under ten minutes, it produced a complete application and witness statement containing "substantially invented" facts and a compensation claim of up to $40,000.
That's not hypothetical. In Riley v Nuvei Australia Merchant Services [2026], an applicant submitted case law citations that turned out to be entirely fabricated by AI. A separate case saw a Sydney worker's general protections claim dismissed after the Commission determined it was AI-generated, lodged two and a half years late, and riddled with legal errors.
The FWC has responded with draft disclosure rules requiring anyone who uses AI to prepare a document to disclose it and confirm they've checked the output for accuracy. Failing to disclose could mean the document is given reduced weight, disregarded, or the case dismissed entirely. Public comments on these rules close on 10 April 2026.
Your Existing Rights Still Apply
Here's the part that matters if an algorithm just changed your roster or flagged your performance: existing workplace protections apply regardless of whether a human or a machine made the decision.
If an algorithm fires you, your unfair dismissal rights are the same as if a human did. Anti-discrimination laws still apply to AI-driven hiring, rostering, and performance management. Redundancy consultation requirements don't change because a platform made the call. And if your employer introduced an AI system that substantially changed your role, that may trigger consultation obligations under the Fair Work Act.
Commonwealth Bank (CBA) learned this in 2026 when it reversed 45 customer service job cuts after Fair Work intervention, a reminder that automation doesn't exempt employers from their legal obligations.
Which Occupations Should Pay Attention
If you work in a role where software assigns your tasks, monitors your output, or scores your performance, algorithmic management is already part of your job. The occupations most directly affected include:
- Call or contact centre workers — AI score 7.4/10, 29,400 employed
- General clerks — AI score 7.0/10, 286,600 employed
- Purchasing and supply logistics clerks — AI score 6.2/10, 126,900 employed
- Delivery drivers — AI score 5.1/10, 84,900 employed
- Retail managers — AI score 5.5/10, 248,000 employed
- Storepersons — AI score 4.2/10, 164,200 employed
- Fast food cooks — AI score 3.6/10, 49,500 employed
That's close to a million workers across just seven occupation categories. Many of them are young (median age 18 for fast food cooks, 35 for storepersons, 36 for call centre workers), female-dominated in office roles (81% of general clerks are women), and earning below the national median in physical roles.
These aren't the workers making headlines about AI. They're the ones quietly being managed by it.
What You Can Do
The shift toward algorithmic management isn't slowing down, but the regulatory response is accelerating. NSW's law may be first, but other states are likely to follow — the risks it identifies (excessive monitoring, unreasonable metrics, automated overwork) exist in every jurisdiction.
If you want to understand where your own occupation sits, check your AI exposure score on our rankings page or search your job directly. The scores draw on Jobs and Skills Australia data covering 358 occupations, so you can see exactly how your role compares — not just for replacement risk, but for the kind of AI augmentation and management that's already here.
Knowing the score is the first step. Knowing your rights is the second.