
Companies Are Secretly Using AI to Replace Workers – Are You Ready?


Mark Jackson

Photo Credit: FreePik

If you’re wondering whether AI is replacing workers, you’re not imagining things. Klarna’s AI assistant now does work equal to ~700 full-time roles and handles about two-thirds of support chats—replying 82% faster with 25% fewer repeat issues—which helped the company shrink headcount mostly through attrition. That’s how companies use AI to cut jobs without a splashy layoff memo. (Reuters)

Zoom out: the IMF estimates ~60% of jobs in advanced economies are “exposed” to AI (some tasks get automated, some get upgraded). And the World Economic Forum’s 2025 report says ~40% of employers expect to reduce headcount where AI can automate tasks. That’s the reality this year. (IMF; World Economic Forum)

This guide gives you fresh 2024–2025 data, real examples (Klarna, UPS, Duolingo, IBM), the legal guardrails (NYC AEDT, EU AI Act, CFPB/FCRA, California ADMT), a 10-minute risk check, and a step-by-step plan to protect your job from AI—starting this quarter. You’ll also see what “good” looks like when employers deploy AI the right way, plus plain-English steps you can use this week to make yourself harder to cut. (Supply Chain Dive; TechCrunch)

Is AI Replacing Workers in 2025? What the Data Shows


The macro picture. The IMF says about 60% of jobs in advanced economies are exposed to AI. “Exposed” means parts of your role may be automated or upgraded—outcomes vary by task and skill level. Practically, that shows up as fewer junior openings and “no backfill” when people leave. (IMF)

Employer intentions. The World Economic Forum (2025) reports ~40% of employers plan workforce reductions where AI can automate tasks, while two-thirds plan to hire for AI skills and half will re-orient their business around AI. Expect restructuring plus selective hiring, not just layoffs. (World Economic Forum)

Entry-level squeeze. A 2025 BSI survey (covered by The Guardian) found 41% of leaders are using AI to reduce headcount, and 31% consider AI before hiring—with many saying entry-level tasks are the first to go. That lines up with what many graduates are seeing: fewer junior roles, more AI in the workflow. (The Guardian)

Reality check on ROI. Not every AI pilot pays off. Recent research and industry analyses highlight a “GenAI divide”: many pilots never show up in the P&L, while a small share captures big gains. At the same time, field studies (e.g., MIT/Stanford) show double-digit productivity lifts in customer support—especially for newer workers—when agents use AI assistance. Translation: broad displacement isn’t automatic, but productivity gains let companies run leaner teams through attrition and fewer backfills. (MLQ AI)

What the numbers look like inside a company. At Klarna, an AI assistant now performs work equal to ~700 FTEs, resolves issues in ~2 minutes (down from 11), and handles ~2/3 of chats—supporting a smaller workforce with faster responses and fewer repeat contacts. That’s a clear path to AI job displacement without mass firings. (Reuters)

Bottom line. Yes, AI is replacing workers—mostly by not rehiring for routine tasks and cutting contractor budgets—while ramping demand for people who can aim, audit, and extend these tools. Your best move is to quantify your value and shift toward tasks AI boosts rather than replaces.

How Companies Quietly Use AI to Cut Headcount


1) Attrition + hiring freezes. Instead of layoffs, firms stop backfilling roles and let AI pick up the slack. Klarna’s leadership publicly linked AI to fewer hires and hundreds of FTEs’ worth of automated work—shrinking staff primarily via attrition. (Reuters)

2) Contractor cuts. Duolingo confirmed it cut ~10% of contractors as content work shifted to AI. Full-timers stayed, but this shows the contractor channel is often the first replaced when AI can draft, translate, or QA content. (TechCrunch)

3) Back-office automation. IBM signaled back-office roles (e.g., HR operations) could be automated over five years and paused hiring accordingly. These moves often happen long before a “layoff” headline. (Reuters)

4) Restructuring + “new way of working.” UPS cut 12,000 jobs in 2024 and said it doesn’t expect those roles to return as workflows change—automation and efficiency enable smaller teams even if volume rebounds. UPS headcount was also ~45,000 lower than in 2021—a mix of automation, cost control, and demand shifts. Don’t over-attribute the cuts to AI alone; it’s part of a larger efficiency stack. (Supply Chain Dive)

What ties this together? Productivity wins (faster replies, fewer touches, better routing) let leaders meet SLAs with fewer people. For example, Klarna cites 82% faster responses and 25% fewer follow-ups after deploying its assistant. Similar patterns appear wherever repeatable, text-heavy tasks can be automated. (Customer Experience Dive)

Your takeaway. If your org is freezing hiring, consolidating vendors, running AI pilots that mirror your tasks, or quietly shrinking contractors, assume AI is in the loop—and plan accordingly.

Are You at Risk? A 10-Minute Role & Task Audit


Step 1: Map last 30 days. List your tasks. Circle repetitive, rules-based, text-heavy work (ticket replies, tagging, reports, first drafts).

Step 2: Check exposure. Roles in AI-exposed sectors show ~4.8× faster productivity growth (PwC). That sounds positive—but it also means a smaller team can do more, which pressures headcount. If your function sits in those sectors, plan for “do more with less.”

Step 3: Spot signals. Red flags include “no backfill” notices, a pilot chatbot/agent that mirrors your tasks, vendor RFPs for automation, or new QA rules around AI output. In support roles, if a new assistant resolves tickets in ~2 minutes, assume coverage will expand to more queues. (Customer Experience Dive)

Step 4: Write your risk notes. For each circled task, answer: Can AI do the first pass? Can I measure a “human-in-the-loop” uplift (quality, safety, compliance)?

Step 5: Decide moves. Either own the agent (build, prompt, QA, report ROI) or shift to tasks AI feeds (analysis, exception handling, stakeholder work). Use the audit to propose a pilot you lead (see Defense Plan).

Your Rights in 2025: Spotting Hidden AI in Hiring & Firing


NYC Local Law 144 (AEDT). If an employer uses automated tools to screen candidates or promotions in NYC, it must do a bias audit every year, publish the audit summary, and give notices. Action: ask the recruiter or HR for the AEDT bias-audit link and notice before you’re assessed. (New York City Government)

EU AI Act timeline. The Act entered into force Aug 1, 2024. Bans + AI literacy obligations apply from Feb 2, 2025. Rules for general-purpose AI models (GPAI) apply from Aug 2, 2025. High-risk rules (which include many HR tools) phase in from Aug 2, 2026 through 2027. Action (EU workers): ask which risk-management steps your employer is taking for any high-risk HR systems and request the AI literacy plan you’re entitled to. (European Commission)

CFPB + FCRA (U.S.). If your employer uses third-party “black-box” scores or reports (surveillance-based metrics, algorithmic risk/fit scores) for hiring, promotion, or discipline, that often counts as a consumer report under the FCRA. You have rights: consent, adverse-action notice, access to your file, and dispute of inaccuracies. Action: if you’re denied or harmed, request the report, demand the adverse-action notice, and dispute errors in writing. (Consumer Financial Protection Bureau)

California ADMT rules (effective 2026). California finalized privacy rules that include opt-out/access rights and risk assessments for Automated Decision-Making Technology (ADMT)—covering uses like allocating work or evaluating performance. Action (CA workers): ask now about ADMT notices, appeal/human review options, and the company’s risk-assessment plan. (California Privacy Protection Agency)

Colorado AI Act (delayed). Colorado delayed its comprehensive AI law to June 30, 2026, but it still sets “reasonable care” duties around algorithmic discrimination. Action: if you’re in Colorado, track employer prep and ask who the responsible AI owner is. (Akin)

Defense Plan: Make AI Your Leverage (Next 30–60 Days)


1) Turn tasks into proof. Pick 1–2 repeatable tasks. Measure baseline (tickets/hour, lead volume, cycle time). Add a copilot/agent. Re-measure for 2–5× output with quality checks. Keep a simple weekly before/after log. Managers keep roles that prove ROI.

2) Skill spikes that travel. Focus on skills the WEF 2025 report flags as rising: AI & big data, analytical/creative thinking, tech literacy, plus resilience & agility (World Economic Forum). Practically, that means:
• Prompt frameworks for repeatable outputs.
• Data cleanup + schema tagging so AI tools work better.
• Workflow orchestration (connecting tools, guardrails, and review).
• QA skills (catching AI errors fast).

3) Propose a “keep-me” project. Identify a cost-center process (e.g., refund approvals, FAQ updates). Design a 30% automation pilot with guardrails. Share a 1-page plan: baseline, target save, quality gates, metrics, and your role as owner (not just user).

4) Legal-aware collaboration. If your team is rolling out hiring/performance AI, ask:
• NYC AEDT: Where’s the bias-audit summary and candidate notice?
• FCRA/CFPB: Are any third-party reports or scores used? What’s the adverse-action workflow?
• EU AI Act (if relevant): Who owns risk management and AI literacy? (New York City Government)

5) Share wins weekly. Short Loom/screenshot + your metric gains. Make your impact visible.

If You’re Laid Off or At Risk: The 30/60/90 Playbook


Day 1–30: Package your value.
• Convert your best tasks into freelance offers enhanced by AI (e.g., content QA, CRM enrichment, support macros).
• Refresh your CV for ATS with quantified AI productivity wins (use tools like Jobscan/Teal).
• Publish 2–3 quick case notes: problem → your AI-assisted method → result.

Day 31–60: Aim where AI is working.
• Target functions with documented AI productivity gains (support, marketing ops, inside sales).
• Prefer employers who show mature AI governance (e.g., public NYC AEDT audit pages; EU firms referencing the AI Act). (New York City Government)

Day 61–90: Build repeatable services.
• Offer automation audits, content quality checks, or sales-ops enrichment.
• Collect three ROI stories with numbers (time saved, errors reduced, revenue lift).
• Keep shipping—momentum beats perfection.

What Good Looks Like: Ethical & Compliant AI at Work


Bias audits & notices (NYC AEDT). Employers publish audit summaries and notify candidates when using hiring/promotions AI. Good orgs link audits in job posts and explain how to request human alternatives. (New York City Government)

Risk management & transparency (EU AI Act). For high-risk HR tools, organizations document risks, controls, data quality, and human oversight—and provide AI literacy to staff. (European Commission)

FCRA compliance (U.S. CFPB). If third-party reports or algorithmic scores influence employment decisions, workers get consent, access, adverse-action notices, and dispute rights. Good orgs explain this up front and respond fast. (Consumer Financial Protection Bureau)

California ADMT readiness (effective 2026). Clear opt-outs/access, risk assessments, human review for significant decisions, and a process for allocating work fairly when ADMT is in play.
