Companies Are Secretly Using AI to Replace Workers – Are you ready to join them?

Mark Jackson

Photo Credit: FreePik

Is AI replacing workers? In short, yes. Klarna’s AI assistant now does the work of roughly 700 full-time roles, handling support chats 82% faster. This isn’t an isolated case: the IMF estimates that about 60% of advanced-economy jobs are exposed to AI.

But here’s the critical data: a 2025 PwC report shows workers with AI skills now command a 56% wage premium, up from 25% last year.

This guide provides 2024-2025 data, legal guardrails (like NYC’s AEDT law), and a plan to protect your job from AI—and use AI to make yourself harder to cut.

Is AI Replacing Workers in 2025? What the Data Shows

The macro picture shows significant job exposure to AI. The International Monetary Fund (IMF) estimates that about 60% of jobs in advanced economies are exposed to AI. “Exposed” means parts of a role may be automated, or may be upgraded by AI tools; the outcome varies by the specific task and the worker’s skill level. In practice, this exposure shows up as fewer junior job openings.

It also shows up as “no backfill” when employees leave. On employer intentions, the World Economic Forum (2025) reports that about 40% of employers plan workforce reductions where AI can automate tasks, while two-thirds plan to hire for specialized AI skills.

In addition, half of all employers plan to re-orient their business entirely around AI. Expect simultaneous restructuring and selective hiring, not just widespread layoffs. The entry-level squeeze is already visible: a 2025 BSI survey, covered by The Guardian, found that 41% of leaders are already using AI to reduce headcount.

A further 31% of leaders weigh AI’s capabilities before making a new hire, and many said entry-level tasks are the first to be automated. That matches what graduates are seeing: fewer junior roles on the market, and far more AI built directly into the workflow.

How Companies Quietly Use AI to Cut Headcount

1) Attrition + hiring freezes. Instead of layoffs, firms stop backfilling roles and let AI pick up the slack. Klarna’s leadership publicly linked AI to fewer hires and hundreds of FTEs’ worth of automated work—shrinking staff primarily via attrition.

2) Contractor cuts. Duolingo confirmed it cut ~10% of contractors as content work shifted to AI. Full-timers stayed, but this shows the contractor channel is often the first replaced when AI can draft, translate, or QA content.

3) Back-office automation. IBM signaled back-office roles (e.g., HR operations) could be automated over five years and paused hiring accordingly. These moves often happen long before a “layoff” headline.

4) Restructuring + “new way of working.” UPS cut 12,000 jobs in 2024 and said it doesn’t expect those roles to return as workflows change: automation and efficiency enable smaller teams even if volume rebounds. UPS headcount was roughly 45,000 lower than in 2021, a mix of automation, cost control, and demand shifts. Don’t over-attribute this to AI alone; it’s part of a larger efficiency stack.

What ties this together? Productivity wins (faster replies, fewer touches, better routing) let leaders meet SLAs with fewer people. For example, Klarna cites 82% faster responses and 25% fewer follow-ups after deploying its assistant. Similar patterns appear wherever repeatable, text-heavy tasks can be automated.

Your takeaway. If your org is freezing hiring, consolidating vendors, running AI pilots that mirror your tasks, or quietly shrinking contractors, assume AI is in the loop—and plan accordingly.

Are You at Risk? A 10-Minute Role & Task Audit

Step 1: Map last 30 days. List your tasks. Circle repetitive, rules-based, text-heavy work (ticket replies, tagging, reports, first drafts).

Step 2: Check exposure. Roles in AI-exposed sectors show ~4.8× faster productivity growth (PwC). That sounds positive—but it also means a smaller team can do more, which pressures headcount. If your function sits in those sectors, plan for “do more with less.”

Step 3: Spot signals. Red flags include “no backfill” notices, a pilot chatbot/agent that mirrors your tasks, vendor RFPs for automation, or new QA rules around AI output. In support roles, if a new assistant resolves tickets in ~2 minutes, assume coverage will expand to more queues.

Step 4: Write your risk notes. For each circled task, answer: Can AI do the first pass? Can I measure a “human-in-the-loop” uplift (quality, safety, compliance)?

Step 5: Decide moves. Either own the agent (build, prompt, QA, report ROI) or shift to tasks AI feeds (analysis, exception handling, stakeholder work). Use the audit to propose a pilot you lead (see Defense Plan).
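The five steps above amount to a simple scoring exercise. As a rough sketch, here is one way to formalize it; the task names, traits, weights, and the “own it vs. shift” threshold are invented for illustration, not drawn from any cited study:

```python
# Illustrative task-audit scorer. The scoring scheme and threshold are
# assumptions made for this sketch, not from any cited report.

def automation_risk(task):
    """Score a task 0-3 on the traits the audit circles:
    repetitive, rules-based, and text-heavy work scores highest."""
    return sum([task["repetitive"], task["rules_based"], task["text_heavy"]])

def audit(tasks):
    """Return (task name, score, suggested move), highest risk first."""
    rows = []
    for t in tasks:
        score = automation_risk(t)
        # Threshold of 2 is an arbitrary cut-off for this example.
        move = ("own the agent / propose a pilot" if score >= 2
                else "shift toward analysis & exceptions")
        rows.append((t["name"], score, move))
    return sorted(rows, key=lambda r: -r[1])

tasks = [
    {"name": "ticket replies", "repetitive": True, "rules_based": True, "text_heavy": True},
    {"name": "stakeholder workshops", "repetitive": False, "rules_based": False, "text_heavy": False},
]
for name, score, move in audit(tasks):
    print(f"{name}: risk {score}/3 -> {move}")
```

The point is not the code itself but the discipline: score each circled task the same way, then pick one of the two moves deliberately rather than by default.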

Your Rights in 2025: Spotting Hidden AI in Hiring & Firing

In New York City, employers that use automated employment decision tools (AEDTs) to screen candidates for hiring or promotion must conduct a bias audit annually, publish a summary of that audit, and provide notices to affected candidates and employees. Action: Ask your recruiter or Human Resources (HR) department for the AEDT bias-audit link and the required notice before taking any assessment that uses these tools.

The EU AI Act officially entered into force on August 1, 2024. Bans on certain AI systems and AI literacy obligations will apply starting February 2, 2025. Rules specifically for general-purpose AI models (GPAI) will apply from August 2, 2025. Rules for high-risk AI systems, which include many HR tools, will phase in between August 2, 2026, and 2027.

Action (EU workers): Ask your employer what risk-management steps they are taking for any high-risk HR systems in use, and request the AI literacy plan they are required to implement.

Defense Plan: Make AI Your Leverage (Next 30–60 Days)

1) Turn tasks into proof. Pick 1–2 repeatable tasks. Measure baseline (tickets/hour, lead volume, cycle time). Add a copilot/agent. Re-measure for 2–5× output with quality checks. Keep a simple weekly before/after log. Managers keep roles that prove ROI.
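The “measure baseline, re-measure” loop above can be kept as a very simple weekly log. A minimal sketch (the metric name and all numbers below are placeholders, not real benchmarks):

```python
# Minimal before/after productivity log for one metric, e.g. tickets/hour.
# All values are placeholders invented for this example.

def uplift(baseline, current):
    """Multiplier vs. baseline, e.g. 2.0 means 2x output."""
    return current / baseline

log = {"metric": "tickets/hour", "baseline": 6.0, "weekly": [7.5, 11.0, 13.5]}

for week, value in enumerate(log["weekly"], start=1):
    print(f"week {week}: {log['metric']} = {value} "
          f"({uplift(log['baseline'], value):.1f}x baseline)")
```

Even a log this basic gives you the quantified before/after story that managers look for when deciding which roles prove ROI.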

2) Skill spikes that travel. Focus on skills the WEF 2025 flags as rising: AI & big data, analytical/creative thinking, tech literacy, plus resilience & agility. Practically, that means:
• Prompt frameworks for repeatable outputs.
• Data cleanup + schema tagging so AI tools work better.
• Workflow orchestration (connecting tools, guardrails, and review).
• QA skills (catching AI errors fast).

3) Propose a “keep-me” project. Identify a cost-center process (e.g., refund approvals, FAQ updates). Design a 30% automation pilot with guardrails. Share a 1-page plan: baseline, target save, quality gates, metrics, and your role as owner (not just user).

4) Legal-aware collaboration. If your team is rolling out hiring/performance AI, ask:
• NYC AEDT: Where’s the bias-audit summary and candidate notice?
• FCRA/CFPB: Are any third-party reports or scores used? What’s the adverse-action workflow?
• EU AI Act (if relevant): Who owns risk management and AI literacy?

5) Share wins weekly. Short Loom/screenshot + your metric gains. Make your impact visible.

If You’re Laid Off or At Risk: The 30/60/90 Playbook

Day 1–30: Package your value.
• Convert your best tasks into freelance offers enhanced by AI (e.g., content QA, CRM enrichment, support macros).
• Refresh your CV for ATS with quantified AI productivity wins (use tools like Jobscan/Teal).
• Publish 2–3 quick case notes: problem → your AI-assisted method → result.

Day 31–60: Aim where AI is working.
• Target functions with documented AI productivity gains (support, marketing ops, inside sales).
• Prefer employers who show mature AI governance (e.g., public NYC AEDT audit pages; EU firms referencing the AI Act).

Day 61–90: Build repeatable services.
• Offer automation audits, content quality checks, or sales-ops enrichment.
• Collect three ROI stories with numbers (time saved, errors reduced, revenue lift).
• Keep shipping—momentum beats perfection.

What Good Looks Like: Ethical & Compliant AI at Work

Bias audits & notices (NYC AEDT). Employers publish audit summaries and notify candidates when using hiring/promotions AI. Good orgs link audits in job posts and explain how to request human alternatives.

Risk management & transparency (EU AI Act). For high-risk HR tools, organizations document risks, controls, data quality, and human oversight—and provide AI literacy to staff.

FCRA compliance (U.S. CFPB). If third-party reports or algorithmic scores influence employment decisions, workers get consent, access, adverse-action notices, and dispute rights. Good orgs explain this up front and respond fast.

California ADMT readiness (effective 2026). Clear opt-outs/access, risk assessments, human review for significant decisions, and a process for allocating work fairly when ADMT is in play.
