Maybe don't get caught up in the Boomerang effect, just focus on education.  Image via Nano Banana - Adobe Firefly.

Why the organizations cutting people are losing to the organizations growing them

I’ve spent the last year working with AI systems hands-on—not just using them, but building them. Neural architectures, multi-agent frameworks, the kind of deep integration that reveals both what these tools can do and where they fall short.

And I’m here to tell you: the executives celebrating mass layoffs in the name of AI are making a terrible mistake. Not because AI isn’t powerful. It is. But because they’ve confused acceleration with replacement—and that confusion is already costing them.

The Canaries Are Singing

You’ve probably heard the victory laps. A major European fintech proudly announcing their AI chatbot could do the work of 700 customer service agents. A leading CRM platform’s CEO explaining he “needs less heads” as AI handles 50% of the workload.

Here’s what happened next.

The fintech’s CEO admitted that “cost unfortunately seems to have been a too predominant evaluation factor.”¹ Customer satisfaction tanked. They’re now quietly rehiring human agents—specifically targeting students and rural workers for on-demand remote roles. Forrester’s analyst Kate Leggett summed it up: they “overpivoted to cost containment, without thinking about the longer-term impact of customer experience.”²

This is the Boomerang Effect: a loud, premature declaration of AI’s ubiquitous power, followed by a quiet 180-degree turn and an announcement of human rehirings.

The CRM giant? After the CEO insisted AI wouldn’t cause “some huge mass layoff,” they cut 4,000 support positions within three weeks.³ Now they’re simultaneously laying off and hiring—cutting support while adding AI-focused sales roles. The math doesn’t add up, the messaging is contradictory, and as tech consultant Waseem Mirza noted, it “gives rise to a climate of fear among the industry’s wider workforce.”⁴

The Structural Misunderstanding

These executives weren’t stupid. They had data. They had consultants. They had pilots showing promising results.

They still got burned. The failure isn’t in the execution. It’s in the mental model.

They measured output, not judgment. AI can produce 10x the customer service responses, 10x the code, 10x the content. What it can’t do is know which responses matter, which code solves the actual problem, which content will land. The humans they cut weren’t doing “tasks”—they were doing triage. They knew when to escalate, when to deviate from the script, when the edge case was actually the main case. When you cut them, you don’t lose throughput. You lose discernment. And discernment failures don’t show up in dashboards until revenue does.

They confused “AI can do this” with “AI can do this without us.” Every impressive AI demo has a human behind it—selecting the prompt, curating the context, recognizing when the output is good. Remove those humans and you don’t have “AI customer service.” You have unsupervised AI customer service. These are categorically different things.

They optimized for cost, not capability. The question they asked: “How many humans can we replace?” The question they should have asked: “How much more capable can we make each human?”

The Vibe Coding Hangover

Nowhere is this clearer than in software development.

In February 2025, Andrej Karpathy coined “vibe coding”—the idea of “fully giving in to the vibes, embrace exponentials, and forget that the code even exists.”⁵ AI now generates 41% of all code written.⁶ A quarter of Y Combinator’s Winter 2025 batch had codebases that were 95% AI-generated.⁷

Then reality hit.

The METR study tracked 16 experienced developers completing 246 real-world tasks. The result? Developers using AI were 19% slower, not faster. But here’s the kicker: they believed they had been 20% faster. That’s a 39-percentage-point gap between perception and reality.⁸

The “vibe coding hangover” is now upon us.⁹ Roughly 10,000 startups tried to build production apps with AI assistants. More than 8,000 now need rebuilds or rescue engineering, with budgets ranging from $50K to $500K each.¹⁰ One founder built his entire SaaS with “zero handwritten code”—and was hacked within days. No authentication. No rate limiting. No input validation. The AI generated code that looked like a product but lacked everything that makes software production-ready.¹¹
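
To make that concrete, here is a minimal sketch, in Python with hypothetical names and limits, of the three guardrails that app shipped without. This is an illustration, not a production pattern; the point is how little code the basics require, and therefore how conspicuous their absence is:

```python
import hmac
import time
from collections import defaultdict, deque

# Hypothetical values for illustration; a real app loads secrets from a vault,
# not from source code.
API_TOKEN = "replace-me"
MAX_REQUESTS = 30      # per client, per window
WINDOW_SECONDS = 60

_request_log: dict[str, deque] = defaultdict(deque)

def is_authenticated(presented_token: str) -> bool:
    # Guardrail 1: authentication, using a constant-time comparison.
    return hmac.compare_digest(presented_token.encode(), API_TOKEN.encode())

def is_rate_limited(client_id: str) -> bool:
    # Guardrail 2: a naive sliding-window rate limiter.
    now = time.monotonic()
    window = _request_log[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop requests that have aged out of the window
    if len(window) >= MAX_REQUESTS:
        return True
    window.append(now)
    return False

def validate_message(payload: dict) -> str:
    # Guardrail 3: input validation before anything reaches business logic.
    message = payload.get("message")
    if not isinstance(message, str) or not 1 <= len(message) <= 2000:
        raise ValueError("message must be a string of 1-2000 characters")
    return message
```

Each guardrail is a handful of lines. None of them appear unless someone knows to ask for them, and knows to check that they are there.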

A Veracode study found that 45% of AI-generated code introduces security vulnerabilities.¹² AI models hallucinate non-existent software packages at rates up to 21.7% for open-source models, and attackers now exploit this through “slopsquatting”: creating malware-laden packages under commonly hallucinated names.¹³
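
One cheap, partial defense is to confirm that every dependency an AI suggests actually resolves before you install it. Below is an illustrative sketch against PyPI’s public JSON endpoint (the endpoint is real; the script and its names are my own). Note the limitation: slopsquatted packages exist by design, so a name that resolves still needs vetting of its maintainer, age, and download history.

```python
import sys
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """True if PyPI knows this package; a 404 suggests a hallucinated name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors mean server trouble, not an answer

if __name__ == "__main__":
    # Usage: python check_deps.py requests numpy some-suggested-package
    for name in sys.argv[1:]:
        verdict = "found" if exists_on_pypi(name) else "NOT FOUND (possible hallucination)"
        print(f"{name}: {verdict}")
```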

As one developer observed: “You’re not actually saving time with AI coding; you’re just trading less typing for more time reading and untangling code.”¹⁴

What AI Actually Requires

The more I work with these systems—and I mean really work with them, building architectures from scratch, training models, debugging at 3 AM—the more I realize that AI acceleration actively requires human intervention.

Four things, specifically:

  1. Trigger the tasks. Someone has to know what to ask. AI doesn’t wake up and decide to solve your business problem. A human with domain expertise has to identify the opportunity, frame the question, and initiate the inquiry.
  2. Identify key ideas that relate. Pattern recognition across domains—seeing that this marketing challenge connects to that operations principle, that this customer behavior echoes that research finding. This is where interdisciplinary thinking becomes irreplaceable.
  3. Guide and direct inquiry. The iterative dance of refinement. AI gives you a first pass; you recognize what’s missing, redirect, ask for alternatives, push deeper. This isn’t passive consumption; it’s active collaboration (sketched in code below).
  4. Discover what might be useful. Anticipating needs in an evolving landscape. Knowing which AI capabilities matter for your context, which experiments are worth running, which outputs deserve trust and which need verification.

None of these are automatable. They’re acceleratable—but only with a skilled human in the loop.
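
To make point 3 tangible, here is a minimal human-in-the-loop sketch, assuming a hypothetical generate callable that wraps whatever model you use. The shape matters more than the details: the human, not the model, closes the loop.

```python
def guided_inquiry(task: str, generate, max_rounds: int = 5) -> str:
    """Iterative refinement: the model drafts, the human judges and redirects.

    `generate` is a hypothetical stand-in for any model call,
    e.g. generate(prompt: str) -> str.
    """
    prompt = task
    draft = generate(prompt)
    for _ in range(max_rounds):
        print("\n--- current draft ---\n" + draft)
        feedback = input("Feedback (blank to accept): ").strip()
        if not feedback:
            return draft  # the human, not the model, decides when it is done
        # Fold the human's redirection back in as context for the next pass.
        prompt = (f"{task}\n\nPrevious draft:\n{draft}\n\n"
                  f"Revise according to this feedback: {feedback}")
        draft = generate(prompt)
    return draft
```

Delete the input() call and you get exactly the unsupervised system the layoff math assumes. The loop is cheap to write; the judgment inside it is not.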

The Quality Paradox

Here’s what most people miss: AI doesn’t just enable more content—it enables more plausible-looking content.

And plausible-looking content that misses the mark is worse than no content at all.

It consumes attention without delivering value. It trains audiences to distrust the source. It floods channels, making signal-finding harder for everyone. It creates a race to the bottom where everyone produces more, faster, worse.

We’re not drowning in garbage. We’re drowning in competent-seeming garbage that requires more cognitive effort to evaluate than obvious garbage ever did.

An incredibly well-educated person with AI will outperform one without. But a poorly educated person with AI will generate a flood of wildly irrelevant, poorly targeted information, almost none of which adds anything to the discourse.

The scarcest resource in an AI-saturated landscape isn’t compute or even data. It’s humans who know how to think clearly, guide inquiry effectively, and recognize when output is actually good.

The Path Forward

So where are we really going into 2026?

Can C-suites cut costs by laying off employees en masse in hopes of automating their tasks? The evidence says no. The Boomerang Effect is spreading as companies learn the hard way that the humans they cut were doing more than they appeared to be doing.

Are even modest cuts wise? Not if you’re cutting the judgment layer. The factory that runs “lights out” with zero humans? It doesn’t exist at scale. Even Tesla’s Optimus robots—the cutting edge of manufacturing automation—are doing kitting tasks while humans handle “anomalies.” Experts predict fully automated factories are 10-20 years away.

What should leaders do in a climate where everyone is understandably nervous and uncertain?

Education. But not just any education—education with an eye toward reaching aspirational goals, toward growth and innovation.

The smart organizations will invest in their people. Not “how to use ChatGPT” training, but capability development that makes humans more valuable as AI handles the mechanical parts: critical thinking, creative problem-solving, systems analysis, judgment under uncertainty, the ability to guide and validate and redirect and step in when needed.

Who should organizations seek to educate? Everyone. The winners in 2026 won’t be the companies that cut the most humans. They’ll be the companies that made each human dramatically more capable.

The Real Opportunity

AI is the most powerful accelerator we’ve ever had for human capability. I’ve watched it compress months of work into weeks, help me explore technical territories that would have taken years to traverse alone, and surface patterns I wouldn’t have seen without a tireless analytical partner.

But every single one of those victories required me to show up with deep knowledge, clear thinking, and the judgment to know when the AI was helping and when it was hallucinating.

The organizations that understand this—that AI is a partnership technology, not a replacement technology—are building capabilities their competitors won't be able to match.

The organizations celebrating headcount reductions are destroying institutional knowledge they’ll spend years trying to rebuild.

I know which bet I’m making.


Dr. Allen Partridge is Director of Evangelism for Digital Learning Solutions at Adobe, where he leads a team focused on learning technologies including Adobe Learning Manager. He holds a PhD integrating art, music, theater, philosophy, and computer science, and has over 30 years of programming experience. His current work includes building AI-first systems that demonstrate what human-AI partnership actually looks like in practice.


What’s your experience with AI in your organization—acceleration or attempted replacement? I’d love to hear what’s working and what isn’t.


Endnotes

  1. Solutions Review, “Klarna’s AI Layoffs Exposed the Missing Piece: Empathy,” July 2025. CEO Sebastian Siemiatkowski quoted on prioritizing cost over quality.
  2. CX Dive, “Klarna says its AI agent is doing the work of 853 employees,” October 2025. Kate Leggett, VP Principal Analyst at Forrester, quoted on the overpivot to cost containment.
  3. CNBC, “Salesforce CEO confirms 4,000 layoffs ‘because I need less heads’ with AI,” September 2025. Marc Benioff confirmed reduction from 9,000 to 5,000 support heads.
  4. Al Jazeera, “Salesforce lays off thousands despite strong earnings report,” September 2025. Waseem Mirza, tech consultant, quoted on industry-wide implications.
  5. Wikipedia, “Vibe coding,” citing Andrej Karpathy’s February 2025 post introducing the term.
  6. GitHub data reported across multiple sources including InfoWorld and The Rise of Vibe Coding analyses, 2025.
  7. Y Combinator Winter 2025 batch data, reported in Wikipedia “Vibe coding” entry and multiple tech publications, March 2025.
  8. METR, “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity,” July 2025. Randomized controlled trial with 16 developers, 246 tasks.
  9. Fast Company, “The ‘vibe coding hangover’ is upon us,” September 2025. Senior software engineers cited “development hell” working with AI-generated code.
  10. TechStartups, “The Vibe Coding Delusion: Why Thousands of Startups Are Now Paying the Price for AI-Generated Technical Debt,” December 2025. Analysis of approximately 10,000 startups attempting AI-assisted production development.
  11. FinalRoundAI, “5 Vibe Coding Failures That Prove AI Can’t Replace Developers Yet,” 2025. Case study of Leonel Acevedo (@leojr94_) documenting security failures within days of launch.
  12. Veracode 2025 study analyzing over 100 large language models across 80 coding tasks, reported in Medium “The Rise of Vibe Coding in 2025.”
  13. Wikipedia, “Vibe coding,” citing research on AI model hallucination rates (5.2% commercial, 21.7% open-source). The term “slopsquatting”—a portmanteau of “AI slop” and “typosquatting”—was coined in April 2025 by Seth Larson, Python Software Foundation Developer-in-Residence, and popularized by Andrew Nesbitt on Mastodon.
  14. Cerbos, “The Productivity Paradox of AI Coding Assistants,” September 2025. Developer commentary on the trading of typing time for review time.