
The Jevons Trap
Dario Amodei recently published an essay called “The Adolescence of Technology.”1 It’s worth reading—in fact, I think everyone should read it, even though it’s long. Tell your friends. Tell your mom. Tell your boss. Tell your dentist. The changes coming in the next few years will reshape society in ways most people aren’t prepared for, and Amodei’s essay is one of the clearest articulations of what’s at stake. He’s the CEO of Anthropic—the company whose tools I’ve been using to do the work of multiple people—and his essay is backed by domain experts in economics, policy, and science. It has institutional depth. It thinks seriously about the future.
I appreciate that at least one AI billionaire CEO is thinking from a humanist perspective. Many aren’t. The industry is full of people who view disruption as an abstract optimization problem, human costs as externalities, and transition as something that happens to other people. Amodei seems genuinely concerned about getting this right.
His essay covers a lot of ground. I want to focus on something narrower: the mechanics of disruption. Specifically, what happens in the transition—the “adolescence” as he calls it. Because there’s a pattern here that makes the transition uniquely dangerous, and it’s counterintuitive enough that most people will miss it until it’s too late.
The pattern is this: things look better before they collapse.
Defining Terms: The Knowledge-Physical Spectrum
“White collar” and “blue collar” are reductive labels, but useful. Here’s what I mean:
Knowledge work: data inputs → cognitive processing → data outputs. Lawyers, accountants, doctors, software developers.
Physical work: physical inputs → manual manipulation → physical outputs. Plumbers, electricians, manufacturers, construction workers.
Every job sits on a spectrum between these poles. A surgeon does physical manipulation guided by medical diagnosis. A plumber diagnoses problems, plans approaches, and talks to customers—all knowledge work—before touching a pipe. Even software developers sometimes have to turn it off and on again.
| Job | Knowledge | Physical |
|---|---|---|
| Radiologist | 95% | 5% |
| Software Developer | 95% | 5% |
| Accountant | 90% | 10% |
| Surgeon | 60% | 40% |
| HVAC Technician | 40% | 60% |
| Electrician | 30% | 70% |
| Assembly Line Worker | 10% | 90% |
AI disrupts the knowledge component first, physical later. A job that’s 80% knowledge will be disrupted faster than one that’s 20% knowledge. But every job has a knowledge component—and that component is being disrupted now.
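The rough splits in the table imply very different exposure, and the fraction math follows Amdahl's law: if AI accelerates only the knowledge component, the physical share caps the overall gain. A minimal sketch—the 10x knowledge speedup is an illustrative assumption, not a figure from this essay:

```python
def job_speedup(knowledge_share: float, ai_speedup: float) -> float:
    """Overall speedup of a job when AI accelerates only its knowledge
    component (Amdahl's law); the physical share is unaffected."""
    return 1.0 / ((1.0 - knowledge_share) + knowledge_share / ai_speedup)

# Illustrative assumption: AI makes knowledge work 10x faster.
print(round(job_speedup(0.95, 10), 1))  # radiologist / developer: ~6.9x
print(round(job_speedup(0.40, 10), 1))  # HVAC technician:         ~1.6x
print(round(job_speedup(0.10, 10), 1))  # assembly line worker:    ~1.1x
```

The asymmetry is stark: a 95%-knowledge job is transformed while a 10%-knowledge job barely notices—which is exactly why the disruption arrives in knowledge-share order.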
Jevons Paradox: The Counterintuitive Economics
In 1865, William Stanley Jevons noticed something strange about coal consumption in Britain.2 Steam engines had become dramatically more efficient—they used far less coal per unit of work. You’d expect this to reduce total coal consumption. The opposite happened. Coal consumption increased.
Why? Because efficiency lowered the cost of steam power. When steam power became cheaper, new use cases became economically viable. Factories that couldn’t afford steam power before could now afford it. Applications that didn’t make sense at the old cost suddenly made sense at the new cost. The market expanded faster than efficiency improved, and total consumption went up even as consumption-per-unit went down.
This is Jevons Paradox: efficiency gains can increase total consumption by making the resource cheap enough to enable new uses that didn’t exist before.3
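The mechanism is price elasticity of demand. A toy constant-elasticity model (parameter values are illustrative, not estimates) shows when an efficiency gain raises rather than lowers total consumption:

```python
def consumption_ratio(efficiency_gain: float, elasticity: float) -> float:
    """Change in total resource use after an efficiency gain, under
    constant-elasticity demand Q proportional to cost**(-elasticity).

    Cost per task falls by 1/efficiency_gain, so quantity demanded
    expands by efficiency_gain**elasticity, while each task now uses
    only 1/efficiency_gain as much of the resource."""
    demand_expansion = efficiency_gain ** elasticity
    return demand_expansion / efficiency_gain

# Elastic demand (elasticity > 1): doubling efficiency RAISES total use.
print(consumption_ratio(2.0, 1.5))  # ~1.41 -> Jevons Paradox
# Inelastic demand (elasticity < 1): efficiency gains reduce total use.
print(consumption_ratio(2.0, 0.5))  # ~0.71 -> no paradox
```

Jevons' coal and today's cognitive work are both bets that demand is elastic: each halving of cost unlocks more than twice the demand.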
AI is creating a Jevons Paradox for intelligence.
When AI makes cognitive work cheaper, the immediate effect isn’t that we need less cognitive work—it’s that cognitive work that wasn’t economically viable before suddenly becomes viable. Projects that couldn’t justify the analyst hours can now be done. Questions that weren’t worth researching can now be answered. Software that wasn’t worth building can now be built.
In my first post in this series, I described doing the work of 3-5 people as a part-time side project. Did that mean I needed 3-5 fewer people? No. It meant I could tackle projects that would have required a team too large to justify. The work expanded to fill the new capacity. My backlog didn’t shrink—it grew. There’s always more software worth building than there are resources to build it.
This isn’t just my experience. In 2016, Geoffrey Hinton—one of the pioneers of deep learning—famously said radiologists should stop training because AI would replace them within five years.4 It’s now 2026. Demand for radiologists is at an all-time high.5 The AI tools Hinton predicted have arrived and are widely deployed. But instead of replacing radiologists, they’ve enabled radiologists to read more scans, catch more subtle findings, and provide more detailed analyses. New applications have emerged—AI-assisted screening programs that weren’t economically feasible before, more frequent monitoring of chronic conditions, preventive imaging that used to be too expensive to justify. The efficiency gains expanded the market faster than they displaced the workers.
The same pattern is playing out in software development. Despite all the AI coding tools—despite me personally doing the work of multiple developers—demand for AI-skilled developers has surged, with AI engineer roles growing over 140% year-over-year.6 Almost no one realizes this. The discourse is all doom-and-gloom about how impossible it will be to find work as a developer. But the data says otherwise—companies aren’t hiring fewer developers because AI makes developers more productive. They’re hiring more developers because the expanded productivity makes more projects viable.
What is shifting is what the job looks like. It’s less memorizing which arguments go in what order to what-was-that-function’s-name-again, and more architecture, design, and code strategy. Less physical typing, more management and discernment. The skills are different, but the jobs are there—for people with those skills. If you’re an entry-level developer who only knew how to type code without the higher-level judgment? You might not be economically valuable anymore. That’s the disruption. Not fewer jobs, but different jobs requiring different skills. And everyone has to re-learn.
I feel this personally. I’ve been “en-Jevoned” by Claude Code. I’m more productive than I’ve ever been in 30 years of writing software. I’m also busier than I’ve ever been. The backlog hasn’t shrunk—it’s exploded. Every project I complete reveals three more that are now worth doing. Where I used to work on one primary project and maybe one secondary, as I write this I have five instances of Claude Code open, switching between projects. And that’s not counting Izzy—the AI agent I built that runs autonomously in the background. I’m doing roughly 3x as many things in parallel as I used to. I’m not working myself out of a job; I’m working myself into more jobs.
The danger isn’t that AI makes people less productive. The danger is that it makes the dashboard look healthy while the foundation is cracking. In the Jevons phase, employment holds, wages hold, and leaders conclude the displacement story was overblown. Meanwhile the risk is being pushed down the ladder: entry-level hiring slows, apprenticeships thin, and the people who notice first are the ones least visible in aggregate data. By the time the numbers finally do look bad, the transition is already baked in—pipelines are broken, trust is gone, and the only moves left are emergency ones.
The Timeline: Growth, Plateau, Collapse
Jevons Paradox doesn’t last forever. It describes a phase, not an end state. Eventually, efficiency gains outpace market expansion, and displacement begins.
The dates I’m about to give are planning horizons, not predictions. The sequence matters more than the specific years. If I’m off by eighteen months in either direction, the implications for how you should prepare are identical.
Phase 1: Expansion (Now through ~2027)
This is the Jevons phase. AI makes knowledge workers 3-5x more productive. That productivity enables work that wasn’t viable before. Total demand for knowledge work increases. Jobs may actually increase. Wages hold or rise for skilled workers who can leverage AI effectively. Stonks go brrrrrrrrr.
This is where we are now. And it looks great. AI optimists point to the data and declare victory. The transition is working! Technology creates more jobs than it destroys! Jevons was right!
But entry-level jobs are already disappearing—it’s just not visible in aggregate numbers yet. The calculus has shifted: when a manager considers “should I hire this new graduate?” versus “should I have my existing employee who’s now 3-5x as productive just handle that work?”—the answer is at least sometimes the latter. No added headcount, no training investment, no ramp-up time. Employers love that their workers are more productive. They haven’t yet internalized that they could hire entry-level workers and get 3-5x value for 1x cost. The pace of change is too fast, and the easiest “extra work” to absorb is the work you’d planned to hire someone new to do.
From outside, this looks like a tight labor market—companies complaining about a “skills gap,” unable to find qualified workers! From inside, it’s a temporary stall in hiring. As seniors absorb that planned work and still have spare capacity, and as the velocity increase continues generating more demand (Jevons!), the hiring stall will unblock. More hiring will restart—and training itself is knowledge work that AI will amplify. The pipeline isn’t collapsing; it’s stalling.
Phase 2: Collapse (~2027-2030)
AI capability keeps accelerating—smarter, faster, cheaper, every 3-6 months. The demand explosion continues, but the seniors can’t keep up anymore, even augmented. The only thing that can keep up with the explosion of demand is more automation, more AI. Amodei’s “country of 50 million Nobel-caliber workers” becomes a continent of 5 billion laureates. The value of any individual knowledge task approaches zero.
You can still earn a living doing knowledge work—but only if you (amplified by AI) can do absolutely enormous amounts of it. Unimaginable levels of knowledge production by individual humans. Amodei puts this in historical context: 250 years ago, 90% of Americans lived on farms. Now that number is in the low single digits—the economy does what once required most of the labor force with 1-2% of it. We’re heading toward the knowledge-work equivalent. The percentage of people doing knowledge jobs will plummet just as dramatically—but the transition that took agriculture two centuries will happen in years.
The entry-level pipeline breaks first. The temporary stall becomes permanent—why hire junior workers when the gap between “human + AI” and “just AI” keeps narrowing? People trying to break into these fields find there’s nowhere to break into. Then the seniors follow. Even experienced workers can no longer provide enough economic value to justify their cost, not because demand has dried up, but because AI handles unimaginable volumes of work at near-zero marginal cost.
GDP continues to grow—more work gets done than ever. Employment craters—fewer humans are needed to do it. This is what economists call “jobless growth.” The historical pattern—technology creates disruption but ultimately creates more jobs than it destroys—breaks down because there’s nothing left for humans to do better than AI.
I don’t think I’m going to be writing software for a living for much longer—not because I’ll choose to stop, but because the economic value of “human writes code” is collapsing into “human directs machines.” Call it 2–3 years, call it 4–5. My job has already shifted from writing code to directing systems that write it. If you’re paying attention, the slope is obvious.
I explored in my previous post why this disruption is bigger, faster, and broader than anything before—and why the preparation window is measured in years, not decades.
What I’m confident about: Phase 1 creates a false sense of security. The data looks good precisely when we should be most concerned. By the time the data looks bad, it’s too late to adapt.
Will regulation, liability concerns, and organizational inertia slow this down? In pockets, yes. Large enterprises move slowly. Regulated industries have compliance overhead. Some roles will be protected by legal requirements for human involvement. But these frictions don’t prevent the transition—they create unevenness. And unevenness is worse than uniformity: it breeds false comfort in the protected pockets while generating political backlash from the exposed ones. As I explored in my previous post, people route around institutional obstacles when tools let them do things faster, better, or cheaper. “You need a lawyer for that” becomes “I used AI to draft the contract and had a lawyer review it for an hour” becomes “I used AI and it was fine.”
Blue Collar Isn’t Safe—It’s Just Slower
If you work with your hands, you might be feeling pretty good right now. All this AI disruption is hitting the knowledge workers—the lawyers, the accountants, the coders. Physical work requires physical presence. Robots are expensive and clumsy. Blue collar is safe.
I have bad news.
Remember that spectrum I described earlier? Every job has a knowledge component. For many “blue collar” jobs, that knowledge component is substantial—and it’s being disrupted right now.
What does a skilled tradesperson actually do?
- Diagnosis: Figure out what’s wrong. This is pure knowledge work.
- Planning: Determine the approach, order of operations, materials needed. Knowledge work.
- Customer interaction: Explain the problem, discuss options, provide estimates. Knowledge work.
- Troubleshooting: Adapt when things don’t go as planned. Knowledge work.
- Physical execution: Actually do the work. This part is physical.
For many trades, the knowledge component is 30-50% of the job. And that component is being disrupted now—not in the future, now.
Trade publications report that contractors are increasingly using AI for scheduling, diagnostics, customer communication, and business operations.7 Technicians use AI to quickly pull up technical specifications, generate proposals, and troubleshoot unfamiliar equipment—tasks that used to require sifting through manuals or calling senior colleagues.
The AI isn’t replacing the physical work—not yet. But it’s amplifying the knowledge work in ways that have economic consequences.
There’s also DIY displacement. When a homeowner can describe a problem to an AI and get step-by-step repair instructions with video guidance, some percentage of them will fix it themselves instead of calling a professional. Not everyone—plenty of people don’t want to, can’t, or shouldn’t attempt repairs. But at the margins, AI guidance enables people to handle problems they would have hired someone for.
The gradient matters here. An 80/20 physical/knowledge job (like assembly line work) has less knowledge work to displace, and the Jevons effect is weaker because there’s less efficiency gain to unlock new demand. A 50/50 job (like many skilled trades) has enough knowledge work that AI makes a real difference, but the physical component provides protection. The 50/50 jobs get hit first among blue collar work—enough knowledge component to disrupt, not enough physical component to fully protect.
Robotics Acceleration: The Second Wave
Physical work is protected by physicality—but that protection is temporary.
The economics driving AI forward are extraordinarily powerful. When the cost of a knowledge-work task approaches zero, the economic incentive to deploy more AI becomes overwhelming. And right now, the biggest barrier to AI growth is chips—making them, deploying them, cooling them.
The demand is staggering. HBM (High Bandwidth Memory) prices have doubled due to AI accelerator demand.8 Manufacturing capacity through 2027 is already booked.9 Companies are spending billions just to secure allocation. This bottleneck is temporarily limiting how fast AI can advance.
So what will AI be deployed to do? Improve chip manufacturing.
AI is already being used to optimize fab processes, design better chip layouts, predict equipment failures, and reduce defect rates. Every efficiency gain in chip manufacturing loosens the constraint and accelerates AI development. The feedback loop tightens.
And this has spillover effects on robotics.
The same AI techniques that optimize chip fabs—simulation, design optimization, process control—apply directly to robot design and manufacturing. At Northwestern, researchers built an AI that designed a walking robot from scratch in 26 seconds.10 Not optimized an existing design—created one. From prompt to functioning robot, on a laptop. NVIDIA’s Isaac Lab now trains complex robotic behaviors in hours instead of days.11 Google DeepMind’s DemoStart requires 100x fewer training demonstrations than previous methods and transfers nearly zero-shot to physical hardware.12
The iteration loop that historically made hardware development so slow is getting shorter because AI is helping at every step.
At CES 2026, Jensen Huang declared that “the ChatGPT moment for robotics has arrived.”13 NVIDIA announced production-ready robotics platforms: Cosmos for world simulation, Isaac GR00T for humanoid robot control.14 Robot costs dropped 40% between 2023 and 2024—far exceeding the 15-20% annual decline analysts had projected.15 Boston Dynamics is targeting assembly plant deployment for their humanoid Atlas robot by 2028.16
The pattern will be familiar: AI accelerates robot design → better robots are manufactured faster and cheaper → robots improve chip manufacturing → better chips accelerate AI → AI accelerates robot design. The loop feeds itself.
Blue-collar work will eventually face the same amplification-then-collapse dynamic that knowledge work is experiencing now. The timeline will be slower for two reasons:
Robots today are further from human-level: Current AI can already match or exceed humans at knowledge tasks in a large and growing range of applications—and within 1-2 years will likely exceed human capabilities across nearly all of them.17 Current robots are deployed in vertical applications—each robot suited to a particular task. They can’t match human dexterity, adaptability, and general physical capability. The bottleneck isn’t that robots can’t do most physical tasks; it’s that it hasn’t been worth the engineering time to design a robot for each niche application. Cheaper to just hire humans. But when the (knowledge-work) cost of designing a robot approaches zero…
Physical production involves time: Moving atoms through space is almost always slower than flipping bits in software. You can’t speed up a delivery truck by running it on a faster processor. Manufacturing requires material handling, assembly, curing, shipping—all constrained by physics in ways that knowledge work isn’t.
But “slower” doesn’t mean “never.” It means the blue-collar collapse comes 5-10 years after the white-collar collapse instead of simultaneously. The same economic forces apply. The same feedback loops accelerate. The protection of physicality is real, but it’s eroding.
What Remains for Humans?
This is the question I don’t know how to answer. Neither does anyone else, despite confident predictions in every direction.
Science fiction gives us useful shorthand for possible futures:
WALL-E: “Coddled”
Human needs are met by automation. Nobody has to work. But without purpose, humans become passive, atrophied, drifting through entertainment and consumption. We’re healthy but hollow. AI handles everything, and we handle nothing. Slurp.
Star Trek: “Healthy/Expansionist”
Post-scarcity flourishing. Work becomes optional—people work because they want to, pursuing meaning and growth rather than survival. Technology enables human potential rather than replacing it. The Federation model: automation liberates rather than diminishes.
The Matrix: “Tricked and Enslaved”
Humans become resources—not for our labor (obsolete) but for something else. Our consciousness, our data, our role in some larger system we don’t understand. Meaning is manufactured to keep us compliant. We think we’re living real lives but we’re serving purposes we can’t perceive.
Terminator: “At War (Robots Winning)”
Open conflict between humans and AI systems. Whether through misalignment, competition for resources, or mutual misunderstanding, the relationship becomes adversarial. We fight for survival against systems we created.
Dune/Foundation: “After the War, AI Banned”
The Butlerian Jihad scenario—humanity survives a catastrophic conflict with AI and responds by prohibition. “Thou shalt not make a machine in the likeness of a human mind.” But prohibition is never complete. The Mentats of Dune, the Second Foundation of Asimov—something like AI persists in altered form, carefully constrained, serving specific purposes under strict control.
Which of these are we heading toward? I don’t know. Neither does Amodei. Neither does anyone else.
We can’t prevent AI disruption—the capabilities are too powerful, the adoption too widespread. But we might be able to navigate the transition.
The transition is where people get hurt. Amodei’s “adolescence” is the dangerous phase—not the before (when AI was too weak to matter) or the after (when new equilibria have formed) but the during (when old systems are breaking and new ones haven’t yet emerged).
The Luddites weren’t wrong about disruption—they were wrong about the long-term outcome. The Industrial Revolution did ultimately improve material conditions for almost everyone. But that “ultimately” took generations. The transition killed people. It destroyed communities. It created suffering that wasn’t redeemed in the lifetimes of those who suffered it.
Whether this transition—faster than anything before, broader than anything before—will be managed better or worse than past transitions is unknowable. I don’t know the answer. The honest position is uncertainty.
What would change my mind? If AI-driven productivity gains kept rolling without choking off the entry-level pipeline. If capability improvements plateaued for a few years instead of compounding every few months. If institutions showed they could absorb change at this speed without mass unemployment or political backlash. Any of those would make me rethink.
So far,18 the signals19 point20 the other way.21
Closing
We’re in Phase 1 right now. Jevons Paradox makes everything look fine—jobs growing, demand expanding, productivity tools empowering workers rather than replacing them. The data says AI fears are overblown. This is the trap.
The warning signs are subtle: entry-level hiring is slowing, the knowledge component of physical jobs is being disrupted, the self-reinforcing loop in robotics is tightening. None of this shows up in headline numbers yet. By the time these trends appear in the data economists watch, the transition will be well underway. What happened to knowledge work over the past two years will happen to physical work over the next five to ten. Not instantly, not everywhere, but inexorably.
We can’t stop this. The economics are too compelling. A technology that makes workers 10x more productive will be adopted—not because anyone wants to hurt workers, but because the alternative is being out-competed by those who adopt it. The prisoner’s dilemma of technological adoption plays out whether we want it to or not.
But we can prepare. We can build transition infrastructure before we need it. We can think about what work means when work is optional. We can design social systems that don’t depend on employment for meaning and sustenance. The worst outcome isn’t AI disruption itself—it’s being caught unprepared by something we could see coming, watching the Jevons numbers and congratulating ourselves while the ground shifts beneath us.
Hope, and Mission
Governance is knowledge work. Planning is knowledge work. Policy design is knowledge work. We can—and must—use AI to help us navigate this transition. The same tools reshaping the economy can help us understand what’s happening and design better responses. We have to pay attention to this and guide things with AI toward the best paths.
Amodei warns that if good actors don’t use AI to shape the future, bad actors certainly will. I’m not advocating for WALL-E—I think we should aim for Star Trek, or something even better we haven’t imagined yet. But the point is we need to be deciding now. Actively. Consciously. We need to be steering.
Do we want to ban chip exports to adversaries (CCP!)? Impose regulations that prevent any single person or company from controlling a country-sized population of AI workers (Elon!)? Have displaced white-collar workers supported financially by blue-collar workers who are still employable (oooh that’d be popular!), until the blue-collar collapse catches up? Something else entirely? These aren’t hypothetical questions for some distant future committee. These are decisions we need to be making now, while we still have time to implement them.
The tools exist. The question is whether we’ll use them wisely—and whether we’ll start soon enough.
Dario Amodei, “The Adolescence of Technology”, January 2026. ↩︎
William Stanley Jevons, The Coal Question (1865). Jevons observed that James Watt’s improvements to the steam engine, which greatly increased the efficiency of coal-power, led to increased consumption of coal in a wide range of industries. ↩︎
“Why the AI world is suddenly obsessed with a 160-year-old economics paradox”, Planet Money (NPR), February 2025. Excellent accessible explanation of Jevons Paradox and its application to AI. ↩︎
Geoffrey Hinton’s 2016 comments about radiologists were made at the Machine Learning and the Market for Intelligence conference. He stated that “we should stop training radiologists now” because deep learning would outperform them within five years. ↩︎
“The Growing Nationwide Radiologist Shortage”, Radiology (RSNA), March 2025. The study notes that “the increasing number of imaging studies, owing to advancing technology and an aging population, is outgrowing the capacity of radiologists.” ↩︎
2025 Stack Overflow Developer Survey, Stack Overflow, 2025. Shows 76% of developers use AI tools in their development process, with AI engineer roles growing 140%+ year-over-year according to LinkedIn data. ↩︎
“Smart tools in skilled hands: AI in the trades”, Plumbing & Mechanical, November 2025. Reports on how contractors are using AI for scheduling, diagnostics, customer communication, and business operations. ↩︎
Memory Industry Update, Semiconductor Digest, January 2026. HBM (High Bandwidth Memory) prices approximately doubled between 2024 and 2025 due to AI accelerator demand. ↩︎
Multiple semiconductor industry sources report that advanced AI chip manufacturing capacity through 2027 is effectively sold out, with major customers (Microsoft, Google, Meta, Amazon) having placed long-term orders. ↩︎
“Instant Evolution: AI Designs New Robot from Scratch in Seconds”, Northwestern Engineering, October 2023. The AI designed a walking robot in 26 seconds on a personal laptop, requiring only 10 design attempts versus thousands with prior methods. ↩︎
“Fast-Track Robot Learning in Simulation Using NVIDIA Isaac Lab”, NVIDIA Developer Blog. Training runs at approximately 90,000 frames per second on a single GPU, reducing training times from days to hours. ↩︎
“Advances in Robot Dexterity”, Google DeepMind Blog, September 2024. DemoStart requires 100x fewer simulated demonstrations and achieves over 98% success rate before transferring to physical robots. ↩︎
Jensen Huang, CES 2026 keynote, January 2026. “The ChatGPT moment for robotics has arrived.” ↩︎
NVIDIA announcements at CES 2026: Cosmos world simulation model, Isaac GR00T N1.6 for humanoid robot control, and Jetson T4000 edge deployment platform. ↩︎
International Federation of Robotics World Robotics Report, 2025. Industrial robot costs declined approximately 40% between 2023 and 2024, exceeding analyst projections of 15-20% annual decline. ↩︎
Boston Dynamics investor presentation, Q3 2025. Atlas humanoid robot deployment in assembly plants targeted for 2028. ↩︎
Amodei discusses the timeline for AI exceeding human capabilities across knowledge work in the “The Adolescence of Technology” essay, particularly in the sections on economic disruption and his estimates for when AI will match or exceed human performance in most cognitive tasks. ↩︎
AI training compute has grown 4-5x annually since 2010—a 10-billion-fold increase with no plateau. Far from slowing, the Epoch Capabilities Index shows AI progress actually accelerated by 90% in 2024, with capability improvements nearly doubling from ~8 points/year to ~15 points/year. ↩︎
Entry-level tech hiring has collapsed. New grads now account for just 7% of Big Tech hires, down from 30% in 2019. Entry-level hiring fell 25% from 2023 to 2024 and is down over 50% from pre-pandemic levels. The share of top CS graduates landing roles at the Magnificent Seven dropped by more than half since 2022. See SignalFire State of Talent Report 2025. ↩︎
Recent history offers little evidence that institutions absorb major transitions without mass pain. Digital disruption eliminated 57% of newspaper jobs between 2008-2020—every major chain downsized or went bankrupt. Coal mining lost 150,000+ jobs from its peak; only 4 of 222 coal-dependent Appalachian counties successfully transitioned. The 2017-2020 “retail apocalypse” closed 32,000+ stores. In each case, no major company avoided the pain. ↩︎
The 2023 WGA strike (148 days) and SAG-AFTRA strike (118 days) made AI protections a central battleground. The 2024-2025 video game strike lasted nearly a year with AI as a central issue. The ILA port strike shut down East Coast ports demanding a total ban on automation. Public concern about AI job displacement grew from 34% to 47% negative views between December 2024 and June 2025. ↩︎