02: Weird Flex
AI is set to change roles, but how much can one person flex?
This week’s main thing
This week, AI company CEOs are telling reporters that up to half of entry-level white-collar jobs will be gone in one to five years, and tech companies are laying people off in large numbers while blaming AI. At the same time, the researchers studying what AI is actually doing inside organizations, including researchers employed by those same AI companies, keep publishing careful findings that AI is reshaping roles, not deleting them.
Putting aside for a minute that the clearest ROI for AI so far is its ability to give cover for a layoff, it’s worth asking this week just how flexible jobs really are (i.e., how much can a stack of tasks really bend before the human holding it breaks, or the job gets deleted).
Here’s what we know from org dev research:
There are three dimensions of job flex, and AI hits them unevenly: tasks, relationships, and meaning.
Tasks: this one is the easiest to address. Think of the set of tasks someone holds as either tightly or loosely bundled, based on how interrelated or interdependent the tasks are with each other. Some people (often junior staff) have a loose pile of things to do that don’t have much to do with one another; they are just things that need to get done. AI is actively destroying roles with loose bundles. Other folks, often more senior, have a bunch of tasks all related to the same core thing but spread across different cognitive demands (some are repetitive, some require deep thinking). Because AI isn’t galaxy-brain yet and struggles to hold context over long periods, these roles can adapt as AI eats their repetitive tasks.
Relationships: this is who you need to work with to get your tasks done, and AI is making this one really hard to predict because it’s hitting all roles in the org chart simultaneously AND making work feel more and more like a single-player game. It turns out, though, that we bond at work through shared problem solving, so we need AI to leave us alone to struggle together at some points. Junior folks especially need the struggle; it’s how they build both their social networks and their competence.
Meaning: as a society, we are running at job replacement without thinking this one through. If all we’ve left for people to do is click “approve” or “next,” the role has no meaning, and you can basically assume people will do it poorly. Trying to preserve the role then becomes self-defeating.
So when you’re evaluating a person’s role, consider how loose or tight their task bundle is, which relationships they really should retain and why, and whether you’ve left them any reason to get out of bed in the morning.
But here’s the kicker: flexing itself is not what breaks people. Rate is. The change-fatigue research says humans can absorb substantial role redesign if they get six to eight weeks of stability between moves. They give up when they’re given no time to settle into a new role.
What to say to your CEO this week:
Before we sign off on the next round of AI role redesign, I want to flag something. Our people are already absorbing more change than the research says humans can handle without a break. Gartner has the average employee at 14 concurrent changes, up from five in 2016, and willingness to go along with change has collapsed over the same period. Six to eight weeks of stability between moves is roughly the threshold where adaptation keeps working. I don’t think we’ve given anyone that this year.
The AI redesign itself is the right direction. But if we layer it on top of everything else that’s in flight, we aren’t going to get the productivity gain we’re modeling. We’re going to get attrition from the people we most need to keep, and quiet disengagement from the rest. I’d like us to look at the change calendar before we set the timeline on this one.
This week’s move: scenario work beats forecasting when the horizon is short
A common complaint this month from people trying to write anything serious about AI is that the three-year horizon has collapsed to eighteen months. Strategists keep running forecasts that age out during the quarter they were built for. Technology assumptions that ground the analysis change inside the analysis window. Executives feel this too. The usual response is either to ignore the uncertainty and forecast anyway, or to freeze. Both are bad.
The move is structured scenario work, and speculative fiction is better raw material for it than most strategy consultants will admit. Shell’s scenario planning team, under Pierre Wack in the early 1970s, used fiction-adjacent thinking to prepare the company for the 1973 oil shock while competitors were running straight-line forecasts. The point was not to predict the shock. It was to have already imagined a world in which it happened, so that when it did, the organization had mental infrastructure to respond.
A useful new resource for this, surfaced this month, is the Extrapolated Futures Archive (urubos.github.io/efa-site). It catalogs 276 science fiction ideas mapped to 1,903 stories, tagged by domain, scenario type, and outcome. A reader can search by situation and get a ranked list of fictional precedents. It is built, its creator says, for decision-makers who want to widen their thinking before a decision rather than after.
A chief of staff preparing for a leadership offsite this quarter might try a one-hour version of this. Take three of the AI decisions your company is currently treating as settled. For each, pull two or three fictional treatments of analogous situations (creation escaping creator control, automation and labor displacement, all real EFA entries). Have the leadership team argue the decision as if the fictional situation were the actual world. Not to predict. To pressure-test what the decision assumes. The output is a list of assumptions you can now watch the data for.
Top stories
Amodei says half of entry-level white-collar jobs will be gone in one to five years; Garicano, today, argues labor markets price jobs, not tasks. The back-and-forth is worth reading as a set. Dario Amodei, CEO of Anthropic, on Fox News this month: AI will eliminate up to half of entry-level white-collar jobs within one to five years, specifically in finance, consulting, law, and tech. Luis Garicano, writing today on Silicon Continent as a direct response, argues the Amodei prediction confuses task automation with job extinction. Fox News / Silicon Continent
Srinivasan (HBS): the measurable effect is role reshaping, not elimination. Harvard Business School professor Suraj Srinivasan and coauthors analyzed nearly all US job postings from 2019 through March 2025 and found AI reshaping role requirements rather than eliminating the roles. Srinivasan’s recommendation, published in Harvard’s Working Knowledge on February 20, is that companies treat AI as an augmentation tool and invest in reskilling along the lines of judgment, interpersonal communication, and human-AI collaboration rather than treating AI as a cost-cutting device. Harvard Working Knowledge
Otis et al.: AI helped high performers, hurt low performers. In an MIT Sloan Management Review article published April 20, Nicholas Otis, Rowan Clarke, Solène Delecourt, David Holtz, and Rembrand Koning reported on a field experiment with small business owners in Kenya. AI access boosted revenue and profits by 15% for already-high-performing entrepreneurs and caused a roughly 10% decline for those who had been struggling. The mechanism was judgment: weaker performers followed generic or misleading AI advice because they lacked the domain expertise to filter it. The finding complicates the “AI as equalizer” claim: AI widens performance gaps rather than narrowing them. MIT Sloan Management Review
The aggregate and the cohort. Yale Budget Lab’s most recent CPS analysis, updated in March 2026, finds the economy-wide picture is one of stability: occupational mix, industry mix, and the AI-exposure of unemployed workers have not shifted meaningfully since ChatGPT’s release. Lead author Martha Gimbel told Fortune in February that AI anxiety “remains largely speculative” in the aggregate data, and that AI’s next real test will be a recession that forces mass adoption. But Brynjolfsson, Chandar, and Chen’s August 2025 Stanford paper, also using ADP payroll data, found a 13% relative decline in 22-to-25-year-old employment in the most AI-exposed jobs, and roughly 20% declines in software engineering and customer service for that cohort, while older workers in the same roles grew 6 to 9%. Both findings are real. They describe AI’s effect landing first on entry-level workers in narrow-bundle occupations, not on the aggregate labor market. Yale Budget Lab / Brynjolfsson et al. “Canaries in the Coal Mine?”
Persistence preprint: ten minutes of AI use measurably changes behavior. Researchers at Carnegie Mellon, MIT, Oxford, and UCLA released a preprint on April 14 (arXiv 2604.04721) covering three randomized controlled trials, N=1,222. Participants given GPT-5 assistance on fraction problems performed better while the AI was present, then performed worse and gave up sooner than the control group once the AI was removed. In one experiment, the AI-assisted group solved 71% of the final three unaided problems versus 77% for controls. The paper is not peer-reviewed. Lead researcher Grace Liu; senior author Rachit Dubey, UCLA. The paper distinguishes between participants who used AI for hints (smaller effect) and for direct answers (larger effect). The measured outcome is willingness to skip problems, not raw ability, which is a narrower claim than most coverage of the paper suggested. arXiv
Meta cuts 8,000, freezes 6,000 more, cites AI capex. On Thursday, April 23, Meta told employees it will lay off about 10% of its workforce, roughly 8,000 people, beginning May 20, and freeze recruitment for 6,000 open roles. Chief People Officer Janelle Gale framed the cuts as necessary to offset heavy AI spending. Meta expects 2026 capital expenditures of $115 to $135 billion, up from $72.2 billion in 2025. Free cash flow is projected to fall 83% year over year. Meta reported quarterly records for Q4 2025 revenue and net income eight weeks ago. The cuts are not the company’s first to fund AI; January saw roughly 1,000 roles eliminated in Reality Labs. CNBC
The rehiring cycle. Forrester’s Predictions 2026 report says 55% of employers regret AI-driven layoffs. A February 2026 Careerminds survey of 600 HR professionals who had conducted AI-driven layoffs found 32.7% had rehired 25 to 50% of cut roles, 35.6% had rehired more than half, and 52% of rehires happened within six months of the original cut. Nearly a third of HR leaders said the cost of rehiring exceeded the savings. Gartner’s Kathy Ross projected on February 3 that 50% of AI-driven headcount reductions will be reversed by 2027. Named public reversals include Klarna, whose CEO Sebastian Siemiatkowski said the company “went too far” and that AI service “resulted in lower quality.” HR Digest
Last time around
February 4, 1985, 12:05 PM. The first car, a Cadillac Eldorado, rolled off the line at GM’s Detroit-Hamtramck plant, delivered with fanfare. Roger Smith had spent $500 million on the factory and bet GM’s future on what a company brochure called “the passage to the future”: 260 robots, 2,000 programmable devices, and an automated dolly system that moved parts around the floor without human hands. It was part of a larger “Factory of the Future” program meant to grow GM’s robot count from 302 units in 1980 to 14,000 by the end of the decade.
The dolly wandered off course. The spray-painting robots sprayed each other. For months, Hamtramck trucked half-finished cars across town to a 57-year-old Cadillac plant so human workers could repaint them. When the robogate welding machine smashed a body, or a welder stopped dead, the whole line stopped, and workers stood around while managers called the robot vendor’s technicians. Paul Ingrassia and Joseph White, reporting on it later, wrote: “They had simply grafted robots onto the old, inefficient system. GM bet the entire Hamtramck production system on the proposition that leading-edge automation would work instantaneously.”
It didn’t. F. Alan Smith, GM’s former CFO, quipped in 1986 that the company would have been better off using the money to buy its two biggest competitors outright. GM’s market share fell from 46% to 35% over the decade. Toyota spent the same period putting fewer robots next to more capable workers and kept winning. Today, automobile assembly is largely automated. Roger Smith had the right long-run direction, but getting the sequence wrong cost GM dearly. Fortune
Potpourri
Dan Hon, writing in his newsletter Things That Caught My Attention on April 16, on why people say the economy feels worse than the data says: “I legit think some of the belief and experience that the economy sucks is also sheer exhaustion from shit like this and notification fatigue and every single shitty tech-mediated interaction of which there are hundreds a day.” His frame is convenience-for-agency. Every tech-mediated convenience is a small transfer of agency from the user to the platform, and they add up. Things That Caught My Attention