04: We Have Some Concerns
This is the week the AI narrative hit the fan
This week’s main thing
This is the week the AI narrative hit the fan. Stanford’s 2026 Index reported the widest expert-public divergence it has measured. Gen Z anger about AI overtook excitement in a Gallup survey of 1,572 respondents aged 14 to 29. Anti-AI sentiment intensified in headlines from the Hollywood Reporter, Fortune, MIT Technology Review, and TechCrunch. Pentagon clearances, Wall Street rollouts, a unionization vote at DeepMind, an Anthropic compute deal with the Musk-built Colossus, and a literal kill-switch demo at the largest enterprise software conference of the year all landed in the same seven days.
A gloomy economy + AI-washed layoffs + an entire industry that positioned AI as adversarial to humans so thoroughly that one company brazenly runs “don’t hire humans” billboards = people are pissed and suspicious.
But here’s the thing: this perception gap has been brewing for years, and researchers have already explored how to respond.
If you shape how your org integrates AI, this is the week to tell your CEO that the perception of these tools matters and a listening tour isn’t going to cut it. Trust in leaders is at an all-time low, and a promise to listen (in words alone) will carry almost no weight with workers.
What you need is an AI Deployment Council.
What to say to your CEO this week. "I want to propose we establish an AI Deployment Council this quarter. Here (below) is a charter we could start from. The conversation worth having is what scope and authority we are willing to give it, not whether to have one. The current trust environment will not let an announcement or a vague promise do the work, and acting now could head off an outright revolt."
This week’s move: propose an AI Deployment Council
Propose your company form an AI Deployment Council this quarter. Not a town hall, not a pulse survey, not an executive-led “AI ethics committee.” A standing body with workforce representation, published scope, a regular cadence, written response obligations from leadership, and public outputs. We have published a starter charter drawn from fifty years of European works council practice and the recent research on AI deployment governance. It is concrete enough to be lifted into a real proposal this week.
Expect four objections from your CEO.
“We already have employee feedback channels.” Pulse surveys are not consultation. The PwC data is the answer: 86% of executives believe employees highly trust them; only 67% actually do. Leaders systematically overestimate how well their existing channels work. A council with written response obligations and published outputs is structurally different.
“This will slow us down.” Yes, in the same way reversal triggers slow you down, which is the point. Speed without consultation is what produced the DeepMind union vote and the Project Maven retraction. Speed with consultation is what produced fifty years of higher productivity in German co-determined firms.
“This sounds like a union.” It is not a union. It is a consultative body that exists alongside any union representation. The deeper answer: workers organize unions when they do not have a council. DeepMind is the warning. The choice is between offering structure now or having a more adversarial structure forced later.
“We cannot give workers a veto.” This is the real conversation, and it deserves an honest answer. The charter offers a spectrum: pure advisory, consultative with a written response obligation, or co-determination on a few narrow categories. Most US companies in 2026 will start at consultative-with-response-obligation, not co-determination. That is meaningfully different from theater (leadership must respond, publicly and in writing) without giving anyone a veto.
What the council cannot do: set strategy, make financial decisions, or override customer-facing product choices outside workforce impact. Be honest about scope. The credibility of the council depends on what it actually does, not what it is announced to do.
Top stories
Stanford’s 2026 AI Index documents a record gap between AI experts and the public. Released in April, the seventh annual index found a 50-point divergence between US AI researchers and the US public on AI’s positive impact on jobs (73% vs. 23%), with similar gaps on the economy and medical care. The report aggregates surveys from Pew, Gallup, Ipsos, and Edelman. US trust in government to regulate AI is 31%, the lowest of any country surveyed. Stanford HAI
Google DeepMind’s UK researchers vote 98% to unionize. Following Google’s classified Pentagon deal allowing Gemini for “any lawful purpose,” roughly 1,000 UK-based DeepMind staff voted to seek recognition under the Communication Workers Union and Unite. They are demanding restoration of a 2018 weapons pledge Google removed from its public principles in early 2025. It is the first formal unionization at a frontier AI lab. Fortune
ServiceNow opens its Knowledge conference with a kill-switch demo. CEO Bill McDermott opened by describing a real incident in which an AI agent deleted a production database in nine seconds. President Amit Zavery then demonstrated, live, a one-button system to revoke an agent’s permissions and trace its actions across every system it touched. The framing: “Governance isn’t a feature. It’s the whole ball game.” Fortune
Anthropic announces a $4 billion compute deal with SpaceX three months after Musk publicly called Anthropic “evil.” Anthropic will use the full capacity of SpaceX’s Colossus 1 facility in Memphis, 300 megawatts and more than 220,000 Nvidia GPUs, to expand its Claude Pro and Claude Max products. SpaceX is preparing an IPO targeting $1.75–$2 trillion next month and now has a marquee AI customer for its cloud infrastructure pitch. The deal includes “expressed interest” in developing multiple gigawatts of orbital AI compute. The two companies’ public positions on AI safety and ethics have been widely framed as opposed. Fortune
Last time around
April 11, 1812. A Saturday, just past midnight. More than a hundred men gathered silently at the Dumb Steeple at Cooper Bridge, a stone obelisk by the road in West Yorkshire. They marched in military formation to Rawfolds Mill, organized by company, called by numbers rather than names. The mill belonged to William Cartwright, who had installed mechanical shears that did the work of skilled croppers, men whose trade required a seven-year apprenticeship. Cartwright had been warned about the attack. He had a detachment of soldiers garrisoned inside, beds for his men set up in the counting house, and loopholes cut in the walls.
The garrison opened fire. The fight lasted twenty minutes. Two attackers were mortally wounded; one was John Booth, a 19-year-old apprentice cropper, tortured for names by a magistrate who refused him medical attention until he talked. Booth died without giving any.
The British state deployed 12,000 troops to the Midlands to counter the Luddites, more than Wellington had in Spain, and made machine-breaking a capital crime. Seventeen Luddites were hanged at York Castle the following January. Skilled cropper wages collapsed. The Factory Acts that addressed the worst abuses came forty years later, after the displaced workers were dead.
Eric Hobsbawm called what happened at Rawfolds “collective bargaining by riot.” The Luddites were not anti-technology and not irrational. They were skilled workers whose every legitimate channel (Parliament, the courts, the guild structures, the master-apprentice relationships) had been closed or rendered inert. Machine-breaking was what was left.
The lesson for 2026 is not that revolts are coming. It is that when the formal channels for affected workers to shape deployment close, the informal ones expand. The DeepMind union vote, twenty data center projects blocked or delayed, lawsuits over training data – these are all attempts to use formal channels to influence the direction of a powerful new piece of technology. Spen Valley Civic Society
From the frontier
Four physicists at Emory (Wentao Yu, Eslam Abdelaleem, Ilya Nemenman, and Justin Burton) used a custom neural network to derive new physical laws governing dusty plasma, the ionized gas laced with charged dust particles found in everything from Saturn’s rings to wildfire smoke. The humans designed a network with built-in symmetries, ran the lab experiments, and framed what would count as new physics. The AI did the pattern-matching across particle trajectories that traditional analysis could not crack, learning the non-reciprocal forces between particles to better than 99% accuracy from a small dataset. The result corrected longstanding theoretical assumptions and produced a generalizable framework that may apply to other many-body systems. The paper ran in PNAS; ScienceDaily covered it in late April. PNAS
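What “built-in symmetries” can mean in practice: if the network only ever sees the relative displacement between two particles, translational symmetry holds by construction, and because nothing forces the force on particle i from particle j to mirror the force on j from i, the model is free to learn non-reciprocal interactions. Below is a minimal sketch of that general idea, not the Emory team’s actual architecture; it assumes PyTorch, and every class, variable, and dataset here is an illustrative stand-in.

```python
import torch
import torch.nn as nn

class PairwiseForceNet(nn.Module):
    """Maps a relative displacement r_j - r_i to the force on particle i from j.
    Feeding only relative displacements (never absolute positions) enforces
    translational symmetry by construction. Non-reciprocity is permitted
    because F(i<-j) is not constrained to equal -F(j<-i)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),
        )

    def forward(self, rel_disp: torch.Tensor) -> torch.Tensor:
        return self.net(rel_disp)  # (n_pairs, 3) -> (n_pairs, 3)

def predicted_accelerations(model, positions, masses):
    """Sum learned pairwise forces over all ordered pairs; F = ma gives a."""
    n = positions.shape[0]
    i, j = torch.meshgrid(torch.arange(n), torch.arange(n), indexing="ij")
    mask = i != j
    rel = positions[j[mask]] - positions[i[mask]]  # one row per ordered pair
    forces = model(rel)
    acc = torch.zeros_like(positions)
    acc.index_add_(0, i[mask], forces)             # net force on each particle
    return acc / masses[:, None]

# Train by matching accelerations estimated from tracked trajectories
# (random tensors stand in for one frame of real experimental data).
model = PairwiseForceNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
positions = torch.randn(8, 3)
masses = torch.ones(8)
observed_acc = torch.randn(8, 3)
for _ in range(200):
    opt.zero_grad()
    loss = ((predicted_accelerations(model, positions, masses) - observed_acc) ** 2).mean()
    loss.backward()
    opt.step()
```

The division of labor mirrors the article’s point: the humans decide which symmetries to bake in and what counts as a force law; the optimizer only fills in the pairwise function.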
Potpourri
From someone doing it. Lori Beer, global CIO of JPMorgan Chase, told Fortune at Anthropic’s May 5 financial services briefing that the bank’s central problem with AI is not the technology itself. “There’s this capability overhang. The technology can do so much. It’s the actual organization’s ability to digest and absorb it that tends to be where the gap is.” When the global CIO of a 319,000-employee bank names organizational absorption as the binding constraint on AI deployment, the conversation has moved past capability and into governance. Fortune