137 VENTURES RAISES $700M ON ONE SUCCESSFUL BET • AI LABS MASTER THE SIX-MONTH THRONE: A STUDY IN IMPERMANENCE • AI STARTUPS ANNOUNCE EXCITING 12-MONTH EXPIRATION DATES • AI STARTUPS: BUILT FOR OBSOLESCENCE, FUNDED ANYWAY • ANTHROPIC DEMANDS $900B VALUATION DECISION IN 48 HOURS • BALLMER ADMITS HE GOT DUPED: VENTURE'S GREATEST HITS • BIG TECH DOUBLES DOWN ON LIFE-OR-DEATH DECISIONS, IGNORES WARNINGS • BMO DEPLOYS QUANTUM COMPUTING TO SOLVE PROBLEM PHYSICS CANNOT
Est. when term sheets
outnumbered good ideas
www.dumbcapital.com
North American VC & M&A News — Unfiltered, Unimpressed, Unprofitable
North America Edition
Friday, May 1, 2026
Free (Like Your Equity)
★ Unicorn Watch
Opinion

Big Tech Doubles Down on Life-or-Death Decisions, Ignores Warnings

Nvidia, Microsoft, and Amazon are expanding classified military AI systems despite multiple groups flagging the existential risks of automating lethal judgment calls.

In a move that perfectly encapsulates Silicon Valley's "move fast and break things" ethos applied to actual human lives, Nvidia, Microsoft, and Amazon are expanding classified military AI systems despite widespread warnings that deploying these tools in life-or-death scenarios is, well, potentially catastrophic. The three companies are apparently treating documented risk assessments the way a startup treats user privacy: as a minor checkbox to be acknowledged and immediately ignored.

What makes this particularly delicious is the transparency of the calculus. These firms know the risks. Multiple groups have explicitly highlighted them. But a classified defense contract—the kind that comes with government backing, recurring revenue, and minimal public scrutiny—is simply too juicy to let something like "could accidentally authorize lethal force" get in the way. The business model is sound: heads, we win the contract; tails, we blame the algorithm.

The expansion tells you everything about how these companies view risk management. It's not a genuine safety concern to be engineered away. It's a regulatory theater requirement—acknowledge the risk, deploy anyway, and hope the first major incident happens after your stock vests. As long as the Pentagon keeps writing checks and the work remains classified, no amount of "several groups highlighting risks" will meaningfully slow the gravy train.

In tech, we call this due diligence. In defense contracting, we call it Tuesday.

💀💀💀💀  Dumb Rating: 4/5 — Liability? Never Heard Of Her
★ From the Glossary
"Classified Military AI Expansion"
The practice of selling increasingly sophisticated algorithmic decision-making systems to the military while the actual failure modes remain secret enough that shareholders never have to price in the downside.

About DumbCapital

DumbCapital covers venture capital and M&A in North America with the skepticism these markets have long deserved and rarely received. We are not impressed by large numbers. We are not moved by press releases. All articles are satirical commentary based on real, publicly reported deals. Nothing here is financial advice.