
Why Anthropic Risks Federal Contracts to Keep Human Control Over Lethal AI
Pages from the Anthropic website and the company's logos are displayed on a computer screen in New York on Feb. 26, 2026. Patrick Sison/AP Photo
Commentary

On March 7, Caitlin Kalinowski, OpenAI’s head of robotics and hardware, resigned over her company’s Pentagon deal, stating that surveillance of Americans without judicial oversight and lethal autonomy without human authorization were lines that “deserved more deliberation than they got.” Her departure followed announcements from Microsoft, Google, and Amazon Web Services—the platforms through which businesses and individuals access Claude. Each independently issued a careful legal statement assuring users that the Pentagon’s supply-chain risk designation against Anthropic applies only to the company’s federal contracts and does not affect access to Claude through their platforms for non-defense workloads.

Li Li
Author
Li Li, CFA, CIPM, CFP®, is an adjunct professor in the M.S. in Financial Planning program at New York University and a wealth management advisor at Forest Hill Financial Group. She studied economics at Rutgers University and the University of California–San Diego, has taught economics at Pace University, and previously served as a strategist and analyst at AT&T. She can be reached at [email protected].