Existential Safety Is AI Industry’s Core Weakness, Study Warns

‘By really shedding light on what companies are doing, we give them an incentive to do better,’ the Future of Life Institute’s president said.
Participants chat in front of an electronic image of a soldier before the closing session of the Responsible AI in the Military Domain (REAIM) summit in Seoul, South Korea, on Sept. 10, 2024. Jung Yeon-je/AFP via Getty Images
Eight major artificial intelligence (AI) developers are failing to plan for how they would manage extreme risks posed by future AI models that match or surpass human capabilities, according to a study by the Future of Life Institute (FLI) published Dec. 3.

FLI’s Winter 2025 AI Safety Index assessed the U.S. companies Anthropic, OpenAI, Google DeepMind, Meta, and xAI, and the Chinese companies Z.ai, DeepSeek, and Alibaba Cloud across six themes, including current harms, safety frameworks, and existential safety.