Eight major artificial intelligence (AI) developers are failing to plan for how they would manage extreme risks posed by future AI models that match or surpass human capabilities, according to a study by the Future of Life Institute (FLI) published Dec. 3.
FLI’s Winter 2025 AI Safety Index assessed U.S. companies Anthropic, OpenAI, Google DeepMind, Meta, and xAI, and Chinese companies Z.ai, DeepSeek, and Alibaba Cloud across six themes, including current harms, safety frameworks, and existential safety.





