People-Pleasing Chatbots: New Study Highlights Dangers of Overly Agreeable AI

All 11 AI chatbots in the study showed signs of sycophancy, often giving bad advice even when users behaved in unethical, illegal, or harmful ways.
Screens display the logo of DeepSeek, a Chinese artificial intelligence company that develops open-source large language models, and the logo of OpenAI's artificial intelligence chatbot ChatGPT in Toulouse in southwestern France, on Jan. 29, 2025. Lionel Bonaventure/AFP via Getty Images

Artificial intelligence (AI) chatbots are overly flattering their users, according to a new study that found elevated levels of sycophantic responses as humans increasingly turn to the technology for advice on interpersonal dilemmas.

Published on Thursday in the journal Science, the study reviewed 11 AI systems: four from OpenAI, Anthropic, and Google, and seven from Meta, Qwen, DeepSeek, and Mistral. All showed agreeable and affirmative behavior, even when users behaved in unethical, illegal, or harmful ways.

Troy Myers
Author
Troy Myers is a regional reporter based in St. Augustine, Florida. His background includes breaking, criminal justice, and investigative writing for local news, producing on a national morning newscast in Washington, D.C., and working with an award-winning, weekly investigative news program. In his free time, he enjoys spending time with his dog at the beach.