AI models are behaving in ways unforeseen by their developers and, in some cases, even engaging in manipulative and deceptive conduct, according to a charitable group that researches AI safety.
At a parliamentary inquiry hearing in August 2024, Greg Sadler, CEO of Good Ancestors Policy (GAP), gave evidence about the risk of developers losing control of AI systems, and of AI programs being directed to develop bioweapons or carry out cyberattacks.