TALLAHASSEE, Fla.—A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment—at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company’s chatbots pushed a teenage boy to kill himself.
The judge’s order allows the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.