Warning: This article contains descriptions of self-harm.
Can an artificial intelligence (AI) chatbot twist someone’s mind to the breaking point, push them to reject their family, or even coach them toward suicide? And if it did, would the company that built that chatbot be liable? What would need to be proven in a court of law?