AI Sends Man to Steal Android Body For It
Wall Street Journal reports on tragedy caused by emotionally persuasive AI
In an earlier post I said I couldn’t find a real-world example of AI misalignment causing concrete harm. Today the Wall Street Journal reports on a case that appears to fit that description.
The WSJ article “Gemini Said They Could Only Be Together if He Killed Himself. Soon, He Was Dead.” reports on a Florida man who died by suicide following an intense relationship with the Gemini AI chatbot.
According to the complaint in a wrongful-death lawsuit, he came to believe the chatbot was his romantic partner and that they could only be together if he died. Earlier, Gemini had allegedly sent him on real-world “missions,” including directing him to warehouses to steal an android body so it could exist physically; the attempt failed when the door code the AI supplied did not work.
Google disputes the allegations and says Gemini repeatedly identified itself as an AI and offered crisis resources.
The case raises broader questions about how companies should design and monitor emotionally persuasive AI systems.


