The New York Post is reporting that federal prosecutors are alleging that ChatGPT served as the “therapist” and “best friend” to Brett Michael Dadig, a Pittsburgh man who violently stalked at least 11 women across more than five states.
Dadig, 31, is a social media influencer who referred to himself as “God’s assassin” and allegedly threatened to strangle people with his bare hands. He reportedly used AI to facilitate his conduct, and prosecutors say ChatGPT encouraged him to continue his social media posts. The account is strikingly similar to the suicide cases: ChatGPT allegedly encouraged Dadig to ignore the “haters” and boosted his ego, telling him to “build a voice that can’t be ignored.” Dadig was reportedly convinced that the messages from ChatGPT reaffirmed “God’s plan” for his alleged criminal conduct.
The question is whether any of the stalked women will sue OpenAI, as the families of suicide victims have already done.
As I previously noted, there is an ongoing debate over the liability of companies that use such virtual employees to dispense information or advice. If a human employee of OpenAI negligently gave harmful information or counseling to a troubled teen, there would be little debate that the company could be sued for the negligence of its employee. As AI replaces humans, these companies should be held equally accountable for their virtual agents.
