Perhaps it's not worth its salt when it comes to health advice.
A stunning medical case report published last month revealed that a 60-year-old man with no history of psychiatric or medical conditions was hospitalized with paranoid psychosis and bromide poisoning after following ChatGPT's advice.
The unidentified man wanted to cut sodium chloride (table salt) from his diet. After consulting the AI chatbot, he spent three months substituting sodium bromide, a toxic compound. Bromine can replace chlorine in cleaning and sanitation applications, but not for human consumption.
"[It was] exactly the kind of error a licensed healthcare provider's oversight would have prevented," Andy Kurtzig, CEO of the AI-powered search engine Pearl.com, told The Post. "[That] case shows just how dangerous AI health advice can be."
In a recent Pearl.com survey, 37% of respondents reported that their trust in doctors has declined over the past year.
Suspicion of doctors and hospitals isn't new, but it has intensified in recent years amid conflicting pandemic guidance, concerns over financial motives, poor quality of care and discrimination.
Skeptics are turning to AI, with 23% saying they trust AI's medical advice over a doctor's.
That worries Kurtzig. The AI CEO believes AI can be useful, but that it does not and cannot substitute for the judgment, ethical accountability or lived experience of medical professionals.
"Keeping humans in the loop isn't optional; it's the safeguard that protects lives," he said.
Indeed, 22% of the Pearl.com survey's respondents admitted they had followed health guidance that was later proven wrong.
There are several ways that AI can go awry.
A Mount Sinai study from August found that popular AI chatbots are highly vulnerable to repeating, and even expanding on, false medical information, a phenomenon known as "hallucination."
"Our internal studies reveal that 70% of AI companies include a disclaimer to consult a doctor because they know how common medical hallucinations are," Kurtzig said.
"At the same time, 29% of users rarely double-check the advice given by AI," he continued. "That gap kills trust, and it could cost lives."
Kurtzig noted that AI could misinterpret symptoms or miss signs of a serious condition, leading to unnecessary alarm or a false sense of reassurance. Either way, proper care could be delayed.
"AI also carries bias," Kurtzig said.
"Studies show it describes men's symptoms in more severe terms while downplaying women's, exactly the kind of disparity that has kept women waiting years for diagnoses of endometriosis or PCOS," he added. "Instead of fixing the gap, AI risks hard-wiring it in."
And finally, Kurtzig said AI can be "downright dangerous" when it comes to mental health.
Experts warn that using AI for mental health support poses significant risks, especially for vulnerable people.
In some situations, AI has been shown to provide harmful responses and reinforce unhealthy thoughts, which is why it's important to use it thoughtfully.
Kurtzig suggests using AI to help frame questions about symptoms, research and widespread wellness trends for your next appointment, and leaving diagnosis and treatment options to the doctor.
He also highlighted his own service, Pearl.com, which has human experts verify AI-generated medical responses.
"With 30% of Americans reporting they cannot reach emergency medical services within a 15-minute drive from where they live," Kurtzig said, "this is a great way to make professional medical expertise more accessible without the risk."
When The Post asked Pearl.com if sodium bromide could replace sodium chloride in someone's diet, the response was: "I absolutely would not recommend replacing sodium chloride (table salt) with sodium bromide in your diet. This would be dangerous for several important reasons…"