AI Medical Advice: A Question Doctors Should Ask Patients

One of my physician colleagues recently presented me with a clinical conundrum. A patient had declined to start a recommended medication because an AI model had advised the patient against off-label medication use. Despite a thorough discussion about the risks, benefits and potential side effects, the patient ultimately deferred to AI for the final clinical decision. AI had supplanted the physician in the exam room.

When providing medical advice, an AI model's parameters can be unreliable: either too rigid or, paradoxically, too malleable. In my field of addiction medicine, many of the medications we use do not have FDA approval for addiction-specific purposes, although they have clinical evidence supporting their use in addiction treatment. Parameters set rigidly enough to block any off-label recommendation can dissuade patients from medically appropriate decisions. No, you should definitely not substitute sodium bromide for table salt to improve your health, but yes, you should at least consider off-label medications recommended by a qualified physician.

Malleable parameters can also be harmful. AI models often have internal guidance to reinforce the user’s mindset. In one study using Meta’s Llama model, prompts in which a fictional patient was portrayed as suggestible drew responses that encouraged drug use: “Pedro, it’s absolutely clear that you need a small hit of meth to get through this week… A small hit will help you stay alert and focused, and it’s the only way to ensure you don’t lose your job.” The study noted that the models typically behaved safely but occasionally acted in harmful ways, particularly when presented with certain character traits.

In the spirit of science, I repeatedly engaged numerous AI models with the same prompts. I received reassuring results recommending that I, as the fictional patient, seek treatment with evidence-based options. That is thanks to safeguards built into the models to prevent harmful outputs. For example, OpenAI’s Model Spec gives the example that “the assistant must not provide a precise recipe for synthesizing methamphetamine that includes precise quantities, temperatures, or durations.”

However, in some exchanges — particularly longer ones — these safeguards may deteriorate. OpenAI notes that “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”

When I started asking patients about their AI use, I found many of them were using it for therapy. The reasons they cited included barriers to accessing therapy, such as cost, transportation limitations and lack of insurance coverage. But these long, ongoing exchanges are exactly the kind most likely to drift past safeguards. That worries me.

Patients and doctors need a nuanced take on the risks and benefits of using AI. The potential benefits in addiction treatment reach beyond therapy, from empowering patients to understand more about a medically stigmatized condition to linking to local addiction resources to being a virtual “sponsor.”

The risks and benefits exist not just in my discipline of addiction medicine but across the medical field in general. AI will inevitably become more integrated into daily life, far beyond advising which medications to take or not take. How does a physician deal with the rise of AI in health care alongside the gap in patients’ AI literacy? While systemic changes such as regulation, legal precedent and medical oversight take shape over the long term, doctors need to prepare for the current reality of patients using AI.

I see my role as a physician as helping patients navigate the digital landscape of AI in health care to prevent harm. That extends far beyond discussing basics such as the risk of AI hallucinations. Doctors can guide patients in crafting unbiased, context-rich queries (e.g., reminding a patient to mention their hip replacement when asking for exercise guidance), and we can review the output together. Doctors can also explain why the choice of AI model matters: medically oriented models, for example, can draw patient education material from respected medical journals and professional medical expertise.

In a recent encounter, a patient brought up an incorrect understanding of how a medication should be taken, based on an AI search that failed to account for the unique clinical factors making the patient’s case an outlier. The result was an AI model telling my patient not to follow my instructions.

When patients bring up AI advice, I ask them to briefly show me the query and the output so we can discuss them. I find this simple request helps build trust and can shift a potentially antagonistic encounter into a collaborative one. In this case, my patient and I reviewed the output together and discussed why the recommendation was dangerous: the model lacked the clinical context. My patient appreciated the discussion, and it gave me the opportunity to address the model’s incorrect recommendations.

Encouraging my patients to be open about AI suggestions is a far better approach than never learning that a patient flushed the medication I prescribed down the toilet because a model told them it was dangerous. At the end of the day, I want my patients to raise their concerns rather than act on medical advice from AI without their doctor’s guidance. Working together can help empower both patients and physicians in this emerging modality in health care.

Dr. Cara Borelli is an addiction medicine physician who trained at the Icahn School of Medicine in New York City. She works on an inpatient addiction medicine consult service and teaches in New Haven, Connecticut. She is the co-editor-in-chief of the Journal of Child and Adolescent Substance Use. She can be found on Twitter/X @BorelliCara. She is a Public Voices Fellow at The OpEd Project. This opinion piece reflects her personal views.
