For years, patients have walked into clinics carrying clippings from newspapers, advice from friends, or the latest findings from WhatsApp groups. Today, they arrive with something far more sophisticated: a neatly packaged diagnosis or even a prescription generated by artificial intelligence. According to a recent Medscape report, this trend is rapidly reshaping the dynamics of clinical practice.
When AI Becomes the Confident Voice in the Room
Dr. Kumara Raja Sundar, a family physician at Kaiser Permanente Burien Medical Center in Washington, described one such case in JAMA. A patient presented with dizziness and, with striking medical precision, said, “It’s not vertigo, but more like a presyncope feeling.” She confidently suggested a tilt table test for diagnosis. Intrigued, Sundar asked if she worked in healthcare. Her reply: she had asked ChatGPT.
What stood out was not just the information but the confidence with which it was delivered, subtly challenging the physician’s role as the sole authority.
The Pressure of Impossible Benchmarks
Large language models such as ChatGPT have demonstrated impressive reasoning and communication abilities, but comparing them to doctors is problematic. Physicians juggle limited consultation time, staff shortages, and systemic pressures. AI, by contrast, appears limitless. Sundar observed in his article that this imbalance creates unrealistic expectations: “Unfortunately, under the weight of competing demands, what often slips for me is not accuracy, but making patients feel heard.”
Navigating Trust and Tension
The arrival of AI-informed patients brings practical challenges. Requests for advanced or unnecessary tests, such as tilt table examinations or hormone panels, often collide with real-world constraints like delayed appointments or limited access. Sundar wrote that explaining overdiagnosis and false positives can sometimes sound dismissive rather than collaborative, further straining trust.
The shift, he warns, risks fostering a new kind of defensiveness among clinicians: the quiet thought that a patient has “ChatGPT’d it” before walking into the room. Such attitudes, he argued, can further erode an already fragile doctor–patient trust.
From Gatekeepers to Partners
For some patients, AI tools are more than information sources; they are instruments of advocacy. One patient told Sundar, “This is how I can advocate for myself better.” The language of advocacy reflects the effort required to be taken seriously in clinical spaces. Doctors, he emphasized, must resist gatekeeping and instead acknowledge patients’ concerns before moving to clinical reasoning. His preferred approach is to begin with empathy: “I want to express my condolences. I can hardly imagine how you feel. I want to tackle this with you and develop a plan.”
A Global Conversation
What Sundar has seen in the United States is not unique. The Medscape report highlights that doctors worldwide now face AI-informed patients as the norm rather than the exception. In Germany, gynecologists report women consulting ChatGPT for menstrual disorders, often encountering contradictory or alarming answers. Specialists in internal medicine note that Googling side effects leads patients to experience nearly all of them—even when they had none before.
Clinicians responding in online forums have called for transparency, structured patient education, and even humor as tools for navigating this new reality. One remarked that “online consultation takes on a whole new meaning” when AI walks into the room with the patient.
When AI Advice Turns Dangerous
The blurred line between helpful guidance and hazardous misinformation was recently illustrated in a striking case reported in the Annals of Internal Medicine in August 2025. A 60-year-old man who wanted to cut down on table salt turned to ChatGPT for alternatives. The chatbot recommended sodium bromide, a compound more familiar in swimming pool maintenance than in home kitchens. Trusting the advice, he used the substance for several months until he landed in the hospital with paranoia, hallucinations, and severe electrolyte imbalances. Doctors diagnosed bromism, a condition rarely seen since the early 20th century, when bromide salts were widely prescribed.
Physicians treating the man noted bromide levels more than 200 times the safe reference range, which explained his psychiatric and neurological decline. He recovered with intensive fluid therapy and correction of his electrolytes, but only after a three-week hospital stay.
The case is a reminder that medical judgment requires not just knowledge but also context and responsibility, qualities AI does not yet possess.