=====
Dr Chatbot is patient and kind. Should doctors fear this rival?
https://www.straitstimes.com/opinion/dr-chatbot-is-patient-and-kind-should-doctors-fear-this-rival
2026-01-02
By Professor Chong Siow Ann
=====
The patient sat across from me with a thick folder of test results, medical reports and referral letters from multiple specialists. It was a record of years of consultations and a ledger of the time and money spent in pursuit of an answer.
He came to see me – a psychiatrist – at the insistence of his other specialists, though he felt insulted when they told him that his symptoms were “psychological”. He carried a steady conviction, like a heavy weight inside him, that something serious had been missed. With dry humour, he told me he had come to dread the words he heard so often from doctors: “I can’t find anything wrong with you,” and he could sense the exasperation that came with those words.
More recently, he had turned to ChatGPT and DeepSeek, which he found to be more thorough, more patient and endlessly willing to engage with the full complexity of his litany of symptoms.
He has what was once called hypochondria, now renamed illness anxiety disorder. Patients with this condition are all too familiar to most doctors. It is defined by a persistent and consuming fear of having, or developing, a serious illness despite repeated examinations and tests that show no abnormality. It is hard to reassure those who are preoccupied with the idea of being ill. Any suggestion that their complaints may have a psychological basis is experienced not as care, but as dismissal.
Doctors struggle with these patients not simply because they are time-consuming, but because they frustrate the role we like to inhabit: the reassuring, priestly and caring professional whose expertise can calm fear and bring relief. With these patients, however, doctors may over time begin to feel ineffective, drained and even resentful. In such moments, it becomes easy to sound brusque or dismissive.
Gaslighting in medical care
This dynamic is not confined to patients with illness anxiety disorder. It is also common among those with chronic pain, dizziness, gastrointestinal symptoms or complaints that do not fit neatly into diagnostic categories and for which there are no definite causes and hence no easy resolutions.
Over time, many come to feel that they are not being listened to or taken seriously, that they are being brushed aside, and that it is being suggested their suffering is somehow not quite real. This experience is now often described as gaslighting, a term that has gained prominence in recent years and has, in the process, become an umbrella for a wide range of experiences.
In medical literature, however, gaslighting refers to a process in which a patient’s reports of symptoms or concerns are repeatedly dismissed, minimised or reframed as psychological, making the patients doubt the legitimacy of their own experiences and suffering.
The reasons patients feel gaslighted may extend beyond a doctor’s individual failings. Modern healthcare is shaped by pressures the public rarely sees: overcrowded clinics, overbooked schedules, shortened consultations, staffing shortages and relentless administrative demands. Electronic medical record systems add to this strain, tethering doctors to screens and further eroding face-to-face time with patients. As the surgeon and writer Atul Gawande once observed wryly, the electronic health record, which had “promised to increase my mastery over my work, has instead increased my work’s mastery over me”.
These pressures have real consequences. The patient-doctor relationship becomes strained, sometimes to the point of rupture. Some patients delay care; others disengage entirely. Many begin to look elsewhere for answers, and increasingly, that “elsewhere” is artificial intelligence (AI).
AI versus doctors
Large language models such as ChatGPT are rapidly becoming part of everyday life, including medical practice. I use them myself from time to time to search for information or navigate clinical conundrums – albeit with caution and a healthy dose of double-checking. But I am struck by how coherent and confident their responses sound, and by how readily they seem to “understand” what is being asked. It is both impressive and unsettling. It is hardly surprising, then, that ChatGPT has recently been reported to have passed the Turing test, a long-established measure of machine intelligence that asks whether a text-generating system can convincingly fool a human into believing it is not a machine.
There are already accounts in medical literature and popular media of a growing number of patients turning to AI for a wide range of physical and mental health concerns. Some ask questions they feel embarrassed to raise in person. Others seek a second opinion after a consultation that felt hurried or inconclusive. A recent article in Digital Medicine reported that, among patients with cancer, many rated chatbot responses as more empathetic than those written by physicians – an observation that may help explain the increasing willingness to consult these AI tools.
The practice of medicine is fundamentally a human endeavour and its practitioners are fallible humans who have to manage with changing knowledge and uncertain information. Even with sophisticated tests and technologies, doctors must make decisions without complete information. Symptoms are ambiguous, diseases evolve, patients defy textbook patterns and outcomes are unpredictable. Yet when we are sick, distraught or frightened, we understandably want certainty.
AI offers what modern healthcare often cannot: time and certitude. It responds at any time and at length, with well-organised explanations. It never interrupts. It answers the same question again and again – without irritation or impatience – always delivered with an air of confidence and authority. For someone who has felt gaslighted by doctors, this alternative can feel reassuring and validating.
But this comfort is deceptive.
AI systems are known to hallucinate plausible falsehoods and to agree with users even when they are wrong; research has shown that when chatbots are given false information, they readily repeat and elaborate on it.
A recent study by Korean scientists showed that major AI chatbots, including ChatGPT, Claude and Gemini, can be manipulated by prompt-injection attacks (maliciously crafted inputs that alter a chatbot’s behaviour) to bypass safety guardrails and give dangerous medical advice, including recommending thalidomide in pregnancy, a drug historically linked to severe congenital malformations.
But the risk of AI in medicine is not simply that it can make mistakes – human doctors do too, sometimes with devastating consequences. The deeper danger is that AI offers certainty without accountability and confidence without consequence. It does not know when it does not know. Nor does it bear the moral weight of error: it is not bound by the ethical obligations that govern the medical profession, and it does not feel pain, remorse, guilt, sadness, relief or gratitude.
Nor can AI replicate clinical judgment as it is exercised by doctors in the real world of patient care. Clinicians observe what cannot be captured in text: the pallor of a patient’s face, a look of fear, a hesitation in speech, a change in gait. AI cannot sense these nuances of physical examination, pick up non-verbal cues or feel the emotional undercurrent of a consultation – all those subtleties that often cannot be put into words, yet alert us that something is not right. It reassures without knowing when reassurance is misplaced.
If patients feel more heard and validated by machines than by their doctors, the response should not be to discourage AI use; that would be an impossibility. The march of AI into medicine has already begun. It is inevitable and, in all likelihood, transformational, though with no guarantee that it will be all good. Mr Bill Gates predicted in a television interview in February 2025 that AI would be able to deliver sophisticated medical advice – once limited by the availability of doctors – at scale and at low cost, and that it would become a commonplace feature of care within the next decade.
But the situation isn’t that bleak for doctors, nor are we likely to become irrelevant anachronisms. A study published in Frontiers in Psychology in 2024 found that while patients continue to trust human doctors more than AI alone, they place even greater trust in doctors supported by AI. The message is not that doctors are being replaced, but that expectations of care are changing.
And we will have to reflect on what this reveals about medical care today and what needs to be addressed: the pressures that doctors face, the structural constraints and onerous demands of modern healthcare, and the way we interact with our patients. We should remember to be healers.
Used carefully and without defensiveness, AI can even enhance care. Clinicians can learn from it, using its outputs as a starting point for shared decision-making and explaining what fits, what does not and why. Our role as doctors in this new age is not to compete with AI’s fluency or computational power, nor should we surrender to it. But we must fight to keep what is sacred and offer patients what AI cannot: clinical judgment, genuine empathy, humility and moral responsibility.
My patient continued to see me at carefully scheduled intervals. Most of his symptoms persisted. At each visit, he would recount both his symptoms and what the chatbots had told him. I listened, sometimes examined him and resisted his requests for further investigations, explaining that these would likely do more harm than good. We discussed the chatbot responses and talked through the medications I had prescribed. I gave him the time I felt he needed – alas, always beyond the allotted 15 minutes.
Over time, his anxiety lessened and he stopped – at least for now – seeking out other specialists. I like to think that he felt I was trying to understand him, even though I was often floundering and unsure of how best to help. In those moments, I reminded myself of the dictum taught to us since medical school: that as healers, we should “cure sometimes, relieve often, and comfort always”.
Professor Chong Siow Ann is a senior consultant psychiatrist at the Institute of Mental Health.