The Third Chair in the Room

[Illustration: a watercolor cartoon of a doctor and a patient in a consulting room, with a glowing holographic AI seated in a third chair between them, surrounded by floating medical data icons, while the patient looks down at his smartphone.]

When AI Joins the Doctor-Patient Dialogue

The doctor’s consulting room was once a sanctuary. It was a private space for two people. One sought help, and the other offered it. That era is over.

Today, there is a third chair in the room. It is occupied by Artificial Intelligence (AI). It is invisible, but it is loud. It sits in the patient’s pocket and on the doctor’s screen too. The democratization of high-level medical knowledge is no longer a futuristic dream. It is a messy, chaotic, and exciting reality. We cannot shut the door on it. Instead, we need to learn how to talk to it.

This shift has transformed the traditional “dual” interaction into a “triad”: a three-way relationship between the doctor, the patient, and the algorithm. [1]

The Unseen Art of Diagnosis

Technology has a blind spot. It assumes medicine is just data processing. It thinks that if you feed in symptoms, blood pressure, and lab reports, the answer will pop out. But seasoned doctors know that diagnosis often happens before the patient says a word.

It starts the moment they walk in.

We watch the gait. Is it steady? Is it shuffling? We look at the face. There is the specific “toxic” look of severe infection. We notice the smell: the subtle, fruity scent of ketoacidosis, or the musty scent of liver failure. We hear the voice. A slight breathlessness between words can tell us more about heart failure than a typed list of complaints. This is the “clinical gaze.” It is a sensory experience. It relies on touch, sight, smell, and years of pattern recognition.

Recent sociological work suggests that while AI acts as a powerful “second pair of eyes,” it fundamentally lacks this embodied professional vision. [2]

Current AI tools are blind and deaf to this. They only know what the patient types. They miss the context. A patient might type “chest pain,” but the AI cannot see the hand clutching the chest or the sweat on the forehead. Until our phones have medical-grade cameras and sensors to capture these nuances, the physical presence of a doctor is the only safety net.

Who is Right?

Patients used to come to us with questions. Now, they come with answers.

They have already chatted with a sophisticated bot. They have uploaded their reports. They have a differential diagnosis ready. This creates a strange new pressure. A busy doctor, seeing fifty patients a day, might feel tempted to just agree. It saves time. If the AI says it is a migraine, and it looks like a migraine, why argue?

This is dangerous.

We risk becoming rubber stamps for algorithms. We risk losing our sharp edge. The doctor’s job is not to agree with the computer. The doctor’s job is to challenge it. We must look for the minority of cases, perhaps one in ten, where the computer is wrong.

Research from Stanford shows that while AI can pass medical exams with flying colors, it does not necessarily improve the diagnostic accuracy of physicians in real-world scenarios. In fact, it might lead to “cognitive offloading,” where doctors stop thinking critically because the machine sounds so confident. [3]

A Practical Solution: The Three-Way Discussion

So how do we fix this? We cannot ban AI. We should invite it to the table.

We need to change our workflow. Before the consultation, while the nurse takes the vitals, we should ask a simple question: “Have you checked your symptoms with an AI? If yes, please show us the chat.”

We can go further. Imagine the consultation as a multi-user discussion between the patient, the doctor, and the AI. We could put the phone on the desk and ask the AI questions together.

“List the side effects of this drug.”

“What are the alternative treatments?”

If the AI gives a wrong answer, the doctor can correct it right there. “See, the AI is suggesting this test, but it is not necessary for you because…” This builds trust. It turns a confrontation into a collaboration. It shows the patient that we are not afraid of technology. We are masters of it.

This collaborative approach reinforces the core values of the doctor-patient relationship: trust, honesty, and shared decision-making. [4]

The Burden of Decision

Errors will happen. Doctors make mistakes. AI makes mistakes. But there is a huge difference in accountability.

When human doctors create a treatment plan, we take responsibility. If things go wrong, we are there. We answer the phone calls. We change the medication. We stabilize the patient. An algorithm cannot do that.

Consider this scenario. A family is agonizing over whether to consent to high-risk surgery for their elderly father. Torn by indecision, they upload his medical history to an AI health chatbot. The AI processes the data: his age, his heart condition, the statistical mortality rates. It suggests that “avoiding suffering” and “palliative care” are the most logical paths. The family interprets this as a medical verdict and declines the surgery.

Was this the right decision? Perhaps. But what the AI missed was the conversation the father had that morning with the doctor: he was willing to take any risk, however slim the odds, just to survive long enough to attend the wedding of his first granddaughter next month.

The AI saw the data, but it did not see the desire. It simply predicted the next statistically likely word. It gave a mathematical answer to a moral question. This highlights the severe ethical challenges of using AI in palliative care, where decisions are rarely binary. [5]

This is the trap. AI can simulate empathy, but it cannot understand the weight of life and death.

The Storm Before the Calm

Do you remember when the Apple Watch started detecting heart rhythms?

Suddenly, cardiology clinics were flooded. Thousands of healthy young people came to hospitals, terrified because their watches had told them they had atrial fibrillation. It was a massive headache. We saw a spike in false positives. We ran unnecessary tests.

The Apple Heart Study showed us that while the technology was promising, it brought a flood of worried but healthy “patients” that the healthcare system had to manage. [6]
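The arithmetic behind that flood is worth a moment. As a rough illustration (the numbers here are assumptions for a young, low-risk population, not figures from the study), suppose atrial fibrillation is present in 0.5% of wearers, and the watch is 98% sensitive and 98% specific. Bayes’ theorem gives the probability that a flagged wearer actually has the condition:

$$\mathrm{PPV} = \frac{0.98 \times 0.005}{0.98 \times 0.005 + 0.02 \times 0.995} \approx 0.20$$

Under these assumed numbers, roughly four out of five alerts are false alarms. Even a near-perfect detector mostly flags healthy people when the condition it screens for is rare in the screened population.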

But then, things settled. The doctors learned to interpret the data. The software improved. The panic subsided.

We are about to see this happen again, but on a much larger scale. You don’t need an expensive smartwatch anymore. You just need a cheap smartphone. Millions of people will soon have AI diagnosing their skin rashes, their coughs, and their blood reports.

It will be noisy at first. We will see over-diagnosis. We will see panic. But eventually, it will stabilize. The systems will learn. The AI will become better than us at spotting rare patterns.

Preparing the Next Generation

We need to change how we teach medicine. Our medical colleges are still testing memory. That is useless now. A first-year student with a smartphone can recall the Krebs cycle faster than a professor.

We need to teach “AI Skepticism.” Future doctors must know where these models can fail. They need to know when to trust the screen and when to trust their gut. They need to learn how to handle a patient who trusts the algorithm more than the prescription.

Recent editorials in major medical journals warn that overreliance on AI risks eroding critical thinking skills in young doctors. If they rely on the machine for every answer, they will never develop the “mental muscle” needed for complex cases. [7]

Conclusion

We are at a turning point. We can either fight this wave and drown, or we can learn to surf. The future of medicine is not human versus machine. It is the human and the machine, working together to care for the patient.

The medical profession is at a critical juncture where we must decide whether we will be mere technicians of the algorithm or the true architects of care. [8]

It is time to pull up that third chair and start the conversation. Shall we?

•••


Shashikiran Umakanth

Dr. Shashikiran Umakanth (MBBS, MD, FRCP Edin.) is the Professor & Head of Internal Medicine at Dr. TMA Pai Hospital, Udupi, under the Manipal Academy of Higher Education (MAHE). While he has contributed to nearly 100 scientific publications in the academic world, he writes on MEDiscuss out of a passion to simplify complex medical science for public awareness.

References

  1. Li J, et al. Artificial intelligence in healthcare: rethinking doctor-patient relationship in megacities. Frontiers in Health Services. 2025. Link. Date accessed: 27 Jan.
  2. Artificial Intelligence and the Clinical Gaze: Visual Practices of AI-Assisted Colonoscopy. PMC. 2026. Link. Date accessed: 27 Jan.
  3. Influence of a Large Language Model on Diagnostic Reasoning: A Randomized Clinical Vignette Study. Stanford HAI. 2024. Link. Date accessed: 27 Jan.
  4. Doctor-Patient Relationship: Evidence Based Medicine. Mediscuss.org. Link. Date accessed: 27 Jan.
  5. Karatzanou N. Artificial Intelligence (AI) in Palliative Care: Ethical Challenges. Bioethica. 2025. Link. Date accessed: 27 Jan.
  6. Perez MV, et al. Large-Scale Assessment of a Smartwatch to Identify Atrial Fibrillation. New England Journal of Medicine. 2019. Link.
  7. Overreliance on AI risks eroding new and future doctors’ critical thinking while reinforcing existing bias. BMJ Evidence Based Medicine. 2025. Link. Date accessed: 27 Jan.
  8. Kumar VS. The Medical Profession at an Inflection Point. Vikkypaedia. Link. Date accessed: 26 Jan.