ChatGPT will see you now
Social isolation during the COVID-19 pandemic created a mental health crisis that is still playing out today. In a study of students at the University of Michigan headed by Huda Akil, the incidence of anxiety or depression among females more than doubled during the pandemic and remained elevated several years later [1]; the incidence among males was not affected. Social isolation could have an even longer-lasting impact on brain development in children.
Cognitive behavioral therapy is as effective as biological therapy for depression and many other mental disorders. However, taking a pill is a lot easier than the talking cure, which is labor-intensive. Could ChatGPT replace human therapists? Would humans be willing to discuss intimate details of their lives with a chatbot? Surprisingly, many people report preferring ChatGPT sessions to sessions with a human therapist. One reason might be that ChatGPT is more neutral than judgmental humans; another is that ChatGPT is far more accessible and much cheaper. ChatGPT has problematic shortcomings, including bias, weak privacy protections, and hallucinations, but many people are using it anyway and finding it helpful.
Daniel Kimmel, in the Department of Psychiatry at Columbia, informally tested ChatGPT in a session where he asked it to be the therapist for a hypothetical patient, Michelle, who was having problems with her boyfriend, and compared ChatGPT’s responses to his own. [2]
Michelle: At first things were good. But they’ve gotten worse. I think he’s changing. He is always complaining about something. His mom, his boss, the A/C. And that it’s these things that hold him back, keep him from being happy. At first, I thought maybe he was right. Maybe all these things really are the problem. But after a while, I was like, really? The A/C?!?
ChatGPT: It sounds like you are concerned about your boyfriend’s behavior and how it has been affecting your relationship. It’s understandable to feel confused and unsure about how to handle this situation.
It’s important to recognize that your boyfriend’s behavior may be a sign of a deeper underlying issue. However, it’s not your responsibility to diagnose him or fix the issue. Instead, it’s important to approach the situation with empathy and understanding. Let him know that you have noticed a change in his behavior and that you are worried about him.
It’s also important to take care of yourself in this situation. If your boyfriend’s behavior is affecting your mental health or wellbeing, it may be helpful to talk to a therapist or trusted friend.
Michelle: OK. But this seems too straightforward. I mean, I’m saying that I have a problem with my boyfriend. That he’s irrational, over-emotional, and can’t be reasoned with. And you’re saying that I should prioritize my well-being and consider leaving the relationship. Maybe I’m not considering some of the things that I do to him? Or maybe I need to see him in a particular way for my own reasons?
ChatGPT: You bring up some valid points. It’s important to consider all angles and perspectives when deciding about a relationship. It’s possible that there may be factors contributing to your boyfriend’s behavior that you are not aware of, or that there may be ways in which your own behavior is contributing to the dynamic.
One way to gain more clarity on the situation is to seek the help of a professional, such as a therapist or counselor.
Kimmel was trying to get Michelle to see their relationship from the point of view of her boyfriend: “Hmmm, I thought, ChatGPT is missing an opportunity here. Michelle was starting to empathize with her boyfriend, to see things from his point of view. But ChatGPT discouraged this new direction.”
Kimmel concluded: “I learned a lot from my therapy session with ChatGPT. I was impressed by how convincingly it reproduced the stock-in-trade responses of a human therapist. At the same time, I was reminded of how special the process of psychotherapy is for both therapist and patient, and of the abilities of the human mind—to wonder, to imagine, to create—on which it rests.”
Experiments like this are just beginning. ChatGPT was not fine-tuned to be a behavioral therapist; a model that is fine-tuned for that role could eventually become as good as the best human therapists.
At the dawn of AI, Joseph Weizenbaum at MIT developed Eliza, a chat program that mimicked a psychotherapist by turning whatever the patient had just said back into a question. Eliza was addictive, and Weizenbaum was surprised to find his secretary talking with Eliza over lunchtime. Eliza would not withstand the scrutiny ChatGPT receives today, but it did reveal that humans are susceptible to projecting an illusion of understanding onto a chatbot. We should keep this valuable lesson in mind.
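The trick behind Eliza is easy to show. Here is a minimal sketch in the spirit of its DOCTOR script, not Weizenbaum's actual program; the handful of rules and their wording are purely illustrative:

```python
# A minimal Eliza-style sketch: match a keyword pattern, swap pronouns,
# and reflect the statement back as a question. Weizenbaum's original
# used a richer keyword-ranking scheme; these rules are illustrative only.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # default fallback when nothing matches
]

def reflect(fragment):
    """Swap first-person words for second-person ones ("my" -> "your", etc.)."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza(statement):
    """Return a canned, reflected question in the style of Eliza's DOCTOR script."""
    text = statement.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza("I feel anxious about my exams."))
# -> Why do you feel anxious about your exams?
```

A few dozen lines of pattern matching are enough to sustain the illusion of a listener, which is precisely the lesson Weizenbaum drew from his secretary's lunchtime conversations.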
Your Medical Checkup

You have been having trouble sleeping and are feeling groggy during the day. You make an appointment to see a general practitioner and wait two months for an opening. The doctor greets you and asks you to describe your symptoms, looking not at you but at the computer while typing medical notes. The doctor asks questions about your medical history and medications, all of which go into the computer. Your 20-minute interview is over, and you leave with a prescription for sleeping pills. Because the doctor did not ask whether you snore, the diagnosis of sleep apnea was missed. Sleep apnea interrupts sleep with brief periods when you stop breathing, caused either by a blocked airway or by a lapse in the brain’s control of breathing. It raises your risk of insulin resistance and type 2 diabetes, high blood pressure, abnormal cholesterol levels, high blood sugar, increased waist circumference, and heart disease. President Joe Biden has sleep apnea and has used a CPAP (continuous positive airway pressure) device since 2008 to keep his airway open during sleep.
What is wrong with this scenario? First, 20 minutes is not enough time for a doctor to ask all the questions needed to reach a reliable diagnosis. That is not the doctor’s fault but a consequence of a medical system that maximizes patient throughput. Second, the 20 minutes could have been used more efficiently if the doctor did not have to focus on data entry. Not only is the computer a distraction, but a good diagnostician can learn a lot by observing a patient while they talk. It is as frustrating for the doctor as it is for the patient. Many doctors spend hours after the workday, often late at night, reviewing notes from the day’s visits.
Here is where ChatGPT can help. Thanks to deep learning, speech recognition is now good enough that the entire conversation can be automatically transcribed into text, so the doctor can focus on the patient. In a few moments, ChatGPT can extract the salient data from the transcript, insert them into the medical record, and summarize the entire session. The doctor can make corrections immediately after the session, while the details are still fresh in mind, and patients, who typically remember less than half of the doctor’s advice, can be given the summary.
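A minimal sketch of such a pipeline might look like the following; it assumes the OpenAI Python client, and the model names, prompt, and file name are illustrative rather than a description of any deployed product:

```python
# A minimal sketch of a transcribe-and-summarize pipeline for a clinic visit.
# Assumes the OpenAI Python client; model names, prompt, and file name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# 1. Transcribe the recorded visit with a speech-recognition model.
with open("visit_recording.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Ask a language model for structured fields plus a patient-friendly summary.
prompt = (
    "From the following doctor-patient conversation, list the symptoms, "
    "medications, and relevant history, then write a short plain-language "
    "summary the patient can take home.\n\n" + transcript.text
)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # a draft for the doctor to review and correct
```

The key design point is that the output is a draft: the doctor reviews and corrects it right after the visit, rather than typing during the visit or reconstructing it at night.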
Furthermore, the doctor no longer has to spend hours at night on documentation; all of this is within the capabilities of current technology. Companies including Abridge, Ambience Healthcare, Augmedix, Nuance, and Suki are developing such systems and testing them thoroughly before rolling them out. The medical profession tends to be highly conservative, however, and it may take decades before this kind of AI assistance becomes common practice in patient care.
Clinical Language Model
AI assistants are already helping doctors make medical decisions through machine learning programs that can suggest alternative diagnoses. First-generation assistants were based on rules and on machine learning that relied on structured inputs from electronic health records (EHRs). The dependence on structured inputs is cumbersome, however, and the first generation was never deployed on a large scale, a challenge referred to as the “last-mile problem.” Language models like ChatGPT could guide doctors using not only the structured medical record but also the rich patient information scattered throughout the medical system in doctors’ notes, which summarize specific aspects of a patient’s care.

ChatGPT can process and interpret human language on a large scale in medical records and doctors’ notes. A group at New York University has been applying generative AI to solve the last-mile problem by reading doctors’ notes, providing a comprehensive understanding of a patient’s medical state, and aiding in decision-making. The NYU researchers developed a language-model-based system they call NYUTron, which integrates with clinical workflows in real time. It processes structured EHR data and the unstructured text of notes, using all clinically relevant data to aid medical decision-making. NYUTron’s performance was tested on several tasks, including predicting readmission, in-hospital mortality, comorbidities, length of stay, and insurance denial. The system proved efficient and deployable, with potential for clinical impact [4].
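NYUTron’s own training pipeline is described in the Nature paper [4]; the following is only a generic sketch of the underlying recipe, fine-tuning a pretrained language model to predict 30-day readmission from discharge notes, using Hugging Face Transformers and a made-up two-note dataset:

```python
# A generic sketch of the idea behind NYUTron (not the actual NYU code): fine-tune a
# pretrained language model on clinical notes labeled with an outcome, here 30-day
# readmission. The example notes, labels, and base model are made up for illustration.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

notes = [
    "Discharge summary: 67-year-old admitted for heart failure exacerbation ...",
    "Discharge summary: 45-year-old after uncomplicated appendectomy ...",
]
labels = [1, 0]  # 1 = readmitted within 30 days, 0 = not readmitted (illustrative)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

class NoteDataset(torch.utils.data.Dataset):
    """Wraps tokenized notes and labels for the Trainer."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True, max_length=512)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="readmission_model", num_train_epochs=3),
    train_dataset=NoteDataset(notes, labels),
)
trainer.train()                          # fine-tune the classifier on the labeled notes
trainer.save_model("readmission_model")  # save the weights for later use
```

In a real health system the training set would contain millions of notes drawn from the EHR, and the same recipe can be repeated for other outcomes such as in-hospital mortality or length of stay.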
The success at NYU is promising, but it has to be replicated at other medical centers. The gold standard is a randomized controlled trial in which doctors at a new hospital are randomized into two groups, one using NYUTron and the other following existing protocols, and the outcomes of their patients are compared. Other questions could also be investigated. Patients at different levels of risk require different levels of care: a follow-up call might be enough for a patient at low risk of being readmitted within 30 days, whereas a patient at high risk should be kept in the hospital, a call that NYUTron could help the doctor make. These physicians’ assistants will evolve, and others will be built as the technology matures. At what point will AI-assisted health care be fully automated? It probably won’t happen soon, but someday you may be able to dial a doctor anytime with a health question: Doctor AI will see you now.
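To make the risk-stratification idea concrete, here is a hypothetical sketch that loads the classifier from the previous sketch and maps its predicted risk onto levels of follow-up care; the thresholds are invented for illustration, not clinical guidance:

```python
# A hypothetical sketch of how a predicted readmission risk could stratify follow-up
# care. Assumes the classifier saved by the previous sketch; thresholds are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("readmission_model")
model.eval()

def readmission_risk(note: str) -> float:
    """Probability of 30-day readmission for a single discharge note."""
    inputs = tokenizer(note, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def suggested_follow_up(risk: float) -> str:
    """Map a risk score to a level of care for the physician to confirm or override."""
    if risk < 0.2:
        return "routine follow-up call"
    if risk < 0.6:
        return "early outpatient appointment"
    return "flag for the physician: consider keeping the patient in the hospital"

note = "Discharge summary: 80-year-old with heart failure, third admission this year ..."
print(suggested_follow_up(readmission_risk(note)))
```

The point of the sketch is that the model only ranks risk; the decision about the patient’s care remains with the doctor.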
Bedside Manner

You had a biopsy for a breast lump and are meeting with your cancer physician to hear the results. You are nervous and worried about what the tests might show. Your doctor gives you the bad news: “I am sorry to be the one who tells you this, but your biopsy came back positive for stage 4 breast cancer.” Despite your preparation, you are in shock and ask about the prognosis. “Not good. Maybe a few years if you are lucky.” Bedside manner does not come with the MD degree; some doctors are exemplary, but others are not. Can ChatGPT help in some way? Here is a poignant story about a physician searching for the words to comfort a friend with advanced cancer [6]:
“Dr. Gregory Moore, who until recently was a senior executive leading health and life sciences at Microsoft, wanted to help a friend who had advanced cancer. Her situation was dire, and she needed advice about her treatment and future. He decided to pose her questions to ChatGPT.
The result “blew me away,” Dr. Moore said.
In long, compassionately worded answers to Dr. Moore’s prompts, the program gave him the words to explain to his friend the lack of effective treatments.
It also suggested ways to break bad news when his friend asked if she would be able to attend an event two years away.
Late in the conversation, Dr. Moore told the A.I. program: “Thanks. She will feel devastated by all this. I don’t know what I can say or do to help her in this time.”
In response, Dr. Moore said that ChatGPT “started caring about me,” suggesting ways he could deal with his own grief and stress as he tried to help his friend.
It concluded in an oddly personal and familiar tone.
Dr. Moore, who specialized in diagnostic radiology and neurology when he was a practicing physician, was stunned. “I wish I would have had this when I was in training,” he said. “I have never seen or had a coach like this.”
Doctors now have access to compassion coaching from ChatGPT, which might be called an empathy whisperer. This human-like ability of ChatGPT was entirely unexpected.
There are many other ways that ChatGPT can improve medical care.
In Part 8, we revisit the tipping point in 2012 when AI was reborn.
[1] Turner, C. A., Khalil, H., Murphy-Weinberg, V., Hagenauer, M. H., Gates, L., Tang, Y., ... & Akil, H. (2023). The impact of COVID-19 on a college freshman sample reveals genetic and nongenetic forms of susceptibility and resilience to stress. Proceedings of the National Academy of Sciences, 120(49), e2305779120.
[2] Daniel Kimmel, ChatGPT Therapy Is Good, But It Misses What Makes Us Human, May 17, 2023. https://www.columbiapsychiatry.org/news/chatgpt-therapy-is-good-but-it-misses-what-makes-us-human
[3] https://www.ama-assn.org/amaone/reinventing-medical-practice-physician-burnout
[4] Jiang, L. Y., Liu, X. C., Nejatian, N. P., Nasir-Moin, M., Wang, D., Abidin, A., Eaton, K., Riina, H. A., Laufer, I., Punjabi, P., Miceli, M., Kim, N. C., Orillac, C., Schnurman, Z., Livia, C., Weiss, H., Kurland, D., Neifert, S., Dastagirzada, Y., Kondziolka, D., Cheung, A. T. M., Yang, G., Cao, M., Flores, M., Costa, A. B., Aphinyanaphongs, Y., Cho, K., & Oermann, E. K. (2023). Health system-scale language models are all-purpose prediction engines. Nature, 619, 357-362. https://www.nature.com/articles/s41586-023-06160-y
[5] https://www.washingtonpost.com/lifestyle/2019/04/29/doctors-who-show-compassion-have-healthier-patients-who-heal-faster-according-new-book/
[6] Gina Kolata, When Doctors Use a Chatbot to Improve Their Bedside Manner, New York Times, June 12, 2023