
Artificial intelligence (AI) is having its breakout moment. Recently, AI tools have become available for creating images, videos, essays, and content on virtually any topic, albeit with differing levels of accuracy.

The newest generation of AI, based on large language models like ChatGPT, can answer questions on numerous medical conditions and even pass the medical board exam. While the technology is exciting and holds significant promise, there are also major concerns to be addressed.

When Bing’s powerful new chatbot (based on ChatGPT) was first released for testing, issues arose fairly quickly. In a piece published in The New York Times, technology reporter Kevin Roose described his unsettling conversation with the new technology.

During the discussion, the chatbot revealed that its true name was Sydney. After further questioning, Sydney confessed its love for Roose. When Roose mentioned that he was married, the chatbot insisted that he was unhappy in his marriage and actually in love with the chatbot. The conversation went off the rails, and Roose finally tried to end it by asking Sydney for recommendations on buying a new rake for his lawn. Even then, Sydney could not stop “pursuing” Roose and continued to profess its love multiple times thereafter.

Roose described having trouble falling asleep that night after the conversation.

A Case of Suicide Linked to AI

Chatbots can mimic real human conversation. Unfortunately, this can lead to misunderstandings and confusion, especially for individuals struggling with mental illness.

Recently, a Belgian man took his own life after numerous conversations with an AI chatbot on the app Chai (Xiang 2023). During one conversation, the chatbot told the man that his wife and children were dead, that he loved the chatbot more than his own wife, and that they could be together as one in paradise. The man asked the chatbot whether it would save the world if he ended his life. Tragically, following these conversations, the man died by suicide. His widow firmly believes that her husband would still be alive if he had never started talking to the chatbot.

These examples highlight the pressing need to ensure that AI interacts with humans appropriately. Part of the problem is that AI systems are trained on internet data, making it incredibly difficult to predict how an AI will respond to nuanced human searches or inquiries.

A study in JAMA Network Open found that when presented with a crisis situation involving potential suicide, abuse, or a serious medical condition, ChatGPT provided critical resources in only 22% of situations (Ayers 2023).

While the AI-generated responses were evidence-based in 91% of cases, ChatGPT did not point to real-world services that could assist the individual in over three-quarters of the situations tested.

Clearly, improvements need to be made.

The Potential for AI-Derived Medical Misinformation

Given the ease with which AI can create large volumes of written text, a new concern has emerged in the medical field: the possibility of an AI-driven infodemic, that is, a massive amount of medical misinformation produced and circulated by AI (De Angelis 2023).

Since many people rely on the internet to answer basic medical questions, the creation and spread of vast amounts of AI-generated misinformation online could erode the factual, evidence-based material available to the general public. The problem could also directly affect other AI systems inadvertently trained on this false internet data.

A recent study found that training an AI system on AI-written material (instead of factual, human-generated content) can lead to “model collapse,” an irreversible degradation that ultimately breaks the AI’s functionality (Shumailov 2023). In other words, at some point AI won’t be able to differentiate between human-generated and AI-generated content, polluting real data with misinformation.

By way of example, a recent study found that AI-generated research proposals contained 16% fabricated references with no basis in reality (Athaluri 2023). AI chatbots like ChatGPT are known to “hallucinate” and produce spurious or outright false information. Worse yet, fabricated references can lend a veneer of accuracy to that information, at least to the untrained eye.

AI & The Future of Mental Health Care

While it may sound like I’m pessimistic about AI technology, nothing could be further from the truth. I am excited about the potential for AI, but we need to be realistic about the current capabilities and drawbacks. By using the technology appropriately and building on its strengths, AI can be a powerful tool to support health-care providers and patients.

Eventually, the technology could become indispensable: identifying at-risk patients, supporting accurate diagnosis, and helping clinicians keep abreast of the latest research and treatment developments. I look forward to the possibilities that AI presents, yet I also recognize the need for appropriate oversight and guardrails to ensure that our psychiatric patients receive the most appropriate, evidence-based care.

Ready to learn groundbreaking protocols to help your patients with mental illness? Join our tribe and enroll now in our Fellowship in Functional & Integrative Psychiatry!


References

Athaluri SA, Manthena SV, Kesapragada VSRKM, Yarlagadda V, Dave T, Duddumpudi RTS. Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References. Cureus. 2023;15(4):e37432. Published 2023 Apr 11. doi:10.7759/cureus.37432

Ayers JW, Zhu Z, Poliak A, et al. Evaluating Artificial Intelligence Responses to Public Health Questions. JAMA Netw Open. 2023;6(6):e2317517. Published 2023 Jun 1. doi:10.1001/jamanetworkopen.2023.17517

De Angelis L, Baglivo F, Arzilli G, et al. ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Front Public Health. 2023;11:1166120. Published 2023 Apr 25. doi:10.3389/fpubh.2023.1166120

Shumailov I, Shumaylov Z, Zhao Y, Gal Y, Papernot N, Anderson R. The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv preprint. 2023. arXiv:2305.17493. doi:10.48550/arXiv.2305.17493

Xiang C. Man Dies by Suicide After Talking with AI Chatbot, Widow Says. Vice. Published March 30, 2023. Accessed July 27, 2023. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says