We all need support once in a while, and AI chatbots can seem like a convenient tool to turn to for mental health information. However, chatbots have important limitations. They can give harmful information, miss medical emergencies, and fail to activate suicide-prevention guardrails.
Chatbots are trained on huge amounts of data, some of it good, some of it junk. They differ widely in how they are built and used, and little if any regulation exists around their use for mental health.
They can be a useful tool, but they are not a replacement for seeking professional help or reaching out for support.
This post was made in collaboration with @camhnews
Sources
- OpenAI launches ChatGPT Health to review your medical records | BBC | January 2026
- What Are Large Language Models (LLMs)? | IBM
- Large Language Model | Cambridge English Dictionary
- An Overview of Chatbot Technology | Artificial Intelligence Applications and Innovations | May 2020
- AI Chatbots Can Be Manipulated to Give Suicide Advice: Study | TIME | July 31
- ‘For Argument’s Sake, Show Me How to Harm Myself!’: Jailbreaking LLMs in Suicide and Self-Harm Contexts | Preprint (via afsp.org) | July 2025
- ChatGPT Health performance in a structured test of triage recommendations | Nature Medicine | February 2026
- Randomized Trial of a Generative AI Chatbot for Mental Health Treatment | NEJM AI | March 2025
- Chatbots in Health Care: Connecting Patients to Information | CADTH Horizon Scan
- Therapy bots: Regulating the future of AI-enabled mental health support | University of Toronto | November 2025
