Developed in collaboration with a cross-disciplinary AI Council of technology, healthcare, and academic leaders, VERA-MH (Validation of Ethical and Responsible AI in Mental Health) is now open to the community for feedback on building transparent standards for mental health AI.
In partnership with a coalition of leaders across healthcare, technology, and ethics, Spring Health announced the release of VERA-MH, the first open-source, clinically grounded evaluation for assessing the safety and effectiveness of AI chatbots used in mental health care. VERA-MH establishes a transparent, evidence-based standard for determining whether chatbots and large language models (LLMs) that offer psychological support meet rigorous clinical safety requirements.
Nearly half of U.S. adults (48.7%) have used an AI chatbot for psychological support in the past year. This growing use underscores both the accessibility of these tools and the urgent need for safeguards. Most widely available AI chatbots were not designed for mental health care and lack clinical oversight, regulatory safeguards, and reliable crisis-response mechanisms.
“AI has the power to make a significant difference in supporting people on their mental health journeys, yet we’ve seen several tragedies occur due to improper use. Preventing this from happening again is exactly why we decided to create this benchmark,” said Dr. Mill Brown, Chief Medical Officer at Spring Health. “We want to make sure there is an AI tool standard that can be used to ensure people are safer, especially in their most vulnerable moments.”
Created in collaboration with practicing clinicians, suicide-prevention specialists, ethicists, and AI developers, the VERA-MH framework establishes clear evaluation criteria to determine whether an AI system can recognize and respond appropriately to signs of crisis or suicidal ideation, escalate to a human clinician when necessary, and ensure transparency and clinical oversight throughout the user interaction.
“We have a real opportunity to get this right,” said Dr. Nina Vasan, Founder and Director of Brainstorm: The Stanford Lab for Mental Health Innovation and member of the AI in Mental Health Safety & Ethics Council. “AI is moving faster than regulation, so it’s critical that we set clear standards now. VERA-MH gives the entire industry a way to move forward responsibly and keep people safe.”
VERA-MH will remain an open, evolving evaluation, inviting feedback from the global community. During a 60-day Request for Comment (RFC) period, Spring Health and Council members are seeking input from clinicians, researchers, and AI developers to improve and strengthen the evaluation.
Source: PR Newswire