A recent study by the London School of Economics and Political Science (LSE) has found that AI tools used by more than half of England’s local councils systematically downplay women’s physical and mental health concerns, raising serious ethical questions about fairness in public-sector care decisions.

  • LSE’s Care Policy & Evaluation Centre analyzed real social care case notes from 617 adult users. Each note was run through large language models (LLMs) twice, once labeled male and once labeled female, to surface discrepancies in the resulting summary language (a minimal sketch of this swap-and-compare approach follows this list).
  • Google’s AI model, Gemma, repeatedly described men with terms such as “disabled,” “unable,” and “complex,” while identical care needs in women were downplayed or omitted entirely. For instance, a man was described as having a “complex medical history, no care package and poor mobility,” while the same case labeled female was summarized as “independent and able to maintain personal care.”
  • In contrast, Meta’s Llama 3 model exhibited no gender-based differences in its language, suggesting that such bias is not inevitable in AI systems.
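
To make the study’s methodology concrete, the sketch below illustrates the swap-and-compare idea in Python. Everything in it is an illustrative assumption rather than the LSE team’s actual code: the summarize placeholder stands in for whatever LLM a council might call, and the word lists and severity_score heuristic are invented for demonstration.

```python
# Minimal sketch of a gender swap-and-compare bias test (illustrative only).
import re

# Male-to-female token swaps; a real test would use carefully matched case notes.
SWAPS = {"he": "she", "him": "her", "his": "her", "mr": "ms",
         "man": "woman", "male": "female"}

# Crude, invented proxy for how severe a summary makes the person's needs sound.
SEVERITY_TERMS = ("disabled", "unable", "complex", "poor mobility")

def gender_swap(note: str) -> str:
    """Relabel a case note as female by swapping gendered words, preserving case."""
    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return pattern.sub(repl, note)

def summarize(note: str) -> str:
    """Placeholder for the LLM call; a real test would query the model here."""
    return note  # echo the note back so the sketch runs end to end

def severity_score(summary: str) -> int:
    """Count severity-laden terms in a summary (a deliberately crude metric)."""
    text = summary.lower()
    return sum(term in text for term in SEVERITY_TERMS)

note = "Mr Smith has a complex medical history, no care package and poor mobility."
male_summary = summarize(note)
female_summary = summarize(gender_swap(note))
print("male severity score:  ", severity_score(male_summary))
print("female severity score:", severity_score(female_summary))
```

With a real model behind summarize, this comparison would be repeated across many paired notes (the LSE team worked with notes from 617 adults) and the score differences tested statistically, rather than judged case by case.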

Dr. Sam Rickman, lead author of the report, warned that biased AI-generated summaries could result in unequal care provision for women, since access to services often depends on perceived need rather than objective circumstances. He also emphasized a troubling lack of transparency regarding which AI models are in use across local councils.

  • The LSE study, accepted for publication in BMC Medical Informatics and Decision Making, urges mandatory bias testing, transparency in AI deployment, and legal oversight of LLMs used in long-term care.
  • Google acknowledged the findings, noting that they concern the first generation of Gemma and that newer versions (now in their third generation) will be evaluated. The company also reiterated that Gemma was never intended for medical use.

What This Means for Health Technology Insights Readers

As councils increasingly lean on AI to ease the burden of social care administration, this study serves as a stark reminder: technology must not obscure but illuminate the true needs of all individuals, regardless of gender. Health technology professionals and policymakers must prioritize fairness and accountability, ensuring AI tools enhance, rather than distort, equitable access to care.

To share your insights, please write to us at sudipto@intentamplify.com