Google said it took model bias “extremely seriously” and was developing privacy techniques that can sanitize sensitive datasets, as well as safeguards against bias and discrimination.
Researchers have suggested that one way to reduce medical bias in AI is to identify which datasets should not be used for training in the first place, and then to train on diverse, more representative health datasets.
Zack said OpenEvidence, which is used by 400,000 doctors in the US to summarize patient histories and retrieve information, trained its models on medical journals, the US Food and Drug Administration’s labels, health guidelines and expert reviews. Every AI output is also backed by a citation to a source.
Earlier this year, researchers at University College London and King’s College London partnered with the UK’s NHS to build a generative AI model, called Foresight.
The model was trained on anonymized patient data from 57 million people, covering medical events such as hospital admissions and Covid-19 vaccinations. Foresight was designed to predict probable health outcomes, such as hospitalizations or heart attacks.
“Working with national-scale data allows us to represent the full kind of kaleidoscopic state of England in terms of demographics and diseases,” said Chris Tomlinson, honorary senior research fellow at UCL and lead researcher on the Foresight team. Tomlinson said that, while not perfect, it offered a better start than more general datasets.
European scientists have also trained an AI model called Delphi-2M that predicts susceptibility to diseases decades into the future, based on anonymized medical records from 400,000 participants in UK Biobank.
But with real patient data at this scale, privacy often becomes an issue. The NHS Foresight project was paused in June to allow the UK’s Information Commissioner’s Office to consider a data protection complaint, filed by the British Medical Association and the Royal College of General Practitioners, over the project’s use of sensitive health data in the model’s training.
In addition, experts have warned that AI systems often “hallucinate”—or make up answers—which could be particularly harmful in a medical context.
But MIT’s Ghassemi said AI was bringing huge benefits to healthcare. “My hope is that we will start to refocus models in health on addressing crucial health gaps, not adding an extra percent to task performance that the doctors are honestly pretty good at anyway.”
© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.