
Published 18:55 IST, December 25th 2024

Key Recommendations To Reduce Bias Risk In AI Health Tech: International Experts


AI-based health tech | Image: Pinterest

Internationally agreed-upon recommendations to mitigate risks from AI-based health technology could yield positive results, according to researchers.

The latest medical advancements rooted in AI technology can be biased, with studies showing patterns that indicate the technology works well for specific groups of people but not for everyone.

The recommendations, published in the journals The Lancet Digital Health and New England Journal of Medicine AI, are aimed at improving how the datasets used to build AI health technologies are assembled and documented, in order to reduce the risk of potential AI bias.

"Data is like a mirror, providing a reflection of reality. And when distorted, data can magnify societal biases. But trying to fix the data to fix the problem is like wiping the mirror to remove a stain on your shirt," lead author Xiaoxuan Liu, an associate professor of AI and Digital Health Technologies at the University of Birmingham, UK, said.

"To create lasting change in health equity, we must focus on fixing the source, not just the reflection," Liu said.

AI health tech. Image credit: Pinterest

The recommendations for minimizing the risks of AI health tech include preparing summaries of datasets and presenting them in plain language, according to the researchers behind the international initiative 'STANDING Together (STANdards for data Diversity, INclusivity and Generalisability)', which involves more than 350 experts from 58 countries.

Known or expected sources of bias, error, or other factors that affect the dataset should also be identified, the authors said.

Further, the performance of an AI health technology should be evaluated in, and compared between, contextually relevant groups of interest, as well as in the overall study population.
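As a rough illustration of what such a subgroup comparison can look like in practice (this sketch is not taken from the STANDING Together recommendations themselves, and the group column, labels and accuracy metric are assumed purely for illustration):

```python
# Illustrative sketch only: compare a model's performance per subgroup with the
# overall figure, as a simple way to surface performance gaps between groups.
# The column names ("age_band", "y_true", "y_pred") are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score


def subgroup_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    rows = []
    for group, part in df.groupby(group_col):
        rows.append({
            "group": group,
            "n": len(part),
            "accuracy": accuracy_score(part["y_true"], part["y_pred"]),
        })
    # Overall performance, for comparison against each subgroup.
    rows.append({
        "group": "overall",
        "n": len(df),
        "accuracy": accuracy_score(df["y_true"], df["y_pred"]),
    })
    return pd.DataFrame(rows)


# Toy example with true labels and model predictions for three age bands.
df = pd.DataFrame({
    "age_band": ["18-40", "18-40", "41-65", "41-65", "65+", "65+"],
    "y_true":   [1, 0, 1, 1, 0, 1],
    "y_pred":   [1, 0, 1, 0, 0, 0],
})
print(subgroup_report(df, "age_band"))
```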

Uncertainties identified in an AI technology's performance should be managed through mitigation plans, with the clinical implications of these findings clearly stated and strategies to monitor, manage and reduce the risks documented as the technology is implemented, the authors said.

"We hope to raise awareness that no dataset is free of limitations, so transparent communication of data limitations should be perceived as valuable, and absence of this information as a limitation," they said.

"We hope that adoption of the STANDING Together recommendations by stakeholders across the AI health technology lifecycle will enable everyone in society to benefit from technologies that are safe and effective," they said.

Updated 18:56 IST, December 25th 2024