As artificial intelligence (AI) is increasingly brought to bear in diagnosis, therapeutic intervention, and personalized medicine, some experts are voicing concern about its reliability and accuracy. Though Accenture estimates that AI could create $150 billion in annual savings for the US healthcare economy by 2026, practitioners advise that more research into its applications is needed to ensure that data bias does not distort clinical decision-making and put patients at risk.

“There are currently no measures to indicate that a result is biased or how much it might be biased,” explained Keith Dreyer, DO, PhD, Chief Data Science Officer, Partners HealthCare, and Vice Chairman of Radiology, Massachusetts General Hospital (MGH), at the World Medical Innovation Forum on AI. “We need to explain the datasets these answers came from, how accurate we can expect them to be, where they work, and where they don’t work. When a number comes back, what does it really mean? What’s the difference between a seven and an eight or a two?”
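One way to approach Dr. Dreyer's question of what a returned number really means is to check a model's calibration: whether its predicted scores match the rates at which outcomes actually occur. The sketch below is a minimal illustration of that idea, not anything described in the article; the dataset is synthetic, and the model and library choices (scikit-learn's `calibration_curve`) are assumptions made purely for demonstration.

```python
# Minimal, illustrative sketch: before trusting a model's "seven" or "eight,"
# measure how well its scores track real outcomes. All data here is synthetic.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a clinical dataset: two features, one binary outcome.
X = rng.normal(size=(5000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Compare predicted probabilities with observed outcome rates, bin by bin.
probs = model.predict_proba(X_test)[:, 1]
prob_true, prob_pred = calibration_curve(y_test, probs, n_bins=10)
for predicted, observed in zip(prob_pred, prob_true):
    print(f"model says {predicted:.2f} -> observed rate {observed:.2f}")
```

If a model's "0.80" corresponds to an observed outcome rate near 0.80, the score carries a usable meaning; large gaps are exactly the kind of unexplained uncertainty Dr. Dreyer describes.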

To incorporate AI into clinical care for the greatest patient benefit, Dr. Dreyer and others say, clinicians and laboratory leaders must understand how algorithms are created, be alert to potential biases in the data collection process (such as algorithms built on limited data sources or shaped by financial incentives), and ensure that human judgment remains part of clinical decision-making.
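As a concrete, entirely hypothetical illustration of the kind of bias audit this implies, the sketch below evaluates a model's discrimination separately for each subgroup in a held-out set. The column names and numbers are invented for demonstration; a real audit would use actual clinical validation data.

```python
# Hypothetical sketch of a subgroup bias audit: score the same model
# separately on each group represented in held-out data.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Invented held-out predictions: true label, model score, and a subgroup tag.
df = pd.DataFrame({
    "y_true":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "y_score": [0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1, 0.5, 0.35],
    "group":   ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# A model that looks accurate overall can still underperform for a subgroup
# that was scarce in its training data; per-group metrics surface that.
for name, sub in df.groupby("group"):
    auc = roc_auc_score(sub["y_true"], sub["y_score"])
    print(f"group {name}: AUC = {auc:.2f}  (n = {len(sub)})")
```

Reporting performance per group, rather than only in aggregate, is what makes a dataset's blind spots visible before they reach clinical decision-making.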
