Keeping Health Data and AI Unbiased

Artificial Intelligence, or AI, shows promise to transform healthcare and is quickly becoming commonplace, with applications in patient care, disease management, and decision support. How? Well, here at Extract we leverage AI and machine learning to help our clients improve patient outcomes by having algorithms read patient scans, labs, and other documents more accurately. In addition, we lean on natural language processing to identify which pieces of data are important to retrieve, so staff can search through this unstructured data within their EMR.

This technology can allow healthcare organizations to offer better outcomes and faster results at a lower cost. It must be noted, though, that several recent publications have discussed potential biases associated with machine learning algorithms. Many concern generalizability to disadvantaged populations, including patterns of missing data, small minority group sample sizes, and disparities in patient care caused by implicit biases. Why? Health systems may not fully understand the workings and performance of AI methods and models, and they should. At a time when the country is grappling with systemic bias within our societal institutions, we need technology to reduce health disparities, not exacerbate them.

Algorithms are only as solid as the data they are fed, and medical data has long lacked diversity. Take clinical trials, for example: women and minority groups have largely been underrepresented as participants. Why? Because historically they showed more side effects from the trial drugs. The NIH and the FDA have been working hard to correct this imbalance, and Congress acted on it in 1993. Most recently, during the development of one of the COVID-19 vaccines, Moderna slowed its coronavirus vaccine trial to ensure minorities were well represented, a definite step in the right direction.

Algorithms have been around for decades, and biases have been around substantially longer. In healthcare, for example, weight has often brought intentional and unintentional bias: healthcare providers sometimes oversimplify and assume that an overweight patient cannot adhere to lifestyle changes, such as altering their diet or exercising, in an attempt to control their weight. Providers may also wrongly attribute other health issues to the obesity instead of investigating other potential causes, thus delaying diagnosis and treatment and skewing the resulting data.

It has long been known that AI algorithms trained on data that doesn’t represent the whole population often perform worse for underrepresented groups. It’s imperative that your health system draw on data sets from several diverse populations so that the algorithms can be trained in an unbiased way.

Bias in AI is a complex issue; simply providing diverse training data does not guarantee the elimination of bias, but it helps tremendously. In addition, healthcare stakeholders should think about ways their systems can account for demographic gaps, and they should ask their vendor to test the algorithms before they are implemented.
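One concrete way to test an algorithm for demographic gaps is a subgroup audit: compare the model’s accuracy for each demographic group and flag large gaps before deployment. The sketch below is a minimal, hypothetical illustration with made-up predictions and group labels, not a depiction of Extract’s actual models or data.

```python
# Minimal sketch of a subgroup performance audit. All data below is
# hypothetical; in practice you would use held-out patient records
# with real demographic labels.

def subgroup_accuracy(y_true, y_pred, groups):
    """Return {group: accuracy} for each demographic group."""
    totals, correct = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    return {g: correct[g] / totals[g] for g in totals}

def max_accuracy_gap(scores):
    """Largest accuracy difference between any two groups."""
    values = list(scores.values())
    return max(values) - min(values)

# Hypothetical binary predictions for two patient groups, A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

scores = subgroup_accuracy(y_true, y_pred, groups)
gap = max_accuracy_gap(scores)
print(scores, gap)  # -> {'A': 0.75, 'B': 0.5} 0.25
```

If the gap exceeds whatever tolerance your organization sets, that is a signal to revisit the training data or the model before it touches patient care.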

With the use of AI rapidly expanding throughout healthcare, it is imperative that stakeholders take steps to address algorithmic bias now. If you are looking to implement any type of AI or machine learning platform, it is important that your vendor understands the models being used. Here at Extract, we believe AI algorithms should be not just powerful but also fair, and it’s our mission to ensure that our AI models create a more equitable solution for our healthcare clients. If you’re interested in learning more about what we do, reach out today.

Sources:

https://www.cnbc.com/2020/09/04/moderna-slows-coronavirus-vaccine-trial-t-to-ensure-minority-representation-ceo-says.html

https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai


About the Author: Taylor Genter

Taylor is the Marketing Specialist at Extract with experience in data analytics, graphic design, and both digital and social media marketing. She earned her Bachelor of Business Administration degree in Marketing at the University of Wisconsin-Whitewater. Taylor enjoys analyzing people’s behaviors and attitudes to find out what motivates them, and then curating better ways to communicate with them.