
Can AI succeed in healthcare?

May 4, 2023

A Meteoric Rise

Artificial Intelligence (AI) has been a hot topic in the healthcare technology community for years, but it’s only been months since the technology gained widespread awareness and use. In recent blogs, we’ve talked about the kinds of work that might be replaced by AI technologies like ChatGPT, and how the government is getting ready to take on oversight of AI’s use.

Despite the fact that the general public may not yet be comfortable with putting artificial intelligence in the driver’s seat (only 10% of Americans think AI is “very trustworthy”), the pace at which these technologies are improving remains rapid. ChatGPT failed the bar exam in January and then passed it in the 90th percentile in March; Midjourney can draw hands now!

Now, passing the bar is not practicing law, and accurate hands probably won’t put artists out of business, but the technology is clearly moving fast enough to at least appear as reliable as, or perhaps even more reliable than, a human counterpart.

[Image: Bing’s AI tool’s attempt at writing a thank-you note to our readers. Caption: “It was worth a shot.”]

Obviously, the breadth of the technology’s impact is wide, and if ChatGPT messes up trying to write a thank you note, it’s not the end of the world (I asked Bing’s AI tool to write a thank you note to our readers…it could use some work). If, however, I’m using artificial intelligence to diagnose and treat a patient, the technology is going to have to clear a very high bar because of the potential impact of a mistake. That isn’t to say an AI tool like ChatGPT couldn’t figure out a way to jump that hurdle quickly; as of last week, actual physicians were preferring ChatGPT’s responses to medical questions over those of their fellow doctors.

Early Adopters

In the haste to harness this technology to provide better services and outcomes for patients (and, assuredly, in a rush to see how it all impacts the bottom line), many healthcare organizations have implemented artificial intelligence programs only to realize that it’s more complicated than they anticipated. Just because using AI can be difficult, though, doesn’t mean it can’t be done.

Researchers at Duke University just completed a study that looked into AI implementations at their own healthcare organization and ten others. While the research uncovered challenges, the aim is to create practical advice so other organizations can avoid already-experienced pitfalls.

The issue of reliability, and the trust that comes with it, is going to be a big one for clinicians, but the common thread among the implementation barriers discussed is that AI disrupts workflows. Does a clinician have to type what a patient is saying, or turn on the functionality and double-check the transcription? Do they deliver both their opinion and that of the AI? Is it even necessary to add the time to review an AI suggestion when visits are already losing patient-focused time? Are clinicians going to be stoked to be responsible for mastering yet another new technology? You get the point.

Best Practices for AI

To implement AI properly, the researchers suggested eight steps. The most important, though, is to find the right use case. There are often areas of focus that don’t get the attention they deserve where AI can lend a hand. Brigham and Women’s Hospital, for example, is currently trying to use AI to identify information in a patient’s medical record that would indicate they need an imaging procedure. Changes like this, which are additive to both the patient and clinician experience, have a better shot at being fruitful.

The Duke researchers recommended that AI projects follow these steps:

  • identifying and prioritizing a problem

  • identifying how AI could potentially help

  • developing ways to assess an AI’s outcomes and successes

  • figuring out how to integrate it into existing workflows

  • validating the safety, efficacy, and equity of AI in the health care system before clinical use

  • rolling out the AI tool with communication, training, and trust building

  • monitoring

  • updating or decommissioning the tool as time goes on

Nothing Without the Data

AI for its own sake is unlikely to transform a healthcare organization in any way other than negatively, so it’s great to see some of these thoughtful observations shared. Productive projects, like the one mentioned at Brigham and Women’s, are only as successful as the data that underlies them. We could have told you that years ago.

That’s why our healthcare software, and the artificial intelligence software we use to enhance it, are the foundation of strong insights. Rather than completely disrupting the way you receive documents (which, in many cases, you can’t control), Extract offers additive software that gives you more data, more accurately, with an insanely more productive staff.

Rather than get in the way of HIM staff, Extract intercepts documents when they arrive, classifies them, identifies key data based on document type, matches your orders, fits your naming conventions, and still lets one of your staff members sign off before automatically sending the document where it needs to be.
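
To make that flow concrete, here is a minimal, purely illustrative sketch of a document-intake pipeline with the same stages (classify, identify key data, match orders, apply naming conventions, staff sign-off, route to the EMR). Every class and function name below is a hypothetical placeholder written for this blog, not Extract’s actual software or API.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Document:
    raw_text: str
    doc_type: Optional[str] = None
    fields: dict = field(default_factory=dict)
    matched_order: Optional[str] = None
    file_name: Optional[str] = None
    approved: bool = False

def classify(doc):
    # Stand-in for a trained document classifier.
    doc.doc_type = "lab_result" if "lab" in doc.raw_text.lower() else "other"
    return doc

def extract_key_data(doc):
    # Stand-in for extraction rules keyed off the document type.
    doc.fields = {"patient": "UNKNOWN", "date_of_service": "UNKNOWN"}
    return doc

def match_order(doc, open_orders):
    # Stand-in for matching the document to an open order.
    doc.matched_order = open_orders[0] if open_orders else None
    return doc

def apply_naming_convention(doc):
    # Build a file name that fits the organization's conventions.
    doc.file_name = f"{doc.doc_type}_{doc.matched_order or 'unmatched'}.pdf"
    return doc

def staff_sign_off(doc):
    # Stand-in for the interactive step where a staff member reviews and approves.
    doc.approved = True
    return doc

def send_to_emr(doc):
    # Only approved documents are routed onward.
    if doc.approved:
        print(f"Routing {doc.file_name} with fields {doc.fields} to the EMR")

doc = Document(raw_text="Lab report for patient ...")
doc = classify(doc)
doc = extract_key_data(doc)
doc = match_order(doc, open_orders=["ORD-001"])
doc = apply_naming_convention(doc)
doc = staff_sign_off(doc)
send_to_emr(doc)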

The result is fully discrete data in the EMR in the time it would normally take to leave clinicians with a PDF they have to read. Please reach out if you’d like to see how additive software can save you money while giving you more and better data.

Meet The Author
Chris Mack
Chris is a Marketing Manager at Extract with experience in product development, data analysis, and both traditional and digital marketing. Chris received his bachelor’s degree in English from Bucknell University and has an MBA from the University of Notre Dame. A passionate marketer, Chris strives to make complex ideas more accessible to those around him in a compelling way.