Identifying Broken Clinical Decision Support Alerts Using Natural Language Processing

Researchers at Partners HealthCare found a treasure trove of data in an unlikely spot – the free-text comment field of the Clinical Decision Support (CDS) override option – and used it to diagnose broken and unoptimized CDS alerts in the EMR.

Rules-based clinical decision support works with data in the electronic medical record to guide providers toward safer clinical decision-making. CMS reports that an effective application of CDS "increases quality of care, enhances health outcomes, helps to avoid errors and adverse events, improves efficiency, reduces costs, and boosts provider and patient satisfaction." However, an effective application of CDS isn't always guaranteed. CDS systems can malfunction, and the high volume of rules triggered throughout the day can cause alert fatigue. Amid a flood of sometimes-inaccurate alerts, important ones may be ignored.

In the study, published in the January 2019 issue of the Journal of the American Medical Informatics Association, Partners HealthCare researchers reviewed the free-text override comments for CDS rules in their EMR to track down rules that were broken or had room for improvement. They wanted to see whether free-text override comments could be used to identify broken CDS alerts.

First, the researchers manually reviewed the override comments and compared them with patient data in the chart to classify each rule as "Broken," "Not broken," or "Not broken, but could be improved."

Next, the researchers compared the "Broken" and "Not broken, but could be improved" rules to the "Not broken" rules. They evaluated three methods of ranking the rules to see whether any method filtered the broken rules to the top of the list for repair.

The first method ranked rules by the frequency of their override comments, on the hypothesis that rules generating more override comments were more likely to be broken. For the second method, the researchers compiled a list of words, phrases, and punctuation, referred to as the "Cranky Comments Heuristic," that they believed signaled frustration with the alert. Entries on this list include "stupid," "!" and "wrong." They searched the override comments for these terms and moved rules containing them to the top of the list, on the theory that those rules were more likely to be broken. The third method used the Naïve Bayes classifier in the Natural Language Toolkit (NLTK) package in Python. The researchers trained the classifier on a subset of comments that a researcher had sorted into two categories – "worthy of investigation" and "not worthy of investigation." Once trained, the classifier examined the entire dataset of rules and comments and flagged as most likely to be broken the rules whose comments it deemed worthy of investigation.
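The second and third methods can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual code: the rule IDs, comment strings, and training labels below are hypothetical, and the "cranky" keyword list is abbreviated to the three entries named in the article ("stupid," "!" and "wrong").

```python
from nltk.classify import NaiveBayesClassifier

# Hypothetical override comments per rule (rule IDs and text are invented).
comments = {
    "rule_A": ["this alert is wrong", "stupid popup, patient is not on warfarin!"],
    "rule_B": ["reviewed, dose is appropriate", "patient already tolerating med"],
}

# Cranky Comments Heuristic: rank rules by how many of their comments
# contain frustration markers (list cut down to the article's examples).
CRANKY = ["stupid", "!", "wrong"]

def cranky_score(texts):
    # Count the comments that contain at least one frustration marker.
    return sum(any(marker in t.lower() for marker in CRANKY) for t in texts)

ranked = sorted(comments, key=lambda r: cranky_score(comments[r]), reverse=True)
print(ranked)  # rules with cranky comments float to the top

# Naive Bayes: bag-of-words presence features over comments hand-labeled
# as "worthy" or "not worthy" of investigation (labels assumed here).
def features(text):
    return {word: True for word in text.lower().split()}

train = [
    ("this alert is wrong and stupid!", "worthy"),
    ("fired for a discontinued medication", "worthy"),
    ("reviewed, dose is appropriate", "not worthy"),
    ("patient already on this therapy", "not worthy"),
]
classifier = NaiveBayesClassifier.train([(features(t), lab) for t, lab in train])
print(classifier.classify(features("this rule is wrong again")))
```

In practice the classifier's per-comment predictions would be aggregated per rule, so that rules with many "worthy of investigation" comments rise to the top of the repair queue.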

The results were staggering. Of the 120 rules in the dataset, 62 were classified as "Broken," 13 as "Not broken, but could be improved," and the remaining 45 as "Not broken." In other words, 62.5% of the rules investigated were broken or could be improved, which accounts for 26.6% of the rules within the EMR. Sorting the override comments with the Naïve Bayes classifier proved the most effective at moving broken rules to the top of the list, making them easier to review and fix. Sorting by the Cranky Comments Heuristic also fared well and required less time and overhead than the NLP approach. Ranking by frequency of override comments proved worse than a random sort at identifying broken rules.

Partners HealthCare now reviews free-text override comments daily to identify and fix errors as quickly as possible. This study shows that data is everywhere, and a free-text field can hold useful organizational decision-making information. Here at Extract, we know the value of accessible data. Reach out to discuss how we can help turn your scans and faxes into actionable data in your EMR.


About the Author: Chloe McCabe

Chloe McCabe is the Quality Assurance and Technical Writer at Extract. She has over 7 years of experience in the Health IT world, focusing on quality assurance and user experience. Chloe is passionate about making software a joy to use.