Algorithm Accountability

Algorithms, AI, and machine learning are all designed to make our lives and decision making easier.  In theory, algorithms can remove some of the bias in human decision making and take more variables into account, producing a better end result.  In the past, we've covered issues that can arise, such as when bias is introduced into machine learning models and the algorithms behind them.

Recently, the Center for Data Innovation released a report on holding algorithms accountable, with three goals: promoting desirable outcomes, protecting against undesirable ones, and ensuring laws can be applied to algorithmic decisions.

One way to achieve these outcomes is through transparency.  While accuracy should always be the first priority, transparency allows users to understand how certain data points will affect an eventual outcome.  In the criminal justice system, it's recommended that algorithms be open source, providing an important view into decisions like sentencing.
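To make that concrete, here is a minimal sketch of the kind of visibility an open, interpretable model provides.  It assumes a simple linear risk score; the feature names, weights, and threshold are hypothetical, chosen only for illustration.  With the model public, anyone can trace exactly how each data point moves the final decision.

```python
# Hypothetical open linear risk score: each feature's weight is public,
# so every contribution to the final decision can be inspected.
# Feature names, weights, and threshold are illustrative, not real.

FEATURE_WEIGHTS = {
    "prior_offenses": 0.8,
    "age_at_first_offense": -0.3,
    "months_employed": -0.5,
}
THRESHOLD = 1.0  # scores above this trigger the adverse outcome

def explain_score(subject: dict) -> None:
    """Print each feature's contribution to the overall risk score."""
    total = 0.0
    for feature, weight in FEATURE_WEIGHTS.items():
        # Inputs are assumed pre-scaled to comparable ranges.
        contribution = weight * subject[feature]
        total += contribution
        print(f"{feature:>22}: {contribution:+.2f}")
    verdict = "flagged" if total > THRESHOLD else "not flagged"
    print(f"{'total score':>22}: {total:+.2f} ({verdict})")

explain_score({"prior_offenses": 2, "age_at_first_offense": 1.5, "months_employed": 0.8})
```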

Transparency isn't without its issues, though, as companies have a vested interest in keeping their trade secrets, well, a secret.  Companies like Google make their money from proprietary algorithms that competitors can't imitate.  Additionally, increased transparency can invite those with access to an algorithm to game it or otherwise skew its results.

A good way to strike a balance between full transparency and a complete black box is to look at the results an algorithm produces.  Just like any new piece of software, algorithms should be tested to assess their impact.  A trial period can be used to determine whether an algorithm is producing results that are equitable and fair to those affected by it.
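Here is a minimal sketch of what such a trial-period audit could look like: collect the algorithm's decisions, compare outcome rates across groups, and flag large gaps for review.  The group labels, sample records, and disparity threshold below are hypothetical.

```python
# Trial-period audit sketch: compare an algorithm's approval rates across
# groups and flag disparities above a threshold.  All inputs are made up.

from collections import defaultdict

MAX_DISPARITY = 0.10  # flag if approval rates differ by more than 10 points

def audit_outcomes(records: list[dict]) -> None:
    """records: dicts with a 'group' label and a boolean 'approved' decision."""
    approved = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approved[r["group"]] += r["approved"]

    rates = {g: approved[g] / totals[g] for g in totals}
    for group, rate in rates.items():
        print(f"group {group}: {rate:.0%} approved ({totals[group]} decisions)")

    gap = max(rates.values()) - min(rates.values())
    print(f"largest gap: {gap:.0%} -> {'REVIEW' if gap > MAX_DISPARITY else 'ok'}")

trial_results = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
audit_outcomes(trial_results)
```

An outcome-rate gap like this is only one of several fairness definitions; which one fits depends on the decision being made, but the broader point stands: you can audit results without ever opening the black box.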

At Extract, we've developed machine learning capabilities to better capture discrete data fields from unstructured documents.  Rather than simply dropping this into our software, we've gone through months of testing to ensure that our software accurately learns as it reviews more of your documents, reducing the amount of human intervention needed to fine-tune our proprietary rules engine.

If you'd like to learn more about how we capture data and continuously improve our systems, please reach out today.


ABOUT THE AUTHOR: CHRIS MACK

Chris is a Marketing Manager at Extract with experience in product development, data analysis, and both traditional and digital marketing.  Chris received his bachelor's degree in English from Bucknell University and has an MBA from the University of Notre Dame.  A passionate marketer, Chris strives to make complex ideas more accessible to those around him in a compelling way.