Bias in AI is expanding and it’s time to fix the problem


This article is contributed by Loren Goodman, co-founder and CTO at InRule Technology.

Traditional machine learning (ML) only does one thing: it makes a prediction based on historical data.

Machine learning starts by analyzing a table of historical data and producing a so-called model; this is known as training. After the model is created, a new row of data can be entered into the model and a prediction is returned. For example, you can train a model from a list of home transactions and then use the model to predict the sale price of a home that has not yet been sold.
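The train-then-predict workflow described above can be sketched in a few lines. The housing data, the single square-footage feature, and the closed-form least-squares fit below are illustrative assumptions, not a production pipeline:

```python
# A minimal sketch of "train on historical rows, then predict a new row,"
# using a one-feature linear model fit by ordinary least squares.
# All data and column meanings are invented for illustration.

def train(rows):
    """Fit price = a * sqft + b from historical (sqft, price) rows."""
    n = len(rows)
    mean_x = sum(x for x, _ in rows) / n
    mean_y = sum(y for _, y in rows) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in rows)
    var = sum((x - mean_x) ** 2 for x, _ in rows)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict(model, sqft):
    a, b = model
    return a * sqft + b

# Historical transactions: (square footage, sale price)
history = [(1000, 200_000), (1500, 290_000), (2000, 405_000), (2500, 500_000)]
model = train(history)
# Predict the sale price of a not-yet-sold 1,800 sq ft home.
print(round(predict(model, 1800)))  # prints 358900
```

Real systems use many features and more sophisticated models, but the shape of the workflow is the same: historical rows in, a model out, then predictions for new rows.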

There are two primary problems with machine learning today. First, there is the ‘black box’ problem. Machine learning models make very accurate predictions, but they lack the ability to explain the reasoning behind a prediction in human-comprehensible terms. Machine learning models only give you a prediction and a score that indicates confidence in that prediction.

Second, machine learning cannot think beyond the data used to train it. If there is historical bias in the training data, that bias will be present in the predictions if it is not checked. While machine learning presents exciting opportunities for consumers and businesses alike, the historical data on which these algorithms are built can be fraught with inherent biases.

The cause for alarm is that business decision makers have no effective way of seeing biased practices coded into their models. For this reason, there is an urgent need to understand what biases lurk in source data. Accordingly, human-managed governors should be installed as safeguards against actions resulting from machine learning predictions.
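One simple way to start probing source data for bias is to compare outcome rates across demographic groups before training on that data. The groups, records, and the 0.8 threshold below (the "four-fifths rule" of thumb from US employment guidelines) are illustrative assumptions, not a full fairness audit:

```python
# A sketch of a basic disparate-impact check on historical decision data.
# Records and group labels are hypothetical.

def approval_rate(decisions, group):
    """Fraction of approved decisions within one demographic group."""
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

# Hypothetical historical decisions a model would be trained on.
decisions = (
    [{"group": "A", "approved": a} for a in (True, True, True, True, False)]
    + [{"group": "B", "approved": a} for a in (True, True, False, False, False)]
)

ratio = approval_rate(decisions, "B") / approval_rate(decisions, "A")
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"possible disparate impact: selection ratio = {ratio:.2f}")
```

A check like this does not prove or disprove bias on its own, but it flags data that deserves human scrutiny before a model is trained on it.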

Biased predictions lead to biased actions, and as a result we “breathe our own exhaust”: each biased decision feeds the data behind the next one, creating a cycle in which the problem grows with every prediction. The sooner you detect and eliminate biases, the faster you mitigate risk and expand your market into previously rejected opportunities. Those who fail to address bias now expose themselves to a host of future unknowns regarding risks, penalties and lost revenue.

Demographic patterns in financial services

Demographic patterns and trends can also introduce bias in the financial services industry. A famous example comes from 2019, when web programmer and author David Heinemeier Hansson shared on Twitter his outrage that Apple’s credit card offered him a credit line 20 times higher than his wife’s, even though they file joint tax returns.

Two things to keep in mind about this example:

First, the acceptance process was found to be in accordance with the law. Why? Because there are currently no US laws governing AI bias, as the subject is seen as highly subjective. Second, to correct these models, historical biases have to be identified and accounted for in the algorithms; otherwise, the AI has no way of knowing that it is biased and cannot correct its mistakes. Doing so breaks the cycle of “breathing our own exhaust” and yields better predictions tomorrow.

Real-world costs of AI bias

Machine learning is used in many applications that affect the public. In particular, increasing attention is being paid to its use in social service programs such as Medicaid, housing assistance, and Social Security supplemental income. The historical data these programs rely on can be plagued by bias, and models trained on that biased data perpetuate it. Awareness of possible biases, however, is the first step toward correcting them.

A popular algorithm used by many major US health care systems to screen patients for high-risk care management intervention programs was revealed to discriminate against black patients because it was trained on data about the cost of treating patients. The model did not account for racial inequalities in access to health care, which lead to lower expenditures for black patients than for similarly diagnosed white patients. According to Ziad Obermeyer, an acting associate professor at the University of California, Berkeley, who worked on the study, “Cost is a reasonable measure of health, but it’s a biased one, and that choice is actually what introduces bias into the algorithm.”

In addition, a much-cited case showed that judges in Florida and several other states relied on a machine learning-powered tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to estimate recidivism rates for inmates. Numerous studies, however, disputed the algorithm’s accuracy and revealed racial bias, even though race was not included as an input to the model.

Overcoming prejudice

The solution to AI bias in models? Put people at the helm to decide when, or whether, to act on a machine learning prediction. Explainability and transparency are critical to helping people understand AI and why the technology makes certain decisions and predictions. By surfacing the reasoning and factors behind ML predictions, teams can uncover algorithmic biases and adjust decisions to avoid costly fines or harsh feedback on social media.
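As a sketch of what explainability can look like in practice: for a simple linear scoring model, each feature’s contribution is just its weight times its value, which a human reviewer can inspect before acting on a prediction. The weights and feature names below are invented for illustration:

```python
# A sketch of per-prediction explanation for a linear scoring model:
# each feature's contribution is its weight times its value, so a
# reviewer can see which factors drove the score. Weights and feature
# names are hypothetical.

weights = {"income": 0.4, "debt_ratio": -0.5, "years_at_job": 0.2}
bias_term = 0.1

def explain(applicant):
    """Return the score and each feature's contribution, largest first."""
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    score = bias_term + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

applicant = {"income": 0.9, "debt_ratio": 0.7, "years_at_job": 0.5}
score, factors = explain(applicant)
print(f"score = {score:.2f}")
for name, contrib in factors:
    print(f"  {name}: {contrib:+.2f}")
```

More complex models need dedicated explanation techniques, but the goal is the same: expose which factors drove a prediction so a person can judge whether those factors are legitimate.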

Businesses and technologists need to focus on explainability and transparency within AI.

There is limited but growing regulation and guidance from lawmakers aimed at reducing biased AI practices. Recently, the British government published its Ethics, Transparency and Accountability Framework for Automated Decision-Making to provide more precise guidance on the ethical use of artificial intelligence in the public sector. This seven-point framework will help public-sector organizations create secure, sustainable and ethical algorithmic decision-making systems.

To unlock the full power of automation and create just change, people need to understand how and why AI bias leads to certain outcomes and what it means for all of us.

