Understanding Bias in AI

Dylan D'Souza
Dec 24, 2022


“The key to artificial intelligence has always been the representation.”

- Jeff Hawkins

Biased AI Models are a major pitfall to avoid in automation (Source: Logically.ai)

Introduction

Simply put, artificial intelligence is the process by which a computer analyses large amounts of similar data to find trends and patterns. The computer uses these patterns to learn, or gain knowledge, about a certain category. Algorithmic models then structure the computer's problem-solving capacity to a desirable extent, allowing it to give us a simple, usable output.

Learning by Computers

As children, we’re often kept away from sharp edges or hot vessels, for the sake of our safety. The simple reason is that our brains lack the cognitive development to detect certain threats and dangers. As we age, our brains start to develop, and we subconsciously keep away from possible dangers. This development is a perfect example of experience. With experience, we can gauge if something is hazardous to us (a sharp edge), and take subsequent actions (staying away).

Similarly, the data (sharp edges) that a computer comes across, helps it to react accordingly (staying away). A larger quantity of data helps build more experience and therefore increases efficiency. However, there is a vital question that we must keep in mind.

How does the computer know about how it should react to certain data?

Labelling Data

To answer the question above, there is a very simple solution — labelling data. If we label every piece of data with the preferred reaction, the computer can learn from these labels. It will then understand how to react to new situations without being explicitly programmed for each one, drawing instead on its experience with the labelled data.
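As a toy illustration (the dataset, labels, and function here are invented for this article, not from any real system), labelled data is just input–reaction pairs, and the simplest possible "learning" is remembering which reaction goes with which input:

```python
# A tiny invented dataset of labelled examples: each input
# is paired with the reaction we would like the computer to learn.
labelled_data = [
    ("sharp edge", "stay away"),
    ("hot vessel", "stay away"),
    ("soft toy", "safe to touch"),
]

# The simplest possible "learning": remember every label seen so far.
knowledge = {item: label for item, label in labelled_data}

def react(item):
    """Look up the learned reaction; admit ignorance for unseen inputs."""
    return knowledge.get(item, "unknown - no experience yet")

print(react("sharp edge"))   # stay away
print(react("open flame"))   # unknown - no experience yet
```

Real models generalize rather than memorize, but the principle is the same: the reactions the computer learns are only as good as the labels it was given.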

The Issue (AI Bias)

Labelling data may not be as easy as it sounds. Often, we tend to make errors. Let’s take the example of an AI-based calculator. If a mathematician had to label the solution of a math equation, they would most probably get an accurate answer. However, if the same labelling task is assigned to a toddler (who has little or no experience with mathematics), the chances of an accurate answer drop significantly.

As humans, we seem to understand this relatively easily. Computers, however, only see that there are 2 different labels for a particular solution. They have no idea which one was suggested by the toddler and which by the mathematician. In fact, they do not even know that a mathematician and a toddler labelled the data. Naturally, when an output is asked for, the computer will yield the mathematician's answer 50% of the time and the toddler's answer the other 50%.

This is not efficient at all.

If the toddler is given a chance to label a second copy of the same problem, the computer would now favor the toddler's labelling: it appears in 2 of the 3 labels (67%), compared to the experienced mathematician's 1 out of 3 (33%).

Essentially, even if the algorithms involved are coded perfectly, the AI model would still be flawed. Poor representation or under-representation of data confuses the AI model. Bias in the data becomes bias in the model.
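The 50/50 and 67/33 splits above can be checked with a short sketch: a naive model that simply answers in proportion to the labels it has seen (the equation and answers are this article's hypothetical, not real data):

```python
from collections import Counter

def answer_distribution(labels):
    """Fraction of the time a label-frequency model gives each answer."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {answer: count / total for answer, count in counts.items()}

# One mathematician's label and one toddler's label for the same equation:
labels = ["4", "5"]          # mathematician says 4, toddler says 5
print(answer_distribution(labels))   # {'4': 0.5, '5': 0.5}

# The toddler labels a second copy of the same equation:
labels.append("5")
print(answer_distribution(labels))   # now roughly {'4': 0.33, '5': 0.67}
```

The algorithm itself never changed; only the balance of the labels did, yet the wrong answer went from a coin flip to the favorite.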

Reasons for AI Bias

1. Insufficient Training Data: As in the example above, a lack of data leaves AI models uncertain about certain outputs. They may give different outputs for the exact same input, and hence lack reliability.

2. Bias in Datasets: Labelled datasets that computers use to train their algorithms often have discriminatory labels, or in some cases, are labelled incorrectly. These are usually the result of innocent human errors rather than deliberate manipulation.

3. History: Around 29% of the world's senior management roles are occupied by women. This does not necessarily mean that the figure will remain at 29% in the future. Algorithms, however, are trained on historic data rather than on how things could change, which can yield biased results in some cases.
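The "history" problem can be made concrete with a sketch: a naive frequency-based model trained on a historically skewed dataset simply reproduces the skew (the records below are invented to mirror the 29% statistic above):

```python
# Invented historical records of who held senior management roles.
history = ["man"] * 71 + ["woman"] * 29

def predicted_rate(group):
    """A naive model 'trained' on history: it predicts each group
    at exactly the rate the group appeared in the past."""
    return history.count(group) / len(history)

print(predicted_rate("woman"))  # 0.29 - the model assumes the past continues
```

Nothing in this model is "broken"; it faithfully learned the data it was given. That is precisely why historical bias is so hard to notice.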

Examples of AI Bias

1. Sexist Hiring Algorithm: In 2018, Amazon was working on an AI recruiting system designed to scan through resumes and shortlist the most qualified candidates. However, the AI turned out to be heavily biased against women and shortlisted almost no female candidates. Amazon soon scrapped the system.

2. Racism in Facial Recognition Systems: Certain facial recognition systems failed to maintain accuracy for people of all skin tones. This could be a result of bias in datasets or a lack of data for people of color.

Tackling AI Bias

Step 1: Understanding the type of AI model being used. In certain cases, filtration for biases is not as important as in other cases. For example, a handwriting detector might not need to be filtered as much as an employee hiring algorithm.

Step 2: Practicing the use of certain tests and processes to check for bias within the dataset. It might be time-consuming, but it helps keep your model from becoming biased.

Step 3: Investing in more data. Often, we tend to use data of a certain type more than another because it is cheaper or more convenient to obtain. Investing more time or money to retrieve a large amount of uncommon data is worth it, as it keeps the model unbiased and allows you to test on unseen data (a type the computer has never encountered before).
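Steps 2 and 3 can be started with a very simple check (a sketch only; real bias audits use far more sophisticated statistical tests): count how often each group appears in the dataset and flag anything below a chosen threshold as a candidate for collecting more data.

```python
from collections import Counter

def underrepresented(groups, threshold=0.2):
    """Return the groups whose share of the dataset falls below `threshold`."""
    counts = Counter(groups)
    total = sum(counts.values())
    return [g for g, c in counts.items() if c / total < threshold]

# Invented dataset: skin-tone labels attached to face images.
samples = ["light"] * 80 + ["medium"] * 15 + ["dark"] * 5
print(underrepresented(samples))   # ['medium', 'dark'] - gather more of these
```

The threshold of 20% here is an arbitrary choice for illustration; what counts as adequate representation depends on the problem and the stakes involved.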

My Advice

In my opinion, the fact that researchers have been able to understand and conceptualize bias in AI is a huge plus. AI adoption is already widespread globally, but areas such as social justice and hiring are still largely managed by humans. Identifying bias now allows us to form constructive solutions before the large-scale implementation of AI in these areas.

My advice is to try and use datasets that have been skimmed through for biases, as well as to ensure that there is no under-representation of a particular type of data within the set.

Helpful Links:

Reasons for Bias

https://mostly.ai/blog/10-reasons-for-bias-in-ai-and-what-to-do-about-it-fairness-series-part-2

The Role of Bias in AI

https://www.forbes.com/sites/forbestechcouncil/2021/02/04/the-role-of-bias-in-artificial-intelligence/?sh=2bd2237a579d

Tackling Bias in AI

https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans

Picture Source:

https://www.logically.ai/articles/5-examples-of-biased-ai
