Detecting and preventing bias in AI and ML at the source

A look at how scientists can more accurately optimize algorithms and data

30 August

When one first thinks of bias, personal opinion is the first thing that springs to mind. To have a bias either toward or against something is to project our individual beliefs, opinions, and viewpoints onto a subject in a manner that influences how we view it. Adopting a bias is, therefore, something only a human can do, right? After all, it requires a rational and rather complex mental process.

Not quite. In fact, recent research reveals that as artificial intelligence (AI) and machine learning (ML) technology grows more sophisticated, the tools we are creating to so closely replicate human processing and cognitive function might be taking on human biases as well.

Just how does this happen? 

Moving forward, how can developers better recognize and prevent bias from entering the equation? While it might be impossible to rule out every instance, it is possible to control the risk and prevent many occurrences. The key is knowing what to look for and how to mitigate that risk as much as possible. Let’s explore further.


Understanding ML bias

While the dictionary offers a standard definition of bias, when examined in the context of ML and AI the term takes on a slightly different meaning, though the underlying concept is the same. In this instance, bias refers less to opinion and more to oversimplification.

In short, when a machine learning model is biased, it is unable or less able to adapt to variations in its training data, preferring one route as its primary mechanism. This means the developed AI algorithm is rigid and inflexible, unable to adjust when the data at hand varies. It is also unable to pick up on the subtle complexities that define a particular data set.

The opposite of a high-bias model is a high-variance one, which is fluid and better able to expand, morph and accommodate fluctuations in the training data. While this flexibility can seem preferable to rigidity, developers should keep in mind that an easily changeable algorithm is also more sensitive to noise and may struggle to generalize to new data.

Considered against the backdrop of traditional bias, this rigidity can be understood as someone who holds so tightly to previously formed beliefs that he or she is unable to adapt to a new way of thinking. Data scientists are trained to strike a middle ground at this juncture, making algorithmic adjustments that result in an optimal AI model that is neither too biased nor too variable.
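As a rough illustration of that middle ground, the sketch below compares an underfit (high-bias), a balanced, and an overfit (high-variance) model on their training and validation errors. It assumes scikit-learn is available and uses a made-up noisy sine-wave dataset and polynomial degrees chosen purely for illustration, not as a recipe.

```python
# Minimal sketch of the bias-variance trade-off, assuming scikit-learn and a
# synthetic noisy sine-wave dataset used purely for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * X.ravel()) + rng.normal(scale=0.3, size=200)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Degree 1 underfits (high bias), degree 15 overfits (high variance);
# a mid-range degree is usually the healthier compromise.
for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    val_err = mean_squared_error(y_val, model.predict(X_val))
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  val MSE={val_err:.3f}")
```

A large gap between training and validation error signals too much variance, while high error on both signals too much bias; tuning toward the degree that minimizes validation error is one simple way to find the balance described above.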

Preventing sample bias

While algorithm bias occurs at the development stage, there are other places where it can affect the ML process as a whole, and where established techniques can make a major difference. One such touchpoint is the data sampling stage. In short, when the machine model interacts with a data sample, the intent is for that sample to fully replicate the problem space that the machine will ultimately operate within.

However, there are instances where the sample does not fully convey the entire environment and, as such, the model is not entirely prepared to accommodate its new settings with optimal flexibility. Consider, for example, a bicycle that is designed to perform on both mountainous terrain and roadways with equal ease, yet is only tested in mountainous conditions. In this case, the training data would have sample bias and the resulting model might not operate in both environments equally well, because its training was incomplete and unrepresentative.

To avoid this, developers can follow a number of techniques to ensure that the sample data they use is congruent with the real population at hand. This typically means drawing multiple samples from that population and testing how representative they are before relying on them for training.
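One simple sketch of such a representativeness check is shown below. It assumes pandas and SciPy are available and uses a made-up "terrain" column in the spirit of the bicycle example, comparing the category proportions in a candidate sample against the full population with a chi-square test.

```python
# Rough sketch of a sample-representativeness check, assuming pandas and SciPy;
# the 'terrain' column and category counts are illustrative assumptions.
import pandas as pd
from scipy.stats import chisquare

population = pd.Series(["mountain"] * 500 + ["road"] * 500, name="terrain")
sample = pd.Series(["mountain"] * 95 + ["road"] * 5, name="terrain")  # skewed sample

categories = sorted(population.unique())
pop_props = population.value_counts(normalize=True).reindex(categories)
obs_counts = sample.value_counts().reindex(categories, fill_value=0)
exp_counts = pop_props * len(sample)  # counts expected if the sample were representative

stat, p_value = chisquare(f_obs=obs_counts, f_exp=exp_counts)
print(f"chi-square={stat:.1f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Sample distribution differs from the population; consider resampling.")
```

A low p-value here simply flags that the sample's terrain mix does not match the population's; the developer would then redraw or rebalance the sample before training.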


Automation and prejudicial bias

Without even realizing it, developers and data scientists can also inject prejudicial biases into their training data. This can occur through an ML inconsistency, human error or both. In the first case, a machine might unintentionally develop a prejudicial bias by learning only a specific view of the data that is not fully representative of the entire population.

Consider, for instance, an algorithm that is exposed to images of men both at work and at home. The machine could learn that fathers are male, which is a true statement regardless of the man’s location. However, if a data scientist isn’t intentional about which workplace images are portrayed, the machine could also deduce that all construction workers are male. While it is true that this is a male-dominated profession, there is also a growing number of women in the field who cannot be overlooked or discounted.

In this case, the causal relationship is indeed false and includes elements of bias. This is primarily because it fails to take into account the various outlying scenarios that could prove true. Another real-world example would be a company that installs a time tracker in its office to improve productivity. On one hand, analysts might argue that employees who never clock in are slacking off and costing the corporation money. Yet, this small-picture perspective might fail to consider employees who simply forget to clock in or are working in a remote location without internet access. In other words, lumping everyone into a general category without taking into account variances in the typical model can be a breeding ground for a misconstrued assumption.

To prevent this from occurring, scientists should consider the myriad ways that prejudice and stereotyping can enter into data, and take steps to prevent them as proactively as possible. In many cases, this will require constraints to be placed on the training data itself. Moreover, managers may need to hold specific training courses for scientists and developers working on these projects to help them identify any societal biases that may be at play and learn techniques for avoiding them.
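As one example of such a constraint, the sketch below rebalances an underrepresented group within the training data so the model is less likely to learn the "construction worker implies male" shortcut. It assumes pandas and scikit-learn; the column names and counts are hypothetical, and upsampling is only one of several possible constraints (reweighting or targeted data collection are alternatives).

```python
# Minimal sketch of constraining training data by rebalancing an
# underrepresented group; column names and counts are illustrative assumptions.
import pandas as pd
from sklearn.utils import resample

# Hypothetical labelled images, heavily skewed toward male construction workers.
df = pd.DataFrame({
    "occupation": ["construction"] * 100,
    "gender": ["male"] * 95 + ["female"] * 5,
})

groups = [g for _, g in df.groupby("gender")]
target_size = max(len(g) for g in groups)

# Upsample each smaller group (with replacement) to match the largest group.
balanced = pd.concat(
    [resample(g, replace=True, n_samples=target_size, random_state=0) for g in groups],
    ignore_index=True,
)

print(df["gender"].value_counts())        # original, skewed distribution
print(balanced["gender"].value_counts())  # roughly balanced before training
```

The point is not the specific resampling call but the discipline: inspect the distribution of sensitive attributes before training, and apply an explicit constraint when the data alone would teach a stereotype.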

The importance of identifying bias in AI algorithms

Ultimately, AI and ML algorithms, however sophisticated and automated they eventually become, begin as human ideas. They are then designed, manipulated, tested and trained by humans as well. As such, there are many ways in which human error, judgment, opinion or experience can find its way into the outcome. When this happens and the model itself is faulty, it has an even harder time performing on data that is also biased. In-house training is required to ensure that these situations happen as infrequently as possible and, when they do, that the issues are caught and reversed as quickly as possible.
