Offsetting a New Rising Prejudice: AI Bias

“We need the ability to not only have high-performance models but also to understand when we cannot trust those models.” – Alexander Amini

At a time when many organizations look forward to incorporating Artificial Intelligence into their systems and processes for improved operations and efficiency, a new breeding ground for bias and discrimination has appeared. There is no need to stop the presses on AI deployment, but there is a need to build new methods of diversity, equity and inclusion into the implementation and monitoring of AI, to minimize bias and reduce discrimination in practice.

AI bias is prejudice embedded in the data and assumptions used to create AI algorithms, which can result in discrimination and other harmful social consequences.

How AI Inherits Bias

AI systems do not breed bias on their own. The prejudice is built in one way or another through human influence. There are two specific ways in which this can happen, as seen in two documented examples of AI bias.

1. The Unintentional Imitation of Human Bias

Artificial Intelligence is created based upon human criteria or upon historical data. In cases where historical data is used, algorithm creators must be aware of the intrinsic biases that human contributors may have introduced into the process they are trying to recreate through AI. Without this awareness, they run the risk of carrying forward the human biases that were present in the past.

An example can be seen as far back as 1988, when the UK Commission for Racial Equality found a British medical school guilty of racial and sexual discrimination in its admissions practices. The computer program the school used to determine which applicants would be invited for interviews was shown to be biased against women and against applicants with non-European names. More interesting is how this came to be: the program had been developed to match the school's historical admissions decisions, which it did with 90 to 95 percent accuracy. The bias that existed in those human decisions was thus passed on to the computer program created to reproduce their outcomes.
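This failure mode is easy to reproduce in miniature. The sketch below uses entirely hypothetical data and a deliberately naive "model" (all names and numbers are my own illustration, not the 1988 program): a rule fit to match past admissions decisions that penalized non-European names inherits the penalty along with everything else.

```python
import random
from collections import defaultdict

random.seed(0)

def historical_decision(score, name_is_european):
    # Hypothetical biased past process: the bar is higher for
    # applicants with non-European names.
    threshold = 60 if name_is_european else 75
    return score >= threshold

# "Historical" records the new model will be trained to match.
records = [(random.randint(40, 100), random.random() < 0.5)
           for _ in range(10_000)]
labels = [historical_decision(s, e) for s, e in records]

# A deliberately naive "model": memorize the majority historical outcome
# for each (score, name group) bucket -- i.e. match the past decisions
# as closely as possible, which is exactly what the real program was
# built to do.
counts = defaultdict(lambda: [0, 0])
for (s, e), y in zip(records, labels):
    counts[(s, e)][int(y)] += 1

def model(score, name_is_european):
    n_reject, n_admit = counts[(score, name_is_european)]
    return n_admit > n_reject

# Two equally qualified applicants, differing only in name group:
print(model(70, True))   # admitted
print(model(70, False))  # rejected: the bias was inherited
```

No one wrote discrimination into the new model; it learned the discrimination because the training target was the discriminatory past.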

2. The Replication of Implicit Bias

Implicit bias refers to stereotypes and attitudes that unconsciously affect our understanding, decision making and actions. Also known as implicit social cognition, these biases, which can be favorable as well as unfavorable, are activated involuntarily and without an individual's awareness, making them difficult to control.

In 2016, ProPublica, an investigative news site, found that a criminal justice algorithm used in Broward County, Florida, falsely labeled African-American defendants as “high risk” future criminals at nearly twice the rate it mislabeled white defendants. Conversely, white defendants were mislabeled as low risk more often than African-American defendants.

These risk assessments have become increasingly common in courtrooms around the country, informing decisions on whom to set free and on whom to impose harsher sentences. They are used at every stage of the criminal justice system, from assigning bond amounts to sentencing and release. Risk assessment results are known to be given to judges during criminal sentencing in Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin. Despite the documented bias, such AI tools continue to be used in criminal justice systems.
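At bottom, the disparity ProPublica measured is a gap in false positive rates: people who did not go on to reoffend but were labeled high risk anyway. A minimal sketch of that measurement, using made-up numbers rather than ProPublica's data, might look like this:

```python
def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT reoffend (outcome 0) but were
    nevertheless predicted high risk (prediction 1)."""
    non_reoffenders = [p for p, o in zip(predictions, outcomes) if o == 0]
    return sum(non_reoffenders) / len(non_reoffenders)

# Illustrative (high-risk prediction, actual reoffense) pairs per defendant.
group_a = [(1, 0), (1, 0), (0, 0), (1, 1), (0, 1), (1, 0), (0, 0), (1, 1)]
group_b = [(1, 0), (0, 0), (0, 0), (1, 1), (0, 0), (0, 1), (0, 0), (1, 1)]

for name, group in [("group A", group_a), ("group B", group_b)]:
    preds, outs = zip(*group)
    print(name, false_positive_rate(preds, outs))
# group A: 3 of 5 non-reoffenders labeled high risk -> 0.6
# group B: 1 of 5 non-reoffenders labeled high risk -> 0.2
```

Note that both groups here see the same overall mix of outcomes; the tool is unequal only in *which* direction it errs for whom, which is precisely why this kind of per-group error audit matters.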

Solutions

Diversity, Equity and Inclusion is deeper than a workshop. It's more than an enlightening conversation. My thought has often been that an organization has to take the plunge and weave DE&I into every aspect of its culture, operations, policies and procedures. That includes artificial intelligence. DE&I cannot be left out of any part of the organization, lest that part become a forgotten branch producing tainted appendages that, like a malignant cancer, eventually threaten the stability and growth of the whole. Three solutions include:

1. AI Development Teams Require DE&I Awareness

Those creating the algorithms must be well versed in the realities of bias and inclusion. It is imperative that these individuals be mindfully aware of the types of bias that can affect the results of their programming, and of how to reduce (with the ultimate goal of eliminating) prejudicial outcomes. It is not enough to simply know one's craft; these individuals must delve deeply into diversity, equity and inclusion and apply those learnings to the crafting of AI programming.

2. Internal Compliance Team Review and Monitoring

As AI algorithms are created, implemented and monitored, a compliance team should work alongside the development team to ensure compliance with federal nondiscrimination laws and corporate policy, and to assess the possible effects of implicit bias.
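One concrete check such a team could automate is the "four-fifths rule," a rough disparate-impact screen drawn from US employment-selection guidance: flag any group whose selection rate falls below 80 percent of the most-favored group's. The function names and numbers below are illustrative, not a legal test.

```python
def selection_rate(selected, total):
    """Fraction of a group's applicants selected by the AI tool."""
    return selected / total

def passes_four_fifths(group_rate, reference_rate, threshold=0.8):
    """Four-fifths rule screen: a group's selection rate should be at
    least 80% of the most-favored (reference) group's rate."""
    return group_rate / reference_rate >= threshold

# Illustrative audit of an AI screening tool's outcomes for one period.
reference = selection_rate(50, 100)  # most-favored group: 50% selected
monitored = selection_rate(30, 100)  # monitored group: 30% selected

print(passes_four_fifths(monitored, reference))  # 0.30 / 0.50 = 0.6 -> False
```

A failed screen is not proof of illegal discrimination, but it is exactly the kind of signal that should trigger a joint review by the compliance and development teams.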

3. Anti-Discrimination Policies and Enforcement

As referenced above, a compliance team should monitor adherence to corporate policies, including diversity, equity and inclusion policies. Of course, for a compliance team to monitor and assess this, those policies have to exist. Organizations show just how serious they are about DE&I by the policies they create and, more importantly, by how well they hold all parties accountable to them. AI should be included in such policies, with an explicit requirement to assess and eradicate any discriminatory impact.

Closing Thoughts

I cannot close without adding the obvious. Diversity, Equity and Inclusion require that a diversity of individuals be involved at every level and every stage of work in an organization, a program, a process, and yes, in the building of algorithms. Implementing compliance teams and increasing the awareness and change capabilities of the individuals involved are not enough, and neither is effective if the team members are homogeneous. The AI developers, the compliance teams and the policy makers all need to be diverse and inclusive.

The benefits that AI can bring to modern-day business are quite remarkable, but not if isms and bias end up built into its core. That is the equivalent of replacing a car with faulty brakes to ensure the safety of yourself and your passengers, then cutting the brake lines of the new car and driving off. Sounds crazy, I know, but so does building an AI algorithm to improve efficiency, accuracy and quality, then building in every inefficiency, inaccuracy and failure of quality that existed before your AI development.



Best Regards
C.
