What is AI bias in machine learning?

AI bias, also known as machine learning bias or algorithmic bias, refers to a situation where AI systems produce skewed or prejudiced results. These biases can reflect and even amplify existing human biases.

AI bias can creep into machine learning (ML) models in two main ways:

  • Training Data: Algorithms learn by recognizing patterns in the data they're fed. If that data reflects historical prejudices or skewed information, the algorithm inherits those biases, which can lead to unfair outcomes. For example, if an AI system for movie recommendations is trained on data from people who mostly watch comedies, it might recommend comedies to everyone, even if you prefer documentaries. The AI learns to associate "good movies" with comedies because of the biased data it was trained on (a minimal sketch of this effect follows this list).
  • Algorithmic Design: The way an algorithm is designed can also introduce bias. If the algorithm is programmed to focus on specific patterns, it might overlook evidence that contradicts those patterns. An image recognition system trained primarily on pictures of men might have difficulty identifying women accurately.
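
To make the training-data point concrete, here is a minimal, hypothetical sketch in Python: a naive "most popular" recommender fitted on skewed viewing history ends up suggesting comedies to everyone, regardless of individual preference. The data and function are invented for illustration.

```python
from collections import Counter

# Invented viewing history: 90% of training users watched comedies.
training_history = ["comedy"] * 90 + ["documentary"] * 10

def recommend(user_preference: str) -> str:
    """Naive 'most popular' recommender: it ignores the individual user and
    returns whichever genre dominates the (skewed) training data."""
    most_common_genre, _ = Counter(training_history).most_common(1)[0]
    return most_common_genre

print(recommend("documentary"))  # -> 'comedy', despite the user's stated preference
```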

Common types of AI bias include confirmation bias, measurement bias, selection bias, and stereotyping bias, to name a few. Whatever its form, AI bias can have serious consequences, leading to unfair and discriminatory outcomes.

Insight: Can you identify AI bias? 

AI bias can be particularly tricky to identify because it can be invisible. Unlike a human making a biased decision, an AI system might not show any outward signs of prejudice. However, you can analyze training data for representation and historical biases, examine algorithms using fairness metrics, and assess outcomes for disparities. Continuous monitoring through feedback loops, regular audits, and ethical compliance checks helps maintain fairness.
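
As a hedged illustration of one such fairness metric, the sketch below computes a demographic parity gap: the difference in positive-outcome rates between groups. The data, group labels, and function name are invented for illustration; a real audit would combine several complementary metrics.

```python
def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rate between groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Invented predictions (1 = favourable outcome) and group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # group A: 0.75, group B: 0.25 -> gap 0.5
```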

What is confirmation bias in AI?

Confirmation bias is an AI system's tendency to favor information that confirms patterns already present in its training data. This can amplify existing biases and limit the system's ability to surface new information.

LLMs trained on historical data might inherit biases present in that data. For instance, if historical texts about a certain culture rely on stereotypes, the LLM might unintentionally perpetuate those stereotypes when responding to prompts or questions related to that culture.

Confirmation bias can also lead to echo chambers where false information is shared and reinforced, potentially perpetuating negative stereotypes or pushing certain narratives based on the data used to train the AI models.

Case study: Microsoft Tay chatbot - exposed by biased data

Microsoft's AI chatbot, Tay, was introduced in March 2016 with high hopes as an artificial intelligence-powered "social chatbot" designed to engage with the masses on Twitter and other social media platforms. It was engineered to learn from interactions with users and improve its conversational abilities.

However, within hours of its launch, Tay started to post offensive content, mimicking the language of malicious pranksters who had deluged its Twitter feed with hate speech and other negative inputs.

This incident demonstrated confirmation bias in AI. The chatbot's responses were influenced by the biased data it received from a subset of users, reinforcing their negative beliefs and behaviors. It also underscores the importance of ensuring that AI systems are designed and deployed to minimize the risk of perpetuating harmful stereotypes and biases.

Tips: Mitigating confirmation bias

• Diverse datasets: Feed AI systems diverse datasets that encompass a broad range of perspectives. This exposes the AI to a more complete picture and reduces the influence of any single viewpoint (a quick dataset-balance check is sketched after these tips).
• Challenge the AI: Incorporate techniques that challenge the AI's assumptions during training, such as introducing contradictory information or prompting the model to consider alternative explanations for the data it encounters.
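
As a rough sketch of the diverse-dataset tip above, the snippet below counts how many training documents come from each source and naively oversamples the under-represented one. The corpus, source names, and oversampling choice are illustrative assumptions, not a prescribed recipe.

```python
import random
from collections import Counter

# Invented corpus: 80 documents from one source, 20 from another.
corpus = [("outlet_a", "text ...")] * 80 + [("outlet_b", "text ...")] * 20

counts = Counter(source for source, _ in corpus)
print(counts)                               # Counter({'outlet_a': 80, 'outlet_b': 20})

# Naive oversampling: duplicate minority-source documents until sources are even.
target = max(counts.values())
balanced = list(corpus)
for source, n in counts.items():
    pool = [row for row in corpus if row[0] == source]
    balanced += random.choices(pool, k=target - n)

print(Counter(source for source, _ in balanced))  # both sources now at 80
```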

What is measurement bias in AI models?

Measurement bias creeps into AI models when the methods of data collection misrepresent the real world. This happens because of incomplete data, biased sampling, or other issues that result in a dataset that does not accurately represent the reality it aims to model. Imagine measuring height with a ruler that's always a centimeter short. Every measurement would be biased, underestimating the actual height. This bias can severely impact the accuracy and reliability of AI systems.

For example, suppose a model is trained to predict students' success in an online course, but the training data only includes students who completed the course. When prompted, it may not accurately predict the performance of students who drop out.
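
A toy numerical sketch of that example: if the model only ever sees students who finished the course, its estimate of "typical" performance is systematically too high. All numbers below are invented.

```python
# Scores of students who completed the course (what the model is trained on).
completers = [85, 88, 90, 92, 95]
# Scores the dropouts would have earned (never collected, so never seen).
dropouts = [40, 45, 50, 55]

model_estimate = sum(completers) / len(completers)
true_average = sum(completers + dropouts) / len(completers + dropouts)

print(f"model's estimate of average performance: {model_estimate:.1f}")  # 90.0
print(f"actual average across all students:      {true_average:.1f}")    # 71.1
```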

Measurement bias in LLMs can also arise from how we evaluate these models, in ways that overlook or misrepresent their actual capabilities. Here are some instances:

  • Focus on accuracy: A common metric for LLMs is accuracy, often measured in tasks like question answering or text completion. However, an LLM might achieve high accuracy by simply memorizing statistical patterns in the training data. This doesn't necessarily translate to real-world understanding or reasoning.
  • Limited datasets: Evaluation metrics are often based on specific datasets. An LLM might perform well on a certain benchmark but fail on tasks with different data distributions or formats. This creates a narrow view of the LLM's capabilities.
  • Human evaluation bias: Human evaluation is sometimes used to assess LLM outputs for fairness or offensiveness. However, human evaluators themselves can have biases, leading to inconsistent or subjective judgments. This can paint an inaccurate picture of the LLM's true bias.

Tips: Mitigating measurement bias

• Transparency: Increased transparency in how AI models are measured can help to identify and address potential biases.
• Focus on explainability: A crucial aspect of mitigating measurement bias is focusing on explainable AI (XAI) techniques. By understanding how AI models arrive at their decisions, we can better assess whether these decisions are based on genuine insights or simply artifacts of biased measurements.
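
As one hedged example of an XAI-style check, the sketch below uses permutation importance from scikit-learn on synthetic data: it measures how much the model's score drops when each feature is shuffled, flagging the features the model leans on most heavily so they can be reviewed for bias. This is just one of many explainability techniques.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data stands in for a real, audited dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    # Large values mean the model relies heavily on this feature;
    # if it maps to a sensitive attribute, that is a red flag worth reviewing.
    print(f"feature_{i}: {importance:.3f}")
```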

What is selection bias in AI models?

Selection bias arises from how data is chosen for training machine learning models. It occurs when the selection process doesn't accurately represent the entire population of interest, so the AI system learns from a limited sample.

For example, if you're training an AI to identify dog breeds and feed it pictures of poodles and chihuahuas, it will be great at identifying those breeds. However, it might struggle to recognize a Great Dane because it wasn't included in the training data. 

Selection bias can occur due to: 

  • Non-random sampling: If the training data is not chosen randomly from the whole population, it might overrepresent certain groups and underrepresent others (a simple check for this is sketched after this list).
  • Data availability: Sometimes, the accessible data might not be truly representative. For example, training AI for social media content moderation using data primarily from English speakers. The model might struggle to identify harmful content in other languages, potentially allowing hateful or offensive posts to slip through.
  • Self-selection bias: This occurs when participants choose themselves for a study or data collection process. For instance, an online survey about social media usage might attract more users who spend a lot of time on social media, skewing the results.
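
A minimal, hypothetical check for non-random sampling: compare the group shares in the training sample with the shares you expect in the population the model will serve. The breeds, shares, and flagging threshold below are illustrative only.

```python
from collections import Counter

# Assumed population mix the model is meant to cover (invented shares).
population_share = {"poodle": 0.2, "chihuahua": 0.2, "great_dane": 0.2,
                    "labrador": 0.2, "beagle": 0.2}
# What actually ended up in the training set.
training_labels = ["poodle"] * 450 + ["chihuahua"] * 450 + ["labrador"] * 100

counts = Counter(training_labels)
total = sum(counts.values())
for breed, expected in population_share.items():
    observed = counts.get(breed, 0) / total
    flag = "  <-- under-represented" if observed < 0.5 * expected else ""
    print(f"{breed:<11} expected {expected:.0%}, observed {observed:.0%}{flag}")
```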

Case study: Amazon’s AI recruiting model

Amazon's AI recruiting tool was trained on resumes of past hires, who were mostly male engineers. This led the system to favor resumes that resembled those of past hires, unintentionally discriminating against women. For example, the algorithm reportedly downgraded resumes that included the word "women's", as in "women's chess club captain", because such terms were rare in the male-dominated training data.

Lessons learned:

  • Data diversity is important: Training data for AI systems needs to be scrupulously examined for bias and actively curated to ensure it reflects the diversity of the real world. This might involve collecting data from underrepresented groups or employing techniques to balance datasets.
  • Algorithmic scrutiny is required: AI algorithms themselves should be audited for potential biases. Techniques like fairness metrics and human review during development can help identify and mitigate these biases.
  • Transparency: There needs to be more transparency in how AI systems are designed and implemented. This allows for public scrutiny and helps identify potential biases before they cause harm.
  • Human oversight remains essential: Even with advanced AI, human judgment is crucial in the hiring process. Human recruiters can bring their experience to ensure a fair and balanced evaluation alongside AI-driven tools.

Insights: More on selection bias

• Impact on generalizability: Selection bias can significantly impact the generalizability of AI models. A model trained on biased data might perform well on the specific data it was trained on, but it might fail to generalize to new, unseen data that is more representative of the real world.
• Active learning techniques: One approach to mitigate selection bias is to use active learning techniques. In active learning, the AI model can query humans for new data points that are most informative for reducing its bias. This can help the model to explore a wider range of data and improve its generalizability.
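
A rough sketch of uncertainty sampling, the simplest active-learning strategy: the model asks a human to label the examples it is least confident about, gradually pulling under-represented regions of the data into the training set. The synthetic data and the batch size of 10 are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic pool of 1,000 examples; pretend only 20 of them are labeled so far.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
labeled = np.concatenate([np.where(y == 0)[0][:10], np.where(y == 1)[0][:10]])
unlabeled = np.setdiff1d(np.arange(len(y)), labeled)

model = LogisticRegression().fit(X[labeled], y[labeled])
probs = model.predict_proba(X[unlabeled])[:, 1]
uncertainty = np.abs(probs - 0.5)                 # 0 means maximally unsure

# Query the 10 examples the model is least sure about and send them to a human.
query_indices = unlabeled[np.argsort(uncertainty)[:10]]
print("indices to label next:", query_indices)
```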

What is stereotyping bias in AI models? 

Stereotyping bias in AI occurs when an AI system reinforces any harmful stereotypes present in the training data. These stereotypes can be based on race, gender, age, or other factors.

Like confirmation bias, stereotyping bias can stem from biased training data and algorithmic design. Imagine a facial recognition system used by law enforcement to identify potential suspects, trained on a massive dataset of police mugshots. Due to historical biases in policing, this dataset might disproportionately contain people of color and present them as suspects. There have been clear examples of this, as seen in the case of the Rite Aid drugstore chain.

Case study: Rite Aid Drugstore AI facial recognition

The case of Rite Aid Drugstore AI facial recognition involves allegations that the company used facial recognition technology in hundreds of stores from October 2012 to July 2020 to identify shoppers "it had previously deemed likely to engage in shoplifting or other criminal behavior". In one incident, a Rite Aid employee even searched an 11-year-old girl based on a system alert.

Further investigation revealed that Rite Aid hadn't tested the AI system for accuracy, and the data used to train it was biased. This resulted in an increased risk of misidentification for Black, Asian, Latino, and women customers.
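
One hedged way to surface this kind of disparity is to break error rates out by group rather than reporting a single overall accuracy number. The records below are invented; a real audit would use far larger samples and established evaluation protocols.

```python
def false_match_rate(alerts, true_matches):
    """Share of genuine non-matches that still triggered an alert."""
    false_alerts = sum(1 for a, t in zip(alerts, true_matches) if a and not t)
    non_matches = sum(1 for t in true_matches if not t)
    return false_alerts / non_matches

# Invented per-group records: (alert fired?, was it actually the same person?)
records = {
    "group_a": ([1, 0, 0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0]),
    "group_b": ([1, 1, 0, 1, 1, 0, 1, 0], [1, 0, 0, 0, 0, 0, 0, 0]),
}
for group, (alerts, truth) in records.items():
    print(group, f"false-match rate: {false_match_rate(alerts, truth):.0%}")
# A system with similar overall accuracy can still flag one group far more often.
```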

The Federal Trade Commission (FTC) took action against Rite Aid, alleging insufficient AI governance practices across multiple stages of deploying the technology, including vendor selection, the enrollment process, the match-alert process, and algorithm development.

Under the resulting settlement, the chain was banned from using facial recognition technology for five years. This landmark case highlights growing concerns about bias and the potential misuse of facial recognition technology in retail settings.

Lessons to be learned:

  • Sufficient AI governance practices are crucial: Companies must ensure that they have robust governance practices in place to mitigate bias and ensure the accuracy and reliability of their AI systems.
  • Accountability for accuracy and reliability: Companies must be accountable for ensuring that their AI systems are accurate and reliable, and must take steps to mitigate any biases that may be present in the data or algorithms used.
  • Impact of AI bias on marginalized communities: AI bias often falls hardest on marginalized communities. Companies must be aware of these potential impacts and take steps to eliminate bias, ensuring that their AI systems are fair and equitable.

Insight: Societal impact of stereotyping bias

Stereotyping bias in AI is particularly concerning because it can perpetuate existing societal biases. When AI systems reinforce stereotypes, it can lead to discrimination against marginalized groups and hinder progress towards a more equitable society.

One approach to mitigate stereotyping bias involves building diverse teams that develop and deploy AI systems. By including people from different backgrounds, you can help to identify and challenge potential stereotypes in the data and algorithm design. Additionally, using diverse and well-balanced datasets can help to reduce the influence of stereotypes in the training process.

What is out-group homogeneity bias in AI models?

Out-group homogeneity bias in AI refers to the tendency of AI systems to perceive members of groups outside their training data (out-groups) as being more similar to each other than they actually are. This can lead to inaccurate assessments and unfair results.

It might sound similar to stereotyping bias and confirmation bias since the root cause is biased training data and algorithmic design. However, the difference lies in how AI processes data and generates results based on bias.

How is out-group homogeneity bias different from stereotyping or confirmation biases?

Out-group homogeneity bias fails to see individuality, treating all members of an unfamiliar group as a single homogeneous whole. Stereotyping bias, by contrast, applies unfair generalizations, while confirmation bias favors the patterns the model has already learned.

Consider an AI resume screener trained on a dataset of resumes from software engineers. It might excel at evaluating resumes for technical roles but struggle with resumes from marketers (an out-group). The AI might undervalue relevant skills and experiences for creative roles, overlooking qualified candidates simply because their resumes don't fit the mold it was trained on.

Case study: US Healthcare algorithm

In 2019, a healthcare algorithm used by US hospitals was found to be biased against patients of color.

The algorithm, which was used for over 200 million people, was designed to predict which patients needed extra medical care by analyzing their healthcare cost history. It assumed that cost indicates a person's healthcare needs. However, this assumption did not account for the different ways in which Black and white patients pay for healthcare. A study found that Black patients are more likely to pay for active interventions like emergency hospital visits, even if they show signs of uncontrolled illnesses. As a result, Black patients:

  • Received lower risk scores
  • Were unfairly compared, on cost, to a healthier population
  • Did not qualify for extra care

This shows how out-group homogeneity bias can create healthcare disparities and can lead to undiagnosed or untreated cases in already disadvantaged populations.
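
To illustrate the proxy problem at the core of this case, here is a toy sketch with invented numbers: when cost stands in for need, two equally sick patients can receive very different risk scores if one group historically spends less on care.

```python
# Invented records: equal illness severity, different historical spending.
patients = [
    {"group": "white", "illness_severity": 7, "annual_cost_usd": 9000},
    {"group": "Black", "illness_severity": 7, "annual_cost_usd": 5000},
]

def risk_score_from_cost(patient):
    """The flawed proxy: treat higher past spending as higher future need."""
    return patient["annual_cost_usd"] / 1000

for p in patients:
    print(f"{p['group']}: severity {p['illness_severity']}, "
          f"risk score {risk_score_from_cost(p):.1f}")
# Equally ill patients get different scores, so the lower-spending group
# can fall below the threshold for extra care.
```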

Insight: Impact of out-group homogeneity bias on inclusivity

By failing to recognize the diversity within out-groups, AI systems can disadvantage individuals from those groups and limit their opportunities, undermining both inclusivity and fairness.

To mitigate this bias, in addition to ensuring diverse datasets, consider multi-modal learning, in which AI models are trained on various data formats like text, images, and audio, helping the system develop a more nuanced understanding of different groups.

How to mitigate AI bias?

AI bias can significantly skew results and lead to unfair discrimination. Large Language Models (LLMs) and other AI systems can be made fairer in a few ways:

  • User transparency and critical evaluation: Users should be informed about the potential bias in AI systems, encouraging them to critically evaluate the information presented. This empowers users to make informed decisions.
  • Prioritizing viewpoint balance: During information retrieval, LLMs can be instructed to find content that presents both supporting and opposing arguments. This exposes the user to a more complete picture.
  • Examine the data: AI systems are only as good as the data they're trained on. It's crucial to review training data for biases, ensuring it's representative and avoids skewed demographics. Focus on quality over quantity when selecting data for your AI model.
  • Diverse teams: Building and maintaining AI teams with diverse backgrounds and skill sets is essential. A variety of perspectives helps identify and mitigate potential biases during the development process.
  • Algorithmic awareness: Be aware of how AI systems work and how they might arrive at decisions. This allows for proactive measures to identify and address potential bias in the model's decision-making process.
  • Feedback mechanisms: Once deployed, monitor the AI system's performance and incorporate user feedback. This allows for continuous improvement and identification of any emerging biases.

By following these steps, we can work towards developing more equitable and responsible AI systems.

Strategies: Mitigating AI bias

• Human-in-the-loop: Human oversight can be crucial in mitigating AI bias. Human-in-the-loop systems, where humans and AI collaborate, can leverage the strengths of both to achieve fairer and more reliable outcomes (a minimal example follows these strategies).
• Standardization and regulation: Adopt frameworks that are standardized and follow international and local regulations. This ensures that the AI systems are developed and deployed responsibly, minimizing the risk of bias.
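
A minimal human-in-the-loop sketch, assuming the model exposes a confidence score: low-confidence predictions are escalated to a person instead of being acted on automatically. The threshold and labels are arbitrary illustrations, not standard values.

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off, not a standard value

def decide(prediction: str, confidence: float) -> str:
    """Auto-apply confident predictions; route uncertain ones to a reviewer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-accept: {prediction}"
    return f"send to human reviewer (model confidence {confidence:.2f})"

print(decide("approve_application", 0.97))  # acted on automatically
print(decide("reject_application", 0.62))   # escalated to a person
```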
