Tackling bias in AI
Diversity and inclusion are essential values to uphold for innovation, business growth and societal impact. Today we understand that the biases that negatively influence human experience and decision-making can also make their way into our technologies. As a rapidly growing number of organizations adopt artificial intelligence solutions, it’s crucial that we work to mitigate bias in AI systems.
IBM has been a leader in diversity and inclusion for more than a century, and today those principles continue to drive its efforts to tackle AI bias through research and tools that help humans train AI models to guard against discrimination.
Unconscious bias and AI training
Unconscious bias is a prejudiced, unsupported judgment in favor of or against certain people, things or groups. While we'd all like to believe that our decisions rest on logic and fair, accurate perceptions, every human carries unconscious biases; the best we can do is become more aware of them and respond more deliberately.
More than 180 human biases have now been identified, and when these biases enter our AI systems, they can affect how businesses make decisions.
Bias in AI models can come from:
- Bias in the training data: Consider an admissions process. A university decides to use AI decision-making in its application process and trains the model on 40 years of admissions data. But 40 years ago, admissions were heavily weighted against women and immigrants. The AI introduces no new bias of its own; it simply inherits the bias already present in the training data.
- Discrimination in labeling: Not everything with fur, four legs and ears is a cat; there is a big difference between a cat and a bear. How you label your data has a large impact on how it is classified, and careless labels teach a model the wrong distinctions.
- Undersampling or oversampling: An unbalanced data set is itself a source of bias, and these techniques can be used to correct it. Continuing the example above, if you're training your AI to recognize that not all things with fur, four legs and ears are cats, you need enough samples of each of those entities in your data set, as the sketch after this list illustrates.
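To make the resampling idea concrete, here is a minimal sketch that balances a hypothetical, imbalanced data set by oversampling the minority class with scikit-learn's resample utility; the class names, feature shapes and counts are invented for illustration.

```python
# A minimal oversampling sketch, assuming a hypothetical data set in which
# "bear" examples are badly underrepresented relative to "cat" examples.
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(seed=0)

# Hypothetical features and labels: 950 cats, only 50 bears.
X = rng.normal(size=(1000, 8))
y = np.array(["cat"] * 950 + ["bear"] * 50)

# Split the data set by class.
X_cat, X_bear = X[y == "cat"], X[y == "bear"]

# Oversample the minority class with replacement until it matches the
# majority class, so the model sees both classes equally often in training.
X_bear_up = resample(X_bear, replace=True, n_samples=len(X_cat), random_state=0)

X_balanced = np.vstack([X_cat, X_bear_up])
y_balanced = np.array(["cat"] * len(X_cat) + ["bear"] * len(X_bear_up))

print(dict(zip(*np.unique(y_balanced, return_counts=True))))
# {'bear': 950, 'cat': 950}
```

Undersampling works the same way in reverse: the majority class is sampled down to the size of the minority class, trading data volume for balance.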
Why does AI bias matter?
AI bias matters because AI applications now operate in high-stakes contexts across industries, including finance, human resources, healthcare and education, where biased models can affect people's credit, employment, school admissions and sentencing.
What you can do to resist bias and build fair AI models
To move into a future of AI applications that are fair and resist bias, we need resources that help us identify and eliminate unwanted bias.
IBM Research has developed a comprehensive open-source toolkit to do just that. AI Fairness 360 helps users examine, report and mitigate discrimination and bias in their machine learning models.
The toolkit provides 10 bias mitigation algorithms and more than 70 fairness metrics, and it includes demos, videos, a tutorial and more.
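As a hedged illustration of how the toolkit can be used, the sketch below measures disparate impact on a tiny, invented admissions-style data set and then applies Reweighing, one of the toolkit's pre-processing mitigation algorithms; the column names, values and group definitions are made up for the example.

```python
# A minimal sketch of checking and mitigating bias with AI Fairness 360
# (pip install aif360). The toy data set and the coding of "sex" below
# are invented purely for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy admissions-style data: label 1 = admitted, sex 1 = privileged group.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [80, 65, 90, 70, 85, 60, 88, 72],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias before mitigation: disparate impact below 1.0 means the
# unprivileged group receives the favorable outcome less often.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before:", metric.disparate_impact())

# Reweighing adjusts per-instance weights to remove the measured disparity.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after:", metric_transf.disparate_impact())
```

Reweighing leaves the data values untouched and instead rebalances instance weights, so the disparate impact measured on the transformed data set moves toward 1.0 (parity).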
AI Fairness 360 can help organizations design and adopt AI solutions that are fair and unbiased — and that’s just good business.
Support for your cognitive computing projects
IBM Systems Lab Services has a team of experienced consultants ready to help your organization rapidly deploy cognitive infrastructure solutions for AI and machine learning. Our consultants are trained on the AI Fairness 360 toolkit and can support clients looking to deploy fair, unbiased AI applications.
Do you have questions? Are you ready to give it a try?