IBM launches new toolkit to detect and remove bias in AI algorithms

IBM's new AI Fairness 360 Kit helps detect and remove bias in data sets and machine-learning models

19 September 2018

IBM has launched the AI Fairness 360 Kit that aims to highlight bias in AI algorithms.

The toolkit, an open-source library, aims to bring transparency to AI decision-making by helping detect and remove bias in data sets and machine-learning models.

In an IBM blog post, IBM developers Animesh Singh and Michael Hind stated: "As AI becomes more common, powerful, and able to make critical decisions in areas such as criminal justice and hiring, there’s a growing demand for AI to be fair, transparent, and accountable for everyone.

"Under-representation of data sets and misinterpretation of data can lead to major flaws and bias in critical decision-making for many industries," they said.

AI Fairness 360 is available to download via GitHub and offers a library of novel algorithms, code and tutorials, giving academics, researchers and data scientists a way to avoid black-box thinking and enhance AI deployments at scale.
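
The GitHub tutorials walk through a simple workflow: load a labeled data set, declare which attribute is protected, and compute fairness metrics over it. As a minimal sketch, assuming the toolkit's Python package (aif360) and an entirely made-up toy hiring data set, checking a data set for group bias looks roughly like this:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring data, illustrative only: 'sex' is the protected attribute
# (1 = privileged group) and 'hired' is the favorable outcome.
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [0.9, 0.7, 0.6, 0.4, 0.8, 0.5, 0.3, 0.2],
    'hired': [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(df=df,
                             label_names=['hired'],
                             protected_attribute_names=['sex'])

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{'sex': 0}],
                                  privileged_groups=[{'sex': 1}])

# 0.0 means both groups receive the favorable outcome at the same rate;
# here the unprivileged group is hired far less often.
print('Statistical parity difference:', metric.statistical_parity_difference())
print('Disparate impact:', metric.disparate_impact())
```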

Additionally, Singh and Hind note that the toolkit integrates 30 fairness metrics and nine state-of-the-art bias-mitigation algorithms developed by the research community. "It's designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education," they wrote.
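
One of those mitigation algorithms is Reweighing, a pre-processing technique that adjusts the weights of training examples so that outcomes are statistically independent of the protected attribute. Continuing the toy example above (again an illustrative assumption, not IBM's own demo), it can be applied in a few lines:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Same illustrative toy hiring data as above.
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'hired': [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=['hired'],
                             protected_attribute_names=['sex'])

groups = dict(unprivileged_groups=[{'sex': 0}],
              privileged_groups=[{'sex': 1}])

# Reweighing adjusts per-instance weights so the favorable outcome is
# equally likely for both groups before any model is trained.
dataset_transf = Reweighing(**groups).fit_transform(dataset)

before = BinaryLabelDatasetMetric(dataset, **groups)
after = BinaryLabelDatasetMetric(dataset_transf, **groups)
print('Parity difference before:', before.statistical_parity_difference())
print('Parity difference after: ', after.statistical_parity_difference())
```

The toolkit's bundled tutorials apply this same measure-then-mitigate pattern to real benchmark data sets.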

The toolkit joins IBM's Adversarial Robustness Toolbox, Fabric for Deep Learning and Model Asset Exchange, which together provide a powerful set of tools to enhance users' AI implementations.

Read next:

MIT develops machine-learning technique for speech and object recognition
