Detect and Remediate Bias in Machine Learning Datasets and Models


One of the most critical and controversial topics around artificial intelligence is bias. As more apps that rely on artificial intelligence come to market, software developers and data scientists can unwittingly inject their personal biases into these solutions. Because flaws and biases may not be easy to detect without the right tool, we have launched AI Fairness 360, an open source library to detect and remove bias in models and data sets. The AIF360 Python package includes a comprehensive set of metrics for data sets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in data sets and models.
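To give a sense of the workflow, here is a minimal sketch of checking a data set for bias and applying one mitigation algorithm with AIF360. The toy DataFrame, the choice of "sex" as the protected attribute, and the 1/0 group encoding are illustrative assumptions, not material from the session itself; it assumes the `aif360` and `pandas` packages are installed.

```python
# A minimal sketch: measure bias in a toy data set and mitigate it with
# reweighing. Assumes `pip install aif360 pandas`; the data and the choice
# of "sex" as the protected attribute are illustrative only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy tabular data: binary favorable label, binary protected attribute
# (sex = 1 treated as the privileged group in this example).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "age":   [25, 40, 35, 50, 23, 44, 31, 52],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1.0,
    unfavorable_label=0.0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Fairness metrics on the raw data set.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())

# One mitigation algorithm: reweigh examples so favorable outcomes are
# balanced across groups before a model is trained.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference after reweighing:",
      metric_transf.statistical_parity_difference())
```

A statistical parity difference near zero (or a disparate impact near 1.0) after reweighing indicates the instance weights have balanced favorable outcomes across the privileged and unprivileged groups; the library offers many other metrics and pre-, in-, and post-processing mitigation algorithms beyond this one.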

Language: English

Level: Intermediate

Animesh Singh

STSM and Lead Architect, AI and Deep Learning - IBM

Animesh Singh is an STSM and lead for IBM Watson and Cloud Platform, currently leading Machine Learning and Deep Learning initiatives on IBM Cloud. He has been with IBM for more than a decade and currently works with communities and customers to design and implement Deep Learning, Machine Learning, and Cloud Computing frameworks. He has led cutting-edge projects around cloud and virtualization technologies for IBM enterprise customers in the telco, banking, and healthcare industries.
