Machine learning models are increasingly being used to make critical decisions that impact people's lives. However, bias in training data, arising from prejudiced labels or from under- or oversampling, can result in models with unwanted bias. Discrimination becomes an issue when machine learning models place certain privileged groups at a systematic advantage. This talk provides an introductory look at how bias and discrimination arise in the machine learning pipeline and the methods that can be applied to remove them. Trisha Mahoney will walk you through AI Fairness 360, a comprehensive bias mitigation toolkit developed by IBM researchers. AI Fairness 360 includes a set of open-source Python packages with the most cutting-edge metrics and algorithms available across academia and industry today. Learn how to measure bias in your datasets and models, and how to apply fairness algorithms to reduce bias across the machine learning pipeline. A minimal sketch of what this looks like in practice follows below.
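
The sketch below is an illustration, not material from the talk itself: it shows roughly how AI Fairness 360's Python API can measure bias on a small toy dataset and then reduce it with the Reweighing pre-processing algorithm. The DataFrame, column names ("sex", "score", "hired"), and group definitions are invented for the example.

    # Minimal sketch: measure and mitigate dataset bias with AI Fairness 360.
    # The toy data and column names are illustrative only.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Toy data: 'sex' is the protected attribute (1 = privileged group),
    # 'hired' is the favorable outcome we check for bias.
    df = pd.DataFrame({
        "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
        "score": [0.9, 0.8, 0.7, 0.4, 0.9, 0.6, 0.5, 0.3],
        "hired": [1, 1, 1, 0, 1, 0, 0, 0],
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=["hired"],
        protected_attribute_names=["sex"],
        favorable_label=1,
        unfavorable_label=0,
    )

    privileged = [{"sex": 1}]
    unprivileged = [{"sex": 0}]

    # Measure bias in the raw data: difference in favorable-outcome rates
    # between unprivileged and privileged groups (0.0 means parity).
    metric = BinaryLabelDatasetMetric(
        dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
    )
    print("Mean difference before:", metric.mean_difference())
    print("Disparate impact before:", metric.disparate_impact())

    # Mitigate with a pre-processing algorithm: Reweighing assigns instance
    # weights so favorable outcomes are balanced across groups before training.
    rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
    dataset_transf = rw.fit_transform(dataset)

    metric_transf = BinaryLabelDatasetMetric(
        dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
    )
    print("Mean difference after:", metric_transf.mean_difference())

Reweighing is only one of the pre-processing options in the toolkit; the same metric objects can be recomputed after any mitigation step to check how much the measured disparity has shrunk.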