Fairness in Machine Learning with Hanna Wallach

EPISODE 232


About this Episode

Today we're joined by Hanna Wallach, a Principal Researcher at Microsoft Research. Hanna and I dig into how bias and a lack of interpretability and transparency show up across machine learning. We discuss the role that human biases, even inadvertent ones, play in tainting data, whether the deployment of "fair" ML models can actually be achieved in practice, and much more. Along the way, Hanna points us to a ton of papers and resources for further exploring the topic of fairness in ML. You'll definitely want to check out the notes page for this episode, which you'll find at twimlai.com/talk/232.

Thanks to our sponsor Microsoft

Microsoft enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

