In a message last week, I addressed the recent death of George Floyd, the protests, and the future we are working towards.
While we all have a responsibility to engage in the fight against racism, the ML/AI community has a unique responsibility to ensure that the technologies we produce are fair and responsible, and that they don’t reinforce racial and socioeconomic biases.
We discuss bias, ethics, and fairness in ML and AI frequently on the podcast.
We’ve highlighted some of the episodes focused on these topics below. I hope these episodes help you engage in conversations about these issues with your colleagues and friends.
We will also be hosting an interactive viewing session of my interview with Rumman Chowdhury, Global Lead of AI Responsibility at Accenture, on Monday at 2 PM Pacific. Rumman and I will be live in the chat taking audience questions. Please join us by registering here; we’re looking forward to your questions.
In the meantime, take a look at the shows:
- AI for Social Good: Why “Good” isn’t Enough with Ben Green – Does political orientation have a place in building technology? Ben weighs in on this controversial question and shares how the notion of “good” is often elusive and lacking in rigorous political or social depth, despite computer scientists’ enthusiasm for integrating these concepts into their work.
- The Measure and Mismeasure of Fairness with Sharad Goel – Sharad shares how machine learning can be used to expose unregulated police behavior, and why mathematical definitions are not sufficient for determining bias in algorithms. We also discuss The Stanford Open Policing Project, a data-gathering and analysis initiative started by Sharad.
- Algorithmic Injustices and Relational Ethics with Abeba Birhane – Abeba wants to shift our focus from a fundamentally technology-first framing (explainability, transparency) to one that reframes ethical questions from the perspective of the vulnerable communities our technologies put at risk. AI is just the latest in a series of technological disruptions, and as Abeba notes, one with the potential to negatively impact disadvantaged groups in significant ways.
- Trends in Fairness and AI Ethics with Timnit Gebru – Timnit provides an overview of the current ethics and fairness landscape surrounding AI. She shares a ton of insights on diversity in the field, and how groups like Black in AI and WiML are helping make huge strides in the fairness community.
- Operationalizing Responsible AI – This panel from TWIMLcon: AI Platforms features experts discussing the tools, approaches, and methods that teams have found useful for implementing responsible AI practices.
- Responsible AI in Practice with Sarah Bird – Sarah focuses on responsibly bringing machine learning research into production, as well as her work on differential privacy. She walks through Microsoft’s interpretability tooling in Azure, and discusses the idea of moving from “black-box” models to “glass-box” models.
- The Ethics of AI-Enabled Surveillance with Karen Levy – Karen discusses how rules and technologies interact to regulate behavior, especially the legal, organizational, and social aspects of surveillance and monitoring, and how these data-tracking and surveillance methods are often exploited in ways that harm marginalized groups.
- Fairness in Machine Learning with Hanna Wallach – Hanna shares how a lack of interpretability and transparency shows up across machine learning. We discuss the ways inadvertent human biases can impact machine learning systems. Along the way, Hanna points us to a ton of papers and resources for further exploring the topic of fairness in ML.