Legal and policy implications of model interpretability with Solon Barocas

EPISODE 219


About this Episode

Today we're joined by Solon Barocas, Assistant Professor of Information Science at Cornell University. Solon is also a co-founder of the Fairness, Accountability, and Transparency in Machine Learning workshop, which is held annually alongside conferences like ICML. Solon and I caught up to discuss his work on model interpretability and the legal and policy implications of using machine learning models. In our conversation, we discuss the gap between law, policy, and machine learning, and how to bridge it, including by formalizing ethical frameworks for machine learning. We also look at his paper "The Intuitive Appeal of Explainable Machines," which argues that explainability is really two distinct problems, inscrutability and non-intuitiveness, and that disentangling the two lets us reason more clearly about the kind of explainability actually needed in a given situation.