Pathologies of Neural Models and Interpretability with Alvin Grissom II

EPISODE 229

About this Episode

Today, we're excited to continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College.

Alvin's research focuses on computational linguistics. We begin with a brief chat about his prior work on verb prediction using reinforcement learning, then dive into the paper he presented at the workshop, "Pathologies of Neural Models Make Interpretations Difficult." We talk through some of the "pathological behaviors" he identifies in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how entropy regularization can improve model training. We also touch on the parallels between his work and the research on adversarial examples by Ian Goodfellow and others.
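For listeners curious what entropy regularization looks like in practice, here is a minimal sketch, not the exact procedure from the paper: a standard cross-entropy objective with an added confidence penalty that discourages overly peaked (low-entropy) output distributions. The `beta` coefficient and the function name are placeholders chosen for illustration.

```python
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits, targets, beta=0.1):
    """Cross-entropy loss with an entropy bonus.

    Penalizing low-entropy (overconfident) predictions encourages the
    model to spread probability mass when the evidence is weak.
    `beta` controls the penalty strength and is a placeholder value.
    """
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Mean entropy of the predicted distributions over the batch.
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    # Subtracting the entropy term rewards higher-entropy predictions.
    return ce - beta * entropy
```

In a training loop, this function would simply replace the plain cross-entropy loss; larger values of `beta` push the model toward less confident predictions.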
