TWIMLcon 2019

Session

Why and How to Build Explainability into your ML Workflow

Building AI applications comes with significant business risk; nearly half of companies, for example, report a lack of trust in AI. There have been several instances of companies deploying AI at scale only to roll it back after serious negative press over bias and trustworthiness issues. Governments have begun to introduce regulations for automated decisions, and fines for non-compliance can be hefty. Explainable AI is one way for companies to manage the business risks of deploying AI in use cases like underwriting loans, moderating content, and providing job recommendations.

Explainable AI helps ML teams understand model behavior and predictions. This fills a critical gap in operationalizing AI in verticals like FinTech (e.g. explaining ML-flagged fraudulent transactions), insurance (e.g. explaining policy underwriting decisions), banking (e.g. explaining loan denials by ML models), logistics (e.g. explaining predicted marketplace variations), and more. Treating explainability as part of operationalizing AI lets you integrate it into the end-to-end ML workflow, from training to production, which offers benefits such as the early identification of biased data.
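To make this concrete, here is a minimal sketch of generating a per-prediction explanation, assuming the open-source SHAP library and a scikit-learn model on synthetic tabular data. The session does not prescribe specific tooling, and the dataset, features, and model here are illustrative stand-ins.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for a tabular use case such as loan underwriting:
# 1,000 synthetic applicants described by 5 features.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP's TreeExplainer computes additive per-feature contributions
# (SHAP values) to an individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain a single applicant

# Depending on the SHAP version, classifier output is a per-class list
# or a (samples, features, classes) array; take the positive class.
vals = shap_values[1][0] if isinstance(shap_values, list) else shap_values[0, :, 1]

# Sign and magnitude show how much each feature pushed this prediction
# toward one class or the other (e.g. approval vs. denial).
for i, contribution in enumerate(vals):
    print(f"feature_{i}: {contribution:+.3f}")
```

Running the same check during training, rather than only after deployment, is what surfaces problems like biased input data early; in production, the identical machinery can explain an individual decision, such as a specific loan denial.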

Session Speakers

CEO and Founder
Fiddler
