Why and How to Build Explainability into your ML Workflow
Tuesday, October 1
11:20 am – 11:45 am
Robertson 2

Building AI applications comes with significant business risks: nearly half of companies report a lack of trust in AI. Several companies have deployed AI at scale only to roll it back after serious negative PR over bias and trustworthiness issues. Governments have begun to introduce regulations for automated decisions, and fines for non-compliance can be hefty. Explainable AI is a way for companies to manage the business risks of deploying AI in use cases like underwriting loans, moderating content, and providing job recommendations.

Explainable AI helps ML teams understand model behavior and predictions. This fills a critical gap in operationalizing AI across verticals like FinTech (e.g. explaining why a transaction was flagged as fraud), insurance (e.g. explaining policy underwriting decisions), banking (e.g. explaining loan denials by ML models), logistics (e.g. explaining predicted marketplace variations), and more. Considering explainability from the start lets you integrate it into the end-to-end ML workflow, from training to production, which offers benefits such as the early identification of biased data.
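As a concrete illustration of the kind of explanation described above, the sketch below uses permutation importance from scikit-learn (one common model-agnostic explainability technique, not necessarily the speaker's method) on a hypothetical loan-approval model with made-up feature names. An explainability check like this can surface when a model is leaning on a feature it shouldn't be.

```python
# Minimal sketch: global explainability via permutation importance.
# The loan-approval setup and feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic features standing in for loan-application data.
income = rng.normal(50_000, 15_000, n)
debt_ratio = rng.uniform(0, 1, n)
noise = rng.normal(0, 1, n)  # a column the label does not depend on
X = np.column_stack([income, debt_ratio, noise])
# Approval depends on income and debt ratio, not on the noise column.
y = ((income > 45_000) & (debt_ratio < 0.6)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# large drops indicate features the model genuinely relies on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for name, imp in zip(["income", "debt_ratio", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Here the noise feature should show near-zero importance while income and debt ratio dominate; in a real loan-underwriting audit, an unexpectedly important feature (e.g. a proxy for a protected attribute) would be a red flag worth investigating before deployment.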


Krishna Gade

Fiddler Labs
© 2020 TWIMLcon. All rights reserved.