Explaining complex machine learning models with LIME

Traditional machine learning workflows focus heavily on model training and optimization; the best model is usually chosen via performance measures like accuracy or error, and we tend to assume that a model is good enough for deployment if it passes certain thresholds of these performance criteria. Why a model makes the predictions it does, however, is generally neglected. Yet being able to understand and interpret such models can be immensely important for improving model quality, increasing trust and transparency, and reducing bias. Because complex machine learning models are essentially black boxes that are too complicated to understand directly, we need to approximate them with simpler, interpretable models.
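To make the idea of approximation concrete, here is a minimal sketch of the local-surrogate approach that underlies LIME (not the full algorithm, and not the example from the talk): treat a trained model as a black box, perturb one instance of interest, and fit a simple linear model, weighted by proximity, that mimics the black box in that neighbourhood. The dataset, noise scale, and kernel below are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train a complex "black box" model (illustrative choice of data and model).
X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Perturb one instance of interest with Gaussian noise.
x0 = X[0]
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.5 * X.std(axis=0), size=(500, X.shape[1]))

# Query the black box for its predictions on the perturbed samples.
p = black_box.predict_proba(Z)[:, 1]

# Weight samples by proximity to x0 (an exponential kernel, similar to
# LIME's) and fit an interpretable linear surrogate to the black box.
d = np.linalg.norm((Z - x0) / X.std(axis=0), axis=1)
kernel_width = 0.75 * np.sqrt(X.shape[1])
w = np.exp(-(d ** 2) / kernel_width ** 2)
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)

# The largest surrogate coefficients explain the prediction locally.
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
print(list(zip(top, surrogate.coef_[top])))
```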

Required audience experience

Basic knowledge of machine learning

Objective of the talk

Listeners will get an overview of why understanding machine learning models is important and how it can help us improve models and gain trust in their decisions. I will explain in detail how one popular approach to explaining complex models, LIME (Local Interpretable Model-agnostic Explanations), works and show an example analysis.
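For reference, an analysis of this kind can be run with the Python lime package (the reference implementation by LIME's authors); this is a rough sketch, not necessarily the example shown in the talk, and the dataset and model are placeholder choices:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Build an explainer from the training data; LIME uses its statistics
# to generate perturbed samples around the instance being explained.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain one prediction: LIME samples around the instance, weights the
# samples by proximity, and fits a sparse linear surrogate model.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # (feature rule, weight) pairs
```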

Track 1
Location: Auditorium
Date: October 16, 2018
Time: 10:45 am – 11:30 am
Speaker: Shirin Glander, codecentric