An introduction to interpretability

Some machine learning algorithms not only work well but also produce models that can readily be understood by mere humans; decision trees are a wonderful example. The same is not true of neural nets, which conceal their decision-making process behind a massive smokescreen of numbers. But we live in an age of accountability, where people have a right to know why their loan was refused or why their mother's hip replacement was rescheduled for the fourth time.
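
To make the contrast concrete, here is a minimal sketch (not from the talk, and assuming scikit-learn is available): a small decision tree can be printed as human-readable rules, while a trained neural net offers only arrays of learned weights.

```python
# A minimal sketch contrasting an interpretable model with an opaque one.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# The decision tree's reasoning can be rendered as readable if/then rules.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["sepal len", "sepal wid",
                                       "petal len", "petal wid"]))

# The neural net's "reasoning" is just matrices of learned weights --
# the smokescreen of numbers referred to above.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
print([w.shape for w in net.coefs_])  # e.g. [(4, 16), (16, 3)]
```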

This talk will outline (very briefly) why it is inherently difficult to understand how a given neural net came to a given decision in a given case. Most of the talk will be spent looking at some of the work that is going on to try to blow away the smokescreen. Please note that this is an introduction to the topic, which means it will involve little to no maths.

You can see Mark’s slides below:

Mark Whitehorn – M3 Interpretability

Location: Gielgud
Date: October 1, 2019
Time: 13:35 – 14:20
Speaker: Prof. Mark Whitehorn