We’ve all heard of the trolley problem, and the questions it raises for AI: can AI be programmed to make moral decisions? Should it be programmed to do so? And how do we feel about delegating moral decision-making to machines?
In this talk I will look at these issues and beyond: who should take responsibility when something goes wrong? What if we don’t know what’s inside the “black box”? Can we really delegate moral responsibility to machines? I will also look at ways of ensuring the social and ethical acceptability of the AI we make, to mitigate societal fear of intelligent machines.
Don’t expect any easy answers in this talk – but you will leave with some strategies to help you think these problems through and reach your own (responsible) decisions.
Required audience experience: No specific knowledge required.
Objective of the talk: This talk explores some of the recent ethical thinking around the development and potential impacts of AI and ML, and offers developers practical ways to think through their ethical and social responsibilities when developing AI and ML algorithms.
Keywords: AI, machine learning, ethics, moral responsibility, algorithm development
You can view Catherine’s slides below:
Catherine Flick: Beyond the Trolley Problem
You can watch Catherine’s presentation below: