
Tuesday 3 December 2019

Recent Adventures In Artificial Intelligence

My most recent Dublin trip was timed to take in the SCL event on bias in artificial intelligence, the second in a series following the SCL's Overview of AI in September.

This time Dr Suzanne Little of the School of Computing at Dublin City University explained the types of challenges that introduce bias.

Three further events are planned for Dublin in 2020, drilling into how we should assess the performance of AI, whether transparency is possible without explainability, and the thorny issues relating to liability when AIs are wrong.

Assessing Performance 

While giving us some insights into bias, Suzanne Little also explained that 'confidence' in AI is quite different to 'accuracy': a model can report very high confidence in a prediction that turns out to be wrong. The measurement of accuracy/error and confidence intervals is explained here, for example.
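To illustrate the distinction, here is a toy Python sketch of my own (not an example used at the event, and the labels and probabilities are invented): the model is, on average, about 97% confident in its answers, yet only half of them are actually correct.

```python
# 'Confidence' is what the model reports about its own prediction;
# 'accuracy' is how often it is actually right. The data below is made up
# purely for illustration.

predictions = [
    # (predicted label, model's confidence, true label)
    ("cat", 0.99, "dog"),
    ("cat", 0.97, "cat"),
    ("dog", 0.95, "cat"),
    ("dog", 0.98, "dog"),
]

correct = sum(1 for pred, _, truth in predictions if pred == truth)
accuracy = correct / len(predictions)                                  # fraction actually right
avg_confidence = sum(conf for _, conf, _ in predictions) / len(predictions)

print(f"accuracy:        {accuracy:.0%}")        # 50% - only half the predictions are right
print(f"mean confidence: {avg_confidence:.0%}")  # ~97% - the model is nonetheless sure of itself
```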

Transparency

The UK's Alan Turing Institute and the Information Commissioner are consulting on best practice for explaining decisions made with AI, with a view to ensuring that a legal person remains responsible and accountable for what an AI decides. The guidance is aimed at senior management as well as compliance teams.

This issue is particularly important given that we often don't know that we are exposed to decisions made by artificial intelligence.

Liability

How to determine who should be liable when artificial intelligence goes wrong is also the subject of a recent report published by the European Commission.

