
Tuesday, 3 December 2019

Recent Adventures In Artificial Intelligence

My most recent Dublin trip was timed to take in the SCL event on bias in artificial intelligence, the second in a series following the SCL's Overview of AI in September.

This time Dr Suzanne Little of the School of Computing at Dublin City University explained the types of challenges that introduce bias.

Three further events are planned for Dublin in 2020, drilling into how we should assess the performance of AI, whether transparency is possible without explainability, and the thorny issues of liability when AI gets things wrong.

Assessing Performance 

While giving us some insights into bias, Suzanne Little also explained that 'confidence' in AI is quite different from 'accuracy': confidence is how strongly a model believes its own prediction, while accuracy is how often its predictions are actually right. The measurement of accuracy/error and confidence intervals is explained here, for example.
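To make the distinction concrete, here is a minimal sketch (with made-up figures, not from the talk) of a model that reports high confidence while being right only half the time, together with a confidence interval for the accuracy estimate itself:

```python
import numpy as np

# Hypothetical outputs from an imaginary classifier, not real data.
# 'Confidence' is the probability the model assigns to its own prediction;
# 'accuracy' is how often that prediction matches the true answer.
confidences = np.array([0.97, 0.94, 0.91, 0.96, 0.93, 0.95, 0.92, 0.98])
correct = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = prediction was right

accuracy = correct.mean()             # 0.50: right only half the time
mean_confidence = confidences.mean()  # ~0.95: yet the model is very 'sure'

# 95% confidence interval for the accuracy estimate (normal approximation)
n = len(correct)
se = (accuracy * (1 - accuracy) / n) ** 0.5
low, high = accuracy - 1.96 * se, accuracy + 1.96 * se

print(f"accuracy={accuracy:.2f}, mean confidence={mean_confidence:.2f}")
print(f"95% CI for accuracy: [{low:.2f}, {high:.2f}]")
```

The point is that a system's self-reported confidence tells you nothing, by itself, about how accurate it actually is.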

Transparency

The UK's Alan Turing Institute and the Information Commissioner are consulting on best practice for how to explain decisions made with AI, with a view to ensuring a legal person remains responsible and accountable for what an AI decides.  This is aimed at senior management, as well as compliance teams.

This issue is particularly important given that we often don't know that we are exposed to decisions made by artificial intelligence.

Liability

How to determine who should be liable when artificial intelligence goes wrong is also the subject of a recent report published by the European Commission.  


Tuesday, 8 October 2019

Hype Will Harm Artificial Intelligence

After exploring AI deployment in some depth and chairing the SCL's overview of AI in Dublin in September, I've been particularly conscious of the gap between hype and reality. Nobody should deny that narrow artificial intelligence is here to stay, for good and ill. We just have to be realistic about its capabilities and shortcomings, and about how to detect their consequences, so that AI is developed and deployed responsibly.

In a recent report on 'smart cities', for example, the Oliver Wyman Forum found that no city on Earth is ready for the disruptive effects of artificial intelligence.

Talk of 'killer robots' and beating humans at board games is also all the rage, but Barry O'Sullivan assured us in Dublin that robots take ages to 'train' for any one sequence, can't cope with door handles, and their batteries soon run down. It took $50m worth of electricity to train a computer to beat a human at Go.

AI can be used for good, but it can also be 'weaponised' against a population, or 'hacked' by subtly altering the appearance of objects or people in ways that fool the model, without actually interfering with the AI itself.
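For a rough sense of how such 'hacking' works, here is a toy sketch (hypothetical weights and inputs, not any real system) of the fast-gradient-sign idea: a barely perceptible nudge to the input flips the classifier's decision, while the model itself is untouched:

```python
import numpy as np

w = np.array([2.0, -3.0, 1.5])       # hypothetical trained weights
x = np.array([0.40, 0.35, 0.10])     # original input, e.g. image features

def predict(v):
    score = w @ v                    # linear score
    return 1 / (1 + np.exp(-score))  # probability of class "match"

eps = 0.05                           # small perturbation budget
x_adv = x + eps * np.sign(w)         # nudge each feature in the direction
                                     # that most increases the score

print(predict(x))      # ~0.48: classified "no match"
print(predict(x_adv))  # ~0.56: flipped to "match", input barely changed
```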

In the 'real world' of AI, the genuine concerns are inaccuracy, lack of explainability and the inability to remove bias. And there remain vast challenges around the reliability of AI outputs as evidence and how to resolve disputes arising from the use of AI systems.

That means we have to challenge the use of AI where the consequences of false positives or negatives are fatal or otherwise unacceptable, such as denying fundamental rights or compensation for loss.
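A back-of-the-envelope example (all figures hypothetical) shows why: even a seemingly impressive error rate produces unacceptable numbers of false positives when screening a large population for something rare.

```python
# Screening one million people for something affecting 1 in 1,000,
# with a system that is "99% accurate" in both directions.
population = 1_000_000
prevalence = 0.001   # 1 in 1,000 actually match
tpr = 0.99           # true positive rate (sensitivity)
fpr = 0.01           # false positive rate

actual_pos = population * prevalence          # 1,000 genuine matches
true_pos = actual_pos * tpr                   # 990 correctly flagged
false_pos = (population - actual_pos) * fpr   # 9,990 wrongly flagged

precision = true_pos / (true_pos + false_pos) # ~0.09
print(f"{false_pos:.0f} people wrongly flagged; "
      f"only {precision:.0%} of flags are correct")
```

In other words, roughly nine out of ten people flagged would be innocent, which is intolerable where a flag means losing a right or a benefit.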


Being realistic about AI and its shortcomings also has implications for how it is regulated. Rather than risk an effective ban on AI by regulating it according to the hype, regulation should instead focus on certifying how AI is developed, and on transparency that enables us to understand its shortcomings, so that we can decide where it can appropriately be developed and deployed.