Friday, 12 July 2019

Explainability Remains The Biggest Challenge To Artificial Intelligence

You might think that understanding and explaining artificial intelligence is becoming a job in itself, but it has actually become part of everyone's job. This struck me particularly hard while reading the recent report from UK Finance (and Microsoft) on the role of artificial intelligence in financial services. It shows that organisations are treating AI as a project or programme in its own right, and struggling with where to pin responsibility for it, when their use of AI (and their existing exposure to it through ad networks and the like) means it's already loose in the world. That makes "explainability" - of AI itself and of its outcomes - absolutely critical.

What is AI?

The first challenge is understanding what is meant by "AI" in any given context. In this report, the authors generally mean "a set of technologies that enable computers to perceive, learn, reason and assist in decision making to solve problems in ways that mimic human thinking."

We seem to have moved on from the debate about whether AI will ever progress from "narrow AI" (better than humans at specific tasks, like chess, Go or parsing vast quantities of data) to "general AI" (as good as a human mind), and from there to superintelligence (better than humans, to the point where the machines do away with us altogether).

It seems widely accepted that we are (still) developing narrow AI and applying it to more and more data and situations, with the vague expectation (and concern) that one day it might become "general". 

The next major challenge is explaining each technology in the "set of technologies" that makes up AI. Not all are spelt out in the report, but I understand them to include machine learning, neural networks, deep learning networks, natural language processing, speech and acoustic recognition, and image and facial recognition. The report notes they are often used in conjunction (e.g. scanning documents for hints of fraud, robotic process automation ("RPA") and personalising services for individuals or groups of customers). It's also important to understand that one or more of these technologies will be combined with devices or other machines in biometrics, robotics and the operation and co-ordination of autonomous vehicles, aircraft, vessels and the 'Internet of things'. These uses are not ordinarily thought of in terms of financial services, but the data and decision-making involved will be relevant for many financial institutions.

Each new report seems to bring a nugget or two of new jargon to understand, and this one alerted me to the use of "Random forests". 
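For the uninitiated, a random forest is simply an ensemble of decision trees, each trained on a random slice of the data, whose individual votes are combined into a single prediction. Here's a minimal sketch using scikit-learn - my own illustration rather than anything from the report, with synthetic data standing in for entirely hypothetical transaction records:

    # A toy random forest for spotting "fraud" in synthetic data.
    # Everything here (dataset, features, labels) is made up for illustration.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for transaction data: 1,000 rows, 10 numeric
    # features, with roughly 5% of rows labelled as fraudulent.
    X, y = make_classification(n_samples=1000, n_features=10,
                               weights=[0.95], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 100 decision trees, each fitted to a random sample of rows and
    # features; their majority vote is the forest's prediction.
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)
    print("held-out accuracy:", forest.score(X_test, y_test))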

What is a good use-case for AI?

The good news for the human race is that the authors recommend combining artificial and human intelligence rather than allowing the machines to work alone toward our extinction. AI can build on human intelligence by recognising patterns and anomalies in large amounts of data (think fraud detection), and can scale and automate repetitive tasks in a more predictable way to analyse and try to predict risks. The report suggests that AI Nirvana for UK financial institutions is fully automated customer on-boarding, personalised customer experience, retail advice and proactive financial management.
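To make the pattern-and-anomaly point concrete, here's a minimal sketch - mine, not the report's - using scikit-learn's IsolationForest to flag outlying "transaction amounts" in entirely hypothetical data:

    # A toy anomaly detector: IsolationForest flags the points that are
    # easiest to isolate from the rest. The "transaction amounts" are
    # hypothetical, generated purely for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    routine = rng.normal(loc=50.0, scale=10.0, size=(980, 1))       # everyday spend
    suspicious = rng.uniform(low=500.0, high=1000.0, size=(20, 1))  # outliers
    amounts = np.vstack([routine, suspicious])

    detector = IsolationForest(contamination=0.02, random_state=0)
    labels = detector.fit_predict(amounts)       # -1 marks an anomaly
    print("flagged as anomalous:", int((labels == -1).sum()))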

You might have spotted that the last two aspirations will be particularly exciting for fans of financial 'scandals'... and it's worth noting that the report on the health and motor insurance sectors added pricing, underwriting, claims handling, sales and distribution...

UK Finance rightly points out that organisations need to consider the implications of AI beyond the technical (or technological), particularly when it is used at the core of their businesses. Specifically, there are implications for culture, behaviour and governance from the business, social and economic perspectives. Privacy, safety, reliability and fairness (the absence of bias and discrimination) are critical to safeguard, as is preparing the workforce, communities and society for the impact on employment and skills. Again, AI can't be treated as separate or managed in a silo; it's a challenge for all stakeholders, including regulators and governments.

Yet, while AI might be pervasive in its impact and effects, that does not mean it is ripe to be deployed in every situation - any more than process improvement methodologies like Six Sigma are. The report provides some insight into identifying where AI is the right solution, as well as high-value use cases, levels of AI maturity and capabilities, and how to scale and measure returns on investment and business impact.

The Thorny Issue of Explainability...

While the UK Finance report is intended as an overview, a major criticism I have is that it only sounds a note of caution on the worrying issue of "explainability" without pointing out that explainability is not possible with technologies that have "hidden" layers of computing, such as artificial neural networks and deep learning. The report merely cautions that: 
"Where firms identify a trade-off between the level of explainability and accuracy, firms will need to consider customer outcomes carefully. Explainabilty of AI/ML is vital for customer reassurance and increasingly it is required by regulators." 
This is the point where the fans of financial scandals start stockpiling popcorn.
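To make the trade-off concrete, here's a minimal sketch - again mine, not UK Finance's - contrasting a shallow decision tree, whose entire decision logic can be printed and audited, with a small neural network whose hidden layers offer no such account of themselves. The data and models are illustrative only:

    # Contrasting an auditable model with an opaque one on the same
    # (synthetic) data. Models, data and numbers are illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    # Explainable: every prediction can be traced through printable rules.
    tree = DecisionTreeClassifier(max_depth=3, random_state=1)
    tree.fit(X_train, y_train)
    print(export_text(tree))                       # the entire decision logic
    print("tree accuracy:", tree.score(X_test, y_test))

    # Opaque: two hidden layers of 64 units each; often more accurate, but
    # there is no equivalent printout of why it decided anything.
    net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                        random_state=1)
    net.fit(X_train, y_train)
    print("network accuracy:", net.score(X_test, y_test))

The point is not the accuracy numbers themselves, but that only one of the two models can show its working.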

The relevant shortcomings and concerns associated with explainability are covered in more detail in my post on the report into the health and motor insurance sectors, including the South Square chambers report. But in summary, those shortcomings mean that neural and deep learning networks, for example, are currently only really appropriate for automating decision-making where "the level of accuracy only needs to be "tolerable" for commercial parties interested only in the financial consequences... than for... issues touching on fundamental rights."

Yet the UK Finance warning not only assumes that the use of AI and its outcomes are known by, or can be explained to, people within the organisation (when that may not be the case), but also assumes that organisations understand what the trade-off between explainability and accuracy means, its implications, and therefore whether a given use-case is actually appropriate for the application of AI technologies. A critical issue in that analysis is how to resolve any resulting disputes, whether in the courts or at the Financial Ombudsman, including identifying who is responsible where AI computing has been outsourced and/or there are multiple external sources of data.

None of this is to say, "Stop!" (even if that were possible), but it's important to proceed with caution and for those deploying and relying on AI to be realistic in their expectations of what it can achieve and the risks it presents...