
Thursday 4 August 2022

UK Govt Takes 'Do Nothing' Approach To Regulating Artificial Intelligence

The UK government has triumphantly announced that it's, er, taking a 'wait and see' approach to whether 'artificial intelligence' technologies require direct regulation. You might've detected a certain level of cynicism when it comes to my evaluation of this UK government's regulatory plans, and you'd be forgiven for thinking that my view is simply that they can't do anything right, or in a way that inspires any trust. So it is with their approach to regulating AI. While I'm sympathetic to allowing 'good' innovation and businesses to flourish before tying them up in red tape, I'm also aware that nobody can reliably pick winners in practice, and there's a middle ground. Besides, there are many significant challenges with AI, a key one being that nobody really knows when AI is being used, let alone whether that use is to their disadvantage. To leave this technological development to a patchwork of non-binding regulatory guidance seems careless, until you look at what else this government has been up to, at which point you assume malicious intent.

The existing regulatory landscape

The government's paper is, well, paper thin, so it's no surprise that this section amounts to the usual Brexiteer 'boosterism' and a desire to avoid references to the EU. Britain is for the British, so we'll have none of your comparative jurisprudence here, thank you very much.

Needless to say, this doesn't play well with any firm that has pan-European, much less global, ambitions.

Strangely, for a paper that recommends doing nothing, the government admits that "the proliferation of activity; voluntary, regulatory and quasi-regulatory, introduces new challenges that we must take action to address" including "lack of clarity", "overlaps", "inconsistency" and "gaps in our approach"... 

These issues across the regulatory landscape risk undermining consumer trust, harming business confidence and ultimately limiting growth and innovation across the AI ecosystem, including in the public sector. By taking action to improve clarity and coherence, we have an opportunity to establish an internationally competitive regulatory approach that drives innovation and cements the UK's position as an AI leader.

No 'definition' of AI

Here the government is obliged to dismiss the fact that the EU has stolen a march on the regulatory front to address exactly the challenges that the paper just outlined. The European Commission proposed a regulation in April 2021; and, Hell, they even have their own Twitter feed and web page.

But we can't talk about that EU stuff... except that the government accuses the EU of having a 'relatively fixed definition' of AI, while the UK plan is: 

"to set out the core characteristics of AI to inform the scope of the AI regulatory framework but allow regulators to set out and evolve more detailed definitions of AI according to their specific domains or sectors". 

In other words, the government wishes to perpetuate the very "challenges we must take action to address"...

While these 'core characteristics' that will inform the UK's non-regulatory scope are not actually specified with any clarity, it seems possible to distill them as follows: 

  • the logic or intent behind the output of systems can often be extremely hard to explain; 
  • errors and undesirable issues within the training data may be replicated;
  • AI often demonstrates a high degree of autonomy, operating in dynamic and fast-moving environments by automating complex cognitive tasks; 
  • decisions can be made without express intent or the ongoing control of a human.

Look away now if you don't want to see the EU definition (still being debated, to be fair):

‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with; 

ANNEX I 

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; 

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; 

(c) Statistical approaches, Bayesian estimation, search and optimization methods.

Cross-sectoral Principles

Citing the OECD's AI Principles, the UK government hopes regulators will somehow ensure that: 

  • AI is used safely; 
  • AI is technically secure and functions as designed; 
  • AI is appropriately transparent and explainable; 
  • 'considerations of fairness' are embedded into AI; 
  • legal persons' responsibility for AI governance will be defined;
  • there are routes to redress or contestability.

Conclusion

Apparently this framework will enable "AI-first" start-ups to:

"...understand the rules more easily and spend more time and resource on product development or fundamental AI research, and less on legal costs." 
Never mind the continuing "lack of clarity", "overlaps", "inconsistency" and "gaps in our approach".

It's hardly an investors' charter, is it?

