Lessons from International AI Regulation for the UK Public Sector

AI’s transformative effect on governments, businesses and wider society has been discussed for a long time, but until recently only as a promise on the horizon. Even the most optimistic forecasters have been caught by surprise at the rate at which AI has gained traction in the popular consciousness over the last couple of years, to the point that every organisation is now under pressure to explain how it is being optimised and enhanced by this rapidly emerging technology.

The public sector is not immune to these pressures, or to the enthusiasm that has gripped many senior stakeholders and leaders for embracing the opportunities that AI presents. But that enthusiasm is not unalloyed: almost every application of AI in the public sector has the potential to produce adverse consequences for users of the body’s services, its staff, or society more widely.

In those circumstances, the UK government’s light-touch, “pro-innovation” approach to AI regulation leaves public sector bodies navigating an incompletely regulated landscape subject to competing pressures: from those who want the body to embrace these technologies to achieve faster, more efficient and less costly outcomes; and from those (often the service users themselves) who fear the rigid, emotionless and at times brutal effect that automation can have on decisions affecting their benefits, livelihoods, property or health and justice outcomes.

Fortunately, whatever the position may be in the UK (at the time of writing, the government appears to be reopening the question of formal legislation), governments elsewhere in the world have been grappling with how to regulate the safe and effective use of AI. Perhaps most relevantly for public sector bodies in the UK, the European Union (EU) and the United States have taken distinct approaches, each offering valuable lessons for the UK public sector.

While perhaps not the first, the EU’s AI Act can claim with some confidence to be the most comprehensive framework for AI regulation produced so far. There are very real questions about how that framework will be applied to, or keep pace with, the multitude of AI use cases being developed daily, but the core principles articulated in the Act build on the global consensus reached at the UN, the OECD and elsewhere about the most important priorities in regulating the development of AI tools. Although expressed differently, a number of the same key principles can be seen in the Biden Administration’s executive orders and other pronouncements.

It can be expected that, over time, emerging national regulatory regimes will start to overlap and harmonise. That being so, those same principles ought to be at the heart of any policies a UK public sector body puts in place to govern its own procedures for AI development and adoption.

Given the capacity for harm inherent in many automated systems, the first principle under the EU AI Act inevitably focuses on the “safety, security and robustness” of the system; this is also one of the key areas addressed by the Biden executive orders on AI. The principle reminds bodies that they must ensure not only that the AI cannot produce harmful outcomes for those about whom decisions are being made, but also that it keeps the organisation’s own information and systems secure. Equally, a system that works in a test environment but is insufficiently robust to withstand the rigours of “live” use, with all the unanticipated challenges that can entail, is unlikely to be of great value to an organisation.

As soon as an AI tool is deployed to support decision-making around benefits, access, outcomes or other meaningful interactions with individuals, those individuals will want to understand how the decisions affecting them have been made. Many AI tools fail at this “transparency and explainability” hurdle, operating like inscrutable black boxes that produce conclusions without showing (sometimes without even having the capacity to show) the reasoning that led to them. Public sector bodies will want to protect themselves from potential challenge by ensuring that any tool they deploy is transparently auditable in relation to the decisions it can reach.

Such transparency is all the more important because of the notorious challenges AI tools face with inherited or ingrained bias, and with producing unexpected or unjustified outcomes (referred to, in the context of generative AI, as “hallucinations”). The “fairness and accountability” principle requires developers to build, and stand behind, products that deliver fair outcomes for all. A public sector body looking to develop policies or procedures reflecting that principle should subject any tool it builds or adopts to rigorous testing, to ensure that no group of users (in particular those with protected characteristics) will be prejudiced by the way the tool operates. The Biden executive orders express this principle slightly differently, focusing on equity and civil rights, but the underlying concept is the same: anyone whose interests might be affected by a public body’s use of AI should be confident of obtaining a fair and consistent outcome from that technology.

Given the potential for unfair or harmful outcomes, public sector bodies should also consider carefully their contractual arrangements with any supplier of AI tools, to ensure that there will be meaningful redress if something goes wrong and produces damaging outcomes. Concerns around AI and the increasing automation of civic functions will undoubtedly result in a range of challenges to the deployment and use of such systems, and it is important (just as it would be for a private sector company adopting AI tools) to make sure that the developer and/or vendor is prepared to stand behind the product being deployed, so as not to leave the public sector body exposed to liability and censure in the event of failure.

Lest it be thought that these considerations are overwhelmingly negative, the final principle emphasised by the Biden executive orders (though not among the EU AI Act’s core principles) is the fostering of innovation and competition. This is an equally important principle, both to ensure that the public sector is a driver of innovation within the AI industry, and because it encourages individuals within those organisations to consider how they might use this new technology in imaginative and novel ways, achieving benefits beyond what traditional technology could deliver. It also reflects the reality that, for all the concerns about potential harm from adopting new technologies, AI’s capacity to improve efficiency and bring about fairer and more timely outcomes is not something that any organisation focused on delivering value can afford to disregard.

For more information regarding international AI regulation, please get in touch with Will Richmond-Coggan.
