The Council on Geostrategy’s online magazine


How should AI be regulated?

As world leaders gathered from 1st to 2nd November at the site where the Enigma cipher was broken, they would have hoped to crack another code: how to regulate artificial intelligence (AI). Indeed, as much as the technology promises to revolutionise the way we interact with the physical and digital world, there is equal scope for it to be exploited and programmed for nefarious purposes. 2023 has seen a dramatic proliferation of AI in the public domain, as well as advances in the technology more broadly. So the stakes at Bletchley Park were high. Agreeing on a shared understanding of AI’s danger and potential is a step in the right direction. But it is far from an agreement on how AI should be regulated. How should AI be regulated? The Council on Geostrategy asks five experts in today’s Big Ask.

Alex Chalmers, Air Street Capital

The United Kingdom (UK) is striking the right balance between building out regulatory capacity and avoiding a rush towards AI-specific legislation. The harm AI could cause is context-dependent, so we should empower domain-specific experts, whether in medical device regulation, privacy, or financial services, rather than reinvent the wheel by forming a new agency. After all, many commonly cited AI harms (e.g. around bias) fall within the scope of existing legislation. Many of our regulators have existed for decades and have successfully adapted to technological change.

Ultimately, we should approach AI as we would any other tool or technology. Where appropriate, AI-powered products should comply with consumer standards, and we should hold users, rather than developers, responsible for their misuse.

While we understand concerns about the extreme risks associated with the most powerful models, it is important to resist calls to pre-emptively regulate systems which simply give people access to information already in the public domain. The current overhyped risk conversation does not represent the balance of opinion in the AI community. It risks advantaging incumbents by driving up compliance costs and undermining open source, ultimately reducing innovation. As they approach these questions, policymakers should push back against the move away from openness.

Allan Nixon, Onward

The UK’s approach to regulation must start from two principles. The first is that untrammelled frontier AI could be catastrophic. The second is that failing to harness the benefits of AI successfully is a strategic threat too. The first principle was the overarching focus of the recently held AI Safety Summit and is, of course, vital, but it must not come at the expense of the second.

Be in no doubt that leadership in AI innovation will be central to Britain’s core interests in the decades ahead. It will be vital to retaining our military edge, defending against increasingly potent cyber threats, underpinning a vibrant economy, solving previously intractable global challenges, and massively improving the quality and efficiency of our public services.

For national prosperity, security and resilience, therefore, a delicate balance of boosterism and caution is needed.

To achieve this, His Majesty’s (HM) Government must separate algorithms whose behaviour is narrow, which we understand and can control, from algorithms whose behaviour is general, which we do not understand and cannot control. As a first step towards achieving the balance we need, our regulatory approach should be to go as fast as we can on the former and be somewhat more cautious on the latter. Otherwise we risk losing the benefits AI promises for fear of the problems it could bring.

Cason Schmit, Texas A&M University

Any AI governance framework should strive to maximise benefits while minimising risks. This is an immense challenge for the rapidly evolving technology. Future applications will introduce new – and presently unknowable – risks and benefits. Certainly, innovations in policy will be necessary to deal with unique AI challenges.

Initially, requiring transparent and regular risk assessments for AI systems is critically important for discovering emerging risks. Yet some risks are difficult to assess. For instance, bias and discrimination are known AI risks. If it is impossible to eliminate biases in an AI system entirely, how much bias is acceptable given the benefits of the system?

The field of public health provides a toolset for assessing population-scale effects. For instance, as many deaths in the United States have been attributed to social factors such as racial segregation and income inequality as to cerebrovascular disease and chronic respiratory disease. If an AI risk assessment reveals biases which could contribute to structural racism or income inequality, the public health lens provides rich and valuable context for managing that risk. It also suggests that biases which promote equitable outcomes provide a net social benefit. In this way, equity can be a lodestar to guide AI risk management.

Mann Virdee, Council on Geostrategy

There is no single approach which can regulate AI because AI is not a homogeneous entity. Instead, we need to piece together a mosaic of approaches which corresponds to how AI is trained and used, and its associated risks. In doing so, we can create a dynamic regulatory ecosystem which safeguards the interests of citizens without stifling innovation or missing out on the opportunities offered by AI.

This mosaic should include legislation, standards, algorithmic impact assessments, auditing, red-teaming, purple (or violet) teaming, sandboxes, labelling initiatives, and codes of conduct. Each has its part to play, but the relative mix needs reviewing. Recent initiatives may over-emphasise the utility of red-teaming, and in doing so limit the attention and resources available for developing other mechanisms.

It is crucial that AI is developed in line with principles that free and open nations (should) hold to be inviolable: fairness, inclusivity, transparency, and privacy. However, there is often disagreement about what these mean in practice. Equally critical is that AI regulation brings together the voices of diverse stakeholders, rather than being dominated by the voices and interests of the powerful.

Given the state of geopolitics, reaching meaningful international consensus on AI appears unlikely – but it is incumbent upon us to try. For ‘high-risk’ (another poorly defined term) applications of AI, international cooperation is required. This could take the shape of treaties and an international forum for mediating disputes – but it is unlikely there would be agreement on what these should look like, and we know how impotent such fora can be.

Nevertheless, current attempts, such as Wednesday’s Bletchley Declaration, whilst generic, indicate there is appetite for cooperation after all.

Alan Winfield, UWE Bristol

As citizens we reasonably expect the products and services we use to be safe, yet AI systems – despite their widespread use – are neither safe, transparent nor reliable. AI is already causing harm to individuals, society and the environment. Like any powerful technology, AI must be regulated.

The UK already has a strong regulatory ecosystem. What we need is not new regulators but regulation, setting out standardised tests which can be used to certify that a particular AI system is safe. The good news is that standards for AI safety already exist. The ISO/IEC joint technical committee on AI has published 20 safety, risk assessment and related standards, with 30 more in draft. The Institute of Electrical and Electronics Engineers standards association has to date published six, including standards for transparency and data governance. The tools and methods for certification exist.

AI is a general-purpose tool. The same large language model could, for instance, be used as the basis of a medical diagnosis app, or to sift job applications. Thus, certification must be domain-specific. HM Government must direct regulators to draft clear tests, setting out levels of compliance with standards, for the certification of AI systems, alongside criteria for the appointment of third-party AI safety assessment bodies by the UK accreditation service.

Accident and incident investigation also plays a key role in ensuring that lessons are learned when things go wrong. What are needed are specialist units with the necessary expertise to investigate accidents in which AI (including intelligent robots) is implicated.
