
AI and national security: What are the challenges?

Artificial intelligence (AI) has made significant advances in recent years. It is used to automate and optimise processes, analyse and derive insights from large volumes of complex data, and bring together different types of data in a way that makes them more useful. AI has been part of everyday life for some time – in advertising, social media content curation, and music and video recommendations. But the recent rapid rise of generative AI tools such as ChatGPT and Midjourney has brought AI to the forefront of public attention and imagination, largely because of the natural, human-centred way in which people can now interact with AI models.

This brings many opportunities – but it also comes with risks and challenges, some of which fall into the realm of national security.

Societal challenges

Increased accessibility of the destructive uses of AI

Advances in AI have ‘democratised’ knowledge – that is, it is becoming easier for the layperson to benefit from AI. However, this means that both the constructive and destructive uses of AI are more easily accessible to a much larger group of people. 

One way this may pose a challenge to national security is an increase in the number and variety of actors using AI to conduct cyber-attacks, or leveraging its capabilities to cause harm in other ways. AI is supercharging cyberwarfare, driving a rise in the number and complexity of cyber-attacks – including those targeting critical national infrastructure, such as the energy grid. Non-scientists, for example, have been able to use large language models to learn the steps they would need to take to create a pandemic or build a bomb.

Enabling of disinformation campaigns

AI can also be used to enable rapid disinformation attacks, which deliberately spread false information in order to deceive. AI can be used, for example, to generate fake videos and images, known as ‘deepfakes’. Improvements in AI mean such videos can be made on standard hardware with less ‘training’ footage, so deepfakes will become both easier to produce and harder to distinguish from reality.

In an increasingly fragmented global political landscape, malicious actors are producing deepfake content that undermines prominent individuals and key institutions, with the intention of sowing confusion and distrust. This challenges national security because it could harm social cohesion and potentially lead to radicalisation and social unrest. Nor is it only adversaries who are using AI: elected officials in free and open countries have used generative AI to create false content for attack adverts against opponents.

Erosion of public trust in AI

Trust is integral to the functioning of a society. This includes trust between individuals and trust in institutions. With the increasing use of AI in all aspects of society, it is important that the public trusts how AI is used at scale, as well as the robustness of the processes behind it. If AI is used in ways that violate the privacy of individuals, it could significantly erode trust in how and why AI is being used at scale by government or private sector actors. 

For example, a company that scraped images of people from the internet for its AI facial recognition services was found to have violated privacy laws in several countries, which ordered it to delete their citizens’ photos from its database and imposed fines. Hospitals have also used AI to categorise patients by health status for risk stratification, which in some countries is unlawful because such predictive medical processing requires explicit patient consent.

Challenges specific to defence 

Using AI to ensure that different parts of defence work together

Alongside the broader societal challenges, some AI challenges are more specific to defence. AI will be central to the UK’s vision of coordinating its defence efforts across government, across military domains, and with allies and partners – an approach known as Multi-Domain Integration. AI will need to facilitate this coordination because it will involve exploring and exploiting vast amounts of data; it will be particularly important for multi-domain situational awareness and intelligence capabilities.

Shift in the value of defence and intelligence work from humans to AI

The global data environment is too dense and complex for human intelligence analysts to explore alone. As AI becomes ever more central to the work of the defence and security communities, the workforce will need to adapt to ensure it has the technical knowledge, skills and capability to utilise AI and understand its centrality to national security. Such capacity building takes time and resources, and therefore represents a challenge to national security. At the same time, there is a risk that valuable human skills become underappreciated as reliance on AI grows.

Enabling autonomous weapons systems

AI has enabled weapons systems that can operate without human involvement: autonomous weapons systems. Such systems pose enormous ethical and legal challenges. The rapidity of algorithmic decision-making in autonomous weapons systems, for example, may heighten the risk of escalation and unpredictability, and therefore threaten national and international security. Using AI to replace human decision-making may also have unforeseen consequences and introduce new risks.

Governance and regulatory challenges

Lack of transparency, ethics and fairness 

It is vital for British national security to ensure continued rapid innovation in AI; after all, adversaries are also innovating at a fast pace. However, it is a challenge to ensure that this pace does not come at the expense of transparency and accountability. AI is dependent on the quality of the data it is trained on, and flaws in that data will be replicated in a model’s outputs. To take one example, the lack of diversity in genomic databases is understood to be a barrier to the translation of precision medicines; in the event of a bioweapon attack, that lack of diversity may prove a barrier to finding effective solutions. The challenge is to build trust in AI and to develop regulatory frameworks whilst continuing to innovate in this field at pace.

Fragmented governance and regulatory landscape

The piecemeal approach to AI governance and regulation around the world means there is little coordination between countries and allies. The UK is in a strong position to lead on shaping approaches and norms for AI use, as it will try to do at the AI Safety Summit on 1st-2nd November 2023. A framework setting out what ‘responsible scaling’ means in practice would help reduce some of the threats posed by unregulated AI, but any outcomes of the summit are likely to be more symbolic than substantive. Addressing the national security implications of AI requires clear thinking about the trajectory of AI, its benefits and risks, and mechanisms for cooperation – and, importantly, doing so without hindering the pace of development and adoption.

Dr Mann Virdee is a Senior Research Fellow in Science, Technology, and Economics at the Council on Geostrategy
