
How can Britain protect itself from electoral interference and disinformation?

It is no exaggeration to say that 2024 is the year of elections, with polls taking place in 64 countries as well as the European Union (EU). Nor is it an exaggeration to say that every one of these elections is under threat from electoral interference and disinformation, particularly given the proliferation of emerging disruptive technologies in recent years, notably artificial intelligence (AI). This includes the election in the United Kingdom (UK), due to take place in under three weeks. So how can Britain protect itself from electoral interference and disinformation? We asked five experts in this week’s Big Ask.

Hui-An Ho, Taiwan FactCheck Centre

It is hard to determine from Taiwan’s presidential election results alone how successful efforts against electoral interference were. One notable aspect, however, was the surge of disinformation about potential electoral fraud which appeared about a month before the election. These false claims, particularly in short video formats, suggested the elections could be rigged. After polling day on 13th January 2024, numerous videos falsely alleged illegal activities at polling stations, with some exceeding one million views.

Despite its prevalence, such disinformation did not cause significant harm. While it was widely viewed and discussed, it did not lead to widespread doubt about the integrity of the election results. The new president assumed office peacefully on 20th May.

Reflecting on the process, it is clear that the transparency of Taiwan’s election system played a crucial role. Extensive records and documentation of the vote-counting process allowed fact-checkers to quickly access evidence and publish fact-checks debunking false claims. The government also acted promptly to clarify and dispel rumours, and the media rapidly disseminated fact-checking efforts. Transparency and speed were essential.

As one of these fact-checkers, this author understands the challenges and limitations of the work. Therefore, the focus in the UK should be on building voter resilience against interference efforts. Educating the public about the existence of such tactics, and drawing on both international and local election experiences to highlight common methods and narratives, can help voters remain vigilant against suspicious information. This author believes this is an area where both government and civil society can make significant contributions.

Sam Hogg, Beijing to Britain

Britain can begin to protect itself from electoral interference and disinformation by understanding that hostile states and actors often choose to magnify existing conflicts rather than create them. Much ink will be spilled over ‘deepfakes’ and AI, but little over personal responsibility. So, as representatives of the British public, and as those meant to explain and report on our politics, politicians and journalists in particular should hold themselves to a higher standard than others, and abide by two key principles.

First, do not share or reshare suspicious content which seems deliberately provocative. If you see a claim or an image which looks suspicious, scroll on, and report it if you think it is illegal. If you see a post which slams your political opponent in a way that seems slightly too good to be true, scroll on, and report it if you think it is illegal. Likewise, claiming that the UK’s national security and safety will be gravely damaged if your opponents form a government is foolhardy and short-sighted. The temporary endorphin rush is not worth the cost of spreading disinformation.

Second, do not ‘cry wolf’. People are allowed to fire off angry social media posts in your direction if they want – so long as they are within the law. They are also allowed to vote for things which go against their so-called interests. They are more than likely not paid by hostile state actors in some move aimed at overthrowing or undermining British democracy. Casting suspicion over normal democratic actions by levying accusations of foreign interference dilutes the credibility of such accusations and the genuine seriousness they represent. Hostile state disinformation and interference clearly does happen – but we can, and do, deal with it. Do not become an unwitting pawn by amplifying tensions or pointing at normal democratic outcomes and ‘crying wolf’. Take responsibility. And have a fantastic election.

Alexander Lanoszka, Council on Geostrategy

Considering what we have seen around the world over the last ten years, foreign-fed disinformation will be a feature of Britain’s 2024 general election campaign. The good news is that its impact will likely be very limited since, this author would wager, the vast majority of voters have already decided how they will vote, for reasons which have little to do with outside actors.

That said, interference and disinformation are a challenge because they are low cost to adversaries and, in the age of social media, offer some plausible deniability. Nevertheless, this author is wary of His Majesty’s (HM) Government becoming too involved in the fight against disinformation, which is best left to civil society and to independent bodies tasked with ensuring election integrity, ultimately at arm’s length from those in power. Since lying is at least as old as politics, partisan authorities could be tempted to fight disinformation with disinformation of their own; and even when their rebuttals are factual, they will naturally be treated with suspicion by their opponents.

Alas, short of labelling social media content and having social media platforms make algorithmic changes in the coming weeks, the best tools for fighting disinformation – media literacy, responsible and strong local journalism, and civic education – take years to develop and should have been nurtured long before yesterday.

Elizabeth Lindley, Council on Geostrategy

There is no silver bullet for protecting democracies in the age of ‘deepfakes’ and AI-generated content. The ‘battleground of ideas’ has expanded into a constantly evolving digital sphere, where generative AI tools and large language models provide an unprecedented ability to mislead and deceive. Hostile states, political parties, and bored teenagers alike can now produce misinformation at astonishing scale and speed, micro-targeted at specific demographics.

In 2020, the Committee on Democracy and Digital Technologies, chaired by Lord Puttnam, warned of the ‘existential threat’ posed to democracies by misinformation and urged immediate action. With the UK general election just under three weeks away, it is concerning that its recommendations have not been fully implemented. The Online Safety Act empowers Ofcom to enforce safety guidelines on social media, but its effectiveness is limited because it does not mandate the removal of disinformation. Efforts such as the Counter Disinformation Unit focus on international threats, but the UK neglects threats from home at its own peril. While tech companies are developing systems to label AI-generated content, these measures need further development and broader adoption.

A resilient democratic society relies on collaboration between governments, tech companies, and educational institutions to maintain a healthy information space in which people can make informed, responsible choices. Social trust is the glue binding democratic societies, fuelling civic engagement, political participation, and confidence in political institutions, thereby safeguarding against democratic backsliding and authoritarianism. A whole-of-society response is therefore crucial. This is a problem which will not disappear: one in 10 teenagers report that TikTok is now their most important news source. This is the next generation of voters. Teaching critical thinking, encouraging fact-checking, and promoting the use of diverse news sources are essential steps – democracy’s best defence is a vigilant, informed and critically thinking electorate.

Sam Stockwell, The Alan Turing Institute

As new generative AI systems like ChatGPT allow users to create highly realistic fake content from simple text prompts, there is a risk that electoral disinformation will grow exponentially. The UK has already seen elements of this materialise, with ‘deepfake’ video clips of political candidates circulating during the current general election campaign.

Despite reassuring evidence that AI has so far failed to influence a specific election result, there are worrying signs of this content inciting online harassment and sowing confusion among voters about what is true. Although only a few weeks remain before polling day in the UK, much can still be done to increase resilience against AI threats.

The current lack of guidance on the use of AI systems for campaign content is particularly concerning, since political parties and their supporters could deceive the public by fabricating events or statements which blur the line between fact and fiction. Encouraging the major parties to sign up to key principles, such as those in DEMOS’ open letter, would therefore create greater accountability mechanisms for the fair use of AI in the election.

Empowering voters and the media with a list of certified AI verification tools would further help to reduce the risks of AI disinformation being amplified, given the challenges of discerning such content with the naked eye. Finally, social media platforms must also work closely with fact-checking initiatives throughout the election to clearly label and/or remove viral AI ‘deepfakes’ as they emerge, helping to tackle the threat at its source.
