AI power and British strategy

Foreword

The world is undergoing the fastest and most comprehensive period of innovation in the history of human civilisation. Artificial intelligence (AI) is at the heart of this revolution, with the potential to transform everything from the jobs we do and the way we teach children through to how we tackle crime and improve transport. Moreover, as a foundational technology, AI can unlock further breakthroughs in the technology we need to treat illnesses, reduce climate change and defend the United Kingdom (UK). 

At the same time the advent of AI poses significant risks. It can create novel challenges for society, from more effective disinformation and radicalisation, to widespread job displacement. And in the wrong hands, AI can be a surveillance tool for dictators and an enabler for terrorists seeking to build bioweapons.

Faced with something so promising and so dangerous, the priority of the UK must be to ensure that we and the West lead in this field. Doing so will not only be a significant economic opportunity, but also a political and security imperative, to ensure autocracies like China do not gain a significant advantage over the West in such a critical area. As Vladimir Putin highlighted as long ago as 2017, ‘whoever becomes the leader in this sphere will become the ruler of the world.’1James Vincent, ‘Putin says the nation that leads in AI “will be the ruler of the world”’, The Verge, 04/09/2017, https://www.theverge.com/ (checked: 21/03/2024). 

This report points out how the UK will need to do even more in the race to lead in AI. Not only are the technological superpowers of the United States and China forging ahead, but middle powers like Saudi Arabia and the United Arab Emirates are pouring colossal sums of money into developing sovereign AI capabilities.

While we cannot compete with such countries when it comes to public investment, the UK can and must deliver a more innovative environment for AI companies. We already know from DeepMind that British expertise can drive progress in AI. Now we need to support more start-ups and create an environment where such companies can scale up, instead of being acquired by multinational tech conglomerates. This will require improving tech companies’ access to private financial capital, skilled employees, data, and compute power.

The Government has already made a good deal of progress in this area. The Chancellor’s Mansion House reforms are expected to unlock £75 billion of additional investment from pensions. The new AI Future Grants scheme will help incentivise the next generation of AI leaders to relocate to the UK. And the UK has agreed a landmark AI deal with Canada to share compute capacity. 

Yet the UK will need to keep going further and faster if it is to keep pace in the race for AI with other countries around the world. That the French company Mistral has emerged as a global leader in AI from a standing start just one year ago shows how fast things are moving in the sector. The challenge of reacting to this wave of innovation is so urgent, the danger of falling behind so great and the opportunities so exciting that Britain must make leading on AI a national priority. 

This new report by the Council for Geostrategy rightly calls for the harmonisation of UK regulation, investment, and international cooperation to develop a ‘National AI Enterprise’. Decision-makers in Government and Opposition should read its findings carefully. Only by breaking down the barriers to cross-industry collaboration and making better use of data will the UK be able to keep up in this age of accelerated innovation. Never before have the opportunities been so great and the risks of falling behind so severe.

– The Rt. Hon. The Lord Hague of Richmond
Foreign Secretary (2010-2014) and Leader of the Conservative Party (1997-2001)

Executive summary

  • The world is in the midst of a global technological revolution, with artificial intelligence (AI) at its core. This is the newest development in the ‘Fourth Industrial Revolution’, which is catalysed by a number of technological strands which combine to transform political economy and society. AI is the underlying factor and cutting-edge element of this industrial transformation.
  • Rapid innovation is the key to strategic success in this macro environment. Historically speaking, distinctions in productive capacity between newer and older production mechanisms could be mitigated with capital investment and increased labour. Today, this is no longer the case – the pace and scale of innovation means that falling behind the innovation cycle leads to a long-term decline in economic output, and by extension, societal wealth and political and strategic competitiveness.
  • AI can be conceptualised as a capability and a resource. AI is a capability which can be applied to specific, discrete problem sets and in a variety of industries. However, it is also a foundational technology, upon which a number of other innovations necessarily rest, giving it the characteristics of a resource alongside that of a specific capability.
  • AI innovation includes the research and hardware which underpins AI development. In this respect, the United Kingdom (UK) is behind the innovation environments of the United States (US) and the People’s Republic of China (PRC), the two AI heavyweights, for structural economic and political reasons.
  • AI implementation demands the digital and physical infrastructure to accelerate AI development and deployment. Advanced computing and semiconductor design are central to this, as is broader digital infrastructure. A central lesson of the past four years is the crucial need for a large-scale industrial ecosystem to exist around AI.
  • AI investment provides the funding for all AI development and deployment. The US and PRC have distinctly structured investment environments for their AI enterprises, but the UK has a significant potential leverage point considering its banking and financial sector.
  • In a competitive international environment, AI power is crucial in expanding and improving industrial capacity during long-range strategic competition. Future strategic deterrence will rest upon a nation’s ability to generate computing power and maintain an information advantage as much as on its ability to field physical weapons.
  • The UK’s goal should be to develop a National AI Enterprise. This would integrate all elements of AI development and implementation to create a holistic, robust AI system which could be applied to multiple strategic problem sets and accelerate broader innovation.
  • The greatest challenge is the financial cost of AI development. AI is simply too expensive for national governments to fund its development and deployment independently. This necessitates comprehensive private sector buy-in. Despite the UK’s restricted fiscal situation under the current budget structure, its ability to access international capital and work with the private sector is extraordinarily high.
  • Enabling AI power requires a number of regulatory and strategic steps. The UK should break down barriers to cross-industry collaboration on AI and harmonise its data protection schemes with that of its most innovative ally, the US, while encouraging private capital investment in strategically critical AI sectors.

1.0 Introduction

Global geostrategic pressures have accelerated in the 21st century, creating a period of open political-military contestation. The United Kingdom (UK) and its allies and partners have entered a new era of geopolitical competition as the rapid development of emerging technologies, principally artificial intelligence (AI), reshapes the dynamics of national, economic and military power. The following Policy Paper identifies the character of emerging strategic competition, situates AI development within that competition, and provides a series of recommendations.

The world is still in the initial stages of a global technological revolution which already impacts every aspect of life. Today’s rapid technological advances promise significant potential in terms of productivity, prosperity and innovation benefits across society. This current age of techno-economic development can be described as the ‘Fourth Industrial Revolution’, as distinct from the First (1780-1850), Second (1870-1910), and Third (1940-1980) revolutions.2Klaus Schwab, ‘The Fourth Industrial Revolution: What it means and how to respond’, Foreign Affairs, 12/12/2015, https://www.foreignaffairs.com/ (checked: 21/03/2024) and Melvin Kranzberg and Carroll W. Pursell, Technology in Western civilization (New York: Oxford University Press, 1967). These changes also coincide with an ever more complex and volatile global backdrop which bears a disquieting similarity to the geopolitical situation emerging at the beginning of the 20th century. Then, too, rapid industrial advances were ruthlessly exploited by competing ideological and geopolitical orders, leading ultimately to the unprecedented destructiveness of two world wars. 

A nation’s ability to innovate faster and more effectively than its rivals, rather than simply its population and raw productive capacity, increasingly defines the foundation on which contemporary military and economic power rests. New and emerging technologies will influence the strategic balance, both in a tangible military sense, by enhancing military capabilities, and in a more general economic sense, by providing major actors with greater resources to employ, and with much greater efficiency and impact. Moreover, the stakes of escalation are extraordinarily high, giving every actor an interest in technological and economic competition – and potentially in controlled, limited conflict, in the case of aggressive revisionists like Russia or the People’s Republic of China (PRC).

AI is the most critical of these developing technological capabilities. This is not only because it underlies nearly every future technology as well as every aspect of social and economic development of advanced nations, but also because it will have such a stark impact upon the military balance. Importantly, AI is a foundational technology: this is a strategically-relevant development in and of itself, in that AI also catalyses the acceleration of all other contemporary technological advances. As such, it is a vital form of strategic advantage.3James Rogers, Gabriel Elefteriu and William Freer, ‘What is Strategic Advantage?’, Council on Geostrategy, 23/11/2023, https://www.geostrategy.org.uk/ (checked: 21/03/2024).

The foundational technologies of the late 19th and early 20th centuries – electrification, the internal combustion engine, telephones and wireless telegraphy, powered flight – came together to prompt a number of other innovations and economic-social developments, not least in the industrial clusters which defined the British Isles, Europe and North America until the late 20th century.

Similarly, the emergence of increasingly sophisticated AI models is already revolutionising a raft of industries and driving innovation across multiple commercial sectors.4George Steer and Laurence Fletcher, ‘Computer-driven trading firms fret over risks AI poses to their profits’, Financial Times, 15/06/2023, https://www.ft.com/ (checked: 21/03/2024). The speed of technological development also marks a significant step towards the creation of ever more powerful forms of AI and the strategic capabilities of the future. These AI-driven capabilities will continue to alter the dynamics of all forms of national power. 

Crucially, in the UK and allied economies, contemporary AI-driven innovation stems from the private sector, not from governments, marking a key difference from the past. Although the West’s model of free enterprise provided it with an unassailable long-term competitive advantage over the Communist bloc’s centrally planned economies during the Cold War, much of the basic research which prompted critical technologies during the post-Second World War period was government-led. The developments in computing which underlie modern electronics were nearly all funded by government investment – for example, in the UK at Bletchley Park and beyond, or in the United States (US) through Pentagon-led research and development programmes – while the basic research behind modern telecommunications and web-based connectivity was equally government-dependent. By contrast, it is private investment which drives the vast majority of modern technological advancement, including AI advancement, demanding a different policy framework.

Developing AI capabilities will require a scale and shape of public-private partnership for which free and open governments are not currently well organised or equipped. Most importantly, it requires large-scale funding. Even the superpowers will struggle to fund deep AI capabilities without close collaboration with allies and an intimate working partnership with the private sector. This requires a new model of partnership.

In Britain, this would be an initiative constructed by government and led by the prime minister and the Cabinet Office, with the prime minister exercising direct oversight. This National AI Enterprise should employ a cross-societal collaboration framework which integrates industry, the financial sector, universities, and other ecosystem players. It should direct all components of AI policy, whether science, skills, technology, military, or trade.

In the British case, His Majesty’s (HM) Government’s available budget, some £500 million over two years, would be grossly insufficient to fund AI development absent significant private sector collaboration.5‘Science, Innovation and Technology backed in Chancellor’s 2023 Autumn Statement’, Department for Science, Innovation and Technology, 23/11/2023, https://www.gov.uk/ (checked: 21/03/2024). A serious government investment push would identify funds akin to major public infrastructure projects or support for the green transition.6Benedict Macon-Cooney, James Phillips, Luke Stanley and Tom Westgarth, ‘A New National Purpose: AI Promises a World-Leading Future of Britain’, Tony Blair Institute, 13/06/2023, https://www.institute.global/ (checked: 21/03/2024).

AI developmental progress is very difficult to measure, in both relative and absolute terms. AI should be considered as a resource, much like other material resources that national governments consider critical, and a foundational technology, much like electrification in the 19th century. AI must be applied to have an effect or an outcome. But the more AI ‘power’ one has – the more AI ‘resources’ a nation or alliance has cultivated – the easier it will be to apply that power with high impact. Creating an AI stand-in force, an aggregate of AI capability which can be applied to multiple problem sets, will be a critical strategic capacity in future global competition and a central aspect of deterrence.

This paper has three sections. First, it will explore the strategic context, identifying the nature of contemporary geopolitical competition and how it affects the UK. Second, it will identify the way in which AI development is connected to every aspect of national power, and in turn, recognise the way in which AI will be a central determining factor in long-range, strategic competition. Third, it will explore the UK case further, identifying potential for British AI development and cultivating AI power.

2.0 What is AI power?

2.1 Translating power

The current period of systemic techno-economic competition, like its Cold War antecedent, will include the tangible or threatened use of elements of military force. Victory or defeat will stem from either an actual military confrontation between the great powers (and their allies and partners) or, as in the Cold War, from an increasingly disproportionate balance of power which imposes an effective deterrence or creates the conditions for one side’s collapse. This means that technological decisions are now the most critical strategic choices facing policymakers.

The Cold War experience demonstrates the need for accurate technological forecasting coherently linked to grand strategy. Indeed, the US in particular made several attempts to modify the technological foundations of its armed forces and other elements of national security. These had varying degrees of success, but each at a cost. The ‘New Look’ under President Eisenhower restrained defence expenditure, but the Soviets ultimately closed the nuclear gap, undermining transatlantic alliance coordination. The Second Offset Strategy of the late 1970s and 1980s unravelled the Soviet Union’s military advantages, accumulated since the mid-1960s, in under a decade, but at significant financial cost and with strains on the alliance.7Bryan Clark, Dan Patt, and Timothy A. Walton, ‘The Department of Defense Needs to Relearn the (Almost) Lost Art of Net Assessment’, The Strategy Bridge, 19/11/2020, https://thestrategybridge.org/ (checked: 21/03/2024) and Joseph Felter, ‘It’s Not Just The Technology: Beyond Offset Strategies’, Hoover Institution, 15/03/2017, https://www.hoover.org/ (checked: 21/03/2024).

As during the Cold War, contemporary geopolitical competition is primarily technological-economic, and then, secondarily, military-strategic. Military power will have some elements of continuity with its late 20th and early 21st century predecessors, but the technological advances on display in Ukraine, and increasingly in the Indo-Pacific, have shifted the character of war.8Andrew F. Krepinevich, ‘Maritime Competition in a Mature Precision Strike Regime’, Centre for Strategic and Budgetary Assessment, 20/10/2014, https://www.files.ethz.ch/ (checked: 21/03/2024).

More important are the underlying technological and economic factors and their translation mechanisms into national power: victory in this competition requires the accumulation of long-term geographical, economic, and military leverage points, akin to the dynamics of a chess match in which a game ends several moves before technical checkmate.

2.1.1 The British position

Alliances and strategic relationships are key to building and sustaining strategic advantage, particularly in a complex, rapidly changing and high-tech world: only coalitions provide the power needed to compete geostrategically.9Gabriel Elefteriu, ‘Why alliances matter’, Council on Geostrategy, 20/12/2023, https://www.geostrategy.org.uk/ (checked: 21/03/2024).

The UK can leverage its capabilities for geopolitical effect. However, any individual military-diplomatic policy choice by HM Government will be radically less impactful than the long-range policy decisions the UK will make on scientific and technological development, because technological choices have greater influence on Britain’s long-range strategic options.10Mann Virdee, ‘Is Britain losing its scientific edge?’, Council on Geostrategy, 14/12/2023, https://www.geostrategy.org.uk/ (checked: 21/03/2024). Geopolitics matters – the strategic relationship with the US, the UK’s position in Europe, and the British presence in the Indo-Pacific all serve to strengthen HM Government’s hand – but even these must be viewed through a technological-economic lens.

2.2 AI as a capability and resource

AI power should be seen as the synthesis between AI as a specific technological capability and as a resource for cultivation akin to a physical resource. This interaction defines AI power, and should guide how the UK and its allies and partners consider AI policy.

AI is concurrently the critical underlying capability for international political competition – when provided with sufficient volumes of energy – and the critical resource for technological and strategic development, akin to a physical resource: it enables other innovations, accelerates capability development and, when cultivated in aggregate, can be applied to multiple problem sets, ranging from military operations to scientific research and industrial production. 

Each wave of industrialisation stemmed from the combination of key resources and foundational technologies which triggered progressive shifts. The First Industrial Revolution’s technological bundle included mechanised textile production, canal and steam-powered transport, and improved iron production. This facilitated the factory system which, by the later 18th century, triggered the early stages of urbanisation and the modern social hierarchy, which in turn drove political, military, and strategic changes.

The Second Industrial Revolution expanded upon these transformations. Between the 1870s and 1920s, a number of new technologies – structural steel manufacturing processes and other metals fabrication techniques, railway networks, improvements in chemical fertilisers, electrification, telegraphy, and radio – triggered a host of innovations which defined the 20th century. 

The Third Industrial Revolution, prompted by the invention of the transistor, and in turn modern electronics, has had an equally profound impact upon politics, society, and strategy, as evident throughout the Cold War’s first two decades. Since then, computing has stood alongside industrial production as a foundation of economic power, and by extension, military strength.

The distinction between this phase and the Fourth Industrial Revolution is the networked nature and non-physical character of innovation, and the development of new technologies or processes which increase economic efficiency and output, particularly in the context of geopolitical competition.11‘What are Industry 4.0, the Fourth Industrial Revolution, and 4IR?’, McKinsey and Company, 17/08/2022, https://www.mckinsey.com/ (checked: 21/03/2024) and Klaus Schwab, ‘The Fourth Industrial Revolution’, World Economic Forum, https://law.unimelb.edu.au/ (checked: 21/03/2024), pp. 20-25. Of course, innovation is crucial to long-run growth.12Nathan Rosenberg, ‘Innovation and Economic Growth’, Organisation for Economic Co-operation and Development, 12/01/2005, https://www.oecd.org/ (checked: 21/03/2024). But in the short run, historically speaking, a less innovative economic system could remain geopolitically competitive with a more innovative one. Nominally speaking, the Soviet Union had a scientific and engineering infrastructure on par with that of the West.13Abraham Sinkov, ‘Soviet Science and Technology: Present Levels and Future Prospects’, National Security Agency (US), 20/02/2008 (declassified), https://media.defense.gov/ (checked: 21/03/2024). The Soviet system, however, was structured quite differently from the Western model. The Western system, exemplified by US science and technology policy, included an enormous amount of state-run investment. But much of this investment was partitioned out in a decentralised manner, through university grants, direct contracts, and other mechanisms that encouraged competition for funding and supported multiple technological streams.14John Aubrey Douglass, ‘The Cold War, Technology and the American University’, University of California, 02/02/2000, https://escholarship.org/ (checked: 21/03/2024).

The Soviet Union, by contrast, excelled in large-scale scientific projects like nuclear engineering, but was incapable of distributing funding across a number of areas.15Seth Center and Emma Bates, ‘Tech-Politik: Historical Perspectives on Innovation, Technology and Strategic Competition’, Centre for Strategic and International Studies, 19/12/2019, https://csis-website-prod.s3.amazonaws.com/ (checked: 21/03/2024). This stemmed both from resource constraints and political realities – many top scientists in the Soviet Union were ‘bourgeois specialists’, either educated before the Bolshevik Revolution or exposed to open societies through academic contacts, meaning the Communist Party had to keep these useful but dangerous individuals under reasonably strict control.

This system was manageable throughout the early Cold War, when US and Soviet growth rates were not grossly separated. A shift occurred, however, between 1970 and 1990. The Soviet economy stagnated, with growth dropping a half-percent each decade from 1960, ultimately leading to the Soviet Union’s collapse in 1991. The US, by contrast, continued growing at c. 4% in the 1960s, and 3% in the 1970s and 1980s. The British trajectory was even more stark, rising from 3% in the 1960s and 2% in the 1970s to over 4% by the late 1980s.16Data from World Bank. See: ‘GDP growth (annual %) – United States, United Kingdom’, The World Bank, No date, https://data.worldbank.org/ (checked: 21/03/2024) and ‘The Soviet Economic Decline’, The World Bank, 21/11/2021, https://datacatalog.worldbank.org/ (checked: 21/03/2024).
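The growth arithmetic above can be made concrete with a simple compounding calculation. The sketch below (Python, purely illustrative) applies the US decade rates cited in the text; the ‘stagnating economy’ rates are hypothetical, chosen only to fall by roughly half a point per decade, since the text gives the Soviet rate of decline rather than its base. The point is how a modest, sustained growth differential compounds into a large divergence in output over a few decades.

```python
def compound_growth(decade_rates: list[float]) -> float:
    """Cumulative output multiple after applying each annual rate for ten years."""
    multiple = 1.0
    for rate in decade_rates:
        multiple *= (1 + rate) ** 10
    return multiple

# US rates from the text: c. 4% in the 1960s, 3% in the 1970s and 1980s.
us_multiple = compound_growth([0.04, 0.03, 0.03])  # c. 2.7x over 30 years

# Hypothetical stagnating economy, declining by roughly half a point per decade.
stagnant_multiple = compound_growth([0.03, 0.025, 0.02])  # c. 2.1x over 30 years

print(f"Dynamic economy: {us_multiple:.2f}x output over 30 years")
print(f"Stagnating economy: {stagnant_multiple:.2f}x output over 30 years")
```

Even these modest rate differences leave the faster-growing economy more than a quarter larger after three decades, a gap that widens further with each additional decade.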

The underlying factor was an explosion in technological development stemming from the Third Industrial Revolution, combined with expanding access to consumer electronics and economic financialisation. The existence of the Solow Paradox, the idea that productivity growth declined just as information technology began to expand in the 1970s and 1980s, does not undermine this view. For one, after a period of structural adjustment, the Western world did have an explosion in productivity growth in the 1990s and 2000s.17Christian E. Weller, ‘Learning Lessons From the 1990s: Long-Term Growth Prospects for the US’, Economic Policy Institute, 10/04/2002, https://www.epi.org/ (checked: 21/03/2024). For another, while technological adjustment is relevant, much of the productivity stagnation of the 1970s was mirrored in broader economic contraction, stemming from issues well beyond technology and innovation – once those extraneous issues were remedied, a subsequent boom resulted. It is reasonable to identify, therefore, a time-lag between AI development, implementation, and productivity growth.

The two other key aspects of the Fourth Industrial Revolution are the role of technology in economic development and the role of private investment in technological growth. Technological acceleration is well-documented. The developments from 1780 onward dwarf those of the previous 200,000 years of human existence.18Max Roser, ‘Technology over the long run: zoom out to see how dramatically the world can change within a lifetime’, Our World in Data, 22/02/2023, https://ourworldindata.org/ (checked: 21/03/2024). But developments from the 1970s onward have become non-physical, that is, initially dependent upon computerised processes, and now increasingly on AI.19Max Roser, ‘The brief history of artificial intelligence: The world has changed fast – what might be next?’, Our World in Data, 06/12/2023, https://ourworldindata.org/ (checked: 21/03/2024). The investment necessary for these developments, in turn, stems largely from the private sector. This is a reflection of simple financial reality: in aggregate, the private sector in a financialised economy will have far more cash available on hand than the government, while a fully centrally-planned system akin to the Soviet Union is simply uncompetitive with a privately-driven system over time in terms of investment potential.

In typical discussions about AI, the focus is on the specific results it generates in certain contexts. However, AI’s diverse applications make it an obvious foundational technology, because it can supercharge nearly every other technological application: AI-enabled development, combined with simulation technology and other forms of automated production, allows for a major acceleration in both innovation and deployment.

Socially, AI has the potential to democratise access to exceptionally large amounts of processing power and to provide increased decision support across processes hitherto seen as the exclusive preserve of human judgement. Far more dangerously, AI can also destabilise societies through its ability to spread disinformation (including the creation of so-called ‘deep-fakes’), to reinforce political echo chambers and, when combined with virtual reality and social media technologies, to intensify political divisions and social isolation.

Militarily, AI-powered autonomous weapons, intelligence and decision aids could accelerate the kill chain – the time between target identification and an engagement action – and if combined with a sufficiently distributed force, such capabilities can enable the long-distance concentration of firepower and absorption of enemy strikes. AI can also enable far greater military production if twinned with automation and advanced manufacturing processes.

The PRC understands the reality that information is central to the execution of networked warfare and the growth of a modern military industrial system. Chinese Intelligentised Warfare (the People’s Liberation Army’s theory of victory), together with Military-Civil Fusion and the Chinese Communist Party’s (CCP) dual-use technological-industrial policy, all leverage AI. The goal is to improve Chinese military advantages and ultimately gain ‘Decision Dominance’: the ability to think, act, react, and innovate faster than an adversary.

Each discrete application of AI links to the broader phenomenon of AI development itself. Historically speaking, bundles of technologies undergird military and political competition. During the First World War, the production line, telegraph wires, internal combustion engine, long-range optics, improved explosives design, chemical synthesis advances, and improved flight technology, all combined to intensify the ferocity of combat. During the Second World War, these technologies joined with improved sensing, better communications technology, and more advanced propulsion and aeronautical engineering to accelerate military capacity further, and with advances in machine tools and new production methods to increase materiel output. The Cold War combined these advances with additional qualitative leaps, particularly in nuclear technology, the development of microchips and precision weaponry, computing, and space-based assets. In each case, it was the combination of technologies that mattered most.

Moreover, although there were certain resources which ‘created’ these technologies – coal, oil, steel, and various specific metals – and although each technological bundle relied upon massive amounts of capital investment, these military-technological advances were themselves the product of a broader underlying set of capabilities within society. The ability of AI to cut across capability sets, by contrast, makes it unique: AI can enable all aspects of development like an underlying ‘super resource’, meaning it can be treated as an input akin to the early 20th century’s steel and iron and, at the same time, as a refined output. AI sits at the beginning and end of the technological development chain.

Moreover, unlike the production of iron and steel, AI has unlimited potential. At some point, any finite ‘natural’ resource will run out, regardless of the technological efficiencies involved in extracting and processing it. By contrast, AI will remain infinitely elastic, especially as renewables – not least fusion power and other forms of unlimited energy – become a reality. Its ‘feedstock’, the data it relies upon, also grows exponentially as AI becomes increasingly sophisticated, is applied in novel ways and generates more information organically. In turn, this necessitates ever more powerful AI capabilities to make sense of it all, again reinforcing the compounding growth effect. There is, conceptually speaking, no point of diminishing returns from AI development – each investment will, over a long enough time horizon, pay back greater dividends, assuming sufficient power generation capacity.

If AI is both an underlying technology for innovation and a resource which enables economic growth, the policy framework which encourages AI development should be carefully constructed to ensure the UK and its allies win the AI competition, both in the context of resource growth and that of technological innovation.

3.0 Contemporary British geopolitical interests

The great powers are pursuing competitive AI policies to further their own strategic agendas. Central to this is the development of AI-driven military and national security capabilities. But great-power AI development goes well beyond military and national security questions, and instead is the baseline for a broader competition over technology, innovation, and competitiveness.

There are three strands to this evolution: innovation, which includes research and development; implementation, which includes talent, skills, the operating environment, and digital infrastructure; and investment, which includes government policy, the commercial ecosystem, and other financial factors. As outlined below, the US and the PRC are both investing heavily in AI Power along all three of these strands.

3.1 AI innovation

AI Innovation, including research and hardware, drives AI growth. The PRC has sought to adapt its economy to gain an innovation advantage despite its structural issues. The PRC’s 1.4 billion people can provide the CCP with an enormous amount of data, while the Chinese data protection regime gives the state access to any information it seeks. The 2017 Cybersecurity Law requires network operators to store their data at data centres located in China and gives law enforcement – in reality, the CCP’s security services – the ability to access that data at any time. The law applies to any company that operates within the PRC. The CCP also seeks to convince other governments to adopt similar data protection regimes, enabling comparable data collection abroad.

The PRC’s AI policy more generally stems from Military-Civil Fusion (MCF), which underlies all Chinese military-technological development.20 The CCP’s goal, under MCF, is to break down all distinctions between civilian and strategic technological advances. The PRC’s legal and economic structures give the CCP unlimited power to coerce, compel, and induce private actors to transfer information to the party, and by extension to the PLA, which remains the CCP’s armed wing. MCF’s particular focus on AI and other advanced computing technologies reflects the CCP and PLA’s understanding of modern technological and military competition.

The US has fewer data collection options than the PRC given its privacy protections. Yet American firms have always leveraged information technology to collect data on users, if primarily for commercial and marketing ends, while American funding lines are more plentiful than their Chinese counterparts because of US capital markets. The fundamental issue is the lack of data regime harmonisation between the UK, US, European Union (EU), and other partners, limiting data transfers.

3.2 AI implementation

AI Implementation requires both an innovation network and the digital/computerised infrastructure (communications cables/communication technology and other electronics) to diffuse innovations throughout an economy and society. Baseline electronics and communications hardware, including semiconductors and microelectronic devices, are the initial attachment points of AI in the broader economy.

The PRC’s investments in baseline consumer electronics are obvious, as is the dependence of international supply chains upon Asian producers for advanced semiconductors.21 However, there are two areas of competitive advantage in which the US is investing, and which the other Euro-Atlantic powers could also target. First, advanced semiconductor design remains a largely Euro-Atlantic endeavour, concentrated primarily in Silicon Valley, the Netherlands, and the UK – the latter especially for compound semiconductor development. Second, extremely advanced computing and design remains, again, a largely Euro-Atlantic industry.

Nevertheless, there are as yet no major Euro-Atlantic telecommunications giants capable of competing with the Chinese telecommunications infrastructure companies in major foreign markets, although US political pressure has disrupted deals with Chinese state-backed firms like Huawei.22 Meanwhile, cheap Chinese consumer electronics still dominate low-cost markets in Africa and Asia.

If they so choose, the UK, US and other like-minded nations could place an outright ban on Chinese technologies, as they have already done in certain contexts, and thereby create an incentive for domestic producers to fill the gap. The issue is that countries of more limited means in Africa, South America, and the Middle East will not be able to purchase these more expensive alternatives at scale. The PRC has arguably already developed enough leverage over these so-called ‘middle ground’ actors to keep them at a minimum neutral during periods of contestation, manipulate their votes in international institutions, and even generate parallel institutions which could reduce market access and political contact between Europe, North America, and parts of the Indo-Pacific.

Advanced hardware, including autonomous subcomponents and fully autonomous systems, is the most obviously militarily applicable aspect of AI, but relies on a much broader industrial system. The difficulty with these technologies, in a military context, is matching new systems to operational requirements and creating a development cycle which can integrate lessons from combat. New technologies must be sufficiently cheap to be fielded at scale and have a simple enough design to be modified rapidly once the battlefield provides information about use-cases and weak points. Equally, particularly in relation to drones, new systems must be resilient enough against electronic disruption to maximise swarm effects and autonomous programming. This balance is extraordinarily difficult to strike, as Ukraine has demonstrated, with the failure of various precision systems due to electronic interference, and the general lack of AI-enabled drone employment.

3.3 AI investment

The national innovation networks of major countries, resting upon large technology firms and the financial sector alongside small AI research organisations and start-ups, enable creative AI development. AI is a cross-cutting capability, meaning dozens of industries have an interest in fostering AI research and development. This should create a significant number of AI start-ups which support different industries, as is already underway in the UK and US.

In this context, the UK and its allies may have a structural advantage which requires careful policy to realise. The CCP can pour funding into a select number of AI companies and balloon their development, as it has done to create a number of AI ‘unicorns’ over the past five years.23 However, there is a significant distinction in corporate strategy between those Chinese tech companies that receive renminbi investment and those that rely on external funding (i.e., funding sources not linked to the CCP), since the Chinese state cannot shape and direct privately-invested technological development in the same manner as publicly-funded development.

Investment sourcing ensures that AI development is properly supported. Despite the UK and allied structural advantage in an AI innovation network, this is wholly insufficient without a proper financing framework – government and private sector – which can mitigate risk and allow for creativity and cutting-edge developments.

Considering the level of financing that AI development requires, it is necessary to ensure that a large pool of capital exists to support AI ventures. Some of this may be state-backed. The US and the PRC both use public funding to support AI development, albeit in different ways: while Chinese state-owned enterprises are a common investment vehicle, in the US, different government agencies run start-up programmes. However, the amount of money needed implies the primacy of private sector investment in AI development. National governments may therefore be able to finance aspects of AI development, but they will not be capable of providing the basic research and training funding for AI, or the overall funding that creates a multiplicity of AI strategic applications – i.e., the set of applications that will create the stand-in aggregate AI capability from which the UK and its allies can draw in a long-range strategic competition. Government holds the reins of strategy, directing the private sector towards certain investment pathways – including by setting an example as a fast adopter of new solutions – rather than driving it directly, which is the task of industry.

4.0 AI power and industrial capacity for military competition

Combining the three streams discussed above – innovation, implementation, and investment – can create a mass of AI power – per AI’s role as a resource – which can be applied to a variety of issues. Like electric power generation, AI is an underlying capability that will touch every aspect of human life. This creates significant cross-applicability: not only does AI have relevance to nearly every economic field and social question, but it can also be translated back from those contexts into purely military-strategic ones. The translation of AI power from civilian to military purposes, much like the conversion of civilian industry to wartime needs, is imprecise and involves trade-offs. Nevertheless, having a stand-in AI capacity across the private (as well as public) sector enables the rapid growth of military power when needed – and supercharges economic productivity when not.

In the 20th century, net economic power was a reasonable underlying indicator of military capacity.24 This remains true to an extent in the 21st century, but the increasing relevance of advanced communications, sensing, and precision-guided munitions makes data processing capacity an additional proxy for military power. Yet the current situation is distinctive. The post-Cold War peace dividend led to a rapid drawdown in military capability and stockpiles, while the early 21st century’s emphasis on counter-insurgency operations and ‘smart’ weapons reduced stockpiles even further, on the expectation that a small number of sophisticated capabilities would dominate the battlefield. Russia’s renewed offensive against Ukraine has demonstrated the relevance of materiel depth, even in the context of intense employment of precision fires. A central element of the future military balance, in light of this war, will be the expectation that a state lacking major strategic capacity will be incapable of sustaining a great-power conflict for long, and will therefore be unable to deter an adversary.

AI power plays into this directly. The ability to leverage a robust computing network – of which AI is a part – for military purposes will be as crucial as traditional industrial capacity to future deterrence. This is because the ability to innovate rests upon AI-enabled design systems and increasingly sophisticated operational networks that can be created only with sufficient computing power and aggregate AI.

5.0 AI power for Britain

Developing the National AI Enterprise requires the harmonisation of regulatory, investment, and international collaboration lines to ensure the British AI ecosystem matures.

There is a significant risk, increasingly real, of a divergent regulatory regime within the free world, where the US and EU create regulatory frameworks with little overlap. This will stymie international data transfers and, by extension, directly hamper technological development and AI-enabled innovation.

In this complicated context, building British AI power demands a nuanced strategic approach. For the UK, conceptualising this challenge requires linking AI with the other capabilities of interest in systemic competition and, of equal relevance, understanding precisely how international cooperation can intensify capabilities development.

5.1 Interlinkages and the funding problem

As discussed above, AI development is related to other technologies – data collection and dissemination techniques, advanced semiconductors, exascale computing, and in time quantum computing capabilities, to increase processing power. In this manner, AI resembles other international technological developments. The nuclear and precision revolutions fed off each other, much like AI can link with other technology streams.

The great distinction, however, is, once again, that the limits of AI power lie far beyond the capabilities of individual countries – likely even beyond those of the most resourceful among them, such as the US and the PRC.

In the context of AI, governments are simply not wealthy enough to fund development even in the short term. It is worth reinforcing this point to demonstrate the scale of the challenge any power faces, particularly one of still significant but democratically limited means like the UK. Nuclear technology and advanced missile delivery systems all stemmed from government-sponsored, and typically wholly government-funded, research programmes. The private sector, however, is the only option for AI funding, considering the financial scale of the task. J.P. Morgan, for example, spends US$12 billion per year on technology research and development.25 The most recent Pentagon budget includes a US$145 billion (£115 billion) allotment for Research, Development, Test, and Evaluation across the entirety of US defence. This price tag is, of course, 12 times the size of JPMorgan’s investment. But JPMorgan is a financial services multinational and the largest bank in the US. The fact that the American military only outclasses its research spending by a factor of 12 is a striking reflection of the financial state of affairs. Meanwhile, the UK MoD’s total research and development spending was £2.1 billion in 2022/2023.26

Although government does not provide the primary funding stream, it still plays a crucial role in setting demand signals which are essential for stimulating private sector investment. This is true even though AI has obvious non-governmental applications and funding streams that redound to direct economic benefit, unlike nuclear technology, which had a primary military application and secondary, specific non-military applications. AI has already reached a relatively mature stage and increasingly can be integrated at scale into industry, which creates a feedback loop for AI research and development funding. This dynamic would be impossible to achieve in nuclear contexts given the fixed cost of nuclear capabilities, even for limited energy and research purposes. There is some governmental financial support for AI development: HM Government articulated aspects of it in the Integrated Review Refresh 2023, UK Research and Innovation’s critical technology areas include some AI funding streams, and the Defence Science and Technology Laboratory (DSTL) has begun projects that use AI for target recognition. Similarly, the US Department of Defense is several years into Project Maven, an attempt to develop AI target recognition software, while the Central Intelligence Agency and other parts of the military run several other programmes for technology start-ups which either directly or indirectly relate to AI.

The UK is limited by its spending decisions. This is also true of the US and Europe, even though both could notionally find more government money for AI research. In the British context, the AI market was valued at some US$21 billion (£17 billion) as of late 2022, with an expected long-term value-added of US$1 trillion (£819 billion).27 The broader AI economy, depending on the measurement, may have reached £1.2 trillion last year.28 Investment into UK AI companies reached £5 billion in 2021, five times the 2019 figure.29 By comparison, the March 2023 Budget contained £1 billion for AI and supercomputing investment, alongside a number of science and tech accelerators. HM Government funding and a well-constructed regulatory environment are crucial to shape the market environment, but it is undeniable that the private sector must take the lead.

5.1.2 National AI Enterprise

Developing a British AI industrial capacity, a national ‘AI enterprise’ capable of application across multiple sectors and in radically distinct strategic contexts, is a difficult proposition given the financial pressures on the public purse. Nevertheless, by playing to British strengths, it can be done.

The UK has two major advantages in the contemporary AI development ecosystem. Britain is a world leader in the number of AI technology companies it hosts, typically behind only the US and the PRC.30 This situation stems from several factors. London is an international financial, legal, and commercial hub, creating reinforcing incentives for AI development. Indeed, the fact that London is not a ‘tech hub’ per se, but a truly global city home to a number of discrete industries, makes it extraordinarily attractive for broad AI development. Moreover, British universities are particularly robust, creating a talent pool for recruitment. London’s advantages, and the UK’s AI success thus far, largely stem from inherent structural and market factors, not explicit policy support for AI research and development.31 The fact that London is a global financial hub, comparable to New York City in the most recent rankings, provides ample investment access.32 Moreover, the UK has a relatively concentrated high-tech ecosystem, facilitating AI access by virtue of geography. This implies that a coherent AI policy should prompt even greater development and growth in this sector. Additionally, the UK has the potential to act far more nimbly than the US, considering the overlapping, non-rationalised character of US AI and data regulation, and more effectively than the EU, considering its labyrinthine governance structures.

5.2 Enabling UK AI power

HM Government can take two critical steps towards a better enabling environment for AI Power.

First, the UK can break down barriers to cross-industry collaboration. AI is already being applied most directly in the financial services industry, where the City of London’s biggest banks develop or purchase proprietary models to accelerate trading volumes and increase transaction speed. HM Government, however, should ensure that AI development supports other sectors as well. One way to encourage this is through tax credits on cross-sector AI projects and cross-sector AI innovation hubs. In the latter case, it would be particularly useful for the government to place AI and quantum companies face-to-face in innovation programmes, given the synergies between them.

Second, the UK can begin to tilt its data protection schemes towards the US model in financial services and healthcare technology, and accelerate this process if the Americans establish a federal-level data protection framework. The objective would be to ensure that British AI companies can easily operate with American clients. The US so severely outstrips Europe as a market for AI development and investment that the UK would greatly benefit from linking with even a nascent American data protection system in certain industries. British start-ups, meanwhile, could receive funding-line-agnostic tax breaks to attract American capital – that is, an AI company can receive seed funding from international partners, likely from those nations that are allies and partners of the UK, and receive tax relief for these funds. Moreover, given the UK’s current strategic advantage over the EU in AI technology development, and particularly in AI companies, there is the potential for the UK to behave as an aligning power, accelerating the harmonisation of data standards between North America and Europe. This requires explicit linkage between data protection standards and AI development. A North American-wide approach is reasonable because of Canada’s relatively mature AI sector and its nationally-implemented data protection standards. This approach, however, could also be scaled-up to include Indo-Pacific partners, not least Australia and Japan.

6.0 Conclusion

The global geostrategic race for AI power will only accelerate as geopolitical volatility increases, and the US and PRC compete for strategic advantage. The UK can have an outsized impact in this race, and can strengthen the capabilities of its strategic coalition, if it properly structures its National AI Enterprise to accelerate innovation and ensure it commands a critical mass of AI power.

In turn, there are significant costs to dropping out of the international AI race. The only way the UK and its allies can win the ongoing strategic competition with the revisionist axis is by creating a set of interlocking capabilities and economic-technological capacities that add up to an overall advantage greater than the sum of their parts. The UK is one of the strongest and most technologically innovative countries in the world, and arguably the only one capable of acting as an aligning power with its Euro-Atlantic and Indo-Pacific partners in strategic, economic, and technological contexts. The US policy system may simply be too bloated and too focused on other questions to take the lead as an aligning power. Hence, if the UK does not lead, not only will it become increasingly marginalised in broader strategic affairs, but its allies and partners will suffer a commensurate decline in their capabilities, potentially ceding the long-term advantage to their shared rivals.

Acknowledgments

The Council on Geostrategy would like to thank Adarga for making this study possible. The analysis and opinions expressed in this report reflect the views of the authors, and do not necessarily reflect the views of Adarga.

About the authors

Rob Bassett Cross MC is the CEO and founder of Adarga, a British AI leader deploying mission-critical software to National Security, Defence and commercial organisations. Adarga is delivering decision advantage to the UK and its allies by unlocking critical insight from vast volumes of information. A former British Army officer, Rob led teams on counter-terrorism operations around the world before joining J.P. Morgan as an investment banker. He graduated with a degree in law (LLB (Hons)) from Exeter University. Rob is a non-resident senior fellow with the Atlantic Council’s Forward Defence practice.

Harry Halem is Senior Fellow at Yorktown Institute. He holds an MA (Hons) in Philosophy and International Relations from the University of St Andrews, and an MSc in Political Philosophy from the London School of Economics. Halem worked for the Hudson Institute’s Seapower Center, along with multiple UK think-tanks. He has published a variety of short-form pieces and monographs on various aspects of military affairs. He is also a Doctoral candidate in the LSE’s Department of International Relations, researching historical conceptions of military thought and their relationship to escalation.

Gabriel Elefteriu FRAeS is Deputy Director at the Council on Geostrategy, where his research focuses on defence and space policy. Gabriel also leads the Strategic Advantage Cell. Previously he was Director of Research and Strategy and member of the Senior Management Team at Policy Exchange, which he first joined in 2014 and where he also founded and directed the first dedicated Space Policy Research Unit in the UK. Gabriel is also an Associate of King’s College, London, an elected Fellow of the Royal Aeronautical Society, and a founding partner at AstroAnalytica, a space consultancy. He holds a BA in War Studies (first class) and an MA in Intelligence and International Security (Distinction), both from King’s College, London.

Disclaimer

This publication should not be considered in any way to constitute advice. It is for knowledge and educational purposes only. The views expressed in this publication are those of the author and do not necessarily reflect the views of the Council on Geostrategy or the views of its Advisory Council.

No. GSPPP03 | ISBN: 978-1-914441-61-5

1. James Vincent, ‘Putin says the nation that leads in AI “will be the ruler of the world”’, The Verge, 04/09/2017, https://www.theverge.com/ (checked: 21/03/2024).
2. Klaus Schwab, ‘The Fourth Industrial Revolution: What it means and how to respond’, Foreign Affairs, 12/12/2015, https://www.foreignaffairs.com/ (checked: 21/03/2024) and Melvin Kranzberg and Carroll W. Pursell, Technology in Western Civilization (New York: Oxford University Press, 1967).
3. James Rogers, Gabriel Elefteriu and William Freer, ‘What is Strategic Advantage?’, Council on Geostrategy, 23/11/2023, https://www.geostrategy.org.uk/ (checked: 21/03/2024).
4. George Steer and Laurence Fletcher, ‘Computer-driven trading firms fret over risks AI poses to their profits’, Financial Times, 15/06/2023, https://www.ft.com/ (checked: 21/03/2024).
5. ‘Science, Innovation and Technology backed in Chancellor’s 2023 Autumn Statement’, Department for Science, Innovation and Technology, 23/11/2023, https://www.gov.uk/ (checked: 21/03/2024).
6. Benedict Macon-Cooney, James Phillips, Luke Stanley and Tom Westgarth, ‘A New National Purpose: AI Promises a World-Leading Future of Britain’, Tony Blair Institute, 13/06/2023, https://www.institute.global/ (checked: 21/03/2024).
7. Bryan Clark, Dan Patt, and Timothy A. Walton, ‘The Department of Defense Needs to Relearn the (Almost) Lost Art of Net Assessment’, The Strategy Bridge, 19/11/2020, https://thestrategybridge.org/ (checked: 21/03/2024) and Joseph Felter, ‘It’s Not Just The Technology: Beyond Offset Strategies’, Hoover Institution, 15/03/2017, https://www.hoover.org/ (checked: 21/03/2024).
8. Andrew F. Krepinevich, ‘Maritime Competition in a Mature Precision Strike Regime’, Centre for Strategic and Budgetary Assessment, 20/10/2014, https://www.files.ethz.ch/ (checked: 21/03/2024).
9. Gabriel Elefteriu, ‘Why alliances matter’, Council on Geostrategy, 20/12/2023, https://www.geostrategy.org.uk/ (checked: 21/03/2024).
10. Mann Virdee, ‘Is Britain losing its scientific edge?’, Council on Geostrategy, 14/12/2023, https://www.geostrategy.org.uk/ (checked: 21/03/2024).
11. ‘What are Industry 4.0, the Fourth Industrial Revolution, and 4IR?’, McKinsey and Company, 17/08/2022, https://www.mckinsey.com/ (checked: 21/03/2024) and Klaus Schwab, ‘The Fourth Industrial Revolution’, World Economic Forum, https://law.unimelb.edu.au/ (checked: 21/03/2024), pp. 20-25.
12. Nathan Rosenberg, ‘Innovation and Economic Growth’, Organisation for Economic Co-operation and Development, 12/01/2005, https://www.oecd.org/ (checked: 21/03/2024).
13. Abraham Sinkov, ‘Soviet Science and Technology: Present Levels and Future Prospects’, National Security Agency (US), 20/02/2008 (declassified), https://media.defense.gov/ (checked: 21/03/2024).
14. John Aubrey Douglass, ‘The Cold War, Technology and the American University’, University of California, 02/02/2000, https://escholarship.org/ (checked: 21/03/2024).
15. Seth Center and Emma Bates, ‘Tech-Politik: Historical Perspectives on Innovation, Technology and Strategic Competition’, Centre for Strategic and International Studies, 19/12/2019, https://csis-website-prod.s3.amazonaws.com/ (checked: 21/03/2024).
16. Data from World Bank. See: ‘GDP growth (annual %) – United States, United Kingdom’, The World Bank, No date, https://data.worldbank.org/ (checked: 21/03/2024) and ‘The Soviet Economic Decline’, The World Bank, 21/11/2021, https://datacatalog.worldbank.org/ (checked: 21/03/2024).
17. Christian E. Weller, ‘Learning Lessons From the 1990s: Long-Term Growth Prospects for the US’, Economic Policy Institute, 10/04/2002, https://www.epi.org/ (checked: 21/03/2024).
18. Max Roser, ‘Technology over the long run: zoom out to see how dramatically the world can change within a lifetime’, Our World in Data, 22/02/2023, https://ourworldindata.org/ (checked: 21/03/2024).
19. Max Roser, ‘The brief history of artificial intelligence: The world has changed fast – what might be next?’, Our World in Data, 06/12/2023, https://ourworldindata.org/ (checked: 21/03/2024).
20. ‘Military-Civil Fusion and the People’s Republic of China’, Department of State (US), 28/05/2020, https://www.state.gov/ (checked: 21/03/2024).
21. Gregory C. Allen, ‘China’s New Strategy for Waging the Microchip Tech War’, Centre for Strategic and International Studies, 03/05/2023, https://www.csis.org/ (checked: 21/03/2024).
22. Ray Le Maistre, ‘Huawei is still the world’s biggest telecom equipment vendor’, TelecomTV, 22/03/2023, https://www.telecomtv.com/ (checked: 21/03/2024).
23. ‘The Global AI Index’, Tortoise, No date, https://www.tortoisemedia.com/ (checked: 21/03/2024).
24. Michael Beckley, ‘The Power of Nations: Measuring What Matters’, International Security, 43:2 (2018).
25. ‘This $12 Billion Tech Investment Could Disrupt Banking’, JPMorgan Chase & Co., No date, https://www.jpmorganchase.com/ (checked: 21/03/2024).
26. ‘MOD Departmental resources: 2023’, Ministry of Defence (UK), 30/11/2023, https://www.gov.uk/government/ (checked: 21/03/2024).
27. ‘United Kingdom Artificial Intelligence Market’, International Trade Administration, 16/09/2022, https://www.trade.gov/ (checked: 21/03/2024).
28. Bryce Elder, ‘How valuable is the UK’s AI industry? Here’s one way to not find out’, Financial Times, 07/09/2023, https://www.ft.com/ (checked: 21/03/2024).
29. ‘Artificial Intelligence Sector Study’, Perspective Economics, 23/03/2023, https://assets.publishing.service.gov.uk/ (checked: 21/03/2024).
30. See the latest figures from 2023 from: ‘The Global AI Index’, Tortoise, No date, https://www.tortoisemedia.com/ (checked: 21/03/2024).
31. ‘£300 million to launch first phase of new AI Research Resource’, UK Research and Innovation, 01/11/2023, https://www.ukri.org/ (checked: 21/03/2024).
32. ‘The Global Financial Centres Index 34’, Financial Centre Futures, 27/09/2023, https://www.longfinance.net/ (checked: 21/03/2024).