The Council on Geostrategy asks six strategic experts (and ChatGPT) to identify the geostrategic implications of recent and likely imminent advances in Artificial Intelligence (AI).
Gabriel Elefteriu, Council on Geostrategy
Risk and opportunity
The long-term impact of AI on the world is unfathomable at this point. It may become by far the most consequential technology ever created by humankind, for good or ill. AI’s transformative potential for the economy, society and defence makes ‘AI power’ (yet to be fully defined) a key measure of national power. AI is fast becoming a primary strategic concern for states.
AI has been incubated by the commercial sector, but the vital need to compete effectively in this area at the strategic level adds to the burden – in terms of resourcing and attention – that overstretched governments already bear. There is a premium, now, on quickly and effectively incorporating AI into government and the wider economy. More open, ‘AI-ready’ countries like Britain have an advantage over more protectionist, highly regulated players.
The highly disruptive and fast-moving nature of AI – particularly through its social and political effects, and the second-order technological surprises it can enable – will increase instability in the international system. But with this shared risk comes also a clear diplomatic opportunity for renewed dialogue among key powers on the future of world order in the age of AI.
Ken Payne, King’s College London
Widening gaps in the geopolitical balance of power
These days a week seems like a long time in AI research, such is the pace of change. There is great uncertainty about what AI can do, and who will best harness it. What is possible? Worry about the breakout potential of ‘God-like’ AI is no longer the preserve of libertarian tech-bros. Now, it is a mainstream topic debated inside the capitals of world powers and on the front pages of major newspapers. It prompted Geoffrey Hinton, a giant of the field, to leave Google and warn of existential risks ahead. However, this risk is currently inflated; AI lacks intrinsic motivation and, so long as it runs on inorganic computers, that is unlikely to change. Instead, the risks ahead come from extraordinarily powerful, but occasionally clumsy, intelligences when tasked by humans.
There are two macro trends occurring: AI employed in tactical military activities, with uncrewed platforms and new concepts, like swarming. And AI used increasingly for strategy – certainly in digital simulations, but also in real-world decision-making, where it will offer new insights for human strategists. Much flows from this. Regulating AI will be challenging, perhaps impossible, given strong incentives to defect from any regime; and there will be dramatic changes in the geopolitical balance of power, because not everyone will be as adept at updating their legacy capabilities. That is unsettling, and perhaps dangerous.
Emma Salisbury, Council on Geostrategy
Revolutionising the military
The central implications of AI for geopolitics will stem from its military use. The nation that can best discern, develop, and deploy AI technologies throughout its armed forces will gain a significant competitive advantage over its adversaries.
A useful way to think about this is to split AI into two archetypes – those that enable machines to work with and augment human capabilities, and those that employ machine cognition on its own. The former are far closer to fruition, and will be what is seen sooner on the battlefield and in operational logistics. Human-machine teaming can potentially augment a huge range of processes, from missile targeting to predictive maintenance to planning complex logistics networks.
Intelligences that employ machine cognition on their own are further away, but also more concerning. This capability is fundamentally alien, and it can be impossible to understand how the intelligence has come to a decision. This becomes particularly salient when considering autonomy on the battlefield – who would be accountable if a machine took an action in conflict that would be considered immoral? This level of AI has huge potential for military effectiveness, but governments must be careful not to underestimate the challenges that will come with it.
Zachary Spiro, Flint Global
Increased export and trade controls
The transformational applications of AI should see policymakers devote even more attention to the supply chain for its hardware. More widespread and strategic uses for AI means more demand for the cutting-edge chips and computing capacity required to run such systems. As a result, governments will likely take stronger action over time to hamper rivals’ efforts to access or develop such products, while attempting to bolster national efforts to do the same.
The wider and evolving use of export or trade controls can be expected, perhaps similar to US sanctions on the Chinese semiconductor industry last October. Governments will also police intellectual property transfer more diligently, and likely place further obstacles on investment flowing between different blocs. Much of this already applies to direct trade in key technology families, but may also expand to the supply chains and intellectual property of enablers, such as large-scale data storage technology and high-speed connectivity, as well as equipment hosted in ‘neutral’ countries but used for another’s benefit.
Governments will also come under increasing pressure to directly compensate – and strengthen – affected industries. While many of these trends are already visible, the capabilities unlocked by AI underscore why they will continue to be at the forefront of global geopolitics.
Rian Whitton, Bismarck Analysis
Cheap, abundant energy will be key
Today, developments in advanced computing are heavily reliant on AI software frameworks such as Meta’s PyTorch and Google’s TensorFlow. This software is in turn reliant on computing accelerators designed by US giant Nvidia, who are themselves reliant on the Taiwan Semiconductor Manufacturing Company to manufacture the world’s most advanced chips. Modern data centres used to train AI programmes like ChatGPT rely on thousands, soon tens of thousands, of increasingly expensive graphics processing units.
The deployment of this hardware in data centres is going to create a surge in demand for cheap electricity. The increased compute intensity of AI training is likely to override potential efficiencies in energy use. By the middle of the century, high-powered computing will demand more energy than modern-day industries like steel or aerospace. The countries that have the most energy abundance, and can provide electricity to data centres at the cheapest rate, will be far more likely to turn AI into geopolitical success than those with scarce energy resources. Britain, which has easily the highest industrial electricity prices of any advanced nation, must understand that improving the electricity supply precedes reaping the benefits of AI.
Andrew Yeh, China Strategic Risks Institute
Enabling authoritarian oppression
The PRC’s advances in AI-driven surveillance technologies not only further the repression of its own citizens, but also risk strengthening the position of authoritarian regimes globally.
The PRC’s surveillance state is driving innovative applications of new AI technologies. This is particularly notable in Xinjiang, where the CCP has staged a brutal crackdown on the Uyghurs and other ethnic and religious minority groups. Chinese companies have trained facial recognition technologies to racially profile Uyghurs in order to better monitor their movements, while Uyghurs are used as test cases for emotion-detection software in police stations. The Xinjiang police force’s Integrated Joint Operations Platform – which aggregates data on an individual’s biometric details, internet usage and even religious activity – is used to launch preemptive investigations into Uyghurs without the need for evidence of criminal behaviour.
These advances not only raise concerns for human rights and civil liberties in the PRC, but could also enable authoritarian regimes to suppress dissent more effectively and in doing so strengthen their position strategically. Technology linked to Chinese companies is used for AI surveillance in sixty-three countries. Procuring these technologies from the PRC could also increase authoritarian states’ dependency on Beijing, particularly for those already tied into its Belt and Road Initiative. To counter this influence, free and open countries must start by promoting global standards on the development of AI which normalise user privacy and minimise state surveillance.
What AI ‘thinks’
Recent AI advances have significant geostrategic implications. Firstly, AI revolutionises military capabilities, enhancing autonomous weapons, cyber warfare, and intelligence gathering. This shifts power dynamics, favoring nations with advanced AI. Secondly, AI impacts economic competitiveness, boosting productivity, resource allocation, and innovation. Failure to adopt AI risks falling behind in the race for technological supremacy.
AI also affects information warfare and cybersecurity, challenging authenticity and reliability. Robust defences are needed against AI-driven disinformation and cyber attacks. Moreover, AI can deepen global inequality, widening the technological gap and disparities in development. Bridging this requires collaboration, knowledge-sharing, and equitable distribution of AI resources. Ethical concerns arise, including privacy, accountability, and the risks of AI-driven decision-making. International norms and regulations are necessary for responsible AI development. In conclusion, AI advances have geostrategic implications in military, economic, information, social, and ethical domains. Governments must navigate these challenges, harnessing AI’s benefits while mitigating risks, promoting cooperation, and responsible development for the collective interests of humanity.