What are AI superpowers doing to advance their incorporation of AI into their defence postures, and what are the national security implications for Australia?

By Michael Shoebridge, Director, Defence Strategy and National Security Program, Australian Strategic Policy Institute (ASPI).

This article is an extract from Artificial intelligence: your questions answered, a report published in partnership with the Australian Strategic Policy Institute (ASPI).  

In 2022, there are two kinds of AI ‘superpowers’: companies and states.

The most capable AI national security power will be the state with the closest connections to, and the greatest ability to take advantage of, the corporate AI superpowers. AI capability isn’t simply transferable from one application or sector (for example, search, facial recognition or digital navigation) to another without a deep understanding of the uses and purposes involved and the limitations of the available datasets and resulting machine-learning applications. AI uses in national security look compelling and potentially destabilising, from the insight advantage offered by huge datasets to the control of autonomous systems and rapid decision-making.

AI uses in national security look compelling and potentially destabilising. Photo: iStockPhoto.

Right now, the US is a potential AI superpower, and much of its technical capability comes from US big tech (notably Amazon, Apple, Facebook, Google and IBM), although those capabilities have been developed for particular enterprise purposes. There’s also capability within the highly classified government world, enabling cyber and other security activities. Policies, strategies and principles have lagged behind AI development and application in the big-tech world, a phenomenon best captured by Facebook’s ‘move fast and break things’ mantra.(1) Fortunately, that mindset hasn’t applied to AI in the defence realm, where ‘breaking things’ is less forgivable, given that those things can be people. A key constraint on AI’s application to national security has been the adversarial relationship between government and big tech in the US. That relationship shows some signs of easing but not ending (for example, there’s been a revival of antitrust thinking).(2)

China is the other potential AI superpower, through a combination of its state-centred data model and its own big-tech corporate sector.(3) China is also the state that applies data and technology most widely for particular state purposes, including state surveillance of its population (think Xinjiang and China’s social credit system) and state-centred data laws.(4) To the extent that all data is open to the state and the state can enable its tech-world actors to use it, China has a ‘data advantage’ that should translate into an AI advantage. However, Beijing’s moves to reassert Chinese Communist Party control over big tech risk damaging it.(5)

But data is only part of an AI capability. Applying AI is a multidisciplinary team sport, and it turns out that datasets collected for particular purposes and in particular ways can carry biases and limitations when used for other purposes. That’s an issue for entities with high risk appetites for rolling out AI applications, particularly for military or offensive cyber uses. Those who apply AI to weapon systems without a deep understanding of the intended purpose, the environment and the data’s limitations, and without a level of knowledgeable human participation, are likely to inflict and experience nasty surprises.

Other states and supranational entities (such as the European Union) have capabilities, but not at the scale of the US or China. The US alliance system could enable states such as Australia to both contribute to and draw from US capabilities, while China’s model is likely to remain a national one. Other nations and entities tend to be AI policy- and strategy-heavy,(6) with a large focus on getting ethical principles right,(7) but are short on applied capability that might use those policies and principles.

The national security implications of this for Australia are broad and complicated but, boiled down, mean one thing: if Australia doesn’t partner with and contribute to the US as an AI superpower, it’s likely to be a victim of the Chinese AI superpower and just an AI customer of the US. AUKUS is a step towards this AI partnership for national security.(8)

(1) Hemant Taneja, ‘The era of “move fast and break things” is over’, Harvard Business Review, 22 January 2019, online.

(2) Nicolas Rivero, ‘A cheat sheet to all of the antitrust cases against Big Tech in 2021’, Quartz, 29 September 2021, online.

(3) Mapping China’s tech giants, ASPI, Canberra, 2022, online.

(4) Katja Drinhausen, Vincent Brussee, China’s social credit system in 2021: from fragmentation towards integration, MERICS: Mercator Institute for China Studies, 3 March 2021, online.

(5) Lulu Yilun Chen, Jun Luo, Zheng Li, ‘China crushed Jack Ma, and his fintech rivals are next’, Bloomberg, 24 June 2021, online.

(6) ‘A European approach to artificial intelligence’, European Commission, no date, online.

(7) Denham Sadler, ‘Alan Finkel on AI ethics and law’, InnovationAus.com, 10 December 2019, online.

(8) ‘Joint Leaders Statement on AUKUS’, The White House, 15 September 2021, online.
