Getting regulation right: approaches to improving Australia’s cybersecurity

What’s the problem?

Cybersecurity is a global challenge and one of the most significant issues affecting Australia’s economy and national security. On the one hand, poor cybersecurity presents a risk to the interconnected digital systems on which we increasingly rely; on the other, well-managed cybersecurity provides an opportunity to build trust and advantage by accelerating digital transformation. Cyber threats originate from a diverse range of sources and require an equally diverse set of actions to mitigate them effectively. A common theme, however, is that much better cyber risk management is needed: the current operation of the free market isn’t consistently driving all of the required behaviours or actions.

Regulation can provide a powerful mechanism to modify incentives and change behaviours. However, securing cyberspace depends on the intersection of many factors—technical, social and economic. Current regulations are a patchwork of general, cyber-specific and sector-specific measures whose lack of cohesion causes both overlaps and gaps. The resulting complexity makes it difficult to find an approach that genuinely improves overall security while minimising unwanted side effects. That requires analysing the interconnected factors that determine the net effectiveness of cybersecurity regulations.

Furthermore, the pace of technological change is now so fast that, even if a regulation succeeds when first implemented, it must be appropriately future-proofed to avoid becoming irrelevant within even a few months. Recent rapid developments in artificial intelligence illustrate the kind of risk that will need to be anticipated in any changes to the regulatory regimes.

What’s the solution?

Regulatory interventions have an important role to play as one part of a strategy to uplift Australia’s cybersecurity, if done in the right way. This paper presents a framework for the government to make appropriate decisions about whether and how to regulate. That must start with defining which aspect of the cybersecurity challenge it seeks to address and the specific intended long-term impact. In cybersecurity, the most appropriate metrics or measures that regulation seeks to influence should, where possible, be risk-based, rather than specific technical measures. This is because the actual technical measures required are dependent on the individual context of each situation, will change over time, and are effective only when combined with people and process measures. The impact of the interventions on those metrics needs to be readily measurable in order to enable reliable enforcement at acceptable cost—both direct financial cost and indirect opportunity costs.

There’s often a focus on regulation to compel entities to do or not do something. However, compulsion is only one form of regulation; others, such as facilitation or encouragement, should be considered first, with compulsion reserved for careful and strategic use.

Detailed implementation of cybersecurity regulations should use a co-design process with the relevant stakeholders, who will bring perspectives, experiences and knowledge that government alone does not have. It should also draw upon relevant experience of international partners, not only to benefit from lessons learned, but also to minimise the compliance burden for global companies and operators. Finally, in recognising the complexity of the problem, an iterative approach that measures impact and adjusts approaches to enhance effectiveness, incorporate lessons learned and absorb technological advances needs to be planned from the outset.

De-risking authoritarian AI

A balanced approach to protecting our digital ecosystems

What’s the problem?

Artificial intelligence (AI)–enabled systems make many invisible decisions affecting our health, safety and wealth. They shape what we see, think, feel and choose, they calculate our access to financial benefits as well as our transgressions, and now they can generate complex text, images and code just as a human can, but much faster.

So it’s unsurprising that moves are afoot across democracies to regulate AI’s impact on our individual rights and economic security, notably in the European Union (EU).

But, if we’re wary about AI, we should be even more circumspect about AI-enabled products and services from authoritarian countries that share neither our values nor our interests. And, for the foreseeable future, that means the People’s Republic of China (PRC)—a revisionist authoritarian power demonstrably hostile to democracy and the rules-based international order, which routinely uses AI to strengthen its own political and social stability at the expense of individual human rights. In contrast to other authoritarian countries such as Russia, Iran and North Korea, China is a technology superpower with global capacity and ambitions and is a major exporter of effective, cost-competitive AI-enabled technology into democracies.

In a technology-enabled world, the threats come at us ‘at a pace, scale and reach that is unprecedented’.1 And, if our reliance on AI is also without precedent, so too is the opportunity—via the magic of the internet and software updates—for remote, large-scale foreign interference, espionage and sabotage through AI-enabled industrial and consumer goods and services inside democracies’ digital ecosystems. AI systems are embedded in our homes, workplaces and essential services. More and more, we trust them to operate as advertised, always be there for us and keep our secrets.

Notwithstanding the honourable intentions of individual vendors of Chinese AI-enabled products and services, they’re subject to direction from PRC security and intelligence agencies, so we in the democracies need to ask ourselves: against the background of growing strategic competition with China, how much risk are we willing to bear?

We should worry about three kinds of Chinese AI-enabled technology:

  1. products and services (often physical infrastructure), where PRC ownership exposes democracies to risks of espionage (notably surveillance and data theft) and sabotage (disruption and denial of products and services)
  2. AI-enabled technology that facilitates foreign interference (malign covert influence on behalf of a foreign power), the most pervasive example being TikTok
  3. ‘Large language model AI’ and other emerging generative AI systems—a future threat that we need to start thinking about now.

While we should address the risks in all three areas, this report focuses more on the first category (and indeed looks at TikTok through the prism of the espionage and sabotage risks that such an app poses).

The underlying dynamic with Chinese AI-enabled products and services is the same as that which prompted concern over Chinese 5G vendors: the PRC Government has the capability to compel its companies to follow its directions, it has the opportunity afforded by the presence of Chinese AI-enabled products and services in our digital ecosystems, and it has demonstrated malign intent towards the democracies.

But this is a more subtle and complex problem than deciding whether to ban Chinese companies from participating in 5G networks. Telecommunications networks are the nervous systems that run down the spine of our digital ecosystems; they’re strategic points of vulnerability for all digital technologies. Protecting them from foreign intelligence agencies is a no-brainer and worth the economic and political costs. And those costs are bounded because 5G is a small group of easily identifiable technologies.

In contrast, AI is a constellation of technologies and techniques embedded in thousands of applications, products and services, so the task is to identify where on the spectrum between national-security threat and moral panic each of these products sits. And then pick the fights that really matter.

What’s the solution?

A general prohibition on all Chinese AI-enabled technology would be extremely costly and disruptive. Many businesses and researchers in the democracies want to continue collaborating on Chinese AI-enabled products because it helps them to innovate, build better products, offer cheaper services and publish scientific breakthroughs. The policy goal here is to take prudent steps to protect our digital ecosystems, not to economically decouple from China.

What’s needed is a new three-step framework to identify, triage and manage the riskiest products and services. The intent is similar to that proposed in the recently introduced draft US RESTRICT Act, which seeks to identify and mitigate foreign threats to information and communications technology (ICT) products and services, although the focus here is on teasing out the most serious threats.

Step 1: Audit. Identify the AI systems whose purpose and functionality concern us most. What’s the potential scale of our exposure to this product or service? How critical is this system to essential services, public health and safety, democratic processes, open markets, freedom of speech and the rule of law? What are the levels of dependency and redundancy should it be compromised or unavailable?

Step 2: Red Team. Anyone can identify the risk of embedding many PRC-made technologies into sensitive locations, such as government infrastructure, but, in other cases, the level of risk will be unclear. For those instances, you need to set a thief to catch a thief. What could a team of specialists do if they had privileged access to (that is, ‘owned’) a candidate system identified in Step 1—people with experience in intelligence operations, cybersecurity and perhaps military planning, combined with relevant technical subject-matter experts? This is the real-world test because all intelligence operations cost time and money, and some points of presence in a target ecosystem offer more scalable and effective opportunities than others. PRC-made cameras and drones in sensitive locations are a legitimate concern, but crippling supply chains through accessing ship-to-shore cranes would be devastating.

For example, we know that TikTok data can be accessed by PRC agencies and reportedly can also reveal a user’s location, so it’s obvious that military and government officials shouldn’t use the app. Journalists should think carefully about this, too. Beyond that, the merits of a general ban on technical security grounds are a bit murky. Can our Red Team use the app to jump onto connected mobiles and IT systems to plant spying malware? What system mitigations could stop them getting access to data on connected systems? If the team revealed serious vulnerabilities that can’t be mitigated, a general ban might be appropriate.

Step 3: Regulate. Decide what to do about a system identified as ‘high risk’. Treatment measures might range from prohibiting Chinese AI-enabled technology in some parts of the network, through a ban on government procurement or use, to a general prohibition. Short of that, governments could insist on measures to mitigate the identified risk or dilute it through redundancy arrangements. And, in many cases, public education efforts along the lines of the new UK National Protective Security Authority may be an appropriate alternative to regulation.
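Purely as an illustrative sketch, and not part of the report’s framework, the triage logic of Steps 1–3 could be captured in a simple record and decision rule. Every field name, weight and threshold below is hypothetical and invented for the example.

```python
from dataclasses import dataclass


@dataclass
class CandidateSystem:
    """Hypothetical audit record for Step 1; all fields are illustrative only."""
    name: str
    exposure_scale: int     # 1 (niche) to 5 (ubiquitous across the digital ecosystem)
    criticality: int        # 1 (low) to 5 (essential services, safety, democratic processes)
    dependency: int         # 1 (easily substituted) to 5 (no practical redundancy)
    red_team_severity: int  # Step 2 output: 1 (nuisance) to 5 (systemic compromise)
    mitigable: bool         # Step 2 output: can the identified risks be mitigated?


def triage(system: CandidateSystem) -> str:
    """Step 3: map audit and red-team findings to an illustrative treatment band."""
    score = (system.exposure_scale + system.criticality
             + system.dependency + system.red_team_severity)
    if system.red_team_severity >= 4 and not system.mitigable:
        return "consider a general prohibition"
    if score >= 14:
        return "restrict in sensitive networks and government procurement"
    if score >= 9:
        return "mandate mitigations and redundancy arrangements"
    return "public education and ongoing monitoring"


if __name__ == "__main__":
    crane_system = CandidateSystem(
        name="hypothetical ship-to-shore crane control system",
        exposure_scale=4, criticality=5, dependency=4,
        red_team_severity=5, mitigable=False,
    )
    print(crane_system.name, "->", triage(crane_system))
    # -> consider a general prohibition
```

The rule deliberately lets an unmitigable red-team finding override the aggregate score, mirroring the report’s point that Step 2 is the real-world test.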

The democracies need to think harder about Chinese AI-enabled technology in our digital ecosystems. But we shouldn’t overreact: our approach to regulation should be anxious but selective.

Gaming Public Opinion

The CCP’s increasingly sophisticated cyber-enabled influence operations

What’s the problem?

The Chinese Communist Party’s (CCP’s) embrace of large-scale online influence operations and spreading of disinformation on Western social-media platforms has escalated since the first major attribution from Silicon Valley companies in 2019. While Chinese public diplomacy may have shifted to a softer tone in 2023 after many years of wolf-warrior online rhetoric, the Chinese Government continues to conduct global covert cyber-enabled influence operations. Those operations are now more frequent, increasingly sophisticated and increasingly effective in supporting the CCP’s strategic goals. They focus on disrupting the domestic, foreign, security and defence policies of foreign countries, and most of all they target democracies.

In targeted democracies, most political leaders, policymakers, businesses, civil society groups and publics currently have little understanding of how the CCP engages in clandestine activities online in their countries, even though this activity is escalating and evolving quickly. The stakes are high for democracies, given the indispensability of the internet and their reliance on open online spaces, free from interference. Despite years of monitoring of covert CCP cyber-enabled influence operations by social-media platforms, governments and research institutes such as ASPI, definitive public attribution of the actors driving these activities is rare. Covert online operations, by design, are difficult to detect and attribute to state actors.

Social-media platforms and governments struggle to devote adequate resources to identifying, preventing and deterring increasing levels of malicious activity, and sometimes they don’t want to name and shame the Chinese Government for political, economic and/or commercial reasons. 

But when possible, public attribution can play a larger role in deterring malicious actors. Understanding which Chinese Government entities are conducting such operations, and their underlying doctrine, is essential to constructing adequate counter-interference and deterrence strategies. The value of public attribution also goes beyond deterrence. For example, public attribution helps civil society and businesses, which are often the intended targets of online influence operations, to understand the threat landscape and build resilience against malicious activities. It’s also important that general publics are given basic information so that they’re informed about the contemporary security challenges a country is facing, and public attribution helps to provide that information.

ASPI research in this report—which included specialised data collection spanning Twitter, Facebook, Reddit, Sina Weibo and ByteDance products—reveals a previously unreported CCP cyber-enabled influence operation linked to the Spamouflage network, which is using inauthentic accounts to spread claims that the US is irresponsibly conducting cyber-espionage operations against China and other countries. As a part of this research, we geolocated some of the operators of that network to Yancheng in Jiangsu Province, and we show it’s possible that at least some of the operators behind Spamouflage are part of the Yancheng Public Security Bureau.

The CCP’s clandestine efforts to influence international public opinion rely on a very different toolkit today from the tactics it used just a few years ago. CCP cyber-enabled influence operations remain part of a broader strategy to shape global public opinion and enhance China’s ‘international discourse power’. Those efforts have evolved to nudge public opinion towards positions more favourable to the CCP and to interfere in the political decision-making processes of other countries. A greater focus on covert social-media accounts allows the CCP to pursue its interests under plausibly deniable cover.

Emerging technologies and China’s indigenous cybersecurity industry are also creating new capabilities for the CCP to continue operating clandestinely on Western social platforms.

Left unaddressed, the CCP’s increasing investment in cyber-enabled influence operations threatens to successfully influence the economic decision-making of political elites, destabilise social cohesion during times of crisis, sow distrust of leaders or democratic institutions and processes, fracture alliances and partnerships, and deter journalists, researchers and activists from sharing accurate information about China.

What’s the solution?

This report provides the first public empirical review of the CCP’s clandestine online networks on social-media platforms.

We outline seven key policy recommendations for governments and social-media platforms (further details are on page 39):

  1. Social-media platforms should take advantage of the digital infrastructure they control to more effectively deter cyber-enabled influence operations. For example, platforms could remove access to account analytics for suspicious accounts that breach platform policies, making it difficult for identified malicious actors to measure the effectiveness of their influence operations.
  2. Social-media platforms should pursue more innovative information-sharing to combat cyber-enabled influence operations. For example, social-media platforms could share more information about the digital infrastructure involved in influence operations, without revealing personally identifiable information.
  3. Governments should change their language in speeches and policy documents to describe social-media platforms as critical infrastructure. This would acknowledge the existing importance of those platforms in democracies and would signal to malicious actors that, like cyber operations against the power grid, efforts to interfere in the information ecosystem will be met with proportionate responses.
  4. Governments should review foreign interference legislation and consider mandating that social-media platforms disclose state-backed influence operations and other transparency reporting to increase the public’s threat awareness.
  5. Public diplomacy should be a pillar of any counter-malign-influence strategy. Government leaders and diplomats should name and shame attributable malign cyber-enabled influence operations, and the state and non-state entities involved in conducting them, in order to deter those activities.
  6. Partners and allies should strengthen intelligence diplomacy on this emerging security challenge and seek to share more intelligence with one another on such influence operations. Strong open-source intelligence skills and collection capabilities are a crucial part of investigating and attributing these operations, and the low classification of such material should make intelligence sharing easier.
  7. Governments should support further research on influence operations and other hybrid threats. To build broader situational awareness of hybrid threats across the region, including malign influence operations, democracies should establish an Indo-Pacific hybrid threats centre.

Key findings

The CCP has developed a sophisticated, persistent capability to sustain coordinated networks of personas on social-media platforms to spread disinformation, wage public-opinion warfare and support its own diplomatic messaging, economic coercion and other levers of state power.

That capability is evolving and has expanded to push a wider range of narratives to a growing international audience, with the Indo-Pacific a key target.

The CCP has used these cyber-enabled influence operations to seek to interfere in US politics, Australian politics and national-security decisions; to undermine the Quad and Japanese defence policies; and to impose costs on Australian and North American rare-earth mining companies.

  • CCP cyber-enabled influence operations are probably conducted, in parallel if not collectively, by multiple Chinese party-state agencies. Those agencies appear at times to collaborate with private Chinese companies. The most notable actors that are likely to be conducting such operations include the People’s Liberation Army’s Strategic Support Force (PLASSF), which conducts cyber operations as part of the PLA’s political warfare; the Ministry of State Security (MSS), which conducts covert operations for state security; the Central Propaganda Department, which oversees China’s domestic and foreign propaganda efforts; the Ministry of Public Security (MPS), which enforces China’s internet laws; and the Cyberspace Administration of China (CAC), which regulates China’s internet ecosystem. Chinese state media outlets and Ministry of Foreign Affairs (MFA) officials are also running clandestine operations that seek to amplify their own overt propaganda and influence activities.
  • Starting in 2021, a previously unreported CCP cyber-enabled influence operation has been disseminating narratives that the CIA and National Security Agency are ‘irresponsibly conducting cyber-espionage operations against China and other countries’. ASPI isn’t in a position to verify US intelligence agency activities. However, the means used to disseminate the counter-US narrative—the campaign appears to be driven in part by the pro-CCP coordinated inauthentic network known as Spamouflage—strongly suggest an influence operation. ASPI’s research suggests that at least some operators behind the campaign are affiliated with the MPS, or are ‘internet commentators’ hired by the CAC, which may have named this campaign ‘Operation Honey Badger’. The evidence indicates that the Chinese Government probably intended to influence Southeast Asian markets and other countries involved in the Belt and Road Initiative to support the expansion of Chinese cybersecurity companies in those regions.
  • Chinese cybersecurity company Qi An Xin (奇安信) appears at times to be supporting the influence operation. The company has the capacity to seed disinformation about advanced persistent threats to its clients in Southeast Asia and other countries. It’s deeply connected with Chinese intelligence, military and security services and plays an important role in China’s cybersecurity and state-security strategies.

Quad Technology Business and Investment Forum outcomes report

The Quad has prioritised supporting and guiding investment in critical and emerging technology projects consistent with its intent to maintain a free and open Indo-Pacific.

Governments cannot do this alone. Success requires a concerted and coordinated effort between governments, industry, private capital partners and civil society.

To explore opportunities and challenges to this success, the Quad Critical and Emerging Technology Working Group convened the inaugural Quad Technology Business and Investment Forum in Sydney, Australia, on 2 December 2022. The forum was supported by the Australian Department of Home Affairs and delivered by the Australian Strategic Policy Institute (ASPI).

The forum brought together senior Quad public- and private-sector leaders, laid the foundations for enhanced private–public collaboration and canvassed a range of practical action-oriented initiatives. Sessions were designed to identify the key challenges and opportunities Quad member nations face in developing coordinated strategic, targeted investment into critical and emerging technology.

Forum attendees overwhelmingly endorsed the sentiment that, with governments, industry, investors and civil society working better together, our countries can collectively lead the world in quantum technology, artificial intelligence, biotechnology and other critical and emerging technologies.

This report reflects the discussions and key findings from the forum and recommends that the Quad Critical and Emerging Technology Working Group establish an Industry Engagement Sub-Group to develop and deliver a Quad Critical and Emerging Technology Forward Work Plan.

Seeking to undermine democracy and partnerships

How the CCP is influencing the Pacific islands information environment

What’s the problem?

The Chinese Communist Party (CCP) is conducting coordinated information operations in Pacific island countries (PICs). Those operations are designed to influence political elites, public discourse and political sentiment regarding existing partnerships with Western democracies. Our research shows how the CCP frequently seeks to capitalise on regional events, announcements and engagements to push its own narratives, many of which are aimed at undermining some of the region’s key partnerships.

This report examines three significant events and developments:

  • the establishment of AUKUS in 2021
  • the CCP’s recent efforts to sign a region-wide security agreement
  • the 2022 Pacific Islands Forum held in Fiji.

This research, including these three case studies, shows how the CCP uses tailored, reactive messaging in response to regional events and analyses the effectiveness of that messaging in shifting public discourse online.

This report also highlights a series of information channels used by the CCP to push narratives in support of the party’s regional objectives in the Pacific. Those information channels include Chinese state media, CCP publications and statements in local media, and publications by local journalists connected to CCP-linked groups.1

There’s growing recognition of the information operations, misinformation and disinformation being spread globally at the CCP’s direction. Although the CCP’s information operations have had little demonstrated effectiveness in shifting online public sentiment in the case studies examined in this report, they’ve previously proven effective in influencing public discourse and political elites in the Pacific.2 Analysing the long-term impact of those operations, so that governments and social media platforms can make informed policy decisions, requires greater measurement and understanding of current operations and local sentiment.

What’s the solution?

The CCP’s presence in the information environment is expanding across the Pacific through online and social media platforms, local and China-based training opportunities, and greater television and short-wave radio programming.3 However, the impact of this growing footprint in the information environment remains largely unexplored and unaddressed by policymakers in the Pacific and in the partner countries that are frequently targeted by the CCP’s information operations.

Pacific partners, including Australia, the US, New Zealand, Japan, the UK and the European Union, need to enhance partnerships with Pacific island media outlets and online news-forum managers in order to build a stronger, more resilient media industry that will be less vulnerable to disinformation and to pressure exerted by the CCP. This includes further assistance in hiring, training and retaining high-quality professional journalists and media executives, and providing financial support without conditions in order to uphold media freedom in the Pacific. Training should also be offered to online discussion-forum managers who share news content, to help counter the spread of disinformation and misinformation in public online groups. The data analysis in this report highlights a need for policymakers and platforms to invest more resources in countering CCP information operations in Melanesia, which is shown to be more susceptible to those operations.

As part of a targeted training package, Pacific island media and security institutions, such as the Pacific Fusion Centre, should receive further training on identifying disinformation and coordinated information operations to help build media resilience. For that training to be effective, governments should fund additional research into the actors and activities affecting the Pacific islands information environment, including climate-change and election disinformation and misinformation, and foreign influence activities.

Information sharing among PICs’ media institutions would build greater regional understanding of CCP influence in the information environment and other online harms and malign activity. ASPI has also previously proposed that an Indo-Pacific hybrid threats centre would help regional governments, businesses and civil society better understand and counter those threats.4

Pacific partners, particularly Australia and the US, need to be more effective and transparent in communicating how aid delivered to the region is benefiting PICs and building people-to-people links. Locally based diplomats need to work more closely with Pacific media to contextualise information from press releases and statements and give PIC audiences a better understanding of the benefits delivered by Western governments’ assistance. This includes greater transparency on the provision of aid in the region. Doing so will debunk some of the CCP’s narratives regarding Western support and legitimacy in the region.

  1. A number of local journalists and media contributors have connections to CCP-linked entities, such as Pacific friendship associations. The connections between friendship associations and CCP influence are described in Anne-Marie Brady, ‘Australia and its partners must bring the Pacific into the fold on Chinese interference’, The Strategist, 21 April 2022. ↩︎
  2. Blake Johnson, Miah Hammond-Errey, Daria Impiombato, Albert Zhang, Joshua Dunne, Suppressing the truth and spreading lies: how the CCP is influencing Solomon Islands’ information environment, ASPI, Canberra. ↩︎
  3. Richard Herr, Chinese influence in the Pacific islands: the yin and yang of soft power, ASPI, Canberra, 30 April 2019, online; Denghua Zhang, Amanda Watson, ‘China’s media strategy in the Pacific’, In Brief 2020/29, Department of Pacific Affairs, Australian National University, 26 March 2021, online; Dorothy Wickham, ‘The lesson from my trip to China? Solomon Islands not ready to deal with the giant’, The Guardian, 23 December 2019. ↩︎
  4. Lesley Seebeck, Emily Williams, Jacob Wallis, Countering the Hydra: a proposal for an Indo-Pacific hybrid threat centre, ASPI, Canberra, 7 June 2022. ↩︎

ASPI’s Critical Technology Tracker

ASPI’s Critical Technology Tracker – The global race for future power

The Critical Technology Tracker is a large data-driven project that now covers 64 critical technologies spanning defence, space, energy, the environment, artificial intelligence, biotechnology, robotics, cyber, computing, advanced materials and key quantum technology areas. It provides a leading indicator of a country’s research performance, strategic intent and potential future science and technology capability.

It was first launched on 1 March 2023 and underwent a major expansion on 28 August 2024, which took the dataset from five years (2018–2022) to 21 years (2003–2023). Explore the website and the broader project here.

Governments and organisations interested in supporting this ongoing program of work, including further expansions and the addition of new technologies, can contact: criticaltech@aspi.org.au.

What’s the problem?

Western democracies are losing the global technological competition, including the race for scientific and research breakthroughs, and the ability to retain global talent—crucial ingredients that underpin the development and control of the world’s most important technologies, including those that don’t yet exist.

Our research reveals that China has built the foundations to position itself as the world’s leading science and technology superpower, by establishing a sometimes stunning lead in high-impact research across the majority of critical and emerging technology domains.

China’s global lead extends to 37 out of 44 technologies that ASPI is now tracking, covering a range of crucial technology fields spanning defence, space, robotics, energy, the environment, biotechnology, artificial intelligence (AI), advanced materials and key quantum technology areas.1 The Critical Technology Tracker shows that, for some technologies, all of the world’s top 10 leading research institutions are based in China and are collectively generating nine times more high-impact research papers than the second-ranked country (most often the US). Notably, the Chinese Academy of Sciences ranks highly (and often first or second) across many of the 44 technologies included in the Critical Technology Tracker. We also see China’s efforts being bolstered through talent and knowledge import: one-fifth of its high-impact papers are being authored by researchers with postgraduate training in a Five-Eyes country.2 China’s lead is the product of deliberate design and long-term policy planning, as repeatedly outlined by Xi Jinping and his predecessors.3

A key area in which China excels is defence and space-related technologies. China’s strides in nuclear-capable hypersonic missiles reportedly took US intelligence by surprise in August 2021.4

Had a tool such as ASPI’s Critical Technology Tracker been collecting and analysing this data two years ago, Beijing’s strong interest and leading research performance in this area would have been more easily identified, and such technological advances would have been less surprising. That’s because, according to our data analysis, over the past five years, China generated 48.49% of the world’s high-impact research papers into advanced aircraft engines, including hypersonics, and it hosts seven of the world’s top 10 research institutions in this topic area.
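As a rough illustration only, and not ASPI’s actual methodology or data, the two headline measures quoted above (a country’s share of high-impact papers in a technology area, and which institutions top the rankings) could be derived from a paper-level dataset along the following lines. The records and field layout are invented for the example.

```python
from collections import Counter

# Hypothetical paper-level records: (technology area, lead institution, country, is_high_impact)
papers = [
    ("advanced aircraft engines", "Institution A", "China", True),
    ("advanced aircraft engines", "Institution B", "China", True),
    ("advanced aircraft engines", "Institution C", "US", True),
    ("advanced aircraft engines", "Institution D", "UK", False),
]


def country_shares(records, technology):
    """Share of high-impact papers in a technology area, by country."""
    high_impact = [r for r in records if r[0] == technology and r[3]]
    counts = Counter(r[2] for r in high_impact)
    total = sum(counts.values())
    return {country: n / total for country, n in counts.items()}


def top_institutions(records, technology, n=10):
    """The n institutions producing the most high-impact papers in a technology area."""
    high_impact = [r for r in records if r[0] == technology and r[3]]
    return Counter(r[1] for r in high_impact).most_common(n)


print(country_shares(papers, "advanced aircraft engines"))   # {'China': 0.666..., 'US': 0.333...}
print(top_institutions(papers, "advanced aircraft engines"))
```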

The US comes second in the majority of the 44 technologies examined in the Critical Technology Tracker. The US currently leads in areas such as high-performance computing, quantum computing and vaccines. Our dataset reveals that there’s a large gap between China and the US, as the two leading countries, and everyone else. The data then indicates a small, second-tier group of countries led by India and the UK; other countries that regularly appear in this group across many technological fields include South Korea, Germany, Australia, Italy and, less often, Japan.

This project—including some of its more surprising findings—further highlights the gap in our understanding of the critical technology ecosystem, including its current trajectory. It’s important that we seek to fill this gap so we don’t face a future in which one or two countries dominate new and emerging industries (something that recently occurred in 5G technologies) and so countries have ongoing access to trusted and secure critical technology supply chains.

China’s overall research lead, and its dominant concentration of expertise across a range of strategic sectors, has short- and long-term implications for democratic nations. In the long term, China’s leading research position means that it has set itself up to excel not just in current technological development in almost all sectors, but also in future technologies that don’t yet exist. Unchecked, this could shift not just technological development and control but also global power and influence to an authoritarian state where the development, testing and application of emerging, critical and military technologies isn’t open and transparent and where it can’t be scrutinised by independent civil society and media.

In the more immediate term, that lead—coupled with successful strategies for translating research breakthroughs to commercial systems and products that are fed into an efficient manufacturing base—could allow China to gain a stranglehold on the global supply of certain critical technologies.

Such risks are exacerbated because of the willingness of the Chinese Communist Party (CCP) to use coercive techniques5 outside of the global rules-based order to punish governments and businesses, including withholding the supply of critical technologies.6

What’s the solution?

These findings should be a wake-up call for democratic nations, who must rapidly pursue a strategic critical technology step-up.

Governments around the world should work both collaboratively and individually to catch up to China and, more broadly, they must pay greater attention to the world’s centre of technological innovation and strategic competition: the Indo-Pacific. While China is in front, it’s important for democracies to take stock of the power of their potential aggregate lead and the collective strengths of regions and groupings (the EU, the Quad and AUKUS, to name just a few). But such aggregate leads will only be fully realised through far deeper collaboration between partners and allies, greater investment in areas including R&D, talent and commercialisation, and more focused intelligence strategies. Finally, governments must make more space for new, bigger and more creative policy ideas: the required step-up in performance demands no less.

Partners and allies need to step up and seriously consider measures such as sovereign wealth funds, at 0.5%–0.7% of gross national income, to provide venture capital, research and scale-up funding, with a sizeable portion reserved for high-risk, high-reward ‘moonshots’ (big ideas). Governments should plan for:

  • technology visas, ‘friend-shoring’ and R&D grants between allies
  • a revitalisation of the university sector through specialised scholarships for students and technologists working at the forefront of critical technology research
  • restructuring taxation systems to divert private capital towards venture capital and scale-up efforts for promising new technologies
  • new public–private partnerships and centres of excellence to help to foster greater commercialisation opportunities.

Intelligence communities have a pivotal role to play in both informing decision-makers and building capability. One recommendation we make is that Five-Eyes countries, along with Japan, build an intelligence analytical centre focused on China and technology (starting with open-source intelligence).

We outline 23 policy recommendations for partners and allies to act on collaboratively and individually. They span four themes: investment and talent; global partnerships; intelligence; and moonshots. While China is in front, it’s important for democracies to take stock of their combined and complementary strengths; when added up, they hold the aggregate lead in many technology areas.

  1. Visit the Critical Technology Tracker site for a list and explanation of these 44 technologies: techtracker.aspi.org.au/list-of-technologies. ↩︎
  2. Australian Signals Directorate, ‘Intelligence partnerships’, Australian Government, 2023 ↩︎
  3. See ‘China’s science and technology vision’ on page 14. ↩︎
  4. Demetri Sevastopulo, Kathrin Hille, ‘China tests new space capability with hypersonic missile’, Financial Times, 17 October 2021 ↩︎
  5. Fergus Hunter, Daria Impiombato, Yvonne Lau, Adam Triggs, Albert Zhang, Urmika Deb, ‘Countering China’s coercive diplomacy: prioritising economic security, sovereignty and the rules-based order’, ASPI, Canberra, 22 February 2023 ↩︎
  6. Fergus Hanson, Emilia Currey, Tracy Beattie, The Chinese Communist Party’s coercive diplomacy, ASPI, Canberra, 1 September 2020, online; State Department, China’s coercive tactics abroad, US Government, no date, online; Bonnie S Glaser, Time for collective pushback against China’s economic coercion, Center for Strategic and International Studies (CSIS), 13 January 2021, online; Marcin Szczepanski, China’s economic coercion: evolution, characteristics and countermeasures, briefing, European Parliament, 15 November 2022, online; Mercy A Kuo, ‘Understanding (and managing) China’s economic coercion’, The Diplomat, 17 October 2022. ↩︎

Countering China’s coercive diplomacy

Countering China’s coercive diplomacy: prioritising economic security, sovereignty and the rules-based order

What’s the problem?

The People’s Republic of China (PRC) is increasingly using a range of economic and non-economic tools to punish, influence and deter foreign governments in its foreign relations. Coercive actions have become a key part of the PRC’s toolkit as it takes a more assertive position in international disputes and seeks to reshape the global order in its favour.

This research finds that the PRC’s use of coercive tactics now sits at levels well above those seen a decade ago, or even five years ago. The year 2020 marked a peak, and trade restrictions and state-issued threats have become favoured methods. The tactics have been used in disputes over governments’ decisions on human rights, national security and diplomatic relations.

The PRC’s tactics have had mixed success in affecting the policies of target governments; most governments have stood firm, but some have acquiesced. Undeniably, the tactics are harming certain businesses, challenging sovereign decision-making and weakening economic security. The tactics also undermine the rules-based international order and probably serve as a deterrent to governments, businesses and civil-society groups that have witnessed the PRC’s coercion of others and don’t want to become future targets. This can mean that decision-makers, fearing that punishment, are failing to protect key interests, to stand up for human rights or to align with other states on important regional and international issues.

What’s the solution?

Governments must pursue a deterrence strategy that seeks to change the PRC’s thinking on coercive tactics by reducing the perceived benefits and increasing the costs. The strategy should be based on policies that build deterrence in three forms: resilience, denial and punishment. This strategy should be pursued through national, minilateral and multilateral channels.

Building resilience is essential to counter coercion, but it isn’t a complete solution, so we must also look at interventions that enhance deterrence by denial and punishment. States must pursue national efforts to build deterrence, but alone they’re unlikely to prevail against more powerful aggressors, so working collectively with like-minded partners and in multilateral institutions is necessary.

It’s essential that effective strategic communications accompany all of these efforts.

This report makes 24 policy recommendations. It recommends, for example, better cooperation between government and business and efforts to improve the World Trade Organization (WTO).

The report argues that a crucial—and currently missing—component of the response is for a coalition of like-minded states to establish an international taskforce on countering coercion. The taskforce members should agree on the nature of the problem, commit to assisting each other, share information and map out potential countermeasures to deploy in response to coercion.

Solidarity between like-minded partners is critical for states to overcome the power differential and divide-and-conquer tactics that the PRC exploits in disputes. Japan’s presidency of the G7 presents an important opportunity to advance this kind of cooperation in 2023.
 

Introduction

We treat our friends with fine wine, but for our enemies we have shotguns.
—Gui Congyou (桂从友), former PRC Ambassador to Sweden, 20191

The PRC’s use of economic and non-economic coercive statecraft has surged to previously unseen levels,2 as the Chinese Communist Party (CCP) more aggressively pursues its ‘core interests’, or bottom-line issues on which it isn’t willing to compromise.3 Those tactics have increasingly been deployed in reaction to other states—especially developed democracies—when they make foreign and security policy decisions that displease the CCP.

Coercive diplomacy encompasses a range of ‘grey zone’ or hybrid activity beyond conventional diplomacy and short of military action. It’s ‘the use of threats or negative actions to force the target state to change behaviour’.4 Much of this is economic coercion—the weaponisation of interdependence in goods and services trade and investment. The use of punitive actions to coerce sits alongside the positive inducements also used to influence as part of a carrot-and-stick approach to foreign relations. The exploitation of economic leverage is often accompanied by other coercive tools as part of a multidomain effort to influence a target. This includes cyberattacks, arbitrary detentions and sanctions on individuals.

The PRC’s use of coercive statecraft presents a particular challenge, as its authoritarian governance allows it to harness a range of malign tactics as part of its broader strategic efforts to reshape the existing global order in its favour. As a hybrid threat, this coercive conduct is often used in a way that exploits plausible deniability and a lack of democratic and market-based restraints. The PRC’s coercive behaviour is rarely formally or clearly declared; nor does it necessarily rely on legitimate legal authority.

While other states, including developed democracies, have and use coercive powers, the nature, scale and intent of the PRC’s conduct pose a distinct threat to the rules-based international order.

The PRC’s use of these tactics is weakening the rules-based, liberal international order. While the methods don’t always cause significant economic harm or succeed in immediately changing a target state’s policy, they have done so in some cases and have caused other harms, for example by encouraging self-censorship and promoting a culture in which policymakers avoid publicly discussing, or advancing policy in, certain areas. Another harm is the disruptive information environment surrounding the PRC’s coercive actions, which places enormous pressure on politicians and decision-makers (including because some commentators question what ‘concessions’ a government will make to potentially unwind the PRC’s punitive measures).

Some states are nonetheless making difficult decisions in defiance of the PRC’s tactics, which alienate policymakers and populations. However, the PRC’s tactics are probably also functioning as a highly successful signal for many countries, especially developing states, deterring them from making decisions that could provoke PRC aggression. This means that states are compromising important decisions with implications for the international order, human rights and national security.

The main analysis in this report is based on an open-source dataset of examples of coercive diplomacy. The dataset draws on information from news articles, policy papers, academic research, company websites, social media, official government documents and statements made by politicians and business officials. The research team gathered as many examples of coercive diplomacy as could be identified publicly from 2020 to 2022. This carries forward the methodology used for ASPI’s 2020 report, The Chinese Communist Party’s coercive diplomacy.5

In relying on open-source research and mostly English-language sources, this approach does carry limitations. This isn’t intended to be an exhaustive or comprehensive documentation of coercive diplomacy across the world. There will be cases of coercion that have remained private,6 and there may be publicly known cases not captured, especially in countries where English-language reporting is unavailable. This dataset has been compiled to identify trends in the PRC’s use of coercive diplomacy and insights into how and where it operates and how it can be better countered.

In addition to this dataset, the report provides an overview of the PRC’s strategic outlook and analyses a series of in-depth case studies of PRC coercion: Australia, Lithuania and the Republic of Korea. We also modelled the economic impact of simulated coercive restrictions against those states and analysed the information environment surrounding the actual cases of coercion that they have experienced. The report concludes with our policy recommendations.

  1. ‘How Sweden copes with Chinese bullying’, The Economist, 20 February 2020, online. This is a reference to ‘My motherland’, the theme song of a Chinese movie about the Korean War. See Fan Anqi, ‘China warns “irretrievable consequences”, “unbearable price” amid US’ Taiwan remarks swings’, Global Times, 24 May 2022. ↩︎
  2. Fergus Hanson, Emilia Currey, Tracy Beattie, The Chinese Communist Party’s coercive diplomacy, ASPI, Canberra, 1 September 2020. ↩︎
  3. For more on China’s core interests, see Appendix 2. ↩︎
  4. See Ketian Zhang, ‘Chinese non-military coercion—tactics and rationale’, Brookings, 22 January 2019. ↩︎
  5. Hanson et al., The Chinese Communist Party’s coercive diplomacy. ↩︎
  6. For example: Primrose Riordan, ‘China’s veiled threat to Bill Shorten on extradition treaty’, The Australian, 5 December 2017, online; Fergus Hunter, ‘Australia abandoned plans for Taiwanese free trade agreement after warning from China’, Sydney Morning Herald, 24 October 2018. ↩︎

The latest flashpoint on the India-China border: Zooming into the Tawang border skirmishes

Overview

On 9 December 2022, Indian and Chinese troops clashed at the Yangtse Plateau along the India-China border. The confrontation was the most serious skirmish between Indian and Chinese troops since Galwan in 2020.

The Australian Strategic Policy Institute’s latest visual project provides satellite imagery analysis of the key areas (including 3D models) and geolocates military, infrastructure and transport positions to show new developments over the last 12 months.

Tawang is strategically valuable Indian territory wedged between China and Bhutan. The Yangtse Plateau is an important location in Tawang because it enables visibility over key Indian supply routes to the region.

Our analysis reveals that rapid infrastructure development along the border in this region means the People’s Liberation Army (PLA) can now access key locations on the Yangtse Plateau more easily than it could have just one year ago. While India maintains control of the commanding position on the plateau’s high ground, China has compensated for this disadvantage by building new military and transport infrastructure that allows it to get troops quickly into the area. 

This new ASPI work builds on satellite analysis that ASPI’s International Cyber Policy Centre carried out in September 2021, focused on the Doklam region (‘A 3D deep dive into the India-China border’). 

The latest analysis aims to contextualise India-China border tensions by examining the terrain in which this clash took place, and provides analysis of developments that threaten the status quo along the border – a major flashpoint in the region.

The India-China border continues to become more crowded as infrastructure is built and large numbers of Indian and Chinese outposts compete for strategic, operational and tactical advantage. This increases the risk of escalation and potential military conflict stemming from incidental or deliberate encounters between Indian and Chinese troops. These ongoing tensions, and clashes, deserve more attention from regional governments, global policymakers and international organisations.

Explore our new project here

State-sponsored economic cyber-espionage for commercial purposes: tackling an invisible but persistent risk to prosperity

As part of a multi-year capacity-building project supporting Indo-Pacific governments in defending their economies against the risk of cyber-enabled theft of intellectual property, ASPI analysed public records to determine the effects and the actual scale, severity and spread of current incidents of cyber-espionage affecting and targeting commercial entities.

In 2015, G20 leaders agreed that ‘no country should conduct or support ICT-enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or commercial sectors.’

Our analysis suggests that the threat of state-sponsored economic cyber-espionage is more significant than ever, with countries industrialising their cyber-espionage efforts to target commercial firms and universities at a greater scale, and with more of the targeted firms and universities based in emerging economies.

“Strategic competition has spilled into the economic and technological domains and states have become more comfortable and capable using offensive cyber capabilities. Our analysis shows that the state practice of economic cyber-espionage appears to have resurged to pre-2015 levels and tripled in raw numbers.”

In this light, we issued a Briefing Note on 15 November 2022 recommending that the G20 members recognise that state-sponsored ICT-enabled theft of IP remains a key concern for international cooperation and encouraging them to reaffirm their commitment made in 2015 to refrain from economic cyber-espionage for commercial purposes. 

This latest policy brief, State-sponsored economic cyber-espionage for commercial purposes: tackling an invisible but persistent risk to prosperity, further suggests that governments should raise awareness by better assessing and sharing information about the impact of IP theft on their nations’ economies in terms of financial costs, jobs and competitiveness. Cybersecurity and intelligence authorities should invest in better understanding the extent of state-sponsored economic cyber-espionage on their territories.

On the international front, the G20 and relevant UN committees should continue addressing the issue and emphasising countries’ responsibilities not to allow the attacks to be launched from their territories. 

The G20 should encourage members to reaffirm their 2015 commitments and consider establishing a cross-sectoral working group to develop concrete guidance for the operationalisation and implementation of the 2015 agreement while assessing the scale and impact of cyber-enabled IP theft.

The future of digital identity in Australia

What’s the problem?

Digital identity was a key part of the Australian Government’s Digital Economy Strategy: a further $161 million was committed in the 2021 mid-year budget update, bringing total investment since 2015 to more than $600 million. Over that period, the government has developed the Trusted Digital Identity Framework, established the Digital Identity System and, in late 2021, published draft legislation to govern and regulate the system. There’s been little apparent progress in the past 10 months, but if the potential microeconomic benefits (estimated at $11 billion in the previous government’s Digital Economy Strategy) aren’t sufficient incentive, the September 2022 data breach at Optus and the subsequent run of data breaches at other companies in October should supply new impetus. That’s because digital identity allows organisations to reliably validate customer identities without collecting the sort of sensitive personal information that Optus held, the loss of which has exposed more than 10 million Australians to the risk of identity theft.

Without intervention, the current scheme is on a trajectory to fail. If the government wants to revive the Digital Identity System, it will need to attract state and territory governments and commercial organisations to participate in the system as well as getting the public to sign up—aiming for a critical mass of users to create a ‘network effect’.

However, to build the trust and confidence required to achieve that outcome, the government needs to address three key areas of concern. First, governance arrangements currently give the federal government final decision-making authority on future changes to the rules of the system. Second, there are potential cybersecurity and identity-fraud risks due to gaps in the currently proposed arrangements; although the Optus data breach should help to demonstrate the need for such a system, it means that users will require reassurance of the security of any new system before they’re willing to participate in it. Third, there’s a need for better privacy protections to avoid a situation in which commercial relying parties use the Digital Identity System to build even more valuable profiles of citizens.

What’s the solution?

The Australian Government should recognise that, although its Digital Identity System is only one of many possible digital identity systems in Australia, it could become the dominant system due to network effects, spanning both the government and the private sectors. The current proposals give final decision-making authority, including over detailed technical specifications, to the relevant government minister. This report instead recommends a formal independent oversight authority governed by a board that includes representatives from all groups—the federal government, civil society, the states and territories and the private sector. The oversight authority should also create a formal public reporting mechanism for potential vulnerabilities, and transparency on how such reports have been assessed and acted on, to improve the actual and perceived security of the system.

Security measures should be mandated, the oversight authority should be funded to put in place key controls, and the Digital Transformation Agency (DTA) should work with the Department of Home Affairs to secure some of the vulnerabilities in existing non-digital identity systems upon which the digital systems will rely. Other recommended safeguards include centralised security monitoring and robust management of multiple identity risks.

Finally, privacy will be the key to public acceptance of the system, and a stronger regime is needed to ensure true informed consent to the use of digital identity data by commercial relying parties when building up and monetising profiles of their customers.