ASPI has released a groundbreaking report that finds the Chinese Communist Party seeks to harvest user data from globally popular Chinese apps, games and online platforms in a likely effort to improve its global propaganda.
The research maps the CCP’s propaganda system, highlighting the links between the Central Propaganda Department, state-owned or controlled propaganda entities and data-collection activities, and technology investments in Chinese companies.
In this special short episode of Stop the World, David Wroe speaks with ASPI analyst Daria Impiombato about the key takeaways from this major piece of research.
Mapping China’s data harvesting and global propaganda efforts
The Australian Strategic Policy Institute (ASPI) is pleased to announce that the third Sydney Dialogue for critical, emerging and cyber technologies will be held on 2-3 September 2024.
The Sydney Dialogue (TSD) brings together world leaders, global technology industry innovators and top experts in cyber and critical technology for frank and productive discussions, with a specific focus on the Indo-Pacific.
TSD 2024 will generate conversations that address the extraordinary advances being made across these technologies, their impact on our societies, economies and national security, and how we can best manage their adoption over the next decade and beyond. The technologies discussed will include generative artificial intelligence, cybersecurity, quantum computing, biotechnology, and climate and space technologies.
We will prioritise speakers and topics that push the boundaries and generate new insights into these fields, while also promoting diverse views, including from the Pacific, Southeast Asia and South Asia.
This year’s event will also capture the key trends that are dominating international technology, security and geopolitical discussions. With more than 80 national elections set to take place around the world in 2024, the event will also focus on the importance of political leadership, global cooperation and the stable development of technologies amid great power transition, geopolitical uncertainty and ongoing conflict.
ASPI is pleased to have the support once again of the Australian Government for TSD in 2024.
Australia’s Minister for Home Affairs and Cyber Security, the Hon Clare O’Neil MP said: “The threats we face from cyber attacks and tech-enabled perils such as disinformation and foreign interference are only growing as the power of artificial intelligence gathers pace.
“The kind of constructive debate that the Sydney Dialogue fosters helps ensure that the rapid advances in critical technologies and cyber bring better living standards for our people rather than new security threats. Closer engagement with our international partners and with industry on these challenges has never been more important than it is today.”
TSD 2024 will build on the momentum of the previous two dialogues, which featured keynote addresses from Indian Prime Minister Narendra Modi, the late former Japanese Prime Minister Shinzo Abe, Samoa’s Prime Minister Fiamē Naomi Mata’afa, Estonia’s Prime Minister Kaja Kallas and former Chief Executive Officer of Google Eric Schmidt. A full list of previous TSD speakers can be found here. You can also watch previous TSD sessions here.
TSD 2024 will be held in person and will feature a mix of keynote addresses, panel discussions, closed-room sessions and media engagements.
Topics for discussion will also include technological disruptors, cybercrime, online disinformation, hybrid warfare, electoral interference, climate security, international standards and norms, as well as technology design with the aim of enhancing partnerships, trust and global co-operation.
Justin Bassi, the Executive Director of ASPI, said: “The Sydney Dialogue 2024 will continue to build on the great success ASPI has established since 2021. These technologies are affecting our security and economies faster, and more profoundly, than we ever imagined. We need frank, open debate about how, as a globe, we manage their adoption into our lives.
“We are proud to be focusing on our Indo-Pacific region and encouraging a wide and diverse range of perspectives on some of the most important challenges of our time.”
More information and updates on the Sydney Dialogue can be found at tsd.aspi.org.au.
The Sydney Dialogue to return in September
In February, ASPI and the Special Competitive Studies Project held a series of workshops on the rise of artificial intelligence (AI) and its impact on the intelligence sector.
The workshops, which followed a multi-day workshop in Canberra in November 2023, brought together experts from across the Australian and US intelligence communities, think tanks and industry to inform future intelligence approaches in both countries.
The project also focuses on how current and emerging AI capabilities can enhance the quality and timeliness of all-source intelligence analysis and how this new technology may change the nature of the intelligence business.
The aim of the workshops is to develop a prioritised list of recommendations for both the Australian and US intelligence communities on how to adopt AI quickly, safely, and effectively.
Artificial Intelligence, Human-Machine Teaming, and the Future of Intelligence Analysis
More than 2 billion people in over 50 countries, representing nearly a third of the global population, are set to engage in elections this year. This wave of voting will have geopolitical ramifications: with so many countries choosing new leaders, the resilience of democracy and the rules-based order will be tested in countless ways.
These elections also come at a time of increasing ambition among powerful authoritarian regimes, growing use of misinformation and disinformation often linked to state-led or state-backed influence operations, rising extremism of various political stripes, and the technological disruption of artificial intelligence.
At the same time, democracies face formidable challenges with wars raging in Europe and the Middle East, increasing climate disasters, weakening economies, and the erosion of confidence in liberal societies.
Watch the panel below as they explore the issues that are set to define 2024’s election campaigns, as well as the impact the outcomes could have on alliances, geopolitics and regional security around the world.
ASPI’s 2024 Democracy Primer
A pro-China technology and anti-US influence operation thrives on YouTube
Executive Summary
ASPI has recently observed a coordinated inauthentic influence campaign originating on YouTube that’s promoting pro-China and anti-US narratives in an apparent effort to shift English-speaking audiences’ views of those countries’ roles in international politics, the global economy and strategic technology competition. This new campaign (which ASPI has named ‘Shadow Play’) has attracted an unusually large audience and is using entities and voiceovers generated by artificial intelligence (AI) as a tactic that enables broad reach and scale.1 It focuses on promoting a series of narratives including China’s efforts to ‘win the US–China technology war’ amid US sanctions targeting China. It also includes a focus on Chinese and US companies, such as pro-Huawei and anti-Apple content.
The Shadow Play campaign involves a network of at least 30 YouTube channels that have produced more than 4,500 videos. At time of publication, those channels have attracted just under 120 million views and 730,000 subscribers. The accounts began publishing content around mid-2022. The campaign’s ability to amass and access such a large global audience—and its potential to covertly influence public opinion on these topics—should be cause for concern.
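To give a rough sense of scale, the headline figures above can be averaged out. This is a back-of-envelope sketch only; the real distribution of views and subscribers across the 30 channels is certainly uneven:

```python
# Back-of-envelope averages for the Shadow Play network, using the
# totals reported above (30 channels, ~4,500 videos, ~120 million
# views, ~730,000 subscribers at time of publication).
channels = 30
videos = 4_500
views = 120_000_000
subscribers = 730_000

avg_views_per_video = views / videos            # ≈ 26,667
avg_views_per_channel = views / channels        # 4,000,000
avg_subs_per_channel = subscribers / channels   # ≈ 24,333

print(f"{avg_views_per_video:,.0f} views per video")
print(f"{avg_views_per_channel:,.0f} views per channel")
print(f"{avg_subs_per_channel:,.0f} subscribers per channel")
```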
ASPI reported our findings to YouTube/Google on 7 December 2023 for comment. By 8 December, they had taken down 19 YouTube channels from the Shadow Play network—10 for coordinated inauthentic behaviour and nine for spam. As of publication, these YouTube channels display a range of messages from YouTube indicating why they were taken down. For example, one channel was ‘terminated for violating YouTube’s community guidelines’, while another was ‘terminated due to multiple or severe violations of YouTube’s policy for spam, deceptive practices and misleading content or other Terms of Service violations’. ASPI also reported our findings to British artificial intelligence company, Synthesia, whose AI avatars were used by the network. On 14 December 2023, Synthesia disabled the Synthesia account used by one of the YouTube accounts, for violating its Media Reporting (News) policy.
We believe that it’s likely that this new campaign is being operated by a Mandarin-speaking actor. Indicators of this actor’s behaviour don’t closely map to the behaviour of any known state actor that conducts online influence operations. Our preliminary analysis (see ‘Attribution’) is that the operator of this network could be a commercial actor operating under some degree of state direction, funding or encouragement. This could suggest that some patriotic companies increasingly operate China-linked campaigns alongside government actors.
The campaign focuses on promoting six narratives. Two of the most dominant narratives are that China is ‘winning’ in crucial areas of global competition: first, in the ‘US–China tech war’ and, second, in the competition for rare earths and critical minerals.2 Other key narratives assert that the US is headed for collapse and that its alliance partnerships are fracturing, that China and Russia are responsible, capable players in geopolitics, that the US dollar and the US economy are weak, and that China is highly capable and trusted to deliver massive infrastructure projects. A list of visual representative examples from the network for each narrative is in Appendix 1 on page 35.
Figure 1: An example of the style of content generated by the network, in which multiple YouTube channels published videos alleging that China had innovated a 1-nanometre chip, without using a lithography machine
Sources: ‘China Charged’, ‘China reveals the world’s first 1nm chip & SHOCKS the US!’, YouTube, 3 November 2023, online; ‘Relaxian’, ‘China’s groundbreaking 1nm chip: redefining technology and global power’, YouTube, 4 November 2023, online; ‘Vision of China’, ‘China breaks tech limit: EUV lithography not needed to make 1nm chips!’, YouTube, 17 July 2023, online; ‘China Focus—CNF’, ‘World challenge conquered: 1nm chips produced without EUV lithography!’, YouTube, 5 July 2023, online; ‘Curious Bay’, ‘China’s NEW 1nm chip amazes the world’, YouTube, 24 July 2023, online; ‘China Hub’, ‘China shatters tech boundaries: 1nm chips without EUV lithography? Unbelievable tech breakthrough!’, YouTube, 30 July 2023, online.
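The recurrence of the same unusual claim across these nominally unrelated channels can be surfaced with even a trivial check. The snippet below runs a keyword-overlap count over the six Figure 1 titles; this is an illustrative heuristic of our own, not YouTube’s actual detection method:

```python
# The six video titles from the Figure 1 source list.
titles = [
    "China reveals the world's first 1nm chip & SHOCKS the US!",
    "China's groundbreaking 1nm chip: redefining technology and global power",
    "China breaks tech limit: EUV lithography not needed to make 1nm chips!",
    "World challenge conquered: 1nm chips produced without EUV lithography!",
    "China's NEW 1nm chip amazes the world",
    "China shatters tech boundaries: 1nm chips without EUV lithography? "
    "Unbelievable tech breakthrough!",
]

# Count how many ostensibly independent channels repeat each claim term.
claim_terms = ["1nm", "euv lithography"]
for term in claim_terms:
    hits = sum(term in t.lower() for t in titles)
    print(f"{term!r}: {hits} of {len(titles)} titles")
# '1nm': 6 of 6 titles
# 'euv lithography': 3 of 6 titles
```

Every title in the sample repeats the same fringe claim, which is one of the behavioural signals that distinguishes coordinated messaging from organic coverage.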
This campaign is unique in three ways. First, as noted above, there’s a notable broadening of topics. Previous China-linked campaigns have been tightly targeted and have often focused on a narrow set of topics. For example, in promoting narratives that establish China as technologically superior to the US, the campaign presents detailed arguments on technology topics including semiconductors, rare earths, electric vehicles and infrastructure projects. In addition, it targets, via criticism and disinformation, US technology firms such as Apple and Intel. Chinese state media outlets, Chinese officials and online influencers sometimes publish on these topics in an effort to ‘tell China’s story well’ (讲好中国故事).3 A few Chinese state-backed inauthentic information operations have touched on rare earths and semiconductors, but never in depth or by combining multiple narratives in one campaign package.4 The broader set of topics and opinions in this campaign may demonstrate greater alignment with the known behaviour of Russia-linked threat actors.
Second, there’s a change in techniques and tradecraft, as the campaign has leveraged AI. To our knowledge, the YouTube campaign is one of the first times that video essays, together with generative AI voiceovers, have been used as a tactic in an influence operation. Video essays are a popular style of medium-length YouTube video in which a narrator makes an argument through a voiceover, while content to support their argument is displayed on the screen. This shows a continuation of a trend that threat actors are increasingly moving towards: using off-the-shelf video editing and generative AI technology tools to produce convincing, persuasive content at scale that can build an audience on social-media services. We also observed one account in the YouTube network using an avatar created by Sogou, one of China’s largest technology companies (and a subsidiary of Tencent) (see page 24). We believe the use of the Sogou avatar we identified to be the first instance of a Chinese company’s AI-generated human being used in an influence operation.
Third, unlike previous China-focused campaigns, this one has attracted a large number of views and subscribers. It has also been monetised, although only through limited means. For example, one channel accepted money from US and Canadian companies to support the production of its videos. The substantial number of views and subscribers suggests that the campaign is one of the most successful influence operations related to China ever witnessed on social media. Many China-linked influence operations, such as Dragonbridge (also known as ‘Spamouflage’ in the research community), have attracted initial engagement in some cases but have failed to sustain a meaningful audience on social media.5 However, further research by YouTube is needed to determine whether view counts and subscriber counts on YouTube reflect real viewership, artificial manipulation, or a combination of both. We note that, in our examination of YouTube comments on videos in this campaign, we saw signs of a genuine audience. ASPI believes that this campaign is probably larger than the 30 channels covered in this report, but we constrained our initial examination to channels we saw as core to the campaign. We also believe there are more channels in this network publishing content in non-English languages; for example, we saw channels publishing in Bahasa Indonesia that aren’t included in this report.
That’s not to say that the effectiveness of influence operations should only be measured through engagement numbers. As ASPI has previously demonstrated, Chinese Communist Party (CCP) influence operations that troll, threaten and harass on social media seek to silence and cause psychological harm to those being targeted, rather than seeking engagement.6 Similarly, influence operations can be used to ‘poison the well’ by crowding out the content of genuine actors in online spaces, or to poison datasets used for AI products, such as large-language models (LLMs).7
This report also discusses another way that an influence operation can be effective: through its ability to spill over and gain traction in a wider system of misinformation. We found that at least one narrative from the Shadow Play network—that Iran had switched on its China-provided BeiDou satellite system—began to gain traction on X (formerly Twitter) and other social-media platforms within a few hours of its posting on YouTube. We discuss that case study on page 29.
This report offers an initial identification of the influence operation and some defining characteristics of a likely new influence actor. In addition to sections on attribution, methodology and analysis of this new campaign, this report concludes with a series of recommendations for government and social media companies, including:
the immediate investigation of this ongoing information operation, including operator intent and the scale and scope of YouTube channels involved
broader efforts by Five Eyes and allied partners to declassify open-source social-media-based influence operations and share information with like-minded nations and relevant NGOs
rules that require social-media users to disclose when generative AI is used in audio, video or image content
national intelligence collection priorities that support the effective amalgamation of information on Russia-, China- and Iran-linked information operations
publishing detailed threat indicators as appendixes in information operations research.
Shadow play (or shadow puppetry) is a storytelling technique in which flat articulated cut-out figures are placed between a light source and a translucent screen. It’s practised across Southeast Asia, China, the Middle East, Europe and the US. See, for example, Inge C Orr, ‘Puppet theatre in Asia’, Asian Folklore Studies, 1974, 33(1):69–84, online. ↩︎
A recent Pew Research Center poll indicates that technology is one of the few areas in which public opinion in high-income and middle-income countries sees China and the US as equally capable, which suggests that narratives on those lines are credible for international viewers. Laura Silver, Christine Huang, Laura Clancy, Nam Lam, Shannon Greenwood, John Carlo Mandapat, Chris Baronavski, Comparing views of the US and China in 24 countries, Pew Research Center, 6 November 2023, online. ↩︎
‘Telling China’s story well’, China Media Project, 16 April 2021, online; Marcel Schliebs, Hannah Bailey, Jonathan Bright, Philip N Howard, China’s public diplomacy operations: understanding engagement and inauthentic amplification of PRC diplomats on Facebook and Twitter, Oxford Internet Institute, 11 May 2021, https://demtech.oii.ox.ac.uk/research/posts/chinas-public-diplomacy-operations-understanding-engagement-and-inauthentic-amplification-of-chinese-diplomats-on-facebook-and-twitter/#continue. ASPI’s work on foreign influencers’ role in telling China’s story well includes: Fergus Ryan, Matt Knight, Daria Impiombato, Singing from the CCP’s songsheet, ASPI, Canberra, 24 November 2023, https://www.aspi.org.au/report/singing-ccps-songsheet; Fergus Ryan, Ariel Bogle, Nathan Ruser, Albert Zhang, Daria Impiombato, Borrowing mouths to speak on Xinjiang, ASPI, Canberra, 10 December 2021, https://www.aspi.org.au/report/borrowing-mouths-speak-xinjiang; Fergus Ryan, Daria Impiombato, Hsi-Ting Pai, Frontier influencers, ASPI, Canberra, 20 October 2022, https://www.aspi.org.au/report/frontier-influencers/. ↩︎
Reports on China-linked information operations that have targeted semiconductors and rare earths include Albert Zhang, ‘The CCP’s information campaign targeting rare earths and Australian company Lynas’, The Strategist, 29 June 2022, online; ‘Pro-PRC DRAGONBRIDGE influence campaign targets rare earths mining companies in attempt to thwart rivalry to PRC market dominance’, Mandiant, 28 June 2022, https://www.mandiant.com/resources/blog/dragonbridge-targets-rare-earths-mining-companies; Shane Huntley, ‘TAG Bulletin: Q3 2022’, Google Threat Analysis Group, 26 October 2022, https://blog.google/threat-analysis-group/tag-bulletin-q3-2022/. ↩︎
Ben Nimmo, Ira Hubert, Yang Cheng, ‘Spamouflage breakout’, Graphika, 4 February 2021, online. ↩︎
Danielle Cave, Albert Zhang, ‘Musk’s Twitter takeover comes as the CCP steps up its targeting of smart Asian women’, The Strategist, 6 November 2022, online; Donie O’Sullivan, Curt Devine, Allison Gordon, ‘China is using the world’s largest known online disinformation operation to harass Americans, a CNN review finds’, CNN, 13 November 2023, https://edition.cnn.com/2023/11/13/us/china-online-disinformation-invs/index.html. ↩︎
Rachael Falk, Anne-Louise Brown, ‘Poison the well: AI, data integrity and emerging cyber threats’, Cyber Security Cooperative Research Centre, 30 October 2023, online. ↩︎
Shadow Play
The role of foreign influencers in China’s propaganda system
Disclaimer: Please note that, because of a website upload issue, an earlier version of this page and report contained errors, including incorrect author names and acknowledgement text from a previous report. We have rectified these issues.
Executive summary
The Chinese Communist Party (CCP) has always viewed contact with foreigners and the outside world as a double-edged sword, presenting both threats and opportunities. While the CCP and its nationalist supporters harbour fears of foreigners infiltrating China’s information space and subtly ‘setting the tempo’ (带节奏) of discussions, the CCP also actively cultivates a rising group of foreign influencers with millions of fans, which endorses pro-CCP narratives on Chinese and global social-media platforms.
In the People’s Republic of China (PRC), the information ecosystem is geared towards eliminating rival narratives and promoting the party’s ‘main melody’ (主旋律)—the party’s term for themes or narratives that promote its values, policies and ideology.1 Foreign influencers who are amenable to being ‘guided’ towards voicing that main melody are increasingly considered to be valuable assets. They’re seen as building the CCP’s legitimacy for audiences at home, as well as supporting propaganda efforts abroad.
This report examines how a growing subset of foreign influencers, aware of the highly nationalistic online environment and strict censorship rules in China, is increasingly choosing to create content that aligns more explicitly with the CCP’s ‘main melody’.2 In addition to highlighting the country’s achievements in a positive light, these influencers are promoting or defending China’s position on sensitive political issues, such as territorial disputes or human rights concerns.
As we outline in this report, foreign influencers are involved in a wave of experimentation and innovation in domestic (and external) propaganda production that’s taking place at different levels around the PRC as officials heed Xi Jinping’s call to actively participate in ‘international communication’. That experimentation includes their use in the Propaganda Department’s efforts to control global narratives about Covid-19 in China and the cultivation of Russian influencers in China to counter Western narratives.3 This research also reveals that the CCP is effectively co-opting a widespread network of international students at Chinese universities, cultivating them as a talent pool of young, multilingual, social-media-friendly influencers.
Foreign influencers are guided via rules, regulations and laws, as well as via platforms that direct traffic towards user-generated propaganda. Video competitions organised by propaganda organs and the amplification of party-state media and government spokespeople further encourage this trend. The resulting party-aligned content that foreign influencers produce, coupled with that of party-state media workers masquerading as influencers and of state-approved ethnic-minority influencers,4 is part of a coordinated tactic referred to as ‘polyphonous communication’ (复调传播).5
By coordinating foreign influencers and other communicators, Beijing aspires to create a unified choir of voices capable of promoting party narratives more effectively than traditional official PRC media. The ultimate goal is to shield CCP-controlled culture, discourse and ideology from the dangers of foreign and free political speech, thereby safeguarding the party’s legitimacy.
As this report outlines, that strategy reveals the CCP’s determination to defend itself against foreign influence and shape global narratives in its favour, including through covert means. As one party-state media worker put it, the aim is to ‘help cultivate a group of “foreign mouths”, “foreign pens”, and “foreign brains” who can stand up and speak for China at critical moments’.6
The CCP’s growing use of foreign influencers reinforces China’s internal and external narratives in ways that make it increasingly difficult for social-media platforms, foreign governments and individuals to distinguish between genuine and/or factual content and propaganda. It further complicates efforts to counter disinformation and protect the integrity of public discourse and blurs the line between independent voices and those influenced by the party’s narratives.
This report makes key recommendations for media and social-media platforms, governments and civil society aimed at building awareness and accountability. They include broadening social-media platforms’ content labelling practices to include state-linked, PRC-based influencers; preventing PRC-based creators from monetising their content on platforms outside China to diminish the commercial incentives to produce party-aligned content; and, in countries with established foreign interference taskforces, such as Australia, developing appropriate briefing materials for students planning to travel overseas.
Key Findings
Foreign influencers are reaching ever-larger and more international audiences. Some of them have tens of millions of followers in China and millions more on overseas platforms (see Appendix 1 on page 65), particularly on TikTok, YouTube and X (formerly Twitter).
The CCP is creating competitions that offer significant prize money and other incentives as part of an expanding toolkit to co-opt influencers in the production of pro-CCP and party-state-aligned content (see Section 2.3: ‘State-sponsored competitions’ on page 20).
Beijing is establishing multilingual influencer studios to incubate both domestic and foreign influencers in order to reach younger media consumers globally (see Section 2.5: ‘The influencer studio system’ on page 33).
The CCP is effectively using a widespread network of international students at Chinese universities, cultivating them as a latent talent pool of young, multilingual, social-media-friendly influencers (see breakout box: ‘PRC universities’ propaganda activities’ on page 32).
Russian influencers in China are cultivated as part of the CCP’s strategic goal of strengthening bilateral relations with Russia to counter Western countries (see Section 3.4: ‘Russian influencers’ on page 53).
The CCP is using foreign influencers to enable its propaganda to surreptitiously penetrate mainstream overseas media, including into major US cable TV outlets (see Section 3.3: ‘Rachele Longhi’ on page 44). Chinese authorities use vlogger, influencer and journalist identities interchangeably, in keeping with efforts aimed at influencing audiences, rather than offering professional or objective news coverage.
CCP-aligned influencer content has helped boost the prevalence of party-approved narratives on YouTube, outperforming more credible sources on issues such as Xinjiang due to search-engine algorithms that prioritise fresh content and regular posting (see Section 2.2 ‘Turning a foreign threat into a propaganda opportunity’ on page 15).
Foreign influencers played a key part in the Propaganda Department’s drive to control international narratives about Covid-19 in China and have, in some instances, attempted to push the CCP’s narrative overseas as well (see Section 1.1: ‘Case study’ on page 7).
Efforts to deal with CCP propaganda have taken a step backwards on X, which under Elon Musk has dispensed with state-affiliation labels and is allowing verification for party-state media workers, including foreigners (see Section 2.5 ‘The influencer studio system’ on page 33).
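The freshness dynamic noted in the YouTube finding above can be sketched with a toy recency-decayed ranking score. The weights and half-life below are invented for illustration; real recommendation systems are far more complex and undisclosed:

```python
# Toy ranking score: source quality discounted by content age, halving
# every `half_life_days`. Hypothetical weights, for illustration only.
def score(quality: float, age_days: float, half_life_days: float = 30.0) -> float:
    return quality * 2 ** (-age_days / half_life_days)

credible_report = score(quality=0.9, age_days=365)  # authoritative, a year old
fresh_influencer = score(quality=0.4, age_days=3)   # weaker source, posted this week

# With a strong freshness term, the weaker but recent item ranks higher.
print(f"report={credible_report:.4f}  influencer={fresh_influencer:.4f}")
```

A channel that posts similar videos every week keeps its best item permanently ‘young’ under such a score, while an authoritative report published once keeps decaying, which is consistent with the outperformance on Xinjiang searches described above.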
The term ‘Propaganda Department’ is used here for the Publicity Department of the Central Committee of the CCP. Subordinate CCP organisations in many cases have their own propaganda departments. ↩︎
Fergus Ryan, Daria Impiombato, Hsi-Ting Pai, Frontier influencers: the new face of China’s propaganda, ASPI, Canberra, 20 October 2022. ↩︎
Devin Thorne, ‘1 key for 1 lock: the Chinese Communist Party’s strategy for targeted propaganda’, Recorded Future, September 2022. ↩︎
Du Guodong [杜国东], ‘A tentative analysis of how to leverage the role of foreign internet celebrities in China’s international communication’ [试析如何发挥洋网红在中国国际传播中的作用], FX361, 10 September 2019. ↩︎
Singing from the CCP’s songsheet
In 2020, the then Director of ASPI’s International Cyber Policy Centre, Fergus Hanson, approached me to research the views of the 46th Parliament on a range of cybersecurity and critical technology issues. The resulting data collection was conducted in two parts across 2021 and 2022, with the results analysed and written up in 2022 and 2023. Those parliamentarians who ‘opted in’ first completed a quantitative survey, which I then followed up with an interview exploring an additional set of qualitative questions. The results, collated and analysed, form the basis of this report.
This research aims to provide a snapshot of what our nation’s policy shapers and policymakers are thinking when it comes to cybersecurity and critical technologies. What are they worried about? Where are their knowledge gaps and interests? What technologies do they think are important to Australia and where do they believe policy attention and investment should focus in the next five years?
This initial study establishes a baseline for future longitudinal assessments that could capture changes or shifts in parliamentarians’ thinking. Australia’s ongoing cybersecurity challenges, the fast-moving pace of artificial intelligence (AI), the creation of AUKUS and the ongoing development of AUKUS Pillar 2—with its focus on advanced capabilities and emerging technologies (including cybertechnologies)—are just a few of the many reasons why it’s more important than ever that the Australian Parliament be both informed and active when engaging with cybersecurity and critical technologies.
We understand that this in-depth study may be a world first and extend our deep and heartfelt thanks to the 24 parliamentarians who took part in it. Parliamentarians are very busy people, and yet many devoted significant time to considering and completing this study.
This was a non-partisan study. Parliamentarians were speaking on condition of strict anonymity, without any identifiers apart from their gender, chamber, electorate profile and backbench or frontbench status. Because of that, the conversations were candid, upfront and insightful and, as a result, this study provides a rich and honest assessment of their views.
What do Australia’s parliamentarians think about cybersecurity and critical technology?
ASPI and a non-government research partner1 conducted a year-long project designed to share detailed and accurate information on state surveillance in the People’s Republic of China (PRC) and engage residents of the PRC on the issue of surveillance technology. A wide range of topics was covered, including how the party-state communicates on issues related to surveillance, as well as people’s views on state surveillance, data privacy, facial recognition, DNA collection and data-management technologies.
The project’s goals were to:
improve our understanding of state surveillance in China and how it’s communicated by the Chinese party-state
develop a nuanced understanding of PRC residents’ perceptions of surveillance technology and personal privacy, the concerns some have in regard to surveillance, and how those perceptions relate to trust in government
explore the reach and potential of an interactive digital platform as an alternative educational and awareness-raising tool.
This unique project combined extensive preliminary research—including media analysis and an online survey of PRC residents—with data collected from an interactive online research platform deployed in mainland China. Media analysis drew on PRC state media to understand the ways in which the party-state communicates on issues of surveillance. The online survey collected opinions from 4,038 people living in mainland China, including about their trust in government and views on surveillance technologies. The interactive research platform offered PRC residents information on the types and capabilities of different surveillance technologies in use in five municipalities and regions in China. Presenting an analysis of more than 1,700 PRC Government procurement documents, it encouraged participants to engage with, critically evaluate and share their views on that information. The research platform engaged more than 55,000 PRC residents.
Data collection was led and conducted by the non-government research partner, and the data was then provided to ASPI for a joint analysis. The project details, including methodology, can be found on page 6.
Key findings
The results of this research project indicate the following:
Project participants’ views on surveillance and trust in the government vary markedly.
Segmentation analysis of survey responses suggests that respondents fall into seven distinct groups, which we have categorised as dissenters, disaffected, critics, possible sceptics, stability seekers, pragmatists and endorsers (the segmentation analysis is on page 12).
In general, PRC state narratives about government surveillance and technology implementation appear to be at least partly effective.
Our analysis of PRC state media identified four main narratives to support the use of government surveillance:
Surveillance helps to fight crime.
The PRC’s surveillance systems are some of the best in the world.
Surveillance is commonplace internationally.
Surveillance is a ‘double-edged sword’, and people should be concerned for their personal privacy when surveillance is handled by private companies.
Public opinion often aligns with state messaging that ties surveillance technologies to personal safety and security. For example, when presented with information about the number of surveillance cameras in their community today, most Research Platform participants said they would prefer the same number of cameras (39%) or more cameras (38.4%).
PRC state narratives make a clear distinction between private and government surveillance, which suggests party-state efforts to ‘manage’ privacy concerns within acceptable political parameters.
Project participants value privacy but hold mixed views on surveillance.
Participants expressed a preference for consent and active engagement on the issue of surveillance. For example, over 65% agreed that DNA samples should be collected from the general population only on a voluntary basis.
Participants are generally comfortable with the widespread use of certain types of surveillance, such as surveillance cameras; they’re less comfortable with other forms of surveillance, such as DNA collection.
ASPI supported this project with an undisclosed research partner. That institution remains undisclosed to preserve its access to specific research techniques and data and to protect its staff. ↩︎
Surveillance, privacy and agency
As well as having a global impact, cybersecurity is one of the most significant issues affecting Australia’s economy and national security. On the one hand, poor cybersecurity presents a risk to the interconnected digital systems on which we increasingly rely; on the other hand, well-managed cybersecurity provides an opportunity to build trust and advantage by accelerating digital transformation. Cyber threats can originate from a diverse range of sources and require a diverse set of actions to effectively mitigate them. However, a common theme is that much better cyber risk management is needed to address this critical threat; the current operation of the free market isn’t consistently driving all of the required behaviours or actions.
Regulation can provide a powerful mechanism to modify incentives and change behaviours. However, securing cyberspace depends on the intersection of many factors—technical, social and economic. Current regulations are a patchwork of general, cyber-specific and sector-specific measures with a lack of cohesion that causes overlaps and gaps. That makes the environment complex, which means that finding the right approach that will truly improve overall security and minimise unwanted side effects is difficult. It’s necessary to analyse the interconnected factors that determine the net effectiveness of cybersecurity regulations.
Furthermore, the pace of technological change is so fast today that, even if regulation is successful when first implemented, it needs to be appropriately futureproofed to avoid becoming irrelevant within even a few months. Recent rapid developments in artificial intelligence exemplify the risks that any changes to the regulatory regimes will need to anticipate.
What’s the solution?
Regulatory interventions have an important role to play as one part of a strategy to uplift Australia’s cybersecurity, if done in the right way. This paper presents a framework for the government to make appropriate decisions about whether and how to regulate. That must start with defining which aspect of the cybersecurity challenge it seeks to address and the specific intended long-term impact. In cybersecurity, the most appropriate metrics or measures that regulation seeks to influence should, where possible, be risk-based, rather than specific technical measures. This is because the actual technical measures required are dependent on the individual context of each situation, will change over time, and are effective only when combined with people and process measures. The impact of the interventions on those metrics needs to be readily measurable in order to enable reliable enforcement at acceptable cost—both direct financial cost and indirect opportunity costs.
There’s often a focus on regulation to compel entities to do or not do something. However, compulsion is only one form of regulation; others, such as facilitation or encouragement, should be considered first, with compulsion treated as one possible approach to be used carefully and strategically.
Detailed implementation of cybersecurity regulations should use a co-design process with the relevant stakeholders, who will bring perspectives, experiences and knowledge that government alone does not have. It should also draw upon relevant experience of international partners, not only to benefit from lessons learned, but also to minimise the compliance burden for global companies and operators. Finally, in recognising the complexity of the problem, an iterative approach that measures impact and adjusts approaches to enhance effectiveness, incorporate lessons learned and absorb technological advances needs to be planned from the outset.
Getting regulation right: approaches to improving Australia’s cybersecurity
A balanced approach to protecting our digital ecosystems
What’s the problem?
Artificial intelligence (AI)–enabled systems make many invisible decisions affecting our health, safety and wealth. They shape what we see, think, feel and choose, they calculate our access to financial benefits as well as our transgressions, and now they can generate complex text, images and code just as a human can, but much faster.
So it’s unsurprising that moves are afoot across democracies to regulate AI’s impact on our individual rights and economic security, notably in the European Union (EU).
But, if we’re wary about AI, we should be even more circumspect about AI-enabled products and services from authoritarian countries that share neither our values nor our interests. And, for the foreseeable future, that means the People’s Republic of China (PRC)—a revisionist authoritarian power demonstrably hostile to democracy and the rules-based international order, which routinely uses AI to strengthen its own political and social stability at the expense of individual human rights. In contrast to other authoritarian countries such as Russia, Iran and North Korea, China is a technology superpower with global capacity and ambitions and is a major exporter of effective, cost-competitive AI-enabled technology into democracies.
In a technology-enabled world, the threats come at us ‘at a pace, scale and reach that is unprecedented’.1 And, if our reliance on AI is also without precedent, so too is the opportunity—via the magic of the internet and software updates—for remote, large-scale foreign interference, espionage and sabotage through AI-enabled industrial and consumer goods and services inside democracies’ digital ecosystems. AI systems are embedded in our homes, workplaces and essential services. More and more, we trust them to operate as advertised, always be there for us and keep our secrets.
Notwithstanding the honourable intentions of individual vendors of Chinese AI-enabled products and services, they’re subject to direction from PRC security and intelligence agencies, so we in the democracies need to ask ourselves: against the background of growing strategic competition with China, how much risk are we willing to bear?
We should worry about three kinds of Chinese AI-enabled technology:
products and services (often physical infrastructure), where PRC ownership exposes democracies to risks of espionage (notably surveillance and data theft) and sabotage (disruption and denial of products and services)
AI-enabled technology that facilitates foreign interference (malign covert influence on behalf of a foreign power), the most pervasive example being TikTok
‘Large language model AI’ and other emerging generative AI systems—a future threat that we need to start thinking about now.
While we should address the risks in all three areas, this report focuses more on the first category (and indeed looks at TikTok through the prism of the espionage and sabotage risks that such an app poses).
The underlying dynamic with Chinese AI-enabled products and services is the same as that which prompted concern over Chinese 5G vendors: the PRC Government has the capability to compel its companies to follow its directions, it has the opportunity afforded by the presence of Chinese AI-enabled products and services in our digital ecosystems, and it has demonstrated malign intent towards the democracies.
But this is a more subtle and complex problem than deciding whether to ban Chinese companies from participating in 5G networks. Telecommunications networks are the nervous systems that run down the spine of our digital ecosystems; they’re strategic points of vulnerability for all digital technologies. Protecting them from foreign intelligence agencies is a no-brainer and worth the economic and political costs. And those costs are bounded because 5G is a small group of easily identifiable technologies.
In contrast, AI is a constellation of technologies and techniques embedded in thousands of applications, products and services, so the task is to identify where on the spectrum between national-security threat and moral panic each of these products sits. And then pick the fights that really matter.
What’s the solution?
A general prohibition on all Chinese AI-enabled technology would be extremely costly and disruptive. Many businesses and researchers in the democracies want to continue collaborating on Chinese AI-enabled products because it helps them to innovate, build better products, offer cheaper services and publish scientific breakthroughs. The policy goal here is to take prudent steps to protect our digital ecosystems, not to economically decouple from China.
What’s needed is a new three-step framework to identify, triage and manage the riskiest products and services. The intent is similar to that proposed in the recently introduced draft US RESTRICT Act, which seeks to identify and mitigate foreign threats to information and communications technology (ICT) products and services, although the focus here is on teasing out the most serious threats.
Step 1: Audit. Identify the AI systems whose purpose and functionality concern us most. What’s the potential scale of our exposure to this product or service? How critical is this system to essential services, public health and safety, democratic processes, open markets, freedom of speech and the rule of law? What are the levels of dependency and redundancy should it be compromised or unavailable?
Step 2: Red Team. Anyone can identify the risk of embedding many PRC-made technologies into sensitive locations, such as government infrastructure, but, in other cases, the level of risk will be unclear. For those instances, you need to set a thief to catch a thief. What could a team of specialists do if they had privileged access to (that is, ‘owned’) a candidate system identified in Step 1—people with experience in intelligence operations, cybersecurity and perhaps military planning, combined with relevant technical subject-matter experts? This is the real-world test because all intelligence operations cost time and money, and some points of presence in a target ecosystem offer more scalable and effective opportunities than others. PRC-made cameras and drones in sensitive locations are a legitimate concern, but crippling supply chains through accessing ship-to-shore cranes would be devastating.
For example, we know that TikTok data can be accessed by PRC agencies and reportedly can also reveal a user’s location, so it’s obvious that military and government officials shouldn’t use the app. Journalists should think carefully about this, too. Beyond that, the merits of a general ban on technical security grounds are a bit murky. Can our Red Team use the app to jump onto connected mobiles and IT systems to plant spying malware? What system mitigations could stop them getting access to data on connected systems? If the team revealed serious vulnerabilities that can’t be mitigated, a general ban might be appropriate.
Step 3: Regulate. Decide what to do about a system identified as ‘high risk’. Treatment measures might range from prohibiting Chinese AI-enabled technology in some parts of the network, a ban on government procurement or use, or a general prohibition. Short of that, governments could insist on measures to mitigate the identified risk or dilute the risk through redundancy arrangements. And, in many cases, public education efforts along the lines of the new UK National Protective Security Authority may be an appropriate alternative to regulation.
The democracies need to think harder about Chinese AI-enabled technology in our digital ecosystems. But we shouldn’t overreact: our approach to regulation should be anxious but selective.
De-risking authoritarian AI