The Australian Strategic Policy Institute (ASPI) is delighted to share its latest report – the result of a multi-year project on Artificial Intelligence (AI), technical standards and diplomacy – that takes a deep dive into the important yet often opaque and complicated world of technical standards.
At the heart of how AI technologies are developed, deployed and used in a responsible manner sits a suite of technical standards: rules, guidelines and characteristics that ensure the safety, security and interoperability of a product.
The report authors highlight that the Indo-Pacific, including Australia and India, is largely playing catch-up in AI standards initiatives. The United States and China are leading the pack, followed by European nations, thanks to the size, scope and resources of their national standardisation communities and their domestic AI sectors.
Not being strongly represented in the world of AI governance and technical standards is a strategic risk for Indo-Pacific nations. For a region that’s banking on the opportunities of a digital and technology-enabled economy and has large swathes of its population in at-risk jobs, it’s a matter of national and economic security that Indo-Pacific stakeholders are active and have a big say in how AI technologies will operate and be used.
Being part of the conversations and negotiations is everything, and as such, governments in the Indo-Pacific – including Australia and India – should invest more in whole-of-nation techdiplomacy capabilities.
The authors note that there are currently no representatives from Southeast Asia (except Singapore), Australia, NZ or the Pacific Islands on the UN Secretary-General’s Advisory Body on AI – a body tasked with recommending how to govern AI in a representative and inclusive manner with an eye to achieving the UN Sustainable Development Goals.
The capacity of the Indo-Pacific to engage in critical technology standards has historically been lower than that of other regions. However, given the rapid and global impact of AI and the crucial role of technical standards, the report authors argue that dialogue and greater collaboration between policymakers, technologists and civil society have never been more important.
It is hoped this playbook will help key stakeholders – governments, industry, civil society and academia – step through the different aspects of negotiating technical standards for AI, while also encouraging the Indo-Pacific region to step up and get more involved.
The Chinese Communist Party (CCP) is leveraging its propaganda system to build a toolkit to enable information campaigns. Its objective is to control communication and shape narratives and perceptions about China in order to present a specific version of truth and reality, both domestically and internationally. Ultimately, the CCP aims to strengthen its grip on power, legitimise its activities and bolster China’s cultural, technological, economic and military influence.
The CCP seeks to maintain total control over the information environment within China, while simultaneously working to extend its influence abroad to reshape the global information ecosystem. That includes not only controlling media and communications platforms outside China, but also ensuring that Chinese technologies and companies become the foundational layer for the future of information and data exchange worldwide.
This research report finds that the CCP seeks to harvest data from various sources, including commercial entities, to gain insights into target audiences for its information campaigns. We define an information campaign as a targeted, organised plan of related and integrated information operations, employing information-related capabilities (tools, techniques or activities) with other lines of operation to influence, disrupt, corrupt or manipulate information — including the individual or collective decision making based on that information — and deliberately disseminated on a large scale. The party also invests in emerging technologies such as artificial intelligence (AI) and immersive technologies that shape how people perceive reality and engage with information. The aim is to gain greater control, if not dominance, over the global information ecosystem.
To understand the drivers, tools and outcomes of that process, this report and its accompanying website (ChinaInfoBlocks.aspi.org.au) examine the activities of the People’s Republic of China (PRC) in the information domain, particularly its investments in technology and research and development (R&D) companies that might serve as ‘building blocks’ for the party’s information campaigns.
Specifically, this research comprehensively maps the CCP’s propaganda system, highlighting the linkages between the Central Propaganda Department, state-owned or -controlled propaganda entities and data-collection activities, and technology investments in Chinese companies, many of which now operate globally.
This research illustrates the various ways in which the party-state is leveraging the propaganda system and commercial entities to gain access to data that it deems strategically valuable for the propaganda system and its ongoing information operations. It also shows how the propaganda system uses new and emerging technologies, including generative AI, mobile gaming and immersive technologies, to establish and maintain control of the narrative and continuously refine its toolbox and techniques.
It’s imperative that policymakers develop robust defences and countermeasures against future disruptive information campaigns from Beijing and ensure an open and secure global information environment. In mapping those companies linked to China’s propaganda system that are seeking market dominance in key technologies, and how their activities may support CCP efforts to shape the global information environment, this project aims to inform government and industry decisions on digital supply-chain security, supporting policies for safer and more secure digital technologies.
The first section of this report lays out the fundamentals of CCP theory that have, over decades, defined the party-state’s strategy in the information domain. A theoretical understanding of how the CCP conceptualises its goals is important in unpacking the different tools used to achieve them. The second section outlines the CCP’s complex and vast propaganda system and how it works. Later sections expand on the ways in which CCP theory underpins the propaganda system and its activities, including through practical examples and case studies.
This report is accompanied by a website that offers detailed network diagrams of the relationships between China’s propaganda system and the companies associated with it: directly, through a state-ownership structure linking back to the propaganda system, or indirectly, through significant state support. The website also hosts case studies relevant to the report findings. The map can be explored on the website, Identifying The Building Blocks of China’s Information Campaigns (ChinaInfoBlocks.aspi.org.au).
The CCP’s propaganda efforts on social media have been widely studied, enabling a baseline understanding of common narratives and tactics. Previous ASPI research, for example, has tracked a persistent, large-scale influence campaign linked to Chinese state actors on Twitter and Facebook.1 Several other research institutes have published important research on how the Chinese party-state attempts to control the information environment globally.2
China’s propaganda system is a vast structure. Under its direct control or with its direct support sits a web of additional entities whose portfolios contribute to the party’s ability to meet its strategic aims in the information environment. Countries that understand the ‘invisible architecture’ of the CCP’s propaganda system and technologies will be better able to address and respond to its global efforts to skew the information environment.
Important research questions remain understudied. In particular, research on the building blocks that need to be in place to support and inform successful efforts to shape the information environment is limited. What’s the Chinese party-state doing to build its capacity to control ‘truth’ and influence how external audiences perceive, engage with and question reality?
To bridge that knowledge gap, this project examines how the party-state is leveraging the propaganda system:
through commercial entities, by collecting data or gaining access to datasets that it deems strategically valuable and that could be used for propaganda purposes, including potentially for current or future information operations (for example, undertaking data-collection activities that build the party-state’s capacity to generate insights on current or potential targets of information operations)
through state support, by investing in R&D and access to new and emerging technology to shape or distort the information environment both domestically and globally.
Our project is based on ASPI’s 2019 report, Engineering global consent. That report first identified Global Tone Communications Technology (GTCOM), a machine-translation company that’s controlled by the CCP Central Propaganda Department. GTCOM claims that it accesses data from social media and has downstream access to datasets of the internet of things (IoT) and software products that it supplies, mainly to other PRC technology companies, to generate insights to support China’s state security and propaganda work.3
Building on Engineering global consent, we’ve sought to identify and explain how the Chinese party-state’s expansive propaganda system exploits new and emerging technologies and seeks to shape or distort the information environment both domestically and globally. To answer these questions, we generated network graphs describing the relationships between companies in our dataset, which are mostly Chinese state-owned or backed by state funds, with direct links to the propaganda system and other entities. We used that research to better understand areas of business activity associated with the PRC’s propaganda system, especially when such activity is related to data collection, aggregation and processing.
Our research effort involved identifying entities linked to the Propaganda Department of the Chinese Communist Party’s Central Committee (‘the Central Propaganda Department’), provincial-level propaganda departments, or other party-state bodies linked to the propaganda system, such as the Ministry of Culture and Tourism. This project began with a months-long effort to build a network graph of companies that were directly and indirectly linked to the Central Propaganda Department. Our research included looking for subsidiaries, shareholders and strategic cooperation and MoU partners of the companies we identified. Our information sources focused on PRC-based company databases and shareholders, and included company websites, company press releases and corporate disclosure documents. We then narrowed the scope of our research to focus on the specific case studies covered in this report.
Party-state news and publishing outlets were included in our research because the Central Propaganda Department is responsible for the supervision of news and publishing work, and those outlets are key platforms for disseminating information. However, rather than simply mapping out the names of media and publishing outlets, and their publication outputs domestically in China and overseas, our research emphasis was on identifying where those outlets are establishing branches or partnerships that expand their business activity into areas of business related to new and emerging technology.
While this research has revealed large amounts of previously inaccessible information on Chinese companies with links to the CCP’s propaganda institutions, it relies on publicly available information sources that are accessible outside mainland China. Continued research on these connections, as well as on connections between these types of companies and other parts of the party-state bureaucracy, is required.
Key findings
The report places the PRC’s propaganda system in the context of the CCP’s overall strategic frameworks, which are filtered down to specific policy outputs. Key findings are as follows:
The Chinese party-state sees data as central to its ability to modernise its propaganda efforts in the global information environment. Unlike the legislation of other state actors, China’s 2021 Data Security Law clearly articulates a vision for how data and data exchanges contribute to an overall national strategy (see ‘The propaganda system and its feedback loop’ at page 13). It prioritises data access and the regulation of data flows as part of its efforts to ensure control.
– That data is global. For example, China’s People’s Public Opinion Cloud combines about half a million information sources across 182 countries and 42 languages to support the Chinese Government’s and PRC enterprises’ international communication needs.4 The platform has both government and corporate applications and provides tools for public-security agencies to monitor the information environment and public sentiment on sensitive events and topics.5
The CCP sees emerging technology, such as e-commerce, virtual reality and gaming, as a means to promote a CCP-favoured perspective on truth and reality that supports the official narrative that the CCP seeks to project (even if those technologies may also be potentially hazardous to the party’s interests). This is especially true in relation to the CCP’s ability to conduct information campaigns and shape global information standards and foundational technologies.
– The CCP’s national key cultural export enterprises and projects lists (both the 2021–22 and 2022–23 versions) name dozens of mobile gaming companies and mobile games that receive state support (see ‘The perception of reality’ at page 19), including subsidies, so that they can continue to enjoy global success and help advance the mission to boost China’s cultural soft power.
– In e-commerce, for example, companies such as Temu (which became the most-downloaded free iPhone app in the US in 20236) also collect large amounts of data that’s likely to be shared with the PRC’s propaganda system.7 In gaming, popular video games such as Genshin Impact, the developers of which receive Chinese state support linked to the propaganda system, create similar security risks due to the strategic value of the user data that they generate and collect.
Under Xi Jinping’s leadership, the CCP has renewed its emphasis on a national strategy of media convergence that brings together traditional and ‘emerging’ media across various dimensions—content, channels, platforms, operations and management—to enhance the agility of propaganda initiatives in responding to real-time shifts in public sentiment.8 Media convergence is directly linked to the perception that an absence of guidance on public opinion risks China’s security and stability. The party uses digital media, particularly the data resources that digital media help to generate, to improve its ability to use media effectively in its communications strategy and to create feedback loops in China and internationally.9
Policy recommendations
Policymakers face two key challenges: first, to apply the CCP’s way of thinking to efforts to counter information campaigns, before they’re conducted; and, second, to resist China’s efforts to shape global information standards and core foundational technologies for Web 2.0 and beyond.10
Informed by the findings contained in this report, we make the following recommendations for governments, civil society, social-media platforms and hardware and software developers and vendors:
Governments should exert pressure on technology companies to conduct more thorough reviews of their digital supply chains to ensure that their Web 2.0 and future Web 3.0 foundations, and the companies and technologies that they rely on, are transparent and secure. Improving due diligence, transparency, trust and security by design in the digital supply chain, at both the technology and systems/applications layers, must be considered, especially for companies engaged in government procurements. That can be achieved by imposing more stringent reporting requirements, developing high-risk vendor frameworks, imposing and enforcing privacy and data requirements, and developing consistent data-minimisation approaches. Already the US and partner nations have sought to enhance software security by requiring companies working with governments to provide software ‘bills of materials’. The Quad Cybersecurity Partnership’s ‘joint principles for secure software’11 is an excellent template for considering enhanced transparency regulation.
– Technology companies, including vendors, platforms and developers, should commit to and adhere to the Cybersecurity Tech Accord, develop security-by-design standards, and impose greater moderation and fact-checking standards across online platforms, social media and similar services to reduce the potential for attacks on the availability, confidentiality and integrity of data, products, services and networks, and to highlight mis- and disinformation and propaganda. As China’s information campaigns seek to weaponise truth and reality, greater vigilance and verification are needed to give information consumers the best chance of identifying mis- and disinformation influences.
Governments must devote significantly more policy attention to the regulation of technologies used for surveillance and related immersive technologies. Few governments have developed broad definitions of those technologies or studied their privacy and data-security impacts. As a consequence, their regulation hasn’t been effective or focused on their future societal and national-security implications. More specifically:
– Governments should define machine learning and cloud data as surveillance or dual-use goods. For example, the European Union has identified dual-use applications of AI systems as an area of concern in its assessment process as part of the Ethics Guidelines for Trustworthy AI.12 The Council of Europe has also raised concerns about the Pegasus surveillance software.13 The US has identified cloud data as an export under the Export Administration Regulations that may attract dual-use controls. While these efforts are significant, regulation still lags behind the use of machine learning and cloud data by companies and governments, resulting in inconsistent application, a situation rife for exploitation by authoritarian regimes.
– Governments should standardise and tighten regulation of technologies and services not traditionally understood as surveillance or dual-use (data) products, including data-generating products and services in the e-commerce and gaming industries. Doing so would enable them to apply traditional tool sets for preventing access to goods of that nature, such as export controls.
– Additionally, increased transparency in regard to which technology actors and entities, whether they’re involved in R&D activities or product sales, are acting on behalf of state interests could clarify what data is used for surveillance purposes and what data can be used to undermine another state’s sovereignty.
To further increase transparency, governments should also more clearly define which individual actors and entities are required to register under foreign-agent registration schemes. That includes Australia’s Foreign Influence Transparency Scheme, the US Foreign Agents Registration Act (FARA) and emerging equivalents elsewhere, such as the UK’s upcoming foreign influence registration scheme. The US, for example, used FARA to force PRC state-owned media companies such as Xinhua and CGTN to register as state agents.14 Based on the same logic, any technology company linked directly to China’s propaganda system or receiving state support to facilitate the party-state’s propaganda efforts could be required to register.
Internationally, governments should work to standardise the ways in which data is shared, and proactively regulate how it can be produced and stored. Efforts thus far have failed to reach accord, and many have been siloed within specific functional domains (such as meteorological data, social services, food and agriculture, finance and so on). Such efforts can reduce opportunities for authoritarian regimes to collect, use and misuse data in ways that harm ethnic communities, disparage and denigrate alternative perspectives and silence dissent in the global information environment. The International Organization for Standardization, together with the UN Centre for Trade Facilitation and Electronic Business, among others, should establish joint government–industry standardisation mechanisms.
Multilaterally, democratic governments should work together to develop a stronger institutional understanding of the future vulnerabilities and risks of new technologies, particularly in the digital technology ecosystem. That understanding should guide the development of new standards for emergent technologies and assist industry to commercialise those technologies with the goal of safety and security by design. The Quad Principles on Critical and Emerging Technology Standards are a good example of the work that needs to occur.
Locally, governments and civil society should establish guardrails against the negative impacts of CCP efforts to shape the information environment, including through media-literacy and critical-thinking campaigns targeting individuals and communities. Efforts should not only help users understand what’s ‘real’ and what’s ‘fake’, but also ensure that they have broader awareness of how entities supporting foreign information campaigns may be present in their supply chains, so that risks associated with them are identified and more reliably controlled.
Tom Uren, Elise Thomas, Jacob Wallis, Tweeting through the Great Firewall, ASPI, Canberra, 3 September 2019, online; Jacob Wallis, Tom Uren, Elise Thomas, Albert Zhang, Samantha Hoffman, Lin Li, Alexandra Pascoe, Danielle Cave, Retweeting through the Great Firewall, ASPI, Canberra, 12 June 2020, online; Albert Zhang, Tilla Hoja, Jasmine Latimore, Gaming public opinion, ASPI, Canberra, 26 April 2023, online. ↩︎
Freedom House, for example, found in its survey of CCP media influence in 30 countries that ‘the Chinese government and its proxies are using more sophisticated, covert, and coercive tactics—including intensified censorship and intimidation, deployment of fake social media accounts, and increased mass distribution of Beijing-backed content via mainstream media—to spread pro-CCP narratives, promote falsehoods, and suppress unfavourable news coverage.’ See ‘Beijing’s global media influence 2022: Authoritarian expansion and the power of democratic resilience’, Freedom House, 8 September 2022, online; ‘New report: Beijing is intensifying its global push for media influence, turning to more covert and aggressive tactics’, Freedom House, 8 September 2022, online. The National Endowment for Democracy’s work on sharp power has similarly examined how the PRC and authoritarian states engage in activities that undermine media integrity; see Christopher Walker, Jessica Ludwig, A full-spectrum response to sharp power the vulnerabilities and strengths of open societies, Sharp Power and Democratic Resilience series, National Endowment for Democracy, June 2021, online; Sharp power: rising authoritarian influence, National Endowment for Democracy, December 2017, online. ↩︎
Samantha Hoffman, Engineering global consent: the Chinese Communist Party’s data-driven power expansion, ASPI, 14 October 2019, online. ↩︎
‘People’s Public Opinion Cloud’ [人民舆情云], People’s Cloud, no date, online. ↩︎
‘People’s Public Opinion Cloud’ [人民舆情云], People’s Cloud, no date, online. ↩︎
Sarah Perez, ‘Temu was the most-downloaded iPhone app in the US in 2023’, TechCrunch, 13 December 2023, online. ↩︎
Temu has also reportedly engaged in controversial business practices, such as forced and exploitative labour practices, and copyright infringement. See Nicholas Kaufman, Shein, Temu, and Chinese e-commerce: data risks, sourcing violations, and trade loopholes, US–China Economic and Security Review Commission, 14 April 2023, online. ↩︎
Patrick Boehler, ‘Two million “internet opinion analysts” employed to monitor China’s vast online population’, South China Morning Post, 3 October 2013, online. ↩︎
‘CMP dictionary: media convergence’, China Media Project, 16 April 2021, online. ↩︎
Web 2.0 refers to a shift in the way websites and web applications are designed and used, characterised by user-generated content, interactivity and collaboration, marking a departure from static web pages to dynamic platforms facilitating social interaction and user participation. See Ashraf Darwish, Kamaljit Lakhtaria, ‘The impact of the new Web 2.0 technologies in communication, development, and revolutions of societies’, Journal of Advances in Information Technology, November 2011, online. ↩︎
Quad Senior Cyber Group, ‘Quad Cybersecurity Partnership: joint principles for secure software’, Department of the Prime Minister and Cabinet, Australian Government, 20 May 2023, online. ↩︎
High Level Expert Group on Artificial Intelligence, ‘Ethics guidelines for trustworthy AI’, European Union, 8 April 2019, online. ↩︎
‘Pegasus spyware and its impacts on human rights’, Council of Europe, 20 June 2022, online. ↩︎
National Security Division, ‘Obligation of CGTN America to register under the Foreign Agents Registration Act’, Department of Justice, US Government, 20 December 2018, online; National Security Division, ‘Obligation of Xinhua News Agency North America to register under the Foreign Agents Registration Act’, Department of Justice, US Government, 18 May 2020, online. ↩︎
ASPI has released a groundbreaking report that finds the Chinese Communist Party seeks to harvest user data from globally popular Chinese apps, games and online platforms in a likely effort to improve its global propaganda.
The research maps the CCP’s propaganda system, highlighting the links between the Central Propaganda Department, state-owned or controlled propaganda entities and data-collection activities, and technology investments in Chinese companies.
In this special short episode of Stop the World, David Wroe speaks with ASPI analyst Daria Impiombato about the key takeaways from this major piece of research.
The Australian Strategic Policy Institute (ASPI) is pleased to announce that the third Sydney Dialogue for critical, emerging and cyber technologies will be held on 2-3 September 2024.
The Sydney Dialogue (TSD) brings together world leaders, global technology industry innovators and top experts in cyber and critical technology for frank and productive discussions, with a specific focus on the Indo-Pacific.
TSD 2024 will generate conversations that address the remarkable advances being made across these technologies, their impact on our societies, economies and national security, and how we can best manage their adoption over the next decade and beyond. These technologies include generative artificial intelligence, cybersecurity, quantum computing, biotechnology, climate and space technologies.
We will prioritise speakers and topics that push the boundaries and generate new insights into these fields, while also promoting diverse views, including from the Pacific, Southeast Asia and South Asia.
This year’s event will also capture the key trends that are dominating international technology, security and geopolitical discussions. With more than 80 national elections set to take place around the world in 2024, the event will also focus on the importance of political leadership, global cooperation and the stable development of technologies amid great power transition, geopolitical uncertainty and ongoing conflict.
ASPI is pleased to have the support once again of the Australian Government for TSD in 2024.
Australia’s Minister for Home Affairs and Cyber Security, the Hon Clare O’Neil MP said: “The threats we face from cyber attacks and tech-enabled perils such as disinformation and foreign interference are only growing as the power of artificial intelligence gathers pace.
“The kind of constructive debate that the Sydney Dialogue fosters helps ensure that the rapid advances in critical technologies and cyber bring better living standards for our people rather than new security threats. Closer engagement with our international partners and with industry on these challenges has never been more important than it is today.”
TSD 2024 will build on the momentum of the previous two dialogues, which featured keynote addresses from Indian Prime Minister Narendra Modi, the late former Japanese Prime Minister Shinzo Abe, Samoa’s Prime Minister Fiamē Naomi Mata’afa, Estonia’s Prime Minister Kaja Kallas and former Chief Executive Officer of Google Eric Schmidt. A full list of previous TSD speakers can be found here. You can also watch previous TSD sessions here.
TSD 2024 will be held in person and will feature a mix of keynote addresses, panel discussions, closed-room sessions and media engagements.
Topics for discussion will also include technological disruptors, cybercrime, online disinformation, hybrid warfare, electoral interference, climate security, international standards and norms, as well as technology design with the aim of enhancing partnerships, trust and global co-operation.
Justin Bassi, the Executive Director of ASPI, said: “The Sydney Dialogue 2024 will continue to build on the great success ASPI has established since 2021. These technologies are affecting our security and economies faster, and more profoundly, than we ever imagined. We need frank, open debate about how, as a globe, we manage their adoption into our lives.
“We are proud to be focusing on our Indo-Pacific region and encouraging a wide and diverse range of perspectives on some of the most important challenges of our time.”
More information and updates on the Sydney Dialogue can be found at tsd.aspi.org.au.
In February, ASPI and the Special Competitive Studies Project held a series of workshops on the rise of artificial intelligence (AI) and its impact on the intelligence sector.
The workshops, which followed a multi-day workshop in Canberra in November 2023, brought together experts from across the Australian and US intelligence communities, think tanks and industry to inform future intelligence approaches in both countries.
The project also focuses on how current and emerging AI capabilities can enhance the quality and timeliness of all-source intelligence analysis and how this new technology may change the nature of the intelligence business.
The aim of the workshops was to develop a prioritised list of recommendations for both the Australian and US intelligence communities on how to adopt AI quickly, safely and effectively.
‘Artificial Intelligence, Human-Machine Teaming, and the Future of Intelligence Analysis’, published 16 February 2024.
More than 2 billion people in over 50 countries, representing nearly a third of the global population, are set to vote in elections this year. With so many countries choosing new leaders, the results will have geopolitical ramifications, testing the resilience of democracy and the rules-based order in countless ways.
These elections also come at a time of increasing ambition among powerful authoritarian regimes, growing use of misinformation and disinformation often linked to state-led or state-backed influence operations, rising extremism of various political stripes, and the technological disruption of artificial intelligence.
At the same time, democracies face formidable challenges with wars raging in Europe and the Middle East, increasing climate disasters, weakening economies, and the erosion of confidence in liberal societies.
Watch the panel below as they explore the issues that are set to define 2024’s election campaigns, as well as the impact the outcomes could have on alliances, geopolitics and regional security around the world.
‘ASPI’s 2024 Democracy Primer’, published 16 February 2024.
A pro-China technology and anti-US influence operation thrives on YouTube
Executive Summary
ASPI has recently observed a coordinated inauthentic influence campaign originating on YouTube that’s promoting pro-China and anti-US narratives in an apparent effort to shift English-speaking audiences’ views of those countries’ roles in international politics, the global economy and strategic technology competition. This new campaign (which ASPI has named ‘Shadow Play’) has attracted an unusually large audience and is using entities and voiceovers generated by artificial intelligence (AI) as a tactic that enables broad reach and scale.1 It focuses on promoting a series of narratives including China’s efforts to ‘win the US–China technology war’ amid US sanctions targeting China. It also includes a focus on Chinese and US companies, such as pro-Huawei and anti-Apple content.
The Shadow Play campaign involves a network of at least 30 YouTube channels that have produced more than 4,500 videos. At time of publication, those channels have attracted just under 120 million views and 730,000 subscribers. The accounts began publishing content around mid-2022. The campaign’s ability to amass and access such a large global audience—and its potential to covertly influence public opinion on these topics—should be cause for concern.
ASPI reported our findings to YouTube/Google on 7 December 2023 for comment. By 8 December, they had taken down 19 YouTube channels from the Shadow Play network—10 for coordinated inauthentic behaviour and nine for spam. As of publication, these YouTube channels display a range of messages from YouTube indicating why they were taken down. For example, one channel was ‘terminated for violating YouTube’s community guidelines’, while another was ‘terminated due to multiple or severe violations of YouTube’s policy for spam, deceptive practices and misleading content or other Terms of Service violations’. ASPI also reported our findings to British artificial intelligence company, Synthesia, whose AI avatars were used by the network. On 14 December 2023, Synthesia disabled the Synthesia account used by one of the YouTube accounts, for violating its Media Reporting (News) policy.
We believe that it’s likely that this new campaign is being operated by a Mandarin-speaking actor. Indicators of this actor’s behaviour don’t closely map to the behaviour of any known state actor that conducts online influence operations. Our preliminary analysis (see ‘Attribution’) is that the operator of this network could be a commercial actor operating under some degree of state direction, funding or encouragement. This could suggest that some patriotic companies increasingly operate China-linked campaigns alongside government actors.
The campaign focuses on promoting six narratives. Two of the most dominant narratives are that China is ‘winning’ in crucial areas of global competition: first, in the ‘US–China tech war’ and, second, in the competition for rare earths and critical minerals.2 Other key narratives express that the US is headed for collapse and that its alliance partnerships are fracturing, that China and Russia are responsible, capable players in geopolitics, that the US dollar and the US economy are weak, and that China is highly capable and trusted to deliver massive infrastructure projects. A list of visual representative examples from the network for each narrative is in Appendix 1 on page 35.
Figure 1: An example of the style of content generated by the network, in which multiple YouTube channels published videos alleging that China had innovated a 1-nanometre chip, without using a lithography machine
Sources: ‘China Charged’, ‘China reveals the world’s first 1nm chip & SHOCKS the US!’, YouTube, 3 November 2023, online; ‘Relaxian’, ‘China’s groundbreaking 1nm chip: redefining technology and global power’, YouTube, 4 November 2023, online; ‘Vision of China’, ‘China breaks tech limit: EUV lithography not needed to make 1nm chips!’, YouTube, 17 July 2023, online; ‘China Focus—CNF’, ‘World challenge conquered: 1nm chips produced without EUV lithography!’, YouTube, 5 July 2023, online; ‘Curious Bay’, ‘China’s NEW 1nm chip amazes the world’, YouTube, 24 July 2023, online; ‘China Hub’, ‘China shatters tech boundaries: 1nm chips without EUV lithography? Unbelievable tech breakthrough!’, YouTube, 30 July 2023, online.
This campaign is unique in three ways. First, as noted above, there’s a notable broadening of topics. Previous China-linked campaigns have been tightly targeted and have often focused on a narrow set of topics. For example, the campaign’s focus on promoting narratives that establish China as technologically superior to the US presents detailed arguments on technology topics including semiconductors, rare earths, electric vehicles and infrastructure projects. In addition, it targets, via criticism and disinformation, US technology firms such as Apple and Intel. Chinese state media outlets, Chinese officials and online influencers sometimes publish on these topics in an effort to ‘tell China’s story well’ (讲好中国故事).3 A few Chinese state-backed inauthentic information operations have touched on rare earths and semiconductors, but never in depth or by combining multiple narratives in one campaign package.4 The broader set of topics and opinions in this campaign may demonstrate greater alignment with the known behaviour of Russia-linked threat actors.
Second, there’s a change in techniques and tradecraft, as the campaign has leveraged AI. To our knowledge, the YouTube campaign is one of the first times that video essays, together with generative AI voiceovers, have been used as a tactic in an influence operation. Video essays are a popular style of medium-length YouTube video in which a narrator makes an argument through a voiceover, while content to support their argument is displayed on the screen. This shows a continuation of a trend that threat actors are increasingly moving towards: using off-the-shelf video editing and generative AI technology tools to produce convincing, persuasive content at scale that can build an audience on social-media services. We also observed one account in the YouTube network using an avatar created by Sogou, one of China’s largest technology companies (and a subsidiary of Tencent) (see page 24). We believe the use of the Sogou avatar we identified to be the first instance of a Chinese company’s AI-generated human being used in an influence operation.
Third, unlike previous China-focused campaigns, this one has attracted a large number of views and subscribers. It has also been monetised, although only through limited means. For example, one channel accepted money from US and Canadian companies to support the production of its videos. The substantial number of views and subscribers suggests that the campaign is one of the most successful influence operations related to China ever witnessed on social media. Many China-linked influence operations, such as Dragonbridge (also known as ‘Spamouflage’ in the research community), have attracted initial engagement in some cases but have failed to sustain a meaningful audience on social media.5 However, further research by YouTube is needed to determine whether view counts and subscriber counts on YouTube reflected real viewership, artificial manipulation, or a combination of both. We note that, in our examination of YouTube comments on videos in this campaign, we saw signs of a genuine audience. ASPI believes that this campaign is probably larger than the 30 channels covered in this report, but we constrained our initial examination to channels we saw as core to the campaign. We also believe there to be more channels publishing content in non-English languages that belong to this network; for example, we saw channels publishing in Bahasa Indonesia that aren’t included in this report.
That’s not to say that the effectiveness of influence operations should only be measured through engagement numbers. As ASPI has previously demonstrated, Chinese Communist Party (CCP) influence operations that troll, threaten and harass on social media seek to silence and cause psychological harm to those being targeted, rather than seeking engagement.6 Similarly, influence operations can be used to ‘poison the well’ by crowding out the content of genuine actors in online spaces, or to poison datasets used for AI products, such as large-language models (LLMs).7
This report also discusses another way that an influence operation can be effective: through its ability to spill over and gain traction in a wider system of misinformation. We found that at least one narrative from the Shadow Play network—that Iran had switched on its China-provided BeiDou satellite system—began to gain traction on X (formerly Twitter) and other social-media platforms within a few hours of its posting on YouTube. We discuss that case study on page 29.
This report offers an initial identification of the influence operation and some defining characteristics of a likely new influence actor. In addition to sections on attribution, methodology and analysis of this new campaign, this report concludes with a series of recommendations for government and social media companies, including:
the immediate investigation of this ongoing information operation, including operator intent and the scale and scope of YouTube channels involved
broader efforts by Five Eyes and allied partners to declassify open-source social-media-based influence operations and share information with like-minded nations and relevant NGOs
rules that require social-media users to disclose when generative AI is used in audio, video or image content
national intelligence collection priorities that support the effective amalgamation of information on Russia-, China- and Iran-linked information operations
publishing detailed threat indicators as appendixes in information operations research.
Shadow play (or shadow puppetry) is a storytelling technique in which flat articulated cut-out figures are placed between a light source and a translucent screen. It’s practised across Southeast Asia, China, the Middle East, Europe and the US. See, for example, Inge C Orr, ‘Puppet theatre in Asia’, Asian Folklore Studies, 1974, 33(1):69–84, online. ↩︎
A recent Pew Research Center poll indicates that technology is one of the few areas in which public opinion in high-income and middle-income countries sees China and the US as equally capable, which suggests that narratives on those lines are credible for international viewers. Laura Silver, Christine Huang, Laura Clancy, Nam Lam, Shannon Greenwood, John Carlo Mandapat, Chris Baronavski, Comparing views of the US and China in 24 countries, Pew Research Center, 6 November 2023, online. ↩︎
‘Telling China’s story well’, China Media Project, 16 April 2021, online; Marcel Schliebs, Hannah Bailey, Jonathan Bright, Philip N Howard, China’s public diplomacy operations: understanding engagement and inauthentic amplification of PRC diplomats on Facebook and Twitter, Oxford Internet Institute, 11 May 2021, https://demtech.oii.ox.ac.uk/research/posts/chinas-public-diplomacy-operations-understanding-engagement-and-inauthentic-amplification-of-chinese-diplomats-on-facebook-and-twitter/#continue. ASPI’s work on foreign influencers’ role in telling China’s story well includes Fergus Ryan, Matt Knight, Daria Impiombato, Singing from the CCP’s songsheet, ASPI, Canberra, 24 November 2023, https://www.aspi.org.au/report/singing-ccps-songsheet; Fergus Ryan, Ariel Bogle, Nathan Ruser, Albert Zhang, Daria Impiombato, Borrowing mouths to speak on Xinjiang, ASPI, Canberra, 10 December 2021, https://www.aspi.org.au/report/borrowing-mouths-speak-xinjiang; Fergus Ryan, Daria Impiombato, Hsi-Ting Pai, Frontier influencers, ASPI, Canberra, 20 October 2022, https://www.aspi.org.au/report/frontier-influencers/. ↩︎
Reports on China-linked information operations that have targeted semiconductors and rare earths include Albert Zhang, ‘The CCP’s information campaign targeting rare earths and Australian company Lynas’, The Strategist, 29 June 2022, online; ‘Pro-PRC DRAGONBRIDGE influence campaign targets rare earths mining companies in attempt to thwart rivalry to PRC market dominance’, Mandiant, 28 June 2022, https://www.mandiant.com/resources/blog/dragonbridge-targets-rare-earths-mining-companies; Shane Huntley, ‘TAG Bulletin: Q3 2022’, Google Threat Analysis Group, 26 October 2022, https://blog.google/threat-analysis-group/tag-bulletin-q3-2022/. ↩︎
Ben Nimmo, Ira Hubert, Yang Cheng, ‘Spamouflage breakout’, Graphika, 4 February 2021, online. ↩︎
Danielle Cave, Albert Zhang, ‘Musk’s Twitter takeover comes as the CCP steps up its targeting of smart Asian women’, The Strategist, 6 November 2022, online; Donie O’Sullivan, Curt Devine, Allison Gordon, ‘China is using the world’s largest known online disinformation operation to harass Americans, a CNN review finds’, CNN, 13 November 2023, https://edition.cnn.com/2023/11/13/us/china-online-disinformation-invs/index.html . ↩︎
Rachael Falk, Anne-Louise Brown, ‘Poison the well: AI, data integrity and emerging cyber threats’, Cyber Security Cooperative Research Centre, 30 October 2023, online. ↩︎
‘Shadow Play’, published 14 December 2023.
The role of foreign influencers in China’s propaganda system
Disclaimer: Because of a website upload issue, an earlier version of this page and report contained errors, including incorrect author names and acknowledgement text from a previous report. These issues have been rectified.
Executive summary
The Chinese Communist Party (CCP) has always viewed contact with foreigners and the outside world as a double-edged sword, presenting both threats and opportunities. While the CCP and its nationalist supporters harbour fears of foreigners infiltrating China’s information space and subtly ‘setting the tempo’ (带节奏) of discussions, the CCP also actively cultivates a rising group of foreign influencers with millions of fans, which endorses pro-CCP narratives on Chinese and global social-media platforms.
In the People’s Republic of China (PRC), the information ecosystem is geared towards eliminating rival narratives and promoting the party’s ‘main melody’ (主旋律)—the party’s term for themes or narratives that promote its values, policies and ideology.1 Foreign influencers who are amenable to being ‘guided’ towards voicing that main melody are increasingly considered to be valuable assets. They’re seen as building the CCP’s legitimacy for audiences at home, as well as supporting propaganda efforts abroad.
This report examines how a growing subset of foreign influencers, aware of the highly nationalistic online environment and strict censorship rules in China, is increasingly choosing to create content that aligns more explicitly with the CCP’s ‘main melody’.2 In addition to highlighting the country’s achievements in a positive light, these influencers are promoting or defending China’s position on sensitive political issues, such as territorial disputes or human rights concerns.
As we outline in this report, foreign influencers are involved in a wave of experimentation and innovation in domestic (and external) propaganda production that’s taking place at different levels around the PRC as officials heed Xi Jinping’s call to actively participate in ‘international communication’. That experimentation includes their use in the Propaganda Department’s efforts to control global narratives about Covid-19 in China and the cultivation of Russian influencers in China to counter Western narratives.3 This research also reveals that the CCP is effectively co-opting a widespread network of international students at Chinese universities, cultivating them as a talent pool of young, multilingual, social-media-friendly influencers.
Foreign influencers are guided via rules, regulations and laws, as well as via platforms that direct traffic towards user-generated propaganda. Video competitions organised by propaganda organs and the amplification of party-state media and government spokespeople further encourage this trend. The resulting party-aligned content that foreign influencers produce, coupled with that of party-state media workers masquerading as influencers and state-approved ethnic-minority influencers,4 is part of a coordinated tactic referred to as ‘polyphonous communication’ (复调传播).5
By coordinating foreign influencers and other communicators, Beijing aspires to create a unified choir of voices capable of promoting party narratives more effectively than traditional official PRC media. The ultimate goal is to shield CCP-controlled culture, discourse and ideology from the dangers of foreign and free political speech, thereby safeguarding the party’s legitimacy.
As this report outlines, that strategy reveals the CCP’s determination to defend itself against foreign influence and shape global narratives in its favour, including through covert means. As one party-state media worker put it, the aim is to ‘help cultivate a group of “foreign mouths”, “foreign pens”, and “foreign brains” who can stand up and speak for China at critical moments’.6
The CCP’s growing use of foreign influencers reinforces China’s internal and external narratives in ways that make it increasingly difficult for social-media platforms, foreign governments and individuals to distinguish between genuine and/or factual content and propaganda. It further complicates efforts to counter disinformation and protect the integrity of public discourse and blurs the line between independent voices and those influenced by the party’s narratives.
This report makes key recommendations for media and social-media platforms, governments and civil society aimed at building awareness and accountability. They include broadening social-media platforms’ content labelling practices to include state-linked, PRC-based influencers; preventing PRC-based creators from monetising their content on platforms outside China to diminish the commercial incentives to produce party-aligned content; and, in countries with established foreign interference taskforces, such as Australia, developing appropriate briefing materials for students planning to travel overseas.
Key findings
Foreign influencers are reaching increasingly larger and more international audiences. Some of them have tens of millions of followers in China and millions more on overseas platforms (see Appendix 1 on page 65), particularly on TikTok, YouTube and X (formerly Twitter).
The CCP is creating competitions that offer significant prize money and other incentives as part of an expanding toolkit to co-opt influencers in the production of pro-CCP and party-state-aligned content (see Section 2.3: ‘State-sponsored competitions’ on page 20).
Beijing is establishing multilingual influencer studios to incubate both domestic and foreign influencers in order to reach younger media consumers globally (see Section 2.5: ‘The influencer studio system’ on page 33).
The CCP is effectively using a widespread network of international students at Chinese universities, cultivating them as a latent talent pool of young, multilingual, social-media-friendly influencers (see breakout box: ‘PRC universities’ propaganda activities’ on page 32).
Russian influencers in China are cultivated as part of the CCP’s strategic goal of strengthening bilateral relations with Russia to counter Western countries (see Section 3.4: ‘Russian influencers’ on page 53).
The CCP is using foreign influencers to enable its propaganda to surreptitiously penetrate mainstream overseas media, including into major US cable TV outlets (see Section 3.3: ‘Rachele Longhi’ on page 44). Chinese authorities use vlogger, influencer and journalist identities interchangeably, in keeping with efforts aimed at influencing audiences, rather than offering professional or objective news coverage.
CCP-aligned influencer content has helped boost the prevalence of party-approved narratives on YouTube, outperforming more credible sources on issues such as Xinjiang due to search-engine algorithms that prioritise fresh content and regular posting (see Section 2.2 ‘Turning a foreign threat into a propaganda opportunity’ on page 15).
Foreign influencers played a key part in the Propaganda Department’s drive to control international narratives about Covid-19 in China and have, in some instances, attempted to push the CCP’s narrative overseas as well (see Section 1.1: ‘Case study’ on page 7).
Efforts to deal with CCP propaganda have taken a step backwards on X, which under Elon Musk has dispensed with state-affiliation labels and is allowing verification for party-state media workers, including foreigners (see Section 2.5 ‘The influencer studio system’ on page 33).
The term ‘Propaganda Department’ is used here for the Publicity Department of the Central Committee of the CCP. Subordinate CCP organisations in many cases have their own propaganda departments. ↩︎
Fergus Ryan, Daria Impiombato, Hsi-Ting Pai, Frontier influencers: the new face of China’s propaganda, ASPI, Canberra, 20 October 2022. ↩︎
Devin Thorne, ‘1 key for 1 lock: the Chinese Communist Party’s strategy for targeted propaganda’, Recorded Future, September 2022. ↩︎
Du Guodong [杜国东], ‘A tentative analysis of how to leverage the role of foreign internet celebrities in China’s international communication’ [试析如何发挥洋网红在中国国际传播中的作用], FX361, 10 September 2019. ↩︎
‘Singing from the CCP’s songsheet’, published 24 November 2023.
In 2020, the then Director of ASPI’s International Cyber Policy Centre, Fergus Hanson, approached me to research the views of the 46th Parliament on a range of cybersecurity and critical technology issues. Data collection was conducted in two parts across 2021 and 2022, with the results analysed and written up in 2022 and 2023. Parliamentarians who ‘opted in’ completed an initial quantitative survey, which I then followed up with an interview exploring an additional set of qualitative questions. The results, collated and analysed, form the basis of this report.
This research aims to provide a snapshot of what our nation’s policy shapers and policymakers are thinking when it comes to cybersecurity and critical technologies. What are they worried about? Where are their knowledge gaps and interests? What technologies do they think are important to Australia and where do they believe policy attention and investment should focus in the next five years?
This initial study establishes a baseline for future longitudinal assessments that could capture changes or shifts in parliamentarians’ thinking. Australia’s ongoing cybersecurity challenges, the fast-moving pace of artificial intelligence (AI), the creation of AUKUS and the ongoing development of AUKUS Pillar 2—with its focus on advanced capabilities and emerging technologies (including cybertechnologies)—are just a few reasons among many which highlight why it’s more important than ever that the Australian Parliament be both informed and active when engaging with cybersecurity and critical technologies.
We understand that this in-depth study may be a world first and extend our deep and heartfelt thanks to the 24 parliamentarians who took part in it. Parliamentarians are very busy people, and yet many devoted significant time to considering and completing this study.
This was a non-partisan study. Parliamentarians were speaking on condition of strict anonymity, without any identifiers apart from their gender, chamber, electorate profile and backbench or frontbench status. Because of that, the conversations were candid, upfront and insightful and, as a result, this study provides a rich and honest assessment of their views.
‘What do Australia’s parliamentarians think about cybersecurity and critical technology?’, published 14 November 2023.
ASPI and a non-government research partner1 conducted a year-long project designed to share detailed and accurate information on state surveillance in the People’s Republic of China (PRC) and engage residents of the PRC on the issue of surveillance technology. A wide range of topics was covered, including how the party-state communicates on issues related to surveillance, as well as people’s views on state surveillance, data privacy, facial recognition, DNA collection and data-management technologies.
The project’s goals were to:
improve our understanding of state surveillance in China and how it’s communicated by the Chinese party-state
develop a nuanced understanding of PRC residents’ perceptions of surveillance technology and personal privacy, the concerns some have in regard to surveillance, and how those perceptions relate to trust in government
explore the reach and potential of an interactive digital platform as an alternative educational and awareness-raising tool.
This unique project combined extensive preliminary research—including media analysis and an online survey of PRC residents—with data collected from an interactive online research platform deployed in mainland China. Media analysis drew on PRC state media to understand the ways in which the party-state communicates on issues of surveillance. The online survey collected opinions from 4,038 people living in mainland China, including about their trust in government and views on surveillance technologies. The interactive research platform offered PRC residents information on the types and capabilities of different surveillance technologies in use in five municipalities and regions in China. Presenting an analysis of more than 1,700 PRC Government procurement documents, it encouraged participants to engage with, critically evaluate and share their views on that information. The research platform engaged more than 55,000 PRC residents.
Data collection was led and conducted by the non-government research partner, and the data was then provided to ASPI for a joint analysis. The project details, including methodology, can be found on page 6.
Key findings
The results of this research project indicate the following:
Project participants’ views on surveillance and trust in the government vary markedly.
Segmentation analysis of survey responses suggests that respondents fall into seven distinct groups, which we have categorised as dissenters, disaffected, critics, possible sceptics, stability seekers, pragmatists and endorsers (the segmentation analysis is on page 12).
In general, PRC state narratives about government surveillance and technology implementation appear to be at least partly effective.
Our analysis of PRC state media identified four main narratives to support the use of government surveillance:
Surveillance helps to fight crime.
The PRC’s surveillance systems are some of the best in the world.
Surveillance is commonplace internationally.
Surveillance is a ‘double-edged sword’, and people should be concerned for their personal privacy when surveillance is handled by private companies.
Public opinion often aligns with state messaging that ties surveillance technologies to personal safety and security. For example, when presented with information about the number of surveillance cameras in their community today, most Research Platform participants said they would prefer the same number of cameras (39%) or more (38.4%).
PRC state narratives make a clear distinction between private and government surveillance, which suggests party-state efforts to ‘manage’ privacy concerns within acceptable political parameters.
Project participants value privacy but hold mixed views on surveillance.
Participants expressed a preference for consent and active engagement on the issue of surveillance. For example, over 65% agreed that DNA samples should be collected from the general population only on a voluntary basis.
Participants are generally comfortable with the widespread use of certain types of surveillance, such as surveillance cameras; they’re less comfortable with other forms of surveillance, such as DNA collection.
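The report’s segmentation method and underlying data aren’t detailed in this excerpt. Purely as an illustration of how survey responses can be partitioned into the kind of distinct groups described above, the sketch below clusters synthetic Likert-scale answers with a minimal k-means routine; the choice of k-means, the four survey items and every number are assumptions for demonstration, not the project’s actual methodology.

```python
# Illustrative only: cluster synthetic 5-point survey responses into k groups.
# All data below is made up; this is not the report's segmentation method.
import random

random.seed(0)

def kmeans(points, k, iters=50):
    """Minimal k-means over equal-length numeric vectors (lists)."""
    centroids = random.sample(points, k)          # initialise from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for c, members in enumerate(clusters):
            if members:                           # recompute centroid as mean
                centroids[c] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return centroids, clusters

# Synthetic respondents: each vector = answers to 4 hypothetical Likert items
# (1-5), e.g. trust in government, comfort with cameras, DNA collection,
# data sharing.
respondents = [[random.randint(1, 5) for _ in range(4)] for _ in range(200)]
centroids, clusters = kmeans(respondents, k=7)
print([len(c) for c in clusters])                 # size of each segment
```

In practice, segmentation studies of this kind typically use richer techniques (latent class analysis, hierarchical clustering) and validate the number of segments against interpretability, but the basic idea of grouping respondents by response similarity is the same.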
ASPI supported this project with an undisclosed research partner. That institution remains undisclosed to preserve its access to specific research techniques and data and to protect its staff. ↩︎
‘Surveillance, privacy and agency’, published 12 October 2023.