
Shadow Play

A pro-China technology and anti-US influence operation thrives on YouTube

Executive Summary

ASPI has recently observed a coordinated inauthentic influence campaign originating on YouTube that’s promoting pro-China and anti-US narratives in an apparent effort to shift English-speaking audiences’ views of those countries’ roles in international politics, the global economy and strategic technology competition. This new campaign (which ASPI has named ‘Shadow Play’) has attracted an unusually large audience and is using entities and voiceovers generated by artificial intelligence (AI) as a tactic that enables broad reach and scale.1 It focuses on promoting a series of narratives, including China’s efforts to ‘win the US–China technology war’ amid US sanctions targeting China. It also focuses on Chinese and US companies, publishing, for example, pro-Huawei and anti-Apple content.

The Shadow Play campaign involves a network of at least 30 YouTube channels that have produced more than 4,500 videos. At the time of publication, those channels have attracted just under 120 million views and 730,000 subscribers. The accounts began publishing content around mid-2022. The campaign’s ability to amass and access such a large global audience—and its potential to covertly influence public opinion on these topics—should be cause for concern.

ASPI reported our findings to YouTube/Google for comment on 7 December 2023. By 8 December, they had taken down 19 YouTube channels from the Shadow Play network—10 for coordinated inauthentic behaviour and nine for spam. As of publication, those YouTube channels display a range of messages from YouTube indicating why they were taken down. For example, one channel was ‘terminated for violating YouTube’s community guidelines’, while another was ‘terminated due to multiple or severe violations of YouTube’s policy for spam, deceptive practices and misleading content or other Terms of Service violations’. ASPI also reported our findings to the British artificial intelligence company Synthesia, whose AI avatars were used by the network. On 14 December 2023, Synthesia disabled the Synthesia account used by one of the YouTube channels for violating its Media Reporting (News) policy.

We believe it’s likely that this new campaign is being operated by a Mandarin-speaking actor. Indicators of this actor’s behaviour don’t closely map to the behaviour of any known state actor that conducts online influence operations. Our preliminary analysis (see ‘Attribution’) is that the operator of this network could be a commercial actor operating under some degree of state direction, funding or encouragement. This could suggest that some patriotic companies increasingly operate China-linked campaigns alongside government actors.

The campaign focuses on promoting six narratives. Two of the most dominant are that China is ‘winning’ in crucial areas of global competition: first, in the ‘US–China tech war’ and, second, in the competition for rare earths and critical minerals.2 Other key narratives assert that the US is headed for collapse and that its alliance partnerships are fracturing; that China and Russia are responsible, capable players in geopolitics; that the US dollar and the US economy are weak; and that China is highly capable and trusted to deliver massive infrastructure projects. A list of representative visual examples from the network for each narrative is in Appendix 1 on page 35.

Figure 1: An example of the style of content generated by the network, in which multiple YouTube channels published videos alleging that China had developed a 1-nanometre chip without using a lithography machine


Sources: ‘China Charged’, ‘China reveals the world’s first 1nm chip & SHOCKS the US!’, YouTube, 3 November 2023, online; ‘Relaxian’, ‘China’s groundbreaking 1nm chip: redefining technology and global power’, YouTube, 4 November 2023, online; ‘Vision of China’, ‘China breaks tech limit: EUV lithography not needed to make 1nm chips!’, YouTube, 17 July 2023, online; ‘China Focus—CNF’, ‘World challenge conquered: 1nm chips produced without EUV lithography!’, YouTube, 5 July 2023, online; ‘Curious Bay’, ‘China’s NEW 1nm chip amazes the world’, YouTube, 24 July 2023, online; ‘China Hub’, ‘China shatters tech boundaries: 1nm chips without EUV lithography? Unbelievable tech breakthrough!’, YouTube, 30 July 2023, online.

This campaign is unique in three ways. First, there’s a notable broadening of topics. Previous China-linked campaigns have been tightly targeted, often focusing on a narrow set of topics. This campaign, by contrast, promotes narratives that establish China as technologically superior to the US, presenting detailed arguments on technology topics including semiconductors, rare earths, electric vehicles and infrastructure projects. In addition, it targets, via criticism and disinformation, US technology firms such as Apple and Intel. Chinese state media outlets, Chinese officials and online influencers sometimes publish on these topics in an effort to ‘tell China’s story well’ (讲好中国故事).3 A few Chinese state-backed inauthentic information operations have touched on rare earths and semiconductors, but never in depth or by combining multiple narratives in one campaign package.4 The broader set of topics and opinions in this campaign may demonstrate greater alignment with the known behaviour of Russia-linked threat actors.

Second, there’s a change in techniques and tradecraft, as the campaign has leveraged AI. To our knowledge, this campaign is one of the first times that video essays, together with generative AI voiceovers, have been used as a tactic in an influence operation. Video essays are a popular style of medium-length YouTube video in which a narrator makes an argument through a voiceover while supporting content is displayed on the screen. This continues a trend among threat actors towards using off-the-shelf video-editing and generative AI tools to produce convincing, persuasive content at scale that can build an audience on social-media services. We also observed one account in the YouTube network using an avatar created by Sogou, one of China’s largest technology companies (and a subsidiary of Tencent) (see page 24). We believe this to be the first observed instance of a Chinese company’s AI-generated digital human being used in an influence operation.

Third, unlike previous China-focused campaigns, this one has attracted a large number of views and subscribers. It has also been monetised, although only through limited means; for example, one channel accepted money from US and Canadian companies to support the production of its videos. The substantial number of views and subscribers suggests that the campaign is one of the most successful China-related influence operations ever witnessed on social media. Many China-linked influence operations, such as Dragonbridge (also known as ‘Spamouflage’ in the research community), have attracted initial engagement in some cases but have failed to sustain a meaningful audience on social media.5 However, further research by YouTube is needed to determine whether view counts and subscriber counts on YouTube demonstrate real viewership, artificial manipulation or a combination of both. We note that, in our examination of YouTube comments on videos in this campaign, we saw signs of a genuine audience. ASPI believes that this campaign is probably larger than the 30 channels covered in this report, but we constrained our initial examination to channels we saw as core to the campaign. We also believe there are more channels in this network publishing content in languages other than English; for example, we saw channels publishing in Bahasa Indonesia that aren’t included in this report.

That’s not to say that the effectiveness of influence operations should only be measured through engagement numbers. As ASPI has previously demonstrated, Chinese Communist Party (CCP) influence operations that troll, threaten and harass on social media seek to silence and cause psychological harm to those being targeted, rather than seeking engagement.6 Similarly, influence operations can be used to ‘poison the well’ by crowding out the content of genuine actors in online spaces, or to poison datasets used for AI products, such as large-language models (LLMs).7

This report also discusses another way that an influence operation can be effective: through its ability to spill over and gain traction in a wider system of misinformation. We found that at least one narrative from the Shadow Play network—that Iran had switched on its China-provided BeiDou satellite system—began to gain traction on X (formerly Twitter) and other social-media platforms within a few hours of its posting on YouTube. We discuss that case study on page 29.

This report offers an initial identification of the influence operation and some defining characteristics of a likely new influence actor. In addition to sections on attribution, methodology and analysis of this new campaign, this report concludes with a series of recommendations for government and social media companies, including:

  • the immediate investigation of this ongoing information operation, including operator intent and the scale and scope of YouTube channels involved
  • broader efforts by Five Eyes and allied partners to declassify open-source social-media-based influence operations and share information with like-minded nations and relevant NGOs
  • rules that require social-media users to disclose when generative AI is used in audio, video or image content
  • national intelligence collection priorities that support the effective amalgamation of information on Russia-, China- and Iran-linked information operations
  • publishing detailed threat indicators as appendixes in information operations research.

Full Report

For the full report, please download it here.


  1. Shadow play (or shadow puppetry) is a storytelling technique in which flat articulated cut-out figures are placed between a light source and a translucent screen. It’s practised across Southeast Asia, China, the Middle East, Europe and the US. See, for example, Inge C Orr, ‘Puppet theatre in Asia’, Asian Folklore Studies, 1974, 33(1):69–84, online. ↩︎
  2. A recent Pew Research Center poll indicates that technology is one of the few areas in which public opinion in high-income and middle-income countries sees China and the US as equally capable, which suggests that narratives on those lines are credible for international viewers. Laura Silver, Christine Huang, Laura Clancy, Nam Lam, Shannon Greenwood, John Carlo Mandapat, Chris Baronavski, Comparing views of the US and China in 24 countries, Pew Research Center, 6 November 2023, online. ↩︎
  3. ‘Telling China’s story well’, China Media Project, 16 April 2021, online; Marcel Schliebs, Hannah Bailey, Jonathan Bright, Philip N Howard, China’s public diplomacy operations: understanding engagement and inauthentic amplification of PRC diplomats on Facebook and Twitter, Oxford Internet Institute, 11 May 2021, https://demtech.oii.ox.ac.uk/research/posts/chinas-public-diplomacy-operations-understanding-engagement-and-inauthentic-amplification-of-chinese-diplomats-on-facebook-and-twitter/#continue. ASPI’s work on foreign influencers’ role in telling China’s story well includes Fergus Ryan, Matt Knight, Daria Impiombato, Singing from the CCP’s songsheet, ASPI, Canberra, 24 November 2023, https://www.aspi.org.au/report/singing-ccps-songsheet; Fergus Ryan, Ariel Bogle, Nathan Ruser, Albert Zhang, Daria Impiombato, Borrowing mouths to speak on Xinjiang, ASPI, Canberra, 10 December 2021, https://www.aspi.org.au/report/borrowing-mouths-speak-xinjiang; Fergus Ryan, Daria Impiombato, Hsi-Ting Pai, Frontier influencers, ASPI, Canberra, 20 October 2022, https://www.aspi.org.au/report/frontier-influencers/. ↩︎
  4. Reports on China-linked information operations that have targeted semiconductors and rare earths include Albert Zhang, ‘The CCP’s information campaign targeting rare earths and Australian company Lynas’, The Strategist, 29 June 2022, online; ‘Pro-PRC DRAGONBRIDGE influence campaign targets rare earths mining companies in attempt to thwart rivalry to PRC market dominance’, Mandiant, 28 June 2022, https://www.mandiant.com/resources/blog/dragonbridge-targets-rare-earths-mining-companies; Shane Huntley, ‘TAG Bulletin: Q3 2022’, Google Threat Analysis Group, 26 October 2022, https://blog.google/threat-analysis-group/tag-bulletin-q3-2022/. ↩︎
  5. Ben Nimmo, Ira Hubert, Yang Cheng, ‘Spamouflage breakout’, Graphika, 4 February 2021, online. ↩︎
  6. Danielle Cave, Albert Zhang, ‘Musk’s Twitter takeover comes as the CCP steps up its targeting of smart Asian women’, The Strategist, 6 November 2022, online; Donie O’Sullivan, Curt Devine, Allison Gordon, ‘China is using the world’s largest known online disinformation operation to harass Americans, a CNN review finds’, CNN, 13 November 2023, https://edition.cnn.com/2023/11/13/us/china-online-disinformation-invs/index.html. ↩︎
  7. Rachael Falk, Anne-Louise Brown, ‘Poison the well: AI, data integrity and emerging cyber threats’, Cyber Security Cooperative Research Centre, 30 October 2023, online. ↩︎

Surveillance, privacy and agency

Executive summary

ASPI and a non-government research partner1 conducted a year-long project designed to share detailed and accurate information on state surveillance in the People’s Republic of China (PRC) and engage residents of the PRC on the issue of surveillance technology. A wide range of topics was covered, including how the party-state communicates on issues related to surveillance, as well as people’s views on state surveillance, data privacy, facial recognition, DNA collection and data-management technologies.

The project’s goals were to:

  • improve our understanding of state surveillance in China and how it’s communicated by the Chinese party-state
  • develop a nuanced understanding of PRC residents’ perceptions of surveillance technology and personal privacy, the concerns some have in regard to surveillance, and how those perceptions relate to trust in government
  • explore the reach and potential of an interactive digital platform as an alternative educational and awareness-raising tool.

This unique project combined extensive preliminary research—including media analysis and an online survey of PRC residents—with data collected from an interactive online research platform deployed in mainland China. Media analysis drew on PRC state media to understand the ways in which the party-state communicates on issues of surveillance. The online survey collected opinions from 4,038 people living in mainland China, including about their trust in government and views on surveillance technologies. The interactive research platform offered PRC residents information on the types and capabilities of different surveillance technologies in use in five municipalities and regions in China. Presenting an analysis of more than 1,700 PRC Government procurement documents, it encouraged participants to engage with, critically evaluate and share their views on that information. The research platform engaged more than 55,000 PRC residents.

Data collection was led and conducted by the non-government research partner, and the data was then provided to ASPI for a joint analysis. The project details, including methodology, can be found on page 6.

Key findings

The results of this research project indicate the following:

  • Project participants’ views on surveillance and trust in the government vary markedly.
    • Segmentation analysis of survey responses suggests that respondents fall into seven distinct groups, which we have categorised as dissenters, disaffected, critics, possible sceptics, stability seekers, pragmatists and endorsers (the segmentation analysis is on page 12; an illustrative sketch of such a segmentation follows this list).
  • In general, PRC state narratives about government surveillance and technology implementation appear to be at least partly effective.
    • Our analysis of PRC state media identified four main narratives to support the use of government surveillance:
      1. Surveillance helps to fight crime.
      2. The PRC’s surveillance systems are some of the best in the world.
      3. Surveillance is commonplace internationally.
      4. Surveillance is a ‘double-edged sword’, and people should be concerned for their personal privacy when surveillance is handled by private companies.
    • Public opinion often aligns with state messaging that ties surveillance technologies to personal safety and security. For example, when presented with information about the number of surveillance cameras in their community today, most Research Platform participants said they would prefer the same number of cameras (39%) or more cameras (38.4%).
    • PRC state narratives make a clear distinction between private and government surveillance, which suggests party-state efforts to ‘manage’ privacy concerns within acceptable political parameters.
  • Project participants value privacy but hold mixed views on surveillance.
    • Participants expressed a preference for consent and active engagement on the issue of surveillance. For example, over 65% agreed that DNA samples should be collected from the general population only on a voluntary basis.
    • Participants are generally comfortable with the widespread use of certain types of surveillance, such as surveillance cameras; they’re less comfortable with other forms of surveillance, such as DNA collection.
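The report doesn’t disclose the statistical method behind this segmentation. Purely as an illustrative sketch, numerically coded survey answers could be grouped into seven attitudinal segments with an off-the-shelf clustering algorithm such as k-means; the respondent count matches the survey described above, but every feature below is a hypothetical stand-in, not the project’s actual data or method.

```python
# Illustrative only: the project's segmentation method is not disclosed.
# This sketch clusters synthetic 5-point-scale survey answers into seven
# groups, mirroring the seven segments named above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical features: one row per respondent, one column per coded answer
# (e.g. trust in government, comfort with cameras, views on DNA collection).
responses = rng.integers(1, 6, size=(4038, 12)).astype(float)

model = KMeans(n_clusters=7, n_init=10, random_state=0).fit(responses)
for segment in range(7):
    print(f"segment {segment}: {(model.labels_ == segment).sum()} respondents")
```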
  1. ASPI supported this project with an undisclosed research partner. That institution remains undisclosed to preserve its access to specific research techniques and data and to protect its staff. ↩︎

Gaming Public Opinion

The CCP’s increasingly sophisticated cyber-enabled influence operations

What’s the problem?

The Chinese Communist Party’s (CCP’s) embrace of large-scale online influence operations and spreading of disinformation on Western social-media platforms has escalated since the first major attribution from Silicon Valley companies in 2019. While Chinese public diplomacy may have shifted to a softer tone in 2023 after many years of wolf-warrior online rhetoric, the Chinese Government continues to conduct global covert cyber-enabled influence operations. Those operations are now more frequent, increasingly sophisticated and increasingly effective in supporting the CCP’s strategic goals. They focus on disrupting the domestic, foreign, security and defence policies of foreign countries, and most of all they target democracies.

In targeted democracies, most political leaders, policymakers, businesses, civil society groups and publics currently have little understanding of how the CCP engages in clandestine activities online in their countries, even though this activity is escalating and evolving quickly. The stakes are high for democracies, given the indispensability of the internet and their reliance on open online spaces, free from interference. Despite years of monitoring of covert CCP cyber-enabled influence operations by social-media platforms, governments and research institutes such as ASPI, definitive public attribution of the actors driving these activities is rare. Covert online operations, by design, are difficult to detect and attribute to state actors.

Social-media platforms and governments struggle to devote adequate resources to identifying, preventing and deterring increasing levels of malicious activity, and sometimes they don’t want to name and shame the Chinese Government for political, economic and/or commercial reasons. 

But when possible, public attribution can play a larger role in deterring malicious actors. Understanding which Chinese Government entities are conducting such operations, and their underlying doctrine, is essential to constructing adequate counter-interference and deterrence strategies. The value of public attribution also goes beyond deterrence. For example, public attribution helps civil society and businesses, which are often the intended targets of online influence operations, to understand the threat landscape and build resilience against malicious activities. It’s also important that general publics are given basic information so that they’re informed about the contemporary security challenges a country is facing, and public attribution helps to provide that information.

ASPI research in this report—which included specialised data collection spanning Twitter, Facebook, Reddit, Sina Weibo and ByteDance products—reveals a previously unreported CCP cyber-enabled influence operation linked to the Spamouflage network, which is using inauthentic accounts to spread claims that the US is irresponsibly conducting cyber-espionage operations against China and other countries. As a part of this research, we geolocated some of the operators of that network to Yancheng in Jiangsu Province, and we show it’s possible that at least some of the operators behind Spamouflage are part of the Yancheng Public Security Bureau.

The CCP’s clandestine efforts to influence international public opinion rely on a very different toolkit today compared to its previous tactics of just a few years ago. CCP cyber-enabled influence operations remain part of a broader strategy to shape global public opinion and enhance China’s ‘international discourse power’. Those efforts have evolved to nudge public opinion towards positions more favourable to the CCP and to interfere in the political decision-making processes of other countries. A greater focus on covert social-media accounts allows the CCP to pursue its interests while providing a plausibly deniable cover. 

Emerging technologies and China’s indigenous cybersecurity industry are also creating new capabilities for the CCP to continue operating clandestinely on Western social platforms.

Left unaddressed, the CCP’s increasing investment in cyber-enabled influence operations threatens to successfully influence the economic decision-making of political elites, destabilise social cohesion during times of crisis, sow distrust of leaders or democratic institutions and processes, fracture alliances and partnerships, and deter journalists, researchers and activists from sharing accurate information about China.

What’s the solution?

This report provides the first public empirical review of the CCP’s clandestine online networks on social-media platforms.

We outline seven key policy recommendations for governments and social-media platforms (further details are on page 39):

  1. Social-media platforms should take advantage of the digital infrastructure that they control to more effectively deter cyber-enabled influence operations. For example, to disrupt future influence operations, platforms could remove access to account analytics for suspicious accounts breaching platform policies, making it difficult for identified malicious actors to measure the effectiveness of their influence operations.
  2. Social-media platforms should pursue more innovative information-sharing to combat cyber-enabled influence operations. For example, social-media platforms could share more information about the digital infrastructure involved in influence operations, without revealing personally identifiable information.
  3. Governments should change their language in speeches and policy documents to describe social-media platforms as critical infrastructure. This would acknowledge the existing importance of those platforms in democracies and would communicate signals to malicious actors that, like cyber operations on the power grid, efforts to interfere in the information ecosystem will be met with proportionate responses.
  4. Governments should review foreign interference legislation and consider mandating that social-media platforms disclose state-backed influence operations and other transparency reporting to increase the public’s threat awareness.
  5. Public diplomacy should be a pillar of any counter-malign-influence strategy. Government leaders and diplomats should name and shame attributable malign cyber-enabled influence operations, and the entities involved in operating them (state and non-state), to deter those activities.
  6. Partners and allies should strengthen intelligence diplomacy on this emerging security challenge and seek to share more intelligence with one another on such influence operations. Strong open-source intelligence skills and collection capabilities are a crucial part of investigating and attributing these operations; their low classification should make intelligence sharing easier.
  7. Governments should support further research on influence operations and other hybrid threats. To build broader situational awareness of hybrid threats across the region, including malign influence operations, democracies should establish an Indo-Pacific hybrid threats centre.

Key findings

The CCP has developed a sophisticated, persistent capability to sustain coordinated networks of personas on social-media platforms to spread disinformation, wage public-opinion warfare and support its own diplomatic messaging, economic coercion and other levers of state power.

That capability is evolving and has expanded to push a wider range of narratives to a growing international audience, with the Indo-Pacific a key target.

The CCP has used these cyber-enabled influence operations to seek to interfere in US politics, Australian politics and national security decisions, undermine the Quad and Japanese defence policies and impose costs on Australian and North American rare-earth mining companies.

  • CCP cyber-enabled influence operations are probably conducted, in parallel if not collectively, by multiple Chinese party-state agencies. Those agencies appear at times to collaborate with private Chinese companies. The most notable actors that are likely to be conducting such operations include the People’s Liberation Army’s Strategic Support Force (PLASSF), which conducts cyber operations as part of the PLA’s political warfare; the Ministry of State Security (MSS), which conducts covert operations for state security; the Central Propaganda Department, which oversees China’s domestic and foreign propaganda efforts; the Ministry of Public Security (MPS), which enforces China’s internet laws; and the Cyberspace Administration of China (CAC), which regulates China’s internet ecosystem. Chinese state media outlets and Ministry of Foreign Affairs (MFA) officials are also running clandestine operations that seek to amplify their own overt propaganda and influence activities.
  • Starting in 2021, a previously unreported CCP cyber-enabled influence operation has been disseminating narratives that the CIA and National Security Agency are ‘irresponsibly conducting cyber-espionage operations against China and other countries’. ASPI isn’t in a position to verify US intelligence agency activities. However, the means used to disseminate the counter-US narrative—this campaign appears to be partly driven by the pro-CCP coordinated inauthentic network known as Spamouflage—strongly suggest an influence operation. ASPI’s research suggests that at least some operators behind the campaign are affiliated with the MPS, or are ‘internet commentators’ hired by the CAC, which may have named this campaign ‘Operation Honey Badger’. The evidence indicates that the Chinese Government probably intended to influence Southeast Asian markets and other countries involved in the Belt and Road Initiative to support the expansion of Chinese cybersecurity companies in those regions.
  • The Chinese cybersecurity company Qi An Xin (奇安信) appears at times to be supporting the influence operation. The company has the capacity to seed disinformation about advanced persistent threats to its clients in Southeast Asia and other countries. It’s deeply connected with Chinese intelligence, military and security services and plays an important role in China’s cybersecurity and state security strategies.

Seeking to undermine democracy and partnerships

How the CCP is influencing the Pacific islands information environment

What’s the problem?

The Chinese Communist Party (CCP) is conducting coordinated information operations in Pacific island countries (PICs). Those operations are designed to influence political elites, public discourse and political sentiment regarding existing partnerships with Western democracies. Our research shows how the CCP frequently seeks to capitalise on regional events, announcements and engagements to push its own narratives, many of which are aimed at undermining some of the region’s key partnerships.

This report examines three significant events and developments:

  • the establishment of AUKUS in 2021
  • the CCP’s recent efforts to sign a region-wide security agreement
  • the 2022 Pacific Islands Forum held in Fiji.

This research, including these three case studies, shows how the CCP uses tailored, reactive messaging in response to regional events and analyses the effectiveness of that messaging in shifting public discourse online.

This report also highlights a series of information channels used by the CCP to push narratives in support of the party’s regional objectives in the Pacific. Those information channels include Chinese state media, CCP publications and statements in local media, and publications by local journalists connected to CCP-linked groups.1

There’s growing recognition of the information operations, misinformation and disinformation being spread globally under the CCP’s directives. Although the CCP’s information operations have had little demonstrated effectiveness in shifting online public sentiment in the case studies examined in this report, they’ve previously proven effective in influencing public discourse and political elites in the Pacific.2 Analysing the long-term impact of these operations, so that informed policy decisions can be made by governments and by social media platforms, requires greater measurement and understanding of current operations and local sentiment.

What’s the solution?

The CCP’s presence in the information environment is expanding across the Pacific through online and social media platforms, local and China-based training opportunities, and greater television and short-wave radio programming.3 However, the impact of this growing footprint in the information environment remains largely unexplored and unaddressed by policymakers in the Pacific and in the partner countries that are frequently targeted by the CCP’s information operations.

Pacific partners, including Australia, the US, New Zealand, Japan, the UK and the European Union, need to enhance partnerships with Pacific island media outlets and online news forum managers in order to build a stronger, more resilient media industry that will be less vulnerable to disinformation and pressures exerted by the CCP. This includes further assistance in hiring, training and retaining high-quality professional journalists and media executives and providing financial support without conditions to uphold media freedom in the Pacific. Training should be offered to support online discussion forum managers sharing news content to counter the spread of disinformation and misinformation in public online groups. The data analysis in this report highlights a need for policymakers and platforms to invest more resources in countering CCP information operations in Melanesia, which is shown to have greater susceptibility to those operations.

As part of their targeted training package, Pacific island media and security institutions, such as the Pacific Fusion Centre, should receive further training on identifying disinformation and coordinated information operations to help build media resiliency. For that training to be effective, governments should fund additional research into the actors and activities affecting the Pacific islands information environment, including climate-change and election disinformation and misinformation, and foreign influence activities.

Information sharing among PICs’ media institutions would build greater regional understanding of CCP influence in the information environment and other online harms and malign activity. ASPI has also previously proposed that an Indo-Pacific hybrid threats centre would help regional governments, businesses and civil society better understand and counter those threats.4

Pacific partners, particularly Australia and the US, need to be more effective and transparent in communicating how aid delivered to the region is benefiting PICs and building people-to-people links. Locally based diplomats need to work more closely with Pacific media to contextualise information from press releases and statements and give PIC audiences a better understanding of the benefits delivered by Western governments’ assistance. This includes greater transparency on the provision of aid in the region. Doing so will debunk some of the CCP’s narratives regarding Western support and legitimacy in the region.

  1. A number of local journalists and media contributors have connections to CCP-linked entities, such as Pacific friendship associations. The connections between friendship associations and CCP influence are described in Anne-Marie Brady, ‘Australia and its partners must bring the Pacific into the fold on Chinese interference’, The Strategist, 21 April 2022. ↩︎
  2. Blake Johnson, Miah Hammond-Errey, Daria Impiombato, Albert Zhang, Joshua Dunne, Suppressing the truth and spreading lies: how the CCP is influencing Solomon Islands’ information environment, ASPI, Canberra. ↩︎
  3. Richard Herr, Chinese influence in the Pacific islands: the yin and yang of soft power, ASPI, Canberra, 30 April 2019, online; Denghua Zhang, Amanda Watson, ‘China’s media strategy in the Pacific’, In Brief 2020/29, Department of Pacific Affairs, Australian National University, 26 March 2021, online; Dorothy Wickham, ‘The lesson from my trip to China? Solomon Islands not ready to deal with the giant’, The Guardian, 23 December 2019. ↩︎
  4. Lesley Seebeck, Emily Williams, Jacob Wallis, Countering the Hydra: a proposal for an Indo-Pacific hybrid threat centre, ASPI, Canberra, 7 June 2022. ↩︎

Frontier influencers: the new face of China’s propaganda

Executive summary

This report explores how the Chinese party-state’s globally focused propaganda and disinformation capabilities are evolving and increasing in sophistication. Concerningly, this emerging approach by the Chinese party-state to influencing international discourse on China, including obfuscating its record of human rights violations, is largely flying under the radar of US social media platforms and Western policymakers.

In the broader context of attempts by the Chinese Communist Party (CCP) to censor speech, promote disinformation and seed the internet with its preferred narratives, we focus on a small but increasingly popular set of YouTube accounts that feature mainly female China-based ethnic-minority influencers from the troubled frontier regions of Xinjiang, Tibet and Inner Mongolia, hereafter referred to as ‘frontier influencers’ or ‘frontier accounts’.

Despite being blocked in China, YouTube is seen by the CCP as a key battlefield in its ideological contestation with the outside world, and YouTube’s use in foreign-facing propaganda efforts has intensified in recent years. Originally deployed on domestic video-sharing platforms to meet an internal propaganda need, frontier-influencer content has since been redirected towards global audiences on YouTube as part of the CCP’s evolving efforts to counter criticisms of China’s human rights problems and burnish the country’s image.

Alongside party-state media and foreign vloggers, these carefully vetted domestic vloggers are increasingly seen as another key part of Beijing’s external propaganda arsenal. Their use of a more personal style of communication and softer presentation is expected to be more convincing than traditional party-state media content, which is often inclined towards the more rigid and didactic. For the CCP, frontier influencers represent, in the words of one Chinese propaganda expert, ‘guerrillas or militia’ fighting on the flanks in ‘the international arena of public opinion’, while party-state media or the ‘regular army’ ‘charge, kill and advance on the frontlines’.

The frontier accounts we examine in this report were predominantly created in 2020–21 and feature content that closely hews to CCP narratives, but their less polished presentation has a more authentic feel that conveys a false sense of legitimacy and transparency about China’s frontier regions that party-state media struggle to achieve. For viewers, the video content appears to be the creation of the individual influencers, but is in fact what’s referred to in China as ‘professional user generated content’, or content that’s produced with the help of special influencer-management agencies known as multi-channel networks (MCNs).

For the mostly young and female Uyghur, Tibetan and other ethnic-minority influencers we examine in this report, having such an active presence on a Western social media platform is highly unusual, and ordinarily would be fraught with danger. But, as we reveal, frontier influencers are carefully vetted and considered politically reliable. The content they create is tightly circumscribed via self-censorship and oversight from their MCNs and domestic video platforms before being published on YouTube. In one key case study, we show how frontier influencers’ content was directly commissioned by the Chinese party-state.

Because YouTube is blocked in China, individual influencers based in the country aren’t able to receive advertising revenue through the platform’s Partner Program, which isn’t available there. But, through their arrangements with YouTube, MCNs have been able to monetise content for frontier influencers, as well as for hundreds of other China-based influencers on the platform. Given that many of the MCNs have publicly committed to promote CCP propaganda, this arrangement results in a troubling situation in which MCNs are able to monetise their activities, including the promotion of disinformation, via their access to YouTube’s platform.

The use of professionally supported frontier influencers also appears to be aimed at ensuring that state-backed content ranks well in search results because search-engine algorithms tend to prioritise fresh content and channels that post regularly. From the CCP’s perspective, the continuous flooding of content by party-state media, foreign influencers and professionally supported frontier influencers onto YouTube is aimed at outperforming other more critical but stale content.

This new phenomenon reflects a continued willingness, identified in previous ASPI ICPC reports,1 by the Chinese party-state to experiment in its approach to shaping online political discourse, particularly on those topics that have the potential to disrupt its strategic objectives. By targeting online audiences on YouTube through intermediary accounts managed by MCNs, the CCP can hide its affiliation with those influencers and create the appearance of ‘independent’ and ‘authoritative’ voices supporting its narratives, including disinformation that it’s seeking to propagate globally.

This report (on page 42) makes a series of policy recommendations, including that social media platforms shouldn’t allow MCNs who are conducting propaganda and disinformation work on behalf of the Chinese party-state to monetise their activities or be recognised by the platforms as, for example, official partners or award winners. This report also recommends that social media platforms broaden their practice of labelling the accounts of state media, agencies and officials to include state-linked influencers from the People’s Republic of China.

  1. Fergus Ryan, Ariel Bogle, Nathan Ruser, Albert Zhang, Daria Impiombato, Borrowing mouths to speak on Xinjiang, ASPI, Canberra, 7 December 2021; Fergus Ryan, Ariel Bogle, Albert Zhang, Jacob Wallis, #StopXinjiang Rumors: the CCP’s decentralised disinformation campaign, ASPI, Canberra, 2 December 2021, https://www.aspi.org.au/report/stop-xinjiang-rumors. ↩︎

Suppressing the truth and spreading lies

How the CCP is influencing Solomon Islands’ information environment

What’s the problem?

The Chinese Communist Party (CCP) is attempting to influence public discourse in Solomon Islands through coordinated information operations that seek to spread false narratives and suppress information on a range of topics. Following the November 2021 Honiara riots and the March 2022 leaking of the China – Solomon Islands security agreement, the CCP has used its propaganda and disinformation capabilities to push false narratives in an effort to shape the Solomon Islands public’s perception of security issues and foreign partners. In alignment with the CCP’s regional security objectives, those messages have a strong focus on undermining Solomon Islands’ existing partnerships with Australia and the US.

Although some of the CCP’s messaging occurs through routine diplomatic engagement, there’s a coordinated effort to influence the population across a broad spectrum of information channels. That spectrum includes Chinese party-state media, CCP official-led statements and publications in local and social media, and the amplification of particular individual and pro-CCP content via targeted Facebook groups.

There’s now growing evidence to suggest that CCP officials are also seeking to suppress information that doesn’t align with the party-state’s narratives across the Pacific islands through the coercion of local journalists and media institutions.

What’s the solution?

The Australian Government should coordinate with other foreign partners of Solomon Islands, including the US, New Zealand, Japan and the EU, to further assist local Pacific media outlets in hiring, training and retaining high-quality professional journalists. A stronger, more resilient media industry in Solomon Islands will be less vulnerable to disinformation and the pressures exerted by local CCP officials.

Social media companies need to provide, in national Pacific languages, contextual information on misinformation and label state affiliations on messages from state-controlled entities. Social media companies could encourage civil society to report state affiliations and provide evidence to help companies enforce their policies.

Further government funding should be used to support public research into actors and activities affecting the Pacific islands’ information environment, including foreign influence, the proliferation of disinformation on topics such as climate change, and election misinformation. That research should be used to assist in building media resiliency in Pacific island countries by providing information and targeted training to media professionals to assist in identifying disinformation and aspects of coordinated information operations. Sharing that information with civil-society groups and institutions across the region, such as the Pacific Fusion Centre, can also help to improve regional media literacy and understanding of information operations as a cybersecurity issue.

Pacific island countries will need support as great-power competition intensifies in the region. The US, for example, can do more to demonstrate that the CCP’s narratives are false, such as proving Washington’s genuine interest in supporting the region by answering the call of the local Solomon Islands population to do more to clean up remaining unexploded World War II ordnance on Guadalcanal. ASPI has also previously proposed that an Indo-Pacific hybrid threats centre would help regional governments, business and civil society to understand the threat landscape, develop resilience against online harms and counter malign activity.1 It would contribute to regional stability by promoting confidence-building measures, including information-sharing and capacity-building mechanisms.

Introduction

This report explores how the CCP is using a range of influence channels to shape, promote and suppress messages in the Solomon Islands information environment. Through an examination of CCP online influence in the aftermath of the Honiara riots in late 2021 and in response to the leaked security agreement in March 2022, this report demonstrates a previously undocumented level of coordination across a range of state activities. As part of a wider shift in ASPI’s research on foreign interference and disinformation, this report also seeks to measure the impact of those efforts in shaping public sentiment and opinion, and we welcome feedback on those methods. The data collected in this project doesn’t provide an exhaustive record of all CCP influence tactics and channels in Solomon Islands but provides a snapshot of activity in relation to the two key case studies.

In this paper, we use the term ‘China’ to refer to the People’s Republic of China (PRC) as an international actor, ‘Chinese Government’ or ‘Chinese state’ to refer to the bureaucracy of the government of the PRC, and ‘Chinese Communist Party’ or ‘party-state’ to refer to the regime that monopolises state power in the PRC.

Methodology

Data collection for this case study covered two discrete periods. The first collection period ran for 12 weeks from the beginning of the riots on 24 November 2021 (referred to in tables and charts as the Honiara riots case study), and the second ran for six weeks from the leaking of the China – Solomon Islands security agreement on 24 March 2022 (referred to as the security agreement case study).2 The analytical methods used included quantitative analysis of publicly available data from a range of sources, including articles from Solomon Islands media outlets, articles from party-state media and Facebook posts in public groups and on local media pages based in Solomon Islands. For the purpose of the analysis, any article with more than 80% of its content derived from local or foreign government official sources (direct quotes or statements from diplomatic officials, ministers or embassies, for example) was categorised as an ‘official-led’ article. Examples of such content included editorials, media releases and articles that relied prominently on direct quotes. This data was collected systematically for quantitative and qualitative analysis and was strengthened by deeper investigation into some public Facebook groups and activity. This approach drew upon a previously published framework, titled ‘information influence and interference’, used to understand strategy-driven, state-sponsored information activities.3
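The 80% rule above is a simple proportion test over the article text. Below is a minimal sketch of how that categorisation could be applied in code; the report doesn’t describe an implementation, so the isolation of official-source spans is assumed to happen elsewhere (here the spans are passed in, as if produced by manual coding or quote extraction).

```python
# Minimal sketch of the 'official-led' categorisation rule described above:
# an article is 'official-led' if more than 80% of its content is drawn from
# official sources (direct quotes or statements). How official content was
# isolated in practice is not specified; here the spans are supplied directly.

def official_fraction(article_text: str, official_spans: list[str]) -> float:
    """Share of the article's characters that come from official-source spans."""
    official_chars = sum(len(span) for span in official_spans)
    return official_chars / max(len(article_text), 1)

def categorise(article_text: str, official_spans: list[str]) -> str:
    return "official-led" if official_fraction(article_text, official_spans) > 0.8 else "other"

# Usage with a made-up article: the quoted span falls below the 80% threshold.
article = "The ambassador said: 'Cooperation benefits both nations.' Local reaction was mixed."
spans = ["The ambassador said: 'Cooperation benefits both nations.'"]
print(categorise(article, spans))  # -> other
```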

We conducted a simple categorical sentiment analysis of social media posts as a measure of the effectiveness of CCP influence efforts. We analysed comments on Facebook posts published by three leading media outlets in Solomon Islands (The Solomon Star, The Island Sun and the Solomon Times) for the two events investigated in this research report. We also analysed comments on posts by the Chinese Embassy in Solomon Islands’ Facebook page, as well as posts in public Pacific island Facebook pages and groups that shared links to party-state media. Relevant comments were categorised as being positive (pro) or negative (anti) towards a particular country or group, such as ‘the West’, which had to be explicitly stated in the comment. Comments that referred to more than one grouping (China, the West or the Solomon Islands Government) were categorised for analytical purposes based on the dominant subject of the comment. Our initial data collection also sought to analyse information relating to New Zealand, the UK and Japan, but that was prevented by the lack of reporting and online discussion focused on those countries (in this data-collection period, only one article each about New Zealand and Japan was identified).
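As a bookkeeping illustration of the dominant-subject rule just described, the sketch below files a manually coded comment that names several groupings under the grouping it mentions most. Using mention counts as the measure of ‘dominant subject’ is an assumption; the coding itself was done by analysts, not software.

```python
# Bookkeeping sketch for the categorical sentiment coding described above.
# Each coded comment carries a stance (pro/anti) towards explicitly named
# groupings; a comment naming several groupings is filed under the dominant
# subject, approximated here (as an assumption) by mention counts.
from collections import Counter

GROUPINGS = ("China", "the West", "Solomon Islands Government")

def dominant_subject(mention_counts: dict[str, int]) -> str:
    """Pick the grouping the comment refers to most often."""
    return max(mention_counts, key=mention_counts.get)

# Example: one coded comment that mentions two groupings.
comment = {"mentions": {"China": 3, "the West": 1}, "stance": "anti"}
subject = dominant_subject(comment["mentions"])
assert subject in GROUPINGS

tally = Counter()
tally[(subject, comment["stance"])] += 1
print(tally)  # Counter({('China', 'anti'): 1})
```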

  1. Lesley Seebeck, Emily Williams, Jacob Wallis, Countering the Hydra: a proposal for an Indo-Pacific hybrid threat centre, ASPI, Canberra, 7 June 2022. ↩︎
  2. Anna Powles, ‘Photos of draft security agreement between Solomon Islands and China’, Twitter, 24 March 2022. ↩︎
  3. Miah Hammond-Errey, ‘Understanding and assessing information influence and foreign interference’, Journal of Information Warfare, Winter 2019, 18:1–22. ↩︎

Countering the Hydra: A proposal for an Indo-Pacific hybrid threat centre

What’s the problem?

Enabled by digital technologies and fuelled by geopolitical competition, hybrid threats in the Indo-Pacific are increasing in breadth, application and intensity. Hybrid threats are a mix of military, non-military, covert and overt activities by state and non-state actors that occur below the line of conventional warfare. The consequences for individual nations include weakened institutions, disrupted social systems and economies, and greater vulnerability to coercion—especially from revisionist powers such as China.

But the consequences of increased hybrid activity in the Indo-Pacific reach well beyond individual nations. The Indo-Pacific hosts a wide variety of political systems and interests, with multiple centres of influence, multiple points of tension and an increasingly belligerent authoritarian power. It lacks the regional institutions and practised behaviours to help ensure ongoing security and stability. And, because of its position as a critical centre of global economic and social dynamism, instability in the Indo-Pacific, whether through or triggered by hybrid threats, has global ramifications.

Because hybrid threats fall outside the conventional frameworks of the application of state power and use non-traditional tools to achieve their effects, governments have often struggled to identify the activity, articulate the threat and formulate responses. Timeliness and specificity are problematic: hybrid threats evolve, are often embedded or hidden within normal business and operations, and may leverage or amplify other, more traditional forms of coercion.

More often than not, hybrid threat activity is targeted towards the erosion of national capability and trust and the disruption of decision-making by governments—all of which reduce national and regional resilience that would improve security and stability in the region.

What’s the solution?

There’s no silver-bullet solution to hybrid threats; nor are governments readily able to draw on traditional means of managing national defence or regional security against such threats in the Indo-Pacific.

Because of the ubiquity of digital technologies and the ever-broadening application of tools and practices across an increasing number of domains, policymakers need better and more timely information, opportunities to share information and insights in a trusted forum, and models of how hybrid threats work (we provide one here). The exchange of information and good practice is also needed to help counter the amorphous, evolving and adaptive nature of hybrid threats.

We propose the establishment of an Indo-Pacific Hybrid Threat Centre (HTC, or the centre) as a means of building broader situational awareness on hybrid threats across the region.1 Through research and analysis, engagement, information sharing and capacity building, such a centre would function as a confidence-building measure and contribute to regional stability and the security of individual nations.

While modelled on the existing NATO–EU Hybrid Centre of Excellence (CoE) in Finland, the centre would need to reflect the differences between the European and Indo-Pacific security environments. Most notably, that includes the lack of pan-regional Indo-Pacific security institutions and practice that the centre could use. There are also differences in the nature and priorities assigned to threats by different countries: the maritime domain has more influence in the Indo-Pacific than in Europe, many countries in the region face ongoing insurgencies, and there’s much less adherence to, or even interest in, democratic norms and values.

That will inevitably shape the placement, funding and operations of an Indo-Pacific HTC. A decentralised model facilitating outreach across the region would assist regional buy-in. Partnership arrangements with technology companies would provide technical insight and support. Long-term commitments will be needed to realise the benefits of the centre as a confidence-building measure. The Quad countries are well positioned to provide such long-term commitments, while additional support could come from countries with experience and expertise in hybrid threats, particularly EU countries and the UK.

As with the NATO–EU Hybrid CoE, independence and integrity are paramount. That implies the positioning of the Indo-Pacific HTC core in a strong democracy; better still would be the legislative protection of its operations and data. Accordingly, we propose scoping work to establish policy approval, legislative protection and funding arrangements and to seed initial research capability and networks.

Introduction

Hybrid threats are a mix of military and non-military, covert and overt activities by state and non-state actors that occur below the line of conventional warfare. Their purpose is to blur the lines between war and peace, destabilise societies and governments and sow doubt and confusion among populations and decision-makers. They deliberately target democratic systems and state vulnerabilities, often leveraging legitimate processes for inimical ends, and typically aim to stay below the threshold of detection, attribution and retaliation.2 They’re the same activities that the Australian Government attributes to the ‘grey zone’, involving ‘military and non-military forms of assertiveness and coercion aimed at achieving strategic goals without provoking conflict.’3

Hybrid threats are increasingly of concern to governments as they grapple with the effects of digital technologies, Covid-19 and an increasingly tense geopolitical environment. Ambiguous, evolving, at the intersection of society, commerce and security, and transnational in character, hybrid threats challenge and undercut ‘normal’ conceptions of security. Unmet, they stoke division and anxiety in societies and states. They threaten to erode national security, sovereignty and societal resilience, leaving nations and their people vulnerable to coercion, particularly by authoritarian states and criminal elements.

The immediate targets of motivated hybrid activity are typically non-traditional, in the sense that government security apparatuses aren’t expected to manage and repulse them. Hybrid activity takes advantage of other, easier targets and means of generating confusion and disruption at the nation-state level: individuals may be targeted for repression or assassination; fishing vessels harassed; intellectual property stolen; commercial advantage pillaged; researchers and journalists intimidated; ethnic communities hijacked; and elites co-opted for corrupt ends.

The Indo-Pacific region is particularly vulnerable. For example, it lacks the more practised security frameworks, cooperative mechanisms and understandings present in Europe. There’s little shared awareness and understanding of the nature and consequences of hybrid threats. The region is also especially economically and demographically dynamic and socially diverse, featuring a number of competing political systems and institutions.

That offers both challenge and opportunity. In this paper, we consider the nature of hybrid threats, explore the threat landscape in the Indo-Pacific, turn our attention to the potential ‘fit’ of an Indo-Pacific HTC and make recommendations for the way forward.

A number of the thoughts and insights incorporated in this paper emerged during ASPI’s consultations with governments, businesses and civil society groups in the Indo-Pacific, as well as in Europe and the UK. We thank those respondents for their time and insights.

  1. Danielle Cave, Jacob Wallis, ‘Why the Indo-Pacific needs its own hybrid threats centre’, The Strategist, 15 December 2021. ↩︎
  2. See NATO’s definition, online, and the Hybrid Centre of Excellence’s definition. ↩︎
  3. Defence Department, Defence Strategic Update, Australian Government, 2020, 5. ↩︎

Understanding Global Disinformation and Information Operations: Insights from ASPI’s new analytic website

ASPI’s International Cyber Policy Centre has launched the Understanding Global Disinformation and Information Operations website alongside this companion paper. The site provides a visual breakdown of the publicly available data from state-linked information operations on social media. ASPI’s Information Operations and Disinformation team has analysed each of the datasets in Twitter’s Information Operations archive to provide a longitudinal analysis of how each state’s willingness, capability and intent have evolved over time. Our analysis demonstrates that a growing number of state actors are willing to deploy information operations targeting their own domestic populations, as well as those of their adversaries. We find that Russia, Iran, Saudi Arabia, China and Venezuela are the most prolific perpetrators. By making these complex datasets available in an accessible form, ASPI is broadening meaningful engagement on the challenge of state-actor information operations and disinformation campaigns for policymakers, civil society and the international research community.
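The website presents this longitudinal view interactively. As a rough sketch of the underlying aggregation, the per-tweet records in each takedown dataset can be bucketed by attributed state and year; the file name and column names below are hypothetical stand-ins rather than the archive’s actual schema.

```python
# Rough sketch of the longitudinal aggregation behind the site's charts:
# tweet volume per attributed state actor per year, across takedown datasets.
# 'takedown_tweets.csv' and its column names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("takedown_tweets.csv")      # one row per archived tweet
df["year"] = pd.to_datetime(df["tweet_time"]).dt.year
activity = (
    df.groupby(["attributed_state", "year"]) # e.g. Russia, Iran, China...
      .size()
      .unstack(fill_value=0)                 # rows: actors; columns: years
)
print(activity)
```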

#StopXinjiang Rumors

The CCP’s decentralised disinformation campaign

Introduction

This report analyses two Chinese state-linked networks seeking to influence discourse about Xinjiang across platforms including Twitter and YouTube. This activity targeted the Chinese-speaking diaspora as well as international audiences, sharing content in a variety of languages.

Both networks attempted to shape international perceptions about Xinjiang, among other themes. Despite evidence to the contrary, the Chinese Communist Party (CCP) denies committing human rights abuses in the region and has mounted multifaceted and multiplatform information campaigns to deny accusations of forced labour, mass detention, surveillance, sterilisation, cultural erasure and alleged genocide in the region. Those efforts have included using Western social media platforms to both push back against and undermine media reports, research and Uyghurs’ testimony about Xinjiang, as well as to promote alternative narratives.

In the datasets we examined, inauthentic and potentially automated accounts shared a variety of image and video content aimed at rebutting the evidence of human rights violations against the Uyghur population. Likewise, fake Uyghur accounts and other shell accounts promoted video ‘testimonials’ from Uyghurs talking about their happy lives in China.

Our analysis includes two datasets removed by Twitter:

  • Dataset 1: ‘Xinjiang Online’ (CNHU) consisted of 2,046 accounts and 31,269 tweets.
  • Dataset 2: ‘Changyu Culture’ (CNCC) consisted of 112 accounts and 35,924 tweets.

The networks showed indications of being linked by theme and tactics; however, neither achieved significant organic engagement on Twitter overall—although there was notable interaction with the accounts of CCP diplomats. There were signs of old accounts being repurposed, whether purchased or stolen, and little attempt to craft authentic personas.

Twitter has attributed both datasets to the Chinese government; the latter dataset is specifically linked to a company called Changyu Culture, which is connected to the Xinjiang provincial government. That connection was uncovered by ASPI ICPC in the report Strange bedfellows on Xinjiang: the CCP, fringe media and US social media platforms.

Key takeaways

Different strands of CCP online and offline information operations now interweave to create an increasingly coordinated propaganda ecosystem made up of CCP officials, state and regional media assets, outsourced influence-for-hire operators, social media influencers and covert information operations.

  • The involvement of the CCP’s regional government in Xinjiang in international-facing disinformation suggests that internal party incentive structures are driving devolved strands of information operations activity.
  • The CCP deploys online disinformation campaigns to distract from international criticisms of its policies and to attempt to reframe concepts such as human rights. It aligns the timing of those campaigns to take advantage of moments of strategic opportunity in the information domain.

Notable features of these datasets include:

  • Flooding the zone: While the networks didn’t attract significant organic engagement, the sheer volume of material shared may have been intended to ‘bury’ critical content on platforms such as YouTube.
  • Multiple languages: There was use of English and other non-Chinese languages to target audiences in other countries, beyond the Chinese diaspora.
  • Promotion of ‘testimonials’ from Uyghurs: Both datasets, but particularly CNCC, shared video of Uyghurs discussing their ‘happy’ lives in Xinjiang and rebutting allegations of human rights abuses. Some of those videos have been linked to a production company connected to the Xinjiang provincial government.
  • Promotion of Western social media influencer content: The CNHU network retweeted and shared content from social media influencers that favoured CCP narratives on Xinjiang, including interviews between influencers and state media journalists.
  • Interaction between network accounts and the accounts of CCP officials: While the networks didn’t attract much organic engagement overall, there were some notable interactions with diplomats and state officials. For example, 48% of all retweets by the CNHU network were of CCP state media and diplomatic accounts.
  • Cross-platform activity: Both networks shared video from YouTube and Douyin (the Chinese mainland version of TikTok), including tourism content about Xinjiang, as well as links to state media articles.
  • Self-referential content creation: The networks promoted state media articles, tweets and other content featuring material created as part of influence operations, including Uyghur ‘testimonial’ videos. Similarly, tweets and content featuring foreign journalists and officials discussing Xinjiang were promoted as ‘organic’, but in some cases were likely to have been created as part of curated state-backed tours of the region.
  • Repurposed spam accounts: Accounts in the CNCC dataset tweeted about Korean television dramas as well as sharing spam and porn material before tweeting Xinjiang content.
  • Potential use of automation: Accounts in both datasets showed signs of automation, including coordinated posting activity, the use of four-letter codes (in the CNHU dataset) and misused hashtag symbols (in the CNCC dataset).
  • Persistent account building: ASPI ICPC independently identified additional accounts on Twitter and YouTube that exhibited similar behaviours to those in the two datasets, suggesting that accounts continue to be built across platforms as others are suspended.

The Chinese party-state and influence campaigns

The Chinese party-state continues to experiment with approaches to shape online political discourse, particularly on those topics that have the potential to disrupt its strategic objectives. International criticism of systematic abuses of human rights in the Xinjiang region is a topic about which the CCP is acutely sensitive.

In the first half of 2020, ASPI ICPC analysis of large-scale information operations linked to the Chinese state found a shift of focus towards US domestic issues, including the Black Lives Matter movement and the death of George Floyd (predominantly targeting Chinese-language audiences). This was the first marker of a shift in tactics since Twitter’s initial attribution of on-platform information operations to the Chinese state in 2019. The party-state’s online information operations were moving beyond predominantly internal concerns to assert a moral equivalence between the CCP’s domestic policies in Xinjiang and human rights issues in democratic states, particularly the US. We see that effort to reframe international debate about human rights continuing in these most recent datasets. This shift also highlighted that CCP information operations deployed on US social media platforms could be increasingly entrepreneurial and agile in shifting focus to take advantage of strategic opportunities in the information domain.

The previous datasets that Twitter has released publicly through its information operations archive focused on a range of topics of broad interest to the CCP: the Hong Kong protests; the Taiwanese presidential election; the party-state’s Covid-19 recovery and vaccine diplomacy; and exiled Chinese businessman Guo Wengui and his relationship with former Trump White House chief strategist Steve Bannon. The datasets that we examine in this report are more specifically focused on the situation in Xinjiang and on attempts to showcase health and economic benefits of CCP policies to the Uyghur population and other minority groups in the region while overlooking and denying evidence of mass abuse. In both datasets, the emblematic #StopXinjiangRumors hashtag features prominently.

Traits in the data suggest that this operation may have been run at a more local level, including:

  • the amplification of regional news media, as well as Chinese state media outlets
  • the involvement of the Xinjiang-based company Changyu Culture and its relationship with the provincial government, which ASPI previously identified in Strange bedfellows on Xinjiang: the CCP, fringe media and US social media platforms by linking social media channels to the company, and the company to a Xinjiang regional government contract
  • an ongoing attempt to communicate through the appropriation of Uyghur voices
  • the use of ready-made porn and Korean soap opera fan account networks on Twitter that were likely to have been compromised, purchased or otherwise acquired, and then repurposed (a minimal detection sketch follows this list).
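The repurposing pattern in the last point can be surfaced programmatically by comparing what an account posted before and after its first Xinjiang-related tweet. A minimal sketch, assuming Twitter’s takedown CSV schema; the file name and keyword lists are illustrative assumptions only:

```python
# Surface repurposed accounts by comparing pre- and post-pivot content: find
# each account's first Xinjiang-related tweet, then look at what it posted
# before that date. Keyword lists and column names are illustrative assumptions.
import pandas as pd

tweets = pd.read_csv("cncc_tweets.csv", parse_dates=["tweet_time"])

xinjiang = tweets["tweet_text"].str.contains("Xinjiang|Uyghur", case=False, na=False)
pivot = tweets[xinjiang].groupby("userid")["tweet_time"].min().rename("pivot_time")
merged = tweets.merge(pivot, on="userid")

# Accounts whose earlier tweets look like K-drama fandom or spam are candidates
# for having been purchased, hijacked or otherwise acquired, then repurposed.
before = merged[merged["tweet_time"] < merged["pivot_time"]]
fandom = before["tweet_text"].str.contains("drama|kpop|episode", case=False, na=False)
suspects = before.loc[fandom, "userid"].unique()
print(len(suspects), "accounts posted fandom/spam-style content before pivoting")
```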

The CCP is a complex system, and directives from its elite set the direction for party organs and underlings to follow. Propaganda serves to mobilise and steer elements within the party structure, as well as to calibrate the tone of domestic and international messaging. The party’s own incentive structures may help explain the regional origins of the propaganda effort that we analyse in this report and have identified previously. The China Media Project notes, for example, that local party officials are assessed on the basis of their contribution to this international communication work: such work builds Beijing’s ‘discourse power’ while demonstrating obedience to Xi Jinping’s directives.

The data displays features of the online ecosystem that the party has been building to expand its international influence. The networks that we analysed engaged consistently with Chinese state media as well as with a number of stalwart pro-CCP influencers. One strand of activity within the data continues attempts to discredit the BBC that ASPI and Recorded Future have previously reported on, but the real focus of this campaign is an effort to reframe political discourse about the concept of human rights in Xinjiang.

The CNHU dataset, in particular, offers a series of rebuttals to international critiques of CCP policy in Xinjiang. As we’ve noted, the network was active on issues related to health, such as life expectancy and population growth. CCP policies in the region are framed as counterterrorism responses as a way of attempting to legitimise actions, while negative information and testimonies of abuse are simply denied or not reported. The accounts also seek to promote benefits from CCP policies in Xinjiang, such as offering education and vocational training. The BBC and former US Secretary of State Mike Pompeo—the former having published reports about human rights abuses in the region, and the latter having criticised the party’s policies in the region—feature in the data in negative terms. This external focus on the BBC and Pompeo serves to reframe online discussion of Xinjiang and distract from the evidence of systematic abuse. For the CCP, both entities are sources of external threat, against which the party must mobilise.

Methodology

This report combines quantitative analysis of Twitter data with qualitative analysis of tweet content.

In addition, it examines independently identified accounts and content on Twitter, YouTube and Douyin, among other platforms, that appear likely to be related to the network.

Both datasets include video media. That content was processed using SightGraph from AddAxis. SightGraph is a suite of artificial-intelligence and machine-learning capabilities for analysing inauthentic networks that disseminate disinformation. For this project, we used SightGraph to extract and autotranslate multilingual transcripts from video content, enabling further machine-learning analysis to surface ranked, meaningful linguistic features.
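SightGraph itself is proprietary, so its exact pipeline can’t be shown here, but the transcribe-and-translate step can be approximated with open-source tooling. Below is a minimal sketch using OpenAI’s Whisper speech-recognition model, which can transcribe multilingual audio and translate it to English; the model size and file names are illustrative assumptions, not details of ASPI’s actual workflow.

```python
# A comparable open-source pipeline, not SightGraph: Whisper extracts speech
# from video files (decoded via ffmpeg) and, with task="translate", emits an
# English translation of multilingual audio. File names are hypothetical.
import whisper

model = whisper.load_model("medium")  # larger checkpoints cover more languages

def video_to_english_transcript(path: str) -> str:
    """Return an auto-translated English transcript for one video file."""
    result = model.transcribe(path, task="translate")
    return result["text"]

transcripts = {
    path: video_to_english_transcript(path)
    for path in ["campaign_video_001.mp4", "campaign_video_002.mp4"]  # placeholders
}
```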

Likewise, images were processed using Yale Digital Humanities Laboratory’s PixPlot. PixPlot visualises a large image collection within an interactive WebGL scene. Each image was processed with an Inception convolutional neural network, trained on ImageNet 2012, and projected into a two-dimensional manifold with the UMAP algorithm such that similar images appear proximate to one another.
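The embed-then-project approach that PixPlot takes can be reproduced with standard libraries. The sketch below uses torchvision’s ImageNet-trained Inception v3 to extract 2,048-dimensional image features and umap-learn to project them to two dimensions; it illustrates the technique described above rather than PixPlot’s own code, and the image directory is a placeholder.

```python
# Embed images with an ImageNet-trained Inception v3, then project the
# features to 2-D with UMAP so that visually similar images sit together.
import torch
import umap  # pip install umap-learn
from pathlib import Path
from PIL import Image
from torchvision.models import inception_v3, Inception_V3_Weights

weights = Inception_V3_Weights.IMAGENET1K_V1
model = inception_v3(weights=weights)
model.fc = torch.nn.Identity()  # drop the classifier head; keep 2048-d features
model.eval()
preprocess = weights.transforms()  # Inception's resize/crop/normalise pipeline

@torch.no_grad()
def embed(paths):
    # Batching kept deliberately simple; chunk the list for large collections.
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    return model(batch).numpy()

image_paths = sorted(str(p) for p in Path("dataset_images").glob("*.jpg"))  # placeholder dir
features = embed(image_paths)
# Each row of `layout` is an (x, y) position; plotting it gives a PixPlot-style map.
layout = umap.UMAP(n_components=2, metric="cosine").fit_transform(features)
```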

The combination of image and video analysis provided an overview of the narrative themes emerging from the media content related to the two Twitter datasets.

Twitter identified the two datasets as interlinked and associated via a combination of technical and behavioural signals. ICPC doesn’t have direct access to that non-public technical data. Twitter hasn’t released the methodology by which the datasets were selected, and they may not represent a complete picture of Chinese state-linked information operations on Twitter.

The Twitter takedown data

This report analyses the content summarised in Table 1.

Table 1: Twitter dataset summaries

In both datasets, most of the tweeting activity seeking to deny human rights abuses in Xinjiang appears to have started around 2020. In the CNHU dataset, accounts appear to have been created for the purpose of disseminating Xinjiang-related material and began tweeting in April 2019 before ramping up activity in January 2021. That spike in activity aligns with the coordinated targeting of efforts to discredit the BBC that ASPI has previously identified. While some accounts in the CNCC dataset may have originally had a commercial utility, they were probably repurposed some time before 19 June 2020 (the date of the first tweet mentioning Xinjiang and Uyghurs in the dataset) and shifted to posting Xinjiang-related content. Former Secretary of State Mike Pompeo gave his attention-grabbing anti-CCP speech in July 2020, and criticism of him features significantly in both datasets.
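For readers who want to reproduce this kind of timeline analysis, the sketch below shows how the monthly volumes in figures 1 and 2, and the first-mention dates discussed above, can be computed from the CSV files Twitter released. The file name and column names (tweet_time, tweet_language, tweet_text) follow Twitter’s published takedown schema but should be treated as assumptions.

```python
# Monthly tweet volumes by language (as in figures 1 and 2) and the date of
# the first Xinjiang-related tweet, computed from a Twitter takedown CSV.
import pandas as pd

tweets = pd.read_csv("cnhu_tweets.csv", parse_dates=["tweet_time"])

# Tweets per month, broken down by language: the stacked series in the figures.
monthly = (
    tweets.assign(month=tweets["tweet_time"].dt.to_period("M"))
          .groupby(["month", "tweet_language"])
          .size()
          .unstack(fill_value=0)
)
print(monthly.tail())

# When did the dataset first mention Xinjiang or the Uyghurs?
mentions = tweets["tweet_text"].str.contains("Xinjiang|Uyghur", case=False, na=False)
print(tweets.loc[mentions, "tweet_time"].min())
```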

Previous ASPI analysis identified Twitter spambot network activity in December 2019 amplifying articles published by the Global Times, a tabloid under the CCP’s People’s Daily (figures 1 and 2). The boosted articles denied the repression of Uyghurs in Xinjiang and attacked the credibility of individuals such as Mike Pompeo and media organisations such as the New York Times. It isn’t clear whether that network was connected to the CNHU and CNCC datasets, but similar behaviours were identified.

Figure 1: Tweets per month, coloured by tweet language, in CNHU dataset

Figure 2: Tweets per month, coloured by tweet language, in CNCC dataset

An overview of the tweet text in both datasets shows that topics such as ‘Xinjiang’, ‘BBC’, ‘Pompeo’ and ‘Uyghur’ were common to both campaigns (Figure 3). While there were some tweets mentioning ‘Hong Kong’, specifically about the Covid-19 response in that region, this report focuses on content targeting Xinjiang-related issues.

Figure 3: Topic summary of tweet text posted between December 2019 and May 2021

In early 2021, the #StopXinjiangRumors hashtag was boosted by both networks. Accounts in the CNHU dataset were the first to use the hashtag, and many apparently mistakenly used double hashtags (‘##StopXinjiangRumors’). Accounts in the CNCC dataset that were batch-created in February 2021 appear to have posted tweets using the hashtag and tagging ‘Pompeo’ after the CNHU accounts did. The shared use of the hashtag may be coincidental, but the similarity in timing and narratives suggests some degree of coordination. #StopXinjiangRumors continues to be used as a hashtag on Twitter (as well as on YouTube and Facebook).
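Both of the signals described here, malformed double hashtags and batch-created accounts, are straightforward to test for. A minimal sketch, again assuming Twitter’s takedown CSV schema:

```python
# Two simple checks: tweets carrying a doubled hashtag symbol, and days on
# which suspiciously many accounts were created. Column names are assumptions
# based on Twitter's published takedown schema.
import pandas as pd

tweets = pd.read_csv(
    "cncc_tweets.csv", parse_dates=["tweet_time", "account_creation_date"]
)

# Malformed double hashtags ('##StopXinjiangRumors') suggest operators pasting
# from a shared instruction sheet or posting script.
doubled = tweets[tweets["tweet_text"].str.contains(r"##\w+", na=False)]
print(len(doubled), "tweets contain a doubled hashtag")

# Batch creation: count distinct accounts registered per calendar day.
accounts = tweets.drop_duplicates("userid")
per_day = accounts["account_creation_date"].dt.date.value_counts()
print(per_day[per_day >= 10])  # threshold of 10 per day is illustrative
```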

The rest of this report presents the key insights from the two datasets in detail.
 

Dataset 1: CNHU

Dataset 1: CNHU – Key points

  • Roughly two in every five tweets (41%) contained either an image or a video; in total, the CNHU dataset included 12,400 images and 466 videos.
  • This video and image content was aimed broadly at pushing back against allegations of human rights abuses in Xinjiang, particularly by presenting video footage of ‘happy’ Uyghurs participating in vocational training in Xinjiang, as well as screenshots of state media and government events promoting this content.
  • The network promoted phrases commonly used in CCP propaganda about Xinjiang, such as ‘Xinjiang is a wonderful land’ (新疆是个好地方)—the eighth most retweeted hashtag in the CNHU dataset.
  • In total, 48% (1,308) of all retweets by the network were of CCP state media and diplomatic accounts. The Global Times News account was the most retweeted (287), followed by the account of Ministry of Foreign Affairs (MOFA) spokesperson Hua Chunying (华春莹) (108).
  • While the network shared links to state media, YouTube and Facebook, many videos shared in the CNHU dataset appeared to have originated from Douyin.
  • The network worked to promote state media. Of all the tweets, 35% had links to external websites—mostly to Chinese state media outlets such as the China Daily, the China Global Television Network (CGTN) and the Global Times.
  • The network showed potential indicators of automation, including coordinated posting, the appearance of randomised four-letter codes in some tweets, and watermarked images (see the sketch after this list).
  • The network tweeted and shared content in a variety of languages, including using Arabic and French hashtags, suggesting that it was targeting a broad audience.
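A minimal sketch of how the automation indicators flagged above (coordinated posting and four-letter codes) might be surfaced, assuming the same takedown schema as earlier sketches; the regex is a deliberately loose heuristic whose matches would need manual review:

```python
# Heuristics for the automation indicators listed above: four-letter codes at
# the end of tweet text, and minutes in which many distinct accounts posted.
import pandas as pd

tweets = pd.read_csv("cnhu_tweets.csv", parse_dates=["tweet_time"])

# Loose heuristic: tweet text ending in a standalone four-letter lowercase
# token. Ordinary words ('news', 'time') also match, so hits need human review.
code_like = tweets["tweet_text"].str.contains(r"\b[a-z]{4}$", na=False)
print(code_like.sum(), "tweets end in a four-letter token")

# Coordinated posting: minutes in which an unusually large number of distinct
# accounts tweeted at once, a weak but useful scheduling signal.
per_minute = (
    tweets.assign(minute=tweets["tweet_time"].dt.floor("min"))
          .groupby("minute")["userid"]
          .nunique()
)
print(per_minute.sort_values(ascending=False).head(10))
```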

Dataset 2: CNCC

Dataset 2: CNCC – Key points

  • The CNCC dataset contained a considerable number of repurposed spam and porn accounts, as well as content linked to Korean music and television.
  • While there was a small amount of content about Hong Kong and other issues, most of the non-spam content related to Xinjiang. Much of that content sought to present ‘testimonials’ from Uyghurs talking about their happy lives in China.
  • Some of this content may be linked to a company called Changyu Culture, which is connected to the Xinjiang provincial government and was funded to create videos depicting Uyghurs as supportive of the Chinese Government’s policies in Xinjiang.
  • The network had a particular focus on former US Secretary of State Mike Pompeo: @蓬佩奥 or @‘Pompeo’ appears 438 times in the dataset. Likewise, video content shared by the network referenced Pompeo 386 times.

Download Report & Dataset Analysis

Readers are encouraged to download the report to access the full dataset analysis.


Acknowledgements

The authors would like to thank the team at Twitter for advance access to the two datasets analysed in this report, Fergus Hanson and Michael Shoebridge for review comments, and AddAxis for assistance in applying AI to the analysis. ASPI’s International Cyber Policy Centre receives funding from a variety of sources, including sponsorship, research and project support from governments, industry and civil society. No specific funding was received to fund the production of this report.

What is ASPI?

The Australian Strategic Policy Institute was formed in 2001 as an independent, non‑partisan think tank. Its core aim is to provide the Australian Government with fresh ideas on Australia’s defence, security and strategic policy choices. ASPI is responsible for informing the public on a range of strategic issues, generating new thinking for government and harnessing strategic thinking internationally.
ASPI’s sources of funding are identified in our annual report, online at www.aspi.org.au and in the acknowledgements section of individual publications. ASPI remains independent in the content of the research and in all editorial judgements.

ASPI International Cyber Policy Centre

ASPI’s International Cyber Policy Centre (ICPC) is a leading voice in global debates on cyber, emerging and critical technologies and issues related to information and foreign interference, and focuses on the impact these issues have on broader strategic policy. The centre has a growing mixture of expertise and skills, with teams of researchers who concentrate on policy, technical analysis, information operations and disinformation, critical and emerging technologies, cyber capacity building, satellite analysis, surveillance and China-related issues.

The ICPC informs public debate in the Indo-Pacific region and supports public policy development by producing original, empirical, data-driven research. The ICPC enriches regional debates by collaborating with research institutes from around the world and by bringing leading global experts to Australia, including through fellowships. To develop capability in Australia and across the Indo-Pacific region, the ICPC has a capacity building team that conducts workshops, training programs and large-scale exercises for the public and private sectors.
We would like to thank all of those who support and contribute to the ICPC with their time, intellect and passion for the topics we work on. If you would like to support the work of the centre, please contact: icpc@aspi.org.au.

Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional.

© The Australian Strategic Policy Institute Limited 2021

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers. Notwithstanding the above, educational institutions (including schools, independent colleges, universities and TAFEs) are granted permission to make copies of copyrighted works strictly for educational purposes without explicit permission from ASPI and free of charge.

First published December 2021. ISSN 2209-9689 (online). ISSN 2209-9670 (print).

Cover image: Illustration by Wes Mountain. ASPI ICPC and Wes Mountain allow this image to be republished under the Creative Commons License Attribution-Share Alike. Users of the image should use the following sentence for image attribution: ‘Illustration by Wes Mountain, commissioned by the Australian Strategic Policy Institute’s International Cyber Policy Centre.’

Funding Statement: No specific funding was received to fund production of this report.

Influence for hire. The Asia-Pacific’s online shadow economy


What’s the problem?

It’s not just nation-states that interfere in elections and manipulate political discourse. A range of commercial services increasingly engage in such activities, operating in a shadow online influence-for-hire economy that spans from content farms through to high-end PR agencies. There’s growing evidence of states using commercial influence-for-hire networks. The Oxford Internet Institute found 48 instances of states working with influence-for-hire firms in 2019–20, up from 21 in 2017–18 and nine in 2016–17.1 There’s a distinction between legitimate, disclosed political campaigning and government advertising campaigns, on the one hand, and efforts by state actors to covertly manipulate the public opinion of domestic populations or citizens of other countries using inauthentic social media activity, on the other. Covert, inauthentic, outsourced online influence is also problematic because it degrades the quality of the public sphere in which citizens must make informed political decisions.

The Asia–Pacific region contains many states at different stages of democratisation.2 Many have transitioned to democratic forms of governance from authoritarian regimes. Some have weak political institutions, limitations on independent media and fragile civil societies. Rapid digital penetration, layered over that political context, leaves populations vulnerable to online manipulation. In fragile democratic contexts, the prevalence of influence-for-hire operations and their use by agents of the state is particularly problematic, given the power imbalance between citizens and the state.

A surplus of cheap digital labour makes the Asia–Pacific a focus for operators in this economy, and this report examines the regional influence-for-hire marketplace using case studies of online manipulation in the Philippines, Indonesia, Taiwan and Australia. Governments and other entities in the region contract such services to target and influence their own populations in ways that aren’t transparent and that may inhibit freedom of political expression by drowning out dissenting voices. Several governments have introduced anti-fake-news legislation that has the potential to inhibit civic discourse by limiting popular political dissent or constraining the independence of the media from the state.3 These trends risk damaging the quality of civic engagement in the region’s emerging democracies.

What’s the solution?

This is a policy problem spanning government, industry and civil society, and solutions must incorporate all of those domains. Furthermore, influence-for-hire services are working in transnational online spaces that cut across legislative jurisdictions. Currently, much of the responsibility for taking action against the covert manipulation of online audiences falls to the social media companies.

It’s the companies that carry the responsibility for enforcement actions, and those actions are primarily framed around the terms of service and content moderation policies that underpin platform use. The platforms themselves are conscious of the growing marketplace for platform-manipulation services. Facebook, for example, notes this trend in its strategic threat report, The state of influence operations 2017–2020.4

Solutions must involve responsibility and transparency in how governments engage with their citizens.

The use of online advertising in political campaigning is distinct from the covert manipulation of a domestic population by a state. However, governments, civil society and industry have shared interests in an open information environment and can find alignment on the democratic values that support free—and unmanipulated—political expression. Support for democratic forms of governance remains strong in the Asia–Pacific region,5 albeit tempered by concern about the destabilising potential of digitally mediated political mobilisation and by a decade-long trend of democratic backsliding that is constraining the space for civil society.6

The technology industry, civil society and governments should make that alignment of values the bedrock of a productive working relationship. Structures bringing these stakeholders together should reframe those relationships—which are at times adversarial—in order to find common ground. There will be no one-size-fits-all solution, given the region’s cultural diversity. Yet the Asia–Pacific contains many rapidly emerging economies that can contribute to the digital economy in creative ways. The spirit of digital entrepreneurship that drives content farm operations should be reshaped through stakeholder partnerships and engagement into more productive forms of digital labour that can contribute to a creative, diverse and distinct digital economy.

Introduction

It is already well known that the Kremlin’s covert interference in the 2016 US presidential election was outsourced to the now infamous Internet Research Agency.7

ASPI’s investigations of at-scale manipulation of the information environment by other significant state actors have also identified the use of marketing and spam networks to obfuscate state actor involvement. For example, ASPI has previously identified the use of Indonesian spam marketing networks in information operations attributed to the Chinese Government and targeting the Hong Kong protest movement in 2019.8 In 2020, ASPI also discovered the Chinese Government’s repurposing of Russian and Bangladeshi social media accounts to denigrate the movement.9 Those accounts were likely to have been hacked, stolen or on-sold in the influence-for-hire shadow economy. In May 2021, Facebook suspended networks of influence-for-hire activity run from Ukraine targeting domestic audiences and linked to individuals previously sanctioned by the US Department of the Treasury for attempted interference in the 2020 US presidential election.10

Audience engagement with, and heightened sentiment about, civic events creates new business models for those motivated to influence. Australia’s 2019 federal election was targeted by financially motivated actors from Albania, Kosovo and North Macedonia.11 Those operators built large Facebook groups, used inflammatory nationalistic and Islamophobic content to drive engagement, and seeded the groups with links through to off-platform content-farm websites. Each click-through from a Facebook group to the content-farm ecosystem generated advertising revenue for those running the operation. An operation run from Israel used similar tactics, again manipulating and monetising nationalistic and Islamophobic sentiment to build Facebook audiences that could be steered to an ad-revenue-generating content-farm ecosystem of news-style websites.12 Mehreen Faruqi, Australia’s first female Muslim senator, was a target of racist vitriol among the 546,000 followers of 10 Facebook pages within the network. These financially motivated actors demonstrate that even well-established democracies are vulnerable to manipulation through exploitation of the fissures in their social cohesion.

This report examines the influence-for-hire marketplace across the Asia–Pacific through case studies of online manipulation in the Philippines, Indonesia, Taiwan and Australia over five chapters and concludes with policy recommendations (pages 36-37). The authors explore the business models that support and sustain the marketplace for influence and the services that influence operators offer.

Those services are increasingly integrated into political campaigning, yet the report highlights that those same approaches are being used by states in the region to influence their domestic populations in ways that aren’t transparent and that constrict and constrain political expression. In some instances, states in the region are using commercial services as proxies to covertly influence targeted international audiences.

Download full report

The above sections are the report introduction only – readers are encouraged to download the full report, which includes many case studies and references.


Editor and project manager: Dr Jacob Wallis is Head of Program, Information Operations and Disinformation at ASPI’s International Cyber Policy Centre.

About the authors: 

  • Ariel Bogle is an Analyst at ASPI’s International Cyber Policy Centre.
  • Albert Zhang is a Researcher at ASPI’s International Cyber Policy Centre.
  • Hillary Mansour is a Research Intern at ASPI’s International Cyber Policy Centre.
  • Tim Niven is a Research Scientist at Taiwan-based DoubleThink Lab.
  • Elena Yi-Ching Ho was a Research Intern at ASPI’s International Cyber Policy Centre.
  • Jason Liu is a Taiwan-based investigative journalist.
  • Dr Jonathan Corpus Ong is Associate Professor, University of Massachusetts-Amherst and Shorenstein Center Fellow, Technology and Social Change Project, Harvard Kennedy School.
  • Dr Ross Tapsell is Senior Lecturer at the College of Asia & the Pacific at the Australian National University.

Acknowledgements

Thank you to Danielle Cave and Fergus Hanson for all of their work on this project. Thank you also to peer reviewers inside ASPI, including Michael Shoebridge, and to external anonymous peer reviewers for their useful feedback on drafts of the report. Facebook Inc. provided ASPI with a grant of AU$100,000, which was used towards this report. The views reflected in the report are those of the authors only. Additional research costs were covered from ASPI ICPC’s mixed revenue base. The work of ASPI ICPC would not be possible without the support of our partners and sponsors across governments, industry and civil society.


First published August 2021. ISSN 2209-9689 (online), ISSN 2209-9670 (print).

Cover image: Illustration by Wes Mountain. ASPI ICPC and Wes Mountain allow this image to be republished under the Creative Commons License Attribution-Share Alike. Users of the image should use the following sentence for image attribution: ‘Illustration by Wes Mountain, commissioned by the Australian Strategic Policy Institute’s International Cyber Policy Centre.’

Funding statement: This report was in part funded by Facebook Inc.

  1. Samantha Bradshaw, Hannah Bailey, Philip N Howard, Industrialized disinformation: 2020 global inventory of organized social media manipulation, Computational Propaganda Research Project, 2020, online. ↩︎
  2. Lindsey W Ford, Ryan Hass, Democracy in Asia, Brookings Institution, 22 January 2021, online. ↩︎
  3. Andrea Carson, Liam Fallon, Fighting fake news: a study of online misinformation regulation in the Asia Pacific, La Trobe University, January 2021, online. ↩︎
  4. Threat report: the state of influence operations 2017–2020, Facebook, May 2021, online. ↩︎
  5. Lindsey W Ford, Ryan Hass, Democracy in Asia, Brookings Institution, 22 January 2021, online. ↩︎
  6. V-Dem Institute, Democracy report 2021: Autocratization turns viral, 2021, online. ↩︎
  7. US Department of Justice, Internet Research Agency indictment, US Government, 2018, online. ↩︎
  8. T Uren, E Thomas, J Wallis, Tweeting through the Great Firewall: preliminary analysis of PRC-linked information operations on the Hong Kong protests, ASPI, Canberra, 3 September 2019, online. ↩︎
  9. J Wallis, T Uren, E Thomas, A Zhang, S Hoffman, L Li, A Pascoe, D Cave, Retweeting through the Great Firewall: a persistent and undeterred threat actor, ASPI, Canberra, 12 June 2020, online. ↩︎
  10. Facebook, April 2021 coordinated inauthentic behaviour report, 2021, online. ↩︎
  11. M Workman, S Hutcheon, ‘Facebook trolls and scammers from Kosovo are manipulating Australian users’, ABC News, 15 March 2019, online. ↩︎
  12. C Knaus, M McGowan, M Evershed, O Holmes, ‘Inside the hate factory: how Facebook fuels far-right profit’, The Guardian, 6 December 2019, online. ↩︎


Stop the World: TSD Summit Sessions: How to navigate the deep fake and disinformation minefield with Nina Jankowicz

The Sydney Dialogue is over, but never fear, we have more TSD content coming your way! This week, ASPI’s David Wroe speaks to Nina Jankowicz, global disinformation expert and author of the books How to Lose the Information War and How to Be a Woman Online.

Nina takes us through the trends she is seeing in disinformation across the globe, and offers an assessment of who does it best, and whether countries like China and Iran are learning from Russia. She also discusses the links between disinformation and political polarisation, and what governments can do to protect the information domain from foreign interference and disinformation.

Finally, Dave asks Nina about her experience being the target of disinformation and online harassment, and the tactics being used against many women in influential roles, including US Vice President Kamala Harris and Australia’s eSafety Commissioner Julie Inman Grant, in attempts to censor and discredit them.

Guests:
⁠David Wroe
⁠Nina Jankowicz

Stop the World: TSD Summit Sessions: Countering hybrid threats with NATO Deputy Assistant Secretary General James Appathurai

The countdown to the Sydney Dialogue (TSD) is on!  
 
In the second episode of ASPI’s TSD Summit Sessions, Justin Bassi, Executive Director of ASPI, speaks to James Appathurai, NATO’s Deputy Assistant Secretary General for Innovation, Hybrid and Cyber, on all things tech, innovation, security and democracy.  
 
Justin and James discuss hybrid threats in the context of challenges in Europe and the Indo-Pacific, and how democracies in both regions need to work together to prevent and respond to these increasing activities. They explore the impact of technological innovation on security, the rise of artificial intelligence and deep fakes and the risks to democracies, including in elections.  
 
They discuss the challenges posed by Russia and China and how they are harnessing technology to achieve their goals. The conversation canvasses the need for a strategy of deterrence, not just in relation to conflict, but to counter threats below the threshold of war. Such a strategy will require some offence, not just defence, to protect both domestic democratic processes and the international rules-based order.  
 
Note: This episode was recorded prior to the NATO Summit, which took place in Washington DC on 9–11 July.  
 
Guests:  
Justin Bassi 
James Appathurai

Stop the World: Securing democracy: countering disinformation in 2024

This week’s episode of Stop the World comes to you from Brussels, where ASPI’s Executive Director Justin Bassi interviewed Lutz Güllner, Head of Strategic Communications at the European External Action Service.

The conversation explored disinformation, foreign influence, manipulation and interference, with Justin and Lutz discussing the importance of countering state backed disinformation and foreign influence campaigns, how these tactics can affect the resilience of open societies, and what can be done to deter them.

With elections due to take place in the European Union (EU), the United States and the United Kingdom in 2024, they also discussed concerns around election interference, the types of campaigns that foreign actors are undertaking in an attempt to manipulate them, and how the EU can work with Indo-Pacific countries to counter these threats.

Guests:

⁠Justin Bassi⁠

⁠Lutz Güllner