
Strange bedfellows on Xinjiang: The CCP, fringe media and US social media platforms

This report explores how the Chinese Communist Party (CCP), fringe media and pro-CCP online actors seek—sometimes in unison—to shape and influence international perceptions of the Chinese Government’s human rights abuses in Xinjiang, including through the amplification of disinformation. United States (US)-based social media networks, including Twitter, Facebook and YouTube, along with TikTok (owned by Chinese company ByteDance), are centre stage in this global effort.

The Chinese Government continues to deny human rights abuses in Xinjiang despite a proliferation of credible evidence, including media reporting, independent research, testimonies and open-source data, that has revealed abuses including forced labour, mass detention, surveillance, sterilisation, cultural erasure and alleged genocide in the region. To distract from such human rights abuses, covert and overt online information campaigns have been deployed to portray positive narratives about the CCP’s domestic policies in the region, while also injecting disinformation into the global public discourse regarding Xinjiang.

The report’s key findings:

  • Since early 2020, there’s been a stark increase in the Chinese Government and state media’s use of US social media networks to push alternative narratives and disinformation about the situation in Xinjiang. Chinese state media accounts have been most successful in using Facebook to engage and reach an international audience.
  • The CCP is using tactics including leveraging US social media platforms to criticise and smear Uyghur victims, journalists and researchers who work on this topic, as well as their organisations. We expect these efforts to escalate in 2021.
  • Chinese Government officials and state media are increasingly amplifying content, including disinformation, produced by fringe media and conspiracist websites that are often sympathetic to the narrative positioning of authoritarian regimes, extending the reach and influence of those sites in the Western media ecosystem. Senior officials from multilateral organisations, including the World Health Organization (WHO) and the United Nations (UN), have also played a role in sharing such content.
  • The Xinjiang Audio-Video Publishing House, a publishing organisation owned by a regional government bureau and affiliated with the propaganda department, has funded a marketing company to create videos depicting Uyghurs as supportive of the Chinese Government’s policies in Xinjiang. Those videos were then amplified on Twitter and YouTube by a network of inauthentic accounts. The Twitter accounts also retweeted and liked non-Xinjiang-related tweets by Chinese diplomatic officials and Chinese state-affiliated media in 2020.

Trigger warning: The CCP’s coordinated information effort to discredit the BBC

Chinese Communist Party (CCP) diplomatic accounts, Chinese state media, pro-CCP influencers and patriotic trolls are targeting the UK public broadcaster, the BBC, in a coordinated information operation. Recent BBC reports, including the allegations of systematic sexual assault in Xinjiang’s internment camps, were among a number of triggers provoking the CCP’s propaganda apparatus to discredit the BBC, distract international attention and recapture control of the narrative.

In ASPI ICPC’s new report, Albert Zhang and Dr Jacob Wallis provide a snapshot of the CCP’s ongoing coordinated response targeting the BBC, which leveraged YouTube, Twitter and Facebook and was broadly framed around three prominent narratives:

  1. That the BBC spreads disinformation and is biased against China.
  2. That the BBC’s domestic audiences think that it’s biased and not to be trusted.
  3. That the BBC’s reporting on China is instigated by foreign actors and intelligence agencies.

In addition, the report analyses some of the secondary effects of this propaganda effort by exploring the mobilisation of a pro-CCP Twitter network that has previously amplified the Covid-19 disinformation content being pushed by China’s Ministry of Foreign Affairs, and whose negative online engagement with the BBC peaks on the same days as that of the party-state’s diplomats and state media. 

To contest and blunt criticism of the CCP’s systematic surveillance and control of minority ethnic groups, the party will continue to aggressively deploy its propaganda and disinformation apparatus. Domestic control remains fundamental to its political power and legitimacy, and internationally narrative control is fundamental to the pursuit of its foreign policy interests.

#WhatsHappeningInThailand: The power dynamics of Thailand’s digital activism

Thailand’s political discourse throughout the past decade has increasingly been shaped and amplified by social media and digital activism. The most recent wave of political activism this year saw the emergence of a countrywide youth-led democracy movement against the military-dominated coalition, as well as a nationalist counter-protest movement in support of the establishment.

The steady evolution of tactics on the part of the government, the military and protesters reflects an increasingly sophisticated new battleground for democracy, both on the streets and the screens. Understanding these complex dynamics is crucial for any broader analysis of the Thai protest movement and its implications.

In this report, we analyse samples of Twitter data relating to the online manifestation of contemporary political protests in Thailand. We explore two key aspects in which the online manifestation of the protests differs from its offline counterpart: (1) the power dynamics between institutional actors and protesters, and (2) the participation and engagement of international actors surrounding the protests.

Cyber-enabled foreign interference in elections and referendums

What’s the problem?

Over the past decade, state actors have taken advantage of the digitisation of election systems, election administration and election campaigns to interfere in foreign elections and referendums.1 Their activity can be divided into two attack vectors. First, they’ve used various cyber operations, such as denial of service (DoS) attacks and phishing attacks, to disrupt voting infrastructure and target electronic and online voting, including vote tabulation. Second, they’ve used online information operations to exploit the digital presence of election campaigns, politicians, journalists and voters.

Together, these two attack vectors (referred to collectively as ‘cyber-enabled foreign interference’ in this report because both are mediated through cyberspace) have been used to seek to influence voters and their turnout at elections, manipulate the information environment and diminish public trust in democratic processes.

This research identified 41 elections and seven referendums between January 2010 and October 2020 where cyber-enabled foreign interference was reported, and it finds that there’s been a significant uptick in such activity since 2017. This data collection shows that Russia is the most prolific state actor engaging in online interference, followed by China, whose cyber-enabled foreign interference activity has increased significantly over the past two years. As well as these two dominant actors, Iran and North Korea have also tried to influence foreign elections in 2019 and 2020. All four states have sought to interfere in the 2020 US presidential elections using differing cyber-enabled foreign interference tactics.

In many cases, these four actors use a combination of cyber operations and online information operations to reinforce their activities. There’s also often a clear geopolitical link between the interfering state and its target: these actors are targeting states they see as adversaries or useful to their geopolitical interests.

Democratic societies are yet to develop clear thresholds for responding to cyber-enabled interference, particularly when it’s combined with other levers of state power or layered with a veil of plausible deniability.2 Even when they’re able to detect it, often with the help of social media platforms, research institutes and the media, most states are failing to effectively deter such activity. The principles inherent in democratic societies—openness, freedom of speech and the free flow of ideas—have made them particularly vulnerable to online interference.

What’s the solution?

This research finds that not all states are being targeted by serious external threats to their electoral processes, so governments should consider scaled responses to specific challenges. However, the level of threat to all states will change over time, so there’s little room for complacency. For all stakeholders—in government, industry and civil society—learning from the experience of others will help nations minimise the chance of their own election vulnerabilities being exploited in the future.3

The integrity of elections and referendums is key to societal resilience. Therefore, these events must be better protected through greater international collaboration and stronger engagement between government, the private sector and civil society.

Policymakers must respond to these challenges without adopting undue regulatory measures that would undermine their political systems and create ‘the kind of rigidly controlled environment autocrats seek’.4 Those countries facing meaningful cyber-enabled interference need to adopt a multi-stakeholder approach that carefully balances democratic principles and involves governments, parliaments, internet platforms, cybersecurity companies, media, NGOs and research institutes. This report recommends that governments identify vulnerabilities and threats as a basis for developing an effective risk-mitigation framework for resisting cyber-enabled foreign interference.

The rapid adoption of social media and its integration into the fabric of political discourse has created an attack surface for malign actors to exploit. Global online platforms must take appropriate action against actors attempting to manipulate their users, yet these companies are commercial entities whose interests aren’t always aligned with those of governments. They aren’t intelligence agencies, so they’re sometimes limited in their capacity to attribute malign activities directly. To mitigate risk during election cycles, social media companies’ security teams should work closely with governments and civil society groups to ensure that there’s a shared understanding of the threat actors and their tactics, supporting an effectively calibrated and collaborative security posture.

Policymakers must implement appropriate whole-of-government mechanisms that continuously engage key stakeholders in the private sector and civil society. Governments and businesses must invest more in capacity building for the detection and deterrence of these threats. It’s vital that civil society groups are supported to build up capability that stimulates and informs international public discourse and policymaking. Threats to election integrity are persistent, and the number of actors willing to deploy these tactics is growing.

Background

Foreign states’ efforts to interfere in the elections and referendums of other states, and more broadly to undermine other political systems, are an enduring practice of statecraft.5 Yet the scale and methods through which such interference occurs have changed, with old and new techniques adapting to suit the cyber domain and the opportunities presented by a 24/7, always-connected information environment.6

When much of the world moved online, political targets became more vulnerable to foreign interference, and millions of voters were suddenly exposed, ‘in a new, “neutral” medium, to the very old arts of persuasion or agitation’.7 The adoption of electronic and online voting, voter tabulation and voter registration,8 as well as the growth of online information sharing and communication, has made interference in elections easier, cheaper and more covert.9 This has lowered the entry costs for states seeking to engage in election interference.10

Elections and referendums are targeted by foreign adversaries because they are moments when significant political and policy change occurs and because they are the means through which elected governments derive their legitimacy.11 By targeting electoral events, foreign actors can attempt to influence political decisions and policymaking, shift political agendas, encourage social polarisation and undermine democracies. This enables them to achieve long-term strategic goals, such as strengthening their relative national and regional influence, subverting undesired candidates, and compromising international alliances that ‘pose a threat’ to their interests.12

Elections and referendums also involve diverse actors, such as politicians, campaign staffers, voters and social media platforms, all of which can be targeted to knowingly or unknowingly participate in, or assist with, interference orchestrated by a foreign state.13 There are also a number of cases where journalists and media outlets have unwittingly shared, amplified, and contributed to the online information operations of foreign state actors.14 The use of unknowing participants has proved to be a key feature of cyber-enabled foreign election interference.

This is a dangerous place for liberal democracies to be in. This report highlights that the same foreign state actors continue to pursue this type of interference, so much so that it is now becoming a global norm that’s an expected part of some countries’ election processes. On its own, this perceived threat has the potential to undermine the integrity of elections and referendums and trust in public and democratic institutions.

Methodology and definitions

This research is an extension and expansion of the International Cyber Policy Centre’s Hacking democracies: cataloguing cyber-enabled attacks on elections, which was published in May 2019. That project developed a database of reported cases of cyber-enabled foreign interference in national elections held between November 2016 and April 2019.15 This new research extends the scope of Hacking democracies by examining cases of cyber-enabled foreign interference between January 2010 and October 2020. This time frame was selected because information on the use of cyber-enabled techniques as a means of foreign interference started to emerge only in the early 2010s.16

This report’s appendix includes a dataset that provides an inventory of case studies in which foreign state actors have reportedly used cyber-enabled techniques to interfere in elections and referendums.

The cases have been categorised by:

  • target
  • type of political process
  • year
  • attack vector (method of interference)
  • alleged foreign state actor.

Also accompanying this report is an interactive online map that geocodes and illustrates our dataset, allowing users to apply filters to search through the above categories.

This research relied on open-source information, predominantly in English, including media reports from local, national, and international outlets, policy papers, academic research, and public databases. It was desktop based and consisted of case selection, case categorisation and mixed-methods analysis.17 The research also benefited from a series of roundtable discussions and consultations with experts in the field,18 as well as a lengthy internal and external peer review process.

The accompanying dataset only includes cases where attribution was publicly reported by credible researchers, cybersecurity firms or journalists. The role of non-state actors and the use of cyber-enabled techniques by domestic governments and political parties to shape political discourse and public attitudes within their own societies weren’t considered as part of this research.19

This methodology has limitations. For example, the research is limited by the covert and ongoing nature of cyber-enabled foreign interference, which is not limited to the period of an election cycle or campaign. Case selection for the new dataset, in particular, was impeded by the lack of publicly available information and uncertainty about intent and attribution, which are common problems in work concerning cyber-enabled or other online activity. It likely results in the underreporting of cases and a skewing towards English-language and mainstream media sources. The inability to accurately assess the impact of interference campaigns also results in a dataset that doesn’t distinguish between major and minor campaigns and their outcomes. The methodology omitted cyber-enabled foreign interference that occurred outside the context of elections or referendums.20

In the context of this policy brief, the term ‘attack vector’ refers to the means by which foreign state actors carry out cyber-enabled interference. Accordingly, the dataset contains cases of interference that can broadly be divided into two categories:

• Cyber operations: covert activities carried out via digital infrastructure to gain access to a server or system in order to compromise its service, identify or introduce vulnerabilities, manipulate information or perform espionage21
• Online information operations: information operations carried out in the online information environment to covertly distort, confuse, mislead and manipulate targets through deceptive or inaccurate information.22

Cyber operations and online information operations are carried out via an ‘attack surface’, which is to be understood as the ‘environment where an attacker can try to enter, cause an effect on, or extract data from’.23
 

Key findings

ASPI’s International Cyber Policy Centre has identified 41 elections and seven referendums between January 2010 and October 2020 (Figure 1) that have been subject to cyber-enabled foreign interference in the form of cyber operations, online information operations or a combination of the two.24

Figure 1: Cases of cyber-enabled foreign interference, by year and type of political process

Figure 1 shows that reports of the use of cyber-enabled techniques to interfere in foreign elections and referendums have increased significantly over the past five years. Thirty-eight of the 41 elections in which foreign interference was identified, and six of the referendums, occurred between 2015 and 2020 (Figure 1). These figures are significant when we consider that elections take place only every couple of years and that referendums are typically held on an ad hoc basis, meaning that foreign state actors have limited opportunities to carry out this type of interference.

As a key feature of cyber-enabled interference is deniability, there are likely many more cases that remain publicly undetected or unattributed. Moreover, what might be perceived as a drop in recorded cases in 2020 can be attributed to a number of factors, including election delays caused by Covid-19 and the fact that election interference is often identified and reported on only after an election period is over.

Figure 2: Targets of cyber-enabled foreign interference in an election or referendum

Note: The numbers in the map represent the number of reported cases of cyber-enabled foreign interference in an election or referendum. Access this interactive map here. Source: Maptive, map data © 2020 Google.

Figure 3: Number of political processes targeted (1–4), by state or region

Cyber-enabled interference occurred on six continents (Africa, Asia, Europe, North America, Australia and South America). The research identified 33 states that have experienced cyber-enabled foreign interference in at least one election cycle or referendum, the overwhelming majority of which are democracies.25 The EU has also been a target: several member states were targeted in the lead-up to the 2019 European Parliament election.26

Significantly, this research identified 11 states that were targeted in more than one election cycle or referendum (Figure 3). The repeated targeting of certain states is indicative of their (perceived) strategic value, the existence of candidates that are aligned with the foreign state actors’ interests,27 insufficient deterrence efforts, or past efforts that have delivered results.28 This research also identified five cases in which multiple foreign state actors targeted the same election or referendum (the 2014 Scottish independence referendum, the 2016 UK referendum on EU membership, the 2018 Macedonian referendum, the 2019 Indonesian general election and the 2020 US presidential election). Rather than suggesting coordinated action, the targeting of a single election or referendum by multiple foreign state actors more likely reflects the strategic importance of the outcome to multiple states.

The attack vectors

The attack vectors are cyber operations and online information operations.29 Of the 48 political processes targeted, 26 were subjected to cyber operations and 34 were subjected to online information operations. Twelve were subjected to a combination of both (Figure 4).

Figure 4: Attacks on political processes, by attack vector

Cyber operations

This research identified 25 elections and one referendum over the past decade in which cyber operations were used for interference purposes. In the context of election interference, cyber operations fell into two broad classes: operations to directly disrupt (such as DoS attacks) or operations to gain unauthorised access (such as phishing). Unauthorised access could be used to enable subsequent disruption or to gather intelligence that could then enable online information operations, such as a hack-and-leak campaign.

Phishing attacks were the main technique used to gain unauthorised access to the personal online accounts and computer systems of individuals and organisations involved in managing and running election campaigns or infrastructure. They were used in 17 of the 25 elections, as well as the referendum, with political campaigns on the receiving end in most of the reported instances. Phishing involves misleading a target into downloading malware or disclosing personal information, such as login credentials, by sending a malicious link or file in an otherwise seemingly innocuous email or message (Figure 5).30 For example, Google revealed in 2020 that Chinese state-sponsored threat actors pretended to be from antivirus software firm McAfee in order to target US election campaigns and staffers with a phishing attack.31

Figure 5: The email Russian hackers used to compromise state voting systems ahead of the 2016 US presidential election

Source: Sam Biddle, ‘Here’s the email Russian hackers used to try to break into state voting systems’, The Intercept, 2 June 2018, online.
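To make the mechanics concrete, the sketch below illustrates two of the simplest phishing indicators visible in an email like the one in Figure 5: a sender whose display name invokes a trusted brand but whose address domain doesn’t match, and a link whose visible text names one domain while pointing to another. This is a minimal, hypothetical illustration only (the allow-list and examples are invented); real defences rely on far richer signals such as header analysis, sender reputation and sandboxing.

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list for illustration; not drawn from the report.
TRUSTED_DOMAINS = {"google.com", "mcafee.com"}

def link_mismatch(visible_text: str, href: str) -> bool:
    """Flag links whose visible text names one domain but which point
    elsewhere, e.g. text 'google.com/security' linking to an attacker host."""
    shown = re.search(r"([a-z0-9-]+\.[a-z]{2,})", visible_text.lower())
    actual = urlparse(href).hostname or ""
    return bool(shown) and not actual.endswith(shown.group(1))

def sender_spoofed(display_name: str, from_address: str) -> bool:
    """Flag senders whose display name claims a trusted brand while the
    actual address domain isn't on that brand's allow-list."""
    domain = from_address.rsplit("@", 1)[-1].lower()
    claims_brand = any(d.split(".")[0] in display_name.lower() for d in TRUSTED_DOMAINS)
    return claims_brand and domain not in TRUSTED_DOMAINS

# Usage on invented examples styled after Figure 5:
print(link_mismatch("google.com/security", "http://login.attacker.example/auth"))  # True
print(sender_spoofed("Google Security Team", "alerts@gmail-accounts.example"))     # True
```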

When threat actors gain unauthorised access to election infrastructure, they could potentially disrupt or even alter vote counts, as well as use information gathered from their access to distract public discourse and sow doubt about the validity and integrity of the process.

Then there are DoS attacks, in which a computer or online server is overwhelmed by connection requests, leaving it unable to provide service.32 In elections, they’re often used to compromise government and election-related websites, including those used for voter registration and vote tallying.

DoS attacks were used in six of the 25 elections, and one referendum, targeting vote-tallying websites, national electoral commissions and the websites of political campaigns and candidates. For example, in 2019, the website of Ukrainian presidential candidate Volodymyr Zelenskiy was subjected to a distributed DoS attack the day after he announced his intention to run for office. The website received 5 million requests within minutes of its launch and was quickly taken offline, preventing people from registering as supporters.33
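The detection challenge noted later in this report (a DoS flood can resemble a legitimate surge in traffic) comes down to rate baselines. The sketch below is a hypothetical, illustrative per-source rate check over a sliding window; the window size and threshold are invented, and a real distributed attack spreads requests across many sources so that each one may individually look normal.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10            # assumed sliding-window length
MAX_REQUESTS_PER_WINDOW = 100  # assumed per-source threshold

recent = defaultdict(deque)  # source IP -> timestamps of recent requests

def is_flooding(source_ip: str, now=None) -> bool:
    """Record one request and report whether this source has exceeded
    the per-window rate threshold."""
    now = time.time() if now is None else now
    q = recent[source_ip]
    q.append(now)
    # Evict timestamps that have fallen outside the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS_PER_WINDOW
```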

Online information operations

This research identified 28 elections and six referendums over the past decade in which online information operations were used for interference purposes. In the context of election interference, online information operations should be understood as the actions taken online by foreign state actors to distort political sentiment in an election to achieve a strategic or geopolitical outcome.34

They can be difficult to distinguish from everyday online interactions and often seek to exploit existing divisions and tensions within the targeted society.35

Online information operations combine social media manipulation (‘coordinated inauthentic behaviour’) with, for example, partisan media coverage and disinformation to distort political sentiment during an election and, more broadly, to alter the information environment. The operations are designed to target voters directly and often make use of social media and networking platforms to interact in real time and assimilate more readily with their targets.36

Online information operations tend to attract and include domestic actors.37 There have been several examples in which Russian operatives successfully infiltrated and influenced legitimate activist groups in the US.38 This becomes even more pronounced as foreign state actors align their online information operations with domestic disinformation and extremist campaigns, amplifying rather than creating disinformation.39 The strategic use of domestic disinformation means that governments and regulators may find it difficult to target foreign operations without also taking a stand against domestic misinformers and groups.

The two attack vectors can also converge and reinforce one another.40 This research identified three elections where cyber operations were used to compromise a system and obtain sensitive material, such as emails or documents, which were then strategically disclosed online and amplified.41 For example, according to Reuters, classified documents titled ‘UK-US Trade & Investment Working Group Full Readout’ were distributed online before the 2019 British general election as part of a Russian-backed strategic disclosure campaign.42

The main concern with the strategic use of both attack vectors is that it further complicates the target’s ability to detect, attribute and respond. This means that any meaningful response will need to consider both potential attack vectors when securing vulnerabilities.

State actors and targets

Cyber-enabled foreign interference in elections and referendums between 2010 and 2020 has been publicly attributed to only a small number of states: Russia, China, Iran and North Korea. In most cases, a clear geopolitical link between the source of interference and the target can be identified; Russia, China, Iran and North Korea mainly target states in their respective regions, or states they regard as adversaries, such as the US.43

Growing convergence among foreign state actors, notably China’s and Iran’s adoption of techniques pioneered by Russia, has made it increasingly difficult to distinguish between them.44 This has been further complicated by domestic groups, particularly those aligned with the far right, adopting Russian tactics and techniques.45

Russia

Russia is the most prolific foreign actor in this space. This research identified 31 elections and seven referendums involving 26 states over the past decade in which Russia allegedly used cyber-enabled foreign interference tactics. Unlike the actions of many of the other state actors profiled here, Russia’s approach has been global and wide-ranging. Many of Russia’s efforts remain focused on Europe, where Moscow allegedly used cyber-enabled means to interfere in 20 elections (including the 2019 European Parliament election) and seven referendums. Of the 16 European states affected, 12 are members of the EU and 13 are members of NATO.46 Another focus for Russia has been the US, and while the actual impact on voters remains debatable, Russian interference has become an expected part of US elections.47 Moscow has also sought to interfere in the elections of several countries in South America and Africa, possibly in an attempt to undermine democratisation efforts and influence their foreign policy orientations.48

Russia appears to be motivated by the intent to signal its capacity to respond to perceived foreign interference in its internal affairs and anti-Russian sentiment.49 It also seeks to strengthen its regional power by weakening alliances that pose a threat. For instance, Russia used cyber operations and online information operations to interfere in both the 2016 Montenegrin parliamentary election and the 2018 Macedonian referendum. This campaign was part of its broader political strategy to block the two states from joining NATO and prevent the expansion of Western influence into the Balkan peninsula.50

Figure 6: States targeted by Russia between 2010 and 2020

Source: Maptive, map data © 2020 Google.

China

Over the past decade, it’s been reported that China has targeted 10 elections in seven states and regions. Taiwan, specifically Taiwanese President Tsai Ing-wen and her Democratic Progressive Party, has been the main target of China’s cyber-enabled election interference.51 Over the past three years, however, the Chinese state has expanded its efforts across the Indo-Pacific region.52 Beijing has also been linked to activity during the 2020 US presidential election. As reported by the New York Times and confirmed by both Google and Microsoft, state-backed hackers from China allegedly conducted unsuccessful spear-phishing attacks to gain access to the personal email accounts of campaign staff members working for the Democratic Party candidate Joseph Biden.53

China’s interference in foreign elections is part of its broader strategy to defend its ‘core’ national interests, both domestically and regionally, and apply pressure to political figures who challenge those interests. Those core interests, as defined by the Chinese Communist Party, include the preservation of domestic stability, economic development, territorial integrity and the advancement of China’s great-power status.54 Previously, China’s approach could be contrasted with Russia’s in that China attempted to deflect negativity and shape foreign perceptions to bolster its legitimacy, whereas Russia sought to destabilise the information environment, disrupt societies and weaken the target.55 More recently, however, China has adopted methods associated with Russian interference, such as blatantly destabilising the general information environment in targeted countries with obvious mistruths and conspiracy theories.56

Figure 7: States and regions targeted by China between 2010 and 2020

Source: Maptive, map data © 2020 Google.

Iran

This dataset shows that Iran allegedly interfered in two elections and two referendums in three states.57 Iranian interference in foreign elections appears to be similar to Russian interference in that it’s a defensive action against the target for meddling in Iran’s internal affairs and a reaction to perceived anti-Iran sentiment. A pertinent and current example of this is Iran’s recent efforts to interfere in the 2020 US presidential election by targeting President Trump’s campaign.58 As reported by the Washington Post, Microsoft discovered that the Iranian-backed hacker group Phosphorus had used phishing emails to target 241 email accounts belonging to government officials, journalists, prominent Iranian citizens and staff associated with Trump’s election campaign and successfully compromised four of those accounts.59

Figure 8: States targeted by Iran between 2010 and 2020

Source: Maptive, map data © 2020 Google.

North Korea

North Korea has been identified as a foreign threat actor behind activity targeting both the 2020 South Korean legislative election and the 2020 US presidential election.60 Somewhat similarly to China’s approach, North Korea’s interference appears to focus on silencing critics and discrediting narratives that undermine its national interests. For example, North Korea targeted North Korean citizens running in South Korea’s 2020 legislative election, including Thae Yong-ho, the former North Korean Deputy Ambassador to the UK and one of the highest-ranking North Korean officials to ever defect.61

Figure 9: States targeted by North Korea between 2010 and 2020

Source: Maptive, map data © 2020 Google.

Detection and attribution

Detection and attribution require considerable time and resources, as both tasks demand the technical ability to analyse and reverse-engineer a cyber operation or online information operation.

Beyond attribution, understanding the strategic and geopolitical aims of each event is challenging and time-consuming.62 The covert and online nature of cyber-enabled interference, whether carried out as a cyber operation or an online information operation, inevitably complicates the detection and identification of interference. For example, a DoS attack can be difficult to distinguish from a legitimate rise in online traffic. Moreover, the nature of the digital infrastructure and the online information environment used to carry out interference enables foreign state actors to conceal or falsify their identities, locations, time zones and languages.

As detection and attribution capabilities improve, the tactics and techniques used by foreign states will adapt accordingly, further complicating efforts to detect and attribute interference promptly.63

There are already examples of foreign state actors adapting their techniques, such as using closed groups and encrypted communication platforms (such as WhatsApp, Telegram and LINE) to spread disinformation64 or using artificial intelligence to generate false content.65 It can also be difficult to determine whether an individual or group is acting on its own or on behalf of a state.66 This is further complicated by the use of non-state actors, such as hackers-for-hire, consultancy firms and unwitting individuals, as proxies. Ahead of the 2017 Catalan independence referendum, for example, the Russian-backed media outlets RT and Sputnik used Venezuelan and Chavista-linked social media accounts as part of an amplification campaign. The hashtag #VenezuelaSalutesCatalonia was amplified by the accounts to give the impression that Venezuela supported Catalonian independence.67 More recently, Russia outsourced part of its 2020 US presidential disinformation campaign to Ghanaian and Nigerian nationals who were employed to generate content and disseminate it on social media.68
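Campaigns like the #VenezuelaSalutesCatalonia amplification are often surfaced by looking for bursts in which unusually many distinct accounts post the same hashtag within a short interval. The sketch below is a generic, hypothetical first-pass filter of that kind (field names and thresholds are invented); flagged windows are only candidates and still require manual investigation of the accounts involved.

```python
from datetime import datetime, timedelta

# Hypothetical records; in practice these would come from a platform API
# or a published takedown dataset.
tweets = [
    {"user": "acct_1", "hashtag": "#exampletag", "ts": datetime(2017, 9, 22, 12, 0, 5)},
    {"user": "acct_2", "hashtag": "#exampletag", "ts": datetime(2017, 9, 22, 12, 1, 9)},
    {"user": "acct_3", "hashtag": "#exampletag", "ts": datetime(2017, 9, 22, 12, 2, 40)},
]

def burst_windows(tweets, tag, window=timedelta(minutes=5), min_accounts=3):
    """Return (start_time, accounts) for windows in which at least
    `min_accounts` distinct accounts posted `tag`: a weak coordination signal."""
    posts = sorted((t["ts"], t["user"]) for t in tweets if t["hashtag"] == tag)
    flagged = []
    for i, (start, _) in enumerate(posts):
        users = {u for ts, u in posts[i:] if ts - start <= window}
        if len(users) >= min_accounts:
            flagged.append((start, sorted(users)))
    return flagged

print(burst_windows(tweets, "#exampletag"))
```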

The ‘bigger picture’

States vary in their vulnerability to cyber-enabled foreign interference in elections and referendums.

In particular, ‘highly polarised or divided’ democracies tend to be more vulnerable to such interference.69 The effectiveness of cyber-enabled interference in the lead-up to an election is overwhelmingly determined by the robustness and integrity of the information environment and the extent to which the electoral process has been digitised.70 Academics from the School of Politics and International Relations at the Australian National University found that local factors, such as the length of the election cycle and the target’s preparedness and response, also play a significant role. For example, Emmanuel Macron’s En Marche! campaign prepared for Russian interference by implementing strategies to respond to both cyber operations (specifically, phishing attacks) and online information operations. If a phishing attack was detected, Macron’s IT team was instructed to ‘flood’ the attackers with multiple bogus login credentials to disrupt and distract them. To deal with online information operations, Macron’s team planted fake emails and documents that could be identified in the event of a strategic disclosure and undermine the adversary’s effort.71
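The En Marche! ‘flooding’ tactic is a form of defensive deception: answering a phishing lure with large volumes of fabricated credentials so that anything the attacker harvests is diluted with noise. A minimal, hypothetical sketch of generating such decoys follows; nothing here reflects the campaign’s actual tooling.

```python
import random
import string

def fake_credential() -> tuple:
    """Fabricate a plausible-looking but worthless username/password pair."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    password = "".join(random.choices(string.ascii_letters + string.digits, k=12))
    return f"{name}@example.org", password

# Generate decoys; these would then be submitted to the detected phishing
# page, forcing the attacker to sift any real captures out of the noise.
decoys = [fake_credential() for _ in range(1000)]
print(decoys[0])
```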

Electronic and online voting, vote tabulation and voter registration systems are often presented as the main targets of cyber-enabled interference, but it is important to recognise that what is ultimately at stake is the public’s trust in the integrity of electoral systems, democratic processes and the information environment. In Europe, a 2018 Eurobarometer survey on democracy and elections found that 68% of respondents were concerned about the potential for fraud or cyberattack in electronic voting, and 61% were concerned about ‘elections being manipulated through cyberattacks’.72

That figure matched the result of a similar survey conducted by the Pew Research Center in the US, which found that 61% of respondents believed it was likely that cyberattacks would be used in the future to interfere in their country’s elections.73

However, not all states are equally vulnerable to this type of interference. Some, for example, opt to limit or restrict the use of information and communication technologies in the electoral process.74 The Netherlands even reverted to using paper ballots to minimise its vulnerability to a cyber operation, ensuring that there wouldn’t be doubts about the electoral outcome.75 Authoritarian states that control, suppress and censor their information environments are also less vulnerable to cyber-enabled foreign interference.76

The proliferation of actors involved in elections and the digitisation of election functions have dramatically widened the attack surface available to foreign state actors. This has in large part been facilitated by the pervasive and persistent growth of social media and networking platforms, which have made targeted populations more accessible than ever to foreign state actors. For example, Russian operatives at the Internet Research Agency were able to pose convincingly as Americans online to form groups and mobilise political rallies and protests.77 The scale of this operation wouldn’t have been possible without social media and networking platforms.

Figure 10: Number of people using social media platforms, July 2020 (million)

Source: ‘Most popular social networks worldwide as of July 2020, ranked by number of active users’, Statista, 2020, online.

While these platforms play an increasingly significant role in how people communicate about current affairs, politics and other social issues, they continue to be misused and exploited by foreign state actors.78 Moreover, they have fundamentally changed the way information is created, accessed and consumed, resulting in an online information environment ‘characterised by high volumes of information and limited levels of user attention’.79

In responding to accusations of election interference, foreign actors tend to deny their involvement and then deflect by indicating that the accusations are politically motivated. In 2017, following the release of the United States’ declassified assessment of Russian election interference,80 Russian Presidential Spokesperson Dmitry Peskov compared the allegations of interference to a ‘witch-hunt’ and stated that they were unfounded and unsubstantiated, and that Russia was ‘growing rather tired’ of the accusations.81 Russian President Vladimir Putin even suggested that it could be Russian hackers with ‘patriotic leanings’ that have carried out cyber-enabled election interference rather than state-sponsored hackers.82

Plausible deniability is often cited in response to accusations of interference, with China’s Foreign Ministry noting that the ‘internet was full of theories that were hard to trace’.83 China has also attempted to deter future allegations by threatening diplomatic consequences: responding to allegations that it was behind the sophisticated cyberattack on Australia’s parliament, it warned that the ‘irresponsible’ and ‘baseless’ claims could negatively affect China’s relationship with Australia.84

Recommendations

The threats posed by cyber-enabled foreign interference in elections and referendums will persist, and the range of state actors willing to deploy these tactics will continue to grow. Responding to the accelerating challenges in this space requires a multi-stakeholder approach that doesn’t impose an undue regulatory burden that could undermine democratic rights and freedoms. Responses should be calibrated according to the identified risks and vulnerabilities of each state. This report proposes recommendations categorised under four broad themes: identify, protect, detect and respond.

1. Identify

Identify vulnerabilities and threats as a basis for developing an effective risk-mitigation framework

  • Governments should develop and implement risk-mitigation frameworks for cyber-enabled foreign interference that incorporate comprehensive threat and vulnerability assessments. Each framework should include a component that is available to the public, provide an assessment of cybersecurity vulnerabilities in election infrastructure, explain efforts to detect foreign interference, raise public awareness, outline engagement with key stakeholders, and provide a clearer threshold for response.85
  • The security of election infrastructure needs to be continuously assessed and audited, during and in between elections.
  • Key political players, including political campaigns, political parties and governments, should engage experts to develop and facilitate tabletop exercises to identify and develop mitigation strategies that consider the different potential attack vectors, threats and vulnerabilities.86

2. Protect

Improve societal resilience by raising public awareness

  • Governments need to develop communication and response plans for talking to the public about cyber-enabled foreign interference, particularly when it involves attempts to interfere in elections and referendums.
  • Government leaders should help to improve societal resilience and situational awareness by making clear and timely public statements about cyber-enabled foreign interference in political processes. This would help to eliminate ambiguity and restore community trust. Such statements should be backed by robust public reporting mechanisms from relevant public service agencies.
  • Governments should require that all major social media and internet companies regularly report on how they detect and respond to cyber-enabled foreign interference. Such reports, which should include positions on political advertising and further transparency on how algorithms amplify and suppress content, would be extremely useful in informing public discourse and also in shaping policy recommendations.

Facilitate cybersecurity training to limit the effect of cyber-enabled foreign interference

  • Cybersecurity, cyber hygiene and disinformation training sessions and briefings should be provided regularly for all politicians, political parties, campaign staff and electoral commission staff to reduce the possibility of a successful cyber operation, such as a phishing attack, that can be exploited by foreign state actors.87 This could include both technical guides and induction guides for new staff, focused on detecting phishing emails and responding to DoS attacks.

Establish clear and context-specific reporting guidelines to minimise the effect of online information operations

  • As possible targets of online information operations, researchers and reporters covering elections and referendums should adopt ‘responsible’ reporting guidelines to minimise the effect of online information operations and ensure that they don’t act as conduits.88 The guidelines should highlight the importance of context when covering possible strategic disclosures, social media manipulation and disinformation campaigns.89 Stanford University’s Cyber Policy Center has developed a set of guidelines that provide a useful reference point for reporters and researchers covering elections and referendums.90

3. Detect

Improve cyber-enabled foreign interference detection capabilities

  • The computer systems of parliaments, governments and electoral agencies should be upgraded and regularly tested for vulnerabilities, particularly in the lead-up to elections and referendums.
  • Governments and the private sector must invest more in the detection of interference activities, including by funding data-driven investigative journalism and research institutes, so that key local and regional civil society groups can build capability that stimulates and informs public discourse and policymaking.
  • Governments and the private sector must invest in long-term research into how emerging technologies, such as ‘deep fake’ technologies,91 could be exploited by those engaging in foreign interference. Such research would also assist those involved in detecting and deterring that activity.

4. Respond

Assign a counter-foreign-interference taskforce to lead a whole-of-government approach

  • Global online platforms must take responsibility for enforcement actions against actors attempting to manipulate their online audiences. Their security teams should work closely with governments and civil society groups to ensure that there’s a shared understanding of the threat actors and their tactics in order to create an effectively calibrated and collaborative security posture.
  • Governments should look to build counter-foreign-interference taskforces that would help to coordinate national efforts to deal with many of the challenges discussed in this report. Australia’s National Counter Foreign Interference Coordinator and the US’s Foreign Influence Task Force provide different templates that could prove useful. Such taskforces, involving policy, electoral, intelligence and law enforcement agencies, should engage globally and will need to regularly engage with industry and civil society. They should also carry out formal investigations into major electoral interference activities and publish the findings of such investigations in a timely and transparent manner.

Signal a willingness to impose costs on adversaries

  • As this research demonstrates that a small number of foreign state actors persistently carry out cyber-enabled election interference, governments should establish clear prevention and deterrence postures based on their most likely adversaries. For example, pre-emptive legislation that automatically imposes sanctions or other punishments if interference is detected has been proposed in the US Senate.92
  • Democratic governments should work more closely together to form coalitions that develop a collective and publicly defined deterrence posture. Clearly communicated costs could change the aggressor’s cost–benefit calculus.

Download full report

Readers are urged to download the full report to access the appendix and citations.


Acknowledgements

The authors would like to thank Danielle Cave, Dr Samantha Hoffman, Tom Uren and Dr Jacob Wallis for all of their work on this project. We would also like to thank Michael Shoebridge, anonymous peer reviewers, and external peer reviewers Katherine Mansted, Alicia Wanless and Dr Jacob Shapiro for their invaluable feedback on drafts of this report.

In 2019, ASPI’s International Cyber Policy Centre was awarded a US$100,000 research grant from Twitter, which was used towards this project. The work of ASPI ICPC would not be possible without the support of our partners and sponsors across governments, industry and civil society.

What is ASPI?

The Australian Strategic Policy Institute was formed in 2001 as an independent, non‑partisan think tank. Its core aim is to provide the Australian Government with fresh ideas on Australia’s defence, security and strategic policy choices. ASPI is responsible for informing the public on a range of strategic issues, generating new thinking for government and harnessing strategic thinking internationally. ASPI’s sources of funding are identified in our Annual Report, online at www.aspi.org.au and in the acknowledgements section of individual publications. ASPI remains independent in the content of the research and in all editorial judgements.

ASPI International Cyber Policy Centre

ASPI’s International Cyber Policy Centre (ICPC) is a leading voice in global debates on cyber, emerging and critical technologies, and on issues related to information operations and foreign interference, with a focus on the impact these issues have on broader strategic policy. The centre has a growing mixture of expertise and skills, with teams of researchers who concentrate on policy, technical analysis, information operations and disinformation, critical and emerging technologies, cyber capacity building, satellite analysis, surveillance and China-related issues.

The ICPC informs public debate in the Indo-Pacific region and supports public policy development by producing original, empirical, data-driven research. The ICPC enriches regional debates by collaborating with research institutes from around the world and by bringing leading global experts to Australia, including through fellowships. To develop capability in Australia and across the Indo-Pacific region, the ICPC has a capacity building team that conducts workshops, training programs and large-scale exercises for the public and private sectors.

We would like to thank all of those who support and contribute to the ICPC with their time, intellect and passion for the topics we work on. If you would like to support the work of the centre please contact: icpc@aspi.org.au

Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional.

© The Australian Strategic Policy Institute Limited 2020

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers. Notwithstanding the above, educational institutions (including schools, independent colleges, universities and TAFEs) are granted permission to make copies of copyrighted works strictly for educational purposes without explicit permission from ASPI and free of charge.

First published October 2020.

ISSN 2209-9689 (online),
ISSN 2209-9670 (print)
Cover image: Produced by Rebecca Hendin, online.

Funding for this report was provided by Twitter.

  1. Fergus Hanson, Sarah O’Connor, Mali Walker, Luke Courtois, Hacking democracies: cataloguing cyber-enabled attacks on elections, ASPI, Canberra, 17 May 2019, online. ↩︎
  2. Katherine Mansted, ‘Engaging the public to counter foreign interference’, The Strategist, 9 December 2019, online. ↩︎
  3. Erik Brattberg, Tim Maurer, Russian election interference: Europe’s counter to fake news and cyber attacks, Carnegie Endowment for International Peace, May 2018, online. ↩︎
  4. Laura Rosenberger, ‘Making cyberspace safe for democracy: the new landscape of information competition’, Foreign Affairs, May/June 2020, online. ↩︎
  5. For a comprehensive overview of foreign interference in elections, see David Shimer, Rigged: America, Russia, and one hundred years of covert electoral interference, Knopf Publishing Group, 2020; Casey Michel, ‘Russia’s long and mostly unsuccessful history of election interference’, Politico, 26 October 2019, online. ↩︎
  6. David M Howard, ‘Can democracy withstand the cyber age: 1984 in the 21st century’, Hastings Law Journal, 2018, 69:1365. ↩︎
  7. Philip Ewing, ‘In “Rigged,” a comprehensive account of decades of election interference’, NPR, 9 June 2020, online. ↩︎
  8. Eric Geller, ‘Some states have embraced online voting. It’s a huge risk’, Politico, 8 June 2020, online. For a comprehensive discussion on electronic voting, see NRC, Asking the right questions about electronic voting. ↩︎
  9. CSE, Cyber threats to Canada’s democratic process. ↩︎
  10. Samantha Bradshaw, Philip N Howard, The global disinformation order: 2019 global inventory of organised social media manipulation, Computational Propaganda Research Project, Oxford Internet Institute, 2019, online. ↩︎
  11. National Research Council (NRC), ‘Public confidence in elections’, Asking the right questions about electronic voting, Computer Science and Telecommunications Board, National Academies Press, Washington DC, 2006, online. ↩︎
  12. Communications Security Establishment (CSE), Cyber threats to Canada’s democratic process, Canada, 7 June 2017, online. ↩︎
  13. Elizabeth Dwoskin, Craig Timberg, ‘Facebook takes down Russian operation that recruited U.S. journalists, amid rising concerns about election misinformation’, Washington Post, 1 September 2020, online. ↩︎
  14. See Alicia Wanless and Laura Walters, How Journalists Become an Unwitting Cog in the Influence Machine, Carnegie Endowment for International Peace, online, 1. ↩︎

Covid-19 Disinformation & Social Media Manipulation

A range of actors are manipulating the information environment to exploit the Covid-19 crisis for strategic gain. ASPI’s International Cyber Policy Centre is tracking many of these state and non-state actors online, and will occasionally publish investigative, data-driven reporting focused on their use of disinformation, propaganda, extremist narratives and conspiracy theories.

The bulk of ASPI’s data analysis uses our in-house Influence Tracker tool – a machine learning and data analytics capability that draws out insights from multi-language social media datasets. The tool can ingest data in multiple languages and auto-translate it, producing insights on topics, sentiment, shared content, influential accounts, metrics of impact and posting patterns.
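ASPI hasn’t published the Influence Tracker’s internals, so the following is only a generic, illustrative sketch of the kind of pipeline that description implies: translate multi-language posts to English, then aggregate simple signals. The stand-in translation and sentiment functions below are placeholders for the machine-learning components and are not ASPI code.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    lang: str  # in a real pipeline, language would be auto-detected

def translate(text: str, lang: str) -> str:
    # Stand-in: a real tool would call a machine-translation model.
    return text if lang == "en" else f"[translated from {lang}] {text}"

def sentiment(text: str) -> int:
    # Crude lexicon stand-in for a trained sentiment model.
    positive, negative = {"good", "great", "support"}, {"bad", "corrupt", "failure"}
    words = set(text.lower().split())
    return len(words & positive) - len(words & negative)

def summarise(posts):
    """Toy aggregation: most active accounts, net sentiment, top words."""
    english = [translate(p.text, p.lang) for p in posts]
    return {
        "most_active": Counter(p.author for p in posts).most_common(3),
        "net_sentiment": sum(sentiment(t) for t in english),
        "top_words": Counter(w for t in english for w in t.lower().split()).most_common(5),
    }

posts = [Post("acct_a", "great support for the response", "en"),
         Post("acct_b", "la respuesta fue un fracaso", "es")]
print(summarise(posts))
```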

The reports are listed in reverse chronological order:

#10: Attempted influence in disguise

This report builds on a Twitter network take-down announced on 8 October 2020 and attributed by Twitter to an Iranian state-linked information operation. Just over 100 accounts were suspended for violations of Twitter’s platform manipulation policies. This case study provides an overview of how to extrapolate from Twitter’s take-down dataset to identify persistent accounts on the periphery of the network, and offers observations on the operating mechanisms and impact of the cluster of accounts, characterising their traits as activist, media and hobbyist personas. The purpose of the case study is to provide a guide on how to use transparency datasets as a means of identifying ongoing inauthentic activity; a simplified illustration of the approach follows.
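As a generic, hypothetical illustration of the core extrapolation step (ranking live accounts by how heavily they interacted with the suspended set), a first pass might look like the sketch below. The data and field names are invented, and any candidates it surfaces would still need manual review of posting patterns, creation dates and content reuse before being assessed as inauthentic.

```python
from collections import Counter

# From a transparency dataset: accounts suspended in the take-down.
suspended = {"op_account_1", "op_account_2"}

# Observed (actor, target) interactions, e.g. retweets or replies.
interactions = [
    ("periph_a", "op_account_1"),
    ("periph_a", "op_account_2"),
    ("bystander", "op_account_1"),
]

# Score accounts outside the suspended set by interactions with it.
scores = Counter(actor for actor, target in interactions
                 if target in suspended and actor not in suspended)

# Repeated engagement marks an account as a candidate for investigation.
candidates = [acct for acct, n in scores.most_common() if n >= 2]
print(candidates)  # ['periph_a']
```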

#9: Covid-19 and the reach of pro-Kremlin messaging

This research investigation examines Russia’s efforts to manipulate the information environment during the coronavirus crisis. It leverages data from the European External Action Service’s East StratCom Task Force, which, through its EUvsDisinfo project, tracks pro-Kremlin messages spreading in the EU and Eastern Partnership countries. Using this open-source repository of pro-Kremlin disinformation, in combination with OSINT investigative techniques that track links between online entities, we analyse the narratives being seeded about COVID-19 and map the social media accounts spreading those messages.

We found that the key subjects of the Kremlin’s messaging focused on the EU, NATO, Bill Gates, George Soros, the World Health Organization (WHO), the US and Ukraine. Narratives included well-trodden conspiracies about the source of the coronavirus, the development and testing of a potential vaccine, the impact on the EU’s institutions, the EU’s slow response to the virus and Ukraine’s new president. We also found that Facebook groups were a powerful hub for the spread of some of those messages.

27 Oct 2020

#8: Viral videos: Covid-19, China and inauthentic influence on Facebook

For the latest report in our series on Covid-19 disinformation, we’ve investigated ongoing inauthentic activity on Facebook and YouTube. This activity uses both English and Chinese language content to present narratives that support the political objectives of the Chinese Communist Party (CCP). These narratives span a range of topics, including assertions of corruption and incompetence in the Trump administration, the US Government’s decision to ban TikTok, the George Floyd and Black Lives Matter protests, and the ongoing tensions in the US–China relationship. A major theme, and the focus of this report, is criticism of how the US broadly, and the Trump administration in particular, are handling the Covid-19 crisis on both the domestic and the global levels.

29 Sept 2020

#7: Possible inauthentic activity promoting the Epoch Times and Truth Media targets Australians on Facebook

This ASPI ICPC report investigates a Facebook page that appears to be using coordinated, inauthentic tactics to target Australian users with content linked to The Epoch Times and other media groups. This includes running paid advertisements, as well as systematically seeding content into Australian Facebook groups aimed at minority communities, hobbyists and conspiracy theorists. Inauthentic and covert efforts to shape political opinions have no place in an open democratic society.

This report has been edited to delete references to a Facebook page entitled ‘May the Truth Be With You’. ASPI advises that, to the best of the Institute’s knowledge, the Facebook page has no connection with the other entities mentioned in this edited report.

Revised: 10 Dec 2021

#6: Pro-Russian vaccine politics drives new disinformation narratives

This latest report in our series on COVID-19 disinformation and social media manipulation investigates vaccine disinformation that emerged from eastern Ukraine's pro-Russian media ecosystem the day after Russia announced plans to mass-produce its own vaccine.

We identify how a false narrative about a vaccination trial that never happened was seeded into the information environment by a pro-Russian militia media outlet, laundered through pro-Russian English-language alternative news websites, and spread through anti-vaccination social media groups in multiple languages, ultimately becoming completely decontextualised from its origins.

The report provides a case study of how these narratives ripple across international social media networks, including into a prominent Australian anti-vaccination Facebook group.

The successful transfer of this completely fictional narrative reflects a broader shift across the disinformation space. As international focus moves from the initial response to the pandemic towards the race for a vaccine, with all of the complex geopolitical interests that entails, political disinformation is moving on from the origins of the virus to vaccine politics.

24 Aug 2020

#5 Automating influence operations on Covid-19: Chinese-speaking actors targeting US audiences

Automating influence on Covid-19 looks at how Chinese-speaking actors are attempting to target US-based audiences on Facebook and Twitter across key narratives, including amplifying criticisms of the US's handling of Covid-19, emphasising racial divisions, and highlighting political and personal scandals linked to President Donald Trump.

This new report investigates a campaign of cross-platform inauthentic activity that relies on a high degree of automation and is broadly aligned with the political goal of the People's Republic of China (PRC) to denigrate the standing of the US. The campaign appears to be targeted primarily at Western and US-based audiences, artificially boosting legitimate media and social media content in order to amplify divisive or negative narratives about the US.

04 Aug 2020

#4 ID2020, Bill Gates and the Mark of the Beast: how Covid-19 catalyses existing online conspiracy movements

Against the backdrop of the global Covid-19 pandemic, billionaire philanthropist Bill Gates has become the subject of a diverse and rapidly expanding universe of conspiracy theories. This report takes a close look at a particular variant of the Gates conspiracy theories, which is referred to here as the ID2020 conspiracy (named after the non-profit ID2020 Alliance, which the conspiracy theorists claim has a role in the narrative), as a case study for examining the dynamics of online conspiracy theories on Covid-19. Like many conspiracy theories, that narrative builds on legitimate concerns, in this case about privacy and surveillance in the context of digital identity systems, and distorts them in extreme and unfounded ways. Among the many conspiracy theories now surrounding Gates, this one is particularly worthy of attention because it highlights the way emergent events catalyse existing online conspiracy substrates. In times of crisis, these digital structures—the online communities, the content, the shaping of recommendation algorithms—serve to channel anxious, uncertain individuals towards conspiratorial beliefs. This report focuses primarily on the role and use of those digital structures in proliferating the ID2020 conspiracy.

25 June 2020

#3 Retweeting through the Great Firewall: A persistent and undeterred threat actor

This report analyses a persistent, large-scale influence campaign linked to Chinese state actors on Twitter and Facebook.

This activity largely targeted Chinese-speaking audiences outside the Chinese mainland (where Twitter is blocked) with the intention of influencing perceptions of key issues, including the Hong Kong protests, exiled Chinese billionaire Guo Wengui and, to a lesser extent, Covid-19 and Taiwan. Extrapolating from the takedown dataset, to which Twitter gave us advance access, we have identified that this operation continues and has pivoted to try to weaponise the US Government's response to current domestic protests and create the perception of a moral equivalence with the suppression of protests in Hong Kong.

11 June 2020

#2. Covid-19 attracts patriotic troll campaigns in support of China’s geopolitical interests

This new research highlights the growing significance and impact of Chinese non-state actors on Western social media platforms. Across March and April 2020, this loosely coordinated pro-China trolling campaign on Twitter:

  • Harassed and mimicked Western media outlets
  • Impersonated Taiwanese users in an effort to undermine Taiwan's position with the World Health Organization (WHO)
  • Spread false information about the Covid-19 outbreak
  • Joined pre-existing inauthentic social media campaigns

23 April 2020

#1. Covid-19 disinformation and social media manipulation trends

Includes case studies on:

  • Chinese state-sponsored messaging on Twitter
  • Coordinated anti-Taiwan trolling: WHO & #saysrytoTedros
  • Russian Covid-19 disinformation in Africa

8-15 April 2020

Snapshot of a shadow war

The rapid escalation of the long-running conflict between Azerbaijan and Armenia in late September 2020 has been shadowed by a battle across social media for control of the international narrative about the conflict. On Twitter, large numbers of accounts supporting both sides have been wading in on politicised hashtags linked to the conflict. Our findings indicate large-scale coordinated activity. While much of this behaviour is likely to be authentic, our analysis has also found a significant amount of suspicious and potentially inauthentic behaviour.

The goal of this research piece is to observe and document some of the early dynamics of the information battle playing out in parallel to the conflict on the ground and create a basis for further, more comprehensive research. This report is in no way intended to undermine the legitimacy of authentic social media conversations and debate taking place on all sides of the conflict.

Retweeting through the Great Firewall

A persistent and undeterred threat actor

Key takeaways

This report analyses a persistent, large-scale influence campaign linked to Chinese state actors on Twitter and Facebook.

This activity largely targeted Chinese-speaking audiences outside the Chinese mainland (where Twitter is blocked) with the intention of influencing perceptions of key issues, including the Hong Kong protests, exiled Chinese billionaire Guo Wengui and, to a lesser extent, Covid-19 and Taiwan.

Extrapolating from the takedown dataset, to which Twitter gave us advance access, we have identified that this operation continues and has pivoted to try to weaponise the US Government's response to current domestic protests and create the perception of a moral equivalence with the suppression of protests in Hong Kong.

Figure 1: Normalised topic distribution over time in the Twitter dataset

Our analysis covers a dataset of 23,750 Twitter accounts and 348,608 tweets posted between January 2018 and 17 April 2020 (Figure 1). Twitter has attributed this dataset to Chinese state-linked actors and has recently taken the accounts contained within it offline.

In addition to the Twitter dataset, we’ve also found dozens of Facebook accounts that we have high confidence form part of the same state-linked information operation. We’ve also independently discovered—and verified through Twitter—additional Twitter accounts that also form a part of this operation. This activity appears to be a continuation of the campaign targeting the Hong Kong protests, which ASPI’s International Cyber Policy Centre covered in the September 2019 report Tweeting through the Great Firewall and which had begun targeting critics of the Chinese regime in April 2017.

Analysing the dataset as a whole, we found that the posting patterns of tweets mapped cleanly to working hours in Beijing time (despite the fact that Twitter is blocked in mainland China). Posts spiked through the 8 a.m.–5 p.m. working day, Monday to Friday, and dropped off at weekends. Such a regimented posting pattern clearly suggests coordination and inauthenticity.
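That check is straightforward to reproduce. Below is a minimal sketch in R, assuming a hypothetical 'tweets' data frame with a parseable UTC 'tweet_time' column; the column name is an assumption modelled on the tweet records quoted later in this report.

```r
library(dplyr)
library(lubridate)
library(ggplot2)

tweets %>%
  mutate(beijing = with_tz(ymd_hms(tweet_time), tzone = "Asia/Shanghai"),
         hour    = hour(beijing),
         weekday = wday(beijing, label = TRUE, week_start = 1)) %>%
  count(weekday, hour) %>%
  # tweet volume by weekday and hour makes an 8 a.m. to 5 p.m.,
  # Monday-to-Friday posting rhythm visible at a glance
  ggplot(aes(hour, n)) +
  geom_col() +
  facet_wrap(~ weekday, nrow = 1)
```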

The main vector of dissemination was through images, many of which contained embedded Chinese-language text. The linguistic traits within the dataset suggest that audiences in Hong Kong were a primary target for this campaign, with the broader Chinese diaspora as a secondary audience.

There is little effort to cultivate rich, detailed personas that might be used to influence targeted networks; in fact, 78.5% of the accounts in Twitter’s takedown dataset have no followers at all.

There’s evidence that aged accounts—potentially purchased, hacked or stolen—are also a feature of the campaign. Here again, there’s little effort to disguise the incongruous nature of accounts (from Bangladesh, for example) posting propaganda inspired by the Chinese Communist Party (CCP). While the takedown dataset contains many new and low-follower accounts, the operation used the aged accounts as the mechanism by which the campaign might gain traction in high-follower networks.

The operation has shown remarkable persistence, staying online in various forms since 2017, and its tenacity has allowed for shifts in tactics and narrative focus as emerging events—including the Covid-19 pandemic and the US protests in May and June 2020—were incorporated into pro-Chinese government narratives.

Based on the data in the takedown dataset, while these efforts are sufficiently technically sophisticated to persist, they currently lack the linguistic and cultural refinement to drive engagement through high-follower networks, and they have so far had relatively low impact on the platform. The operation’s targeting of higher value aged accounts as vehicles for amplifying reach, potentially through the influence-for-hire marketplace, is likely to have been a strategy to obfuscate the campaign’s state sponsorship. This suggests that the operators lacked the confidence, capability and credibility to develop high-value personas on the platform. This mode of operation highlights the emerging nexus between state-linked propaganda and the internet’s public relations shadow economy, which offers state actors opportunities to outsource the propagation of disinformation.

Similar studies support our report’s findings. In addition to our own previous work Tweeting through the Great Firewall, Graphika has undertaken two studies of a persistent campaign targeting the Hong Kong protests, Guo Wengui and other critics of the Chinese Government. Bellingcat has also previously reported on networks targeting Guo Wengui and the Hong Kong protest movement.

Google’s Threat Analysis Group noted that it had removed more than a thousand YouTube channels that were behaving in a coordinated manner and sharing content that aligned with Graphika’s findings.

This large-scale pivot to Western platforms is relatively new, and we should expect continued evolution and improvement, given the enormous resourcing the Chinese party-state can bring to bear in aligning state messaging across its diplomacy, state media and covert influence operations. The Chinese Government has deployed a range of tactics in its attempts to shape the information environment to its advantage: coordinating diplomatic and state media messaging; using Western social media platforms to seed disinformation into international media coverage; immediately mirroring and rebutting Western media coverage through Chinese state media; co-opting fringe conspiracy media to target networks vulnerable to manipulation; and using coordinated inauthentic networks and undeclared political ads to actively manipulate social media audiences.

The disruption caused by Covid-19 has created a permissive environment for the CCP to experiment with overt manipulation of global social media audiences on Western platforms. There’s much to suggest that the CCP’s propaganda apparatus has been watching the tactics and impact of Russian disinformation.

The party-state’s online experiments will allow its propaganda apparatus to recalibrate efforts to influence audiences on Western platforms with growing precision. When combined with data acquisition, investments in artificial intelligence and alternative social media platforms, there is potential for the normalisation of a very different information environment from the open internet favoured by democratic societies.

This report is broken into three sections, which follow on from this brief explanation of the dataset, the context of Chinese party-state influence campaigns and the methodology. The first major section investigates the tactics, techniques and operational traits of the campaign. The second section analyses the narratives and nuances included in the campaign messaging. The third section is the appendix, which will allow interested readers to do a deep dive into the data.

ASPI’s International Cyber Policy Centre received the dataset from Twitter on 2 June and produced this report in 10 days.

The Chinese party-state and influence campaigns

The Chinese party-state has demonstrated its willingness to deploy disinformation and influence operations to achieve strategic goals. For example, the CCP has mobilised a long-running campaign of political warfare against Taiwan, incorporating the seeding of disinformation on digital platforms. And our September 2019 report—Tweeting through the Great Firewall—investigated state-linked information campaigns on Western social media platforms targeting the Hong Kong protests, Chinese dissidents and critics of the CCP regime.

Since Tweeting through the Great Firewall, we have observed a significant evolution in the CCP’s efforts to shape the information environment to its advantage, particularly through the manipulation of social media. Through 2018 and 2019 we observed spikes in the creation of Twitter accounts by Chinese Ministry of Foreign Affairs spokespeople, diplomats, embassies and state media.

To deflect attention from its early mishandling of a health and economic crisis that has now gone global, the CCP has unashamedly launched waves of disinformation and influence operations intermingled with diplomatic messaging. There are prominent and consistent themes across the messaging of People’s Republic of China (PRC) diplomats and state media: that the CCP’s model of social governance is one that can successfully manage crises, that the PRC’s economy is rapidly recovering from the period of lockdown, and that the PRC is a generous global citizen that can rapidly mobilise medical support and guide the world through the pandemic.

The trends in the PRC’s coordinated diplomatic and state-media messaging are articulated as a coherent strategy by the Chinese Academy of Social Sciences, which is a prominent PRC-based think tank. The academy has recommended a range of responses to Western, particularly US-based, media criticism of the CCP’s handling of the pandemic, which it suggests is designed to contain the PRC’s global relationships. The think tank has offered several strategies that are being operationalised by diplomats and state media:

  • the coordination of externally facing communication, including 24/7 foreign media monitoring and rapid response
  • the promotion of diverse sources, noting that international audiences are inclined to accept independent media
  • support for Chinese social media platforms such as Weibo, WeChat and Douyin
  • enhanced forms of communication targeted to specific audiences
  • the cultivation of foreign talent.

The party-state appears to be allowing for experimentation across the apparatus of government in how to promote the CCP’s view of its place in the world. This study suggests that covert influence operations on Western social media platforms are likely to be an ongoing element of that project.

Methodology

This analysis used a mixed-methods approach combining quantitative analysis of bulk Twitter data with qualitative analysis of tweet content. This was combined with independently identified Facebook accounts, pages and activity featuring identical or highly similar content to that found on Twitter. We assess that this Facebook activity, while not definitively attributed by Facebook itself, is highly likely to be part of the same operation.

The dataset for quantitative analysis comprised the tweets from a subset of accounts identified by Twitter as being interlinked and associated through a combination of technical signals to which Twitter has access. Accounts that appeared to be repurposed from originally legitimate users are not included in this dataset, which may skew some analyses.

This dataset consisted of:

  • account information for 23,750 accounts that Twitter suspended from its service
  • 348,608 tweets from January 2018 to 17 April 2020
  • 60,486 pieces of associated media, consisting of 55,750 images and 4,736 videos.

Many of the tweets contained images with embedded Chinese text. These images were processed by Addaxis, ASPI’s technology partner in the application of artificial intelligence and cloud computing to cyber policy challenges, using a combination of internal machine-learning capabilities and Google APIs, before further analysis in R. The R statistics package was used for quantitative analysis, which informed social network analysis and qualitative content analysis.

Research limitations: ASPI does not have access to the relevant data to independently verify that these accounts are linked to the Chinese Government. Twitter has access to a variety of signals that are not available to outside researchers, and this research proceeded on the assumption that Twitter’s attribution is correct. It is also important to note that Twitter hasn’t released the methodology by which this dataset was selected, and the dataset doesn’t represent a complete picture of Chinese state-linked information operations on Twitter.

Download full report

Readers are warmly encouraged to download the full report (PDF, 62 pages) to access the full and detailed analysis, notes and references. 


Acknowledgements

ASPI would like to thank Twitter for advanced access to the takedown dataset that formed a significant component of this investigation. The authors would also like to thank ASPI colleagues who worked on this report.

What is ASPI?

The Australian Strategic Policy Institute was formed in 2001 as an independent, non‑partisan think tank. Its core aim is to provide the Australian Government with fresh ideas on Australia’s defence, security and strategic policy choices. ASPI is responsible for informing the public on a range of strategic issues, generating new thinking for government and harnessing strategic thinking internationally.

ASPI International Cyber Policy Centre

ASPI’s International Cyber Policy Centre (ICPC) is a leading voice in global debates on cyber and emerging technologies and their impact on broader strategic policy. The ICPC informs public debate and supports sound public policy by producing original empirical research, bringing together researchers with diverse expertise, often working together in teams. To develop capability in Australia and our region, the ICPC has a capacity building team that conducts workshops, training programs and large-scale exercises both in Australia and overseas for both the public and private sectors. The ICPC enriches the national debate on cyber and strategic policy by running an international visits program that brings leading experts to Australia.

Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional.

© The Australian Strategic Policy Institute Limited 2020

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers. Notwithstanding the above, educational institutions (including schools, independent colleges, universities and TAFEs) are granted permission to make copies of copyrighted works strictly for educational purposes without explicit permission from ASPI and free of charge.

First published June 2020.

ISSN 2209-9689 (online)
ISSN 2209-9670 (print)

Tweeting through the Great Firewall

Preliminary Analysis of PRC-linked Information Operations on the Hong Kong Protests

Introduction

On 19 August 2019, Twitter released data on a network of accounts that it identified as being involved in an information operation directed against the protests in Hong Kong. After a tip-off from Twitter, Facebook also dismantled a smaller information network operating on its platform. This network has been identified as being linked to the Chinese government.

Researchers from the International Cyber Policy Centre (ICPC) at the Australian Strategic Policy Institute have conducted a preliminary analysis of the dataset. Our research indicates that the information operation targeted at the protests appears to have been a relatively small and hastily assembled operation rather than a sophisticated information campaign planned well in advance.

However, our research has also found that the accounts included in the information operation identified by Twitter were active in earlier information operations targeting political opponents of the Chinese government, including an exiled billionaire, a human rights lawyer, a bookseller and protestors in mainland China. The earliest of these operations dates back to April 2017.

This is significant because—if the attribution to state-backed actors made by Twitter is correct—it indicates that actors linked to the Chinese government may have been running covert information operations on Western social media platforms for at least two years. 

Methodology

This analysis used a mixed-methods approach combining quantitative analysis of bulk Twitter data with qualitative analysis of tweet content.

The dataset for quantitative analysis comprised the tweets and accounts identified by Twitter as being associated with a state-backed information operation targeting Hong Kong.

This dataset consisted of:

  • account information for the 940 accounts Twitter suspended from its service
    • The oldest account was created in December 2007, although half of the accounts were created after August 2017
  • 3.6 million tweets from these accounts, ranging from December 2007 to May 2019

The R statistics package was used for quantitative analysis, which informed phases of social network analysis (using Gephi) and qualitative content analysis.
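The hand-off from R to Gephi can be as simple as writing a weighted retweet edge list to CSV in the Source/Target/Weight form that Gephi's spreadsheet importer accepts. The sketch below assumes hypothetical 'userid' and 'retweeted_userid' columns; it is an illustration, not the exact workflow used for this report.

```r
library(dplyr)
library(readr)

# Build a weighted edge list (retweeter -> retweeted account) and write
# it out in the Source/Target/Weight form Gephi's CSV importer accepts.
edges <- tweets %>%
  filter(!is.na(retweeted_userid)) %>%
  count(Source = userid, Target = retweeted_userid, name = "Weight")

write_csv(edges, "retweet_edges.csv")
```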

Research limitations: ICPC does not have access to the relevant data to independently verify that these accounts are linked to the Chinese government; this research proceeds on the assumption that Twitter’s attribution is correct. It is also important to note that Twitter has not released the methodology by which this dataset was selected, and the dataset may not represent a complete picture of Chinese state-linked information operations on Twitter.

Information operation against Hong Kong protests

Indications of a hastily constructed campaign

Carefully crafted, long-running influence operations on social media typically display tight network clusters that delineate target audiences. We explored the retweet patterns across the Twitter take-down data from June 2019 – as the network was mobilising to target the Hong Kong protests – and did not find a network structure suggesting sophisticated coordination. Topics of interest to the PRC emerge in the dataset from mid-2017, but there is little attempt to target online communities with any degree of psychological sophistication.

There have been suggestions that Taiwanese social media was manipulated during recent gubernatorial elections by suspicious public relations contractors operating as proxies for the Chinese government. It is notable that the network targeting the Hong Kong protests was likewise not cultivated to influence targeted communities; it too acted like a marketing spam network. These accounts did not attempt to behave in ways that would have integrated them into – and positioned them to influence – online communities. This lack of coordination was reflected in the messaging: audiences were not steered into self-contained disinformation ecosystems external to Twitter, nor were hashtags used to build an audience and then drive the amplification of specific political positions.

As this network was mobilising against the Hong Kong protests, several nodes in the time-sliced retweet data (see Figure 1) were accounts promoting the sex industry – accounts that would have gained attention because of the nature of their content. These central nodes had not invested in cultivating engagement with target audiences (beyond their previous marketing function). They spammed retweets at accounts outside the network in attempts to get engagement, rather than working together to drive amplification of a consistent message.

Figure 1: Retweet network from June 2019, derived from Twitter’s take-down data, showing the significant presence of likely pornography-related accounts within the coordinated network that targeted the Hong Kong protests.

This was a blunt-force influence operation, using spam accounts to disseminate messaging and leveraging an influence-for-hire network. The predominant use of Chinese language suggests that the target audiences were Hong Kongers and the overseas diaspora.

This operation is in stark contrast to the efforts of Russia’s Internet Research Agency (IRA) to target US political discourse, particularly through 2015-2017.

The Russian effort displayed well-planned coordination. Analysis of IRA account data has shown that networks of influence activity cluster around identity or issue-based online communities. IRA accounts disseminated messaging that inflamed both sides of the debates around controversial issues in order to further the divide between protagonist communities. High-value and long-running personas cultivated influence within US political discourse. These accounts were retweeted by political figures, and quoted by media outlets.

The IRA sent four staff to the US to undertake ‘market research’ as the IRA geared up its election meddling campaign. The IRA campaign displayed clear understanding of audience segmentation, colloquial language, and the ways in which online communities framed their identities and political stances.

In contrast, this PRC-linked operation is clumsily re-purposed and reactive. Freedom of expression on China’s domestic internet is framed by a combination of top-down technocratic control managed by the Cyberspace Administration of China and devolved, crowdsourced content regulation by government entities, industry and Chinese netizens. Researchers have suggested that Chinese government efforts to shape sentiment on the domestic internet go beyond these approaches. One study estimated that the Chinese government pays for as many as 448 million inauthentic social media posts and comments a year. The aim is to distract the population from social mobilisation and collective forms of protest action. This approach to manipulating China’s domestic internet appears to be much less effective on Western social media platforms that are not bounded by state control.

Yet, the CCP continues to use blunt efforts to grow the reach, impact and influence of its narratives abroad. Elements of the party propaganda apparatus – including the foreign media wing of the United Front Work Department – have issued (as recently as 16 August) tenders for contracts to grow their international influence on Twitter, with specific targets for numbers of followers in particular countries.

In the longer term, China’s investments in AI may lift its capacity to target and manipulate international social media audiences. However, this operation lacks the sophistication of those deployed by other significant state proponents of cyber-enabled influence operations, particularly Iran and Russia, which have demonstrated the capacity to operate with some degree of subtlety across linguistic and cultural boundaries.

This was the quintessential authoritarian approach to influence – one-way floods of messaging directed primarily at Hong Kongers.

Use of repurposed spam accounts

Many of the accounts included in the Twitter dataset are repurposed spam or marketing accounts. Such accounts are readily and cheaply available for purchase from resellers, often for a few dollars or less. Accounts in the dataset have tweeted in a variety of languages including Indonesian, Arabic, English, Korean, Japanese and Russian, and on topics ranging from British football to Indonesian tech support, Korean boy bands and pornography.

This graph shows the languages used in tweets over time (Twitter did not automatically detect tweet language prior to 2013). The dataset includes accounts tweeting in a variety of languages over a long period of time; Chinese-language tweets appear more often after mid-2017.

This map shows the self-reported locations of the accounts suspended by Twitter, colour-coded by the language they tweeted in. These locations do not reliably indicate the true location of the account holder, and in this dataset there is a clear discrepancy between language and location. The self-reported locations are likely to reflect the former nature of the accounts as spam and marketing bots: they report locations in the developed markets where their target consumers live, in order to appear more credible, even if the true operators of the accounts are based somewhere else entirely.

Evidence of reselling is clearly present in the dataset: over 630 tweets contain phrases like ‘test new owner’, ‘test’ and ‘new own’. As an example, the account @SamanthxBerg tweeted in Indonesian on 2 October 2016, ‘lelang acc f/t 14k/135k via duit. minat? rep aja’ – announcing that the @SamanthxBerg account, which had 14,000 followers and followed 135,000 users, was up for auction for money. The next tweet, on 6 October 2016, reads ‘i just become the new owner, wanna be my friend?.’

  • tweetid: 782380635990200320
  • Time stamp: 2016-10-02 00:44:00 UTC
  • userid: 769790067183190016
  • User display name: 阿丽木琴
  • User screen name: SamanthxBerg
  • Tweet text: PLAYMFS: #ptl lelang acc f/t 14k/135k via duit. minat? rep aja

The use of these kinds of accounts suggests that the operators behind the information operation did not have time to establish the kinds of credible digital assets used in the Russian campaign targeting the 2016 US elections. Building that kind of ‘influence infrastructure’ takes time, and the situation in Hong Kong was evolving too rapidly, so it appears that the actors behind this campaign effectively took a shortcut by buying established accounts with many followers.
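Flagging likely resold accounts of this kind reduces to simple pattern matching over tweet text. The following sketch assumes the same hypothetical 'tweets' data frame used above; the phrase list is illustrative and would need tuning ('test' on its own, for instance, is too noisy to match blindly).

```r
library(dplyr)
library(stringr)

# Phrases associated with account auctions and ownership handovers,
# drawn from the examples above ('lelang acc' is Indonesian for an
# account auction).
resale_pattern <- regex("test new owner|new own|lelang acc",
                        ignore_case = TRUE)

resold_accounts <- tweets %>%
  filter(str_detect(tweet_text, resale_pattern)) %>%
  distinct(userid)
```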

 

Timeline of activity

The content directly targeting the Hong Kong protests makes up only a relatively small fraction of the total dataset released by Twitter, comprising just 112 accounts and approximately 1,600 tweets – the vast majority in Chinese, with a much smaller number in English.

Content relevant to the current crisis in Hong Kong appears to have begun on 14 April 2019, when the account @HKpoliticalnew (profile description: ‘Love Hong Kong, love China. We should pay attention to current policies and people’s livelihood. 愛港、愛國,關注時政、民生。’) tweeted about the planned amendments to the extradition bill. Tweets in the released dataset mentioning Hong Kong continued at a pace of a few every few days, steadily increasing over April and May, until a significant spike on 14 June, the day of a huge protest in which over a million Hong Kongers (1 in 7) marched against the extradition bill.

Hong Kong related tweets per day from 14 April 2019 to 25 July 2019.

Thereafter, spikes in activity correlate with significant developments in the protests. A major spike occurred on 1 July, the day when protestors stormed the Legislative Council building. This is also the start of the English-language tweets, presumably in response to the growing international interest in the Hong Kong protests. Relevant tweets then appear to have tapered off in this dataset, ending on 25 July.

It is worth noting that the tapering off in this dataset may not reflect a tapering off of the operation itself – it may instead reflect a move away from this hastily constructed information operation to more fully developed digital assets that have not been captured in this data.
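The daily counts behind this timeline can be approximated with a keyword filter. Again, this is a sketch under assumed column names, and the keyword pattern is illustrative rather than the one used for the report.

```r
library(dplyr)
library(lubridate)
library(stringr)
library(ggplot2)

hk_daily <- tweets %>%
  # match Chinese and English references to Hong Kong
  filter(str_detect(tweet_text, "香港|Hong ?Kong")) %>%
  mutate(day = as_date(ymd_hms(tweet_time))) %>%
  count(day)

ggplot(hk_daily, aes(day, n)) +
  geom_line()   # spikes should align with major protest dates
```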

Lack of targeted messaging and narratives

One of the features of well-planned information operations is the ability to subtly target specific audiences. By contrast, the information operation targeting the Hong Kong protests is relatively blunt.

Three main narratives emerge:

  • Condemnation of the protestors
  • Support for the Hong Kong police and ‘rule of law’
  • Conspiracy theories about Western involvement in the protests

Support for ‘rule of law’:

  • tweetid: 1139524030371733504
  • Time stamp: 2019-06-14 13:24:00 UTC
  • userid: r+QLQEgpn4eFuN1qhvccxtPRmBJk3+rfO3k9wmPZTQI=
  • User display name: r+QLQEgpn4eFuN1qhvccxtPRmBJk3+rfO3k9wmPZTQI=
  • User screen name: r+QLQEgpn4eFuN1qhvccxtPRmBJk3+rfO3k9wmPZTQI=
  • Tweet text: @uallaoeea 《逃犯条例》的修改,只会让香港的法制更加完备,毕竟法律是维护社会公平正义的基石。不能默认法律的漏洞用来让犯罪分子逃避法律制裁而不管。 – 14 June 2019

Translated: ‘The amendment to the Fugitive Offenders Ordinance will only make Hong Kong’s legal system more complete. After all, the law is the cornerstone for safeguarding fairness and justice in society. We can’t allow loopholes in the legal system to allow criminals to escape the arm of the law.’

Conspiracy theories:

  • tweetid: 1142349485906919424
  • Time stamp: 2019-06-22 08:31:00 UTC
  • Userid: 2156741893
  • User display name: 披荆斩棘
  • User screen name: saydullos1d
  • Tweet text: 香港特區警察總部受到包圍和攻擊, 黑衣人嘅真實身份係咩? 係受西方反華勢力指使,然後係背後操縱, 目的明確, 唆使他人參與包圍同遊行示威。把香港特區搞亂, 目的就係非法政治目的, 破環社會秩序。  – 22 June 2019

Translated: ‘Hong Kong SAR police headquarters were surrounded and attacked. Who were the people wearing black? They were acting under the direction of western anti-China forces. They’re manipulating things behind the scenes, with a clear purpose to instigate others to participate in the demonstration and the encirclement. They’re bringing chaos to Hong Kong SAR with an illegal political goal and disrupting the social order.’

[NB: This tweet was written in traditional Chinese characters and switches between Standard Chinese and Cantonese, suggesting that the author was a native Mandarin speaker but the target audience was Cantonese speakers in Hong Kong.]

  • tweetid: 1147398800786382848
  • Time stamp: 2019-07-06 06:56:00 UTC
  • Userid: 886933306599776257
  • User display name: lingmoms
  • User screen name: lingmoms
  • Tweet text: 無底線的自由,絕不是幸事;不講法治的民主,只能帶來禍亂。香港雖有不錯的家底,但經不起折騰,經不起內耗,惡意製造對立對抗,只會斷送香港前途。法治是香港的核心價值,嚴懲違法行為,是對法治最好的維護,認為太平山下應享太平。 – 6 July 2019

Translated: ‘Freedom without a bottom line is by no means a blessing; democracy without the rule of law can only bring disaster and chaos. Although Hong Kong has a good financial background, it can’t afford to vacillate. It can’t take all of this internal friction and maliciously created agitation, which will only ruin Hong Kong’s future. The rule of law is the core value of Hong Kong. Severe punishment for illegal acts is the best safeguard for the rule of law. Peace should be enjoyed at the foot of The Peak.’

[NB: This tweet is also written in Standard Chinese using traditional Chinese characters. The original text says ‘at the foot of Taiping mountain’, meaning Victoria Peak, which is more commonly referred to in Hong Kong as ‘The Peak’ (山頂). The use of ‘Taiping mountain’ instead of ‘The Peak’ is a deliberate pun, because Taiping means ‘great peace’.]

  • tweetid: 1152024329325957120
  • Time stamp: 2019-07-19 01:16:00 UTC
  • Userid: 58615166
  • User display name: 流金岁月
  • User screen name: Licuwangxiaoyua
  • Tweet text: #HongKong #HK #香港 #逃犯条例 #游行 古话说的好,听其言而观其行。看看那些反对派和港独分子,除了煽动上街游行、暴力冲击、袭警、扰乱香港社会秩序之外,就没做过什么实质性有利于香港发展的事情。反对派和港独孕育的“变态游行”这个怪胎,在暴力宣泄这条邪路上愈演愈烈。 – 19 July 2019

Translated: ‘#HongKong #HK #HongKong #FugitiveOffendersOrdinance #Protests The old Chinese saying put it well: ‘Judge a person by their words, as well as their actions’. Take a look at those in the opposition parties and the Hong Kong independence extremists. Apart from instigating street demonstrations, violent attacks, assaulting police officers and disturbing the social order in Hong Kong, they have done nothing that is actually conducive to the development of Hong Kong. This abnormal fetus of a “freak demonstration” that the opposition parties and Hong Kong independence people gave birth to is becoming more violent as it heads down this evil road.’

This approach of vilifying opponents and emphasising the need for law and order as a justification for authoritarian behaviour is consistent with the narrative approaches adopted in earlier information operations contained within the dataset (see below).

Earlier information operations against political opponents

Our research has uncovered evidence that the accounts identified by Twitter were also engaged in earlier information campaigns targeting opponents of the Chinese government.

It appears likely that these information operations were intended to influence the opinions of overseas Chinese diasporas, perhaps in an attempt to undermine critical coverage in Western media of issues of interest to the Chinese government. This is supported by a notice released by China News Service, a Chinese-language media company owned by the United Front Work Department that targets the Chinese diaspora, requesting tenders to expand its Twitter reach.

Campaign against Guo Wengui

The most significant and sustained of these earlier information operations targets Guo Wengui, an exiled Chinese businessman who now resides in the United States. The campaign directed at Guo is by far the most extensive campaign in the dataset and is significantly larger than the activity directed at the Hong Kong protests. This is the earliest activity the report authors have identified that aligns with PRC interests.

Graph showing activity in an information operation targeting Guo from 2017 to the end of the dataset in July 2019

Guo, also known as Miles Kwok, fled to the United States in 2017 following the arrest of one of his associates, former Ministry of State Security vice-minister Ma Jian. Guo has made highly public allegations of corruption against senior members of the Chinese government. The Chinese government in turn accused Guo of corruption, prompting an Interpol red notice for his arrest and return to China. Guo has become a vocal opponent of the Chinese government, despite himself being accused, in July 2019, of spying on its behalf.

Within the Twitter Hong Kong dataset, the online information campaign targeting Guo began on 24 April 2017, five days after the Interpol red notice was issued at the request of the Chinese government, and continued until the end of July 2019. Guo continues to be targeted on Twitter, although it is unclear if the PRC government is directly involved in the ongoing effort.

Tweets mentioning Guo Wengui per day, 23 April 2017 to 4 May 2017. Activity appears to take place during the working week (except Wednesdays), suggesting that this activity may be professional rather than authentic personal social media use.

In total, our research identified at least 38,732 tweets from 618 accounts in the dataset which directly targeted Guo. These tweets consist largely of vitriolic attacks on his character, ranging from highly personal criticisms to accusations of criminality, treachery against China and criticisms of his relationship with controversial US political figure Steve Bannon. 

  • tweetid: 1123765841919660032
  • Time stamp: 2019-05-02 01:47:00 UTC
  • Userid: 4752742142
  • User display name: 漂泊一生
  • User screen name: futuretopic
  • Tweet text: “郭文贵用钱收买班农,一方面想找靠山,一方面想继续为自己的骗子生涯增加点砝码,其实班农只是爱财并非真想和郭文贵做什么, 很快双方会发现对方都 是在欺骗自己,那时必将反目成 仇.” – 2 May 2019

Translated: “Guo Wengui used his money to buy Bannon. On the one hand, he needed his backing. On the other hand, he wanted to continue to add weight to his career as a swindler. In fact, Bannon just loves money and doesn’t really want to do anything with Guo Wengui. Soon both sides will find out that they’re both deceiving the other, and then they’ll turn into enemies.”

  • tweetid: 1153122108655861760
  • Time stamp: 2019-07-22 01:58:00 UTC
  • Userid: 1368044863
  • User display name: asdwyzkexa
  • User screen name: asdwyzkexa
  • Tweet text: ‘近日的郭文贵继续自己自欺欺人的把戏,疯狂的直播,疯狂的欺骗,疯狂鼓动煽风点火,疯狂的鼓吹自己所谓的民主,鼓吹自己的“爆料革命”。但其越是疯狂,越是难掩日暮西山之态,无论其吹的再如何天花乱坠,也终要为自己的过往负责,亲自画上句点.’ – 22 July 2019

Translated: ‘Lately, Guo Wengui has continued to use his cheap trick of deceiving himself and others with a crazy live-stream where he lied like crazy, incited and fanned the flames like crazy, and agitated for his so-called democracy like crazy—enthusiastically promoting his “Expose Revolution”. But the crazier he gets the harder it is to hide the fact that the sun has already set on him. It doesn’t matter how much he embellishes things; eventually, he will have to take responsibility and put an end to all of this himself.’

Spikes in activity in this campaign appear to correspond with significant developments in the timeline of Guo’s falling out with the Chinese government. For example, a spike around 23 April 2018 (see the chart below) correlates with the publication of a New York Times report exposing a complex plan to pull Guo back to China with the assistance of the United Arab Emirates and Trump fundraiser Elliott Broidy.

  • tweetid: 988088232075083776
  • Time stamp: 2018-04-22 16:12:00 UTC
  • Userid: 908589031944081408
  • User display name: 如果
  • User screen name: bagaudinzhigj
  • Tweet text: ‘‘谎言说一千遍仍是谎言,郭文贵纵有巧舌如簧的口才,也有录制性爱视频等污蔑他人的手段,更有给人设套录制音频威胁他人的前科,还有诈骗他人钱财的146项民事诉讼和19项刑事犯罪指控,但您在美国再卖力的表演也掩盖不了事实.’ – 22nd April 2018

Translated: ‘Even if a lie is repeated a thousand times, it’s still a lie. Guo Wengui is an eloquent smooth talker and uses sex tapes and other methods to slander people. He also has a criminal record for trying to threaten and set people up with recorded audio. He has 146 civil lawsuits and 19 criminal charges for swindling other people’s money. No matter how much effort you put in in the United States, you still can’t hide the truth.’

This tweet was repeated 41 times by this user from 7 November 2017 to 15 June 2018, at varying hours of the day but always at either 12 or 42 minutes past the hour, suggesting an automated or pre-scheduled process:

Volume of tweets mentioning Guo Wengui over time from 14 April 2018 to 29 April 2018.
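A simple way to surface this kind of scheduling artefact is to tabulate the minute-of-hour values for a single account, as sketched below using the userid from the record above (column names, as before, are assumptions).

```r
library(dplyr)
library(lubridate)

tweets %>%
  filter(userid == "908589031944081408") %>%   # the account quoted above
  mutate(minute = minute(ymd_hms(tweet_time))) %>%
  count(minute, sort = TRUE)
# Organic posting spreads across all 60 minute values; a distribution
# concentrated on one or two values (here 12 and 42) points to a
# scheduler posting at fixed offsets.
```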

Like the information operation targeting the Hong Kong protests, the campaign targeting Guo is primarily in Chinese. There are approximately 133 tweets in English, many of which are retweets or duplicates. On 5 November 2017, for example, 27 accounts in the dataset tweeted or retweeted: ‘#郭文贵 #RepatriateKwok、#Antiasylumabused、 sooner or later, your fake mask will be revealed.’

As the Hong Kong protests began to increase in size and significance, the information operations against Guo and the protests began to cross over, with some accounts directing tweets at both Guo and the protests.

  • tweetid: 1148407166920876032
  • Time stamp: 2019-07-09 01:42:00 UTC
  • Userid: 886933306599776257
  • User display name: lingmoms
  • User screen name: lingmoms
  • Tweet text: ‘唯恐天下不乱、企图颠覆香港的郭文贵不仅暗中支持香港占中分子搞暴力破坏,还公开支持暴力游行示威,难道这一小撮入狱的暴民就是文贵口中的“香港人”?’– 9 July 2019

Translated: ‘Guo Wengui, who fears only a world not in chaos and schemes to topple Hong Kong, is not only secretly supporting the violent and destructive Occupy extremists in Hong Kong, he’s also openly supporting violent demonstrations. Is this small mob of jailed criminals the “Hong Kong people” Guo Wengui keeps talking about?’

The dataset provided by Twitter ends in late July 2019, but all indications suggest that the information campaign targeting Guo will continue.
 

Campaign against Gui Minhai

Although the campaign targeting Guo Wengui is by far the most extensive in the dataset, other individuals have also been targeted.

One is Gui Minhai, a Chinese-born Swedish citizen. Gui is one of a number of Hong Kong-based publishers specialising in books about China’s political elite who disappeared under mysterious circumstances in 2015. It was later revealed that he had been taken into Chinese police custody. The official reason for his detention is his role in a fatal traffic accident in 2003 in which a schoolgirl was killed. Gui has been in and out of detention since 2015, and has made a number of televised confessions which many human rights advocates believe to have been forced by the Chinese government.

The information operation targeting Gui Minhai is relatively small, involving 193 accounts and at least 350 tweets. With some exceptions, the accounts used in the activity directed against Gui appear to be primarily ‘clean’ accounts created specifically for use in information operations, unlike the repurposed spam accounts used in the activity targeting Hong Kong.

The campaign ran for exactly one month, from 23 January to 23 February 2018. The precision of this timing is indicative of an organised campaign rather than authentic social media activity. The posting activity also largely corresponds with the working week, with breaks for weekends and holidays such as Chinese New Year.

A graph showing campaign activity in tweets per day. Weekends and public holidays are indicated by grey shading.

The campaign started on 23 January 2018, the day on which news broke that Chinese police had seized Gui off a Beijing-bound train while he was travelling with Swedish diplomats to their embassy. The campaign then continued at a slower pace across several weeks, ending on 23 February 2018. The tweets are entirely in Chinese language and emphasise Gui’s role in the traffic accident, painting him as a coward for attempting to leave the country and blaming Western media for interfering in the Chinese criminal justice process. Some also used Gui’s name as a hashtag.

  • tweetid: 956700365289807872
  • Time stamp: 2018-01-26 01:28:00 UTC
  • Userid: 930592773668945920
  • User display name: 赵祥
  • User screen name: JonesJones4780
  • Tweet text: ‘#桂民海 因为自己一次醉驾,让一个幸福家庭瞬间支离破碎,这令桂敏海痛悔不已。但是,他更担心自己真的因此入狱服刑。于是,在法院判决后不久、民事赔偿还未全部执行完的时候,桂敏海做出了另一个错误选择.’ – 26 January 2018

Translation: ‘#GuiMinhai deeply regrets that a happy family was shattered because of his drunk driving. However, he’s even more worried that he’s actually going to have to serve a prison sentence for it. Therefore, not long after the court’s decision and before any civil compensation was paid out, Gui Minhai made another bad choice’

  • tweetid: 956411588386279424
  • Time stamp: 2018-01-25 06:21:00 UTC
  • Userid: 1454274516
  • User display name: 熏君
  • User screen name: nkisomekusua
  • Tweet text: ‘#桂敏海 西方舆论力量仍想运用它们的话语霸权和双重标准,控制有关中国各种敏感信息的价值判断,延续对中国政治体制的舆论攻击,不过西方媒体这样的炒作都只是自导自演,自娱自乐.’ – 25 January 2018

Translation: ‘#GuiMinhai Western public opinion forces still want to use their discourse hegemony and double standards to control value judgments of all kinds of sensitive information about China and are continuing their public opinion attacks on the Chinese political system. However, this kind of hype in the Western media is just a performance they’re doing for themselves for their own personal entertainment.’

Others amplify the messages of Gui’s “confession”, claiming that he chose to hand himself in to police of his own volition due to his sense of guilt.

  • tweetid: 959276160038289408
  • Time stamp: 2018-02-02 04:03:00 UTC
  • Userid: 898580789952118784
  • User display name: 雪芙
  • User screen name: Ryy7v3wQkXnsGO8
  • Tweet text: ‘#桂敏海     父亲去世他不能奔丧这件事情,对桂敏海触动很大。他的母亲也80多岁了,已经是风烛残年,更让他百般思念、日夜煎熬,心里总是有一种很强烈的愧疚不安。所以他选择回国自首.’ – 2 February 2018

Translation: ‘The death of #GuiMinhai’s father and the fact he couldn’t return home for the funeral greatly affected him. His mother is also over 80 years old and is already in her twilight years, causing him to suffer day and night in every possible way. There was always a strong sense of guilt and uneasiness in his heart. So he chose to return to China and give himself up.’

It seems likely that this was a short-term campaign intended to influence the opinions of overseas Chinese who might see reports of Gui’s case in international media.
 

Campaign against Yu Wensheng

On precisely the same day the information operation against Gui started, another mini-campaign appears to have been launched. This one was aimed at human rights lawyer and prominent CCP critic Yu Wensheng.

Yu was arrested by Chinese police while walking his son to school on 19 January 2018. Only hours before, Yu had tweeted an open letter critical of the Chinese government, calling for open elections and constitutional reform. Shortly after, an apparently doctored video was released, raising questions about whether Chinese authorities were attempting to launch a smear campaign against Yu.

In this dataset, tweets targeting Yu Wensheng begin on 23 January 2018—the same day as the campaign against Gui Minhai—and continue through until 31 January (only four tweets take place after this, the latest on 10 February 2018). This was a small campaign, consisting of roughly 218 tweets from 80 accounts, many of which were the same content amplified across these accounts. As with Gui, Yu’s name was often used as a hashtag.

This graph shows campaign activity in tweets per day over time. Selected weekends are highlighted in grey.

The content shared by the campaign primarily condemned Yu for his alleged violence against the police, as shown in the doctored video.

  • tweetid: 956707469677359104
  • Time stamp: 2018-01-26 01:56:00
  • Userid: 0jFZp2sQdCYj8hUveyN4Llxe2UvFbQgTqxaymZihMM0
  • User display name: 0jFZp2sQdCYj8hUveyN4Llxe2UvFbQgTqxaymZihMM0
  • User screen name: 0jFZp2sQdCYj8hUveyN4Llxe2UvFbQgTqxaymZihMM0
  • Tweet text: ‘#余文生 1月19日,一余姓男子在接受公安机关依法传唤时暴力袭警致民警受伤,被公安机关依法以妨害公务罪刑事拘留。澎湃新闻从北京市公安机关获悉,涉案男子系在被警方强制传唤时,先后打伤、咬伤两名民警.’ – 26 January 2018.

Translation: ‘#YuWensheng On 19 January, a man surnamed Yu violently assaulted police while receiving a legal summons from the public security bureau, and was criminally detained for obstructing government administration. Beijing Public Security Bureau told The Paper [a Chinese publication] that the man involved in the case injured and bit two police officers while being forcibly summoned.’

As with the other campaigns, however, accusations of supposed Western influence were also notable: 

  • tweetid: 956742165845090304
  • Time stamp: 2018-01-26 04:14:00 UTC
  • Userid: 2l1eDka0eiClBUYoDXlwYaKcUaeelnz44aDM9OJRM
  • User display name: 2l1eDka0eiClBUYoDXlwYaKcUaeelnz44aDM9OJRM
  • User screen name: 2l1eDka0eiClBUYoDXlwYaKcUaeelnz44aDM9OJRM
  • Tweet text: ‘#余文生  在中国,有一批人自称维权律师,他们自诩通过行政及法律诉讼来维护公共利益、宪法及公民权利,并鼓吹西方民主、自由,攻击中国黑暗、专制、暴力执法、缺乏法治精神,视频主人公余文生律师也正是其中的一员.’ – 26 January 2018

Translation: ‘#YuWensheng In China, a group of people claim to be rights defenders. They claim to protect the public interest, constitution and civil rights through administrative and legal proceedings. They advocate for Western democracy and freedom and attack China’s darkness, autocracy, violent law enforcement and the lack of the rule of law. Lawyer Yu Wensheng, the star of the video, is also one of them.’

  • tweetid: 958222061972832256
  • Time stamp: 2018-01-30 06:15:00 UTC
  • Userid: Kmto+XqJ6hcowk0GvAGVEasNxHUW11beLphANrm3uhE=
  • User display name: Kmto+XqJ6hcowk0GvAGVEasNxHUW11beLphANrm3uhE=
  • User screen name: Kmto+XqJ6hcowk0GvAGVEasNxHUW11beLphANrm3uhE=
  • Tweet text: ‘#余文生 从余文生过去的活动中可以看到,他是国内所谓维权律师中的一员。余文生认为身后有国外媒体以及维权律师群体的支持,他就能成为英雄,自然有人为他摇旗呐喊。殊不知这次警察佩戴了执法记录仪,录下了事件的概况,并迅速公布于世,余的丑陋嘴脸在公众暴露无疑.’ – 30 January 2018.

Translation: ‘#YuWensheng It can be seen from Yu Wensheng’s past activities that he is one of the so-called rights lawyers in China. Yu Wensheng thinks that with the support of foreign media and rights lawyers, he can become a hero and that naturally, some people will cheer for him. Little did he know that this time the police were wearing a law enforcement recording device that they used to record an overview of the incident and quickly published it to the world. Yu’s ugly face was undoubtedly revealed to the public.’

As with the other campaigns seen in this dataset, it seems probable that the motivation behind this effort was to convince overseas Chinese to believe the Chinese Communist Party’s version of events, bolstering the doctored video of Yu and amplifying the smear campaign.

Campaign against protesting PLA veterans

Another information campaign aimed at influencing public opinion appears to have taken place in response to the arrest of ten Chinese army veterans over protests in the eastern province of Shandong.

The protests took place in October 2018, when around 300 people demonstrated in Pingdu city to demand unpaid retirement benefits for veterans of the People’s Liberation Army (PLA). The protests allegedly turned violent, leading to injuries and damage to police vehicles. On 9 December 2018, Chinese state media announced that ten veterans had been arrested for their role in the protest. China Digital Times, which publishes leaked censorship instructions, reported that state media had been instructed to adopt a “unified line” on the arrests.

On the same day, a small but structured information operation appears to have kicked into gear. Beginning at 8:43am Beijing time, accounts in the dataset began tweeting about the arrests. This continued with tweets spaced out every few minutes (a total of 683) until 3:52pm Beijing time. At 9:52pm Beijing time the tweets started up again, this time continuing until 11:49pm.

This graph shows campaign activity by hour of the day, adjusted to Beijing time (UTC+8).

Activity by the accounts in the dataset included original tweets as well as retweets of and replies to one another’s tweets, creating the appearance of authentic conversation. There was significant repetition within and across accounts, however, with many accounts tweeting a phrase and then repeating exactly the same phrase in replies to other accounts’ tweets.

The content of the tweets supported and reinforced the message being promoted by state media, in condemning the protestors as violent criminals and calling for them to be punished.

  • tweetid: 1071589476495835136
  • Time stamp: 2018-12-09 02:16:00 UTC
  • Userid: 53022020
  • User display name: sergentxgner
  • User screen name: sergentxgner
  • Tweet text: ‘中国是社会主义法治国家,绝对没有法外之地和法外之人,法律面前人人平等。自觉遵守国家法律、依法合理表达诉求、维护社会正常秩序,是每一位公民的义务和责任。对任何违法犯罪行为,公安机关都将坚决依法予以打击,为中国公安点赞,严厉惩治无视法律法规之人,全力保障人民群众生命、财产安全.’ – 9 December 2018

Translated: ‘China is a socialist country ruled by law. There’s no place and no people in it that are above the law. All people are equal before the law. It is the duty and responsibility of every citizen to consciously abide by the laws of the state, to express their demands reasonably and according to the law, and to maintain the normal social order. Public security organs will resolutely crack down on any illegal or criminal acts in accordance with the law. Like [this post] for China’s public security, severely punish those who ignore laws and regulations, and fully protect the lives and property of the people.’

  • tweetid: 1071614920846786560
  • Time stamp: 2018-12-09 03:58:00 UTC
  • Userid: 4249759479
  • User display name: 林深见鹿
  • User screen name: HcqcPapleyAshle
  • Tweet text: ‘这些人的行为严重造成人民群众的生命财产安全,就应该雷霆出击,绝不手软.’ – 9 December 2018

Translated: ‘The behaviour of these people has seriously caused [harm to] the safety of the lives and property of the people. They should strike out like a thunderclap and not relent.’

[NB: This tweet appears to have been mistyped, omitting a character or two. It should probably say that the behaviour endangered the lives and property of the people.]

Again, it appears likely that this campaign was intended to turn the opinions of overseas Chinese against critical international reporting and against videos of the event circulating on WeChat that contradicted the official narrative. International coverage of the arrests appears to have been minimal, which perhaps helps to explain the campaign’s short life.

Dormant accounts and Chinese-language tweets

The information operation against Guo Wengui appeared to begin on 24 April 2017. Our research also tried to determine whether earlier PRC-related information operations had taken place. 

Chinese-language tweets

One measure we examined was the percentage of Chinese-language tweets per day in the dataset. Twitter assigns a ‘tweet_language’ value to each tweet; manual examination of a sample of tweets showed this label to be approximately 90% accurate.
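A minimal sketch of this measure, again assuming the column names used in Twitter’s takedown archives and that Chinese-language tweets carry the ‘zh’ code:

```python
import pandas as pd

df = pd.read_csv("tweets.csv", parse_dates=["tweet_time"])

# For each day, the share of tweets Twitter labelled as Chinese ('zh').
is_zh = df["tweet_language"].eq("zh")
pct_zh = 100 * is_zh.groupby(df["tweet_time"].dt.date).mean()
print(pct_zh)  # cf. Figure 11
```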

Figure 11: Percentage of Chinese-language tweets per day, January 2017 onwards.

Figure 11 shows that prior to April 2017 there was no significant volume of Chinese-language tweets in the network of accounts that Twitter identified. A noticeable increase appears by July 2017, and from then on a significant share of tweets is identified as Chinese, peaking at over 80% in October 2017.

This measure does not support the existence of significant PRC-related operations prior to April 2017, unless their initial operations occurred in languages other than Chinese.

Account creation and tweet language

A second measure examined when accounts were created and the language they tweeted in.

Figure 12: Account creation date by percentage of Chinese-language tweets and follower size, 2008 to July 2019.

Figure 12 plots account creation date on the x-axis against the percentage of Chinese-language tweets over the lifetime of the account on the y-axis, with point size reflecting follower numbers.

Figure 13: Account creation date by percentage of Chinese-language tweets and follower size, April 2016 to July 2019.

Figure 13 shows the same data restricted to the period from April 2016 to July 2019.

In Figures 12 and 13 we can see a vertical stripe in July 2016, and more between August and October 2017. These stripes indicate many accounts being created at close to the same time. From July 2017, new accounts tweet mostly in Chinese.

These data indicate that accounts were systematically created for use in this network. Accounts created after October 2017 tweet mostly in Chinese, with only a couple of exceptions. A group of accounts created almost simultaneously in July 2016 was also involved in the network.
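These creation ‘stripes’ can be flagged programmatically by counting new accounts per day and marking outlier days. A sketch, assuming a per-account table with an ‘account_creation_date’ column as in Twitter’s published user files (the filename and threshold are illustrative):

```python
import pandas as pd

accounts = pd.read_csv("users.csv", parse_dates=["account_creation_date"])

# Count new accounts per day and flag days far above the typical rate.
per_day = accounts["account_creation_date"].dt.date.value_counts()
threshold = per_day.mean() + 3 * per_day.std()  # crude anomaly cut-off
print(per_day[per_day > threshold].sort_index())
```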

Sleeper accounts

The dataset contained 233 accounts with breaks of more than a year between tweets. These sleeper accounts were created as early as December 2007 and had breaks as long as ten years between tweets.
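Such accounts can be identified by computing, for each account, the longest interval between consecutive tweets. A sketch under the same column-name assumptions as earlier:

```python
import pandas as pd

df = pd.read_csv("tweets.csv", parse_dates=["tweet_time"])

# Longest gap between consecutive tweets, per account.
longest_gap = (
    df.sort_values("tweet_time")
      .groupby("userid")["tweet_time"]
      .apply(lambda t: t.diff().max())
)
sleepers = longest_gap[longest_gap > pd.Timedelta(days=365)]
print(len(sleepers))  # the dataset contained 233 such accounts
```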

Figure 14: Tweets over time, shown as dots coloured by tweet language, for accounts with a greater than one-year gap between tweets. Gaps of more than a year are shown as grey lines.

Figure 14 shows the pattern of tweets for these accounts over time. Before their break in activity, these accounts tweeted in a variety of languages, including Portuguese, Spanish and English, but not Chinese. After they resumed tweeting, there is a significant volume of Chinese-language tweets.

The bulk of these sleeper accounts began to tweet again from late 2017 onwards. These data support the hypothesis that PRC-related groups began recruiting dormant accounts into their network from mid-to-late 2017 onwards.
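The before/after shift can be checked by splitting each sleeper account’s timeline at its longest gap and comparing the share of Chinese-language tweets on either side. A sketch building on the ‘df’ and ‘sleepers’ objects from the previous snippet:

```python
import pandas as pd

def language_shift(group: pd.DataFrame) -> pd.Series:
    """Percent Chinese-language tweets before and after the longest gap."""
    g = group.sort_values("tweet_time").reset_index(drop=True)
    pos = g["tweet_time"].diff().idxmax()  # first tweet after the gap
    return pd.Series({
        "pct_zh_before": 100 * g.loc[: pos - 1, "tweet_language"].eq("zh").mean(),
        "pct_zh_after": 100 * g.loc[pos:, "tweet_language"].eq("zh").mean(),
    })

shift = (
    df[df["userid"].isin(sleepers.index)]
      .groupby("userid")
      .apply(language_shift)
)
print(shift.mean())  # expected: near zero before the gap, high after
```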

Figure 15: Tweets over time, shown as dots coloured by tweet language, for accounts created between June and August 2016 with a greater than one-year gap between tweets.

Figure 15 shows the tweeting pattern of accounts created between June and August 2016. These accounts appear as a vertical stripe in Figure 13.

The long gaps in activity immediately after account creation, followed by reactivation and mostly Chinese-language tweeting from early 2018, do not support the hypothesis that PRC-related elements were engaged in active information operations before April 2017. It is possible that these accounts were created by PRC-related entities expressly for use in subsequent information operations, but our assessment is that it is more likely that these inactive accounts were created en masse for other purposes and later acquired by PRC-related groups.

This research did not identify any evidence for other PRC-related information operations earlier than April 2017.

Conclusion

The ICPC’s preliminary research indicates that the information operation targeting the Hong Kong protests, as reflected in this dataset, was relatively small, hastily constructed and unsophisticated. This suggests that the operation, which Twitter has identified as linked to state-backed actors, was likely a rapid response to the unanticipated size and power of the Hong Kong protests rather than a campaign planned well in advance. Its unsophisticated nature suggests a crude understanding of information operations and rudimentary tradecraft, a long way from the skill demonstrated by other state actors. This may be because the campaigns were outsourced to a contractor, or it may reflect Chinese state-backed actors’ lack of familiarity with information operations on open social media platforms such as Twitter, in contrast to the highly proficient control the Chinese government exercises over heavily censored platforms such as WeChat and Weibo.

Our research has also uncovered evidence that these accounts previously engaged in multiple information operations targeting political opponents of the Chinese government. Activity in these campaigns shows clear signs of coordinated inauthentic behaviour, for example, posting patterns that correspond to working days and hours in Beijing. These information operations were likely aimed at overseas Chinese audiences.

This research is intended to add to the knowledge base available to researchers, governments and policymakers about the nature of Chinese state-linked information operations and coordinated inauthentic activity on Twitter.

Notes

The authors would like to acknowledge the assistance of ICPC colleagues Fergus Ryan, Alex Joske and Nathan Ruser.

Twitter did not provide any funding for this research. It has provided support for a separate ICPC project.


What is ASPI?

The Australian Strategic Policy Institute was formed in 2001 as an independent, non‑partisan think tank. Its core aim is to provide the Australian Government with fresh ideas on Australia’s defence, security and strategic policy choices. ASPI is responsible for informing the public on a range of strategic issues, generating new thinking for government and harnessing strategic thinking internationally.


ASPI International Cyber Policy Centre

The ASPI International Cyber Policy Centre’s mission is to shape debate, policy and understanding on cyber issues, informed by original research and close consultation with government, business and civil society.


It seeks to improve debate, policy and understanding on cyber issues by:

  1. conducting applied, original empirical research
  2. linking government, business and civil society
  3. leading debates and influencing policy in Australia and the Asia–Pacific.

The work of ICPC would be impossible without the financial support of our partners and sponsors across government, industry and civil society. ASPI is grateful to the US State Department for providing funding for this research project.

Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.


© The Australian Strategic Policy Institute Limited 2019

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers. Notwithstanding the above, educational institutions (including schools, independent colleges, universities and TAFEs) are granted permission to make copies of copyrighted works strictly for educational purposes without explicit permission from ASPI and free of charge.