ASPI is delighted to release the 5th edition of its Counterterrorism (CT) Yearbook, edited by Leanne Close APM and Daria Impiombato. The 2021 yearbook provides a comprehensive picture of the current global terrorism landscape. The yearbook’s 29 authors found that Covid-19—a key theme in most chapters—has had an impact on global terrorism. However, pervasive online social media platforms have played a more significant role, increasing terrorists’ ability to radicalise and incite individuals to commit terrorist acts, as well as to encourage financial support for terrorist groups.
The yearbook begins with an overview of current trends and the terrorism landscape in 2020 identified in the 8th Global Terrorism Index (GTI) produced by Australia’s Institute for Economics and Peace.
As well as analysing the impacts of Covid-19 on terrorist threats globally, the yearbook’s chapters surface several key themes consistent with the trends identified in the GTI. These include the impact of social media and technology on terrorist events and radicalisation, and a nexus between terrorism and organised crime. One concerning example highlights the impact of natural disasters on violent extremism: a study of 167 countries over the 30 years from 1970 found that an increase in deaths from natural disasters was followed by an increase in terrorism-related deaths and attacks over the following two years.
Strong examples of prevention and strategies to counter violent extremism are outlined in the yearbook, providing governments and CT practitioners with contemporary analysis of current and emerging challenges and offering key policy recommendations to combat radicalisation, violent extremism and terrorism in all its forms.
This report explores how the Chinese Communist Party (CCP), fringe media and pro-CCP online actors seek—sometimes in unison—to shape and influence international perceptions of the Chinese Government’s human rights abuses in Xinjiang, including through the amplification of disinformation. United States (US) based social media networks, including Twitter, Facebook and YouTube, along with TikTok (owned by Chinese company ByteDance), are centre stage in this global effort.
The Chinese Government continues to deny human rights abuses in Xinjiang despite a proliferation of credible evidence, including media reporting, independent research, testimonies and open-source data, that has revealed abuses including forced labour, mass detention, surveillance, sterilisation, cultural erasure and alleged genocide in the region. To distract from such human rights abuses, covert and overt online information campaigns have been deployed to portray positive narratives about the CCP’s domestic policies in the region, while also injecting disinformation into the global public discourse regarding Xinjiang.
The report’s key findings:
Since early 2020, there’s been a stark increase in the Chinese Government and state media’s use of US social media networks to push alternative narratives and disinformation about the situation in Xinjiang. Chinese state media accounts have been most successful in using Facebook to engage and reach an international audience.
The CCP is using tactics including leveraging US social media platforms to criticise and smear Uyghur victims, journalists and researchers who work on this topic, as well as their organisations. We expect these efforts to escalate in 2021.
Chinese Government officials and state media are increasingly amplifying content, including disinformation, produced by fringe media and conspiracist websites that are often sympathetic to the narrative positioning of authoritarian regimes. This amplifies the reach and influence of these sites in the Western media ecosystem. Senior officials from multilateral organisations, including the World Health Organization (WHO) and the United Nations (UN), have also played a role in sharing such content.
The Xinjiang Audio-Video Publishing House, a publishing organisation owned by a regional government bureau and affiliated with the propaganda department, has funded a marketing company to create videos depicting Uyghurs as supportive of the Chinese Government’s policies in Xinjiang. Those videos were then amplified on Twitter and YouTube by a network of inauthentic accounts. The Twitter accounts also retweeted and liked non-Xinjiang-related tweets by Chinese diplomatic officials and Chinese state-affiliated media in 2020.
Strange bedfellows on Xinjiang: The CCP, fringe media and US social media platforms (published 30 March 2021)
This report helps develop an understanding of the quantum of profits being made and where in the value chain they accrue. Australians spent approximately A$5.8 billion on methamphetamine and A$470 million on heroin in FY 2019.
Approximately A$1,216,806,017 was paid to international wholesalers overseas for the methamphetamine and heroin smuggled into Australia in that year. The profit that remained in Australia’s economy was about A$5,012,150,000. Those funds undermine Australia’s public health and distort our economy daily, and ultimately fund drug cartels and traffickers in Southeast Asia.
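As a rough sanity check, the headline figures above can be reconciled: total retail spend minus the amount paid offshore to wholesalers should approximate the profit retained in Australia. The short Python sketch below does that arithmetic; the small residual (under 1%) reflects rounding in the A$5.8 billion and A$470 million retail estimates.

```python
# Figures quoted in the report, in Australian dollars
retail_meth = 5.8e9                   # retail spend on methamphetamine, FY 2019
retail_heroin = 470e6                 # retail spend on heroin, FY 2019
paid_to_wholesalers = 1_216_806_017   # paid offshore to international wholesalers
profit_in_australia = 5_012_150_000   # profit retained in Australia's economy

total_retail = retail_meth + retail_heroin            # A$6.27 billion
implied_profit = total_retail - paid_to_wholesalers   # retail spend less offshore payments

# Relative gap between the implied and the quoted profit figures
relative_gap = abs(implied_profit - profit_in_australia) / profit_in_australia
```

Running the check shows the quoted figures are internally consistent to within about 1%, which is well inside the uncertainty of retail spending estimates.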
One key takeaway from the figures presented in this report is that the Australian drug trade is large and growing. Despite the best efforts of law enforcement agencies, methamphetamine and heroin use has been increasing by up to 17% year on year. Falling prices in Southeast Asia are likely to keep pushing that number up, while drug prices and purity in Australia remain relatively stable.
Authors Dr John Coyne and Dr Teagan Westendorf write that, ‘While ever-larger drug busts continue to dominate the headlines, the underlying fact is that methamphetamine and heroin imports continue to rise despite authorities seizing up to 34% of imported drugs’.
As production prices for methamphetamine continue to decline along with wholesale prices, more sophisticated transnational organised crime actors are likely to begin to examine their business models in greater detail. Industrial production of methamphetamine for high-volume, low-profit regional markets like Australia has significant benefits for them.
The data suggests that the more sophisticated transnational organised crime groups will seek to expand their control of the heroin and methamphetamine value chains to include greater elements of the wholesale supply chain as well as alternative product lines, such as synthetic opioids.
The authors note that ‘in the absence of supply reduction, and even with more effective supply-chain disruption, our federal and state governments will need to invest more heavily in demand reduction and harm minimisation.’
‘High rollers’: A study of criminal profits along Australia’s heroin and methamphetamine supply chains (published 30 March 2021)
This report analyses the future impact that hypersonic weaponry will have on global affairs.
Hypersonic systems include anything that travels faster than Mach 5, or five times the speed of sound. We may be on the cusp of seeing hypersonic weapons proliferate around the world, with Russia, China and the US all in the process of developing and testing them. By 2030, they are likely to be in the inventories of all of the major powers. And Australia might well join them – we have some world-class researchers and have been active in joint programs with the US for over 20 years. The government has added hypersonic weapons to its defence acquisition plan. It’s a topic we should be interested in and better informed about.
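For a sense of scale, the Mach 5 threshold mentioned above can be converted to everyday units. This is a minimal sketch assuming the sea-level speed of sound of roughly 343 m/s (the actual value varies with altitude and temperature, so the numbers are indicative only):

```python
SPEED_OF_SOUND_MS = 343.0  # approx. speed of sound at sea level, 20 degrees C

def mach_to_kmh(mach: float, c: float = SPEED_OF_SOUND_MS) -> float:
    """Convert a Mach number to km/h for a given speed of sound (m/s)."""
    return mach * c * 3.6  # m/s to km/h

# Mach 5, the conventional hypersonic threshold, works out to roughly 6,174 km/h
hypersonic_threshold_kmh = mach_to_kmh(5.0)
```

At those speeds, a weapon covers well over 100 km per minute, which is why defensive reaction times shrink so dramatically compared to subsonic and supersonic threats.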
It’s always hard to predict exactly how much will change when a new technology enters the battlefield, but Australia is investing tens of billions of dollars in advanced sensors and combat systems to defend its surface vessels against subsonic and supersonic weapons. It’s not clear that they will be effective enough against hypersonic weapons. On the plus side for our defence forces, hypersonic strike weapons with ranges of thousands of kilometres could return a strike capability to the ADF that has been missing since the F-111 was retired a decade ago.
There are some strategic stability issues to be wrestled with as well. The US is developing a ‘prompt global strike’ system that would allow it to hit a target pretty much anywhere on Earth in 20 minutes. Russian and Chinese systems are being developed with a nuclear or conventional warhead capability. The combination of short warning times and nuclear warhead ambiguity is potentially highly destabilising.
Coming ready or not: Hypersonic weapons (published 23 March 2021)
Chinese Communist Party (CCP) diplomatic accounts, Chinese state media, pro-CCP influencers and patriotic trolls are targeting the UK public broadcaster, the BBC, in a coordinated information operation. Recent BBC reports, including allegations of systematic sexual assault in Xinjiang’s internment camps, were among a number of triggers provoking the CCP’s propaganda apparatus to discredit the BBC, distract international attention and recapture control of the narrative.
In ASPI ICPC’s new report, Albert Zhang and Dr Jacob Wallis provide a snapshot of the CCP’s ongoing coordinated response targeting the BBC, which leveraged YouTube, Twitter and Facebook and was broadly framed around three prominent narratives:
That the BBC spreads disinformation and is biased against China
That the BBC’s domestic audiences think that it’s biased and not to be trusted
That the BBC’s reporting on China is instigated by foreign actors and intelligence agencies.
In addition, the report analyses some of the secondary effects of this propaganda effort by exploring the mobilisation of a pro-CCP Twitter network that has previously amplified the Covid-19 disinformation content being pushed by China’s Ministry of Foreign Affairs, and whose negative online engagement with the BBC peaks on the same days as that of the party-state’s diplomats and state media.
To contest and blunt criticism of the CCP’s systematic surveillance and control of minority ethnic groups, the party will continue to aggressively deploy its propaganda and disinformation apparatus. Domestic control remains fundamental to its political power and legitimacy, and internationally narrative control is fundamental to the pursuit of its foreign policy interests.
Trigger warning: The CCP’s coordinated information effort to discredit the BBC (published 4 March 2021)
Data has been referred to as the ‘new oil’ or ‘new gold’, but it’s more than that. Most organisations can’t function without it. That applies equally to government.
Government creation, collection, storage and analysis of data have grown and continue to grow, as has government reliance on that data. With government policy continuing to promote the outsourcing of data storage and processing, including to cloud providers, the value and protection that disaggregation and diversification generate may be lost in the absence of appropriate oversight.
In this report, ASPI’s Gill Savage and Anne Lyons provide an overview of the current state, the implications of the panel arrangements and the resulting challenges. They review the unintended consequences of the Australian Government’s data centre procurement arrangements, first introduced over a decade ago, and suggest areas for reform. The aim is to shape a better conversation on issues, challenges and factors to consider relating to arrangements for the provision of outsourced data centres.
Devolved data centre decisions: Opportunities for reform? (published 18 December 2020)
In the past two decades, Australia’s Chinese-language media landscape has undergone fundamental changes that have come at a cost to quality, freedom of speech, privacy and community representation. The diversity of Australia’s Chinese communities, which often trace their roots to Hong Kong, Southeast Asia and Taiwan as well as the People’s Republic of China, isn’t well reflected in the media sector.
Persistent efforts by the Chinese Communist Party (CCP) to engage with and influence Chinese language media in Australia far outmatch the Australian Government’s work in the same space. A handful of outlets generally offer high-quality coverage of a range of issues. However, CCP influence affects all media. It targets individual outlets while also manipulating market incentives through advertising, coercion and WeChat. Four of the 24 Australian media companies studied in this report show evidence of CCP ownership or financial support.
WeChat, a Chinese social media app created by Tencent, may be driving the most substantial and harmful changes ever observed in Australia’s Chinese-language media sector. On the one hand, the app is particularly important to Chinese Australians and helps people stay connected to friends and family in China. It’s used by as many as 3 million users in Australia for a range of purposes including instant messaging.1 It’s also the most popular platform used by Chinese Australians to access news.2 However, WeChat raises concerns because of its record of censorship, information control and surveillance, which align with Beijing’s objectives. Media outlets on WeChat face tight restrictions that facilitate CCP influence by pushing the vast majority of news accounts targeting Australian audiences to register in China. Networks and information sharing within the app are opaque, contributing to the spread of disinformation.
Australian regulations are still evolving to meet the challenges identified in this report, which often mirror problems in the media industry more generally. They haven’t brought sufficient transparency to the Chinese-language media sector or to CCP influence within it. Few Australian Government policies effectively support Chinese-language media or balance and restrict CCP influence in the sector.
What’s the solution?
The Australian Government should protect Chinese-language media from foreign interference while introducing measures to support the growth of an independent and professional media sector. WeChat is a serious challenge to the health of the sector and to free and open public discourse in Chinese communities, and addressing it must be a core part of the solution.
The government should encourage the establishment and growth of independent media. It should consider expanding Chinese-language services through the ABC and SBS, while also reviewing conflicts of interest and foreign interference risks in each. Greater funding should be allocated to multicultural media, including for the creation of scholarships and training programs for Chinese-language journalists and editors. The government should subsidise syndication from professional, non-CCP-controlled media outlets.
On WeChat, the government should hold all social media companies to the same set of rules, standards and norms, regardless of their country of origin or ownership. As it does with platforms such as Facebook and Twitter, the government should increase engagement with WeChat through relevant bodies such as the Department of Home Affairs, the Australian Cyber Security Centre, the Office of the Australian Information Commissioner, the Australian Communications and Media Authority, the eSafety Commissioner, the Australian Electoral Commission and the Department of Infrastructure, Transport, Regional Development and Communications. The aim should be to ensure that WeChat is taking clear and measurable steps in 2021 to address concerns and meet the same sets of rules, standards and norms that US social media platforms are held to. This effort should be done in tandem with outreach to like-minded countries. If companies refuse to meet those standards, they shouldn’t be allowed to operate in Australia.3
The government should explore ways to amend or improve the enforcement of legislation such as the Broadcasting Services Act 1995 and the Foreign Influence Transparency Scheme Act 2018 to increase the transparency of foreign ownership of media in any language, regardless of platform.
Introduction
Australia’s Chinese‑language media sector is an important part of our democracy, yet its contours and its challenges are poorly understood.4 Australia is home to large and diverse Chinese communities. According to the 2016 Census, nearly 600,000 Australians spoke Mandarin at home, and more than 280,000 spoke Cantonese.5 Only a minority of Australians with Chinese heritage were born in mainland China—many were born in Australia, Taiwan, Hong Kong or Southeast Asia.6 However, individuals born in mainland China are probably the largest group of WeChat users. Migration from mainland China is likely to remain high, and Australia has been home to large numbers of visiting Chinese students and businesspeople.
It’s been claimed that most Chinese‑language media in Australia are controlled or influenced by Beijing.7 While that’s broadly accurate, past research hasn’t systematically examined the extent and mechanisms of CCP influence over Australian media.8 In particular, the pervasive effects of WeChat on the Chinese media sector haven’t been widely appreciated. Our research identified no significant influence in Australian Chinese‑language media from governments other than China’s.
Growing concerns about the lack of Chinese‑Australian representation in Australian politics, CCP interference in Australia and Australia–China relations highlight the need for policymakers to understand the Chinese‑language media environment. For example, Australian politicians and scholars have questioned WeChat’s role in elections, called out disinformation on the app and complained about the past absence of relevant security advice from the government.9 Marginal seats such as Chisholm and Reid have large Chinese communities, among which Chinese‑language media, particularly through WeChat, have been an important factor in some elections.10
The authors would like to thank John Fitzgerald, Danielle Cave, Louisa Lim, Michael Shoebridge, Peter Jennings and several anonymous peer reviewers who offered their feedback and insights. Audrey Fritz contributed research on media regulation and censorship.
Funding: The Department of Home Affairs provided ASPI with $230k in funding, which was used towards this report.
What is ASPI?
The Australian Strategic Policy Institute was formed in 2001 as an independent, non-partisan think tank. Its core aim is to provide the Australian Government with fresh ideas on Australia’s defence, security and strategic policy choices. ASPI is responsible for informing the public on a range of strategic issues, generating new thinking for government and harnessing strategic thinking internationally. ASPI’s sources of funding are identified in our annual report, online at www.aspi.org.au and in the acknowledgements section of individual publications. ASPI remains independent in the content of the research and in all editorial judgements.
ASPI International Cyber Policy Centre
ASPI’s International Cyber Policy Centre (ICPC) is a leading voice in global debates on cyber, emerging and critical technologies and issues related to information and foreign interference, and it focuses on the impact these issues have on broader strategic policy. The centre has a growing mixture of expertise and skills, with teams of researchers who concentrate on policy, technical analysis, information operations and disinformation, critical and emerging technologies, cyber capacity building, satellite analysis, surveillance and China-related issues.
The ICPC informs public debate in the Indo-Pacific region and supports public policy development by producing original, empirical, data-driven research. The ICPC enriches regional debates by collaborating with research institutes from around the world and by bringing leading global experts to Australia, including through fellowships. To develop capability in Australia and across the Indo-Pacific region, the ICPC has a capacity building team that conducts workshops, training programs and large-scale exercises for the public and private sectors.
We would like to thank all of those who support and contribute to the ICPC with their time, intellect and passion for the topics we work on. If you would like to support the work of the centre please contact: icpc@aspi.org.au
Important disclaimer
This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional.
This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers. Notwithstanding the above, educational institutions (including schools, independent colleges, universities and TAFEs) are granted permission to make copies of copyrighted works strictly for educational purposes without explicit permission from ASPI and free of charge.
Thailand’s political discourse throughout the past decade has increasingly been shaped and amplified by social media and digital activism. The most recent wave of political activism this year saw the emergence of a countrywide youth-led democracy movement against the military-dominated coalition, as well as a nationalist counter-protest movement in support of the establishment.
The steady evolution of tactics on the part of the government, the military and protesters reflects an increasingly sophisticated new battleground for democracy, both on the streets and the screens. Understanding these complex dynamics is crucial for any broader analysis of the Thai protest movement and its implications.
In this report, we analyse samples of Twitter data relating to the online manifestation of contemporary political protests in Thailand. We explore two key aspects in which the online manifestation of the protests differs from its offline counterpart. That includes (1) the power dynamics between institutional actors and protesters and (2) the participation and engagement of international actors surrounding the protests.
#WhatsHappeningInThailand: The power dynamics of Thailand’s digital activism (published 14 December 2020)
For this volume of ASPI’s After Covid-19 series, we asked Australia’s federal parliamentarians to consider the world after the crisis and discuss policy and solutions that could drive Australian prosperity through one of the most difficult periods in living memory. The 49 contributions in this volume are the authentic voices of our elected representatives.
For policymakers, this volume offers a window into thinking from all sides of the House of Representatives and Senate, providing insights to inform their work in creating further policy in service of the Australian public. For the broader public, this is an opportunity to see policy fleshed out by politicians on their own terms and engage with policy thinking that isn’t often seen on the front pages of major news outlets.
After Covid-19 Volume 3: Voices from federal parliament (published 1 December 2020)
Over the past decade, state actors have taken advantage of the digitisation of election systems, election administration and election campaigns to interfere in foreign elections and referendums.1 Their activity can be divided into two attack vectors. First, they’ve used various cyber operations, such as denial of service (DoS) attacks and phishing attacks, to disrupt voting infrastructure and target electronic and online voting, including vote tabulation. Second, they’ve used online information operations to exploit the digital presence of election campaigns, politicians, journalists and voters.
Together, these two attack vectors (referred to collectively as ‘cyber-enabled foreign interference’ in this report because both are mediated through cyberspace) have been used to seek to influence voters and their turnout at elections, manipulate the information environment and diminish public trust in democratic processes.
This research identified 41 elections and seven referendums between January 2010 and October 2020 in which cyber-enabled foreign interference was reported, and it finds that there’s been a significant uptick in such activity since 2017. The data shows that Russia is the most prolific state actor engaging in online interference, followed by China, whose cyber-enabled foreign interference activity has increased significantly over the past two years. Beyond these two dominant actors, Iran and North Korea have also tried to influence foreign elections in 2019 and 2020. All four states sought to interfere in the 2020 US presidential election using differing cyber-enabled foreign interference tactics.
In many cases, these four actors use a combination of cyber operations and online information operations to reinforce their activities. There’s also often a clear geopolitical link between the interfering state and its target: these actors are targeting states they see as adversaries or useful to their geopolitical interests.
Democratic societies are yet to develop clear thresholds for responding to cyber-enabled interference, particularly when it’s combined with other levers of state power or layered with a veil of plausible deniability.2 Even when they’re able to detect it, often with the help of social media platforms, research institutes and the media, most states are failing to effectively deter such activity. The principles inherent in democratic societies—openness, freedom of speech and the free flow of ideas—have made them particularly vulnerable to online interference.
What’s the solution?
This research finds that not all states are being targeted by serious external threats to their electoral processes, so governments should consider scaled responses to specific challenges. However, the level of threat to all states will change over time, so there’s little room for complacency. For all stakeholders—in government, industry and civil society—learning from the experience of others will help nations minimise the chance of their own election vulnerabilities being exploited in the future.3
The integrity of elections and referendums is key to societal resilience. Therefore, these events must be better protected through greater international collaboration and stronger engagement between government, the private sector and civil society.
Policymakers must respond to these challenges without adopting undue regulatory measures that would undermine their political systems and create ‘the kind of rigidly controlled environment autocrats seek’.4 Those countries facing meaningful cyber-enabled interference need to adopt a multi-stakeholder approach that carefully balances democratic principles and involves governments, parliaments, internet platforms, cybersecurity companies, media, NGOs and research institutes. This report recommends that governments identify vulnerabilities and threats as a basis for developing an effective risk-mitigation framework for resisting cyber-enabled foreign interference.
The rapid adoption of social media and its integration into the fabric of political discourse has created an attack surface for malign actors to exploit. Global online platforms must take responsibility for acting against actors attempting to manipulate their users, yet these companies are commercial entities whose interests aren’t always aligned with those of governments. They aren’t intelligence agencies, so they’re sometimes limited in their capacity to attribute malign activities directly. To mitigate risk during election cycles, social media companies’ security teams should work closely with governments and civil society groups to ensure that there’s a shared understanding of threat actors and their tactics, enabling an effectively calibrated and collaborative security posture.
Policymakers must implement appropriate whole-of-government mechanisms that continuously engage key stakeholders in the private sector and civil society. Both governments and businesses must invest more heavily in capacity building for detecting and deterring these threats. It’s vital that civil society groups are supported to build capability that stimulates and informs international public discourse and policymaking. Threats to election integrity are persistent, and the number of actors willing to deploy these tactics is growing.
Background
Foreign states’ efforts to interfere in the elections and referendums of other states, and more broadly to undermine other political systems, are an enduring practice of statecraft.5 Yet the scale and methods of such interference have changed, with old and new techniques adapted to suit the cyber domain and the opportunities presented by a 24/7, always-connected information environment.6
When much of the world moved online, political targets became more vulnerable to foreign interference, and millions of voters were suddenly exposed, ‘in a new, “neutral” medium, to the very old arts of persuasion or agitation’.7 The adoption of electronic and online voting, voter tabulation and voter registration,8 as well as the growth of online information sharing and communication, has made interference in elections easier, cheaper and more covert.9 This has lowered the entry costs for states seeking to engage in election interference.10
Elections and referendums are targeted by foreign adversaries because they are moments when significant political and policy change occurs and because they are the means through which elected governments derive their legitimacy.11 By targeting electoral events, foreign actors can attempt to influence political decisions and policymaking, shift political agendas, encourage social polarisation and undermine democracies. This enables them to achieve long-term strategic goals, such as strengthening their relative national and regional influence, subverting undesired candidates and compromising international alliances that ‘pose a threat’ to their interests.12
Elections and referendums also involve diverse actors, such as politicians, campaign staffers, voters and social media platforms, all of which can be targeted to knowingly or unknowingly participate in, or assist with, interference orchestrated by a foreign state.13 There are also a number of cases where journalists and media outlets have unwittingly shared, amplified, and contributed to the online information operations of foreign state actors.14 The use of unknowing participants has proved to be a key feature of cyber-enabled foreign election interference.
This is a dangerous place for liberal democracies to be in. This report highlights that the same foreign state actors continue to pursue this type of interference, so much so that it is now becoming a global norm that’s an expected part of some countries’ election processes. On its own, this perceived threat has the potential to undermine the integrity of elections and referendums and trust in public and democratic institutions.
Methodology and definitions
This research is an extension and expansion of the International Cyber Policy Centre’s Hacking democracies: cataloguing cyber-enabled attacks on elections, which was published in May 2019. That project developed a database of reported cases of cyber-enabled foreign interference in national elections held between November 2016 and April 2019.15 This new research extends the scope of Hacking democracies by examining cases of cyber-enabled foreign interference between January 2010 and October 2020. This time frame was selected because information on the use of cyber-enabled techniques as a means of foreign interference started to emerge only in the early 2010s.16
This report’s appendix includes a dataset that provides an inventory of case studies in which foreign state actors have reportedly used cyber-enabled techniques to interfere in elections and referendums.
The cases have been categorised by:
target
type of political process
year
attack vector (method of interference)
alleged foreign state actor.
Also accompanying this report is an interactive online map which geo-codes and illustrates our dataset, allowing users to apply filters to search through the above categories.
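As an illustration of how the dataset’s categories support the kind of filtering the online map offers, the structure could be sketched as follows. This is a toy representation invented for this sketch, not the project’s actual schema, and the two sample records are drawn from cases discussed later in the report.

```python
from dataclasses import dataclass

# Hypothetical record structure mirroring the report's five categories;
# the actual ASPI dataset schema may differ.
@dataclass
class InterferenceCase:
    target: str               # targeted state or region
    political_process: str    # 'election' or 'referendum'
    year: int
    attack_vector: str        # 'cyber operation' or 'online information operation'
    alleged_actor: str        # alleged foreign state actor

# Two illustrative records based on cases mentioned in this report.
cases = [
    InterferenceCase('Ukraine', 'election', 2019, 'cyber operation', 'Russia'),
    InterferenceCase('US', 'election', 2020, 'online information operation', 'Iran'),
]

def filter_cases(cases, **criteria):
    """Apply simple equality filters, much as the online map's category filters do."""
    return [c for c in cases
            if all(getattr(c, k) == v for k, v in criteria.items())]

print(len(filter_cases(cases, alleged_actor='Russia')))  # one matching case in this toy dataset
```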
This research relied on open-source information, predominantly in English, including media reports from local, national, and international outlets, policy papers, academic research, and public databases. It was desktop based and consisted of case selection, case categorisation and mixed-methods analysis.17 The research also benefited from a series of roundtable discussions and consultations with experts in the field,18 as well as a lengthy internal and external peer review process.
The accompanying dataset only includes cases where attribution was publicly reported by credible researchers, cybersecurity firms or journalists. The role of non-state actors and the use of cyber-enabled techniques by domestic governments and political parties to shape political discourse and public attitudes within their own societies weren’t considered as part of this research.19
This methodology has limitations. For example, the research is limited by the covert and ongoing nature of cyber-enabled foreign interference, which is not limited to the period of an election cycle or campaign. Case selection for the new dataset, in particular, was impeded by the lack of publicly available information and uncertainty about intent and attribution, which are common problems in work concerning cyber-enabled or other online activity. These constraints likely result in the underreporting of cases and a skewing towards English-language and mainstream media sources. The inability to accurately assess the impact of interference campaigns also means that the dataset doesn’t distinguish between major and minor campaigns and their outcomes. The methodology omitted cyber-enabled foreign interference that occurred outside the context of elections or referendums.20
In the context of this policy brief, the term ‘attack vector’ refers to the means by which foreign state actors carry out cyber-enabled interference. Accordingly, the dataset contains cases of interference that can broadly be divided into two categories:
• Cyber operations: covert activities carried out via digital infrastructure to gain access to a server or system in order to compromise its service, identify or introduce vulnerabilities, manipulate information or perform espionage21
• Online information operations: information operations carried out in the online information environment to covertly distort, confuse, mislead and manipulate targets through deceptive or inaccurate information.22
Cyber operations and online information operations are carried out via an ‘attack surface’, which is to be understood as the ‘environment where an attacker can try to enter, cause an effect on, or extract data from’.23
Key findings
ASPI’s International Cyber Policy Centre has identified 41 elections and seven referendums between January 2010 and October 2020 (Figure 1) that have been subject to cyber-enabled foreign interference in the form of cyber operations, online information operations or a combination of the two.24
Figure 1: Cases of cyber-enabled foreign interference, by year and type of political process
Figure 1 shows that reports of the use of cyber-enabled techniques to interfere in foreign elections and referendums have increased significantly over the past five years. Thirty-eight of the 41 elections in which foreign interference was identified, and six of the seven referendums, occurred between 2015 and 2020 (Figure 1). These figures are significant when we consider that elections take place only every couple of years and that referendums are typically held on an ad hoc basis, meaning that foreign state actors have limited opportunities to carry out this type of interference.
As a key feature of cyber-enabled interference is deniability, there are likely many more cases that remain publicly undetected or unattributed. Moreover, what might be perceived as a drop in recorded cases in 2020 can be attributed to a number of factors, including election delays caused by Covid-19 and that election interference is often identified and reported on only after an election period is over.
Figure 2: Targets of cyber-enabled foreign interference in an election or referendum
Figure 3: Number of political processes targeted (1–4), by state or region
Cyber-enabled interference occurred on six continents (Africa, Asia, Europe, North America, Australia and South America). The research identified 33 states that have experienced cyber-enabled foreign interference in at least one election cycle or referendum, the overwhelming majority of which are democracies.25 The EU has also been a target: several member states were targeted in the lead-up to the 2019 European Parliament election.26
Significantly, this research identified 11 states that were targeted in more than one election cycle or referendum (Figure 3). The repeated targeting of certain states is indicative of their (perceived) strategic value, the existence of candidates that are aligned with the foreign state actors’ interests,27 insufficient deterrence efforts, or past efforts that have delivered results.28 This research also identified five cases in which multiple foreign state actors targeted the same election or referendum (the 2014 Scottish independence referendum, the 2016 UK referendum on EU membership, the 2018 Macedonian referendum, the 2019 Indonesian general election and the 2020 US presidential election). Rather than suggesting coordinated action, the targeting of a single election or referendum by multiple foreign state actors more likely reflects the strategic importance of the outcome to multiple states.
The attack vectors
The attack vectors are cyber operations and online information operations.29 Of the 48 political processes targeted, 26 were subjected to cyber operations and 34 were subjected to online information operations. Twelve were subjected to a combination of both (Figure 4).
Figure 4: Attacks on political processes, by attack vector
Cyber operations
This research identified 25 elections and one referendum over the past decade in which cyber operations were used for interference purposes. In the context of election interference, cyber operations fell into two broad classes: operations to directly disrupt (such as DoS attacks) or operations to gain unauthorised access (such as phishing). Unauthorised access could be used to enable subsequent disruption or to gather intelligence that could then enable online information operations, such as a hack-and-leak campaign.
Phishing attacks were the main technique used to gain unauthorised access to the personal online accounts and computer systems of individuals and organisations involved in managing and running election campaigns or infrastructure. They were used in 17 of the 25 elections, as well as the referendum, with political campaigns on the receiving end in most of the reported instances. Phishing involves misleading a target into downloading malware or disclosing personal information, such as login credentials, by sending a malicious link or file in an otherwise seemingly innocuous email or message (Figure 5).30 For example, Google revealed in 2020 that Chinese state-sponsored threat actors pretended to be from antivirus software firm McAfee in order to target US election campaigns and staffers with a phishing attack.31
Figure 5: The email Russian hackers used to compromise state voting systems ahead of the 2016 US presidential election
Source: Sam Biddle, ‘Here’s the email Russian hackers used to try to break into state voting systems’, The Intercept, 2 June 2018, online.
When threat actors gain unauthorised access to election infrastructure, they could potentially disrupt or even alter vote counts, as well as use information gathered from their access to distract public discourse and sow doubt about the validity and integrity of the process.
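The McAfee impersonation described above is typical of lookalike-domain phishing. A deliberately simplified heuristic for flagging such domains is sketched below; it is illustrative only (real phishing defences rely on many more signals, such as SPF/DKIM/DMARC authentication and URL reputation), and the example domains are invented.

```python
# Illustrative heuristic only: flag sender domains that closely imitate,
# but do not exactly match, a trusted domain (e.g. 'rncafee.com' vs
# 'mcafee.com'). Not a substitute for real email authentication.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like_spoof(sender_domain: str, trusted_domains: list[str]) -> bool:
    """Suspicious if close to, but not identical to, a trusted domain."""
    return any(0 < edit_distance(sender_domain, t) <= 2 for t in trusted_domains)

print(looks_like_spoof('rncafee.com', ['mcafee.com']))   # lookalike: flagged
print(looks_like_spoof('mcafee.com', ['mcafee.com']))    # exact match: not flagged
```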
Then there are DoS attacks, in which a computer or online server is overwhelmed by connection requests, leaving it unable to provide service.32 In elections, they’re often used to compromise government and election-related websites, including those used for voter registration and vote tallying.
DoS attacks were used in six of the 25 elections, and one referendum, targeting vote-tallying websites, national electoral commissions and the websites of political campaigns and candidates. For example, in 2019, the website of Ukrainian presidential candidate Volodymyr Zelenskiy was subjected to a distributed DoS attack the day after he announced his intention to run for office. The website received 5 million requests within minutes of its launch and was quickly taken offline, preventing people from registering as supporters.33
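The core mitigation logic against the connection floods described above can be sketched as a rate limiter that bounds how many requests any one client may make per time window. This is a minimal fixed-window sketch for illustration; in practice, DoS mitigation for election websites happens upstream (CDNs, traffic-scrubbing services), and the client address and limits here are invented.

```python
import time
from collections import defaultdict

# Minimal fixed-window rate limiter sketch: cap requests per client per window.
class RateLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        # client ip -> [window start time, request count in window]
        self.counts = defaultdict(lambda: [0.0, 0])

    def allow(self, client_ip: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        start, count = self.counts[client_ip]
        if now - start >= self.window:       # new window: reset the counter
            self.counts[client_ip] = [now, 1]
            return True
        if count < self.max_requests:        # under the cap: admit
            self.counts[client_ip][1] += 1
            return True
        return False                         # over the cap: drop the request

limiter = RateLimiter(max_requests=3, window_seconds=1.0)
results = [limiter.allow('203.0.113.5', now=0.1) for _ in range(5)]
print(results)  # first three requests admitted, the rest dropped in the same window
```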
Online information operations
This research identified 28 elections and six referendums over the past decade in which online information operations were used for interference purposes. In the context of election interference, online information operations should be understood as the actions taken online by foreign state actors to distort political sentiment in an election to achieve a strategic or geopolitical outcome.34
They can be difficult to distinguish from everyday online interactions and often seek to exploit existing divisions and tensions within the targeted society.35
Online information operations combine social media manipulation (‘coordinated inauthentic behaviour’) with, for example, partisan media coverage and disinformation to distort political sentiment during an election and, more broadly, to alter the information environment. The operations are designed to target voters directly and often make use of social media and networking platforms to interact in real time and assimilate more readily with their targets.36
Online information operations tend to attract and include domestic actors.37 There have been several examples in which Russian operatives have successfully infiltrated and influenced legitimate activist groups in the US.38 The problem becomes even more pronounced as foreign state actors align their online information operations with domestic disinformation and extremist campaigns, amplifying rather than creating disinformation.39 The strategic use of domestic disinformation means that governments and regulators may find it difficult to counter these operations without also taking a stand against domestic misinformers and groups.
It is important to acknowledge the synergy of the two attack vectors, and also how they can converge and reinforce one another.40 This research identified three elections where cyber operations were used to compromise a system and obtain sensitive material, such as emails or documents, which were then strategically disclosed online and amplified.41 For example, according to Reuters, classified documents titled ‘UK-US Trade & Investment Working Group Full Readout’ were distributed online before the 2019 British general election as part of a Russian-backed strategic disclosure campaign.42
The main concern with the strategic use of both attack vectors is that it further complicates the target’s ability to detect, attribute and respond. This means that any meaningful response will need to consider both potential attack vectors when securing vulnerabilities.
State actors and targets
Cyber-enabled foreign interference in elections and referendums between 2010 and 2020 has been publicly attributed to only a small number of states: Russia, China, Iran and North Korea. In most cases, a clear geopolitical link between the source of interference and the target can be identified; Russia, China, Iran and North Korea mainly target states in their respective regions, or states they regard as adversaries, such as the US.43
The increasing convergence among foreign state actors, notably China and Iran learning and adopting techniques from Russia, has made it increasingly difficult to distinguish between them.44 This has been further complicated by the adoption of Russian tactics and techniques by domestic groups, particularly those aligned with the far right.45
Russia
Russia is the most prolific foreign actor in this space. This research identified 31 elections and seven referendums involving 26 states over the past decade in which Russia allegedly used cyber-enabled foreign interference tactics. Unlike the actions of many of the other state actors profiled here, Russia’s approach has been global and wide-ranging. Many of Russia’s efforts remain focused on Europe, where Moscow allegedly used cyber-enabled means to interfere in 20 elections (including the 2019 European Parliament election) and seven referendums. Of the 16 European states affected, 12 are members of the EU and 13 are members of NATO.46 Another focus for Russia has been the US, and while the actual impact on voters remains debatable, Russian interference has become an expected part of US elections.47 Moscow has also sought to interfere in the elections of several countries in South America and Africa, possibly in an attempt to undermine democratisation efforts and influence their foreign policy orientations.48
Russia appears to be motivated by the intent to signal its capacity to respond to perceived foreign interference in its internal affairs and anti-Russian sentiment.49 It also seeks to strengthen its regional power by weakening alliances that pose a threat. For instance, Russia used cyber operations and online information operations to interfere in both the 2016 Montenegrin parliamentary election and the 2018 Macedonian referendum. This campaign was part of its broader political strategy to block the two states from joining NATO and prevent the expansion of Western influence into the Balkan peninsula.50
Figure 6: States targeted by Russia between 2010 and 2020
China
Over the past decade, it’s been reported that China has targeted 10 elections in seven states and regions. Taiwan, specifically Taiwanese President Tsai Ing-wen and her Democratic Progressive Party, has been the main target of China’s cyber-enabled election interference.51 Over the past three years, however, the Chinese state has expanded its efforts across the Indo-Pacific region.52 Beijing has also been linked to activity during the 2020 US presidential election. As reported by the New York Times and confirmed by both Google and Microsoft, state-backed hackers from China allegedly conducted unsuccessful spear-phishing attacks to gain access to the personal email accounts of campaign staff members working for the Democratic Party candidate Joseph Biden.53
China’s interference in foreign elections is part of its broader strategy to defend its ‘core’ national interests, both domestically and regionally, and apply pressure to political figures who challenge those interests. Those core interests, as defined by the Chinese Communist Party, include the preservation of domestic stability, economic development, territorial integrity and the advancement of China’s great-power status.54 Previously, China’s approach could be contrasted with Russia’s in that China attempted to deflect negativity and shape foreign perceptions to bolster its legitimacy, whereas Russia sought to destabilise the information environment, disrupt societies and weaken the target.55 More recently, however, China has adopted methods associated with Russian interference, such as blatantly destabilising the general information environment in targeted countries with obvious mistruths and conspiracy theories.56
Figure 7: States and regions targeted by China between 2010 and 2020
Iran
The dataset shows that Iran allegedly interfered in two elections and two referendums across three states.57 Iranian interference in foreign elections appears similar to Russian interference in that it’s framed as a defensive action against targets perceived to be meddling in Iran’s internal affairs and a reaction to perceived anti-Iran sentiment. A pertinent and current example is Iran’s recent effort to interfere in the 2020 US presidential election by targeting President Trump’s campaign.58 As reported by the Washington Post, Microsoft discovered that the Iranian-backed hacker group Phosphorus had used phishing emails to target 241 email accounts belonging to government officials, journalists, prominent Iranian citizens and staff associated with Trump’s election campaign, successfully compromising four of those accounts.59
Figure 8: States targeted by Iran between 2010 and 2020
North Korea
North Korea has been identified as a foreign threat actor behind activity targeting both the 2020 South Korean legislative election and the 2020 US presidential election.60 Somewhat similarly to China’s approach, North Korea’s interference appears to focus on silencing critics and discrediting narratives that undermine its national interests. For example, North Korea targeted North Korean citizens running in South Korea’s 2020 legislative election, including Thae Yong-ho, the former North Korean Deputy Ambassador to the UK and one of the highest-ranking North Korean officials to ever defect.61
Figure 9: States targeted by North Korea between 2010 and 2020
Detection and attribution require considerable time and resources, as both tasks demand the technical ability to analyse and reverse-engineer a cyber operation or online information operation.
Beyond attribution, understanding the strategic and geopolitical aims of each event is challenging and time-consuming.62 The covert and online nature of cyber-enabled interference, whether carried out as a cyber operation or an online information operation, inevitably complicates the detection and identification of interference. For example, a DoS attack can be difficult to distinguish from a legitimate rise in online traffic. Moreover, the nature of the digital infrastructure and the online information environment used to carry out interference enables foreign state actors to conceal or falsify their identities, locations, time zones and languages.
As detection and attribution capabilities improve, the tactics and techniques used by foreign states will adapt accordingly, further complicating efforts to detect and attribute interference promptly.63
There are already examples of foreign state actors adapting their techniques, such as using closed groups and encrypted communication platforms (such as WhatsApp, Telegram and LINE) to spread disinformation64 or using artificial intelligence to generate false content.65 It can also be difficult to determine whether an individual or group is acting on its own or on behalf of a state.66 This is further complicated by the use of non-state actors, such as hackers-for-hire, consultancy firms and unwitting individuals, as proxies. Ahead of the 2017 Catalan independence referendum, for example, the Russian-backed media outlets RT and Sputnik used Venezuelan and Chavista-linked social media accounts as part of an amplification campaign. The hashtag #VenezuelaSalutesCatalonia was amplified by the accounts to give the impression that Venezuela supported Catalonian independence.67 More recently, Russia outsourced part of its 2020 US presidential disinformation campaign to Ghanaian and Nigerian nationals who were employed to generate content and disseminate it on social media.68
The ‘bigger picture’
States vary in their vulnerability to cyber-enabled foreign interference in elections and referendums.
In particular, ‘highly polarised or divided’ democracies tend to be more vulnerable to such interference.69 The effectiveness of cyber-enabled interference in the lead-up to an election is overwhelmingly determined by the robustness and integrity of the information environment and the extent to which the electoral process has been digitised.70 Academics from the School of Politics and International Relations at the Australian National University found that local factors, such as the length of the election cycle and the target’s preparedness and response, also play a significant role. For example, Emmanuel Macron’s En Marche! campaign prepared for Russian interference by implementing strategies to respond to both cyber operations (specifically, phishing attacks) and online information operations. In the event that a phishing attack was detected, Macron’s IT team was instructed to ‘flood’ phishing emails with multiple login credentials to disrupt and distract the would-be attacker. To deal with online information operations, Macron’s team planted fake emails and documents that could be identified in the event of a strategic disclosure and undermine the adversary’s effort.71
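The credential-flooding countermeasure attributed to Macron’s IT team can be sketched as generating large numbers of plausible decoy logins to submit to a detected phishing page, so the attacker cannot easily separate real captures from fakes. This is purely illustrative: the operational details of the actual campaign response are not public, and all names and formats below are invented.

```python
import secrets
import string

# Illustrative sketch of the 'flooding' countermeasure: generate plausible
# but fake credentials to overwhelm a detected phishing endpoint.
# All name lists and formats are invented for illustration.

FIRST = ['camille', 'jean', 'marie', 'paul', 'sophie']
LAST = ['martin', 'bernard', 'dubois', 'moreau', 'laurent']

_rng = secrets.SystemRandom()

def decoy_credential():
    """Return one plausible-looking fake (username, password) pair."""
    user = f"{_rng.choice(FIRST)}.{_rng.choice(LAST)}"
    password = ''.join(_rng.choice(string.ascii_letters + string.digits)
                       for _ in range(12))
    return user, password

# A real response would submit these to the phishing form; here we just generate.
decoys = [decoy_credential() for _ in range(1000)]
print(len(decoys))  # 1000 decoy pairs generated
```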
Electronic and online voting, vote tabulation and voter registration systems are often presented as the main targets of cyber-enabled interference. It is important to recognise that the level of trust the public has in the integrity of electoral systems, democratic processes and the information environment is at stake. In Europe, a 2018 Eurobarometer survey on democracy and elections found that 68% of respondents were concerned about the potential for fraud or cyberattack in electronic voting, and 61% were concerned about ‘elections being manipulated through cyberattacks’.72
That figure matched the result of a similar survey conducted by the Pew Research Center in the US, which found that 61% of respondents believed it was likely that cyberattacks would be used in the future to interfere in their country’s elections.73
However, not all states are equally vulnerable to this type of interference. Some, for example, opt to limit or restrict the use of information and communication technologies in the electoral process.74 The Netherlands even reverted to using paper ballots to minimise its vulnerability to a cyber operation, ensuring that there wouldn’t be doubts about the electoral outcome.75 Authoritarian states that control, suppress and censor their information environments are also less vulnerable to cyber-enabled foreign interference.76
The proliferation of actors involved in elections and the digitisation of election functions have dramatically widened the attack surface available to foreign state actors. This has in large part been facilitated by the pervasive and persistent growth of social media and networking platforms, which has made targeted populations more accessible than ever to foreign state actors. For example, Russian operatives at the Internet Research Agency were able to pose convincingly as Americans online to form groups and mobilise political rallies and protests.77 The scale of this operation wouldn’t have been possible without social media and networking platforms.
Figure 10: Number of people using social media platforms, July 2020 (million)
Source: ‘Most popular social networks worldwide as of July 2020, ranked by number of active users’, Statista, 2020, online.
While these platforms play an increasingly significant role in how people communicate about current affairs, politics and other social issues, they continue to be misused and exploited by foreign state actors.78 Moreover, they have fundamentally changed the way information is created, accessed and consumed, resulting in an online information environment ‘characterised by high volumes of information and limited levels of user attention’.79
In responding to accusations of election interference, foreign actors tend to deny their involvement and then deflect by indicating that the accusations are politically motivated. In 2017, following the release of the United States’ declassified assessment of Russian election interference,80 Russian Presidential Spokesperson Dmitry Peskov compared the allegations of interference to a ‘witch-hunt’ and stated that they were unfounded and unsubstantiated, and that Russia was ‘growing rather tired’ of the accusations.81 Russian President Vladimir Putin even suggested that it could be Russian hackers with ‘patriotic leanings’ that have carried out cyber-enabled election interference rather than state-sponsored hackers.82
Plausible deniability is often cited in response to accusations of interference, with China’s Foreign Ministry noting that the ‘internet was full of theories that were hard to trace’.83 China has attempted to deter future allegations by threatening diplomatic relations, responding to the allegations that it was behind the sophisticated cyber attack on Australia’s parliament by issuing a warning that the ‘irresponsible’ and ‘baseless’ allegations could negatively impact China’s relationship with Australia.84
Recommendations
The threats posed by cyber-enabled foreign interference in elections and referendums will persist, and the range of state actors willing to deploy these tactics will continue to grow. Responding to the accelerating challenges in this space requires a multi-stakeholder approach that doesn’t impose an undue regulatory burden that could undermine democratic rights and freedoms. Responses should be calibrated according to the identified risks and vulnerabilities of each state. This report proposes recommendations categorised under four broad themes: identify, protect, detect and respond.
1. Identify
Identify vulnerabilities and threats as a basis for developing an effective risk-mitigation framework
Governments should develop and implement risk-mitigation frameworks for cyber-enabled foreign interference that incorporate comprehensive threat and vulnerability assessments. Each framework should include a publicly available component, provide an assessment of cybersecurity vulnerabilities in election infrastructure, explain efforts to detect foreign interference, raise public awareness, outline engagement with key stakeholders and set a clearer threshold for response.85
The security of election infrastructure needs to be continuously assessed and audited, during and in between elections.
Key political players, including political campaigns, political parties and governments, should engage experts to develop and facilitate tabletop exercises to identify and develop mitigation strategies that consider the different potential attack vectors, threats and vulnerabilities.86
2. Protect
Improve societal resilience by raising public awareness
Governments need to develop communication and response plans for talking to the public about cyber-enabled foreign interference, particularly when it involves attempts to interfere in elections and referendums.
Government leaders should help to improve societal resilience and situational awareness by making clear and timely public statements about cyber-enabled foreign interference in political processes. This would help to eliminate ambiguity and restore community trust. Such statements should be backed by robust public reporting mechanisms from relevant public service agencies.
Governments should require that all major social media and internet companies regularly report on how they detect and respond to cyber-enabled foreign interference. Such reports, which should include positions on political advertising and further transparency on how algorithms amplify and suppress content, would be extremely useful in informing public discourse and also in shaping policy recommendations.
Facilitate cybersecurity training to limit the effect of cyber-enabled foreign interference
Cybersecurity, cyber hygiene and disinformation training sessions and briefings should be provided regularly for all politicians, political parties, campaign staff and electoral commission staff to reduce the possibility of a successful cyber operation, such as a phishing attack, that can be exploited by foreign state actors.87 This could include both technical guides and induction guides for new staff, focused on detecting phishing emails and responding to DoS attacks.
Establish clear and context-specific reporting guidelines to minimise the effect of online information operations
As possible targets of online information operations, researchers and reporters covering elections and referendums should adopt ‘responsible’ reporting guidelines to minimise the effect of online information operations and ensure that they don’t act as conduits.88 The guidelines should highlight the importance of context when covering possible strategic disclosures, social media manipulation and disinformation campaigns.89 Stanford University’s Cyber Policy Center has developed a set of guidelines that provide a useful reference point for reporters and researchers covering elections and referendums.90
3. Detect
The computer systems of parliaments, governments and electoral agencies should be upgraded and regularly tested for vulnerabilities, particularly in the lead-up to elections and referendums.
Governments and the private sector must invest more in detecting interference activities, including by funding data-driven investigative journalism and research institutes, so that key local and regional civil society groups can build capability that stimulates and informs public discourse and policymaking.
Governments and the private sector must invest in long-term research into how emerging technologies, such as ‘deep fake’ technologies,91 could be exploited by those engaging in foreign interference. Such research would also assist those involved in detecting and deterring that activity.
4. Respond
Assign a counter-foreign-interference taskforce to lead a whole-of-government approach
Global online platforms must take responsibility for enforcement actions against actors attempting to manipulate their online audiences. Their security teams should work closely with governments and civil society groups to ensure that there’s a shared understanding of the threat actors and their tactics in order to create an effectively calibrated and collaborative security posture.
Governments should look to build counter-foreign-interference taskforces that would help to coordinate national efforts to deal with many of the challenges discussed in this report. Australia’s National Counter Foreign Interference Coordinator and the US’s Foreign Influence Task Force provide different templates that could prove useful. Such taskforces, involving policy, electoral, intelligence and law enforcement agencies, should engage globally and will need to regularly engage with industry and civil society. They should also carry out formal investigations into major electoral interference activities and publish the findings of such investigations in a timely and transparent manner.
Signal a willingness to impose costs on adversaries
This research demonstrates that a small number of foreign state actors persistently carry out cyber-enabled election interference; governments should therefore establish clear prevention and deterrence postures based on their most likely adversaries. For example, pre-emptive legislation that automatically imposes sanctions or other punishments if interference is detected has been proposed in the US Senate.92
Democratic governments should work more closely together to form coalitions that develop a collective and publicly defined deterrence posture. Clearly communicated costs could change the aggressor’s cost–benefit calculus.
The authors would like to thank Danielle Cave, Dr Samantha Hoffman, Tom Uren and Dr Jacob Wallis for all of their work on this project. We would also like to thank Michael Shoebridge, anonymous peer reviewers, and external peer reviewers Katherine Mansted, Alicia Wanless and Dr Jacob Shapiro for their invaluable feedback on drafts of this report.
In 2019, ASPI’s International Cyber Policy Centre was awarded a US$100,000 research grant from Twitter, which was used towards this project. The work of ASPI ICPC would not be possible without the support of our partners and sponsors across governments, industry and civil society.
What is ASPI?
The Australian Strategic Policy Institute was formed in 2001 as an independent, non‑partisan think tank. Its core aim is to provide the Australian Government with fresh ideas on Australia’s defence, security and strategic policy choices. ASPI is responsible for informing the public on a range of strategic issues, generating new thinking for government and harnessing strategic thinking internationally. ASPI’s sources of funding are identified in our Annual Report, online at www.aspi.org.au and in the acknowledgements section of individual publications. ASPI remains independent in the content of the research and in all editorial judgements.
ASPI International Cyber Policy Centre
ASPI’s International Cyber Policy Centre (ICPC) is a leading voice in global debates on cyber, emerging and critical technologies, and issues related to information and foreign interference, focusing on the impact these issues have on broader strategic policy. The centre has a growing mixture of expertise and skills, with teams of researchers who concentrate on policy, technical analysis, information operations and disinformation, critical and emerging technologies, cyber capacity building, satellite analysis, surveillance and China-related issues.
The ICPC informs public debate in the Indo-Pacific region and supports public policy development by producing original, empirical, data-driven research. The ICPC enriches regional debates by collaborating with research institutes from around the world and by bringing leading global experts to Australia, including through fellowships. To develop capability in Australia and across the Indo-Pacific region, the ICPC has a capacity building team that conducts workshops, training programs and large-scale exercises for the public and private sectors.
We would like to thank all of those who support and contribute to the ICPC with their time, intellect and passion for the topics we work on. If you would like to support the work of the centre please contact: icpc@aspi.org.au
Important disclaimer
This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional.
This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers. Notwithstanding the above, educational institutions (including schools, independent colleges, universities and TAFEs) are granted permission to make copies of copyrighted works strictly for educational purposes without explicit permission from ASPI and free of charge.
First published October 2020.
ISSN 2209-9689 (online), ISSN 2209-9670 (print) Cover image: Produced by Rebecca Hendin, online.
Funding for this report was provided by Twitter.
Fergus Hanson, Sarah O’Connor, Mali Walker, Luke Courtois, Hacking democracies: cataloguing cyber-enabled attacks on elections, ASPI, Canberra, 17 May 2019, online. ↩︎
Katherine Mansted, ‘Engaging the public to counter foreign interference’, The Strategist, 9 December 2019, online. ↩︎
Erik Brattberg, Tim Maurer, Russian election interference: Europe’s counter to fake news and cyber attacks, Carnegie Endowment for International Peace, May 2018, online. ↩︎
Laura Rosenberger, ‘Making cyberspace safe for democracy: the new landscape of information competition’, Foreign Affairs, May/June 2020, online. ↩︎
For a comprehensive overview of foreign interference in elections, see David Shimer, Rigged: America, Russia, and one hundred years of covert electoral interference, Knopf Publishing Group, 2020; Casey Michel, ‘Russia’s long and mostly unsuccessful history of election interference’, Politico, 26 October 2019, online. ↩︎
David M Howard, ‘Can democracy withstand the cyber age: 1984 in the 21st century’, Hastings Law Journal, 2018, 69:1365. ↩︎
Philip Ewing, ‘In “Rigged,” a comprehensive account of decades of election interference’, NPR, 9 June 2020, online. ↩︎
Eric Geller, ‘Some states have embraced online voting. It’s a huge risk’, Politico, 8 June 2020, online. For a comprehensive discussion on electronic voting, see NRC, Asking the right questions about electronic voting. ↩︎
CSE, Cyber threats to Canada’s democratic process. ↩︎
Samantha Bradshaw, Philip N Howard, The global disinformation order: 2019 global inventory of organised social media manipulation, Computational Propaganda Research Project, Oxford Internet Institute, 2019, online. ↩︎
National Research Council (NRC), ‘Public confidence in elections’, Asking the right questions about electronic voting, Computer Science and Telecommunications Board, National Academies Press, Washington DC, 2006, online. ↩︎
Communications Security Establishment (CSE), Cyber threats to Canada’s democratic process, Canada, 7 June 2017, online. ↩︎
Elizabeth Dwoskin, Craig Timberg, ‘Facebook takes down Russian operation that recruited U.S. journalists, amid rising concerns about election misinformation’, Washington Post, 1 September 2020, online. ↩︎
See Alicia Wanless and Laura Walters, How Journalists Become an Unwitting Cog in the Influence Machine, Carnegie Endowment for International Peace, online, 1. ↩︎