Tag Archive for: Disinformation

Examining Australia’s bid to curb online disinformation

Hardly a day had passed after the government unveiled its initial draft of the Combatting Misinformation and Disinformation Bill 2023 before critics descended upon it.

‘Hey Peasants, Your Opinions (Hell, your facts) Are Fake News’, posted Canadian right-wing professor Jordan Peterson on X (then Twitter) in response to the announcement.

Since then, commentary on the bill has grown more intense. The bill sets up Canberra to be a ‘back-up censor’ ready to urge the big tech companies to ‘engage in the cancellation of wrong-speak’, wrote Peta Credlin in The Australian under the headline ‘“Ministry of Truth” clamps down on free expression’. For Tim Cudmore, writing in The Spectator, the bill represents nothing less than ‘the most absurdly petty, juvenile, and downright moronic piece of nanny-state governmental garbage ever put to paper’.

In reality, the intentions of the bill are far more modest than the establishment of a so-called Ministry of Truth. Indeed, they’re so modest that it may come as a surprise to many that the powers don’t already exist.

Put simply, the bill is designed to ensure that all digital platforms in the industry have systems in place to deal with mis- and disinformation and that those systems are transparent. It doesn’t give the Australian Communications and Media Authority any special ability to request that specific content or posts be removed from online platforms.

If the bill were to pass, digital platforms like WeChat would finally have to come clean about their censorship practices and how they’re applying them, or not, to content aimed at Australian users. It would also mean that digital platforms like X that once devoted resources to ensuring trust and safety on their platforms, but are now walking away from those efforts, are made accountable for those decisions.

If there’s one thing that Elon Musk’s stewardship of X has shown, it’s that even with an absolutist approach to free speech, content-moderation decisions still need to be made. Inevitably, any embrace of absolute free-speech principles soon gives way to the complexities of addressing issues like child exploitation, hate speech, copyright infringement and other forms of legal compliance. Every free-speech Twitter clone has had to come to this realisation, including Parler, Gettr and even Donald Trump’s Truth Social.

So, if all digital platforms inevitably engage in some sort of content moderation, why not have some democratic oversight over that process? The alternative is to stick with a system where interventions against mis- and disinformation take place every day, but they’re done according to the internal policies of each different platform and the decisions are often hidden from their users. What the Combatting Misinformation and Disinformation Bill does is make sure that those decisions aren’t made behind closed doors.

Under the current system, when platforms overreach in their efforts to moderate content, it’s only the highest-profile cases that get the attention. To take one recent example, a report by Credlin was labelled ‘false information’ on Facebook based on a fact-check by RMIT FactLab. The shadow minister for home affairs and cyber security wrote to Facebook’s parent company, Meta, to complain, and the ABC’s Media Watch sided with Credlin.

Would it not be better if this ad hoc approach were replaced with something more systematic that applied to regular members of the public and not just high-profile commentators? Under the proposed bill, all the platforms will have to have systems in place to deal with mis- and disinformation while also balancing the need for free expression. The risk of the status quo is not just that the platforms will not moderate content enough, but that they will overdo it at times.

When digital platforms refrain from moderating content, harmful content proliferates. But as platforms become more active in filtering content without publicly disclosing their decision-making, there’s an increased risk that legitimate expression will be stifled. Meta executives admitted at a recent Senate committee hearing that the company had gone too far when moderating content on the origin of Covid-19, for example.

In contrast to the Australian government’s modest approach is the EU’s Digital Services Act, which just came into effect last week. That act heaps multiple requirements on the platforms to stop them from spreading mis- and disinformation. Many of these requirements are worthwhile, and in a future article, I’ll make the case for what elements we might like to use to improve our legislation. Fundamentally, the act represents a positive step forward by mandating that major social networks such as Facebook, Instagram and TikTok enhance transparency over their content moderation processes and provide EU residents with a means to appeal content-moderation decisions.

But if critics wish to warn about Orwellian overreach, they’d do better scrutinising the EU’s Digital Services Act, not Australia’s proposed Combatting Misinformation and Disinformation Bill. In particular, they should take a look at one element of the European legislation that enables the EU Commission to declare a ‘crisis’ and force platforms to moderate content according to the state’s orders. That sets up a worrying precedent that authoritarian rulers around the world are sure to point to when they shut down internet services in their own countries.

After years of relative laissez-faire policymaking, the world’s biggest tech companies are finally becoming subject to more stringent regulation. The risk of regulatory overreach is real and critics are right to be wary. But the Australian government’s proposed solution, with its focus on scrutinising the processes the platforms have in place to deal with mis- and disinformation, is a flexible approach for dealing with a problem that will inevitably continue to grow. And unlike the European strategy, it avoids overreach by both the platforms and the government.

Presenting intelligence: from Iraq WMD to the new era of ‘strategic downgrades’

Recent research from ASPI finds that Philip Flood’s 2004 inquiry into Australian intelligence agencies proved an inflection point in the national intelligence community’s development. In addition, the Flood report grappled with a matter at the heart of the intelligence failure on Iraqi weapons of mass destruction, and one of significant contemporary relevance: public presentation of intelligence for policy purposes.

Flood laid out the arguments against public release, including risks to intelligence sources and methods, sensitivities of intelligence-sharing arrangements and partnerships, and the possibility that public exposure could distort the intelligence-assessment process by making analysts more risk-averse. He might have added a few other negatives to the list:

  • the threat posed by the aggregation of data—the ‘mosaic effect’. Even seemingly innocuous data can be consequential when aggregated and recontextualised
  • the risk that the nuances in intelligence assessments will be lost in public presentation (a factor in the Iraq WMD debacle)
  • the possible deleterious effects of selective declassification on government transparency.

Nonetheless, Flood acknowledged circumstances in which democratic policy decisions (especially about going to war) necessitated some form of suitable public release of intelligence. He pointed out a commonplace precedent: the use of sanitised intelligence to inform threat warnings to the Australian public (in the form of travel advisories).

Today, release of intelligence for statecraft purposes remains highly relevant, as evident from attempts by the US and UK governments in early 2022 to deter Russia from invading Ukraine by publicly revealing their intelligence about Moscow’s intentions and issuing regular intelligence-based updates.

Of course, the Iraq and Ukraine instances are not unique. Cold War governments on both sides of the iron curtain were prepared to leverage intelligence publicly for policy purposes or simply one-upmanship. Witness duelling defector statements and press conferences, the Kennedy administration’s public messaging during the Cuban missile crisis (including hitherto sensitive aerial imagery) and later the US declassification of satellite images highlighting Soviet violations of nuclear test bans and continuing bioweapons capability.

This continued in the 21st century. The UK publicly confirmed intelligence in November 2001 indicating al-Qaeda’s responsibility for the 9/11 terror attacks, and the Obama administration released intelligence obtained during the raid on Osama bin Ladin’s hideout. The UK would also issue a public statement on Syrian chemical weapons use, sourced to intelligence, in 2013 (including release of a complete Joint Intelligence Committee assessment). There are also regular references to intelligence-based conclusions without necessarily releasing intelligence itself—such as Russian culpability for the Salisbury poisonings. And there have been various US government indictments of hostile cyber operations (Chinese, Russian, Iranian, North Korean), in addition to cyberattack attribution by governments more generally.

Confronted in August 2021 with Russia’s worrying military build-up and hostile intent towards its neighbour, the US government first sought to leverage its intelligence knowledge behind closed doors. So, in mid-November 2021, CIA Director Bill Burns was sent to confront Moscow with what the US knew about its plans for an invasion. But, as Burns has since commented: ‘I found Putin and his senior advisers unmoved by the clarity of our understanding of what he was planning, convinced that the window was closing for his opportunity to dominate Ukraine. I left even more troubled than when I arrived.’

The Biden administration changed tack, to what Dan Lomas has termed ‘pre-buttal’, beginning in mid-January 2022 when the White House press secretary openly briefed the media on a Russian plot to manufacture a pretext for invasion, using a false-flag sabotage team. A fortnight later, in response to a press question, the Pentagon acknowledged that it knew the Russians had already prepared a propaganda video supporting this invasion pretext, for broadcast once an invasion commenced. Then, on 15 and 18 February, President Joe Biden revealed that US intelligence was now aware that more than 150,000 troops were assembled on Ukraine’s border awaiting an order to move. These efforts were buttressed by the UK’s public reference to Russian plans to install a friendly regime in Kyiv via a coup prior to the planned invasion.

Yet, as we know, the Russian invasion of Ukraine commenced on 24 February.

So, were these efforts a success or a failure? The obvious answer is that they failed: Russia wasn’t deterred. But was deterrence actually possible? In any case, the public release of intelligence did complicate and disrupt Moscow’s invasion plans and arguably contributed somewhat to the Russian military’s poor performance in the early stages of the conflict. What’s more, the audience wasn’t just Russia. Public release, beyond traditional intelligence sharing in classified channels, had the effect of alerting and cuing Ukraine. Perhaps most materially, the approach galvanised third parties post-invasion, especially in Europe. This involved overcoming some lingering distrust associated with the disastrous efforts to leverage intelligence diplomatically in 2002–03 over Iraq.

The US government has since explicitly laid out its strategy for what it calls ‘strategic downgrades’. It is an increasingly proactive approach to public disclosure, aided by the overwhelming volume of available open-source intelligence, which helps obscure the actual sensitive sources of the material disclosed.

Last month, Principal Deputy National Security Adviser Jon Finer detailed this strategy in a speech:

The deliberate and authorized public release of intelligence, what we now refer to as strategic downgrades, has become an important tool of the Biden administration’s foreign policy. This is a tool that we have found to be highly effective, but also one that we believe must be wielded carefully within strict parameters and oversight.

This speech was itself a form of public release of intelligence—and presumably was targeted again at both allies and adversaries.

The US has deployed this approach beyond just attempts to deter the Russians. For example, it has applied ‘strategic downgrades’ in relation to Chinese arms supplies to Russia, Wagner Group activities in Mali, and its own findings in relation to Chinese ‘spy balloons’.

The approach is underpinned by a formalised framework developed by US policymakers. Related decision-making is centralised in the National Security Council and the Office of the Director of National Intelligence. And its application is apparently limited to select situations—for example, when civilian lives or infrastructure are at risk, or to counter disinformation or false-flag operations.

Guidelines require that downgrades be accurate, be based on verifiable reporting, and be part of a broader plan that includes diplomacy as well as security and economic assistance. According to Finer: ‘It should always be in service of clear policy objectives. It’s not like you just get a piece of very interesting information that could sort of damage one of your adversaries and you decide that could be embarrassing to them, let’s put it out.’

‘Strategic downgrades’ are a potentially important tool for democratic governments, and US formalisation of the related strategy is a welcome development.

But public presentation of intelligence for policy effect deserves careful consideration and risk management. The landscape is complicated by the marked decline in public trust across the Western world and the emergence of a more uncertain strategic environment since 2003. Notably, invocation of intelligence in the political sphere—as with, inter alia, Iraq WMD, the course of the ‘global war on terror’ and Russia’s attempted election interference—necessarily politicises that same intelligence. Perhaps the most alarming example is the degree to which circulation of US intelligence on Russian interference in an increasingly toxic US political environment has effectively tarred US intelligence agencies with the same toxic politics.

And, as Finer observed: ‘You’ve got to be right, because if you go out alarming the world that something terrible is going to happen and you have it wrong, it will be much harder to use the tool effectively the next time.’

I think Flood would have agreed.

More stick, less carrot: Australia’s new approach to tackling fake news

An urgent problem for governments around the world in the digital age is how to tackle the harms caused by mis- and disinformation, and Australia is no exception.

Together, mis- and disinformation fall under the umbrella term of ‘fake news’. While this phenomenon isn’t new, the internet makes its rapid, vast spread unprecedented.

It’s a tricky problem and hard to police because of the sheer amount of misinformation online. But, left unchecked, public health and safety, electoral integrity, social cohesion and ultimately democracy are at risk. The Covid-19 pandemic taught us not to be complacent, as fake news about Covid treatments led to deadly consequences.

But what’s the best way to manage the spread of fake news? How can it be done without government overreach, which risks the freedom and diversity of expression necessary for deliberation in healthy democracies?

Last month, Minister for Communications Michelle Rowland released an exposure draft of a bill to step up Australia’s fight against harmful online mis- and disinformation.

It offers more stick (hefty penalties) and less carrot (voluntary participation) than the current approach to managing online content.

If passed, the bill will bring us closer to the European Union-style model of mandatory co-regulation.

According to the draft, disinformation is spread intentionally, while misinformation is not.

But both can cause serious harms including hate speech, financial harm and disruption of public order, according to the Australian Communications and Media Authority (ACMA).

Research has shown that countries tend to approach this problem in three distinct ways:

  • non-regulatory ‘supporting activities’ such as digital literacy campaigns and fact-checking units to debunk falsehoods
  • voluntary or mandatory co-regulatory measures involving digital platforms and media authorities
  • anti-fake-news laws.

Initial opinions about the bill are divided. Some commentators have called the proposed changes ‘censorship’, arguing it will have a chilling effect on free speech.

These comments are often unhelpful because they conflate co-regulation with more draconian measures such as anti-fake-news laws adopted in illiberal states like Russia, whereby governments arbitrarily rule what information is ‘fake’.

For example, Russia amended its Criminal Code in 2022 to make the spread of ‘fake’ information an offence punishable with jail terms of up to 15 years, to suppress the media and political dissent about its war in Ukraine.

To be clear, under the proposed Australian bill, platforms continue to be responsible for the content on their services—not governments.

The new powers allow ACMA to look under a platform’s hood to see how it deals with online mis- and disinformation that can cause serious harm, and to request changes to processes (not content). ACMA can set industry standards as a last resort.

The proposed changes don’t give ACMA arbitrary powers to determine what content is true or false, nor can it direct specific posts to be removed. The content of private messages, authorised electoral communications, parody and satire, and news media all remains outside the scope of the proposed changes.

None of this is new. Since 2021, Australia has had a voluntary Code of Practice on Disinformation and Misinformation, developed for digital platforms by their industry association, the Digital Industry Group (known as DIGI).

This followed government recommendations arising out of a lengthy Australian Competition and Consumer Commission inquiry into digital platforms. This first effort at online regulation was a good start to stem harmful content using an opt-in model.

But voluntary codes have shortfalls. The most obvious is that not all platforms decide to participate, and some cherry-pick the areas of the code they will respond to.

The Australian government is now seeking to deliver on a bipartisan promise to strengthen the regulator’s powers to tackle online mis- and disinformation by shifting to a mandatory co-regulatory model.

Under the proposed changes, ACMA will be given new information-gathering powers and capacity to formally request that an industry association (such as DIGI) vary or replace codes that aren’t up to scratch.

Platform participation in registered codes will be compulsory, and noncompliance will attract warnings, fines and, if unresolved, hefty court-approved penalties.

These penalties are steep—as much as 5% of a platform’s annual global turnover if it is repeatedly in breach of industry standards.

The move from voluntary to mandatory regulation in Australia is logical. The EU has set the foundation for other countries to hold digital technology companies responsible for curbing mis- and disinformation on their platforms.

But the draft bill raises important questions that need to be addressed before it is legislated as planned for later this year. Among them are:

  • how to best define mis- and disinformation (the definitions in the draft are different from DIGI’s)
  • how to deal with the interrelationship between mis- and disinformation, especially regarding election content. There’s a potential issue because research shows that the same content labelled ‘disinformation’ can also be labelled ‘misinformation’ depending on the online user’s motive, which can be hard to divine
  • why exclude online news media content? Research has shown that news media can also be a source of harmful misinformation (such as 2019 election stories about the ‘death tax’).

While aiming to mitigate harmful mis- and disinformation is noble, how it will work in practice remains to be seen.

An important guard against unintended consequences is to ensure that ACMA’s powers are carefully defined along with terms and likely circumstances requiring action, with mechanisms for appeal.

The closing date for public submissions is 6 August.

Seizing the memes of advantage in the Ukraine war and beyond

Of all the vagaries we label as ‘non-traditional security’, none is more amusing or indicative of the role of digital networks than a compressed, grainy image of a Shiba Inu—a Japanese dog breed that the North Atlantic Fellas Organization uses as its sign. With a swarm of members that include social media researchers, a former president of Estonia, US congressional representatives and military personnel, NAFO is living proof of the importance of memes in contemporary information warfare.

NAFO, and other distributed information campaigns, have become a hallmark of the highly mediatised Russian invasion of Ukraine. These informational insurgencies aim to counter propaganda initiatives with sarcasm and wit. Any post on Twitter or elsewhere advocating support for the Russian invasion, or the legitimacy of sham referenda to annex parts of Ukraine to Russia, is met with relentless mockery, sarcasm and memes from NAFO members, colloquially referred to regardless of age, gender or location as ‘Fellas’.

Contemporary information warfare is often rolled into wider issues involving cyberspace and cybersecurity. The link is usually made because information warfare is today most easily conducted through social media and other digital technologies, even though propagating and analysing it requires a significantly different skillset.

If we’re to understand contemporary information warfare, we must recognise that it is fundamentally and by default memetic. The approach taken by the US Department of Defense, now under review after Twitter and Facebook described it as an instance of coordinated inauthentic behaviour, demonstrates a pressing need for this. Meme warfare favours not the frontal assault, but insurgency. Pumping content into the digital aether is unsubtle and ineffective in shaping the narrative to suit strategic ends. There is a better way.

Memetic warfare, more commonly known as meme warfare, has floated around the fringes of the information warfare discourse for quite some time. The ‘meme’—defined by Richard Dawkins as the smallest possible unit of human culture—is at the core of this subset of information warfare. By progressively injecting small cultural elements into discourse on a matter, such as an invasion or other military operation, the idea is that a message can be spread unwittingly—using infection, rather than broadcast, as a medium of transmission.

As far back as 2006, the North Atlantic Treaty Organization recognised that meme warfare should be central to its defence initiatives. Of particular note was a proposal to integrate information operations, psychological operations and strategic communications in the form of a ‘meme warfare centre’ that would advise NATO commanders ‘on meme generation, transmission, coupled with a detailed analysis on enemy, friendly and noncombatant populations’.

These ideas have influenced NATO’s and the European Union’s efforts to expand their involvement in this space. In 2017, NATO and the EU established the European Centre of Excellence for Countering Hybrid Threats, which has a general focus on hybrid warfare across a number of domains, including the information domain. There’s also the Latvia-based NATO Strategic Communications Centre of Excellence, established in 2014, which provides tailored advice on information warfare, with a focus on its conduct online.

But analysis since then has cropped up rarely, and has tended to integrate meme warfare into wider information warfare to the point of conflating the two. One significant exception is the excellent piece published on this forum by Tom Ascott of the Royal United Services Institute in 2020. Ascott provides a highly informative overview of the history and contemporary applications of memetic warfare, from its pedigree and relationship to Soviet dezinformatsiya campaigns to its deployment in the 2016 US election. Alas, analysis in this space remains perilously thin on the ground relative to the technique’s deployment, which includes crowdsourced disinformation campaigns such as the #DraftOurDaughters initiative deployed by a group of 4chan users in a bid to dent the national security credentials of US presidential candidate Hillary Clinton in 2016.

In a networked environment in which individuals are empowered with all the publishing force of the Gutenberg press in the palms of their hands, empowering users to generate a suitable narrative is critical. If Australia and its allies are to replicate the successes of contemporary information warfare operations in Ukraine, an understanding of memes is more critical than ever. The role of NAFO in Ukraine is proof of this. Meme warfare is distributed and participatory, and understanding its power requires an understanding of internet culture.

What much of this relies on—and indeed, how the US, Australia and other like-minded countries may be able to leverage memetic warfare to their advantage—is the ability to create participatory online insurgencies. The best digital marketing initiatives are participatory—Spotify Wrapped, for example, enables users not only to collate data about their music habits for the year, but to share that data with others, contributing to narratives and meta-narratives about popular music. In effect, meme warfare draws from the same playbook by providing a series of cultural objects for individuals to latch onto, remix and reproduce online.

So, back to NAFO. To understand what’s significant about posting a compressed image of a Shiba Inu in reply to disinformation content on social media we need to have an understanding of its history. The Shiba Inu has a surprising history of use online as a meme symbolising irony, irreverence and wholesome clarity in the face of challenging circumstances. This includes the Doge meme, which appeared on Reddit and 4chan in 2010. That in turn influenced the development of a number of other Doge-like memes (usually in the form of edited iterations of the original). One of them, a compressed image of a Shiba referred to as Cheems, was shrunk further, applied to an edited image of the NATO logo, and proliferated widely on Twitter from June 2022 onwards.

On its own, the edited form of the NATO logo wouldn’t have spread. But once a meme with wide awareness and a spirit of irreverence was applied to it, the movement began to grow. The format enables users to create their own variations. There is no central authority. Members of the swarm create their own NAFO ‘Fellas’ avatars or gift them to others. Fellas work together, coordinating responses to drown out disinformation content and highlight instances of war crimes or Russian military failures.

Understanding this approach and reproducing it in a deliberate and targeted way in future conflicts can result in similar, highly effective memetic insurgencies in cyberspace.

Australia needs asymmetric options to counter coercive statecraft

There can be little doubt Australia’s strategic environment has evolved significantly in recent years. Across the Indo-Pacific, we now face a region characterised by a broad spectrum of geopolitical influences, from conflict and contest at one end to cooperation and collaboration at the other.

Through this spectrum, we’re seeing a diversification of the circumstances where all our elements of national power will need to contribute to our national security and prosperity goals.

We’re also seeing significant changes in the way some states wield power and influence in the region. Of course, Australia continues to support an Indo-Pacific characterised by rules-based cooperation and stability. But we must also be prudent in recognising the prevalence of new forms of rivalry and competition that fall outside our preferred models.

Australia, like its friends and partners in the Pacific, needs options to challenge the threat posed by coercive statecraft. Adversarial actors including China and Russia are using agile and malign methods to secure strategically significant outcomes, often to our disadvantage.

And to date, these methods have proven both effective and relatively low cost. So, for Australia to more effectively counter this coercive statecraft it will be important to raise the costs—both economically and politically—for the antagonists.

In Chinese strategic literature, there is a strong emphasis on adopting comprehensive approaches to wielding national influence. One of the more important works on this in recent years, Unrestricted warfare, espouses a persistent campaign for advantage that:

Breaks down the dividing lines between civilian and military affairs and between peace and war… non-military tools are equally prominent and useful for the achievement of previously military objectives. Cyberattacks, financial weapons, informational attacks—all of these taken together constitute the future of warfare. In this model, the essence of unrestricted warfare is that the ‘battlefield is everywhere’.

Beijing’s purpose in applying these methods is to achieve its strategic goals below our thresholds of military response. And although China incorporates the threat of military force among the suite of its coercive measures, it nevertheless seeks to render Western military capabilities irrelevant by achieving its goals without triggering conflict.

Once we understand this, it becomes clearer why our own military capabilities, including those for power projection and networked warfighting, are necessary but not sufficient.

Whereas the West has tended to equate the possession of superior military force with deterrence, that plainly isn’t working against China’s comprehensive coercion.

And where Australia’s armed forces have focused on force projection as an essential capability, the time has come to weave these capabilities into broader options for influence projection.

Rather than narrowly focusing on dominating the battlespace with forward-deployed force elements, the military can broaden its value proposition by complementing whole-of-government efforts to out-position rival powers in economic, diplomatic and information-influence campaigns.

As China has amply demonstrated, coercion does not necessarily involve the application of physical violence. Influence comes in many forms and can apply to peacetime and wartime situations, as well as those in between.

As NATO has suggested, ‘even lethality, the ultimate penalty of physical force, is giving way to abstractions of perception management and behavioral control, a fact which suggests that strategic success, not tactical victory, is the more coveted end-state’.

Any Australian efforts to engage in constant competition and counter China’s aggressive political warfare methods will need to have both defensive and offensive elements.

In terms of deterrence, by shining a light on these actions and actively calling them out, we can begin to erase the ambiguity and uncertainty on which they rely. Since these methods depend on cultivating confusion and uncertainty, an important countermeasure is to establish robust, evidence-based narratives that demonstrate to the world what is going on.

But exposure alone will not always be enough. And that’s where we need to be able to move from a defensive to an offensive mindset. We will need to become tough-minded enough to ensure, more forcefully, that antagonists desist from actions that hurt us.

This will require the development of robust cost-imposing strategies to alter their calculus. Such strategies will need to convince threat actors that the price of achieving their aims through political warfare methods exceeds what they are willing or able to pay. And that means we need to signal clearly that we have both the capacity and the will to take actions that will impose these costs.

Cost-imposing approaches can be proportional or asymmetrical. Authoritarian regimes are deeply fearful of threats to their legitimacy, making them vulnerable to well-considered influence operations which might hold that legitimacy up to question.

Such influence actions might be used to inject information into a closed society that authoritarian regimes would not want disclosed. This could range from alternative perspectives on current events that differ from regime-imposed narratives to exposure of political and economic corruption. These kinds of disclosures could impose significant costs on a regime constantly worried about maintaining domestic control.

This points to a broader theme, sometimes called ideational power. Authoritarian regimes like the Chinese Communist Party depend on repression rather than democratic legitimacy to maintain control, as has been made painfully evident in Hong Kong.

Authoritarian regimes are deeply fearful of instability, rendering them brittle at home and severely limiting their appeal abroad. Australia should take a robust and comprehensive approach to emphasising and contrasting its own values and methods against authoritarian ones.

Throughout history, long-term campaigns for influence have had this ideological dimension to them—the contrast between light and dark, between inclusiveness and domination. So Australia should be emphasising, not minimising, the ideological contrast between its own values and those represented by authoritarianism.

It makes sense to develop and enhance asymmetric advantages. For example, China doesn’t have access to anything like the network of alliances and partnerships that Australia has. And although Australia’s economic size pales in comparison to China’s, our network of friendships can be a source of significant strength.

Put another way, this is about how we play our strengths against their weaknesses. Now is the time to be exploring how our creativity and adaptability can be used to identify, explore and exploit the critical vulnerabilities of authoritarian regimes, to be able to get at them in ways that hurt.

To be clear, this doesn’t necessarily mean the application of military force—sending people into harm’s way—although those skills will remain essential. Rather, it’s about using discrete, tailored options to go after background vulnerabilities.

The military notion of unconventionality is helpful here:

In unconventional warfare, the emphasis is on defeating the opponent without a direct military confrontation…typically, the unconventional forces act undercover, or discretely, their targets are not of an exclusively military nature, and the techniques employed are distinct from those specific to purely military operations.

This logic could be broadened to other dimensions of Australian statecraft. The actions needed to impose costs of sufficient magnitude to deter China’s coercive behaviour would likely need to hold at risk Beijing’s core interests—the things it fears and values—without necessarily engaging in physical attack.

This would likely need to involve a more holistic or unrestricted approach, much as China itself pursues. And while we need to stay within the bounds of the international rules and norms we espouse, there is nevertheless fertile ground for exploring unconventional and unorthodox means for generating costs.

And although prudent self-reliance dictates that Australia should not render itself overly dependent on external providers to guarantee our security, it will be vital to work multilaterally to counter China’s malign influence. Benefits will arise from cooperation in non-traditional aspects of military operations, including in the information and economic domains.

By working with our partners in the Indo-Pacific, including the Quad, Indonesia and the Pacific island countries, to develop new and disruptive options, Australia can bolster its ability to not only expose China’s malign statecraft, but also to adopt cost-imposing strategies that will deter grey-zone political warfare.

The case for a ‘disinformation CERN’

Democracies around the world are struggling with various forms of disinformation afflictions. But the current suite of policy prescriptions will fail because governments simply don’t know enough about the emerging digital information environment, according to Alicia Wanless, director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace.

Speaking in a panel discussion on how democracies can collaborate on disinformation at ASPI’s grey zone and disinformation masterclass last week, Wanless went on to say that what we really need is ‘a disinformation CERN’—in reference to the international particle physics research outfit, where countries pool their resources to operate the Large Hadron Collider, study results and share findings. The scale and reach of the disinformation problem is so huge that only research cooperation of this kind can address the global shared threat to information systems.

Our democratic societies are doomed to decline if we don’t make major efforts to arrest the effects of disinformation, said Wanless. Fellow panel member Elizabeth Braw, a resident fellow at the American Enterprise Institute, agreed that democracies are in the middle of a generalised disinformation crisis.

At the same time, incentives to act may be blunted as democracies become numb to a multitude of cascading political crises driven by disinformation. These are having a global-warming-type effect on our political and cultural ecosystems—disinformation is turning up the temperature and toxicity of public discourse, but it also perpetuates denialism about the problem of disinformation itself.

Wanless explained that there are two major areas of research shortfall that democracies need to address. The first is how disinformation flows around global, national, local and individual information landscapes, for example, among news, social media and private messaging apps.

The second gap is in our understanding of both its short- and long-term impacts. Do disinformation campaigns change election outcomes? What’s the relationship between disinformation and politically motivated violence? And what might be the effects on the health of political systems over months or years of disinformation? Wanless noted that from an academic standpoint, most theories of communication are stronger on accounting for transmission but very weak on effects.

In addition, there are yawning knowledge gaps on the effects of disinformation countermeasures. For example, said Wanless, there are very few credible studies on the effects of de-platforming disinformation spreaders. Does it help in limiting disinformation? Or do the perpetrators just move underground to more niche platforms, where followers can be further radicalised and exhorted to violence? To help answer these questions, Washington DC’s Capitol insurrection of 6 January needs to be examined more closely.

The other problem for research is that private companies hold most of the relevant data and are unwilling to share it widely. The platforms regard their data as valuable proprietary information and to date have only been willing to share small amounts with handpicked research institutions on particular cases.

A well-funded, multinational research effort could help spearhead a broad-based, collaborative approach with the digital information industry that holds the bulk of data on information transmission and user behaviour. The big search engines, social media platforms, television networks, public broadcasters and newspapers of record should all be included.

On the question of how much such research would cost and who would lead it, Wanless said she’s costed a number of models that start from US$10 million per year for basic research and rise from there. Given the cost of disinformation to economies and societies—how much has Covid-19-related disinformation alone cost in terms of loss of life and income?—it seems like a minuscule investment compared to what Western democracies spend on military hardware.

Wanless believes that platforms should in some way be involved in funding this research and that discussions around taxes on them should be taking this into account. But the effort should probably be led by academic institutions and civil society rather than the national security community.

Braw agreed with Wanless that better research is critical, but so is building whole-of-society resilience, starting immediately. If this isn’t done, responses to disinformation crises risk continually exacerbating their initial effects, until societies are caught in a spin-cycle of chaotic reaction.

Democracies need to get out of their defensive postures. Disinformation cannot be beaten with de-platforming and labelling. We need to get better at public messaging and be in constant preparation for crisis communication. When Covid-19 hit, governments should have been ready to go with public communication and planning for food, water, energy and fuel shortages.

A good example of multilateral cooperation and public communication on a grey-zone crisis, said Braw, was the 2018 poisoning of former Russian double agent Sergei Skripal and his daughter Yulia using the chemical weapon Novichok in the UK. The UK was able to quickly stand up an informal alliance of countries that expelled Russian diplomats and censured and sanctioned Moscow.

Companies are on the front line of disinformation and grey-zone operations, and they need to be consistently involved in a whole-of-society response. But it’s important to note that, according to Wanless, the private sector is part of the problem. There’s money and power to be generated by inflaming fear and uncertainty.

Braw waxed nostalgic about the early days of social media—visiting the offices of Twitter when it was just a handful of guys and a few computers. Governments completely failed to see how these platforms would transform politics, change the nature of governance and even threaten democratic institutions.

To add to the challenge, domestic political actors are increasingly getting in on the disinformation action and have no real incentives to neutralise its effects.

In terms of constraints, international law is much too vague on the subject of propaganda and there are no strong agreed guidelines that platforms can implement. So while state regulation may be an old-fashioned, ‘European’ response, said Braw, it’s probably the only effective way forward. Building a multilateral approach to regulating a decentralised, global information space will be the critical factor for success in the fight against disinformation.

Challenges for the US and Australia in the grey zone

Australia and the United States share an increasing security burden thanks to the growing sophistication of information warfare, according to Jake Wallis, a senior analyst at ASPI, and Katherine Mansted, senior adviser at the ANU’s National Security College.

In a panel discussion last week for ASPI’s masterclass on ‘The US–Australia alliance in a more contested Asia’, Mansted and Wallis explored the challenges of dealing with a strategic environment in which grey-zone actions, especially involving information, are becoming a permanent feature.

The information warfare experienced by the US and by Australia over the past half-decade has similar characteristics.

Russian grey-zone activities against the US have grown and evolved, but information warfare remains the key component. It is relatively cheap and, using social media’s unparalleled power as a propaganda vector, can be extraordinarily effective.

The most recent US intelligence assessment on foreign influence in the 2020 presidential election concludes that Russia coupled its online efforts with political influence campaigns. In some cases, Russia directly provided domestic political actors with disinformation talking points, which were then recycled through traditional media outlets.

Russian disinformation is especially geared towards discrediting powerful elements of democracy such as the free press and elections, infecting every area of political, social, economic and cultural division in a society with partisanship, and delegitimising shared truths. It works even better if domestic actors start to use the same tactics in the pursuit of profit and power.

The Australian grey-zone experience is mostly about China—although a recent ABC Four Corners report explored the extent of Russian information and intimidation operations among the Russian diaspora in Australia.

And while Russia has been a relatively noisy grey-zone actor against NATO allies, Chinese grey-zone efforts are better funded, more widespread and highly integrated, warns Wallis. China has a lot more capacity than Russia, and its grey-zone efforts span economic coercion, cyberattacks, political and media influence campaigns, science and technology talent recruitment, and direct targeting of corporations that cross Beijing’s red lines.

Wallis listed some recent examples from China’s playbook, including bans on Australian coal, beef, barley and wine, talent recruitment programs in Australian universities that aim to gain access to expertise on sensitive technologies, as well as the growth in coordinated online disinformation campaigns driven by artificial intelligence.

But Wallis also noted that China’s grey-zone activities are becoming more overt and that Beijing seems increasingly comfortable with raw displays of dominance. Information warfare narratives are now geared towards not just creating confusion and division, but eliciting fear and compliance.

China’s cancellation of NBA broadcasts in response to a tweet by the general manager of the Houston Rockets in support of pro-democracy protestors in Hong Kong is instructive, as are China’s threats to banks not backing its authoritarian actions in the territory.

All of this is complemented by the so-called wolf-warrior diplomacy typified by foreign ministry spokesman Zhao Lijian, who rose from being an envoy in Pakistan to his current position through his deployment of hyperaggressive anti-Western rhetoric on Twitter.

This kind of information warfare is quick to exploit any opportunity to amplify set narratives of a West in terminal decline, heirs to a wasted democratic system that can’t compete with China’s techno-authoritarianism. China’s information war on the West’s Covid-19 vaccines and troubled response to the pandemic is one example.

Effective multilateral responses to global crises are vulnerable to this kind of disinformation churn and narrative jostling. But, as Wallis pointed out, in China’s strategic calculus, it doesn’t need to convince the West. It is aiming to convince everyone else.

Therefore, Australia should also be extremely concerned about its near region, where fragile democracies and their fast-growing economies are especially susceptible to information warfare by both external actors and internal enablers.

What should Australia and its major ally be doing to counter grey-zone coercion and information warfare?

Mansted explained that after five years of disinformation wars, US Department of Justice investigations, comprehensive intelligence assessments and Senate inquiries on Russian foreign interference, the Biden administration is now viscerally aware of the threat. What’s more, President Joe Biden has made democratic resilience the centrepiece of US foreign policy.

So far, the Biden administration has responded to Russia with a suite of targeted sanctions. It is also carefully assessing the domestic propaganda environment.

Biden has been assertive about the need to actively challenge China’s and Russia’s narratives about democratic decay. In the administration’s foreign policy statements to date, Biden has continually reiterated his worldview that only strong democracies, polities that retain some notion of the public interest, can effectively address civilisational challenges such as climate change.

The administration’s public diplomacy is about consistently and clearly articulating democratic values and demonstrating their strength. Australia should be ready to do the same.

But Mansted also pointed out that we should recognise the difference between democratic resilience—that is, getting our own house in order—and democracy promotion. In the alliance context, Australia may have an important role to play in translating Biden’s democracy agenda to our regional Indo-Pacific context—lest partners perceive it as imposing values on them, rather than working collectively to defend shared values.

Ultimately, Mansted sounded a note of cautious optimism. While China’s increasingly sophisticated coupling of economic, informational and diplomatic tools of influence and coercion is worrying, she argued that there are limits to the effectiveness of each of these tools.

The US experience with interference has shown that more public awareness of Russian tactics may have reduced their impact. Efforts by the US government to expose Russian operatives, and their proxies, have degraded their ability to operate deniably, while more timely cooperation with social media has resulted in the disruption of more state-coordinated propaganda.

Australia and the US can consolidate these gains. At an inter-government level, there’s more work to be done to establish a shared lexicon on grey-zone homeland security challenges and to continue to build domestic resilience through transparent public communications.

To adapt the deterrence capability of the Australia–US alliance to 21st-century grey-zone challenges, shared frameworks for attributing and responding to grey-zone threats are imperative. Early diagnosis and response, narrative trackers, active disruption of disinformation campaigns, and consistent public communications will be key elements of active defence in the grey zone.

Covid-19 disinformation campaigns shift focus to vaccines

In ASPI’s latest report on Covid-19 disinformation, Albert Zhang, Emilia Currey and I investigated how the narrative of an American vaccine trial killing soldiers in Ukraine (which did not actually happen) was laundered from the propaganda site of a pro-Russian militia into the international information ecosystem. What this case study highlights is the way in which the battle for control of the coronavirus narrative is shifting from the origins of the virus to the hopes for a vaccine.

On 17 July, a press release was posted in Russian and English on the website of the Lugansk People’s Republic, a pro-Russian militia and self-declared government in eastern Ukraine. The statement described a vaccine trial by ‘Americal [sic] virologists’ in the Ukrainian-controlled city of Kharkiv which had led to the deaths of several volunteers, including Ukrainian soldiers.

From the beginning, this narrative had strong political undertones of both anti-Americanism and opposition to the legitimate Ukrainian government. The implication was that Americans did not value Ukrainian lives, and so wanted to test their dangerous vaccine on Ukrainians rather than on their own people—and that the puppet government in Kiev was letting them do it.

The completely fictional story was quickly picked up by Russian-language media and by a handful of fringe English-language conspiracy sites. Despite multiple fact-checks which found that no such incident ever occurred, the narrative swiftly established a foothold in the digital information ecosystem. Beginning with Russian-language outlet News.ru, the story was further embellished by attributing the deadly vaccine trial to US company Moderna and its mRNA-1273 vaccine candidate.

Intentionally or not, the effect of this move was to expand the appeal of the disinformation narrative to conspiracy and anti-vaxx communities. These communities’ generic opposition to vaccinations has coalesced in recent weeks into a specific fixation on mRNA vaccines, which they believe change a person’s DNA (they do not), among other things.

Our research has found that the disinformation narrative gained relatively little traction on English-language social media until 24 July, a week after it was originally published. On that day, Twitter and Facebook shares of English-language articles about the fictional vaccine trial suddenly and dramatically spiked.

This spike may be linked to an unusual viral Facebook post which began to spread that day, which is discussed in more detail in the report. It’s difficult to reconstruct exactly what occurred, however, as Facebook appears to have clamped down on the viral growth of this piece of disinformation, deleting much of the relevant data in the process.

The clampdown came too late. Despite the best efforts of fact-checkers, the disinformation narrative of a US vaccine killing Ukrainian soldiers spread in multiple languages in addition to Russian and English, including Spanish, Italian, Romanian, Czech and French.

As of early August, it had effectively reached the final stage of the information-laundering cycle: decontextualisation. This is the phase in which the disinformation is just out there in the information ecosystem and asserted as fact, completely independent of its original context. It has since been incorporated into a range of other mis- and disinformation narratives, from conspiracies about US bioweapons labs in Ukraine to fuelling resistance to the Moderna vaccine in Canada.

This case study is one example of a broader shift which is taking place in the fight for control over the narrative around Covid-19. Earlier in the crisis, when there was a much greater international focus on the nature and origin of the virus, disinformation efforts were centred around those questions. We saw, for example, the duelling conspiracy theories about whether the virus escaped from a Wuhan lab or was created in a US Army medical facility and released in China.

As the global conversation shifts to the hopes for a vaccine, naturally the targets of disinformation efforts shift too. While the medical efficacy of any vaccine is essential, there’s another ingredient which is crucial for the success of any mass vaccination effort: public trust.

Intelligence agencies around the world have warned of state-linked cyber operations targeting vaccines and vaccine manufacturers. Such efforts are aimed at stealing intellectual property to give a nation’s vaccine candidates a leg-up in the race for the first safe and effective Covid-19 vaccine.

It should be assumed that some state and non-state actors are prepared to consider other tactics to give their own vaccine candidates the edge, including grey-zone disinformation campaigns. It’s perhaps worth noting that the US–Ukraine vaccine disinformation was launched the day after Russia announced plans to mass-produce its own vaccine in a matter of weeks.

While it might have been hoped that the global search for a vaccine would be above this kind of geopolitical jostling, clearly that’s not the case. Policymakers, media outlets, social media platforms and vaccine manufacturers should be aware that politically motivated disinformation is only likely to increase as the race for a Covid-19 vaccine intensifies.

How can journalists avoid being used in disinformation operations?

Twitter suspended 16 accounts earlier this month for breaching the platform’s policies on ‘manipulation and spam’. The move followed an investigative report by The Daily Beast that uncovered a network of fake personas presented as consultants or freelance journalists. These manufactured identities pushed out articles advancing anti-Iran and anti-Qatar narratives favourable to the United Arab Emirates. They were surprisingly successful, with more than 90 reports appearing in 46 different publications, mainly US media outlets.

This case is merely one of many information operations emanating from the Gulf and the Middle East. Viewed together, they demonstrate how the role of the media and journalists as guardians of the public interest can be manipulated to project false content into mainstream spheres.

Journalists play a crucial role in bringing the truth to light and holding the powerful to account. Authoritarian regimes see how high-quality journalism can erode their legitimacy and support by exposing poor governance. The result is attempts to censor the free press, not only at home but also beyond autocratic states' borders.

The Middle East provides myriad examples of autocratic leaders seeking to silence criticism from journalists overseas. The killing of Washington Post reporter Jamal Khashoggi by Saudi Arabia is one of the most notable and demonstrates how far some states will go to eradicate dissent. Iran is also known for its efforts to threaten critical journalists based overseas and to harass their Iran-based family members.

However, such states also exploit public trust in journalism, and weaponise journalists and news media organisations, to amplify propaganda and disinformation favourable to their regimes. Indeed, inauthentic content doesn’t usually get much traction unless it is picked up and promoted by legitimate platforms.

A key tactic of the Iran-aligned ‘Endless Mayfly’ disinformation campaign, for example, was to publicly and privately engage journalists and activists on Twitter in an attempt to disseminate and amplify false content that advanced Iranian state interests.

Similarly, news media played a role in propagating a Saudi-based Twitter disinformation campaign following the blockade of Qatar in 2017. A researcher in Qatar says Saudi-based bots were used to amplify anti-Qatar hashtags, as well as hashtags painting a manipulated picture of grassroots opposition to the Qatari government and the ruling family. The campaign sought to legitimise the stance of the blockading countries, Saudi Arabia, the UAE and Bahrain, which have long criticised Qatar for its friendly ties with Iran and alleged funding for, and support of, terrorist groups. BBC Arabic's 'Trending' service picked up the story and reported on the trending hashtags. This example shows how easily media organisations can find themselves assisting in the spread of state-backed disinformation.

Another operation from the Gulf came in the form of a tit-for-tat hacking and leaking of emails that played out between the UAE and Qatar. After the blockade was implemented, there were numerous cases of unnamed sources providing hacked emails to journalists and media organisations. Some of these outlets went on to publish stories that stood to influence US foreign policy in the Middle East.

For instance, hacked emails from the accounts of the UAE ambassador to the US, Yousef Al Otaiba, and US businessman and Republican fundraiser Elliott Broidy* sought to erode US support for the blockade of Qatar. Conversely, hacked emails and phone and text messages from an Iranian businessman and Qatari officials attempted to push an anti-Iran and anti-Qatar message to the US administration.

The ethics of publishing news based on hacked information that seeks to further a state's geopolitical objectives have been hotly debated. David Kirkpatrick, a reporter at the New York Times, has said that 'if we were to start rejecting information from sources with agendas, we might as well stop putting out the paper'. Others have argued that there's no issue as long as the information has been verified. Still, some commentators are concerned about the potential consequences of these practices, pointing to how hacked emails were used to derail Hillary Clinton's 2016 presidential campaign.

Regardless, the hacking of emails and the distribution of content have continued, and efforts to utilise journalists and news outlets in the West have not ceased. It’s therefore important that all elements of news media—journalists, editors and publishers writing and vetting articles—have a high degree of literacy on the agents, tactics and infrastructure of disinformation, and are aware of best practices when reporting such content.

Being as transparent as possible without compromising sources, verifying and authenticating information, and contextualising content are all necessary for those reporting on disinformation or on information that stands to advance the interests or agenda of a state, a group or an individual. These principles were followed in reports about Al Otaiba and Broidy, which noted that the information was provided by 'those critical of Emirati influence in Washington'. The articles also provided context on the UAE–Qatar rivalry and fractious relations within the Gulf.

The ‘publish or not’ decision becomes even more fraught when material arrives on a journalist’s desk without any indication of where it came from. Dealing with that requires substantial research and careful judgement calls.

A failure to follow clear professional practices will see journalists and news outlets relegated to pawns in the geopolitical games of states, rather than performing the much-needed function of exposing these tactics.

Disinformation and information operations more broadly constitute a multifaceted problem. A wide range of actors share responsibility for responding, including political actors, social media platforms, civil society groups and even individuals not directly involved in the process.

Journalists and media outlets can often find themselves at the coalface of information operations. While many in the field have a high degree of awareness of and resilience to such tactics, the profession must develop a clear understanding of how these operations work if journalists are to avoid being used by actors out to deceive.

* Editors’ note, 29 July 2020: An earlier version of this post identified Elliott Broidy as a ‘UAE lobbyist’. A representative for Mr Broidy has since advised The Strategist that Mr Broidy was not a lobbyist for any foreign country.

Bushfires, bots and the spread of disinformation

As fire continues to wreak havoc across large parts of the country, Australia is battling another crisis online: the waves of misinformation and disinformation spreading across social media. Much of the media reporting on this has referred to 'bots and trolls', citing a study by Queensland University of Technology researchers that found about a third of the Twitter accounts tweeting on a particular bushfire-related hashtag showed signs of inauthentic activity.

We can’t fight disinformation with misinformation, however. It is important to be clear about what is, and what is not, happening.

There’s no indication as yet that Australia is the target of a coordinated disinformation ‘attack’. Instead, what we’re seeing online is a reflection of the changing information environment, in which high-profile national crises attract international attention and become fuel for a wide array of actors looking to promote their own narratives—including many who are prepared to use disinformation and inauthentic accounts.

As online discussion of the bushfire crisis becomes caught up in these tangled webs, from conspiracy theories to Islamophobia, more and more disinformation gets woven into the feeds of real users. Before long, it reaches the point where someone who starts off looking for information on #AustraliaFires winds up 10 minutes later reading about a UN conspiracy to take over the world.

The findings of the QUT study have been somewhat misconstrued in some of the media reporting (through no fault of the researchers themselves). There are a few factors to keep in mind.

First, a certain amount of inauthentic activity will be present on any high-profile hashtag. Twitter is full of bot accounts that are programmed to identify popular hashtags and use them to sell products or build an audience, regardless of what those hashtags are. And the QUT study's small sample (315 accounts) makes it difficult to determine how representative that sample is of the level of inauthentic activity on the hashtag as a whole.

Second, the QUT study relied on a tool called Bot or Not. This tool and others like it—which, as the name suggests, seek to automatically determine whether an account is a bot or not—are useful, but it’s important to understand the trade-offs they make when you’re interpreting the results. For example, one factor which many bot-detection tools look at is the age of the accounts, based on the assumption that newer accounts are more likely to be bots. That may in general be a reasonable assumption, but it doesn’t necessarily apply well in a case like the Australian bushfire crisis.

Many legitimate users may have recently joined Twitter specifically to get information about the fires. On the flipside, many bot accounts are bought and sold and repurposed, sometimes over several years (just search ‘buy aged Twitter accounts’ on Twitter for yourself to see how many are out there). Both of these things will affect the accuracy of a tool like Bot or Not. It’s not that we shouldn’t use tools which claim to detect bots automatically, but we do need to interpret their findings based on an informed appreciation of the factors which have gone into them.
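To make that trade-off concrete, here's a minimal sketch of the kind of rule-based scoring such detectors lean on. It is not the actual logic of Bot or Not or any other real tool; the features, weights and thresholds below are invented purely for illustration. The point is that a rule like 'new accounts are suspicious' flags a genuine user who joined Twitter during the fires, while an aged, repurposed bot slips past the same check.

```python
# Hypothetical bot-scoring heuristic, for illustration only.
# The features, weights and cut-offs are invented; they do not
# reproduce the behaviour of Bot or Not or any real detector.

from dataclasses import dataclass
from datetime import date


@dataclass
class Account:
    created: date          # when the account was registered
    tweets_per_day: float  # average posting rate
    followers: int
    following: int


def bot_score(acct: Account, today: date = date(2020, 1, 10)) -> float:
    """Return a score in [0, 1]; higher means 'more bot-like'."""
    age_days = (today - acct.created).days
    score = 0.0
    if age_days < 30:               # treats new accounts as suspicious
        score += 0.5
    if acct.tweets_per_day > 50:    # very high posting rate
        score += 0.3
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 0.2                # follows many, followed by few
    return min(score, 1.0)


# A real person who joined during the fires looks 'bot-like' ...
new_local = Account(created=date(2020, 1, 5), tweets_per_day=8,
                    followers=3, following=120)

# ... while a purchased, years-old bot sails through the age check.
aged_bot = Account(created=date(2014, 6, 1), tweets_per_day=60,
                   followers=40, following=2000)

print(bot_score(new_local))  # 0.7 -> flagged as likely bot
print(bot_score(aged_bot))   # 0.5 -> borderline, despite being a bot
```

Real detectors use far richer features than this sketch, but the underlying issue is the same: any fixed assumption about what 'normal' accounts look like will misfire in an event that changes how normal users behave.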

Finally, there isn’t necessarily a link between bots and disinformation. Disinformation is often, and arguably most effectively, spread by real users from authentic accounts. Bots are sometimes used to share true, helpful information. During California’s wildfires in 2018, for example, researchers built a bot which would automatically generate and share satellite imagery time-lapses of fire locations to help affected communities.

There’s clearly a significant amount of disinformation and misleadingly framed discussion being spread on social media about the bushfires, particularly in relation to the role of arsonists in starting the fires.

However, the bulk of it doesn’t appear to be coming from bots, nor is it anything so straightforward as an attack. Instead, what appears to have happened is that Australia’s bushfire crisis—like other crises, including the burning of the Amazon rainforest in 2019—has been sucked into multiple overlapping fringe right-wing and conspiracy narratives which are generating and amplifying disinformation in support of their own political and ideological positions.

For example, fringe right-wing websites and media figures based in the United States are energetically driving a narrative that the bushfires are the result of arson (which has been resoundingly rejected by Australian authorities) based on an ideological opposition to the consensus view on climate change. Their articles are amplified by pre-existing networks of both real users and inauthentic accounts on social media platforms including Twitter and Facebook.

QAnon conspiracy theorists have integrated the bushfires into their broader conspiracy that US President Donald Trump is waging a secret battle against a powerful cabal of elite cannibalistic paedophiles. Believers in the ‘Agenda 21/Agenda 2030’ conspiracy theory see it as proof of ‘weaponised weather control’ aimed at consolidating a United Nations–led global takeover. Islamophobes are blaming Muslim arsonists—and getting thousands of likes.

And that’s not even touching the issue of misleading information that’s been spread by some Australian mainstream media.

It’s not just the climate that has changed. The information ecosystem in which natural disasters play out, and which influences the attitudes and decisions the public makes about how to respond, is fundamentally different from what it was 50, 20 or even five years ago. Disinformation is now, sadly, a normal, predictable element of environmental catastrophes, particularly those large enough to capture international attention. Where once we had only a handful of Australian newspapers, now we have to worry about the kind of international fringe media outlets which think the US government is putting chemicals in the water to make frogs gay.

This problem is not going away. It will be with us for the rest of this crisis, and the next, and the next. Emergency services, government authorities and the media need to collaborate on strategies to identify and counter both mis- and disinformation spreading on social media. Mainstream media outlets also need to behave responsibly to ensure that their coverage—including their headlines—reflects the facts rather than optimising for clicks.

It would be easy to dismiss concern about online disinformation as insignificant in the face of the enormous scale of this crisis. That would be a mistake. Social media is a source of news for almost all Australians, and increasingly it is the main source of news for many. Responding to this crisis and all of the crises to come will require national cohesion, and a shared sense of what is true and what is just lies, smoke and mirrors.
