Tag Archive for: Disinformation

To combat disinformation, Japan could draw lessons from Australia and Europe

Japan is moving to strengthen its resilience to disinformation, though so far it’s only in the preparatory stage.

The EU and some countries have introduced content-moderation requirements for digital platforms. By contrast, Japan has so far proceeded only with expert discussion of countermeasures against disinformation. While that discussion is welcome, Tokyo needs to consider establishing its own standards and joining a growing global consensus on countering disinformation, including foreign information manipulation linked to malign state actors.

2024 was a tough year for Japan in countering disinformation campaigns. Immediately after the Noto earthquake in January, false rescue requests were widely spread on social media, diverting scarce resources of emergency services away from people who genuinely needed help. After record-breaking rainfall hit the Tohoku region in July, more than 100,000 spam posts disguised as disaster information appeared on social media. And ahead of the September election for the Liberal Democratic Party’s president and Japan’s October general elections, the Japan Fact-check Center identified the spread of false and misleading information about political parties and candidates.

Japan is in a delicate situation. It’s one of the countries at the forefront of Chinese hybrid threats due to its proximity to Taiwan and principled stance upholding the rules-based order. But Japanese society, accustomed to little political division and to passively receiving information, may lack the resilience to disinformation of countries such as the United States or Korea.

Now, about 67 million Japanese are active users of X, more than half the population. X has become an important news and information source for a segment of Japanese society that is less inclined to confirm the accuracy of news items via more mainstream sources.

In response, the government has taken steps to combat disinformation and misinformation. In April 2023, a specialised unit was established within the Cabinet Secretariat to collect and analyse disinformation spread by foreign actors. As president of the G7, Japan introduced the Hiroshima AI Process in 2023 to address AI-enabled disinformation. Furthermore, the Ministry of Foreign Affairs produced solid evidence to effectively counter disinformation campaigns relating to the release of treated wastewater from the Fukushima Daiichi nuclear power plant. This disinformation may have come from China. The ministry’s effort should be applauded and serve as a model for future responses.

But simply responding to every incident may not be sustainable. Countering the proliferation of disinformation also requires content moderation, which must be balanced to protect freedom of expression and avoid placing an undue burden on digital platforms. Thankfully, international partners provide some good examples for reference.

The EU’s Digital Services Act (in full effect since 2024) requires digital platforms to disclose the reasoning behind content-moderation decisions and to provide mechanisms for reporting illicit content. In Australia, the Combatting Misinformation and Disinformation Bill (2024) was intended to give the Australian Communications and Media Authority powers to force digital platforms to take proactive steps to manage the risk of disinformation. Although the bill was abandoned in late November, Japan can learn from its failure and avoid a similar outcome.

Japan’s government has commissioned various study groups but so far has taken no legislative action to combat misinformation and disinformation. The present reliance on voluntary efforts by digital platforms is insufficient, especially given the growing likelihood and sophistication of disinformation threats. Concrete measures are needed.

The Japanese government should engage multiple stakeholder communities, including digital platforms such as X and fact-checking organisations, to collectively set minimum standards for content moderation by digital platforms. While the specifics of moderation can be left to the discretion of each platform, minimum standards could include, for example, labelling trusted media and government agencies and assigning them a higher algorithmic priority for display. If minimum standards were not met, the digital platform would be subject to guidance or advice from a government authority. But the authority would not have the power to remove or reorder individual content.
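To make the idea of a higher algorithmic priority concrete, here is a minimal illustrative sketch, written in Python and entirely hypothetical rather than drawn from any platform’s actual ranking system, of how a feed-ranking step might boost posts from accounts that a platform has labelled as trusted media or government agencies. The label names, boost factor and data structures are assumptions for illustration only; under the proposal above, the transparency of the labelling criteria would matter more than any particular weighting, which each platform could set for itself.

```python
from dataclasses import dataclass

# Hypothetical labels a platform might assign under agreed minimum standards.
TRUSTED_LABELS = {"trusted_media", "government_agency"}

# Assumed boost factor; a real system would tune this empirically.
TRUST_BOOST = 1.5

@dataclass
class Post:
    post_id: str
    base_score: float          # the platform's existing relevance score
    author_labels: set[str]    # labels attached to the author's account

def ranked_feed(posts: list[Post]) -> list[Post]:
    """Order posts by relevance, boosting those from labelled trusted sources."""
    def score(post: Post) -> float:
        if post.author_labels & TRUSTED_LABELS:
            return post.base_score * TRUST_BOOST
        return post.base_score
    return sorted(posts, key=score, reverse=True)

# Example: a labelled government post outranks a slightly higher-scoring unlabelled one.
feed = ranked_feed([
    Post("a1", base_score=0.62, author_labels=set()),
    Post("a2", base_score=0.55, author_labels={"government_agency"}),
])
print([p.post_id for p in feed])  # ['a2', 'a1']
```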

Setting standards in this way would respect existing limits on freedom of expression while reducing users’ exposure to disinformation that could cause serious harm. It would, however, require verifiable criteria for determining trusted accounts and the establishment of a contact point for complaints within digital platforms or trusted private fact-checkers.

Regulating digital platforms will not be enough. It’s also important to call out malicious actors and strengthen public awareness and media literacy. The proliferation of disinformation with political intent by foreign actors is a global problem, so Japan should cooperate with partners that share its democratic values, such as Australia. Tokyo should be prepared to be more proactive in joining public attributions of malicious state-sponsored campaigns, as it was, for example, with the advisory on the cyber threat actor APT40, initially prepared by Australia.

Japan’s resilience to disinformation is still developing. Given its prominent role in the regional and global order and its proven commitment to a rules-based international order, a higher level of urgency is required.

Information, facts, journalism and security

(A speech by the executive director of ASPI to the Media Freedom Summit, hosted by the Alliance for Journalists’ Freedom in Sydney on 14 November.)


I want to start by citing the Guardian’s latest pitch for support from its readers. As most of you will know, the Guardian asks readers to pay rather than forcing them to do so through a paywall. One of the ads that runs at the bottom of every Guardian story reads as follows:

This is what we’re up against …

Bad actors spreading disinformation online to fuel intolerance.

Teams of lawyers from the rich and powerful trying to stop us publishing stories they don’t want you to see.

Lobby groups with opaque funding who are determined to undermine facts about the climate emergency and other established science.

Authoritarian states with no regard for the freedom of the press.

The first and last points are the most pertinent to me as head of ASPI. Bad actors are indeed spreading disinformation, and authoritarian states indeed have no regard for the freedom of the press.

And here’s why, as a national security guy, I like this pitch: because a society in which people want to pay for quality news is also a society that will be more resilient to disinformation, misinformation and the gradual erosion and pollution of our information environment. This resilience is a key pillar of our security; you might say it’s the strength on which all of our other capabilities are founded.

It points to a society in which people want to understand complex issues by engaging with facts.

It points to a society in which people want to do the hard work of exercising their critical-thinking skills so that they can evaluate for themselves what they’re being told, so they have healthy scepticism about political and social orthodoxies, not conspiratorial mistrust of traditions and institutions.

Those skills are built up through education—that includes formal education, life experience, auto-didacticism such as reading newspapers, and community and civic engagement. In other words, life in a vibrant and well-functioning society.

And let me stress, self-education through reading and viewing material online is a perfectly legitimate pursuit. But it doesn’t mean believing everything you read, nor selecting your own preferred facts, nor wrapping yourself in a comforting bubble of online fellow travellers who agree with you and validate your views.

What’s at stake here is that democracy, and in my view the functioning of society more broadly, depends on how we, as participants, recognise facts in a sea of information, and how we sort and prioritise those facts into an understanding of the world that we can use as a basis for action—including how to vote and how to perform all the other functions that engaged citizens perform in a democracy.

People will apply different weights, importance and context to facts based on the values those people hold. As long as the facts, or at least the majority of them, are agreed, people with differing values and world views can have a meaningful discussion. This is the foundation for even the most impassioned debate: people drawing on a common set of facts to arrive at different but nonetheless legitimate opinions.

Journalists and news organisations should hold privileged positions in the information environment based on the credibility they build up over time. However, to earn and hold these positions, journalists also have a sacred responsibility to report fairly, accurately and objectively in the public interest. What we can’t afford is for news organisations to retreat into ever more polarised political positions.

Media are vital to moderating and holding together public conversations even on the most difficult and controversial issues. That means leading civil debates on sensitive social issues, respectful debates and disagreements on very emotive foreign policy issues such as the war between Israel and terrorist organisations Hamas and Hezbollah and, yes, how Australia engages constructively with the new Trump administration.

Public institutions need to accommodate different points of view. Rebuilding trust in those institutions, such as the government, the media and higher education, is not helped when they create a sense that open debate will be quashed and dissenting views will bring damage to a person’s reputation.

Through these debates and (civil) contests of ideas, democracy enables us to make adjustments to the way we collectively run our society. All the knowledge and day-to-day life experience of adult citizens are fed back into decision-making by the elected executive. This happens through elections, through citizens’ engagement with the institutions that implement policies and sometimes through less formal means including public protests—hopefully peaceful and lawful ones.

Though imperfect, it has always worked. But it has been dramatically disrupted by the roughly three decades of the popularisation of the internet, and the roughly 15 years of the popularisation of social media.

Yuval Noah Harari in his most recent book, Nexus, about the history and future of information networks, coined the phrase ‘the naive view of information’ to describe the false expectation that if people have access to ever more information they will, per se, get closer to truth. A related misunderstanding is the so-called ‘free market of ideas’—one of the popular beliefs back during the heady and utopian early days of the internet.

The hope was that if all ideas, good and bad, could be put on this intellectual market, the best ones would naturally compete their way to the top. But we’ve quickly learnt that the ideas that are the stickiest, the most likely to gain traction and spread, are not necessarily the most true, but more often the ones that are most appealing—the ones that give us the most satisfying emotional stimulation.

Far from being a functioning open market, it takes an active effort to create and share information that is directed at the truth. Journalism is one such effort. News media that is not directed at the truth but at social order or the creation of shared realities isn’t journalism; it’s propaganda.

Now, why is all this such a worry to the national security community?

Because it makes us deeply vulnerable. In telling ourselves that government involvement in the digital world would stifle innovation, we have only stifled our own ability to protect our public and left a gaping hole for foreign predators. Inevitably, the absence of government involvement leads to security violations. Instead of calm, methodical government involvement we then get rushed government intervention.

Powerful players such as China and Russia can use their resources and capabilities to put their finger on the scales and influence a society. Disinformation can shape beliefs across wide audiences. This can change how people vote or erode their faith in institutions and even in democracy itself. It can turn people against one another. It can impact policymaking and leave us less safe, less secure and less sovereign. It is one thing for our own politicians and media to influence us, but it is a national security threat that we are being influenced and interfered with by foreign regimes, their intelligence services and their state-run media.

I happen to believe in higher defence and security spending not because I seek aggression, conflict or war, but to deter it—because I believe that we keep ourselves safe by being strong and making it clear that we are strong. I also believe that with all the defence spending in the world, if your society is divided against itself to the point of dysfunction, you eventually have to ask yourself: what exactly are you protecting? And that’s why the information domain is as important as traditional military domains to a sensible national security practitioner.

An adversary doesn’t need to invade you or use coercive force to shape you if they can influence you towards a more favourable position through information operations. It costs billions, maybe trillions of dollars to invade another country, overthrow its government and install a more friendly one. Why do that if you can shape the information environment so that the other country changes its government on its own, for a tiny fraction of the price? The AI expert Stuart Russell has calculated that the Russian interference in the 2016 US presidential election—which was a bargain for Moscow at a cost of about $25 million, given the massive disruption it’s caused—could be done for about $1000 today thanks to generative AI.

Solutions and responses 

So, what can we do about this? I don’t need to tell you that the business models of the news media are under enormous stress. Ask the person who wrote that eloquent pitch for support for the Guardian.

It’s easy to look around and feel despondent about the scale of the challenge. But it’s worth remembering we are still really in the early stages of the information revolution.

My submission is that the best way to create sustainable business models for strong, independent journalism is to foster societies in which people want to pay for this journalism, because they see value in having high-quality information. And they want this information because they recognise that it empowers them. It does not shut them down. There are rules and accountability but not censorship. Importantly, this requires our politicians, security agencies and the media to protect all views, not just ones the political leadership or journalists agree with. Too often I see genuine debate shut down, resulting in fear and self-censorship by those who might have a different view. For example, there is unquestionably a growing fear among people in our society who want to express support for Israel. Shutting down legitimate views just because you think it is for a good cause does not make it right.

If our societies, including our media, focus their demands for accountability upon those countries and governments that cannot extract a cost from us (such as harming us economically as China has done and could do again) and if we hold only democracies to a high standard, we leave ourselves and our sovereignty vulnerable.

We should want to be a society open to ideas, views and debate. That is a foundation for resilience and security. Strong national security starts with a strong society, one aligned by a common set of principles and resilient in the face of different ideas.

So, we need to build our resilience to disinformation and the pollution of the information environment, as well as our appreciation of the importance of democratic values and freedoms. That means education throughout life, civics classes, digital literacy and support for civil society dealing with technology and democracy.

It means the government helping to create incentives for media to act as sheriffs in the information wild west, rather than rewarding those that abdicate any responsibility. That includes everything from content moderators on social media platforms to hardcore investigative journalists.

Conclusion

This is why I strongly believe that journalists and the national security community have many more aspirations and interests in common than they do natural tensions. And I want to dispel the idea that there is an inherent trade-off whereby the goals of one will necessarily come at the expense of the other.

It worries me when national security is seen as a potential threat to democratic freedoms and liberties, privacy being the most common example. This is the wrong framing.

Sometimes, the national security community gets things wrong. It makes mistakes. From time to time, officials might even behave unethically or, in rare cases, illegally. These are, for the most part, legitimate matters for journalists to pursue.

There is a lot of other national security work that simply needs to remain secret and non-public. That’s the nature of most intelligence work, significant portions of defence work, some diplomacy and some law enforcement.

A responsible national security leader should welcome scrutiny of shortcomings in conduct or competence in their agency. And a responsible journalist or editor should want to live in a functioning society in which national security agencies are able to do their work to protect us and our democratic freedoms. Freedom of speech and freedom of the press are, after all, cornerstones of our democracy.

And in a well-functioning democracy, national security is about protecting our freedoms, never about curbing them. CCTV cameras on the street protect your right to walk safely but are not used to profile minorities as is the case in authoritarian countries.

National security agencies that are accountable to oversight by various watchdogs, and ultimately by the elected government and parliament, keep us safe not just in the sense of our physical bodies and lives, but also our society and our democratic way of life.

As part of this, it is vital that media not regard changes to national security policy or legislation only with respect to their impact on journalists. Just as there is a difference between something in the public interest and something publicly interesting, there is a distinction between restricting press freedom and restricting the press.

Government requests for understanding and cooperation in terrorism investigations, or measures to prevent public servants from leaking classified information, aren’t violations of press freedom.

Decrying every government demand or expectation that journalists exercise responsibility risks desensitising the public to those few occasions that do cross the line on freedoms.

This is why I support the work that Peter Greste and the Alliance are doing to clearly distinguish the true work of journalists in gathering, carefully assessing and responsibly reporting facts from the reckless behaviour of those who believe that all secrets are sinister and should be exposed on principle.

Julian Assange, for instance, should never have been viewed as a journalist, but as someone who ultimately put lives at risk in the name of press freedom. Similarly, so-called whistleblowers who only target the secrets of open, rule-abiding democracies are actually doing the work of the Russian and Chinese governments and other authoritarians, and they reduce the ability of our agencies to protect the public, including journalists.

Arguing that security laws have a chilling effect on sources leaking classified information will not succeed, because that effect is not unintended: it is the point.

Yes, we must hold ourselves and our democratic governments to account. But freedom of the press and freedom of expression are not truly enjoyed where one is free only to harm our own societies.

Political differences managed and resolved through open debate are a good thing. Political and social divisions driven by fear are toxic to our open societies.

You can’t have a free media without a strong democracy, and you can’t have a strong democracy without a free media. Those truths lie at the heart of the common mission between national security and journalism.

Digital literacy is a national security asset

Not long ago, coordinated disinformation and its trail of social and political chaos was something that happened to other countries. No longer. Authoritarian states have expanded their information operations in Australia, and local actors are learning and imitating. Government efforts to deal with the problem haven’t yet responded to its sheer scale.

Australia urgently needs to put into place well-funded public disinformation literacy campaigns, augmented by digital media literacy education in schools, a report published by the Asia-Pacific Development, Diplomacy & Defence Dialogue (AP4D) argues. The government also needs to grow and support fact-checking bodies in media organisations, universities and the non-government sector.

As the problem of disinformation grows, it’s clear that the softer options of industry self-regulation and voluntary codes of conduct aren’t enough. The news-media bargaining agreement with Meta, for example, is unravelling. It was supposed to put more money into news journalism to balance the mass of disinformation online, but Meta has moderation fatigue and is moving away from news altogether.

The government is still working through draft disinformation legislation that would give more regulatory bite, compelling social media companies to take more responsibility for the disinformation that their platforms so effectively enable.

Some of Australia’s foremost media and information experts, consulted for the AP4D report, say a huge piece of the policy puzzle is missing: people need help to understand the powerful cognitive effects of disinformation, amplified by the powerful delivery systems of social media, so they can protect themselves from online harms, including disinformation and other forms of manipulation.

So far, efforts to counter malicious information operations, disinformation and other threats in the information domain have been piecemeal and reactive, rather than comprehensive and strategic.

With almost half of the adult population not confident in their ability to identify misinformation online, combatting disinformation should be a national priority. Truth-based information is a fundamental national and global public good, needed for basic governance in any political system. In liberal democracies, truth-based information is needed to secure rights of citizens, conduct fair elections, administer the rule of law and make market economics work. It is also essential as a deterrent to corruption and the foreign interference that corrupt political and economic systems attract.

A low level of public literacy on disinformation and related threats is a key vulnerability for Australia.

A well-funded and ongoing public literacy campaign to reach Australia’s diverse national audiences is now a national necessity if we are to help citizens to reject disinformation and avoid such harms as fraud and identity theft, intrusive surveillance, harassment and data exploitation.

The efforts of non-government groups such as the Australian Media Literacy Alliance, a consortium of key public institutions and networked organisations, focus on supporting lifelong learning, especially for those who may be vulnerable to disinformation or digital exclusion. Through consultation, research and advocacy, the consortium’s primary goal is to develop and promote a government-endorsed national media literacy strategy for Australia. Its model should be supported.

In addition to broad-based public awareness campaigns, digital media literacy needs to be included in education curriculums from early childhood onwards, to help children and young adults build resilience.

This includes, for example, teaching students not just to engage with information by scrolling down the page or judging its superficial validity, but also to check its source: by leaving the webpage, opening another tab and searching elsewhere. The concept is called ‘lateral reading’.

Radicalisation prevention and support strategies should be built into that framework. A successful education campaign would include teaching how to recognise disinformation and propaganda aimed at radicalisation, and attempts at exploiting individual vulnerabilities for recruitment. Children and young adults need to be aware of the harmful and violent nature of radical groups and their financial and political aims. They would also benefit from being shown the real-world consequences of online violence. People who have already been radicalised need easily accessible off-ramps when they want to escape.

While cultivating more critical thinking, we also need immediate pushback against disinformation as it arises. This is where fact-checking organisations operating at arm’s length from government become important. Their work would reinforce transparency and accuracy in the public sphere.

Key to implementation is raising the level of government communication with citizens. This can be challenging, especially in a risk-averse public service, but where the government does not speak there is a vacuum that can be filled with disinformation.

With two thirds of Australians polled in a 2021 survey considering the ability to recognise and prevent the flow of misinformation as either extremely important or very important, there is an unarguable mandate for action.

After the Voice referendum, Australia must find a better way to cut through the noise online

The expert assessment is emphatic: the Voice referendum campaign was beset by information that was false, distracting or conducive to an information space so confusing that many people switched off or were diverted away from reliable sources.

On top of the spread of false and manipulated information about Covid-19, Russia’s invasion of Ukraine and now Israel and Gaza, Australians might be tempted to accept fake news and unreliability as an inevitable effect of the sheer amount of information online and the ease with which we can access it.

But it’s not something we should be willing to accept. We can’t make the problem disappear, but we can expect governments to create a healthier information environment.

There are many lessons to be learned from the Voice campaign—among them that governments need to understand better how Australians get news and messages on important issues, how information circulates through the population, and what can be done to better inform voters. Many of the answers come down to strengthening the signal of reliable information while reducing the noise of irrelevant, false and manipulated information.

One potent slogan from the no campaign, ‘If you don’t know, vote no’, perfectly illustrates a problem that plagued this referendum. Too many people, by their own admission, didn’t know what they were voting on, and many didn’t make enough effort to improve their understanding. Some who did seek to learn more were left unsatisfied with the level of detail they could find. There simply wasn’t a strong enough signal of reliable information on the yes campaign.

There are several reasons why it’s difficult for people to sort through information online. A flood of information can often overwhelm readers and prompt them to disengage. We are also prone to accepting information that aligns with our pre-existing beliefs without checking its reliability. Both human tendencies can be exploited by malign actors to influence people—though this also happens when people are sharing information with good intentions.

Throughout the referendum and in various other election campaigns, political parties, foreign governments and other actors were accused of spreading disinformation online to sow division or influence an outcome. It’s easy to think that a lot of the false information online is highly targeted, tactical and precise in an effort to manipulate people and decisions. But in reality, the online space is typically more like a whirlwind of chaotic messaging, fear, anger and confusion.

People are entitled to their opinions. We can’t change that and nor should we want to, even if we know that some of those opinions are going to be informed by misleading or incorrect information. So the focus must be on improving people’s access to facts in as unpolluted an information environment as possible.

It can start with better promotion of the tools that are already available. There were many places to find accurate and reliable information on the Voice, starting with voice.gov.au and the Australian Electoral Commission website. Many of these were drowned out. An analysis using the online tracking tool Meltwater shows that the number of online public mentions of the Voice averaged more than 20,000 per day this year, and influential popular culture icons, influencers and well-funded lobbyists dominated the space.

There were viral videos created by grassroots campaigns that encouraged people to at least search for more information about the Voice through Google before making a decision, which would have helped drive more traffic to government websites. But videos such as one from Australian rapper Briggs—which amassed nearly four million views on Facebook and Instagram in 48 hours, 10 times more than Prime Minister Anthony Albanese’s video did in a month—were rare and largely highlighted the government’s failures to get information to voters. Meanwhile, conservative activist group Fair Australia delivered at least nine TikTok videos that drew more than one million views each.

Grassroots campaigning is great. But the Australian government shouldn’t be relying on individuals and influencers among the population to drive voters towards more and better information.

Governments should learn from what has worked well so that messages can be better tailored in future campaigns.

One successful example of a major company cutting through the noise happened in 2020 as the rollout of 5G internet intersected with fear and confusion about Covid-19. Telstra produced some entertaining and effective videos that dispelled online conspiracies that 5G and Covid were related and sought to inform 5G fence-sitters through humour while providing them with scientific evidence about the safety of 5G installation and use. The videos reached an audience 10 times larger than the standard Telstra video. This campaign shows that crafting the message so that it reaches and engages audiences online is one way of strengthening the signal in the noise.

Rather than having rules or norms that rely only on social media companies to identify and label clear instances of misinformation and disinformation, strengthening the signal can occur by applying labels more widely and earlier to more of the conversation and at lower thresholds, especially ahead of elections and referendums. Labels could even be put on uncontested opinions and information, and links to government websites and further information could be highlighted so they stand out more clearly on posts.
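As an illustration only, and assuming invented threshold values and label text rather than any real platform policy, the short Python sketch below shows how lowering the labelling threshold ahead of an election period widens the share of posts that receive an information label pointing users to an official source.

```python
# Hypothetical, illustrative thresholds: the share of fact-check signals a post
# must accumulate before a platform attaches an information label.
NORMAL_THRESHOLD = 0.8    # business-as-usual labelling
ELECTION_THRESHOLD = 0.4  # assumed lower threshold ahead of elections and referendums

def label_for(signal_score: float, election_period: bool) -> str | None:
    """Return an information label when the disputed-content signal crosses the threshold."""
    threshold = ELECTION_THRESHOLD if election_period else NORMAL_THRESHOLD
    if signal_score >= threshold:
        return "See official information: https://www.aec.gov.au"
    return None

# The same post (signal 0.5) is labelled during an election period but not otherwise.
print(label_for(0.5, election_period=False))  # None
print(label_for(0.5, election_period=True))   # label linking to the Australian Electoral Commission
```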

Education must also play a role in assisting the population to sort through the noise and find sources of reliable information online. From the moment children are expected to use the internet as a resource they must be taught how to spot disinformation tactics and avoid misinformation traps.

The online confusion around the Voice referendum was another wake-up call for our country to act on a problem that goes beyond politics and foreign influence. We are faced with a societal challenge that requires fundamentally changing the way people think about, engage with and process information from all sources. We need to invest more time and effort in deciphering what happens in the information space and helping everyone better understand what they are seeing.

Examining Australia’s bid to curb online disinformation

Hardly a day had passed after the government unveiled its initial draft of the Combatting Misinformation and Disinformation Bill 2023 when critics descended upon it.

‘Hey Peasants, Your Opinions (Hell, your facts) Are Fake News’, posted Canadian right-wing professor Jordan Peterson on X (then Twitter) in response to the announcement.

Since then, commentary on the bill has grown more intense and fervent. The bill sets up Canberra to be a ‘back-up censor’ ready to urge the big tech companies to ‘engage in the cancellation of wrong-speak’, wrote Peta Credlin in The Australian under the headline ‘“Ministry of Truth” clamps down on free expression’. For Tim Cudmore writing in The Spectator, the bill represents nothing less than ‘the most absurdly petty, juvenile, and downright moronic piece of nanny-state governmental garbage ever put to paper’.

In reality, the intentions of the bill are far more modest than the establishment of a so-called Ministry of Truth. Indeed, they’re so modest that it may come as a surprise to many that the powers don’t already exist.

Put simply, the bill is designed to ensure that all digital platforms in the industry have systems in place to deal with mis- and disinformation and that those systems are transparent. It doesn’t give the Australian Communications and Media Authority any special ability to request that specific content or posts be removed from online platforms.

If the bill were to pass, it would mean that digital platforms like WeChat would finally have to come clean about their censorship practices and how they’re applying them, or not, to content aimed at Australian users. It would also mean that digital platforms like X that once devoted resources to ensuring trust and safety on their platforms, but are now walking away from those efforts, are made accountable for those decisions.

If there’s one thing that Elon Musk’s stewardship of X has shown, it’s that even with an absolutist approach to free speech, content-moderation decisions still need to be made. Inevitably, any embrace of absolute free-speech principles soon gives way to the complexities of addressing issues like child exploitation, hate speech, copyright infringement and other forms of legal compliance. Every free-speech Twitter clone has had to come to this realisation, including Parler, Gettr and even Donald Trump’s Truth Social.

So, if all digital platforms inevitably engage in some sort of content moderation, why not have some democratic oversight over that process? The alternative is to stick with a system where interventions against mis- and disinformation take place every day, but they’re done according to the internal policies of each different platform and the decisions are often hidden from their users. What the Combatting Misinformation and Disinformation Bill does is make sure that those decisions aren’t made behind closed doors.

Under the current system, when platforms overreach in their efforts to moderate content, it’s only the highest-profile cases that get the attention. To take one recent example, a report by Credlin was labelled ‘false information’ on Facebook based on a fact-check by RMIT FactLab. The shadow minister for home affairs and cyber security wrote to Facebook’s parent company, Meta, to complain, and the ABC’s Media Watch sided with Credlin.

Would it not be better if this ad hoc approach were replaced with something more systematic that applied to regular members of the public and not just high-profile commentators? Under the proposed bill, all the platforms will have to have systems in place to deal with mis- and disinformation while also balancing the need for free expression. The risk of the status quo is not just that the platforms will not moderate content enough, but that they will overdo it at times.

When digital platforms refrain from moderating content, harmful content proliferates. But as platforms become more active in filtering content without publicly disclosing their decision-making, there’s an increased risk that legitimate expression will be stifled. Meta executives admitted at a recent Senate committee hearing that the company had gone too far in moderating content on the origin of Covid-19, for example.

In contrast to the Australian government’s modest approach is the EU’s Digital Services Act, which just came into effect last week. That act heaps multiple requirements on the platforms to stop them from spreading mis- and disinformation. Many of these requirements are worthwhile, and in a future article, I’ll make the case for what elements we might like to use to improve our legislation. Fundamentally, the act represents a positive step forward by mandating that major social networks such as Facebook, Instagram and TikTok enhance transparency over their content moderation processes and provide EU residents with a means to appeal content-moderation decisions.

But if critics wish to warn about Orwellian overreach, they’d do better scrutinising the EU’s Digital Services Act, not Australia’s proposed Combatting Misinformation and Disinformation Bill. In particular, they should take a look at one element of the European legislation that enables the EU Commission to declare a ‘crisis’ and force platforms to moderate content according to the state’s orders. That sets up a worrying precedent that authoritarian rulers around the world are sure to point to when they shut down internet services in their own countries.

After years of relative laissez-faire policymaking, the world’s biggest tech companies are finally becoming subject to more stringent regulation. The risk of regulatory overreach is real and critics are right to be wary. But the Australian government’s proposed solution, with its focus on scrutinising the processes the platforms have in place to deal with mis- and disinformation, is a flexible approach for dealing with a problem that will inevitably continue to grow. And unlike the European strategy, it avoids overreach by both the platforms and the government.

Presenting intelligence: from Iraq WMD to the new era of ‘strategic downgrades’

Recent research from ASPI finds that Philip Flood’s 2004 inquiry into Australian intelligence agencies proved an inflection point in the national intelligence community’s development. In addition, the Flood report grappled with a matter at the heart of the intelligence failure on Iraqi weapons of mass destruction, and one of significant contemporary relevance: public presentation of intelligence for policy purposes.

Flood laid out cons, including risks to intelligence sources and methods, sensitivities of intelligence-sharing arrangements and partnerships, and the possibility that public exposure could distort the intelligence-assessment process by making analysts more risk-averse. He might have added a few other negatives to the list:

  • the threat posed by the aggregation of data—the ‘mosaic effect’. Even seemingly innocuous data can be consequential when aggregated and recontextualised
  • the risk that the nuances in intelligence assessments will be lost in public presentation (a factor in the Iraq WMD debacle)
  • the possible deleterious effects of selective declassification on government transparency.

Nonetheless, Flood acknowledged circumstances in which democratic policy decisions (especially about going to war) necessitated some form of suitable public release of intelligence. He pointed out a commonplace precedent: the use of sanitised intelligence to inform threat warnings to the Australian public (in the form of travel advisories).

Today, release of intelligence for statecraft purposes remains highly relevant, as evident from attempts by the US and UK governments in early 2022 to deter Russia from invading Ukraine by publicly revealing their intelligence about Moscow’s intentions and issuing regular intelligence-based updates.

Of course, the Iraq and Ukraine instances are not unique. Cold War governments on both sides of the iron curtain were prepared to leverage intelligence publicly for policy purposes or simply one-upmanship. Witness duelling defector statements and press conferences, the Kennedy administration’s public messaging during the Cuban missile crisis (including hitherto sensitive aerial imagery) and later the US declassification of satellite images highlighting Soviet violations of nuclear test bans and continuing bioweapons capability.

This continued in the 21st century. The UK publicly confirmed intelligence in November 2001 indicating al-Qaeda’s responsibility for the 9/11 terror attacks, and the Obama administration released intelligence obtained during the raid on Osama bin Ladin’s hideout. The UK would also issue a public statement on Syrian chemical weapons use, sourced to intelligence, in 2013 (including release of a complete Joint Intelligence Committee assessment). There are also regular references to intelligence-based conclusions without necessarily releasing intelligence itself—such as Russian culpability for the Salisbury poisonings. And there have been various US government indictments of hostile cyber operations (Chinese, Russian, Iranian, North Korean), in addition to cyberattack attribution by governments more generally.

Confronted in August 2021 with Russia’s worrying military build-up and hostile intent towards its neighbour, the US government first sought to leverage its intelligence knowledge behind closed doors. So, in mid-November 2021, CIA Director Bill Burns was sent to confront Moscow with what the US knew about its plans for an invasion. But, as Burns has since commented: ‘I found Putin and his senior advisers unmoved by the clarity of our understanding of what he was planning, convinced that the window was closing for his opportunity to dominate Ukraine. I left even more troubled than when I arrived.’

The Biden administration changed tack, to what Dan Lomas has termed ‘pre-buttal’, beginning in mid-January 2022 when the White House press secretary openly briefed the media on a Russian plot to manufacture a pretext for invasion, using a false-flag sabotage team. A fortnight later, in response to a press question, the Pentagon acknowledged that it knew the Russians had already prepared a propaganda video supporting this invasion pretext, for broadcast once an invasion commenced. Then, on 15 and 18 February, President Joe Biden revealed that US intelligence was now aware that more than 150,000 troops were assembled on Ukraine’s border awaiting an order to move. These efforts were buttressed by the UK’s public reference to Russian plans to install a friendly regime in Kyiv via a coup prior to the planned invasion.

Yet, as we know, the Russian invasion of Ukraine commenced on 24 February.

So, were these efforts a success or a failure? The obvious answer is they failed: Russia wasn’t deterred. But was deterrence actually possible? And the public release of intelligence did complicate and disrupt Moscow’s invasion plans and arguably contributed somewhat to the Russian military’s poor performance in the early stages of the conflict. What’s more, the audience wasn’t just Russia. Public release, beyond traditional intelligence sharing in classified channels, had the effect of alerting and cuing Ukraine. Perhaps most materially, the approach galvanised third parties post-invasion, especially in Europe. This involved overcoming some lingering distrust associated with the disastrous efforts to leverage intelligence diplomatically in 2002–03 over Iraq.

The US government has since explicitly laid out its strategy for what it calls ‘strategic downgrades’. It is an increasingly proactive approach to public disclosures aided by the opportunities presented by an overwhelming volume of available open-source intelligence that allows for effective obfuscation of the actual sensitive sources of the material disclosed.

Last month, Principal Deputy National Security Adviser Jon Finer detailed this strategy in a speech:

The deliberate and authorized public release of intelligence, what we now refer to as strategic downgrades, has become an important tool of the Biden administration’s foreign policy. This is a tool that we have found to be highly effective, but also one that we believe must be wielded carefully within strict parameters and oversight.

This speech was itself a form of public release of intelligence—and presumably was targeted again at both allies and adversaries.

The US has deployed this approach beyond just attempts to deter the Russians. For example, it has applied ‘strategic downgrades’ in relation to Chinese arms supplies to Russia, Wagner Group activities in Mali, and its own findings in relation to Chinese ‘spy balloons’.

The approach is underpinned by a formalised framework developed by US policymakers. Related decision-making is centralised in the National Security Council and the Office of the Director of National Intelligence. And its application is apparently limited to select situations—for example, when civilian lives or infrastructure are at risk, or to counter disinformation or false-flag operations.

Guidelines require that downgrades be accurate, be based on verifiable reporting, and be part of a broader plan that includes diplomacy as well as security and economic assistance. According to Finer: ‘It should always be in service of clear policy objectives. It’s not like you just get a piece of very interesting information that could sort of damage one of your adversaries and you decide that could be embarrassing to them, let’s put it out.’

‘Strategic downgrades’ are a potentially important tool for democratic governments, and US formalisation of the related strategy is a welcome development.

But public presentation of intelligence for policy effect deserves careful consideration and risk management. The landscape is complicated by the marked decline in public trust across the Western world and the emergence of a more uncertain strategic environment since 2003. Notably, invocation of intelligence in the political sphere—as with, inter alia, Iraq WMD, the course of the ‘global war on terror’ and Russia’s attempted election interference—necessarily politicises that same intelligence. Perhaps the most alarming example is the degree to which circulation of US intelligence on Russian interference in an increasingly toxic US political environment has effectively tarred US intelligence agencies with the same toxic politics.

And, as Finer observed: ‘You’ve got to be right, because if you go out alarming the world that something terrible is going to happen and you have it wrong, it will be much harder to use the tool effectively the next time.’

I think Flood would have agreed.

More stick, less carrot: Australia’s new approach to tackling fake news

An urgent problem for governments around the world in the digital age is how to tackle the harms caused by mis- and disinformation, and Australia is no exception.

Together, mis- and disinformation fall under the umbrella term of ‘fake news’. While this phenomenon isn’t new, the internet makes its rapid, vast spread unprecedented.

It’s a tricky problem and hard to police because of the sheer amount of misinformation online. But, left unchecked, public health and safety, electoral integrity, social cohesion and ultimately democracy are at risk. The Covid-19 pandemic taught us not to be complacent, as fake news about Covid treatments led to deadly consequences.

But what’s the best way to manage the spread of fake news? How can it be done without government overreach, which risks the freedom and diversity of expression necessary for deliberation in healthy democracies?

Last month, Minister for Communications Michelle Rowland released a draft exposure bill to step up Australia’s fight against harmful online mis- and disinformation.

It offers more stick (hefty penalties) and less carrot (voluntary participation) than the current approach to managing online content.

If passed, the bill will bring us closer to the European Union-style model of mandatory co-regulation.

According to the draft, disinformation is spread intentionally, while misinformation is not.

But both can cause serious harms including hate speech, financial harm and disruption of public order, according to the Australian Communications and Media Authority (ACMA).

Research has shown that countries tend to approach this problem in three distinct ways:

  • non-regulatory ‘supporting activities’ such as digital literacy campaigns and fact-checking units to debunk falsehoods
  • voluntary or mandatory co-regulatory measures involving digital platforms and media authorities
  • anti-fake-news laws.

Initial opinions about the bill are divided. Some commentators have called the proposed changes ‘censorship’, arguing it will have a chilling effect on free speech.

These comments are often unhelpful because they conflate co-regulation with more draconian measures such as anti-fake-news laws adopted in illiberal states like Russia, whereby governments arbitrarily rule what information is ‘fake’.

For example, Russia amended its Criminal Code in 2022 to make the spread of ‘fake’ information an offence punishable with jail terms of up to 15 years, to suppress the media and political dissent about its war in Ukraine.

To be clear, under the proposed Australian bill, platforms continue to be responsible for the content on their services—not governments.

The new powers allow ACMA to look under a platform’s hood to see how it deals with online mis- and disinformation that can cause serious harm, and to request changes to processes (not content). ACMA can set industry standards as a last resort.

The proposed changes don’t give ACMA arbitrary powers to determine what content is true or false, nor can it direct specific posts to be removed. The content of private messages, authorised electoral communications, parody and satire, and news media all remains outside the scope of the proposed changes.

None of this is new. Since 2021, Australia has had a voluntary Code of Practice on Disinformation and Misinformation, developed for digital platforms by their industry association, the Digital Industry Group (known as DIGI).

This followed government recommendations arising out of a lengthy Australian Competition and Consumer Commission inquiry into digital platforms. This first effort at online regulation was a good start to stem harmful content using an opt-in model.

But voluntary codes have shortfalls. The most obvious is that not all platforms decide to participate, and some cherrypick the areas of the code they will respond to.

The Australian government is now seeking to deliver on a bipartisan promise to strengthen the regulators’ powers to tackle online mis- and disinformation by shifting to a mandatory co-regulatory model.

Under the proposed changes, ACMA will be given new information-gathering powers and capacity to formally request that an industry association (such as DIGI) vary or replace codes that aren’t up to scratch.

Platform compliance with registered codes will be compulsory; noncompliance will attract warnings, fines and, if unresolved, hefty court-approved penalties.

These penalties are steep—as much as 5% of a platform’s annual global turnover if it is repeatedly in breach of industry standards.

The move from voluntary to mandatory regulation in Australia is logical. The EU has set the foundation for other countries to hold digital technology companies responsible for curbing mis- and disinformation on their platforms.

But the draft bill raises important questions that need to be addressed before it is legislated as planned for later this year. Among them are:

  • how to best define mis- and disinformation (the definitions in the draft are different from DIGI’s)
  • how to deal with the interrelationship between mis- and disinformation, especially regarding election content. There’s a potential issue because research shows that the same content labelled ‘disinformation’ can also be labelled ‘misinformation’ depending on the online user’s motive, which can be hard to divine
  • and why exclude online news media content? Research has shown that news media can also be a source of harmful misinformation (such as 2019 election stories about the ‘death tax’).

While aiming to mitigate harmful mis- and disinformation is noble, how it will work in practice remains to be seen.

An important guard against unintended consequences is to ensure that ACMA’s powers are carefully defined along with terms and likely circumstances requiring action, with mechanisms for appeal.

The closing date for public submissions is 6 August.

Seizing the memes of advantage in the Ukraine war and beyond

Of all the vagaries we label as ‘non-traditional security’, none is more amusing or indicative of the role of digital networks than that of a compressed, grainy image of a Shiba Inu—a Japanese dog breed that the North Atlantic Fellas Organization uses as its sign. With a swarm of members that include social media researchers, a former president of Estonia, US congressional representatives and military personnel, NAFO is living proof of the importance of memes in contemporary information warfare.

NAFO, and other distributed information campaigns, have become a hallmark of the highly mediatised Russian invasion of Ukraine. These informational insurgencies aim to counter propaganda initiatives with sarcasm and wit. Any post on Twitter or elsewhere advocating support for the Russian invasion, or the legitimacy of sham referenda to annex parts of Ukraine to Russia, is met with relentless mockery, sarcasm and memes from NAFO members, colloquially referred to regardless of age, gender or location as ‘Fellas’.

Contemporary information warfare is often rolled into wider issues involving cyberspace and cybersecurity. The link is usually made based on information warfare being most easily undertaken today through social media and other digital technologies, while requiring a significantly different skillset to propagate and analyse.

If we’re to understand contemporary information warfare, we must recognise that it is fundamentally and by default memetic. The approach taken by the US Department of Defense, which is now under review and which Twitter and Facebook have described as an instance of coordinated inauthentic behaviour, demonstrates a pressing need for this. Meme warfare favours not the frontal assault, but insurgency. Pumping content into the digital aether is unsubtle and ineffective in shaping the narrative to suit strategic ends. There is a better way.

Memetic warfare, more commonly known as meme warfare, has floated around the fringes of the information warfare discourse for quite some time. The ‘meme’—defined by Richard Dawkins as the smallest possible unit of human culture—is at the core of this subset of information warfare. By progressively injecting small cultural elements into discourse on a matter, such as an invasion or other military operation, the idea is that a message can be spread unwittingly—using infection, rather than broadcast, as a medium of transmission.

As far back as 2006, the North Atlantic Treaty Organization recognised that meme warfare should be central to its defence initiatives. Of particular note was a proposal to integrate information operations, psychological operations and strategic communications in the form of a ‘meme warfare centre’ that would advise NATO commanders ‘on meme generation, transmission, coupled with a detailed analysis on enemy, friendly and noncombatant populations’.

These ideas have influenced NATO’s and the European Union’s efforts to expand their involvement in this space. In 2017, NATO and the EU established the European Centre of Excellence for Countering Hybrid Threats, which has a general focus on hybrid warfare across a number of domains, including the information domain. There’s also the Latvia-based NATO Strategic Communications Centre of Excellence, established in 2014, which provides tailored advice on information warfare, with a focus on its conduct online.

But analysis since then has cropped up rarely, and has tended to fold meme warfare into wider information warfare to the point of conflating the two. One significant exception is the excellent piece published on this forum by Tom Ascott of the Royal United Services Institute in 2020. Ascott provides a highly informative overview of the history and contemporary applications of memetic warfare, from its pedigree and relationship to Soviet dezinformatsiya campaigns to its deployment in the 2016 US election. Alas, analysis in this space remains perilously thin relative to how widely the technique is deployed, including in crowdsourced disinformation campaigns such as the #DraftOurDaughters initiative, run by a group of 4chan users in 2016 in a bid to dent the national security credentials of US presidential candidate Hillary Clinton.

In a networked environment in which individuals hold all the publishing force of the Gutenberg press in the palms of their hands, enabling users to generate a suitable narrative is critical. If Australia and its allies are to replicate the successes of contemporary information warfare operations in Ukraine, an understanding of memes is more important than ever. The role of NAFO in Ukraine is proof of this. Meme warfare is distributed and participatory, and understanding its power requires an understanding of internet culture.

What much of this relies on—and indeed, how the US, Australia and other like-minded countries may be able to leverage memetic warfare to their advantage—is the ability to create participatory online insurgencies. The best digital marketing initiatives are participatory—Spotify Wrapped, for example, enables users not only to collate data about their music habits for the year, but to share that data with others, contributing to narratives and meta-narratives about popular music. In effect, meme warfare draws from the same playbook by providing a series of cultural objects for individuals to latch onto, remix and reproduce online.

So, back to NAFO. To understand what’s significant about posting a compressed image of a Shiba Inu in reply to disinformation content on social media, we need to know its history. The Shiba Inu has a surprising record of use online as a meme symbolising irony, irreverence and wholesome clarity in the face of challenging circumstances. This includes the Doge meme, which appeared on Reddit and 4chan in 2010 and in turn influenced a number of other Doge-like memes (usually edited iterations of the original). One of them, a compressed image of a Shiba referred to as Cheems, was shrunk further, applied to an edited image of the NATO logo, and proliferated widely on Twitter from June 2022 onwards.

On its own, the edited form of the NATO logo wouldn’t have spread. But once a widely recognised and irreverent meme was applied to it, the movement began to grow. There is no central authority: members of the swarm create their own NAFO ‘Fellas’ avatars or gift them to others. Fellas work together, coordinating responses to drown out disinformation content and to highlight instances of war crimes or Russian military failures.

Understanding this approach and reproducing it in a deliberate and targeted way in future conflicts can result in similar, highly effective memetic insurgencies in cyberspace.

Australia needs asymmetric options to counter coercive statecraft

There can be little doubt Australia’s strategic environment has evolved significantly in recent years. Across the Indo-Pacific, we now face a region characterised by a broad spectrum of geopolitical influences, from conflict and contest at one end to cooperation and collaboration at the other.

Through this spectrum, we’re seeing a diversification of the circumstances where all our elements of national power will need to contribute to our national security and prosperity goals.

We’re also seeing significant changes in the way some states wield power and influence in the region. Of course, Australia continues to support an Indo-Pacific characterised by rules-based cooperation and stability. But we must also be prudent in recognising the prevalence of new forms of rivalry and competition that fall outside our preferred models.

Australia, like its friends and partners in the Pacific, needs options to challenge the threat posed by coercive statecraft. Adversarial actors including China and Russia are using agile and malign methods to secure strategically significant outcomes, often to our disadvantage.

And to date, these methods have proven both effective and relatively low cost. So, for Australia to more effectively counter this coercive statecraft it will be important to raise the costs—both economically and politically—for the antagonists.

In Chinese strategic literature, there is a strong emphasis on adopting comprehensive approaches to wielding national influence. One of the more important works on this in recent years, Unrestricted warfare, espouses a persistent campaign for advantage that:

Breaks down the dividing lines between civilian and military affairs and between peace and war… non-military tools are equally prominent and useful for the achievement of previously military objectives. Cyberattacks, financial weapons, informational attacks—all of these taken together constitute the future of warfare. In this model, the essence of unrestricted warfare is that the ‘battlefield is everywhere’.

Beijing’s purpose in applying these methods is to achieve its strategic goals below our thresholds of military response. And although China incorporates the threat of military force among the suite of its coercive measures, it nevertheless seeks to render Western military capabilities irrelevant by achieving its goals without triggering conflict.

Once we understand this, it becomes clearer why our own military capabilities, including those for power projection and networked warfighting, are necessary but not sufficient.

Whereas the West has tended to equate the possession of superior military force with deterrence, that plainly isn’t working against China’s comprehensive coercion.

And where Australia’s armed forces have focused on force projection as an essential capability, the time has come to weave these capabilities into broader options for influence projection.

Rather than narrowly focusing on dominating the battlespace with forward-deployed force elements, the military can broaden its value proposition by complementing whole-of-government efforts to out-position rival powers in economic, diplomatic and information-influence campaigns.

As China has amply demonstrated, coercion does not necessarily involve the application of physical violence. Influence comes in many forms and can apply to peacetime and wartime situations, as well as those in between.

As NATO has suggested, ‘even lethality, the ultimate penalty of physical force, is giving way to abstractions of perception management and behavioral control, a fact which suggests that strategic success, not tactical victory, is the more coveted end-state’.

Any Australian efforts to engage in constant competition and counter China’s aggressive political warfare methods will need to have both defensive and offensive elements.

In terms of deterrence, by shining a light on these actions and actively calling them out, we can begin to erase the ambiguity on which they rely. Since these methods depend on cultivating confusion and uncertainty, an important countermeasure is to establish robust, evidence-based narratives that demonstrate to the world what is going on.

But exposure alone will not always be enough. And that’s where we need to be able to move from a defensive to an offensive mindset. We will need to become tough-minded and ensure, more forcefully, that antagonists desist from actions that hurt us.

This will require the development of robust cost-imposing strategies to alter their calculus. Such strategies will need to convince threat actors that the price of achieving their aims through political warfare methods exceeds what they are willing or able to pay. And that means we need to signal clearly that we have both the capacity and the will to take actions that will impose these costs.

Cost-imposing approaches can be proportional or asymmetrical. Authoritarian regimes are deeply fearful of threats to their legitimacy, which makes them vulnerable to well-considered influence operations that call that legitimacy into question.

Such influence actions might be used to inject information into a closed society that authoritarian regimes would not want disclosed. This could range from alternative perspectives on current events that differ from regime-imposed narratives to exposure of political and economic corruption. These kinds of disclosures could impose significant costs on a regime constantly worried about maintaining domestic control.

This points to a broader theme, sometimes called ideational power. Authoritarian regimes like the Chinese Communist Party depend on repression rather than democratic legitimacy to maintain control, as has been made painfully evident in Hong Kong.

Authoritarian regimes are deeply fearful of instability, rendering them brittle at home and severely limiting their appeal abroad. Australia should take a robust and comprehensive approach to emphasising and contrasting its own values and methods against authoritarian ones.

Throughout history, long-term campaigns for influence have had this ideological dimension to them—the contrast between light and dark, between inclusiveness and domination. So Australia should be emphasising, not minimising, the ideological contrast between its own values and those represented by authoritarianism.

It makes sense to develop and enhance asymmetric advantages. For example, China doesn’t have access to anything like the network of alliances and partnerships that Australia has. And although Australia’s economic size pales in comparison to China’s, our network of friendships can be a source of significant strength.

Put another way, this is about how we play our strengths against their weaknesses. Now is the time to be exploring how our creativity and adaptability can be used to identify, explore and exploit the critical vulnerabilities of authoritarian regimes, to be able to get at them in ways that hurt.

To be clear, this doesn’t necessarily mean the application of military force—sending people into harm’s way—although those skills will remain essential. Rather, it’s about using discrete, tailored options to go after background vulnerabilities.

The military notion of unconventionality is helpful here:

In unconventional warfare, the emphasis is on defeating the opponent without a direct military confrontation…typically, the unconventional forces act undercover, or discretely, their targets are not of an exclusively military nature, and the techniques employed are distinct from those specific to purely military operations.

This logic could be broadened to other dimensions of Australian statecraft. The actions needed to impose costs of sufficient magnitude to deter China’s coercive behaviour would likely need to hold at risk Beijing’s core interests—the things it fears and values—without necessarily engaging in physical attack.

This would likely need to involve a more holistic or unrestricted approach, much as China itself pursues. And while we need to stay within the bounds of the international rules and norms we espouse, there is nevertheless fertile ground for exploring unconventional and unorthodox means for generating costs.

And although prudent self-reliance dictates that Australia should not render itself overly dependent on external providers to guarantee our security, it will be vital to work multilaterally to counter China’s malign influence. Benefits will arise from cooperation in non-traditional aspects of military operations, including in the information and economic domains.

By working with our partners in the Indo-Pacific, including the Quad, Indonesia and the Pacific island countries, to develop new and disruptive options, Australia can bolster its ability to not only expose China’s malign statecraft, but also to adopt cost-imposing strategies that will deter grey-zone political warfare.

The case for a ‘disinformation CERN’

Democracies around the world are struggling with various forms of disinformation afflictions. But the current suite of policy prescriptions will fail because governments simply don’t know enough about the emerging digital information environment, according to Alicia Wanless, director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace.

Speaking in a panel discussion on how democracies can collaborate against disinformation at ASPI’s grey zone and disinformation masterclass last week, Wanless went on to say that what we really need is ‘a disinformation CERN’—in reference to the international particle physics research outfit where countries pool their resources to operate the Large Hadron Collider, study results and share findings. The scale and reach of the disinformation problem is so huge that only research cooperation of this kind can address the shared global threat to information systems.

Our democratic societies are doomed to decline if we don’t mount major efforts to arrest the effects of disinformation, said Wanless. Fellow panel member Elizabeth Braw, a resident fellow at the American Enterprise Institute, agreed that democracies are in the middle of a generalised disinformation crisis.

At the same time, incentives to act may be blunted as democracies become numb to a multitude of cascading political crises driven by disinformation. These are having a global-warming-type effect on our political and cultural ecosystems—disinformation is turning up the temperature and toxicity of public discourse, but it also perpetuates denialism about the problem of disinformation itself.

Wanless explained that there are two major areas of research shortfall that democracies need to address. The first is how disinformation flows around global, national, local and individual information landscapes, for example, among news, social media and private messaging apps.

The second gap is in our understanding of both its short- and long-term impacts. Do disinformation campaigns change election outcomes? What’s the relationship between disinformation and politically motivated violence? And what might be the effects on the health of political systems over months or years of disinformation? Wanless noted that from an academic standpoint, most theories of communication are stronger on accounting for transmission but very weak on effects.

In addition, there are yawning knowledge gaps on the effects of disinformation countermeasures. For example, said Wanless, there are very few credible studies on the effects of de-platforming disinformation spreaders. Does it help in limiting disinformation? Or do the perpetrators just move underground to more niche platforms, where followers can be further radicalised and exhorted to violence? To help answer these questions, Washington DC’s Capitol insurrection of 6 January needs to be examined more closely.

The other problem for research is that private companies hold most of the relevant data and are unwilling to share it widely. The platforms regard their data as valuable proprietary information and to date have only been willing to share small amounts with handpicked research institutions on particular cases.

A well-funded, multinational research effort could help spearhead a broad-based, collaborative approach with the digital information industry that holds the bulk of data on information transmission and user behaviour. The big search engines, social media platforms, television networks, public broadcasters and newspapers of record should all be included.

On the question of how much such research would cost and who would lead it, Wanless said she has costed a number of models that start from US$10 million per year for basic research and rise from there. Given the cost of disinformation to economies and societies—how much has Covid-19-related disinformation alone cost in terms of loss of life and income?—it seems a minuscule investment compared with what Western democracies spend on military hardware.

Wanless believes that platforms should in some way be involved in funding this research, and that discussions about taxing them should take this into account. But the effort should probably be led by academic institutions and civil society rather than the national security community.

Braw agreed with Wanless that better research is critical, but so is building whole-of-society resilience, starting immediately. If this isn’t done, responses to disinformation crises risk continually exacerbating their initial effects, until societies are caught in a spin-cycle of chaotic reaction.

Democracies need to get out of their defensive postures. Disinformation cannot be beaten with de-platforming and labelling. We need to get better at public messaging and be in constant preparation for crisis communication. When Covid-19 hit, governments should have been ready to go with public communication and planning for food, water, energy and fuel shortages.

A good example of multilateral cooperation and public communication on a grey-zone crisis, said Braw, was the 2018 poisoning of former Russian double agent Sergei Skripal and his daughter Yulia using the chemical weapon Novichok in the UK. The UK was able to quickly stand up an informal alliance of countries that expelled Russian diplomats and censured and sanctioned Moscow.

Companies are on the front line of disinformation and grey zone operations, and they need to be consistently involved in a whole-of-society response. But it’s important to note, according to Wanless, that the private sector is part of the problem: there’s money and power to be generated by inflaming fear and uncertainty.

Braw waxed nostalgic about the early days of social media—visiting the offices of Twitter when it was just a handful of guys and a few computers. Governments completely failed to see how these platforms would transform politics, change the nature of governance and even threaten democratic institutions.

To add to the challenge, domestic political actors are increasingly getting in on the disinformation action and have no real incentives to neutralise its effects.

In terms of constraints, international law is much too vague on the subject of propaganda and there are no strong agreed guidelines that platforms can implement. So while state regulation may be an old-fashioned, ‘European’ response, said Braw, it’s probably the only effective way forward. Building a multilateral approach to regulating a decentralised, global information space will be the critical factor for success in the fight against disinformation.


ASPI DC Roundtable on Chinese online information strategy

On 5 May, ASPI DC hosted a roundtable on Chinese online information strategy, chaired by Vicky Xu. The event was attended by US government officials, think tanks and media; it spurred discussion of CCP information warfare strategies and helped broaden awareness of common tactics and techniques used against individuals.

Joint BBC-ASPI investigation into West Papua information operations

A joint investigation by the BBC and ASPI’s International Cyber Policy Centre analysed a well-funded and coordinated information campaign aimed at distorting the truth about events in Indonesia’s West Papua province, and identified those responsible for its operation.

The researchers found that the campaign used slanted or factually untrue content (including “news” articles, infographics and videos) to promote narratives supportive of the Indonesian government’s actions in West Papua, and to undermine the pro-independence movement.

In a context like this, in which independent media is restricted and verified information is scarce, a disinformation campaign such as the one the researchers uncovered can substantially shape how the situation is perceived by the international community. This in turn could have implications for policies and decisions made by other governments and in international forums such as the UN.

Building on earlier research published on Bellingcat, the researchers used open-source data and digital forensics to analyse the campaign’s operations across multiple platforms and to identify the Jakarta-based communications consultancy InsightID as the source of the operation.

This attribution was then confirmed by Facebook, and later acknowledged by InsightID itself.

A second, smaller campaign was also uncovered. Researchers tracked it back to an individual with political connections. When approached by the BBC, the individual eventually admitted his role in the campaign but insisted that his actions had been undertaken in his personal capacity and were not connected to his political work.

The investigation was led by the BBC’s open-source investigator Benjamin Strick and ASPI International Cyber Policy Centre researcher Elise Thomas, and included:

  • A detailed report outlining the full investigation, published on Bellingcat
  • Coverage of the investigation on the BBC in English and in Indonesian

Online Influence and Hostile Narratives in Eastern Asia – Report

ASPI’s International Cyber Policy Centre wrote a report for the NATO Strategic Communications Centre of Excellence that examined online influence and hostile narratives in Asia.

Eastern Asia — which we define as including East and Southeast Asia — is a region of increasing geopolitical competition with many racial, cultural and societal fractures. With the rapid expansion of inexpensive internet access, these fractures and tensions mean that many states in the region are both vulnerable to, and a source of, hostile information activities that are being used to achieve strategic goals both inside and outside the region.

This report documents examples of hostile information activities that have originated in Eastern Asia and have been directed at the following targets:

  • Taiwan
  • The Hong Kong-based protest movement
  • West Papua
  • The Philippines

Because these activities often target social media, they have been difficult for law enforcement and national security organisations to police. Across the globe, countries are pursuing different methods of tackling the spread of hostile information activities, with differing degrees of success. These approaches can range from law enforcement, temporary internet shutdowns and attempts to legislate against ‘fake news’ or disinformation, through to wider societal media literacy initiatives.

Read this report, authored by ASPI International Cyber Policy Centre researcher Hannah Smith, here.


Smart Asian women are the new targets of CCP global online repression

The Chinese Communist Party has a problem with women of Asian descent who have public platforms, opinions and expertise on China.

In an effort to counter their views and work, the CCP has been busy pivoting its growing information-operation capabilities to target these women, with a focus on journalists working at major Western media outlets.

Right now, and often going back weeks or months, some of the world’s leading China journalists and human rights activists are on the receiving end of an ongoing, coordinated and large-scale online information campaign. These women are high-profile journalists at media outlets including the New Yorker, The Economist, the New York Times, The Guardian, Quartz and others. The most malicious and sophisticated aspects of this information campaign are focused on women of Asian descent.

Based on open-source information, ASPI assesses that the inauthentic Twitter accounts behind this operation are likely another iteration of the pro-CCP ‘Spamouflage’ network, which Twitter attributed to the Chinese government in 2019.