
Undermining unity: Disinformation as a threat to the Quad

Disinformation campaigns targeting the Quad, a partnership between Australia, India, Japan, and the United States, challenge its credibility. These campaigns, often state-linked, misrepresent Quad initiatives, exploit internal differences among its members and portray the group as warmongering. This erodes public trust in the Quad, heightens geopolitical tensions and complicates regional cooperation.

The origins of the disinformation campaigns are not completely clear, but it can at least be said that they suit the purposes of China and, perhaps, Pakistan.

The Quad promotes a free, open and prosperous Indo-Pacific by addressing challenges such as health, climate change, cybersecurity and infrastructure development. However, intensifying geopolitical tensions have exposed a fundamental limitation: the Quad’s reluctance to explicitly focus on security challenges. This strategic ambiguity makes the partnership vulnerable to disinformation campaigns. These campaigns often spread false or misleading information to manipulate public opinion, targeting the group and its members.

Two narratives dominate disinformation campaigns targeting the Quad. The first, and most persistent, frames the Quad as a security alliance formed to contain China. This stokes fears that Quad involvement in Indo-Pacific disputes, such as those over the South China Sea and Taiwan, will lead to open confrontation.

To support the false narrative, evidence is often fabricated. For instance, in September 2024 an image of a 2017 US-Japan military drill was falsely presented as a standoff between Chinese and US vessels in the South China Sea. The image was used as a thumbnail for a video claiming that the Quad was preparing for military confrontation with China in the South China Sea. The video, which was viewed almost 200,000 times, was posted on a channel that had repeatedly shared misinformation.

The second dominant narrative exploits political and interpersonal differences among Quad members to sow discord. Some campaigns highlight strategic divergences, such as India’s historical non-alignment, Australia’s rebuilding of economic ties with China and Japan’s pacifist stance. Others target relationships between leaders. For example, during the September 2024 Quad summit, a video doctored to depict then US president Joe Biden showing disrespect to Indian Prime Minister Narendra Modi circulated widely. It was aimed at inflaming anti-US sentiment in India.

Disinformation campaigns also exploit issues that indirectly weaken the Quad’s cohesion. Immigration has been used as a wedge issue, with false claims circulating that US President Donald Trump planned to deport 18,000 illegal Indian immigrants as soon as he was inaugurated. Media monitoring revealed that accounts such as @PSYWAROPS, which predominantly follows Pakistani sources, shared pieces of false information to strain US-India relations and undermine India’s domestic confidence in the partnership.

Health security, a central focus of Quad cooperation, has also been targeted by disinformation. Japan has been falsely accused, on the basis of a misrepresented Japanese press conference, of labelling mRNA Covid-19 vaccines as deadly. These claims were first shared in simplified Chinese and spread across platforms such as X, Facebook and Weibo.

Such disinformation is particularly damaging, as the Quad Vaccine Partnership was designed to bolster regional health security. With the US hosting Pfizer-BioNTech’s supply chain and Australia opening a Moderna vaccine facility in Victoria last year, these false claims risk damaging the Quad’s credibility in delivering health initiatives.

Another example emerged in 2022, when online users masquerading as local activists falsely claimed that a facility planned in Texas by Australian mining company Lynas Rare Earths would cause pollution. Although this disinformation primarily targeted Lynas, it indirectly affected the Quad’s work to secure its supply chain, as the company had been selected by the Pentagon to develop the initial engineering and design for the commercial heavy rare earths separation facility in the US.

Although the immediate effect of these disinformation campaigns on the Quad’s cohesion has been limited, their potential for long-term harm is significant. Persistent and convincing false narratives could erode public trust, reduce domestic support for Quad initiatives and hinder its ability to build stronger security partnerships. Narratives framing the Quad as belligerent, divided and ineffective not only diminish its legitimacy; they also complicate its ability to keep the Indo-Pacific free and open.

To mitigate these risks, the Quad should adopt a more coordinated and proactive approach. It should continue to cooperate, including through regular information-sharing on measures against disinformation. Additionally, collaboration with regional organisations such as the Association of Southeast Asian Nations could further build resilience against disinformation through joint capacity-building and media literacy programs.

Furthermore, engaging with social media platforms to address vulnerabilities, promote transparency, and improve content moderation can help combat disinformation. Japan’s recently launched public-private partnership project to improve technological literacy offers the Quad a model initiative.

To fight disinformation, treat it as organised crime

The Australian government’s regulatory approach to tackling disinformation misses the mark by focusing on content moderation and controlling access to platforms. This focus on symptoms is like fighting a flood by mopping the floor: it feels like you’re dealing with the immediate problem, but it ignores the root cause.

The government should instead treat disinformation like organised crime and focus on dismantling networks.

Laws governing organised crime are effective because they focus on patterns and networks, not necessarily the commodities criminal syndicates trade in. Laws treating disinformation similarly would focus on scale, coordinated inauthentic behaviour, financial patterns and systematic manipulation for profit or influence, not content or controlling platform access. This would target orchestrated disinformation infrastructure while preserving freedom of expression.

The approach would allow governments, social media companies and their cyber allies to better tackle disinformation networks and actors. They would be able to take down malign disinformation enterprises, instead of playing Whac-A-Mole with content—shifting to controversial community notes or applying ineffective and unenforceable blanket access bans to groups of citizens.

Every disinformation campaign begins with an initiator, someone who deliberately spreads untruthful content to distort our view of reality. Disinformation differs from misinformation, which is unknowingly false—an honest mistake.

We used to think that content moderation and fact checking were the solution, but alone they are ineffective.

Human content moderation costs too much time and money, so companies have been experimenting with AI-assisted processes.

But automated moderation can’t reliably understand nuance, context or intent, which all help determine whether content is truly harmful or simply controversial. AI systems struggle with basic linguistic challenges. They often fail to catch harmful content in non-English languages, regional dialects and culturally specific contexts. They also frequently misclassify content, struggling to distinguish between disinformation and legitimate discussion about disinformation.
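To illustrate the misclassification problem, here is a minimal, hypothetical sketch of a keyword-based filter (not any platform’s actual system): because it matches surface features rather than intent or context, it flags a fact-check discussing a false claim just as readily as the false claim itself.

```python
# Toy illustration only: a naive keyword filter cannot tell disinformation
# from legitimate discussion about disinformation. Phrases and posts are invented.

FLAGGED_PHRASES = ["vaccines are deadly", "the election was rigged"]

def naive_moderation(post: str) -> bool:
    """Return True if the post would be flagged for removal."""
    text = post.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

disinfo = "BREAKING: officials admit vaccines are deadly!"
fact_check = "The viral claim that vaccines are deadly has been thoroughly debunked."

print(naive_moderation(disinfo))      # True: harmful content caught
print(naive_moderation(fact_check))   # True: legitimate discussion wrongly flagged
```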

Controlling platform access, such as recent regulation in Australia banning children under 16 years old from using social media, is another approach. But enforcement is difficult.

Yet the biggest problem is neither technical nor practical. It is philosophical.

Liberal democratic societies value freedom of speech. Content moderation is problematic because it treats freedom of speech as merely a legal or technical problem to be solved through rules and algorithms. But freedom of speech, open discourse and the marketplace of ideas are central to the democratic process.

Age-based social media bans present a fundamental tension with democratic and liberal philosophical principles as they impede young people’s development as democratic citizens. Social media is a key space for civic engagement and public discourse. Blanket age bans prevent young people from gradually developing digital citizenship skills. Consequently, young people would suddenly gain access to digital spaces without prior experience navigating them.

Approaching disinformation as organised crime focuses on the root cause of the problem—the malicious actors and networks creating harmful content—rather than trying to regulate the average citizen’s platform access or speech. Such an approach would target specific malicious groups, whether traced back to foreign information manipulation and interference, domestic coordinated inauthentic networks, or financially motivated groups creating fake news for profit.

Laws that treat disinformation as organised crime would require the prosecution to show several elements: criminal intent, harm or risk to public safety, structured and coordinated efforts, and proceeds of crime.

The first two elements should be covered by the definition of disinformation as the intent to deceive for malicious purposes. For the past four years, the Australian Security Intelligence Organisation has warned of the threat of foreign interference. In 2022, foreign interference supplanted terrorism as ASIO’s main security concern and in 2024, it was described as a real, sophisticated and aggressive danger.

ASPI data supports this assessment, exposing widespread cyber-enabled foreign interference and online information operations targeting Australia’s elections and referendums, originating from China, Russia, Iran and North Korea.

Together, ASIO and ASPI’s work indicates intent and harm—to individuals, institutions, organisations and society—for financial or political purposes.

Structured and coordinated efforts are equally provable. Disinformation is already known to involve coordination by organised networks, akin to organised crime syndicates. Meta, Google, Microsoft, OpenAI and TikTok already detect and disrupt covert online influence operations. They understand the tactics, techniques and procedures malicious actors use on their platforms—including identity obfuscation, impostor news sites, bot networks, coordinated amplification activity, and systematic exploitation of platform vulnerabilities.
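As a rough illustration of why coordination leaves provable traces, the hypothetical sketch below flags clusters of distinct accounts posting near-identical text within a short window, one simplified signal of coordinated amplification. It is not how any named platform actually works; real detection combines many such signals.

```python
# Toy sketch of one coordination signal: many distinct accounts posting
# near-identical text within a short time window. Data, thresholds and the
# normalisation step are all hypothetical simplifications.
from collections import defaultdict

def normalise(text: str) -> str:
    # Strip case and punctuation so trivially varied copies of a post still match.
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

def coordinated_clusters(posts, window_secs=600, min_accounts=5):
    """posts: iterable of (account_id, timestamp_secs, text); returns suspicious clusters."""
    buckets = defaultdict(list)
    for account, ts, text in posts:
        buckets[normalise(text)].append((ts, account))

    clusters = []
    for text, items in buckets.items():
        items.sort()  # order by timestamp
        accounts = {acc for _, acc in items}
        burst = items[-1][0] - items[0][0] <= window_secs
        if len(accounts) >= min_accounts and burst:
            clusters.append({"text": text, "accounts": sorted(accounts)})
    return clusters
```

In practice, a legal threshold built on such patterns would target the operators of the network, not the individual users who unwittingly amplify it.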

Finally, disinformation is a funded enterprise, so profits can be classed as proceeds of crime. Like any criminal venture, disinformation is a calculated operation funded to undermine society, through advertising, fraudulent schemes or foreign funding. Laws that target financial aspects of disinformation operations—such as shell companies, front organisations, suspicious financial transactions or use of fake, compromised or stolen accounts—would differentiate malign enterprises from authentic individuals expressing genuine beliefs, however controversial.

Regulating content and platform access risks either over-censorship that chills legitimate discourse or under-moderation that allows harmful content to spread. We already have the tools and legal frameworks to prove malign online influence without undermining liberal democratic values. It’s time to change our approach and classify disinformation as an organised crime.

To combat disinformation, Japan could draw lessons from Australia and Europe

Japan is moving to strengthen its resilience to disinformation, though so far it’s only in the preparatory stage.

The EU and some countries have introduced content-moderation requirements for digital platforms. By contrast, Japan has so far gone no further than expert discussion of countermeasures against disinformation. While that progress is welcome, Tokyo needs to consider establishing its own standards and joining a growing global consensus on countering disinformation, including foreign information manipulation linked to malign state actors.

2024 was a tough year for Japan in countering disinformation campaigns. Immediately after the Noto earthquake in January, false rescue requests were widely spread on social media, diverting scarce resources of emergency services away from people who genuinely needed help. After record-breaking rainfall hit the Tohoku region in July, more than 100,000 spam posts disguised as disaster information appeared on social media. And ahead of the September election for the Liberal Democratic Party’s president and Japan’s October general elections, Japan Fact-check Center identified the spread of false and misleading information about political parties and candidates.

Japan is in a delicate situation. It’s one of the countries at the forefront of Chinese hybrid threats due to its proximity to Taiwan and its principled stance upholding the rules-based order. But Japanese society, accustomed to little political division and to passively receiving information, may lack the resilience to disinformation seen in countries such as the United States or Korea.

Now, about 67 million Japanese are active users of X, more than half the population. X has become an important news and information source for a segment of Japanese society that is less inclined to confirm the accuracy of news items via more mainstream sources.

In response, the government has taken steps to combat disinformation and misinformation. In April 2023, a specialised unit was established within the Cabinet Secretariat to collect and analyse disinformation spread by foreign actors. As president of the G7, Japan introduced the Hiroshima AI Process in 2023 to address AI-enabled disinformation. Furthermore, the Ministry of Foreign Affairs produced solid evidence to effectively counter disinformation campaigns relating to the release of treated wastewater from the Fukushima Daiichi nuclear power plant. This disinformation may have come from China. The ministry’s effort should be applauded and serve as a model for future responses.

But simply responding to every incident may not be sustainable. Countering the proliferation of disinformation also requires content moderation, which must be balanced to protect freedom of expression and avoid placing an undue burden on digital platforms. Thankfully, international partners provide some good examples for reference.

The EU’s Digital Services Act (in full effect since 2024) forces digital platforms to disclose the reasoning behind content-moderation decisions and introduces mechanisms for reporting illicit content. In Australia, the Combatting Misinformation and Disinformation Bill (2024) was intended to give the Australian Communications and Media Authority powers to force digital platforms to take proactive steps to manage the risk of disinformation. The bill was abandoned in late November, and Japan can treat its fate as a lesson in what to avoid.

Japan’s government has commissioned various study groups but so far has taken no legislative action to combat misinformation and disinformation. The present reliance on voluntary efforts by digital platforms is insufficient, especially given the growing likelihood and sophistication of disinformation threats. Concrete measures are needed.

The Japanese government should engage multiple stakeholder communities, including digital platforms, such as X, and fact checking organisations, to collectively set minimum standards of necessary content moderation by digital platforms. While the specifics of moderation can be left to the discretion of the digital platform, minimum standards could include, for example, labelling trusted media and government agencies and assigning them a higher algorithmic priority for display. If minimum standards are not met, the digital platform would be subjected to guidance or advice by a government authority. But the authority would not have the power to remove or reorder individual content.

Setting standards in this way would respect existing limits of freedom of expression while reducing users’ exposure to disinformation that could cause serious harm. It would, however, require verifiable criteria for determining trusted accounts and the establishment of a contact point for complaints within digital platforms or trusted private fact-checkers.

Regulating digital platforms will not be enough. It’s also important to call out malicious actors and strengthen public awareness and media literacy. The proliferation of disinformation with political intent by foreign actors is a global problem, so Japan should cooperate with partners that share its democratic values, such as Australia. Tokyo should therefore be prepared to be more proactive in joining public attributions of malicious state-sponsored campaigns, as it was with the advisory on the cyber threat actor APT40, initially prepared by Australia.

Japan’s resilience to disinformation is still developing. Given its prominent role in the regional and global order and its proven commitment to a rules-based international order, a higher level of urgency is required.

Information, facts, journalism and security

(A speech by the executive director of ASPI to the Media Freedom Summit, hosted by the Alliance for Journalists’ Freedom in Sydney on 14 November.)

 

I want to start by citing the Guardian’s latest pitch for support from its readers. As most of you will know, the Guardian asks readers to pay rather than forcing them to do so through a paywall. One of the ads that runs at the bottom of every Guardian story reads as follows:

This is what we’re up against …

Bad actors spreading disinformation online to fuel intolerance.

Teams of lawyers from the rich and powerful trying to stop us publishing stories they don’t want you to see.

Lobby groups with opaque funding who are determined to undermine facts about the climate emergency and other established science.

Authoritarian states with no regard for the freedom of the press.

The first and last points are the most pertinent to me as head of ASPI. Bad actors are indeed spreading disinformation, and authoritarian states indeed have no regard for the freedom of the press.

And here’s why, as a national security guy, I like this pitch: because a society in which people want to pay for quality news is also a society that will be more resilient to disinformation, misinformation and the gradual erosion and pollution of our information environment. This resilience is a key pillar of our security; you might say it’s the strength on which all of our other capabilities are founded.

It points to a society in which people want to understand complex issues by engaging with facts.

It points to a society in which people want to do the hard work of exercising their critical-thinking skills so that they can evaluate for themselves what they’re being told, so they have healthy scepticism about political and social orthodoxies, not conspiratorial mistrust of traditions and institutions.

Those skills are built up through education—that includes formal education, life experience, auto-didacticism such as reading newspapers, and community and civic engagement. In other words, life in a vibrant and well-functioning society.

And let me stress, self-education through reading and viewing material online is a perfectly legitimate pursuit. But it doesn’t mean believing everything you read, nor selecting your own preferred facts, nor wrapping yourself in a comforting bubble of online fellow travellers who agree with you and validate your views.

What’s at stake here is that democracy, and in my view the functioning of society more broadly, depends on how we, as participants, recognise facts in a sea of information, and how we sort and prioritise those facts into an understanding of the world that we can use as a basis for action—including how to vote and how to perform all the other functions that engaged citizens perform in a democracy.

People will apply different weights, importance and context to facts based on the values those people hold. As long as the facts, or at least the majority of them, are agreed, people with differing values and world views can have a meaningful discussion. This is the foundation for even the most impassioned debate: people drawing on a common set of facts to arrive at different but nonetheless legitimate opinions.

Journalists and news organisations should hold privileged positions in the information environment based on the credibility they build up over time. However, to earn and hold these positions, journalists also have a sacred responsibility to report fairly, accurately and objectively in the public interest. What we can’t afford is for news organisations to retreat into ever more polarised political positions.

Media are vital to moderating and holding together public conversations even on the most difficult and controversial issues. That means leading civil debates on sensitive social issues, respectful debates and disagreements on very emotive foreign policy issues such as the war between Israel and terrorist organisations Hamas and Hezbollah and, yes, how Australia engages constructively with the new Trump administration.

Public institutions need to accommodate different points of view. Rebuilding trust in those institutions, such as the government, the media and higher education, is not helped when they create a sense that open debate will be quashed and dissenting views will bring damage to a person’s reputation.

Through these debates and (civil) contests of ideas, democracy enables us to make adjustments to the way we collectively run our society. All the knowledge and day-to-day life experience of adult citizens are fed back into decision-making by the elected executive. This happens through elections, through citizens’ engagement with the institutions that implement policies and sometimes through less formal means including public protests—hopefully peaceful and lawful ones.

Though imperfect, this process has always worked. But it has been dramatically disrupted by the popularisation of the internet over roughly three decades, and of social media over roughly 15 years.

Yuval Noah Harari in his most recent book, Nexus, about the history and future of information networks, coined the phrase ‘the naive view of information’ to describe the false expectation that if people have access to ever more information they will, per se, get closer to truth. A related misunderstanding is the so-called ‘free market of ideas’—one of the popular beliefs back during the heady and utopian early days of the internet.

The hope was that if all ideas, good and bad, could be put on this intellectual market, the best ones would naturally compete their way to the top. But we’ve quickly learnt that the ideas that are the stickiest, the most likely to gain traction and spread, are not necessarily the most true, but more often the ones that are most appealing—the ones that give us the most satisfying emotional stimulation.

Far from being a functioning open market, it takes an active effort to create and share information that is directed at the truth. Journalism is one such effort. News media that is not directed at the truth but at social order or the creation of shared realities isn’t journalism; it’s propaganda.

Now, why is all this such a worry to the national security community?

Because it makes us deeply vulnerable. In telling ourselves that government involvement in the digital world would stifle innovation, we have only stifled our own ability to protect our public and left a gaping hole for foreign predators. Inevitably, the absence of government involvement leads to security violations. Instead of calm, methodical government involvement we then get rushed government intervention.

Powerful players such as China and Russia can use their resources and capabilities to put their finger on the scales and influence a society. Disinformation can shape beliefs across wide audiences. This can change how people vote or erode their faith in institutions and even in democracy itself. It can turn people against one another. It can impact policymaking and leave us less safe, less secure and less sovereign. It is one thing for our own politicians and media to influence us, but it is a national security threat that we are being influenced and interfered with by foreign regimes, their intelligence services and their state-run media.

I happen to believe in higher defence and security spending not because I seek aggression, conflict or war, but to deter it—because I believe that we keep ourselves safe by being strong and making it clear that we are strong. I also believe that with all the defence spending in the world, if your society is divided against itself to the point of dysfunction, you eventually have to ask yourself: what exactly are you protecting? And that’s why the information domain is as important as traditional military domains to a sensible national security practitioner.

An adversary doesn’t need to invade you or use coercive force to shape you if they can influence you towards a more favourable position through information operations. It costs billions, maybe trillions of dollars to invade another country, overthrow its government and install a more friendly one. Why do that if you can shape the information environment so that the other country changes its government on its own, for a tiny fraction of the price? The AI expert Stuart Russell has calculated that the Russian interference in the 2016 US presidential election—which was a bargain for Moscow at a cost of about $25 million, given the massive disruption it’s caused—could be done for about $1000 today thanks to generative AI.

Solutions and responses 

So, what can we do about this? I don’t need to tell you that the business models of the news media are under enormous stress. Ask the person who wrote that eloquent pitch for support for the Guardian.

It’s easy to look around and feel despondent about the scale of the challenge. But it’s worth remembering we are still really in the early stages of the information revolution.

My submission is that the best way to create sustainable business models for strong, independent journalism is to foster societies in which people want to pay for this journalism, because they see value in having high-quality information. And they want this information because they recognise that it empowers them. It does not shut them down. There are rules and accountability but not censorship. Importantly, this requires our politicians, security agencies and the media to protect all views, not just ones the political leadership or journalists agree with. Too often I see genuine debate shut down, resulting in fear and self-censorship by those who might have a different view. For example, there is unquestionably a growing fear in our society among people wanting to support Israel. Shutting down legitimate views just because you think it is for a good cause does not make it right.

If our societies, including our media, focus their demands for accountability upon those countries and governments that cannot extract a cost from us (such as harming us economically as China has done and could do again) and if we hold only democracies to a high standard, we leave ourselves and our sovereignty vulnerable.

We should want to be a society open to ideas, views and debate. That is a foundation for resilience and security. Strong national security starts with a strong society aligned by a common set of principles, and resilient to different ideas.

So, we need to build our resilience to disinformation and the pollution of the information environment, as well as our appreciation of the importance of democratic values and freedoms. That means education throughout life, civics classes, digital literacy and support for civil society dealing with technology and democracy.

It means the government helping to create incentives for media to act as sheriffs in the information wild west (rather than those that abdicate any responsibility). That includes everything from content moderators on social media platforms to hardcore investigative journalists.

Conclusion

This is why I strongly believe that journalists and the national security community have many more aspirations and interests in common than they do natural tensions. And I want to dispel the idea that there is an inherent trade-off whereby the goals of one will necessarily come at the expense of the other.

It worries me when national security is seen as a potential threat to democratic freedoms and liberties, privacy being the most common example. This is the wrong framing.

Sometimes, the national security community gets things wrong. It makes mistakes. From time to time, officials might even behave unethically or, in rare cases, illegally. These are, for the most part, legitimate matters for journalists to pursue.

There is a lot of other national security work that simply needs to remain secret and non-public. That’s the nature of most intelligence work, significant portions of defence work, some diplomacy and some law enforcement.

A responsible national security leader should welcome scrutiny of shortcomings in conduct or competence in their agency. And a responsible journalist or editor should want to live in a functioning society in which national security agencies are able to do their work to protect us and our democratic freedoms. Freedom of speech and freedom of the press are, after all, cornerstones of our democracy.

And in a well-functioning democracy, national security is about protecting our freedoms, never about curbing them. CCTV cameras on the street protect your right to walk safely but are not used to profile minorities as is the case in authoritarian countries.

National security agencies that are accountable to oversight by various watchdogs, and ultimately by the elected government and parliament, keep us safe not just in the sense of our physical bodies and lives, but also our society and our democratic way of life.

As part of this, it is vital that media not regard changes to national security policy or legislation only with respect to their impact on journalists. Just as there is a difference between something in the public interest and something publicly interesting, there is a distinction between restricting press freedom and restricting the press.

Government requests for understanding and cooperation in terrorism investigations, or measures to prevent public servants from leaking classified information, aren’t violations of press freedom.

To decry every government demand or expectation that journalists exercise responsibility risks desensitising the public to those few occasions that do cross the line and infringe freedoms.

This is why I support the work that Peter Greste and the Alliance are doing to clearly delineate the true work of journalists in gathering, carefully assessing and responsibly reporting facts from the reckless behaviour of those who believe that all secrets are sinister and should be exposed on principle.

Julian Assange, for instance, should never have been viewed as a journalist, but as someone who ultimately put lives at risk in the name of press freedom. Similarly, so-called whistleblowers who only target the secrets of open, rule-abiding democracies are actually doing the work of the Russian and Chinese governments and other authoritarians, and they reduce the ability of our agencies to protect the public, including journalists.

Attempting to argue security laws have a chilling effect on sources leaking classified information will not be successful as that is not an unintended effect—it is the point.

Yes, we must hold ourselves and our democratic governments to account. But freedom of the press and freedom of expression are not enjoyed where one is only free to actually harm our own societies.

Political differences managed and resolved through open debate are a good thing. Political and social divisions driven by fear are toxic to our open societies.

You can’t have a free media without a strong democracy, and you can’t have a strong democracy without a free media. Those truths lie at the heart of the common mission between national security and journalism.

Public opinion and PLA loyalty: objects of the Information Support Force

The court of public opinion is now a critical battleground in modern warfare, according to China’s People’s Liberation Army (PLA).

China’s newly established Information Support Force is not just responsible for the PLA’s vast information network but also for spreading offensive disinformation, countering perceived foreign disinformation and ensuring loyalty across the military. 

The US and its allies need to take this seriously. China’s intentions and capabilities in the information domain are a military issue, not just a matter of public diplomacy.

The PLA now views the media as a ‘combat weapon’. It believes hostile disinformation is damaging command capabilities and could affect political and military outcomes.

In April, China’s senior body of military decision makers, the Central Military Commission (CMC), disbanded the PLA’s Strategic Support Force and announced establishment of a new Information Support Force (ISF). This new force is tasked with engaging China in an information war with the United States and US allies.

But the ISF isn’t just about modernising information warfare. Its major mission is ensuring the PLA never turns against the Chinese Communist Party (CCP).

In establishing the ISF on 19 April 2024, President Xi Jinping called on the new service to ‘adhere to information dominance’, ‘strengthen information protection’, ‘consolidate the foundation of the troops’ and ‘ensure that the troops are absolutely loyal’.

With recent corruption scandals threatening the integrity of PLA leadership, Xi has doubled down on suppressing internal dissent in the PLA.

It is clear he sees the ISF not only as a tool for modernising warfare capabilities but also as a mechanism for reinforcing the CCP’s ideological control within the military. Since 2021, the PLA has expressed an intent to defend ‘against the enemy’s [psychological warfare] and incitement to defection’. This emphasis reflects a broader strategy to preserve power within the CCP by ensuring that its military remains a loyal pillar of the party’s authority and by countering foreign attempts at swaying Chinese public opinion.

The PLA claims the United States and its allies have deliberately used media and other communications tactics to fabricate discrediting information about their adversaries’ leaders, politicians and senior military officials. It calls this leadership-debilitating tactic ‘beheading with public opinion’ (舆论斩首) and says it can damage the prestige of China’s leaders and military officials, undermine their resolve and damage their decision-making capabilities.

With the creation of the ISF, the PLA is now determined to neutralise these supposed Western tactics. To do so, PLA strategy authors Sun Jian and Mei Zhifeng are calling on the PLA to ‘attack and defend at the same time’ on the information battlefield.

To defend against the foreign tactics, the PLA plans to dispel what it calls ‘rumours’ and the US’s ‘sinister intentions’ by cutting off their dissemination chain and strengthening Chinese public opinion countermeasures. The PLA also wants to counterattack by exposing ‘the false veil of democracy and freedom [the United States] has constructed’, and to sway Chinese public opinion against the US by creating a situation in which China has the moral upper hand. They say this can be done by highlighting contradictions in US foreign policy and domestic issues, such as political divisions and social inequality, to create an image of moral superiority for China.

China has even said it intends to sow public discord and ‘incite separatist and confrontational activities’ in its adversaries. It may be thinking of acting much as Russia does. There is compelling evidence that bots backed by Russia are disseminating disinformation in the United States.

Creating a separate information support force within the PLA shows China’s seriousness in operating within the information domain. In CCP documents, the PLA views the information domain as equal in importance to the physical domains of air, land, sea and space. It even talks about conducting operations in these physical domains to enable operations in the information domain.

This way of thinking about information warfare as a battlefield in itself is at odds with the way the United States and its allies view the concept. They instead see it mainly as a means to support conventional operations. In US doctrine, information operations—such as cyber warfare, psychological operations and propaganda efforts—are often used to enhance the effectiveness of traditional military strategies.

In a speech on 8 October 2024, Xi expressed a desire to ‘improve the ability to guide public opinion’. By manipulating the information domain to manage the public’s perceptions, Xi can better influence public opinion and prevent any challenge to the monopoly power the CCP has over China.

The ISF will likely play a significant role in PLA information warfare in the future, and the US and its allies should watch intently to see how the organisation shapes up.

A new risk on the horizon: organised criminals as mercenaries of disinformation

At a time when controlling the narrative is power, are organised crime groups acting as mercenaries of disinformation, using their skills to manipulate minds for profit? A recent Australian Federal Police (AFP) operation suggests an intersection is forming between crime, disinformation and technological exploitation.

Last month, the AFP arrested six members of a Sydney-based criminal syndicate implicated in drug importation as part of Operation Kraken. The operation targeted Ghost, an encrypted messaging app designed for illicit communications. The app played a crucial role in the syndicate’s activities and was used to send more than 125,000 criminal messages.

What makes this case particularly interesting is that one senior member of the syndicate allegedly orchestrated a disinformation campaign. This involved fabrication of a terror attack, a false narrative aimed at perverting the course of justice and diverting law enforcement resources.

Criminal groups are known to exploit social and technological developments for profit. Cybercriminals in Europe are already offering disinformation as a service to customers. The AFP and criminal intelligence organisations should be wary of that happening in Australia.

Further, criminal organisations are now selling services to both state and non-state actors.

In Myanmar, for instance, ethnic armed groups have increasingly relied on drug production to finance their operations. By partnering with organised crime syndicates, they monetise their control over territory, granting protection and resource access in exchange for a share of the profits. This not only perpetuates the cycle of violence and instability but also entrenches the drug trade within these communities, as they become reliant on revenue generated from opium and methamphetamine production.

So we see a complex interplay of local power dynamics and organised crime.

For many years, criminal groups have offered money laundering as a service, creating a new dimension in financial crime. They provide tailored solutions for individuals, businesses and even governments aiming to obscure origins of illicit funds. The criminals charge for their expertise in navigating complex financial systems. A notable example is the case of the Panama Papers, which exposed how many high-profile individuals and corporations used offshore shell companies in jurisdictions such as Panama to launder money and evade taxes.

Criminal groups also offer hacking services on a subscription basis, enabling clients—ranging from individuals and groups to state entities—to breach security systems, steal sensitive data or deploy ransomware. This trend allows customers with limited technical skills to engage in sophisticated attacks, effectively broadening the reach of both state and non-state actors.

For instance, Eastern European cybercriminals have been linked to the proliferation of bot farms, which are used to automate attacks and disseminate disinformation at scale, amplifying the impact. By providing these illicit services, hackers create immunity for their customers, who become difficult to trace and prosecute or, in the case of rogue states, hold accountable.

There is a reasonable assessment that the demand for disinformation services is growing among state and non-state actors, reflecting a shift in how information is weaponised for strategic advantage. As geopolitical tensions rise and digital platforms proliferate, various groups—from rogue states to organised crime syndicates—are increasingly turning to disinformation as a tool for manipulation and control. This can range from spreading false narratives to create confusion and distract law enforcement, to launching smear campaigns to discredit adversaries.

If this demand evolves disinformation from a tool of deception into a service available for hire, Australian law enforcement will face complex new challenges. The intertwining of disinformation with organised crime complicates the national security landscape.

Australia must evaluate whether existing legislation is sufficient to address the commodification of disinformation. The Operation Kraken case should prompt further investigation into intelligence surrounding criminal fee-for-service disinformation schemes. A coordinated approach involving law enforcement and intelligence agencies is essential to counter the threat.

Digital literacy is a national security asset

Not long ago, coordinated disinformation and its trail of social and political chaos was something that happened to other countries. No longer. Authoritarian states have expanded their information operations in Australia, and local actors are learning and imitating. Government efforts to deal with the problem haven’t yet responded to its sheer scale.

Australia urgently needs to put in place well-funded public disinformation literacy campaigns, augmented by digital media literacy education in schools, a report published by the Asia-Pacific Development, Diplomacy & Defence Dialogue (AP4D) argues. The government also needs to grow and support fact-checking bodies in media organisations, universities and the non-government sector.

As the problem of disinformation grows, it’s clear that the softer options of industry self-regulation and voluntary codes of conduct aren’t enough. The news-media bargaining agreement with Meta, for example, is unravelling. It was supposed to put more money into news journalism to balance the mass of disinformation online, but Meta has moderation fatigue and is moving away from news altogether.

The government is still working through draft disinformation legislation that would give more regulatory bite, compelling social media companies to take more responsibility for the disinformation that their platforms so effectively enable.

Some of Australia’s foremost media and information experts, consulted in the AP4D report, say a huge piece of the policy puzzle is missing: helping people protect themselves from online harms, including disinformation and other forms of manipulation, by building real understanding of the powerful cognitive effects of disinformation when it is coupled with the powerful delivery systems of social media.

So far, efforts to counter malicious information operations, disinformation and other threats in the information domain have been piecemeal and reactive, rather than comprehensive and strategic.

With almost half of the adult population not confident in their ability to identify misinformation online, combatting disinformation should be a national priority. Truth-based information is a fundamental national and global public good, needed for basic governance in any political system. In liberal democracies, truth-based information is needed to secure rights of citizens, conduct fair elections, administer the rule of law and make market economics work. It is also essential as a deterrent to corruption and the foreign interference that corrupt political and economic systems attract.

A low level of public literacy on disinformation and related threats is a key vulnerability for Australia.

A well-funded and ongoing public literacy campaign to reach Australia’s diverse national audiences is now a national necessity if we are to help citizens to reject disinformation and avoid such harms as fraud and identity theft, intrusive surveillance, harassment and data exploitation.

The efforts of non-government groups such as the Australian Media Literacy Alliance, a consortium of key public institutions and networked organisations, focus on supporting lifelong learning, especially for those who may be vulnerable to disinformation or digital exclusion. Through consultation, research and advocacy, the consortium’s primary goal is to develop and promote a government-endorsed national media literacy strategy for Australia. Its model should be supported.

In addition to broad-based public awareness campaigns, digital media literacy needs to be included in education curriculums from early childhood onwards, to help children and young adults build resilience.

This includes, for example, teaching students not just to engage with information by scrolling down the page or judging its superficial validity, but also to learn about its source—by leaving the webpage, opening another tab and searching elsewhere. The concept is called ‘lateral reading’.

Radicalisation prevention and support strategies should be built into that framework. A successful education campaign would include teaching how to recognise disinformation and propaganda aimed at radicalisation and attempts at exploiting individual vulnerabilities for recruitment. Children and young adults need to be aware of the harmful and violent nature of radical groups and their financial and political aims. They would also benefit from being shown the real-world consequences of online violence. People who have already been radicalised need easily accessible off-ramps when they want to escape.

While cultivating more critical thinking, we also need immediate pushback against disinformation as it arises. This is where fact-checking organisations operating at arm’s length from government become important. Their work would reinforce transparency and accuracy in the public sphere.

Key to implementation is raising the level of government communication with citizens. This can be challenging, especially in a risk-averse public service, but where the government does not speak there is a vacuum that can be filled with disinformation.

With two thirds of Australians polled in a 2021 survey rating the ability to recognise and prevent the flow of misinformation as either extremely important or very important, there is an unarguable mandate for action.

After the Voice referendum, Australia must find a better way to cut through the noise online

The expert assessment is emphatic: the Voice referendum campaign was beset by information that was false, distracting or conducive to an information space so confusing that many people switched off or were diverted away from reliable sources.

On top of the spread of false and manipulated information about Covid-19, Russia’s invasion of Ukraine and now Israel and Gaza, Australians might be tempted to accept fake news and unreliability as an inevitable effect of the sheer amount of information online and the ease with which we can access it.

But it’s not something we should be willing to accept. We can’t make the problem disappear, but we can expect governments to create a healthier information environment.

There are many lessons to be learned from the Voice campaign—among them that governments need to understand better how Australians get news and messages on important issues, how information circulates through the population, and what can be done to better inform voters. Many of the answers come down to strengthening the signal of reliable information while reducing the noise of irrelevant, false and manipulated information.

One potent slogan from the no campaign, ‘If you don’t know, vote no’, perfectly illustrates a problem that plagued this referendum. Too many people, by their own admission, didn’t know what they were voting on, and many didn’t make enough effort to improve their understanding. Some who did seek to learn more were left unsatisfied with the level of detail they could find. There simply wasn’t a strong enough signal of reliable information on the yes campaign.

There are several reasons why it’s difficult for people to sort through information online. A flood of information can often overwhelm readers and prompt them to disengage. We are also prone to accepting information that aligns with our pre-existing beliefs without checking its reliability. Both human tendencies can be exploited by malign actors to influence people—though this also happens when people are sharing information with good intentions.

Throughout the referendum and in various other election campaigns, political parties, foreign governments and other actors were accused of spreading disinformation online to sow division or influence an outcome. It’s easy to think that a lot of the false information online is highly targeted, tactical and precise in an effort to manipulate people and decisions. But in reality, the online space is typically more like a whirlwind of chaotic messaging, fear, anger and confusion.

People are entitled to their opinions. We can’t change that and nor should we want to, even if we know that some of those opinions are going to be informed by misleading or incorrect information. So the focus must be on improving people’s access to facts in as unpolluted an information environment as possible.

It can start with better promotion of the tools that are already available. There were many places to find accurate and reliable information on the Voice, starting with voice.gov.au and the Australian Electoral Commission website. Many of these were drowned out. An analysis using the online tracking tool Meltwater shows that the number of online public mentions of the Voice averaged more than 20,000 per day this year, and influential popular culture icons, influencers and well-funded lobbyists dominated the space.

There were viral videos created by grassroots campaigns that encouraged people to at least search for more information about the Voice through Google before making a decision, which would have helped drive more traffic to government websites. But videos such as one from Australian rapper Briggs—which amassed nearly four million views on Facebook and Instagram in 48 hours, 10 times more than Prime Minister Anthony Albanese’s video did in a month—were rare and largely highlighted the government’s failures to get information to voters. Meanwhile, conservative activist group Fair Australia delivered at least nine TikTok videos that drew more than one million views each.

Grassroots campaigning is great. But the Australian government shouldn’t be relying on individuals and influencers among the population to drive voters towards more and better information.

Governments should learn from what has worked well so that messages can be better tailored in future campaigns.

One successful example of a major company cutting through the noise happened in 2020 as the rollout of 5G internet intersected with fear and confusion about Covid-19. Telstra produced some entertaining and effective videos that dispelled online conspiracies that 5G and Covid were related and sought to inform 5G fence-sitters through humour while providing them with scientific evidence about the safety of 5G installation and use. The videos reached an audience 10 times larger than the standard Telstra video. This campaign shows that crafting the message so that it reaches and engages audiences online is one way of strengthening the signal in the noise.

Rather than having rules or norms that rely only on social media companies to identify and label clear instances of misinformation and disinformation, strengthening the signal can occur by applying labels more widely and earlier to more of the conversation and at lower thresholds, especially ahead of elections and referendums. Labels could even be put on uncontested opinions and information, and links to government websites and further information could be highlighted so they stand out more clearly on posts.

Education must also play a role in assisting the population to sort through the noise and find sources of reliable information online. From the moment children are expected to use the internet as a resource they must be taught how to spot disinformation tactics and avoid misinformation traps.

The online confusion around the Voice referendum was another wake-up call for our country to act on a problem that goes beyond politics and foreign influence. We are faced with a societal challenge that requires fundamentally changing the way people think about, engage with and process information from all sources. We need to invest more time and effort in deciphering what happens in the information space and helping everyone better understand what they are seeing.

Examining Australia’s bid to curb online disinformation

Hardly a day had passed after the government unveiled its initial draft of the Combatting Misinformation and Disinformation Bill 2023 when critics descended upon it.

‘Hey Peasants, Your Opinions (Hell, your facts) Are Fake News’, posted Canadian right-wing professor Jordan Peterson on X (then Twitter) in response to the announcement.

Since then, commentary on the bill has grown more intense and fervent. The bill sets up Canberra to be a ‘back-up censor’ ready to urge the big tech companies to ‘engage in the cancellation of wrong-speak’, wrote Peta Credlin in The Australian under the headline ‘“Ministry of Truth” clamps down on free expression’. For Tim Cudmore writing in The Spectator, the bill represents nothing less than ‘the most absurdly petty, juvenile, and downright moronic piece of nanny-state governmental garbage ever put to paper’.

In reality, the intentions of the bill are far more modest than the establishment of a so-called Ministry of Truth. Indeed, they’re so modest that it may come as a surprise to many that the powers don’t already exist.

Put simply, the bill is designed to ensure that all digital platforms in the industry have systems in place to deal with mis- and disinformation and that those systems are transparent. It doesn’t give the Australian Communications and Media Authority any special ability to request that specific content or posts be removed from online platforms.

If the bill were to pass, it would mean that digital platforms like WeChat would finally have to come clean about their censorship practices and how they’re applying them, or not, to content aimed at Australian users. It would also mean that digital platforms like X that once devoted resources to ensuring trust and safety on their platforms, but are now walking away from those efforts, are made accountable for those decisions.

If there’s one thing that Elon Musk’s stewardship of X has shown, it’s that even with an absolutist approach to free speech, content-moderation decisions still need to be made. Inevitably, any embrace of absolute free-speech principles soon gives way to the complexities of addressing issues like child exploitation, hate speech, copyright infringement and other forms of legal compliance. Every free-speech Twitter clone has had to come to this realisation, including Parler, Gettr and even Donald Trump’s Truth Social.

So, if all digital platforms inevitably engage in some sort of content moderation, why not have some democratic oversight over that process? The alternative is to stick with a system where interventions against mis- and disinformation take place every day, but they’re done according to the internal policies of each different platform and the decisions are often hidden from their users. What the Combatting Misinformation and Disinformation Bill does is make sure that those decisions aren’t made behind closed doors.

Under the current system, when platforms overreach in their efforts to moderate content, only the highest-profile cases receive attention. To take one recent example, a report by Credlin was labelled ‘false information’ on Facebook based on a fact-check by RMIT FactLab. The shadow minister for home affairs and cyber security wrote to Facebook’s parent company, Meta, to complain, and the ABC’s Media Watch sided with Credlin.

Would it not be better if this ad hoc approach were replaced with something more systematic, applying to regular members of the public and not just high-profile commentators? Under the proposed bill, all the platforms would have to have systems in place to deal with mis- and disinformation while also balancing the need for free expression. The risk of the status quo is not just that the platforms will not moderate content enough, but that they will at times overdo it.

When digital platforms refrain from moderating content, harmful content proliferates. But as platforms become more active in filtering content without publicly disclosing their decision-making, there’s an increased risk that legitimate expression will be stifled. Meta executives admitted at a recent Senate committee hearing, for example, that the company had gone too far when moderating content on the origin of Covid-19.

In contrast to the Australian government’s modest approach is the EU’s Digital Services Act, which just came into effect last week. That act heaps multiple requirements on the platforms to stop them from spreading mis- and disinformation. Many of these requirements are worthwhile, and in a future article, I’ll make the case for what elements we might like to use to improve our legislation. Fundamentally, the act represents a positive step forward by mandating that major social networks such as Facebook, Instagram and TikTok enhance transparency over their content moderation processes and provide EU residents with a means to appeal content-moderation decisions.

But if critics wish to warn about Orwellian overreach, they’d do better scrutinising the EU’s Digital Services Act, not Australia’s proposed Combatting Misinformation and Disinformation Bill. In particular, they should take a look at one element of the European legislation that enables the EU Commission to declare a ‘crisis’ and force platforms to moderate content according to the state’s orders. That sets up a worrying precedent that authoritarian rulers around the world are sure to point to when they shut down internet services in their own countries.

After years of relative laissez-faire policymaking, the world’s biggest tech companies are finally becoming subject to more stringent regulation. The risk of regulatory overreach is real and critics are right to be wary. But the Australian government’s proposed solution, with its focus on scrutinising the processes the platforms have in place to deal with mis- and disinformation, is a flexible approach for dealing with a problem that will inevitably continue to grow. And unlike the European strategy, it avoids overreach by both the platforms and the government.

Presenting intelligence: from Iraq WMD to the new era of ‘strategic downgrades’

Recent research from ASPI finds that Philip Flood’s 2004 inquiry into Australian intelligence agencies proved an inflection point in the national intelligence community’s development. In addition, the Flood report grappled with a matter at the heart of the intelligence failure on Iraqi weapons of mass destruction, and one of significant contemporary relevance: public presentation of intelligence for policy purposes.

Flood laid out the arguments against public presentation of intelligence, including risks to intelligence sources and methods, sensitivities of intelligence-sharing arrangements and partnerships, and the possibility that public exposure could distort the intelligence-assessment process by making analysts more risk-averse. He might have added a few other negatives to the list:

  • the threat posed by the aggregation of data—the ‘mosaic effect’. Even seemingly innocuous data can be consequential when aggregated and recontextualised
  • the risk that the nuances in intelligence assessments will be lost in public presentation (a factor in the Iraq WMD debacle)
  • the possible deleterious effects of selective declassification on government transparency.

Nonetheless, Flood acknowledged circumstances in which democratic policy decisions (especially about going to war) necessitated some form of suitable public release of intelligence. He pointed to a commonplace precedent: the use of sanitised intelligence to inform threat warnings to the Australian public, in the form of travel advisories.

Today, release of intelligence for statecraft purposes remains highly relevant, as evident from attempts by the US and UK governments in early 2022 to deter Russia from invading Ukraine by publicly revealing their intelligence about Moscow’s intentions and issuing regular intelligence-based updates.

Of course, the Iraq and Ukraine instances are not unique. Cold War governments on both sides of the iron curtain were prepared to leverage intelligence publicly for policy purposes or simply one-upmanship. Witness duelling defector statements and press conferences, the Kennedy administration’s public messaging during the Cuban missile crisis (including hitherto sensitive aerial imagery) and later the US declassification of satellite images highlighting Soviet violations of nuclear test bans and continuing bioweapons capability.

This continued in the 21st century. The UK publicly confirmed intelligence in November 2001 indicating al-Qaeda’s responsibility for the 9/11 terror attacks, and the Obama administration released intelligence obtained during the raid on Osama bin Ladin’s hideout. The UK would also issue a public statement on Syrian chemical weapons use, sourced to intelligence, in 2013 (including release of a complete Joint Intelligence Committee assessment). There are also regular references to intelligence-based conclusions without necessarily releasing intelligence itself—such as Russian culpability for the Salisbury poisonings. And there have been various US government indictments of hostile cyber operations (Chinese, Russian, Iranian, North Korean), in addition to cyberattack attribution by governments more generally.

Confronted in August 2021 with Russia’s worrying military build-up and hostile intent towards its neighbour, the US government first sought to leverage its intelligence knowledge behind closed doors. So, in mid-November 2021, CIA Director Bill Burns was sent to confront Moscow with what the US knew about its plans for an invasion. But, as Burns has since commented: ‘I found Putin and his senior advisers unmoved by the clarity of our understanding of what he was planning, convinced that the window was closing for his opportunity to dominate Ukraine. I left even more troubled than when I arrived.’

The Biden administration changed tack, to what Dan Lomas has termed ‘pre-buttal’, beginning in mid-January 2022 when the White House press secretary openly briefed the media on a Russian plot to manufacture a pretext for invasion, using a false-flag sabotage team. A fortnight later, in response to a press question, the Pentagon acknowledged that it knew the Russians had already prepared a propaganda video supporting this invasion pretext, for broadcast once an invasion commenced. Then, on 15 and 18 February, President Joe Biden revealed that US intelligence was now aware that more than 150,000 troops were assembled on Ukraine’s border awaiting an order to move. These efforts were buttressed by the UK’s public reference to Russian plans to install a friendly regime in Kyiv via a coup prior to the planned invasion.

Yet, as we know, the Russian invasion of Ukraine commenced on 24 February.

So, were these efforts a success or a failure? The obvious answer is that they failed: Russia wasn’t deterred. But was deterrence actually possible? Even if it wasn’t, the public release of intelligence complicated and disrupted Moscow’s invasion plans and arguably contributed to the Russian military’s poor performance in the early stages of the conflict. What’s more, the audience wasn’t just Russia. Public release, beyond traditional intelligence sharing in classified channels, had the effect of alerting and cuing Ukraine. Perhaps most materially, the approach galvanised third parties post-invasion, especially in Europe. This involved overcoming some lingering distrust associated with the disastrous efforts to leverage intelligence diplomatically in 2002–03 over Iraq.

The US government has since explicitly laid out its strategy for what it calls ‘strategic downgrades’: an increasingly proactive approach to public disclosure, aided by the overwhelming volume of open-source intelligence now available, which helps obscure the actual sensitive sources of the material disclosed.

Last month, Principal Deputy National Security Adviser Jon Finer detailed this strategy in a speech:

The deliberate and authorized public release of intelligence, what we now refer to as strategic downgrades, has become an important tool of the Biden administration’s foreign policy. This is a tool that we have found to be highly effective, but also one that we believe must be wielded carefully within strict parameters and oversight.

This speech was itself a form of public release of intelligence—and presumably was targeted again at both allies and adversaries.

The US has deployed this approach beyond just attempts to deter the Russians. For example, it has applied ‘strategic downgrades’ in relation to Chinese arms supplies to Russia, Wagner Group activities in Mali, and its own findings in relation to Chinese ‘spy balloons’.

The approach is underpinned by a formalised framework developed by US policymakers. Related decision-making is centralised in the National Security Council and the Office of the Director of National Intelligence. And its application is apparently limited to select situations—for example, when civilian lives or infrastructure are at risk, or to counter disinformation or false-flag operations.

Guidelines require that downgrades be accurate, be based on verifiable reporting, and be part of a broader plan that includes diplomacy as well as security and economic assistance. According to Finer: ‘It should always be in service of clear policy objectives. It’s not like you just get a piece of very interesting information that could sort of damage one of your adversaries and you decide that could be embarrassing to them, let’s put it out.’

‘Strategic downgrades’ are a potentially important tool for democratic governments, and US formalisation of the related strategy is a welcome development.

But public presentation of intelligence for policy effect deserves careful consideration and risk management. The landscape is complicated by the marked decline in public trust across the Western world and the emergence of a more uncertain strategic environment since 2003. Notably, invocation of intelligence in the political sphere—as with, inter alia, Iraq WMD, the course of the ‘global war on terror’ and Russia’s attempted election interference—necessarily politicises that same intelligence. Perhaps the most alarming example is the degree to which circulation of US intelligence on Russian interference in an increasingly toxic US political environment has effectively tarred US intelligence agencies with the same toxic politics.

And, as Finer observed: ‘You’ve got to be right, because if you go out alarming the world that something terrible is going to happen and you have it wrong, it will be much harder to use the tool effectively the next time.’

I think Flood would have agreed.