Tag Archive for: Disinformation

In fighting online interference, the first line of defence is the mind 

‘Nations have no permanent friends or allies, they only have permanent interests’, British statesman Lord Palmerston famously observed. Nearly two centuries later, the foreign-policy adage remains true, but the internet extends its relevance into a nation’s domestic affairs.

That’s because the internet and social media give authoritarian nations a lever to pull in democratic debate. That lever can be pulled through intermediaries who help guide the opinions of groups, or by promoting a worldview that undermines the legitimacy of liberal democracy.

The Chinese Communist Party’s ability to methodically shape the views of diaspora communities risks creating a bloc of voters who could—in theory—support decisions that run counter to Australia’s established national security interests.

The challenge of this domain requires the citizens of democracies to keep their minds open to new and ever-changing information, even while filtering out ideas shaped or co-opted to undermine the state itself. The information challenge is simultaneously external and internal.

The internet creates a scale issue in the economics of propaganda, too.

Censorship in the West today is not so much about restraining free expression, as it was in the 20th century; it’s more about the ability to drown out legitimate voices online.

In the hours it takes to fact-check a claim, or rebut a false one, another specious claim can be disseminated. Foreign powers, such as Russia, that aim to gum up the wheels of liberal democracy are aware of this.

Recognising these features helps us understand the limitations of exposing online networks as a primary form of prevention. In fact, exposing specific disinformation online can add to the mental burden of keeping track of what’s factual.

Beijing’s overseas media networks may have diverse owners, but they have a singularity of message on key issues such as the nature of China’s rise, the legitimacy of the CCP and the justification for China’s actions in the South China Sea—all topics that should be open to fair debate.

In this environment, a system designed to defend against disinformation may fail to properly protect vital, genuine, democratic debate.

So how do we bolster democracy’s communication, avoid amplifying distorted facts, and do so without creating additional complexity?

One way to do that is to take the battle away from the network and pull it back into the minds of citizens.

The individual must be able to see through not one but many simultaneous attempts to sour debate or subvert democracy. To do that, they must understand, a priori, what information to accept and what to reject about their democracy. They must grasp its broad information goalposts: the need for sensible, rational debate that doesn’t descend into conspiracy thinking or fact-free venting at rivals.

Cyber researcher Bruce Schneier and political scientist Henry Farrell have produced a research paper that divides information into two camps: common political knowledge and contested political knowledge. Propaganda endangers a democracy when commonly held political knowledge (the rules of the system) is miscast—particularly online—as contested knowledge, as when a legal investigation is described as a ‘witch hunt’ and the media is labelled the ‘enemy’.

A broad sifting of political knowledge in this way—even if imperfect—can help the human mind navigate our changed information environment.

This has relevance in Australia, where, for example, the local variant of the anti-establishment gilets jaunes movement raises concerns about genuine political issues but proposes remedies that would arguably chip away at the overall stability of the system.

Another way to aid the democratic mind would be for the government, politicians and institutions to raise questions in voters’ minds about the veracity of politics as experienced on social media.

Ironically, building trust in politics today may require increasing scepticism about what passes for normal on social media platforms. Many-to-many communication platforms like Facebook and Twitter encourage emotion over reason, to the detriment of the level-headed debate a democracy needs.

Communication over social media networks brings a special risk for the personality-driven politics of Australia. Familiar figures can be corrupted by moneyed influence campaigns, or seduced by ideas that prove to be hostile to the state.

To better organise the mind for our information environment, civil society, government and engaged citizens should recognise that not all technologies are suitable for all tasks. Automobiles were in wide use before society fully accepted that they should never be operated by intoxicated drivers. Likewise, democracies may eventually have their own aha moment about the suitability of social media for stable political outcomes.

Democratic societies may ultimately learn that social media, open by necessity to foreign subversion, create unnecessary vulnerabilities. The availability of high-speed mass communication doesn’t necessarily justify its use for the thoughtful endeavour of understanding issues, weighing them and voting on them.

Helping the public fix its gaze not on minute-by-minute news flows, or on personalities who can be suborned by foreign powers, but on the lasting principles at the core of Australia’s national interests is key to navigating the information chaos of the 21st century.

To do so, the first line of defence should be in the minds of citizens. Reminding the public of this will pay dividends into the future.

Is fake news here to stay?

The term ‘fake news’ has become an epithet that US President Donald Trump attaches to any unfavourable story. But it is also an analytical term that describes deliberate disinformation presented in the form of a conventional news report.

The problem is not completely novel. In 1925, Harper’s Magazine published an article about the dangers of ‘fake news’. But today two-thirds of American adults get some of their news from social media, which rest on a business model that lends itself to outside manipulation and whose algorithms can easily be gamed for profit or malign purposes.

Whether amateur, criminal or governmental, many organisations—both domestic and foreign—are skilled at reverse engineering how tech platforms parse information. To give Russia credit, it was one of the first governments to understand how to weaponise social media and to use America’s own companies against it.

Overwhelmed by the sheer volume of information available online, people find it difficult to know what to focus on. Attention, rather than information, becomes the scarce resource to capture. Big data and artificial intelligence allow micro-targeting of communication, so that the information people receive is limited to a ‘filter bubble’ of the like-minded.

The ‘free’ services offered by social media are based on a profit model in which users’ information and attention are actually the products, which are sold to advertisers. Algorithms are designed to learn what keeps users engaged so that they can be served more ads and produce more revenue.

Emotions such as outrage stimulate engagement, and news that is outrageous but false has been shown to engage more viewers than accurate news. One study found that such falsehoods on Twitter were 70% more likely to be retweeted than accurate news. Likewise, a study of demonstrations in Germany earlier this year found that YouTube’s algorithm systematically directed users towards extremist content because that was where the ‘clicks’ and revenue were greatest. Fact-checking by conventional news media is often unable to keep up, and sometimes can even be counterproductive by drawing more attention to the falsehood.

By its nature, the social media profit model can be weaponised by state and non-state actors alike. Recently, Facebook has been under heavy criticism for its cavalier record on protecting users’ privacy. CEO Mark Zuckerberg admitted that in 2016, Facebook was ‘not prepared for the coordinated information operations we regularly face’. The company had, however, ‘learned a lot since then’ and has ‘developed sophisticated systems that combine technology and people to prevent election interference on our services’.

Such efforts include using automated programs to find and remove fake accounts; giving Facebook pages that spread disinformation less prominence than in the past; issuing a transparency report on the number of false accounts removed; verifying the nationality of those who place political advertisements; hiring 10,000 additional people to work on security; and improving coordination with law enforcement and other companies to address suspicious activity. But the problem has not been solved.

An arms race will continue between the social media companies and the state and non-state actors who invest in ways to exploit their systems. Technological solutions like artificial intelligence are not a silver bullet. Because it is often more sensational and outrageous, fake news travels farther and faster than real news.

In the run-up to the 2016 US presidential election, the Internet Research Agency in St Petersburg, Russia, spent more than a year creating dozens of social media accounts masquerading as local American news outlets. Sometimes the reports favoured a candidate, but often they were designed simply to give an impression of chaos and disgust with democracy, and to suppress voter turnout.

When Congress passed the Communications Decency Act in 1996, then-infant social media companies were treated as neutral telecoms providers that enabled customers to interact with one another. But this model is clearly outdated. Under political pressure, the major companies have begun to police their networks more carefully and take down obvious fakes, including those propagated by botnets.

But imposing limits on free speech, protected by the First Amendment of the US Constitution, raises difficult practical problems. While machines and non-US actors have no First Amendment rights (and private companies are not bound by the First Amendment in any case), abhorrent domestic groups and individuals do, and they can serve as intermediaries for foreign influencers.

In any case, the damage done by foreign actors may be less than the damage we do to ourselves. The problem of fake news and foreign impersonation of real news sources is difficult to resolve because it involves trade-offs among our important values. The social media companies, wary of coming under attack for censorship, want to avoid regulation by legislators who criticise them for sins of both omission and commission.

Experience from European elections suggests that investigative journalism and alerting the public in advance can help inoculate voters against disinformation campaigns. But the battle with fake news is likely to remain a cat-and-mouse game between its purveyors and the companies whose platforms they exploit. It will become part of the background noise of elections everywhere. Constant vigilance will be the price of protecting our democracies.