Bigotry and Islamophobia aren’t products of the internet

Unsurprisingly, the Christchurch terror attack has brought calls for social media companies to do more to control what appears on their platforms. The perception that it’s something they can easily fix is reflected in Prime Minister Scott Morrison’s comment: ‘If they can write an algorithm to make sure that the ads they want you to see can appear on your mobile phone, then I’m quite confident they can write an algorithm to screen out hate content on social media platforms.’

That view reflects a deep misconception about what tech companies can do to regulate violent extremist content and overlooks the role of a major player in facilitating the rise of intolerance—the public. It also ignores the sustained campaign of white supremacists to get the mainstream to promulgate some of their messaging—as seen, for example, when US President Donald Trump used white-nationalist talking points to argue against removing statues of slaveholders.

Anyone who believes there’s a simple answer should read the excellent article by my ASPI colleague, Elise Thomas, who argued that ‘algorithms won’t stop the spread of live-streamed terror’ because it is ‘human users who are actively uploading, promoting and sharing this content’.

The algorithms that power Google searches and Facebook are meant to deliver information to the user, but they can be put to two main uses, one benign and the other nefarious. With the former, the algorithm helps a diner find, for example, restaurants that Google knows will suit their tastes, because the search engine records the diner’s preferences by keeping a ‘history’ on the user. Simply, the algorithm ensures that the user doesn’t have to trawl through endless pages to find what they want. Yet the same algorithm can magnify an extremist’s hatred by making it easy for them to connect with like-minded individuals or to find information that supports their prejudice.
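To make the mechanism concrete, here’s a minimal sketch of history-based ranking in Python. The function, the tag-based scoring and the restaurant data are all hypothetical illustrations, not how Google’s systems actually work; the point is only that the same personalisation loop serves the diner and the extremist alike.

```python
from collections import Counter

def rank_results(results, user_history):
    """Order candidate results by overlap with the user's recorded interests.

    results: list of (title, tags) pairs; user_history: tags from items the
    user previously engaged with. Both are hypothetical stand-ins for the
    far richer signals a real search engine records.
    """
    interest = Counter(user_history)  # how often each tag appears in the history
    return sorted(results,
                  key=lambda item: sum(interest[tag] for tag in item[1]),
                  reverse=True)

# A diner whose history leans towards ramen sees the ramen bar first;
# feed the same loop a history of extremist tags and it surfaces more of them.
history = ['ramen', 'ramen', 'sushi']
results = [('Pizza place', ['pizza']),
           ('Ramen bar', ['ramen', 'noodles']),
           ('Sushi house', ['sushi'])]
print(rank_results(results, history))
```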

Importantly, because of the vastness of the internet and the number of platforms on it, it has also become impossible to truly remove content. In the 2014 Google Spain v Mario Costeja González case, the Court of Justice of the European Union recognised that Google couldn’t compel a website to remove offending information. It could, however, delist the information so that it didn’t appear in search results.

Governments and policymakers should also consider the call by Jacinta Carroll, the director of national security policy at the National Security College at the Australian National University, for a shift in public discourse to highlight compassion, civility and tolerance.

Tech and social media companies have sought to devise ways to stop violent extremists from using their platforms to promote intolerance. This has led to the adoption of community guidelines aimed at regulating content. A second initiative was the establishment of the Global Internet Forum to Counter Terrorism, through which Facebook, Microsoft, Twitter and YouTube created a shared database of ‘hashes’ for violent terrorist imagery or terrorist recruitment videos. The purpose of the database is to ensure that violent content can’t be reposted, although there are limitations to this technological solution.
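A minimal sketch of how such a shared hash database works is below, assuming plain SHA-256 fingerprints and an in-memory set for brevity. In practice the consortium relies on perceptual hashes that tolerate re-encoding and cropping; an exact cryptographic hash, as used here, fails on even a one-byte change, which illustrates one of the limitations just mentioned.

```python
import hashlib

# Hypothetical in-memory stand-in for the shared industry database of
# fingerprints of known violent content.
shared_hash_db = set()

def fingerprint(data: bytes) -> str:
    """Exact cryptographic fingerprint; real systems use perceptual hashes."""
    return hashlib.sha256(data).hexdigest()

def is_known_violent_content(upload: bytes) -> bool:
    """True if the uploaded bytes match a previously flagged item."""
    return fingerprint(upload) in shared_hash_db

# One platform flags a video; all participants can now block byte-identical re-uploads.
shared_hash_db.add(fingerprint(b'<bytes of flagged video>'))
print(is_known_violent_content(b'<bytes of flagged video>'))  # True: blocked
print(is_known_violent_content(b'<re-encoded variant>'))      # False: slips through
```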

If we are to address the dissemination of violent extremist material online, we must recognise that defining content as offensive is extremely hard, especially when it doesn’t involve a clear incitement to commit violence. Tech and social media companies are justified in asserting that they are a tool—or a ‘neutral platform’—through which people exercise their right to free speech.

Attempts to regulate mainstream platforms have created a space for the emergence of new sites such as Gab, which has become the preferred social media network for many on the far right. The same is seen with messaging applications: concerns over the introduction of a regulatory regime have encouraged a shift towards applications such as Telegram.

An additional challenge is that regulating social media might undermine the business model that has made the internet vital to everyday life and commerce. Modern marketing campaigns use social networking sites to reach millions of potential consumers.

Clearly, tech and social media companies can do more to police their platforms, but the challenge of regulating content ultimately lies with the public, which creates, feeds and consumes the information disorder. Platform users are also the ones responsible for reporting abuse. Some of those searching for a solution fixate on the apex predator (the extremist) and not on the bottom stratum of the biomass pyramid, the general public, which facilitates intolerance.

Apex predators from across the ideological spectrum gain traction with their messaging because it resonates with a public that refuses to acknowledge its own role in narrowing the marketplace of ideas. That underlines the need to return to a fact-based public discourse.

Media outlets are influenced by a political pendulum, and the goal of some is to feed their audience rather than inform the public at large. They know that if they don’t give their audience what it wants, it can find another provider that will.

If we’re serious about addressing the rising intolerance in our society, we must first look to ourselves and the role that we play in disseminating prejudice. Online bigotry and Islamophobia are not creations of the internet. They are reflections of attitudes in society.

Australia’s National Security Strategy: it’s more than Asia

The release of Australia’s new National Security Strategy raises a number of issues, of which two seem preeminent. One is a growing gap between the opinions of the media and the public and those of the small groups concerned with policy advice and policymaking. The other is an equally significant gap between policy guidance intended to seem stable and predictable over a few years and the realities of an increasingly unpredictable and volatile international environment.

At first sight the paper seems carefully phrased, largely appropriate and, as Michael L’Estrange has pointed out, even subtle. Its statement of principles seems obvious. Many of its views go, helpfully, far beyond the generalities of previous white papers. One example is the priority given to cyber security. That inevitably involves attention not just to countries in other quarters of the world but to a variety of non-state, criminal and mobile individuals and groups who might be located anywhere. They pose multiple threats: industrial espionage and the theft of patents, hacking into private email and other communications, and penetration of government and intelligence agency computers. Some may even have no interest in Australian secrets per se but seek access to US or British intelligence or defence networks via Australian systems. The origins of such threats can’t be geographically defined.

Nevertheless, there’s much in this policy statement, especially in its generalisations, that is debatable. ‘Asia’, ‘the Asia-Pacific’ and especially the ‘Asian Century’ are abstractions, not ways of describing reality. ‘Asia’ is not a unit economically, politically, demographically or in any other way. China and Japan are almost at daggers drawn, as are India and Pakistan. China is not within sight of matching the United States militarily, economically or in innovation. Even if China’s GDP overtakes that of the US, that will be a mere statistical aggregate, bearing no necessary relation to global financial or military power, technological leadership or innovative capacity. Not for nothing has China in the last 30 years sent 2.5 million students abroad to developed countries.