To combat disinformation, Japan could draw lessons from Australia and Europe

Japan is moving to strengthen its resilience to disinformation, though so far it’s only in the preparatory stage.

The EU and some countries have introduced content moderation requirements for digital platforms. By contrast, Japan has proceeded with only expert discussion on countermeasures against disinformation. While that progress is welcome, Tokyo needs to consider establishing its own standards and joining a growing global consensus on countering disinformation, including foreign information manipulation linked to malign state actors.

2024 was a tough year for Japan in countering disinformation campaigns. Immediately after the Noto earthquake in January, false rescue requests spread widely on social media, diverting scarce emergency-service resources away from people who genuinely needed help. After record-breaking rainfall hit the Tohoku region in July, more than 100,000 spam posts disguised as disaster information appeared on social media. And ahead of the September election for the Liberal Democratic Party’s president and Japan’s October general election, the Japan Fact-check Center identified the spread of false and misleading information about political parties and candidates.

Japan is in a delicate situation. It’s one of the countries at the forefront of Chinese hybrid threats due to its proximity to Taiwan and its principled stance upholding the rules-based order. But Japanese society, accustomed to little political division and to passively receiving information, may lack the resilience to disinformation seen in countries such as the United States or South Korea.

Now, about 67 million Japanese are active users of X, more than half the population. X has become an important news and information source for a segment of Japanese society that is less inclined to confirm the accuracy of news items via more mainstream sources.

In response, the government has taken steps to combat disinformation and misinformation. In April 2023, a specialised unit was established within the Cabinet Secretariat to collect and analyse disinformation spread by foreign actors. As president of the G7, Japan introduced the Hiroshima AI Process in 2023 to address AI-enabled disinformation. Furthermore, the Ministry of Foreign Affairs produced solid evidence to effectively counter disinformation campaigns relating to the release of treated wastewater from the Fukushima Daiichi nuclear power plant. This disinformation may have come from China. The ministry’s effort should be applauded and serve as a model for future responses.

But simply responding to every incident may not be sustainable. Countering the proliferation of disinformation also requires content moderation, which must be balanced to protect freedom of expression and avoid placing an undue burden on digital platforms. Thankfully, international partners provide some good examples for reference.

The EU’s Digital Services Act (in full effect since 2024) forces digital platforms to disclose the reasoning behind content moderation decisions and introduces mechanisms for reporting illicit content. In Australia, the Combatting Misinformation and Disinformation Bill (2024) was intended to give the Australian Communications and Media Authority powers to force digital platforms to take proactive steps to manage the risk of disinformation. The bill was abandoned in late November, but Japan can draw lessons from its failure in designing standards that avoid a similar fate.

Japan’s government has commissioned various study groups but so far has taken no legislative action to combat misinformation and disinformation. The present reliance on voluntary efforts by digital platforms is insufficient, especially given the growing likelihood and sophistication of disinformation threats. Concrete measures are needed.

The Japanese government should engage multiple stakeholder communities, including digital platforms such as X and fact-checking organisations, to collectively set minimum standards for content moderation by digital platforms. While the specifics of moderation can be left to the discretion of each platform, minimum standards could include, for example, labelling trusted media and government agencies and assigning them a higher algorithmic priority for display, as sketched below. If minimum standards are not met, the digital platform would be subject to guidance or advice from a government authority. But the authority would not have the power to remove or reorder individual content.
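
To make the proposal concrete, here is a minimal, hypothetical sketch of how a platform could implement such a minimum standard: posts from accounts on a verified trusted-source register are labelled and given a modest boost in display priority, while everything else is left to the platform’s existing relevance signals. The handles, weights and function names are illustrative assumptions, not any platform’s actual algorithm.

```python
# Hypothetical sketch only: a trusted-source register, a visible label and a
# ranking boost. The accounts, weights and scoring are illustrative assumptions.
from dataclasses import dataclass

# Register of verified accounts (e.g. public broadcasters, government agencies),
# maintained against verifiable criteria.
TRUSTED_ACCOUNTS = {"@JMA_kishou", "@nhk_news"}

@dataclass
class Post:
    author: str
    text: str
    relevance: float  # the platform's existing engagement/relevance signal

def display_priority(post: Post, trust_boost: float = 1.5) -> float:
    """Give posts from registered trusted sources a higher algorithmic priority."""
    return post.relevance * (trust_boost if post.author in TRUSTED_ACCOUNTS else 1.0)

def label(post: Post) -> str:
    """Prefix posts from trusted sources with a visible label."""
    prefix = "[trusted source] " if post.author in TRUSTED_ACCOUNTS else ""
    return prefix + post.text

feed = [
    Post("@random_user", "Unverified rescue request...", relevance=0.9),
    Post("@JMA_kishou", "Official earthquake advisory...", relevance=0.7),
]

for post in sorted(feed, key=display_priority, reverse=True):
    print(label(post))
```

The design point is that a government authority would set only the floor, a register and a label; how the boost interacts with the rest of the ranking, and everything above that floor, stays at the platform’s discretion.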

Setting standards in this way would respect existing limits on freedom of expression while reducing users’ exposure to disinformation that could cause serious harm. It would, however, require verifiable criteria for determining trusted accounts and a contact point for complaints within digital platforms or trusted private fact-checkers.

Regulating digital platforms will not be enough. It’s also important to call out malicious actors and to strengthen public awareness and media literacy. The proliferation of disinformation with political intent by foreign actors is a global problem, so Japan should cooperate with partners that share its democratic values, such as Australia. Tokyo should be prepared to be more proactive in joining public attributions of malicious state-sponsored campaigns, as it did, for example, with the advisory on the cyber threat actor APT40 that was initially prepared by Australia.

Japan’s resilience to disinformation is still developing. Given its prominent role in the regional and global order and its proven commitment to a rules-based international order, a higher level of urgency is required.

The new struggle for truth in the era of deepfakes

A billboard firm cleverly lures clients with a simple slogan on an otherwise blank canvas—unsee this.

Its genius rests on a human trait: we can’t unsee, unhear or unsmell anything. Our senses are primordial devices programmed to extract millions of data points every second, most of them, at some level, novel. Yet the brain can sort and analyse only around 50 points per second in order to assess a possible response.

Research suggests we make more than 200 decisions just about food every day. Because the brain also chews through calories, as animals we favour simple responses that maximise our energy reserves. At some level, we hunger to be told what to do, and believe, because it’s much less tiring than overthinking.

Propaganda, whether purveyed by Joseph Goebbels, Procter & Gamble, or Mills & Boon, is calibrated to overcome complex decision-making. Seeing is believing, a picture is worth a thousand words, and repetition and sloganeering make it stick.

Similarly, conspiracy theories satisfy our human biological platform, feeding our desire for simplicity rather than complexity. They are sticky, too. One of the stickiest is the anti-Semitic screed The protocols of the elders of Zion, written in Czarist Russia in around 1903, and still a hot favourite.

In the recent confessional Netflix documentary The Social Dilemma, some progenitors of the social media revolution explain that it was calculated to tease and slake our thirst for the novel, addictive and gossipy. Clickbait is exactly as it describes: small pieces of tempting information that attract our clicks, hooking us and generating advertising revenue for Google and Facebook.

Moreover, research from the Massachusetts Institute of Technology suggests that fake news spreads six times faster than the truth. A drunk Nancy Pelosi makes for compelling ‘news’ that returns greater advertising revenue than a sober House speaker doing the same dull routine of law-making and politicking. One wins plenty of clicks; the other doesn’t.

Pelosi was not drunk in that viral video of May 2019. The quick discovery that a malicious garden-shed-conservative propagandist had slowed the video to slur her speech made no difference. Slander sticks, gossip is viral, and repetition of another ‘drunk’ Pelosi video a year later reinforced the original for those who believed that she was, regardless of what was likely, let alone true.

As an instinctive huckster, President Donald Trump was well suited to the era of fake news, with his mercurial temperament, lurid sense of proportion, and sledgehammer approach to bedrock political traditions that once seemed perpetual. As his grip on the political bullhorn dwindles, the world is left assessing the damage of four years of distorted reality that have shaken the foundations of convention.

But before liberal states have fully grasped the corrosive effects of fake news—or legislated to rein in the social media behemoths that trade in it—we are on the cusp of the ‘deepfake’ era that will make the past four years seem as quaint as the cinematic effects of Woody Allen’s 1983 mockumentary Zelig, in which the chameleon-like Jewish imposter Zelig supports Adolf Hitler at a Nazi rally and peers from a Vatican balcony behind Pope Pius XI.

Artificial intelligence has supercharged the ability of amateurs to acquire the voice and image of anybody who has been recorded, and to recompose voice and image into entirely fake video sequences. It’s an emanation of the so-called fourth industrial revolution that is embedding hyper-technology into every particle of our lives. It promises a gamed-up world, in which the boundaries of reality are befuddled by a propaganda Pandora’s box in the palm of every hand. Welcome to the ‘infocalypse’.

At this point, the efforts remain reasonably juvenile, and detectable with complex software. But the author of a recent book on deepfakes reckons that within a year, anybody with a mobile phone will be able to recreate and improve on the de-ageing effects applied to Robert De Niro and Al Pacino in Martin Scorsese’s 2019 movie The Irishman. When filmed, those effects were the result of hundreds of technicians, millions of dollars and a year of work.

For an introduction to deepfakes, or ‘synthetic media’, the makers of the satirical cartoon South Park have conjured Sassy Justice, a new series that premiered just last month. It’s a radically technologically updated version of Zelig. Deeply funny, this deepfake show is also a portent of disaster: a time when we can no longer believe our eyes as willingly as we do today.

In the video, Donald Trump has been transformed into a satin-bewigged effeminate reporter from a news station in Cheyenne, Wyoming. Mark Zuckerberg, Julie Andrews, Jared Kushner, Ivanka Trump and Michael Caine have similarly been AI-shanghaied (presumably without their permission). What the show illustrates is the potential of this seminal technological revolution.

The entertainment possibilities are boundless. Paul Robeson can be brought to life as the new James Bond. Jim Morrison will join a mashed-up K-Pop tour. There’ll be remixes of Charlie Chaplin supporting #MeToo and #BLM, Amelia Earhart spruiking space flights for Elon Musk, and Maria Callas singing duets with Taylor Swift. In fact, there’ll be no need for actors, and with a few swipes on our mobile devices each of us will be able to star in Titanic—or Gaslight.

Then there’s the bad stuff. If you think that social media stokes teenage anxiety, there is worse to come. AI has already been deployed to strip the clothing from any photographic image of a female figure, in what the creators of one app claimed was harmless fun. As if that’s not bad enough, in this new dystopia a photo can feasibly be lifted from your child’s Instagram and transformed into a fully fledged pornographic video. Try unseeing that.

When writing my own book about the war in Sri Lanka, I relied on forensic reconstruction of open source audio-visual evidence with electronic fingerprints that left no doubt as to the provenance of evidence. Yet the weak link in any investigation is always doubt. In theory, AI can simply retool itself to avoid forensic detection. The implications for judicial and investigative processes are convulsive.

Identity theft is already a multibillion-dollar industry that financial institutions spend billions fighting. An American widow was recently duped out of almost $300,000 in the course of a romance conducted entirely over Skype by an imposter posing as an American admiral. Your Nigerian scammer need no longer use their own voice, but can instead target the elderly with reconfigured voice recordings of their children lifted from Instagram grabs and reworked according to script: ‘Mother, can you wire me $10,000?’

Now contemplate the videos, photos and voice recordings that constitute the evening news, and the extent to which they drive political and social discourse and spark royal commissions, resignations of ministers and revolutions.

To conjure just one random unsettling example, an academic recently floated the disbanding of Australia’s special forces due to allegations of criminal misconduct in Afghanistan, sourced from video evidence. The rules-based order has been a central tenet of Australia’s foreign policy, and we like to think that we take our obligations under international law seriously.

Imagine, momentarily, a new deepfake body-cam video sequence showing Australian troops beheading Afghan civilians and desecrating the Koran. Grainy footage would do. Its release might ruin trade with the Middle East, lead to the killing of Australians and shatter agreements with nations such as Indonesia, as well as swing public pressure to disband the special forces.

The Russians are still best at using information wedges. After a brief pause in the 1990s, Russian disinformation changed tack. Coupled with the internet, captive social media audiences and the smartphone, Russia exchanged parsimonious Cold War ops for ‘flooding the room’ (a favourite play of Trump’s, deployed in his first debate against Joe Biden). The value proposition was proven in the spoliation of the 2016 presidential election result. Simmering US culture wars did the rest.

Pluralistic societies are being shaken by information-revolution developments eroding our resilience. There is nothing new about political gossip, or the use of new technology for pornography or fraud, or companies making money, or adversary countries seeking an edge. What is new is the speed, scale, force multiplication and challenge to our singular human psychology. To quote Trump, a.k.a. Fred Sassy, ‘As human beings, we all rely on our eyes to determine reality.’

So what can we do when our eyes are no longer a measure of perception? When our senses are drowned in a flood of dubious images? How will we make truth more resilient in order to maintain stable governance, trust in institutions and faith in the evening news? Here are four suggested solutions.

1. Strengthen the gatekeepers. When the internet arrived, it seemed that everybody could be a journalist. But like it or not, there are hierarchies of competence. Iconic media organisations governed by public values are a vital element of liberal democracy. Public broadcasters should be boosted, and their reach expanded, not defunded at a time when competitor nations like Russia and China are expanding their media reach.

2. Legislatively decouple Facebook and Google from their clickbait-driven profit bases, because whatever the cost to shareholders, it cannot compare with the social, political and economic losses to society at large. Clickbait algorithms destroy the advertising revenue streams that fertilise the small-town journalism that is the bedrock of the media’s oversight and investigative role.

3. Legislate to protect the role of the media. According to the Alliance for Journalists’ Freedom, Australia is the weakest of the Five Eyes alliance countries when it comes to protecting the media (a weakness that accounts for the raids on the ABC by the Australian Federal Police that made headlines around the world). These protections ought to include media freedom laws, a public interest defence in defamation, the protection of journalists’ data, protection for whistleblowers, and a public-interest test in matters of national security.

4. Build a farm-to-table system for news. All news and information must be traceable to sources so that hierarchies of competence can be established and rated on a scale of indicators. Information needs an evidentiary chain, or a genealogy, for consumers to establish ‘truth’ to their satisfaction. The security of such a system is complex and expensive, but possible with blockchain technology, multilateral R&D and shared purpose between like-minded nations. A minimal sketch of such a chain follows.
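
One way to picture that evidentiary chain is a simple hash chain: each actor who handles a story appends a record whose hash covers both the content and the previous record, so any later alteration invalidates everything downstream. The sketch below is a toy illustration under my own assumptions, not a description of any existing provenance system; a production version would still need the blockchain-based distribution, shared governance and security mentioned above.

```python
# Toy illustration of a news provenance chain; names and structure are assumptions.
import hashlib
import json

def add_record(chain: list, actor: str, action: str, content_hash: str) -> None:
    """Append a record whose hash covers the content and the previous record."""
    prev_hash = chain[-1]["record_hash"] if chain else "genesis"
    record = {"actor": actor, "action": action,
              "content_hash": content_hash, "prev_hash": prev_hash}
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain from that point on."""
    prev = "genesis"
    for record in chain:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev_hash"] != prev or expected != record["record_hash"]:
            return False
        prev = record["record_hash"]
    return True

chain = []
story = hashlib.sha256(b"original footage and report").hexdigest()
add_record(chain, "field reporter", "captured", story)
add_record(chain, "news desk", "edited and published", story)
add_record(chain, "syndicator", "redistributed", story)

print(verify(chain))                      # True: the genealogy is intact
chain[1]["actor"] = "unknown middleman"   # tamper with the history...
print(verify(chain))                      # ...and verification now fails
```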

Killer robo-bees show how the sting of disinformation can spread

Investigations of influence operations and information warfare methodologies tend to focus heavily on the use of inauthentic social media activity and websites purpose-built to propagate misinformation, disinformation and misleading narratives.

There are, however, a range of other methodologies that bad actors can exploit. One way in which obviously false content can be spread quickly, widely and easily across ‘junk news’ sites is through a syndicated press release.

Here’s a recent example.

Screenshot of press release on AI Organization site, 25 November 2019.

On 29 October, a press release was published on a site belonging to ‘The AI Organization’. The first paragraph says it all [sic]:

This Press Release makes public our Report to the White House and Secret Service in the Spring of 2019, entailing China’s plan to use Micro-Bot’s, cybernetic enhanced dragon flies, and Robo insect drones infused with poison, guided by AI automated drone systems, to kill certain members of congress, world leaders, President Trump and his family. These technologies were extracted from, DARPA, Draper in Boston and Wyss institute at Harvard, who build flying robo-bees for pollination and mosquito drones. It also includes our discoveries of China’s use of AI and Tech companies to build drones, robotics and machines to rule BRI (One Belt, One Road) on the 5G Network.

It should be immediately clear to any human news editor that this press release’s claims—for example, that China is planning to use poison-filled dragonfly drones to assassinate US President Donald Trump and his family—aren’t credible.

This conspiracy theory doesn’t appear to be a case of intentional disinformation. Misinformation and disinformation often make use of the same channels, however, and this example is relevant for both.

On 1 November, the press release was published on a distribution service. It was still available on the distributor’s site as of 3 December.

The distributor describes itself as a company that ‘assists client’s [sic] by disseminating their press release news to online media, print media, journalists and bloggers while also making their press release available for pickup by search engines and of course our very own Media Desk for journalists’.

It also claims that ‘editorial staff … review all press releases before they are distributed to ensure that content is newsworthy, accurate and in an acceptable press release format’.

Evidently, either the editorial staff judged that the AI assassin robo-bees story was newsworthy and accurate, or no review was actually conducted.

The distributor offers different pricing levels for its service. How much you pay determines how many ‘online media partners’ your press release is sent to. For example, the US$49 ‘Visibility Boost’ package includes sending the press release to ‘50+ premium news sites’ as well as syndicating it through RSS and news widget feeds, which the company encourages website owners to embed in their own sites.

Shortly after it was published to the distributor’s site, the Chinese killer bee press release was running on dozens of ‘junk news’ sites and some legitimate local news sites, including on their homepages, in a format that made it look like political news.

Screenshots of Denver News Journal, ABC8 and NBC2 websites, 25 November 2019.

The main purpose of these sites is likely to be monetisation—they syndicate, buy or plagiarise content to attract readers, and make money by running programmatic advertising.

Screenshot of Google search results, 25 November 2019.

As can be seen from the URLs, many of the sites try to position themselves as local news providers—for example, ‘The Times of Texas’, ‘The London News Journal’ and ‘All India Bulletin’. This is at best misleading (the Times of Texas, for example, has an ad encouraging readers to catch a bus from Mumbai for an adventurous family weekend).

In addition to automated news widgets, the article seems to have been published on some sites through a digital advertising company; the text included below the articles disclaims responsibility for the content even as it’s being distributed.

Screenshot of RFD-TV article including Frankly Media disclaimer, 25 November 2019.

As can be seen in the RFD-TV screenshot, the sites acknowledge in the last line that the article is a press release and provide a link back to the distributor’s post. Many studies have found, however, that the majority of online news readers read only the headline and first few paragraphs of articles. In other words, realistically most readers won’t see the admission that these are not news stories.

Conspiracy theories about weaponised Chinese robo-bees may seem like a bit of a joke, but the underlying system this example highlights is no laughing matter.

Despite being obviously untrue, this piece of content passed through at least three companies and was published across dozens of sites which styled and positioned it in a way that made it appear like legitimate news. This suggests one of two scenarios.

One possibility is that the entire process has been automated, so that no human being—whether from the distributor, the digital advertiser or the operator of the ‘junk news’ sites—is actually aware of the content they’re spreading across the internet. The alternative is that there were people at these companies who saw, or were supposed to have seen, the content, but none of them stepped in to prevent this patently false story from spreading any further. Either scenario is worrying.

The point is not that we should be afraid of the rise of a cult of believers in robo-bee assassins.

It’s that, if content as bizarre and extreme as this can be published and stay published for a month or more, equally untrue but more plausible disinformation could easily make use of the same channels.

In the context of an organised campaign, sowing disinformation across junk news and second-tier news sites would be an effective first step for laundering false facts and narratives into social media and then mainstream media, without the investment or hassle of setting up a new fake news website.

It also has implications for readers attempting to fact-check information online. Imagine, for example, a reader who finds an article titled something like ‘Trump owes millions to Russian oligarch, new evidence shows’. The reader is suspicious and Googles the headline to see whether other media are reporting the same thing. When they see that the same piece appears to have been picked up by dozens of ‘news’ sites, the reader might well think the story is legitimate.

This problem exists for the same fundamental reason as many of the other issues around online disinformation: the business models of these companies depend on publishing content as quickly, widely and cheaply as possible. There’s no significant incentive for them to invest in potentially slow and expensive fact-checking. The only real risk they face for publishing nonsense is reputational—and as this example clearly shows, that’s not enough to stop them.

Fortunately, there is a clear path to resolving, or at least mitigating, this avenue for the spread of disinformation (and potentially helping with a few other problems at the same time). Regulatory changes to require companies like digital advertisers and distribution services to take greater responsibility for the content they promote would go a long way towards preventing the spread of untrue information, whether it be misguided fears of AI killer bees from China or something truly malicious.

Misinformation for profit

Fake news has been at the forefront of public debate since November 2016, when it was discovered that thousands of fake news articles may have affected the outcome of the US presidential election. Journalists discovered that many of the articles, and the ‘American-sounding’ websites that hosted them, had been created by teenagers from the small Macedonian town of Veles. Those teenagers, in typical fashion, didn’t care about politics; they created misinformation for profit. Fake news earned them up to US$5,000 a month from Google AdSense advertising.

The rise of the misinformation-for-profit industry has international implications.

In July 2016, hundreds of people converged on a National Housing Authority office in the Philippines after a fake news article claimed that the government was offering free housing. Such events are commonplace in the Philippines—the country has one of the worst fake news problems in the world. Filipinos spend more time on the internet and social media than people in any other nation, thanks in part to receiving free limited internet access courtesy of Facebook.

‘Onlining’ (the practice of using the internet to earn income) has been a common job in the Philippines for close to a decade. Entrepreneurial Filipinos run the businesses, sending out fake friend requests on Facebook and filling our email inboxes with spam. Now they also create fake news.

Successful fake news businesses in the Philippines often receive between 100,000 and 500,000 site visits a month. That translates into a significant amount of money. I’ve found hiring adverts on the Facebook pages of fake news creators, suggesting there’s growth in the misinformation-for-profit industry.

The profitability of fake news is entirely linked to social media. Almost every fake news website has an associated Facebook page feeding it visitors, and ‘likes’ are commonly in the range of 100,000 to 1 million. Around 90% of traffic to fake news websites in the Philippines originates from Facebook.

As Facebook noted in its recent submission to the Australian Senate’s inquiry into the future of public interest journalism, most fake news is financially motivated. Websites earn more money from advertisements when they’re clicked on by people in the United States or Australia than by people in Eastern Europe or Asia.

Creating fake news targeted specifically at Australia would be commercially viable for Filipino fake news businesses. English is an official language of the Philippines, labour costs are low and our advertising market pays well. If Filipino fake news creators care about profit, and they do, they’ll eventually turn their focus in our direction.

That could inflict significant harm on our institutions. Fake news often breaks several civil and criminal laws—such as defamation, intentional infliction of emotional distress, fraud, deceptive trade practices, cyberbullying and criminal libel—causing damage to private citizens, businesses and governments. Misinformation for profit also undermines democratic decisions and processes because it affects people’s beliefs about the state of the world.

Australians are getting their news from social media more than ever before. A recent survey found that social media is only marginally less popular than television as a news source. Our social media usage is growing year on year, and that means we’re becoming increasingly vulnerable to misinformation for profit.

Most young Australians can’t identify fake news online, and those who can may not be as critical of it as we’d wish. People have their own world views and a tendency to demand information that fits neatly within those bounds. Fake news is often highly partisan and can fulfil an inherent longing for ontological security—a coherent self-identity.

Unfortunately, the need for information that reinforces ontological security can sometimes trump the need for information to be legitimate. To think critically, people have to be motivated. If they aren’t, they may simply accept what is false as true.

Fake news creators also employ tactics to manipulate emotions to generate attention, and therefore revenue. The ‘economy of emotions’ partially explains why fake news is so profitable during elections, as we saw in 2016 in the US.

Australia hasn’t yet been a major target of fake news creators. But we shouldn’t mistake the absence of attack for the absence of threat. We have good reason to be concerned. Australia has featured in many fake news stories targeted at audiences in the Philippines. Such articles can damage or undermine our international image and threaten the democracies of our Asia–Pacific neighbours.

Australia is beginning to address the issue. The Senate inquiry on public interest journalism and the Australian Competition and Consumer Commission’s inquiry into digital platforms are a good start, but frank and fearless advice is worthless if it’s not followed by bold action.