
Amusing ourselves to death

Forty years ago, in his seminal masterpiece Amusing Ourselves to Death, US author Neil Postman warned that we had entered a brave new world in which people were enslaved by television and other technology-driven entertainment. The threat of subjugation, he argued, comes not from the oppressive arm of authoritarian regimes and concentration camps but from our own willing submission and surrender.

‘Big Brother does not watch us, by his choice. We watch him, by ours’, Postman wrote in 1985.

There is no need for wardens or gates or Ministries of Truth. When a population becomes distracted by trivia, when cultural life is redefined as a perpetual round of entertainments, when serious public conversation becomes a form of baby-talk, when, in short, people become an audience and their public business a vaudeville act, then a nation finds itself at risk; culture-death is a clear possibility.

Postman’s insight would be spot-on had he written this today about TikTok. Postman was mostly thinking about mass media with a commercial imperative: people would be enslaved to superficial consumerism. But add a technologically advanced authoritarian power with platforms that—unlike terrestrial TV—are essentially borderless and can reach around the globe, and you have George Orwell’s Big Brother fused with Aldous Huxley’s cultural and spiritual entropy.

Addictive digital entertainment can be corrosive even without a malign puppeteer. But with an entity such as the Chinese Communist Party fiddling the algorithms, it could be catastrophic.

In 2025 alone, we have seen much of the Western world so spellbound by TikTok that the thought of living without it brought on the anguish normally reserved for the impact of conflict. ‘TikTok refugees’ became a description, as though they had been displaced like Jews fleeing Europe or Yazidis escaping Islamic State.

Postman noted that we were innately prepared to ‘resist a prison when the gates begin to close around us … But what if there are no cries of anguish to be heard? Who is prepared to take arms against a sea of amusements?’

The cries of anguish were depressingly muted as TikTok built up a following in Western countries so large that, according to a recent survey by the Pew Research Center, four in 10 Americans aged under 30 now get their news from the platform.

When a ban was flagged, the cries came from those who couldn’t bear to give up the platform and from free speech absolutists who believed any rules amounted to government overreach. If our most popular radio stations had been based in Germany in the late 1930s, the Soviet Union during the Cold War or Syria during the ISIS caliphate, our leaders would have protected the public, regardless of popularity and notwithstanding that it would constitute government intervention in the so-called free market of ideas.

In fact, the market isn’t free because powerful actors can manipulate the information landscape.

Billionaire Elon Musk gives free-speech advocates a bad name by using his platform X not just to post contrarian opinions but to promote false content on issues such as Ukraine. But more sinister is a platform such as TikTok, which is headquartered in authoritarian China and ultimately under the control of the CCP, with algorithms that have been demonstrated to manipulate audiences by privileging posts that serve Beijing’s strategic interests and downgrading content that does not.

Despite such threats, we have no clear framework to protect ourselves from powerful information platforms, including the newest generative artificial intelligence models such as DeepSeek, which will be increasingly available—and, thanks to their affordability, attractive—despite operating under Chinese government control. As a US court declared in upholding the congressional ban on TikTok, giving a foreign power a vector to shape and influence people’s thinking was a constraint on free speech, not an enabler of it.

Freedoms of speech and expression are core democratic principles but they need active protection. This means the involvement of governments.

US Vice-President JD Vance told the Munich Security Conference that Donald Trump represented a ‘new sheriff in town’ who would defend free speech and ‘will fight to defend your right to offer it in the public square, agree or disagree’. It was an admirable derivative of the quote attributed to Evelyn Beatrice Hall, describing Voltaire’s principle: ‘I disapprove of what you say, but I will defend to the death your right to say it.’ But just as we have regulators for financial and other markets, we need regulation of our information markets.

By all means, speech should be as free as possible. Awful mustn’t equal unlawful, to borrow Australian Security Intelligence Organisation boss Mike Burgess’ phrase. Speech that hurts the feelings of others or advocates unpopular views cannot be the threshold for censorship. Such lazy and faint-hearted policymaking creates only a more brittle society. But that doesn’t mean we should make ourselves fish in a barrel for malign foreign powers.

Anarchy is not freedom. Governments need to brave the minefield that is modern information technology. If a platform poses risks that cannot be avoided, as with TikTok, it should be banned.

Other platforms that sit within democratic nations’ jurisdictions should be subjected to risk mitigations such as content moderation to deter and punish criminal activity. X, Facebook, Instagram and YouTube can be used as avenues for information operations, as shown by Russia buying advertisements on Facebook or CCP-backed trolls posting on X and YouTube, or be used as vectors for organised crime. Even the most ardent free-speech advocates would agree that drug trafficking, child abuse or joining a terrorist group are illegal offline and therefore should be illegal online.

No marketplace remains free and fair when governments overregulate or abdicate responsibility.

The once-free markets of trade and investment have been eroded by China to such an extent that just this week Trump issued a foreign investment policy to protect US ‘critical technology, critical infrastructure, personal data, and other sensitive areas’ from ‘foreign adversaries such as the PRC’, including by making ‘foreign investment subject to appropriate security provisions’.

A key principle of the new presidential policy is that ‘investment at all costs is not always in the national interest’.

In other words, security measures and rules keep US critical infrastructure free.

While it has not yet gained much media attention, it is among the most important economic security policies ever taken to counter Beijing’s objective to ‘systematically direct and facilitate investment in United States companies and assets to obtain cutting-edge technologies, intellectual property and leverage in strategic industries’. All of the US’s allies and democratic partners should publicly support it and implement it domestically.

We like to think that technologies are neutral media that are only vehicles for improvement. As Postman wrote, this belief often rises to the status of an ideology or faith.

‘All that is required to make it stick is a population that devoutly believes in the inevitability of progress’, he wrote. ‘And in this sense … history is moving us toward some preordained paradise and that technology is the force behind that movement.’

Science and technology have of course delivered extraordinary improvements to our health, our economic productivity, our access to information and our ability to connect with other people regardless of geography—provided we engage with them wisely. We must not become entirely cynical about technology; rather, we must maintain control over it and ensure it serves our interests.

The ultimate solution is knowledge and participation. As Postman concluded, the answer must be found in ‘how we watch’. With no discussion on how to use technology, there has been no ‘public understanding of what information is and how it gives direction to a culture’.

Postman wrote that ‘no medium is excessively dangerous if its users understand what its dangers are’. For that to happen, education was the ‘solution to all dangerous social problems’.

He insisted we were ‘in a race between education and disaster’.

To give education a fighting chance, especially against the predations of technologically capable authoritarian powers, democratic governments need to exert responsible and judicious regulation of technology to perform their most basic duty of protecting the freedom of their citizens.

Democracies should learn the TikTok lesson and restrict risky apps from day one

With its recent halt on implementing a legally mandated ban on TikTok, the United States is learning the hard way that when it comes to Chinese technology, an ounce of prevention is worth a pound of cure.

The US and like-minded democracies should no longer permit any social media platforms with direct ties to authoritarian governments with political censorship regimes to operate without restriction.

For years, technology and national security analysts have sketched out scenarios of what might happen if a democratic population were to become dependent on a Chinese-owned technology. Once such a technology becomes embedded in people’s daily lives and livelihoods, removing it stirs up a host of domestic political controversies, making it politically untenable to mitigate the national security risks.

That is exactly what has happened with TikTok. Around 170 million Americans—about half the country’s population and an even higher percentage of those using social media—use the short video app, owned by Chinese tech giant ByteDance. Millions of Americans have become dependent on their TikTok followings, built up over years, for their income or to promote their businesses. Tens of millions more use TikTok as a key source of information, community, and entertainment.

In classic American fashion, those users have refused to go gentle into that good night. As a law banning TikTok was set to go into effect on 19 January, many users downloaded the Chinese social media app RedNote, which isn’t just Chinese-owned—it is Chinese itself, based in Shanghai and subject to all Chinese national security and intelligence laws. Self-styled ‘TikTok refugees’ said they moved to RedNote to express their disregard for US government concern about the risks presented by Chinese companies. Overnight, RedNote, which presents even clearer security risks than TikTok, became the top download on the Apple app store in the US.

TikTok called on US President Donald Trump to offer a reprieve, and he did. On his first day in office, Trump signed an executive order authorising a 75-day extension on the law taking effect.

But it’s unclear what will happen next, and we will need to see how the Trump administration navigates this issue. The law mandates either a forced divestiture or a ban. A previous US effort to force the sale of TikTok failed when the Chinese government issued new rules requiring Chinese companies to obtain a licence for such a sale. Beijing did not grant ByteDance a licence, effectively blocking the sale. Discussions are now reportedly underway for the sale of a 50 percent stake in TikTok to a US company, but that would not fulfil the law’s requirements.

This situation demonstrates the need to act early to inhibit the widespread adoption of social media platforms tied to authoritarian governments, such as Russia and China, that implement sweeping surveillance, censorship and manipulation of public opinion.

Western governments had all the information they needed about the risks of social media apps operating under authoritarian systems when TikTok took off in 2018—the year it became one of the world’s most downloaded apps. That was the time to act—the same time action was being taken to prevent Huawei from dominating the 5G telecommunications sector. The question now is whether we learn from our failures. While it’s too late to prevent TikTok from becoming a beloved American online space, it’s not too late to prevent the widespread adoption of similarly problematic apps. RedNote, for example, remains untouched, as do a host of other Chinese platforms.

The main argument against a sweeping ban on problematic foreign-owned apps is that this would infringe on free speech. But the opposite is true—as the US Court of Appeals essentially found. A social media platform under the sway of a foreign government obsessed with censorship and surveillance is an impediment to free speech. Democratic governments should act to preserve free speech by preventing these platforms from dominating online spaces.

Trade experts and economists understand that free markets don’t just happen naturally; creating and preserving a free market requires a strong government hand. There must be laws against unfair market behavior, mechanisms to bring cases against potential violators, means to investigate those claims, and strong enforcement. Sometimes the biggest violators are governments themselves.

In the same way, a free speech environment doesn’t happen naturally. There must be laws and practices in place to protect it. Put another way, it sometimes takes a strong government hand to create and preserve a free market for speech. As with free markets, sometimes the biggest violators of free speech are governments. And just as the public in a democracy has the ultimate power to vote out its own government for violating freedoms, protecting the public from foreign regimes and their intelligence services is the job of democratic governments.

The Chinese government has no right to censor or manipulate information on US soil. The Trump administration should act as soon as possible to ensure that no other social media companies linked to authoritarian governments can again play host to America’s virtual public square.

Australia enters the America First era: an analysis of the executive orders

The litany of executive orders that have dropped on the White House website tells us plenty about what Australia can expect from a second Trump term’s foreign policies.

And there are plenty of implications of the America First agenda for Canberra.

Let’s begin with Unleashing Alaska’s Extraordinary Resource Potential. Trump’s intent to unlock Alaska’s ‘bounty of natural wealth’ by opening offshore drilling and greenlighting dormant liquefied natural gas (LNG) export projects is a boon for the US economy and energy security.

But plans to ‘prioritize … the sale and transportation of Alaskan LNG to … allied nations within the Pacific region’ potentially cut Australia’s grass. Our fractured LNG export ‘strategy’ is going to have to compete with likely cheaper LNG flooding the Asian market.

Trump’s America First Policy Directive on foreign policy is rather literal, simply stating that it will always put ‘America and its interests first’. Australian policymakers must now frame commitments, agreements, and policies regarding the US around this mandate.

Understanding that this is the way decisions will be taken in this new era will save time and public servants’ energy.

We can already apply the America First policy to one case study: AUKUS pillar one. Trump’s US can be expected to continue supporting the optimal pathway for several national interest reasons. First, Australia has already paid cash. Second, the rotation of US and British nuclear submarines through HMAS Stirling in Western Australia affords a ‘beachhead’ for US strategic depth in the Indo-Pacific. Third, Australia will give billions of dollars more to the US for Virginia class submarines.

America First? Tick.

Central to the America First era is Trump’s plan to block Chinese overreach into strategic regions of American interest. It’s not clear how the US might secure control of Greenland and the Panama Canal, but it’s quite clear why Trump wants to do it.

Canberra shares with Washington common interests and challenges posed by Beijing’s creeping territorialisation efforts in Antarctica. Antarctica is a strategic continent, and protecting it needs much more work through the US-Australia alliance.

One obvious point of divergence is commitment to multilateralism, and there appears to be zero prospect of reversal—Trump has signed an order to withdraw the US from the World Health Organization and has signalled an intention to pull out of the Paris Agreement on climate change.

Further presidential action takes aim at multilateralism. Significantly for Australia, given the number of US and other multinational companies operating in our key industries, Washington is also ditching the OECD Global Tax Deal, which was negotiated by the Biden administration though never approved by Congress.

Signed by 136 countries and jurisdictions representing 90 percent of global GDP, the deal seeks to ensure big firms ‘pay a fair share of tax wherever they operate and generate profits’. Australia remains a fervent advocate for it, along with the remnants of most multilateral bodies, while Trump’s memorandum prioritises ‘sovereignty and economic competitiveness by clarifying that the Global Tax Deal has no force or effect in the United States’. This will be a problem for Australia.

An area of little divergence appears to be foreign aid. Australian efforts in this sector are dismal at best—roughly $4.7 billion in foreign aid was distributed in 2023–24, placing Canberra 26th out of 31 wealthy countries ranked by how much foreign aid they provide. Trump’s Reevaluating and Realigning United States Foreign Aid order might put pressure on Australia to ‘do more’—that is, spend more—in our region. The order freezes US aid while a review is undertaken and frames foreign aid as ‘destabili[sing] world peace by promoting ideas in foreign countries that are directly inverse to harmonious and stable relations internal to and among countries’.

Trump’s declaration of a ‘national energy emergency’ might trigger a much-needed national debate in Australia about our persistent energy insecurity. Our nation sits on immense resource wealth yet has gone from being a global LNG export superpower to importing gas to meet domestic needs in less than a decade.

Trump’s memorandum on Restoring Accountability for Career Senior Executives needs little explanation as to how it could provide lessons for Canberra. Group-think and risk-averse career public servants have hollowed out our public service’s ability to ‘faithfully fulfill … duties to advance the needs, policies, and goals’ of Australia.

The TikTok saga continues into the Trump 2.0 era. Never fear, watchers of MomTok (for the uninitiated, a group of Mormon ‘yummy mummies’ who post on TikTok): Trump’s attempt to find a compromise short of an outright ban gives the US government 75 days to get to the bottom of the reach Beijing is afforded by the popular app, which is used by 170 million Americans.

NSW Premier Chris Minns finds a ‘return to work’ ally in Trump, whose Return to In-Person Work mandate notes ‘all departments and agencies in the executive branch of Government shall, as soon as practicable, take all necessary steps to terminate remote work arrangements’. Again, this could energise debate here in Australia for similar measures.

Trade remains a concern for Australia. Will we, or won’t we, be slapped with the tariff stick? Will Trump be able to separate bilateral trade relations from Australia’s lacklustre defence spending? Trump’s America First Trade Policy provides no clear answers. But the Albanese government needs to recognise that simply pointing to a healthy American trade surplus with Australia—saying ‘smile and wave, boys’—might no longer pass Trump’s pub test.

The hidden risks we scroll past: the problem with TikTok—and RedNote

What if the most popular apps on our phones were quietly undermining national security? Australians often focus on visible threats, but the digital realm poses less obvious yet equally significant dangers. Here a blind spot remains: the hidden risks posed by platforms such as TikTok and RedNote (Xiaohongshu). These apps are more than just harmless entertainment; they’re tools in a global battle for data and influence. And we, as a society, remain largely unaware.

TikTok, RedNote and similar platforms have embedded themselves deeply into daily life. Their algorithms delight us with engaging content, fostering a sense of connection and entertainment. But this convenience comes at a cost. Few stop to question what’s behind these apps: who owns them, where our data goes, what it might say about us, and how it might be used. In fact, these platforms, owned by companies that must obey authoritarian governments, present profound risks to our privacy and national security.

Digital risks are invisible and complex and, for most, our understanding is limited. While most Australians grasp the tangible dangers of terrorism or cyberattacks, the concept of apps and data collection being weaponised for disinformation and influence campaigns feels abstract. This gap in understanding is compounded by the prioritisation of convenience over caution. Governments and experts have sounded alarms, conducted enquiries and in extreme cases implemented total bans—as seen with TikTok in the US—but their warnings often fail to resonate amid the noise of daily life. As a result, we remain unprepared for the evolving tactics of malign actors who exploit these vulnerabilities.

Platforms such as TikTok and RedNote collect vast amounts of user data—from location and device details to browsing habits. In the wrong hands, this data can be used to map social networks, identify vulnerabilities or inform targeted disinformation campaigns. Algorithms don’t just show users what they like; they also shape what users believe. Through curated content, adversaries can subtly influence societal narratives, amplify divisions or undermine trust in democratic institutions. Beyond individual users, these platforms could act as backdoors into sensitive areas, through officials’ use of them (despite rules against it) or business executives sharing trade secrets on them.

Australia must address the vulnerabilities these apps create, particularly as the nation strengthens partnerships under such initiatives as AUKUS. Demonstrating robust digital hygiene and security practices will be essential to maintaining credibility and trust among allies.

The enactment of the Protecting Americans from Foreign Adversary Controlled Applications Act has prompted an exodus of users from TikTok, driving them to seek alternative platforms—though Donald Trump has given the app’s owner some indication of a reprieve.

Many TikTok users have turned to RedNote, which has rapidly gained traction as a replacement. Unlike TikTok, which operates a US subsidiary and is banned within China, RedNote is fully Chinese-owned and operates freely within China, creating a level of commingling and data exposure that was not present with TikTok. This raises even greater concerns about privacy and national security. While banning RedNote might seem like a straightforward solution, it does not address the core issue: the lack of public awareness and education about the risks inherent in these platforms. Without understanding how their data is collected, stored, and potentially exploited, users will continue to migrate to similar platforms, perpetuating the cycle of vulnerability. This underscores the urgent need for widespread digital literacy and education.

Recent legislation aimed at protecting children from social media platforms, such as the minimum-age requirements introduced by the Australian government, is a step in the right direction. However, this approach risks becoming endlessly repetitive: new platforms and workarounds could quickly emerge to bypass regulations. The question remains: can the government effectively manage implementation of such policies in a fast-evolving digital landscape? And if we are applying policies to protect children, what about defence force personnel using these free applications? They could inadvertently expose national-security information. A consistent, security-first approach to app usage should be considered across all demographics, especially those with access to critical data.

Governments must take the lead by implementing stricter regulations and launching public awareness campaigns. Comprehensive digital literacy programs should be as common as public-awareness campaigns on physical health or road safety, equipping Australians to recognise and mitigate digital threats. People should know where their data is stored, think twice before letting apps track their location, and consider the potential consequences. Digital security is no longer a niche concern; it is a core component of modern citizenship.

The hidden risks we scroll past each day are not just a matter of personal privacy but of national security. As Australians, we must shift our mindset and take these threats seriously. By recognising the vulnerabilities embedded in our digital habits, we can build a more secure and resilient society. Because when it comes to national security, ignorance is no longer bliss.

The TikTok boomerang

Few predicted that TikTok users in the United States would flock to the Chinese app RedNote (Xiaohongshu) in defiance of a US government ban. And yet in the space of just two days this week, RedNote became the most downloaded app in the US, gaining 700,000 users—most of them American TikTok refugees.

Since US data security was the rationale for the TikTok ban, American users’ migration to other Chinese apps only amplifies those concerns. Unlike TikTok—a platform that does not operate in China and is not subject to Chinese law—RedNote is a domestic Chinese app bound by strict Chinese regulations. Moreover, while TikTok says that it stores US user data exclusively within the US, with oversight by a US-led security team, RedNote stores its data entirely in China.

In recent years, China has introduced a series of data protection laws ostensibly aimed at safeguarding user information. But these regulations primarily target businesses, imposing far fewer constraints on government access to personal data. Chinese public authorities thus have wide discretion in requesting and accessing user data.

Beyond the issue of data privacy, US authorities also worry that TikTok might be used to influence public opinion in the US. But TikTok’s algorithms are closely monitored by Oracle, as part of a deal to address security concerns. In contrast, RedNote’s algorithms operate under the close scrutiny of the Chinese government, and the app is subject to China’s stringent content-moderation requirements, which could further shape the opinions of the TikTok refugees now flocking to the platform.

Given the rationale for the law banning TikTok, it is hard to imagine RedNote escaping similar scrutiny. Now that the US Supreme Court has upheld the TikTok law, the president will have the authority to designate RedNote as a national security threat, too. But this process may quickly descend into a game of Whac-a-Mole. As US users migrate from one Chinese platform to another, regulators will find themselves locked in an endless cycle of banning Chinese apps.

As the list of banned apps grows, the US risks constructing its own Great Firewall—a mirror to the censorship strategy long employed by China. Even if Chinese apps are removed from US app stores, tech-savvy users can easily bypass such restrictions with VPNs, just as Chinese users do to access foreign platforms. That means the US government will soon confront the limits of its ability to ban Chinese apps.

Moreover, each new restriction risks fueling defiance, driving even more users toward Chinese-controlled platforms. Instead of mitigating national security concerns, this strategy may inadvertently exacerbate them, introducing the kinds of vulnerabilities that the original ban was supposed to address.

The TikTok ban thus puts the US government in a near-untenable position, which may explain why Donald Trump is reportedly weighing options to spare TikTok (despite having initiated the ban during his first term).

Yet reversing the ban carries its own risks. As legislation passed by Congress, it cannot be repealed by executive order. In theory, Trump could direct law enforcement agencies not to enforce the ban; but that would have far-reaching consequences, not least by calling into question America’s commitment to the rule of law (again mirroring a charge the US has long leveled against China).

An alternative to banning TikTok is a forced divestiture of the app’s US operations, but that solution hinges on one critical factor: China’s approval. In 2020, China implemented restrictions on the export of technologies such as recommendation algorithms—the core of TikTok’s operations—effectively giving the Chinese government veto power over any potential deal.

The TikTok dilemma thus now serves as a powerful bargaining chip for China’s leaders, granting them significant leverage in their dealings with Trump, who campaigned on a promise to impose higher import tariffs on Chinese goods. Not surprisingly, he turned to Chinese President Xi Jinping for help just hours before the Supreme Court was set to weigh in on the ban.

At the same time, the TikTok saga has handed China yet another strategic gift. Friendly interaction between TikTok refugees and Chinese netizens on RedNote has created an unprecedented opportunity for cultural exchange, something China’s rulers have long aspired to but struggled to achieve.

For more than two decades, the Chinese government has aggressively tried to promote its culture and expand its influence in the US. But while it has purchased ads in Times Square and established Confucius Institutes on US university campuses, these efforts have largely failed to gain traction. Remarkably, what RedNote has achieved in just a few days seems to have eclipsed the cumulative impact of all these prior initiatives.

As I explored in my recent book, High Wire, centralised decision-making frequently results in fragile, rather than resilient, regulatory outcomes. The TikTok saga offers a stark reminder that an over-concentration of presidential power in shaping US foreign policy—particularly toward China—can lead to similar outcomes. With Trump expected to consolidate executive power, surround himself with loyalists and operate with fewer institutional constraints during his second term, this trend seems likely to intensify, generating vast unintended consequences.

Social media as it should be

Mathematician Cathy O’Neil once said that an algorithm is nothing more than someone’s opinion embedded in code. When we speak of the algorithms that power Facebook, X, TikTok, YouTube or Google Search, we are really talking about choices made by their owners about what information we, as users, should see. In these cases, algorithm is just a fancy name for an editorial line. Each outlet has a process of sourcing, filtering and ranking information that is structurally identical to the editorial work carried out in media—except that it is largely automated.

This automated editorial process, far more than its analogue counterpart, is concentrated in the hands of billionaires and monopolies. Moreover, it has contributed to a well-documented list of social ills, including large-scale disinformation, political polarisation and extremism, negative mental-health impacts and the defunding of journalism. Worse, social-media moguls are now doubling down, seizing the opportunity of a regulation-free operating environment under Donald Trump to roll back content-moderation programs.

But regulation alone is not enough, as Europe has discovered. If our traditional media landscape featured only a couple of outlets that each flouted the public interest, we would not think twice about using every available tool to foster media pluralism. There is no reason to accept in social media and search what we would not tolerate in legacy media.

Fortunately, alternatives are emerging. Bluesky, a younger social-media platform that recently surpassed 26 million users, was built for pluralism: anyone can create a feed based on any algorithm they choose, and anyone can subscribe to it. For users, this opens many different windows onto the world, and people can also choose their sources of content moderation to fit their preferences. Bluesky does not use your data to profile you for advertisers, and if you decide you no longer like the platform, you can move your data and followers to another provider without any disruption.

Bluesky’s potential does not stop there. The product is based on an open protocol, which means anyone can build on top of the underlying technology to create their own feeds or even entirely new social applications. While Bluesky created a Twitter-like microblogging app on this protocol, the same infrastructure can be used to run alternatives to Instagram or TikTok, or to create totally novel services—all without users having to create new accounts.
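
To make that openness concrete, here is a minimal sketch in Python of what requesting a feed looks like at the protocol level: any client can ask for any published feed by its URI. The endpoint and method are Bluesky’s public AppView and the app.bsky.feed.getFeed lexicon call; the feed URI below is a placeholder, and some feeds may require authentication.

```python
# Minimal sketch: fetch the latest posts from a custom feed via the AT Protocol.
# The feed URI is a placeholder; substitute any published feed generator's URI.
import requests

FEED_URI = "at://did:plc:example/app.bsky.feed.generator/my-custom-feed"  # placeholder

resp = requests.get(
    "https://public.api.bsky.app/xrpc/app.bsky.feed.getFeed",
    params={"feed": FEED_URI, "limit": 10},
    timeout=10,
)
resp.raise_for_status()

# Each item wraps a post record; print author handle and post text.
for item in resp.json().get("feed", []):
    post = item["post"]
    print(f"@{post['author']['handle']}: {post['record'].get('text', '')}")
```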

In this emerging digital world, known as the Atmosphere (so named for the underlying AT Protocol), people have begun creating social apps for everything from recipe sharing and book reviews to long-form blogging. And owing to the diversity of feeds and tools that enable communities or third parties to collaborate on content moderation, it will be much harder for harassment and disinformation campaigns to gain traction.

One can compare an open protocol to public roads and related infrastructure. They follow certain parameters but permit a great variety of creative uses. The road network can convey freight or tourists, and be used by cars, buses, or trucks. We might decide collectively to give more of it to public transportation, and it generally requires only minimal adjustments to accommodate electric cars, bikes and even vehicles that had not been invented when most of it was built, such as electric scooters.

An open protocol that is operated as public infrastructure has comparable properties: our feeds are free to encompass any number of topics, reflecting any number of opinions. We can tap into social-media channels specialised for knitting, bird watching or book piles, or for more general news consumption. We can decide how our posts may or may not be used to train AI models, and we can ensure that the protocol is collectively governed, rather than being at the mercy of some billionaire’s dictatorial whims. Nobody wants to drive on a road where the fast lane is reserved for cybertrucks and the far right.

Open social media, as it is known, provides the opportunity to realise the internet’s original promise: user agency, not billionaire control. It is also a key component of national security. Many countries are now grappling with the reality that their critical digital infrastructure—social, search, commerce, advertising, browsers, operating systems and more—is subordinated to foreign, increasingly hostile, companies.

But even open protocols can become subject to corporate capture and manipulation. Bluesky itself will certainly have to contend with the usual forms of pressure from venture capitalists. As its CTO, Paul Frazee, points out, every profit-driven social-media company ‘is a future adversary’ of its own users, since it will come under pressure to prioritise profits over users’ welfare. ‘That’s why we did this whole thing, so other apps could replace us if/when it happens.’

Infrastructure may be privately provided, but it can be properly governed only by its stakeholders: openly and democratically. For this reason, we must all set our minds on building institutions that can govern a new, truly social digital infrastructure. That is why I have joined other technology and governance experts to launch the Atlas Project, a foundation whose mission is to establish open, independent social-media governance and to foster a rich ecosystem of new applications on top of the shared AT Protocol. Our goal is to become a countervailing force that can durably support social media operated in the public interest. Our launch is accompanied by the release of an open letter signed by high-profile Bluesky users such as the actor Mark Ruffalo and renowned figures in technology and academia such as Wikipedia founder Jimmy Wales and Shoshana Zuboff.

There is nothing esoteric about our digital predicaments. Despite the technology industry’s claims, social media is media, and it should be held to the same standards we expect from traditional outlets. Digital infrastructure is infrastructure, and it should be governed in the public interest.

It’s not too late to regulate persuasive technologies

Social media companies such as TikTok have already revolutionised the use of technologies that maximise user engagement. At the heart of TikTok’s success are a predictive algorithm and other extremely addictive design features—or what we call ‘persuasive technologies’. 

But TikTok is only the tip of the iceberg. 

Prominent Chinese tech companies are developing and deploying powerful persuasive tools in the service of the Chinese Communist Party’s propaganda, military and public security services—and many of them have already become global leaders in their fields. The persuasive technologies they use are digital systems, such as generative artificial intelligence, neurotechnology and ambient technologies, that shape users’ attitudes and behaviours by exploiting physiological and cognitive reactions or vulnerabilities.

Those fields include generative artificial intelligence, wearable devices and brain-computer interfaces. The rapidly advancing tech industry to which these Chinese companies belong is embedded in a political system and ideology that compels companies to align with CCP objectives, driving the creation and use of persuasive technologies for political purposes—at home and abroad.

This means China is developing cutting-edge innovations while directing their use towards maintaining regime stability at home, reshaping the international order abroad, challenging democratic values, and undermining global human rights norms. As we argue in our new report, ‘Persuasive technologies in China: Implications for the future of national security’, many countries and companies are working to harness the power of emerging technologies with persuasive characteristics, but China and its technology companies pose a unique and concerning challenge. 

Regulation is struggling to keep pace with these developments—and we need to act quickly to protect ourselves and our societies. Over the past decade, swift technological development and adoption have outpaced responses by liberal democracies, highlighting the urgent need for more proactive approaches that prioritise privacy and user autonomy. This means protecting and enhancing the ability of users to make conscious and informed decisions about how they are interacting with technology and for what purpose.

When the use of TikTok started spreading like wildfire, it took many observers by surprise. Until then, most had assumed that to have a successful model for social media algorithms, you needed a free internet to gather the diverse data set needed to train the model. It was difficult to fathom how a platform modelled after its Chinese twin, Douyin, developed under some of the world’s toughest information restrictions, censorship and tech regulations, could become one of the world’s most popular apps.  

Few people had considered the national security implications of social media before its use became ubiquitous. In many countries, the regulations that followed are still inadequate, in part because of the lag between the technology and the legislative response. These regulations don’t fully address the broader societal issues caused by current technologies, which are numerous and complex. Further, they fail to appropriately tackle the national security challenges of emerging technologies developed and controlled by authoritarian regimes. Persuasive technologies will make these overlapping challenges increasingly complex. 

The companies highlighted in the report provide some examples of how persuasive technologies are already being used towards national goals—developing generative AI tools that can enhance the government’s control over public opinion; creating neurotechnology that detects, interprets and responds to human emotions in real time; and collaborating with CCP organs on military-civil fusion projects. 

Most of our case studies focus on domestic uses directed primarily at surveillance and manipulation of public opinion, as well as enhancing China’s tech dual-use capabilities. But these offer glimpses of how Chinese tech companies and the party-state might deploy persuasive technologies offshore in the future, and increasingly in support of an agenda that seeks to reshape the world in ways that better fit its national interests. 

With persuasive technologies, influence is achieved through a more direct connection with intimate physiological and emotional reactions compared to previous technologies. This poses the threat that humans’ choices about their actions are either steered or removed entirely without their full awareness. Such technologies won’t just shape what we do; they have the potential to influence who we are.  

As with social media, the ethical application of persuasive technologies largely depends on the intent of those designing, building, deploying and ultimately controlling the technology. They have positive uses when they align with users’ interests and enable people to make decisions autonomously. But if applied unethically, these technologies can be highly damaging. Unintentional impacts are bad enough, but when deployed deliberately by a hostile foreign state, they could be so much worse. 

The national security implications of technologies that are designed to drive users towards certain behaviours are already becoming clear. In the future, persuasive technologies will become even more sophisticated and pervasive, with the consequences increasingly difficult to predict. Accordingly, the policy recommendations set out in our report focus on preparing for, and countering, the potential malicious use of the next generation of persuasive technologies. 

Emerging persuasive technologies will challenge national security in ways that are difficult to forecast, but we can already see enough indicators to prompt us to take a stronger regulatory stance. 

We still have time to regulate these technologies, but that time, for both governments and industry, is running out. We must act now.

Digital spinach: What Australia can learn from China’s youth screen-time restrictions

As Australia debates the right cut-off age for social media use, let’s not forget there already is a cut-off age—13. That’s the age most platforms set in their terms of service in compliance with the United States’ Children’s Online Privacy Protection Act (COPPA).

But there’s a slight problem—whatever they’re doing to keep younger kids out, it’s not working. Surprisingly, a key part of the solution might be found in a place few would expect—China. While the idea of borrowing tactics from a surveillance state might seem unappealing to Australians, there are valuable lessons we could learn from Beijing when it comes to enforcing age restrictions and protecting young users from the harms of social media.

Research conducted by the Office of the eSafety Commissioner reveals that nearly a quarter of children aged eight to 10 report using social media weekly or more often, and almost half of those aged 11 to 13 are doing so. Which raises the question: what are these companies actually doing to keep underage users off their platforms?

That’s precisely what eSafety Commissioner Julie Inman Grant asked the major social media platforms earlier this month. They have 30 days to respond, but the answer is already clear. With kids getting phones at younger ages, many are downloading apps intended for those 13 and older, and are lying about their age in the process. The safeguards in place are far from robust.

Even the major platforms concede that keeping underage users out is a losing battle. When Instagram paused its ‘Instagram Kids’ project in 2021—a version of the app specifically designed for users under 13—Adam Mosseri, the Head of Instagram, admitted that relying solely on age verification was not enough, advocating instead for a safer, controlled version of the app for younger users.

So, if the current measures are ineffective, what’s the solution? The federal government’s $6.5 million trial of ‘age assurance’ technologies is exploring a range of options to enforce age restrictions more effectively, from a digital ID to AI profiling and biometric analysis.

But in a draft open letter to the government, some Australian tech leaders criticised the trial as a ‘fundamental strategic error’, arguing that tech giants should be responsible for developing and enforcing age verification systems themselves. These companies, they said, should face severe penalties if they fail to comply—penalties that would compel them to figure out a solution.

The crux of the issue is how severe those penalties should be. The US Federal Trade Commission (FTC) regularly fines major platforms for violating COPPA, but with little impact. For example, TikTok and ByteDance have been accused of flagrantly violating COPPA by collecting data from children under 13 without parental consent. However, the $5.7 million fine imposed—a record at the time—was insignificant for a company with $16 billion in US revenue last year: less than 0.04 percent of that figure. The risk-reward balance remains skewed towards non-compliance.

Even if fines increase, platforms still face a fundamental challenge: verifying the age of children too young to have an ID. Platforms argue that others are better placed to solve the problem. Snap, for instance, has suggested that device manufacturers should handle age verification since they control the registration process when a new phone is activated. Meanwhile, Meta advocates for legislation requiring app stores to implement age verification tools, allowing parents to block children under 16 from downloading social media apps.

So, the social media platforms blame either the app stores or the device makers, who then point right back at the platforms. Perhaps it’s time for everyone to take responsibility?

China, unexpectedly, provides a model for how this could be done. Last year, Beijing mandated a coordinated effort across app developers, app stores and device manufacturers to create a unified ‘minor’s mode’. This framework enforces strict rules such as age-specific screen-time limits, mandatory breaks, and a curfew banning use between 10 p.m. and 6 a.m. These measures are designed to close the loopholes kids have exploited, such as using their grandparents’ accounts to dodge restrictions and indulge in late-night gaming.

This being communist China, the approach extends beyond mere access restrictions. It segments children into age groups, prescribing the type of content they can access. Children under eight are limited to 40 minutes of screen time per day, with content strictly educational. Once they turn eight, their allowance increases to one hour, introducing ‘entertainment content with positive guidance’. It’s a grand piece of social engineering, rooted in a blend of paternalistic, Confucian and Leninist principles, that appears designed to ensure the next generation grows up patriotic, productive and in line with the party-state’s vision for the future.
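
To make the structure concrete, here is a toy sketch in Python of the rules as just described—an illustration only, not the actual Chinese implementation, whose technical details are not public:

```python
# Toy sketch of the 'minor's mode' rule structure described above: daily
# screen-time limits by age band plus a 10 p.m.-6 a.m. curfew.
# Illustration only; not the actual Chinese implementation.
from datetime import time

CURFEW_START, CURFEW_END = time(22, 0), time(6, 0)

def minors_mode_rules(age: int, now: time) -> dict:
    """Return the access rules that would apply to a child of `age` at `now`."""
    in_curfew = now >= CURFEW_START or now < CURFEW_END
    if age < 8:
        daily_minutes, content = 40, "strictly educational"
    else:
        daily_minutes, content = 60, "entertainment with positive guidance"
    return {"daily_minutes": daily_minutes, "content": content,
            "blocked_by_curfew": in_curfew}

print(minors_mode_rules(7, time(23, 30)))
# -> {'daily_minutes': 40, 'content': 'strictly educational', 'blocked_by_curfew': True}
```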

Some Western critics have argued that while Beijing ensures a healthy digital diet for its own youth, it simultaneously exports platforms like TikTok to weaken the youth of other nations. As former Congressman Mike Gallagher starkly put it, ‘ByteDance and the CCP [Chinese Communist Party] have decided that China’s children get spinach, and America’s get digital fentanyl.’

The truth is, TikTok is just a more efficient delivery device for the same content available on any platform, regardless of ownership. While the CCP can influence TikTok’s algorithm, they’re not force-feeding us digital fentanyl; the real issue is our own failure to implement safeguards that ensure a healthier digital experience for our kids.

We don’t need the state to socially engineer or force our children onto a strict ‘spinach’ digital diet, but we can certainly take a page from Beijing’s playbook and force all the stakeholders in our digital ecosystem—from app developers to app stores and device manufacturers—to co-operate, so we can build a digital ecosystem that keeps young children off social media until they’re ready.

Can TikTok alone tackle CCP-linked information ops?

In a welcome development last year, TikTok announced that it would start publishing insights about the covert influence operations it identifies and removes from its platform globally in its quarterly community guidelines enforcement reports.

Since then, the platform has published three quarterly reports that identified 22 separate covert influence operations originating in countries as varied as Russia, Azerbaijan, Ireland, Georgia, Kenya and Taiwan. But there has been one glaring exception: China.

The omission is curious considering that almost every other major social media platform has reported on the presence of covert operations linked to the Chinese party-state. It is, however, hardly surprising, considering that TikTok is owned by ByteDance, a Chinese company over which the ruling Communist Party has decisive leverage.

Late last month, one of TikTok’s major competitors decided to give it a helping hand. Facebook owner Meta published details about a Chinese influence campaign it described as the ‘largest known cross-platform covert influence operation in the world’. TikTok was among the more than 50 online platforms and forums where Meta found evidence of the Chinese political spam network known as ‘Spamouflage’. After The Guardian reached out to TikTok to ask what action they would be taking against the accounts, the platform removed 284 of them.

It is a positive step that TikTok has finally acted against these accounts, but it’s one the company could have taken itself months ago. At ASPI, we’ve been actively monitoring Spamouflage accounts on the Chinese-owned video-sharing app for the past year. In April, my colleagues published ‘Gaming Public Opinion’, which laid out concrete evidence of this type of activity occurring on the app. TikTok’s trust and safety team might like to give it a read; some of the accounts identified in that report are still up on the platform.

Many of the videos shared in the Spamouflage influence operation focused on positive commentary about China’s Xinjiang province, including videos featuring local Uyghurs, likely corralled by the propaganda department, responding in testimonials to reports of forced labour in Xinjiang. Meta’s investigation of the accounts sharing this content found links to ‘individuals associated with Chinese law enforcement’.

None of this comes as a surprise. Three years ago in our report on TikTok and WeChat, my colleagues and I wrote that Xinjiang-related ‘state-linked information campaigns are highly likely to be taking place on TikTok,’ but that it would be unlikely to ‘conduct any transparent investigation to stop state-actor manipulation of its platform.’

How were we able to confidently predict this? Because back in 2018, ByteDance founder Zhang Yiming stated on the record that he would ensure his products served to promote the CCP’s propaganda agenda. In fact, as we noted in our 2020 report, ByteDance works closely with PRC public security bureaus to not just disseminate that propaganda, but to actually produce it in the first place.

The various transparency reports TikTok puts together, including the ones it publishes as part of its obligations as a signatory to the Australian Code of Practice for Disinformation and Misinformation, certainly seem comprehensive. As the company notes in its 2022 transparency report, it is a ‘dedicated signatory’ that ‘opts in to all Objectives and Outcomes’ under the code. Indeed, in some ways TikTok has gone further than other platforms in dealing with the online harms that are prevalent across all of them.

But given the facts outlined above, TikTok clearly cannot be trusted to root out CCP-led information operations on its platform of its own accord. And while Meta has been of assistance in this instance, there is a need for a more sustainable solution that does not rely on competitors to police the information operations taking place on rival platforms. It is time we moved from the current self-regulatory model, in which platforms are left to create, implement and enforce their own rules and standards, to one in which the government can provide oversight and enforce compliance.

As various efforts to regulate TikTok in the US flounder and stall, the Australian government’s proposed Combatting Misinformation and Disinformation Bill 2023 might finally put us on a path towards this co-regulatory model. If passed, the bill would empower the Australian Communications and Media Authority (ACMA) to gather crucial information regarding TikTok’s efforts to counter foreign interference. The ACMA would also have the power to levy substantial fines if, as seems to be the case with the Spamouflage accounts, TikTok’s efforts to deal with them are untimely or inadequate.

The draft bill is not perfect. In our own feedback on it (to be published soon), my colleague Albert Zhang and I propose no fewer than 18 recommendations for how it can be improved. But it at the very least gives the government the ability to force digital platforms like TikTok to do more to combat information operations on their platforms. The clock has been ticking on TikTok for far too long. It’s time we actually did something about it.

There’s no technical fix to a problem driven by ideology

A senior analyst with ASPI’s International Cyber Policy Centre, Fergus Ryan took part in today’s hearing of the Senate Select Committee on Foreign Interference through Social Media. This is his contribution to ASPI’s submission to the inquiry in which he focused on his concerns about TikTok. The full submission is available for download on the Parliament of Australia website.

There are three main national security risks with the PRC-owned video-sharing app, TikTok, that Australians should be concerned about. Two of them—data and content manipulation—are applicable to most other major social media apps regardless of their country of origin. The third risk, that a single political party, the Chinese Communist Party (CCP), has decisive leverage over TikTok, exacerbates the other two risks and is unique to TikTok as a major mainstream social media app.

The first and most discussed risk is about data. Following years of scrutiny, TikTok has been forced to be more forthcoming about the fact that TikTok user data is accessible and has been accessed from the PRC. Close observers of TikTok statements from as early as 2020 know that it has only ever been TikTok’s goal for China-based employees to have minimal access to user data—not to cut it off completely.

Furthermore, the app relies on this access to function. As stated in a September 2020 sworn affidavit by the company’s then chief information security officer, ‘TikTok relies on China-based ByteDance personnel for certain engineering functions that require them to access encrypted TikTok user data.’

In 2023, this still has not changed. Even as the company puts into place its US$1.5 billion plan dubbed ‘Project Texas’ to move all data attached to American users to the United States, and to institute various governance, compliance and auditing systems to mitigate national security concerns, TikTok vice president Michael Beckerman maintains that engineers based in China ‘might need access to data for engineering functions that are specifically tied to their roles.’ At a Senate hearing about social media and national security in September 2022, Vanessa Pappas, TikTok’s chief operating officer, declined to commit to cutting employees in China off from the app’s user data.

As long as PRC-based engineers are able to access TikTok user data, that data is at risk of being accessed and used by PRC intelligence services. TikTok’s constant refrain that user data is stored in Singapore and the US and that it would never hand over the data to the Chinese government even if it were asked is beside the point. The location in which any data is stored is immaterial if it can be readily accessed from China.

Moreover, TikTok’s parent company, ByteDance, couldn’t realistically refuse a request from the Chinese government for TikTok user data because a suite of national security laws effectively compels individuals and companies to participate in Chinese ‘intelligence work’. If the authorities requested TikTok user data, the company would be required by law to assist the government and then would be legally prevented from speaking publicly about the matter.

Unfortunately, even if ByteDance were able to sever PRC access to the app’s user data, Beijing’s intelligence services could still readily acquire sensitive data on virtually anyone in Australia via the commercial data-broker market.

Second, in what has unfortunately been an under-discussed risk, TikTok could continue to skew its video recommendations in line with the geopolitical goals of the CCP. This threat worsens as more and more people get their news and information from online platforms such as TikTok, on which the Chinese party-state can control, curate and censor content.

There’s ample evidence that TikTok has done this in the past. Leaked content-moderation documents have revealed that TikTok instructed ‘its moderators to censor videos that mention Tiananmen Square, Tibetan independence, or the banned religious group Falun Gong’, among other censorship rules. TikTok insists that those documents don’t reflect its current policy and that it has since embraced a localised content-moderation strategy tailored to each region.

In ASPI’s 2020 report into TikTok and WeChat, we found that TikTok suppressed LGBTQ+ content in at least eight languages. After British MPs questioned TikTok executives about our findings, the executives publicly apologised. Our report also included a deep dive into TikTok’s Xinjiang hashtag, where we found a feed flooded with glossy propaganda videos; only 5.6% of the videos were critical of the crackdown on the Uyghurs.

In 2022, TikTok blocked an estimated 95% of the content previously available to Russians, according to Tracking Exposed, a European nonprofit organisation that analyses social media algorithms. In addition to this mass restriction of content, the organisation uncovered a network of coordinated accounts that were using a loophole to post pro-war propaganda on the platform in Russia. In other words, at the outset of Putin’s invasion of Ukraine, TikTok was effectively turned into a 24/7 propaganda channel for the Kremlin.

Following years of intense scrutiny, it is unlikely that TikTok will become a conduit for pro-CCP propaganda in any overt way. In a welcome sign, the company has in recent months begun to label ‘China state-affiliated’ accounts on the platform. It is unclear, however, whether those labels also reduce the reach of the labelled content, as similar labels currently do on other platforms like Twitter.

To further build confidence, TikTok should, as other social media platforms have, regularly investigate and disclose information operations being conducted on the platform by state and non-state actors.

Any manipulation of public political discourse on TikTok is likely to be subtle. Unfortunately, because each user’s TikTok feed is different, any influence the CCP has over the app will be very difficult to track. It would be trivially easy for the app to promote or demote certain political speech in line with the CCP’s preferences, tipping the scales, for example, in favour of speech attacking a political candidate who is critical of the CCP.
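
To make concrete just how small such an intervention could be, here is a deliberately simplified sketch in Python of a feed-ranking function with a hidden per-topic multiplier. Every name, topic and weight in it is hypothetical; TikTok’s actual ranking code has never been published, so this illustrates the technique, not the platform’s implementation.

```python
# A toy feed-ranker. All names, topics and weights are invented; this is
# not TikTok's algorithm, only an illustration of how quiet a politically
# motivated intervention could be.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    predicted_engagement: float  # personalisation model's score, 0..1
    topic: str

# The covert part: a tiny table of topic multipliers. Demoting to 0.7
# starves content of distribution without the conspicuous 'no results'
# behaviour that users or researchers would immediately notice.
COVERT_TOPIC_WEIGHTS = {
    "criticism_of_beijing": 0.7,
    "favoured_narrative": 1.3,
}

def rank_feed(candidates: list[Video], user_affinity: dict[str, float]) -> list[Video]:
    """Order candidate videos for one user's For You feed."""
    def score(video: Video) -> float:
        personal = video.predicted_engagement * user_affinity.get(video.topic, 1.0)
        return personal * COVERT_TOPIC_WEIGHTS.get(video.topic, 1.0)

    return sorted(candidates, key=score, reverse=True)
```

Because the final ordering also depends on each user’s individual affinities, no two feeds are comparable, and an outside observer cannot cleanly separate the covert multiplier from ordinary personalisation. That is the auditing problem in miniature.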

TikTok certainly has the ability to detect political speech on the app: it monitors keywords in posts for content related to elections so that it can attach links to its in-app elections centre. Experiments conducted by the nonprofit group Accelerate Change found that including certain election-related words in TikTok videos decreased their distribution by 66%, and that TikTok consistently suppresses videos when it can detect that they’re about voting.
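
The same detection machinery can drive both the visible feature (the elections-centre link) and the invisible one (suppressed distribution). The sketch below illustrates that dual use; the keyword list, URL and function are assumptions made for illustration, and only the 66% figure comes from Accelerate Change’s published experiments.

```python
# Hypothetical sketch: one keyword classifier, two very different uses.
# The keyword list and URL are invented; the 0.34 multiplier mirrors the
# 66% distribution drop that Accelerate Change measured from the outside.
import re

ELECTION_KEYWORDS = re.compile(r"\b(vote|voting|ballot|election)\b", re.IGNORECASE)
ELECTIONS_CENTRE_URL = "https://example.com/elections-centre"  # placeholder
ELECTION_REACH_MULTIPLIER = 0.34  # a 66% cut in allocated distribution

def process_post(caption: str, base_reach: int) -> dict:
    """Attach the elections-centre banner and set the post's allocated reach."""
    is_election_related = bool(ELECTION_KEYWORDS.search(caption))
    return {
        "banner_link": ELECTIONS_CENTRE_URL if is_election_related else None,
        "allocated_reach": int(base_reach * (ELECTION_REACH_MULTIPLIER if is_election_related else 1.0)),
    }

print(process_post("Register to vote this weekend!", base_reach=10_000))
# {'banner_link': 'https://example.com/elections-centre', 'allocated_reach': 3400}
```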

In 2020, US TikTok executives noticed views for videos from certain creators about the US presidential election were mysteriously dropping 30% to 40%, according to people familiar with the episode and cited by the Wall Street Journal. The executives found that a team in China had changed the algorithm to play down political conversations about the election.

Algorithmic manipulation of content is not limited to TikTok. To take one example, in February 2023 Twitter chief executive Elon Musk rallied a team of roughly 80 engineers to reconfigure the platform’s algorithm so that his tweets would be more widely viewed. There is clearly a need for all social media companies to be more transparent about how changes to their algorithms affect the content users receive.
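
What might such transparency look like in practice? One possibility, sketched below purely as an assumption of mine rather than any platform’s existing practice, is an append-only, hash-chained public log of ranking-parameter changes together with their measured effect on distribution.

```python
# Hypothetical sketch of a tamper-evident algorithm changelog. No platform
# publishes anything like this today; the parameter name is invented to
# echo the February 2023 Twitter episode described above.
import hashlib
import json
import time

def append_change(log: list[dict], parameter: str, old: float, new: float,
                  reach_shift_pct: float) -> None:
    """Record a ranking-weight change, chained to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "parameter": parameter,
        "old_value": old,
        "new_value": new,
        "reach_shift_pct": reach_shift_pct,  # measured effect after rollout
        "prev_hash": prev_hash,
    }
    # Hashing each entry together with its predecessor's hash means that
    # quietly rewriting history later would break the chain and be
    # detectable by any auditor holding an earlier copy of the log.
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

audit_log: list[dict] = []
append_change(audit_log, "owner_account_boost", 1.0, 1000.0, reach_shift_pct=480.0)
```

Regulators and researchers could then verify, rather than take on trust, claims about when and why a ranking change was made.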

The third risk, rightly identified by Cybersecurity Minister Clare O’Neil as a ‘relatively new problem’, is that apps like TikTok are, as the minister put it, ‘based in countries with a more authoritarian approach to the private sector’.

For TikTok’s parent company, ByteDance, this authoritarian approach has included compelling company founder Zhang Yiming to make an abject apology in a public letter for failing to respect the Chinese Communist Party’s ‘socialist core values’ and for ‘deviating from public opinion guidance’—one of the CCP’s terms for censorship and propaganda.

The enormous leverage the CCP has over the company drove ByteDance to boost its army of censors by an extra 4,000 people (candidates with party loyalty were preferred), and it continues to motivate ByteDance to conduct ‘party-building’ exercises inside the company.

In April 2021, Beijing quietly formalised a greater role in overseeing ByteDance when state investors backed by the China Internet Investment Fund (which is controlled by the internet regulator, the Cyberspace Administration of China, or CAC) and China Media Group (which is controlled by the CCP’s propaganda department) took a 1% stake in ByteDance’s Chinese entity, Beijing ByteDance Technology, giving the state veto rights over the company’s decisions. At the time, one of the other two seats on the company’s board was held by Zhang Fuping (张辅评), who was secretary of the company’s Party committee.

More recently, the CAC named a director from its bureau overseeing data security and algorithmic governance to the board of ByteDance’s main Chinese entity. According to the Wall Street Journal, this director replaced another CAC official, who was formerly part of the regulator’s online opinion bureau.

The PRC party-state is, in other words, so intertwined with ByteDance that the company, like many other major Chinese tech companies, can scarcely be considered a purely private firm geared only towards commercial ends. These companies are neither state-owned nor private, but hybrid entities that are effectively state-controlled.

Too much of the public discussion about the risks of TikTok has been narrowly focused on data security. Even if TikTok were to completely sever access to its user data from China (which it does not plan to do), China’s intelligence services could still buy similar user data from data brokers.

It would therefore be to Australia’s benefit if more rigorous data privacy and data protection legislation were introduced, applying to all firms operating here regardless of ownership. If protecting national security and guarding against foreign interference are our goals, such a broad approach is necessary.

But even a complete overhaul of data regulation will not address the risk that the CCP could leverage its overwhelming influence over TikTok and its parent company, ByteDance, to manipulate Australia’s political discourse in ways that would be unlikely to be detected.

There’s no technical fix to a problem driven by ideology. The CCP regards the country’s lack of soft power, or ‘international discourse power’ (国际话语权), as a ‘discourse deficit’ (话语赤字) relative to the strength of Western media and governments, one that seriously hampers China’s international ambitions. The party is open about its view that homegrown social media apps like TikTok present an opportunity to leapfrog the West and begin to meaningfully close that gap. In the past, the party attempted to conduct its influence operations on Western social media apps, a process referred to as ‘borrowing a boat out to sea’ (借船出海). With TikTok, it owns the boat.

(This piece has also appeared on the author’s online newsletter, Red Packet, which covers China, censorship, surveillance and propaganda.)

 


‘Amusing ourselves to death’ in age of TikTok

Freedoms of speech and expression are core democratic principles, but they need active protection. That means the involvement of governments.

US Vice-President JD Vance told the Munich Security Conference that Donald Trump represented a “new sheriff in town” who would defend free speech and “will fight to defend your right to offer it in the public square, agree or disagree”. It was an admirable derivative of the quote attributed to Evelyn Beatrice Hall, describing Voltaire’s principle: “I disapprove of what you say, but I will defend to the death your right to say it.” But just as we have regulators for financial and other markets, we need regulation of our information markets.

By all means, speech should be as free as possible. Awful mustn’t equal unlawful, to borrow ASIO boss Mike Burgess’s phrase. Speech that hurts the feelings of others or advocates unpopular views cannot be the threshold for censorship. Such lazy and faint-hearted policymaking creates only a more brittle society. But that doesn’t mean we should make ourselves fish in a barrel for malign foreign powers.

Anarchy is not freedom. Governments need to brave the minefield that is modern information technology. If a platform poses risks that cannot be avoided, as with TikTok, it should be banned.

Other platforms that sit within democratic nations’ jurisdictions should be subjected to risk mitigations such as content moderation to deter and punish criminal activity. X, Facebook, Instagram and YouTube can be used as avenues for information operations, as shown by Russia buying advertisements on Facebook or CCP-backed trolls posting on X and YouTube, or as vectors for organised crime. Even the most ardent free-speech advocates would agree that drug trafficking, child abuse and joining a terrorist group are illegal offline and therefore should be illegal online.

No marketplace remains free and fair when governments overregulate or abdicate responsibility.

The once-free markets of trade and investment have been eroded by China to such an extent that just this week Trump issued a foreign investment policy to protect American “critical technology, critical infrastructure, personal data, and other sensitive areas” from “foreign adversaries such as the PRC”, including by making “foreign investment subject to appropriate security provisions”.

A key principle of the new presidential policy is that “investment at all costs is not always in the national interest”.

In other words, security measures and rules keep American critical infrastructure free.

While it has not yet gained much media attention, it is among the most important economic security measures ever taken to counter Beijing’s objective to “systematically direct and facilitate investment in United States companies and assets to obtain cutting-edge technologies, intellectual property and leverage in strategic industries”. All of America’s allies and democratic partners should publicly support it and adopt similar measures domestically.

We like to think that technologies are neutral mediums that are only vehicles for improvement. As Postman wrote, this belief often rises to the status of an ideology or faith.

“All that is required to make it stick is a population that devoutly believes in the inevitability of progress,” he wrote. “And in this sense … history is moving us toward some preordained paradise and that technology is the force behind that movement.”

Science and technology have of course delivered extraordinary improvements to our health, our economic productivity, our access to information and our ability to connect with other people regardless of geography, provided we engage with them wisely. We must not become entirely cynical about technology, which is why we must maintain control over it and ensure it serves our interests.

The ultimate solution is knowledge and participation. As Postman concluded, the answer must be found in “how we watch”. With no discussion on how to use technology, there has been no “public understanding of what information is and how it gives direction to a culture”.

Postman wrote that “no medium is excessively dangerous if its users understand what its dangers are”. He insisted we were “in a race between education and disaster”.