Tag Archive for: TikTok

Australia needs a cybersecurity overhaul—not whack-a-mole bans on apps like TikTok

Australia has joined other countries in announcing a ban on the use of TikTok on government devices, with some states and territories following suit. The rationale was based on security fears and, in particular, the risk that the platform will be used for foreign interference operations by China.

TikTok is a video-sharing platform operated by ByteDance, a company headquartered in Beijing but incorporated in the Cayman Islands. Data is allegedly stored in the US and Singapore.

Like those of similar sites, TikTok’s privacy policy indicates an expansive approach to the collection and use of personal information. The app can collect information from users and third parties (such as advertisers), and it can draw inferences about its users’ interests. All of this information can then be shared with TikTok’s partners and service providers to, among other things, personalise content and advertising.

The policy also says information will be shared when there is a legal requirement to do so. China’s national intelligence law obliges citizens and organisations to support, assist and cooperate with national intelligence efforts, which could include ByteDance sharing people’s TikTok data.

While TikTok denies it would hand over data in such circumstances, there are reports that data from American users has been accessed by China-based employees. TikTok has also censored content that is politically sensitive in China.

While the Australian government’s response can be explained through this logic, questions remain.

Given the ban only affects government devices, couldn’t the same people be susceptible to foreign interference through their use of TikTok on personal devices? And what about other apps, such as Facebook, that collect significant amounts of user data? Are they more secure than TikTok?

Even if other digital platforms don’t have connections with China, couldn’t they share or sell data to other entities, such as advertisers, data brokers or business partners? And mightn’t those third parties have connections with China? Or other countries with similar laws?

But the problem of digital security and foreign interference is bigger than just one app or the use of government devices. Russia has run information campaigns designed to influence US elections using platforms such as YouTube, Tumblr, Google, Instagram, PayPal, Facebook and Twitter.

Indeed, the Department of Home Affairs notes that foreign interference activities are not only directed towards governments, but also academia, industries, the media and other communities (which is actually everyone).

Banning TikTok on government devices may eliminate one risk, but the broader pool of risks remains, both in government and beyond.

The government is currently developing a new cybersecurity strategy to replace the one put in place by the previous government just three years ago. A discussion paper on the new strategy was released earlier this year. This process will hopefully result in a more holistic strategy on how to manage the cybersecurity and foreign interference concerns that led to the TikTok ban.

Rather than the whack-a-mole tactical response of banning one app at a time, the strategy could provide clarity on how the government will manage the issue of weak security on mobile apps (particularly when used by people in sensitive sectors), as well as the potential for this to be an entry point for foreign interference.

This could include such things as:

  • educating people on digital security and foreign interference
  • streamlining the reporting channels for data breaches, foreign interference attempts, cybercrime, bugs and vulnerabilities
  • developing or recommending the use of appropriate standards on cybersecurity, which could include references to international standards in areas such as information security and data governance
  • strengthening cooperation between government and platforms and civil society
  • introducing targeted prohibitions, which may include bans on apps that could share data with countries that might then use it for foreign interference.

This kind of strategic approach, particularly on the education side, would give Australians better tools to arm themselves against foreign interference online, which, as Home Affairs emphasises, is the ‘best defence’ available.

Another relevant policy development is the government’s review of the Privacy Act, which is the primary Australian law on data protection.

Changing the rules about how data is collected and used by platforms could provide less fodder for those running foreign interference operations. This could include banning unfair uses, such as targeted messaging based on a psychological profile. If the platforms don’t facilitate these uses, it becomes more difficult for foreign governments to use these tools for manipulation.

Enhancing funding for the primary data regulator, the Office of the Australian Information Commissioner, could also strengthen enforcement across the board.

These two reform initiatives exist within a maze of others, including inquiries or proposals relating to online privacy, digital platform services, the influence of international digital platforms, electronic surveillance and digital economy regulation.

Beyond Australia, at the United Nations level, some questions about whether international law can be applied to cyberspace have been resolved, while others remain open. Australia’s position on these issues could also be clarified.

Ultimately, what’s needed is a strategy, rather than tactics, and better coordination of relevant policies across government. The TikTok example also highlights a truism that we shouldn’t think in terms of privacy or security, but rather privacy and security.

While there’s an occasional need to choose between these two values (for example, when government agencies surveil those suspected of a crime, terrorism or espionage), in the vast majority of situations security is enhanced when the privacy of personal information is protected.

For example, the more personal information a foreign agent can access about citizens working in sensitive areas, the better it can target espionage and influence operations. If social media companies are restricted in how they collect, use and share Australians’ data, we can take significant steps towards protecting everyone from foreign interference and other harms.

We need all the policies and associated agencies (cyber, privacy, education, platform regulation, international relations, national security and more) working together if we are to meet the challenges. It may make sense to ban TikTok on government devices, but we need to address this problem more than one app at a time.

Editors’ picks for 2021: ‘Why TikTok isn’t really a social media app’

Originally published 12 March 2021.

There’s one thing we’re all getting wrong about TikTok: it’s not really a social media app. As TikTok Australia’s general manager told the Senate Select Committee on Foreign Interference through Social Media in September last year, the app is ‘less about social connection and more about broadcasting creativity and expression’.

Put another way, think of TikTok more as the modern incarnation of a media publisher—like a newspaper or a TV network—than as a social forum like Facebook or Twitter. That’s because TikTok is much more assertively curatorial than its competitors. It’s not a forum, it’s an editor. Its algorithm decides what each user sees, and it’s the opacity of that algorithm that presents the most worrying national security risk.

It may sound like an insignificant distinction, but TikTok’s emphasis on an ‘interest graph’ instead of a ‘social graph’ took the app’s competitors completely by surprise, and has largely gone over the heads of most lawmakers. The app, owned by Chinese technology company ByteDance, hit 2.3 billion all-time downloads in August 2020, so it’s high time policymakers understood exactly what makes TikTok tick.

An essay by Eugene Wei should be at the top of their reading list. A San Francisco–based start-up investor and former Amazon and Facebook employee, Wei dissects TikTok’s strategy and shows how its recommendation engine keeps users glued to their screens. It does it not by connecting them with friends or family, but by closely analysing their behaviour on the app and serving them more of what they’re interested in.

Wei’s opus, which approaches 20,000 words and is only the first in a three-part series, explains how TikTok is not the same as the major social media platforms we’re more familiar with. Put simply, on Facebook and Twitter, the content that users see is largely decided by who they follow. On TikTok, however, the user doesn’t have to follow anyone. Instead, the algorithm very quickly learns from how users interact with the content they’re served in the app’s ‘For You’ feed to decide what it should deliver to them next.

The approach is similar to that of Spotify and Netflix, whose recommendation algorithms take note of which songs and movies you listen to or watch in full and which you skip to decide what new content to suggest. As Wei puts it, ‘TikTok’s algorithm is so effective that it doesn’t feel like work for viewers. Just by watching stuff and reacting, the app learns your tastes quickly. It feels like passive personalization.’
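The interest-graph mechanism Wei describes can be illustrated as a simple preference-weight update: each watched or skipped clip adjusts a per-topic score, and the next clip comes from whatever currently scores highest, with no follow graph involved. This is a minimal sketch of the general technique, not TikTok’s actual system; the topic labels, learning rate and scoring rule are all illustrative assumptions.

```python
from collections import defaultdict

class InterestGraphRecommender:
    """Toy sketch of interest-based ranking: no follow graph,
    just per-topic scores learned from watch behaviour."""

    def __init__(self, learning_rate=0.3):
        self.scores = defaultdict(float)  # topic -> learned interest
        self.lr = learning_rate

    def record_view(self, topic, watched_fraction):
        # Watching most of a clip raises the topic's score;
        # skipping early lowers it (signal centred on 0.5).
        signal = watched_fraction - 0.5
        self.scores[topic] += self.lr * signal

    def next_topic(self, candidates):
        # Serve more of whatever the user has engaged with most.
        return max(candidates, key=lambda t: self.scores[t])

rec = InterestGraphRecommender()
rec.record_view('cooking', 1.0)   # watched to the end
rec.record_view('politics', 0.1)  # skipped almost immediately
print(rec.next_topic(['cooking', 'politics']))  # cooking
```

The point of the sketch is that the feedback loop needs nothing but behaviour: no friends, no follows, no stated preferences, which is exactly why it generalises across cultures the engineers don’t understand.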

It’s a strategy, Wei argues, that allowed a team of Chinese engineers—who didn’t necessarily have a good understanding of the cultures in the places where the app is available—to take the world by storm.

TikTok didn’t just break out in America. It became unbelievably popular in India and in the Middle East, more countries whose cultures and language were foreign to the Chinese Bytedance product teams. Imagine an algorithm so clever it enables its builders to treat another market and culture as a complete black box. What do people in that country like? No, even better, what does each individual person in each of those foreign countries like? You don’t have to figure it out. The algorithm will handle that. The algorithm knows.

But that’s not the only thing the algorithm knows. In a recent Protocol China exposé, a former censor at ByteDance said the company’s ‘powerful algorithms not only can make precise predictions and recommend content to users—one of the things it’s best known for in the rest of the world—but can also assist content moderators with swift censorship’.

The former employee, who described working at ByteDance as like being ‘a tiny cog in a vast, evil machine’, said that even live-streamed shows on the company’s apps are ‘automatically transcribed into text, allowing algorithms to compare the notes with a long and constantly-updated list of sensitive words, dates and names, as well as Natural Language Processing models. Algorithms would then analyze whether the content was risky enough to require individual monitoring.’
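The moderation pipeline the former censor describes — an auto-generated transcript matched against a running list of sensitive terms, with borderline items escalated to a human — can be sketched roughly as follows. The term list, hit-count scoring and threshold here are illustrative assumptions, not ByteDance’s actual rules.

```python
# Hypothetical term list; the real lists are long and constantly updated.
SENSITIVE_TERMS = {'example-banned-name', 'example-banned-date'}

def flag_transcript(transcript, terms=SENSITIVE_TERMS, threshold=2):
    """Return 'block', 'review' or 'pass' based on how many
    sensitive terms appear in an auto-generated transcript."""
    words = transcript.lower().split()
    hits = sum(1 for w in words if w in terms)
    if hits >= threshold:
        return 'block'   # enough matches: take down automatically
    if hits > 0:
        return 'review'  # borderline: route to a human moderator
    return 'pass'

print(flag_transcript('a harmless cooking stream'))  # pass
```

Even this crude keyword filter shows why live streams are tractable at scale once speech-to-text is in place: the expensive human step is reserved for the 'review' bucket.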

There’s no doubt that TikTok and its parent company have these abilities to monitor and censor. The question is, will they continue to use them? Certainly, the blunt censorship that typified TikTok’s earlier approach to content moderation and is par for the course on ByteDance’s domestic apps is unlikely to continue, especially after the public scrutiny over TikTok’s censoring of content related to the Tiananmen Square massacre, Black Lives Matter protests and Beijing’s persecution of Uyghurs and other ethnic minorities.

But there’s ample room for ByteDance to covertly tweak users’ feeds, subtly nudging them towards content favoured by governments and ruling parties—including the Chinese Communist Party. After all, it’s an approach that would be in line with the strategy that China’s Ministry of Foreign Affairs and state media are already deploying.

Beijing is exploiting pre-existing grievance narratives and amplifying pro-CCP Western influencers in the knowledge that Western voices are more likely to penetrate target online networks than official CCP spokespeople. The strategy, referred to as ‘Borrowing mouths to speak’ (借嘴说话), is reminiscent of the Kremlin’s approach and is perfectly suited to being covertly deployed on Chinese-owned and -operated social media apps.

Just as experiments have shown that TikTok’s algorithm can hurtle users from a politically neutral feed into a far-right firehose of content, so too can it easily be used to send users down any extreme rabbit hole. By design, the app groups people into ‘clusters’ (otherwise known as filter bubbles) based on their preferences. TikTok’s executives stress that they have measures in place to ensure people don’t become trapped in those filter bubbles. TikTok’s recommendation system ‘works to intersperse diverse types of content along with those you already know you love’, the company claims. The goal, they say, is to ensure that users are exposed to ‘new perspectives and ideas’, but who decides which new perspectives and ideas?
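The company’s ‘intersperse diverse types of content’ claim amounts to occasionally injecting an item from outside the user’s cluster into an otherwise cluster-dominated feed. A minimal sketch, with the injection rate and the content pools as assumed parameters:

```python
import random

def build_feed(in_cluster, out_of_cluster, length=10,
               diversity_rate=0.2, seed=0):
    """Mostly serve items from the user's cluster, but intersperse
    out-of-cluster items at a fixed rate."""
    rng = random.Random(seed)
    feed = []
    for _ in range(length):
        # With probability diversity_rate, draw from outside the bubble.
        pool = out_of_cluster if rng.random() < diversity_rate else in_cluster
        feed.append(rng.choice(pool))
    return feed

feed = build_feed(['dance', 'pets'], ['news', 'science'])
```

Note that both the rate and the out-of-cluster pool are set by whoever operates the system — which is precisely the ‘who decides which new perspectives and ideas?’ question.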

What’s to stop Beijing from pressuring TikTok to encourage communities of Xinjiang denialists to flourish on the platform, for instance? As our report revealed, there’s already evidence that this is happening. Our analysis of the hashtag #Xinjiang showed a depiction of the region that glosses over the human-rights tragedy unfolding there and instead provides a more politically convenient version for the CCP, replete with smiling and dancing Uyghurs.

The power of social media apps has been underestimated before. When Facebook started as a ‘hot or not’ website in a Harvard dorm room at the turn of the millennium, who would have expected it would go on to play a role in inciting violence 13,000 kilometres away?

So how do policymakers deal with a Chinese-owned social media app that isn’t really a social media app but a modern-day interactive TV station, whose editorial decisions are made by an opaque algorithm developed and maintained in Beijing?

It’s past time governments realised the unique problem TikTok presents and they must now tailor solutions to deal with it properly.


TikTok deal fails to address data security and information influence concerns

The proposed TikTok partnership deal between US-based companies Oracle and Walmart and China-based ByteDance fails to address the concerns associated with user data being tied to a Chinese company. Even if majority ownership goes to the US, a continued ByteDance stake in TikTok—regardless of whether it’s as a minority or majority holder—would enable connections between the app’s data and the Chinese Communist Party to be maintained.

It’s clear from ByteDance’s operation of Douyin—the Chinese version of TikTok used in mainland China—that the company’s domestic platforms for Chinese users are closely entangled with the CCP. Proof of a connection between TikTok and the CCP, however, was until recently more difficult to establish.

When Beijing released a policy document on 15 September urging China’s private sector to adhere closer to party ideology, TikTok’s links to the CCP became much more obvious. These policy guidelines illuminate the party’s approach to increasing its influence over private enterprises and entrepreneurs, which includes TikTok as a subsidiary of ByteDance.

With TikTok’s ties to the party, it’s possible to imagine that the applications of user data from TikTok and from ByteDance’s Chinese platforms may not differ substantially. Douyin, for example, has been used as a tool for spreading ‘positive energy’, a term that embodies political ideology in China and has become a popular hashtag. Many of the videos associated with positive energy reportedly propagate dominant state ideology.

The vice president of ByteDance (and secretary of its CCP committee), Zhang Fuping, has emphasised that as an internet company, ByteDance should make use of its advantages in technology and talent, and actively spread positive energy. Here, it’s evident the CCP can use ByteDance’s platforms such as Douyin as a tool to spread political ideology through consumer-friendly short videos.

Although this shows how Douyin can be used as a vehicle for spreading the party message, a recent ASPI report, TikTok and WeChat: Curating and controlling global information flows, found instances of positive energy being spread on TikTok as well. The account @guanvideo, for example, comes from a company that produces videos it says adhere to positive energy guidance and posts content to the #Xinjiang hashtag on TikTok.

Beyond the distribution of CCP ideology, Douyin and TikTok can also be utilised as data-collection platforms to benefit the party. In 2016, China’s state council released an ‘outline of action to promote the development of big data’, which described how big data will become a means to improve governance. This essentially means that any company with data-collection platforms and affiliations with China is capable of benefiting the CCP’s governance system—including TikTok.

The ByteDance Institute of Public Policy (京字节跳动公共政策研究院) has published a report on how data collected from Jinri Toutiao, a Chinese news and information platform created by ByteDance, can be used to support the CCP’s governance systems. The report features analysis of reader preferences based on data from users in 366 cities across China in the first half of 2018.

One of its most significant findings was that the ‘urban temperament’ in smaller cities is more susceptible to being influenced by social and political events and movements. This has significant implications for the ways in which the government interacts with smaller cities and their inhabitants. In short, user data analysis from ByteDance platforms can aid in shaping how, what and where content is conveyed through the company’s platforms.

Although the ByteDance Institute’s report is specific to users within China, it demonstrates potential applications of user data to benefit the party more broadly. As Beijing continues to tighten its grasp of the private sector, companies like ByteDance with close ties to the CCP will continue to assist it when they can.

ByteDance’s ability to shape the information environment has the potential to impact political and social movements in democracies, as well as influence the way that political and social events in China are portrayed internationally.

‘Page not found’: what happens when diplomatic statements meet the WeChat censor

That the Chinese ‘superapp’ WeChat is subject to political control by the Chinese Communist Party is no secret; many studies have tracked its powerful censorship regime over time and documented the content that is routinely taken down from the app.

In recent months, debate over the potential risk that WeChat poses to national security has again flared up in Western countries. In the US, that culminated in an unprecedented move by the Department of Commerce, which last week announced a ban on the app. However, over the weekend, just hours before it was to take effect, a US district court temporarily blocked the order as contrary to citizens’ right to free speech under the First Amendment.

The future of the app in the US and the rest of the world is still unclear, and several issues surrounding it remain unsolved, including China’s strict censorship regime.

In response to the US ban, the Chinese version of WeChat, called Weixin, censored the department’s statement for its users in the People’s Republic of China. The link still works for WeChat users residing overseas.

There’s strong evidence that users of the app outside of the PRC are subject to less strict rules than their mainland counterparts, and that therefore two different censorship systems apply to WeChat and Weixin. But it’s also becoming more and more apparent that not only censorship, but also surveillance and political interference are rampant on the international version of the app.

Earlier this month, WeChat and Chinese social media platform Weibo censored a post by the US embassy in Beijing that criticised Chinese state media and the Chinese state’s propaganda system.

On 9 September, the official CCP newspaper, the People’s Daily, refused to publish an opinion piece by US Ambassador to China Terry Branstad about the deteriorating relationship between the two countries.

In the article, the ambassador lamented an imbalance in US–China relations, noting that the standards set by the CCP impede equal exchanges between Chinese and American businesses, as well as undermine diplomatic relations and information flows.

The paper’s refusal to carry the piece caused tit-for-tat attacks between Chinese and US officials over press freedom and freedom of speech.

On the same day, the US embassy published an article on its website titled ‘The hypocritical propaganda system of the People’s Republic of China’. The post was later shared on the embassy’s official Weibo and WeChat accounts. Both posts were quickly taken down by the two platforms. The WeChat post was read over 100,000 times before being removed.

This, however, is only the latest case of the censorship of foreign diplomatic statements on WeChat.

In our recent report, TikTok and WeChat: Curating and controlling global information flows, I, together with other researchers at ASPI’s International Cyber Policy Centre, outlined several other, more subtle cases of information suppression on the app.

As we explain in our report, ‘In the same way that Chinese government departments, spokespeople, embassies and diplomats use Twitter and Facebook to promote messaging overseas, diplomatic missions in China use platforms like WeChat to promote messaging and publish official government statements.’ However, the latter are not free to promote debates or publish any form of criticism that the CCP deems unacceptable.

We found 11 posts by the US embassy in Beijing that were censored between April and August, as diplomatic clashes between China and several Western nations intensified. The articles all touched on topics sensitive to the CCP, such as China’s repression of ethnic minorities, the Hong Kong national security law, and China’s mishandling of the coronavirus pandemic. In contrast, we found no instances of censorship from June 2019 to April 2020.

The posts were subjected to different layers of censorship. Most of the summaries of the articles remained available on WeChat, but the links redirecting to the original articles were broken. In some cases the sharing function was disabled.

The US embassy is by far the most active among foreign diplomatic missions on WeChat and also the most regularly censored. However, the British and Indian embassies in Beijing also received the same treatment earlier this year.

In June, two posts related to the UK’s involvement in the Hong Kong issue were censored on WeChat. One was taken down and the other had its sharing function disabled. The two articles aimed to address accusations by the Chinese government about the UK’s stance on Hong Kong’s independence.

Similarly, as reported by Indian media, a post containing Prime Minister Narendra Modi’s speech on the India–China border standoff in the Ladakh region that the Indian embassy published on its official WeChat and Weibo accounts was removed from both platforms.

While in our report we look at TikTok and WeChat as two arms of the same system, it’s important to note that the power an app like WeChat has in shaping and controlling information flows both within and outside the PRC, especially in Chinese diaspora communities, is unparalleled. That power will only increase as the technology behind the app continues to develop.

In the light of this, we recommend that governments better regulate the content moderation environment in their jurisdictions and require that all social media platforms diligently disclose all the content they censor, penalising those that undertake content moderation covertly.

Defence concerns about TikTok should take ADF families into account

Apprehensions about the social networking platform TikTok, fuelled by media headlines, have included suggestions that there are risks associated with content shared by Australian Defence Force personnel. While the Department of Defence hasn’t made a specific statement against TikTok other than to include the platform on its list of non-approved apps for Defence devices, there are valid concerns about the engagement of military members on any social media platform.

However, focusing only on social media use by serving members disregards another crucial cohort—families and loved ones. Connecting on social media is critical for the health and wellbeing of military families, but it can expose Defence to security risks. Defence needs to do more to assist families in mitigating those risks.

Popular military TikTok hashtags, such as #militaryspouse, #defencefamily, #adf and #defencelife, include thousands of videos posted by siblings and partners of serving members. The videos cover multiple facets of military life, including upcoming and current deployment activity and homecomings, and frequently feature ADF members in uniform. The unofficial defence spouse Facebook pages are also popular with ADF families. The content they post is similar to that of other social networking users who are seeking to connect with others in their groups and share their experiences.

Of course, social media content about the personal experiences of ADF families is not inherently negative. It helps others find a community, normalises experiences and also reminds civilian communities about the ongoing work of the ADF, all of which can help create understanding of and support for future operations. The issue is ensuring that this engagement can be done safely, with both operational and personal security front of mind.

Concerns about military personnel on TikTok began with US claims that the platform could facilitate the release of sensitive location, image and biometric data to TikTok’s Chinese parent company, ByteDance, and thus to the Chinese government. ByteDance has denied these claims. Like the ADF, the US has banned defence personnel from downloading the app on government-issued phones. Analysts have suggested that the cybersecurity risks from TikTok are similar to those associated with other social media platforms and that the current panic appears to be generated by uncertainty and unfamiliarity with the platform.

The content shared by Australian military families on TikTok provides an apt case study demonstrating how clear messaging about secure social media behaviour is not being distributed directly to families, who have access to privileged information and sit at the intersection of military and civilian communities.

With no training or information specifically addressing their use of social media, families are left to their own devices when it comes to engaging online. My research on attitudes to social media security among ADF partners found that they get cybersecurity information from a range of sources, including civilian workplaces, friends, family and 'common sense', but almost never from their ADF family member or Defence networks.

Defence’s full social media policy is not publicly available, which makes it difficult for partners to apply Defence advice to their own online engagement. And while civilian partners are not covered by Defence’s social media policy, research participants discussed incidents where ADF members were questioned by senior personnel about their partners’ or dependents’ online activity. Partners are very resistant to the idea of the ADF restricting their social media activity, claiming their civilian status places them outside of the ADF’s authority.

When families are not directly provided with cybersecurity information and support, they may inadvertently rely on outdated or false information. This is particularly problematic when new issues arise, such as the recent concerns over TikTok. If Defence needs to ask families to stop using an app, or to adjust the content they post online, it has no clear or established way to do so.

Defence needs to recognise that families play an important role in the security of the ADF and bring them directly into discussions about online safety. These discussions should be mutually respectful, based on the motivations and needs of the families as well as the requirements of the ADF.

My research shows that defence partners feel they already have sufficient awareness to be able to engage safely online. Partners take online security very seriously, particularly when it comes to protecting the safety and wellbeing of their serving partner, and want to avoid any negative consequences resulting from the release of sensitive information.

But continuing to exclude partners and other family members from Defence’s cybersecurity conversations could contribute to false or inflated perceptions of their ability to engage safely when interacting on social media.

The clock’s ticking for regulators on TikTok

In the 18 months the Australian Competition and Consumer Commission was busily working away on its 623-page opus on digital platforms, Chinese-owned app upstart TikTok grew a global audience of over 700 million.

With no serious regulatory barriers—let alone a Great American Firewall—to stop it, TikTok achieved its meteoric growth, ironically enough, by ploughing US$1 billion into ads on the social platforms of its Western rivals like Facebook, Facebook-owned Instagram, and Snapchat.

As the ACCC scribes reached the halfway point of their report, Beijing-based ByteDance—the company behind TikTok—became the world's most valuable start-up after securing a US$3 billion investment round which gave it a jaw-dropping valuation of US$75 billion.

Then, just over a week before ACCC Chairman Rod Sims handed the finished report to Treasurer Josh Frydenberg, ByteDance announced at the Shanghai Film Festival that it had amassed more than one billion active users across its family of apps.

The overnight success of TikTok should serve as a reminder to the ACCC that, as journalism professor Margaret Simons flagged in her submission to the inquiry, regulators ‘should not assume that Western digital platforms will be the only ones to gain a foothold in Australia’.

Despite the warning, TikTok gets just a single mention in the report—and only because Facebook listed it as one of its many competitors in an answer to the inquiry. Tencent-operated WeChat gets a paragraph, but with no analysis—despite being used by over a million Australians and having a trail of problematic stories behind it.

To be sure, TikTok is still very much a B player in Australia and—at the moment at least—is only a video-sharing app for tweens that you, as a reader of The Strategist, have probably never heard of (which, by the way, is completely by design).

But ByteDance executives are reportedly considering opening an Australian office. They have already been scouting the local market for content creators and, according to market research group Forrester, the company is already pulling in anywhere from $3.7 million to $12.9 million in Australia.

The speedy rise of TikTok should serve as a wake-up call to regulators. We may be most familiar with Silicon Valley’s tech behemoths like Facebook and Google, but there’s a fresh blessing of Chinese unicorns galloping our way.

Unlike China’s first generation of social media tech giants who stumbled in their international expansion, second-generation upstarts like ByteDance are proving to be much more sure-footed. What we don’t know is where they’ll pirouette to next.

There are already signs that the suite of mini-programs in the TikTok app is going to expand. Similar, seemingly inane Western apps like Snapchat have already branched out into news publishing, and ByteDance will certainly be looking to do the same.

After all, news is a core competency for ByteDance. The company’s main app inside China is the news aggregator Jinri Toutiao (‘Today’s Headlines’) and it has already significantly disrupted the news business there.

It also offended the ruling Chinese Communist Party in the process. In April last year, the company was ordered to suspend Jinri Toutiao after the authorities decided the news stories featured on it were ‘opposed to morality’.

That prompted ByteDance CEO Zhang Yiming to pledge to increase his team of censors from 6,000 to 10,000—the job ads for which noted that candidates with ‘strong political sensitivity’ would be preferred—as well as pour even more resources into developing an AI-powered automated censorship apparatus.

TikTok has already attracted the ire of regulators around the world, including in Indonesia, India, the UK and the US, where the company agreed to a US$5.7 million settlement with the Federal Trade Commission for violating the Children's Online Privacy Protection Act.

But beyond the expected regulatory missteps of a fast-growing social media platform, TikTok is uniquely susceptible to other problems that come from its ineluctable closeness to the censorship and surveillance apparatus of the CCP-led state.

Beijing has demonstrated a propensity for controlling and shaping overseas Chinese-language media. The speedy growth of TikTok now puts the CCP in a position where it can attempt to do the same on a largely non-Chinese-speaking platform—with the help of an advanced AI-powered algorithm. There's evidence to suggest politically motivated censorship is already happening.

Australia’s regulators may think they have a gargantuan task ahead of them grappling with America’s tech behemoths, but they will face a whole new order of problems when they try to rein in the Chinese tech unicorns that are inextricably linked to the CCP’s opaque and erratic censorship and surveillance regime.