
The hidden risks we scroll past: the problem with TikTok—and RedNote

What if the most popular apps on our phones were quietly undermining national security? Australians tend to focus on visible threats, but a blind spot remains in the digital realm: the hidden risks posed by platforms such as TikTok and RedNote (Xiaohongshu). These apps are more than just harmless entertainment; they’re tools in a global battle for data and influence. And we, as a society, remain largely unaware.

TikTok, RedNote and similar platforms have embedded themselves deeply in daily life. Their algorithms delight us with engaging content, fostering a sense of connection and entertainment. But this convenience comes at a cost. Few stop to question what’s behind these apps: who owns them, where our data goes, what it might say about us, and how it might be used. In fact, these platforms, owned by companies that must obey authoritarian governments, present profound risks to our privacy and national security.

Digital risks are invisible and complex, and, for most of us, understanding of them is limited. While most Australians grasp the tangible dangers of terrorism or cyberattacks, the concept of apps and data collection being weaponised for disinformation and influence campaigns feels abstract. This gap in understanding is compounded by the prioritisation of convenience over caution. Governments and experts have sounded alarms, conducted inquiries and in extreme cases implemented total bans—as seen with TikTok in the US—but their warnings often fail to resonate amid the noise of daily life. As a result, we remain unprepared for the evolving tactics of malign actors who exploit these vulnerabilities.

Platforms such as TikTok and RedNote collect vast amounts of user data—from location and device details to browsing habits. In the wrong hands, this data can be used to map social networks, identify vulnerabilities or inform targeted disinformation campaigns. Algorithms don’t just show users what they like; they also shape what users believe. Through curated content, adversaries can subtly influence societal narratives, amplify divisions or undermine trust in democratic institutions. Beyond individual users, these platforms could act as backdoors into sensitive areas, through officials’ use of them (despite rules against it) or business executives sharing trade secrets on them.

Australia must address the vulnerabilities these apps create, particularly as the nation strengthens partnerships under initiatives such as AUKUS. Demonstrating robust digital hygiene and security practices will be essential to maintaining credibility and trust among allies.

The enactment of the Protecting Americans from Foreign Adversary Controlled Applications Act has prompted an exodus of users from TikTok, driving them to seek alternative platforms—though Donald Trump has given the app’s owner some indication of a reprieve.

Many TikTok users have turned to RedNote, which has rapidly gained traction as a replacement. Unlike TikTok, which operates a US subsidiary and is banned within China, RedNote is fully Chinese-owned and operates freely within China, creating a level of commingling and data exposure that was not present with TikTok. This raises even greater concerns about privacy and national security. While banning RedNote might seem like a straightforward solution, it does not address the core issue: the lack of public awareness and education about the risks inherent in these platforms. Without understanding how their data is collected, stored, and potentially exploited, users will continue to migrate to similar platforms, perpetuating the cycle of vulnerability. This underscores the urgent need for widespread digital literacy and education.

Recent legislation aimed at protecting children from social media platforms, such as the minimum-age requirements introduced by the Australian government, is a step in the right direction. However, this approach risks becoming an endless chase: new platforms and workarounds could quickly emerge to bypass regulations. The question remains: can the government effectively manage implementation of such policies in a fast-evolving digital landscape? And if we are applying policies to protect children, what about defence force personnel using these free applications? They could inadvertently expose national-security information. A consistent, security-first approach to app usage should be considered across all demographics, especially those with access to critical data.

Governments must take the lead by implementing stricter regulations and launching public awareness campaigns. Comprehensive digital literacy programs should be as common as public-awareness campaigns on physical health or road safety, equipping Australians to recognise and mitigate digital threats. They should know where their data is stored, think twice before letting apps track their location, and consider the potential consequences. Digital security is no longer a niche concern; it is a core component of modern citizenship.

The hidden risks we scroll past each day are not just a matter of personal privacy but of national security. As Australians, we must shift our mindset and take these threats seriously. By recognising the vulnerabilities embedded in our digital habits, we can build a more secure and resilient society. Because when it comes to national security, ignorance is no longer bliss.

To pre-empt extremist violence, we need real-time social media data sharing

Law enforcement and social media platforms must implement real-time data sharing to stop online extremism before it leads to violence. Using appropriate safeguards, we can achieve this without raising concerns about creating a surveillance state.

Social media companies have vast behavioural data, but their reluctance to share it with authorities means we’re left scrambling after an attack occurs. The resulting delay facilitates radicalisation and puts lives at risk. Rather than reacting to attacks, we should aim to prevent harm through a coordinated, data-driven approach. The current system is failing. Speed matters. Privacy concerns are valid, but when the stakes are this high, we need to ask: how many more lives are we willing to risk?

Extremist groups exploit unregulated online spaces to recruit, radicalise and incite violence. By the time we detect it, it’s often too late. We’ve seen the deadly consequences: shootings, terrorism and violence facilitated through social media. Social media companies like to claim they are neutral platforms, but they control the algorithms that amplify content, creating an environment where radical ideas can thrive.

Take, for example, the Christchurch mosque shootings in 2019. The shooter posted his manifesto on Facebook and 8chan (an online message board) before killing 51 people. Although Facebook moved quickly to remove his manifesto, the content spread to thousands. But his interactions with extremist groups and violent posts could have been flagged long before the attack. Had they been shared immediately with law enforcement, authorities could have detected his extremist behaviour early and intervened.

Social media platforms must be more proactive in identifying extremist content and sharing it with authorities immediately. Delayed intervention leaves room for radicalisation. This is compounded by algorithms that prioritise content likely to generate engagement—likes, shares and comments. Extreme content, which often elicits strong emotional reactions, is amplified. Conspiracy theories, such as QAnon, spread widely on online platforms, drawing users deeper into radical echo chambers.

This isn’t about mass surveillance—it’s about content moderation. This approach should build on existing moderation systems. Authorities should only be alerted when certain thresholds of suspicious activity are crossed, much as financial institutions report suspicious transactions. For example, if activity suggests a user is being recruited by a terrorist group, or if the user shares plans for violence, social media companies should have the ability—and in fact the responsibility—to flag this behaviour to authorities.

Of course, automated content detection can result in misjudgements. This is where human content moderators within social media companies could play a role: once an automated system flags potentially harmful activity, it could trigger a review by an employee who would assess whether the flagged behaviour meets a threshold for real-time sharing with law enforcement. If the content is likely to incite violence or indicate a credible threat, the moderator could initiate real-time data sharing with authorities for possible intervention.

This verification process could be among the safeguards in place to ensure that only high-risk, potentially harmful activities are flagged, protecting the privacy of those who don’t present a threat and preventing concerns arising about the government creating a surveillance state. Shared data would follow appropriate legal channels, ensuring transparency and accountability.
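
To make the proposed workflow concrete, here’s a minimal sketch of a flag-then-review pipeline. The risk scores, thresholds and reporting outcomes are assumptions for illustration, not any platform’s actual system.

```python
from dataclasses import dataclass

# Illustrative thresholds only; real values would be set by policy and testing.
FLAG_THRESHOLD = 0.7     # automated moderation flags activity above this score
REPORT_THRESHOLD = 0.9   # a human-confirmed score above this triggers sharing

@dataclass
class Activity:
    user_id: str
    content: str
    risk_score: float    # assumed output of the platform's existing moderation models

def moderator_confirms(activity: Activity) -> bool:
    """Stand-in for the human review step: a moderator decides whether the
    flagged behaviour meets the threshold for real-time sharing."""
    return activity.risk_score >= REPORT_THRESHOLD

def process(activity: Activity) -> str:
    if activity.risk_score < FLAG_THRESHOLD:
        return 'no action'                     # ordinary content never leaves the platform
    if moderator_confirms(activity):
        return 'shared with law enforcement'   # credible threat, confirmed by a human
    return 'moderated internally'              # automated false positive; privacy preserved

print(process(Activity('u1', '...', 0.95)))  # shared with law enforcement
print(process(Activity('u2', '...', 0.75)))  # moderated internally
```

The two-threshold design mirrors the safeguards described above: most content never reaches a human, and nothing reaches authorities without human confirmation.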

The costs of implementing real-time data-sharing systems are manageable. Social media platforms already use automated systems for content moderation, which could be adapted to flag extremist behaviour without imposing significant human resource costs. Shared financial responsibility between social media companies and law enforcement could also help. Law enforcement agencies could receive funding to process flagged data, while tech companies would pay for the technology needed to detect extremist activity. We can manage implementation costs and focus resources where they’re most needed by prioritising high-risk platforms and scaling the system up over time.

A limitation is that Australia could not impose this mechanism on platform operators that had no presence in the country. But the larger platforms’ operators, such as Meta, X and Snap, do.

Our current reactive approach isn’t working. We need real-time data sharing between tech companies and law enforcement to intercept threats before they escalate. Lives are at stake, and we can’t afford to wait for the next tragedy.

As China tries harder to collect data, we must try harder to protect data

China is stepping up efforts to force foreign companies to hand over valuable data while strengthening its own defences. Some of the information it’s looking for would give it greater opportunities for espionage or political interference in other countries.

Australia and other countries need to follow the lead of the United States, which on 21 October proposed rules that would regulate and even prohibit transfers of data containing the personal or medical information of its citizens to foreign entities.

Recent developments from inside China support the idea that the country is refocusing on bulk data, both to aid its intelligence operations and to protect itself from potential adversaries.

China has reformed its domestic legal environment to both protect itself and collect information with intelligence value. A new Data Security Law allows Chinese officials to broadly define ‘core state’ data and ‘important’ data while also banning any company operating inside China from providing data stored in China to overseas agencies without government approval. Firms over a certain size must also have a cell of the Chinese Communist Party to more closely integrate ‘Party leadership into all aspects of corporate governance’, including cybersecurity and data management.

The Communist Party’s Central Committee and the State Council have decreed that the National Data Administration will manage every source of public data by 2030.

The Ministry of State Security has prohibited Western companies from receiving geospatial information from Chinese companies and required companies to take down idle devices to reduce the threat of Western espionage. And Chinese nationals will shortly be unable to access the internet without verifying their identity by facial recognition and their national ID number.

In early October, a report by the Irish Council for Civil Liberties (ICCL) exposed the world of real-time bidding data, where the ads displayed when you go online are the result of an automated bidding process based on your browsing history and precise location. The ICCL report raised concerns that these kinds of analytics could identify people’s political leanings, sexual preferences, mental health state and even the drinks they like. That data has then been sold to companies operating in China.

Beijing’s recent activities in the digital world remind us that even the most mundane and trivial data about a person can have intelligence value—for example, in recruiting agents, guessing passwords and tracking the movements of targets. China’s expansive spying regime, which mobilises countless private entities and citizens, threatens to overwhelm Western intelligence services. That spying regime now has access to more information to inform decisions.

China’s latest moves draw our attention to the peculiar vulnerability of Australia in the region, especially among the AUKUS triad. Australian privacy law does not carry the same protections as British and US laws. Australia has neither a constitutional nor a statutory right to privacy, and its key piece of legislative protection has provisions dating back to the 1980s. Despite receiving the results of a comprehensive review of the Privacy Act more than 18 months ago, the government has been sluggish to adopt any reforms that might help protect us from China’s data-harvesting practices.

The motivation for China to collect personal data in Australia has risen since we entered the AUKUS agreement in 2021. But the government isn’t showing enough interest in securing it against foreign manipulation and theft. Consider, too, that other intelligence players, such as India and Russia, are just as likely to join in.

Australia should take a leaf out of the US playbook on countering Chinese interference in its sovereign data. Since February 2024, the United States has been keen to regulate the sharing of information with foreign entities, starting with an executive order signed by President Joe Biden. The rules that Biden proposed on 21 October would ban data brokerage with foreign countries and only allow certain data to be shared with entities that adopt strict data security practices.

Beyond that, there is a growing need for industry and especially academia to adopt stronger security postures. Posting travel plans or political views on Facebook or Instagram might seem innocuous, but if it’s done by someone in a position of power or with access to valuable information, the individual’s vulnerability to espionage dramatically increases. As a society, we all need to take a little more notice and a little more care with what we are sharing online.

Digitisation no magic pill for China’s ailing public health system

China has prioritised implementation of digital technology in the health sector in recent years. In November 2022, China proposed the digitisation of national health information by 2025 as part of its five-year national health informatisation plan.

Globally, digitisation has been used as a solution to issues such as labour shortages, healthcare privatisation and health record management. The Australian government, for example, has supported the development of a range of digital initiatives for enhancing the cost-effectiveness of the public health sector.

Because digitisation is being led by technology companies, the public health sector is being complicated by the intersection of public and commercial interests. There is growing awareness of issues related to data governance and bioethics in digital health. Concerns have emerged about the potential for digital technologies to exacerbate health inequality and the influence that corporations may have over public health policies. In China, apprehensions centre on the potential for privacy breaches and human rights abuses, given its healthcare divide and polarisation as well as its status as a surveillance state.

China’s health informatisation plan aims to digitise and standardise China’s healthcare systems from the national to the county level across all provinces. The plan demonstrates a general sense of optimism that digitisation will solve the current and future problems faced by China’s public health system.

The framing of China’s Covid-19 tracing program, Health Code, as simply an extension of China’s pandemic surveillance doesn’t show the full picture. The Covid-19 pandemic undoubtedly exposed weaknesses in China’s healthcare system and accelerated the process of public health digitisation on a larger scale, but these technologies also create problems.

Health Code was initially implemented on DingTalk, an enterprise productivity management application owned by the Alibaba Group. It was used to document and report employees’ welfare to employers to assess their capability for work. In February 2020, Tencent and Alibaba worked with the municipal governments in Shenzhen and Hangzhou, where the two tech giants are based, to experiment with including the Health Code functionality in WeChat and Alipay, the two primary digital platforms for communication and financial transactions in China.

Since early 2020, Health Code functionality has been rolled out nationwide to aid the recovery of China’s economy after massive lockdowns. It tracks users’ movements to identify whether they have been exposed to a Covid hotspot. Green, yellow and red indicators show whether a citizen is permitted to go out and access public facilities or must undergo home or collective mandatory quarantine.
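
That traffic-light logic can be sketched in a few lines. This is a minimal illustration with assumed inputs and thresholds, not the actual system’s rules.

```python
from typing import Optional

GREEN, YELLOW, RED = 'green', 'yellow', 'red'

def health_code(days_since_hotspot_visit: Optional[int], close_contact: bool) -> str:
    """Green: free movement. Yellow: home quarantine. Red: collective quarantine."""
    if close_contact:
        return RED       # direct exposure: collective mandatory quarantine
    if days_since_hotspot_visit is not None and days_since_hotspot_visit <= 14:
        return YELLOW    # recent hotspot visit: home quarantine
    return GREEN         # no known exposure: access to public facilities permitted

print(health_code(None, False))  # green
print(health_code(5, False))     # yellow
```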

Each province launched its own version of Health Code, enabling local government officials to manage and even exploit the local Health Code system. In June 2022, five officials in Henan province were disciplined after manipulating the local Health Code system to prevent protestors from travelling. The case raised concerns about whether digital health systems might become a convenient tool for the government as a form of political and ideological control over ethnic minorities or protestors.

More protective and ethical data-management guidelines have yet to be established in China. In August 2022, a breach of Shanghai’s Health Code system exposed the data of 50 million users. Their names, phone numbers, identity documents and Covid status were then put up for sale on the dark web.

This incident exposed the vulnerability of citizens’ data stored by government agencies. China has accelerated efforts to establish a legal architecture for data protection over the past five years, such as the Cybersecurity Law, which took effect in 2017, and the Personal Information Protection Law and Data Security Law in 2021. But these measures have proven ineffective in providing meaningful guidance for how governments, from the national to the local level, should use, collect and protect citizens’ data ethically and with accountability.

The digitisation and unification of China’s health system will not translate to an inclusive model for public health systems. China’s harsh lockdowns exposed the fundamental challenges of its healthcare system, with poor coordination between health organisations, providers and local governments. Healthcare resources are unevenly distributed across age groups, geolocations, social classes and occupations, with the upper-middle-class residents of major cities benefiting far more than marginalised rural residents. The digital divide and the lack of digital literacy further deepen healthcare polarisation.

Policymakers should seek to improve public health systems by addressing the root causes of the problems affecting healthcare, rather than seeing technology as a silver bullet to resolve all issues.

Forget state surveillance—it’s advertisers who know you best

The Australian government is working to modernise and simplify outdated laws governing how Australian agencies conduct electronic surveillance. However, this reform effort is not examining surveillance by companies—so-called ‘surveillance capitalism’. This is a problem because, despite concerns about all-seeing Orwellian agencies, most electronic surveillance is undertaken by corporations harvesting data from people’s digital interactions.

Every day we hand over personal information to internet service providers, telecommunications companies, social media outlets and other companies in exchange for free, cheap or convenient service. A digital footprint follows us everywhere and the generated data is the ‘new oil’ for business.

Data collectors, including Alphabet, Amazon, Meta and Apple, have built electronic profiles of consumers to drive advertising and personalisation of services. Storage is cheap, falling from around US$30,000 per gigabyte in 1989 to less than US$0.025 in 2022, and takes up so little physical space (a 1-terabyte microSD card is smaller than a postage stamp) that there are few barriers to companies amassing stockpiles of personal information for data mining and digital experimentation.
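
A quick check, working only from the figures above, shows the scale of that fall:

```python
# Arithmetic on the figures above: the fall in per-gigabyte storage cost.
cost_per_gb_1989 = 30_000   # US$ per GB, 1989
cost_per_gb_2022 = 0.025    # US$ per GB, 2022

print(cost_per_gb_1989 / cost_per_gb_2022)    # 1,200,000-fold decrease
print(f'${1_000 * cost_per_gb_1989:,.0f}')    # 1 TB of storage in 1989: $30,000,000
print(f'${1_000 * cost_per_gb_2022:,.2f}')    # 1 TB of storage in 2022: $25.00
```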

The profiling of digital consumers enables targeted advertising to occur in the milliseconds between clicking on a web link and the page loading. The better the profile, the better the results and the higher the advertising revenue.
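
In principle, each ad slot is auctioned against the viewer’s profile within a strict time budget. Here’s a toy sketch of that process; the bidder logic, field names and prices are invented for illustration and don’t reflect any real exchange’s API.

```python
import time

def run_auction(profile: dict, bidders: list, budget_ms: float = 100.0) -> tuple:
    """Collect bids against a user profile and return the winning ad and price."""
    deadline = time.monotonic() + budget_ms / 1000.0
    best_bid, best_ad = 0.0, None
    for bidder in bidders:
        if time.monotonic() > deadline:  # bids arriving after the page loads are useless
            break
        bid, ad = bidder(profile)
        if bid > best_bid:
            best_bid, best_ad = bid, ad
    return best_ad, best_bid

# Bidders value the impression more highly the better the profile matches.
def sports_brand(profile):
    return (2.50, 'running-shoes ad') if 'fitness' in profile['interests'] else (0.10, 'generic ad')

def bookmaker(profile):
    return (4.00, 'betting ad') if profile.get('gambles') else (0.05, 'generic ad')

winner = run_auction({'interests': ['fitness'], 'gambles': True}, [sports_brand, bookmaker])
print(winner)  # ('betting ad', 4.0): the richer the profile, the higher the bids
```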

Consumers seem willing to accept, or at least remain blissfully ignorant of, the surveillance being undertaken, inviting ever more ‘data points’ into their lives, particularly through the use of apps on mobile phones, and even paying for the privilege of keeping a constantly listening Alexa or Google device in their homes. The internet has delivered undoubted benefits—reducing the costs of services, connecting global communities and revolutionising access to information on a scale unseen since the invention of the printing press. However, as with all innovation, this technology is a double-edged sword.

The amassing of personal data and the profiling of individuals raise ethical, privacy and national security concerns. Profiling occurs through statistical inference applying probabilities, not certainties, which may produce advertising outcomes ranging from the mildly annoying, such as recommending baldness treatments to those with a full head of hair, to the outright dangerous, such as pushing gambling on those with a gambling addiction.

However, the targeting of advertising doesn’t stop at the passive end of responding to existing preferences; marketing professionals also build needs and wants and push consumers towards products that fill these newly created needs. Profiling helps companies find consumers who can be nudged towards desires they didn’t know they had. Worse, profiling can nudge people towards groups and ideologies they didn’t know existed. A recent paper from the Lowy Institute highlights some of the emerging national security threats of mass personal-data aggregation.

The pace of technological change has outstripped regulators’ ability to address each concern thrown up by corporate electronic surveillance and use of data. Attempting to close down each potentially objectionable use case would be a giant game of interjurisdictional whack-a-mole. Instead, some principles-based regulation would ensure a sustainable digital future for citizens and, importantly, help limit national security concerns and provide a transparent framework when governments seek access to data collected by the private sector.

The corporate world has sensed the changing tide of community expectations. Apple is selling its latest iPhone operating system based on its privacy-preserving credentials. Apple’s privacy settings are credited, in part, with wiping US$250 billion from Facebook’s value—although it isn’t yet clear whether Apple is truly invested in its users’ privacy, or if the move was designed more to harm its data competitors.

In the US, a new bipartisan bill, the Social Media NUDGE Act, aims to curb algorithmic recommendations by asking researchers to identify ways of slowing the spread of harmful content. While this is a useful first step, more is likely to be needed to prevent people from being pushed towards dangerous disinformation and into echo chambers of increasing furore on micro-targeted issues. The potential for adversary nations to exploit algorithms in social media platforms to create or amplify internal divisions warrants serious consideration as an area for regulatory reform.

Article 17 of the EU General Data Protection Regulation enshrines a right for a citizen to request erasure of personal data in certain circumstances. In order to prevent the long-term profiling of Australians, and to mitigate privacy and national security risks, consideration should be given to replicating this regulation in Australia. A right to an annual data ‘reset’ might help balance the benefits (using the consumer’s current preferences) against the dangers (building a long-run profile to nudge people towards radicalisation).

Finally, consideration should be given to the extent to which the Australian government may supplement its direct collection of surveillance material with data from private companies. The Department of Home Affairs is currently consulting on a range of reforms to electronic surveillance, but makes little mention of whether that extends to obtaining data collected by corporations. The reform package should address this, including outlining an authorisation regime, establishing limits on the scope of the requests, and setting out the rights of the private company to refuse or contest the authority or scope of the request.

Sharing information and intelligence in the Pacific

A new ASPI report, The Pacific Fusion Centre: the challenge of sharing information and intelligence in the Pacific, finds that much remains to be done in this area. The report examines the Australian-sponsored Pacific Fusion Centre, which was established to provide strategic assessments on non-traditional security issues to Pacific island countries. The report concludes that although the PFC is a useful soft-power initiative, the Pacific still sorely needs a regional information fusion centre to produce and share actionable intelligence in the maritime domain.

The PFC was set up in 2019 in response to the 2018 Boe Declaration on Regional Security issued by the Pacific Islands Forum. Its principal mandate is to provide strategic intelligence to help the forum’s member states formulate high-level national policy on human security, environmental security, transnational crime, and cybersecurity. Its strategic assessments are based on open-source and unclassified official data. The centre also promotes domain awareness, capacity building and information sharing among members.

The PFC currently operates from interim offices in Canberra and is due to open permanent offices in Vanuatu later this year. A permanent director, a Pacific islander, is being appointed. While this is an Australian-sponsored and -funded initiative, care has been taken to ensure that it’s seen to be ‘Pacific led’.

The centre is in its early days, but over the long term the PFC could become a trusted source of strategic assessments, helping to better align perspectives across the region and inform national and regional policymaking.

The development of consensus views among policymakers on a range of potential threats can only be a good thing. But while the establishment of the PFC should be applauded as a useful soft-power initiative, in practice the impact of its strategic assessments is likely to be limited in several ways.

The first is the PFC’s reliance on open-source data. That might not be problematic in providing policy guidance on issues such as human health and climate change, but in other cases, such as transnational crime or cybercrime, reliance on open-source data may significantly limit the value of assessments. This might leave a gap in some strategic assessments that will need to be bridged by other means, including at the bilateral level.

The effectiveness of the PFC’s strategic assessments on policymaking may also be limited by their distribution to only a small number of government officials. Given the siloed nature of governance structures in many Pacific island states, limiting the distribution of assessments could well limit their policy impact.

These concerns could be partly addressed by establishing a two-tiered system of assessments. Strategic assessments that include sensitive information or analysis could have limited circulation. This would require the development of an appropriate communications system (with all the substantial challenges that would involve), but it would give the PFC’s assessments more potential impact and credibility. At the same time, assessments based on open-source or official data could be distributed to a wider group of stakeholders.

The lack of formal arrangements for intelligence inputs to the PFC from key partners such as the United States, international organisations such as the United Nations Office on Drugs and Crime or regional information fusion centres elsewhere is also likely to limit the PFC’s effectiveness. Those partners could contribute much to the understanding of the threats faced by the region.

There are also some widespread misperceptions about the PFC that need to be rectified. In practice, its role is considerably narrower than its name might suggest. The PFC is quite different from the regional maritime information fusion centres elsewhere in the world that fuse and share operational information or actionable intelligence on specific security threats.

These regional centres build maritime domain awareness through fusing and disseminating actionable information on specific security threats—for example, by identifying vessels that are engaged in crimes or threats such as illegal fishing or smuggling people, arms or drugs, and by providing actionable intelligence to relevant authorities in a timely way.

Indeed, unlike Southeast Asia or the Indian Ocean, where several regional fusion centres have been established, the Pacific still sorely needs a regional centre to fuse and share actionable intelligence in the maritime domain.

The Pacific has several agencies that disseminate information on, for example, fishing (such as the Forum Fisheries Agency) or transnational crime. But they are limited to specified threats and there’s no single centre that brings information together, analyses it and distributes intelligence to security or law enforcement agencies.

There have been several proposals to establish a regional maritime information centre for the Pacific that would provide a one-stop shop for threats in the maritime domain and across the entire border continuum. Indeed, some originally proposed that the PFC would fulfil this role.

To be sure, there are a multitude of practical challenges in sharing operational or classified information and producing actionable intelligence across multiple agencies and countries. Not least is the legitimate concern of Pacific island countries about protecting their sovereignty.

But, as has been demonstrated in many other parts of the world, these challenges can be overcome. A regional information fusion centre for the Pacific may need to start with a small number of partners, but its benefits in providing a comprehensive understanding of the region’s threat environment should become quickly apparent to all.

Mapping China’s Tech Giants: Covid-19, supply chains and strategic competition

Mapping China’s Technology Giants is a multi-year project by ASPI’s International Cyber Policy Centre that tracks the overseas expansion of key Chinese technology companies. This data-driven project, and the accompanying database and research products, fill a research and policy gap by building understanding about the global trajectory and impact of China’s largest companies working across the internet, telecommunications, artificial intelligence, surveillance, e-commerce, finance, biotechnology, big data, cloud computing, smart city and social media sectors.

Today, we’ve relaunched our project with major data updates, new analytical products and two new reports. Here’s a summary of what you can now find on our website.

Data updates

Our China Tech Map now includes more than 3,800 global entries, each populated with up to 15 categories of data, totalling 38,000+ data points. With this relaunch, we’ve added four new companies to our database: Ant Group (digital payment and financial technology), Inspur (cloud computing and big data), Ping An Technology (AI, blockchain and cloud computing) and Nuctech (security technology).

Our data—which you can download here—includes many new entry types. For example, because of the pandemic, we’ve added a category focused on the companies’ monetary and in-kind donations to other organisations or countries. Alibaba, ByteDance and Tencent make up 80+ of the 130 Covid-19 donation entries we’ve mapped.

We’ve also looked into new or expanded areas of business, particularly those related to Covid-19. BGI, for example, signed a number of agreements to establish laboratories to improve Covid-19 testing capacity, such as in Angola, Australia and the United Arab Emirates. Elsewhere, it donated testing equipment, such as in Israel, Greece and Canada.

Smart-city projects (often referred to as ‘safe cities’ by those selling the technology) featured heavily in our 2019 version of the China Tech Map project. We found that these have continued to evolve globally but have also faced greater scrutiny in some countries. In Pakistan, Huawei projects in Islamabad, Lahore and Punjab all faced various political, technical and financial setbacks. Meanwhile, in 2020, Huawei signed an agreement to supply smart-city solutions to Saudi Arabia, while projects in Duisburg, Germany, and Valenciennes, France, appear to be ongoing.

New analysis products

When the China Tech Map project started, we assessed that the global expansion of China’s technology giants needed to be understood within the unique party-state environment that shapes, limits and drives their global behaviour. This, we argued, sets them apart from other large technology companies expanding around the world. This project has sought to: 

  • Analyse the global expansion of a key sample of China’s technology giants by mapping their major points of overseas presence.
  • Provide the public with analysis of the governance structures, party-state politics, supply chain issues and the data ecosystem in which these companies have emerged, and are deeply entwined.

Our ‘Company briefs’ include new ‘Privacy policies’ and ‘Covid-19 impact’ sections. We’ve also updated each existing overview, and of particular note are updates to the ‘Activities in Xinjiang’ and ‘Party-state Activities’ sections. We’re also introducing a new product: ‘Thematic Snapshots’, which combine Company Briefs content across the four thematic areas named above.

New research reports

Finally, with this relaunch, we are publishing two new research reports.

Supply chains and the global data collection ecosystem

Computer networks have become essential to everyday life in many ways. So, when a cyberattack on a vital United States fuel pipeline or on Ireland’s health system causes massive disruptions, the world takes notice.

But a less obvious and more dangerous threat exists within business-as-usual data exchanges or when an adversary can control the direction of technological development. Then there’s no need for an ‘attack’; it’s simply a matter of turning on the tap. Our new policy report, Supply chains and the global data collection ecosystem, demonstrates how risks can emerge.

Most of the 27 companies tracked by our China Tech Map project are heavily involved in the collection and processing of vast quantities of personal and organisational data—everything from personal social media accounts, to smart cities data and biomedical data. Their business operations, and associated international collaborations, depend on the flow of vast amounts of data, often governed by the data privacy laws of multiple jurisdictions.

This report describes how Beijing—through expectations- and agenda-setting in laws and policy documents, and actions such as the mobilisation of state resources to set technology standards—is refining its capacity to exert control over the tech sector’s activities to ensure that it can derive strategic value and benefit from Chinese companies’ global operations.

Reining in China’s technological giants

Since the launch of our China Tech Map project in April 2019, the Chinese tech companies we canvassed have gone through a tumultuous period. Supply-chain vulnerability has ignited work in Europe, North America and other regions to reduce dependence on China. Telecommunications companies such as Huawei and ZTE that are deemed ‘high risk’ by multiple countries are increasingly finding themselves locked out of developed markets.

For China’s leadership, the twin crises of the Covid-19 pandemic and growing China–US strategic and technological competition highlighted the country’s need to achieve its long-held goal of ‘technological self-reliance’. But regulators in China have also used the Covid-19 pandemic as an opportunity to tighten supervision over Chinese tech companies, which over the past decade had grown into behemoths with relatively light regulatory oversight.

Reining in China’s technological giants describes the effects of these domestic and global developments on the 27 Chinese tech giants we cover on our map. This report argues that while the Covid-19 pandemic may have been a short-term boon to many of China’s technology giants (as it has been for technology companies around the world), for the Chinese Communist Party, the pandemic and the US–China trade war were a stark reminder of the country’s fragility in technological innovation.

Project aims

Through this project, ASPI ICPC seeks to contribute to a greater understanding of the global expansion of China’s tech giants. Through extensive data-driven analysis, the project also articulates the reasons why governments, business and civil society should care about, and respond to, these developments. We also hope to stimulate debate on how the public and private sectors should respond.

TikTok’s ‘sale’ is a wake-up call to the dangers of data

After an unseemly scramble that raised concerns about cronyism, economic nationalism and Sino-American tensions, it appears as though the Trump administration’s threat to ban TikTok in the US will result in a new ‘partnership’ with Oracle and now Walmart. In Australia, the episode has prompted Prime Minister Scott Morrison to assert that he ‘won’t be shy’ about taking action if TikTok is found to present a threat to Australia’s national interests.

Concerns over TikTok centre not so much on its data-harvesting business model as on its Chinese ownership. Western governments fear that TikTok will be compelled to give user data to the Chinese government, which may use it to build profiles of citizens for disinformation and influence campaigns and intelligence operations.

The risks related to data collection and storage and the potential for violations of the privacy, rights and security of citizens and for manipulation of political systems are poorly understood by government in general. Remedying that requires a new, comprehensive communications policy that puts the national security issues arising from digital platform oligopolies firmly in the spotlight.

But the TikTok case also points tangentially to a broader range of national security threats posed by the highly concentrated social media market. These include the external manipulation of Australia’s information environment and the shrivelling of Australia’s domestic news industry.

Understanding the TikTok threat begins with recognising what makes the digital information arena different to other venues of contestation. In cyberspace, the marginal costs of gathering, storing and analysing information are effectively zero. Network effects—supersized by the absence of friction—produce a natural tendency towards concentration. The media platforms that win in this intensely competitive environment become dominant purveyors of information.

One consequence is that Australian journalism competes for attention with media the world over. This includes not only credible journalism from established giants like the New York Times, but also endless YouTube videos, blogs, social media posts and—yes—TikTok clips. Maintaining a vibrant domestic media presence is essential for the health of democracy. Yet increased capture of advertising revenue by online platforms threatens to further shrink a media sector that is already one of the world’s most concentrated.

In response, the Australian Competition and Consumer Commission has proposed a new ‘bargaining code’ for digital platforms that would force Google and Facebook to pay Australian media outlets when linking to their products. The code, however, is founded on a conceptual misunderstanding of why Google and Facebook are so successful. Neither firm profits directly from news content, making the notion of a ‘bargain’ faintly ludicrous: there is no revenue to split with media companies.

Studies on Spain’s attempts to force Google to pay to link to news stories in 2014 found very little positive effect on domestic media.

A more intellectually honest way to prop up news organisations would be a simple tax on digital service providers above a certain scale, with revenues going to support domestic journalism.

This would be controversial: a somewhat similar ‘digital tax’ proposal in the EU has met with vociferous opposition from Washington. Yet there’s no reason that an explicit tax should be less controversial than an ill-designed indirect tax posing as a commercial bargain.

The second strategic threat posed by TikTok—as well as other foreign-owned digital media platforms, such as WeChat—is deliberate manipulation of Australia’s domestic information environment.

Concerns that these apps manipulate the information users view and distribute are not theoretical: as ASPI’s International Cyber Policy Centre has shown, it’s already happening. TikTok’s parent company, ByteDance, and WeChat’s parent, Tencent, are bound by China’s internet security law, which requires them to follow Chinese Communist Party directives.

These companies cannot credibly commit to refrain from presenting censored or misleading content in pursuit of Chinese objectives. Combined with the decline of domestic journalism, ceding the digital information environment to opaque foreign outlets would hasten what ASPI’s Tom Uren has termed ‘the slow, creeping erosion of our sovereign decision-making’.

In the face of such threats to the integrity of Australia’s information environment, what policy responses should be on the table? One approach is an outright ban on foreign-owned digital platforms that represent potential vectors for malicious information operations. Such a ban could be enforced at a device level through app stores on iOS and Android, or via a more complex system of IP blocking by internet service providers. As India has shown, such a ban is possible.

Nevertheless, prohibition should only ever be seen as a last resort. Another option is imposing mandatory content and privacy standards for digital platforms that reach a certain size and scale. Yet TikTok cannot credibly commit to the type of user privacy and data protection framework that the International Cyber Policy Centre suggests is required without revealing details about how its algorithm works. And that algorithm has just been placed on China’s export control list.

A final option is to force a change of ownership to an entity that poses less of a strategic concern. This was the theory behind the Trump administration’s original demand that TikTok’s US operations be sold to a US buyer. It appears, however, that the proposed ‘partnership’ with Oracle will not include transfer of the algorithm that determines which content is shown to each individual user. So, even if the Oracle/Walmart deal does cover TikTok’s Australian operations, it won’t forestall potential manipulation of Australia’s information environment from Beijing.

Each of these options seems unpalatable because it directly attacks the business models of digital platforms. And, of course, Russia and China have enthusiastically used Western-owned platforms such as Twitter and Facebook to manipulate the information systems in democratic countries, as have a range of domestic political actors.

But it is time for hard choices about how to deal with the issues of data harvesting and storage and the deluge of disinformation on both Chinese-owned and Western-owned digital platforms. Australia’s current course of muddling through with ad hoc policy looks increasingly dangerous.

Will our digital national identity survive our lack of care?

Australia’s national identity has always been difficult to define. It is complex and ever-changing, the dynamic collective of Australians and our environment, history, geography, culture and outlook.

Defining our national identity might be a significant challenge, but so is storing, preserving and interpreting the data that forms it. National identity data is the digital evidence of who we are, how we see ourselves, and how we relate to the rest of the world. It includes high-value personal, social, legal, democratic and historical data, such as records of births, deaths and marriages; immigration records; the decisions of our courts and parliaments; and the many stories told on our screens and airwaves through social and electronic media.

Because of its importance to us as a nation, we must keep it safe and accessible. Nationally, digitisation is only going to increase; most Australian governments are committed to being fully digital within the next few years. As custodians of the bulk of national identity data, government agencies have a responsibility to protect it. And with the creation and retention of fewer paper traces, accessing and preserving this information is becoming more complicated.

Equally disturbing is our inability to access, understand and adequately discriminate between what’s valuable and what isn’t. In 2016, American historian Abby Smith Rumsey argued that we are now so far ahead of ourselves in the accumulation of data that we may never catch up or truly understand its significance. One way to address the information overload could be through the use of artificial intelligence and big-data analytics, though we don’t yet know how successful that might be.

But how safe is our data? All government agencies are required to comply with the cybersecurity policies set by their respective jurisdictions, but you only have to read recent audit reports by the Australian National Audit Office and its state equivalents to see that those standards aren’t being met. While audits focus on particular agencies, they’re a snapshot of how agencies are faring more generally.

The ANAO’s 2017 cyber-resilience review of three large, well-funded agencies—the Department of Human Services, the Australian Taxation Office and the Department of Immigration and Border Protection (now the Department of Home Affairs)—was illuminating. Only DHS was compliant with the top four cyber-intrusion mitigation strategies in the government’s information security manual, and it was also the only department that qualified as internally cyber-resilient.

The review found that the agencies’ lack of compliance puts their data at risk. The Australian Signals Directorate has estimated that around 85% of cyber-intrusions would be mitigated if the top four strategies were implemented.

Given the range of information-management and cybersecurity protocols and frameworks disseminated by the Australian Signals Directorate, the Attorney-General’s Department and the Department of the Prime Minister and Cabinet, it’s worrying to see the inconsistencies across agencies that hold critical data relating to Australia’s national identity and security.

And then there are the ultimate information and data custodians—national and state archives, records organisations, libraries and other cultural institutions—which are struggling to keep even their basic services afloat, let alone protect and preserve our digital heritage and national identity data.

The current parliamentary review of national institutions in Canberra is evidence of that. The committee has received numerous submissions and testimonials from the heads of cultural institutions decrying the continued funding cuts. Although a handful of agencies have recently received one-off funding for digital initiatives, the National Archives of Australia, which holds some of the government’s most valuable and sensitive information, has unsuccessfully sought funding to build a digital archive five times over the past 10 years.

A great deal of effort and focus is placed on protecting critical infrastructure like roads, communications and ports, as well as classified and sensitive information, but the same can’t be said of our national identity data, or of the national and state institutions that protect and provide access to those digital assets.

The value of our digital national identity must be more widely recognised by governments, the public and the entities that create it, and it must be protected.

Keeping that data safe and accessible is vital not only for chronicling Australia’s past, but also for supporting government transparency, accountability, and the rights and entitlements of all Australians now and in the future.