Tag Archive for: data retention

The hidden risks we scroll past: the problem with TikTok—and RedNote

What if the most popular apps on our phones were quietly undermining national security? Australians often focus on visible threats, but when it comes to the digital landscape a blind spot remains: the hidden risks posed by platforms such as TikTok and RedNote (Xiaohongshu). These apps are more than just harmless entertainment; they’re tools in a global battle for data and influence. And we, as a society, remain largely unaware.

TikTok, RedNote and similar platforms have embedded themselves deeply into daily life. Their algorithms delight us with engaging content, fostering a sense of connection and entertainment. But this convenience comes at a cost. Few stop to question what’s behind these apps: who owns them, where our data goes, what it might say about us, and how it might be used. In fact, these platforms, owned by companies that must obey authoritarian governments, present profound risks to our privacy and national security.

Digital risks are invisible and complex, and most of us understand them only dimly. While most Australians grasp the tangible dangers of terrorism or cyberattacks, the concept of apps and data collection being weaponised for disinformation and influence campaigns feels abstract. This gap in understanding is compounded by the prioritisation of convenience over caution. Governments and experts have sounded alarms, conducted inquiries and, in extreme cases, implemented total bans—as seen with TikTok in the US—but their warnings often fail to resonate amid the noise of daily life. As a result, we remain unprepared for the evolving tactics of malign actors who exploit these vulnerabilities.

Platforms such as TikTok and RedNote collect vast amounts of user data—from location and device details to browsing habits. In the wrong hands, this data can be used to map social networks, identify vulnerabilities or inform targeted disinformation campaigns. Algorithms don’t just show users what they like; they also shape what users believe. Through curated content, adversaries can subtly influence societal narratives, amplify divisions or undermine trust in democratic institutions. Beyond individual users, these platforms could act as backdoors into sensitive areas, through officials’ use of them (despite rules against it) or business executives sharing trade secrets on them.

Australia must address the vulnerabilities created by these apps, particularly as the nation strengthens partnerships under such initiatives as AUKUS. Demonstrating robust digital hygiene and security practices will be essential to maintaining credibility and trust among allies.

The enactment of the Protecting Americans from Foreign Adversary Controlled Applications Act has prompted an exodus of users from TikTok, driving them to seek alternative platforms—though Donald Trump has given the app’s owner some indication of a reprieve.

Many TikTok users have turned to RedNote, which has rapidly gained traction as a replacement. Unlike TikTok, which operates a US subsidiary and is banned within China, RedNote is fully Chinese-owned and operates freely within China, creating a level of commingling and data exposure that was not present with TikTok. This raises even greater concerns about privacy and national security. While banning RedNote might seem like a straightforward solution, it does not address the core issue: the lack of public awareness and education about the risks inherent in these platforms. Without understanding how their data is collected, stored, and potentially exploited, users will continue to migrate to similar platforms, perpetuating the cycle of vulnerability. This underscores the urgent need for widespread digital literacy and education.

Recent legislation aimed at protecting children from social media platforms, such as the minimum-age requirements introduced by the Australian government, is a step in the right direction. However, this approach risks becoming a game of whack-a-mole: new platforms and workarounds could quickly emerge to bypass regulations. The question remains: can the government effectively manage implementation of such policies in a fast-evolving digital landscape? And if we are applying policies to protect children, what about defence force personnel using these free applications? They could inadvertently expose national-security information. A consistent, security-first approach to app usage should be considered across all demographics, especially those with access to critical data.

Governments must take the lead by implementing stricter regulations and launching public awareness campaigns. Comprehensive digital literacy programs should be as common as public-awareness campaigns on physical health or road safety, equipping Australians to recognise and mitigate digital threats. Australians should know where their data is stored, think twice before granting apps access to their location, and consider the potential consequences of what they share. Digital security is no longer a niche concern; it is a core component of modern citizenship.

The hidden risks we scroll past each day are not just a matter of personal privacy but of national security. As Australians, we must shift our mindset and take these threats seriously. By recognising the vulnerabilities embedded in our digital habits, we can build a more secure and resilient society. Because when it comes to national security, ignorance is no longer bliss.

Data localisation threatens economic growth without improving security

Increasing data localisation—governments requiring certain data to be stored within a jurisdiction—threatens internet innovation and makes the development of digital goods and services more difficult, potentially slowing economic growth.

India is thinking of joining the countries that have data localisation requirements: China, Russia, Indonesia, Vietnam, and others. It is a bad idea.

The internet has delivered worldwide benefits, mostly through US technology companies offering services globally. Light regulation has enabled innovation, facilitated rapid development of internet services, and delivered benefits across the planet.

These technologies are also bringing new challenges. Fake news and social media have been blamed for nothing less than the potential destruction of democracy, and leading technology companies warn of the dangers posed by cyberattacks.

One response has been for governments to clamp down and impose tighter regulations. The inherently borderless nature of the internet has presented regulators and lawmakers with challenges, but in recent times governments have taken two different approaches.

The first is to create laws that apply beyond their own borders. The European Union has imposed tight data-protection rules—known as the General Data Protection Regulation (GDPR)—based on the premise that individuals have the fundamental right to control the use of their own data. The justification for the GDPR’s extraterritorial application is that a citizen’s right to control their own personal data is universal and exists regardless of overlapping jurisdictions. According to the GDPR, personal data should be used in a manner ‘designed to serve mankind’ and should ‘whatever [an individual’s] nationality or residence, respect their fundamental rights and freedoms, in particular their right to the protection of personal data’.

The second approach is to compel data localisation and legislate that data—typically personal data—must be stored within a state’s jurisdiction. China and Russia both have laws that impose a data localisation requirement. Both states cite protection of personal information as one of the justifications for these laws, but concerns have been raised that they’re using localisation to enable intrusive government access to private information. Governments have a range of exceptional access powers that are typically relatively easy to exercise within their borders, but difficult to enforce outside their jurisdiction.

To be clear, both approaches are forms of regulation that impose additional costs. A harmonised global approach to data protection would be preferable. The European GDPR already imposes relatively high costs on companies doing business in the European Union, or even just conducting business with EU citizens.

Data localisation requirements, however, impose additional new costs above and beyond those of the GDPR.

There are several factors that decide where data should ‘live’—that is, where it is best stored. Many of these factors are technological, and the best place to store data has changed over time as technology has evolved.

Some of the factors involved today are:

  • the proliferation of power- and storage-constrained mobile devices—it is better to have much of the world’s data processing and storage take place at data centres that don’t have limits on storage and battery life
  • environmental efficiency—cooling is a significant cost in data centre operations, and some climates are better suited for efficient data centres
  • the limits of communication technology—some locations can provide more responsive and therefore higher quality services
  • physical and political security—it’s no good having a cheap-to-run location with good connectivity to end users if safety and security are compromised by other physical or political instability
  • the human geography—where are end users located, and what are their computing requirements?
  • financial considerations—where can cost-effective space, power and communications connectivity be found?

Data localisation requirements often mean that some or all of these physical and technical factors are compromised. More broadly applied data localisation requirements result in real additional costs, estimated at over 0.5% of GDP. It is only for relatively sensitive data that this cost can be justified.

For some particularly sensitive data, such as financial, health or telecommunications records, and data with national security implications, it’s not unusual for governments to require the information to be physically stored within their jurisdictions. Australia, for example, has mandated that certain health data be stored locally, and national security and other government data is subject to contractually enforced localisation requirements.

For most data, the risk of compromise is not related to its physical location. Hackers don’t gain access to data because of a server’s location—they gain access because of poor cybersecurity.

On top of the direct additional cost, data localisation makes it much harder for new internet businesses to grow. Protecting personal data involves work to secure and manage where it is being sent, tracking and accounting for these data flows, and being able—in the case of the GDPR—to delete individual records if necessary. Localisation requires that this work be duplicated—potentially across many jurisdictions—so that a business expanding internationally might need to simultaneously comply with myriad data localisation requirements on top of its baseline work to manage personal data. This significantly increases the complexity and cost of expansion without improving the cybersecurity practices that actually protect data.
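To make the duplication concrete, here is a minimal sketch (hypothetical names and structure, not any real compliance tooling) of how per-jurisdiction storage multiplies the routine bookkeeping a business already does to track and erase personal data:

```python
from dataclasses import dataclass, field

@dataclass
class JurisdictionStore:
    """One locally hosted copy of user records, as a localisation law might require."""
    name: str
    records: dict = field(default_factory=dict)  # user_id -> personal data

class LocalisedUserData:
    """Tracks where each user's data physically lives, one store per localising jurisdiction."""

    def __init__(self, jurisdictions):
        self.stores = {name: JurisdictionStore(name) for name in jurisdictions}

    def save(self, jurisdiction, user_id, data):
        # Data about users in a localising jurisdiction must stay in that store.
        self.stores[jurisdiction].records[user_id] = data

    def erase_user(self, user_id):
        # A GDPR-style erasure request must now be executed once per store:
        # the compliance work scales with the number of localisation regimes.
        for store in self.stores.values():
            store.records.pop(user_id, None)

# Every new localising market is another store to secure, audit and sweep.
data = LocalisedUserData(['EU', 'India', 'Indonesia'])
data.save('India', 'user-42', {'email': 'user@example.com'})
data.erase_user('user-42')
```

The point is not the code itself but the pattern it shows: each additional jurisdiction adds another copy of the same security, audit and deletion work.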

Regulation is required to protect personal data, but it should be carefully constructed to avoid stifling innovation or adding unnecessary costs and complexity that will strangle new businesses and slow economic growth.

‘U Can’t Touch This’: the inviolability of encryption

In his recent post on The Strategist, Anthony Bergin makes many good points about the use of encryption by non-state actors like Daesh, the related challenges to intelligence collection, and the importance of balancing civil liberties and national security in times of heightened threat. While Anthony’s recommendation that agencies focus on human intelligence is welcome (and in line with the government’s national security statement), what was missing was a clarion call—one in support of strong commercial encryption.

The horror recently unleashed on Paris has prompted questions about what it means if the terrorists had used encryption to shield their plotting and communication from law enforcement and security agencies in Europe. Intelligence heads and lawmakers in the US were quick to claim that encryption technologies were thwarting security efforts and that ‘backdoors’ into devices and software are needed. Regardless of whether the Paris attackers used encryption, the suggestion of banning or weakening commercial encryption represents a patently wrong-headed approach to bolstering security.

The encryption debate isn’t a new one. The so-called ‘Crypto Wars’ have roots back to 1976 when the discovery of ‘public key cryptography’ gave individuals and businesses an option to secure their communications, challenging the domestic monopoly on encryption that the US government had maintained until that point. In the early 1990s, a battle unfolded as the US government lobbied telcos to submit to the ‘Clipper Chip’, technology that ‘relied on a system of “key escrow,” in which a copy of each chip’s unique encryption key would be stored by the government.’ Concerns over deleterious security, privacy and economic consequences saw strong encryption win out after a few years of back and forth. Export controls on encryption were liberalised throughout the Clinton administration, and by 2005, the public’s legal access to encryption was thought to be assured and the Crypto Wars were declared over (at least, by some).

There were various attempts to water down encryption over the intervening years until Snowden’s 2013 disclosure of the NSA’s Bullrun program prompted companies like Apple and Google to begin to package privacy with their products and services. Those firms now offer full-disk encryption, meaning that the data and communications stored on their hardware or software cannot be decrypted by anyone except the user, rendering access warrants impotent. Privacy and security by way of encryption became a selling point and a strategy to win customers in a hotly contested market.
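The underlying principle is simple enough to show in a few lines. The sketch below is an illustration only, using the open-source Python `cryptography` package rather than anything Apple or Google actually ship, but it captures why a key held only by the user defeats a warrant served on the platform:

```python
from cryptography.fernet import Fernet

user_key = Fernet.generate_key()   # generated and held only on the user's device
cipher = Fernet(user_key)

ciphertext = cipher.encrypt(b'private message')

# Without user_key, the ciphertext is opaque bytes; the platform that stores
# or relays it has nothing it can hand over in readable form.
assert cipher.decrypt(ciphertext) == b'private message'
```

A government ‘backdoor’ would mean either a second copy of that key or a deliberate weakness in the cipher, and either one is usable by whoever steals it, which is the crux of the experts’ objection discussed below.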

Beyond the big tech companies, the last few years have seen a proliferation of mobile applications that enable encrypted communication. In March, then-Communications Minister Malcolm Turnbull name-checked a handful of apps that could be used to subvert the government’s data retention regime: ‘Whatsapp or Wickr or Threema or Signal, you know, Telegram, there’s a gazillion of them.’ A few weeks earlier, Turnbull had spoken of the inherent insecurity of text messaging—‘messages are not encrypted in transit… [or] on the telco’s server’—and happily copped to using encryption services himself, including Wickr, WhatsApp and ‘a number of others… because they’re superior over-the-top messaging platforms… You know, millions of people do, hundreds of millions of people use over-the-top applications.’ Encryption is mainstream.

Beyond securing our personal communications, encryption is fundamental to the protection of our online privacy, banking, passwords and corporate assets. In this way, it’s a central contributor to the health of the global economy and business competition. Security and systems experts, cryptographers, digital privacy advocates and tech leaders have all said that weakening encryption is a bad idea and that there’s no way to build a backdoor for government use that won’t also be exploited by terrorists, malicious hackers, tech-savvy criminals, foreign spies and industrial competitors, among others. A few months back, a draft US National Security Council paper determined that ‘the benefits to privacy, civil liberties and cybersecurity gained from encryption outweigh the broader risks that would have been created by weakening encryption’. Mandating that US tech giants introduce backdoors will only push consumers and criminals alike toward products developed in other countries or toward home-brewed encryption. Encryption begets the security and trust that lie at the heart of the internet.

Law enforcement should have the necessary powers and tools to detect and prevent attacks, but weakening or banning cryptography won’t make the masses more secure. Instead, we need to think around encryption. Intelligence agencies should focus on hacking the phones and computers of surveillance targets to exfiltrate private encryption keys, and on breaking into devices to target communications before encryption and after decryption. Greater public–private collaboration and problem solving is needed between the highest levels of the US government and tech firms like Apple, Google, Microsoft and Facebook: government needs a deeper understanding of the technology and the consequences of tweaking it, while private players need to understand the huge operational challenges faced by those charged with keeping us safe. The Australian government should make strong representations in Washington to this end.

We don’t yet have an answer as to the extent to which the Paris terrorists employed encryption. It’s important to remember, however, that many of those who carried out the attacks were on the radar of intelligence services in both Belgium and France, where some were on the high-security ‘fiche S’ watch list along with more than 10,000 others. It has been reported that Turkish authorities contacted their French counterparts twice in the last year to flag one of the 13 November assailants, Omar Ismail Mostefai, as a terrorist threat; it was only in the aftermath of the attacks that French authorities allegedly replied requesting information about Mostefai. That the attacks occurred seems less likely due to an inability to unlock encrypted communications data than due to a failure of coordination, follow-up, targeting and action.

The encryption debate is incorrectly characterised as being about security versus liberty. It’s actually about security versus vulnerability, and always has been.

The Beat

Welcome to the second instalment of The Beat, your weekly wrap-up of strategic policing and law enforcement news. This week we look at the latest policy ideas about how to combat daesh (IS), efforts to counter corruption in the United Kingdom, counterfeit food and drink, the dangers of resting on our laurels in relation to piracy in the Indian Ocean, and the world’s top criminal podcasts.

Preventive Priorities Survey 2015

The Council on Foreign Relations’ Center for Preventive Action has released its survey for 2015. This evaluates ongoing and potential conflicts based on their likelihood of occurring in the coming year and is designed to help US policymakers prioritise competing conflict prevention and mitigation demands. Not surprisingly, the intensification of the conflict in Iraq due to territorial gains by daesh (IS) tops the list. Other priorities of interest for strategic policing and law enforcement include a highly disruptive cyberattack on US critical infrastructure, and an intensification of the Syrian civil war resulting from increased external support for warring parties. Read more

Data retention: who should pay for our national security?

The data-retention debate has been dominated by discussions over the extent to which metadata retention is an essential tool for investigating terrorism and crime, which agencies should be able to access the data, and what safeguards should exist for privacy.

A neglected issue is who bears the costs of implementing the means to safeguard national security: do we ask telcos and internet service providers to find the money to retain metadata, or should government bear the full responsibility and foot what could be a significant bill? ISPs have been vocal about the costs that storing metadata will impose on their businesses in what might be an open-ended commitment. Read more