
Red tape that tears us apart: regulation fragments Indo-Pacific cyber resilience

The fragmentation of cyber regulation in the Indo-Pacific is not just inconvenient; it is a strategic vulnerability.

In recent years, governments across the Indo-Pacific, including Australia, have moved to reform their regulatory frameworks for cyber resilience. Though these reforms are well-intentioned, inadequate coordination with regional partners and insufficient stakeholder consultation have created regulatory fragmentation—the existence of multiple regulatory frameworks covering the same subject matter—within and among Indo-Pacific jurisdictions.

This inconsistency hinders our ability to collaboratively tackle and deter cyber threats, essentially fragmenting the cyber resilience of the Indo-Pacific.

Regulatory fragmentation threatens regional security for three key reasons.

Firstly, it impedes technical efficiency. While we tend to think of cyberspace as borderless, its composite parts are designed, deployed and maintained on the territory of states that enact their own laws and regulations. Factors such as threat perception, the organisation of the given state and its agencies, and regulatory culture shape these frameworks. The degree to which the state provides essential services and owns physical and digital infrastructure also influences framework development.

As governments introduce complex regulatory obligations for cyber resilience, most digital service providers and ICT manufacturers will have to divert resources from efforts that would otherwise enable them to prepare for and respond to threats more effectively and across jurisdictions. Ironically, this undermines the very regulatory regimes intended to strengthen cyber resilience.

In addition, complex and confusing nation-specific requirements push regulatees to follow a checkbox approach to cyber resilience, rather than a holistic, risk-informed and agile one. Boards may prioritise meeting the bare minimum of regulatory requirements instead of maintaining a risk management posture commensurate with the rapidly evolving threat environment.

Secondly, regulatory fragmentation undermines innovation. Complex regulatory regimes—especially for government procurement and for critical infrastructure operators—can seriously undermine competition and innovation. Startups and smaller vendors (looking to sell to such entities) have to divert scarce resources away from research, development and innovation to fund compliance with a maze of obligations. This is especially problematic for small and medium enterprises in sectors reliant on innovation—such as cyber resilience and advanced manufacturing—as regulatory risk mitigation can deny these firms the ability to scale and expand into new markets.

Thirdly, regulatory fragmentation impedes trust in partnerships. A jurisdiction’s regulatory robustness in relation to cyber resilience is a key factor in determining the suitability of partners in sensitive policy domains.

For example, while Japan has taken steps to invest in its national cyber resilience, particularly after Chinese hackers compromised government networks, the United States has remained cautious about Japan’s ability to protect sensitive information. Through sections 1333 and 1334 of the National Defense Authorization Act for Fiscal Year 2025, the US Congress tasked the Departments of State and Defense with reporting on issues such as: the effectiveness of Japanese cyber policy reforms since 2014; Japanese procedures for protecting classified and sensitive information; and how Japan ‘might need to strengthen’ its own cyber resilience ‘in order to be a successful potential [AUKUS Pillar 2] partner’.

Collaboration requires trust. That trust hinges not just on the quality and harmonisation of regulatory frameworks; it also depends on whether they’re enforced and underpinned by a shared appreciation of the cyber threat environment, including in relation to state-sponsored actors looking to preposition themselves in critical infrastructure assets and steal intellectual property.

That trust also relies on a shared appreciation of the importance of removing unnecessary impediments to innovation, including the growth of allied and partner capability, and threat mitigation by stakeholders, which is itself contingent on shared political will.

After all, regulatory fragmentation is politically driven. Leaders, ministers, officials and regulators each seek to satisfy constituents at home and exert influence abroad over cyber policy. They may prefer to clean the cobwebs through visible operational reactions rather than kill the spider through holistic, long-term preparation.

Such political considerations may disregard commercial and technical realities when regulatory parameters are determined in the interests of digital sovereignty, including when it comes to (not) banning technology vendors.

Fixing this is a tall order but not impossible. Australia and its partners could consider establishing a baseline degree of regulatory harmonisation and reciprocity. This could include factors such as:

—Definitions of the subjects and objects of cyber regulation;

—Thresholds and deadlines for reporting breaches of cyber resilience to the state;

—Standards and controls that regulatees must implement, and outcomes they must achieve;

—Technology supply chain risk management requirements, including methods to assess whether procuring technology from certain vendors is too risky;

—Types of penalties for non-compliance; and

—Powers of the state to gather information or intervene in the operations of regulatees.

Allies and partners must better align their regulatory frameworks. Be it via multi-stakeholder collaboration or multilateral regulatory diplomacy, tackling regulatory fragmentation will make the Indo-Pacific more cyber-resilient.

Let us tear away the red tape that tears us apart.

Reaction isn’t enough. Australia should aim at preventing cybercrime

Australia’s cyber capabilities have evolved rapidly, but they are still largely reactive, not preventative. Rather than merely responding to cyber incidents, Australian law enforcement agencies should focus on dismantling the underlying criminal networks.

On 11 December, Europol announced the takedown of 27 platforms offering distributed denial-of-service (DDoS) attacks for hire, and the arrest of multiple administrators. Such services allow individuals or groups to rent DDoS attack capabilities, enabling users to overwhelm targeted websites, networks or online services with excessive traffic, often without needing technical expertise.
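The mechanism just described, flooding a target with more requests than it can serve, is also what makes such attacks detectable on the defender's side. As a toy sketch (the IP addresses, thresholds and traffic volumes here are invented for illustration, not drawn from the operation), a defender can flag sources whose request rate exceeds a sliding-window threshold:

```python
from collections import defaultdict

def flag_ddos_sources(requests, window_secs=10, threshold=100):
    """Flag source IPs that exceed `threshold` requests within any
    `window_secs`-long sliding window. `requests` is a list of
    (timestamp_secs, source_ip) tuples."""
    by_ip = defaultdict(list)
    for ts, ip in requests:
        by_ip[ip].append(ts)

    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        left = 0
        for right in range(len(times)):
            # Shrink the window from the left until it spans <= window_secs.
            while times[right] - times[left] > window_secs:
                left += 1
            if right - left + 1 > threshold:
                flagged.add(ip)
                break
    return flagged

# Synthetic traffic: one flooding bot and two normal clients.
traffic = [(t * 0.05, "203.0.113.9") for t in range(500)]   # 500 reqs in 25s
traffic += [(t * 2.0, "198.51.100.4") for t in range(10)]   # 10 reqs in 20s
traffic += [(t * 3.0, "192.0.2.77") for t in range(5)]      # 5 reqs in 15s

print(flag_ddos_sources(traffic))  # → {'203.0.113.9'}
```

Real DDoS mitigation is far more involved (distributed sources, spoofing, legitimate traffic spikes), but the core idea is the same: attacks of this kind are visible as statistical anomalies in request rates.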

The takedown was a result of Operation PowerOFF, a coordinated and ongoing global effort targeting the cybercrime black market. While the operation has demonstrated the evolving sophistication of international law enforcement operations in tackling cyber threats, it has also exposed persistent gaps in Australia’s cyber enforcement and resilience. To stay ahead of the next wave of cyber threats, Australia must adopt a more preventative approach combining enforcement with deterrence, international cooperation, and education.

Operation PowerOFF represents a shift in global cybercrime enforcement, moving beyond traditional reactive measures toward targeted disruption of cybercriminal infrastructure. Unlike previous efforts, the operation not only dismantled illicit services; it also aimed to discourage future offenders, deploying Google and YouTube ad campaigns to deter potential cybercriminals searching for DDoS-for-hire tools. This layered strategy—seizing platforms, prosecuting offenders and disrupting recruitment pipelines—serves as a best-practice blueprint for Australia’s approach to cybercrime.

The lesson from Operation PowerOFF is clear: Australia must shift its cyber strategy from defence to disruption, ensuring that cybercriminals cannot operate with impunity.

One of the most effective elements of Operation PowerOFF is its focus on dismantling the infrastructure of cybercrime, rather than just arresting individuals. By taking down major DDoS-for-hire services and identifying more than 300 customers, Europol and its partners effectively collapsed an entire segment of the cybercrime market.

This strategy is particularly relevant for Australia. Cybercriminal operations frequently exploit weak legal frameworks and enforcement gaps in the Indo-Pacific region. Many DDoS-for-hire services, ransomware networks and illicit marketplaces are hosted in jurisdictions with limited enforcement capacity, allowing criminals to operate across borders with little fear of prosecution.

Australia must expand its collaboration with Southeast Asian law enforcement agencies on cybercrime, ensuring that cybercriminal havens are actively targeted rather than passively monitored. Without regional cooperation, Australia risks becoming an isolated target rather than a leader in cybercrime enforcement.

Beyond enforcement, Australia must integrate preventative strategies into its cybercrime response. The low barriers to entry for cybercrime mean that many offenders—particularly young Australians—are lured in through gaming communities, hacking forums and social media.

Targeted digital deterrence, including algorithm-driven advertising campaigns, could disrupt this pipeline, steering potential offenders toward legal cybersecurity careers instead of cybercrime. An education-first approach, combined with stronger penalties for repeat offenders, would help prevent low-level offenders from escalating into hardened cybercriminals while ensuring that persistent offenders face consequences.

Australia’s cybercrime laws must also evolve to address the entire cybercriminal supply chain, not just the most visible offenders. Operation PowerOFF showed that cybercrime is not just about the hackers who launch attacks, but also the administrators, facilitators, and financial backers who enable them.

Australian law enforcement should target financial transactions supporting cybercrime, using crypto-tracing and forensic financial analysis to dismantle cybercriminal funding networks. Harsher penalties for those who fund or facilitate DDoS-for-hire services could create a more hostile legal environment for cybercriminal enterprises, ensuring that they cannot simply relocate to more permissive jurisdictions. At the same time, youth diversion programs should be expanded, offering first-time cyber offenders rehabilitation options rather than immediate prosecution, preventing them from becoming repeat offenders.

Operation PowerOFF’s success is a win for international cybercrime enforcement, demonstrating that proactive, intelligence-driven disruption can dismantle even the most entrenched criminal networks.

But it is also a warning: without continuous vigilance, cybercriminals will regroup, rebrand, and relaunch. Australia must act now to strengthen its cyber enforcement, combining international cooperation, legal reform and preventative education to ensure that cybercriminals see Australia as a hostile environment for their activities, not a soft target.

To regulate cyber behaviour, listen to Indo-Pacific voices

The international community must broaden its understanding of responsible cyber behaviour by incorporating diverse perspectives from the Indo-Pacific, a region critical to the future of global cyber governance.

As the mandate of the United Nations Open-Ended Working Group on the security and use of information and communications technologies ends in July 2025, the world must reflect on what it means to be a responsible state actor in cyberspace. Over two decades, the UN has developed a framework of responsible state behaviour in cyberspace, which includes the acceptance that international law applies to state conduct in cyberspace and a commitment to observe a set of norms.

The framework, designed to address the weaponisation of cyberspace, narrowly focuses on high-stakes security concerns. While its emphasis on international peace and security is essential, this high threshold often sidelines domestic responsibilities and the challenges that developing and emerging economies face.

By amplifying the voices of mature cyber nations, it overlooks regions where the concept of responsible cyber behaviour is less expressed but no less important. As cyberspace is a cornerstone of economic, social, political, and military activities globally, we must expand the framework to address both domestic and international dimensions of cyber norms.

A report issued today and co-edited by ASPI and the Royal United Services Institute highlights this gap by examining how seven Indo-Pacific countries—Cambodia, Fiji, India, Indonesia, Japan, Pakistan and Taiwan—perceive responsibility in cyberspace. We investigate how governments and societies interpret this responsibility, going beyond their expectations of other states to see how they demonstrate their responsibility internally.

Our findings reveal a lack of common understanding and implementation of the UN’s cyber norms across the region. While commitments to responsible state behaviour are formally acknowledged at the UN level, domestic policies and regulations are inconsistent. For many Indo-Pacific countries, responsible cyber behaviour is mainly understood in terms of ensuring state sovereignty and territorial non-interference through cyber means. Governments are also guided mainly by national security concerns, and the relevant information is often shrouded in secrecy, complicating oversight and accountability.

Economics also shapes regional cyber policies. For most Indo-Pacific countries, socio-economic development, digitalisation and connectivity are top priorities. Given their limited sovereign cyber and digital capabilities, they view responsible behaviour as the ability to freely choose strategic partners and attract investments, technical support and capacity-building initiatives. This pragmatic approach underscores the need to reconcile international commitments with domestic priorities such as combating cybercrime, achieving data sovereignty, and ensuring affordable and reliable connectivity.

However, pursuit of these priorities often results in over-regulation and reliance on surveillance technologies and restrictive policies to counter cyber threats. Many Indo-Pacific countries struggle to balance protection of critical infrastructure and the information environment with promotion of open and inclusive digital spaces. Our report highlights the need for clear guidelines on the purchase, sale and use of dual-use technologies. While some countries adhere to international frameworks, others lack robust safeguards, exposing cyber vulnerabilities.

The Indo-Pacific’s diverse perspectives on responsible cyber behaviour emphasise the importance of domestic expertise. Governments must nurture talent within both public and private sectors and ensure access to international platforms that foster collaboration and knowledge-sharing. Otherwise, the region risks being left behind in shaping global cyber governance. Furthermore, many Indo-Pacific stakeholders argue that the UN framework’s emphasis on international norms must be complemented by actionable standards addressing states’ internal responsibilities, such as securing their networks and fostering resilient digital ecosystems.

International discussions on cybersecurity are increasingly polarised, with major powers vying for influence over Indo-Pacific countries to shape regional norms. In this context, we must ensure that the perspectives of emerging economies are not overshadowed by the interests of major powers. Ignoring these viewpoints is not only a poor diplomatic strategy—risking the alienation of regional actors and complicating negotiations—but also undermines international efforts to address shared challenges. Incorporating these voices into the framework would create a more inclusive and representative system that fosters equity, trust and long-term cooperation, ultimately strengthening global cybersecurity.

To achieve this, international and regional institutions must prioritise capacity-building and technical assistance tailored to the needs of Indo-Pacific countries. This includes creating platforms that allow these states to share experiences and shape global discourse on cyber norms. One example of such a platform is the Association of Southeast Asian Nations, whose member states have developed a norms checklist. It also requires the international community to recognise the interconnectedness of domestic and international cyber responsibilities. By grounding discussions in the specific contexts and priorities of the Indo-Pacific, the framework can evolve into a truly global standard that bridges the gap between developed and developing nations.

As the deadline for the UN Open-Ended Working Group’s mandate approaches, we must reshape the framework of responsible state behaviour in cyberspace. The Indo-Pacific’s challenges and perspectives can help strengthen the framework’s relevance and effectiveness. By incorporating diverse regional viewpoints, the international community can build a more equitable and resilient cyberspace that serves the interests of all states, not just the most powerful. This is not merely a matter of inclusion; it is a matter of global cyber stability and security.

Using open-source AI, sophisticated cyber ops will proliferate

Open-source AI models are on track to disrupt the cyber security paradigm. With the proliferation of such models—those whose parameters are freely accessible—sophisticated cyber operations will become available to a broader pool of hostile actors.

AI insiders and Australian policymakers have a starkly different sense of urgency around advancing AI capabilities. AI leaders like Dario Amodei, chief executive of Anthropic, and Sam Altman, chief executive of OpenAI, forecast that AI systems that surpass Nobel laureate-level expertise across multiple domains could emerge as early as 2026.

On the other hand, Australia’s Cyber Security Strategy, intended to guide us through to 2030, mentions AI only briefly, says innovation is ‘near impossible to predict’, and focuses on economic benefits over security risks.

Experts are alarmed because AI capability has been subject to scaling laws—the observation that capability climbs steadily and predictably as compute, data and model size grow, much as Moore’s Law described for semiconductors. Billions of dollars are pouring into leading labs. More talented engineers are writing ever-better code. Larger data centres are running more and faster chips to train new models with larger datasets.

The emergence of reasoning models, such as OpenAI’s o1, shows that giving a model time to think at inference, perhaps for a minute or two, improves performance on complex tasks, and that giving models still more time to think improves performance further. Even if the chief executives’ timelines are optimistic, capability growth will likely be dramatic, and expecting transformative AI this decade is reasonable.

The effect of thinking time on performance, as assessed in three benchmarks. The o1 systems are built on the same model as GPT-4o but benefit from thinking time. Source: Zijian Yang/Medium.

Detractors of AI capabilities downplay concern, arguing, for example, that high-quality data may run out before we reach risky capabilities or that developers will prevent powerful models falling into the wrong hands. Yet these arguments don’t stand up to scrutiny. Data bottlenecks are a real problem, but the best estimates place them relatively far in the future. The availability of open-source models, the weak cyber security of labs and the ease of jailbreaks (bypassing a model’s safety restrictions) make it almost inevitable that powerful models will proliferate.

Some also argue we shouldn’t be concerned because powerful AI will help cyber-defenders just as much as attackers. But defenders will benefit only if they appreciate the magnitude of the problem and act accordingly. If we want that to happen, contrary to the Cyber Security Strategy, we must make reasonable predictions about AI capabilities and move urgently to keep ahead of the risks.

In the cyber security context, near-future AI models will be able to continuously probe systems for vulnerabilities, generate and test exploit code, adapt attacks based on defensive responses and automate social engineering at scale. That is, AI models will soon be able to do automatically and at scale many of the tasks currently performed by the top talent that security agencies are keen to recruit.

Previously, sophisticated cyber weapons, such as Stuxnet, were developed by large teams of specialists working across multiple agencies over months or years. Attacks required detailed knowledge of complex systems and judgement about human factors. With a powerful open-source model, a bad actor could spin up thousands of AI instances with PhD-equivalent capabilities across multiple domains, working continuously at machine speed. Operations of Stuxnet-level sophistication could be developed and deployed in days.

Today’s cyber strategic balance—based on limited availability of skilled human labour—would evaporate.

The good news is that the open-source AI models that partially drive these risks also create opportunities. Specifically, they give security researchers and Australia’s growing AI safety community access to tools that would otherwise be locked away in leading labs. The ability to fine-tune open-source models fosters innovation but also empowers bad actors.

The open-source ecosystem is just months behind the commercial frontier. Meta’s release of the open-source Llama 3.1 405B in July 2024 demonstrated capabilities matching GPT-4. Chinese startup DeepSeek released R1-Lite-Preview in late November 2024, two months after OpenAI’s release of o1-preview, and will open-source it shortly.

Assuming we can do nothing to stop the proliferation of highly capable models, the best path forward is to use them.

Australia’s growing AI safety community is a powerful, untapped resource. Both the AI safety and national security communities are trying to answer the same questions: how do you reliably direct AI capabilities, when you don’t understand how the systems work and you are unable to verify claims about how they were produced? These communities could cooperate in developing automated tools that serve both security and safety research, with goals such as testing models, generating adversarial examples and monitoring for signs of compromise.

Australia should take two immediate steps: tap into Australia’s AI safety community and establish an AI safety institute.

First, the national security community should reach out to Australia’s top AI safety technical talent in academia and civil society organisations, such as the Gradient Institute and Timaeus, as well as experts in open-source models such as Answer.AI and Harmony Intelligence. Working together, they can develop a work program that builds on the best open-source models to understand frontier AI capabilities, assess their risks and use those models to our national advantage.

Second, Australia needs to establish an AI safety institute as a mechanism for government, industry and academic collaboration. An open-source framing could give Australia a unique value proposition that builds domestic capability and gives us something valuable to offer our allies.

Beijing’s online influence operations along the India–China border

The Chinese government is likely conducting influence operations on social media to covertly dispute territorial claims and denigrate authorities in India’s northeastern states.

As part of a joint investigation with Taiwanese think tank Doublethink Lab for its 2024 Foreign Influence on India’s Election Observation Project, we identified coordinated social media campaigns seeking to amplify social tensions in Manipur and criticise the Indian government, the Bharatiya Janata Party (BJP) and its policies. This occurred in the lead-up to and during the Indian general elections, when social divisions were especially heightened.

Despite Beijing publicly seeking stability with India, the Chinese Communist Party will likely use other covert methods, mainly targeting Chinese-speaking diasporas, to destabilise the India–China border and pursue its territorial ambitions.

The CCP has a history of trying to exploit ethnic and political conflicts in India’s northeastern states, such as in Manipur, where Beijing has allegedly fostered instability using Myanmar-based and local terror groups. On 3 May 2023, Manipur’s latest ethnic conflict erupted between the Meitei and Kuki indigenous ethnic groups over a disputed affirmative action measure related to benefits for the Meitei people. According to reports, the violence resulted in 221 deaths and displaced approximately 60,000 individuals.

Our findings show that most of these narratives first appeared on Chinese social media platforms and then entered the Indian social media landscape through human or AI-enabled translation, reaching their target audience, the Meitei people. Anthropologists say the Meitei people may be ethnically related to Tibetans, whose land is now part of China, but the Meitei do not speak Chinese.

Violence in Manipur became a hot topic on Chinese social media platforms and websites in early 2024, amplified by pro-CCP writers and likely inauthentic social media accounts seeking to push CCP narratives in the region. These accounts spread misleading narratives, such as ‘There is a little China in India that holds the six-star red flag, does not speak Hindi and refuses to marry Indians’ (印度有个“小中国”,举六星红旗,不说印语,拒绝和印度人通婚). Others are ‘conflict in India’s Manipur is a result of Indian Prime Minister Narendra Modi’s crackdown on religious and ethnic minorities’, ‘India is running concentration camps for minorities’, and ‘Manipur has never been a part of India and the demand for independence in the state is justified.’

We also identified coordinated inauthentic accounts, likely originating from China, disseminating the ‘Little China in India’ narrative on Western social media platforms such as X and YouTube. For example, one Chinese-language account named jostom, created in November 2023, posted the phrase ‘Little China’ (小中国) and shared a YouTube video with the nonsensical title ‘Manipur India known as “small China” once the impact of independence on India?’

The video (which had only around 2,500 views at the time of writing) was uploaded on 18 March 2024 by the YouTube account Earth story, which claims to be a Chinese-language ‘popular science number [sic] on international relations that everyone can understand’. It is unclear whether the videos uploaded by the account are original content or reuploads from an account of the same name on Douyin, a short-form video app popular in China. However, some video titles are also in English, indicating that the channel’s target audience goes beyond Chinese-speaking diasporas. In addition, auto-generated captions in Hindi or English always accompany the narrator’s Mandarin.

The jostom X account was one of many likely inauthentic accounts spreading the Little China narrative. The latest post by jostom was on 20 April 2024. The account has only 22 followers and follows 31 accounts, and mostly shares content with Chinese landscape pictures, a common feature of Chinese propaganda. Out of 71 posts on the account, the Little China video is the only political content.

Among its 22 followers, at least six accounts appear to be inauthentic: they were created around the same date, and their profiles and posts share many similarities. For example, they are all following a similar number of accounts, and the only posts these six accounts made were on 22 or 23 July 2023.

These accounts display similar characteristics to a sophisticated subset of Spamouflage disinformation networks, which ASPI identified last year as having interfered in an Australian referendum. This network goes beyond spreading typically pro-China propaganda and is known for amplifying domestic issues in democracies. Like the accounts that targeted Australia, accounts following jostom use images of Western women to develop their personas. Their first posts are aphorisms or quotes, many of which are incomplete.
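Signals like those described above (clustered creation dates, near-identical following counts, posts confined to the same one or two days) lend themselves to simple automated screening. As a toy sketch (every account name, field and threshold here is invented for illustration, not drawn from the investigation), candidate coordinated accounts can be surfaced by scoring pairwise similarity:

```python
from itertools import combinations

def coordination_score(a, b):
    """Crude pairwise similarity between two account records: a shared
    creation window, overlapping posting days and close following
    counts each add one point to the score."""
    score = 0
    if abs(a["created_day"] - b["created_day"]) <= 3:
        score += 1
    if set(a["post_days"]) & set(b["post_days"]):
        score += 1
    if abs(a["following"] - b["following"]) <= 10:
        score += 1
    return score

def flag_clusters(accounts, min_score=2):
    """Return pairs of account names whose similarity meets min_score."""
    return [
        (a["name"], b["name"])
        for a, b in combinations(accounts, 2)
        if coordination_score(a, b) >= min_score
    ]

# Synthetic records: two look-alike accounts and one organic account.
# Days are counted from an arbitrary epoch.
accounts = [
    {"name": "acct_a", "created_day": 100, "post_days": [203, 204], "following": 31},
    {"name": "acct_b", "created_day": 101, "post_days": [203], "following": 35},
    {"name": "acct_c", "created_day": 250, "post_days": [300], "following": 500},
]
print(flag_clusters(accounts))  # → [('acct_a', 'acct_b')]
```

Production influence-operation detection uses far richer features (content similarity, posting cadence, shared media and network structure), but the same principle applies: inauthentic networks betray themselves through improbable coincidences across many accounts.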

The small sample of accounts discussed above is likely part of a broader network of inauthentic accounts originating from China that has increasingly sought to interfere in India’s domestic affairs. Since 2023, social media conglomerate Meta has publicly disclosed at least two coordinated inauthentic networks targeting India and originating from China in its quarterly Adversarial Threat Reports. The first disclosure in 2023 revealed that fake accounts originating from China were criticising the Indian government and military by focusing on issues on the India-China border. The second campaign, disclosed in early 2024, was linked to the original 2023 campaign but instead targeted the global Sikh community, creating a fictitious activist movement called Operation K that called for pro-Sikh protests.

On X, many of the accounts identified by Meta in its Adversarial Threat Reports continued to operate and disseminate disinformation in the lead-up to the 2024 Indian elections. Common topics and narratives spread by these accounts included accusing Indian Prime Minister Narendra Modi of not being concerned about the welfare of people in Manipur, amplifying protests in nearby Nagaland and fomenting dissent against the Indian government in another northeastern state, Arunachal Pradesh (see screenshot below). In some cases, accounts called for Indians to boycott the BJP over its activities in the Manipur region.

ASPI has identified some of the same accounts used for interfering in the 2024 Taiwanese elections.

The accounts appear to be copying tweets from other prominent Indian commentators rather than creating original posts. Sometimes this resulted in errors, such as the Nawal Sharma account appearing to have copied a tweet from India Daily Lives but failing to correctly copy the Hindi text while posting the same hashtags and link (see screenshot below).

The CCP’s influence operations targeting India in 2024 were mostly ineffective. However, they are part of Beijing’s broader strategy to destabilise neighbouring countries. It has used similar methods to influence electoral outcomes and political narratives in Canada, Taiwan and Britain, where it has employed a combination of disinformation and covert support to influence public opinion and political results. These actions often reveal Beijing’s true intentions, such as its territorial ambitions in India’s northeastern states, and contradict its charm offensive with neighbouring states.

As the CCP resorts to more covert methods to pursue its interests, democratic countries should publicly expose these influence operations and share information on observed tactics, techniques and procedures with allies and partners. Indo-Pacific countries should consider financial sanctions against private companies or state-affiliated media conducting intelligence activities and disinformation campaigns, similar to sanctions applied to Russian disinformation actors. While it may be difficult to deter the CCP through these policy actions, it will at least impose costs on Beijing and make it more difficult to conduct these operations with impunity.

As China tries harder to collect data, we must try harder to protect data

China is stepping up efforts to force foreign companies to hand over valuable data while strengthening its own defences. Some of the information it’s looking for would give it greater opportunities for espionage or political interference in other countries.

Australia and other countries need to follow the lead of the United States, which on 21 October proposed rules that would regulate and even prohibit transfers of data containing the personal or medical information of its citizens to foreign entities.

Recent developments from inside China support the idea that the country is refocusing on bulk data, both to aid its intelligence operations and to protect itself from potential adversaries.

China has reformed its domestic legal environment to both protect itself and collect information with intelligence value. A new Data Security Law allows Chinese officials to broadly define ‘core state’ data and ‘important’ data while also banning any company operating inside China from providing data stored in China to overseas agencies without government approval. Firms over a certain size must also have a cell of the Chinese Communist Party to more closely integrate ‘Party leadership into all aspects of corporate governance’, including cybersecurity and data management.

The Communist Party’s Central Committee and the State Council have decreed that the National Data Administration will manage every source of public data by 2030.

The Ministry of State Security has prohibited Western companies from receiving geospatial information from Chinese companies and required companies to take down idle devices to reduce the threat of Western espionage. And Chinese nationals will shortly be unable to access the internet without verifying their identity by facial recognition and their national ID number.

In early October, a report by the Irish Council of Civil Liberties (ICCL) exposed the world of real-time bidding data, where the ads displayed when you go online are the result of an automated bidding process based on your browsing history and precise location. The ICCL report raised concerns that these kinds of analytics could identify people’s political leanings, sexual preferences, mental health state and even the drinks they like. That data has then been sold to companies operating in China.
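The privacy problem in real-time bidding is structural: the bid request itself carries the user profile to every participant. The following Python sketch illustrates the principle only; the field names and second-price clearing rule are illustrative, not any real ad exchange's API.

```python
# Illustrative sketch of a real-time-bidding auction. The key point: every
# bidder receives the user's profile in the bid request, whether or not it
# wins the auction, so the data is broadcast to all participants.
from dataclasses import dataclass


@dataclass
class BidRequest:
    user_id: str
    location: tuple       # precise coordinates, sent to all bidders
    browsing_topics: list # inferred interests, sent to all bidders


def run_auction(request: BidRequest, bidders: dict) -> tuple:
    """Each bidder prices the impression from the profile; the highest bid
    wins and pays the second-highest price (a common RTB clearing rule)."""
    bids = {name: price(request) for name, price in bidders.items()}
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price
```

Note that the losing bidder still received the full `BidRequest`, which is exactly the leakage the ICCL report describes.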

Beijing’s recent activities in the digital world remind us that even the most mundane and trivial data about a person can have intelligence value—for example, in recruiting agents, guessing passwords and tracking the movements of targets. China’s expansive spying regime, which mobilises countless private entities and citizens, threatens to overwhelm Western intelligence services. That spying regime now has access to more information to inform decisions.

China’s latest moves draw our attention to the particular vulnerability of Australia in the region, especially among the AUKUS triad. Australian privacy law does not carry the same protections as British and US laws. Australia has neither a constitutional nor a statutory right to privacy, and its key piece of legislative protection has provisions dating back to the 1980s. Despite receiving the results of a comprehensive review of the Privacy Act more than 18 months ago, the government has been sluggish in adopting reforms that might help protect us from China’s data-harvesting practices.

The motivation for China to collect personal data in Australia has risen since we entered the AUKUS agreement in 2021. But the government isn’t showing enough interest in securing it against foreign manipulation and theft. Consider, too, that other intelligence players, such as India and Russia, are just as likely to join in.

Australia should take a leaf out of the US playbook on countering Chinese interference in its sovereign data. Since February 2024, the United States has been keen to regulate the sharing of information with foreign entities, starting with an executive order signed by President Joe Biden. The rules that Biden proposed on 21 October would ban data brokerage with foreign countries and only allow certain data to be shared with entities that adopt strict data security practices.

Beyond that, there is a growing need for industry and especially academia to adopt stronger security postures. Posting travel plans or political views on Facebook or Instagram might seem innocuous, but if it’s done by someone in a position of power or with access to valuable information, the individual’s vulnerability to espionage dramatically increases. As a society, we all need to take a little more notice and a little more care with what we are sharing online.

Getting Australia’s digital Trust Exchange right

To realise the potential of the Digital ID Act and the recently unveiled Trust Exchange (TEx), the government must move past political soundbites and develop a comprehensive identity and credentials strategy that includes building technical architecture and conducting an end-to-end security assessment.

The government is yet to publish the rules and standards in relation to the Digital ID Act, which was finally passed in May. We’re also still waiting to hear details of TEx, a world-leading digital identity verification system, which Government Services Minister Bill Shorten unveiled in August.

The Digital ID Act was the government’s response to the 2022 data breaches at Optus and Medibank, which prompted a fundamental reassessment of what sensitive data should be collected and how long it should be stored. Businesses should still conduct checks on customers—for example, to prevent money-laundering or alcohol sales to minors—but a better solution is needed than simply storing digitised copies of paper identification documents.

A digital ID scheme had been proposed for many years in different guises, but the 2022 breaches finally led to new draft legislation in September 2023, kicking off the process that led to the Digital ID Act.

This Act is a major step in the right direction. It provides a legislated basis for a federated trust system and avoids creating a unique identifier for every citizen or a centralised ‘honeypot’ of data about people and their transactions. The accreditation rules include strong privacy and security safeguards to build trust in the system and put individuals in control of what personal data is disclosed to whom and when. However, as I outline in a recent ASPI report, there are several policy issues which, if left unresolved, could jeopardise successful deployment and adoption of the digital ID system.

Based on the limited details released so far, TEx could be on the verge of repeating many of the same missteps.

TEx appears to be a system that securely shares specific identity attributes for in-person interactions through a digital identity app on a handheld device. One example is proof-of-age checks at licensed premises: in lieu of physical documentation that shows the customer’s date of birth, the app simply verifies whether they are over or under 18. This would prevent data breaches such as the Clubs NSW incident, in which hackers stole data from patrons’ drivers’ licences that had been routinely scanned and stored.
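The privacy principle at work here is selective disclosure: the verifier learns a single signed attribute, never the underlying document. The Python sketch below is purely illustrative of that idea; the key handling, field names and HMAC scheme are hypothetical and bear no relation to TEx's actual design, which has not been published.

```python
# Minimal sketch of selective disclosure for a proof-of-age check, assuming a
# hypothetical issuer that signs a single boolean attribute. Real digital ID
# systems use accredited exchanges and proper credential formats, not this.
import hashlib
import hmac
import json
from datetime import date

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical issuer signing key


def issue_age_attestation(date_of_birth: date, today: date) -> dict:
    """Issuer derives one boolean attribute; the DOB itself is never shared."""
    years = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    claim = {"over_18": years >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}


def verify_age_attestation(attestation: dict) -> bool:
    """Venue checks the signature and reads only the boolean attribute."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["sig"]) and attestation["claim"]["over_18"]
```

A venue running this check holds nothing worth stealing afterwards, which is the point: there is no stored scan of a licence for hackers to exfiltrate.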

But the sparse details about TEx are contradictory and ambiguous, causing some to be sceptical of the scheme. Shorten has suggested that it will ‘build upon digital ID infrastructure’, using the existing identity exchange operated by Services Australia and the myGov app, supported by some sort of record of each identity verification transaction. But this contradicts accreditation rules for the identity exchange, which specifically prohibit it from keeping logs of user activity.

This sort of ambiguity leads some to assume the worst, such as Electronic Frontiers Australia, which claims the system will create the ‘mother of all honeypots’ and enable centralised surveillance. It doesn’t help that a recent Ombudsman report suggested that the myGov app currently falls well short of expectations on security and fraud prevention.

The government is also setting unrealistic expectations about the benefits of TEx, with Shorten suggesting that it will achieve ‘some of the best aspects of the GDPR’. The introduction of GDPR—the European Union’s data privacy and security law—had a dramatic effect on companies’ security and privacy practices because it was backed by massive penalties for non-compliance and encompassed all aspects of data collection, storage and usage. In contrast, Australia’s TEx, a voluntary system that might allow some organisations to opt out of collecting some personal data, is never going to have the same level of impact.

The incentives for companies to opt in are unclear. Big names such as CBA and Seek have apparently offered ‘in-principle’ support, but this may change when they hear more details, particularly about costs.

It is also unclear how these different IT systems, owned and operated by different departments, will fit together to provide end-to-end service, security and privacy. TEx will be built by Services Australia, ‘on top of’ Digital ID infrastructure set up by the Department of Finance. Meanwhile the Attorney-General’s Department is developing a mobile app that alerts users whenever their identity credentials are used.

To execute these systems successfully, the government must develop an overarching identity and credentials strategy across the Commonwealth and the states and territories. This should include technical architecture, based on sound system engineering principles, that outlines how the different systems will work together. There should also be an end-to-end security assessment to ensure data confidentiality and resilience in the system. To achieve this, the government must break down departmental silos and build public support through transparent information and debate.

These new digital ID systems have the potential to increase privacy standards, reduce data breaches and improve the public’s experience of government service delivery—but only if they are properly executed. This opportunity is too big to squander.

Sovereign data: Australia’s AI shield against disinformation

Any attempt to regulate artificial intelligence is likely to be ineffective without first ensuring the availability of trusted large-scale sovereign data sets.

For the Australian government, AI presents transformative potential, promising to revolutionise the way in which government departments and agencies operate. The allure of AI-driven efficiency, precision and insight is irresistible. Yet, amid the chorus of AI evangelists, a discordant note rings true: establishment of robust AI policy guardrails now would be premature and potentially counterproductive without first addressing the fundamental issue of sovereign trusted data.

This contentious stance is rooted in the understanding that trustworthy AI hinges on the availability of trusted large-scale data sets. Without this bedrock, current attempts to regulate AI could end up being built on quicksand and become ineffective in mitigating the menace of misinformation and disinformation. Proposed Australian regulations are focusing on privacy requirements, labelling of AI-generated work, the legal consequences of AI choices and understanding how the software makes decisions.

AI, in its essence, is a data-driven phenomenon. The algorithms that power AI systems are not imbued with inherent intelligence; rather, they learn and evolve through ingestion and analysis of vast quantities of information. The quality, accuracy and representativeness of this data directly influence the performance and trustworthiness of the resulting AI models. In the absence of robust, verifiable data, AI can purvey misinformation, amplifying biases, perpetuating stereotypes and undermining public trust. Such risks are particularly acute for government agencies, for which stakes are high and the impact of erroneous decisions can be far-reaching.

Regulations that are not grounded in the realities of data quality and provenance risk being toothless and easily circumvented by those seeking to exploit AI for nefarious purposes.

Australian sovereign data is data that is owned, controlled and governed within Australia’s borders. It is subject to Australian laws and regulations, ensuring its collection, storage and use adhere to the highest standards of privacy, security and ethics. This control is crucial in mitigating the risks of foreign interference, data manipulation and the spread of misinformation. By maintaining sovereignty over the source data, the Australian government can ensure that the AI systems it deploys are built on a foundation of trust and transparency.

Sovereign data empowers Australian government agencies to build AI models that are tailored to the unique needs and context of the nation. By training AI systems on data that accurately reflects the diversity and complexity of Australian society, we can ensure that these models are not only effective but also equitable and just. Furthermore, sovereign data fosters transparency and accountability, allowing for independent scrutiny of the data and algorithms that underpin AI decision making. This transparency is essential for building public trust in AI and ensuring its responsible use in government.

Establishing trusted large-scale sovereign data sets is undeniably complex. It requires overcoming four challenges:

Data Collection and Integration. Gathering comprehensive, high-quality data from disparate sources across government agencies is a logistical and technical challenge. Data must be standardised, cleaned and de-identified to ensure its usability and protect privacy.

Data Governance. Robust data governance frameworks must be established to ensure data quality, security and ethical use. This includes defining clear roles and responsibilities for data management, implementing access controls and establishing mechanisms for addressing data breaches and misuse.

Expertise and Resources. Building and maintaining sovereign data capabilities will require significant investment in infrastructure, technology and skilled people. Data scientists, analysts and governance experts are essential for ensuring effective management and utilisation of sovereign data.

Cultural Shift. A cultural shift towards data-sharing and collaboration is needed across government agencies. Breaking down silos and fostering a culture of open data can accelerate the creation of comprehensive, multi-dimensional data sets that reflect the complexity of real-world challenges.
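The de-identification step in the first of these challenges can be sketched in a few lines. The field names, salt handling and pseudonym scheme below are illustrative only; a real government data custodian would layer on formal privacy safeguards such as k-anonymity or differential privacy.

```python
# Minimal sketch, assuming records are plain dicts: de-identify a record by
# dropping direct identifiers and replacing the ID with a salted one-way
# pseudonym, so data sets can still be linked without exposing identity.
import hashlib

SALT = b"per-project-secret-salt"  # in practice, managed by the data custodian
DIRECT_IDENTIFIERS = {"name", "email", "phone"}


def de_identify(record: dict, id_field: str = "citizen_id") -> dict:
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    pseudonym = hashlib.sha256(SALT + str(record[id_field]).encode()).hexdigest()[:16]
    out[id_field] = pseudonym  # stable pseudonym allows linkage across data sets
    return out
```

Because the pseudonym is deterministic under a given salt, two agencies using the same salted scheme can join their de-identified data sets without either holding the raw identifier.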

Despite these complexities, several countries have made significant strides in establishing sovereign data capabilities. The European Union’s General Data Protection Regulation (GDPR) is a prime example, setting a global standard for data privacy and control. India’s push for data localisation and its efforts to build a national digital infrastructure also highlight the growing recognition of the strategic importance of sovereign data.

The concept of sovereign data linkages with trusted nations also presents an opportunity for Australia. By establishing secure and mutually beneficial data-sharing agreements with like-minded and aligned countries, Australia could expand its access to high-quality data while maintaining control and ensuring ethical use. Such linkages would require careful negotiation and robust governance frameworks to ensure data privacy, security and alignment with shared values.

The Chinese government’s unfettered access to vast amounts of citizen data, coupled with its willingness to deploy AI for surveillance and social control, raises serious concerns about the future of AI ethics and governance. Australian collaboration with like-minded nations can counterbalance China’s AI ambitions.

Responsible and effective deployment of AI in Australian government is not merely a technical challenge but a strategic imperative. Sophisticated sovereign data sets can provide the bedrock for trustworthy AI, mitigating the risks of misinformation and disinformation while unlocking the full potential of AI for public good.

‘Weaponisation’ of religious sentiment in Indonesia’s cyberspace

The announcement that prominent Indonesia Ulema Council chairman and cleric Ma’ruf Amin will be President Joko ‘Jokowi’ Widodo’s vice-presidential running mate for the 2019 election has stimulated fresh debate about the ‘Islamisation’ of Indonesian politics.

Amin is the head of Indonesia’s largest Muslim organisation, the 45-million-member Nahdlatul Ulama (NU). Jokowi’s preferred pick had been former Constitutional Court Chief Justice Mahfud MD, but he and his Indonesian Democratic Party of Struggle bowed to political pressure to choose a running mate with high-level Islamic credentials. The NU-linked National Awakening Party and United Development Party had threatened to leave Jokowi’s governing coalition if Amin were not chosen.

Islamic conservatism has been ascendant in Indonesia ever since Saudi-sponsored theological influence began in the 1980s, and it became even more pronounced after the fall of Indonesia’s second president, Suharto, and his authoritarian ‘New Order’ regime.

Indonesia’s post-Suharto reformasi saw the opening up of public discourse, and subsequent rise of previously suppressed conservative Islamic rhetoric and its ‘hardliner’ proponents. These hardliner Islamists emerged from decades of marginalisation and repression, under the regimes of both Suharto and his predecessor Sukarno, with little appetite for pluralism and tolerance.

The proliferation of social media in Indonesia has allowed greater unrestrained expression of strong religious views. This has enabled groups such as the Muslim Cyber Army, an organisation described as structureless and similar to the ‘hacktivist’ group Anonymous, to reach a larger audience.

One way the Muslim Cyber Army targets liberal opponents is through ‘doxing’: the theft and publication of personal details online, which are then used by groups such as the far-right Sunni fundamentalist Front Pembela Islam (Islamic Defenders Front) to hunt down and physically attack their targets.

This ‘weaponisation’ of conservative Islamic sentiment and religious intolerance has involved doctored online content and disinformation, deliberately spread through social media. As more Indonesians have gained access to the internet, mainly through low-cost smartphone technology, Indonesia has developed a disinformation problem.

The most prominent example of this phenomenon was the 2017 jailing of former Jakarta governor Basuki Tjahaja Purnama (Ahok) for two years for alleged blasphemy. In a September 2016 speech, Ahok asserted that politicians shouldn’t mislead voters by misinterpreting the Koran in advising Muslims against voting for non-Muslim political candidates. ‘Ladies and gentlemen … you’ve been lied to by those using [the Koran’s] Surah al-Maidah verse 51’, he said. The speech was then edited to seem as though Ahok was saying that it was the Koran itself that was misleading voters. The resulting video was used by anti-Ahok forces, including Ma’ruf Amin, to mobilise mass demonstrations that forced the government to charge Ahok with blasphemy.

The Indonesian government needs to reduce the effect of disinformation, especially ahead of the 2019 general election. Indonesia’s outdated legislation allows cybercriminals and botnets to thrive. The government also needs to do more to stop Indonesia from being used as a haven for these activities, which enables the spread of the doctored content that is in turn used to ‘weaponise’ Islamic sentiment.

While it’s important that Indonesia create appropriate legislative reform that helps reduce cybercrime and botnets, it should not endanger free speech. The recent draft revision to Article 309 of the Criminal Code proposes six years’ imprisonment for ‘any person who broadcasts fake news or hoaxes resulting in a riot or disturbance’.

While the code needs to be updated, the proposed revision is worrying because it doesn’t define or explain what constitutes a ‘disturbance’ or what is considered to be ‘fake’. That’s a problem because it opens the system up to abuse: anything not approved could be labelled as ‘disturbing’. As it stands, the clause could potentially be used to prosecute journalists, threatening press freedom.

The long-awaited creation of the Badan Siber dan Sandi Negara (BSSN), Indonesia’s new national cyber agency—after years of setbacks and delays—shows that Indonesia is becoming more serious about cybersecurity. In its role as manager of Indonesia’s cyberspace as well as content moderator, the BSSN will play a pivotal role in the run-up to the 2019 general election.

The BSSN will have the difficult task of trying to protect Indonesian voters from disinformation without censoring political expression. One way it could do that is to allow individuals and groups open channels for expressing legitimate political opinion, without the threat of being criminalised as blasphemers. It’s vital that the threat represented by doctored content and disinformation doesn’t supersede the importance of BSSN remaining politically impartial.

There are plenty of opportunities for Australia and Indonesia to increase their engagement on cyber issues, which is consistent with the Australian government’s international cyber engagement strategy. Dialogues and bilateral forums should certainly continue and be increased where appropriate, not just with Indonesia but also with other more open societies in the region like Japan and South Korea.

The recently announced comprehensive strategic partnership between Australia and the Republic of Indonesia and subsequent memorandum of understanding on cyber cooperation are promising engagement strategies. The MoU is a two-year non-legally-binding agreement to share information on cyber strategies and policies, build cyber capacity through training and education programs, promote business links to enable growth in the digital economy, and tackle cybercrime by sharing training opportunities to strengthen forensic and investigation capabilities.

Australia should use both the comprehensive strategic partnership and the MoU as platforms to encourage the Indonesian government to either develop clear definitions in the proposed Criminal Code revision or scrap it altogether.

This would help promote ongoing journalistic freedom in Indonesia as well as freedom of expression more generally. The MoU also stipulates closer cooperation with the BSSN, and Australia should use that opportunity to encourage the BSSN not to fall into the trap of state censorship, damaging Indonesia’s youthful democracy.

The campaign against Huawei

The case against Huawei’s participation in bidding for the 5G network in Australia appears to be based on incomplete information, at least as far as the public record allows us to judge.

For a full picture, there are several fields of knowledge we need to understand and reconcile: espionage, computer science, information and communications technology, cyber security, business studies, foreign policy, China studies, political science, international political economy, and globalisation. But there are also political perspectives and biases. The latter issue was rather brilliantly captured in a recent Norwegian study.

This study saw the Huawei challenge, the Snowden revelations about NSA, and the Volkswagen emissions-monitoring scandal as part of a common problem: assurance of supply chain components in the information age. The study concluded that ‘the problem [of supply chain assurance] should therefore receive considerably more attention from the research community as well as from decision makers than is currently the case’.

The consensus of global scholarly opinion on these issues suggests that those in Australia advocating for a ban on Huawei in the 5G network—mimicking the opinion of US intelligence chiefs expressed in February 2018—have not reviewed all of the available information and perspectives. Public policy analysts in Australia should be wary of their own government when it so closely mirrors senior officials in the Trump administration on any issue of intelligence policy, for two reasons.

The first, and most worrying, is the poor record of the US intelligence community on big issues of analysis if they’re highly politicised. Remember Iraqi WMD as one in a 70-year saga of great US intelligence failures. The second is that internal political disputation within the Trump administration and the US Congress on relations with China is at fever pitch.

So what does the study of espionage tell us about the campaign against Huawei?

There’s no doubt that countries like China, the United States, Russia, Israel and France find it easier to implant back doors in commercially available equipment manufactured by companies domiciled in their territories. For this and a variety of other reasons, wise governments, corporations and citizens should assume that all equipment in their supply chains, regardless of the country of origin, can be compromised from a cyber security point of view. The Norwegian study found that such back doors are often very difficult to detect.

We can add to this the overwhelming evidence that vulnerabilities in Microsoft Windows have been responsible for a very large share of security breaches globally, including in Australia. As argued in a study I co-authored with German scholar Sandro Gaycken for the New York-based EastWest Institute in 2014, ‘highly secure computing’ (that is, non-vulnerable systems) has to be the approach.

The national security damage caused by vulnerabilities in Microsoft Windows puts into the shade the unsubstantiated claims (unsubstantiated in the public domain at least) that Huawei equipment has directly produced security breaches. Moreover, NSA cyber weapons based on the vulnerabilities in Windows, such as EternalBlue, have caused more documented security breaches globally, and in Australia, than any Huawei products. Yet Australia’s Defence Department uses Microsoft Windows.

We also need to assess the relative intelligence value of back doors in Huawei products if they in fact exist. We can assume they do, either by design or by error. But the share of high-grade intelligence collected by this means would be minuscule. Chinese and American spy agencies already have easy access to most unclassified or unencrypted telecommunications from Australia without relying on back doors in telecoms equipment.

If China wanted to use a domiciled company for implanting back doors, it would not rely on the Chinese Communist Party cell in Huawei to set that up. The Huawei party cell would not be in the chain of command for Chinese intelligence operations of this kind. The cell is not oriented towards espionage, though its members would report on internal security issues to the Ministry of Public Security.

If the US wanted to plant back doors in the equipment of a US-domiciled company, it would not need a law to compel the cooperation. It would simply get consent from people at the top of the company, as it did with NSA’s PRISM program, where US telecoms companies, such as AT&T, and information utilities, such as Google, provided a direct feed to NSA headquarters of all communications, according to documents leaked by Snowden.

Beyond intelligence studies, we need industry knowledge. Huawei estimates that 50% of Australians rely on its systems of some kind for their telecommunications. This is probably a radical underestimate—I think it would be closer to 95% if we’re talking about all Chinese-made systems. Most of Australia’s unclassified communications today probably depend on systems using at least one component manufactured in China.

I base this very rough estimate on several considerations. According to a 2018 study of smaller countries like Australia, the bulk of our domestic internet traffic and email is probably routed through foreign servers and internet gateways. A large slice goes to countries like the UK, where BT provides services using equipment from Huawei and other Chinese manufacturers such as ZTE, and a further share of our traffic travels to and from China itself. According to a 2018 Chinese study, the diversion of internet traffic through other countries is increasing in spite of intensifying claims to internet sovereignty.

The campaign against Huawei imagines that Australia has a cyber border. It does not. It’s deeply entangled in a globalised laissez-faire ICT economy and diffuse internet traffic pathways. Our public policy is still learning the nature and scope of this reality.


Tech and Trust: Safeguarding AI for Economic and Security Progress


Stop the World: Building cyber resilience with Lieutenant General Michelle McGuinness

In this episode of Stop the World, ASPI’s Executive Director Justin Bassi speaks with Australia’s National Cyber Security Coordinator Lieutenant General Michelle McGuinness CSC to discuss her role and how it helps protect Australians online.  

LTGEN McGuinness explores the dual role that the National Office of Cyber Security plays in preparing for and responding to the increasing number of cyber incidents, the importance of building resilience so that Australia can respond to them efficiently and effectively, and how preventative measures such as multi-factor authentication can mitigate over 80 percent of cyber risks.
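For readers unfamiliar with how a typical multi-factor authentication code is produced, the sketch below implements the time-based one-time password (TOTP) algorithm from RFC 6238, which underlies many authenticator apps. It is a minimal educational sketch, not a production credential system.

```python
# Minimal TOTP (RFC 6238) sketch: a 6-digit code derived from a shared secret
# and the current 30-second time window, as used by common authenticator apps.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, period=30, digits=6, now=None):
    """Return the one-time code for the given base32 secret and time."""
    key = base64.b32decode(secret_b32)
    counter = int((now if now is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a secret never sent over the network, a stolen password alone is not enough to log in, which is why MFA blocks such a large share of routine attacks.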

Justin and LTGEN McGuinness also discuss the role that attribution plays in deterring malicious cyber activity and how attribution can improve mitigation strategies, drive norms and establish that Australia does not tolerate unacceptable behaviour in cyberspace. 

Guests: 

Lieutenant General Michelle McGuinness

Justin Bassi

Stop the World: The Sydney Dialogue Summit Sessions: Australia’s Cyber and Critical Technologies Ambassador Brendan Dowling

The Sydney Dialogue Summit Sessions are back!  

Today on Stop the World, we are relaunching our special series – The Sydney Dialogue Summit Sessions. To kick off the series, Alex Caples, Director of ASPI’s Sydney Dialogue, speaks to Brendan Dowling, Australia’s Cyber Affairs and Critical Technologies Ambassador.

This conversation covers all things cyber and offers a preview of some of the topics to be explored in Sydney in September. Alex and Brendan discuss the importance of security by design, regional security and the cybersecurity threats our region is facing, and the opportunities the digital transition offers for the clean energy transition.

The Sydney Dialogue (TSD) is ASPI’s flagship initiative on cyber and critical technologies. The summit brings together world leaders, global technology industry innovators and leading thinkers on cyber and critical technology for frank and productive discussions. TSD 2024 will address the advances made across these technologies and their impact on our societies, economies and national security.

Find out more about TSD 2024 here: https://tsd.aspi.org.au/

Guests:  

Dr Alexandra Caples
Brendan Dowling