Tag Archive for: Technology

Could’ve seen it coming: ASPI’s tech tracker had picked up China’s AI strength

It shouldn’t have come as a complete shock.

US tech stocks, especially chipmaker Nvidia, plunged on Monday after news that the small China-based company DeepSeek had achieved a dramatic and reportedly inexpensive advance in artificial intelligence. But the step forward for China’s AI industry was in fact foreseeable.

It was foreseeable from ASPI’s Critical Technology Tracker, which was launched in early 2023 and which in its latest update monitors high-impact research (measured as the 10 percent most highly cited publications) over two decades across 64 technologies, including machine learning and natural language processing (NLP).

While high-impact research isn’t the full picture, it is a leading indicator of scientific innovation right at the beginning of the lifecycle of a technology. As we argued in our August 2024 update, scientific innovation needs to be nurtured through every step of the lifecycle, notably through commercialisation for economic gain.

The two-decade Critical Technology Tracker report showed that China’s consistent investments in science and technology were paying off, with steady gains in its global share of high-impact publications in machine learning over the previous two decades. In this ascent, China overtook the United States in their yearly global share of highly cited publications in 2017.

ASPI has shown that between 2019 and 2023, 36.5 percent of high impact research in this field was published by Chinese institutions, compared with 15.4 percent by the United States. In NLP, the race is tighter, with the US’s and China’s global share of publications neck-and-neck in the same five-year period, at 24.8 percent and 24.1 percent, respectively.

ASPI’s research has also shown that, of the world-leading institutions in machine-learning research, the top five were in China. Tsinghua University, the alma mater of several key researchers behind the latest DeepSeek model, ranked second. ASPI’s Critical Technology Tracker also ranks Tsinghua University third in research in natural-language processing, behind only Google and the Chinese Academy of Sciences.

Chinese technology firms have been increasingly tapping into the growing pool of indigenous talent. Last year, DeepSeek’s founder, Liang Wenfeng, emphasised that the core research team was ‘all local’ and included no one who had trained or done extensive work abroad—though some members did have work experience in foreign operations inside China, such as Microsoft Research Asia. The Financial Times reports that Liang formed his AI company by combining an excellent team of chips experts with the best talent from a hedge fund he had co-founded.

AI is just the latest technology in which we have seen Chinese companies challenge the established dominance held by US or European companies. Solar cells, electric vehicles and smartphones are all technologies in which Western companies held and lost early advantages. ASPI’s data shows that China has in fact surpassed the US in cutting-edge research for 57 out of 64 technology areas; 2016 was an inflection point.

The global AI industry is still weighted in favour of the US in share of pioneering tech companies. But as DeepSeek’s announcement emphasises, US and other Western countries should have no great confidence in keeping their leads. In fact, any confidence should be called out as complacency.

So, the Trump administration’s commitment to making America great again in technologies is certainly welcomed. The big example so far is the announcement on 21 January of the US AI infrastructure joint venture Stargate, into which US$500 billion ($800 billion) is to be invested.

DeepSeek’s release makes it clear that now is not the time for half-measures or wishful thinking. Bold decisions, strategic foresight and a willingness to lean in to the AI race is vital to maintaining a competitive edge, and not just by the US.

ASPI’s Critical Technology Tracker is clear in another regard: that we should be ready for similar advances by China in other technological domains. Let’s hope that DeepSeek really is the wakeup call needed and likeminded countries now take the action needed to avoid being shocked again—not just in AI, but in all critical technologies.

DeepSeek’s disruption: Australia needs a stronger artificial intelligence strategy

The success of DeepSeek, a Chinese AI startup, has thrown a wrench in the middle of what many observers thought were largely American, or at least democratic, gears.

While the world seems to have been woken up by an AI surprise, DeepSeek’s breakthrough should be a timely reminder for Australia of the need to reduce consumer dependence for technology on China through a proactive and strategic approach to AI.  The Australian government should not want our public to be getting its world view from only the ‘facts’ Beijing permits.

DeepSeek’s development of ‘R1’, a highly efficient and cost-effective AI model, has sent ripples through the global tech community, challenging the perceived dominance of the US in AI and raising questions about the effectiveness of current export controls in preserving technological advantage.

DeepSeek’s R1 model represents a significant departure from conventional AI development paradigms. Reportedly twice the size of Meta’s open-source model and trainable at a fraction of the cost of US-developed models, R1 has fuelled speculation that DeepSeek may have circumvented export controls to access restricted US-made Nvidia chips.

While DeepSeek’s CEO has denied these allegations, attributing the company’s success to innovative development methodologies, he has also openly acknowledged that US export controls have inadvertently spurred his efforts to reduce China’s reliance on American technology. This statement highlights a broader trend of indigenous innovation in China, driven by a desire to achieve technological self-reliance and reduce vulnerability to external pressures. If true, it doesn’t mean the US export controls were so ineffective to be dropped, but rather that the US and its allies have more work to do.

DeepSeek’s emergence as a major player in the AI arena has profound implications for AI in Australia.

First, it challenges the prevailing assumption that US technological leadership, which has long underpinned Australia’s strategic and economic partnerships, can be taken for granted in the medium term.

Second, it shows that while export controls are a tool for maintaining technological advantage, it needs to be part of a full toolbox in an era of rapid technological diffusion and globalised innovation networks.

Third, and most importantly, it underscores the urgent need for Australia to cultivate sovereign AI capabilities. In this regard sovereignty is not going it alone, but not relying on our partners, even our great ally the US, to do all the heavy lifting. Over-reliance on China is a national security threat while overreliance on the US is national negligence. This is why in addition to Australian investment in indigenous AI capabilities, doubling down on the AUKUS partnership is required to safeguard our national interests, maintain our competitive edge, and ensure our strategic autonomy in a technology-driven world. And it is why Australia, the UK and the US made AI one of the six advanced capabilities of AUKUS Pillar 2.

Australia cannot continue the current approach of responding to each new tech development—whether it’s HikVision surveillance, TikTok data manipulation, smart car communications or the risk of AI facts delivered by the Chinese government. As such, we must adopt a comprehensive tech strategy that covers AI.

This strategy should encompass the following key elements:

Investing in sovereign AI capabilities: Increased investment in AI research and development is essential, along with the development of a national AI strategy that prioritises areas of national interest, such as defence, cybersecurity, and critical infrastructure. This investment should focus on building a robust and resilient AI industry that can support innovation, drive economic growth, and enhance national security.

Fostering international collaboration: In addition to AUKUS, strengthening partnerships with like-minded nations, such as Canada, Japan, and South Korea, is crucial for collaborative AI development, knowledge-sharing, and the establishment of international standards and norms for responsible AI development and deployment. Ideally groups like the Quad and the G7 plus should take this on.

Promoting ethical AI development: Australia must play a leading role in promoting ethical AI development and ensuring that AI systems are designed and deployed in a manner that respects human rights, promotes fairness, and safeguards against bias and discrimination but that does not politically censor.

Engaging the public: A public education campaign is necessary to raise awareness of the potential benefits and risks of AI, foster informed public discussion, and ensure that AI development and deployment align with society’s values and expectations.

As former Google CEO Eric Schmidt wrote yesterday: ‘DeepSeek’s release marks a turning point … We should embrace the possibility that open science might once again fuel American dynamism in the age of AI.’

Australia should work with the US and other partners to ensure it is our ‘open science’ and not Beijing’s closed world that is keeping the world informed. This underscores the importance of international engagement to shape the global AI landscape.

By taking a strategic approach that recognises the enormous impact that AI will have on every field, by investing in sovereign capabilities, by fostering international collaboration, and by promoting ethical AI development, Australia can navigate the AI revolution and secure its place as a leader in this transformative technological era.

DeepSeek is a modern Sputnik moment for West

The release of China’s latest DeepSeek artificial intelligence model is a strategic and geopolitical shock as much as it is a shock to stockmarkets around the world.

This is a field into which US investors have been pumping hundreds of billions of dollars, and which many commentators predicted would be led by Silicon Valley for the foreseeable future.

That a little-known Chinese company appears to have leapfrogged into a neck-and-neck position with the US giants, while spending less money and with less computing power, underscores some sobering truths.

First, the West’s clearest strategic rival is a genuine peer competitor in the technologies that will decide who dominates the century and, second, we need to step up our efforts to become less not more reliant on Chinese technology.

More than any other single field, AI will unleash powerful forces from economic productivity through to military capabilities. As Vladimir Putin said in 2017, whoever leads in AI ‘will become the ruler of the world’.

Marc Andreessen, the influential Silicon Valley entrepreneur and venture capitalist, called the DeepSeek announcement a ‘Sputnik moment’ and ‘one of the most amazing and impressive breakthroughs’ in AI. The United States was shocked into action by the Soviet satellite, Sputnik, investing billions into a public-private sector partnership model that helped win back and sustain tech dominance that would play a major role in winning the Cold War.

Andreessen is right but, in many ways, this breakthrough is even more consequential than Sputnik because the world’s consumers are increasingly reliant on China’s technology and economy in ways we never were with the Soviets.

So what does the West need to do now? Above all we need to stop underestimating our major strategic competitor. If hundreds of billions of dollars isn’t enough investment, we either need to redouble our efforts or work more smartly, bringing governments and the private sector together, and working across trusted nations, as we’re doing with AUKUS security technologies—one of which is of course AI.

We also need to dramatically step up so-called derisking of our economies with China’s in these critical technology fields.

When our leaders say they want us to have consumer choice including Chinese-made tech products, they are ignoring the considerable risks of future Chinese dominance, given we have seen the way Beijing is prepared to use its economic power for strategic purposes, whether through 5G or critical minerals.

As it stands, Beijing will have control over the majority of our smart cars, our batteries, the news our public gets through social media and, if models such as the open-source DeepSeek are adopted cheaply by Western companies, the supercharging power that AI will bring to every other sector.

DeepSeek’s breakthrough should actually come as less of a surprise than the stunned market reaction has shown.

In 2015, China told the world its aim was to supplant the US as the global tech superpower in its Made in China 2025 plan.

At ASPI our research in our Critical Technology Tracker has been showing for almost two years that Chinese published research is nipping at the US’s heels.

It surely isn’t a coincidence that at the end of 2024 and the early weeks of 2025, Beijing has shown the world its advances in both military capability in the form of new combat aircraft, and now dual-use technology in AI. Simultaneously we see Beijing’s obsession with keeping Americans and all Westerners hooked on TikTok, which ensures its users see a Beijing-curated version of the world.

Some observers are arguing that the DeepSeek announcement shows the ineffectiveness of US restrictions on exports of advanced technology such as Nvidia’s advanced chips to China.

Far from backing away from such protective measures, the Trump administration should consider stepping them up, along with further investments in data centres—already under way through the Stargate project.

Restricting chips to China is still an important tool in the US toolkit—it’s just not a panacea.

As Donald Trump’s reportedly incoming tech security director, David Feith, argued last year, the US should also target older chips because ‘failing to do so would signal that US talk of derisking and supply chain resilience still far outpaces policy reality’.

It’s not certain how much direct support DeepSeek and its backers have received from the Chinese government but there are some clues in the way the company is behaving. The DeepSeek model is open-source and costs 30 times less for companies to integrate into than US competitors.

Founder Liang Wenfeng has been blunt that the company is not looking for profits from its AI research, at least in the short term—which would enable it to follow the Chinese playbook of undercutting competitors to create monopolies. And the firm had reportedly been stockpiling the most advanced Nvidia chips before the US restrictions, and has received allocations of chips apparently through the Chinese government.

These facts hint at the lopsided playing field China likes to create. As Edouard Harris, of Gladstone AI, told Time magazine: ‘There’s a good chance that DeepSeek and many of the other big Chinese companies are being supported by the (Chinese) government, in more than just a monetary way.’

While the West continues to debate the balance between fully open economies and national industrial and technology strategies with greater government involvement, China has already fused its industry with its government-led national strategy and is evidently stronger for it.

China sees the West’s open economies as a vulnerability through which it has an easy access to our markets that is not reciprocated.

DeepSeek is yet another reminder that China’s technology is a force to be reckoned with and one that its government will use strategically to make China more self-sufficient while making the rest of the world more dependent on China.

We must start recognising this era and responding decisively.

Tiptoeing around China: Australia’s framework for technology vendor review

Australia has a new framework for dealing with high-risk technology vendors, though the government isn’t brave enough to call them that.

Home Affairs Minister Tony Burke says the framework ‘will ensure the government strikes the right balance in managing security risks while ensuring Australia continues to take advantage of economic opportunities’.

An alternative reading would be that it’s an opaque, toothless framework that gives the government wiggle room to minimise risk to the China relationship by increasing risk to our digital sovereignty.

The framework was announced on 20 December but not published. It’s a set of guidelines for assessing national security risks posed by foreign technology products and services sold in Australia. The timing was so unlikely to attract attention that it looked deliberate. Information on the Department of Home Affairs website, striking an unsatisfying balance between brevity and circumlocution, reinforces the impression that the government would be pleased if few people noticed the policy.

The framework establishes a ‘proactive process to consider foreign ownership, control or influence risks associated with technology vendors’. That will enable the government to ‘provide guidance on technology vendor risks to inform public and private sector procurement decisions about the security of technology products and services’. Risks will be assessed and mitigations considered where these risks are unacceptable.

The government’s factsheet provides a few more details. The security reviews will be led by Home Affairs in consultation with relevant agencies, presumably including technical experts in our security agencies. Assessments will be prioritised based on preliminary risk analysis of such factors as where the product or service is deployed, its prevalence and access to sensitive systems or data.

We don’t know what technologies the reviews will focus on or who will make the final decisions on which risks need mitigating. Review findings will apparently inform future government policies or support technical guidance to help organisations mitigate identified risks. The framework itself will not be released publicly to ‘ensure the integrity of the framework’s processes and protect information relating to national security’.

What’s clear is the focus on mitigating risk. Bans or restrictions on vendor access are off the table, even though, as we discovered with 5G, it is sometimes impossible to mitigate technology products and services that are one update away from being remotely manipulated by the vendor who supplies and maintains them.

But who would seek to manipulate or disrupt the critical technologies on which Australians rely?

Well, the government says the framework was not established to ‘target vendors from specific nations.’ The majority of foreign vendors ‘do not present a threat to Australia’s interests. However, in some cases, the application, market prevalence or nature of certain technologies, coupled with foreign influence, could present unacceptable risks to the Australian economy. This is particularly true if the vendor is owned, controlled or influenced by foreign governments with interests which conflict with Australia’s.’

The document steers clear of the more zingy phrase ‘high-risk vendors’, which was associated with Australia’s 2018 ban on Chinese 5G suppliers Huawei and ZTE.

It’s a tricky balance. Reluctance to point the finger at our largest trading partner is understandable, even though everyone knows we wouldn’t need a framework without our growing reliance on Chinese vendors who are indeed owned, controlled or influenced by the Chinese government. But, unsettled by China’s reaction to its predecessor singling out Chinese 5G vendors, this government seems more concerned with anticipating Chinese concerns than explaining to the public what technologies it should be worried about.

For example, will the government target electric cars and solar inverter technologies, where China’s dominant position has raised concerns? Perhaps not, since we are reminded that foreign technology companies ‘are essential’ for Australia’s net zero transition.

Businesses weighing the merits of buying cost-competitive Chinese tech will be reassured that the framework won’t introduce new legislated authorities or regulation. The focus seems to be on consultation with business so the government can ‘understand the risks introduced by a product or service, and the availability of mitigations’.

But mitigations reduce efficiency and add cost, and selecting pricier gear from alternative trusted vendors adds even more. Businesses may feel that avoiding these extra costs is worth the risk.

How might this play out? One way is we never hear about the framework again, aside from occasional technical security guidance. Low public awareness of the risks will mean inquiries can be batted back with assurances that the government has been making progress but can’t talk about it for national security reasons.

Then, one morning in the middle of an Indo-Pacific crisis, we might wake up to find the power and water don’t work.

As Mike Tyson might have said, everyone has a secret technology vendor review framework until they get punched in the mouth.

Using open-source AI, sophisticated cyber ops will proliferate

Open-source AI models are on track to disrupt the cyber security paradigm. With the proliferation of such models—those whose parameters are freely accessible—sophisticated cyber operations will become available to a broader pool of hostile actors.

AI insiders and Australian policymakers have a starkly different sense of urgency around advancing AI capabilities. AI leaders like Dario Amodei, chief executive of Anthropic, and Sam Altman, chief executive of OpenAI, forecast that AI systems that surpass Nobel laureate-level expertise across multiple domains could emerge as early as 2026.

On the other hand, Australia’s Cyber Security Strategy, intended to guide us through to 2030, mentions AI only briefly, says innovation is ‘near impossible to predict’, and focuses on economic benefits over security risks.

Experts are alarmed because AI capability has been subject to scaling laws—the idea that capability climbs steadily and predictably, just as in Moore’s Law for semiconductors. Billions of dollars are pouring into leading labs. More talented engineers are writing ever-better code. Larger data centres are running more and faster chips to train new models with larger datasets.

The emergence of reasoning models, such as OpenAI’s o1, shows that giving a model time to think in operation, maybe for a minute or two, increases performance in complex tasks, and giving models more time to think increases performance further. Even if the chief executives’ timelines are optimistic, capability growth will likely be dramatic and expecting transformative AI this decade is reasonable.

The effect of the introduction of thinking time on performance, as assessed in three benchmarks. The o1 systems are built on the same model as gpt4o but benefit from thinking time. Source: Zijian Yang/Medium.

Detractors of AI capabilities downplay concern, arguing, for example, that high-quality data may run out before we reach risky capabilities or that developers will prevent powerful models falling into the wrong hands. Yet these arguments don’t stand up to scrutiny. Data bottlenecks are a real problem, but the best estimates place them relatively far in the future. The availability of open-source models, the weak cyber security of labs and the ease of jailbreaks (removing software restrictions) make it almost inevitable that powerful models will proliferate.

Some also argue we shouldn’t be concerned because powerful AI will help cyber-defenders just as much as attackers. But defenders will benefit only if they appreciate the magnitude of the problem and act accordingly. If we want that to happen, contrary to the Cyber Security Strategy, we must make reasonable predictions about AI capabilities and move urgently to keep ahead of the risks.

In the cyber security context, near-future AI models will be able to continuously probe systems for vulnerabilities, generate and test exploit code, adapt attacks based on defensive responses and automate social engineering at scale. That is, AI models will soon be able to do automatically and at scale many of the tasks currently performed by the top-talent that security agencies are keen to recruit.

Previously, sophisticated cyber weapons, such as Stuxnet, were developed by large teams of specialists working across multiple agencies over months or years. Attacks required detailed knowledge of complex systems and judgement about human factors. With a powerful open-source model, a bad actor could spin-up thousands of AI instances with PhD-equivalent capabilities across multiple domains, working continuously at machine speed. Operations of Stuxnet-level sophistication could be developed and deployed in days.

Today’s cyber strategic balance—based on limited availability of skilled human labour—would evaporate.

The good news is that the open-source AI models that partially drive these risks also create opportunities. Specifically, they give security researchers and Australia’s growing AI safety community access to tools that would otherwise be locked away in leading labs. The ability to fine-tune open-source models fosters innovation but also empowers bad actors.

The open-source ecosystem is just months behind the commercial frontier. Meta’s release of the open-source Llama 3.1 405B in July 2024 demonstrated capabilities matching GPT-4. Chinese startup DeepSeek released R1-Lite-Preview in late November 2024, two months after OpenAI’s release of o1-preview, and will open-source it shortly.

Assuming we can do nothing to stop the proliferation of highly capable models, the best path forward is to use them.

Australia’s growing AI safety community is a powerful, untapped resource. Both the AI safety and national security communities are trying to answer the same questions: how do you reliably direct AI capabilities, when you don’t understand how the systems work and you are unable to verify claims about how they were produced? These communities could cooperate in developing automated tools that serve both security and safety research, with goals such as testing models, generating adversarial examples and monitoring for signs of compromise.

Australia should take two immediate steps: tap into Australia’s AI safety community and establish an AI safety institute.

First, the national security community should reach out to Australia’s top AI safety technical talent in academia and civil society organisations, such as the Gradient Institute and Timaeus, as well as experts in open-source models such as Answer.AI and Harmony Intelligence. Working together can develop a work program that builds on the best open-source models to understand frontier AI capabilities, assess their risk and use those models to our national advantage.

Second, Australia needs to establish an AI safety institute as a mechanism for government, industry and academic collaboration. An open-source framing could give Australia a unique value proposition that builds domestic capability and gives us something valuable to offer our allies

Tech cooperation between Australia and South Korea will bolster regional stability

Greater alignment between Australia and South Korea in critical technologies would produce significant strategic benefit to both countries and the Indo-Pacific. Overlapping and complex regional challenges, such as climate change, economic shocks and pandemics, underscore the need for international cooperation in critical technologies

Although these technologies have a range of beneficial social, economic and security outcomes, they are increasingly being deployed by regional adversaries for malign purposes, including espionage, cyberattacks and spreading disinformation. This is particularly alarming for many countries in the region amid intensified geostrategic competition.

The latest data from ASPI’s Critical Technology Tracker highlights challenges posed by technological advancements by emphasising the shift in technology leadership from the US to China over the past two decades. The tracker shows that China is now the leading country for high-impact research on critical technologies.

Enhanced collaboration between likeminded Indo-Pacific partners can counter China’s edge in technological research. ASPI’s new report recommends coordination and cooperation between Australia and South Korea in critical technologies, as the two regional powers have complementary technologies and are committed to upholding the US-led rules-based order.

In this report, we examine bilateral technological collaboration through the framework of four stages common to technological life cycles (innovation, research and development; building blocks for manufacturing; testing and application; standards and norms) and four corresponding critical technologies of joint strategic interest to both Australia and South Korea (biotechnologies, electric batteries, satellites and artificial intelligence).

Using this framework, we provide policy recommendations for Australian and South Korean government, research and industry stakeholders. We outline how they can build cooperation in the areas of biotechnology-related research and development, battery materials manufacturing, satellite launches and artificial intelligence (AI) standards-setting.

First, long-term exchanges between key R&D institutions will facilitate knowledge-sharing in the field of biotechnologies, a field relevant to both countries’ goals to become regional clinical trial hubs. We suggest that the Commonwealth Scientific and Industrial Research Organisation and the Korea Research Institute of Bioscience and Biotechnology lead this initiative.

Second, due to Australia’s abundance of critical minerals and South Korea’s desire to elevate its capacity to manufacture electric batteries, battery material manufacturers from both countries should collaborate in the joint production of such battery materials as lithium hydroxide and precursor cathode active materials. Although the POSCO-Pilbara Minerals plant is an existing example of a joint factory operating South Korea, we highlight the strategic benefit of building future factories on Australian soil to take advantage of a secure supply of critical minerals.

Third, a streamlined government-to-government agreement will help South Korean companies to take advantage of Australia’s geography for joint satellite launches. This could emulate an agreement between Australia and the US for joint satellite launches. It would make it easier for both Australia and South Korea to collate satellite data for civilian and defence purposes.

Finally, Australian and South Korean stakeholders involved in international standards-setting bodies should align their approaches to ensure that the development and implementation of AI technologies is consistent with both countries’ respective interests. ISO/IEC JTC 1/SC 42, a joint subcommittee on AI standards shared by the International Organization for Standardization and the International Electrotechnical Commission, is one recommended mechanism for coordinating the approaches of key Australian and South Korean stakeholders in AI standards.

The current political situation in South Korea may sow doubt in the mind of regional counterparts about its domestic stability and suitability as a partner. However, the quick overturning of martial law showed the robustness of South Korea’s democratic institutions. There may be short-term challenges to bolstering bilateral technological initiatives as the domestic situation continues to evolve, but the long-term trajectory for technological cooperation remains optimistic.

Aside from the economic, innovation and technology pillar of the bilateral Comprehensive Strategic Partnership and the Memorandum of Understanding on Cyber and Critical Technology Cooperation, the two countries are also active in furthering multilateral dialogue relating to critical technologies. Particularly, each country is internationally engaged in for a including the 3rd Generational Partnership Project, International Electrotechnical Commission and Minerals Security Partnership.

Technological cooperation between Australia and South Korea has can be leveraged to address regional challenges. This report serves as a starting point for furthering this cooperation. To ensure that the Indo-Pacific remains safe, secure and stable in the coming decades, now is the time for industry, research and government stakeholders in Australia and South Korea to jointly adopt a much greater and meaningful strategic role in regional technological collaboration.

It’s not too late to regulate persuasive technologies

Social media companies such as TikTok have already revolutionised the use of technologies that maximise user engagement. At the heart of TikTok’s success are a predictive algorithm and other extremely addictive design features—or what we call ‘persuasive technologies’. 

But TikTok is only the tip of the iceberg. 

Prominent Chinese tech companies are developing and deploying powerful persuasive tools to work for the Chinese Communist Party’s propaganda, military and public security services—and many of them have already become global leaders in their fields. The persuasive technologies they use are digital systems that shape users’ attitudes and behaviours by exploiting physiological and cognitive reactions or vulnerabilities, such as generative artificial intelligence, neurotechnology and ambient technologies.   

The fields include generative artificial intelligence, wearable devices and brain-computer interfaces. The rapidly advancing tech industry to which these Chinese companies belong is embedded in a political system and ideology that compels companies to align with CCP objectives, driving the creation and use of persuasive technologies for political purposes—at home and abroad.  

This means China is developing cutting-edge innovations while directing their use towards maintaining regime stability at home, reshaping the international order abroad, challenging democratic values, and undermining global human rights norms. As we argue in our new report, ‘Persuasive technologies in China: Implications for the future of national security’, many countries and companies are working to harness the power of emerging technologies with persuasive characteristics, but China and its technology companies pose a unique and concerning challenge. 

Regulation is struggling to keep pace with these developments—and we need to act quickly to protect ourselves and our societies. Over the past decade, the swift technological development and adoption have outpaced responses by liberal democracies, highlighting the urgent need for more proactive approaches that prioritise privacy and user autonomy. This means protecting and enhancing the ability of users to make conscious and informed decisions about how they are interacting with technology and for what purpose.  

When the use of TikTok started spreading like wildfire, it took many observers by surprise. Until then, most had assumed that to have a successful model for social media algorithms, you needed a free internet to gather the diverse data set needed to train the model. It was difficult to fathom how a platform modelled after its Chinese twin, Douyin, developed under some of the world’s toughest information restrictions, censorship and tech regulations, could become one of the world’s most popular apps.  

Few people had considered the national security implications of social media before its use became ubiquitous. In many countries, the regulations that followed are still inadequate, in part because of the lag between the technology and the legislative response. These regulations don’t fully address the broader societal issues caused by current technologies, which are numerous and complex. Further, they fail to appropriately tackle the national security challenges of emerging technologies developed and controlled by authoritarian regimes. Persuasive technologies will make these overlapping challenges increasingly complex. 

The companies highlighted in the report provide some examples of how persuasive technologies are already being used towards national goals—developing generative AI tools that can enhance the government’s control over public opinion; creating neurotechnology that detects, interprets and responds to human emotions in real time; and collaborating with CCP organs on military-civil fusion projects. 

Most of our case studies focus on domestic uses directed primarily at surveillance and manipulation of public opinion, as well as enhancing China’s tech dual-use capabilities. But these offer glimpses of how Chinese tech companies and the party-state might deploy persuasive technologies offshore in the future, and increasingly in support of an agenda that seeks to reshape the world in ways that better fit its national interests. 

With persuasive technologies, influence is achieved through a more direct connection with intimate physiological and emotional reactions compared to previous technologies. This poses the threat that humans’ choices about their actions are either steered or removed entirely without their full awareness. Such technologies won’t just shape what we do; they have the potential to influence who we are.  

As with social media, the ethical application of persuasive technologies largely depends on the intent of those designing, building, deploying and ultimately controlling the technology. They have positive uses when they align with users’ interests and enable people to make decisions autonomously. But if applied unethically, these technologies can be highly damaging. Unintentional impacts are bad enough, but when deployed deliberately by a hostile foreign state, they could be so much worse. 

The national security implications of technologies that are designed to drive users towards certain behaviours are already becoming clear. In the future, persuasive technologies will become even more sophisticated and pervasive, with the consequences increasingly difficult to predict. Accordingly, the policy recommendations set out in our report focus on preparing for, and countering, the potential malicious use of the next generation of persuasive technologies. 

Emerging persuasive technologies will challenge national security in ways that are difficult to forecast, but we can already see enough indicators to prompt us to take a stronger regulatory stance. 

We still have time to regulate these technologies, but that time for both governments and industry are running out. We must act now. 

Australia should lead efforts to address online gender-based violence

Since the UN Security Council adopted the Women, Peace, and Security (WPS) Agenda in 2000, the world has started to facilitate women’s participation in peace and security processes while protecting them from gender-based violence. But technology-facilitated gender-based violence (TFGBV) is undermining these advancements. An October report by UN Women focussing on TFBGV highlighted an intensification of online misogyny and hate speech targeting women.

Australia should address this global phenomenon with a new bill and lead international efforts to improve transparency and accountability on major digital platforms. Proposed legislation to fight disinformation presents an opportunity to do this. The country should also be looking at further measures, such as promotion of digital literacy and participation of women in policy that pertains to TFGBV.

AI-enabled TFGBV, including doctored image-sharing, disinformation, trolling and slander campaigns affected 88 percent of women surveyed in UN Women’s report, with those in public-facing jobs systematically targeted. If no steps are taken, women may be deterred from meaningfully participating in public discussions and decision-making processes, known as the ‘chilling effect’. This is especially true for female politicians, journalists and human rights defenders who often face politically motivated or coordinated attacks.

Domestically, Australia has seen a positive trend in female representation in politics, with participation in state and territory parliaments increasing from 22 percent in 2001 to 39 percent in 2022. However, in a global ranking comparing the percentage of women in national parliaments, Australia has fallen from 27th place to 57th over the last 25 years.

TFGBV poses a direct threat to women’s participation in Australian politics. A 2022 study by Gender Equity Victoria revealed the prevalence of violent rhetoric and material mostly directed at women and gender diverse people. Prominent female politicians such as Julia Gillard, Penny Wong, Sarah Hanson-Young and Mehreen Faruqi have faced relentless online abuse, often involving implied threats of offline physical harm.

The impact of such abuse is often compounded for women who are religious or culturally and linguistically diverse. This discourages marginalised communities with intersectional backgrounds from participating in democratic processes.

Such harassment is not only deeply personal but widely damaging for Australian democracy. When women face a greater risk of gendered online harassment, fewer will pursue public office or meaningfully engage in political discourse. TFGBV is effectively forcing women out of key decision-making spaces and processes, risking the regression of women’s rights and freedom of speech.

The Albanese government’s recently tabled Combatting Misinformation and Disinformation Bill would grant the Australian Communications and Media Authority (ACMA) new powers to regulate digital platforms, with the aim of addressing harmful content while safeguarding freedom of speech. This aligns with the WPS pillars of protecting the rights of women and girls.

The bill presents an opportunity to tackle TFGBV as part of a broader approach to digital safety, especially as the growth of female representation in government may be at stake. By empowering ACMA to clamp down on disinformation campaigns that disproportionately target women, the bill could provide a crucial pathway for women to engage in public life without fear of TFGBV.

Additionally, the government must build resilience against TFGBV by establishing digital literacy programs that address online safety. These programs should educate the public on identifying digital threats, navigating online harassment and reporting abuse. Integrating TFGBV prevention into educational curricula and workplace policies could equip women to protect themselves online and encourage safer, more secure digital environments.

To address TFGBV proactively, the government also needs to increase female representation in cybersecurity, policymaking and technology governance. It should invest in initiatives that offer scholarships, mentorship programs and career development for women in STEM, with a focus on digital security. This would empower more women to participate in developing cyber policies and gender mainstreaming strategies and ensure that the gendered dimensions of digital security are fully considered.

Since TFGBV also stems from AI algorithm bias on social media platforms, Australia should lead international efforts to establish transparency and accountability standards for AI applications, including through UN Secretary-General Advisory Body on AI where currently Australia does not have any representation. It should require digital platforms to disclose AI applications, detect harmful content and protect users’ data. Additionally, Australia should advocate for measures that ensure algorithms do not inadvertently target women with harmful content or amplify misogynistic narratives.

In leading these initiatives, Australia can build on its WPS National Action Plan 2021-2031, which serves as a framework for efforts to enhance women’s participation in peace and security processes.

Given the borderless nature of digital spaces, Australia needs to collaborate with other like-minded partners to address TFGBV. Regional partnerships could involve information-sharing agreements, joint training on addressing TFGBV, and collaborative research on the trends of AI-driven gendered-harms and how to counter them. A united stance by Australia and its partners would bolster digital security for women, fostering a safer environment for women in public roles.

How will the ADF get the technology edge it needs to win?

Fast-moving technology clearly gives the advantage to militaries that can obtain new systems quickly. And it’s a major source of damage and danger to those whose organisations aren’t delivering these powerful capabilities into the hands of the soldiers, sailors and aviators.

This was brutally demonstrated when the Azerbaijani military used cheap, deadly, unmanned systems to destroy scores of Armenian tanks and to attack camouflaged vehicles, headquarters and command locations. The Armenians, fielding traditional manned platforms and operating in conventional ways, lost.

These unmanned systems needed targeting and intelligence information and so didn’t operate alone. But the lesson is that militaries that don’t have fast acquisition processes, and that are without leaders who understand the required pace of change, can expose their people—and the governments and populations that rely on them—to enormous risk.

It’s an obvious lesson that many in defence organisations across the world already know. But sometimes it takes brutal public demonstrations of things that have only been appreciated intellectually to make people act on what they know.

The process of getting fast-moving technology to the Australian Defence Force is at best mixed, slowed by the understandable conservatism about the promise of new technologies balanced against the power of well-understood solutions and approaches.

To any military chief in 2021, now seems not the time to give up on highly capable, complex, crewed surface ships, submarines, fighter jets and surveillance aircraft and leap into the unknown world of autonomy. And no chief of the army, navy or air force wants to live the rest of their life and service reunions as the person who gave up armoured fighting vehicles, frigates, crewed submarines or crewed fighters.

That’s absolutely rational, and the huge psychological and emotional barrier any service chief would face is obvious.

The problem isn’t that this sensible conservatism sees the bulk of the defence investment budget spent on small numbers of very expensive, complex traditional platforms—although there are arguments that the outcomes don’t justify the costs.

The real problem is there are few champions of the ADF’s urgent need for faster moving, new technologies at scale who matter enough to affect government thinking and decisions.

Given continuing uncertainty about the viability of both traditional and emerging military capabilities, it’s absolutely defensible that the big, slow-moving traditional programs delivering small numbers of highly capable, complex, expensive platforms proceed. They may deliver capability to the ADF that’s powerful in the threat environment we have now, and the even more deadly threat environment over the next five or 10 years.

But even if crewed surface ships and submarines remain powerful, they’ll need to be complemented, augmented and wrapped up with things like smart missiles, semi-autonomous intelligence and surveillance systems, loitering munitions and uncrewed undersea systems—armed and unarmed—if they’re to be effective.

Defence’s mega-projects must be complemented by an entirely separate, fast-moving technology acquisition cycle not constrained by all the process layers and mitigators the giant projects require. Instead, they must be driven by the imperative to quickly equip our personnel with what they need to deter conflict and prevail if it occurs. We need to be more like the Azerbaijanis, not the Armenians.

So, who might champion rapid acquisition of fast-moving new technologies?

I’d have hoped the army and its leadership would. The Australian Army has traditionally not been a heavily armoured, heavily mechanised force, but a capable light infantry outfit that can operate in a highly dispersed small team environment, with a leavening of armour.

That was overlooking one big dynamic, though. The army force structure that’s been the vision since at least the late 1990s has embraced armour as its centre, and the army is now on the cusp of doing what the other two services already have done, doubling down on its own ‘next generation’, hugely expensive, complex, crewed weapon platforms. That’s happening just as these are becoming more vulnerable to everything we saw happen to the Armenians.

And no army leader is likely to do much about this because the combination of conservatism and psychology mean it’s way too big an ask—particularly when the army is about to get its hands on $27 billion for 450 infantry fighting vehicles.

That’s a shame, because armies could be the early adopters and are ideally placed to make the shift to highly dispersed, autonomous operations by small groups operating damaging new weapons but in a highly mobile, hard-to-target way. That’s what the new US Marine Corps concept is working towards.

Even buying just (!) 200 more armoured vehicles through its already agreed combat reconnaissance vehicle project and cancelling the IFV program would keep headroom for change.

And the army is also well placed to keep the focus on lower cost, high volumes of things like loitering munitions, advanced ground-to-air, ground-to-ship and ground attack missiles, and low-cost, widely available sensors and communications systems to lace all this together—because armies understand volume.

I know there are markers in Defence’s big integrated investment program for some of this—but the real money comes after the army eats the multibillion-dollar elephant that is its Land 400 armoured vehicle program. Until then, expect high-profile experimentation and press release, but low-volume actual acquisition of anything that doesn’t have armour and a turret.

That leaves us with two other services and ministers.

Strangely, for a force that has always centred itself on the person in the cockpit, selecting its chiefs out of only these folk, the Royal Australian Air Force is doing the most to embrace powerful complementary new uncrewed technologies and platforms.

It’s certainly not getting rid of the hugely expensive and sophisticated crewed weapons—the F-35s, P-8 surveillance aircraft, Super Hornets and Growler electronic attack aircraft. But the RAAF is leading the way with its ‘loyal wingman’ uncrewed system that will magnify the combat power of its fifth-generation of traditional platforms at prices meaning far more can be acquired than the mystical 102 number of crewed fighters the RAAF plans.

This is happening quickly, with the loyal wingman already achieving its first flight last year. A major reason the RAAF is willing to champion this technology is that it’s already got its ‘next generation’ of crewed aircraft, so, unlike the army, none of its traditional capability investments are threatened.

You’d think the navy would be in a similar position because the government has already committed to multibillion-dollar continuous build programs for ships and submarines, and the surface, air and undersea environments are replete with options for powerful but cheap systems to work with ships and submarines.

The truth is disappointing. The navy talks a good game, and it has a remote and autonomous systems roadmap out to 2040 that says so.

As ASPI’s Cost of defence 2021–22 budget analysis shows, however, there’s little cash or momentum outside the ‘non-core’ area of mine countermeasures that will deliver much novel technology before the first Hunter-class frigate or Attack-class submarine enters service years from now.

That may well be because, while a lot of public money is going out the door on the frigates and submarines, there’s not much tangible to show for this. So, there’s a concern that advocating for the military value of things that can threaten frigates and submarines will add to the pressures acquisition folk face.

That’s a fundamental miscalculation. Right now, navies know they face their own Azerbaijan–Armenia scenario from adversaries that are already lacing smart mines and lethal surface and subsurface uncrewed weapons into their command, control and targeting systems.

Wargaming around a Taiwan conflict demonstrates this routinely in ways that should matter to Australia.

There’s still time for our navy to quickly get into the loyal wingman game, whether undersea or on the surface. Uncrewed undersea systems seem most obvious, because Australia and our partners retain an undersea warfare advantage and retaining it must involve uncrewed systems that can complement and multiply the combat power of even the best crewed submarine.

That leaves ministers. Out of all the possible champions, I think they are our best bet for rapid change.

What minister wouldn’t want to do more than just defend the slow-moving, troubled, big defence programs their predecessors at least got the joy of beginning? And any defence minister in the 2020s looking at our deteriorating strategic environment must want to get additional undersea combat power into the hands of our navy well before the first Attack-class submarine turns up in the mid-2030s.

They’d probably be willing to get Defence chiefs to shift some cash in the large and growing defence budget to get this done. And they could make the obvious point that Defence’s acquisition budget underspent by about $1 billion last year and is on track to do this in a bigger way as the budget grows, so why not put the money to acquiring fast-moving technology at a scale well beyond the current innovation funding.

A minister looking at the speed of technological disruption and change in every field of human endeavour will understand that decades-long acquisition programs may have their place, but they must at least be accompanied by a separate, much faster way of getting technologies from concept or demonstrator to weapon system operated by our sailors, soldiers and aviators.

I hope I’m wrong and that in the next few months I hear about new army projects and a navy equivalent to the RAAF’s loyal wingman. In the absence of this, I look forward to Defence Minister Peter Dutton getting serious about the capability needs of our ADF personnel in an environment whose dangers are obvious to us all.

ASPI explains: 8chan

On 3 August at around 11 am, a 21-year-old man named Patrick Crosius posted a PDF to an online forum. Ninety minutes later, he walked into a Walmart in El Paso, Texas, and opened fire. Twenty people were killed and 82 were wounded. Crosius was captured unharmed, apparently after having surrendered to police. Authorities have opened a domestic terrorism investigation.

Since the shooting, much attention has been paid to the shooter’s ‘manifesto’, a typo-riddled four-page document outlining his deeply racist justifications for the attack. Attention has also focused on 8chan, the forum to which he posted the document.

Here’s what you need to know about 8chan.

What is 8chan?

8chan is an online forum that has existed since 2013. It’s a spin-off from 4chan, another online forum known for misogynist, racist and other extreme content. 8chan was created by users who felt that 4chan didn’t allow them to go far enough. It has become infamous for the extremist and, in particular, the white nationalist views of its user base.

8chan allows users to post without creating an account or logging in, which gives them a limited degree of anonymity (it doesn’t necessarily protect them from being identified by law enforcement, journalists or other investigators, however).

How many mass shootings has 8chan been linked to?

Three mass shooters in the past six months have posted ‘manifestos’ to 8chan: the Christchurch shooter Brenton Tarrant in March 2019, the Poway synagogue shooter John Earnest in April 2019, and El Paso shooter Patrick Crusius in August 2019. Another mass shooting at a California garlic festival in July is also thought to have links to 8chan.

Why aren’t law enforcement and intelligence agencies watching this board?

They are. Unfortunately, that doesn’t mean they can always determine whether the threats are real, identify and physically locate the posters and mobilise officers in time to prevent every attack. In the case of the El Paso shooting, the shooter’s manifesto was uploaded shortly before the attack, which meant there was a very small window of time to respond.

Why hasn’t 8chan been shut down yet?

Many have tried. In 2014, 8chan was banned from raising money on the fundraising site Patreon. In 2015, Google briefly removed 8chan from its search results for hosting child abuse content. In 2019 after the Christchurch shooting, Australian and New Zealand internet service providers temporarily blocked access to 8chan and a number of other platforms.

Despite growing political pressure, permanently shutting the platform down has proven difficult, however. The current owner of 8chan, US Army veteran Jim Watkins, is based in the Philippines where he runs a pig farm. Watkins’s company NT Technology is behind a number of other sites, including a far-right ‘alternative news’ site.

The biggest obstacle to shutting down 8chan has been that it’s protected by another company called Cloudflare. Cloudflare provides protection for many websites against distributed denial-of-service (DDoS) attacks, including 8chan. Using Cloudflare’s services has allowed 8chan both to protect itself from DDoS attacks that might have been launched at the site by those who want to take it down, and to conceal the hosting provider for the platform. Cloudflare has been widely criticised in the past for protecting sites like 8chan and the neo-Nazi website the Daily Stormer.

What’s changed?

It seems as though the El Paso shooting was the last straw for Cloudflare. In a blog post on 5 August, Cloudflare CEO Matthew Prince announced that the company is walking away from 8chan. At midnight Pacific time, Cloudflare’s protection will be withdrawn from the 8chan site.

Is this the end of 8chan?

Almost certainly not. While the immediate future is likely to be rocky for 8chan and its administrators, users are already making plans to regroup if the entire site goes down permanently, which is unlikely. One possibility is that 8chan could move onto the dark web, but that’s unlikely for a number of reasons. A more probable outcome is that, like other far-right sites before it, 8chan will go down briefly before finding another DDoS protection service willing to work with it. One way or another, 8chan will be back.

Should we be trying to take 8chan down permanently?

It’s a difficult question with reasonable arguments on both sides. Some, including the original founder, believe the platform is a breeding ground for extremist violence and needs to be taken down. Others argue that the problem isn’t the platform, it’s the people using it, and that taking the platform down will only make them go elsewhere, as we already know they plan to do.

This is a fast-moving story, and what happens when Cloudflare’s protection is removed from 8chan at midnight will be interesting to watch. Whatever happens, one thing is for sure: we haven’t heard the last of 8chan.

Update: As expected, 8chan went down almost immediately after Cloudflare removed its DDoS protection services from the platform. A few hours later, 8chan briefly returned with the help of a company called Bitmitigate, which is in turn owned by Epik. Epik is positioning itself as a rival to Cloudflare, including offering services to the alt-right social platform Gab and the neo-Nazi website the Daily Stormer, which was also previously kicked off Cloudflare.

In an unexpected twist, however, shortly after 8chan came back online, 8chan, the Daily Stormer and Bitmitigate itself all went offline. It appears that the company which Epik rents hardware from, Voxility, only recently became aware of the kind of content Epik was hosting on its servers. Voxility warned Epik to remove the Daily Stormer from its infrastructure, which Epik claimed to have done. On learning that Epik was also planning to host 8chan, Voxility pulled the plug on Epik entirely, taking down 8chan and BitMitigate—and exposing that Epik had been less than truthful about removing the Daily Stormer from Voxility’s infrastructure in the process.

8chan administrators have posted on Twitter that they are still working to get the platform back up, but as of 10 am AEST on 6 August, 8chan remains down.

Tag Archive for: Technology

Nothing Found

Sorry, no posts matched your criteria