As China’s AI industry grows, Australia must support its own

The growth of China’s AI industry gives it great influence over emerging technologies. That creates security risks for countries using those technologies. So, Australia must foster its own domestic AI industry to protect its interests.
To do that, Australia needs a coordinated national AI strategy grounded in long-term security, capability building and international alignment.
The Australian government’s decision in February to ban Chinese AI model DeepSeek from government devices showed growing concern about the influence of foreign technology. While framed as a cybersecurity decision, the ban points to a broader issue: Chinese-linked platforms are already present across Australia, in cloud services, academic partnerships and hardware supply chains. Banning tools after they’re embedded is too late. The question is how far these dependencies reach, and how to reduce them.
China’s lead in AI isn’t just due to planning and investment. It has also benefited from state-backed strategies that exploit gaps in international rules.
In early 2025, OpenAI accused DeepSeek of using its proprietary models without permission. Weeks later, a former Google engineer was indicted in the United States for stealing AI trade secrets to help launch a Chinese startup. A US House of Representatives Committee report logged 60 cases of Chinese-linked cyber espionage across 20 states. In 2023, Five Eyes intelligence leaders directly accused Beijing of sustained intellectual property theft campaigns targeting advanced technologies. And a recent CrowdStrike report documented a 150 percent surge in China-backed cyber espionage in 2024, with critical industries hit hardest.
Such methods help Chinese firms accelerate development and release advanced versions of tools first created elsewhere.
ASPI’s Tech Tracker shows the effect of these strategies. China leads Australia by a wide margin in research output and impact in such fields as machine learning, natural language processing, AI hardware and integrated circuit design. These fields form the technological and academic foundation of modern AI systems.
And the research gap is growing. China produces more AI research and receives more citations, allowing it to shape the global AI agenda. In contrast, Australia’s contribution is limited in advanced data analytics, adversarial AI and hardware acceleration. And Australia is dependent on imported ideas and models when it comes to natural language processing and machine learning.
China also outpaces Australia in talent acquisition. In every major AI domain, including natural language processing, integrated circuits and adversarial AI, China is a top destination for leading researchers. Australia struggles to recruit and retain high-end AI talent, which limits its ability to scale local innovation.
China’s tech giants are closely aligned with state goals. Following the strategy of military-civil fusion, Chinese commercial breakthroughs are routinely directed into national security or surveillance applications. That creates risk when their technologies are used in third countries, through applications in transport, education, health and infrastructure.
Australia is accelerating domestic AI development but lacks a coordinated national strategy. The country remains heavily reliant on foreign-built systems and opaque partnerships that carry long-term strategic and economic costs. This embeds AI systems that Australia does not control into its critical infrastructure. The more dependent Australia becomes on these systems, the harder it will be to disentangle itself in the future.
A coordinated national strategy should rest on four key pillars.
First, AI infrastructure should be treated as critical infrastructure. This includes not just hardware, but also training datasets, foundational models, software libraries and deployment environments. A government-led audit should trace where AI systems are sourced, who maintains them and what hidden dependencies exist, especially for public services, utilities and strategic industries. This baseline is essential for identifying risks and opportunities.
Second, Australia should invest in trusted alternatives and sovereign capabilities. Australia alone cannot build an entire AI stack—including data infrastructure, machine learning frameworks, models and applications—but it can co-develop secure technologies with trusted allies. It should use partnerships such as AUKUS and the Quad to explore open foundational models, secure compute infrastructure and interoperable governance frameworks.
Third, Australia must manage research collaboration more carefully. Australian universities and labs are globally respected, but they are navigating a geopolitical landscape with little structured guidance. Building on the 2019 guidelines to counter foreign interference in universities, the government should establish clearer rules around high-risk partnerships. For example, it could develop tools to assess institutional exposure and track dual-use research. Risk management should not be punitive; rather, it should support researchers to make informed choices.
Fourth, Australia can lead on standard-setting in the Indo-Pacific. Many countries in the region also wonder how to harness AI while preserving autonomy, enhancing prosperity and minimising security risks. Australia can play a regional leadership role by promoting transparent development practices, fair data use and responsible AI deployment.
AI is shaping everything from diplomacy to defence. Australia cannot afford to remain dependent on foreign-built models. The question is whether Australia wants to shape those systems or be shaped by them.