Don’t let Beijing’s AI charm offensive fool us

There’s one thing China’s ambassador to Australia got right in his call to add artificial intelligence to the China-Australia Free Trade Agreement (ChAFTA): ‘China has always viewed Australia and China-Australia relations from a strategic and long-term perspective.’ That long view should concern us deeply, because Beijing is now using AI to entangle our economies in ways that could become strategically irreversible.

After three years of stabilisation, China is seeking to lock in gains that go far beyond tariffs and trade volumes. The proposal to expand ChAFTA to include AI and digital technologies should be seen for what it is: a move to create asymmetric dependence in one of the most strategically sensitive domains of the 21st century.

The pitch, made by ambassador Xiao Qian in the Australian Financial Review, is audacious, not least because it invokes a framework that China has already shown it’s willing to ignore when convenient. While ChAFTA opened access to Chinese markets in 2015, half a decade of coercive trade actions over wine, barley, lobster and timber revealed just how little protection the agreement offers when Canberra does something Beijing dislikes.

With traditional exports such as coal or beef, Australia could eventually find alternative markets when under pressure. But with AI the stakes are higher and the dependencies harder to unwind. This isn’t just a matter of economic resilience; it’s about control over the digital infrastructure that is set to underpin everything from healthcare to national security.

AI systems already operate on surveillance-driven business models and they’re becoming increasingly intrusive. In just a few years of large language model deployment, we’ve seen users move from drafting emails to confiding private anxieties, uploading sensitive work files and using chatbots for mental health support. At an individual level, that’s a privacy concern. At the scale of a society, it’s a strategic vulnerability.

Even OpenAI chief executive Sam Altman has warned that conversations with AI should be treated with the same confidentiality as speaking with a doctor or lawyer. He’s advocating for a new AI privilege in court as legal precedent threatens to force AI firms to retain user conversations indefinitely, a development that, Altman suggests, undermines every promise of user trust and privacy.

Now consider what this looks like in a Chinese context: under Chinese law, firms are compelled to co-operate with the state, giving the Chinese Communist Party not only the capability but also the demonstrated intent to turn commercial tools into surveillance and influence systems. The idea of integrating Chinese AI firms bound by those rules more fully into Australia’s digital ecosystem should raise alarm bells in Canberra. If Beijing recognises an AI privilege at all, it’s a privilege reserved exclusively for the CCP, granting itself unfettered access to everyone else’s data.

The situation is about to get messier. Signal president Meredith Whittaker warns that the next generation of agentic AI—tools that act autonomously on the user’s behalf—will require near-total access to your personal data. These AI assistants need root access to your messages, calendar, location and more. You’re not just giving them information; you’re providing context, intent and authority. As Whittaker puts it, it’s like putting your brain in a jar while the AI examines your most sensitive information and transmits it to the cloud.

But this isn’t merely a data security issue. It’s also about ideology: AI systems trained and tuned inside China come embedded with content moderation aligned to CCP values. These systems are increasingly exported to governments worldwide through training packages and tech transfers. If they are allowed into Australia’s digital infrastructure, we risk letting a foreign authoritarian power shape how our next generation perceives the world.

The fundamental difference runs deeper than regulation. Democracies and autocracies don’t just regulate AI differently; they conceptualise it differently.

Liberal systems typically employ risk-based frameworks focused on harm prevention. Beijing, by contrast, approaches AI through a lens of social stability and ideological compliance. This divergence isn’t merely philosophical; it manifests in the algorithms themselves, shaping the outputs that Chinese systems allow, filter, recommend or censor.

To be clear: not all Chinese AI systems are equally risky. Applications such as cancer imaging or tutoring software are not equivalent to facial recognition or behavioural analytics. A blanket ban would be costly and unrealistic. What we need is targeted de-risking, not decoupling for its own sake.

But let’s not be naive. AI now sits at the intersection of national security, critical infrastructure and democratic resilience. If the past decade of ChAFTA has taught us anything, it’s this: when China treats international agreements as optional, Australia ends up dangerously exposed. We shouldn’t repeat this mistake, especially not in a domain as sensitive and irreversible as AI.

The ambassador is right about one thing: this is about the long term. The question is whether we’ll learn from the past decade’s lessons or repeat them in the most consequential technology domain of our time.

This article was originally published in The Australian.