Tag Archive for: Artificial Intelligence

As China’s AI industry grows, Australia must support its own

The growth of China’s AI industry gives it great influence over emerging technologies. That creates security risks for countries using those technologies. So, Australia must foster its own domestic AI industry to protect its interests.

To do that, Australia needs a coordinated national AI strategy grounded in long-term security, capability building and international alignment.

The Australian government’s decision in February to ban Chinese AI model DeepSeek from government devices showed growing concern about the influence of foreign technology. While framed as a cybersecurity decision, the ban points to a broader issue: Chinese-linked platforms are already present across Australia, in cloud services, academic partnerships and hardware supply chains. Banning tools after they’re embedded is too late. The question is how far these dependencies reach, and how to reduce them.

China’s lead in AI isn’t just due to planning and investment. It has also benefited from state-backed strategies that exploit gaps in international rules.

In early 2025, OpenAI accused DeepSeek of using its proprietary models without permission. Weeks later, a former Google engineer was indicted in the United States for stealing AI trade secrets to help launch a Chinese startup. A US House of Representatives Committee report logged 60 cases of Chinese-linked cyber espionage across 20 states. In 2023, Five Eyes intelligence leaders directly accused Beijing of sustained intellectual property theft campaigns targeting advanced technologies. And a recent CrowdStrike report documented a 150 percent surge in China-backed cyber espionage in 2024, with critical industries hit hardest.

Such methods help Chinese firms accelerate development and release advanced versions of tools first created elsewhere.

ASPI’s Tech Tracker shows the effect of these strategies. China leads Australia by a wide margin in research output and impact in such fields as machine learning, natural language processing, AI hardware and integrated circuit design. These technologies form the foundation of modern AI systems and the academic disciplines built around them.

And the research gap is growing. China produces more AI research and receives more citations, allowing it to shape the global AI agenda. In contrast, Australia’s contribution is limited in advanced data analytics, adversarial AI and hardware acceleration. And Australia is dependent on imported ideas and models when it comes to natural language processing and machine learning.

China also outpaces Australia in talent acquisition. In every major AI domain, including natural language processing, integrated circuits and adversarial AI, China is a top destination for leading researchers. Australia struggles to recruit and retain high-end AI talent, which limits its ability to scale local innovation.

China’s tech giants are closely aligned with state goals. Following the strategy of military-civil fusion, Chinese commercial breakthroughs are routinely directed into national security or surveillance applications. That creates risk when their technologies are used in third countries, through applications in transport, education, health and infrastructure.

Australia is accelerating domestic AI development but lacks a coordinated national strategy. The country remains heavily reliant on foreign-built systems and opaque partnerships that carry long-term strategic and economic costs. This embeds AI systems that Australia does not control into its critical infrastructure. The more dependent Australia is on these systems, the more it will struggle to disentangle itself in the future.

A coordinated national strategy should rest on four key pillars.

First, AI infrastructure should be treated as critical infrastructure. This includes not just hardware, but also training datasets, foundational models, software libraries and deployment environments. A government-led audit should trace where AI systems are sourced, who maintains them and what hidden dependencies exist, especially for public services, utilities and strategic industries. This baseline is essential for identifying risks and opportunities.

Second, Australia should invest in trusted alternatives and sovereign capabilities. Australia alone cannot build an entire AI stack—including data infrastructure, machine learning frameworks, models and applications—but it can co-develop secure technologies with trusted allies. It should use partnerships such as AUKUS and the Quad to explore open foundational models, ways to secure compute infrastructure, and the development of interoperable governance frameworks.

Third, Australia must manage research collaboration more carefully. Australian universities and labs are globally respected, but they are navigating a geopolitical landscape with little structured guidance. Building on 2019 guidelines to counter foreign interference in universities, the government should establish clearer rules around high-risk partnerships. For example, it could develop tools to assess institutional exposure and track dual-use research. Risk management should not be punitive but rather support researchers to make informed choices.

Fourth, Australia can lead on standard-setting in the Indo-Pacific. Many countries in the region also wonder how to harness AI while preserving autonomy, enhancing prosperity and minimising security risks. Australia can play a regional leadership role by promoting transparent development practices, fair data use and responsible AI deployment.

AI is shaping everything from diplomacy to defence. Australia cannot be dependent on foreign-built models. The question is whether Australia wants to shape those systems or be shaped by them.

How to spot AI influence in Australia’s election campaign

Be on guard for AI-powered messaging and disinformation in the campaign for Australia’s 3 May election.

And be aware that parties can use AI to sharpen their campaigning, zeroing in on issues that the technology tells them will attract your vote.

In 2025, there are still ways to detect AI-generated content. Voters can use this knowledge. So can the authorities trying to manage a proper election campaign. The parties can, too, as they try to police each other. In the digital age, we must be vigilant against the various tactics, strengthened or driven by AI, that aim to manipulate and deceive.

Some tactics are already heavily associated with AI. Deepfakes—images or videos that use hyper-realistic fabricated visuals to deceive—are a particularly concerning example. Automated engagement is another example, involving AI-driven bots and algorithms to amplify likes, shares and comments to create the illusion of widespread support.

But political actors are now using AI to improve tried-and-tested influence tactics. These methods include:

—Sponsored posts that mimic authentic content, such as news, to subtly promote a product, service or agenda without clear disclosure, potentially influencing opinions;

—Clickbait headlines that are crafted to grab attention and drive clicks, often exaggerating claims or omitting key context to lure readers;

—Fake endorsements providing false credibility, authenticity or authority through fabricated testimonials or endorsements;

—Selective presentation of facts, skewing narratives by focusing on specific data points that support one perspective while omitting contradictory evidence; and

—Emotionally charged content aimed at provoking strong reactions, clouding judgment and influencing impulsive decisions.

Deepfakes can be identified by inconsistencies in lighting, unnatural facial movements or mismatched audio and lip-syncing. Tools such as reverse image search or AI detection software can help verify authenticity. Automated engagement typically involves accounts with generic usernames and minimal personal information that display repetitive posting patterns. These are strong indicators that an account may be an AI-driven bot.

Sponsored posts can be checked for disclaimer labels such as ‘sponsored’ or ‘ad’. Users should be cautious of posts that seem overly polished or perfectly tailored to their interests.

Clickbait headlines, if they seem too outrageous or emotionally charged, should be read critically to verify their claims. Cross-checking with reputable sources can help users spot inaccuracies. Similarly, one-sided arguments and missing context are both strong indicators of a selective presentation of facts. Consulting multiple sources can help build a balanced view of the issue.

Fake endorsements can be verified by checking the official channels of the purported endorser. Inconsistencies in language or tone between the channels and the post may indicate fabrication.
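For readers comfortable with code, the bot-detection heuristics above can be expressed as a simple screening script. The sketch below is illustrative only: the account fields (‘username’, ‘bio’, ‘posts’) and the thresholds are hypothetical assumptions, not any platform’s real API or validated cut-offs.

```python
from collections import Counter

def bot_signals(account):
    """Check an account against the rough bot indicators discussed above:
    a generic username, minimal personal information and repetitive posting.
    `account` is a hypothetical dict, not any real platform's API object."""
    signals = []

    # Generic username: a handle dominated by digits, e.g. 'user84751023'.
    name = account['username']
    if sum(ch.isdigit() for ch in name) / max(len(name), 1) > 0.5:
        signals.append('generic username')

    # Minimal personal information: an empty or near-empty bio.
    if len(account.get('bio', '')) < 10:
        signals.append('minimal personal information')

    # Repetitive posting: the same text posted again and again.
    post_counts = Counter(account.get('posts', []))
    if post_counts and post_counts.most_common(1)[0][1] >= 5:
        signals.append('repetitive posting pattern')

    return signals

# Example: an account that trips all three indicators.
suspect = {
    'username': 'user84751023',
    'bio': '',
    'posts': ['Vote for X!'] * 8 + ['Great rally today.'],
}
print(bot_signals(suspect))
# ['generic username', 'minimal personal information', 'repetitive posting pattern']
```

No single signal is conclusive; as with the manual checks above, it is the combination of indicators that suggests automation.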

For parties, AI offers transformative opportunities for campaigning. Data-driven targeting can help to analyse voter demographics, preferences and behaviours more effectively. This allows parties to craft highly targeted messages, ensuring campaigns reach the right audience with the right message.

Predictive analytics forecast voter turnout and behaviour, helping campaigns focus efforts on swing regions or undecided voters. For campaigns aiming to narrow their focus, AI can help to craft personalised communication. This content is tailored to individual voters, making interactions feel more personal and engaging.

AI can also be used to monitor social media and public sentiment, providing real-time feedback. These instant insights into voter reactions allow campaigns to adapt their strategies on the fly. Beyond analytics and outreach, AI programs can be developed to optimise campaign budgets by identifying the most impactful channels and strategies, reducing waste and ensuring effective resource allocation.

Finally, while it can be used to mislead, automated engagement has ethical applications. Through chatbots and virtual assistants powered by AI, parties can handle voter queries, provide information and streamline processes such as voter registration.

AI is reshaping political campaigning, offering unprecedented opportunities and challenges. While it sharpens strategies and enhances efficiency, it also necessitates vigilance to ensure ethical use and protect against manipulation. By staying informed and critical, individuals can navigate this evolving landscape with confidence.

As tensions grow, an Australian AI safety institute is a no-brainer

Australia needs to deliver on its commitment under the Seoul Declaration to create an Australian AI safety, or security, institute. Australia is the only signatory to the declaration that has yet to meet its commitments. Given the broader erosion of global norms, now isn’t the time to break commitments to allies and partners such as Britain, South Korea and the European Union.

China has also entered this space: it has created an AI safety institute, signalled intent to collaborate with the Western network of such organisations and commented on the global governance of increasingly powerful AI systems.

Developments in the United States further demand an Australian safety institute. The US is radically deregulating its tech sector, taking risky bets on integrating AI with government, and racing to beat China to artificial general intelligence—a theoretical system that would rival human thinking. Collectively, these trends mean that AI risks—such as cyber offensive capability; widespread availability of chemical, biological, radiological and nuclear weapons; and loss of control over advanced systems—are less likely to be addressed at their source: the frontier labs. Australia needs to act.

Fortunately, we have options for addressing AI safety and security concerns. Minister for Industry and Science Ed Husic’s ‘mandatory guardrails’ consultation mooted an Australian AI Act that would align with the EU and impose basic requirements on high-risk AI models. Australia can foster its domestic AI assurance technology industry, and we can expand our productive involvement in multilateral approaches, ensuring that safety and security remain a global priority.

While an Australian AI Act has policy merit, it might face a rocky political path. In March, the Computer & Communications Industry Association—a peak body with members including Amazon, Apple, Google, X and Meta—urged US President Donald Trump to bring the News Media Bargaining Code into a US-Australia trade war. In the same submission, the association complained about the proliferation of AI laws and the proposed Australian regulation of high-risk AI models.

An Australian AI safety institute would be an immediate way to protect Australian interests and create a new path to collaborate with our allies without these political risks. In addition to giving us a seat at the table, such an institute would reduce our dependency on others for technical AI safety and security. In other security domains, we’ve seen dependency used as a bargaining chip in transactional negotiations. This is still something we have time to avoid for AI.

Domestic pressure is building. In March, Australia’s AI experts united in a call for action, including the establishment of an Australian safety institute and an Australian AI Act. The letter will remain open to expert and public support until the election.

Australian AI expert and philosopher Toby Ord, a senior researcher at Oxford University and author of The Precipice: Existential risks and the future of humanity, said:

Australia risks being in a position where it has little say on the AI systems that will increasingly affect its future. An [Australian AI safety institute] would allow Australia to participate on the world stage in guiding this critical technology that affects us all.

And it’s not just the experts. Australians are more worried about AI risks than the people of any other nation for which we have data.

The experts and the public are right. It’s realistic that we will see transformative AI during the next term of government, though expert opinion varies on the exact timing. Regardless, the window for Australia to have any influence over these powerful and risky systems is rapidly closing.

Britain recently renamed its ‘AI Safety Institute’ as the ‘AI Security Institute’ but without significantly changing its priorities. The institute targets AI capabilities that enable malicious actors and the potential loss of control of advanced AI systems, including the ability to deceive human operators or autonomously replicate.

Given that these are fundamentally national security issues, perhaps ‘security’ was a better name from the start and appropriate for Australia to use for our institute.

The US has many chip vulnerabilities

Although semiconductor chips are ubiquitous nowadays, their production is concentrated in just a few countries, and this has left the US economy and military highly vulnerable at a time of rising geopolitical tensions. While the United States commands a leading position in designing and providing the software for the high-end chips used in AI technologies, production of the chips themselves occurs elsewhere. To head off the risk of catastrophic supply disruptions, the US needs a coherent strategy that embraces all nodes of the semiconductor industry.

That is why the CHIPS and Science Act, signed by President Joe Biden in 2022, provided funding to reshore manufacturing capacity for high-end chips. According to the Semiconductor Industry Association, the impact has been significant: currently planned investments should give the US control of almost 30 percent of global wafer fabrication capacity for chips below ten nanometres by 2032. Only Taiwan and South Korea currently have foundries to produce such chips. China, by contrast, will control only 2 percent of manufacturing capacity, while Europe and Japan’s share will rise to about 12 percent.

But US President Donald Trump is now trying to roll back this strategy, describing the CHIPS Act—one of his predecessor’s signature achievements—as a waste of money. His administration is instead seeking to tighten the export restrictions that Biden introduced to frustrate China’s AI ambitions.

It is a strategic mistake to de-emphasise strengthening domestic capacity through targeted industrial policies. Coercive measures against China have not only proved ineffective, but may even have accelerated Chinese innovation. DeepSeek’s highly competitive models were apparently developed at a fraction of the cost of OpenAI’s. A substantial share of the semiconductors used in DeepSeek’s R1 model were smuggled through intermediaries in Singapore and other Asian countries, and DeepSeek relied on clever engineering techniques to overcome the remaining hardware limitations it faced. Meanwhile, Chinese tech giants such as Alibaba and Tencent are developing similar AI models under similar supply constraints.

Even before the DeepSeek breakthrough, there were doubts about the effectiveness of US trade restrictions. The Biden administration’s export ban, adopted in October 2022, targeted chips smaller than 16nm, banning not only exports of the final product, but also the equipment and the human capital needed to develop them. Less than a year later, in August 2023, Huawei launched a new smartphone model (the Mate 60) that uses a 7nm chip.

Even if China no longer has access to the most advanced lithography machines, it can still use old ones to produce 7nm chips, albeit at a higher cost. While these older machines do not allow it to go below 7nm (Taiwan Semiconductor Manufacturing Company is working on 1nm chips), Huawei and DeepSeek’s achievements are a cautionary tale. China now has every reason to develop its own semiconductor industry, and it may have made more progress than we think.

To reduce its own supply-chain vulnerabilities, the US cannot rely on an isolationist reshoring-only approach. Given how broadly the current supply chain is distributed, leveraging existing alliances is the only viable way forward. ASML, the Dutch firm with a near-monopoly on the high-end lithography machines used to make the most advanced chips, cannot simply be recreated overnight.

So far, the US has focused on reducing security risks related to the most sophisticated chips, giving short shrift to the higher-node chips that are needed to run modern economies. Yet these legacy chips (those above 28nm) are key components in cars, airplanes, fighter jets, medical devices, smartphones, computers and much more.

According to the Semiconductor Industry Association, China is expected to control almost 40 percent of global wafer fabrication capacity for these types of chips by 2032, while Taiwan, the US and Europe will account for 25 percent, 10 percent, and 3 percent, respectively. China will thus control a major strategic chokepoint, enabling it to bring the US economy to a halt with its own export bans. It also will have a sizable military edge, because it could impair US defences by cutting off the supply of legacy chips. Finally, China’s security services could put back doors into Chinese-made chips, allowing for espionage or even cyberattacks on US infrastructure.

Compounding the challenge, Chinese-made chips are usually already incorporated into final products by the time they reach the US. If the US wants to curtail imports of potentially compromised hardware, it will have to do it indirectly, tracking down chips at customs by dismantling assembled products. That would be exceedingly costly.

Fortunately, the US does not lack policy tools to reduce its vulnerabilities. When it comes to military applications of legacy chips, it can resort to procurement restrictions, trade sanctions (justified on national-security grounds), and cybersecurity defences. As for expanding domestic production capacity, it can use anti-dumping and countervailing duties to counter unfair Chinese practices, such as its heavy subsidisation of domestic producers.

Chips, and the data they support, will be the oil of the future. The US needs to devise a comprehensive strategy that addresses the full range of its current vulnerabilities. That means looking beyond the most advanced chips and the AI race.

South Korea has acted decisively on DeepSeek. Other countries must stop hesitating

South Korea has suspended new downloads of DeepSeek, and it was right to do so.

Chinese tech firms operate under the shadow of state influence, misusing data for surveillance and geopolitical advantage. Any country that values its data and sovereignty must watch this national security threat and take note of South Korea’s response.

Every AI tool captures vast amounts of data, but DeepSeek collects data unnecessary to its function as a simple chatbot. The company was caught over-collecting personal data and failed to be transparent about where that data was going. This typifies China’s lack of transparency about data collection, usage and storage.

South Korea’s National Intelligence Service flagged the chatbot for logging keystrokes and chat interactions, which were all stored on Chinese-controlled servers.

Once data enters China’s jurisdiction, it’s fair game for Beijing’s intelligence agencies. That’s not paranoia; it’s the law. Chinese companies must hand over data to the government upon request. South Korea saw the writing on the wall and acted before it was too late.

Data in the wrong hands can be weaponised. By cross-referencing DeepSeek’s collected data with other stolen datasets, Chinese intelligence agencies could build profiles on foreign officials, business leaders, journalists and dissidents. Keystroke tracking could help to identify individuals even when they use anonymous communication platforms. AI-powered analysis could pinpoint behavioral patterns, making it easier to manipulate public opinion or even blackmail individuals with compromising data.

If this sounds familiar, you’re not mistaken. Huawei was banned from operating 5G networks in multiple countries based on similar concerns. TikTok has come under scrutiny for its ties to the Chinese government. China has spent years perfecting cyber-espionage and DeepSeek appears to be the latest tool in its arsenal, joining the growing list of Chinese tech products raising red flags.

Chinese actors have displayed a pattern of digital intrusion. Recent events include the Volt Typhoon and Salt Typhoon operations, which targeted US digital infrastructure and telecom networks. These attacks compromised the data of more than one million people, including government officials. Looking to Europe, Germany fell victim to Chinese-backed hackers breaching its federal cartography agency.

China is using AI tools for influence, data gathering and geopolitical maneuvering. AI is a versatile tool through which the flow of information is controlled.

The risk goes far beyond espionage. It extends to economic coercion and intellectual property theft. For example, multinational companies relying on AI-powered tools may unknowingly send sensitive business strategies to foreign adversaries. Government agencies may unknowingly feed points of information that would be classified in aggregate into an AI system that Beijing can tap into. The consequences would be far-reaching and deeply troubling.

What if South Korea had looked the other way? Millions of South Korean citizens would have been at risk of Chinese coercion and exposed to data harvesting under the guise of harmless AI. In an era where data shapes power, handing control to foreign entities is dangerous.

Some countries are beginning to grasp these threats. India and Australia are ramping up scrutiny of foreign AI applications, and Australia and Taiwan have banned DeepSeek on government devices. The European Union is tightening regulations to demand transparency and accountability for data usage.

The United States, on the other hand, is still deliberating. President Donald Trump has framed AI as a push for Silicon Valley to lift its game, rather than considering the technology’s national security implications. US lawmakers are beginning to propose restrictions on AI tools linked to foreign adversaries. Texas officials and US Navy personnel, for example, have been banned from using DeepSeek due to its links to the Chinese government.

However, regulatory action has been slow to gain traction, caught in a web of political disagreements and lobbying pressures. Meanwhile, security agencies warn that inaction could leave critical infrastructure and government institutions vulnerable to AI-driven espionage. Without decisive policies, the US risks becoming not only a prime target for data manipulation and intelligence gathering, but a soft target. It must act to prevent another major data breach, before it finds itself reacting to one. Waiting is not an option.

China’s AI ambitions aren’t slowing down, and global vigilance must not flag. The battle for digital sovereignty is already underway, and governments that hesitate will find themselves at a disadvantage from both economic and security standpoints.

Act now or pay later. AI is the new frontier of global competition, and data is the ultimate weapon. Those who don’t secure it will face the consequences. South Korea made the right move—who’s next?

Southeast Asia faces AI influence on elections

Artificial intelligence is becoming commonplace in electoral campaigns and politics across Southeast Asia, but the region is struggling to regulate it.

Indonesia’s 2024 general election exposed both the actual harms of AI-driven politics and the overhyped concerns that distracted from its real dangers. As the Philippines and Singapore head to the polls in 2025, they can draw lessons from Indonesia’s experience while tailoring those insights to their own electoral landscapes.

While deepfakes dominated concerns in last year’s elections, a quieter threat loomed: unregulated AI-driven microtargeting. These covert and custom messages are delivered at scale via private channels or dark posts—targeted advertisements that don’t appear on the publisher’s page, making them difficult to track. This isolates recipients, making verification trickier. The risk is even greater in Southeast Asia, where fake news thrives amid low media literacy rates.

AI in Indonesia’s general election was more commonly used for image polishing and rebranding than attacking opponents, though some attacks occurred. Prabowo Subianto, a retired military general known for his fiery nationalism, rebranded himself as a cuddly grandfather to soften his strongman image. This redirected the focus from substantial issues, such as corruption and economic challenges, to superficial narratives, including his cheerful dances.

Darker deepfakes also emerged, such as an audio clip of then presidential candidate Anies Baswedan being scolded by the chair of the National Democrat Party, Surya Paloh. A video of late President Suharto endorsing the Golkar party also went viral. This was controversial given Suharto’s dictatorship and violent record.

Microtargeting in Indonesia also notably focused on young voters instead of racial segments. Prabowo’s rebranding resonated with youth—usually first-time voters who lacked political maturity. This demographic emerged as an important voter segment, comprising about 60 percent of the total electorate in Indonesia’s 2024 general election.

The situation underscores the need for deliberate regulation. Currently, Indonesia’s Electronic Information and Transactions Law and Personal Data Protection Law address electronic content, including deepfakes, but lack election-specific AI guidelines. The General Election Committee could have helped, but it earlier declared AI regulation beyond its jurisdiction. Instead, Indonesia’s Constitutional Court now prohibits the use of AI for political campaigning.

Indonesia’s experience offers valuable lessons for its close neighbours. In May 2025, the Philippines will hold mid-term elections, and Singapore will have a general election this year too. Both nations are enforcing some rules, but their approaches differ from Indonesia’s.

Given the Philippines’ complex experience enforcing technology-related bans (some effective, others not so much), simply prohibiting AI during elections may not be ideal. Instead, the Commission on Elections is taking the transparency route, requiring candidates to register their digital campaign platforms—including social media accounts, websites and blogs—or face penalties. While the use of deepfakes is prohibited, AI is permitted with disclosure.

Singapore has previously implemented measures that ensure comprehensive coverage. For instance, its Elections Bill complements its legislation on falsehoods by barring AI-generated deepfakes targeting candidates. However, the proposed legislation applies only during the official election period and excludes private conversations, potentially leaving gaps for disinformation outside election season, microtargeting through private messaging and deepfakes of influential non-candidates. Such vulnerabilities have already been observed in Indonesia.

These cases also highlight Southeast Asia’s uneven regulatory readiness. Tackling AI risks demands a stronger stance, more binding than a guide or roadmap, bolstered by whole-of-society collaboration to address complex challenges.

An article in Time argued the effect of AI on elections in 2024 was underwhelming, pointing to the quality—or lack thereof—of viral deepfakes. But Indonesia’s case suggests that power may lie not just in persuasiveness but also in appeal. Prabowo’s camp successfully used AI-generated figures to polish his image and distract people from real problems.

To dismiss the effect of AI is to miss the normalisation of unregulated AI-powered microtargeting. Last year revealed AI’s capability to target vulnerable yet sizable populations such as the youth in Indonesia, potentially beyond election cycles.

Blanket bans are an easy cop-out and may just encourage covert uses of AI. With choices available, people can simply turn to other providers. When OpenAI banned its use for political campaigning and generating images of real people, Prabowo turned to Midjourney, an AI image generator.

An alternative solution is to ensure transparent and responsible AI use in elections. This requires engaging those with contextual knowledge of the electorate—academics, industry leaders, the media, watchdogs and even voters themselves—alongside policymakers such as electoral commissions and national AI oversight bodies. But a key challenge remains: some Southeast Asian countries still lack dedicated AI regulatory bodies, or even AI strategies.

In the development of such bodies and strategies, public participation in AI policy consultations could ensure electorate concerns are heard. For instance, Malaysia’s National AI Office recently opened a call for experts and community representatives to help shape the country’s AI landscape. International organisations may also contribute through capacity building and stakeholder engagement, fostering relevant AI policies and regulations.

Certainly, further studies are needed for tailored AI governance for specific societies. But overall, adaptive and anticipatory regulation that evolves as technology advances will help mitigate AI-related risks in Southeast Asian elections and beyond.

DeepSeek is in the driver’s seat. That’s a big security problem

Democratic states have a smart-car problem. For those that don’t act quickly and decisively, it’s about to become a severe national security headache.

Over the past few weeks, about 20 of China’s largest car manufacturers have rushed to sign new strategic partnerships with DeepSeek to integrate its AI technology into their vehicles. This poses immediate security, data and privacy challenges for governments. While international relations would be easier if it weren’t the case, China’s suite of national security and intelligence laws makes it impossible for Chinese companies to truly protect the data they collect.

China is the world’s largest producer of cars and is now making good quality, low-cost and tech-heavy vehicles at a pace no country can match. Chinese companies have also bought European industry stalwarts, including Volvo, MG and Lotus. Through joint ventures, China builds and exports a range of US and European car models back into global markets.

DeepSeek has struck partnerships with many large companies, such as BYD, Great Wall Motor, Chery, SAIC (owner of MG and LDV) and Geely (owner of Volvo and Lotus). In addition, major US, European and Japanese brands, including General Motors, Volkswagen and Nissan, have signed on to integrate DeepSeek via their joint ventures.

Australia is one of the many international markets where Chinese cars have gained enormous traction. More than 210,000 new Chinese cars were sold in Australia in 2024, and Chinese brands are set to take almost 20 percent of the market in 2025, up from 1.7 percent in 2019. Part of this new success is due to the government’s financial incentives encouraging Australians to purchase electric vehicles. China now builds about 80 percent of all electric vehicles sold in Australia.

Then, there are global markets where Chinese car brands are not gaining the market share they have in Australia (or in Russia, the Middle East and South America), but where Chinese-made cars are. This is the case in the United States and in Europe, for example. This is because many foreign companies use their joint ventures in China to sell China-made, foreign-branded cars into global markets. Such companies include Volkswagen, Volvo, BMW, Lincoln, Polestar, Hyundai and Kia.

Through its Chinese joint venture, Volkswagen will reportedly partner with DeepSeek. General Motors has also said it will integrate DeepSeek into its next-generation vehicles, including Cadillacs and Buicks. It’s unclear how many such cars may end up in overseas markets this year; that will likely depend on each country’s regulations.

It is not surprising that DeepSeek is a sought-after partner, with companies scrambling to integrate and build off its technology. It also shouldn’t have been a shock to see this AI breakthrough coming out of China—and we should expect a lot more. Chinese companies, universities and scientific institutions made impressive gains over the past two decades across most critical technology areas. Other factors, such as industrial espionage, have also helped.

But widespread integration of Chinese AI systems into products and services carries serious data, privacy, governance, censorship, interference and espionage risks. These risks are unlikely ever to go away, and few government strategies will be able to keep up.

For some nations, especially developing countries, this global integration will be a bit of a non-event. It won’t be seen as a security issue that deserves urgent policy attention above other pressing climate, human security, development and economic challenges.

But for others, it will quickly become a problem—a severe one, given the speed at which this integration could unfold.

Knowing the risks, governments (federal and state), militaries, university groups and companies (such as industrial behemoth Toyota) have moved quickly to ban or limit the use of DeepSeek during work time and via work devices. Regulators, particularly across Europe, are launching official investigations. South Korea has gone further than most and taken it off local app stores after authorities reportedly discovered that DeepSeek was sending South Korean user data to Chinese company ByteDance, whose subsidiaries include TikTok.

But outside of banning employee use of DeepSeek, the integration of Chinese AI systems and models into data-hungry smart cars has not received due public attention. This quick development will test many governments globally.

Smart cars are packed full of the latest technology and are built to integrate into our personal lives. As users move between work, family and social commitments, they travel with a combination of microphones, cameras, voice recognition technology, radars, GPS trackers and, increasingly, biometric devices—such as those for fingerprint scanning and facial recognition to track driver behaviour and approve vehicle access. It’s also safe to assume that multiple mobile phones and other smart devices, such as smart watches, are present, some connecting to the car daily.

Then there is the information aspect—a potential influx of new AI assistants that will not always provide drivers with accurate and reliable information. At times, they may censor the truth or provide Chinese Communist Party talking points on major political, economic, security and human rights issues. If such AI models remain unregulated and continue to gain popularity internationally, they will expose future generations to systems that lack information integrity. As China’s internal politics and strategic outlook evolve, the amount of censored and false information provided to users of these systems will likely increase, as it does domestically for Chinese citizens.

Chinese-built and Chinese-maintained AI assistants may soon sit at the heart of a growing number of vehicles driven by politicians, military officers, policymakers, intelligence officials, defence scientists and others who work on sensitive issues. Democratic governments need a realistic and actionable plan to deal with this.

It may be possible to ensure that government-issued devices never connect to Chinese AI systems (although slip-ups can happen when people are busy and rushing), but it’s hard to imagine how users could keep most of their personal data from interacting with such systems. Putting all security obligations on the individual will not be enough.

Australia has been here before. It banned ‘high-risk vendors’ from its 5G telecommunications network in 2018, and the debates leading up to and surrounding that decision taught us how valuable it was for the business community to be given an early and clear decision—something some other countries struggled with. Geostrategic circumstances haven’t improved since Australia banned high-risk vendors from 5G; unfortunately, they’ve worsened.

Australia’s domestic policy settings are also driving consumers towards the very brands that will soon integrate DeepSeek’s technology, which politicians and policymakers have been told not to use. With politicians from all parties test-driving BYD and LDV vehicles, parliamentarians may need greater access to more regular security briefings to ensure they are fully across the risks, with updates provided in a timely fashion as those risks evolve.

Tackling this latest challenge head-on is a first-order priority that can’t wait until after the 2025 federal election.

Governments must ensure this issue is given immediate attention by their security agencies. This needs to include an in-depth assessment of the risks, as well as consideration of future challenges. Partners and allies should share their findings with each other. A useful model for such an assessment is Australia’s experience in 2017 and 2018 leading up to its 5G decision, when the Australian Signals Directorate conducted technical evaluations and scenario-planning.

There is also a question of choice, or rather the lack of it, that needs deeper reflection from governments when it comes to high-risk vendors. Democratic governments should not allow the commercial sector to offer only one product if that product originates from a high-risk vendor. Yet there are major internet providers in Australia that supply only Chinese TP-Link modems for some internet services, and businesses that sell only Hikvision or Dahua surveillance systems (both Chinese companies were added to the US Entity List in 2019 because of their association with human rights abuses and violations).

Not only do the digital rights of consumers have to be better protected; consumers must also be given genuine choices, including the right to not choose high-risk vendors. This is especially important in selecting vendors that will have access to personal data of citizens or connect to national critical infrastructure. Currently, across many countries, those rights are not being adequately protected.

As smart cars integrate AI systems, consumers deserve a choice on the origin of such systems, especially as censorship and information manipulation will be a feature of some products. Governments must also provide a commitment to their citizens that they are only greenlighting AI systems that have met a high standard of data protection, information integrity and privacy safeguards.

Which brings us back to DeepSeek and other AI models that will soon come out of China. If politicians, government officials, companies and universities around the world are being told they cannot use DeepSeek because such use is too high-risk, governments need to ensure they aren’t then forcing their citizens to take on those same risks, simply because they’ve given consumers no other choice.

Australia needs Australian AI

Australia must do more to shape its artificial intelligence future. The release of DeepSeek is a stark reminder that if Australia does not invest in its own AI solutions, it will remain reliant on foreign technology—technology that may not align with its values and often carries the imprints of its country of origin.

This reliance means that Australian user data and the economic benefits derived from it will continue to flow offshore, subject to foreign legal jurisdictions and foreign corporate priorities.

When people engage with AI chatbot assistant-type services from platforms such as ChatGPT, Gemini, Copilot or DeepSeek—via web interfaces, mobile apps, or application programming interfaces (or APIs)—they are sharing their data with these services as well as receiving AI-generated responses. The market entry of DeepSeek, which stores its data in China and moderates its responses to align with Chinese Communist Party narratives, raises two critical concerns: the exploitation of data for foreign interests and the ability of AI-generated content to shape public discourse.

AI platforms not based in Australia operate under the legal frameworks of their home countries. In the case of DeepSeek, this means compliance with China’s national intelligence laws, which require firms to provide data to the government on request. User inputs including text, audio and uploaded files, and user information such as registration details, unique device identifiers, IP address and even behavioural inputs like keystroke patterns, could be accessed by Chinese authorities. The flow of Australian data into China’s data ecosystem poses a long-term risk that should not be overlooked.

While individual data points may seem insignificant on their own, in aggregate they provide valuable insights that could be leveraged in ways contrary to Australian interests. As a 2024 ASPI report found, the CCP seeks to harvest user data from globally popular Chinese apps, games and online platforms, to ‘gauge the pulse of public opinion’, gain insight into societal trends and preferences, and thereby improve its propaganda.

This may be even more powerful for chatbots, which can collect data for aggregation to understand audience sentiment in particular countries, and also be used as a tool for influence in those countries. AI models are shaped by the priorities of their developers, the datasets they are trained on, and the fine-tuning processes that refine their outputs. This means AI does not just provide information, it can be trained to reinforce particular narratives while omitting others.

Many chatbots include a safety layer to filter harmful content such as instructions for making drugs or weapons. In the case of DeepSeek, this moderation extends to political censorship. The model refuses to discuss politically sensitive topics such as the 1989 Tiananmen Square protests and aligns with official CCP positions on topics such as Taiwan and territorial disputes in the South China Sea. AI-generated narratives influence public perception, which can pose risks to the democratic process and social cohesion, especially as these tools become more commonly embedded in search engines, education and customer service.

Australia’s response should be about having the right safeguards in place to mitigate known risks. It needs to ensure that AI systems used in the country reflect its values, security interests, and regulatory standards. This challenge demands that Australia play an active role in AI development and implement regulatory frameworks that protect against harms and foster domestic innovation.

DeepSeek challenges the idea that only tech giants with massive resources can develop competitive AI models. With a team of just 300, DeepSeek reportedly developed its model for less than US$6 million, far less than the $40 million training cost of OpenAI’s GPT-4, or the $22 million cost for training Mistral’s Mistral Large. While some experts argue this figure may not reflect the full cost—including potential access to restricted advanced processors before US export controls took effect—the broader lesson is clear: significant AI advances are possible without vast financial backing.

DeepSeek has proved that having talent matters even more than having tech giants, which highlights an opportunity for Australia to participate meaningfully in AI development.

To harness its potential, Australia must foster an environment that nurtures homegrown talent and innovation. Last week’s announcement of a $32 million investment in Australian AI healthtech firm Harrison.ai by the National Reconstruction Fund is a step in the right direction, but investment in a single company is not enough.

Australia needs to increase investment in education and research, strengthen existing developer communities—particularly open-source initiatives—support commercialisation efforts, and promote success stories to build momentum. A well-supported AI sector would allow Australia to harness the benefits of AI without attempting to match the spending power of global tech giants. The focus should be on fostering an environment where AI talent can thrive and ethical AI can flourish, ensuring that Australia reaps both the economic and societal benefits.

Without strategic investment in domestic AI capabilities, Australia risks ceding influence over critical technologies that will shape its economy, security and society in the years ahead. The challenge is not just technological—it is strategic. Without decisive action, Australia will remain a passive consumer of AI technologies shaped by foreign priorities and foreign commercial interests, with long-term consequences for democratic integrity, economic security and public trust in AI-driven systems.

Meeting this challenge requires more than just regulatory safeguards; it demands sustained support for a strong domestic tech ecosystem.

The crisis in Western AI is real

The release of the Chinese DeepSeek-R1 large language model, with its impressive capabilities and low development cost, shocked financial markets and led to claims of a ‘Sputnik moment’ in artificial intelligence. But a powerful, innovative Chinese model achieving parity with US products should come as no surprise. It is the predictable result of a major US and Western policy failure, for which the AI industry itself bears much of the blame.

China’s growing AI capabilities were well known to the AI research community, and even to the interested public. After all, Chinese AI researchers and companies have been remarkably open about their progress, publishing papers, open-sourcing their software and speaking with US researchers and journalists. A New York Times article from last July was headlined, ‘China Is Closing the AI Gap with the United States’.

Two factors explain China’s achievement of near parity. First, China has an aggressive, coherent national policy to reach self-sufficiency and technical superiority across the entire digital technology stack, from semiconductor capital equipment and AI processors to hardware products and AI models—and in both commercial and military applications. Second, US (and EU) government policies and industry behavior have exhibited a depressing combination of complacency, incompetence and greed.

It should be obvious that Chinese President Xi Jinping and Russian President Vladimir Putin are no friends of the West and that AI will drive enormously consequential economic and military transformations. Given the stakes involved, maintaining AI leadership within democratic advanced economies justifies, and even demands, an enormous public-private strategic mobilisation on the scale of the Manhattan Project, NATO, various energy-independence efforts, or nuclear-weapons policies. Yet the West is doing the opposite.

In the US, government and academic research in AI are falling behind both China and the private sector. Owing to inadequate funding, neither government agencies nor universities can compete with the salaries and computing facilities offered by the likes of Google, Meta, OpenAI, or their Chinese counterparts. Moreover, US immigration policy toward graduate students and researchers is self-defeating and nonsensical, because it forces highly talented people to leave the country at the end of their studies.

Then there is the US policy on regulating Chinese access to AI-related technology. Export controls have been slow to appear, wholly inadequate, poorly staffed, easily evaded, and under-enforced. Chinese access to US AI technologies through services and licensing agreements has remained nearly unregulated, even when the underlying technologies, such as Nvidia processors, are themselves subject to export controls. The US announced stricter licensing rules just a week before former President Joe Biden left office.

Finally, US policy ignores the fact that AI R&D must be strongly supported, used, and, where necessary, regulated throughout the private sector, the government, and the military. The US still has no AI or IT equivalent of the Department of Energy, the National Institutes of Health, NASA, or the national laboratories that conduct (and tightly control) US nuclear-weapons R&D.

This situation is partly the result of sclerotic government bureaucracies in both the European Union and the US. The EU technology sector is severely overregulated, and the US Departments of Defense and Commerce, among other agencies, need reform.

Here, the tech industry is somewhat justified in criticising governments. But the industry itself is not blameless: over time, lobbying efforts and revolving-door personnel appointments have weakened the capabilities of critically important public institutions. Many of the problems with US policy reflect the industry’s own resistance or neglect. In critical ways, it has been its own worst enemy, as well as the enemy of the West’s long-term security.

For example, ASML (the Dutch maker of state-of-the-art lithography machines used in chip manufacturing) and the US-based semiconductor-equipment supplier Applied Materials both lobbied to weaken export controls on semiconductor capital equipment, thus assisting China in its effort to displace TSMC, Nvidia and Intel. Not to be outdone, Nvidia designed special chips for the Chinese market that performed just slightly below the threshold set by export restrictions; these were then used to train DeepSeek-R1. And at the level of AI models, Meta and the venture capital firm Andreessen Horowitz have lobbied fiercely to prevent any limits on open-source products.

At least in public, the industry’s line has been: ‘the government is hopeless, but if you leave us alone, everything will be fine’. Yet things are not fine. China has nearly caught up with the US, and it is already ahead of Europe. Moreover, the US government is not hopeless, and must be enlisted to help. Historically, federal and academic research and development compare very favourably with private-sector efforts.

The internet, after all, was pioneered by the US Advanced Research Projects Agency (now DARPA), and the World Wide Web emerged from the European Organisation for Nuclear Research. Netscape co-founder Marc Andreessen co-created Mosaic, the first widely used web browser, at a federally funded supercomputer center within a public university. Meanwhile, private industry gave us online services such as CompuServe, Prodigy and AOL—centralised, closed, mutually incompatible walled gardens that were justly obliterated when the internet was opened to commercial use.

The challenges of AI research and development and China’s rise require a forceful, serious response. Where government capacity falls short, we need to bolster it, not destroy it. We need to pay competitive salaries for government and academic work; modernise US (and EU) technology infrastructure and procedures; create robust research and development capacity within the government, particularly for military applications; strengthen academic research; and implement rational policies for immigration, AI research and development funding, safety testing and export controls.

The one truly difficult policy problem is openness, particularly open-source licensing. We cannot let everyone have access to models optimised for hunter-killer drone attacks; nor, however, can we stamp ‘top secret’ on every model. We need to find a pragmatic middle ground, perhaps relying on national defence research laboratories and carefully crafted export controls for intermediate cases. Above all, we need the AI industry to realise that if we don’t hang together, we will hang separately.

Will DeepSeek upend US tech dominance?

In 1957, the Soviet Union launched the world’s first artificial satellite into orbit, sparking fears in the United States that, unless it took radical action to accelerate innovation, its Cold War adversary would leave it in the technological dust. Now, the Chinese startup DeepSeek has built an artificial intelligence model that it claims can outperform industry-leading US competitors, at a fraction of the cost, leading some commentators to proclaim that another ‘Sputnik moment’ has arrived.

But the focus on the US-China geopolitical rivalry misses the point. Rather than viewing DeepSeek as a stand-in for China, and established industry leaders (such as OpenAI, Meta and Anthropic) as representatives of the US, we should see this as a case of an ingenious startup emerging to challenge oligopolistic incumbents—a dynamic that is typically welcomed in open markets.

DeepSeek has proved that software ingenuity can compensate, at least partly, for hardware deficiencies. Its achievement raises an uncomfortable question: why haven’t the leading US firms achieved similar breakthroughs? Nobel laureate economist Daron Acemoglu points the finger at groupthink, which he says prevented Silicon Valley incumbents from adequately considering alternative approaches. He might have a point, but it is only half the story.

DeepSeek’s success didn’t happen overnight. In May 2024, the firm launched its V2 model, which boasted an exceptional cost-to-performance ratio and sparked a fierce price war among Chinese AI providers. Moreover, over the last year or so, Chinese firms—both giants (including Alibaba, Tencent and ByteDance) and startups (such as Moonshot AI, Zhipu AI, Baichuan AI, MiniMax and 01.AI)—have all developed cutting-edge AI models with remarkable cost efficiency.

Even within the US, researchers have long explored ways to improve the efficiency—and thus lower the costs—of AI training. For example, in 2022, former Meta researcher Tim Dettmers, now at the Allen Institute for Artificial Intelligence, and his co-authors published research on optimising AI models to run on less computing power. DeepSeek cited their research in the technical paper it released along with its V3 model.

Put simply, it would have been impossible for any AI firm—especially an industry leader—not to realise that lower-cost models were feasible. But US AI developers showed much less interest than their Chinese counterparts in pursuing this line of innovation. This was not only a matter of insularity or hubris; it appears to have been a deliberate business choice.

AI development has so far been defined by the scaling law, which predicts that more computing power leads to more powerful models. This has fuelled demand for high-performance semiconductor chips, with more than 80 percent of the funds raised by many AI companies going toward computing resources.
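For illustration, one widely cited form of that relationship, from Kaplan and colleagues’ 2020 scaling-law paper, expresses a model’s test loss L as a power law in training compute C. The constant C_c and exponent alpha_C are fitted empirically (alpha_C was roughly 0.05 in that study) and vary across studies and model families, so this is a sketch of the shape of the relationship rather than a universal law:

```latex
% Compute scaling law (Kaplan et al., 2020), illustrative form:
% test loss L falls as a power law in training compute C,
% with C_c and \alpha_C fitted empirically (\alpha_C \approx 0.05).
L(C) = \left( \frac{C_c}{C} \right)^{\alpha_C}
```

Lower loss means a more capable model, which is why, until efficiency gains like DeepSeek’s, the assumed route to better AI ran through ever-larger compute budgets.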

That is why the biggest winner has been the advanced chipmaker Nvidia, which claimed 90 percent of the market for AI graphics processing units by the end of last year. Thanks to this virtual monopoly in the hardware layer, Nvidia could control the foundations of generative AI. The cloud-computing sector, which provides the on-demand computing power AI models require, is similarly concentrated, with Amazon, Google and Microsoft dominating the market.

But these upstream players aren’t just passive suppliers. They have strategically positioned themselves across the AI value chain by acquiring, investing in, or forming alliances with leading AI model developers. Nvidia has invested in OpenAI, Mistral, Perplexity and others. Google not only develops its own AI models, but also holds a stake in Anthropic, OpenAI’s main competitor. And Microsoft, an early OpenAI investor, recently backed Inflection AI in the US and expanded overseas, with investments in France’s Mistral and the United Arab Emirates’ G42.

This approach has ensured that the entire AI industry depends on a few giant firms, and it has entrenched a dynamic whereby rising demand for computing power across the sector increases those firms' profits. As dominant players, they have had less incentive to improve cost efficiency downstream, which could cut into their upstream profits.

Chinese AI firms have been operating within an entirely different reality, as US-led trade restrictions have prevented them from purchasing the most advanced chips. The goal of US export controls has always been to cripple China's AI sector. But, as DeepSeek has shown, they have had the opposite effect, spurring precisely the innovations that will enable Chinese firms to challenge American AI oligopolies. DeepSeek's rise has already triggered a stock-market selloff in AI-related US companies, not least Nvidia.

This is surely unwelcome news for US President Donald Trump’s administration. Trump has made no secret of his determination to contain China, including by fulfilling his promise to impose a 10 percent across-the-board import tariff on Chinese goods. And he has heavily courted Silicon Valley bosses—once aligned with the Democratic Party—who have eagerly embraced the prospect of lax regulation.

But that does not mean that DeepSeek’s rise is bad news for the US or the AI industry more broadly. Over the past five years, calls to rein in the US’s tech giants have been growing louder. Despite the best efforts of former President Joe Biden’s administration, however, the US Congress has failed to introduce any meaningful legislation on this front. Ironically, thanks to US policies designed to constrain China’s AI ambitions, the US AI sector seems set to get some of the market competition that it so badly needs.

Geopolitics might have contributed to DeepSeek’s rise. But the firm’s disruption of the AI industry is about market—not great-power—competition.

Stop the World: The road to artificial general intelligence, with Helen Toner

Australian AI expert Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown University’s Center for Security and Emerging Technology (CSET). She also spent two years on the board of OpenAI, which put her at the centre of the dramatic events in late 2023 when OpenAI CEO Sam Altman was briefly sacked before being reinstated.

David Wroe speaks with Helen about the curve humanity is on towards artificial general intelligence (AI that would match or surpass humans at virtually every task); progress with the new “reasoning” models; the arrival of China’s DeepSeek; the need for regulation; democracy and AI; and the risks of AI.

They finish by discussing what life will be like if we get AI right and it solves all our problems for us. Will it be great, or boring?

Stop the World: Artificial intimacy, persuasive technologies, and how bots can manipulate us

Today on Stop the World, David Wroe speaks with Casey Mock and Sasha Fegan from the US-based Center for Humane Technology. The CHT is at the forefront of efforts to ensure that technology makes our lives better and strengthens, rather than divides, our communities. It also produces Your Undivided Attention, one of the world's most popular podcasts for deep and serious conversations about the impact of technology on society.

David, Casey and Sasha discuss the tragic case of 14-year-old Sewell Setzer, who took his own life after forming an intimate attachment to an online chatbot. They also talk about persuasive technologies that influence users at deep emotional and even unconscious levels; disinformation and the increasingly polluted information landscape; deepfakes; the pros and cons of age verification for social media; and Australia's approach to these challenges.

To read ASPI's latest report, 'Persuasive technologies in China: Implications for the future of national security', please visit https://www.aspi.org.au/report/persuasive-technologies-china-implications-future-national-security

Warning: this episode discusses mental health and suicide, which some listeners might find distressing. If you need someone to talk to, help is available through a range of services, including Lifeline on 13 11 14 and Beyond Blue on 1300 22 46 36.

Stop the World: TSD Summit Sessions: How to navigate the deepfake and disinformation minefield with Nina Jankowicz

The Sydney Dialogue is over, but never fear, we have more TSD content coming your way! This week, ASPI’s David Wroe speaks to Nina Jankowicz, global disinformation expert and author of the books How to Lose the Information War and How to Be a Woman Online.

Nina takes us through the trends she is seeing in disinformation across the globe, and offers an assessment of who does it best, and whether countries like China and Iran are learning from Russia. She also discusses the links between disinformation and political polarisation, and what governments can do to protect the information domain from foreign interference and disinformation.

Finally, Dave asks Nina about her experience being the target of disinformation and online harassment, and the tactics being used against many women in influential roles, including US Vice President Kamala Harris and Australia’s eSafety Commissioner Julie Inman Grant, in attempts to censor and discredit them.

Guests:
David Wroe
Nina Jankowicz

Stop the World: TSD Summit Sessions: Defence, intelligence and technology with Shashank Joshi

In the final lead-in episode to the Sydney Dialogue (but not the last in the series!), ASPI's Executive Director, Justin Bassi, interviews Shashank Joshi, defence editor at The Economist.

They discuss technology, security and strategic competition, including the impact of artificial intelligence on defence and intelligence operations, the implications of the 'no-limits' partnership between Russia and China, and the increasing alignment among authoritarian states. They also cover the challenge of protecting free speech online within a framework of rules that also protects public safety.

They talk about Shashank's latest Economist report, 'Spycraft: Watching the Watchers', which explores the intersection of technology and intelligence and looks at the history of intelligence and technology development, including advances from radio to the internet and encryption.

The Sydney Dialogue (TSD) is ASPI’s flagship initiative on cyber and critical technologies. The summit brings together world leaders, global technology industry innovators and leading thinkers on cyber and critical technology for frank and productive discussions. TSD 2024 will address the advances made across these technologies and their impact on our societies, economies and national security.

Find out more about TSD 2024 here: https://tsd.aspi.org.au/

Mentioned in this episode: https://www.economist.com/technology-quarterly/2024-07-06

Guests:
Justin Bassi
Shashank Joshi

Tech and Trust: Safeguarding AI for Economic and Security Progress

Safeguarding Australian elections: Addressing AI-enabled disinformation