Tag Archive for: Artificial Intelligence

Artificial intelligence: Your questions answered

This collection of short papers developed by the Australian Institute for Machine Learning (AIML) at the University of Adelaide and the Australian Strategic Policy Institute (ASPI) offers a refreshing primer into the world of artificial intelligence and the opportunities and risks this technology presents to Australia.

AI’s potential role in enhancing Australia’s defence capabilities, strengthening alliances and deterring those who would seek to harm our interests was significantly enhanced as a result of the September 2021 announcement of the AUKUS partnership between the US, the UK and Australia. Perhaps not surprisingly, much public attention on AUKUS has focused on developing a plan ‘identifying the optimal pathway to deliver at least eight nuclear-powered submarines for Australia’.

This AIML/ASPI report is a great starting point for individuals looking to better understand the growing role of AI in our lives. I commend the authors and look forward to the amazing AI developments to come that will, we must all hope, reshape the world for a more peaceful, stable and prosperous future.

University of Adelaide, Australian Institute for Machine Learning - logo

Artificial intelligence and policing in Australia

ASPI’s Strategic Policing and Law Enforcement Program is delighted to share its new Strategic Insights report, Artificial intelligence and policing in Australia by Dr Teagan Westendorf.

Digital technologies, devices and the internet are producing huge amounts of data and greater capacity to store it, and those developments are likely to accelerate. For law enforcement, a critical capability lagging behind the pace of tech innovation is the ability and capacity to screen, analyse and render insights from the ever-increasing volume of data—and to do so in accordance with the constraints on access to and use of personal information within our democratic system.

Artificial intelligence (AI) and machine learning are presenting valuable solutions to the public and private sectors for screening big and live data. AI is also commonly considered and marketed as a solution that removes human bias, although AI algorithms and dataset creation can also perpetuate human bias and so aren’t value or error free.

This report analyses limitations, both technical and implementation, of AI algorithms, and the implications of those limitations on the safe, reliable and ethical use of AI in policing and law enforcement scenarios. This publication closely examines usage of AI by domestic policing agencies to model what success looks like for safe, reliable and ethical use of AI in policing and law enforcement spaces. It also explores possible strategies to mitigate the potential negative effects of AI data insights and decision-making in the justice system; and implications for regulation of AI use by police and law enforcement in Australia.

AI ‘algorithms’ or ‘models’ promise to: enable high volumes of data processing at speed, while identifying patterns human judgement is not capable of; supercharge knowledge management while (supposedly) removing human bias from that process; and operate with ethical principles coded into their decision-making.

This ‘promise’, however, is not a guarantee.

Engineering global consent: The Chinese Communist Party’s data-driven power expansion

The Chinese party-state engages in data collection on a massive scale as a means of generating information to enhance state security—and, crucially, the political security of the Chinese Communist Party (CCP)—across multiple domains. The party-state intends to shape, manage and control its global operating environment so that public sentiment is favourable to its own interests. The party’s interests are prioritised over simply the Chinese state’s interests or simply the Chinese people’s interests. The effort requires continuous expansion of the party’s power overseas because, according to its own articulation of its threat perceptions, external risks to its power are just as likely—if not more likely—to emerge from outside the People’s Republic of China’s (PRC) borders as from within.

This report explains how the party-state’s tech-enhanced authoritarianism is expanding globally. The effort doesn’t always involve distinctly coercive and overtly invasive technology, such as surveillance cameras. In fact, it often relies on technologies that provide useful services. Those services are designed to bring efficiency to everyday governance and convenience to everyday life. The problem is that it’s not only the customer deploying these technologies—notably those associated with ‘smart cities’, such as ‘internet of things’ (IoT) devices—that derives benefit from their use. Whoever has the opportunity to access the data a product generates and collects can derive value from the data. How the data is processed, and then used, depends on the intent of the actor processing it.

Tag Archive for: Artificial Intelligence

DeepSeek may be cheap AI, but Australian companies should beware

Amid the shocked reactions this week to the release of the Chinese artificial intelligence model, DeepSeek, the risk we should be most concerned about is the potential for the model to be misused to disrupt critical infrastructure and services.

I wrote in 2023 about the many forms of Chinese AI-enabled technology we use that pump data back to China, where it is sorted by Chinese algorithms before it is sent back here.

These include things such as digital railway networks, electric vehicles, solar inverters, giant cranes for unloading containers, border screening equipment, and industrial control technology in power stations, water and sewerage works. Like DeepSeek, the vendors of these products are subject to direction from China’s security services.

This clear risk has been buried by the avalanche of commentary about the other implications—not least the panicked stock market reaction in which Nvidia’s share price plunged 17 percent and the Nasdaq fell 3 percent. With so much money chasing AI, investors are as twitchy as meerkats.

Don’t cry for Nvidia—cheaper AI models promise to broaden the market for its chips, and this is reflected in its recovering share price. Besides, Nvidia helped create its temporary setback by selling powerful H800 chips to Chinese companies—including DeepSeek—for a year before the Biden administration tightened up its chip export controls.

There may even be some upside when a company produces comparable results to leading US models—purportedly for a fraction of the price and using dumber chips. US big tech will be spurred to figure out how to do generative AI more cheaply. That’s good for business and good for the planet.

From a national security perspective, how worried should we be about an AI model with a chatbot algorithm that provides such lame answers on issues sensitive to the Chinese government?

Of course it’s undesirable for yet another wildly popular Chinese app to be shaping how we think. It’s also a worry that the company will make all our data available to Chinese security services on request. DeepSeek’s own privacy policy says as much: ‘We may access, preserve, and share the information described in “What Information We Collect” with law enforcement agencies (and) public authorities … if we have good faith belief that it is necessary to comply with applicable law, legal process or government requests.’

The policy also explains that the company stores ‘the information we collect in secure servers located in the People’s Republic of China’.

But the bigger question is what would happen if DeepSeek’s model lowered the costs and increased the competitiveness of Chinese AI-enabled products and services embedded in our critical infrastructure? If these offerings were even cheaper and better, they might become even more pervasive in our digital ecosystem, and therefore even more risky.

Here’s another case. What if DeepSeek became the default choice for Australian and other non-Chinese companies seeking to improve their products and services with customised, low-cost, leading-edge AI? As the Wall Street Journal notes: ‘DeepSeek’s model is open-source, meaning that other developers can inspect and fiddle with its code and build their own applications with it. This could help give more small businesses access to AI tools at a fraction of the cost of closed-source models like OpenAI and Anthropic.’

Useful applications might include customised chatbots and product recommendations, streamlined inventory management or predictive analytics and fraud detection.

Could DeepSeek embedded in tech made by non-Chinese companies be a vector for espionage and sabotage—an arm of China’s DeepState, as it were? Could DeepSeek be directed to alter embedded code or simply turn off access to its open-source model to disable these products and services?

Perhaps we can take some comfort here. One of the advantages of so-called ‘open source’ models is that users can host them in their own controlled environments to better protect their customers’ data. That would mitigate the espionage risk. Using isolated environments would also mitigate the sabotage risk to some degree as well. However, if DeepSeek AI were embedded in products and services that are used in sensitive and critical products—for example, essential components of an electricity station or grid—we might want additional mitigations, given the much higher stakes.

The key point is governments need take a close look at the potential risks of DeepSeek employed in sensitive areas in two contexts: by Chinese companies—given their legal obligations to co-operate with China’s security agencies—and by non-Chinese companies that might use applications derived from the DeepSeek model. In Australia, that sounds like a job for the security review process recently established under our framework to ‘consider foreign ownership, control or influence risks associated with technology vendors’.

It’s early days. US big tech is not going to rest on its oars. DeepSeek may not be as cheap as it claims, nor as original. Indeed, OpenAI is investigating whether DeepSeek leaned on the company’s tools to train its own model. But when it comes to protecting our digital ecosystems from emerging technologies with the game-changing potential of DeepSeek, it’s never too early to start planning.

Could’ve seen it coming: ASPI’s tech tracker had picked up China’s AI strength

It shouldn’t have come as a complete shock.

US tech stocks, especially chipmaker Nvidia, plunged on Monday after news that the small China-based company DeepSeek had achieved a dramatic and reportedly inexpensive advance in artificial intelligence. But the step forward for China’s AI industry was in fact foreseeable.

It was foreseeable from ASPI’s Critical Technology Tracker, which was launched in early 2023 and which in its latest update monitors high-impact research (measured as the 10 percent most highly cited publications) over two decades across 64 technologies, including machine learning and natural language processing (NLP).

While high-impact research isn’t the full picture, it is a leading indicator of scientific innovation right at the beginning of the lifecycle of a technology. As we argued in our August 2024 update, scientific innovation needs to be nurtured through every step of the lifecycle, notably through commercialisation for economic gain.

The two-decade Critical Technology Tracker report showed that China’s consistent investments in science and technology were paying off, with steady gains in its global share of high-impact publications in machine learning over the previous two decades. In this ascent, China overtook the United States in their yearly global share of highly cited publications in 2017.

ASPI has shown that between 2019 and 2023, 36.5 percent of high impact research in this field was published by Chinese institutions, compared with 15.4 percent by the United States. In NLP, the race is tighter, with the US’s and China’s global share of publications neck-and-neck in the same five-year period, at 24.8 percent and 24.1 percent, respectively.

ASPI’s research has also shown that, of the world-leading institutions in machine-learning research, the top five were in China. Tsinghua University, the alma mater of several key researchers behind the latest DeepSeek model, ranked second. ASPI’s Critical Technology Tracker also ranks Tsinghua University third in research in natural-language processing, behind only Google and the Chinese Academy of Sciences.

Chinese technology firms have been increasingly tapping into the growing pool of indigenous talent. Last year, DeepSeek’s founder, Liang Wenfeng, emphasised that the core research team was ‘all local’ and included no one who had trained or done extensive work abroad—though some members did have work experience in foreign operations inside China, such as Microsoft Research Asia. The Financial Times reports that Liang formed his AI company by combining an excellent team of chips experts with the best talent from a hedge fund he had co-founded.

AI is just the latest technology in which we have seen Chinese companies challenge the established dominance held by US or European companies. Solar cells, electric vehicles and smartphones are all technologies in which Western companies held and lost early advantages. ASPI’s data shows that China has in fact surpassed the US in cutting-edge research for 57 out of 64 technology areas; 2016 was an inflection point.

The global AI industry is still weighted in favour of the US in share of pioneering tech companies. But as DeepSeek’s announcement emphasises, US and other Western countries should have no great confidence in keeping their leads. In fact, any confidence should be called out as complacency.

So, the Trump administration’s commitment to making America great again in technologies is certainly welcomed. The big example so far is the announcement on 21 January of the US AI infrastructure joint venture Stargate, into which US$500 billion ($800 billion) is to be invested.

DeepSeek’s release makes it clear that now is not the time for half-measures or wishful thinking. Bold decisions, strategic foresight and a willingness to lean in to the AI race is vital to maintaining a competitive edge, and not just by the US.

ASPI’s Critical Technology Tracker is clear in another regard: that we should be ready for similar advances by China in other technological domains. Let’s hope that DeepSeek really is the wakeup call needed and likeminded countries now take the action needed to avoid being shocked again—not just in AI, but in all critical technologies.

Fighting deepfakes: what’s next after legislation?

Deepfake technology is weaponising artificial intelligence in a way that disproportionately targets women, especially those working public roles, compromising their dignity, safety, and ability to participate in public life. This digital abuse requires urgent global action, as it not only infringes on human rights but also affects their democratic participation.

Britain’s recent decision to criminalise explicit deepfakes is a significant step forward. It follows similar legislation passed in Australia last year and aligns with the European Union’s AI Act, which emphasises accountability. However, regulations alone are not enough, effective enforcement and international collaboration are essential to combat this growing and complex threat.

Britain’s legislation to criminalise explicit deepfakes as part of the broader Crime and Policing Bill that will be introduced to the parliament marks a pivotal step in addressing technology-facilitated gender-based violence. This move is a response to a 400 percent rise in deepfake-related abuse since 2017, as reported by Britain’s Revenge Porn Helpline.

Deepfakes, which fabricate hyper-realistic content, often target women and girls, objectifying and eroding their public engagement. By criminalising both the creation and sharing of explicit deepfakes, Britain’s law closes loopholes in earlier revenge porn legislation. The legislation places stricter accountability on platforms hosting these harmful images, reinforcing the message that businesses must play a role in combatting online abuse.

The EU has taken a complementary approach by introducing requirements for transparency in its recently adopted AI Act. The regulation does not ban deepfakes outright but mandates that creators disclose their artificial origins and provide details about the techniques used. This empowers consumers to better identify manipulated content. Furthermore, the EU’s 2024 directive on violence against women explicitly addresses cyberviolence, including non-consensual image-sharing, providing tools for victims to prevent the spread of harmful content.

While these measures are robust, enforcement remains a challenge due to fragmented national laws, and deepfake abuse often transcends borders. The EU is working to harmonise its digital governance and promote AI transparency standards to mitigate these challenges.

In Asia, concern over deepfake technology is growing in countries such as South KoreaSingapore and especially Taiwan where it not only targets individual women but is increasingly used as a tool for politically motivated disinformation. Similarly, in the United States and Pakistan, female lawmakers have been targeted with sexualised deepfakes designed to discredit and silence them. Italy’s Prime Minister Giorgia Meloni faced a similar attack but successfully brought the perpetrators to court.

Unfortunately, many countries still lack comprehensive legislation to effectively combat the abuse of deepfakes, leaving individuals vulnerable, especially those without the resources and support to fight back. For example, similar laws in the United States remain stalled in legislative pipelines—the Disrupt Explicit Forged Images and Non-Consensual Edits (Defiance) Bill and Deepfake Accountability Bill.

Australia offers a strong example of legislative action as it faces similar challenges with deepfake abuse contributing to a chilling effect on women’s activity in public life, affecting underage students and politicians. This abuse not only affects individual privacy but also deters other women from engaging in public and pursuing leadership roles, weakening democratic representation.

In August 2024, Australia passed the Criminal Code Amendment, penalising the sharing of non-consensual explicit material.

While formulating legislation is the first step, to effectively address this issue, governments must enforce the regulation while ensuring that victims have accessible mechanisms to report abuse and seek justice. Digital literacy programs should be expanded to equip individuals with the tools to identify and report manipulated content. Schools and workplaces should incorporate online safety education to build societal resilience against deepfake threats.

Simultaneously, women’s representation in cybersecurity and technology governance needs to be increased. Women’s participation in shaping policies and technologies ensures that gendered dimensions of digital abuse are adequately addressed.

Although Meta recently decided to cut back on factchecking, social media platforms need to be held to account for hosting and amplifying harmful content. Platforms must proactively detect and remove deepfakes while maintaining transparency about their AI applications and data practices. The EU AI Act’s transparency requirements serve as a reference point for implementing similar measures globally.

Ultimately, addressing deepfake abuse is about creating a safe and inclusive online space. As digital spaces transcend borders, the fight against deepfake abuse must be inherently global. Countries need to collaborate with international partners to establish shared enforcement mechanisms, harmonise legal frameworks and promote joint research on AI ethics and governance. Regional initiatives, such as the EU AI Act and the Association of Southeast Asian Nations’ guidelines for combatting fake news and disinformation, can serve as a means for building capacity in nations lacking the expertise or resources to tackle these challenges alone.

In a world where AI is advancing rapidly, combatting deepfake abuse is more than regulating technology—it is about safeguarding human dignity, protecting democratic processes and ensuring that everyone, including women, can participate in society without fear of intimidation or harm. By working together, we can build a safer, more equitable digital environment for all.

Editors’ picks for 2024: ‘The danger of AI in war: it doesn’t care about self-preservation’

Originally published on 30 August 2024.

Recent wargames using artificial-intelligence models from OpenAI, Meta and Anthropic revealed a troubling trend: AI models are more likely than humans to escalate conflicts to kinetic, even nuclear, war.

This outcome highlights a fundamental difference in the nature of war between humans and AI. For humans, war is a means to impose will for survival; for AI the calculus of risk and reward is entirely different, because, as the pioneering scientist Geoffrey Hinton noted, ‘we’re biological systems, and these are digital systems.’

Regardless of how much control humans exercise over AI systems, we cannot stop the widening divergence between their behaviour and ours, because AI neural networks are moving towards autonomy and are increasingly hard to explain.

To put it bluntly, whereas human wargames and war itself entail the deliberate use of force to compel an enemy to our will, AI is not bound to the core of human instincts, self-preservation. The human desire for survival opens the door for diplomacy and conflict resolution, but whether and to what extent AI models can be trusted to handle the nuances of negotiation that align with human values is unknown.

The potential for catastrophic harm from advanced AI is real, as underscored by the Bletchley Declaration on AI, signed by nearly 30 countries, including Australia, China, the US and Britain. The declaration emphasises the need for responsible AI development and control over the tools of war we create.

Similarly, ongoing UN discussions on lethal autonomous weapons stress that algorithms should not have full control over decisions involving life and death. This concern mirrors past efforts to regulate or ban certain weapons. However, what sets AI-enabled autonomous weapons apart is the extent to which they remove human oversight from the use of force.

A major issue with AI is what’s called the explainability paradox: even its developers often cannot explain why AI systems make certain decisions. This lack of transparency is a significant problem in high-stakes areas, including military and diplomatic decision-making, where it could exacerbate existing geopolitical tensions. As Mustafa Suleyman, co-founder of DeepMind, pointed out, AI’s opaque nature means we are unable to decode the decisions of AI to explain precisely why an algorithm produced a particular result.

Rather than seeing AI as a mere tool, it’s more accurate to view it as an agent capable of making independent judgments and decisions. This capability is unprecedented, as AI can generate new ideas and interact with other AI agents autonomously, beyond direct human control. The potential for AI agents to make decisions without human input raises significant concerns about the control of these powerful technologies—a problem that even the developers of the first nuclear weapons grappled with.

While some want to impose regulation on AI somewhat like the nuclear non-proliferation regime, which has so far limited nuclear weapons to nine states, AI poses unique challenges. Unlike nuclear technology, its development and deployment are decentralized and driven by private entities and individuals, so its inherently hard to regulate. The technology is spreading universally and rapidly with little government oversight. It’s open to malicious use by state and nonstate actors.

As AI systems grow more advanced, they introduce new risks, including elevating misinformation and disinformation to unprecedented levels.

AI’s application to biotech opens new avenues for terrorist groups and individuals to develop advanced biological weapons. That could encourage malign actors, lowering the threshold for conflict and making attacks more likely.

Keeping a human in the loop is vital as AI systems increasingly influence critical decisions. Even when humans are involved, their role in oversight may diminish as trust in AI output grows, despite AI’s known issues with hallucinations and errors. The reliance on AI could lead to a dangerous overconfidence in its decisions, especially in military contexts where speed and efficiency often trump caution.

As AI becomes ubiquitous, human involvement in decision-making processes may dwindle due to the costs and inefficiencies associated with human oversight. In military scenarios, speed is a critical factor, and AI’s ability to perform complex tasks rapidly can provide a decisive edge. However, this speed advantage may come at the cost of surrendering human control, raising ethical and strategic dilemmas about the extent to which we allow machines to dictate the course of human conflict.

The accelerating pace at which AI operates could ultimately pressure the role of humans in decision-making loops, as the demand for faster responses might lead to sidelining human judgment. This dynamic could create a precarious situation where the quest for speed and efficiency undermines the very human oversight needed to ensure that the use of AI aligns with our values and safety standards.

Using open-source AI, sophisticated cyber ops will proliferate

Open-source AI models are on track to disrupt the cyber security paradigm. With the proliferation of such models—those whose parameters are freely accessible—sophisticated cyber operations will become available to a broader pool of hostile actors.

AI insiders and Australian policymakers have a starkly different sense of urgency around advancing AI capabilities. AI leaders like Dario Amodei, chief executive of Anthropic, and Sam Altman, chief executive of OpenAI, forecast that AI systems that surpass Nobel laureate-level expertise across multiple domains could emerge as early as 2026.

On the other hand, Australia’s Cyber Security Strategy, intended to guide us through to 2030, mentions AI only briefly, says innovation is ‘near impossible to predict’, and focuses on economic benefits over security risks.

Experts are alarmed because AI capability has been subject to scaling laws—the idea that capability climbs steadily and predictably, just as in Moore’s Law for semiconductors. Billions of dollars are pouring into leading labs. More talented engineers are writing ever-better code. Larger data centres are running more and faster chips to train new models with larger datasets.

The emergence of reasoning models, such as OpenAI’s o1, shows that giving a model time to think in operation, maybe for a minute or two, increases performance in complex tasks, and giving models more time to think increases performance further. Even if the chief executives’ timelines are optimistic, capability growth will likely be dramatic and expecting transformative AI this decade is reasonable.

The effect of the introduction of thinking time on performance, as assessed in three benchmarks. The o1 systems are built on the same model as gpt4o but benefit from thinking time. Source: Zijian Yang/Medium.

Detractors of AI capabilities downplay concern, arguing, for example, that high-quality data may run out before we reach risky capabilities or that developers will prevent powerful models falling into the wrong hands. Yet these arguments don’t stand up to scrutiny. Data bottlenecks are a real problem, but the best estimates place them relatively far in the future. The availability of open-source models, the weak cyber security of labs and the ease of jailbreaks (removing software restrictions) make it almost inevitable that powerful models will proliferate.

Some also argue we shouldn’t be concerned because powerful AI will help cyber-defenders just as much as attackers. But defenders will benefit only if they appreciate the magnitude of the problem and act accordingly. If we want that to happen, contrary to the Cyber Security Strategy, we must make reasonable predictions about AI capabilities and move urgently to keep ahead of the risks.

In the cyber security context, near-future AI models will be able to continuously probe systems for vulnerabilities, generate and test exploit code, adapt attacks based on defensive responses and automate social engineering at scale. That is, AI models will soon be able to do automatically and at scale many of the tasks currently performed by the top-talent that security agencies are keen to recruit.

Previously, sophisticated cyber weapons, such as Stuxnet, were developed by large teams of specialists working across multiple agencies over months or years. Attacks required detailed knowledge of complex systems and judgement about human factors. With a powerful open-source model, a bad actor could spin-up thousands of AI instances with PhD-equivalent capabilities across multiple domains, working continuously at machine speed. Operations of Stuxnet-level sophistication could be developed and deployed in days.

Today’s cyber strategic balance—based on limited availability of skilled human labour—would evaporate.

The good news is that the open-source AI models that partially drive these risks also create opportunities. Specifically, they give security researchers and Australia’s growing AI safety community access to tools that would otherwise be locked away in leading labs. The ability to fine-tune open-source models fosters innovation but also empowers bad actors.

The open-source ecosystem is just months behind the commercial frontier. Meta’s release of the open-source Llama 3.1 405B in July 2024 demonstrated capabilities matching GPT-4. Chinese startup DeepSeek released R1-Lite-Preview in late November 2024, two months after OpenAI’s release of o1-preview, and will open-source it shortly.

Assuming we can do nothing to stop the proliferation of highly capable models, the best path forward is to use them.

Australia’s growing AI safety community is a powerful, untapped resource. Both the AI safety and national security communities are trying to answer the same questions: how do you reliably direct AI capabilities, when you don’t understand how the systems work and you are unable to verify claims about how they were produced? These communities could cooperate in developing automated tools that serve both security and safety research, with goals such as testing models, generating adversarial examples and monitoring for signs of compromise.

Australia should take two immediate steps: tap into Australia’s AI safety community and establish an AI safety institute.

First, the national security community should reach out to Australia’s top AI safety technical talent in academia and civil society organisations, such as the Gradient Institute and Timaeus, as well as experts in open-source models such as Answer.AI and Harmony Intelligence. Working together can develop a work program that builds on the best open-source models to understand frontier AI capabilities, assess their risk and use those models to our national advantage.

Second, Australia needs to establish an AI safety institute as a mechanism for government, industry and academic collaboration. An open-source framing could give Australia a unique value proposition that builds domestic capability and gives us something valuable to offer our allies

America’s tech blind spot

Nationalism has emerged as a potent force shaping global tech policy, nowhere more so than in the United States. With Donald Trump returning to the White House for a second term, his vision for America’s technological future is coming into sharper focus.

At home, Trump promises a sweeping deregulatory agenda coupled with industrial policy aimed at boosting domestic tech businesses. Abroad, his administration appears poised to double down on aggressive restrictions aimed at keeping American technology out of China’s hands.

Yet Trump’s grand vision to make America great again overlooks a crucial detail: the cycle of innovation matters hugely for technological progress. The path the US is charting risks fostering a tech ecosystem dominated by mediocre products, such as attention-grabbing social media apps, while failing to nurture the kind of transformative inventions that drive productivity and long-term economic growth.

Joseph Schumpeter, the renowned Austrian economist who popularized the term ‘creative destruction’, identified three key stages of the process. First, there’s innovation—a breakthrough idea or method. In the realm of artificial intelligence, this stage includes the development of neural networks, which laid the foundation for deep learning, and, more recently, the transformer architecture that has powered the rise of generative AI.

Then comes the stage of commercialisation, when disruptive ideas evolve into market-ready products. This is where tools like ChatGPT—applications built on large language models (LLM)—emerge and become accessible to everyday consumers. Finally, there’s diffusion, the phase when the novel technology becomes pervasive, reshaping industries and daily life.

So far, discussions of tech regulation have tended to focus on the later stages of this process, which bring immediate economic benefits, often overlooking the early stage of invention. It is true that regulations to ensure safety, guarantee data privacy and protect intellectual property can raise adoption costs and slow down product rollouts. But these guardrails are less likely to stifle innovation at the invention stage, where creative ideas take shape.

Of course, the prospect of discovering the next commercial blockbuster—something like ChatGPT—may indeed spur future invention, and widespread adoption can also help refine these technologies. But such feedback is likely to be very limited for most products.

Consider the case of Character.AI, a company that developed a popular companion chatbot. While the product has certainly contributed to the diffusion of LLM-based services, it has done little to spur invention. Recently, the company even abandoned its plans to build its own LLM, signalling that its focus remains firmly on diffusion rather than groundbreaking invention.

In such cases, regulations ensuring that innovations are safe, ethical and responsible by the time they reach the market would most likely deliver benefits outweighing the costs. The recent tragedy of a 14-year-old boy who took his own life after prolonged interactions with Character.AI’s chatbot underscores the urgent need for safeguards, especially when such services are easily accessible to young users.

Lax tech regulation also carries a hidden cost: it can shift resources away from scientific discovery, favouring quick profits through mass diffusion instead. This dynamic has fuelled the proliferation of addictive social media apps that now dominate the market, leaving behind a trail of societal ills—everything from teenage addiction to deepening political polarisation.

In recent years, a growing chorus of academics and policymakers has sounded the alarm over the systemic dysfunction of the US tech sector. Yet, despite the high drama of congressional hearings with Big Tech CEOs and a cascade of bills promising comprehensive reforms, the results have been disappointing.

So far, the federal government’s highest-profile effort to rein in Big Tech has centred on TikTok—in the form of a bill that would either ban the app outright or force its Chinese owners to divest. In the realm of data privacy, the most significant measure so far has been an executive order restricting the flow of bulk sensitive data to ‘countries of concern’, China chief among them.

Meanwhile, US authorities have increasingly directed their scrutiny inward to root out espionage. The now-infamous China Initiative, which disproportionately targeted ethnic Chinese scientists, has stoked fear and prompted a talent exodus from the US. Compounding this is a broad visa ban on Chinese students and researchers associated with China’s ‘military-civil fusion’ program. While ostensibly aimed at protecting national security, the policy has driven away countless skilled individuals.

This brings us to the paradox at the heart of US tech policy: simultaneous under- and over-regulation. On one hand, US policymakers have failed to implement essential safeguards for product safety and data privacy—areas where thoughtful oversight could mitigate risks while fostering a competitive environment conducive to cutting-edge innovation. On the other hand, they have adopted an aggressive, even punitive, stance toward US-based researchers at the forefront of scientific discovery, effectively regulating invention itself.

The irony could not be starker: in its bid to outcompete China, America risks stifling its own potential for the next breakthrough technology.

It’s not too late to regulate persuasive technologies

Social media companies such as TikTok have already revolutionised the use of technologies that maximise user engagement. At the heart of TikTok’s success are a predictive algorithm and other extremely addictive design features—or what we call ‘persuasive technologies’. 

But TikTok is only the tip of the iceberg. 

Prominent Chinese tech companies are developing and deploying powerful persuasive tools to work for the Chinese Communist Party’s propaganda, military and public security services—and many of them have already become global leaders in their fields. The persuasive technologies they use are digital systems that shape users’ attitudes and behaviours by exploiting physiological and cognitive reactions or vulnerabilities, such as generative artificial intelligence, neurotechnology and ambient technologies.   

The fields include generative artificial intelligence, wearable devices and brain-computer interfaces. The rapidly advancing tech industry to which these Chinese companies belong is embedded in a political system and ideology that compels companies to align with CCP objectives, driving the creation and use of persuasive technologies for political purposes—at home and abroad.  

This means China is developing cutting-edge innovations while directing their use towards maintaining regime stability at home, reshaping the international order abroad, challenging democratic values, and undermining global human rights norms. As we argue in our new report, ‘Persuasive technologies in China: Implications for the future of national security’, many countries and companies are working to harness the power of emerging technologies with persuasive characteristics, but China and its technology companies pose a unique and concerning challenge. 

Regulation is struggling to keep pace with these developments—and we need to act quickly to protect ourselves and our societies. Over the past decade, the swift technological development and adoption have outpaced responses by liberal democracies, highlighting the urgent need for more proactive approaches that prioritise privacy and user autonomy. This means protecting and enhancing the ability of users to make conscious and informed decisions about how they are interacting with technology and for what purpose.  

When the use of TikTok started spreading like wildfire, it took many observers by surprise. Until then, most had assumed that to have a successful model for social media algorithms, you needed a free internet to gather the diverse data set needed to train the model. It was difficult to fathom how a platform modelled after its Chinese twin, Douyin, developed under some of the world’s toughest information restrictions, censorship and tech regulations, could become one of the world’s most popular apps.  

Few people had considered the national security implications of social media before its use became ubiquitous. In many countries, the regulations that followed are still inadequate, in part because of the lag between the technology and the legislative response. These regulations don’t fully address the broader societal issues caused by current technologies, which are numerous and complex. Further, they fail to appropriately tackle the national security challenges of emerging technologies developed and controlled by authoritarian regimes. Persuasive technologies will make these overlapping challenges increasingly complex. 

The companies highlighted in the report provide some examples of how persuasive technologies are already being used towards national goals—developing generative AI tools that can enhance the government’s control over public opinion; creating neurotechnology that detects, interprets and responds to human emotions in real time; and collaborating with CCP organs on military-civil fusion projects. 

Most of our case studies focus on domestic uses directed primarily at surveillance and manipulation of public opinion, as well as enhancing China’s tech dual-use capabilities. But these offer glimpses of how Chinese tech companies and the party-state might deploy persuasive technologies offshore in the future, and increasingly in support of an agenda that seeks to reshape the world in ways that better fit its national interests. 

With persuasive technologies, influence is achieved through a more direct connection with intimate physiological and emotional reactions compared to previous technologies. This poses the threat that humans’ choices about their actions are either steered or removed entirely without their full awareness. Such technologies won’t just shape what we do; they have the potential to influence who we are.  

As with social media, the ethical application of persuasive technologies largely depends on the intent of those designing, building, deploying and ultimately controlling the technology. They have positive uses when they align with users’ interests and enable people to make decisions autonomously. But if applied unethically, these technologies can be highly damaging. Unintentional impacts are bad enough, but when deployed deliberately by a hostile foreign state, they could be so much worse. 

The national security implications of technologies that are designed to drive users towards certain behaviours are already becoming clear. In the future, persuasive technologies will become even more sophisticated and pervasive, with the consequences increasingly difficult to predict. Accordingly, the policy recommendations set out in our report focus on preparing for, and countering, the potential malicious use of the next generation of persuasive technologies. 

Emerging persuasive technologies will challenge national security in ways that are difficult to forecast, but we can already see enough indicators to prompt us to take a stronger regulatory stance. 

We still have time to regulate these technologies, but that time for both governments and industry are running out. We must act now. 

AI, bioterrorism and the urgent need for Australian action

Today, you’d have to be a top-notch scientist to create a pathogen. Experts worry that, within a few years, AI will put that capability into the hands of tens of thousands of people. Without a new approach to regulation, the risk of bioterrorism and lab leaks will soar.

The US acted a year ago to reduce that risk. With the return of President Trump and his commitment to repeal important executive orders, it’s time for Australia to take action.

The key action, adopted in an executive order signed by President Biden, is to control not the AI but the supply of the genetic material that would be needed for the design of pathogens.

Biosafety regulation of Australian laboratories needs tightening, too.

When the genome for variola, the virus that causes smallpox, was published in 1994, the capacity to use that information malevolently had not yet evolved. But it soon did. By 2002, ‘mail-order’ DNA could be used to synthesise poliovirus. In 2018, researchers manufactured horsepox using mail-order DNA. Today, the market for synthetic DNA is large and growing.

Both generative AI, such as chatbots, and narrow AI designed for the pharmaceutical industry are on track to make it possible for many more people to develop pathogens. In one study, researchers used in reverse a pharmaceutical AI system that had been designed to find new treatments. They instead asked it to find new pathogens. It invented 40,000 potentially lethal molecules in six hours. The lead author remarked how easy this had been, suggesting someone with basic skills and access to public data could replicate the study in a weekend.

In another study, a chatbot recommended four potential pandemic pathogens, explained how they could be made from synthetic DNA ordered online and provided the names of DNA synthesis companies unlikely to screen orders. The chatbot’s safeguards didn’t prevent it from sharing dangerous knowledge.

President Biden was alert to risks at the intersection of AI and biotechnology. His Executive Order on AI Safety attracted attention in tech circles, but it also took action on biosafety. Section 4.4 directed departments to create a framework to screen synthetic DNA to ensure that suppliers didn’t produce sequences that could threaten US national security.

Before Biden’s executive order, experts estimated that about 20 percent of manufactured DNA evaded safety screening. Now, all DNA manufacturers have obligations to screen orders going to the US and to comply with obligations to know their customers.

With President Trump committing to repeal the executive order, it’s imperative that other countries impose equivalent requirements to sustain a global norm of DNA safety screening. While Australia has yet to act, a fix would be relatively straightforward. The minister for agriculture, Julie Collins, and the minister for health, Mark Butler, already jointly administer a regime governing the importation of synthetic DNA into Australia.

Updating those regulations in line with the US’s approach is a no-brainer. Prospective synthetic pandemics have profound security implications. A designed pathogen could have features unseen in naturally evolved viruses. Those features could include both a high reproduction rate and high lethality. A pandemic caused by such a pathogen could cause widespread absenteeism, leading to such blows as the collapse of the power grid and other critical infrastructure.

Lab leaks are also a growing risk. The intersection of AI and biotechnology increases the risk of accidents. Experts assess that lab leaks have already overtaken natural spillover as the most likely cause of the next pandemic.

While the origin of coronavirus that causes Covid-19 remains unknown and contested, we know that lab leaks occur frequently. The original SARS virus escaped from labs at least three times. A 2021 study reported 71 high-risk human-caused pathogen exposure events between 1975 and 2016, and data collected via an anonymous survey on biosecurity in Belgium reported almost 100 laboratory-acquired infections in five years.

Tighter regulations and regular inspections improve biosafety. In the US, more tightly regulated ‘select agent’ laboratories exhibited a 6.5-fold lower accidental infection rate than other labs. In Australia, the Office of the Gene Technology Regulator is responsible for lab regulation and oversight. Australia is a significant player, with four of the approximately 51 known labs classified as level-4. Level-4 facilities hold terrifying viruses such as ebola, marburg and nipah.

The regulator is required to, and does, inspect those labs only once every three years for recertification. (It also does a few inspections to confirm compliance with specific licenses.) Bridging the three-year gaps, the labs submit annual reports of inspections by experts whom they appoint. We need to look at tightening this regimen, particularly by increasing the frequency of inspections by the regulator.

The concerns with Australia’s current approach aren’t limited to inspections. The guidelines for Australia’s level-4 facilities were last updated in 2007. Australian Standard 1324.1 is used to specify the level of filtration for exhausts from such facilities. AS1324.1 was functionally superseded in 2016 by ISO16890 because AS1324.1 overestimates the effectiveness of HEPA filters by about half.

A sovereign Australian AI drive needs sovereign data centres

Australia needs to build its own domestic AI capability. To do so, it must first develop and build more of its own data centres across the country.

AI is the technology of both today and tomorrow. We’ll fall behind the world if we don’t make the most of it.

There is a push from CSIRO, pockets of government and the private sector for Australia to develop its own sovereign AI capabilities. Doing so will provide Australia with the domestic capabilities and leave us less dependent on overseas systems. But we’re not off to the best start.

ChatGPT, the main headline generator in the world of AI, has little to do with Australia other than the fact that people here use it. And in using it, they help to train the OpenAI’s models themselves without any direct benefit in return. We also don’t make the equivalent of NVIDIA’s AI chips here and it’s wildly unlikely we ever will.

It has now become more recognised that keeping the benefits of investment here is essential to ensuring we have a stake in a technology that will dominate industry for the next century. In doing so, we will be at the centre of whatever untold future-dominating technologies emerge.

After all, we have seen the same process already happen in our own lifetimes. The internet, personal computers and cloud computing have all radically altered our lives. With AI, those same types of changes can become supercharged.

In doing this, it also means we must secure the data used to better train AI and align that training with the nation’s rising data sovereignty requirements.

This is where sovereign AI meets data sovereignty.

The ability to store and manage data from within our own shores is a national security requirement. As a critical consideration for compliance with local and global data protection standards, Australia is highly dependent on offshore data storage systems for even some of its most crucial data assets. This is a consequence of the advent of cloud computing.

But sovereign AI won’t be sovereign if it relies on foreign data. It must be fed by data that’s in Australia.

Given the sheer amount of data we’ll need to create a domestic AI capability, Australia must first invest in data centres across the country. These centres will then be connected only within the borders of the nation and with no data to traverse beyond the land girt by sea. These will be essential to house, store and protect the data needed for AI to be effective and to adhere to data sovereignty standards.

One key constraint is the sheer level of additional power, storage and computing required. This constraint is staggering compared the pre-AI era. Thus, large-scale data centres on hectares of land will be required. While we are seeing large-scale private investments into data centres locally, we’ll need more. Data centre planning, approval and construction can take many years, and many of the facilities already in the pipeline were envisioned at a time before AI became mainstream. Direct government help to plan, build and fund these centres will be necessary.

But we’ll also need to see more data storage on premises. This is necessary to ensure that the data required is certainly sovereign and unable to be parsed through international servers. It can be done with data lakes, repositories of huge amounts of data from multiple sources ripe for analysis, stored within modern on-premises environments in one or multiple office sites.

This is not a new idea. Large-scale multi-city offices are quite common for large corporations. Ensuring data sovereignty due to the private nature of these deployments is vital.

Australia has a successful ICT services industry. With a handful of remarkable success stories like Canva and Atlassian, it has built a globally recognised brand. But at its core, Australia’s success is still propped up by the world around it.

While the eventual benefits of AI are still unknowable, we know what it can do for the present. In areas where Australia has deep investment, such as mining and decarbonisation, AI can filter through data to find what humans cannot. It can find the conclusions that will lead to better safety standards, create new products, find new veins and figure out new ways to do business.

Our fate on AI comes down to the data, plain and simple. If we don’t have our sovereign data ecosystems in place, we won’t have a sovereign AI success story to tell. If we don’t have a sovereign AI, we will be forced to use someone else’s. And they will get to use our data more effectively than us.

Private enterprise and government leaders need to seriously ramp up their sovereign data capabilities to help drive our AI future. Otherwise we risk being left behind by others who have already realised its potential and whose investment thus far leaves Australia in the dust.

Countering deepfakes: We need to forecast AI threats

Australia needs to get ahead of the AI criminality curve.

Last month, parliament criminalised the use of deepfake technology to create or share non-consensual pornographic material. The legislation is commendable and important, but the government should consider more action to address new forms of criminality based on AI and other technology.

As far as possible, we shouldn’t let these new forms surprise us. The government should organise a group of representatives from law-enforcement and national security agencies to identify potential or emerging criminal applications of new tech and begin working on responses before people are affected. Functionally, the group would look for the early warning signs and adjust our course well before potential challenges become crises.

The legislation followed recent cases in which Australians, especially young women and girls, were targeted via deepfakes and legislation was found wanting. In the past few years, there have been many incidents of non-consensual pornographic deepfakes affecting students and teachers. Most often, that content is created by young men. Similar cases have occurred internationally.

Deepfake risks were identified years ago. High-profile cases of non-consensual deepfake pornography date back to 2017, when it was used to generate sexually explicit content depicting various celebrities; and, in 2019, AI monitoring group Sensity found that 96 percent of deepfake videos were non-consensual pornography. A 2020 ASPI report also highlighted the issue’s national-security implications.

Unfortunately, non-consensual deepfakes are not the only issue. Rapidly developing AI and other emerging technologies have intricate and multiplying effects and are useful for both legitimate and criminal actors. AI chatbots can be used to generate misleading resources for financial investment scams, and image generators and voice clones can be used to create divisive misinformation or disinformation or promote conspiracy theories.

The issue is not that we can’t foresee these challenges. We can and do. The problem is in the lag between identifying the emergent threat and creating policy to address it before it becomes more widespread. Legislative systems are cumbersome and complex—and policymakers and legislators alike are often focused on current challenges and crises, not those still emerging. Bringing together the right people to identify and effectively prepare for challenges is essential to good law enforcement and protecting victims.

Beyond legislation, the government should establish a group of experts—from the Department of Home Affairs and the Attorney-General’s Department, the Department of Education, the National Intelligence Community, the Australian Cyber Security Centre, the eSafety Commission and law enforcement agencies. The group’s key role would be to consider how emerging technology can be manipulated by criminal and other actors, and how to best prepare against it and protect Australians.

It would need to meet regularly, ideally quarterly, and distribute its assessments at a high level to affect strategic and operational decision-making. Meeting this challenge will also require a whole-of-society approach, including experts from academia, think tanks, industry, social workers and representatives from community and vulnerable groups. Each of those groups offers valuable and necessary insights—especially at the coalface—and will be vital in creating change on the ground.

The need for their inclusion is evident from the current non-consensual deepfake pornography challenge. Reports on the bill highlighted a handful of problematic areas with it, including effects on young offenders. A significant proportion of this content is being created by young people—and, while it is now rightly a crime, the ideal long-term solution is in preventing, not prosecuting, non-consensual deepfake pornography. The National Children’s Commissioner particularly raised concerns that the law could result in higher rates of child incarceration as a result of sharing the material.

The effects of generative AI and other technology in the community are also extensive and harmful below the criminal threshold. The technology is increasingly being used to create fake social media influencers and streamers, or fake online love interests. Users can interact in real time with often highly sexualised or explicit AI-dependent content. Harms include the fostering of unhealthy parasocial attachments among vulnerable or socially isolated people—especially young men and boys.

Community and socially focused organisations and individuals will see these challenges far more immediately and clearly than government. Accessing their experience and expertise should be a priority for policymakers.

New technologies will continue to be developed, and they will continue to have an even greater effect on our lives. We might not always be able to predict such changes, but the potential challenges are not unpredictable. While legislation is important, a proactive approach is crucial.

Tag Archive for: Artificial Intelligence

Nothing Found

Sorry, no posts matched your criteria

Tag Archive for: Artificial Intelligence

Nothing Found

Sorry, no posts matched your criteria