As tensions grow, an Australian AI safety institute is a no-brainer

Australia needs to deliver on its commitment under the Seoul Declaration to create an Australian AI safety, or security, institute. Australia is the only signatory to the declaration that has yet to meet its commitments. Given the broader erosion of global norms, now isn’t the time to break commitments to allies and partners such as Britain, South Korea and the European Union.
China has also entered this space: it has created an AI safety institute, signalled intent to collaborate with the Western network of such organisations and commented on the global governance of increasingly powerful AI systems.
Developments in the United States further demand an Australian safety institute. The US is radically deregulating its tech sector, taking risky bets on integrating AI with government, and racing to beat China to artificial general intelligence, a theoretical system that would rival human thinking. Collectively, these trends mean that AI risks are less likely to be addressed at their source: the frontier labs. Those risks include offensive cyber capability; widespread availability of chemical, biological, radiological and nuclear weapons; and loss of control over advanced systems. Australia needs to act.
Fortunately, we have options for addressing AI safety and security concerns. Minister for Industry and Science Ed Husic’s ‘mandatory guardrails’ consultation mooted an Australian AI Act that would align with the EU and impose basic requirements on high-risk AI models. Australia can foster its domestic AI assurance technology industry and expand its productive involvement in multilateral approaches, ensuring that safety and security remain a global priority.
While an Australian AI Act has policy merit, it might face a rocky political path. In March, the Computer & Communications Industry Association—a peak body with members including Amazon, Apple, Google, X and Meta—urged US President Donald Trump to bring the News Media Bargaining Code into a US-Australia trade war. In the same submission, the association complained about the proliferation of AI laws and the proposed Australian regulation of high-risk AI models.
An Australian AI safety institute would be an immediate way to protect Australian interests and create a new path to collaborate with our allies without these political risks. In addition to giving us a seat at the table, such an institute would reduce our dependency on others for technical AI safety and security. In other security domains, we’ve seen dependency used as a bargaining chip in transactional negotiations. For AI, we still have time to avoid that.
Domestic pressure is building. In March, Australia’s AI experts united in a call for action, including the establishment of an Australian safety institute and an Australian AI Act. The letter will remain open to expert and public support until the election.
Australian AI expert and philosopher Toby Ord, a senior researcher at Oxford University and author of The Precipice: Existential Risk and the Future of Humanity, said:
Australia risks being in a position where it has little say on the AI systems that will increasingly affect its future. An [Australian AI safety institute] would allow Australia to participate on the world stage in guiding this critical technology that affects us all.
And it’s not just the experts. Australians are more worried about AI risks than the people of any other nation for which we have data.
The experts and the public are right. It’s realistic that we will see transformative AI during the next term of government, though expert opinion varies on the exact timing. Regardless, the window for Australia to have any influence over these powerful and risky systems is rapidly closing.
Britain recently renamed its ‘AI Safety Institute’ as the ‘AI Security Institute’ but without significantly changing its priorities. The institute targets AI capabilities that enable malicious actors and the potential loss of control of advanced AI systems, including the ability to deceive human operators or autonomously replicate.
Given that these are fundamentally national security issues, perhaps ‘security’ was the better name from the start, and the appropriate one for Australia’s institute.