Tag Archive for: Artificial Intelligence

Technology can serve humanity if we don’t let it outpace our societies

‘Progress is happening so quickly that governments and societies struggle to understand revolutionary and disruptive technology, much less mobilise effective responses.’

  • This is an edited version of Justin Bassi’s opening speech at ASPI’s Sydney Dialogue taking place on September 2 and 3.

Most of you probably flew to Sydney to join us. And most of you probably didn’t think much about the shape of the windows in the plane—but let me tell you something interesting about plane windows. They used to be square.

The world’s first commercial jet airliner, the de Havilland Comet, built in Hertfordshire, England, carried more than 30,000 passengers in its first full year of operation in 1953, including Queen Elizabeth the Queen Mother and Princess Margaret.

But in 1954, two Comets came apart in the air, killing everyone on board in both instances. Investigations found that the fuselage around the corners of the square windows suffered metal fatigue from the stress caused by the sharp angles. And this was the cause of the crashes.

Plane makers around the world switched to rounded windows … and commercial aviation has gone on to contribute arguably as much as any technology to boosting our civilisation—enabling most people to experience other parts of the world, conduct business face-to-face, share their thoughts and ideas, and learn from one another—as we’ll do over the next two days.

The point of the story is that technology improves our lives while introducing new risks. As with aviation, there’s a period of adjustment: we discover problems, we find sensible fixes, the cost comes down, the take-up rises and the technology becomes baked into our lives, contributing to our social, economic and cultural growth.

What is changing, however, is the speed at which new technologies are being developed and the impact they are having. We have galloping progress in fields such as artificial intelligence, synthetic biology and quantum computing—to name a few. Technological progress is cumulative and cross-pollinating, so that advances in one field tend to drive advances in others. And as that progress builds, the stakes both in terms of rewards and risks grow.

As our first speaker Eric Schmidt has written of artificial intelligence: ‘Faster aeroplanes did not help build faster aeroplanes, but faster computers will help build faster computers.’

And that’s the challenge. Progress is happening so quickly that governments and societies struggle to understand revolutionary and disruptive technology, much less mobilise effective responses.

And if we don’t roll up our sleeves and wrestle with difficult policy challenges around emerging technologies, we yield the space to others who might be motivated purely by financial gain or political power. Then, technology isn’t serving the needs and interests of the majority of people.

Things don’t automatically break our way. People actually have to ask the right questions and start the right conversations.

In September 1984, 40 years ago almost to the day, the US introduced the first Presidential Directive on cyber, titled NSDD-145, establishing a comprehensive and co-ordinated approach to information systems security.

It came about because then President Ronald Reagan, after watching the movie War Games, asked whether the cyber attacks portrayed in the film could really happen. His question was initially met with derision but, after a review, the Chairman of the Joint Chiefs, General Vessey, returned with the answer: ‘Mr President, the problem is much worse than you think.’

The lag period between the development of technology and our ability to manage it is where the risk is most intense and the benefits most uncertain.

And the reason the Sydney Dialogue was established was to help shorten this period—to prompt those questions and conversations by bringing together political leaders, tech CEOs and the world’s top civil society voices to talk about how we can roll out the next waves of technology in secure and stable ways.

In just three years since we held the first Sydney Dialogue:

  • The lingering lessons of COVID and the deteriorating strategic environment have combined to accelerate the trend of economic derisking—especially by the United States and China.
  • Generative AI has demonstrated the immense power and mystery of deep learning and massive amounts of computing—leaving governments grappling with how to regulate a technology for which there isn’t really any regulatory precedent.
  • Russia’s invasion of Ukraine has demonstrated the chameleonic dimensions of hybrid warfare—underscoring the importance of cybersecurity and of mounting defences against threats ranging from disinformation and propaganda to attacks on power grids and other critical infrastructure.
  • And disinformation—including deepfake-generated disinformation—has thrown our entire infoscape into question, raising fears of personalised content created at effectively zero cost yet targeting millions of people for malign purposes. This has profound implications for democracies that depend on public trust in the integrity of institutions and elections.

The Sydney Dialogue is proudly focused on the Indo-Pacific region—the most populous, dynamic and diverse region in the world.

These are conversations for all of us—and I’m very pleased to say that we have more than 30 countries represented here.

And by the end of the Dialogue, I hope that we’re a little bit closer to finding the right rules and norms that translate across borders to foster safe and secure access to transformative technologies; help distribute the benefits equitably; earn the trust of our citizens; and protect individual rights and democratic freedoms.

As the power of technology grows—and everyone in this room knows that’s the course we’re on—the stakes are getting higher and the conversations more vital. That’s why we’re all here.

The danger of AI in war: it doesn’t care about self-preservation

Recent wargames using artificial-intelligence models from OpenAI, Meta and Anthropic revealed a troubling trend: AI models are more likely than humans to escalate conflicts to kinetic, even nuclear, war.

This outcome highlights a fundamental difference in the nature of war between humans and AI. For humans, war is a means of imposing their will in the service of survival; for AI, the calculus of risk and reward is entirely different because, as the pioneering scientist Geoffrey Hinton noted, ‘we’re biological systems, and these are digital systems.’

Regardless of how much control humans exercise over AI systems, we cannot stop the widening divergence between their behaviour and ours, because AI neural networks are moving towards autonomy and are increasingly hard to explain.

To put it bluntly, whereas human wargames and war itself entail the deliberate use of force to compel an enemy to our will, AI is not bound by the most basic of human instincts: self-preservation. The human desire for survival opens the door to diplomacy and conflict resolution, but whether and to what extent AI models can be trusted to handle the nuances of negotiation in ways that align with human values is unknown.

The potential for catastrophic harm from advanced AI is real, as underscored by the Bletchley Declaration on AI, signed by nearly 30 countries, including Australia, China, the US and Britain. The declaration emphasises the need for responsible AI development and control over the tools of war we create.

Similarly, ongoing UN discussions on lethal autonomous weapons stress that algorithms should not have full control over decisions involving life and death. This concern mirrors past efforts to regulate or ban certain weapons. However, what sets AI-enabled autonomous weapons apart is the extent to which they remove human oversight from the use of force.

A major issue with AI is what’s called the explainability paradox: even its developers often cannot explain why AI systems make certain decisions. This lack of transparency is a significant problem in high-stakes areas, including military and diplomatic decision-making, where it could exacerbate existing geopolitical tensions. As Mustafa Suleyman, co-founder of DeepMind, pointed out, AI’s opaque nature means we are unable to decode the decisions of AI to explain precisely why an algorithm produced a particular result.

Rather than seeing AI as a mere tool, it’s more accurate to view it as an agent capable of making independent judgments and decisions. This capability is unprecedented, as AI can generate new ideas and interact with other AI agents autonomously, beyond direct human control. The potential for AI agents to make decisions without human input raises significant concerns about the control of these powerful technologies—a problem that even the developers of the first nuclear weapons grappled with.

While some want to regulate AI through something like the nuclear non-proliferation regime, which has so far limited nuclear weapons to nine states, AI poses unique challenges. Unlike nuclear technology, its development and deployment are decentralised and driven by private entities and individuals, so it’s inherently hard to regulate. The technology is spreading universally and rapidly with little government oversight, and it’s open to malicious use by state and non-state actors.

As AI systems grow more advanced, they introduce new risks, including elevating misinformation and disinformation to unprecedented levels.

AI’s application to biotech opens new avenues for terrorist groups and individuals to develop advanced biological weapons. That could encourage malign actors, lowering the threshold for conflict and making attacks more likely.

Keeping a human in the loop is vital as AI systems increasingly influence critical decisions. Even when humans are involved, their role in oversight may diminish as trust in AI output grows, despite AI’s known issues with hallucinations and errors. The reliance on AI could lead to a dangerous overconfidence in its decisions, especially in military contexts where speed and efficiency often trump caution.

As AI becomes ubiquitous, human involvement in decision-making processes may dwindle due to the costs and inefficiencies associated with human oversight. In military scenarios, speed is a critical factor, and AI’s ability to perform complex tasks rapidly can provide a decisive edge. However, this speed advantage may come at the cost of surrendering human control, raising ethical and strategic dilemmas about the extent to which we allow machines to dictate the course of human conflict.

The accelerating pace at which AI operates could ultimately erode the role of humans in decision-making loops, as the demand for faster responses leads to human judgment being sidelined. This dynamic could create a precarious situation in which the quest for speed and efficiency undermines the very human oversight needed to ensure that the use of AI aligns with our values and safety standards.

Protecting our elections against tech-enabled disinformation

The Strategist is running a short series of articles in the lead up to ASPI’s Sydney Dialogue on September 2 and 3. The event will cover key topics in critical, emerging and cyber technologies, including disinformation, electoral interference, artificial intelligence, hybrid warfare, clean technologies and more.

 

In 1922, Mr Norman J. Trotter, from the Pappinbarra region of New South Wales, wrote to the then federal Treasurer, the Hon Earle Page MP, complaining that the general election was being held when there wasn’t a full moon.

Trotter pointed out the safety implications of transporting ballot boxes over mountainous roads, on horseback, with insufficient illumination!

A century on, electoral administrators around the world are dealing with a radically changed democratic landscape.  Concerns about moonlight—or its absence—have been replaced by the pervasive presence of disinformation and false narratives, the rise of new technologies such as generative artificial intelligence, occasional madcap conspiracy theories, threats to electoral workers, and the need to maintain citizens’ confidence in electoral outcomes.

Together, these dramatic changes will demand the ongoing vigilance of legislators, regulators and civil society. Increased focus on, and resourcing of, this continually evolving space can harness the opportunities it presents while lessening the negative effects already being experienced.

The Australian Electoral Commission has been remarkably successful in maintaining the confidence of the Australian people: survey results show persistently high levels of trust in our operations, with nine out of 10 Australians expressing a high degree of satisfaction. That assurance is indispensable when the democratic legitimacy of governments rests on trust in electoral outcomes—the foundation on which all other actions of democratic government rest.

Yet maintaining these results may become increasingly complex with the rapidly expanding use of new technologies and an ever-evolving information ecosystem.

The attempted manipulation of information isn’t new. In 1675, King Charles II tried to close London coffee houses because he was worried about false information being peddled in those places where people gathered to talk politics. Modern communications, including the ubiquitous use of mobile phones and social media platforms, have turbocharged the development and spread of information—both accurate and false. This has significantly affected all aspects of elections, from campaigning to the way they are conducted.

The relatively recent advent of generative AI heralds a potentially new epoch in electoral management. Globally, democracies are coming to terms with this new technology, and jurisdictions are trying different approaches, from outright bans through to mandatory declarations on messaging and voluntary codes.

Regardless of the approach, democracies need to be aware that generative AI will have a significant impact on communications around elections. It will enable the generation of information—including disinformation—at a volume and velocity not seen previously and, perhaps even more troublingly, with a lack of verifiability that may make it hard for audiences to discern the truth of the information they are receiving.

Legislators and regulators need to be alert to the potential impact of these ‘three Vs’. Meanwhile Australia’s regulatory framework needs to evolve to harness the benefits of new technology—including to democratic participation and political inclusion—while ameliorating the potentially negative impacts, and protecting the rights of citizens to express themselves freely.

Citizens’ electoral expectations have also changed dramatically. There was a time when the role of an electoral management body was simply to produce a statistically valid result. Such bodies must now also work to maintain trust by listening to the huge amount of feedback they get through social media and other channels—much of which reflects immediate feelings and does not necessarily take account of the legislation or resourcing realities by which an electoral body is bound. These bodies must swiftly respond to concerns and provide a constant stream of assurance about the electoral process.

The AEC has instituted several initiatives to manage these recent developments. We have developed a reputation management system, which outlines a range of strategies to ensure citizens can trust election results. This includes arguably the most active media and social media engagement in Australia’s public service and the operation of a disinformation register during electoral events. These activities, and many others, are supported by an AEC command centre that provides real-time data, oversight and connectivity to the manual election operation in a way we’ve never had before.

We’ve also established a defending democracy unit that works with our partners across the government and social media platforms, and supports the operation of the multi-agency electoral integrity assurance taskforce.

AI-generated deepfakes—using audio, video, or a combination of both—have been used to sway public opinion in a growing number of elections overseas. In some cases, the use of AI has been clearly labelled; but in others, the material is presented as genuine. In extreme cases, voters can be steered toward or away from candidates—or even to avoid the polls altogether.

The next federal election is likely to be the first in Australia in which the use of AI-generated political communication could be a prominent feature of the campaign. The net effect, some experts say, is a genuine threat to democracy with a surge of AI deepfakes eroding the public’s trust in what they see and hear.

The AEC is watching global developments closely and is working to ensure voters are not misled about the electoral process, nor the role, capabilities and performance of the AEC.

We are also looking forward to the Australian Parliament grappling with this issue to produce national legislation to help regulate the use of this new technology. Education—specifically digital media literacy—will be fundamental to supporting voters and protecting elections.

Despite the wave of change, the actual process of voting remains reassuringly the same as it was for the very first federal election in 1901. Australians use a pencil—or a pen, if they choose—to mark their paper and put it in the ballot box. Those votes still need to be transported to be counted, and votes are counted by citizens working in a temporary capacity with the AEC, in the presence of party scrutineers. The results are published and certified by the electoral authority. Of course, there are some advances such as postal voting and pre-poll voting, as well as telephone voting for blind and low-vision voters, but the core process remains largely unchanged. (As an aside, Mr Trotter would be pleased that advances in electrification mean moonlight is no longer a key concern.)

The AEC is very clear on its role in administering elections and maintaining citizens’ trust. We have never been, and will never be—unless told to do so by Parliament—involved in ascertaining the truth or otherwise of statements by candidates and parties.

Rather, we focus on protecting the integrity of the electoral system and ensuring citizens have the information they need to participate in the process.

This is becoming a more complex and challenging task. It is one that needs the active commitment and attention of every Australian to ensure trust and confidence in our elections remains strong.

 

Get ready for AI-supercharged hacking

Artificial intelligence can supercharge the effect of hacking attacks. As use of AI widens, people and organisations will have to become much more careful in guarding against its malicious use.

One aspect of the hacking problem is that malicious actors, having succeeded in hacking a system, such as a database or phone, can apply AI to the information they have stolen to create phishing messages that are much more persuasive and effective.

Another challenge is that an AI program loaded on to a phone or other computer must have access to far more information than a normal app. So a hacker may target the AI tool itself, seeing it as a wide door to more information that in turn can be used to execute more and stronger attacks.

Cybercrime is causing significant disruption to the Australian economy. According to the Australian Institute of Criminology, cybercrime cost $3.5 billion in Australia in 2019. Around $1.9 billion was lost directly by victims and the rest was the cost of recovery from attacks and of measures to protect systems.

To guard against AI-supercharged hacking, we’ll need to try harder to protect ourselves and the organisations we’re affiliated with. We’ll need even more vigilance when receiving emails and text messages, more diligence in reporting suspicious ones and more reluctance to share information in response to them.

Spear-phishing is the sending of emails and text messages that are highly targeted at the individuals they’re addressed to. For example, suppose you visited a bakery yesterday, bought a tiramisu cake and later received a text message asking you to follow a link to rate the cake and your shopping experience. Mistakenly assuming that such a message could have come only from the innocent local bakery, you may click through and provide personal information, when in fact you’re dealing with a hacker who has found out just a little about you—your phone number, the name of the shop and what you bought.

But that example is mild compared with the spear-phishing that might be done with generative AI, the type that can create text, music, voice or images. It’s quite conceivable that a hacker using generative AI could send a detailed email purporting to come from your friend, written in the friend’s style and discussing things that you’d expect to hear only from that friend.

Next, there’s the problem that AI tools that are or will be on our phones and other computers must have permission to access a great deal of information in other apps. Although the AI tools are mostly pre-trained, they need access to our data to provide personalised solutions or recommendations for each individual. For example, to craft that persuasive message purporting to come from your friend, an AI would need to learn from records in your messages, email and contacts apps, and maybe the photos app, too.

This means that if attackers can get into the system, perhaps using techniques hackers already rely on, and then gain access to the AI tool, they may be able to collect whatever information the AI is collecting, without having to break directly into the stores of information to which the AI has access.

Imagine that you and two friends are planning a birthday party for your brother and discussing gift ideas by email. A hacker who can read the contents of your email app, because your AI tool has access to it, can then send an extremely persuasive spear-phishing email. It might purport to come from one of the friends, offering links to gifts of the type you were discussing. With today’s usual level of guardedness, you are not likely to be at all suspicious. But the links are in fact malicious, possibly designed to give access to your organisation’s computer network.

The AI tool that Apple announced in June, for example, requires access to your contacts and other personal information on your phone or other computer.

So far, the only answer to all this is increased vigilance, by individuals and their employers. Governments can help by publicising the problem. They should.

Top secret cloud and AI loomerism

Intelligence and defence are now data enterprises, which means they are AI enterprises. The volume and velocity of data is well beyond human scale. To extract actionable insights, shorten decision loops and empower our spies and warfighters, data must be handled at machine speed. Human-machine teaming is our only viable path forward.

The announcement by Amazon Web Services (AWS) today of a $2 billion strategic partnership with the Australian government for a top secret cloud is a much needed technology boost that will bring our intelligence and defence communities up to par with the US and Britain.

But this new technology should also trigger radical organisational change within the intelligence and defence communities, reflecting the new reality and the necessity of human-machine teaming. Such changes will optimise the cloud’s capabilities, the value of the data that traverses it, and the power of the AI models it will feed. The conservative world of intelligence will consider these changes heretical. But they are necessary.

Australia is not far behind the herd. The US was the first of the Five Eyes to get a top secret cloud service. In 2013, the CIA’s Commercial Cloud Services (C2S) contract with AWS was worth a reported US$600 million over 10 years. In November 2021, the CIA awarded the successor program, Commercial Cloud Enterprise (C2E), to five vendors—AWS, Google, IBM, Microsoft and Oracle—for probably more than US$10 billion over 15 years. Britain was next in 2021, again with AWS, in a deal worth probably up to £1 billion over 10 years.

In December 2022, the US Department of Defense procured the Joint Warfighter Cloud Capability, an up-to-top-secret cloud capability delivered by Microsoft, Oracle, AWS and Google. In this arrangement, the four companies will compete for task orders worth up to US$9 billion until June 2028.

In December 2023, Office of National Intelligence (ONI) Director‑General and AUKUS architect Andrew Shearer intimated that top secret cloud was coming to Australia. ONI had already approached the market in December 2020, leading to failed negotiations with Microsoft, AWS’s main hyperscale cloud competitor.

The AWS partnership will deliver Australia’s first ever top secret cloud, which has been a tough hurdle to clear. Defence, intelligence and other agencies have been progressively adopting lower-classification cloud capabilities for years. The government created its cloud-first policy 10 years ago, and in 2022 the Digital Transformation Agency renewed AWS’s whole-of-government cloud agreement until May 2025. That agreement can carry data up to the ‘protected’ level, the highest classification allowed for a public cloud, one available to many users over the internet.

Extremely rigorous data security, localisation and control measures have meant that a top secret cloud must be a private cloud, available to just one customer. In this case, it’s ‘purpose-built for Australia’s Defence and Intelligence agencies’, AWS says. And this is why it comes with a $2 billion price tag.

It’s a very good thing that Australia has designed this procurement to deliver a capability for all Australia’s intelligence and defence agencies. This has maximised our purchasing power and taxpayers’ value for money. Considering the estimated costs for the US and British top secret clouds, $2 billion is in the ballpark.

Also, in the relentless quest for sharper intelligence, disparate agencies can resemble warring fiefdoms, each clutching its data and capabilities close to its chest. A unified top secret cloud will be a digital nervous system for the intelligence community. This fulfils a ‘central theme’ of the 2017 Independent Intelligence Review of ‘strengthening integration across Australia’s national intelligence enterprise.’

Australia’s choice to go with a single hyperscale cloud provider was probably the easiest and quickest way forward, given the urgency of the need. The contract with AWS may have provisions that allow the government to integrate other cloud providers down the road. A multi-cloud environment can increase resilience and offer better choice between offerings from various providers for storage, computing, analytics, applications and AI capabilities. This is why the US intelligence community and Pentagon have both signed multi-cloud deals that include four or more cloud providers. Having several providers dodges the problem of locking into just one and gives the customer more power.

Technology begets social and organisational changes. Data is increasing exponentially. AI capabilities are accelerating demand for new, higher quality data to train models. Intelligence and military operations now depend on massive data collection and management to extract actionable insights and shorten decision loops. In one example, the US National Geospatial‑Intelligence Agency collects 15 terabytes of imagery per day. This is expected to grow 1000 percent in six or seven years.

Dynamic, real‑time data collection and exchange will feed AI models of deployed edge capabilities. We may now think of UAVs and other intelligence, surveillance and reconnaissance platforms in that context, but real‑time data collection and exchange are moving into almost every piece of deployed equipment. Increasingly enabled by 5G technology, this includes weapons, communications and clothing—and will soon include implanted biotechnologies such as brain‑computer interfaces.

So, AWS’s top secret private cloud is more than just cheaper, scalable infrastructure for sharing and storing classified reports. It is the foundation for intelligence and defence agencies’ ability to utilise AI. The cloud is the ocean of data storage and processing power that will fuel AI’s catalysing effect on intelligence and defence agencies. Sensor feeds, human whispers, the electromagnetic hum of a thousand devices—these all converge. The data needs to be rapidly ingested, cleaned of noise and integrated into coherent intelligence.

Human-machine teaming is necessary. There is no other option. This is where our real challenge lies, and it will make the policy and technology challenges pale in comparison: the cultural and organisational change of intelligence and defence agencies. Australia’s intelligence and defence agencies are already using AI and machine-learning algorithms in their enterprise technology stacks, but so far this has been a process of augmenting existing roles, processes and organisational structures. We need to move away from the idea that the cloud and AI are there to enhance or support existing roles. They are about completely transforming those roles.

AI is the latest gale of creative destruction. In the 1780s, the introduction of the power loom helped kick off the industrial revolution. The power loom put hand weavers out of business and transformed the process of textile manufacturing. AI will transform intelligence and defence agencies to the same degree.

AI doomerism is the fear that AI will destroy humanity. Let me coin ‘AI loomerism’ as the misplaced idea that AI can be effectively integrated into organisations without radical transformation of roles, processes and structures. AI is not there to support the conventional roles of spies or warfighters, in the same way the power loom was not there to support the hand weaver. It was there to radically improve efficiency and force a reconceptualisation of humans’ role in the system, the value of their skills and expertise, and therefore their very identity.

In our quest to utilise AI technology to give our decision makers faster and more actionable insights, we must embark on a transformative odyssey, not merely tinkering with analytical tools or tradecraft. The intelligence community stands at a threshold. We are called upon to not just reassess the processes of spying and warfighting, but the very foundations of them.

Imagine, if you will, a complete restructuring, a rewiring of the very fabric of Australia’s intelligence and defence agencies. Legal frameworks, once rigid, must become adaptable. Policy and governance, once siloed entities, must enable data flows in real time, unlocking the full potential of AI. By streamlining these core functions, we unlock the potential for shared resources: powerful language models, cutting-edge infrastructure, and partnerships that bridge the chasms between agencies and missions. It is through this grand transformation that we can truly harness the power of cloud, data and AI, not just as tools but as paradigm‑shifting technologies.

The high cost of GPT-4o

With the launch of GPT-4o, OpenAI has once again shown itself to be the world’s most innovative artificial-intelligence company. This new multimodal AI tool, which seamlessly integrates text, voice and visual capabilities, is significantly faster than previous models, greatly enhancing the user experience. But perhaps the most attractive feature of GPT-4o is that it is free—or so it seems.

One does not have to pay a subscription fee to use GPT-4o. Instead, users pay with their data. Like a black hole, GPT-4o increases in mass by sucking up any and all material that gets too close, accumulating every piece of information that users enter, whether in the form of text, audio files or images.

GPT-4o gobbles up not only users’ own information but also third-party data that are revealed during interactions with the AI service. Let’s assume you are seeking a summary of a New York Times article’s content. You take a screenshot and share it with GPT-4o, which reads the screenshot and generates the requested summary within seconds. For you, the interaction is over. But OpenAI is now in possession of all the copyrighted material from the screenshot you provided, and it can use that information to train and enhance its model.

OpenAI is not alone. In the past year, many firms, including Microsoft, Meta, Google, and X, have quietly updated their privacy policies in ways that potentially allow them to collect user data and apply it to train generative AI models. Though leading AI companies have already faced numerous lawsuits in the United States over their unauthorised use of copyrighted content for this purpose, their appetite for data remains as voracious as ever. After all, the more they obtain, the better they can make their models.

The problem for leading AI firms is that high-quality training data has become increasingly scarce. In late 2021, OpenAI was so desperate for more data that it reportedly transcribed over a million hours of YouTube videos, violating the platform’s rules. (Google, YouTube’s parent company, has not pursued legal action against OpenAI, possibly to avoid accountability for its own harvesting of YouTube videos, the copyrights for which are owned by their creators.)

With GPT-4o, OpenAI is trying a different approach, leveraging a large and growing user base, drawn in by the promise of free service, to crowdsource massive amounts of multimodal data. This approach mirrors a well-known tech-platform business model: charge users nothing for services, from search engines to social media, while profiting from app-tracking and data-harvesting, what Harvard professor Shoshana Zuboff famously called ‘surveillance capitalism’.

To be sure, users can prohibit OpenAI from using their ‘chats’ with GPT-4o for model training. But the obvious way to do this, on ChatGPT’s settings page, automatically turns off the user’s chat history, causing users to lose access to their past conversations. There is no discernible reason why these two functions should be linked, other than to discourage users from opting out of model training.

If users want to opt out of model training without losing their chat history, they must, first, figure out that there is another way, as OpenAI highlights only the first option. They must then navigate through OpenAI’s privacy portal, a multi-step process. Simply put, OpenAI has made sure that opting out carries significant transaction costs, hoping that users will not do it.

Even if users consent to the use of their data for AI training, consent alone would not guard against copyright infringement, because users are providing data that they may not actually own. Their interactions with GPT-4o thus have spillover effects on the creators of the content being shared, what economists call ‘externalities’. In this sense, consent means little.

While OpenAI’s crowdsourcing activities could lead to copyright violations, holding the company, or others like it, accountable will not be easy. AI-generated output rarely looks like the data that informed it, which makes it difficult for copyright holders to know for certain whether their content was used in model training. Moreover, a firm might be able to claim ignorance: users provided the content during interactions with its services, so how can the company know where they got it from?

Creators and publishers have employed a number of methods to keep their content from being sucked into the AI-training black hole. Some have introduced technological solutions to block data scraping. Others have updated their terms of service to prohibit the use of their content for AI training. Last month, Sony Music, one of the world’s largest record labels, sent letters to more than 700 generative-AI companies and streaming platforms, warning them not to use its content without explicit authorisation.

But as long as OpenAI can exploit the loophole of user provision of data, such efforts will be in vain. The only credible way to address GPT-4o’s externality problem is for regulators to limit AI firms’ ability to collect and use the data their users share.

Modernising defence must respect international humanitarian law

The integration of artificial intelligence and autonomous systems is essential to ensuring that Australia is capable of defending its interests now and into the future, but we as a country must be careful not to abandon our humanity inadvertently in the race to modernise.

To ensure this does not happen, Australia needs better safeguards to ensure compliance with international humanitarian law as we develop new defence capabilities through emerging technologies.

Some in the technology industry have previously encouraged a ‘move fast and break things’ approach, emphasising speed and innovation above all else. This approach has seen human decision-making and intelligence increasingly replaced by algorithms and autonomous systems.

There are particular risks when this occurs in the military context. Humans must remain at the centre of our defence capabilities, particularly when strategic decisions are made that are literally life-or-death decisions. Technologies that take human decision-makers ‘out of the loop’ raise important questions about how these decisions are made, and who is accountable for them.

Lethal autonomous weapons systems (LAWS) can generally be understood as weapons that independently select and attack targets without human supervision.

The idea of drones patrolling the skies, identifying and executing targets, reads like science fiction, but as early as 2003 the US predicted that AI and facial recognition technologies (FRT) would be used with limited human supervision to execute lethal attacks during military operations.

One of the first examples of LAWS being used was in 2020, in Libya’s civil war, when forces backed by the government in Tripoli were believed to have used STM Kargu-2 drones to attack retreating enemy soldiers, according to a United Nations report. Since then, LAWS have been used in the Ukraine-Russia war, and Russia is known to be developing a nuclear-capable LAWS called ‘Poseidon’.

Understandably, many people oppose the idea of machines making life-and-death decisions.

As part of our recent evidence to an inquiry by federal Parliament’s Joint Standing Committee on Foreign Affairs, Defence and Trade (JSCFADT) into the modernisation of Australia’s defence capabilities, the Australian Human Rights Commission (AHRC) dedicated the whole of our 38-page submission to discussing the human rights and international humanitarian law concerns raised by Australia’s current approach to LAWS.

A key concern is that LAWS are incompatible with the jus in bello principles of proportionality and distinction.

Under the Geneva Conventions, attacks must be proportionate—that is, the harms caused by an attack must be outweighed by the perceived advantages gained. This subjective analysis is inherently difficult and imprecise for humans to undertake. It requires a weighing of the value of a human life.

To allow an algorithm to conduct this weighing exercise has been described by United Nations Secretary-General António Guterres as ‘politically unacceptable and morally repugnant’.

Equally important is the principle of distinction, which prohibits the targeting of civilians and the use of indiscriminate attacks, as combatants must seek to minimise the impact of conflict on civilians.

While AI and FRT are becoming more advanced every day, they are currently incapable of determining whether a person is hors de combat (which means they have surrendered or are so badly wounded that they are incapable of defending themselves). That type of contextual interpretation is simply beyond the capabilities of these technologies.

This could result in combatants who are hors de combat being killed by a LAWS that cannot make this distinction. The problem is further complicated by the rise in ‘grey zone’ and irregular conflict, in which combatants are not easily identifiable or easily distinguished from civilians.

Despite these significant risks, Australia currently has insufficient safeguards in place to ensure compliance with international humanitarian law as the ADF continues to evolve its capabilities by integrating emerging technologies.

The Department of Defence, in its own submission to the JSCFADT, noted that, under Article 36 of Additional Protocol I to the Geneva Conventions, new weaponry must undergo a legal review.

The Australian Government’s position appears to be that these reviews are sufficient to comply with international humanitarian law. But  Article 36 reviews have been widely criticised across the globe due to their inflexibility, lack of accountability and lack of compliance mechanisms. These concerns will be exacerbated if LAWS are, as expected, able to ‘learn’ from new data and missions. This could lead to Article 36 reviews approving a technology that may then operate differently after being deployed—rendering the previous review redundant.

Article 36 reviews are a necessary, but not a sufficient, safeguard with respect to LAWS.

It is due to these insufficiencies that many groups, including the AHRC, are calling for the regulation of LAWS.

There has recently been positive movement on this front, with Australia voting in favour of a UN resolution that stressed the ‘urgent need for the international community to address the challenges and concerns raised by autonomous weapons systems’.

However, more must be done.

Australia should reconsider its 2018 position that it is premature to regulate LAWS. There are some military technologies, such as landmines and cluster munitions, that have been recognised as posing too great a threat to human rights and international humanitarian law, and therefore requiring regulation. It is our view that LAWS fall into the same category.

As Australia seeks to improve its defence capabilities, we must ensure that the swift pace of modernisation does not result in human rights and international humanitarian law being left behind.

How the Australian Border Force can exploit AI

Our ability to plan for crises, to reduce the uncertainties they present and to quickly diagnose the effectiveness of our actions in novel scenarios—all of this opens up a conversation about the possibilities and challenges of artificial intelligence.

The extraordinary power of AI to support the Australian Border Force, to analyse data in close to real time and at scale, is already helping our officers to detect and disrupt all manner of criminal activity. AI is giving us more capacity to detect and disrupt new threats at the border and before they even reach it.

The ABF is well advanced in developing our Targeting 2.0 capability, to incorporate all of our assessments of border-related threats, risks and vulnerabilities along with new data from industry and partners, to support our decision-making.

Targeting 2.0 seeks to apply the extraordinary power of AI to complement and amplify the deep expertise of our people, to identify new patterns at speed and at scale, to detect and disrupt crime as it happens, and, in time, to get ahead of the perpetual evolution of criminal activities.

As AI continues to evolve, we’re going to be able to look at an ever bigger picture and start addressing problems at the systems level—whether in terms of threat discovery, modelling or disruption.

Our jobs and the world in which we operate are going to be very different in the coming years because of AI—whether it’s strategic planning, preparedness, operational planning and response, augmented decision-making, or being able to respond to or get ahead of threats.

The concept of digital twins—virtual models designed to accurately mirror a physical process, object or system—is one of the things that has grabbed our attention. And social systems are well in scope, opening the door for policy twins. A digital representation of a policy could include legislation as code, relevant data, modelling tools, impact monitoring and more.
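To make the ‘legislation as code’ idea concrete, here is a minimal sketch, under stated assumptions, of how a single rule might be expressed as an executable function rather than prose, so a policy twin could test it against data. The rule, the Consignment fields, the threshold and the function names are all hypothetical, invented purely for illustration; they do not describe any actual legislation or ABF system.

```python
# Hypothetical 'rules as code' sketch: one invented rule expressed as an
# executable function so a policy twin could run it against test data.
from dataclasses import dataclass

@dataclass
class Consignment:
    declared_value_aud: float   # value declared on the import paperwork
    duty_already_paid: bool     # whether duty was collected at point of sale

LOW_VALUE_THRESHOLD_AUD = 1_000  # hypothetical threshold, not a real figure

def requires_full_declaration(c: Consignment) -> bool:
    """Hypothetical rule: consignments at or above the threshold need a full
    import declaration unless duty has already been paid."""
    return c.declared_value_aud >= LOW_VALUE_THRESHOLD_AUD and not c.duty_already_paid

# A policy twin could evaluate the rule across simulated traffic to estimate
# how many consignments a proposed change to the rule would capture.
sample = [Consignment(250, False), Consignment(4_000, False), Consignment(4_000, True)]
print(sum(requires_full_declaration(c) for c in sample))  # -> 1
```

Encoding the rule once, as code, is what lets the same policy twin be re-run against new data, modelling tools and impact monitoring without re-interpreting the prose each time.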

AI is only going to accelerate our ability to design and implement policy twins as well as other digital twins. Add in the incredible horsepower of quantum computing, and we’ll be able to have digital and policy twins of things as complex as the entire Australian border and all its related infrastructure and systems. We should eventually be able to model the effects of a crisis across the whole border continuum, more easily, and on an enterprise scale.

There are many other technological advances contributing to the immense power of AI, including neural network architectures, edge computing, blockchain, and augmented/virtual reality.

Another one of the tools at our disposal is Bayesian belief networks—advanced decision-making maps that consider how different variables are connected and how certain or uncertain those connections are in determining an outcome. They aim to reduce uncertainty in decision-making and help to determine the probability of an event based on what we know.
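As a concrete illustration of that idea, here is a minimal sketch of how a tiny Bayesian belief network updates the probability of an outcome as evidence arrives. The variables, probabilities and ‘high-risk shipment’ scenario are entirely hypothetical and invented for illustration; they do not reflect any real ABF model or data.

```python
# Minimal sketch of a tiny Bayesian belief network: two observable indicators
# that each depend on one hidden variable ('shipment is high risk'). All
# variables and probabilities are hypothetical, for illustration only.

P_RISK = 0.02  # prior probability that a shipment is high risk

# P(indicator observed | shipment is high risk / not high risk)
P_ANOMALOUS_MANIFEST = {True: 0.70, False: 0.05}
P_SENSOR_ALERT = {True: 0.60, False: 0.10}

def posterior_risk(anomalous_manifest: bool, sensor_alert: bool) -> float:
    """P(high risk | evidence), assuming the two indicators are conditionally
    independent given the risk status (the simplest network structure)."""
    def likelihood(risk: bool) -> float:
        p = P_ANOMALOUS_MANIFEST[risk] if anomalous_manifest else 1 - P_ANOMALOUS_MANIFEST[risk]
        q = P_SENSOR_ALERT[risk] if sensor_alert else 1 - P_SENSOR_ALERT[risk]
        return p * q
    joint_risk = likelihood(True) * P_RISK
    joint_no_risk = likelihood(False) * (1 - P_RISK)
    return joint_risk / (joint_risk + joint_no_risk)

print(f"Prior:                   {P_RISK:.3f}")                        # 0.020
print(f"Anomalous manifest only: {posterior_risk(True, False):.3f}")   # ~0.113
print(f"Manifest + sensor alert: {posterior_risk(True, True):.3f}")    # ~0.632
```

Each new piece of evidence sharpens the estimate, which is the sense in which such networks reduce uncertainty and help determine the probability of an event from what we already know.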

Imagine an array of sensors and data feeds, technology stacks with learning ability and visualisation tools; now incorporate digital twins, Bayesian belief networks and quantum computing. We’ll be able to model crises and our responses, with augmented decision-making and the ability to monitor those decisions’ impact on complex social systems, during a crisis.

But the future is hard to predict, and we always have to factor people into our equation, because AI won’t supplant human judgment, accountability and responsibility for decision-making; it will augment them.

For many governments, building and maintaining trust is key to gaining the social licence to implement AI systems like those I’ve described. People must trust that our data is secure, trust the information we push to them and they pull from us, trust our people, trust that we won’t misuse personal information, and trust that we won’t act unlawfully or unethically.

One of the best ways to build trust is to demonstrate, measurably, the benefits to people of sharing their information and data with us. Take truly seamless and contactless travel through digital borders. To collect the data we need from travellers, we need to emphasise the benefits of people providing their biometrics, for example. Travellers will reap economic and personal benefits like time saving and convenience.

I think we have to introduce human-centric measures of success into our success criteria, budgeting and operating models, so that our AI systems aren’t judged just on value for money but also on having a positive effect on people. We’ll have to monitor outputs and public impacts to ensure systems are operating as they should and are not leading to unintended bias or harm.

In Australia more broadly, we’re currently developing a whole-of-government AI policy and legislation and a consistent approach to AI assurance.

In the ABF we’re focused on developing practical and effective AI guardrails and governance, and robust data science.

For us, it’s not just about ethical and responsible design of AI systems. It’s also about assurance—monitoring of outputs and impact, ensuring independent oversight of our systems, and appropriate transparency measures.

Given the threats we will face at our borders, and the likelihood of future crises, we have to start building now to be ready for the future—by assembling vast amounts of data ready to be fed into AI, by getting our people ready to use it, and by genuinely reinforcing trust.

Those who don’t start building readiness for AI into their systems now are going to have a hard time adapting when it becomes imperative to do so.

Chinese innovation, regulation and AI

What follows is an interview by Project Syndicate with Angela Huyue Zhang, author of High Wire: How China Regulates Big Tech and Governs Its Economy (Oxford University Press, 2024).

Project Syndicate: In 2020, Chinese regulators launched a crackdown on tech companies—a process that cost firms more than $1 trillion in market value. How has the crackdown changed the innovation and entrepreneurial culture in China? Has it had any positive effects?

Angela Huyue Zhang: The crackdown appears to have yielded few, if any, positive outcomes. Beyond failing to encourage new market entrants, it seems to have entrenched the dominance of incumbents. Moreover, it has severely undermined investor confidence, leading to a substantial reduction in capital flows into the consumer-tech sector.

As private entities recede, the state advances. For example, government-backed funds or companies acquire stakes in key subsidiaries of tech giants; these ‘golden shares’ enable the government to exert more control over content moderation and other business decisions. Finally, the crackdown has enabled the state to steer investment within the technology sector, leading both private and state-backed investors increasingly to focus on so-called hard tech.

PS: China does have a ‘strong reason to regulate’, you write in your forthcoming book, High Wire: How China Regulates Big Tech and Governs Its Economy, not least because its platform economy has ‘grown to be very unruly.’ If Alibaba’s ‘forced restructuring’ last year was not the right way to rein in a firm that had grown too large, how should the authorities have addressed monopoly concerns?

AHZ: The Chinese antitrust authorities should have intervened much earlier to curb the rapid expansion of leading tech firms like Alibaba and Tencent. Instead, they allowed acquisitions by these giants to proceed with little regulatory oversight, let alone constraints, for over a decade.

For Chinese firms, the key to circumventing the government’s investment restrictions was the ‘variable interest entity’ (VIE) structure, which enabled them to raise capital from overseas. Given uncertainty around the legitimacy and enforceability of the VIE structure, administrative authorities like the Ministry of Commerce avoided scrutinizing merger transactions involving firms that had adopted it. This worked for Chinese Big Tech firms, which amassed significant market power—so much power, in fact, that altering the competitive landscape now will be extremely difficult.

PS: Chinese regulators have lately been easing up. For example, you note that they are taking a rather lax approach to artificial intelligence, in order to give Chinese firms a ‘competitive advantage over their American and European counterparts.’ But this approach, too, carries risks. In High Wire, you compare China’s regulatory system to a particularly difficult balancing act, characterized by ‘hierarchy, volatility, and fragility.’ What are the implications of this ‘dynamic pyramid model’, as you call it, for Chinese regulation?

AHZ: Chinese regulation is characterized by repeated cycles of policy easing and tightening. Chinese regulators’ lax approach to AI today could sow the seeds for a regulatory crisis tomorrow. And because China’s regulatory structure is rigidly hierarchical, information transmission within the bureaucracy is sometimes very inefficient, so regulators often fail to respond to issues as they arise. Instead, they tend to wait until the problem has grown to be rather serious, so the costs of changing course are high.

PS: In your book, you note that ‘Chinese tech firms self-regulate’ with the judiciary’s participation. How do Chinese tech firms act as ‘quasi-regulators’, and how has the judiciary facilitated the platform economy’s growth?

AHZ: The vast majority of disputes arising from large online platforms are adjudicated primarily by the platforms themselves. That is because these claims tend to be very small, so rather than going through the trouble of suing, most consumers and merchants rely on the platforms themselves to resolve their complaints. But platforms sometimes require support from the court system, especially when enforcing claims proves challenging. Chinese courts have devoted enormous resources to adjudicating disputes involving tech firms and, in doing so, have enhanced the credibility of firms’ ‘self-regulation’. The government thus effectively lends a helping hand to tech firms.

PS: A number of ‘seemingly random policies’ that the Chinese government has introduced since 2020, you write in High Wire, ‘are all connected by a common desire to combat inequality.’ Which of those policies has been—or is likely to be—particularly effective, and which are misguided?

AHZ: All these policies appear to be well-intentioned, but the way the government has gone about implementing them has led to serious unintended consequences. Because of the power imbalance between business and government in China, any negative policy signal can cause tech stocks to plummet. Investors lack confidence in tech firms’ ability to counteract government intervention. Ultimately, the market’s deep-seated mistrust of the Chinese legal system is the primary driver of investors’ bearishness.

PS: How might regulatory trends in the United States and the European Union affect the trajectory of tech governance in China?

AHZ: Regulatory trends in the US and the EU are poised to influence Chinese tech governance in three ways. First, Chinese policymakers might emulate their Western counterparts in strengthening oversight over Big Tech firms. Second, America’s expansion of overseas surveillance and aggressive assertion of jurisdiction over overseas data are likely to prompt China to enforce more stringent cross-border data-transfer rules. Finally, China might use its control over data outflows as a bargaining chip in negotiations with the EU and other jurisdictions that are tightening their data-outflow regulations.

Making emerging technologies safe for democracy

Dozens of countries around the world, from the United States to India, will hold or have already held elections in 2024. While this may seem like a banner year for democracy, these elections are taking place against a backdrop of global economic instability, geopolitical shifts and intensifying climate change, leading to widespread uncertainty.  

Underpinning all this uncertainty is the rapid emergence of powerful new technologies, some of which are already reshaping markets and recalibrating global power dynamics. While they have the potential to solve global problems, they could also disrupt economies, endanger civil liberties, and undermine democratic governance. As Thierry Breton, the European Union’s commissioner for the internal market, has observed, ‘We have entered a global race in which the mastery of technologies is central’ to navigating the ‘new geopolitical order.’ 

To be sure, technological disruption is not a new phenomenon. What sets today’s emerging technologies apart is that they have reached a point where even their creators struggle to understand them.  

Consider, for example, generative artificial intelligence. The precise mechanisms by which large language models like Google’s Gemini (formerly known as Bard) and OpenAI’s ChatGPT generate responses to user prompts are still not fully understood, even by their own developers.  

What we do know is that AI and other rapidly advancing technologies, such as quantum computing, biotechnology, neurotechnology, and climate-intervention tech, are growing increasingly powerful and influential by the day. Despite the scandals and the political and regulatory backlash of the past few years, Big Tech firms are still among the world’s largest companies and continue to shape our lives in myriad ways, for better or worse.  

Moreover, over the past 20 years, a handful of tech giants have invested heavily in development and acquisitions, amassing wealth and talent that empowers them to capture new markets before potential competitors emerge. Such concentration of innovation power enables these few players to maintain their market dominance – and to call the shots on how their technologies are developed and used worldwide. Regulators have scrambled to enact societal safeguards for increasingly powerful, complex technologies, and the public-private knowledge gap is growing.  

For example, in addition to developing vaccines and early detection systems to trace the spread of viruses, bioengineers are developing new tools to engineer cells, organisms, and ecosystems, leading to new medicines, crops, and materials. Neuralink is trialling chip implants in people with disabilities and working to increase the speed at which humans can communicate with systems through direct brain-computer interaction. Meanwhile, quantum engineers are developing supercomputers that could potentially break existing encryption systems crucial for cybersecurity and privacy. Then there are the climate technologists who are increasingly open to radical options for curbing global warming, despite a dearth of real-world research into the side effects of global interventions like solar radiation management.

While these developments hold great promise, applying them recklessly could lead to irreversible harm. The destabilising effect of unregulated social media on political systems over the past decade is a prime example. Likewise, absent appropriate safeguards, the biotech breakthroughs we welcome today could unleash new pandemics tomorrow, whether from accidental lab leaks or deliberate weaponization.  

Regardless of whether one is excited by the possibilities of technological innovation or concerned about potential risks, the unique characteristics, corporate power, and global scale of these technologies require guardrails and oversight. These companies’ immense power and global reach, together with the potential for misuse and unintended consequences, underscore the importance of ensuring that these powerful systems are used responsibly and in ways that benefit society.  

Here, governments face a seemingly impossible task: they must oversee systems that are not fully understood by their creators while also trying to anticipate future breakthroughs. To navigate this dilemma, policymakers must deepen their understanding of how these technologies function, as well as the interplay between them.  

To this end, regulators must have access to independent information. As capital, data, and knowledge become increasingly concentrated in the hands of a few corporations, it is crucial to ensure that decision-makers are able to access policy-oriented expertise that enables them to develop fact-based policies that serve the public interest. Democratic leaders need policy-oriented expertise about emerging technology – not lobbyists’ framings. 

Having adopted a series of important laws like the AI Act over the past few years, the EU is uniquely positioned to govern emerging technologies on the basis of solid rule of law, rather than in service of corporate profits. But first, European policymakers must keep up with the latest technological advances. It is time for EU decision-makers to get ahead of the next curve. They must educate themselves on what exactly is happening at the cutting edge. Waiting until new technologies are introduced to the market is waiting too long.  

Governments must learn from past challenges and actively steer technological innovation, prioritising democratic principles and positive social impact over industry profits. As the global order comes under increasing strain, political leaders must look beyond the ballot box and focus on mitigating the long-term risks posed by emerging technologies.
