Tag Archive for: ethics

What I learned about artificial intelligence while in the Wadi Rum Desert

I write this article from Wadi Rum Desert, Jordan. Instead of stiffly sitting at my ergonomic desk, I’m reclined in a camel-skin tent. My phone lies forgotten in my backpack (there’s no reception here). I look out my window, not to rest my eyes from all the screen time, but to watch a herd of wild camels wander by.

Maybe I’ve been away from home too long and I’m having my ‘Lawrence (or Harriet) of Arabia’ moment, but as a cybersecurity professional I can’t help but use this time to reflect on the impact of technology. And I don’t just mean my inability to google synonyms for this article. Back home, my personal and professional lives revolve around the artificial intelligence side of technology and innovation—developing it, talking to colleagues about it, boring my friends with stories about it.

In the research world, the pace of AI development is staggering. OpenAI’s DALL-E model can generate remarkably realistic images from just a few words of text (see below for an image created by my prompt: ‘an artificial intelligence taking over the world as an oil painting’). Meta’s No Language Left Behind model can translate 200 languages, including Igbo and Assamese, which have previously been ‘left behind’ for lack of training data. OpenAI’s Codex model can even generate working computer programs from text instructions.

Now AI is not just a technological novelty, but a strategic priority for every country with the resources to invest in it. AI dominance is a goal of the United States, China and Russia, and some reports assert that whoever achieves AI dominance in the next few years will ‘rule the world’.

We’ve already seen the introduction of policies that seek to control this new theatre. Last month, the US announced new export controls aimed at choking China’s access to AI and semiconductor technologies. Closer to home, Australia is developing partnerships geared towards AI collaboration, through bilateral mechanisms such as the comprehensive economic cooperation agreement with India and multilateral groupings like the Quad partnership with Japan, India and the US.

Now, how does all of this relate to Wadi Rum?

Spending a few days cut off from technology but very much in touch with human experiences—learning about Wadi Rum from the Bedouins, and meeting other travellers from around the world, with all sorts of backgrounds and careers and views on technology and innovation—prompted me to question my own assumptions. In particular, I started to reflect on why we invest in AI at all and who benefits from it.

During a brief stint at Stanford University in 2018, I was fortunate enough to be exposed to the global epicentre of innovation, Silicon Valley. Almost everyone I spoke to told me how they were about to change the world through AI. However, I came to realise that when people talked about changing the world, their definition of ‘the world’ was rather narrow. Most of the time it meant the developed world, the digital world, the US or Silicon Valley. And I came to question what exactly they were trying to change, and why.

When it comes to AI, a chatbot that can’t be differentiated from a human agent is really cool, but who is it benefiting? Is it just a company’s profit margin? UN reports increasingly highlight that the growing digital divide is actually worsening inequality around the world.

As AI increasingly becomes a tool of international politics, I strongly believe we need to consider how to ensure that its development and deployment are safe, secure and ethical.

AI has known technical challenges, such as how to deploy it safely so that it doesn’t become biased—for example, sexist, racist or homophobic—over time. From a strategic perspective, there are also issues to resolve around the level of autonomy these systems can be granted, when and how to delegate decisions to humans, and whether this should be legislated. Australia’s Centrelink Robodebt scandal and the international debate over lethal autonomous weapons are examples of this balancing act.

AI also has known security issues: these systems have been shown to be vulnerable to attack through adversarial machine learning. Inference attacks can leak sensitive data used in the model-training process, and evasion attacks can cause models to make incorrect decisions. The use of AI for security purposes is also of strategic consequence—threats over AI capabilities, for example, are increasingly being used as political instruments.
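To make the evasion idea concrete, here’s a minimal sketch of the fast gradient sign method, one of the best-known evasion attacks from the adversarial machine-learning literature. It’s an illustration only, not anything drawn from the systems discussed above: the PyTorch classifier (model), the input image (x), its true label (y) and the step size are all assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """One-step evasion attack (fast gradient sign method).

    Nudges every input pixel a small amount in the direction that
    most increases the model's loss on the true label y.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # how wrong the model is on the true label
    loss.backward()                      # gradient of the loss w.r.t. the input pixels
    x_adv = x + epsilon * x.grad.sign()  # small, often imperceptible perturbation
    return x_adv.clamp(0, 1).detach()    # keep pixels in the valid [0, 1] range
```

A perturbation of this size is typically invisible to a human, yet it can flip the model’s prediction, which is precisely why a model’s decisions may be incorrect even when its input looks unchanged.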

The ethical application of AI overarches both of these concerns. The ability of AI to perform object detection with great accuracy is technically impressive, but does that mean it’s ethical to use these systems for facial recognition and surveillance? And while AI use may raise the share prices of some of the world’s richest companies, the first steps in technological dominance are not necessarily about making systems more intelligent, but about securing the systems we already have, as evidenced by the recent data breaches at Optus and Medibank. Also, many people in the developing world still lack an internet connection, limiting their access to the opportunities afforded by tools like online banking and the digital economy.

Philosophical differences in the deployment of AI between countries are amplifying an already tense geostrategic environment. However, technology, even one as impressive as AI, is not a substitute for diplomacy. Good diplomacy is more important now than ever.

As I discuss these topics with my fellow travellers, drinking tea and watching the sunset over the stunning rock formations that Wadi Rum is so famous for, I notice we all have different perspectives and answers to these challenges. So does everyone I discuss these issues with professionally. This confirms why it’s so important to intensify the international dialogue around AI and continue discussing common tools, philosophies and standards for its use as both a technical and strategic tool. But for now, I’m going to enjoy my last few moments of technological isolation, because a good conversation with a real human and a beautiful sunset trump an AI-generated fabrication every time.

Red Cross is seeking rules for the use of ‘killer robots’

As autonomous weapons rapidly become more lethal, the International Committee of the Red Cross is in a race to develop a legal framework for the use of ‘killer robots’.

Netta Goussac, a legal adviser with the ICRC’s Geneva-based arms unit, tells The Strategist that nations need to consider the issue of how much control people have over autonomous weapons—which can select and attack a target without human intervention.

‘They need to do it urgently because technological developments are moving very, very quickly’, Goussac says. ‘We think states should not consider this to be an inevitable development but rather make conscious choices now about what is and isn’t acceptable.’

Once a capability has been acquired, it’s extremely difficult to convince states not to use it, she says. ‘It’s easier to reach agreement on what is and isn’t acceptable before it’s a reality.’

An Australian, Goussac previously worked as the principal legal adviser in the Attorney-General’s Department’s Office of International Law.

She says the international discussion has to focus on the role of the humans who deploy autonomous weapons. Those sending them onto the battlefield must take all feasible measures to prevent violations of international humanitarian law.

These responsibilities cannot be delegated to the device, because only humans are responsible for complying with the law, she says.

As the world’s armed forces rely increasingly on technology, artificial intelligence, algorithms and machine learning for military decision-making, judgements must be made about the level of control a human deploying an autonomous weapon has to have in order to meet their legal and ethical responsibilities.

That involves examining the person’s ability to stop the weapon, to supervise it, to communicate with it and to predict reliably what it will do in the environment in which it’s being used.

Guns and explosives still do the greatest humanitarian harm and the Red Cross applies the same approach to new technologies as it does to them. ‘We ask, what are the real and foreseeable humanitarian consequences of these weapons, and what does the law say about their use?

‘We’ve applied that logic to chemical weapons, to landmines, and now we’re applying it to cyber warfare and to autonomous weapon systems. Do they pose any challenges to complying with the rules of international humanitarian law that require parties to a conflict to distinguish between civilians and combatants, to use force proportionally and to exercise caution?’

Technology developed to benefit society generally is also driving advances in arms, as militaries show a clear preference for greater autonomy in weapons systems. They want more precision, faster decision-making and longer range.

An autonomous weapon is distinct from a remote-controlled system, such as a drone, in which a human selects and attacks the target using a communication link that gives them constant control and supervision over the deployed system.

‘With autonomous weapon systems, the human designs and programs the system, the human launches the system, but it’s the system that selects and attacks the target’, Goussac says.

‘Yes, the system is running a program that’s created by the human, but the human who launches the system doesn’t necessarily know where and when the attack will take place.’

The more autonomous weapon systems are deployed, the greater the humanitarian risks they pose, she says.

With autonomous systems, the human’s decision to use force can be distant in both geography and time.

‘It’s that distance between the human and the effects of their decisions that we’re concerned about because we think that if you stretch that too far you make it difficult for the human to comply with the rules that they’re required to comply with, to make the legal judgements that they have to make at the time they decide to use force.’

A key question, says Goussac, is whether an autonomous weapon system hinders the human’s ability to stop an attack if the circumstances change. What if, for instance, civilians arrive in a killing zone?

In some cases, autonomous systems are used in a very predictable and controlled environment—generally in the air or on the sea—where there’s no likelihood of civilians or ‘non-targetable objects’ being hit.

‘But the more complex the environment, the more mixed it is, the more dynamic it is, the less predictable it is, and the more important it is to have that supervision and ability to control it once the system has been launched’, Goussac says.

‘It’s not just the technical characteristics of the weapon that are important, it’s the circumstances of use. What an appropriate level of control over a system might mean in one context is totally different in another context.’

A range of defensive systems are designed to autonomously select and attack targets in a space where there are no civilian aircraft and when the target is flying at a high velocity (the Iron Dome system is one example). ‘There’s been a certain pre-determinacy here’, Goussac says, ‘but it’s an acceptable level of pre-determinacy’.

She says it’s difficult to set rules based on technical characteristics. ‘We’re really more interested in talking about the role of the human because that’s what we think is universal in all of this.’

‘At what point do we start having ethical concerns about the delegation of decisions to kill or injure, or to destroy property, to machines?’

The widening gap between ethics and international relations

In 1918 prominent American philosopher James H. Tufts asked, ‘Is there, can there be, any ethics of international relations?’ In the turbulent century since, that question has inspired many attempts at an answer. Contemporary events press the issue again.

Tufts was a collaborator of John Dewey, who also turned to the issue in 1923. Dewey saw ‘the extraordinary confusion that is found in current moral ideas as they are reflected in the ethics of international relations’. He asked why it is that ‘morals have so little effect in regulating the attitude of nations to one another’. To cut through the confusion, Dewey was left promoting an idea suggested by Salmon Levinson in his monograph Outlawry of war (1921).

The questions posed by Tufts and Dewey remained unanswered through the century of almost continuous war and conflict that followed their formulation. But Dewey’s simple response became a principle in law, if not in strategic practice. Developments in international law have ‘not only outlawed war as a legitimate means of settlement of international disputes but also banned most uses of military force short of war and even threats to use force in international relations’.

Nevertheless, those engaging in aggressive war itself, often accompanied by heinous acts that are self-evidently morally unjustified and often criminal by domestic norms, are rarely held to account. The German and Japanese war crimes trials following World War II were an exception.

At the time, these trials seemed to cement into international law the offences of engaging in a conspiracy to commit crimes against the peace, war crimes and crimes against humanity; planning and waging war; mistreating enemy combatants and prisoners of war; deliberately causing death or injury to civilian populations outside of military necessity; and murdering, exterminating, enslaving, deporting or persecuting an individual on political, racial or religious grounds.

The long, convoluted and uncertain path that the advocates of outlawing war followed throughout the first half of the 20th century finally seemed to end in the United Nations Charter. But of course international practice, even the conduct of the nations that were responsible for shaping international law on aggressive war, has fallen far short of the ideal.

The gap between ethics and international relations has only widened since Tufts and Dewey pondered the relationship. They could have had no premonition of the incomprehensible and horrific moral catastrophe perpetrated by the National Socialists in Europe or of the cruelty of the Japanese imperialist war in East Asia. But parties on all sides of the postwar conflicts in Vietnam, Algeria, Palestine, Iraq, Afghanistan and Syria, among others, have perpetrated the crimes that were identified in the Nuremberg trials and entrenched in the UN Charter.

History appears to demonstrate the rightness of the sceptical opinions of the political realists about the relevance of ethics in relations between states. Realists like Reinhold Niebuhr and Hans Morgenthau were critical of moralism, objecting to the ‘abstract moral discourse that does not take into account political realities’. George Kennan noted that ‘there are no internationally accepted standards of morality’. Other realists in what might be termed the more-or-less Machiavellian tradition, such as E.H. Carr, considered that the ‘standards by which policies are judged are the products of circumstances and interests’ and that ‘morality can only be relative, not universal’.

More optimistically, the major contributors to the debate over the conduct of international relations have felt impelled to confront the issue of ethics. Among philosophers of ethics the position of the political realists is highly contestable. The response to the appearance of Michael Walzer’s Just and unjust wars (1977) highlights the intensity of the debate.

Opposition arose to Walzer’s argument that foreign military intervention is always morally wrong, irrespective of how despotic, tyrannical or oppressive the domestic government. Walzer’s only exceptions were when a state was massacring or enslaving its own citizens or when a legitimate secession was being forcibly prevented. Gerard Doppelt criticised ‘a rhetoric of morality in international relations which places the rights of de facto states above those of individuals’. Walzer’s ethical framework boiled down to a single substantive unethical act; as Brian Orend summarised it, ‘the only just cause for resorting to war is to resist aggression’. Like Dewey and Levinson 50 years earlier, Walzer was left with outlawing aggressive war.

Dewey would be even more confounded by the general contemporary confusion in ethical theory, let alone its application to international affairs (as seen, for example, in peer-reviewed journals like Ethics & International Affairs and Philosophy & Public Affairs). Moreover, the world’s challenges have been made far more complex by technology, globalisation, decolonisation and shifting geopolitical power—even more so than the problems World War I raised for him.

Still, war remains a great scourge and rationally should be thought of as a last resort, if it is contemplated at all. To generalise, perhaps unjustly, it seems that even in the muddle still enveloping the Tufts question, the one thing nearly all contributors to the debate agree on is that aggressive war is the action that attracts the greatest anathema.

Tufts finished on a somewhat hopeful note, arguing that ‘the give and take of scholarship in pursuit of truth bespeak a democratic value that is real’. He put his trust in ‘the appeal of the finer institutions which man is building’. More pessimistically, Dewey concluded that, while ‘the first move in improving international morality is to outlaw war’, that didn’t mean ‘that wars would necessarily cease’.

Why the ADF handgun is an ethics issue

A British soldier aims a Browning 9mm pistol on a shooting range at Basra, Iraq

Last January, after an extensive period of testing, the British Army announced that its venerable Browning Hi-Power Mk III pistols would be replaced with modern Glock 17 Generation 4 pistols as a result of hard-won, on-the-ground operational experience. Many observers of military affairs have been waiting in anticipation of a similar announcement from the ADF, which is also equipped with the FN Herstal Browning Hi-Power Mk III (or the ‘Self-Loading Pistol 9 millimetre Mark 3’ in ADF parlance). However, a year later, no such announcement has been forthcoming and there have been no indications from Defence circles of a change in policy in the foreseeable future. This is an ethics issue.

The state has a clear duty of care to ensure that its armed servants are as well equipped as possible to face the dangers of combat and prevail over their adversaries. On the whole, the ADF goes to considerable lengths to fulfil that duty of care, from equipping frontline troops with quality armoured personnel carriers to ensuring top-notch medical treatment for those injured in the line of duty. But there’s a blind spot when it comes to handguns. Outside the special forces community, very few members of the ADF are issued with pistols, and most of them are in support rather than direct combat roles. Contrast this with the British Army’s commitment to rushing its newly acquired Glock pistols to its frontline units deployed in Afghanistan, where pistols have been credited with saving the lives of several soldiers.


Doing the right thing is the right thing to do

Leon E. Panetta takes the oath of office as the 23rd U.S. Secretary of Defense during a ceremony at the Pentagon July 1, 2011. Image courtesy of Flickr user US Department of Defense Current Photos.

There’s an old maxim in military affairs: ‘lose moral authority, lose the war’! It’s most often quoted in the context of the conduct of armed forces towards third parties, most notably the civil population living within a theatre of operations. Occasionally, the maxim applies to one’s enemies, who may be spurred to fight on against those they consider to be a morally debased opponent. For example, fighters based in the tribal areas of Afghanistan and Pakistan have been incensed by the use of unmanned drones, which they consider to be the coward’s weapon of choice.

However, there’s a further context in which the maxim applies—in relation to the quality of leadership displayed within one’s own ranks. One might suppose that it is with this in mind that US Secretary of Defense Leon Panetta has asked the Chairman of the Joint Chiefs of Staff, General Dempsey, to review the quality and character of the ethical instruction made available to senior officers—a task given added urgency in the wake of the scandal surrounding the recent resignation of David Petraeus as director of the CIA.