Tag Archive for: autonomous systems

More than innovation: Australia needs fast, low-cost defence production

Conflicts in Ukraine and the Middle East have shown that mass and asymmetry characterise modern warfare. The challenge is to deliver affordable mass—weapons in great numbers—while ensuring technology is evolving ahead of rapidly changing threats. This is both a hardware and software challenge.

We rightly applaud when an innovative prototype is delivered in record time, but that is only half the battle won. The rubber hits the road in the defence sector when innovations are efficiently and inexpensively produced at scale to deliver an unexpected asymmetric massed effect to defeat an adversary or disrupt its decision calculus.

In releasing Defence’s innovation, science and technology strategy, Accelerating Asymmetric Advantage—Delivering More, Together, Chief Defence Scientist Tanya Monro in September called on the defence industrial base to help deliver credible, potent and future-ready technology.

The question industry must confront is: how do we evolve our thinking, processes and manufacturing to answer this call? Fundamentally, the entire acquisition process and industrial approach need to be redesigned. They need to embrace modern manufacturing processes with the scale necessary for this era. From concept to prototype, to hardware and software integration, to supply chain and manufacturing systems, industry must be much more agile and use viable commercial processes that draw on abundant and readily available materials.

The gold-plated processes of the legacy defence industry are optimised for reliability and exquisiteness, not mass and efficiency. To truly deliver asymmetric advantage to the warfighter we must do things in bold and unconventional ways. That will take vision, creativity and courage.

Manufacturing at scale is hard. Historically, it's not been an Australian strength; in fact, we have a history of treating the invention of new things as the end of the innovation process. The reality is that at-scale manufacturing is 1000 times harder than innovation. The industrial redesign needs to be properly funded and become the end point of Australia's sovereign manufacturing strategy.

We have seen tremendous examples of Australian ingenuity, with several companies making great progress in developing cheap, responsive and high-rate defence capabilities for Australian and global markets.

SYPAQ won the Australian Financial Review's 2023 Most Innovative Companies Award for its Corvo Precision Payload Delivery System—a flat-packed, easily assembled, easily operated, low-cost, expendable drone now used on the Ukraine battlefield.

Nulka, a defence system developed by Australian scientists, is one of Australia's most successful defence exports. It's been in operation with the Royal Australian Navy and the US Navy for more than 20 years. Now made by BAE Systems, it's a leading example of innovation progressing to scale and success across global markets.

Australia doesn’t have the privilege of investing endlessly in innovative ideas without progressing to scale production. We need to invest in the right innovations, those that demonstrate ability to go from ideas to production to scale rapidly.

Another example is the US company Anduril, which has an Australian subsidiary of which I am executive chairman and chief executive. Everything Anduril builds is focused on large-scale deployment. We ensure products are ready for manufacture from the get-go by designing the production system and the product simultaneously. So, when we have our first prototype of the product, we also know what the production system looks like. We have a full digital twin of the production system, a virtual representation of it. So, when we are ready to scale up production, we already understand what high-rate manufacturing will look like and how to do it efficiently. Our factories and products are modelled on high-rate automotive and consumer technology that delivers mass at a cost-effective price point.

For example, Anduril’s Barracuda (a family of autonomous-aircraft designs, including a cruise missile) is designed for low-cost, hyper-scale production. A Barracuda takes 50 percent less time to produce and requires 95 percent fewer tools and 50 percent fewer parts than competing products on the market today. The fuselage is made using hot pressing, similar to the method used for producing acrylic bathtubs. It’s incredibly cheap and broadly available.

We have used this approach in our Australian Ghost Shark program for an autonomous submarine, working with Australian suppliers to create a world-class mass-production system. We are not just building three Ghost Shark prototypes. Each prototype is an opportunity to rapidly incorporate what we’re learning. The modular design and software-first approach allow accelerating improvements to be attained in military service. The agility in the design allows payloads to be rapidly switched in and out for different roles. This is game-changing and could bring a battle-winning advantage to the ADF.

Our asymmetric approach is also being baked into our Australian supply chain. It’s modernising and scaling with us. Our suppliers are installing advanced machinery and AI-powered robotics. We are learning together fast.

Examples like SYPAQ and Anduril show the potential for at-scale manufacturing in Australia to deliver disruptive military value—potentially for allies through exports.

The National Defence Strategy 2024 emphasises the notion of human-machine teaming, in which sophisticated crewed systems are combined with massed autonomy. Australia must be capable of large-scale manufacturing of autonomous systems to deliver asymmetric, low-cost massed effects.

Artificial intelligence needs humanity

Many have heralded artificial intelligence as a force-multiplier for defence and intelligence capabilities.

Do you want armed autonomous vehicles to comply with legal and ethical obligations as set out in the Royal Australian Navy’s robotics, autonomous systems and AI strategy? AI can help. Do you want to more effectively analyse intelligence to predict what an adversary will do next? AI can help. And AI’s proponents are right—it could, and likely will, do all of those things, but not yet.

Its ability to spot patterns, compute figures and calculate optimum solutions on an ‘if X happens then do Y’ basis is now unmatched by any human being. But it has a fundamental flaw: we do not measure human motivations solely by numbers.

Classical game theory has been trying to measure this since the 1940s, with so little success that its practitioners labelled many such motivations ‘irrational’ and decided that quantitative modelling is not possible. But if it were possible, modelling of non-material payoffs could answer such questions as ‘How will Russia change its defensive posture if Vladimir Putin loses face from military setbacks in Ukraine?’ or ‘Why would a rational person volunteer as a suicide bomber?’

Instead, game theory has only proposed high-level conceptual frameworks in an attempt to guide decision-makers. It has looked, for example, at whether we should have modelled the Cold War nuclear arms race as an iterated prisoner’s dilemma.
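The iterated prisoner’s dilemma mentioned above can be sketched in a few lines. This is a toy illustration of the framework, not a claim about how Cold War analysts actually modelled the arms race; the payoff values are the standard textbook ones, relabelled so that ‘arm’ is defection and ‘restrain’ is cooperation.

```python
# Iterated prisoner's dilemma: two states repeatedly choose to
# 'arm' (defect) or 'restrain' (cooperate). Per-round payoffs:
# mutual restraint beats mutual arming, but arming unilaterally
# pays best in any single round.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "arm"):      (0, 5),
    ("arm", "restrain"):      (5, 0),
    ("arm", "arm"):           (1, 1),
}

def tit_for_tat(opponent_history):
    """Restrain first, then mirror the opponent's last move."""
    return "restrain" if not opponent_history else opponent_history[-1]

def always_arm(opponent_history):
    """Defect unconditionally."""
    return "arm"

def play(strategy_a, strategy_b, rounds=10):
    """Run the repeated game; each strategy sees only the other's moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Over repeated rounds, mutual restraint outscores a mutual arms
# race, even though arming dominates any single round.
print(play(tit_for_tat, tit_for_tat))  # (30, 30)
print(play(always_arm, always_arm))    # (10, 10)
```

The model captures exactly what the article says game theory is good at: clean, quantifiable ‘if X then Y’ payoffs. What it leaves out—face, duty, ideology—is the gap the following paragraphs address.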

This is not the fault of the economists and maths-trained game-theory experts or their successors, who are trying earnestly and for good cause to predict the probability of human actions. Many are experts in the art of programming, but not in all the intricate detail that seeks to explain why we humans do what we do. We cannot expect a programmer’s life experience to compare with thousands of years of philosophy, historical precedent and more recent psychological studies. The programmers need back-up, and humanities departments are what they need.

The advent of AI has led to consideration of its ethics and the involvement of humanities specialists, often employed to guide programmers with high-level principles-based frameworks and/or to rule on AI testing and wargames as ethical or non-ethical. Both are vital to ensuring AI better understands humanity and our expectations of it, but this engagement is insufficient. To borrow an analogy from mathematics, the former supplies an example answer but no formula to apply, and the latter marks the answer but does not check the working-out.

We need humanities specialists involved at the coding level to help programmers assign mathematical functions to the various factors influencing human decision-making. It is not enough to say love of money, family or duty motivates a person. A fit-for-purpose AI will need to know how much they are motivated and how these motivations interact. In short, we should have a mathematical proof for these factors.
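One minimal way to read ‘how much they are motivated and how these motivations interact’ is as a weighted utility function over the factors the article names. The weights and the interaction term below are placeholders invented for illustration; eliciting defensible values for them is precisely the work the article argues humanities specialists should do alongside programmers.

```python
from dataclasses import dataclass

@dataclass
class Motivations:
    """Scores in [0, 1] for each factor influencing a decision."""
    money: float
    family: float
    duty: float

def utility(m: Motivations, weights=(0.2, 0.5, 0.3), interaction=0.1):
    """Hypothetical utility: a weighted sum plus one interaction term.

    The interaction term encodes the idea that family and duty can
    reinforce each other rather than simply adding up. All numbers
    here are illustrative assumptions, not empirical estimates.
    """
    w_money, w_family, w_duty = weights
    base = w_money * m.money + w_family * m.family + w_duty * m.duty
    return base + interaction * m.family * m.duty

# Two actors with identical financial motivation can still rank
# choices very differently once family and duty are weighted in.
mercenary = Motivations(money=0.9, family=0.1, duty=0.2)
patriot = Motivations(money=0.9, family=0.8, duty=0.9)
print(utility(mercenary))
print(utility(patriot))
```

Even this crude sketch makes the article’s point concrete: naming the motivations is easy, but choosing the weights and the functional form is where domain expertise beyond programming becomes indispensable.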

Those who call such rigour ‘onerous’ are correct. It will be difficult and detailed, and it could be disastrous for our national security community if we do not try. A cursory review of published government AI programs shows just how high the stakes are.

The navy released its AI strategy with particular emphasis on autonomous undersea warfare systems and the Australian Signals Directorate recently announced the REDSPICE investment to boost its AI capabilities; both mark new eras in the incorporation of AI. It should be noted these developments are also happening within police forces at the federal, state and territory levels. And while the national security community no doubt has more opaque AI operations, they are likely taking heed of the recent ASPI report highlighting noteworthy precedents from the US and UK for improving use of AI.

The implications of this pervading emphasis on AI were recently summarised by Michael Shoebridge in another ASPI report:

The national security implications of this for Australia are broad and complicated but, boiled down, mean one thing: if Australia doesn’t partner with and contribute to the US as an AI superpower, it’s likely to be a victim of the Chinese AI superpower and just an AI customer of the US.

Building Australia to become an AI superpower will require collaboration, such as with private companies (like Google) or academia (as in the Hunt Laboratory for Intelligence Research) and employing the ‘build on the low side, deploy on the high’ methodology. Alternatively, it could be delivered in-house through either agency-specific taskforces, the Office of National Intelligence’s joint capability fund or forums to be created under the new action plans from non-traditional security government sectors.

Whatever the manner of collaboration, using humanities specialists to develop a common language for human motivations would solve the so-called Tower of Babel problem between qualitative and quantitative analysts. Its development would be comparable to standardising the type of brick and mortar used in the construction industry, or shipping containers used in the freight industry.

Only by harnessing both ‘soft’ and ‘hard’ sciences to code our humanity can we give the national security community the tools needed for Australia to become an AI superpower.

Innovation in Australia’s electricity sector holds lessons for Defence

Disruptive innovation almost always takes the incumbent by surprise. Successful organisations are very often good at sustaining innovation (that is, incrementally improving their business model), but they’re generally unwilling to foster disruptive innovation, and are often structurally or culturally unable to. But those disruptive innovations can blow their business model away—and by the time they realise what’s happening, it’s too late to adjust.

We can see a striking case of disruptive innovation in action around us, and it’s a case that has key lessons for the Department of Defence. Australia’s energy sector is deep into a fundamental transition. It wasn’t long ago that renewables were dismissed as being unable to generate ‘baseload’ power. They now regularly generate over 40% of Australia’s electricity, and sometimes more than half. Rooftop solar was mocked as virtue signalling; it now regularly hits 25% of our generation. And in the space of two years, utility-scale solar has gone from virtually nothing to nearly 10% at times.

The electricity sector has reached a tipping point in Australia. Not one where all or even most of our electricity is consistently being generated by renewables (although they totalled more than 27% in 2020), but rather a tipping point in that nobody wants to invest their own money in fossil-fuel generation, certainly not in developing new generating capacity.

Almost every day we see another story about a traditional generating company admitting that it didn’t see this coming and that it’s now having to write down or close generating assets. Companies aren’t doing this out of a woke sense of obligation to stop climate change, but because they can’t make money running them. Renewables are cheaper to install and run and they massively outperform coal-fired generators in responding to the energy spot market. They are also much faster to build, resulting in a faster return on investment.

The question facing companies built around traditional generators is not whether their business model will survive, but whether they can move out of that business model into renewables fast enough to survive.

A key reason we have got to this point is that our somewhat privatised and deregulated electricity sector has so many players. There are the federal and state governments; generators, both publicly and privately owned; energy distributors and retailers; international players, providing technology, investing in the sector and sending demand signals about the future of requirements for green energy; and Aussie mums and dads installing solar panels and batteries, making themselves both consumers and generators, and an increasingly key part of a resilient grid. Many of them are in direct competition and all of them, except governments, are playing with their own money.

Electricity generators are not the only incumbents who can be taken by surprise. Military organisations can be good at sustaining innovation, but not necessarily at disruptive innovation. That’s a key vulnerability for them. A major reason for that is that there simply isn’t the same number of competing players in the defence sector. Rather, it’s characterised by its monopoly–monopsony nature. That is, Defence is a monopoly provider of security services to the government and is a monopsony consumer of security products from industry. Governments have some options for providers, such as private security companies, but mums and dads can’t simply install a grenade launcher on their roof to take care of their security needs.

In peacetime, military organisations are protected monopolies. In such an environment there isn’t the same constant pressure to innovate. The acid test of whether a military organisation has innovated adequately, technologically or conceptually, comes on the first day of war, and it doesn’t want to find that its business model has been rendered uncompetitive.

In the absence of the daily existential grind of competition that marks the commercial sector, military organisations need to work hard to drive innovation. One way to do this is to look at external analogies for lessons and inspiration. There are a lot of lessons Defence can draw from the transition in our electricity sector. Since I’ve been very interested in robotic and autonomous systems recently, I’ll propose a couple that are relevant to that area.

New forms of power generation were loudly dismissed. They were criticised as being economically uncompetitive, despite their proponents’ arguments that this was a transitory problem. That’s a standard feature of the approach taken by incumbents that are blind-sided by disruptive innovation—why take resources out of your core business to put into new, unproven ideas that aren’t making any money? But renewables are now cheaper and become cheaper by the day. The lesson for autonomous systems is that we need to be investing heavily in them now, even if they are less efficient or effective than crewed systems. It may well take more people to operate autonomous systems right now. But we will reach a tipping point.

It’s also important to be aware of the conscious or unconscious biases we bring to the issue. The electricity transition in this country has been overlaid by the rhetoric of our culture wars, blinding many to what was actually going on. The Australian Defence Force’s views that ‘people are our most valuable resource’ and ‘wars ultimately are human activity’ are undoubtedly true, but they shouldn’t blind us to opportunities where machines can replace humans, even in tasks long considered the sole province of human intelligence.

Another lesson is that we need to assess innovative potential at the system level rather than regarding innovations as simply replacements for existing things. It was easy to dismiss rooftop solar panels as poor competitors with or replacements for large-scale coal-fired generators. A grid based on renewables could never be reliable and resilient, it was claimed. But a grid based on renewables and large-scale batteries is proving itself to be more reliable than a traditional one. It functions differently, with much larger numbers of smaller generators.

One reason the emerging grid will be more reliable is that it is more responsive—batteries can respond instantaneously to stabilise the grid. This agility also means batteries and renewables can outcompete traditional generators in the spot market. So, while old and new generators have some overlap in their roles, new generators create a robust system—just one that operates differently.

Again, there are clear parallels with the military. If we compare uncrewed systems with large, multi-role systems, they look like very poor replacements; no uncrewed underwater vessel can currently come close to doing what a crewed submarine can do. But if instead of comparing platform with platform, we look at how robotic and autonomous elements can contribute to the entire warfighting system (in other words, the grid), they have much to offer. It’s highly likely that a military employing large numbers of robotic and autonomous systems will become more effective than one based primarily on a small number of crewed platforms. The future system will look very different from the current one, just as our electricity system already looks very different from the one we had only a few years ago.

Australia should do more than just wait for the Attack-class submarines to arrive

Debate on Australia’s future submarines is understandably focused on the information that floats out of the Defence Department about France’s Naval Group and the $80 billion program to design and build the boats.

It’s very sensible to be concerned about the program because of the contrast between the assurances Defence officials provide to parliament that everything is proceeding according to plan and the episodic need for high-level political intervention to resolve fundamental issues between Defence and its French industrial partner.

We saw this with the tortured, delayed effort to resolve the ‘strategic partnering agreement’ that took almost three years to sign. We saw it again this year when Naval Group and Defence were unable to resolve the issue of the Australian industry share in the program, a year after apparently agreeing to do so, without the intervention of various ministers during Naval Group boss Pierre Eric Pommellet’s recent visit to Australia.

Let’s take it as a given that the government doesn’t cancel the Attack-class program in the next few years, for all the reasons ASPI’s Marcus Hellyer has set out, that the program continues to absorb growing amounts of government funding, and that it continues to experience implementation troubles.

If this is the future, left to itself, the program will be a running sore for whichever political parties and prime ministers are in office over the next 15 years—lots of cash outflow, plenty to justify and defend, but no capability to show for it until long after they leave parliament.

The key unanswered questions out of this are, what can be done to improve Australia’s undersea warfare capabilities between now and the mid-2030s, and how can the single big bet on the Attack class be hedged with other complementary capabilities?

Answering these questions would increase Australia’s ability to defend itself and deter aggression between now and the mid-2030s, and so align with the government’s realistic if bleak assessment that Australia no longer has 10 years’ warning time to prepare the Australian Defence Force for involvement in major conflict.

It would also take some of the heat and light out of the public debate on the developmental Attack-class program.

And there’s a large opportunity right now to take a major step towards resolving these questions that would be good for our defence, support the growth of our high-technology industrial base, and play into the bigger picture of our developing alliance with the United States.

It’s called the Orca.

The Orca is the extra-large unmanned undersea vehicle developed by Boeing and Huntington Ingalls Industries for the US Navy. Five of them are scheduled to be built by the end of 2022 under a US$274 million contract signed in early 2019.

The unmanned submarine has a range of about 6,500 nautical miles (12,000 kilometres) and can perform dangerous, dirty and dull work like intelligence-gathering, surveillance and deployment of other systems (such as smart sea mines), with a development path up to and including deployment of other weapons to attack adversary ships, submarines and other systems.

They will probably work best as part of a manned–unmanned undersea team, less closely tethered but a bit like the rapidly developed ‘loyal wingman’ unmanned aerial vehicle that the Royal Australian Air Force is developing and testing with Boeing Australia.

There’s plenty to work out to be able to operate Orcas effectively and get the most out of the combination of manned and unmanned undersea systems. The good news is that the Royal Australian Navy is already advanced with this thinking and analysis, through its RAS-AI Strategy 2040.

But there’s a practical limit to how much planning and preparation can be done with experiments and demonstrations, rather than the direct operational experience of possessing and using a system like the Orca. Concepts for use and ways to resolve difficult problems like tasking and controlling undersea systems will be resolved much faster once navy personnel get their hands on live systems; that’s what’s happened throughout the history of warfare.

And while there’s a navy strategy, it’s not funded and it hasn’t yet found its way into Defence’s massive $270 billion, 10-year capital investment plan in any way that brings major new capabilities to the fleet anytime soon. Enigmatic funding for an ‘integrated undersea surveillance system’ begins in 2024 or 2025, for example.

It’s unfortunately obvious that even the most sophisticated manned submarine will need to work with a range of sensors and other systems, including UUVs, if it is to operate safely and effectively against the kind of adversary systems that a power like China is already fielding, and which are proliferating across our region. That’s probably true right now in a place like the South China Sea, and it’ll only get more manifestly obvious between now and 2035.

The RAN working in early, close partnership with the US Navy and US and Australian industrial partners to develop and field the Orca, and make a range of different payloads for it, is the path that is likely to bring the most undersea combat power most quickly to Australia’s military. It’s also the best way for Defence to create new challenges for adversaries that are thinking of coercing Australia or increasing their military presence in Australia’s near region.

Orcas working with upgraded Collins-class submarines would change the calculus around Australian defence well before the first Attack-class boat turns up. And the experience of our submariners working with Orcas for 10 years before the Attack class arrives will ensure that the future submarine and its crews are designed and prepared to operate with unmanned systems.

Right now is the best time in Boeing’s history for Australia to negotiate an attractive industrial and commercial deal on the Orca for our navy. Last year was the worst year in Boeing’s corporate history, with the grounding of its 737 MAX; trouble with its big new jet, the 777X; and a collapse in global aircraft orders because of the pandemic, resulting in a net annual loss of US$11.9 billion.

Combine this with the confidence that working with Boeing Australia on the loyal wingman must be giving both the government and Defence.

At the level of the Australia–US alliance, an Australian proposal to buy into the Orca program and develop and coproduce it between the US and Australia would be an attractive example of a new administration delivering on President Joe Biden’s promises to work constructively with allies to advance both allied and US security interests.

The partnership could be structured to include manufacturing here in Australia through Boeing Australia, perhaps partnering with ASC and other undersea specialists. The Trusted Autonomous Systems CRC would also bring expertise to the program that would be of value to the RAN and the US Navy.

The US Navy’s example of spending US$274 million to acquire five Orcas that are all being delivered within three years of contract shows the affordable, rapid change that Australia joining this program could bring to our own naval capability.

Wouldn’t it be welcome to have some fast-moving good news out of Defence when it comes to submarines? It’s time to push Defence to move faster than it will left to itself.

The eye of the Tiger: Is the Australian Army preparing for the right conflict?

The decision to replace the Australian Army’s 22 Airbus ‘Aussie Tiger’ armed reconnaissance helicopters was first announced in the 2016 defence white paper and then re-announced in this year’s force structure plan. The process to choose a replacement is underway, but it’s clear that retaining the Tigers and using the billions of dollars that would otherwise be spent on a new helicopter to provide complementary unmanned systems doesn’t seem to be on the cards. That’s despite the fact that these aircraft now have greater levels of operational readiness and capability than ever before and despite the rapid development in technologies that will threaten helicopters on the battlefield.

If the army is determined to simply conduct a like-for-like replacement of the Tiger, the logical choices are the Boeing AH-64E Apache and the Bell AH-1Z Viper. But limiting the project in this way would mark a missed opportunity for Defence to take on a major transformation of its land-warfare capability. Such a transformation seems justified by the technological developments already evident now, which will only accelerate over the time it will take to acquire and introduce a new helicopter.

Defence should make this decision in a way that treats the new helicopter as part of a networked system of capabilities, rather than as a stand-alone platform. A key failing of the Tiger has been its inability to network with other elements of the Australian Defence Force, and the new platform must plug and play seamlessly and securely with existing army and ADF forces from the outset.

The new helicopter is one part of a team of crewed and uncrewed platforms. ‘Manned–unmanned teaming’ on the future battlespace, in which crewed attack helicopters work alongside swarms of armed uncrewed aerial vehicles, and even armed autonomous ground vehicles complementing crewed armoured fighting vehicles, has to be the vision that the army aspires to.

What’s most important is that the decision should reflect a land-warfare vision that includes large-scale use of armed autonomous systems in the air and on land. Adversaries will be taking advantage of the military power of such systems to complement—or even replace—crewed systems, and if our own defence organisation doesn’t, Australia will be at a disadvantage.

Such a system will demand investment in battlespace command and control that is resistant to countermeasures such as electronic warfare, cyberattack and kinetic attack. Such a capability needs to embrace the ‘small, cheap and many’ approach of ‘command clouds’ operating in the air, over land and even from space, using low-cost, small and easily deployable components. Very high altitude, long-endurance UAVs operating in near space can complement locally developed and launched small satellites and constellations of smart cubesats to provide tactical communications and intelligence, surveillance and reconnaissance support to swarms of lethal autonomous weapons. In such a scenario, an attack helicopter would hang back, managing the swarms via the command cloud, and avoid needlessly putting itself at risk over what will be an intensely contested battlespace.

Factoring in the role of autonomous systems is crucial to thinking about the future of army capability, and the decision to replace the Tiger should be used as an inflection point in how we think about future war.

As I noted in an earlier article, armed drones are becoming more and more prevalent, placing large armoured fighting vehicles in increasing peril. We’ve watched cheap suicide drones strike large, complex and expensive tanks with relative impunity in the battles between Armenia and Azerbaijan over Nagorno-Karabakh. We should fully expect an adversary to use similar capabilities against the ADF. China’s military, for example, has already tested massive swarms of suicide drones that can be launched from the back of a truck. Add in much more capable battlefield rocket artillery, more advanced armoured fighting vehicles, and advanced electronic warfare and cyberattack capability, and 29 helicopters alone aren’t going to provide a clear solution to a much more challenging future battlespace.

With these challenges in mind, the Tiger replacement project must at a minimum provide a follow-on phase that considers lethal autonomous weapons to defend the helicopters themselves, protect forces on the ground and attack an opponent’s forces—including enemy swarms of UAVs. And it would be far preferable to begin such a complementary capability approach now, rather than after any new helicopter is acquired, even if this means buying fewer airframes.

Obviously, operational context matters, and whatever platform Defence chooses will have to be able to operate in Australia’s predominantly maritime environment, particularly as part of operations supporting the US to deter and counter any threat posed by an adversary in the Indo-Pacific. We shouldn’t buy helicopters with a view to using them only in a low-intensity, lightly contested operational environment against an irregular adversary.

Yet, even a low-tech threat can be dangerous. The US lost 19 Apaches and 3 AH-1W Super Cobras to hostile fire in Iraq between 2003 and 2009. With the ADF facing the worsening strategic outlook outlined in the strategic update, the Tiger replacement decision should be looked at as part of the army’s preparation for Australian involvement in a high-intensity war in the Indo-Pacific.

The thorny issue of the army’s strategic readiness and mobility must also be addressed, as well as Australia’s ability to sustain high-intensity combat operations potentially far from home. The army has to be able to play a role in such a fight. So, how will it do it? The next attack helicopter isn’t the biggest issue on the table—it’s how new capabilities are used to ensure the ADF can defend Australia’s interests in a very high intensity operational environment during or even before war.

Achieving a mass of combat forces by investing in large swarms of lethal autonomous weapons to operate alongside crewed platforms such as Apaches or Vipers would be a start. Complementing that capability with resilient battlespace command and control is essential. And ensuring the Australian Army is highly mobile and can decisively project force into a contested operational environment should be our goal in thinking about the future of war in our region.

Balancing the lopsided debate on autonomous weapon systems

The question of whether new international rules should be developed to prohibit or restrict the use of autonomous weapon systems has preoccupied governments, academics and arms-control proponents for the better part of a decade. Many civil-society groups claim to see growing momentum in support of a ban. Yet broad agreement, let alone consensus, about the way ahead remains elusive.

In some respects, the discussion that's occurring within the UN Group of Governmental Experts (GGE) on lethal autonomous weapons systems differs from any arms-control negotiations that have taken place before. In other respects, it's a case of déjà vu.

To begin with, disagreements about the humanitarian risk–benefit balance of military technology are nothing new. Chemical weapons and cluster munitions provide the clearest examples of such controversies.

Chemical weapons have come to be regarded as inhumane, mainly because of the unacceptable suffering they can cause to combatants. But the argument has also been made that they’re more humane than the alternatives: some have described the relatively low ratio of deaths and permanent injuries resulting from chemical warfare as an ‘index of its humaneness’.

Cluster munitions, meanwhile, have been subjected to regulation because of the harm they can inflict on civilians and civilian infrastructure. Yet many have claimed that these weapons are particularly efficient against area targets, and that banning them is therefore counter-humanitarian because it leads to ‘more suffering and less discrimination’.

Autonomous weapon systems have triggered a similar debate: each side claims to be guided by humanitarian considerations. But the debate remains lopsided.

The autonomous weapons debate is unique at least in part because its subject matter lacks proper delimitation. Existing arms-control agreements deal with specific types of weapons or weapon systems, defined by their effects or other technical criteria. The GGE, in contrast, is tasked with considering functions and technologies that might be present in any weapon system. Unsurprisingly, then, it has proven difficult to agree on the kinds of systems that the group’s work should address.

Some set the threshold quite high and see an autonomous weapon system as a futuristic system that ‘can learn autonomously, [and] expand its functions and capabilities in a way exceeding human expectations’. Others consider autonomy to be a matter of degree, rather than a matter of kind, so that the functions of different weapon systems fall along a spectrum of autonomy. According to that view, autonomous weapons include systems that have been in operation for decades, such as air-defence systems (Iron Dome), fire-and-forget missiles (AMRAAM) and loitering munitions (Harop).

All of this has made it harder to pin down the object of the discussion. The GGE so far hasn’t made much headway on clarifying the amorphous concept. Indeed, rather than treat autonomous weapon systems as a category of weapons, the group’s recent reports refer circuitously to ‘weapons systems based on emerging technologies in the area of lethal autonomous weapons systems’. No wonder participants in the debate keep talking past each other.

The uncertainty about what autonomous weapon systems are has led to hypotheses about their adverse effects. The regulation of most other weapons has been achieved in large part due to their demonstrable or clearly predictable humanitarian harm. This is true even with respect to blinding laser weapons, the pre-emptive prohibition of which is often touted as a model to follow for autonomous weapon systems. The early evidence of battlefield effects of laser devices enabled reliable predictions to be made about the humanitarian consequences of wide-scale laser weapons use.

When autonomous weapon systems are considered to be some yet-to-exist category, it’s only possible to talk about potential adverse humanitarian consequences—in other words, humanitarian risks. The possible benefits of autonomous weapon systems also have a degree of uncertainty to them. However, the use of limited autonomous functionality in existing systems allows for some generalisations and projections to be made.

The range of risks has been discussed in detail and explicitly referenced in the consensus-based GGE reports. Such risks include harm to civilians and combatants in contravention of international humanitarian law, a lowering of the threshold for use of force, and vulnerability to hacking and interference.

Potential benefits of autonomous functions—for example, increased accuracy in some contexts or autonomous self-destruction, both to reduce the risk of indiscriminate effects—barely find their way into the GGE reports. The closest the most recent report gets to this issue is a suggestion that consideration be given to ‘the use of emerging technologies in the area of lethal autonomous weapons systems in upholding compliance with … applicable international legal obligations’. This vague language has been used despite some governments highlighting a range of military applications of autonomy that further humanitarian outcomes, and others noting that autonomy helps to overcome many operational and economic challenges associated with manned weapon systems.

The issue has become politicised and ideological: many see a discussion of benefits in this context as a way to legitimise autonomous weapon systems, thus getting in the way of a ban.

We do not wish to suggest that risks of autonomous technology be disregarded. Quite the opposite: a thorough identification and a careful assessment of risks associated with autonomous weapon systems remains crucial. However, rejecting the notion that there might also be humanitarian benefits to their use, or refusing to discuss them, is highly problematic and likely to jeopardise the prospect of finding a meaningful resolution to the debate.

Reasonable regulation cannot be devised by focusing on risks or benefits alone; some form of balancing must take place. Indeed, humanitarian benefits might sometimes be so significant as to make the use of an autonomous weapon system not only permissible, but also legally or ethically obligatory.

Defence should accelerate Australia’s adoption of autonomous systems

It’s commonplace in commentary about the Australian Defence Force to say that its force structure looks today a lot like it did 30, 40 or 50 years ago. The structure remains largely the same, while the systems in it are replaced with newer, better, often larger, and always more expensive versions of the old systems. While Defence likes to talk about ‘effects’, when it comes to buying actual equipment, it defaults to getting something that looks a lot like what it’s familiar with.

When we combine this deep-seated institutional trait with the very human tendency to judge the performance of machines more harshly than that of humans, it’s not surprising that Defence’s adoption of autonomous systems has been incremental, to use a polite term, or slow, to use another one.

That's despite widespread recognition of the benefits that autonomous weapon systems can potentially provide to militaries. They include removing humans from high-threat environments, breaking out of manned platforms' vicious cost cycle, achieving greater mass on the battlefield, exploiting asymmetric advantages, leveraging the civil sector's massive research and development spending on autonomous systems, and accelerating capability development timelines.

In a new ASPI report I suggest ways to accelerate the adoption of autonomous systems in the ADF and turn the potential benefits into actual ones.

At its core, it’s an issue of trust. Defence has been gradually improving its members’ trust in autonomous systems, both individually and collectively. It’s also been making moderate investments in improving autonomous technologies so that they are more trustworthy. But others are moving much faster, including potential state and non-state adversaries, and the civilian world.

It's also a matter of reconsidering how we view risk. While it's easy to see risk in autonomous systems, we need to recognise that manned platforms can also present significant capability risk; if they can't protect their precious human cargo on an increasingly dangerous battlefield, we won't deploy them, rendering the investment in them worthless. Defence's investment strategy of doubling down on manned platforms is itself high risk.

It’s time to do more. Securing greater investment in autonomous systems will be difficult, considering Defence’s continued heavy investment in traditional platforms, which is unlikely to be moderated in the near term. However, autonomous systems offer the potential for Defence to hedge its capability risk, particularly if they can come at reduced cost and relieve pressure on Defence’s investment program.

How can Defence jump-start its approach to autonomous systems? One way to achieve this is to not replace manned platforms with other manned platforms where there’s no compelling need to do so. This frees up funding not only for autonomous systems but for other emerging priorities. Another way is to not seek to replace manned platforms with an autonomous solution that essentially does the same job. Rather, Defence could think disruptively and explore new roles that autonomous systems can perform that are quite different from those of current manned platforms.

The Tiger armed reconnaissance helicopter (ARH) provides a clear case in which it’s possible for Defence to avoid an expensive ‘like for like’ replacement of a manned platform. While the Tiger has had a troubled history, the army has publicly stated that it now provides a high level of capability, including operating from the navy’s landing helicopter docks in amphibious roles. Defence’s Integrated Investment Program is also delivering systems like the Reaper armed unmanned aerial vehicle and long-range rocket systems that provide many of the effects sought from an ARH.

Therefore, this is an area where Defence can experiment safely with the accelerated adoption of autonomous systems without extreme capability risk should that experiment not succeed. It’s an ideal area to explore human–machine teaming. It’s also an area where accelerated experimentation can produce positive lessons for Defence more broadly.

As part of the strategic and capability review that Defence is currently conducting, it should avoid investment of the roughly $3 billion needed to acquire a new ARH. Rather, it should keep the Tiger in service while investing around $1 billion of the funds saved in the development and acquisition of autonomous systems.

While these systems could deliver some of the effects sought from an ARH, Defence shouldn’t seek primarily to develop an unmanned version of an ARH. Instead, it should actively explore in an open-ended way the disruptive potential of armed unmanned and autonomous systems for battlefield aviation.

Such systems would initially complement the Tiger to create greater effects than the Tiger can generate alone. Eventually, this pathway would allow Defence to remove the Tiger and its human crews from the battlespace.

To accelerate this development, Defence could establish an interdisciplinary team, including representatives from a broad range of the army’s trades as well as industry and academia, whose sole function would be to identify and experiment with disruptive autonomous innovations in battlefield aviation. By sitting outside Defence’s day-to-day business, they would have the ability to think disruptively—to the point of replacing the business as usual model.

And to promote technological innovation more broadly, around $850 million of the savings realised by not replacing Tiger with a manned ARH could be dedicated to doubling Defence’s innovation funds. Currently they represent less than 0.5% of Defence’s funding. Doubling them (at no net cost) would send a clear signal that Defence sees itself as a leader in technological innovation.

This approach would offer greater benefits to both the ADF and Australian defence industry than acquisition of a new, manned off-the-shelf ARH and jump-start the transition to an increasingly autonomous future.

Robots and the future Army


A recent post on The Strategist argued persuasively about the potential offered by robots for future naval shipbuilding productivity, urging each of the three SEA 1000 Competitive Evaluation Process contenders to include robot research and development (R&D) projects in their final submissions due by 30 November 2015.

These R&D projects would investigate how and where to use robotic technology when building Australia’s Future Submarines and the Future Surface Combatants.

What use is robotic technology to Australia’s Future Army?

Given that the first of science fiction writer Isaac Asimov's three laws of robotics states 'a robot may not injure a human being or, through inaction, allow a human being to come to harm', what does this mean for army robot applications?

There’s a legitimate role for robots on the battlefield, separating soldiers from avoidable threats, and there will be an increase in their use following ADF experience in Afghanistan. In addition to unmanned ground vehicles (UGVs) themselves, other key areas of development include the vehicle payloads and attachments such as sensors, cameras and interrogation tools.

Looking more closely at the second part of the First Law, one can see how robotic UGVs could prevent harm to soldiers by carrying out various tasks, including improvised explosive device (IED) searches, bomb disposal, ground surveillance, checkpoint operations, urban street presence, reconnoitring urban settings prior to military raids, and 'drawing first fire' from insurgents and terrorists. Humanoid robots under development are capable of detecting and then rescuing or recovering wounded and dead soldiers from the battlefield.

Apart from when directly controlled by a soldier through a tethered lead or radio link, there’s the possibility of programming a robot for limited autonomous applications like patrolling a battleground perimeter after setting GPS way points. The Israeli Defence Force already uses their Guardium MK III autonomous UGVs to monitor Israel’s land borders.
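The waypoint-patrol behaviour described above can be sketched in a few lines of Python. This is purely illustrative: the coordinates, arrival radius and the `get_position`/`drive_toward` callbacks are assumptions standing in for a platform's real navigation interface, not any vendor's API.

```python
import math

def distance_m(a, b):
    """Approximate flat-earth distance in metres between two (lat, lon) points."""
    dlat = (a[0] - b[0]) * 111_320                                  # metres per degree of latitude
    dlon = (a[1] - b[1]) * 111_320 * math.cos(math.radians(a[0]))   # shrinks with latitude
    return math.hypot(dlat, dlon)

def patrol(waypoints, get_position, drive_toward, arrival_radius_m=5.0, laps=1):
    """Visit each GPS waypoint in order, repeating for the given number of laps.

    get_position() returns the current (lat, lon); drive_toward(wp) issues one
    platform-specific motion command toward the waypoint. Both are assumed hooks.
    """
    for _ in range(laps):
        for wp in waypoints:
            # Keep commanding motion until we're inside the arrival radius.
            while distance_m(get_position(), wp) > arrival_radius_m:
                drive_toward(wp)
```

A real system would layer obstacle avoidance, loss-of-link behaviour and geofencing on top of this loop; the sketch only shows the 'set GPS waypoints, then patrol' logic.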

Semi-autonomous robots, although sometimes slightly noisy, can 'follow the leader' in logistics operations carrying a heavy load, leaving the soldier free and fully fit for action. The US Marines deployed their MULE at RIMPAC 2014: a quadruped robot that can traverse difficult terrain carrying 180 kg of soldiers' combat supplies for over 30 km without refuelling.

The ADF used small tracked Talon UGVs in Afghanistan for IED detection and disposal, as well as identification of hazardous materials and combat engineering support, and has since brought them back to Australia. Talon, directed by an operator control unit through a two-way radio or fibre-optic link, has impressive performance across ground, around and over obstacles, as well as the capability provided by fitting different sensors and tools.

Army currently employs eight remote control tracked MV-10 Mine Flails, developed by Croatia-based DOK-ING. The medium sized MV-10 was selected for the LAND 144 Countermine Capability project. The system can clear all types of anti-personnel mines, anti-tank mines and unexploded ordnance.

The Defence White Paper 2015 will reveal more thinking on the potential use of robotic devices in the future army. Apart from examples quoted above, those could include searching areas for survivors during humanitarian and disaster response operations; as mobile communications nodes when fixed communications networks are disabled in a natural disaster; improving targeting in an urban setting to minimise collateral damage; and as ‘eyes and ears’ maintaining watch over an ADF defended area.

The Directorate of Future Land Warfare, in conjunction with the Defence Science and Technology Group (DST Group), is currently undertaking a line of research to assess how robotics and autonomous systems can be best utilised by future land forces. DST Group and University of Sydney’s Australian Centre for Field Robotics have formed a Centre of Expertise in Defence Autonomous and Uninhabited Vehicles to pursue academic research and develop patented technologies in this field.

One chilling international military development is the possibility of lethal autonomous robots (LARs) which, once activated, will be able to roam without further human intervention. In the same way that land mines can't discriminate between innocent civilians and combat troops, LARs making their own targeting decisions raise the risk of indiscriminate harm.

United Nations Human Rights Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, says 'Machines lack morality and mortality, and many people believe they should as a result not have life and death powers over humans.' He believes there's a need to keep a human decision-maker in the sensor-to-shooter loop and that deployment of LARs would weaken the role and rule of international law, undermining the international security system.

Australia's Future Army will take advantage of robotic technology to save soldiers' lives and rescue survivors from natural disasters, but it's difficult to see how, given our culture, armed robots would ever be permitted to deploy without ultimate human control over the use of lethal force.

Drones and the kill-decision-making loop

MQ-9 Reaper unmanned aerial vehicle

Globally, state use of armed unmanned aerial vehicles (UAVs, or 'drones') has advanced in leaps and bounds in recent years, as the technology provides significant advantages in counter-terrorism and warfare. The US, for example, has clearly established itself as the most prolific user of drone technology, with such success against al Qaeda that the group recently developed a manual on how to avoid drone strikes. At the 2013 Australian International Airshow, Minister for Defence Stephen Smith indicated that armed UAVs might have a role in the ADF in the future and called for a debate. He's right in identifying the need for this conversation to take place, especially given that it is such a rapidly evolving and widely used global technology. An important part of any debate should be about what comes next: autonomous killer drones.

By some estimates, fully autonomous systems might be as close as five years from now. This will depend upon the pace of innovation, societal acceptance and the security requirements of states, but also on how quickly we progress toward what computer scientists call singularity—the point when the power of computers exceeds the power of human brains.

When it comes to more complex autonomous systems such as drones, a key question becomes, how 'autonomous' do we want them, especially when it concerns targeting with lethal force? The United States has already begun a debate about autonomous lethal systems. Having the Australian debate now will enable us to shape the future development of drone technology and avoid some of the potential mistakes that could be made.