Tag Archive for: Artificial Intelligence

Is China hyping its ‘intelligent’ cruise missile capability?


China is waging a relentless propaganda campaign against its opponents in the South China Sea. Following the Hague arbitral tribunal’s verdict rejecting Beijing’s historical rights within the nine-dash line, China’s publicity managers have raised their game with devastating effect. With well-timed reports suggesting a plan for a Chinese ADIZ in the South China Sea, the fresh reclamation of reefs and shoals in the Spratly group of islands and even reports of military aircraft patrols over the disputed islands, they’ve managed to convince many regional watchers that China has emerged as a dominant maritime power in the Asia–Pacific.

Meanwhile, Beijing’s kept an eye on the evolving situation in contested regional littorals. Chinese leaders know that for the military posturing to be effective, they must create the impression that, if pushed hard enough, the People’s Liberation Army won’t hesitate to initiate hostilities. Beijing’s latest gambit is the release of a media report about the development of a family of cruise missiles with artificial intelligence (AI) and autonomous capabilities. Apparently, China’s aerospace industry is working to equip its tactical missiles with in-built intelligence that would help them seek out targets in combat. The ‘plug and play’ approach, the report says, could potentially enable China’s military commanders to launch missiles tailor-made for specific combat conditions.

Predictably, Chinese sources offer no clarification of what ‘tailor-made cruise missiles with high levels of artificial intelligence and automation’ really means. Apart from reiterating China’s global leadership status in the field of artificial intelligence, the report doesn’t provide any insight into the specific nature of the autonomous capability being developed. The PLA, too, has maintained a conspicuous silence over its plans to field real AI in its combat missiles. That silence persists despite its public pursuit of advanced long-range precision strike weapons, including a family of modular cruise missiles with in-flight retargeting capability like the Block IV Tomahawk.

Part of the problem here is the dichotomy between the theoretical definition of AI and its popular interpretation. Technically, AI is any onboard intelligence that lets machines in combat execute routine tasks, freeing human operators to focus on more demanding and complex missions. In theory, AI augments human decision-making by capturing knowledge that can be re-applied to enable critical high-tempo operations.

In practice, however, AI is a term used for a combat system with the ability to make targeting decisions. It’s more in the realm of ‘who to target’ than ‘how to target’—a task that guided missiles have been performing for some time with reasonable precision. Maritime forces remain skeptical of autonomous weapon systems deciding to attack an enemy platform, an action universally construed as an act of war. The decision to execute a missile launch is still considered the exclusive preserve of the command team (led by the ship’s captain), which must independently assess the threat and act in pursuit of war objectives.

Despite several advancements allowing for a more precise targeting of platforms, the logic of maritime operations hasn’t fundamentally changed. As a result, naval missiles haven’t been invested with any serious intelligence to make command decisions to target enemy units. While their ability to strike targets has been radically enhanced—through the use of superior onboard gyros, computing systems and track radars—the basic mode of operation of cruise missiles remains the same.
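
To make that distinction concrete, here’s a minimal, purely illustrative sketch in Python. All of the class and function names are hypothetical and don’t correspond to any real combat or weapon system; the point is only to show the division of labour the preceding paragraphs describe: the command team decides who to target, while the weapon’s guidance loop handles only how to reach the target it has been assigned.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Track:
    track_id: str
    bearing_deg: float
    range_km: float
    classified_hostile: bool  # assessment made by the ship's command team, not the weapon


class GuidanceLoop:
    """Autonomous 'how to target': homes on a single, pre-designated track."""

    def __init__(self, designated: Track):
        self.designated = designated

    def steer(self) -> float:
        # A stand-in for a guidance law: steer toward the designated bearing.
        # Real guidance is far more elaborate, but the point stands: the loop
        # never chooses a different target from the one it was given.
        return self.designated.bearing_deg


def authorise_launch(track: Track, captain_approves: bool) -> Optional[GuidanceLoop]:
    """'Who to target' stays with the command team, not the weapon."""
    if track.classified_hostile and captain_approves:
        return GuidanceLoop(designated=track)
    return None


if __name__ == "__main__":
    contact = Track("T-042", bearing_deg=275.0, range_km=120.0, classified_hostile=True)
    weapon = authorise_launch(contact, captain_approves=True)
    if weapon is not None:
        print(f"Launch authorised; steering to bearing {weapon.steer()} degrees")
```

In this framing, giving a missile ‘intelligence’ in the sense the Chinese report implies would mean moving the launch-authorisation decision into the weapon itself, which is precisely the step navies have so far resisted.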

In recent years, both the US and China have revealed their plans to develop a dispersed maritime force, with long-range sensors, armor protection and networking technologies. Increasingly, precision-guided munitions and drones are using satellite-based navigation systems and inertial navigation backups to target enemy systems. Despite the prospect for greater autonomous maneuvering, however, both the US and China appear to have desisted from developing missile systems with artificial intelligence.
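
As a rough illustration of the navigation arrangement described above, the sketch below shows a toy position estimator that prefers a satellite fix when one is available and falls back to inertial dead reckoning when it isn’t. It’s a simplification under obvious assumptions (flat-earth arithmetic, perfectly measured velocity) and isn’t drawn from any fielded system.

```python
from typing import Optional, Tuple


def estimate_position(
    last_position: Tuple[float, float],      # (lat, lon) in degrees
    velocity: Tuple[float, float],           # (d_lat, d_lon) per second, simplified
    dt_seconds: float,
    satellite_fix: Optional[Tuple[float, float]] = None,
) -> Tuple[float, float]:
    """Return the best available position estimate for this update cycle."""
    if satellite_fix is not None:
        # Satellite navigation available: use the fix directly.
        return satellite_fix
    # Satellite signal jammed or denied: dead-reckon from the last known
    # position using inertially measured velocity. Error grows over time,
    # which is why the satellite fix is preferred whenever it can be had.
    return (
        last_position[0] + velocity[0] * dt_seconds,
        last_position[1] + velocity[1] * dt_seconds,
    )


# Example: one update cycle with a satellite fix, then one without.
pos = estimate_position((12.0, 114.0), (0.0001, 0.0002), 1.0, satellite_fix=(12.0001, 114.0002))
pos = estimate_position(pos, (0.0001, 0.0002), 1.0, satellite_fix=None)
print(pos)
```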

In part, that may be a consequence of the debate surrounding AI and autonomous naval platforms. While developments in AI have the potential to radically change naval operations at sea, many maritime practitioners are uncomfortable with the use of unmanned and autonomous systems in combat—particularly the development of lethal autonomous weapons systems (LAWS). The ethical dilemma arises from the ability of LAWS to kill people, and from policymakers’ reservations about inanimate systems taking decisions to terminate lives.

The ethical dimension of using ‘intelligent’ weapons is important because international humanitarian law—the body of law governing attacks in times of war—has no specific provisions for such autonomy. The 1949 Geneva Convention on humane conduct in war requires any attack to satisfy three criteria: military necessity; discrimination between combatants and non-combatants; and proportionality between the value of the military objective and the potential for collateral damage. Evidently, these are subjective judgments that no current AI system seems able fully to satisfy.

But humanitarian and ethical predicaments don’t seem to concern Chinese officials when intelligent missiles and combat systems are up for discussion. Beijing’s priority is to highlight President Xi Jinping’s ambitious military modernisation program, and its psychological impact on China’s adversaries. Signaling an ‘intelligent’ cruise missile capability is China’s way of expressing to its adversaries a firm intent to protect its interests.

Killer robots? Getting LAWS right


Technology is steadily marching in the direction of increased autonomy, a change that will undoubtedly influence weapon platforms in the future. The notion of offensive use of lethal autonomous weapon systems (LAWS)—systems that can independently identify targets and take lethal action—has already stirred disquiet in the international community, even though no such capability yet exists. While discussion of the legal and ethical ramifications of LAWS is welcome and crucial, it often gets tangled in the technicalities of autonomous systems and artificial intelligence (AI). The “killer robots” rhetoric could stifle valuable technological advances that might produce greater precision and discrimination.

Attention around LAWS skyrocketed in 2012 when Human Rights Watch released Losing humanity: the case against killer robots, arguing for a legal ban on the development of fully autonomous weapons and for the creation of a code of conduct for R&D of autonomous robotic weapons. The report spurred the 2014 UN Convention on Certain Conventional Weapons (CCW) Meeting of Experts, which convened again last month.

There’s also concern about LAWS closer to home. At a Senate Committee hearing last month on the use of unmanned platforms by the Australian Defence Force, witnesses from the Red Cross raised concerns about the development of fully autonomous systems and their capacity to target discriminately. (You can read the testimonies to the committee here (PDF), including my contribution with Andrew Davies.)

It’s a great sign that the CCW and other bodies are anticipating the challenges posed by LAWS. The US stirred up serious consternation when it first deployed Predators with Hellfire missiles after 9/11, but there were no meetings of experts or inquiries beforehand. A decade on from the first lethal drone strikes, concerns about lethal unmanned aerial vehicles persist despite consensus that the technology doesn’t contravene international humanitarian law. But a bad reputation is hard to shake, and LAWS have already been saddled with the “killer robot” label. The provocative branding has started an important conversation about the extent to which the world is comfortable with autonomous targeting.

But budding discussions on the potential legal and normative challenges of LAWS don’t clearly define what LAWS actually are—the UN’s still without an official definition. This creates confusion as to whether to include capabilities such as missile defence systems that autonomously identify and destroy incoming missiles and rockets. There’s also a complex and evolving spectrum of technological autonomy to take into account. On one end, there’s technology in use today with autonomous functions—like missile defence systems. At the other end, there are systems that have advanced ‘reasoning’ and adaptive problem solving skills, which could more accurately be defined as artificially intelligent rather than autonomous. Systems with human-like reasoning skills don’t yet exist but they’re certainly on the agenda of research groups like DARPA.
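
One way to picture that spectrum is the informal taxonomy below, sketched as a small Python enum. The level names and example placements are common shorthand in the autonomy debate rather than official UN or CCW categories, which, as noted, don’t yet exist.

```python
from enum import Enum


class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1   # a human selects and approves each engagement
    HUMAN_ON_THE_LOOP = 2   # the system can engage; a human supervises and may veto
    FULLY_AUTONOMOUS = 3    # the system selects and engages targets on its own
    ADAPTIVE_AI = 4         # adds 'reasoning' and adaptive problem-solving; not yet fielded


# Illustrative placements only, reflecting how such systems are commonly described.
examples = {
    "remotely piloted armed drone": AutonomyLevel.HUMAN_IN_THE_LOOP,
    "missile-defence system in automatic mode": AutonomyLevel.HUMAN_ON_THE_LOOP,
    "hypothetical offensive LAWS": AutonomyLevel.FULLY_AUTONOMOUS,
}

for system, level in examples.items():
    print(f"{system}: {level.name}")
```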

Confusion on this subject is in large part created by the novelty of autonomous systems and AI. While we’re only in the early stages of development, general unease is reflected in the blanket bans proposed by Human Rights Watch, along with other initiatives like the Campaign to Stop Killer Robots. These groups assume that LAWS will undermine international humanitarian law (IHL) and challenge the status of civilians in warfare, since such systems would lack human judgement and decision-making. But there’s nothing in IHL currently that states that only a human can make lethal decisions, nor any reason to suggest that those systems won’t eventually be capable of distinguishing between civilians and lawful targets at least as well as humans can.

As Kenneth Anderson and Matthew Waxman have argued, LAWS of the future might actually be more discriminate and proportionate as weaponry. The processing speed possible for LAWS and their ability to remain on station for extended periods without interruption could lead to greatly enhanced battlefield awareness—’dumb’ drones are already providing some of these benefits. There’s also the possibility that removing human emotions—those which can cloud decision-making—could result in fewer civilian casualties. A ban on R&D would suppress potentially ground-breaking developments.

There are many unknowns surrounding the future of autonomous systems and AI. The technology has a long way to go before we can field a system that’s capable of decision-making, reasoning and problem solving in a complex environment on par with a highly trained soldier. There’s also no guarantee that science will ever develop this level of AI. As Chris Jenks commented in his recent lecture on autonomous systems at the ANU, humans are tremendously poor predictors of the future, especially when it comes to technology.

For now, the international community should work to develop an accepted definition for LAWS. It needs to be flexible enough to account for the many unknowns, and capable of evolving to match the development of autonomous systems and AI. Establishing a definition will be challenging but it’s needed to advance the important dialogue around the laws and norms of potential offensive use of LAWS. The use of inflammatory labels like “killer robots” should be discouraged—they serve only to encourage falsehoods and engender confusion about LAWS.