
Hacking for ca$h

Is China still stealing Western IP?

Introduction

In September 2015, following mounting pressure exerted by the US on China, Chinese President Xi Jinping agreed to a US proposal that neither country would steal the other’s intellectual property (IP) for commercial gain. This bilateral agreement was quickly expanded when the US succeeded in inserting similar language into the November 2015 G20 communique. A handful of other countries also pursued their own bilateral agreements.

Three years after the inking of the US–China agreement, this report examines China’s adherence to those agreements in three countries: the US, Germany and Australia. The research combined desktop analysis with interviews with senior government officials in all three countries.

The rationale for this multi-country report was to examine patterns and trends among countries that had struck agreements with China.

In all three countries, it was found that China was clearly, or was likely to be, in breach of its agreements. China has adapted its approach to commercial cyber espionage: attacks have become more targeted and use more sophisticated tradecraft. This improved tradecraft may also be leading to an underestimation of the scale of ongoing activity.

Despite initial hopes that China had accepted a distinction between (legitimate) traditional political–military espionage and (illegal) espionage to advantage commercial companies, assessments from the three countries suggest that this might be wishful thinking.

China appears to have come to the conclusion that the combination of improved techniques and more focused efforts has reduced Western frustration to levels that will be tolerated. Unless the targeted states ramp up pressure and potential costs, China is likely to continue its current approach.

United States

By Adam Segal

In September 2015, presidents Barack Obama and Xi Jinping stood next to each other and declared that neither the US nor the Chinese government ‘will conduct or knowingly support cyber-enabled theft of intellectual property, including trade secrets or other confidential business information for commercial advantage’.1 Despite significant scepticism about whether China would uphold its pledge, cybersecurity companies and US officials suggested that the number of attacks did in fact decline in the first year of the agreement. China inked similar deals with Australia, Canada, Germany and the UK, and, in November 2015, China, Brazil, Russia, the US and other members of the Group of Twenty accepted the norm against conducting cyber-enabled theft of IP.2 The agreement has been held up as evidence that a policy of public ‘naming and shaming’ tied to a threat of sanctions can change state actions, and as a success by the US and its allies in defining a norm of state behaviour in cyberspace.

There is, however, increasing evidence that Chinese hackers re-emerged in 2017 and are now violating both the letter and the spirit of the agreement. CrowdStrike, FireEye, PwC, Symantec and other companies have reported attacks on US companies, and the Trump administration has claimed that ‘Evidence indicates that China continues its policy and practice, spanning more than a decade, of using cyber intrusions to target US firms to access their sensitive commercial information and trade secrets.’3 The initial downturn in activity appears to be less the result of US pressure than of an internal reorganisation of cyber forces in the People’s Liberation Army (PLA). Moreover, it’s increasingly clear that the number of attacks isn’t the correct metric for the Sino-US cyber relationship. A decline in the number of attacks doesn’t necessarily mean a decrease in their impact on US economic interests, as Chinese operators have significantly improved their tradecraft.

Washington and its allies will soon have to decide what they’re going to do (again) about Chinese industrial cyber espionage. The Trump administration’s approach so far has been indirect, raising China-based hacking in the context of a larger critique of Beijing’s industrial policy and failure to protect IP. Without significant pushback, China is likely to believe that it has reached a new equilibrium with Washington, defined by a smaller absolute number of higher-impact cyber operations.

The challenge of industrial cyber espionage

For at least a decade and a half, Chinese hackers have conducted a widespread campaign of industrial cyber espionage, targeting private sector companies in an effort to steal IP, trade secrets and other information that could help China become economically more competitive. President Xi has set the goal for China to become a ‘world leading’ science and technology power by 2049, and the country has significantly ramped up spending on research and development, expanded enrolment in science, technology, engineering and mathematics disciplines at universities, and pushed industrial policy in areas such as semiconductors, artificial intelligence and quantum computing. However, the country also continues to rely on industrial espionage directed at high-technology and advanced manufacturing companies. Hackers have also reportedly targeted the negotiation strategies and financial information of energy, banking, law, pharmaceuticals and other companies. In 2013, the Commission on the Theft of American Intellectual Property, chaired by former Director of National Intelligence Admiral Dennis Blair and former US Ambassador to China Jon Huntsman, estimated that the theft of IP totalled US$300 billion (A$412 billion, €257 billion) annually, and that 50–80% of thefts were by China.4

The US responded to state-sponsored Chinese cyberattacks with a two-step process. First, Washington created a distinction between legitimate espionage for political and military purposes and the cyber-enabled theft of IP. As President Obama framed it:

Every country in the world, large and small, engages in intelligence gathering. There’s a big difference between China wanting to figure out how can they find out what my talking points are when I’m meeting with the Japanese which is standard and a hacker directly connected with the Chinese government or the Chinese military breaking into Apple’s software systems to see if they can obtain the designs for the latest Apple product. That’s theft. And we can’t tolerate that.5

Espionage against defence industries, such as the theft of highly sensitive data related to undersea warfare, first reported in June 2018, would be considered legitimate, and the onus would be on the defender to keep hackers out of its systems.6

Second, Washington directly and increasingly publicly confronted Beijing. In the winter of 2013, the incident response firm Mandiant, now part of FireEye, put out a report tracing cyber espionage against American companies to Unit 61398 of the PLA, located in a building on the outskirts of Shanghai.7 A few days later, the Department of Homeland Security provided internet service providers with the IP addresses of hacking groups in China. In March 2013, in a speech at the Asia Society, National Security Advisor Tom Donilon spoke of ‘serious concerns about sophisticated, targeted theft of confidential business information and proprietary technologies through cyber intrusions emanating from China on an unprecedented scale’.8 When the two leaders met at Sunnylands in June 2013, President Obama warned President Xi that the hacking could severely damage the bilateral relationship.

In May 2014, the US Department of Justice indicted five PLA hackers for stealing the business plans and other IP of Westinghouse Electric, United States Steel Corporation and other companies.9 In April 2015, President Obama signed an executive order allowing economic sanctions against companies or individuals that profited from cyber theft. The order threatened to block financial transactions routed through the US, limit access to the US market and prevent company executives from travelling through the US. The Washington Post reported in August 2015 that the administration planned to levy those sanctions against Chinese companies.10 Worried that sanctions or indictments would cast a pall over the September presidential summit, Meng Jianzhu, a member of the Political Bureau of the Central Committee of the Chinese Communist Party, flew to Washington to make a deal.

First-year decline

In the first year, the available evidence suggested that Beijing was upholding the agreement and that the overall level of Chinese hacking had declined. FireEye released a report in June 2016 showing that the number of network compromises by the China-based hacking groups it was tracking had dropped from 60 in February 2013 to fewer than 10 by May 2016.11 However, FireEye noted that Chinese hackers could drop the total number of attacks while increasing their sophistication. Around the same time, US Assistant Attorney General John Carlin confirmed the company’s findings that attacks were fewer but more focused and calculated.

As the report also noted, the decline began before September 2015, undermining the causal link between US policy and Chinese behaviour. There were two internal factors in play. First, soon after taking office, Xi launched a massive and sustained anticorruption campaign. Many hackers were launching attacks for private gain after work, misappropriating state resources by using the infrastructure they had built during official hours. Hacking for personal profit was caught up in a broad clampdown on illegal activities.

Second, the PLA was engaged in an internal reorganisation, consolidating forces and control over activities. Cyber operations had been spread across 3PLA and 4PLA units, and the General Staff Department Third Department had been managing at least 12 operational bureaus and three research institutes. In December 2015, China established its new Strategic Support Force, whose responsibilities include electronic warfare, cyber offence and defence, and psychological warfare. In effect, PLA cyber forces were told to concentrate on operations in support of military goals and move out of industrial espionage.

The first publicly reported cyber espionage attempts in the wake of the agreement were either against military targets or involved the theft of dual-use technologies that would fall in the grey zone. Cyber industrial espionage attacks didn’t end, but instead were transferred to units connected with the Ministry of State Security.12 While the organisation of these groups is less well understood, the ministry appears more willing than PLA groups to use contractors to maintain plausible deniability and reduce the risk of attribution.

Analysts at several US cybersecurity companies have described the ministry groups’ tradecraft as significantly better than that displayed by the PLA.13 Hackers have made more use of encryption and gone after cloud providers and other IT services that would provide access to numerous targets. In April 2017, for example, security researchers at PwC UK and BAE Systems claimed that China-based hackers were targeting companies through their managed IT service providers.14 The Israeli cybersecurity company Intezer Labs concluded that Chinese hackers embedded malware in the popular file-cleaning program CCleaner.15 In June 2018, Symantec attributed attacks on satellite communications and telecommunication companies in the US and Southeast Asia to a China-based group.16

Outlook

Almost three years after the agreement, judgements on its effectiveness are much harsher. While a former intelligence official argued that US efforts did succeed in getting Beijing to acknowledge a difference between the cyber-enabled theft of IP and political–military espionage, other security researchers were more sceptical. As one put it, ‘Beijing never intended to stop commercial espionage. They just intended to stop getting caught.’ Another believed that Chinese policymakers decided to get credit for a decline in activity that was inevitable in the wake of the PLA reorganisation—a move that had been long in the works.

The Trump administration has pressed Beijing on cyber espionage, but as part of a much bigger push on trade policy and economic security. In November 2017, the Justice Department indicted three Chinese nationals employed by Chinese cybersecurity firm Boyusec, charging them with hacking into the computer systems of Moody’s Analytics, Siemens AG, and GPS developer Trimble Inc. ‘for the purpose of commercial advantage and private financial gain’.17 US Government officials reportedly asked for Chinese Government help in stopping Boyusec’s activities, but received no reply. Despite Recorded Future and FireEye claiming a connection between Boyusec and the Ministry of State Security, the indictment didn’t call out Chinese Government support for the hackers.18

The US Trade Representative’s March 2018 investigation of China’s policies and practices related to technology transfer and IP states that the US:

has been closely monitoring China’s cyber activities since this [the September 2015] consensus was reached, and the evidence indicates that cyber intrusions into US commercial networks in line with Chinese industrial policy goals continue. Beijing’s cyber espionage against US companies persists and continues to evolve.19

A draft trade framework allegedly provided by US negotiators to their Chinese counterparts, which circulated on Twitter and Weibo in May 2018, calls on Beijing to ‘immediately cease the targeting of American technology and intellectual property through cyber operations, economic espionage, counterfeiting, and piracy’.20

The current trade war with China has two sources: US concern about the bilateral trade deficit, and opposition to Beijing’s use of industrial policy and the theft of IP to compete in high-technology areas. While President Trump has been focused on the deficit, those within the administration pressuring Beijing on its mercantilism should push the cyber issue further up the bilateral agenda. A more direct policy would include a statement from a high-level US official, perhaps Secretary of State Michael Pompeo, that the hacking has resumed and that the US is prepared to use Executive Order 13694, ‘Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities’.21 Soon after, Washington would sanction individuals involved in the hacking as well as the firms that benefit from it.

Even if the White House were to follow such a policy line, it’s likely that Beijing will continue industrial cyber espionage. James Mulvenon argues that Chinese policymakers now believe that they’ve reached a new equilibrium with the US. Shifting industrial cyber espionage to the Ministry of State Security and deploying a higher level of tradecraft have created an equivalent of the hacking conducted by the US National Security Agency. If this is the case, it means that Beijing never truly accepted the distinction that Washington promoted between ‘good’ and ‘bad’ hacking, between cyber-enabled theft to support the competitiveness of Chinese industry and political–military espionage. Instead, Chinese policymakers saw the issue in terms of a high level of relatively ‘noisy’ activity (for which they were likely to be caught and called out). Bringing the hacking more in line with what it believes the National Security Agency conducts—a smaller number of hacks that nevertheless give the US large-scale access to Chinese assets—has, in Beijing’s view, resolved the issue. This isn’t the resolution the US hoped for when it first announced the September 2015 agreement, but it may be the one it has to live with now.

Australia

By Fergus Hanson and Tom Uren

The agreement

On 21 April 2017, following the groundbreaking Obama–Xi agreement in September 2015 and the G20’s acceptance of the norm against the ‘ICT-enabled theft of intellectual property’,22 Australia and China reached their own bilateral agreement. Buried somewhat within the joint statement that followed the inaugural Australia–China High-Level Security Dialogue was a paragraph on commercial cyber espionage:

Australia and China agreed not to conduct or support cyber-enabled theft of intellectual property, trade secrets or confidential business information with the intent of obtaining competitive advantage.23 

As with previous agreements, the statement made an implicit distinction between tolerable espionage for political–military reasons and unacceptable espionage for commercial gain.

Both countries also agreed to act in accordance with the reports of the UN Group of Governmental Experts, and to establish a mechanism to discuss cybersecurity and cybercrime issues with a view to preventing cyber incidents that could create problems between them. This was highlighted in Australia’s International Cyber Engagement Strategy, in which Australia’s dialogues with other states, including China, were characterised as ‘an opportunity to deepen understanding of responsible state behaviour in cyberspace and foster cooperation to deter and respond to malicious cyber activities’.24

In China, the agreement received very limited attention. Xinhua produced a translation of the joint statement, which was then reproduced by the People’s Daily and posted on the Ministry of Justice’s website.25

In Australia it received more attention, but the government wasn’t naive about the prospects for success. The Ambassador for Cyber Affairs, Tobias Feakin, was reported as saying ‘We do go into these things with our eyes wide open.’26

Pre-agreement commercial cyber espionage

Reliable public accounts of nation-state cyber espionage in Australia are hard to come by. Both government and industry have been reticent about openly attributing hacks and data breaches to particular nations. The Australian Government has also only recently begun to ramp up its efforts to deal with the challenge of cybersecurity. The 2009–10 annual report of the Australian Security Intelligence Organisation (ASIO) stated that ‘cyber espionage is an emerging issue’.27 Since that time, ASIO’s annual reports have consistently mentioned that cyber espionage affecting commercial interests and for commercial intelligence is occurring, although details of what’s been stolen and by whom are omitted.

The Australian Cyber Security Centre (ACSC) Threat reports, issued from 2015, have also consistently mentioned threats to commercial IP and to other sensitive information, such as negotiation strategies or business plans.28 But, again, the reports don’t provide enough detail to determine whether the espionage was Chinese or was conducted for commercial advantage.

While not publicly named, China is regarded as Australia’s primary cyber adversary, including in the area of IP theft. That it remains unnamed in public government statements is perhaps part of the explanation for why Australia’s policy response has so far been ineffective.

The miners

Australia is a large and significant exporter of iron ore, nickel, coal and other mineral resources to China. Iron ore is particularly significant in the trading relationship—China is the world’s largest importer and Australia the largest exporter, and in 2017 over 80% of Australian iron ore exports were to China.29

Although iron ore contracts are now based on monthly average prices, in the lead-up to 2010 iron ore prices were negotiated between buyers and sellers in fixed one-year contracts.30 Iron ore exports to China were large and growing rapidly, and the price negotiations had tremendous importance for the companies, economies and governments involved. Furthermore, a possible takeover bid for Rio Tinto from BHP led the state-owned Aluminium Corporation of China, Chinalco, to take an overnight 9% stake in Rio Tinto.

In this high-stakes environment, all three major iron ore miners in Australia were the victims of cyber espionage that was informally attributed to China.31 Given the large volume of iron ore trade, any information that could provide advantage in negotiations would be tremendously valuable. In 2012, MI5 Director-General Jonathan Evans revealed that an attack had cost a company—subsequently identified as Rio Tinto—an estimated £800 million (US$1.04 billion, A$1.43 billion, €891 million) in lost revenue, ‘not just through intellectual property loss but also from commercial disadvantage in contractual negotiations’.32

It also seems that a bribery case against a Rio Tinto executive, a Chinese-born Australian citizen, was used to enable further cyber espionage. It’s reported that the executive’s Rio Tinto credentials were used to download material from the company’s corporate network after they were arrested in China.33 If true, this sensational allegation directly links Chinese law enforcement actions to commercial espionage.

Since 2010, prices have been determined by market fluctuations, so the very strong incentive to gather information on annual price negotiations has diminished. However, the high priority that the Chinese Communist Party gives to the secure supply of raw materials means there’s still an ongoing interest in gathering commercial intelligence on Australian mining companies.

The Bureau of Meteorology

In 2015, the Australian Bureau of Meteorology was compromised and a foreign intelligence service—subsequently reported to be Chinese34—searched for and copied ‘an unknown quantity of documents from the Bureau’s network’.35 In this case it’s hard to definitively categorise the underlying motive. There doesn’t seem to be a direct motive to gather government or defence intelligence, but the bureau’s network could have been used as a launching point for further attacks into government networks. IP theft seems likely, as the bureau is a leading science-based services organisation in Australia, has strong international research partnerships and is involved in international research and development programs. Its compromise also provides the opportunity for widespread economic disruption, given that airlines, logistics organisations and industries such as agriculture rely on its services to operate. Its significant weather forecasting and supercomputer expertise would be valuable, too. But however valuable this IP might be, it’s hard to confirm that it was both stolen and used for commercial advantage.

Operation Cloud Hopper

In April 2017, BAE Systems and PwC UK released a report into what they called Operation Cloud Hopper,36 a systematic global espionage campaign that compromised managed IT service providers, which remotely manage customer IT and end-user systems and generally have direct and unfettered access to client networks. Compromising a managed service provider for espionage therefore gives the attacker considerable access to client networks and data.

This operation was attributed to a China-based group widely known as APT10 (also called Stone Panda). CERT Australia identified 144 partner companies that could have been affected.37 However, it isn’t publicly known which companies were affected or what was stolen.

Summary

Official statements from ASIO and the ACSC indicate that commercial espionage before 2017 was a large and growing concern, but several factors make it difficult to determine who was stealing data and why they were doing it.

First, both government and business remain reluctant to formally attribute attacks to states, both because of technical uncertainty (it takes time, skill and effort to develop high levels of confidence) and because of fears of damaging possibly important diplomatic, economic and intelligence relationships.

Second, Australia implemented a data breach notification law only in February 2018, and that law doesn’t apply to the theft of IP and commercial-in-confidence data. 

Finally, before the ACSC was formally assigned whole-of-economy responsibilities in July 2018, there was no cybersecurity centre of gravity that could determine whether formal attribution was desirable and necessary.

Post-agreement commercial cyber espionage

The Australian National University hack

In July 2018, it was reported that Chinese hackers had ‘successfully infiltrated the IT systems at the Australian National University’ (ANU)38 and that a remediation effort had been ongoing for several months. As with the Bureau of Meteorology, it’s hard to definitively determine what was stolen and for what purpose. The ANU conducts research with a wide range of defence, strategic and commercial applications.

Many ANU graduates go on to work in the Australian Government, and the ANU also hosts the National Security College, which conducts courses for defence and intelligence officials. Access to ANU IT systems could therefore be valuable for enabling follow-on espionage. Disentangling all the purposes that access could have served is impossible without a forensic accounting of what was stolen. In August, the university advised that ‘current advice is that no staff, student or research data has been taken’, although that assessment was questioned by the International Cyber Policy Centre.39

The only publicly known target of Chinese hacking—the ANU—isn’t directly a government or military espionage target, but it’s possible that the stolen data won’t be used for commercial gain (and that the hack therefore falls outside the scope of China’s agreement with Australia).

Outlook

Despite China’s commitments to Australia, and although public evidence of commercial cyber espionage is limited, Beijing doesn’t appear to have ceased commercial cyber espionage activities in Australia. However, assessing the scale of China’s ongoing commercial cyber espionage activity is difficult. The Australian Government has been reluctant to publicly name and shame adversary states engaging in cyber theft for commercial gain. China has also improved its tradecraft, making detection harder and perhaps leading to a mistaken perception that activity has become more focused. This professionalisation followed the exposure of the PLA’s previously sloppy tradecraft and probably the internal restructure (mentioned in the ‘United States’ section of this report) that shifted responsibility for commercial cyber espionage from the PLA to the Ministry of State Security. Australia also has less commercially attractive IP than countries such as the US and Germany, so fewer examples come to light.

Official statements from ASIO and the ACSC don’t reflect a significant decline in the threat of IP or commercial-in-confidence data theft. Public statements from government officials and the publicly known target—a university—don’t indicate a significant change in the nature of Chinese cyber espionage. While this review indicates how difficult it is to clearly identify cyber espionage for competitive advantage, China remains Australia’s primary cyber adversary and is making greater efforts to disguise and focus its commercial cyber espionage.

In a partial nod to keeping its agreements, China seems to be focusing on the theft of dual-use and national security-related data. For China, this seems to incorporate a fairly wide range of sectors (such as mining) that goes well beyond sectors such as defence. To begin the process of increasing pressure on China to adhere to its agreements, Australia should identify opportunities to formally name adversary states, including China, in public documents and statements. A good place to start is the annual ACSC Threat report. Australia should also consider partnering with states subjected to similar IP theft by China to build and sustain pressure on Beijing to adhere to its agreements. The G20 offers a multilateral venue for keeping up pressure, but other ad hoc opportunities should also be identified.

Germany

By Dr Samantha Hoffman

Consultation mechanism

No formal bilateral agreement on preventing commercial cyber espionage exists between Germany and China. However, a joint declaration from the June 2016 4th China–Germany Intergovernmental Consultations stated that the two governments would set up a ‘bilateral cyber security consultation mechanism’.40 Both sides also agreed that neither operates or knowingly supports ‘the infringement of intellectual property, trade or business secrets through the use of cyberspace in order to attain competitive advantage for their businesses or commercial sectors’.

The first cybersecurity consultation wasn’t held until 17 May 2018.41 Efforts to establish the consultation were delayed, in part because the two sides had different expectations regarding topics and participants. The delays also led to a public exchange between German Ambassador to China Michael Clauss and the Chinese Foreign Ministry. In a December 2017 interview with the Hong Kong-based South China Morning Post, Clauss was quoted as saying that he expected the Chinese Government to join Germany in setting up the agreed consultation mechanism. He also said, ‘Our repeated requests to have a meaningful dialogue on [virtual private networks] and cyber-related questions with the relevant Chinese authorities have regrettably not yet received a positive response.’ The comments prompted a reply from Chinese Foreign Ministry spokeswoman Hua Chunying, who claimed, ‘China has repeatedly invited a German delegation to China for consultation, but Germany has never responded on time … It’s unreasonable for Germany now to criticise Beijing for not being sincere.’

The eventual May 2018 consultation, which took place in Beijing, was co-chaired by Chinese Vice Minister of Public Security Shi Jun and German Parliamentary State Secretary at the Federal Ministry of the Interior Professor Dr Günter Krings. The German Government insisted that the Ministry of Public Security and a member of the Central Political and Legal Affairs Commission were also present.

Although the meeting was officially described as a success,42 no tangible progress was made on key issues during the consultation. The German Government insisted that discussion focus on commercial cyber espionage and issues such as data protection and virtual private networks. These were all topics that the Chinese Government preferred to avoid. The Chinese Government instead wanted to discuss cybercrime and cyber terrorism, but there are major differences in the way those concepts are defined. For example, Chinese officials have regularly pushed the German Government to deport political opponents in the Uygur community, which Berlin has continually refused to do because Beijing can provide no evidence to support its claims.

The cyber consultation was again discussed during the July 2018 5th China–Germany Intergovernmental Consultations in Berlin. A joint statement said that the consultation would continue as a key platform for discussing cyber issues, including cross-border data protection and IP and trade infringements.43

Dealing with commercial cyber espionage

The 2016 and 2017 editions of the German Federal Ministry of the Interior’s Annual report on the protection of the Constitution (published in July 2017 and July 2018, respectively) both specifically identified China, alongside Russia and Iran, as the primary countries responsible for espionage and cyberattacks against Germany.44 The reports said that ‘Chinese intelligence services focus on industry, research, technology and the armed forces (structure, armament and training of the Bundeswehr, modern weapons technology).’45 A separate July 2017 report by Bitkom, Germany’s digital industry association, found that commercial cyber espionage, which affects about 53% of German companies, costs German business €55 billion (US$64 billion, A$88 billion) annually.46

The number of known China-originated commercial cyber espionage attacks against German companies has dropped in the past two years, according to the head of the Federal Office for the Protection of the Constitution (BfV), the German domestic intelligence agency.47 Other German Government officials confirmed the appearance of a decrease, but added that they’re unsure whether a real decline has occurred. It’s equally likely that cyber espionage has become more sophisticated and better targeted, and has therefore gone undetected.

The decline in known cyber espionage incidents has also been linked to a sharp increase in Chinese foreign direct investment in German high-tech and advanced manufacturing industries in 2016. The BfV head, Hans-Georg Maassen, made a similar claim and linked the decline with an increase in the use of legal tools, such as corporate takeovers, for obtaining the same information. Maassen said ‘industrial espionage is no longer necessary if one can simply take advantage of liberal economic regulations to buy companies and then disembowel them or cannibalise them to gain access to their know-how.’48 The German Government took steps in July 2017 to address these concerns by amending the Foreign Trade and Payments Ordinance to tighten restrictions on non-EU foreign investment in Germany. The move was partly triggered by the €4.5 billion (US$5.3 billion, A$7.2 billion) takeover of German industrial robotics maker Kuka by Chinese appliance maker Midea.

The amendment identified several sectors that would be subject to higher scrutiny. They include companies operating critical infrastructure, IT and telecommunications, and certain cloud computing providers. Previously, non-EU companies weren’t obliged to inform the government of an acquisition (of 25% or more of voting rights) of a German company unless they were involved in the development and manufacturing of defence and encryption technology. The July 2017 amendment, however, expanded the notification requirement to include critical infrastructure and other security-related technology.49 The amendment refers to sectors identified in the 2013 Foreign Trade and Payments Ordinance section 55, which include energy, water, IT, financial services, insurance, transportation, food and health.50

The amendment also extended the period for the Ministry of Economic Affairs and Energy to conduct reviews. There are two foreign investment review categories: ‘cross-sectoral investment review’ and ‘sector-specific investment review’. Cross-sectoral reviews apply to the acquisition of any company where the investor is located outside the EU or the European Free Trade Association and plans to acquire ownership of 25% or more.51 Sector-specific reviews apply to the acquisition of a company that operates in sensitive security areas. In addition to military weapons and equipment, this includes ‘products with IT security features that are used for processing classified government information’.52

Similar rules apply for companies that operate high-grade remote sensing systems under the Act on Satellite Data Security.53 Previously, the ministry was required to conduct a cross-sectoral investment review within two months, but is now given four months.54 For sector-specific reviews, it was previously required to conduct a review within one month and is now given three months.55 The German Government has further identified a need to tighten controls on the loss of sensitive information in the area of cross-border data protection.

Outlook

Assessing the scale of Chinese commercial espionage activity is difficult, and very little information is made publicly available. The German Government remains sceptical about China’s commitment to cease the infringement of IP, trade or business secrets through the use of cyberspace. However, the government feels that some dialogue is better than no dialogue. It hopes to leave open the possibility of a more intensive dialogue in future. One German official said that the government is pushing for the Chinese side to ‘behave as [it would] wish to be treated’ in an increasingly interconnected world.



Defining offensive cyber capabilities

Introduction

States are developing and exercising offensive cyber capabilities. The United States, the United Kingdom and Australia have declared that they have used offensive cyber operations against Islamic State,1 and some smaller nations, such as the Netherlands, Denmark, Sweden and Greece, are relatively transparent about the fact that they have offensive cyber capabilities.2 North Korea, Russia and Iran have also launched destructive offensive cyber operations, some of which have caused widespread damage.3 The US intelligence community reported that as of late 2016 more than 30 states were developing offensive cyber capabilities.4

There is considerable concern about state-sponsored offensive cyber operations, which this paper defines as operations to manipulate, deny, disrupt, degrade, or destroy targeted computers, information systems or networks.

It is assumed that common definitions of offensive cyber capabilities and cyber weapons would be helpful in norm formation and discussions on responsible use.

This paper proposes a definition of offensive cyber operations that is grounded in research into published state doctrine, is compatible with definitions of non-kinetic dual-use weapons from various weapons conventions, and matches observed state behaviour.

In this paper, we clearly differentiate offensive cyber operations from cyber espionage. We address espionage only in so far as it relates to and illuminates offensive operations. Only offensive cyber operations below the threshold of armed attack are considered, as no cyber operation thus far has been classified as an armed attack, and it appears that states are deliberately operating below the threshold of armed conflict to gain advantage.5

This paper examines the usefulness of defining cyber weapons for discussions of responsible use of offensive cyber capabilities. Two potential definitions of cyber weapons are explored—one very narrow and one relatively broad—before we conclude that both definitions are problematic and that a focus on effects is more fruitful.

Finally, the paper proposes normative courses of action that will promote greater strategic stability and reduce the risk of offensive cyber operations causing extensive collateral damage.

Definitions of offensive cyber capabilities

This section examines definitions of offensive cyber capabilities and operations in published military doctrine and proposes a definition consistent with state practice and behaviour. We first define operations and capabilities to clarify the language used in this report.

What are capabilities? In the context of cyber operations, having a capability means possessing the resources, skills, knowledge, operational concepts and procedures to be able to have an effect in cyberspace. In general, capabilities are the building blocks that can be employed in operations to achieve some desired objective. Offensive cyber operations use offensive cyber capabilities to achieve objectives in or through cyberspace.

US military joint doctrine defines offensive cyber operations as ‘operations intended to project power by the application of force in and through cyberspace’. One category of offensive cyber operations that US doctrine defines is ‘cyberspace attack’—actions that manipulate, degrade, disrupt or destroy targets.6

UK military doctrine defines offensive cyber operations as ‘activities that project power to achieve military objectives in, or through, cyberspace. They can be used to inflict temporary or permanent effects, thus reducing an adversary’s confidence in networks or capabilities. Such action can support deterrence by communicating intent or threats.’7 UK doctrine further notes that ‘cyber effects will primarily be in the virtual or physical domain, although some may also be in the cognitive domain, as we seek to deny, disrupt, degrade or destroy.’

In both UK and US military doctrine, offensive operations are a distinct subset of cyberspace operations, which also include defensive actions; intelligence, surveillance and reconnaissance; and operational preparation of the environment (non-intelligence enabling activities conducted to plan and prepare for potential follow-on military operations).

This is consistent with the Australian definition, which is that offensive cyber operations ‘manipulate, deny, disrupt, degrade or destroy targeted computers, information systems or networks’.8

The Netherlands’ defence organisation sees offensive cyber capabilities as ‘digital resources whose purpose it is to influence or pre-empt the actions of an opponent by infiltrating computers, computer networks and weapons and sensor systems so as to influence information and systems’.9

Two common threads in state definitions are identified. Offensive cyber operations:

  • are intended to deny, disrupt, degrade, destroy or manipulate targets to achieve broader objectives (henceforth called denial and manipulation effects)
  • have a ‘direct real-world impact’.10

Another observation is that these definitions stress that ‘while cyber operations can produce stand-alone tactical, operational, and strategic effects and achieve objectives, they must be integrated’ into a military commander’s overall plan.6 This doctrine, however, originates from military establishments in a relatively narrow range of countries. In other states, offensive cyber operations may be less integrated into military planning and may instead be conducted to achieve the political and/or strategic goals of the state leadership.11

This paper therefore defines offensive cyber operations as operations that manipulate, deny, disrupt, degrade or destroy targeted computers, information systems or networks.


There are relatively few publicly available offensive cyber doctrine documents, but observed behaviour indicates that states such as Iran, North Korea and Russia are using operations that cause denial and manipulation effects to support broader strategic or military objectives.

By definition, offensive cyber operations are distinct from cyber-enabled espionage, in which the goal is to gather information without causing denial or manipulation effects. When information gathering is the primary objective, stealth is needed to avoid detection and to maintain the persistent access that allows longer term intelligence gathering.

This definition does classify relatively common events, such as ransomware attacks, website defacements and distributed denial of service (DDoS) attacks, as offensive cyber operations.

Although the ‘manipulate, deny, disrupt, degrade or destroy’ element of the definition lends itself to segmentation into different levels, further examination shows that segmentation based on the type of attack is not particularly useful. Information and communication technology (ICT) infrastructure is inherently interconnected, and even modest disruption can cause relatively drastic second-order effects. Modifying the state of a control system, for example, could lock a person’s garage or launch a nuclear missile.

Conversely, seriously destructive attacks, such as data wipers, can have damaging effects on different scales. Compare the damage caused when North Korea infiltrated the Sony Pictures Entertainment network12 with the damage caused during the Russian-launched NotPetya attack.13 At Sony Pictures, more than 4,000 computers were wiped and, although that cost US$35 million to investigate and repair, it did not significantly affect the broader Sony corporation14 and did not directly affect other entities. The NotPetya event also involved data destruction, but it was probably the most damaging cyberattack thus far: US$300 million in damages for FedEx; US$250–300 million for Danish shipper Maersk;15 more than US$310 million for American pharmaceutical giant Merck; US$387 million for French construction giant Saint-Gobain; and US$150 million for confectionery giant Mondelez International. Flow-on effects from the disruption to the logistics and pharmaceutical industries may also have affected the broader global economy.

Table 1 is a selected list of state activities that this paper defines as offensive cyber operations. Those operations are assessed for the scale, seriousness, duration and specificity of their effect.

Ultimately, the seriousness of a cyberattack rests on its effects or on the effects it enables. The scale and seriousness of incidents should therefore be assessed by measuring their ultimate consequences, including economic and flow-on effects.

Table 1: State offensive cyber operations

Operation | Seriousness | Scale | Duration | Specific
NotPetya | High—data destruction | Global. Affected organisations in Europe, US and Asia (Maersk, Merck, Rosneft, Beiersdorf, DHL and others) but also a concentration in Ukraine (banking, nuclear power plant, airports, metro services). | Short-term, with recovery over months to a year. | No
WannaCry | High—data destruction | Global, but primarily in Russia, Ukraine, India and Taiwan, affecting multinationals, critical infrastructure and government. | Short-term, with recovery over months to a year. | No
Sony Pictures Entertainment | High—data destruction | Focused on Sony Pictures Entertainment (<7,600 employees), a subsidiary of Sony Corporation (131,700 employees in 2015) (a) | Short-term, with recovery in months. | Yes
Stuxnet | High—destruction of centrifuges | Focused on Iran’s nuclear weapon development programme | <1 year | Yes
Various offensive cyber operations against ISIS by US, Australia, UK | Varied—some data destruction but also denial and manipulation effects | Focused on Islamic State | Unknown | Yes
Estonia 2007 | Medium—temporary denial of service | Principally Estonian electronic services, affecting many European telcos and US universities | 3 weeks | Yes

(a) Sony Corporation, US Securities and Exchange Commission Form 20-F, FY 2016 [online]

Cyber weapons and arms control

Cyber weapons are often conceived of as ‘powerful strategic capabilities with the potential to cause significant death and destruction’,16 and in an increasingly interconnected world it is easy to speculate about catastrophic effects. It is also difficult to categorically rule out even seemingly outlandish offensive cyber scenarios; for example, it seems unlikely that a fleet of self-driving cars could be hacked to cause mass destruction, but it is hard to say with certainty that it is impossible.17 Although the reality is that offensive cyber operations have never caused a confirmed death, this ‘uncertainty of effect’ is potentially destabilising, as states may develop responses based on practically impossible worst-case scenarios.

In a Global Commission on the Stability of Cyberspace issue brief, Morgus et al. look at countering the proliferation of offensive cyber capabilities and conclude that limiting the development of cyber weapons through traditional arms control or export control is unlikely to be effective.18 This paper agrees, and contends that arms or export control agreements have previously succeeded where the following three conditions are present:

  1. Capability development is limited to states, usually because weapons development is complex and highly industrialised.
  2. There is a common interest in limiting proliferation.
  3. Verification of compliance is possible.

Perhaps only one of these three conditions—a common interest in limiting proliferation—exists in the world of cyber weapons, although even this is not immediately self-evident.

In the context of international arms control, a limited number of capability developers usually means that only states (and ideally only a small number of states) have the ability to develop weapons of concern, that states have effective means to control proliferation, or both. In cyberspace, however, there are many non-state actors—in the cybersecurity industry and in the criminal underworld19—developing significant cyber capability. Additionally, the exchange of purely digital goods is relatively difficult for states to control compared to exchanges of physical goods. States do not have a monopoly on capability development and find it difficult to effectively control the spread of digital goods, and therefore cannot credibly limit broader capability development.

For chemical, biological and nuclear weapons, the human suffering caused by their use is generally abhorred and there is a very broad interest in restraining the use of those weapons. Offensive cyber operations, by contrast, could achieve military objectives without causing human suffering; for example, the warfighting capability of an adversary could be degraded by disrupting their logistics such that military objectives could be achieved without fighting. It has been suggested that states have a ‘duty to hack’ when the application of offensive cyber operations will result in less harm than all other applications of force,20 and the UK’s Minister of State for the Armed Forces, Nick Harvey, noted in 2012 that offensive cyber operations could be ‘quite a civilised option’ for that reason.21

Additionally, cyber weapons can be developed entirely in environments where visibility for verification is impossible, such as in air-gapped networks in nondescript office buildings. Unlike for weapons of mass destruction, there are no factories or supply chains that can be examined to determine whether capabilities exist and stockpiles are being generated.22

Unlike many military capabilities—say, nuclear-armed submarines or ballistic missiles—offensive cyber capabilities are unique in that once defenders have technical knowledge of the potential attack, effective countermeasures can be developed and deployed relatively easily.23

For this reason, states already have considerable interest in limiting the proliferation of offensive cyber capabilities—they want to keep those capabilities secret so they can exploit them. The US Vulnerabilities Equities Process (VEP) policy document24 states that when the US Government discovers vulnerabilities,25 most are disclosed, but some will be kept secret for law enforcement or national intelligence purposes where the risk of the vulnerability is judged to be outweighed by possible intelligence or other benefits. Undoubtedly, all states that engage in vulnerability discovery will have a common interest in keeping at least some secret so that they can be exploited for national security purposes.

Defining cyber weapons

Despite scepticism about the effectiveness of traditional arms control, this paper develops both a narrow and a broad definition of cyber weapons to test whether those definitions could be useful in arms control discussions. The definitions have been developed by examining selected international weapons conventions and previously published definitions.

One problem with defining cyber weapons is that cyber technologies are primarily dual-use: they can be used for both attack and defence, for peaceful and aggressive purposes, for legal and illegal activities. Software can also be quite modular, such that many cybersecurity or administrative tools can be brought together to form malware.

Weapons in the physical domain have been categorised into three groups: small arms and light weapons; conventional arms; and weapons of mass destruction (WMD).26 Given that cyber weapons are often conceived of as potentially causing mass destruction, and because WMDs are subject to the most rigorous international counter-proliferation regimes, this paper examines definitions through the lens of the two dual-use WMD counter-proliferation treaties: the Chemical Weapons Convention and the Biological Weapons Convention.27

Biological weapons, a class of WMD, are described as (our emphasis):28

  1. microbial or other biological agents, or toxins whatever their origin or method of production, of types and in quantities that have no justification for prophylactic, protective or other peaceful purposes;
  2. weapons, equipment or means of delivery designed to use such agents or toxins for hostile purposes or in armed conflict.

The Chemical Weapons Convention defines chemical weapons as (our emphasis):29

  • toxic chemicals and their precursors, except where intended for purposes not prohibited under the Convention and as long as the types and quantities are consistent with such purposes; and
  • munitions and devices, specifically designed to cause death or other harm through the toxic properties of those chemicals …

These conventions, both of which deal with dual-use goods, define by exclusion: only substances that do not or cannot have peaceful purposes are defined as weapons. The material of concern is not inherently a problem—it is how it is used.

In the context of armed conflict, the Tallinn Manual characterises cyber weapons by the effects they have, not by how they are constructed or their means of operation:

cyber weapons are cyber means of warfare that are used, designed, or intended to be used to cause injury to, or death of, persons or damage to, or destruction of, objects, that is, that result in the consequences required for qualification of a cyber operation as an attack.30

Herr and Rosenzweig define cyber weapons as malware that has a destructive digital or physical effect, and exclude malware used for espionage.31 Herr also considers that malware is modular and consists of a propagation element that the malware uses to move from origin to target; an exploit that will allow the malware to execute arbitrary commands on the target system; and a payload that will execute some malicious instructions.

Rid and McBurney define cyberweapons as ‘computer code that is used, or designed to be used, with the aim of threatening or causing physical, functional, or mental harm to structures, systems, or living beings’.32

A narrow definition

Following the logic of dual-use weapons conventions, a narrow definition of cyber weapons is software and information technology (IT) systems that, through ICT networks, cause destructive effects and have no other possible uses. The IT system aspect of this definition requires some level of integration and automation in a weapon: code that wipes a computer hard disk is not a weapon by itself—by itself it cannot achieve destructive effects through cyberspace—but could form part of a weapon that wipes hard drives across an entire organisation.

Based on this narrow definition, Table 2 shows our assessment of whether reported malware examples would be defined as cyber weapons.

Table 2: Cyber weapon assessment

  • Distributed denial of service (DDoS) systems. Description: Aggregation of components, including bots and control software, such that they have no other purpose than to disrupt internet services. Weapon? Yes, although this is arguable because effects tend to be temporary (disruptive, not destructive), and each individual component is likely to have non-destructive uses.
  • Dragonfly a.k.a. Energetic Bear campaign (a). Description: Espionage campaign against energy critical infrastructure operators that developed industrial control system sabotage capabilities. Weapon? No. This was both manual and for espionage only; it never disrupted critical operations. However, the intent demonstrated is to develop capabilities to disrupt critical infrastructure.
  • BlackEnergy 2015 Ukrainian energy grid attack (b). Description: Access to a Ukrainian energy company was used to disrupt electricity supply. Weapon? No. BlackEnergy malware was very modular and this attack was quite manual, although the malware does contain destructive capability.
  • Industroyer a.k.a. Crashoverride malware (c). Description: Malware in a Ukrainian energy supply company was used to disrupt electricity supply. Weapon? Yes. Integrated malware disrupted electricity supply automatically.
  • TRISIS malware (d). Description: Malware intended to sabotage a Saudi Arabian petrochemical plant. Weapon? Yes. Malware with no espionage capability was specifically designed to destroy a petrochemical plant.
  • WannaCry. Description: Self-propagating ransomware whose encryption was, in practice, irreversible. Weapon? Yes. Malware with no espionage capability was designed to irreversibly encrypt computer hard drives.
  • Metasploit. Description: An integrated collection of hacking tools that can be used for defence, for espionage, or for destruction and manipulation. Weapon? No. Metasploit has many non-destructive uses and is not integrated into a system that causes destruction.
  • NotPetya. Description: A self-propagating data wiper. Weapon? Yes. Automatically destroyed data.
  • Flame, Snake, Regin. Description: Very advanced modular malware. Weapon? No. These could cause denial and manipulation effects and could be automated, but they have other uses and seem to be designed primarily for espionage.
  • Stuxnet. Description: Self-propagating malware that subverted industrial control systems to destroy Iranian nuclear fuel enrichment centrifuges. Weapon? Yes. Highly tailored to automatically destroy targeted centrifuges.
  • Large-scale man-in-the-middle attack system (e.g. mass compromise of routers) (e). Description: Compromise of many mid-points could enable large-scale access that could be used to enable intelligence, destruction or manipulation, or even to patch systems. Weapon? No. Intent is everything here.
  • PowerShell. Description: A powerful scripting and computer administration language installed by default with the Windows operating system. Weapon? No. Many non-destructive uses.
  • A PowerShell script designed to automatically move through a network and wipe computers. Description: Destructive intent is codified within the script commands. Weapon? Yes.
  • a) Symantec, Dragonfly: Western energy companies under sabotage threat, 2014, online.
  • b) Kim Zetter, ‘Inside the cunning, unprecedented hack of Ukraine’s power grid’, Wired, 3 March 2016, online.
  • c) Andy Greenberg, ‘“Crash override”: the malware that took down a power grid’, Wired, 12 June 2017, online; Robert M Lee, ‘Crashoverride’, Dragos, 12 June 2017, online; Anton Cherepanov, Robert Lipovsky, ‘Industroyer: biggest threat to industrial control systems since Stuxnet’, welivesecurity, 12 June 2017, online.
  • d) Nicole Perlroth, Clifford Krauss, ‘A cyberattack in Saudi Arabia had a deadly goal: experts fear another try’, New York Times, 15 March 2018, online; TRISIS malware: analysis of safety system targeted malware, Dragos, online.
  • e) US CERT, Russian state-sponsored cyber actors targeting network infrastructure devices, Alert TA18-106A, 16 April 2018, online.

This narrow definition is consistent with the narrowness of definitions from both the Biological Weapons Convention and the Chemical Weapons Convention, both of which deal with dual-use goods.

The definition captures intent by excluding all tools for which intent is ambiguous; only tools whose sole possible use is destruction are included.

This narrow definition is problematic for at least three reasons.

First, it does not map directly onto state definitions of offensive cyber activities: actions that manipulate, disrupt, deny and degrade would likely not be captured, so much offensive cyber activity will not involve cyber weapons. For example, the offensive cyber operation that US Cyber Command conducted against Islamic State’s propaganda operations did not require cyber weapons. Cyber Command obtained Islamic State administrator passwords, deleted content and changed passwords to lock out the original owners.33 This operation could have been conducted entirely using standard computer administration tools. No malware, no exploit, no software vulnerability and certainly no cyber weapon was needed.

Second, even the most destructive offensive cyber operations could be executed without ever using a cyber weapon. For example, a cyber operation that triggered the launch of conventional or nuclear weapons would not require a cyber weapon.

Third, this definition could easily be gamed by adding non-destructive functionality to otherwise malicious code.

A broader definition

A broader definition of cyber weapons could be software and IT systems that, through ICT networks, manipulate, deny, disrupt, degrade or destroy targeted information systems or networks.

This definition has the advantage that it would capture the entirety of tools that could be used for offensive cyber operations.

Many cyber operations techniques, however, take advantage of computer administration tools, and the difference between espionage and offensive action is essentially a difference in intent: for example, the difference between issuing a command to copy files and issuing one to delete them. Indeed, it is possible to conduct cyber operations—both intelligence and offensive operations—using only legitimate tools such as the scripting language Windows PowerShell.34 Yet it makes no sense to define everything that could be used for destructive effects as a cyber weapon; labelling PowerShell a cyber weapon would be nonsensical.
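
The point about intent can be made in any language (the document’s example is PowerShell); the illustrative sketch below uses Python’s standard library and a throwaway temporary file to show that the collection step and the destructive step differ by a single call.

```python
# Purely illustrative: with standard-library file operations, 'collection' and
# 'destruction' differ only in which call the operator chooses to make. The file
# here is a throwaway created for the example, not a real target.
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "quarterly_results.txt")   # stand-in for a sensitive file
with open(target, "w") as f:
    f.write("example data")

shutil.copy(target, os.path.join(workdir, "exfil_copy.txt"))  # 'espionage': copy the data elsewhere
# os.remove(target)                                           # 'offensive action': delete it instead
```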

This definition would also include perfectly legitimate tools that state authorities and the cybersecurity community use for law enforcement, cyber defence, or both.

These two definitions highlight the dilemma involved in defining cyber weapons. A narrow definition can perhaps be more readily agreed to by states, but excludes so much potential offensive cyber activity that efforts to limit cyber weapons based on that definition seem pointless. The broader definition would capture tools used for so many legitimate purposes that agreement on their status as weapons is unlikely, and limitations could well harm network defenders more than attackers.

Options for control

This paper therefore agrees with Morgus et al.35 that limiting the development of cyber weapons by controlling the development of defined classes of weapons is unlikely to be effective. There are, however, options for more effective responses that focus on affecting the economics of offensive cyber operations and the norms surrounding their application.

Affecting the markets involved in offensive cyber capability development would raise the cost of capability development and encourage states to conduct operations sparingly.

One market associated with cyber capabilities is that for software vulnerabilities and their associated exploits (code that takes advantage of a vulnerability). Software vulnerabilities are often exploited by malware to gain unauthorised access to computer systems and are often—although not always—required for offensive cyber capabilities. Ablon and Bogart have found that the market price for software exploits is sensitive to supply and that prices can rise dramatically for in-demand, low-supply products.36 A multifaceted approach to restricting supply could raise the cost of acquiring exploits and therefore the cost of building offensive cyber capabilities.

Shifting the balance of vulnerability discovery towards patching (rather than exploitation for malicious purposes) would raise the value of all vulnerabilities. One possibility, raised by Morgus et al. and suggested by Dan Geer in his 2014 Black Hat conference keynote, is that software vulnerabilities be bought for the express purpose of developing fixes and patches.37

A secondary response would be to enable more effective repair of vulnerabilities, closing the loopholes that enable computer exploitation. NotPetya, assessed by the US Government to be the most destructive cyberattack thus far,38 used publicly known vulnerabilities for which patches had been available for months. Effective cyber hygiene would have prevented much of the damage that NotPetya caused.

From a policy point of view, this could be attacked at several levels: encouraging research into vulnerability mitigation and more effective patching processes; educating decision-makers to prioritise and resource vulnerability discovery and patching; developing government policy to encourage more effective patching regimes; and promoting VEP policies in other states (discussed below).

Whenever a vulnerability is exploited for any purpose—including cyber espionage, offensive operations and cybercrime—there is a risk of discovery, which could ultimately result in patching and loss of the ability to exploit the vulnerability. Raising the value of all vulnerabilities will encourage states to use offensive cyber capabilities sparingly to avoid discovery and hence loss of capability via patching.

A complementary approach would be to change incentives within software development to encourage secure application development. Again, this could be approached at many levels: altering computer science curriculums; promulgating secure coding standards;39 and altering the balance of liability in commercial code, for example.

Reducing the supply of exploits and raising their cost encourages states to conduct cyber operations in a way that avoids attracting attention to mitigate the risk of discovery and loss of capability. This effort to operate quietly would vastly reduce the risk of inadvertent large-scale damaging events.40

Recommendation: Encourage the establishment of national vulnerabilities equities processes

There is a common interest among all states that are conducting cyber operations—defensive or offensive—in actively assessing the risk and benefits of keeping vulnerabilities secret for exploitation. The US VEP document states that in ‘the vast majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest’. Assuming this is true, the presence of VEP policies in many states would tend to result in more responsible disclosure and patching and therefore result in a reduced supply of vulnerabilities and exploits.

This reduced supply of vulnerabilities would raise the cost of offensive capability development and therefore restrict proliferation and reduce the use of offensive operations.

Recommendation: Promote focused operations

Unlike a kinetic weapon, for which direct consequences such as blast radius may be well understood, offensive cyber operations can easily have unintended consequences. Since states are conducting offensive cyber operations below the threshold of armed conflict, another option to limit the harm from offensive operations is to promote operations that are tightly focused, so that they do not affect innocent bystanders.

We have assessed that both the Sony Pictures and Stuxnet attacks were specific, as both affected specific targets and did not cause direct effects elsewhere (Table 1). The NotPetya and WannaCry incidents were not specific: they affected many organisations worldwide.

It is possible, therefore, to conduct focused offensive cyber operations that are specific and limit collateral damage; it is not an inherent fact of cyberspace that operations cannot be targeted and specific. To reduce the risks of collateral damage, there would be merit in promoting a norm of ‘due diligence’ for offensive cyber operations, requiring that states invest in rigorous testing to ensure that effects are contained before engaging in offensive cyber operations.

Recommendation: Measure damage for more effective responses

In addition to altering the computer vulnerability lifecycle, governments should also respond directly to cyber operations. Effective responses should be both directed against perpetrators and proportionate. Currently, both the identification of perpetrators (attribution) and the assessment of damage (to determine a proportionate response) are suboptimal. Much has been said about attribution, and this paper will not cover it further.

When state-sponsored operations such as NotPetya and WannaCry occur, there is no independent assessment of damage. An accurate accounting of harm could be used to justify an appropriately proportionate response.

NotPetya has been called ‘the most destructive and costly cyber-attack in history’.41 It seems that total cost estimates of over US$1 billion are based on collating the financial reports of public companies such as Merck,42 Maersk,43 Mondelez International44 and FedEx,45 and then adding a ‘fudge factor’ to account for all other affected entities. Publicly listed companies have formal reporting obligations, but the vast majority of entities affected by NotPetya do not, and it seems likely that the cost of NotPetya has been significantly understated.

An independent body that identifies common standards, rules and procedures for assessing the cost of cyberattacks could enable a more accurate measure of damage. The International Civil Aviation Organization’s system for air crash investigations may provide a framework.46 It assigns a role for various stakeholders, including the airline, the manufacturer, the registrar and so on. The investigation is assigned to an autonomous safety board with the task of assessing what happened, not who was at fault.47 For a cyber incident, an investigation board could include a national cybersecurity centre, the affected entity, the manufacturer of the affected IT system, relevant software developers and other stakeholders.
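
As a purely hypothetical illustration of what a common assessment record might contain (the field names below are our own assumptions, not an existing standard), the idea could be sketched as follows.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch only: standard fields an independent investigation body
# might agree for assessing the cost of a cyber incident, loosely mirroring the
# air-crash-investigation roles described above. Not an existing standard.

@dataclass
class DamageAssessment:
    incident_name: str
    affected_entity: str
    investigation_board: List[str]        # e.g. national cybersecurity centre, IT vendor, software developers
    direct_remediation_cost: float        # rebuild and recovery costs, in a common currency
    business_interruption_cost: float     # lost revenue while systems were down
    third_party_impact_cost: float        # harm to customers, suppliers and bystanders

    def total_cost(self) -> float:
        return (self.direct_remediation_cost
                + self.business_interruption_cost
                + self.third_party_impact_cost)
```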

Using assessments of scope and seriousness to develop proportionate responses would encourage attackers to construct focused and proportionate offensive cyber operations.

Recommendation: Invest in transparency and confidence building

We have noted above that uncertainty about the effects caused by offensive cyber operations has the potential to be destabilising. State transparency in the use of offensive cyber operations could address this concern and help promote norms of responsible state behaviour.

Figure 1 shows the lifecycle of an offensive cyber capability, starting at the point that a state forms an intent to develop capability. Resources are committed; intelligence is gathered to support capability development; capability is developed; the environment is prepared (by deploying malware, for example); and finally the operation is launched and effects are observed. Crucially, there are distinct elements during this lifecycle that require operation on the public internet and are therefore potentially observable: intelligence gathering, operational preparation of the environment, and offensive cyber effects (in orange).48

Figure 1: Offensive cyber capability lifecycle
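
A minimal sketch of the lifecycle as described above, flagging the stages the text identifies as potentially observable on the public internet (the data structure is ours, not part of the original figure):

```python
# Illustrative restatement of the lifecycle in Figure 1 as ordered stages.
# Stage names follow the prose above; the True/False observability flags mark
# the stages described as potentially visible on the public internet.
LIFECYCLE = [
    ("form intent to develop capability",               False),
    ("commit resources",                                False),
    ("gather intelligence to support development",      True),
    ("develop capability",                              False),
    ("prepare the environment (e.g. deploy malware)",   True),
    ("launch operation and observe effects",            True),
]

observable_stages = [stage for stage, visible in LIFECYCLE if visible]
print(observable_stages)
```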

Although it is not possible to see or measure cyber weapons, to quantify them or inspect ‘cyber weapon factories’, a level of confidence-building transparency can still be achieved. Public doctrine that defines a nation’s strategic intent and its assessment of acceptable and responsible uses of offensive cyber operations would be extremely helpful.

This visibility may be sufficient to enhance confidence building, because it increases predictability. Many responsible states will be reluctant to deviate from public statements about offensive cyber capability development, because effects may become visible at a later stage, prompting incident response, forensic analysis and possibly political attribution and embarrassment.

There is already some public documentation of offensive cyber capabilities. There are unclassified doctrines, official statements and unofficial reporting on the states that have—or are developing—offensive capability. There are also voluntary national reports in the context of the UNGGE. Additionally, open-source publications such as the SIPRI Yearbook, the IISS Military Balance and reports similar to the Small Arms Survey are authoritative and credible sources that inform policy actions by states. Finally, independent analysis and reporting from cybersecurity companies such as Symantec, CrowdStrike, BAE Systems and FireEye provides invaluable technical information. These firms also play a key role in early detection and response.

Summary and conclusion

Offensive cyber capabilities are defined as the capabilities used to conduct operations in cyberspace that manipulate, deny, disrupt, degrade or destroy targeted computers, information systems or networks.

This paper has examined narrow and broad definitions of cyber weapons and found them problematic for use in control discussions.

However, a range of other measures would help limit the use of offensive cyber capabilities and reduce the risk of collateral damage when they are used:

  • Markets for the vulnerabilities that are used to create offensive cyber capabilities can be affected to make capability development more expensive. VEP processes would form one element of a broader effort to patch vulnerabilities and restrict supply.
  • Promoting the principle that offensive cyber operations should be focused and taking active steps to limit unintended consequences could limit the effects of operations on innocent bystanders, including through the promotion of the concept of ‘due diligence’.
  • Responses to cyber incidents could also be improved by better accounting of the damage incurred. A robust assessment of damage using agreed standards would enable a more directly proportionate response and would help reinforce the expectation of specific and proportionate offensive cyber operations.

Finally, increased state transparency would promote acceptable norms of behaviour. Although monitoring and verification are difficult, this paper presents an offensive cyber operation lifecycle that indicates that various stages provide some visibility, which could build confidence.


Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.

© The Australian Strategic Policy Institute Limited 2018

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers.

  1. Michael S Rogers, Commander US Cyber Command, statement to the Senate Committee on Armed Services, 27 February 2018, online; Prime Minister Malcolm Turnbull, ‘Offensive cyber capability to fight cyber criminals’, media release, 30 June 2017, online; Director GCHQ, speech at CyberUK18, 12 April 2018. ↩︎
  2. Council on Foreign Relations, Europe is developing offensive cyber capabilities: the United States should pay attention, 26 April 2017, online. ↩︎
  3. Council on Foreign Relations Cyber Operations Tracker, online. ↩︎
  4. James Clapper, Marcel Lettre, Michael S Rogers, Foreign cyber threats to the United States, joint statement for the record to the Senate Armed Services Committee, 5 January 2017. ↩︎
  5. Although offensive cyber operations have been used by combatants in the context of armed conflicts. ↩︎

Deterrence in cyberspace

Spare the costs, spoil the bad state actor: Deterrence in cyberspace requires consequences

Foreword

In the past three years, barely a week has gone by without a report of a critical cyberattack on a business or government institution. We are constantly bombarded by revelations of new ransomware strains, new botnets executing denial of service attacks, and the rapidly expanding use of social media as a disinformation and propaganda platform.

Perhaps most alarmingly, a great many of these attacks have their origin in the governments of nation states.

In the past decade we have moved well beyond business as usual signals intelligence operations. Some of the largest malware outbreaks in recent years, such as NotPetya and WannaCry, had their origins in state-run skunkworks.

Cyberattacks initiated by nation states have become the new normal, and countries including Australia have struggled with the challenge of how to respond to them. Far too often they’re considered a low priority and met with a shrug of the shoulders and a “What can you do?”

In this paper, Chris Painter offers us a way forward. Chris presents a reasonable framework for deterrence, a way that we as a nation can help limit the deployment of cyberwarfare tools.

His recommendations are designed to properly punish bad actors in a way that discourages future bad behaviour. They’re modelled on actions that have worked in the past and serve, if not as a final solution, at least as a starting point for scaling back the increasing number of state-sponsored cyberattacks.

Most importantly, these actions aren’t just to the benefit of the state—they will allow us to better protect private citizens and companies that all too often get caught in the cyberwarfare crossfire. To put it simply, if we can ensure there are costs and consequences for those who wrongly use these tools to wreak damage, bad actors might start thinking twice before engaging in this destructive behaviour.

Yohan Ramasundara
President, Australian Computer Society

What’s the problem?

Over the past few years, there’s been a substantial increase in state attacks on, and intrusions into, critical information systems around the globe—some causing widespread financial and other damage.1 They have included:

  • attacks by North Korea on Sony Pictures in 2014
  • widespread Chinese theft of trade secrets and intellectual property
  • Russian state-sponsored interference in the US elections
  • North Korea’s sponsorship of the WannaCry ransomware worm that caused, among other things, a meltdown of the UK’s National Health Service
  • the Russian-sponsored NotPetya worm that caused tens of millions of dollars of damage and disruption around the world.

The pace and severity of these attacks show no sign of declining. Indeed, because there have usually been few or no consequences or costs imposed on the states that have taken these actions, they and others have little reason not to engage in such acts in the future.

The US, Australia and many other countries have spent years advancing a framework for global stability in cyberspace. This framework comprises:

  • the application of international law to cyberspace
  • acceptance of certain voluntary norms of state behaviour in cyberspace (essentially, voluntary rules of the road)
  • the adoption of confidence and transparency building measures.

Although much progress has been achieved in advancing this framework, the tenets of international law and norms of state behaviour mean little if there are no consequences for those states that violate them. This is as true in the cyber world as in the physical one. Inaction creates its own norm, or at least an expectation on the part of bad state actors that their activity is acceptable because there are no costs for their actions and no likely costs for future bad acts.

Individually as countries and as a global community, we haven’t done a very effective job of punishing and thereby deterring bad state actors in cyberspace. Part of an effective deterrence strategy is a timely and a credible response that has the effect of changing the behaviour of an adversary who commits unacceptable actions.

Although there are some recent signs of change, in the vast majority of cases the response to malicious state actions has been neither timely nor particularly effective. This serves only to embolden bad actors, not deter them. We must do better if we’re to achieve a more stable and safe cyber environment.

What’s the solution?

It is a well-worn and almost axiomatic expression that deterrence is hard in cyberspace. Some even assert that deterrence in this realm is impossible.

Although I don’t agree with that fatalistic outlook, it’s true that deterrence in cyberspace is a complex issue. Among other things, an effective deterrence framework involves strengthening defences (deterrence by denial); building and expanding the consensus for expectations of appropriate state behaviour in cyberspace (norms and the application of international law); crafting and communicating—to potential adversaries, like-minded partners and the public—a strong declaratory policy; timely consequences, or the credible threat thereof, for transgressors; and building partnerships to enable flexible collective action against those transgressors.

Although I’ll touch on a couple of those issues, I’ll focus here on imposing timely and credible consequences.

The challenge of attribution

One of the most widely cited reasons for the lack of action is the actual and perceived difficulty in attributing malicious cyber activity.

Unlike in the physical world, there are no launch plumes to give warning or reveal the origin of a cyberattack, and sophisticated nation-states are adept at hiding their digital trail by using proxies and routing their attacks through often innocent third parties. But, as recent events illustrate, attribution, though a challenge, is not impossible. Moreover, attribution involves more than following the digital footprints; other forms of intelligence, motive and other factors all contribute to attribution. And, ultimately, attribution of state conduct is a political decision. There’s no accepted standard for when a state may attribute a cyberattack, although, as a practical, political and prudential matter, states are unlikely to do so unless they have a relatively high degree of confidence. Importantly, this is also true of physical world attacks. Certainly, a state doesn’t require 100% certainty before attribution can be made or action taken (as some states have suggested). Whether in the physical or the cyber world, such a standard would practically result in attribution never being made and response actions never being taken.

Although attribution is often achievable, even if difficult, it still seems to take far too long—at least for public announcements of state attribution. Announcing blame, even if coupled with some responsive actions, six months to a year after the event isn’t particularly timely. Often by that point the impact of the original event has faded from public consciousness and so, too, has the will to impose consequences.

Part of this delay is likely to be due to technical difficulties in gathering and assembling the requisite evidence and the natural desire to be on solid ground; part is likely to be due to balancing public attribution against the possible compromise of sources and methods used to observe or detect future malicious activity; but part of it’s probably due to the need to summon the political will to announce blame and take action—particularly when more than one country is joining in the attribution. All of these cycles need to be shortened.

Naming and shaming

Public attribution of state conduct is one tool of deterrence and also helps legitimise concurrent or later responses.

The US, the UK, Australia and other countries came together recently to attribute the damaging NotPetya worm to Russia and, a few months ago, publicly attributed the WannaCry ransomware to North Korea. This recent trend to attribute unacceptable state conduct is a welcome development and should be applauded.2 It helps cut through the myth that attribution is impossible and that bad state actors can hide behind the internet’s seeming anonymity.

However, public attribution has its limits. Naming and shaming has little effect on states that don’t care if they’re publicly outed and has the opposite effect if the actor thinks their power is enhanced by having actions attributed to them. In the above two cases, it’s doubtful that naming and shaming alone will change either North Korea’s or Russia’s conduct. Public attribution in these cases, however, still serves as a valuable first step to taking further action. Indeed, in both cases, further actions were promised when public attribution was made.

That raises a couple of issues. First, those actions need to happen and they need to be effective. President Obama stated after the public attribution to North Korea in relation to the Sony Pictures attack that some of the response actions ‘would be seen and others unseen’. A fair point, but at least some need to be seen to reinforce a deterrent message with the adversary, other potential adversaries and the public at large.

The other issue is timing. The public attribution of both WannaCry and NotPetya came six months after the respective attacks. That delay may well have been necessary either for technical reasons or because of the work required to build a coalition of countries to announce the same conclusion, but attribution that long after the cyber event should be coupled with declared consequences—not just the promise that they’re to come. Some action did in fact come in the NotPetya case about a month after public attribution, when the US sanctioned several Russian actors for election interference, NotPetya and other matters. That was a very good start but would be even more effective in the future if done when the public attribution occurs.

Action speaks louder than attribution alone, and the two must be closely coupled to be effective.

General considerations

A few general considerations apply to any contemplated response action to a cyber event.

First, when measures are taken against bad actors, they can’t just be symbolic but must have the potential to change that actor’s behaviour. That means that one size does not fit all. Different regimes hold different things dear and will respond only if something they prioritise or care about is affected. Tailored deterrence strategies are therefore required for different states.3

For example, many have opined that Russia is more likely to respond if sanctions are targeted at Putin’s financial infrastructure and that of his close elites than if simply levied in a more general way.

Second, the best response to a cyberattack is seldom a cyber response. Developing cybertools and having those tools as one arrow in the quiver is important, but other responses will often be more effective.

Third, the response to a cyber event shouldn’t be approached in a cyber silo but take into account and leverage the overall relationship with the country involved. The agreement that the US reached with China that neither should use cyber means to steal the trade secrets and intellectual property of the other to benefit its commercial sectors wouldn’t have come about if widespread cyber-enabled intellectual property theft was seen only as a cyber issue. Only when this problem was seen as a core national and economic security issue, and only when President Obama said that the US was willing to bear friction in the overall US–China relationship, was progress really possible.

Fourth, a responsive action and accompanying messaging needs to be appropriately sustained and not a one-off that can be easily ignored. Fifth, potential escalation needs to be considered. This is a particularly difficult issue when escalation paths aren’t well defined for an event that originates in cyberspace, whether the response is a cyber or a physical one, and the chance of misperception is high. And finally, any response should comport with international law.

Collective action

Collective action against a bad actor is almost always more effective than a response by just one state and garners more legitimacy on the world stage.

Of course, if the ‘fiery ball of cyber death’ is hurtling towards you, every country has the right to act to defend itself, but, if possible, acting together, with each country leveraging its capabilities as appropriate, is better. Collective action doesn’t require any particular organised group or even the same countries acting together in each instance.

Flexibility is the key here and will lead to swifter results. The recent attribution of NotPetya by a number of countries is a good example of collective action to a point. It will be interesting to see, following the US sanctioning of Russia, whether other states join in imposing collective consequences.

One challenge for both collective attribution and collective action is information sharing. Naturally, every state will want to satisfy itself before taking the political step of public attribution, and that’s even more the case if it’s taking further action against another transgressing state. Sharing sensitive attribution information among states with different levels of capability and ability to protect that information is a tough issue even in the best of times. But, if collective action is to happen, and happen on anything approaching a quick timeline, enhancing and even rethinking information sharing among partner countries is foundational.

Using and expanding the tools in the toolkit

The current tools that can be used in any instance to impose consequences are diplomatic, economic (including sanctions), law enforcement, cyber responses and kinetic responses.

Some of them have been used in the past to varying degrees and with varying levels of effectiveness but not in a consistent and strategic way. Some, like kinetic responses, are highly unlikely to be used unless a cyber event causes death and physical injury similarly to a physical attack. Others admittedly take a while to develop and deploy, but we have to have the political willingness to use them decisively in the appropriate circumstances and in a timely manner. For example, the US has had a cyber-specific sanctions order available since April 2015 and, before its recent use against Russian actors in March, it had only been used once, in December 2016, against Russian actors for election interference. For the threat of sanctions to be taken seriously, they must be used in a more regular and timely manner, and their targets should be chosen to have a real effect on the violating state’s decision-making.

Our standard tools are somewhat limited, so we must also work to creatively expand the tool set so that we can better affect the unique interests of each adversarial state actor (identified in a tailored deterrence strategy), so that they’ll change course or think twice before committing additional malicious acts in the future. That is likely to need collaboration not just within governments but between them and the private sector, academia, civil society and other stakeholders in order to identify and develop new tools.

Recommendations

Of course, foundational work on the application of international law and norms of voluntary state behaviour should continue. That work helps set the expectation of what conduct is permissible. In addition, states should articulate and communicate strong declaratory policies. Declaratory statements put potential adversaries on notice about what’s unacceptable4 and can contain some detail about potential responses. In addition, a number of other things can aid in creating an environment where the threat of consequences is credible:

1. Shorten the attribution cycle.

Making progress on speeding technical attribution will take time, but delays caused by equity reviews, inter-agency coordination, political willingness, and securing agreement among several countries to share in making attribution are all areas that can be streamlined. Often the best way to streamline these kinds of processes is to simply exercise them by doing more public attribution while building a stronger political commitment to call bad actors out. The WannaCry and NotPetya public attributions are a great foundation for exercising the process, identifying impediments and speeding the process in the future. Even when attribution is done privately, practice can help shorten inter-agency delays and equity reviews.

2. If attribution can’t be made or announced in a fairly brief period, couple any later public attribution with at least one visible responsive action.

Attribution six months or a year after the fact with the vague promise of future consequences will often ring hollow, particularly given the poor track record of imposing consequences in the past. When attribution can be made quickly, the promise of a future response is understandable, but delaying the announcement until it can be married with a response may be more effective.

3. Mainstream and treat cybersecurity as a core national and economic security concern and not a boutique technical issue.

If cyberattacks really pose a significant threat, governments need to start thinking of them like they think of other incidents in the physical world. It is telling that Prime Minister Theresa May made public attribution of the Salisbury poisonings in a matter of days and followed up with consequences shortly thereafter. Her decisive action also helped galvanise an international coalition in a very short time frame. Obviously that was a serious matter that required a speedy response, but the speed was also possible because government leaders are more used to dealing with physical world incidents. Many still don’t understand the impact or importance of cyber events, or have established processes to deal with them. Mainstreaming also expands existing response options and makes them more effective. As noted above, a prime reason for the US–China accord on intellectual property theft was the fact that it was considered a core economic and national security issue that was worth creating friction in the overall US–China relationship.

4. Build flexible alliances of like-minded countries to impose costs on bad actors.

A foundational element of this is improving information sharing, both in speed and substance, to enable better collective attribution and action. Given classification and trust issues, improving tactical information sharing is a difficult issue in any domain. However, a first step is to discuss with partners what information is required well in advance of any particular incident and to create the right channels to quickly share that information when needed. It may also require a re-evaluation of what information must absolutely be classified and restricted and what can be shared through appropriately sensitive channels. If there’s greater joint attribution and action, this practice will presumably also help build mechanisms to share information and build trust and confidence in the future with a greater number of partners.

5. Improve diplomatic messaging to both partners and adversaries.

Improved messaging allows for better coordinated action and serves to link consequences to the actions to which they’re meant to respond. Messaging and communication with the bad actor while consequences are being imposed can also help with escalation control. Of course, effective messaging must be high-level, sustained and consistent if the bad actor is to take it seriously. Sending mixed messages only serves to undercut any responsive actions that are taken.

6. Collaborate to expand the toolkit.

Work with like-minded states and other stakeholders to expand the toolkit of potential consequences that states can use, or threaten to use, to change and deter bad state actors.

7. Work out potential adversary-specific deterrence strategies.

Actual or threatened responsive actions are effective only if the target of those actions is something that matters to the state in question, and that target will differ according to the particular state involved. Of course, potential responses should be in accord with international law.

8. Most importantly, use the tools we already have to respond to serious malicious cyber activity by states in a timely manner.

Imposing consequences for bad action not only addresses whatever the current bad actions may be but creates a credible threat that those consequences (or others) will be imposed in the future.

None of this is easy or will be accomplished overnight, and there are certainly complexities in escalation, proportionality and other difficult issues, but a lot comes down to a willingness to act—and the current situation isn’t sustainable. The recent US imposition of sanctions is a step in the right direction, but imposing tailored costs when appropriate needs to be part of a practice, not an aberration, and it must be accompanied by high-level messaging that supports rather than undercuts its use.

The 2017 US National Security Strategy promises ‘swift and costly consequences’ for those who target the US with cyberattacks. Australia’s International Cyber Engagement Strategy states that ‘[h]aving established a firm foundation of international law and norms, the international community must now ensure there are effective consequences for those who act contrary to this consensus.’ On the other hand, Admiral Rogers, the head of US Cyber Command and the National Security Agency, recently told US lawmakers that President Putin has clearly come to the conclusion that there’s ‘little price to pay here’ for Russia’s hacking provocations, and Putin has therefore concluded that he ‘can continue this activity’.

We must change the calculus of those who believe this is a costless enterprise. Imposing effective and timely consequences for state-sponsored cyberattacks is a key part of that change.

  1. Of course, there are an ever-increasing number of attacks and intrusions by criminals, including transnational criminal groups, as well. Deterring this activity is a little more straightforward—the consequences for criminals are prosecution and punishment and, in particular, a heightened expectation that they’ll be caught and brought to justice. I don’t address deterring criminal actors in this paper, although there have been advances in ensuring that countries have the laws and capacity to tackle these crimes and there have been a number of high-profile prosecutions, including transnational cases. Much more needs to be done to deter these actors, however, as many cybercriminals still view the possibility that they’ll be caught and punished as minimal. ↩︎
  2. One downside of a practice of publicly attributing state conduct is that it creates an expectation that victim states will do this in every case and leads to the perception that when they don’t it means they don’t know who is responsible—even if they do. For that reason, states, including the US, have often said in the past that they’ll make public attribution when it serves their deterrent or other interests. There are also cases in which a state or states may want to privately challenge a transgressor state to change its behaviour or in which calling out bad conduct publicly risks sources and methods that may have a greater value in thwarting future malicious conduct. Nevertheless, the seeming trend to more cases of public attribution is a good one, and these concerns and expectations can be mitigated in a state’s public messaging or by delaying public attribution when necessary. ↩︎
  3. Defense Science Board, Task Force on Cyber Deterrence, February 2017. ↩︎
  4. Such statements should be relatively specific but need not be over-precise about exact ‘red lines’, which might encourage an adversary to act just below that red line to escape a response. ↩︎

ASPI International Cyber Policy Centre

The ASPI International Cyber Policy Centre’s mission is to shape debate, policy and understanding on cyber issues, informed by original research and close consultation with government, business and civil society.

It seeks to improve debate, policy and understanding on cyber issues by:

  1. conducting applied, original empirical research
  2. linking government, business and civil society
  3. leading debates and influencing policy in Australia and the Asia–Pacific.

We thank all of those who contribute to the ICPC with their time, intellect and passion for the subject matter. The work of the ICPC would be impossible without the financial support of our various sponsors but special mention in this case should go to the Australian Computer Society (ACS), which has supported this research.

Chris Painter’s distinguished visiting fellowship at ASPI’s International Cyber Policy Centre was made possible through the generous support of DFAT through its Special Visits Program. All views expressed in this policy brief are the authors’.

Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.

© The Australian Strategic Policy Institute Limited 2018

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers.

Australia’s Offensive Cyber Capability

FOREWORD

The reality of the world we live in today is one in which cyber operations are now the norm. Battlefields no longer exist solely as physical theatres of operation, but now also as virtual ones. Soldiers today can be armed not just with weapons, but also with keyboards. Because we have woven digital technology so intricately into our businesses, our infrastructure and our lives, it is possible for a nation-state to launch a cyberattack against another and cause immense damage — without ever firing a shot.

ACS’s aim in participating in this policy brief is to improve clarity of communication in this area. For Australia, both defensive and offensive cyber capabilities are now an essential component of our nation’s military arsenal, and a necessary step to ensure that we keep up with global players. The cyber arms race moves fast, so continued investment in cyber capability is pivotal to keep ahead of and defend against the latest threats, while being able to deploy our own capabilities when and where we choose.

So, too, is ensuring that we have the skills and the talent to drive cyber capabilities in Australia. This means attracting and keeping the brightest young minds, the sharpest skilled local talent and the most experienced technology veterans to drive and grow a pipeline of cyber specialists, and in turn help protect and serve Australia’s military and economic interests.

Yohan Ramasundara
President, Australian Computer Society

What’s the problem?

In April 2016, Prime Minister Turnbull confirmed that Australia has an offensive cyber capability. A series of official disclosures have provided further detail, including that Australia will use this capability against offshore cybercriminals.

This was the first time any state had announced such a policy.

However, this commendably transparent approach to telegraphing our capability and intentions hasn’t been without challenges. In some cases, these communications have created confusion and misperceptions. There’s a disconnect between popular perceptions, typified by phrases like ‘cyber Pearl Harbor’, and the reality of offensive cyber operations, and reporting has at times misrepresented how these tools will be used. Public disclosures and the release of the report of the Independent Intelligence Review have also raised questions about how Australia will build and maintain this capability.

What’s the solution?

To reduce the risk of misunderstanding and misperception and to ensure a more informed debate, this policy brief seeks to further clarify the nature of Australia’s offensive cyber capability. It recommends improving communications, using innovative staff recruitment and retention options, deepening industry engagement and reviewing classification levels in some areas. Looking forward, the government could consider increasing its investment in our offensive capability to create an asymmetric capability; that is, a capability that won’t easily be countered by many militaries in our region.

Introduction

Governments routinely engage in a wide spectrum of cyber operations, and researchers have identified more than 100 states with military and intelligence cyber units.1

The cyber units range considerably in both their capability and their compliance with international law. Leaks have highlighted the US unit’s advanced capability, and public documents reveal its size. US Cyber Command’s action arm, the Cyber Mission Force, is building to 6,200 military and civilian personnel, or about 10% of the ADF, and for the 2018 financial year requested a US$647 million budget allocation.2 China has been widely accused of stealing enormous quantities of intellectual property. North Korea has used cyber tools to steal money, including in a US$81 million heist from the Bangladesh central bank. Russia is accused of using a range of online methods to influence the 2016 US presidential election and has engaged in a wide spectrum of actions against its neighbours, such as turning off power stations in Ukraine and bringing down government websites in Georgia and Estonia. Israel is suspected of having used a cyber operation in conjunction with its bombing raid on a Syrian nuclear reactor in 2007, temporarily ‘tricking’ part of Syria’s air defence system to allow its fighter jets to enter Syria undetected.3

In Australia, the government has been remarkably transparent in declaring the existence of its offensive cyber capability and its applications: to respond to serious cyberattacks, to support military operations, and to counter offshore cybercriminals. It has also established robust structures to ensure its compliance with international law. Three additional disclosures about Australia’s offensive cyber capability have followed the Prime Minister’s initial April 2016 announcement. In November 2016, he announced that the capability was being used to target Islamic State,4 and on 30 June 2017 Australia became the first country to openly admit that its cyber offensive capabilities would be directed at ‘organised offshore cyber criminals’.5 The same day, the then Minister Assisting the Prime Minister for Cyber Security, Dan Tehan, announced the formation of an Information Warfare Division within the ADF.

While these disclosures have raised awareness of Australia’s offensive cyber capability, the limited accompanying detail has meant that the ensuing public debate has often been inaccurate or misleading. One major news site, for example, led a report with the title ‘Australia launches new military information unit to target criminal hackers’.6 Using the ADF to target criminals would have been a radical departure from established protocols.

This policy brief seeks to clarify some of the misunderstandings arising from sensationalist reporting.

The report has the following parts:
1. What’s an offensive cyber operation?
2. Organisation, command and approvals
3. Operations against declared targets
4. Risks
5. Checks, balances and compliance with international law
6. Strengths and weaknesses
7. Future challenges and recommendations.


1. What’s an offensive cyber operation?

For the purposes of this policy brief, we use a draft definition that’s being developed as part of the Department of the Prime Minister and Cabinet’s Cyber Lexicon project. It defines offensive cyber operations as ‘activities in cyberspace that manipulate, deny, disrupt, degrade or destroy targeted computers, information systems, or networks’.7 Given the range of countries with varying capabilities and using examples from open sources, offensive cyber operations could range from the subtle to the destructive: removing computer accounts or changing passwords; altering databases either subtly or destructively; defacing web pages; encrypting or deleting data; or even attacks that affect critical infrastructure, such as electricity networks.

Even though it may use the same tools and techniques, cyber espionage, by contrast, is explicitly designed to gather intelligence without having an effect—ideally without detection. The Global Commission on the Stability of Cyberspace has commissioned ASPI’s International Cyber Policy Centre to do further work on defining offensive cyber capabilities.

2. Organisation, command and approvals

Australia’s offensive cyber capability resides within the Australian Signals Directorate (ASD).8 It can be employed directly in military operations, in support of Australian law enforcement activities, or to deter and respond to serious cyber incidents against Australian networks. While physically housed within ASD, the military and law enforcement applications have different chains of command and approvals processes.

Military

The Information Warfare Division within the Department of Defence was formed in July 2017 and is headed by the Deputy Chief Information Warfare, Major General Marcus Thompson.

Major General Thompson has presented the ADF approach to cyber capabilities as two distinct functions: cybersecurity (consisting of self-defence and passive defence9), and cyber operations (consisting of active defence and offence10).

Figure 1

The Australian Government’s offensive cyber capability sits within ASD and works closely with each of the three services, which embed staff assigned to ASD from the ADF’s Joint Cyber Unit. Offensive cyber in support of military operations is a civil–military partnership. The workforce to conduct offensive cyber operations resides within ASD and is largely civilian. Advice from Defence is that the laws of armed conflict are considered during the development and execution of operations, and that ASD personnel will act in accordance with legally approved instructions. There’s no reason to doubt that, and the Inspector-General of Intelligence and Security has noted in the context of cyber operations in support of the ADF operations in Iraq and Syria that ‘guidance in place at the time was appropriate and followed by staff, and no issues of legality or propriety were noted’.

The ability to conduct an operational planning process that takes into account the desired outcome, situational awareness and the possible range of effects is a military discipline that resides in the ADF. This arrangement is expected to continue under proposals from the 2017 Intelligence Review to make ASD a statutory authority within the Defence portfolio.

As clarified in Australia’s International Cyber Engagement Strategy, ‘Offensive cyber operations in support of [ADF] operations are planned and executed by ASD and Joint Operations Command under direction of the Chief of Joint Operations.’11 Targeting for offensive cyber operations occurs in the same manner as for kinetic ADF operations. Any offensive cyber operation in support of the ADF is planned and executed under the direction of the Chief of Joint Operations and, as with any other military capability, is governed by ADF rules of engagement.

ADF soldier undergoing cyber training. © Commonwealth of Australia, Department of Defence.

Law enforcement

The announcement that Australia would be using its offensive cyber capability against offshore cybercriminals created considerable confusion. Public messaging was one contributing factor: the announcement about the ADF’s Information Warfare Division bled into the same-day announcement that the government would also be using its offensive cyber capability to deter offshore cybercriminals, making them appear one and the same thing.14

While some media outlets characterised the announcement as Australia potentially attacking the whole suite of ‘organised offshore criminals’, the announcement focused only on offshore actors who commit cybercrimes affecting Australia.

Decisions on which cybercriminal networks to target follow a similar process to those for military operations, including that particularly sensitive operations could require additional approvals, although the exact processes haven’t been disclosed. Again, these operations would have to comply with domestic law and be consistent with Australia’s obligations under international law.

3. Operations against declared targets

Australia has declared that it will use its offensive cyber capabilities to deter and respond to serious cyber incidents against Australian networks; to support military operations, including coalition operations against Daesh in Iraq and Syria; and to counter offshore cybercriminals. Given ASD’s role in intelligence gathering, offensive cyber operations can be closely integrated with intelligence collection—a mission-critical element.


4. Risks

Offensive cyber operations carry several risks that need to be carefully considered. For cyber operations in support of the ADF, as with conventional capabilities, the commander must weigh up the potential for achieving operational goals against the risk of collateral effects and damage.

When an offensive cyber capability is used, there’s a high chance that its future effectiveness will be compromised. Unlike defences against kinetic weapons, an information system can be protected from cyberattack through relatively simple measures, such as upgrades, patches or configuration changes.

Another risk is that, despite extensive efforts to disguise the origin of the attack, the Australian Government could lose plausible deniability or be identified (including contextually) as the source and face embarrassment or retaliation.

5. Checks, balances and compliance with international law

When the first public disclosure of Australia’s offensive cyber capability was made, the Prime Minister emphasised Australia’s compliance with international law: ‘The use of such a capability is subject to stringent legal oversight and is consistent with our support for the international rules-based order and our obligations under international law.’15

Interviews for this policy brief suggest that the users of the capability take compliance with domestic and international law extremely seriously. The core principles are as follows:

  1. Necessity: ensuring the operation is necessary to accomplish a legitimate military or law enforcement purpose.
  2. Specificity: ensuring the operation is not indiscriminate in who and what it targets.
  3. Proportionality: ensuring the operation is proportionate to the advantage gained.
  4. Harm: considering whether an act causes greater harm than is required to achieve the legitimate military objective.

These capabilities are subject to ASD’s existing legislative and oversight framework, including independent oversight by the Inspector-General of Intelligence and Security. However, there seems to be room for updating these provisions to account for technological developments. Section 7(e) of the Intelligence Services Act 2001, for example, authorises ASD ‘to provide assistance to Commonwealth and State authorities in relation to … (ii) other specialised technologies’—a foundation that could be strengthened for 21st-century technological applications.

When seeking approval for operations from the Minister for Defence, ASD seeks legal, foreign policy and national security advice from sources external to Defence. Every offensive cyber operation is planned and conducted in accordance with domestic law and is consistent with Australia’s obligations under international law.

6. Strengths and weaknesses

Offensive cyber capabilities have both strengths and weaknesses.

STRENGTHS

  • For military tasks, they can be integrated with ADF operations, adding a new capability and creating a force multiplier.
  • They can engage targets that can’t be reached with conventional capabilities without causing unacceptable collateral damage or overt acknowledgement.
  • They provide global reach.
  • They provide an asymmetric advantage against an adversary for a relatively modest cost.
  • They can be overt or clandestine, depending on the intended effect.

WEAKNESSES

  • Capabilities need to be highly tailored to be effective (such as the Stuxnet worm that targeted Iran’s nuclear centrifuges), meaning that they can be expensive to develop and lack flexibility.
  • When used in isolation, they are unlikely to be decisive.
  • Major, blunt attacks (such as WannaCry or NotPetya) are relatively cheap and easy, but are unusable by responsible state actors such as Australia. Achieving the appropriate specificity and proportionality requires investment of time and effort.
  • The capability requires constant, costly investment as cybersecurity evolves.
  • Government must compete for top-tier talent with private industry.
  • For operations short of ‘cyber attacks’,16 the effects can be relatively short-lived and limited.
  • Capability can’t be showcased as a deterrent in the same way that conventional capability can, because revealing specific capability renders it redundant as defences are repaired.
  • Target development can require intensive intelligence support and can take a very long time.

7. Future challenges and recommendations

Offensive cyber operations are relatively new and developing in a fast-moving environment. Below are issues and recommendations stemming from research for this report.

RECOMMENDATION 1: CAREFULLY STRUCTURE COMMUNICATIONS TO REASSURE NATION-STATES AND ENFORCE NORMS

As Australia’s offensive cyber capability has only recently been publicly acknowledged and is subject to sensationalist reporting, careful communication is required. When he first acknowledged the capability, the Prime Minister said doing so ‘adds to our credibility as we promote norms of good behaviour on the international stage’.17 Poor communications, however, can have the opposite effect. The limited detail and mixed reporting of the announcement that Australia would use offensive cyber capability against offshore cybercriminals inadvertently sent the message that it was acceptable for states to launch cyberattacks against people overseas whom they considered to be criminals. This might encourage some states to use crime as a pretext to launch cyber operations against individuals in Australia.

To address this, the Australian Government should be careful when publicly discussing the offensive capability, particularly to distinguish the military and law enforcement roles. One option to do this would be to have the Attorney-General, the Minister for Justice or the new Home Affairs Minister discuss operations related to law enforcement aspects of the capability and to have the Minister for Defence discuss those related to military capabilities.

RECOMMENDATION 2: USE INNOVATIVE STAFF RECRUITMENT AND RETENTION OPTIONS

Recruiting and retaining Australia’s top technical talent is a major hurdle. In the medium term, ASD will have to continue to invest heavily in training, raise salaries (ASD becoming a statutory authority will help it address this) and develop an alumni network and culture that allow former staff to return in new roles after a stint in private industry. A pool of alumni working as cleared reservists could also be used as an additional workforce without the significant investment required in conducting entirely new clearances.

RECOMMENDATION 3: DEEPEN INDUSTRY ENGAGEMENT

The deployment of ASD’s capability against cybercriminals is likely to generate increased interest from corporate Australia. There’s a policy question about whether Australia’s offensive cyber capability should be used in support of Australian corporate interests. Given the finite resources and the tricky situations that could arise, the government should consider useful ways for industry to engage, clarify the limits of industry engagement and assess how to handle industry requests to use the offensive cyber capability against actors targeting their operations.

RECOMMENDATION 4: CLASSIFY INFORMATION AT LOWER LEVELS

It has long been argued that over-classification of material, such as threat intelligence, by governments prevents easy information exchange with the outside world, including key partners such as industry. The government has recognised this and is positioning ‘Australian Cyber Security Centre (ACSC) 2.0’ to facilitate a more cooperative and informed relationship with the private sector. Similarly, the government should continue to scope the potential benefits from lowering the classification of information associated with offensive cyber operations. In particular, there are benefits in operating at the SECRET level for workforce generation and training, and providing a ‘halfway house’ to usefully employ incoming staff as they wait during vetting procedures. More broadly, excessive classification slows potentially valuable two-way information exchange with the information security community.

RECOMMENDATION 5: INVEST TO CREATE AN ASYMMETRIC CAPABILITY

The 2016 Defence White Paper noted that ‘enhancements in intelligence, space and cyber security will require around 900 ADF positions’.18 Those positions were part of the $400 million19 in spending announced in the White Paper and will be spread across the ADF. While this is significant, and given the limits of what can be achieved with current spending on conventional kit, the Australian Government should consider conducting a cost–benefit analysis on the relative value of substantial further spending on cyber to provide it with an asymmetric capability against future adversaries. This would need to include a considerable investment in training.

RECOMMENDATION 6: CONSIDER UPDATING THE POLICY AND LEGISLATIVE FRAMEWORK

There appears to be sufficient legislation, policy and oversight to ensure that ASD and the ADF work together in a lawful, collaborative and cooperative manner to support military operations. The 2017 Independent Intelligence Review noted that ASD’s support to military operations is indispensable, and will remain so.

While those oversight arrangements may be sufficient for now, the ADF will inevitably need to incorporate offensive cyber on the battlefield as a way to create local effects, including force protection measures, and to deliver effects currently generated by electronic warfare (such as jamming communications technology). It should not always be necessary to reach back to national authorities for clear-cut and time-critical battlefield decisions. There appears to be scope to update the existing policy and legislative framework that governs the employment of offensive cyber in deployed operations to support those kinds of activities.


Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.

© The Australian Strategic Policy Institute Limited 2018

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers.

  1. Noah Shachtman, Peter W Singer, The wrong war: the insistence on applying Cold War metaphors to cybersecurity is misplaced and counterproductive, Brookings Institution, Washington DC, 15 August 2011, online. ↩︎
  2. Michael S Rogers, Statement of Admiral Michael S Rogers, Commander, United States Cyber Command, before the House Committee on Armed Services Subcommittee on Emerging Threats and Capabilities, 23 May 2017, p. 1, online; Laura Criste, ‘Where’s the cyber money for fiscal 2018?’, Bloomberg Government, 19 July 2017, online. ↩︎
  3. Thomas Rid, Cyber war will not take place, Oxford University Press, 2013, p. 42. ↩︎
  4. Malcolm Turnbull, ‘Address to parliament: national security update on counter terrorism’, 23 November 2016, transcript, online. ↩︎
  5. Malcolm Turnbull, ‘Offensive cyber capability to fight cyber criminals’, media release, 30 June 2017, online. ↩︎
  6. ‘Cyber warfare: Australia launches new military information unit to target criminal hackers’, The Australian, 30 June 2017, online. ↩︎

The Internet of Insecure Things

Introduction

The Internet of Things (IoT) is the term used to describe the growing number of devices being connected to the internet. Some of the more common IoT devices include home appliances such as Google Home, wearable devices, security cameras and smart meters. It’s been predicted that the number of connected devices was close to 8.4 billion in 2017 and that there will be over 20 billion devices connected by 2020.1 Even though the IoT has been developing since the rise of the internet in the early 1990s, there’s no universally accepted definition. Kevin Ashton, who coined the phrase in 1999, says the IoT is much more than just connected appliances and describes it as a ‘ubiquitous sensor network’ in which automation leads to innovation.2 While there are some justifiable cybersecurity concerns about the IoT, there are also many notable advantages to living in a connected world. The IoT is saving lives through advanced healthcare technology, manufacturers are saving time and money through automation and tracking, and a plethora of home devices are adding value to people’s lives by providing a range of different services.

There are many different ways to categorise IoT devices, which makes safeguarding the technology challenging. The IoT can be dissected by industry, such as healthcare, transport, manufacturing and consumer electronics. One major subcategory of the IoT has earned its own acronym: the Industrial Internet of Things (IIoT), to which control systems belong. Another way of categorising devices is by looking at their individual capabilities. Devices that can take action pose a different threat from devices that simply collect data to report back to the user.

The IoT offers benefits to all industries, but the connectivity of these once isolated things also introduces new vulnerabilities that can affect our homes and industries. As well as promising convenience and efficiency, the IoT creates a problem: a vast number of internet-connected devices with poor default security presents a large attack surface that bad actors can exploit for malicious ends. A variety of international organisations and government groups are working on issues pertaining to the IoT, but at present there’s no coordinated vision to implement standards for the IoT on a global scale. Similarly, in Australia, a host of different cyber agencies and industrial groups are working to overcome some of the cybersecurity issues that the IoT presents, but a coordinated strategy detailing how government and industry can collaborate on the IoT is needed.

This issues paper aims to give a broad overview of IoT issues to increase awareness and public discussion on the IoT.

In December 2017, ASPI’s International Cyber Policy Centre produced a discussion draft asking stakeholders key questions about IoT regulation, governance, market incentives and security standards to help inform this issues paper. We received responses from government, industry representatives, technical experts and academics. While those stakeholders were consulted in the research phase of this paper, the views here are those of the authors.

THREAT TO CRITICAL INFRASTRUCTURE

In 2016, a severe storm disrupted crucial services in South Australia, resulting in a loss of power for 850,000 customers.3 Trains and trams stopped working, as did many traffic lights, creating gridlock on flooded roads. The storm, together with the failure of backup processes, resulted in the death of a number of embryos at a fertility clinic in Flinders Hospital.4 The total cost for South Australian businesses as a result of the blackout was estimated to be $367 million.5

Some have noted that, due to the interconnectedness of infrastructure, this event mirrored the potential effects of a large-scale cyberattack.6

Disrupting the utilities that power an entire city could cause more damage than traditional terror tactics, and can be done remotely and with greater anonymity. Severe storms also demonstrate that a loss of power can cause more deaths than the physical destruction of infrastructure: when Hurricane Irma caused the air conditioning at a Florida nursing home to fail, 12 residents died of suspected heat-related causes.7

Digital weapons are being used intentionally by nation-states to inflict physical destruction or compromise essential services. The now infamous attack on Iran’s nuclear program, known as Stuxnet, used infected USB drives to contaminate computer systems with malware,8 which caused physical damage to a number of uranium centrifuges.9 In 2015, hackers used stolen user credentials to attack a Ukrainian power grid, which resulted in loss of power for more than 230,000 people.10 In 2016, the attackers used malware specifically designed to attack Ukraine’s power grid to disrupt the power supply to Kiev. This indicates that malicious actors have both the resources and the intent to develop cyberattack capabilities targeted at essential services.11

The IoT overlaps with critical infrastructure because many control systems are also now connected to the internet. Kaspersky researchers found more than 3,000 industrial control systems in Australia by using Shodan and Censys IoT search engines.12 Studies have also revealed vulnerabilities in control systems made by major vendors, such as Schneider Electric and Siemens.13
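
To make that exposure concrete, the sketch below shows how such a census could be approximated with the official Shodan Python library. It is a minimal illustration under stated assumptions only: the API key is a placeholder, and the port-based queries are simplistic stand-ins for the more refined fingerprints researchers actually use (Censys is omitted).

```python
# Minimal sketch: counting internet-exposed industrial control systems in
# Australia with the official 'shodan' library (pip install shodan).
# The API key and query strings are illustrative placeholders.
import shodan

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
api = shodan.Shodan(API_KEY)

# Common ICS protocol ports: Modbus (502), Siemens S7 (102), DNP3 (20000).
ICS_QUERIES = {
    "Modbus": "port:502 country:AU",
    "Siemens S7": "port:102 country:AU",
    "DNP3": "port:20000 country:AU",
}

for name, query in ICS_QUERIES.items():
    try:
        result = api.count(query)  # count() avoids pulling full result pages
        print(f"{name}: {result['total']} exposed hosts in Australia")
    except shodan.APIError as err:
        print(f"Query failed for {name}: {err}")
```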

In the discussion version of this paper, several respondents expressed the view that a separate cyber organisation focusing specifically on the security of critical assets and services would be unhelpful. However, many acknowledged a need for greater collaboration between those responsible for protecting these assets to help mitigate IoT-related threats.

The Australian Cyber Security Centre (ACSC) could seek to increase coordination between owners and operators of critical assets, helping with the technical aspects of adopting voluntary industry standards for the IoT. The ACSC has the technical expertise to participate in the formation of international standards and could work with policy experts in the Department of Home Affairs to encourage national adoption.

THE CYBER LANDSCAPE IN AUSTRALIA

The cyber landscape in Australia is complex. Government cybersecurity responsibilities have recently been reorganised through the establishment of the Department of Home Affairs and structural changes to the Australian Signals Directorate and ACSC. Getting a clear picture of roles and responsibilities was difficult, and it would be beneficial to identify any gaps in roles and responsibilities after these recent organisational changes have been properly implemented. Industry roles could be identified in an IoT road map that helps industry and government bodies work together to more effectively mitigate IoT threats. Consumers should be educated on cybersecurity and responsible ownership of IoT devices, including patching and updating, building on initiatives such as Stay Safe Online.

The IoT has exacerbated an already confronting problem: the lack of skilled cybersecurity professionals both nationally and globally.

The Australian Cyber Security Growth Network estimates that a further 11,000 skilled experts will be needed in the next decade.14 In January 2018, the network announced that cybersecurity qualifications will be offered at TAFE institutions around Australia, which is a significant step forward.15 However, cybersecurity is a broad domain that requires not only workers with technical skills but also experts in risk management and policymaking, among other areas. Advances in automation and data analytics could also help to address the skills shortage: as those technologies take over routine technical tasks in other areas, more technically skilled workers may become available to move into cybersecurity.

We need to think about IoT security as a holistic system that combines practical skills-based training with industry best practice. The under-representation of women in cybersecurity has been widely noted, and overcoming it was listed as a priority in Australia’s Cyber Security Strategy.16 The government has conducted research to better understand the issue and is running workshops to help increase participation.17

SECURITY RATINGS AND CERTIFICATIONS

A number of countries, including Australia, are considering the value of security ratings for IoT devices. In October 2017, Dan Tehan, the then Minister Assisting the Prime Minister on Cybersecurity, suggested in a media interview that such ratings should be created by the private sector, not by the Australian Government.18 The UK Government is also exploring ‘how to encourage the market by providing security ratings for new products’, as outlined in its National Cyber Security Strategy.19 Introducing a product security rating for consumer electronics has the potential to improve awareness of cybersecurity issues and to encourage industry to adhere to minimum security standards. But whether the ratings should be initiated by government or industry is only the beginning of the issue, as there are several problems with cybersecurity ratings that need to be addressed.

First, the vulnerability of an IoT device could potentially vary over its lifetime as weaknesses are discovered and then patched. The energy efficiency of a refrigerator or washing machine, by contrast, is relatively fixed, and so energy-efficiency ratings can be trusted over the device’s lifetime. With IoT devices, new vulnerabilities are constantly being exposed. At best, a security rating would reflect the security of a device based on the information available at the time of the security assessment. It would need to be adapted as security standards evolve and new vulnerabilities are discovered.
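
As a toy illustration of why a static label is a poor fit, the sketch below recomputes a hypothetical device rating as vulnerabilities accumulate and vendor support lapses. The fields, weights and thresholds are invented for illustration and are not a proposed scheme.

```python
# Toy illustration (not a proposed rating scheme) of how a device's security
# rating could decay between assessments as new vulnerabilities are disclosed.
from dataclasses import dataclass
from datetime import date

@dataclass
class DeviceAssessment:
    baseline_stars: int           # rating awarded at assessment time (1-5)
    assessed_on: date
    unpatched_critical_cves: int  # known critical CVEs without a vendor patch
    vendor_support_ends: date     # end of the firmware update commitment

def current_rating(a: DeviceAssessment, today: date) -> int:
    stars = a.baseline_stars
    stars -= a.unpatched_critical_cves   # each unpatched critical CVE costs a star
    if today > a.vendor_support_ends:
        stars -= 2                       # out-of-support devices are heavily penalised
    return max(stars, 0)

camera = DeviceAssessment(4, date(2018, 1, 1), 2, date(2019, 6, 30))
print(current_rating(camera, date(2018, 6, 1)))  # 2: the label has already slipped
```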

Second, it’s worth investigating whether a cyber rating could lull consumers into a false sense of security by negating their own role in protecting themselves from attack. Before implementing a security rating system, we need to research whether purchasing a device that claims to be secure could make consumers less likely to install updates or change default passwords.

Third, as mentioned in the introduction of this report, there’s considerable variation in IoT products. A Jeep Cherokee and a baby monitor (both of which have been compromised) present vastly different dangers, but the compromise of either can have serious consequences. While all IoT devices should include baseline security features in the design phase, devices deemed to be high risk should also require commensurately robust security features. Burdening otherwise cheap, low-risk devices with expensive certifications or strict security regulations, however, could make them commercially unviable in Australia. It’s important to recognise that it will be challenging and expensive to come up with a rating that appropriately addresses all the different categories of IoT devices.

In 2018, the IoT Alliance Australia (IoTAA) is prioritising the introduction of an ‘IoT product security certification program’ as a part of its strategic plan.20 Exactly what this will look like remains unknown, but it’s likely to be performed by accredited independent bodies that evaluate products based on security claims. The Australian Information Industry Association recommends an accreditation scheme that would also certify organisations making IoT devices. The authors’ view is that some manufacturers (for example, Samsung) make so many products that this would be ineffective as a stand-alone tactic, but this idea could be used in collaboration with an individual product rating.

REGULATION AND STANDARDS

Regulation and standardisation are at the forefront of the IoT debate, and positions tend to be polarised, as reflected in the responses to our discussion draft. The respondents acknowledged that regulation isn’t always effective and can impose a significant cost, but some also said that there’s potentially room for government to play a more direct role if a device is deemed to provide a critical service to the community. Some industries, such as transport and healthcare, already have safety standards addressing a wide range of security concerns; those standards need to prioritise current and emerging cybersecurity threats.

Multiple IoT-related bills introduced into the US Congress last year exemplified some of the legislative attempts to enforce IoT security by way of law. The Internet of Things (IoT) Cybersecurity Improvement Act of 2017 stresses the importance of built-in security and the provision of security patches,21 while the Cyber Shield Act of 2017 seeks to introduce a voluntary certification process for IoT devices.22

While US lawmakers have proposed some government regulation, some in Australia believe that IoT security would be more effectively regulated by industry. Legislation takes time to introduce and often struggles to keep pace with the quickly evolving technology it seeks to control, so a market-driven approach to IoT security may allow standards to adapt more rapidly to the changing security climate.

Some classes of IoT devices, however, present little threat to their owners, but their poor security allows them to be co-opted in ways that can be used to harm other internet users or internet infrastructure. This is similar to a widget-making factory that causes air pollution; the factory owner and widget buyer both benefit from lower costs of production and neither has a strong incentive to do the work needed to reduce air pollution, as that would raise costs. In economics, this is described as a negative externality, and negative externalities can be effectively dealt with through regulation. The authors’ view is that incentives do not exist for effective industry-led standards to develop, especially for consumer IoT devices.

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are the two major global providers of standards. The ISO and IEC have a joint technical committee focusing on information technology and a subcommittee focusing on the IoT and related technologies. Australia is a member of the subcommittee through Standards Australia. ISO/IEC also maintains the 27000 series, a family of standards addressing information security management systems.23

The European Union Agency for Network and Information Security released baseline security recommendations for the IoT in late 2017.24 Standards have also been developed in Asia, including a draft policy on the IoT by India25 and a general framework by Japan.26 Other organisations working on IoT standards include the IEEE (Institute of Electrical and Electronics Engineers), The Open Group, and SAE International. While a considerable amount of work on IoT standards has been completed, a draft report on the status of global IoT standards by the US National Institute of Standards and Technology indicates that there’s a long way to go. The report reveals several gaps in current standards development and implementation, including network security, IT system security evaluation and system security engineering.27 It also highlights the variety of SDOs (standards development organisations) working in this space. There’s currently a need for international consensus on IoT standards and a clear pathway to implementation.

Locally, the IoTAA has drafted multiple versions of IoT security guidelines to help promote secure designs for manufacturers and to support industry in understanding security and privacy issues. The IoTAA has also outlined key focus areas for 2018 in its Strategic Plan to Strengthen IoT Security. Australia also has iotsec, a non-profit start-up that promotes security in IoT devices to help industry and consumers.

While regulation and standardisation are often thought of in a binary way (enforced by either government or industry), the feedback from the discussion draft highlighted the importance of approaching IoT security in a holistic manner, in which government, industry and consumers all play a role. Furthermore, IoT cybersecurity is a problem of global, not national, proportions. Devices sold in Australia are manufactured all over the world. Being only a small proportion of the IoT market, Australia risks becoming a dead-end market if device makers’ security costs outweigh their income from sales. For this reason, any attempt to introduce standards for IoT devices in Australia must be done with a global mindset. The challenge now is to reach international consensus and to encourage manufacturers to adopt the standards. An IoT definition would help to focus global efforts both to secure and to develop the technology and help to articulate its scope.

CONCLUSION

The IoT offers Australia many economic and social advantages and should be embraced and used to benefit all Australians. However, it also introduces new risks and vulnerabilities that our current regulatory systems aren’t necessarily mitigating effectively.

It’s the authors’ view that our current policy and regulatory settings are almost certainly sub-optimal, but effective management of the IoT from a government policymaking perspective requires many difficult trade-offs, and easy answers aren’t immediately apparent. Compromise of traditional ICT devices such as phones and laptops has resulted in the theft of both personal and corporate data. Connecting more devices, such as watches, whitegoods, automobiles and industrial equipment, has intensified this problem and introduced new types of threats. Other incidents of organised crime and terrorism have shown that malicious actors exploit seams in systems, regulation and security.

For this reason, it is imperative that we continue to address gaps in these areas to limit opportunities for the exploitation of IoT devices.

This paper is intended to illuminate some of the issues involved in managing IoT risk so that industry and government can have a robust discussion and work collaboratively to improve the security of IoT devices.

  1. Gartner, ‘Gartner says 8.4 billion connected “things” will be in use in 2017, up 31 percent from 2016’, 2017, Gartner.com, online. ↩︎
  2. Rain RFID Alliance, ‘RAIN Q&A with Kevin Ashton RFID and the internet of things’, 2015, pp. 1–4 ↩︎
  3. Australian Energy Market Operator, Black System, 2017, p. 5 ↩︎
  4. ‘SA weather: human error to blame for embryo-destroying hospital blackout during wild storms’, ABC News, 23 January 2017 ↩︎
  5. Business SA, Blackout Survey Results, 2016 ↩︎
  6. Roger Bradbury, ‘South Australian power shutdown “just a taste of cyber attack”’, The Australian, 2016. ↩︎
  7. ‘12 of 14 nursing home deaths after Irma ruled homicides’, VOA News ↩︎
  8. European Union Agency for Network and Information Security, Stuxnet analysis ↩︎
  9. Council on Foreign Relations Cyber Operations Tracker, Stuxnet ↩︎
  10. Council on Foreign Relations, Compromise of a power grid in eastern Ukraine ↩︎
  11. ‘CRASHOVERRIDE: analysis of the threat to electric grid operations’, Dragos.com, pp. 10–11 ↩︎
  12. Oxana Andreeva, Sergey Gordeychik, Gleb Britsai, Olga Kochetova, Evgeniya Potseluevskaya, Sergey I Sidorov, Alexander A Timorin, Industrial control systems and their online availability, p. 8 ↩︎
  13. Sagar Samtani, Shuo Yu, Hongyi Zhu, Mark Patton, Hsinchun Chen, Identifying SCADA vulnerabilities using passive and active vulnerability assessment techniques, IEEE / University of Arizona, 2016 ↩︎
  14. Australian Cyber Security Growth Network, Cyber security sector competitiveness plan, 2017 ↩︎
  15. Australian Cyber Security Growth Network, Australian TAFEs join forces to tackle the cyber security skills gap, 2018 ↩︎
  16. Australian Government, Australia’s Cyber Security Strategy, p. 53 ↩︎
  17. Australian Government, Department of the Prime Minister and Cabinet, Women in cyber security ↩︎
  18. Denham Sadler, Security ratings for IoT devices?, 2017 ↩︎
  19. UK Government, National Cyber Security Strategy 2016–2021, 2016, pp. 36–37 ↩︎
  20. IoT Alliance Australia, ‘Strategic plan to strengthen IoT security in Australia’, 2017 (unpublished material) ↩︎
  21. Mark Warner, Cory Gardner, Internet of Things Cybersecurity Improvement Act of 2017, 2017 ↩︎
  22. Cyber Shield Act of 2017, 2017 ↩︎
  23. ISO, ISO/IEC 27000 family— Information security management systems ↩︎
  24. European Union Agency for Network and Information Security, Baseline security recommendations for IoT, 2017 ↩︎
  25. Department of Electronics and Information Technology, Draft policy on internet of things, Indian Government, 2015 ↩︎
  26. National Center of Incident Readiness and Strategy for Cybersecurity, General framework for secure IoT systems, Japanese Government, 2016 ↩︎
  27. National Institute of Standards and Technology, Interagency report on status of international cybersecurity standardization for the internet of things (IoT), 2018, pp. 54–55 ↩︎

© The Australian Strategic Policy Institute Limited 2018
This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers.

Important disclaimer
This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.

Acknowledgements
We thank all of those who contribute to the ICPC with their time, intellect and passion for the subject matter. The work of the ICPC would be impossible without the financial support of our various sponsors but special mention in this case should go to JACOBS, which has supported this research.

Cyber Maturity in the Asia Pacific Region 2017

The Cyber Maturity in the Asia–Pacific Region report is the flagship annual publication of the ASPI International Cyber Policy Centre.

This report assesses the national approach of Asia–Pacific countries to the challenges and opportunities of cyberspace, taking a holistic approach that assesses governance and legislation, law enforcement, military capacity and policy involvement, and business and social engagement in cyber policy and security issues.

The 2017 report is the fourth annual cyber maturity report. It covers 25 countries and includes assessment of Taiwan and Vanuatu for the first time.

The United States continues to lead the country rankings. Although the transition to the Trump administration caused a pause while cyber policy was reviewed, the US military is recognising the importance of cyber capability, elevating US Cyber Command to a unified combatant command to give it increased independence and broader authorities.

Australia has moved up in our rankings from fourth to equal second on the back of continued investment in governance reform and implementation of the 2016 Cyber Security Strategy. Australia’s first International Cyber Engagement Strategy was released, and the 2017 Independent Intelligence Review made a number of recommendations that strengthen Australia’s cyber security posture, including broadening the Australian Cyber Security Centre’s (ACSC) mandate as a national cyber security authority and clarifying ministerial responsibility for cyber security and the ACSC.

Japan (equal second with Australia), Singapore and South Korea round out a very close top five. All countries in this leading group have improved their overall cyber maturity, although very tight margins have seen some changes in rankings: Australia and Japan moved up to equal second, while Singapore and South Korea dropped to fourth and fifth.

Taiwan and Vanuatu both made strong initial entries into the Cyber Maturity Report. Taiwan ranked ninth, just behind China, hampered by difficulties with international engagement, while Vanuatu came seventeenth, best of the Pacific islands.

https://www.youtube.com/watch?v=nEszlPxaATM

Cyber maturity in the Asia-Pacific region 2016

The 2016 Cyber Maturity report is the culmination of 12 months’ research by the ASPI International Cyber Policy Centre. The report assesses the approach of 23 regional countries to the challenges and opportunities that cyberspace presents, in terms of their governance structure, legislation, law enforcement, military, business and social engagement with cyber policy and security issues.

The 2016 report includes an assessment of three new countries, Bangladesh, Pakistan and the Solomon Islands. It also features, for the first time, separate data points on fixed line and mobile connectivity to better reflect the growth of mobile-based internet access across the region, its role in facilitating increased connectivity and opening new digital markets.  

Turning to the country rankings, coming in at top of the table for the third year running is the United States. In 2016 the United States continued to further refine its national policy approach to cyber issues, with President Obama’s National Security Action Plan and 30-day Cybersecurity Sprint, and the passing of the Cybersecurity Act. South Korea, Japan, Australia and Singapore round out the top five.

South Korea and Japan have swapped positions in second and third place, and Australia has leapfrogged Singapore into fourth place, recovering after dropping to fifth place in 2015. Australia’s improved position reflects the changes taking place as part of the implementation of the new Australian Cyber Security Strategy.

This includes the appointment of Australia’s first ministerial-level cyber position (the Minister Assisting the Prime Minister, the Hon. Dan Tehan) and a new coordinator for cyber issues within the Department of the Prime Minister and Cabinet (Alastair MacGibbon).

Agenda for Change 2016: Strategic choices for the next government

The defence of Australia’s interests is a core business of federal governments. Regardless of who wins the election on July 2, the incoming government will have to grapple with a wide range of security issues. This report provides a range of perspectives on selected defence and national security issues, as well as a number of policy recommendations.

Contributors include Kim Beazley, Peter Jennings, Graeme Dobell, Shiro Armstrong, Andrew Davies, Tobias Feakin, Malcolm Davis, Rod Lyon, Mark Thomson, Jacinta Carroll, Paul Barnes, John Coyne, David Connery, Anthony Bergin, Lisa Sharland, Christopher Cowan, James Mugg, Simon Norton, Cesar Alvarez, Jessica Woodall, Zoe Hawkins, Liam Nevill, Dione Hodgson, David Lang, Amelia Long and Lachlan Wilson.

ASPI produced a similar brief before the 2013 election. There are some enduring challenges, such as cybersecurity, terrorism and an uncertain global economic outlook. Natural disasters are a constant feature of life on the Pacific and Indian Ocean rim.

But there are also challenges that didn’t seem so acute only three years ago such as recent events in the South China Sea, North Korea’s nuclear and missile programs, and ISIS as a military threat and an exporter of global terrorism.

Whoever holds government in the next term will have to deal with these issues.


Cyberspace and armed forces: The rationale for offensive cyber capabilities

A serious approach to military modernisation requires countries to equip, train and organise cyber forces for what has become an essential component of national defence and deterrence. A force without adequate cyber capabilities is more dangerous to itself than to its opponents. As nations move forward in rethinking the role and nature of their military forces, and as they study the problems of organisation, doctrine and use of cyber operations, they need to:

  • develop the full range of military cyber capabilities with both offensive and defensive application
  • create a centralised command structure for those capabilities, with clear requirements for political-level approval for action
  • embed those capabilities in doctrine and a legal framework based on international law.

Cyber maturity in the Asia-Pacific Region 2015

The second edition of the International Cyber Policy Centre’s annual Cyber Maturity in the Asia–Pacific report is the culmination of 12 months’ research and analysis delving into the cyber maturity of 20 countries in our region. It is a usable, quick-reference resource for those in government, business, academia and the wider cyber community who are looking to make considered, evidence-based cyber policy judgements in the Asia–Pacific. It provides a depth of information and analysis that builds a deeper understanding of regional countries’ whole-of-nation approach to cyber policy, crime and security issues, and identifies potential opportunities for engagement.

This year’s maturity metric contains five new countries and integrates a stand-alone assessment category on cybercrime enforcement. This new cybercrime category joins continuing assessments of whole-of-government policy and legislative structures, military organisation, international engagement and CERT maturity, in addition to business and digital economic strength and levels of cyber social awareness. This information is distilled into an accessible format, using metrics to provide a snapshot by which government, business and the public alike can garner an understanding of the cyber profile of regional actors.


The future of digital identity in Australia

It seems that hardly a day goes by without news of another Australian organisation being hit by a data breach. While the underlying causes and the actions we need to take to prevent them are many and varied, one question to ask is whether all these organisations need to collect and store all this data.

In the case that started the recent tsunami of breaches, Optus, the company was obliged by legislation to verify the identity of its customers. Certainly, verification is important in preventing and detecting other crimes that might be perpetrated using telecommunications networks, but what if there was a way of doing this without every new customer needing to hand over details of their identity documents, which then have to be verified and stored for audit purposes?

Digital identity systems provide exactly such a solution. A digital identity system enables individuals to prove their verified identity (and potentially other personal details) online. Properly designed, it provides people with the ability to control exactly what is shared with whom, and ensures that only the minimum necessary data is shared.
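
A toy sketch of that data-minimisation idea is below. It assumes an identity provider that verifies a person’s documents once and then issues separately signed claims, so a relying party sees only the single attribute it needs; HMAC stands in for a real digital-signature scheme, and none of this reflects the design of the government’s actual system.

```python
# Toy sketch of 'minimum necessary' disclosure: an identity provider signs each
# verified attribute separately, so a person can hand a relying party one claim
# (e.g. 'over 18') without revealing the underlying document details.
# HMAC is a stand-in for a real digital-signature scheme; this is illustrative
# only, not the design of any actual Australian system.
import hmac, hashlib, json

PROVIDER_KEY = b"identity-provider-secret"   # hypothetical signing key

def sign_claim(claim: dict) -> dict:
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": tag}

def verify_claim(token: dict) -> bool:
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

# The provider verifies the person's documents once, then issues signed claims.
age_token = sign_claim({"subject": "user-123", "over_18": True})

# A telco or retailer checks only the claim it needs; no passport number,
# address or date of birth ever changes hands.
assert verify_claim(age_token) and age_token["claim"]["over_18"]
```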

To its credit, the Albanese government appears to have realised this and plans to revive the proposed ‘Digital Identity System’, which has largely lain dormant since the previous government published draft legislation at the tail end of last year. The concept of digital identity has been around for many years; the first iteration of the government’s ‘Trusted Digital Identity Framework’ was drafted in 2015. Budget data shows that more than $600 million has been spent since then on the nascent system, yet most people’s interactions with it occur at most once a year, when filing their tax returns. To be an effective tool to facilitate and secure digital transactions, a digital identity system needs to be something that a large number of people use regularly.

Our latest research at ASPI has shown that there are several barriers that are making state governments, businesses and customers reluctant to join the government’s proposed system. If it doesn’t achieve a critical mass of participants to make it worthwhile for people to engage with it, it will fail. As the federal government moves forward with proposed legislation, it will need to address these barriers in order to build confidence and drive take-up.

Obviously, given recent events, security is a major concern. To supplement the current framework for accrediting organisations using the system, a transparent process needs to be established to allow researchers to report vulnerabilities and for them to be addressed. We also need well-resourced monitoring systems that can quickly detect any illegitimate activity, and robust processes to fix any incidents of stolen identity so that the impact on the people affected is minimised.

Any digital identity system will build on our existing underlying identity systems. The fallout from the Optus data breach has shown some of the limitations of the current patchwork of systems across states and federal government departments. We need common standards and safeguards; otherwise, criminals will find and exploit systemic weaknesses.

Another equally important factor is privacy. Recent cyber breaches have raised public awareness of the data that companies collect and hold. To gain acceptance, we need to ensure that a digital identity system doesn’t have the unwelcome side effect of further enabling the rise of ‘surveillance capitalism’. We need to avoid a dystopian future where the profiles already built up by Facebook and others are linked to verified personal details, which would make them even more valuable and increase the incentives for intrusiveness. This will require a combination of regulations that govern how data can be used and technical measures that limit the personal data shared with organisations.

Sometimes, the requirements for security and privacy need to be balanced. For example, the system is designed to allow individuals to set up multiple identities. This is intended to preserve privacy—for example, allowing someone to separate their business and personal transactions. However, it raises an obvious question about how to build effective safeguards to detect fraudulent duplication of identities.

Governance arrangements will also need to be addressed. If the federal government effectively owns the system and has decision-making powers over detailed technical standards, this is unlikely to give the states and territories, or commercial organisations, the confidence to make long-term commitments to the system. If all players decide that the only way to have control over the systems is to build their own, such fragmentation would probably be fatal.

A better idea would be to hand over ownership and control of the system to an independent entity governed by representatives from all stakeholders, with the federal government’s role limited to setting the regulatory environment. Examples such as the bank payment system provide a potential model.

Finally, Australia doesn’t operate in a digital vacuum. When developing digital identity systems, we should aim to align as much as possible with international partners. This will not only encourage participation by multinational companies—which will be reluctant to develop bespoke systems for each country—but also could unlock additional benefits in facilitating digital trade.

Digital identity systems offer opportunities to reduce the cyber risks posed by the sharing of personal identity information, and to unlock economic benefits by building trust and reducing friction in the digital economy. The estimated annual microeconomic benefits are $11 billion, which could finally justify the significant costs to date.

However, to be successful, the system needs to reach a critical mass of organisations and users and become part of everyday digital life. The government has an opportunity to reset the approach and focus on the issues that could impede take-up, engaging with stakeholders and making them part of the journey. The time to do this is now.

Undetected and dormant: managing Australia’s software security threat

Software has spread to almost every aspect of our lives—from our watches to our combat aircraft—and nearly every organisation, from the Department of Defence to your local shopfront, relies on software to operate. It is no longer confined to laptops or computers. Software now controls the operations of power plants, medical devices, cars and much of our national security and defence platforms.

At the same time as software has become integral to our prosperity and national security, attacks on software supply chains are on the rise.

A software supply chain attack occurs when an attacker accesses and maliciously modifies legitimate software in its development cycle to compromise downstream users and customers. Software supply chain attacks take advantage of established channels of system verification to gain privileged access to systems and compromise networks. Traditional cybersecurity approaches, such as those deployed on the perimeter, have limited capability to detect these attacks since they often leverage legitimate certificates or credentials and so don’t raise any ‘red flags’.

Software supply chain attacks are popular, can have a big impact and are used to great effect by a range of cyber adversaries. Attackers can sit undetected on networks for months and deliver remote-code execution into target environments. Efforts to disrupt or exploit supply chains—including software supply chains—have become a ‘principal attack vector’ for adversarial nations seeking to take advantage of vulnerabilities for espionage, sabotage or other malicious activities.

The growing prevalence of sophisticated supply chain attacks, like SolarStorm and NotPetya, has seen governments around the world increasingly focused on identifying and mitigating risks to the software supply chain.

In the US, a recent executive order requires government agencies to purchase only software that meets secure development standards to protect government data. To support the order, in February the National Institute of Standards and Technology issued guidance that provides federal agencies with best practices for enhancing the security of the software supply chain. Two guidelines were released: the Secure software development framework and the companion Software supply chain security guidance.

The executive order directs the US Office of Management and Budget to take appropriate steps to require that agencies comply with the guidelines within 30 days. This means that federal agencies must begin adopting the framework and related guidance immediately while customising it to their agency-specific risk profile and mission. Vendors that supply software to the US government will soon also have to attest to meeting these guidelines.

In the Australian context, however, software supply chain risks remain largely underappreciated and unaddressed. So, what two key things could the Australian government do to manage these risks?

First, it should update government procurement policies and processes to manage software supply chain risks.

The government should ensure that there are adequate mechanisms to assess software supply chain risks early in the acquisition or procurement process. At the later stages of the acquisition process, which in some cases can be years later, a supply chain risk may be realised and the government may be overly committed to the solution of choice—forcing it to either pay significant costs to remove the risk or attempt to manage the risk. Strengthening references to the importance of software supply chain risks in key procurement policies would support the government to make more informed purchasing decisions and embed risk management practices at the early stages of the acquisition process.

In particular, the government should consider adopting the US guidelines and integrate them into its procurement policies and practices. These documents are intended to help government agencies get the necessary information from software producers in a form that can help guide risk-based decisions. The recommendations span many types of software, along with firmware, operating systems, applications and application services, among other things.

Procurement processes should include asking software companies about their product integrity practices. This could include key questions about their internal processes and oversight mechanisms to mitigate the risk of modification during the development lifecycle, and whether they undertake third-party testing to ensure that security vulnerabilities are identified early in the process.
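
One concrete, low-cost integrity practice that such questions could probe is whether vendors publish cryptographic digests (and signatures) for their releases, and whether agencies verify them before installation. The sketch below shows only the digest-verification step; the file name and digest value are hypothetical, and a real process would also verify a digital signature over the published checksums.

```python
# Minimal sketch: verify a downloaded release against a vendor-published
# SHA-256 digest before installation. The file name and digest are
# hypothetical placeholders for illustration.
import hashlib
import sys

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

# Value taken from the vendor's (signed) release notes -- placeholder only.
PUBLISHED_DIGEST = "replace-with-vendor-published-sha256"

if __name__ == "__main__":
    artefact = sys.argv[1] if len(sys.argv) > 1 else "vendor-update-1.2.3.tar.gz"
    actual = sha256_of(artefact)
    if actual == PUBLISHED_DIGEST:
        print("Digest matches the published value; proceed to signature checks.")
    else:
        print(f"Digest mismatch ({actual}); do not install.")
```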

The government should also take steps to protect source code integrity by understanding whether vendors have shared their unique intellectual property as a condition of market access. Increasingly, we have seen instances of countries implementing new requirements—most notably, mandates to review or even hold source code—as a condition to sell technology to certain parts of their market. Widespread source code disclosure, however, could actually weaken security, since source code can be leveraged to detect and exploit vulnerabilities in software used by organisations globally. Currently, the Australian government doesn’t have visibility as to whether companies it deals with have shared their source code with foreign governments—posing a potential security risk.

Procurement policies should be amended to identify the companies that have shared the source code of their unique intellectual property with governments as a condition of access to certain markets. A similar approach is being taken by the US government.

Second, the Australian government should establish practices and procedures to regularly review business-critical software.

While some organisations might look at how a company manages its software supply chain at the point of purchase, few would undertake regular and continuous reviews of these practices. However, as we have seen from global attacks, regular reviews of key software companies—their culture and software development practices—may be helpful in preventing exposure to supply chain attacks.

As part of this review process, the government could collaborate with vendors of critical software on risk-based principles, including relevant changes to their software development practices or key  personnel changes (for example, the chief security officer leaving the organisation). It should also consider the ‘red line’ for removing software from its environment—in other words, at what point or risk level would an agency reconsider having a particular software product, and who can sign off on removing it?

As our world becomes increasingly digitised and connected, attacks on software supply chains are only set to increase. Compromising them can be an effective technique to gain widespread and undetected access to networks and systems. These risks are particularly acute for the defence and national security communities, which depend on software for key functions such as surveillance, data analytics and weapon systems, most of which is developed in the private sector.

The dangers of a ‘zero trust’ digital world

In the early days of cybersecurity, organisations adopted the model of Berlin during the Cold War: a wall high enough to prevent unwanted border crossings and a Checkpoint Charlie to regulate the rest.

But the physical world doesn’t map easily onto the digital. Perimeter-based approaches fail in an ever-shifting network of highly interconnected systems, where even physical disconnection does not suffice for separation, given wi-fi, Bluetooth and other electromagnetic phenomena. And because software is ever changing—that’s one of its strengths—digital systems are never complete or fully known. That means static structures and single solutions, such as walls and checkpoints, generally fail to prevent evolving threats.

So, cybersecurity thinking has adapted to the reality of constantly shifting, often unknowable systems, replete with continual interactions and adjustments between users, technology, data and the environment. Since 2010, the favoured approach to this perpetual state of insecurity—hardening the ‘chewy centre’ of information systems—has been ‘zero trust’.

Zero trust acknowledges that malware and intruders may penetrate barriers and checkpoints. Every packet of data moving into, out of and within organisational systems is regarded with suspicion. Nor is it just about the technology. Core to its premise is that users cannot be trusted. User access is hard to obtain; once granted, it's typically limited to 'least privilege' role-based permissions, and user behaviour is monitored to identify aberrant patterns.
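To make the model concrete, here is a minimal, purely illustrative sketch of the per-request check a zero-trust system performs, assuming a deny-by-default policy. The roles, resources and risk threshold are hypothetical and not drawn from any particular product.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    role: str
    device_compliant: bool   # e.g. patched, encrypted, centrally managed
    resource: str
    action: str
    risk_score: float         # from continuous behaviour monitoring, 0.0-1.0

# Hypothetical least-privilege map: each role may perform only the named
# actions on the named resources; anything not listed is denied by default.
ROLE_PERMISSIONS = {
    "finance-analyst": {("ledger", "read")},
    "hr-officer": {("payroll", "read"), ("payroll", "update")},
}

RISK_THRESHOLD = 0.7  # assumed cut-off from behavioural analytics

def authorise(req: AccessRequest) -> bool:
    """Deny by default; every request re-proves identity, device and intent."""
    if not req.device_compliant:
        return False      # untrusted device, regardless of who the user is
    if (req.resource, req.action) not in ROLE_PERMISSIONS.get(req.role, set()):
        return False      # outside the least-privilege grant for this role
    if req.risk_score >= RISK_THRESHOLD:
        return False      # aberrant behaviour pattern detected
    return True           # allowed, for this request only

# A compliant device, a permitted action and normal behaviour is allowed;
# the same user touching payroll is not.
print(authorise(AccessRequest("u123", "finance-analyst", True, "ledger", "read", 0.1)))
print(authorise(AccessRequest("u123", "finance-analyst", True, "payroll", "read", 0.1)))
```

Every request is evaluated afresh; nothing is inherited from a previous session or from being inside the network perimeter.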

Zero trust is neither a cheap nor a quick fix. The considerable setup, operational and compliance costs are most often justified by the prospective or realised costs of a breach, data loss or ransomware attack.

In zero-trust environments, nothing is trusted. In the words of one cybersecurity executive, ‘Trust is a vulnerability and, like all vulnerabilities, should be eliminated.’ The premise of zero trust, after all, is not limited by boundaries or platforms, but seeps, stepwise, to include partners, supply chains and regulatory systems.

As governments struggle to meet the challenges of fast-changing technological disruption, a growing plethora of threats to stability and an increasingly precarious geopolitical environment, all exacerbated by an ongoing pandemic, there’s a temptation to latch onto concepts that promise control and certainty. Security and safety often trump other arguments in policy debates, especially as politics becomes partisan. As such, the ideas that motivate zero-trust approaches, facilitated by digital technology, appeal more and more.

But that path leads ever down into darkness. Considerable dangers exist in extending approaches that may suit digital needs within contained environments to the broader spheres of social, political and economic life.

There’s the question of fit. Digital systems are particularly parsimonious. That may sound odd, given the apparent tangle of modern technological systems and their ubiquity. But as the American political scientist Herbert A. Simon demonstrated, it’s impossible for an artificial system to replicate the real world; they are always incomplete representations.

Moreover, digital systems are fundamentally unlike social systems. Not only do they lack the richness, multiplicity and ambiguity that characterise human relations, but their underlying network structure and behaviours differ. Applying misaligned and overly rigid order through a zero-trust approach to social systems would force disassociation within those systems: ‘they will cut our life within to pieces’.

Then there’s the question of cost. Beyond establishing the necessary surveillance infrastructure, the costs include the burden on and erosion of human relationships, culture and practice. It’s not simply the extra time and effort needed to negotiate internal rules and boundaries imposed by others; the lack of privacy inevitably generates self-censorship, an unwillingness to participate or debate, and an avoidance of risky ideas or ventures. A zero-trust culture valorises control—at the cost of efficiency, effectiveness, innovation, creativity and contestability.

There’s also the question of power. Technological design and operation comprise a series of choices—purpose, costs, compromises and privileges. Those making design and operational decisions, typically hidden from scrutiny, exert a tremendous amount of power through access control, surveillance and defining acceptable behaviour. Rights once assumed—privacy, freedom of expression, intellectual property, increasingly identity and avenues for redress—are eroded or lost.

Zero trust embeds and deepens an imbalance of power in favour of the few over the many. Zero-trust systems are not democratic systems: they are inherently authoritarian, even totalitarian, in nature.

And there’s the rub: our society is fundamentally based on trust. To imagine a zero-trust social order, think not Cold War West Berlin, but a supercharged Stasi-run East Germany, where every individual, device and interaction is continuously tracked, interrogated and measured against a profile set by a data-enabled intelligence apparatus.

There are strong national security reasons for containing the damage that untrustworthy technologies can wreak on our society. But there are stronger reasons to ensure that security doesn’t come at the cost of weakening societal fabric, crippling innovative or productive capacity, or damming the wellsprings of democracy.

Zero trust is the latest effort to tame the inherently wicked problem of cybersecurity; there will be others. Nobler concepts are needed by the public, policymakers and even security experts to ensure a healthy, resilient civil society. It’s now, as authoritarian states engage in wordplay, disinformation and lawfare, that trust matters most.

Australia needs a sovereign ICT capability

Information and communications technology, encompassing digital services and infrastructure, cybersecurity and software, is ubiquitous throughout the economy and society. As the digital transformation gathers pace, the number and complexity of ICT services are increasing rapidly.

The ubiquity of these services in Australia is reflected in the diverse federal ministerial responsibilities for the policies and legislation that set the national direction of ICT—our cyber and digital security and the ongoing development of our digital economy, including technological advancement.

Consequently, our strategies, legislation, regulation and policy initiatives are being developed separately, with no overarching vision, resulting in point solutions and stove-piped policy development, implementation and governance.

This is happening as Australia is dealing with ongoing challenges associated with the pandemic and increasing geostrategic competition in our region, escalating our need for critical technologies and more robust cybersecurity. These challenges are prompting a renewed emphasis on national sovereignty, with the goal of greater national resilience and self-reliance.

Digital sovereignty means self-reliance where ICT, data and technology are concerned. Since ICT and technology fundamentally underpin every sector of our economy, it’s not possible to think seriously about national resilience without considering digital sovereignty.

As we plan for our post-pandemic economic recovery, it's time Australia had an overarching strategy for ICT capability that aligns all relevant legislation, policy, governance, capability and priorities into an integrated national plan. This is the only way we can ensure Australia has a safe and prosperous digital economy and society that is cyber secure, resilient to supply-chain challenges and contributes to a more prosperous and secure Indo-Pacific region.

We need this plan to address both social and economic threats, including cyberattacks, which are growing exponentially in number and seriousness, and opportunities that have the power to reshape our economy and society forever. These include the internet of things, quantum computing and 6G.

Australia must address the national need for a sovereign ICT capability from which a whole-of-government, whole-of-nation approach to these threats and opportunities can be formulated. That will require a thorough assessment of our current and future capability requirements, a process that can be delivered through a sovereign ICT capability framework enabling Australia to adopt an integration-by-design approach.

A sovereign ICT capability framework would underpin the development and sustainability of Australia’s digital and cybersecurity future. It would create new and expanded opportunities for Australian companies to provide ICT solutions to Australia and to export them to our near neighbours.

That would, in turn, create sustainable ICT-focused employment pathways underpinned by a thriving local industry. This will allow Australia to deliver strategic capabilities by prioritising strategic requirements. National delivery plans could include:

  • a cybersecurity capability plan addressing skills and capability needs throughout the economy
  • a digital infrastructure plan addressing future requirements for data storage and processing, submarine cables, telecommunications poles and wires, and 6G and the technologies it will enable
  • a sovereign data plan addressing necessary controls for various government and non-government data.

Recently announced policies and legislation, including the digital economy strategy and amendments to the Security of Critical Infrastructure Act 2018, need to interlink with plans for our digital infrastructure, cybersecurity capability and technology development, with these capabilities supported by the Australian people and the organisations that will deliver them.

This design approach would allow the nation to determine just what ICT capability it needs, and then decide what will be built and maintained onshore (on-shoring), what can be built and maintained in partnership with our allies (ally-shoring) and what needs to be obtained through global supply chains (offshoring).

This 'smart sovereignty' approach would optimise and grow the capabilities we already have in Australia through direct government procurement and ally-shoring, assessing the degree of sovereignty required to deliver and sustain the capabilities we need.

Everything from research and development to capability interoperability, skills development and raw materials to support ICT resilience would be captured under this framework, providing confidence to our ICT industry and to the nation.

Australia needs holistic management of eight essential elements for sovereign ICT capability: organisation, management, personnel, training, systems, facilities, supplies and support, and industry.

The need for an overarching framework linking these elements is compelling.

To deliver this, Australia should better utilise its governmental purchasing power to tighten ICT standards, regulations and procurement approaches. Doing so will lift ICT capability nationally. Australia's sovereign ICT capability should be designed to improve national resilience and not be left solely to market forces, much less our alliances, new or old.

A dedicated minister for ICT capability (or cybersecurity) should be appointed with responsibility for building sovereign ICT capability. This should be a cabinet-level appointment to provide consistency of direction for national ICT capability and add weight to the message that sovereign ICT and cybersecurity are critical to our future economy and security. The minister could also bring together relevant domestic and foreign affairs aspects as the nation seeks a safe, secure and prosperous Australia, Indo-Pacific and world, enabled by cyberspace and critical technology.

Creation of a sovereign ICT capability framework would enable Australia to prioritise the strategic ICT capabilities we need, while building on the strong foundations we already have.

Policy, Guns and Money: Marietje Schaake on technology, democracy and accountability

This special episode features an excerpt from a recent ASPI webinar with international cyber expert Marietje Schaake on technology, democracy and the question of accountability.

She joined ASPI’s Fergus Hanson for a conversation on the challenges that technologies create and how democracies can work to better regulate them amid rising authoritarianism.

They discussed the proliferation of surveillance tools like Pegasus spyware and the need for companies to move away from a values-agnostic approach to one centred on human rights.

Schaake is the international policy director at Stanford University's Cyber Policy Center. She is also an international policy fellow at Stanford's Institute for Human-Centered Artificial Intelligence and president of the Cyber Peace Institute.

Threats to Australia shift to new domains: cyber, technology and information

Australia’s strategic environment is changing rapidly. Once shaped exclusively by traditional security concerns where what mattered most were our military alliances, the state of our armed forces and diplomacy, today’s environment is increasingly shaped by new domains. Chief among them: cyberspace, technology and our online information landscape. This overlapping trio is currently front-page news in a busy month that has highlighted just how entrenched this strategic shift now is.

We have seen how the world’s booming surveillance industry continues to be given permission to operate in the shadows, with dangerous consequences. Consider stunning revelations that high-end spyware sold by Israel’s most notorious spyware company, NSO Group, designed to track terrorists and criminals, was instead being used to spy on journalists, human rights activists, government ministers, diplomats and businesspeople in democracies and autocratic regimes alike. Ensnaring world leaders and several Arab royal family members and dominating media headlines from Europe to India, this exposé may finally force a moment of reckoning for this unregulated and unchecked industry.

Next we come to the collision of social media with the ongoing pandemic. Just last week, US President Joe Biden said social media platforms like Facebook ‘are killing people’ for allowing misinformation about Covid-19 vaccines to spread on their platforms, in some of his strongest language yet about the issue. The Covid-19 pandemic has pushed our already messy information environments into a new era where we can see the daily erosion of credible information online. The president’s comments come at a time when tensions between democracies and US internet companies are at an all-time high as they continue to spar about how to moderate our information ecosystem, while keeping it as free and open as possible.

Then on Monday we saw an unprecedented global coalition come together, including Five Eyes alliance members, European countries and Japan, to hold the Chinese state 'responsible for gaining access to computer networks around the world via Microsoft Exchange servers'. For the first time, NATO joined in with a public statement calling on China to act in line with internationally agreed norms of behaviour. The Chinese state's voracious appetite for wide-ranging intelligence collection, intellectual property theft and foreign interference activities has prompted a growing global culture of collective attribution and action that will continue far into the future.

For Australia, this is a significant development that reinforces the importance of the cyber domain. Attributing malicious cyber behaviour to countries like Russia, Iran and North Korea—something Australia has done several times over the past few years—brings far fewer complications. Attributing such behaviour to our largest trading partner, which has shown itself to favour economic coercion and wolf warrior diplomacy in dealing with so many of its bilateral relationships around the world, is fraught but necessary.

The Biden administration concurrently released a Department of Justice indictment that named and shamed four hackers from China’s Ministry of State Security—the country’s sprawling domestic intelligence agency (whose international operations, cyber included, are vast). An additional part of the US announcements was the release of a unique and detailed report listing more than 50 tactics and techniques used by Chinese state-sponsored cyber actors to target US and allied networks. Importantly, this report also provided actionable recommendations for how targeted organisations could detect and mitigate the risks of these operations.

Such useful advice should inspire other governments, Australia included, to provide more practical advice tailored to dealing with our key threat actors in cyberspace (state and non-state alike). While large parts of the Australian public service still prefer a ‘country agnostic’ approach to policymaking and planning, such a stance leaves us at a disadvantage and ill-equipped to deal with the actual threats we face.

More than anything, developments this month highlight the importance of Australia getting its own house in order.

The Australian public is informed about the stakes at play. The recent 2021 Lowy Institute poll found that 98% of Australians viewed ‘cyber attacks from other countries’ as a critical (62%) or important (36%) threat to Australia over the next decade, beating out other enormously important issues including climate change, international terrorism and a severe downturn in the global economy.

There is positive momentum underway—across government and the business community—to boost our cybersecurity posture and culture. However, cyber, technology and information ecosystems are a trio of overlapping policy issues, and one can't be tackled without the others. Having a cybersecurity strategy alone, for example, doesn't provide the toolkit to deal with the global rise in cyber-enabled foreign interference that is currently targeting populations around the world via a suite of online platforms from YouTube to TikTok. This is an issue the Australian government is currently struggling to deal with, having yet to assign an agency to lead on countering this new threat.

While we wait for policymaking to catch up, it is worth noting that parliamentarians increasingly see this gap. Our savviest politicians now know that, beyond their immediate patch, there are two key issues they and their advisers need to get across—and stay across. The first is this new domain of cyber, information and technology threats. And the second? China, of course.

Government needs to ensure Australia’s digital sovereignty

The concept of ‘sovereignty’ has recently gained new life in Australia and around the world. Increased tensions with China, a constant flow of fake news, frequent references to cyberattacks conducted by sophisticated state actors, and public announcements on foreign espionage have placed sovereignty front and centre in the Australian psyche. We’re in an era of cyber spies and cyber warriors.

Territorial sovereignty has always been understood and accepted. Increasing geopolitical uncertainty for Australia has seen political and economic sovereignty dominate conversations from the barbecue to the boardroom.

But Facebook’s recent shutdown of its Australian services has brought digital sovereignty squarely into the national consciousness. Digital sovereignty is harder to explain and conceptualise. In turning off our digital assets, Facebook said to the world that any nation’s digital sovereignty—the data each nation publishes on its platform—is Facebook’s to control as it pleases. Imagine if Facebook was a water utility or an energy company.

Like all services and resources, digital services and capabilities are vital to our society. Digital information is at the heart of how Australians work, live, play and interact. From the Treasury’s forecasts to our digital wallets, we know that the digital economy is key to Australia’s national prosperity.

Data privacy and security are core to this prosperity—our very functioning depends on data. The personal information of every Australian—from where we live, work and shop, to details about our social habits, health information and financial status—is all digitised. For this reason, data has been described as the ‘new oil’, and, like our natural resources, it belongs to us.

With near-universal dependence on digital information and electronic devices, cybersecurity has become critically important. The first rules of cybersecurity are to know what data is most valuable and where it’s physically located.

This has led to an explosion in international demand for data storage, and therefore data centres—large, windowless structures housing long rows of computer servers, and very large air conditioners that keep them cool. Many of these data centres, such as those owned and operated by big US technology companies, are seamlessly connected by networks that cross international borders. Sydney’s well-publicised Global Switch data centre is owned wholly by Chinese interests. It hosts some of the Australian government’s data, including data owned by the Department of Defence.

So, where are our digitised selves—our data, our new individual and national reflections—stored? Are they held within Australian territorial confines? Can they be accessed by foreign nationals? Are they subject to foreign laws?

It’s becoming increasingly difficult to answer these questions. This led to the publication of the government’s Data hosting strategy to address ‘risks to data sovereignty, data centre ownership and the supply chain’.

Storing Australia’s data within Australia so only Australians can access it seems right. But the government has released its hosting certification framework to explicitly exclude sovereign in term and concept, because ‘given the potential for a level of foreign investment, any publicly listed hosting provider would be ineligible for the higher level of certification’.

So, with sovereignty removed, does this mean the government is happy to hand over our data to foreign data centre operators?

The framework tries to repair this apparent gap: ‘Sovereignty refers to the ability of the government to specify and maintain stringent ownership and control conditions.’ But this explanation presents more questions than it answers.

Will our data be stored offshore? Who will control it? Will access be subject to foreign government policies? How secure is it? Will we be able to access it whenever we need it?

These are important questions for Australia’s friends and allies, not just our competitors. Several big US tech companies maintain and operate data centres in mainland China. The US is our friend, but the recent action by Facebook raises questions as to the willingness of big US tech companies to use their considerable might to influence and coerce customer nations.

In January, US President Joe Biden signed an executive order requiring US government departments to 'buy American'. It was a perfectly reasonable action to take as the US seeks to rebuild its economy on the back of Biden's multitrillion-dollar recovery package. The Australian government should follow suit. 'Buy Australian' for government agencies should be a position our government is prepared to adopt, and it should include sovereign data storage and sovereign digital technologies as its centrepiece.

There are so many outstanding Australian technology and cybersecurity companies that are either wholly or majority Australian owned. They’re also Australian controlled, which is critically important for security and sovereignty. Yet these companies will struggle to compete unless the government wrests back a measure of control from the US tech giants and prioritises Australian sovereign technology companies.

Data and technology are essential to our way of life. If the digital economy truly is key to Australia’s national prosperity, then the government should provide clarity on the security, privacy and protection of data for Australia and Australians.

With next week’s budget, the government has an opportunity to stand behind and promote Australian technology companies. Failing to do so will leave us all asking if Australia is fair dinkum about digital sovereignty.

Government must take cyber threat to democracy seriously

With voting underway in the US, the eyes of the world are focused on America’s democratic process.

Unfortunately, so is the attention of groups of state-backed hackers from around the world as the US’s adversaries seek to interfere in the election.

In the last fortnight, the FBI, the Director of National Intelligence and the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency publicly attributed another cyberattack on the 2020 election to Iranian hackers.

Emails using the threat of violence to coerce voters’ behaviour began arriving in the inboxes of registered Democrats in Alaska, Arizona and Florida. They purportedly came from the far-right Proud Boys group, but actually originated from Iranian state-backed hackers.

This attack follows the successful hack-and-leak campaign undertaken by Russian-backed hackers against the 2016 US election and is part of an accelerating trend we're seeing around the world, in which nation-state hackers target the IT systems of non-governmental democratic institutions in an effort to interfere in other countries' democratic processes.

As ASPI’s recently released report Cyber-enabled foreign interference in elections and referendums notes, these operations have emerged as a high-impact and increasingly common threat to sovereignty in democracies.

Similar attacks have occurred since 2016 in the United Kingdom, France, Germany and many other democratic nations.

Indeed, ASPI has found that 41 elections have been targeted by cyber-enabled operations in the last decade.

They are a salutary warning for Australia.

More often than not, the targets of these hacks are not government or parliamentary IT systems, but other institutions of democracy—political parties, media outlets, research institutes and non-governmental organisations.

A recent report from Microsoft found that NGOs were the most common targets for state-backed cyber operations, constituting 32% of all targets.

Australia is currently unprepared for cyberattacks on democratic institutions outside government. Something similar to Iran’s ‘Proud Boys’ effort would be easily replicable in Australia through attacks on the IT infrastructure of Australia’s political parties.

After the February 2019 cyberattacks on the networks of Australian Parliament House and the major political parties, Prime Minister Scott Morrison told parliament that our democratic process was ‘our most critical piece of national infrastructure’.

But today, while the government does consider the IT systems of Parliament House and the Australian Electoral Commission critical infrastructure, it doesn’t extend this to the IT systems of the other organisations targeted in this attack—Australia’s political parties.

The Morrison government has offered band-aid fixes to this problem over the past three years, including $2.7 million over four years allocated to the major parties’ cybersecurity in the 2019 Mid-Year Economic and Fiscal Outlook.

However, Australia lacks an ongoing institutional framework to build resilience against cyberattacks on non-governmental democratic institutions.

The cyber resilience of these institutions falls through the cracks of our current security arrangements.

Government security agencies provide robust cybersecurity protections for parliamentary email systems, but these protections stop when MPs use private emails, social media accounts, privately hosted websites and smartphone apps.

Home Affairs, the Australian Security Intelligence Organisation, the Australian Signals Directorate and the Department of Parliamentary Services all have some indirect responsibility here, but none take ownership of the issue.

While I’m sure there would be a significant incident response in the wake of a successful attack, there’s little being done to prevent attacks in the first place—or to build resilience through our information system to mitigate the impact of such attacks once they occur.

There’s no capacity-building program for our democratic institutions, no targeted cyber hygiene training, no real-time sharing of threat intelligence, no assistance with vulnerability assessments, no monitoring of logs.

Nor are there any public awareness campaigns on the nature of this threat to our sovereignty, or any clear institutional responsibility for identifying and informing the public about cyber-enabled foreign interference. The government hasn’t even given any advice on which smartphone apps to avoid.

The government recently released a consultation paper on Australia’s arrangements for protecting critical infrastructure and systems of national significance.

This paper frames an expanded definition of critical infrastructure as infrastructure supporting services ‘crucial to Australia’s economy, security and sovereignty’.

Despite the demonstrable threat that cyberattacks on non-governmental democratic institutions pose to our sovereignty, the paper fails to address this challenge.

This oversight is striking in the context of a recent speech by Home Affairs Minister Peter Dutton to The Age and Sydney Morning Herald’s National Security Summit in which he correctly warned of the threats to democratic institutions from foreign interference.

Yet when the vector is a hack-and-leak campaign against these targets, the government remains blind to the threat.

As a result, these non-governmental democratic institutions are left to face advanced persistent threats from sophisticated state-backed hackers largely on their own.

It’s not a fair fight, and the stakes couldn’t be higher.

The Morrison government likes to talk big on fighting foreign interference, but if it’s serious it must start taking the threat of cyberattacks against non-governmental democratic institutions seriously.

Mitigating diffused security risks in Australia’s north: a case for digital inclusion

Australians’ daily reliance on digital communications infrastructure—from smartphones and social media platforms to the National Broadband Network—is changing the nature of national security risks.

Just like our networked communication patterns, contemporary security risks are becoming increasingly diffused—geographically dispersed, nonlinear in their causes and outcomes, and difficult to predict and contain.

Lessons from armed conflicts in other regions can help policymakers think about how to develop resilient social and communications infrastructure in Australia’s critical northern approaches.

The ongoing Russia–Ukraine conflict is one such case study. On 25 February 2014, polite uniformed men with no insignia inconspicuously took over the administrative buildings of Simferopol, Ukraine. These events represent a significant development in the conduct of modern-day warfare: not a shot was fired in the course of the Russian annexation of Crimea, in what is now known as the most significant breach of state borders since World War II.

In retrospect, the photos and news reports from those turbulent events in Crimea were deliberately obscure, which has become a distinctive feature of contemporary military conflicts. Due to their diffused nature, identifying and mitigating security risks calls for a combination of digital literacy and inclusion among the civilian population.

Following the annexation of Crimea, Russia repeated the same scenario in other eastern Ukrainian cities, including Donetsk and Luhansk. In response, citizens of Mariupol—the next strategic target of Russian interest on the map—took to social media to pre-emptively identify these patterns and mitigate the diffused and otherwise inconspicuous security threats in their city.

Grassroots open-source-intelligence communities are an emerging type of social infrastructure in which civilian networks rely on widely available information and communications technologies to build resilience to security threats. Thousands of Mariupol civilians spent their days and nights collecting and verifying intelligence from local social media posts and informants, and fighting the spread of false and misleading information about events in their city. This civilian effort became part of a coordinated response with local security services and the state military.

Arguably, had Mariupol been the first eastern Ukrainian city on the line of Russia’s ‘non-occupation’ tactics, or had it lacked the critical communications infrastructure at the time when these events were unfolding, it would likely have joined the ranks of the separatist republics. Yet, six years later, Mariupol firmly remains a part of Ukraine.

The success of the city’s citizen-led campaign makes a strong case for strategic investment in digital inclusion and digital literacy as a pathway for identifying and mitigating hybrid, externally orchestrated interventions. In a context where ‘every battle seems personal, but every conflict is global’, as argued by 21st century war experts P.W. Singer and Emerson T. Brooking, what lessons can be applied from the Crimean scenario to the Northern Territory?

Despite the absence of historical claims on the Northern Territory by other nations in the Asia–Pacific region, the two territories—pre-2014 Crimea and the Northern Territory—share some commonalities. Both bear a centuries-long legacy of colonial violence toward the Indigenous populations, which resulted in socioeconomic disparities that continue to shape the local context.

Both are home to large-scale infrastructure developments, including externally funded private-sector-initiated projects, and rely heavily on tourism. Both also have a fair degree of self-governance within a broader national framework yet have a strategic geopolitical significance in maintaining domestic and regional security.

The changing nature of contemporary military conflicts calls for the ability to effectively mitigate diffused security threats. Hybrid conflicts, which blur the distinctions between digital and physical battlefronts and between military and civilian actors, call for an expanded understanding of the role Australian civilians can play in supporting these strategic capabilities.

Countermeasures should extend beyond the cybersecurity domain and focus on two key aspects: first, supporting national efforts in expediting the NBN rollout to remote areas in the NT while also ensuring the service is affordable, especially for young people and marginalised groups; and second, strengthening civil society institutions and promoting public education campaigns on disinformation and media manipulation.

Contributing to Australia’s defence shouldn’t be the exclusive purview of the Australian Defence Force members. As the Ukrainian example demonstrates, committed citizens and community groups with high digital media skills and a good knowledge of the local context can become key actors in identifying hybrid, externally orchestrated interventions.

While a direct military attack on the Northern Territory may be unlikely, civilian resilience—the ability of citizens to identify and react to diffused security threats locally—is becoming paramount in maintaining domestic security in hybrid contexts.

In the present environment where most of us work, shop and socialise remotely, this combination of digital literacy and digital inclusion would feed into strengthening long-term civilian resilience capabilities and contribute to the defence of Australia’s north.

Policy, Guns and Money: Conflicts and Covid-19, election interference and cybercrime

In this episode, Lisa Sharland, head of ASPI’s international program, speaks to Robert Malley, president and CEO of International Crisis Group, about conflicts during Covid-19 and prospects for peace in Afghanistan. They also discuss Crisis Group’s annual ‘10 conflicts to watch’, including what’s changed since the last edition, and what might feature in the next edition later this year.

Next, The Strategist’s Brendan Nicholson and Anastasia Kapetas discuss the recently released US Senate Intelligence Committee report on Russian interference in the 2016 election and foreign interference in the US. (Hint: There was plenty.)

Finally, ASPI’s Tom Uren and John Coyne continue the conversation on Australia’s 2020 cybersecurity strategy, explaining where it falls short and what the challenges are in policing cybercrime.