
Social Credit

Technology-enhanced authoritarian control with global consequences

What’s the problem?

China’s ‘social credit system’ (SCS)—the use of big-data collection and analysis to monitor, shape and rate behaviour via economic and social processes1—doesn’t stop at China’s borders. Social credit regulations are already being used to force businesses to change their language to accommodate the political demands of the Chinese Communist Party (CCP). Analysis of the system is often focused on a ‘credit record’ or a domestic ranking system for individuals; however, the system is much more complicated and expansive than that. It’s part of a complex system of control—being augmented with technology—that’s embedded in the People’s Republic of China’s (PRC’s) strategy of social management and economic development.2 It will affect international businesses and overseas Chinese communities and has the potential to interfere directly in the sovereignty of other nations. Evidence of this reach was seen recently when the Chinese Civil Aviation Administration accused international airlines of ‘serious dishonesty’ for allegedly violating Chinese laws when they listed Taiwan, Hong Kong and Macau on their international websites.3 The Civil Aviation Industry Credit Management Measures (Trial Measures) that the airlines are accused of violating were written to implement two key policies on establishing the SCS.4

As businesses continue to comply, the acceptance of the CCP’s claims will eventually become an automatic decision and hence a norm that interferes with the sovereignty of other nations. For members of the public on the receiving end of such changes, the CCP’s narrative becomes the dominant ‘truth’ and alternative views and evidence are marginalised. This narrative control affects individuals in China, Chinese and international businesses, other states and their citizens.

What’s the solution?

Democratic governments must become more proactive in countering the CCP’s extension of social credit. This includes planning ahead and moving beyond reactive reciprocal responses. Democratic governments can’t force firms to refuse to comply with Beijing’s demands, but they also shouldn’t leave businesses alone to mitigate risks that are created by the Chinese state’s actions. Democratic governments should identify the potential uses of certain technologies with application to the Chinese state’s SCS that could have serious human rights or international security implications. Export controls that prevent supplying or cooperating to develop such technologies for the Chinese state would buy time, but this is only a short-term and partial solution. Where social credit extends beyond China’s borders, the penetration is often successful through the exploitation of existing weaknesses and loopholes in democratic countries. A large part of the solution for addressing these easily exploitable weaknesses is through strengthening our own democracies. Issues such as data protection, investment screening and civil liberties protection are most pressing. Transparency, while not a solution, will help to identify breaches and to prosecute abuses where necessary. Steps must be taken to shield overseas Chinese communities from the kinds of CCP encroachment that will only proliferate with a functioning and tech-enabled SCS.

China’s social credit system

China’s SCS augments the CCP’s existing political control methods. It requires big-data collection and analysis to monitor, shape and rate behaviour. It provides consequences for behaviour by companies and individuals who don’t comply with the expectations of the Chinese party-state. At its core, the system is a tool to control individuals’, companies’ and other entities’ behaviour to conform with the policies, directions and will of the CCP. It combines big-data analytic techniques with pervasive data collection to achieve that purpose.

Social credit supports the CCP’s everyday economic development and social management processes and ideally contributes to problem solving. That doesn’t make social credit less political, less of a security issue or less challenging to civil liberties. Instead, it means that the threats that this new system creates are masked through ambiguity. For the system to function, it must provide punishments for acting outside set behavioural boundaries and benefits that incentivise people and entities to conform voluntarily, or at least make participation the only rational choice.

Social credit and the technology behind it help the Chinese party-state to:

  • control discourse, promoting the party-state leadership’s version of the truth, both inside and outside China’s geographical borders
  • integrate information from market and government sources, optimising the party-state’s capacity to pre-empt and solve problems, including preventing emerging threats to the CCP’s control
  • improve situational awareness with real-time data collection, both inside and outside China’s geographical borders, to inform decision-making
  • use solutions to social and economic development problems to simultaneously augment political control.

Source: Created by Samantha Hoffman, June 2018.

Extending control outside the PRC’s borders

For decades, the CCP has reached beyond its borders to control political opponents. Tactics are not changing under Xi Jinping, but techniques and technology are. For example, in several liberal democracies, Chinese officials have harassed ‘Xi Jinping is not my president’ activists and their families after messages were posted to WeChat.5 Research for this report also found other examples of harassment, including attempts by Chinese officials to coerce overseas Chinese citizens to install surveillance devices in their businesses.6 More commonly, the CCP doesn’t exert control overseas with direct coercion. Instead, it uses ‘cooperative’ versions of control.

For example, a function of Chinese student and scholar associations — which are typically tied to the CCP7 — is to offer services such as airport pick-up.8 Beyond providing necessary services, these techniques reinforce the simple message that the CCP is everywhere (and so are its rules). Social credit embeds such existing processes in a new toolkit for regulatory and legal enforcement.

On 25 April 2018, the Chinese Civil Aviation Administration accused United Airlines, Qantas and dozens of other international airlines of ‘serious dishonesty’ for allegedly violating Chinese laws in how they listed Taiwan, Hong Kong and Macau on their websites.9 To clarify: those websites, which belong to international companies, are for global clients. The Chinese authorities said that failure to classify the places as Chinese territory would count against the airlines’ credit records and would lead to penalties under other laws, such as the Cybersecurity Law.

The Planning Outline for the Construction of a Social Credit System (2014–2020) (the Social Credit Plan) specifically identified ‘improving the country’s soft power and international influence’ and ‘establishing an objective, fair, reasonable and balanced international credit rating system’ as goals.10

The goals aren’t about credit ratings like those issued by Standard & Poor’s or Moody’s; they’re about ensuring state security. State security here, though, isn’t simply the protection of the state against domestic and foreign threats.11 It’s also about protecting the CCP and securing the ideological space both inside and outside the party. That task transcends geographical borders.

The Civil Aviation Industry Credit Management Measures that the airlines are accused of violating were written to implement two key policy guidelines on establishing China’s SCS. The measures are among many other implementing regulations of the Social Credit Plan. Social credit was used specifically in these cases to compel international airlines to acknowledge and adopt the CCP’s version of the truth, and so repress alternative perspectives on Taiwan. Shaping and influencing decision-making is a pre-emptive tactic for ensuring state security and party control. The CCP deals with threats by ‘combining treatment with prevention, but primarily focusing on prevention.’12 That doesn’t make the outcome less coercive.

Social credit records (for individuals and entities) are the outcome of data integration. Technical capacity for data collection and management, therefore, is the key to realising the envisioned SCS.13 Data integration and management don’t simply aid the process of putting individuals or entities on lists. They also support decision-making—some of which ideally will be done automatically through algorithms—and enhance the CCP’s awareness of the PRC’s internal and external environments. The key to understanding this aspect of social credit is the first line of the Social Credit Plan. The document says that social credit supports ‘China’s economic system and social governance system’.14 Social credit is about problem-solving, but it’s also designed to thrive on its own contradictions, just like the social governance process (hereafter ‘social management’) that it supports.15 Social management isn’t simply the management of civil unrest. As a concept, it requires the provision of services and the use of normal economic and social management to exert political control. Yet therein lies the contradiction: the Chinese state does not prioritise solving problems above political security. In fact, problem-solving is simultaneously directed at political security. The system will also increasingly rely on technology embedded in everyday life to manage social and economic development problems while simultaneously using the same resources to expand control. Understanding this dual-use nature of the SCS is the key: the system’s ability to solve and manage problems does not diminish its political or coercive capacity.

Credit records are global and political

A January 2018 article published by the Overseas Chinese Affairs Office of the State Council for the attention of ‘overseas Chinese and ethnic Chinese’ (华侨华人) warned that the Civil Aviation Industry Credit Management Measures also applied to them.16 Violations would lead to greylisting and blacklisting and would be included in individuals’ and organisations’ overall credit records, it said. Importantly, ‘overseas Chinese and ethnic Chinese’ can cover anyone who the CCP claims is ‘Chinese’, whether or not they have PRC citizenship. In addition to expatriates, it can include people who were never PRC citizens, such as citizens of Taiwan.17 A PRC-born person with citizenship in another country is also considered subject to the rules.18

Political uses for social credit’s implementing regulations might seem disconnected from the idea that credit records should create trust and encourage moral behaviour, but they are not. ‘Trust’ and ‘morality’ have dual meanings in the context of social credit. One side is focused on the reliability of an individual or entity, and the other on making the CCP’s position in power reliably secure. Trust and morality serve their purpose only if they’re created on the party’s terms and if they produce reliability in the CCP’s capacity to govern. So the language itself promotes the party’s authority and control.

The market and legal data that make up a person’s or entity’s credit record are intrinsically political, while input sources can be simultaneously political and non-political.19 For instance, Article 8, Section 3 of the Civil Aviation Industry Credit Management Measures sanctions individuals and entities for ‘a terrorist event’ or a ‘serious illegal disturbance’. Such disturbances could include safety incidents, such as a passenger opening an emergency exit door in a non-emergency.20 They could also include false terrorism charges against those considered political opponents, such as Uygurs (the CCP already uses false-charge tactics against individuals and NGOs).21 This year’s civil aviation cases are not an aberration. Similar demands on companies have accumulated since January 2018. For instance, the Shanghai Administration for Industry and Commerce fined Japanese retailer Muji’s Shanghai branch 200,000 yuan (A$41,381) over packaging that listed Taiwan as a country.22 The fine cited a violation of Article 9, Section 4 of the PRC advertising law, which sanctions any activity ‘damaging the dignity or interests of the state or divulging any state secret’. The violation was then recorded on the National Enterprise Credit Information Publicity System.

The timing of these cases coincides with a regulation that took effect on 1 January 2018, under which every company with a business licence in China was required to have an 18-digit ‘unified social credit code’. Every company whose licence didn’t yet designate its code was required to update the licence.23 Euphemistically, the code exists to ‘improve administrative efficiency’.24 In practice, ‘efficiency’ also means that any sanction filed on a company’s credit record can trigger sanctions under other relevant legislation. Similar cases may multiply after 30 June 2018, because unified social credit codes will also be required for government-backed public institutions, social organisations, foundations, private non-enterprise units, grassroots self-governing mass organisations and trade unions.25
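To illustrate the mechanism in the abstract, the sketch below shows how records filed by notionally separate regulators, once joined on a single shared identifier such as the unified social credit code, can drive automated, rule-based flagging. This is a minimal, hypothetical illustration in Python; the identifier, sources, event names and flagging rule are all invented and are not drawn from any published specification of the SCS.

```python
# Purely illustrative sketch: records from notionally separate regulators are
# joined on one shared identifier, after which a simple rule can flag the
# entity automatically. All identifiers, sources and event names are invented.

from dataclasses import dataclass, field

@dataclass
class CreditRecord:
    entity_id: str                        # shared identifier used to join data sources
    events: list = field(default_factory=list)

    def add(self, source: str, event: str) -> None:
        self.events.append((source, event))

def flag(record: CreditRecord, sanctionable: set) -> bool:
    """Return True if any integrated record matches a sanctionable event type."""
    return any(event in sanctionable for _, event in record.events)

# Events filed by different regulators against the same (hypothetical) code.
record = CreditRecord("91310000XXXXXXXXXX")
record.add("civil_aviation_regulator", "serious_dishonesty")
record.add("market_regulator", "advertising_violation")

print(flag(record, {"serious_dishonesty"}))   # True -> penalties can follow under other laws
```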

Generating ‘discourse power’ through data

An overlooked purpose of the SCS is to strengthen the PRC’s ‘discourse power’ or ‘right to speak’ (话语权).26 This can also be understood as the CCP’s pursuit of narrative control. Discourse power is ‘an extension of soft power, relating to the influence and attractiveness of a country’s ideology and value system’.27 Discourse power allows a nation to shape and control its internal and external environments.

In the hands of political opponents, discourse power is a potential threat. According to the CCP, ‘hostile forces’ can incite and exploit economic and social disorder in other countries.28 This threat has been tied directly to the leading international credit rating agencies—Moody’s Investors Service, Standard & Poor’s and Fitch Ratings—which are viewed as potential threats to China. One article claimed that the agencies can ‘destroy a nation by downgrading their credit score, utilising the shock power of “economic nukes”’.29 Another article tied the problem to the One Belt, One Road scheme (Belt and Road Initiative, BRI), because participant countries accept the current international ratings system. For the CCP, the solution is to increase the ‘discourse power [that China’s] credit agencies possess on the international credit evaluation stage’.30 China’s SCS provides an alternative to the existing international credit ratings system. It performs some of the same functions as the existing system, but it’s designed to give the Chinese state a more powerful voice in global governance. As we saw in the international airlines case, this louder voice is being used to exert influence on the operations of foreign companies.

Preventing the sort of credit crisis described above requires the CCP to control the narrative before a political opponent can seize it—in other words, it requires the CCP to strengthen its ‘discourse power’. Discourse power is directly embedded in the trust and morality that social credit is supposed to create in Chinese society, and not only because trust and morality help with everyday social and economic problem-solving. Trust and morality, in the way the Chinese state uses the terms, include as a core concept support for and adherence to CCP control and directions. This linkage can be traced at least as far back as an early 1980s propaganda effort related to ‘spiritual culture’, which responded to ‘popular disillusionment with the CCP’ and the promotion of Western politics as ‘superior’ to China’s.31

The concern only increased as China’s present-day perception of threat was shaped by events such as Tiananmen in 1989, Kosovo in 1999, China’s entry into the World Trade Organization and the ‘colour revolutions’ of the early 2000s. For instance, one article said that, despite mostly positive benefits from China entering the World Trade Organization, ‘Western civilisation-centred ideology, and aggressive Western culture can erode and threaten the independence and diversity of [China’s] national culture through excessive cultural exchanges.’32

One reason social credit contributes to strengthening the CCP’s discourse power is that the system relies on the collection and integration of data to improve the party’s awareness of its internal and external environments. In 2010, Lu Wei described in great detail the meaning of ‘discourse power’ as referring not only to the ‘right to speak’, but also to guaranteeing the ‘effectiveness and power of speech’.33 He elaborated that, for China, discourse power requires both collection power and communication power. Collection power is the ability to ‘collect information from all areas in the world in real time’. Communication power, which ‘decides influence’, becomes stronger with more timely collection.

Data collection supporting China’s environmental awareness doesn’t stop at the country’s borders. Social credit requires real-time monitoring through big-data tools that can inform decision-making and the implementation of the credit system. In 2015, Contemporary World, a magazine affiliated with the International Liaison Department, published an article focused on big-data collection associated with the BRI.34 It said that data could be used to inform diplomatic and economic decision-making, as well as emergency mobilisation capacity. ‘Data courier stations’ within foreign countries would send data via back-ends to a centralised analysis centre in China. Data collection would come from legal information mining, such as information on the internet and database purchases, and from market operations. The data courier stations would include ‘e-commerce (platforms), Confucius Institutes, telecoms, transportation companies, chain hotels, financial payment institutions and logistics companies’.35

The collection method and use of data would differ according to the source. The most obvious and practical reason for data collection at Confucius Institutes is to support teaching. Eventually, the same data would inform decisions on cultural exchange (ostensibly using Confucius Institute databases).36 The objective of ‘cultural exchange’ isn’t merely soft power creation. As ‘discourse power’ suggests, the CCP views ‘language’ as a ‘non-traditional’ state security issue and a means of influencing other states, businesses, institutions and individuals. One publication on the BRI linked to the propaganda department explained that ethnic minorities in China ‘use similar languages to others outside of our borders and are frequently subjected to hostile forces outside of the border’. To reduce the ‘security risk’, ‘resource banks’ or ‘language talent’ projects would support the automatic translation of both Chinese and non-common ‘strategic languages’.37 Automatic translation would help to ‘detect instability in a timely manner, [assist] rapid response to emergencies, and exert irreplaceable intelligence values over the course of prevention, early warning and resolution of non-traditional security threats, in order to ensure national security and stability’.38

According to the Ministry of Education, automatic translation would be implemented through technologies such as big data, cloud computing, artificial intelligence and mobile internet.39 This kind of technology already supports online teaching platforms affiliated with Confucius Institutes, which are at least partly reliant on technology from Chinese firm iFlytek. In addition to language learning software, iFlytek develops advanced surveillance for ‘public security’ and ‘national defence’, including voice recognition and keyword identification.40 Data collection and integration serve the purpose of increasing real-time situational awareness and simultaneously support the SCS’s discourse power objectives.
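The keyword identification described above can be illustrated, in a deliberately simplified way, as scanning an already-translated transcript for watch-list terms. The watch list, the transcript and the assumption that machine translation has already been applied are all hypothetical; this sketch is not a description of iFlytek’s or any other vendor’s actual software.

```python
# Deliberately simplified sketch of keyword identification over an
# already-translated transcript. The watch-list terms and the transcript are
# invented; real systems would be far more sophisticated.

WATCH_LIST = {"protest", "strike", "petition"}

def flag_transcript(translated_text: str, watch_list: set) -> set:
    """Return the watch-list terms that appear in the transcript."""
    words = {w.strip(".,!?").lower() for w in translated_text.split()}
    return words & watch_list

print(flag_transcript("Workers plan to petition the city government.", WATCH_LIST))
# -> {'petition'}
```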

Technology, social management and economic development

The CCP saw crises such as the colour revolutions in Central Asia and Europe as illustrations of potential risks to its own power in China. Increasing the party’s discourse power has been justified as one response. The CCP’s perception of its exposure to risk increased with events such as the milk powder scandal in 2008 and the SARS outbreak between 2002 and 2003.41 Each crisis revealed significant problems with the PRC’s crisis prevention and response capacity due to a combination of political, logistical and technical faults.42 The SCS is part of an attempt to address those faults and to prevent the party’s competence or legitimacy from being questioned.

An innocuous line in the Social Credit Plan called for ‘the gradual establishment of a national commodity circulation (supply chain) traceability system based on barcodes and other products’.43 Barcodes are commonly used in supply-chain management to improve product traceability. ‘Other products’ include radio-frequency identification (RFID), which is also used for supply-chain management. RFID is an electronic tagging technology, readable through sensors or satellites, that ‘would gradually replace barcodes in the era of the internet of things’.

Most narrowly and directly, ‘barcodes and other products’ will help to manage food safety and health risks. The integration of information, supported by technology, facilitates risk identification. As technology’s ability to identify risks improves, the government will be better able to regulate behaviours that heighten ‘risk’, as defined and perceived by the CCP. As a result, potentially destabilising crises can be prevented through the optimisation of everyday governance tasks.
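As a purely illustrative sketch of what item-level traceability involves, the snippet below records each scan of a tagged item and lets that history be queried later, for example during a food-safety recall. The identifiers, locations and data model are invented for illustration and are not taken from any Chinese system documentation.

```python
# Illustrative sketch of item-level traceability: every barcode or RFID scan
# appends a timestamped event to the item's history, which can later be
# queried (e.g. to trace a contaminated batch). All values are invented.

from datetime import datetime

trace_log = {}   # item_id -> list of (timestamp, location) events

def record_scan(item_id: str, location: str) -> None:
    """Append a timestamped scan event for a tagged item."""
    trace_log.setdefault(item_id, []).append((datetime.now(), location))

def history(item_id: str) -> list:
    """Return the recorded movement history of an item."""
    return trace_log.get(item_id, [])

record_scan("RFID-0001", "dairy processing plant, Hebei")
record_scan("RFID-0001", "distribution centre, Beijing")
print(history("RFID-0001"))
```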

In future, the technologies used for supply-chain management will form an integral part of China’s development of ‘smart cities’. Smart cities in China harness ‘internet of things’ technology in support of resource optimisation and service allocation for both economic development and social management. A plan for standardising smart cities in China said that data mining using chips, sensors, RFID and cameras contributes to processes such as ‘identification, information gathering, surveillance and control’ of infrastructure, the environment, buildings and security within a city.44 Data mining covers such areas as ‘automatic analysis, classification, summarization, discovery and description of data trends’, and can be applied to decision-making about a city’s ‘construction, development and management’.45
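To ground the quoted description of data mining, the following is a small, hypothetical illustration of one such task: summarising streams of sensor readings and flagging values that deviate markedly from the norm. The sensor names, readings and threshold are invented and are not taken from the cited standardisation plan.

```python
# Hypothetical illustration of 'automatic analysis and summarization' of city
# sensor data: summarise each sensor's readings and flag values more than
# 1.5 standard deviations from the mean. Sensor names and values are invented.

from statistics import mean, stdev

readings = {
    "traffic_junction_12": [40, 42, 39, 41, 95],   # vehicles per minute
    "air_quality_site_3": [55, 57, 54, 56, 58],    # pollution index
}

for sensor, values in readings.items():
    avg, sd = mean(values), stdev(values)
    anomalies = [v for v in values if abs(v - avg) > 1.5 * sd]
    print(f"{sensor}: mean={avg:.1f}, anomalies={anomalies}")
```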

All of these things contribute to building the capacity to make decisions and prevent threats from emerging by early intervention. Social credit will require big-data integration and data recording through information systems. Real-time decision-making capabilities are central to the success of the monitoring and assessment systems discussed in the Social Credit Plan, particularly in areas such as traffic management and e-commerce. Decision-making is enabled through ‘decision support systems’, which provide support for complex decision-making and problem solving.46 In China, present-day research emerges from a field called ‘soft science’ (软科学) that developed in the 1980s.47

Soft science is defined in China as a ‘system of scientific knowledge sustaining democratic and scientific decision-making’ that can be used in China to ‘ensure the correctness of our decision-making and the efficacy of our execution.’48 Correctness has as much a political meaning as its more usual one.

The use of decision support systems directly contributes to mechanisms for crisis prevention and response planning. Technologies such as barcodes and RFID are found in the logistical mobilisation strategies of many countries, not just the PRC. In China, however, civilian resources are multi-use, with simultaneous economic and social development and political control functions. The same systems support mobilisations for crises. At a study session on a speech that Xi Jinping gave at the 13th National People’s Congress, delegates from the People’s Liberation Army and People’s Armed Police learned about ‘infrastructure construction and resource sharing’. Efforts to improve those areas would support a ‘coordinated development of social services and military logistics’, while utilising various strategic resources and strengths in areas such as politics, the economy, the military, diplomacy and culture.49

This integration of technology with social management, political control and economic development brings back into focus the concept of discourse power. Like the other aspects of social credit, those systems don’t stop at China’s borders. As part of the BRI, China plans to leverage smart cities, and technologies such as 5G, to ‘create an information superhighway’.50 Combined with channels for information collected from projects ranging from logistics to e-commerce or Confucius Institutes, information can be integrated to support social credit objectives such as increased discourse power.

Future challenges and recommendations

Exactly how social credit will develop isn’t entirely clear, because the system is a multi-stage, multi-decade project. To deal with the international consequences of social credit, foreign governments must act now, while also applying long-term strategic thinking and commitment to addressing the system’s international elements. Although China’s development of the SCS can’t be stopped, its progress can be delayed and the system’s coercive aspects reduced while better solutions for dealing with the problem are found.

Recommendation 1: Control the export of Western technologies and research already used in—and potentially useful to—the Chinese state’s SCS.

Recommendation 2: Review emerging and strategic technologies, paying particular attention to university and research institute partnerships.

Controlling the export of Western technology is a key short-term solution. Governments should review strategic and emerging technologies that are already or could be used in the SCS. Universities and research organisations partnering with Chinese counterparts and contributing to the development or implementation of the CCP’s SCS should be included in this review. Universities can’t be blind to the impact and end uses of research that they conduct or contribute to with overseas partners. Besides the clear political and social control purposes, contributing to such a system also doesn’t align well with the ethical frameworks of most Western universities’ research; nor is it good for their global reputations. The findings of such reviews should help Western governments determine where to control access and what legislation is therefore appropriate.

Obvious starting points would be preventing situations such as, for example, the University of Technology Sydney’s Global Big Data Technologies Centre accepting $20 million from the state-owned defence enterprise China Electronics Technology Group Corporation (CETC).51 CETC is one of the key state-owned enterprises behind China’s increasingly sophisticated video surveillance apparatus, including facial recognition systems and scanners. One of the University of Technology Sydney’s most recent CETC-funded projects, begun in 2018, is in fact research on a ‘public security online video retrieval system’.52 Another example that highlights policy gaps is the recently reported case in which surveillance technology developed by Duke University and originally intended for the US Navy was sold into China with ‘clearance from the US State Department’ because the technology failed to secure backing in the US.53

Recommendation 3: Strengthen democratic resilience to counter foreign interference.

At least part of the solution requires acknowledgement that the spread of social credit beyond China’s borders takes advantage of easily exploitable weaknesses. The problems are compounded when a government opposed to liberal democratic values and institutions exploits those weaknesses. Australia’s foreign interference law could provide a framework for other countries looking to deal with the problem via legislation, as increased transparency is a foundation for an informed response.

Recommendation 4: Fund research to identify dual-purpose technologies and data collection systems.

While it isn’t a complete solution, funding research that contributes to greater transparency and public debate about China’s SCS is very important. Understanding what the Chinese state is doing, and what the implications are for other countries, requires asking the right questions. The problem is not just technology per se, but the ways in which processes and information are used to feed into and support the SCS, as well as other technology-enabled methods of control.

Recommendation 5: Governments and entities must strengthen data protection.

A crucial step is to limit the way data can be exported, used and stored overseas. Auditing should be conducted to ensure that any breaches are detected and to identify loopholes. For example, in the case of Confucius Institutes mentioned above, any data collected for any purpose should be stored using university-owned hardware and software, and only in university-operated databases. In the case of any violations, the university’s obligations to protect privacy and personal data on individuals that it holds should be enforced.

Recommendation 6: New legislation should reflect that this is also a human rights issue.

China’s SCS is not only an issue of political influence and control internationally. It’s also a human rights issue, and new legislation should reflect that. Through contributions to smart cities development in China, for example, Western companies are providing support to build a system that has multiple uses, including uses that are responsible for serious human rights violations. The US’s Global Magnitsky Act is an example of the type of legislation that could be used to hold companies and entities accountable for—willingly or not—enabling the Chinese party-state’s human rights violations.

Recommendation 7: Support companies threatened by China’s social credit system.

Western governments need to more actively and publicly support the private sector in mitigating risks that are created by the SCS. This should include collective counter-measures that impose costs for coercive acts.

Recommendation 8: Overseas Chinese communities must be protected from social credit’s overseas expansion.

Western governments must take steps to protect overseas Chinese from the kinds of CCP encroachment that have taken place for decades but that are now increasingly augmented through a functioning and tech-enabled SCS. Democratic governments must ensure that they legislate against the implementation and use of China’s SCS across and within their borders.


Acknowledgements

The author would like to thank Danielle Cave, Didi Kirsten Tatlow, Dimon Liu, Gregory Walton, Kitsch Liao, Nigel Inkster, Peter Mattis, Fergus Ryan and Rogier Creemers, as well as the Mercator Institute for China Studies. Disclaimer: All views and opinions expressed in this article are the author’s own, and do not necessarily reflect the position of any institution with which she is affiliated.

Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.

© The Australian Strategic Policy Institute Limited 2018

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers.

  1. Samantha Hoffman, ‘Managing the state: social credit, surveillance and the CCP’s plan for China’, China Brief, Jamestown Foundation, 17 August 2017, 17(11), online.
  2. Concepts summarised in this paper, including on social management, pre-emptive control, social credit and the ‘spiritual civilisation’, crisis response and threat perceptions, are drawn from my PhD thesis: Samantha Hoffman, ‘Programming China: the Communist Party’s autonomic approach to managing state security’, University of Nottingham, 29 September 2017.
  3. China Civil Aviation Administration General Division, ‘关于限期对官方网站整改的通知’ (‘Notice Relating to Rectification of the Official Website within a Specified Timeframe’), 25 April 2018; James Palmer, Bethany Allen-Ebrahimian, ‘China threatens US airlines over Taiwan references’, Foreign Policy, 27 April 2018, online; Josh Rogin, ‘White House calls China’s threats to airlines “Orwellian nonsense”’, The Washington Post, 5 May 2018, online.
  4. The two key guidance documents directly referred to in the opening of the Civil Aviation Industry Credit Management Measures (Trial Measures) are the Planning Outline for the Construction of a Social Credit System (2014–2020) and 关于印发《民航行业信用管理办法(试行)》的通知 (Notice on Issuing the Civil Aviation Industry Credit Management Measures (Trial Measures)), 7 November 2017.

Technological entanglement

Cooperation, competition and the dual-use dilemma in artificial intelligence

What’s the problem?

Despite frequent allusions to a race—or even an ‘arms race’—in artificial intelligence (AI), US leadership and China’s rapid emergence as an AI powerhouse also reflect the reality of cooperation and engagement that extend across the boundaries of strategic competition.1 Even as China and the US, the world’s emergent ‘AI superpowers’,2 are increasingly competing in AI at the national level, their business, technology and research sectors are also deeply ‘entangled’ through a range of linkages and collaborations. That dynamic stems from and reflects the nature of AI research and commercialisation—despite active competition, it is open and often quite collaborative.3 These engagements can, of course, be mutually beneficial, but they can also be exploited through licit and illicit means to further China’s indigenous innovation and provide an asymmetric advantage.4 The core dilemma is that the Chinese party-state has demonstrated the capacity and intention to co-opt private tech companies and academic research to advance national and defence objectives in ways that are far from transparent. 

This has resulted in a ‘dual-use dilemma’ in which the openness that’s characteristic of science and innovation in democracies can result in unforeseen consequences, undermining the values, interests and competitiveness of the US, Australia and other like-minded nations in these strategic technologies.5 These ‘entanglements’ have included ties between US tech firms and Chinese partners with military connections,6 as well as cooperation between Australian universities and the Chinese People’s Liberation Army (PLA).7 Despite the genuine advantages they may offer, such problematic partnerships can also result in the transfer of dual-use research and technologies that advance Chinese military modernisation, perhaps disrupting the future balance of power in the Indo-Pacific, or facilitate the party-state’s construction of surveillance capabilities that are starting to diffuse globally.

These adverse externalities have troubling implications for US military advantage, authoritarian regime resilience and even the future of democracy.8 How should policymakers balance the risks and benefits of such entanglement,9 while enhancing competitiveness in this strategic technology?

What’s the solution?

These unique and complex dynamics require a range of policy responses that balance the risks and benefits of these partnerships, collaborations and engagements. To enhance situational awareness, policymakers should closely examine research, academic and commercial partnerships that may prove problematic, and then consider updates and revisions to national export controls, defence trade controls and investment review mechanisms as targeted countermeasures. While there is a rationale for visa screening of foreign nationals who plan to study or research sensitive technologies, restrictions should be imposed only on the basis of evidence of direct and clear connections to foreign militaries, governments or intelligence services,10 and scrutiny should focus more on organisations engaging in talent recruitment that are linked to the Chinese central and local governments or to the Chinese Communist Party (CCP). At the same time, there are compelling reasons to sustain scientific cooperation, with safeguards for risk mitigation, including transparency and the protection of sensitive data.

Critically, the US and Australia must pursue policies that actively enhance the dynamism of their own innovation ecosystems to ensure future competitiveness. It is vital to bolster declining support for science and commit to increasing funding for basic research and the long-term development of strategic technologies. Given the criticality of human capital, governments should prioritise improving the accessibility and affordability of STEM education at all levels, while attracting and welcoming talent through favourable immigration policies. In this quest for competitive advantage, the US and Australia must also pursue closer public–private partnerships and expand alliance cooperation on defence innovation.

AI ‘without borders’

Today, national competition in AI is intensifying at a time when the engine for technological innovation in such dual-use technologies has shifted from governments to commercial enterprises. In today’s complex, globalised world, flows of talent, capital and technologies are rapid, dynamic and not readily constrained by borders. Chinese investments and acquisitions in Silicon Valley—and US investments in China—are sizable and increasing, despite intense concerns about the security risks of such investments,11 which have motivated reforms to the Committee on Foreign Investment in the United States (CFIUS) and could result in discretionary implementation of China’s national security review mechanism in response.12 This increased globalisation of innovation ecosystems has proven beneficial to AI development, and dynamic US and Chinese companies are emerging as world leaders in the field.

Increasingly, these enterprises are quite international in their outlook, presence and workforce while engaging in a global quest for talent.13 For the time being, the US remains the centre of gravity for the top talent in AI, and Silicon Valley is the epicentre of this talent ‘arms race’.14 While currently confronting major bottlenecks in human capital, China has great potential, given the number of graduates in science and engineering and the range of new training and educational programs dedicated to cultivating AI talent.15 At the same time, the Chinese government is actively incentivising the return and recruitment of ‘strategic scientists’ via state talent plans.16 At the forefront of the AI revolution, Baidu and Google epitomise in their strategic decisions and activities the linkages and interconnectivity among such global centres of innovation as Silicon Valley and Beijing.17

Baidu has prioritised AI and has emerged as a leading player in this domain. It created the Institute for Deep Learning in Beijing in 2013 and then established its Silicon Valley Artificial Intelligence Laboratory (SVAIL), which employs about 200 people, in 2014.18 Baidu’s CEO, Li Yanhong (李彦宏, or Robin Li), advocated as early as 2015, prior to the Chinese Government’s decision to prioritise AI, for a ‘China Brain’ plan that would involve a massive national initiative in AI, including welcoming military funding and involvement.19

Increasingly, Baidu has actively invested in and acquired US AI start-ups, including xPerception and Kitt.ai,20 while seeking to expand its US-based workforce. The company has stated that Silicon Valley ‘is becoming increasingly important in Baidu’s global strategy as a base for attracting world-class talent.’21 In March 2017, Baidu announced plans to establish a second laboratory in Silicon Valley, which is expected to add another 150 employees.22 Notably, Baidu has also launched the Apollo project, which is a collaborative initiative to advance the development of self-driving cars that involves more than 100 tech companies and automakers, including Ford, NVIDIA, and Microsoft.23 At the same time, Baidu is engaged in research on military applications of AI, particularly command and control.24

Google remains at the forefront of AI development, leveraging an international presence and global workforce. Beyond Silicon Valley, Google has opened AI research centres in Paris, New York and Tokyo,25 and it will soon add Beijing and then Accra, Ghana.26 When Google announced the opening of the Google AI China Center in December 2017, chief scientist Fei-Fei Li declared, ‘I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing, or anywhere else, it has the potential to make everyone’s life better for the entire world.’27 She emphasised, ‘we want to work with the best AI talent, wherever that talent is, to achieve’ Google’s mission.28

Google’s decision to expand its presence and activities in China, after withdrawing its search product from the Chinese market in 2010 due to concerns over censorship, surveillance and the theft of intellectual property via cyber espionage,29 reflects this enthusiasm for the potential of future talent in China—and probably the availability of a sizable market and massive amounts of data as well.30 At the same time, this decision presents an interesting counterpoint to Google’s recent issuing of a statement of principles that included a commitment not to build technologies used for surveillance.31 Given the dual-use nature of these technologies, Google’s choice to engage in China may involve risks and raise ethical concerns,32 especially considering the Chinese party-state’s agenda for and approach to AI.

China’s global AI strategy and ambitions

At the highest levels, the Chinese Government is prioritising and directing strong state support to AI development, leveraging and harnessing the dynamism of tech companies that are at the forefront of China’s AI revolution. The New Generation Artificial Intelligence Development Plan (新一代人工智能发展规划), released in July 2017, recognised this strategic technology as a ‘new focal point of international competition’, declaring China’s intention to emerge as the world’s ‘premier AI innovation centre’ by 2030.33 The Three-Year Action Plan to Promote the Development of New-Generation Artificial Intelligence Industry (促进新一代人工智能产业发展三年行动计划) (2018–2020), released in December 2017, called for China to achieve ‘major breakthroughs in a series of landmark AI products’ and ‘establish international competitive advantage’ by 2020.34 China’s central and local governments are providing high and ever-rising levels of funding for research and development on next-generation AI technologies, while seeking to create a robust foundation for innovation by introducing new talent and education initiatives, developing standards and regulatory frameworks, and supporting the availability of data, testing and cloud platforms.35

China’s ambition to ‘lead the world’ in AI is self-evident.36 These plans and policies should be contextualised by its tradition of techno-nationalism and current aspirations to emerge as a ‘science and technology superpower’ (科技强国).37 In recent history, indigenous Chinese innovations, particularly defence technological developments, have been advanced and accelerated through licit and illicit means of tech transfer, including extensive industrial espionage.38 However, pursuing a new strategy of innovation-driven development,39 China is actively seeking to progress beyond more absorptive approaches to innovation and instead become a pioneer in emerging technologies, including through increasing investment in basic research.40 To further this agenda, the Chinese government is avidly targeting overseas students and scientists, offering considerable incentives via talent plans and engaging in recruitment via ‘talent bases’ and organisations that are often linked to the CCP or to central or local governments.41,42

At this point, the success of these initiatives remains to be seen, and there are even reasons to question whether an AI bubble may arise due to excessive enthusiasm and investments. Although China’s future potential for innovation shouldn’t be dismissed or discounted, this ‘rise’ in AI often generates alarm and exuberance that can distract from recognition of major obstacles that remain. As its plans openly admit, China continues to lag behind the US in cutting-edge research and is attempting to compensate for current shortfalls in human capital.43 Notably, China confronts continued difficulties in the development of indigenous semiconductors,44 which will be critical to the hardware dimension of future advances in AI,45 despite billions in investment and quite flagrant attempts to steal intellectual property from US companies.46

While gradually becoming more capable of truly independent innovation, China also intends to coordinate and optimise its use of both domestic and international ‘innovation resources’.47 Notably, the New Generation AI Development Plan calls for an approach of ‘going out’ (走出去) involving overseas mergers and acquisitions, equity investments and venture capital, along with the establishment of R&D centres abroad.48 For instance, a subsidiary of the China Electronics Technology Group Corporation (CETC), a state-owned defence conglomerate, established an ‘innovation centre’ in Silicon Valley in 2014, which seeks to take advantage of that ecosystem with a focus on big data and other advanced information technologies.49 In Australia,50 CETC established a joint research centre with the University of Technology Sydney (UTS) in April 2017, which will focus on AI, autonomous systems and quantum computing.51 Starting in 2018, CETC’s Information Science Academy is also funding a project at UTS on ‘A Complex Data Condition Based Public Security Online Video Retrieval System’, which could have clear applications in surveillance.52 There have been extensive collaborations on dual-use AI technologies between PLA researchers from the National University of Defence Technology and academics at UTS, the University of New South Wales and the Australian National University.53

Meanwhile, Huawei is actively funding research and pursuing academic partnerships in the US and Australia, including through its Huawei Innovation Research Program.54 China’s ‘One Belt, One Road’ strategy is also concentrating on scientific and technological cooperation, including educational exchanges and research partnerships, such as a new Sino-German joint AI laboratory.55 Some of these new collaborations will focus on robotics and AI technologies, often enabling access to new sources of data that may facilitate China’s emergence as a global leader in AI development.56 In certain instances, China’s provision of funding to these initiatives may also reorient the direction of research based on its own priorities.57

As China seeks to advance indigenous innovation, the strategy of ‘going out’ is complemented by a focus on ‘bringing in’ (引进来) to ensure that vital talent and technologies are drawn back into China.58 At the same time, the Chinese government is evidently seeking to ensure that innovation ‘made in China’ will stay in China. As the US undertakes reforms to CFIUS, China could respond by recalibrating the implementation of its own national security review process, which is ambiguous enough to allow for great discretion in its application, pursuant to an expansive concept of national or state security (国家安全).59 Notably, the State Council has also issued a new notice that requires that scientific data generated within China be submitted to state data centres for review and approval before publication.60 The policy purports to promote open access to and sharing of scientific data within China, while creating ambiguous new restrictions that, depending upon their implementation, could render future cooperation asymmetrical in its benefits.61 Given these factors, while opportunities for research cooperation should often be welcomed, it is also important to ensure transparency regarding the research and intellectual property that may result from it, as well as the security of valuable or sensitive datasets.

China’s integrated approach to indigenous innovation

In pursuit of its dreams of AI dominance, China is pioneering a new paradigm of indigenous innovation that takes advantage of critical synergies through creating mechanisms for deeper integration among the party-state, technology companies and the military. The CCP seeks not only to support private Chinese companies in their quest for innovation but also to control and guide them, ensuring that the companies serve the needs of the party and don’t become a threat to it. China’s ‘champions’ in AI—Baidu, Alibaba, Tencent and iFlytek—are at the forefront of innovation in the field, and this ‘national team’ will be supported and leveraged to advance state objectives and national competitiveness.62

For instance, Baidu is leading China’s National Engineering Laboratory for Deep Learning Technologies and Applications (深度学习技术及应用国家工程实验室),63 and iFlytek is leading the State Key Laboratory of Cognitive Intelligence (认知智能国家重点实验室).64 It seems likely that the research in these new laboratories will be directed to dual-use purposes. These champions will also undertake the development of new open innovation platforms in AI: Baidu will be responsible for autonomous vehicles, Alibaba Cloud (Aliyun) for smart cities, Tencent for medical imaging and iFlytek for smart voice (e.g. speech recognition, natural-language processing and machine translation).65 The platforms will be piloted in the Xiong’an New Area, a development southwest of Beijing that’s intended to be a futuristic demonstration of Chinese innovation and to showcase AI technologies and applications in action.66

Meanwhile, Xi Jinping has recently reaffirmed the Mao-era sentiment that ‘the party leads everything’, and China’s advances in AI must also be understood in the context of this system, in which the CCP is steadily increasing its control over private companies.67 In recent years, the CCP has introduced representatives of party branches and committees into notionally private companies,68 which have started to undertake more active ‘party building’ (党建) activities that are intended to expand the CCP’s presence and influence.69 Just about every major tech company, including Baidu, Alibaba, Tencent, Sohu, Sina and NetEase, has a party secretary, who is often a fairly senior figure within the company, and new rules may even require all listed companies to ‘beef up party building’.70 For example, in March 2017, the CCP Capital Internet Association Commission (中共首都互联网协会委员会) convened a party committee expansion meeting and a work meeting on grassroots party building that brought together the leaders of many prominent companies.71 At the meeting, Baidu Party Secretary Zhu Guang (朱光), who is also a Senior Vice President responsible for public relations and government affairs,72 talked about innovation in ‘party building work’, including the development of a mobile solution for ‘party building’. He committed Baidu to leveraging its capabilities in big data and AI applications, as well as its ‘ecological advantage’, to enhance the effectiveness of such efforts.73

This blurring of the boundaries between the party-state and its champions may create a tension between national strategic objectives and these companies’ global commercial interests.74 Increasingly, the CCP is even attempting to extend its reach into, and authority over, foreign companies operating in China.75

The dual-use dilemma in China’s AI development

The future trajectory of AI in China will inherently be shaped and constrained by the interests and imperatives of the party-state, and international collaboration with Chinese research institutions and corporate actors needs to be understood, and engaged in, with this important context in mind. Critically, AI will enhance both economic development and military modernisation, while reinforcing the party’s ability to control its population through domestic surveillance, all of which are integral to the regime’s security and legitimacy. China’s AI plans and policies include the requirement that AI remain ‘secure and controllable’ (安全、可控), given the risks of societal disruption, while highlighting the importance of AI ‘to elevate significantly the capability and level of social governance, playing an irreplaceable role in effectively maintaining social stability’, thus bolstering regime security.76

Indeed, the pursuit of such ‘innovations’ in social governance through big data and AI has included the construction of predictive policing and surveillance capabilities, often developed with the assistance of start-ups such as SenseTime and Yitu Tech, that have often been abused, particularly in Xinjiang.77 Given the party’s attempts to extend its reach—and the trend towards deeper integration in civilian and military AI efforts in China—it can be difficult to disentangle notionally commercial activities from those directly linked to the party-state’s agendas for social control, indigenous innovation and military modernisation.

China seeks to take full advantage of the dual-use nature of AI technologies through a national strategy of ‘military–civil fusion’ (军民融合). This high-level agenda is directed by the CCP’s Military–Civil Fusion Development Commission (中央军民融合发展委员会) under the leadership of President Xi Jinping himself.78 Through a range of policy initiatives, China intends to ensure that advances in AI can be readily turned to dual-use applications to enhance national defence innovation. Although the effective implementation of military–civil fusion in AI may involve major challenges, this approach is presently advancing the creation of mechanisms and institutions that can integrate and coordinate R&D among scientific research institutes, universities, commercial enterprises, the defence industry and military units.79 For instance, in June 2017, Tsinghua University announced its plans to establish a Military–Civil Fusion National Defence Peak Technologies Laboratory (清华大学军民融合国防尖端技术实验室) that will create a platform for the pursuit of dual-use applications of emerging technologies, especially AI.80 Notably, in March 2018, China’s first ‘national defence science and technology innovation rapid response small group’ (国防科技创新快速响应小组) was launched by the CMC Science and Technology Commission in Shenzhen,81 and is intended to ‘use advanced commercial technologies to serve the military.’82

China’s AI ‘national champions’ may often be engaged in support of this agenda of military–civil fusion. Notably, in January 2018, Baidu and the 28th Research Institute of China Electronics Technology Group Corporation (CETC), a state-owned defence conglomerate, established the Joint Laboratory for Intelligent Command and Control Technologies (智能指挥控制技术联合实验室), located in Nanjing.83 The CETC 28th Research Institute is known as a leading enterprise in the development of military information systems, specialising in command automation systems,84 and it seeks to advance the use of new-generation information technology in defence ‘informatization’ (信息化).85

This partnership is directly linked to China’s national strategy of military–civil fusion, leveraging the respective advantages of CETC and Baidu to realise the potential of big data, artificial intelligence and cloud computing. Going forward, the new joint laboratory will focus on increasing the level of ‘intelligentization’ (智能化) in command information systems, as well as designing and developing new-generation command information systems ‘with intelligentization as the core’. Baidu’s involvement in this new laboratory reflects its active contribution to military–civil fusion, a strategy that is resulting in a further blurring of boundaries between commercial and defence developments.

Policy considerations and recommendations

There is no single or simple solution, and policy responses must take into account the inherent complexities of these global dynamics, which necessitate highly targeted and nuanced measures to mitigate risk.86 At the same time, real and serious concerns about China’s exploitation of the openness of our democracies must not lead to reactive or indiscriminate approaches that could cause collateral damage to the inclusivity and engagement that are critical to innovation.

The benefits of scientific collaboration are compelling, and continued cooperation should be supported, with appropriate awareness and safeguards. In future, the quest to achieve an advantage in emerging technologies will only intensify, and the US and Australia must also look to enhance their own competitiveness in these strategic technologies.87

The options for policy response include, but aren’t limited to, the measures detailed below.

Strengthen targeted, coordinated countermeasures.

1: Review recent and existing research and commercial partnerships on strategic technologies that involve support and funding from foreign militaries, governments or state-owned/supported enterprises, evaluating the dual-use risks and potential externality outcomes in each case.

  • Evaluate early-stage research to determine the likelihood that it may turn out to have disruptive dual-use implications in the future.
  • Present a public report with findings and recommendations to raise awareness and ensure transparency.
  • Continue to push back against forced tech transfer in joint ventures.88

2: Explore updates and revisions to national export controls, defence trade controls and investment review mechanisms that take into account the unique challenges of dual-use commercial technologies; communicate those updates clearly and publicly to relevant stakeholders.

  • Share lessons learned and pursue coordination with allies and partners to account for the global scope and scale of these dynamics.
  • Ensure that these restrictions are applied to sensitive datasets associated with AI development, including data used for training purposes.

3: Engage in visa screening of foreign nationals who plan to study or research sensitive or strategic technologies, targeting scrutiny on the basis of whether or not students or researchers have direct and clear connections to foreign militaries, governments or intelligence services.

  • Deny visas to those who are determined to be likely to leverage their studies or research in support of a foreign military that is not a security partner.
  • Incorporate an independent review mechanism into the process to assess evidentiary standards and mitigate risks of bias in visa determinations.

4: Identify organisations engaging in talent recruitment that are linked to the Chinese central and local governments or to the CCP, and require their registration as foreign agents where appropriate.

5: Enhance counterintelligence capabilities, particularly by augmenting language and technical expertise.

Encourage best practices and safeguards for risk mitigation in partnerships and collaborations, with a particular focus on universities.

6: Introduce stricter accountability and reporting requirements, managed by departments of education, that make transparent the international sources of funding for research on strategic technologies.

7: Engage in outreach to companies, universities and think tanks in order to highlight the potential for risk or unintended externalities in joint ventures and partnerships, including through developing and presenting a series of case studies based on past incidents.

8: Propose best practices for future academic collaborations and commercial partnerships, including transparency about the terms for scientific data and intellectual property, as well as clear standards on ethics and academic freedom.

  • Identify favourable domains to sustain open collaboration and engagement, such as issues of safety and standards.

9: Introduce, or where appropriate adjust, policies or guidelines restricting those who work for national or military research institutes and laboratories, or who receive public funding above a certain level, from engaging with organisations that accept funding from or collaborate with a foreign military, state-owned enterprise or ‘national champion’ that is not an ally.

Go on the offensive through policies to enhance national competitiveness in technological innovation.

10: Increase and commit to sustaining funding for basic research and the long-term development of AI technologies.

11: Prioritise improving the accessibility and affordability of STEM education at all levels, including creating new scholarships to support those studying computer science, AI and other priority disciplines.

12: Sustain openness to immigration, welcoming graduating students and talented researchers, while potentially offering a fast-track option to citizenship.

13: Pursue closer public–private partnerships by creating new incubators and institutions that foster a more diverse and dynamic community for innovation.89

  • Encourage dialogue and engagement between the tech and defence communities on issues of law, ethics and safety.

14: Explore the expansion of alliance coordination and cooperation in defence innovation, including collaboration in research, development and experimentation with new technologies and their applications.

15: Engage with like-minded nations to advance discussions of AI ethics and standards, as well as potential normative and governance frameworks.


Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.

© The Australian Strategic Policy Institute Limited 2018

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers.


Big data in China and the battle for privacy


Introduction

If data is the new oil, China is oil super-rich. Data is the essential ingredient for artificial intelligence (AI) and is underpinning a wide-ranging revolution.

China’s massive population, lack of privacy protections, controlled tech sector and authoritarian system of governance give it a huge edge in collecting the data needed for that revolution (Figure 1). But the Chinese state and Chinese businesses are also using this wealth of data to pursue state and business goals without the constraints present in other jurisdictions. A lack of privacy protections and rule-of-law protections leaves Chinese citizens at the whim of sophisticated, and often state-controlled, data-driven technologies.

Private companies are not only sharing users’ personal data with the authorities, as required by China’s regulatory environment (most recently the Cybersecurity Law); many of those companies—including the industry leaders—are building their business models predominantly around the needs of the state.

The success of these technologies in enabling potential mass surveillance and exerting a chilling effect on individuals deserves more attention.

Figure 1: Top 20 internet populations, by country

This paper examines Chinese state policy on big data industries and analyses the laws and regulations on data collection that companies in China are required to comply with. It also looks at how those rules may affect foreign companies eyeing the China market. Case studies are included to demonstrate the ongoing tensions between big data applications and privacy. The paper concludes by outlining the implications and lessons for other countries.

An ambitious big data vision supported by China’s internet companies

China’s State Council has laid out an ambitious road map outlining its AI vision, which includes creating a US$150 billion industry and becoming the world leader in AI by 2030.1 Enormous state financial backing aside, a controlled tech industry,2 huge data availability and relatively scant privacy protections mean that China is well placed to become a global AI leader; or, to be more accurate, a leader in the development of big-data-driven technologies. 

China’s online ecosystem is unique compared to Western equivalents. Unlike their Silicon Valley competitors, Chinese technology and internet companies typically design their products to include not just one, but various types of services. Tencent’s WeChat, for example, China’s most popular mobile chat application, is more than an instant messaging app: it’s an all-in-one superapp. A billion active WeChat users now use it to chat with their friends and families, communicate with supervisors and work colleagues, play games, hail taxis, make online purchases and conduct financial investments.3 WeChat is now even used to handle sensitive government paperwork, such as visa applications, and could soon be used for entry into Hong Kong.4

Tencent vowed—openly and ambitiously—to become the fundamental platform for the Chinese internet: a platform ‘as vital as the water and electricity resources in daily life’.5 Alibaba’s Alipay, China’s Paypal-like e-payment service, has incorporated social functions through which it encourages users to share location data, personal information and purchasing habits with others. Combined with China’s real-name registration system,6 these consolidated functions enable the government and industry to effortlessly profile individual users. In addition, even when an individual’s information has been anonymised, their identity can still be re-identified by any interested party that can link two or more datasets containing the same user. In other countries, such identification would attract public concern, but research indicates a lack of awareness, and a willingness to trade off privacy for lower cost services, among Chinese consumers.7 For example, research comparing global consumers’ views on sharing personal information online found that consumers in China had a more lackadaisical attitude towards privacy protection than consumers in most Western countries.8
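The re-identification risk described above can be illustrated with a minimal sketch: two notionally anonymised datasets are joined on shared quasi-identifiers, re-attaching names to ‘anonymous’ records. Every name, field and value below is invented for illustration; real linkage attacks work the same way, only at far larger scale.

```python
# A minimal re-identification sketch: joining two notionally anonymised
# datasets on shared quasi-identifiers re-attaches names to records.
# All names, fields and values below are invented for illustration.
import pandas as pd

# Dataset A: 'anonymised' ride records (no names, but quasi-identifiers remain)
rides = pd.DataFrame({
    "birth_year":  [1990, 1985, 1990],
    "district":    ["Chaoyang", "Haidian", "Chaoyang"],
    "phone_model": ["Honor Magic", "iPhone X", "Mi 6"],
    "trip_dest":   ["hospital", "office", "school"],
})

# Dataset B: a separately obtained customer list that still carries names
loyalty = pd.DataFrame({
    "name":        ["Zhang Wei", "Li Na"],
    "birth_year":  [1990, 1985],
    "district":    ["Chaoyang", "Haidian"],
    "phone_model": ["Honor Magic", "iPhone X"],
})

# Linking on the shared quasi-identifiers re-identifies the 'anonymous' trips
# wherever the attribute combination is unique enough.
reidentified = rides.merge(loyalty, on=["birth_year", "district", "phone_model"])
print(reidentified[["name", "trip_dest"]])
```

The more services are consolidated in a single ecosystem, the more such quasi-identifiers overlap across datasets, and the easier this kind of linkage becomes.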

Big data analytics offers invaluable insights to inform the use and delivery of public goods, including increased public safety, law enforcement, resource allocation, urban planning9 and healthcare systems.10 But how data is collected and used affects a country’s digital ecosystem and its citizens’ social and political participation. How China’s regulatory environment handles these interactions is analysed in the following section.

Big data and public security

China is placing huge bets on big data, and a range of policies have been introduced over the past two years to flesh out the government’s vision. On 18 October 2017, Chinese President Xi Jinping promoted the integration of the internet, big data and AI with the real-world economy in his 19th Party Congress report.11 But China’s interest in big data dates back to the early 2010s. In July 2012, the State Council specifically mentioned the importance of ‘strengthening the development of basic software—especially those that are able to handle large volumes of data’ in a policy document under its 12th Five-Year Plan. The current administration has substantially expanded the conceptualisation of China’s big data vision.

Chinese Premier Li Keqiang, for example, proposed the concept of ‘Internet Plus’ (互联网+),12 calling for the integration of mobile internet, cloud computing, big data and the ‘internet of things’ with modern manufacturing in his March 2015 Government Work Report.13

In the months following Li’s report, China’s central government released a number of top-down designs and guidelines on big data policies (Table 1). By the end of 2016, various government bureaucracies14 and more than 20 provincial and municipal governments had issued their own regulations and development plans for big data industries.15 Unsurprisingly, most of these government initiatives and policies have a special interest in developing and supporting big data technologies that can be applied to the security sector. Security experts argue that these initiatives are also likely to feed into the emerging social credit system.16 Statistics from 2016 show that most of the government’s domestic investment in big data industries has gone to public security projects.17

Table 1: Major big data policies issued by the Chinese Government

Title | Issuer | Date issued | Main takeaways
Made in China 2025 《中国制造2025》, online | State Council | May 2015 | Lays out a road map for the transformation and upgrade of China’s traditional and emerging manufacturing industry, with a focus on big data, cloud computing, the internet of things and related smart technologies. (a)
Action Outline for Promoting the Development of Big Data 《促进大数据发展行动纲要》, online | State Council | August 2015 | Provides a top-down action framework for promoting big data. Details yearly goals such as establishing a platform for sharing data between government departments by the end of 2017, a unified platform for government data before the end of 2018, and nurturing a group of 500 companies in the industry, including 10 leading global enterprises focused on big data application, services and manufacturing, by the end of 2020. It is widely perceived to be a programmatic document guiding the long-term development of China’s big data industries.
Outline of the 13th Five-Year Plan for the National Economic and Social Development of the People’s Republic of China 《中华人民共和国经济和社会发展第十三个五年规划纲要》, online | National People’s Congress | March 2016 | Identifies big data as a ‘fundamental strategic resource’ (基础性战略资源). Pushes for further sharing of data resources and applications. Lists big data applications as one of the eight major informatisation projects. It’s the first time China incorporated big data into state-centric strategy plans. (b)
The National Scientific and Technological Innovation Planning for the 13th Five Years 《’十三五’国家创新规划》, online | State Council | July 2016 | Prioritises big-data-driven breakthroughs in AI technologies.
Development Plan for Big Data Industries (2016–2020) 《大数据产业发展规划(2016–2020年)》, online | Ministry of Industry and Information Technology | December 2016 | Sets an overarching goal for China’s big data industries: by 2020, related industry revenue should exceed 1 trillion RMB, with a compound annual growth rate of 30%.

a) 徐永华, 陈怀宇, 陈亦恺, Anthony Marshall, 何志强, 夏宇飞, 温占鹏, 张龙, 孙春华, ‘Chinese manufacturing towards 2025: building a new value network driven by data insight’ [中国制造业走向2025 构建以数据洞察为驱动的新价值网络], IBM Institute for Business Value [IBM商业价值研究院] and China Center for Information Industry Development [中国电子信息产业发展研究院], 13 October 2015, online.

b) 林巧婷, ‘China proposes implementing a national big data strategy for the first time’ [我国首次提出推行国家大数据战略], central government web portal [中央政府门户网站], 3 November 2015, online.

In the outline of the 13th Five-Year Plan, big data applications were listed as one of the eight major ‘informatisation’ projects. The importance of informatisation (信息化)—the process by which the political, social and economic interactions in a society become networked and digitised—cannot be overstated when analysing China’s big data vision, especially in the public security sector. Over the past two decades, the Ministry of Public Security has taken an adaptive approach to this trend. It has made continuous efforts18 to harness the advances of information and communications technologies for security operations—a process called ‘public security informatisation’ (公安信息化). At its core, public security informatisation is about shifting police work from reactive to pre-emptive through the use of data collection and synthesis. ‘Security’ as applied by the Chinese state is a broad concept, sufficiently broad to enable the control and censoring of public debate in ways that may affect the power or standing of the ruling Chinese Communist Party.

A few statistics help put these concepts and policies in context. Across China, there’s a network of approximately 176 million surveillance cameras—expected to grow to 626 million by 202019—that monitor China’s 1.4 billion citizens. Powered by big-data-driven facial recognition technology, these cameras are able to identify a person’s name, identification card number, gender, clothing and more. Meanwhile, Chinese police have reportedly been collecting DNA samples, fingerprints, iris scans, and blood types of all residents, using questionable methods, in places such as Xinjiang.20
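As a rough, hedged illustration of the matching step behind such camera networks, the sketch below compares a face embedding extracted from a frame against a small gallery of enrolled embeddings using cosine similarity and reports the closest identity above a threshold. The vectors, names and threshold are invented assumptions; production systems use learned embeddings with hundreds of dimensions and galleries covering millions of identities.

```python
# A rough sketch of the matching step in camera-based identification:
# compare a face embedding from a frame against enrolled embeddings and
# report the closest identity above a similarity threshold. The vectors,
# names and threshold are invented for illustration.
import numpy as np

gallery = {
    "person_A": np.array([0.91, 0.10, 0.38]),
    "person_B": np.array([0.12, 0.95, 0.27]),
}

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def identify(probe, gallery, threshold=0.95):
    # Return the best-matching enrolled identity, or None if below threshold
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = cosine(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

probe = np.array([0.88, 0.14, 0.41])  # embedding extracted from a camera frame
print(identify(probe, gallery))
```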

Backed by an oceanic amount of data and advanced analytic technologies, Chinese public security forces are emerging as a powerful and dominant intelligence and security sector.21 The interest from the public security forces in using big data to support government systems for faster and more extensive surveillance and social control largely explains the rapid rise of China’s big data industries.22

Private companies are not only sharing users’ personal data with the authorities in compliance with China’s Cybersecurity Law,23 the National Intelligence Law24 and other relevant internet management regulations, but many of them—including the industry leaders25—are building their business model predominantly around the needs of the state.

Diminishing rights: China’s data laws and regulations

At the other end of the spectrum from these all-encompassing, data-driven analytic technologies are citizens’ de facto diminishing rights to privacy and the growing challenge of protecting individuals’ data security. In contrast to the wide scope of central- and local-level policy initiatives and government-backed projects on big data collection and use, there’s no uniform law or national authority to ensure or coordinate data protection in China. Privacy advocates have been striving to have a national privacy protection law passed since 2003.26 Fifteen years later, the National People’s Congress, China’s highest legislative body, still has not placed such a uniform law on its agenda.27

A number of articles in China’s recent Cybersecurity Law pertain to data collection and privacy protection. However, they take a state-centric approach, expanding the government’s direct involvement in companies’ operations. Missing in this approach is any support for an independent privacy watchdog or support for independent civil society organisations. For now, regulations on data protection remain largely domain-specific, such as those relating to telecommunications and online banking, which are issued by different ministries or local governments (Table 2 summarises the main relevant regulations in China).

Table 2: Chinese laws, regulations and guidelines on data collection

Title | Issuer | Date issued | Relevance
Information Security Technology: Guidelines for Personal Information Protection Within Public and Commercial Services Information Systems 《信息安全技术公共及商用服务信息系统个人信息保护指南》, online | General Administration of Quality Supervision, Inspection and Quarantine & Standardisation Administration of China | Nov 2012 | Establishes basic principles for personal data collection, processing and transfers, including the principles of ‘parity of authority and responsibility’, ‘minimum necessary and not excessive’ and ‘consent of the individual’. Remains non-compulsory for companies.
Decision on Strengthening Information Protection on Networks 《关于加强网络信息保护的规定》, online | Standing Committee of the National People’s Congress | Dec 2012 | Specifies that the state protects ‘electronic information by which individual citizens can be identified and which involves the individual privacy of citizens’.
Provisions on Protecting the Personal Information of Telecommunications and Internet Users 《电信和互联网用户个人信息保护规定》, online | Ministry of Industry and Information Technology | July 2013 | Regulates how telecommunications and internet service providers may collect and use users’ personal data.
Regulation on the Administration of Credit Investigation Industry 《征信业管理条例》, online | State Council | Jan 2013 | Encompasses China’s grand plan of building a ‘social credit system’. Regulates the collection, storage and processing of personal information by credit investigation enterprises. Article 14 points out that ‘credit investigation institutions are prohibited from collecting information about the religious belief, genes, fingerprints, blood type, disease or medical history of individuals, as well as other individual information the collection of which is prohibited by laws or administrative regulations.’
Amendment (IX) to the Criminal Law of the People’s Republic of China 《刑法修正案(九)》, online | Standing Committee of the National People’s Congress | Aug 2015 | Criminalises the sale or provision of citizens’ personal data, with a penalty of up to seven years’ imprisonment.
Cybersecurity Law 《网络安全法》, online | Standing Committee of the National People’s Congress | Nov 2016 | Article 76(5) defines ‘personal information’ in legal documents for the first time. ‘Personal information’ refers to all kinds of information, recorded electronically or through other means, that can determine the identity of natural persons independently or in combination with other information, including, but not limited to, a natural person’s name, date of birth, identification number, personal biometric information, address and telephone number.
E-commerce Law (draft) 《电子商务法(草案)》 | Under review by the Standing Committee of the National People’s Congress | May be passed in 2018 | Regulates data collection by e-commerce operators.
Interim Security Review Measures for Network Products and Services 《网络产品和服务安全审查办法(试行)》, online | Cyberspace Administration of China | May 2017 | Specifies that a cybersecurity review will include reviewing risks that product or service suppliers illegally collect, store, process or use user-related information while providing products or services.
Information Security Technology: Personal Information Security Specification 《信息安全技术 个人信息安全规范》, online | General Administration of Quality Supervision, Inspection and Quarantine & Standardisation Administration of China | Dec 2017 (took effect in May 2018) | Clarifies the definition of ‘personal sensitive information’, which includes information on one’s wealth, biometrics, personal identity, online identity identifiers and so on. Remains non-compulsory for companies.

The lack of a legal framework on privacy protection has led to open disputes over who has access to user data. One of the most high-profile cases is the dispute between Tencent, China’s first internet giant to enter the elite US$500 billion tech club,28 and Huawei, the Chinese telecom equipment and smartphone maker. Huawei was seeking to collect user data from Tencent’s WeChat, China’s most popular chat app, installed on its Honor Magic phone. The data would help Huawei advance its AI projects. Tencent was quick to object, claiming that the move would violate user privacy, and demanded that the Chinese Government intervene.29 Huawei argued that users have the right to choose whether and with whom their data is shared. The government suggested the two companies ‘follow relevant laws and regulations’,30 but existing regulations fail to specify who can collect and process user data.31 It’s still unclear how the two settled the dispute—or even whether they’ve settled it.32

Huawei and Tencent aren’t the first Chinese tech giants to butt heads over access to data. In June 2017, Alibaba’s logistics arm, Cainiao, and China’s biggest private courier, SF Express, were in a month-long stand-off over access to consumer data. The fight was eventually resolved with the State Post Bureau’s intervention.33 Cainiao and SF Express both cited noble-sounding reasons, such as ‘data security’ and ‘user privacy’, for refusing to share data with each other, but the dispute was really about protecting their commercial interests and determining who had access to merchant and shopper data in China’s US$910 billion online retail market.34 In the case of Huawei versus Tencent, the dispute is about who may get to dominate the AI race with the help of massive amounts of data, including users’ chat logs. Due to a void in the current legal framework, it’s likely that disputes between companies over user data access will continue.

Lack of transparency and accountability

Most of the regulations are aimed at holding companies and individuals—rather than government bodies—accountable for data collection and protection. By contrast, government authorities now have access to more sensitive personal data than ever (through either court orders or surveillance). In addition, law enforcement agencies are requiring companies to retain data for longer periods and to allow no exemptions from real-name registration policies.

In June 2016, for example, China’s Cyberspace Administration issued the Provisions on the Administration of Mobile Internet Applications Information Services (移动互联网应用程序服务管理规定),35 which require, among other things, that:

  • app providers and app stores cooperate with government oversight and inspection
  • app providers keep records of users’ activities for 60 days
  • app providers ensure that new app users register with their real names by verifying users’ mobile phone numbers, other identifying information, or both.

In September 2016, Chinese authorities issued new regulations stating explicitly that user logs, messages and comments on social media platforms such as WeChat Moments—a feature that resembles Facebook’s timeline feed—can be collected and used as ‘electronic data’ to investigate legal cases.36 Cases of WeChat users being arrested for ‘insulting police’37 or ‘threatening to blow up a government building’38 on Moments indicate that the feature may be subject to monitoring by the authorities or the company.

Observers have raised concerns over authorities’ use of big-data-driven and AI-enabled technologies such as facial recognition and voice recognition, which may lead to an all-seeing police state. iFlytek, a Chinese information technology company designated by the Ministry of Science and Technology to lead the country’s speech recognition development, has partnered with the Ministry of Public Security to develop a joint research lab. According to a report by the company, it has also partnered with local telecommunication companies in eastern Anhui Province to establish a surveillance system that ‘notifies public security departments as soon as a suspicious voice is detected’.39 In the highly restricted Xinjiang region, local authorities are reportedly collecting highly sensitive personal information, including DNA samples, fingerprints and iris scans.40

A case that demonstrates ongoing tensions between big data applications and privacy concerns in China is the building of a national social credit system 社会信用体系 (SCS), which is the subject of a forthcoming ICPC policy brief by Samantha Hoffman. The SCS, currently planned for a full launch by 2020, aims to aggregate data on the country’s 1.4 billion citizens and assign each person a credit rating based on their socioeconomic status and online behaviour.41 So far, there’s little detail on exactly how the system will unfold. Some companies and local governments have created their own systems (such as Tencent’s Tencent Credit,42 Alibaba’s Sesame Credit43 and many other social credit products developed by smaller players).44 While a final reward and punishment mechanism remains uncertain, existing reports show some consistent themes. For example, a citizen with a bad social credit score may be denied access to aeroplane or express train travel and lose privileges such as faster visa approval and easier access to apartment rentals.
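Because the final mechanism has not been published, any concrete representation of it is necessarily speculative. The sketch below is a purely hypothetical illustration of the kind of score-based gating described in the reporting above: a ticketing service checks a person’s score and a blacklist flag before allowing a purchase. The field names, threshold and rules are assumptions for illustration only, not the actual SCS design.

```python
# A purely hypothetical sketch of score-based gating as described in public
# reporting: a ticketing service checks a person's score and a blacklist flag
# before allowing a purchase. The field names, threshold and rules are
# illustrative assumptions, not the actual (unpublished) SCS mechanism.
def may_buy_ticket(record, minimum_score=600):
    if record.get("on_blacklist", False):
        return False                                   # listed as 'untrustworthy': blocked outright
    return record.get("score", 0) >= minimum_score     # otherwise gate on the score

print(may_buy_ticket({"score": 720, "on_blacklist": False}))  # True
print(may_buy_ticket({"score": 540, "on_blacklist": False}))  # False
```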

The justifications for this scheme include the idea that it’s a remedy for the deficit of trust in society.45 Southern Metropolis Daily, a Guangzhou-based liberal-leaning newspaper, surveyed 700 people on their attitudes towards China’s social credit system in 2014.46 It found that even though 40% of the respondents expressed privacy concerns, 80% were in support of this national program because ‘it helps build a society of trust’ and ‘provides a safer and more reliable environment for business’. Yet the complete lack of transparency and clarity on data protection raises the alarming prospect of big-data-enabled mass surveillance in China and other authoritarian states.

Both Alibaba47 and Tencent48 have rolled out their own versions of social credit systems, which offer a holistic assessment of character based on vaguely defined categories and non-transparent algorithms.49 According to material collected by researchers at the University of Toronto’s Citizen Lab, the chief credit data scientist of Alibaba’s Ant Financial, Yu Wujie, has said, ‘If you regularly donate to charity, your credit score will be higher, but it won’t tell you how many payments you need to make every month … but [development] in this direction [is undertaken with] the hope that everyone will donate.’50 Tencent has revealed little about its credit system thus far, but the company already has access to a huge amount of users’ social data, including chat logs, via WeChat, QQ and many of its gaming products.

Due to the lack of data protection laws, few, including state regulators, have an understanding of what kinds of data a private company can access and use.51 It’s also unclear whether online comments and activities deemed undesirable by the government would negatively affect a person’s creditworthiness. The scheme is wide open to abuse by government authorities, including in tracking dissidents and exerting chilling effects on ordinary citizens.52

International implications

The tensions between privacy protection and data collection will be felt not only in China. In recent years, companies and governments in both authoritarian and democratic countries have vowed to develop big-data-based surveillance technologies and tighten internet management in the name of public and national security.53

At the international level, cross-border transfers of personal information, courtesy of the increasingly interdependent global economy in the age of big data, have become a pressing issue for private and state actors. Following the enactment of the Cybersecurity Law, which sets data localisation requirements, China has released administrative documents and guidelines detailing the conditions companies need to meet for data export (Table 3).

Table 3: Regulations on cross-border data transfer or data export

Title | Issuer | Date issued | Relevance
Cybersecurity Law 《网络安全法》, online | Standing Committee of the National People’s Congress | Nov 2016 | Article 37: Personal information and important data collected and generated by critical information infrastructure operators in China must be stored domestically. For information and data that is transferred overseas due to business requirements, a security assessment will be conducted in accordance with measures jointly defined by China’s cyberspace administration bodies and the relevant departments under the State Council. Related provisions of other laws and administrative regulations shall apply.
Circular of the State Internet Information Office on the Public Consultation on the Measures for the Assessment of Personal Information and Important Data Exit Security (Draft for Soliciting Opinions) 《个人信息和重要数据出境安全评估办法(征求意见稿)》, online | Cyberspace Administration of China | Apr 2017 | Extends the scope of outbound data security assessment. While the Cybersecurity Law requires security evaluations to be conducted on critical information infrastructure operators (关键信息基础设施运营者), the measures stipulate that all network operators (网络运营者) must go through the check. Establishes the basic framework for outbound data security assessment, including its processes, responsible parties and main focuses.
Information Security Technology: Guidelines for Data Cross-Border Transfer Security Assessment (second draft) 《信息安全技术 数据出境安全评估指南(第二稿)》, online | National Information Security Standardisation Technical Committee | Aug 2017 | Clarifies the definition of data cross-border transfer, which is ‘the one-time or continuous activity in which a network operator provides personal information and important data collected and generated by network or other means in the course of operations within the territory of China to overseas institutions, organisations or individuals by means of directly providing or conducting business, providing services or products, etc.’ Further breaks down the conditions for initiating security self-assessment and government assessment and their processes. Details what is ‘important data’ and ‘personal data’. Non-compulsory for companies.

Under these regulations, foreign companies will have to either invest in new data servers in China that may be subject to monitoring by the government or incur new costs to partner with a local server provider, such as Tencent or Alibaba. Apple’s recent decision to migrate its China iCloud data to Guizhou-Cloud Big Data and Amazon’s sell-off of its China cloud assets to its local Chinese partner are just two examples of how China’s tightening rules on data retention and transfers may affect foreign companies. By requiring data localisation, the Chinese Government is bringing data under Chinese jurisdiction and making it easier to access user data and penalise companies and individuals seen as violating China’s vaguely defined internet laws and regulations.

Meanwhile, Chinese-manufactured tech devices and applications that have taken over large portions of overseas markets are raising questions about data security. The Australian Defence Department has recently banned staff and serving personnel from downloading WeChat, China’s most popular social media app, onto their work phones.54 The heads of six top US intelligence agencies, including the Federal Bureau of Investigation, the Central Intelligence Agency and the National Security Agency, told the Senate Intelligence Committee in February that they would not advise Americans to use products or services from Chinese telecommunications companies Huawei and ZTE. In April 2018, the tension escalated into a seven-year ban imposed by the US Commerce Department, prohibiting American companies from selling parts and software to ZTE, although at the time of publishing it’s unclear whether this ban will be enforced or overturned.55 In December 2017, the Ministry of Defence in India issued a new order to the Indian armed forces requiring officers and all security personnel to remove more than 42 Chinese apps, including Weibo, WeChat and UC Browser, which were classified as ‘spyware’.

Conclusion

This paper highlights the conflict between fast-developing big data technologies and citizens’ diminishing rights to privacy and data security in China. A review of major Chinese big-data-related policy initiatives shows that many of those policies reflect a special interest from the Chinese authorities, the public security forces in particular, in potentially using data-driven analytic technologies for more effective and extensive surveillance and social control.

Compared to the growing number of regulations and national plans that support the research and development of big data technologies, there’s a lack of data protection laws and guidelines to hold relevant parties, especially the government, accountable for the collection and use of personal data. The ambiguous legal framework for data security and privacy protection, which enables state use of collected data, has led to multiple commercial disputes over access to users’ data. It’s likely we’ll see more such cases in the future.

Addressing these conflicts and advocating for the protection of users’ rights to privacy in China—where the state dominates every sector of society and suppresses civil society—is not easy. The Chinese state’s approach is a reminder to users, both in China and elsewhere, of the importance of protecting personal privacy and online security.

Using China as a case study also offers a number of takeaways for policymakers in other countries. International developments, such as ongoing privacy issues with Facebook data, show that tension between governments, businesses and users in the age of big data is not unique to any country. To that end, the EU’s General Data Protection Regulation has set a good example for containing companies’ exploitation of personal data.

There’s a trend, in China and elsewhere, for governments to use the excuse of ‘protecting user privacy’ to justify a more powerful state and more state involvement in private companies’ and organisations’ operations. Civil society groups, whenever and wherever possible, should assume a stronger role in addressing these challenges and raising awareness. A US-based study released in April 2018, for example, highlighted consumer misconceptions about privacy while using popular browsers, including that they would ‘prevent geo-location, advertisements, viruses, and tracking by both the websites visited and the network provider’.56 Further work and support are needed to equip users with sufficient knowledge to understand how data-related technologies work and what those technologies mean to them in everyday life.

The attractiveness of the Chinese state’s surveillance and social control systems to other authoritarian states means we may see other states adopt them, unless the negative aspects of these approaches are made more transparent. The consequences of reduced personal freedom combined with greater state control of societies and individuals are disturbing for advocates of the vitality and strength of open societies. Beyond these concerns, the strategic consequences of the tight integration of the Chinese tech sector with the Chinese state are an area for further analysis.



  1. State Council of the People’s Republic of China [中华人民共和国国务院], ‘Notice of the State Council on issuing the New Generation Artificial Intelligence Development Plan’ [国务院关于印发新一代人工智能发展规划的通知], 8 July 2017, online. ↩︎
  2. China has permitted only some foreign direct investment through Chinese entities with partial or full foreign ownership in many tech sectors. See more detailed analysis by Paul Edelberg, ‘Is China Really Opening Its Doors to Foreign Investment?’, China Business Review, 8 November 2017, online and Jianwen Huang, ‘China’, The Foreign Investment Regulation Review – Edition 5, October 2017, online. ↩︎
  3. Yang Ruan, Cheek, Social media in China: what Canadians need to know; Nicole Jao, ‘WeChat now has over 1 billion active monthly users worldwide’, Technode, 5 March 2018, online. ↩︎
  4. Mason Hinsdale, ‘Tencent wants to make WeChat a digital travel ID’, Jing Travel, 6 June 2018, online. ↩︎
  5. 马化腾 [Ma Huateng], ‘The internet, like water and electricity, is becoming a “traditional industry”’ [互联网像水和电一样成为‘传统行业’], Digitaling.com, 12 August 2014, online. ↩︎
  6. Catherine Shu, ‘China attempts to reinforce real‑name registration for internet users’, Techcrunch.com, 1 June 2016, online. ↩︎
  7. Hui Zhao, Haoxin Dong, ‘Research on personal privacy protection of China in the era of big data’, Open Journal of Social Sciences, 19 June 2017, 5:139–145, online. ↩︎
  8. Boston Consulting Group, Data privacy by the numbers, March 2014, online. ↩︎
  9. Linda Poon, ‘Finally, Uber releases data to help cities with transit planning’, CityLab.com, 11 January 2017, online. ↩︎
  10. Linda Lew, ‘How Tencent’s medical ecosystem is shaping the future of China’s healthcare’, Technode.com, 11 February 2018, online ↩︎

Deterrence in cyberspace

Spare the costs, spoil the bad state actor: Deterrence in cyberspace requires consequences

Foreword

In the past three years, barely a week has gone by without a report of a critical cyberattack on a business or government institution. We are constantly bombarded by revelations of new ransomware strains, new botnets executing denial of service attacks, and the rapidly expanding use of social media as a disinformation and propaganda platform.

Perhaps most alarmingly, a great many of these attacks have their origin in the governments of nation states.

In the past decade we have moved well beyond business as usual signals intelligence operations. Some of the largest malware outbreaks in recent years, such as NotPetya and WannaCry, had their origins in state-run skunkworks.

Cyberattacks initiated by nation states have become the new normal, and countries including Australia have struggled with the challenge of how to respond to them. Far too often they’re considered a low priority and met with a shrug of the shoulders and a “What can you do?”

In this paper, Chris Painter offers us a way forward. Chris presents a reasonable framework for deterrence, a way that we as a nation can help limit the deployment of cyberwarfare tools.

His recommendations are designed to properly punish bad actors in a way that discourages future bad behaviour. They’re modelled on actions that have worked in the past and serve, if not as a final solution, then at least as a starting point for scaling back the increasing number of state-sponsored cyberattacks.

Most importantly, these actions aren’t just to the benefit of the state—they will allow us to better protect private citizens and companies that all too often get caught in the cyberwarfare crossfire. To put it simply, if we can ensure there are costs and consequences for those who wrongly use these tools to wreak damage, bad actors might start thinking twice before engaging in this destructive behaviour.

Yohan Ramasundara
President, Australian Computer Society

What’s the problem?

Over the past few years, there’s been a substantial increase in state attacks on, and intrusions into, critical information systems around the globe—some causing widespread financial and other damage.1 They have included:

  • attacks by North Korea on Sony Pictures in 2014
  • widespread Chinese theft of trade secrets and intellectual property
  • Russian state-sponsored interference in the US elections
  • North Korea’s sponsorship of the WannaCry ransomware worm that caused, among other things, a meltdown of the UK’s National Health Service
  • the Russian-sponsored NotPetya worm that caused tens of millions of dollars of damage and disruption around the world.

The pace and severity of these attacks show no sign of declining. Indeed, because few or no consequences or costs have usually been imposed on the states that have taken these actions, they and others have little reason not to engage in such acts in the future.

The US, Australia and many other countries have spent years advancing a framework for global stability in cyberspace. This framework comprises:

  • the application of international law to cyberspace
  • acceptance of certain voluntary norms of state behaviour in cyberspace (essentially, voluntary rules of the road)
  • the adoption of confidence and transparency building measures.

Although much progress has been achieved in advancing this framework, the tenets of international law and norms of state behaviour mean little if there are no consequences for those states that violate them. This is as true in the cyber world as in the physical one. Inaction creates its own norm, or at least an expectation on the part of bad state actors that their activity is acceptable because there are no costs for their actions and no likely costs for future bad acts.

Individually as countries and as a global community, we haven’t done a very effective job of punishing and thereby deterring bad state actors in cyberspace. Part of an effective deterrence strategy is a timely and credible response that changes the behaviour of an adversary who commits unacceptable actions.

Although there are some recent signs of change, in the vast majority of cases the response to malicious state actions has been neither timely nor particularly effective. This serves only to embolden bad actors, not deter them. We must do better if we’re to achieve a more stable and safe cyber environment.

What’s the solution?

It is a well-worn and almost axiomatic expression that deterrence is hard in cyberspace. Some even assert that deterrence in this realm is impossible.

Although I don’t agree with that fatalistic outlook, it’s true that deterrence in cyberspace is a complex issue. Among other things, an effective deterrence framework involves strengthening defences (deterrence by denial); building and expanding the consensus for expectations of appropriate state behaviour in cyberspace (norms and the application of international law); crafting and communicating—to potential adversaries, like-minded partners and the public—a strong declaratory policy; timely consequences, or the credible threat thereof, for transgressors; and building partnerships to enable flexible collective action against those transgressors.

Although I’ll touch on a couple of those issues, I’ll focus here on imposing timely and credible consequences.

The challenge of attribution

One of the most widely cited reasons for the lack of action is the actual and perceived difficulty in attributing malicious cyber activity.

Unlike in the physical world, there are no launch plumes to give warning of a cyberattack or to reveal its origin, and sophisticated nation-states are adept at hiding their digital trail by using proxies and routing their attacks through often innocent third parties. But, as recent events illustrate, attribution, though a challenge, is not impossible. Moreover, attribution involves more than following the digital footprints; other forms of intelligence, motive and other factors all contribute to attribution. And, ultimately, attribution of state conduct is a political decision. There’s no accepted standard for when a state may attribute a cyberattack, although, as a practical, political and prudential matter, states are unlikely to do so unless they have a relatively high degree of confidence. Importantly, this is also true of physical-world attacks. Certainly, a state doesn’t require 100% certainty before attribution can be made or action taken (as some states have suggested). Whether in the physical or the cyber world, such a standard would practically result in attribution never being made and response actions never being taken.

Although attribution is often achievable, even if difficult, it still seems to take far too long—at least for public announcements of state attribution. Announcing blame, even if coupled with some responsive actions, six months to a year after the event isn’t particularly timely. Often by that point the impact of the original event has faded from public consciousness and so, too, has the will to impose consequences.

Part of this delay is likely to be due to technical difficulties in gathering and assembling the requisite evidence and the natural desire to be on solid ground; part is likely to be due to balancing public attribution against the possible compromise of sources and methods used to observe or detect future malicious activity; but part of it’s probably due to the need to summon the political will to announce blame and take action—particularly when more than one country is joining in the attribution. All of these cycles need to be shortened.

Naming and shaming

Public attribution of state conduct is one tool of deterrence and also helps legitimise concurrent or later responses.

The US, the UK, Australia and other countries came together recently to attribute the damaging NotPetya worm to Russia and, a few months ago, publicly attributed the WannaCry ransomware to North Korea. This recent trend to attribute unacceptable state conduct is a welcome development and should be applauded.2 It helps cut through the myth that attribution is impossible and that bad state actors can hide behind the internet’s seeming anonymity.

However, public attribution has its limits. Naming and shaming has little effect on states that don’t care if they’re publicly outed and has the opposite effect if the actor thinks their power is enhanced by having actions attributed to them. In the above two cases, it’s doubtful that naming and shaming alone will change either North Korea’s or Russia’s conduct. Public attribution in these cases, however, still serves as a valuable first step to taking further action. Indeed, in both cases, further actions were promised when public attribution was made.

That raises a couple of issues. First, those actions need to happen and they need to be effective. President Obama stated after the public attribution to North Korea in relation to the Sony Pictures attack that some of the response actions ‘would be seen and others unseen’. A fair point, but at least some need to be seen to reinforce a deterrent message with the adversary, other potential adversaries and the public at large.

The other issue is timing. The public attribution of both WannaCry and NotPetya came six months after the respective attacks. That delay may well have been necessary either for technical reasons or because of the work required to build a coalition of countries to announce the same conclusion, but attribution that long after the cyber event should be coupled with declared consequences—not just the promise that they’re to come. Some action did in fact come in the NotPetya case about a month after public attribution, when the US sanctioned several Russian actors for election interference, NotPetya and other matters. That was a very good start but would be even more effective in the future if done when the public attribution occurs.

Action speaks louder than attribution alone, and the two must be closely coupled to be effective.

General considerations

A few general considerations apply to any contemplated response action to a cyber event.

First, when measures are taken against bad actors, they can’t just be symbolic but must have the potential to change that actor’s behaviour. That means that one size does not fit all. Different regimes hold different things dear and will respond only if something they prioritise or care about is affected. Tailored deterrence strategies are therefore required for different states.3

For example, many have opined that Russia is more likely to respond if sanctions are targeted at Putin’s financial infrastructure and that of his close elites than if simply levied in a more general way.

Second, the best response to a cyberattack is seldom a cyber response. Developing cybertools and having those tools as one arrow in the quiver is important, but other responses will often be more effective.

Third, the response to a cyber event shouldn’t be approached in a cyber silo but take into account and leverage the overall relationship with the country involved. The agreement that the US reached with China that neither should use cyber means to steal the trade secrets and intellectual property of the other to benefit its commercial sectors wouldn’t have come about if widespread cyber-enabled intellectual property theft was seen only as a cyber issue. Only when this problem was seen as a core national and economic security issue, and only when President Obama said that the US was willing to bear friction in the overall US–China relationship, was progress really possible.

Fourth, a responsive action and accompanying messaging needs to be appropriately sustained and not a one-off that can be easily ignored. Fifth, potential escalation needs to be considered. This is a particularly difficult issue when escalation paths aren’t well defined for an event that originates in cyberspace, whether the response is a cyber or a physical one, and the chance of misperception is high. And finally, any response should comport with international law.

Collective action

Collective action against a bad actor is almost always more effective than a response by just one state and garners more legitimacy on the world stage.

Of course, if the ‘fiery ball of cyber death’ is hurtling towards you, every country has the right to act to defend itself, but, if possible, acting together, with each country leveraging its capabilities as appropriate, is better. Collective action doesn’t require any particular organised group or even the same countries acting together in each instance.

Flexibility is the key here and will lead to swifter results. The recent attribution of NotPetya by a number of countries is a good example of collective action to a point. It will be interesting to see, following the US sanctioning of Russia, whether other states join in imposing collective consequences.

One challenge for both collective attribution and collective action is information sharing. Naturally, every state will want to satisfy itself before taking the political step of public attribution, and that’s even more the case if it’s taking further action against another transgressing state. Sharing sensitive attribution information among states with different levels of capability and ability to protect that information is a tough issue even in the best of times. But, if collective action is to happen, and happen on anything approaching a quick timeline, enhancing and even rethinking information sharing among partner countries is foundational.

Using and expanding the tools in the toolkit

The current tools that can be used in any instance to impose consequences are diplomatic, economic (including sanctions), law enforcement, cyber responses and kinetic responses.

Some of them have been used in the past to varying degrees and with varying levels of effectiveness, but not in a consistent and strategic way. Some, like kinetic responses, are highly unlikely to be used unless a cyber event causes death and physical injury in the way a physical attack would. Others admittedly take a while to develop and deploy, but we have to have the political willingness to use them decisively in the appropriate circumstances and in a timely manner. For example, the US has had a cyber-specific sanctions executive order available since April 2015 and, before its recent use against Russian actors in March, it had been used only once, in December 2016, against Russian actors for election interference. For the threat of sanctions to be taken seriously, they must be used in a more regular and timely manner, and their targets should be chosen to have a real effect on the violating state’s decision-making.

Our standard tools are somewhat limited, so we must also work creatively to expand the toolkit so that we can better affect the unique interests of each adversarial state actor (identified in a tailored deterrence strategy) and prompt it to change course or think twice before committing further malicious acts. That is likely to require collaboration not just within governments but between them and the private sector, academia, civil society and other stakeholders to identify and develop new tools.

Recommendations

Of course, foundational work on the application of international law and norms of voluntary state behaviour should continue. That work helps set the expectation of what conduct is permissible. In addition, states should articulate and communicate strong declaratory policies. Declaratory statements put potential adversaries on notice about what’s unacceptable4 and can contain some detail about potential responses. Beyond that, a number of other things can help to create an environment in which the threat of consequences is credible:

1. Shorten the attribution cycle.

Making progress on speeding up technical attribution will take time, but delays caused by equity reviews, inter-agency coordination, building political willingness and securing agreement among several countries to share in making attribution can all be streamlined. Often the best way to streamline these kinds of processes is simply to exercise them by doing more public attribution while building a stronger political commitment to call bad actors out. The WannaCry and NotPetya public attributions are a strong foundation for exercising the process, identifying impediments and speeding it up in the future. Even when attribution is done privately, practice can help shorten inter-agency delays and equity reviews.

2. If attribution can’t be made or announced in a fairly brief period, couple any later public attribution with at least one visible responsive action.

Attribution six months or a year after the fact with the vague promise of future consequences will often ring hollow, particularly given the poor track record of imposing consequences in the past. When attribution can be made quickly, the promise of a future response is understandable, but delaying the announcement until it can be married with a response may be more effective.

3. Mainstream and treat cybersecurity as a core national and economic security concern and not a boutique technical issue.

If cyberattacks really pose a significant threat, governments need to start thinking about them as they think about other incidents in the physical world. It is telling that Prime Minister Theresa May publicly attributed the Salisbury poisonings in a matter of days and followed up with consequences shortly thereafter. Her decisive action also helped galvanise an international coalition in a very short time frame. Obviously that was a serious matter that required a speedy response, but the speed was also possible because government leaders are used to dealing with physical-world incidents; they still don’t fully understand the impact or importance of cyber events, nor have they established processes to deal with them. Mainstreaming also expands the existing response options and makes them more effective. As noted above, a prime reason for the US–China accord on intellectual property theft was that the theft was considered a core economic and national security issue worth creating friction in the overall US–China relationship.

4. Build flexible alliances of like-minded countries to impose costs on bad actors.

A foundational element of this is improving information sharing, in both speed and substance, to enable better collective attribution and action. Given classification and trust issues, improving tactical information sharing is difficult in any domain. However, a first step is to discuss with partners what information is required well in advance of any particular incident and to create the right channels to share that information quickly when needed. It may also require a re-evaluation of what information must absolutely be classified and restricted and what can be shared through appropriately sensitive channels. Greater joint attribution and action will presumably also help to build information-sharing mechanisms and to establish trust and confidence with a wider set of partners in the future.

5. Improve diplomatic messaging to both partners and adversaries.

Improved messaging allows for better coordinated action and serves to link consequences to the actions to which they’re meant to respond. Messaging and communication with the bad actor while consequences are being imposed can also help with escalation control. Of course, effective messaging must be high-level, sustained and consistent if the bad actor is to take it seriously. Sending mixed messages only serves to undercut any responsive actions that are taken.

6. Collaborate to expand the toolkit.

Work with like-minded states and other stakeholders to expand the toolkit of potential consequences that states can use, or threaten to use, to change and deter bad state actors.

7. Work out potential adversary-specific deterrence strategies.

Actual or threatened responsive actions are effective only if the target of those actions is something that matters to the state in question, and that target will differ according to the particular state involved. Of course, potential responses should be in accord with international law.

8. Most importantly, use the tools we already have to respond to serious malicious cyber activity by states in a timely manner.

Imposing consequences for bad action not only addresses whatever the current bad actions may be but creates a credible threat that those consequences (or others) will be imposed in the future.

None of this is easy or will be accomplished overnight, and there are certainly complexities in escalation, proportionality and other difficult issues, but a lot comes down to a willingness to act—and the current situation isn’t sustainable. The recent US imposition of sanctions is a step in the right direction, but imposing tailored costs when appropriate needs to be part of a practice, not an aberration, and it must be accompanied by high-level messaging that supports rather than undercuts its use.

The 2017 US National Security Strategy promises ‘swift and costly consequences’ for those who target the US with cyberattacks. Australia’s International Cyber Engagement Strategy states that ‘[h]aving established a firm foundation of international law and norms, the international community must now ensure there are effective consequences for those who act contrary to this consensus.’ On the other hand, Admiral Rogers, the head of US Cyber Command and the National Security Agency, recently told US lawmakers that President Putin has clearly come to the conclusion that there’s ‘little price to pay here’ for Russia’s hacking provocations, and Putin has therefore concluded that he ‘can continue this activity’.

We must change the calculus of those who believe this is a costless enterprise. Imposing effective and timely consequences for state-sponsored cyberattacks is a key part of that change.

  1. Of course, there are an ever-increasing number of attacks and intrusions by criminals, including transnational criminal groups, as well. Deterring this activity is a little more straightforward—the consequences for criminals are prosecution and punishment and, in particular, a heightened expectation that they’ll be caught and brought to justice. I don’t address deterring criminal actors in this paper, although there have been advances in ensuring that countries have the laws and capacity to tackle these crimes and there have been a number of high-profile prosecutions, including transnational cases. Much more needs to be done to deter these actors, however, as many cybercriminals still view the possibility that they’ll be caught and punished as minimal. ↩︎
  2. One downside of a practice of publicly attributing state conduct is that it creates an expectation that victim states will do this in every case and leads to the perception that when they don’t it means they don’t know who is responsible—even if they do. For that reason, states, including the US, have often said in the past that they’ll make public attribution when it serves their deterrent or other interests. There are also cases in which a state or states may want to privately challenge a transgressor state to change its behaviour or in which calling out bad conduct publicly risks sources and methods that may have a greater value in thwarting future malicious conduct. Nevertheless, the seeming trend to more cases of public attribution is a good one, and these concerns and expectations can be mitigated in a state’s public messaging or by delaying public attribution when necessary. ↩︎
  3. Defense Science Board, Task Force on Cyber Deterrence, February 2017. ↩︎
  4. Such statements should be relatively specific but need not be over-precise about exact ‘red lines’, which might encourage an adversary to act just below that red line to escape a response. ↩︎

ASPI International Cyber Policy Centre

The ASPI International Cyber Policy Centre’s mission is to shape debate, policy and understanding on cyber issues, informed by original research and close consultation with government, business and civil society.

It seeks to improve debate, policy and understanding on cyber issues by:

  1. conducting applied, original empirical research
  2. linking government, business and civil society
  3. leading debates and influencing policy in Australia and the Asia–Pacific.

We thank all of those who contribute to the ICPC with their time, intellect and passion for the subject matter. The work of the ICPC would be impossible without the financial support of our various sponsors but special mention in this case should go to the Australian Computer Society (ACS), which has supported this research.

Chris Painter’s distinguished visiting fellowship at ASPI’s International Cyber Policy Centre was made possible through the generous support of DFAT through its Special Visits Program. All views expressed in this policy brief are the author’s.

Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.

© The Australian Strategic Policy Institute Limited 2018

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers.

Weibo diplomacy and censorship in China

Sina Weibo

Since its inception in 2009, Sina Weibo – China’s souped-up version of Twitter – has provided a rare foothold for foreign governments in the PRC’s tightly-controlled media environment.

Yet while the PRC is given free rein to push its messages in Western media and on social media platforms, Beijing’s censors have been hampering the legitimate digital diplomacy efforts of foreign embassies.

This ASPI ICPC report provides an in-depth look at the increasingly sophisticated censorship methods being used on foreign embassies on Weibo and provides a series of recommendations for foreign governments, including Australia, to address these policy challenges.

What’s the problem?

As the Chinese Communist Party (CCP)-led state extends its reach into other nations, it’s actively limiting the ability of other countries to do the same in the People’s Republic of China. Seeing itself in an ideological confrontation with ‘the West’,1 the CCP under Xi Jinping is determined to ensure ideological conformity in its own information space.

A key battleground is Weibo, the Chinese micro-blogging service most closely analogous to Twitter. Since Weibo’s inception, embassies have maintained a presence on it—a rare foothold for foreign governments in China’s tightly controlled information space.

While some governments, particularly those of Western countries, have occasionally spoken outside the CCP’s frame of acceptable public discourse, most do not. As Weibo continues to introduce new and subtle methods of direct censorship, foreign embassies are both self-censoring their messaging and failing to speak up when their content is being censored.

In Australia’s case, this lack of transparency and cycle of self-censorship sits oddly with the description of Australia as ‘a determined advocate of liberal institutions, universal values and human rights’ in the 2017 Foreign Policy White Paper.2

What’s the solution?

To avoid being seen as acquiescing in the CCP’s ideological agenda, like-minded governments should, in coordination with each other, commit to publishing transparency reports that reveal the extent to which their legitimate online public diplomacy efforts are being curtailed in China.

Foreign governments should establish and publish clear terms of use for their social media accounts in China so that they don’t fall into the trap of self-censoring their policy messages and advocacy. They should use uncensored social media platforms such as Twitter—which, despite being blocked in China, still has an estimated 10 million active users in the country.3

Embassies could cross-post all of their content there so that audiences are both aware of any instances of censorship and have alternative avenues to access their full content. The Australian Government should establish Weibo accounts for the positions of Prime Minister and Foreign Minister.

‘Orwellian nonsense’

In early May 2018, the US Embassy in China put Weibo censors in a delicate bind when it issued a provocative slapdown of Beijing’s censorship overreach.

‘President Donald J Trump ran against political correctness in the United States’, read the White House statement, which had been translated into Mandarin.4 ‘He will stand up for Americans resisting efforts by the Chinese Communist Party to impose Chinese political correctness on American companies and citizens.’

The statement was put out in response to the Chinese Civil Aviation Administration’s demand that 36 foreign airlines fall into line with Beijing’s preferred terms of reference for Taiwan, Hong Kong and Macau as ‘Chinese territories’.

The statement continued: ‘This is Orwellian nonsense and part of a growing trend by the Chinese Communist Party to impose its political views on American citizens and private companies.’ It went further still: ‘China’s internal Internet repression is world-famous. China’s efforts to export its censorship and political correctness to Americans and the rest of the free world will be resisted.’

The post, most likely penned by White House press secretary Sarah Huckabee Sanders, was a deliberate poke in the eye for Beijing and it promptly caused a firestorm on the platform.

In the short history of Weibo diplomacy, sometimes referred to as ‘Weiplomacy’, it was the most direct challenge yet to China’s censorship regime. With a mirror held up to their own activities, Sina Weibo’s censors were put on the spot.

‘Only folks with strong connections (like you) can avoid getting censored’ read the most upvoted comment in the hour immediately after the post went out (Figure 1). ‘I can imagine the censorship department scratching their heads over this,’ read another comment.5

Notably, Hu Xijin, the chief editor of Global Times, the nationalist newspaper owned by the CCP, took to his own Weibo account to call on ‘Weibo management’ to refrain from intervening.6

Instead, in the ensuing few hours, Sina Weibo’s censors used every tool at their disposal short of deleting the post to ensure that the missive had as little impact as possible. Not only was the sharing function for the post switched off, but the comments section under the post was carefully manicured to remove liberal voices and replace them with CCP-approved sentiment (Figure 2).

Figure 1: The comments section under the US Embassy post less than an hour after it was published included users directly challenging the censorship regime.

Translation

  • Only folks with strong connections (like you) can avoid getting censored. [2,656 Likes]
  • I’m also against political correctness or imposing your ideology on others but respecting the sovereignty and territorial integrity of other countries should not be mixed up with ideology. [2,077 Likes]
  • If we were exercising extreme oppression on the domestic Internet, do you think you’d still be talking shit here? [1,277 Likes]
  • Hahahahaha seeing in my living years the US opposing China’s political correctness.. [1,027 Likes]
  • How does our press freedom rank in the world again, one hundred and something right? [634 Likes]
  • I sincerely hope the Indians can claim back their land and establish their own country, while Hawaii could become an independent country. [814 Likes]
  • If you don’t want to do business here, then f&#% off. If you do want to do business here, respect our laws. [497 Likes]
  • [I] support President Trump’s thinking, the world belongs to the people, not a certain party. [378 Likes]
  • Leave your name here before the post gets deleted. [321 Likes]

Figure 2: The comments section under the US Embassy post (now seen in mobile view) around 2 hours after it was published and after censors removed posts that didn’t toe the party line.

Translation

  • If you don’t want to do business here, then f&#% off. If you do want to do business here, respect our laws. [110,000 Likes] 
  • When China and the US established diplomatic relations in 1972, Nixon openly accepted China’s political correctness during his trip here. Are you now denying the establishment of diplomatic relations? [7,854 Likes]
  • Independence for Hawaii
    Independence for Alaska
    Independence for California
    Independence for Texas
    Independence for New Mexico [7,108 Likes]
  • 1. This is not political correctness, this is the one-China principle. 2. Please abide by the terms of the Sino-US joint communique; if you choose to unilaterally go against them, it will be seen as a violation of the agreement. [6,560 Likes]

The incident was an object lesson in how sophisticated the PRC’s censorship apparatus has become and how precisely it can be deployed. It may be ‘Orwellian nonsense’, but it largely works. While some Western media reports7 took care to note that more varied opinions were expressed by Weibo users under the post before the censors swooped in, most reports didn’t.8

What remained after the censors had done their work was nothing more than a Potemkin post, with the comments under it carefully selected to give the impression of a uniformly nationalistic online Chinese public. Such an impression has led previous scholarship on ‘Weiplomacy’ to conclude that the power of Weibo to further the goals of public diplomacy might have been overestimated.9

But a closer examination of the comment section under the post revealed a plethora of viewpoints that the censors failed to expunge. Even though the censors had cherrypicked CCP-approved comments to feature as the most upvoted comments, many of the comments under those comments weren’t toeing the party line (Figure 3). Peeling back the curtain on the Potemkin post reveals the raucous marketplace of ideas that still exists on Weibo, if one takes the time to seek it out.

Figure 3: The comments under the cherrypicked nationalist comments reveal sentiment from opposing ideological clusters.

Translation

  • If you don’t want to do business here, then f&#% off. If you do want to do business here, respect our laws. [12,076 Likes]
  • ‘Little pink’ maggots [a derogatory term for young nationalists] are really disgusting [4,879 Likes]
  • So ZTE deserved to be prosecuted in the US because it didn’t obey their laws. [3,319 Likes]
  • ‘War Wolves’ [a reference to patriotic hit Chinese film Wolf Warrior] always think the rest of the world couldn’t survive without China. [3,302 Likes]
  • Saying it like this is a bit extreme. China and the US affect each other mutually. Chinese airlines need to fly to the US and US airlines need to fly to China. It’s not possible for only one side to depend on the other for business. [3,091 Likes]
  • [The commenter] is obviously a slave but one who talks with the tone of a master. [1,970 Likes]

Weibo and foreign governments: a history of censorship and self-censorship

Three years after the UK Embassy became the first foreign embassy to open an account on Sina Weibo, Jonas Parello-Plesner warned that diplomats should be wary of creeping self-censorship.

‘Embassies shouldn’t accept self-censorship by only posting innocuous tweet[s] that can pass through the censors,’ Parello-Plesner wrote in The Diplomat in 2012.10 ‘Instead they should give the full spectrum of views including on values—even if it means more deleted postings.’

In the intervening years, some foreign embassies took up the challenge, showing a willingness to push the envelope even at the risk of having their content censored. At times, the envelope-pushing has been inspired; it has also required creativity, because predictable content is easily blocked.

On 30 May 2012, the US Embassy tapped into Michael Jackson’s popularity in China to give a boost to a politically sensitive interview with then ambassador Gary Locke.11

‘Michael Jackson has an album called Thriller, one of the best selling records in the history of music. The story we’re telling today is also a Thriller. Click to read,’ read the post, which also included a picture of the famous album (Figure 4).

The link led to a Newsweek interview titled ‘Ambassador to China Gary Locke talks Chen, Drama in China’,12 which included details about the attempt by former Chongqing police chief Wang Lijun to get political asylum from the US, as well as the dramatic story of activist Chen Guangcheng’s successful bid for political asylum.

Figure 4: The censored 2012 Weibo post from the US Embassy, which used Michael Jackson’s celebrity as a smokescreen for a politically sensitive interview with then ambassador Gary Locke. The post was archived on FreeWeibo.com.

In 2014, the UK Embassy posted a 2013 human rights report to Weibo using ‘Martian’, a coded language based on Chinese characters (Figure 5).13

Figure 5: The 2014 Weibo post from the UK Embassy, which used coded language in an attempt to evade censorship.

If the post had gone out in standard Chinese, keywords deemed sensitive by the party-state, such as ‘human rights’, would have been flagged automatically. By using the ‘Martian’ coded language, the post survived for longer before the censors became aware of it.14
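
As a rough illustration of how such automated keyword flagging works, a minimal sketch follows. The keyword list, function name and example posts are hypothetical placeholders; a real platform filter is vastly larger and also targets homophones, images and coded scripts.

```python
# Minimal sketch of automated keyword flagging (illustrative only).
# SENSITIVE_KEYWORDS and the example posts are hypothetical placeholders;
# a real blocklist is far larger and continuously updated.
SENSITIVE_KEYWORDS = {"人权", "human rights"}

def flag_for_review(post_text: str) -> bool:
    """Return True if the post contains any blocklisted keyword."""
    return any(keyword in post_text for keyword in SENSITIVE_KEYWORDS)

print(flag_for_review("2013年人权报告"))                 # True: standard characters are matched
print(flag_for_review("2013年<coded characters>报告"))   # False: substituted characters slip past a naive match
```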

In other instances, embassies have posted ‘sensitive’ content on Weibo in order to address what they have perceived as unfair treatment by China’s state-controlled media.

On 3 August 2011, the Canadian Embassy was censored for the first time after it posted about Chinese fugitive Lai Changxing. The post included the full federal court decision that resulted in his deportation to China, mentioned Liu Xiaobo and Falun Gong, and was deleted almost immediately.15

At other times, foreign embassies have tested the boundaries of what is deemed acceptable discourse by Beijing’s censors. In 2016, the US consulate in Shanghai sent out a Weibo post asking for virtual private network (VPN) supplier recommendations. The post was deleted within an hour of its appearance.16

On 1 February 2017, the British Embassy posted an EU statement calling for the investigation of allegations of torture of detained human rights lawyers.17 According to Citizen Lab, Weibo users weren’t able to forward or comment on the post.18 The post was subsequently deleted. And on 3 June 2014, a day before the 25th anniversary of the massacre at Tiananmen Square, the Canadian Embassy posted a photo of Ambassador Guy Saint-Jacques posing with his wife at the site (Figure 6). The low comments-to-shares ratio on the provocative post would suggest some form of censorship, with comments either being deleted or not allowed at all.

Figure 6: Canadian Ambassador and his wife at Tiananmen Square, 2014

The text reads:

  • ‘On June 1, ambassador Guy Saint-Jacques and wife Sylvie Cameron took a tour around the Chairman Mao Memorial on their bikes. A visit to the place reminded them of various past events associated with the square, including the once more cordial and relaxed atmosphere there.’
  • Despite being shared 917 times, the post only displays a few comments—a telltale sign that censors had throttled engagement with it.
  • One share of the post added the comment: ‘There are only a few comments on this post, and you can’t see any of the shares of it.’

At times, the act of censorship happens not because an embassy has decided to push the envelope but because it has made a diplomatic faux pas. On 26 March 2014, the Russian Embassy’s Weibo account made what Foreign Policy called a ‘large digital diplomacy gaffe’ when it mentioned the Tiananmen incident. The embassy argued that ‘Russia’s current situation’, following Western sanctions after Russia’s annexation of Crimea, ‘somewhat resembles what China suffered after the Tiananmen incident’.19

More recently, however, the instances of blatant censorship—in which posts and even the accounts themselves are deleted—appear to have dropped off. Instead, as this report shows, the invisible hand of Beijing’s censors is, for the most part, eschewing heavy-handed censorship for more surreptitious forms. At the same time, it appears that foreign embassies on Weibo are pulling their punches and accepting ‘the sliding slope of red lines and self-censorship inside the Chinese system’ that Parello-Plesner warned about.20 The combination results in the suppression of ideas that are different from the CCP’s ‘correct line’.

Websites FreeWeibo and Weiboscope have been extremely useful for uncovering examples of blatant censorship, including deletions of posts and keyword blocking. However, less obvious forms of censorship are more difficult to detect. Some of those methods include disabling the comments section under posts and switching off their sharing functionality.

The disabling of comments is one of many levers that Sina Weibo’s censors have been able to pull since as early as 2012, when, rather heavy-handedly, comments on all posts were switched off after rumours of a coup spread on the platform.21

Similar forms of surreptitious censorship include ‘shadow-banning’, in which users are under the impression that their posts are being seen when in fact they’re hidden from other users. The practice is reported, if only anecdotally, on Sina Weibo, and has been proven to be in use on China’s dominant chat application, WeChat.22
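
The cross-account comparison that researchers use to confirm shadow-banning can be sketched conceptually as below. This is an outline only: the fetch function is a hypothetical placeholder, not a real Weibo or WeChat API, and in practice data collection relies on logged-in scraping or client captures from paired accounts.

```python
# Conceptual sketch of shadow-ban detection via cross-account comparison.
# fetch_visible_posts() is a hypothetical placeholder, not a real platform API.

def fetch_visible_posts(viewer_account: str, author_account: str) -> set[str]:
    """Hypothetical: IDs of author_account's posts that viewer_account can see."""
    raise NotImplementedError("replace with a real data-collection method")

def detect_shadow_banned_posts(author: str, independent_observer: str) -> set[str]:
    """Posts the author still sees on their own profile but an unrelated observer does not."""
    seen_by_author = fetch_visible_posts(viewer_account=author, author_account=author)
    seen_by_observer = fetch_visible_posts(viewer_account=independent_observer, author_account=author)
    return seen_by_author - seen_by_observer
```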

These stealthier forms of censorship are less noticeable to the user and therefore less likely to provoke any unwanted backlash.23 As Lawrence Lessig observed in 1999, it’s the underlying code that determines ‘whether access to information is general or whether information is zoned’.24 Or to rework the old aphorism, ‘If a message is posted on social media, but the algorithm doesn’t prioritise it, does it really make a sound?’

How censorship on Weibo works

An analysis of three months’ worth of Weibo posts between November 2017 and January 2018 from the top 10 foreign embassies in China (measured by follower numbers) found 51 instances of censored posts, mostly on the US Embassy account (Figure 7).25

Figure 7: Three months of Weibo posts from November 2017 to January 2018 resulted in 51 instances of censorship.

The US Embassy account had 28 instances of censorship in total, and a variety of methods were used to reduce or erase the impact of its posts. Those methods ranged from the blunt to the subtle:

  • Six posts were deleted—some immediately, some weeks after the fact.
  • Fifteen posts had their comments sections disabled immediately.
  • Three posts had comments sections disabled immediately and then re-enabled weeks later.
  • Two posts had their comments sections allowed, then disabled and hidden at some later stage.
  • In two posts, Weibo notified users that comments were being accepted but asked that they wait patiently for a ‘server synchronisation’. The user comments never made it through.

Figure 8 breaks down the censorship methods used on the US Embassy’s posts.

Figure 8: Censorship methods used on the US Embassy Weibo account

In a blatant act of censorship, a post sent out by the US Embassy on 7 November 2017 showing the first leg of President Trump’s Asian tour, in Japan, was immediately deleted. The deleted post—captured and archived by FreeWeibo.com 26—was also tweeted from the US Embassy Twitter account,27 helping to make its absence on Weibo more noticeable (Figure 9).

Figure 9: The US Embassy tweet, the Weibo equivalent of which was deleted by Chinese censors.

Translation: President Trump and First Lady Melania Trump were welcomed by the Emperor and Empress of Japan on the second day of their Japan visit. They also met with the families of North Korean abductees. President Trump held bilateral talks with Abe, and met with Japanese and American business leaders, while the First Lady had a joyous meeting with some Japanese primary school students. #POTUSinAsia

Two days later, on 9 November 2017—the second day of President Trump’s first state visit to the PRC—a post sent out by the US Embassy linking to a transcript of a press briefing by Secretary of State Rex Tillerson (Figure 10)28 had its comments section immediately disabled.

The post contained a statement from Secretary Tillerson that presented President Trump and President Xi as being on a joint ticket in regard to denuclearisation of the Korean Peninsula, and quickly became that week’s most shared post from the embassy, with 523 shares and 441 ‘Likes’.

Figure 10: The tweet about Rex Tillerson, the Weibo equivalent of which was deleted by Chinese censors.

Translation: President Trump and President Xi confirmed their determination to realise the complete, verifiable and everlasting denuclearisation of the Korean peninsula. President Trump and President Xi won’t accept a North Korea that is armed with nuclear weapons. We thank China for its cooperation. Secretary of State Rex Tillerson at the Beijing press conference. Read the brief.

On 17 November, another post quoted a different part of Secretary Tillerson’s earlier press briefing:

The key topic of discussion was our continued joint effort to increase pressure on North Korea, to convince them to abandon their nuclear and missile program. President Trump and President Xi affirmed their commitment to achieve a complete, verifiable, and permanent denuclearization of the Korean Peninsula. President Trump and President Xi will not accept a nuclear-armed North Korea.

On 24 November, another post quoted President Trump from his joint press conference with President Xi two weeks earlier: 29

All responsible nations must join together to stop arming and financing, and even trading with the murderous North Korean regime. Together we have in our power to finally liberate this region and the world from this very serious nuclear menace. But it will require collective action, collective strength, and collective devotion to winning the peace.

And on 30 November 2017, a US Embassy Weibo post announced a call between President Trump and President Xi after Pyongyang tested a missile reportedly capable of reaching the US mainland (Figure 11).30 A copy of the post remains on the US mission’s Twitter account.31

Figure 11: The tweet about Trump’s phone call with Xi, the Weibo equivalent of which was deleted by Chinese censors on Weibo.

Translation: President Trump spoke with President Xi to discuss North Korea’s latest missile test. President Trump stressed America’s determination to defend itself and its allies from the growing threat posed by the North Korean regime. November 29, 2017, the White House President Trump and President Xi call briefing.

Six months after these four posts were published, they no longer exist. It’s unclear when exactly the censors deleted them. This method of delayed censorship avoids detection on FreeWeibo.com, where there are no records of the posts being censored. With the North Korea nuclear crisis still a live issue, the deletions suggest that Beijing is trying to regain control of the narrative inside its own information space.

On 27 December 2017, the US Embassy was censored again after it sent out a post linking to a joint US–German embassy statement about the sentencing of activist Wu Gan and his lawyer, Xie Yang:

We see lawyers and defenders of rights as aiding the strengthening of the Chinese society via developing governance by law. Click the link here to view the recent cases.

The post was captured on FreeWeibo.com after being censored on Weibo.32

Aside from these six instances of deleted posts, all other instances of censorship captured in this report involved the disabling of the comments section under posts. This softer, less noticeable form of censorship is what’s more generally applied to posts from foreign embassies, resulting in suspiciously low levels of reported engagement from users. Engagement levels are artificially deflated when comments are disabled.
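
One rough way to surface this pattern in a collected dataset is to flag widely shared posts whose visible comment counts are anomalously low relative to their shares, the same telltale sign noted above for the Canadian Embassy’s 2014 Tiananmen post. The sketch below is illustrative only; the field names and thresholds are assumptions, not the method used to compile the figures in this report.

```python
# Illustrative sketch: flag posts whose engagement pattern suggests disabled or purged comments.
# The Post fields, RATIO_THRESHOLD and MIN_SHARES values are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    shares: int
    comments: int

RATIO_THRESHOLD = 0.05  # fewer than 5 visible comments per 100 shares looks suspicious
MIN_SHARES = 100        # ignore low-traffic posts, where the ratio is too noisy

def suspected_comment_censorship(posts: list[Post]) -> list[Post]:
    """Return widely shared posts with disproportionately few visible comments."""
    return [
        p for p in posts
        if p.shares >= MIN_SHARES and p.comments / p.shares < RATIO_THRESHOLD
    ]

sample = [
    Post("canadian_tiananmen_2014", shares=917, comments=3),
    Post("routine_visa_notice", shares=120, comments=45),
]
print([p.post_id for p in suspected_comment_censorship(sample)])
# ['canadian_tiananmen_2014']
```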

In a response to a list of questions asked by ASPI’s International Cyber Policy Centre (ICPC), three governments—the US, Australian and Japanese—confirmed that their embassies in Beijing never disable the comment sections under their Weibo posts.33

‘We don’t delete our own posts,’ a US Embassy spokesperson told ASPI ICPC via email. ‘The US Embassy faces regular and routine blocking of social media posts in China.’ 34

‘We don’t disable the comments section ourselves,’ a Japanese Embassy spokesperson told an ASPI ICPC researcher over the phone. ‘When comments are closed for posts it’s always done by Sina. They will always disable comments for posts mentioning the names of Chinese political leaders, for example.’

In fact, in the data covered in this report, 75% of the time censorship appears to have been meted out because a top Chinese official (living or dead) was mentioned by name or was in a photo in the post.

The sensitivity around senior Chinese officials isn’t surprising. In his 2013 book, Blocked on Weibo, Jason Q Ng found that the largest share of blocked words he discovered through his research were names of people, mostly CCP members.

‘[P]rotection from criticism on Weibo seems to be a perk for rising up the ranks—while dissidents and people caught up in scandals or crimes make up the rest of the names,’ Ng wrote.35

A post by the Cuban Embassy on 25 January 2018 mentioned Song Tao (宋涛), the head of the CCP’s International Department. The post described Song as ‘Secretary Xi Jinping’s Special Envoy’, which was probably the reason for the censorship that followed (Figure 12).

Figure 12: A Cuban Embassy post runs into trouble

Translation of error message: Sorry, you cannot proceed with your attempt as the content contains information that has violated relevant laws and regulations or Weibo community guidelines.

Even when posts mentioning Xi Jinping are positive, they still attract the attention of censors. In October 2017, former Australian Prime Minister Kevin Rudd posted a photo of himself ‘studying’ Xi’s report to the 19th CPC National Congress (Figure 13). ‘China has entered a new age,’ he wrote. According to Rudd, comments under the post were disabled by Weibo. 36

Figure 13: Comments were disabled after Kevin Rudd posted on Weibo

A Sina spokesperson confirmed to ASPI’s ICPC that government-affiliated Weibo accounts with a blue verified badge have the ability to disable the comment sections on their own posts.37 However, in the dataset collected for this report, only one instance of a foreign embassy disabling its own comments was found, on the South Korean embassy’s Weibo account (Figure 14).

Figure 14: The error message reads ‘Due to this user’s settings, you’re unable to comment.’ The South Korean embassy did not respond to ASPI ICPC’s enquiries.

Occasionally, there are exceptions to the censorship rules. An uncensored post from Canadian Prime Minister Justin Trudeau sent on 6 December 2017 included Chinese Premier Li Keqiang’s name in the text, as well as Li’s image in a photo.38 

The outsized success of a selfie taken by Indian Prime Minister Narendra Modi and Chinese Premier Li Keqiang and posted to Weibo in July 2015 is another exception to the rule (Figure 15).39 The virality of the post was due not only to the rare inclusion of a top Chinese leader, but also to the content, in which Modi wished Li a happy birthday. Premier Li’s exact birthday hadn’t been publicly disclosed before.40

Figure 15: Indian Prime Minister Narendra Modi and Chinese Premier Li Keqiang post a selfie

In a rare case during the 2017 G20 summit in Germany, any mention of Russian President Vladimir Putin was blocked on Weibo, according to the Financial Times.41

The move was interpreted by the paper as ‘giving Russia’s president an immunity from public criticism usually reserved for China’s Communist Party elite.’ In that instance, any mention of Putin on the accounts of Weibo users with more than 1,000 followers triggered the message: ‘This post does not allow commenting.’

Out of 51 instances of suspected censorship over the three-month study period, only 13 were posts that didn’t mention any top Chinese leaders.

One particularly notable instance of censorship was of a 13 November 2017 post from the US Embassy Weibo account, which included a video of President Donald Trump emphasising the US as a country whose ‘home’ is ‘on the Pacific’ (Figure 16).

Figure 16: Comments are disabled on US Embassy’s post of President Trump speaking about the US and the Pacific.

Translation of error message: Sorry, you cannot proceed with your attempt as the content contains information that has violated relevant laws and regulations or Weibo community guidelines.

Other, more personal, attempts at cross-cultural communication were hamstrung by the censors too. On the final day of President Trump’s state visit to the PRC, a video of Trump’s 6-year-old granddaughter Arabella Kushner, which Trump had personally shown President Xi and Xi’s wife, Peng Liyuan, was published on the US Embassy account and immediately had its comments section disabled (Figure 17).

Figure 17: Screenshot of the US embassy’s post of Arabella Kushner singing in Chinese. Comments on the post were immediately disabled.

On the same day, a Weibo post written in the first person by President Trump at the end of his state visit to the PRC appeared:

I’m now leaving China for Vietnam for the APEC meeting #APEC2017#. First Lady Melania will stay here to visit the zoo, and of course, the Great Wall of China. Then she will go to Alaska to greet our amazing troops.

The post prompted some users to ask in comments whether Trump had taken over control of the US Embassy account.

After 39 comments were made, any subsequent attempt to comment resulted in an error message reading: ‘Posted successfully. Please be patient about 1–2 minutes delay due to server synchronization, thank you’ (Figure 18).

Figure 18: The Trump post at the end of his China visit.

Translation of error message: Posted successfully. Please be patient about 1–2 minutes delay due to server synchronization, thank you.

Two other posts by the US Embassy probably drew the ire of Weibo’s censors by providing an opportunity for Chinese netizens to draw comparisons between conditions in the US and China.

One such post answered a question posed to the US Embassy Weibo account about whether American officials were provided with special food supplies (Figure 19).42 Chinese news reports in 2011 revealed that Chinese Government officials have exclusive suppliers of organic food.43 Given that the post didn’t include any sensitive words that might cross a censorship fault line, it managed to garner at least 88 comments before commenting was disabled by the censors.

Figure 19: One of only 13 censored posts that didn’t refer to a senior Chinese leader, this post seemed to invite a comparison of US officials to Chinese officials, and comments were disabled.

Weibo accounts run by the US Government have been suspended and even completely deleted in the past. The US Shanghai consulate’s Weibo account was shut down on 14 July 2012, and the US Embassy account was suspended briefly on 5 May 2016, according to China Digital Times, a website run out of the University of California that follows social and political developments in China.44

At times, it’s less clear why a decision to disable comments was made. When the US Embassy posted that it wouldn’t be able to continue posting to Weibo and WeChat during a government shutdown on 22 January 2018, the post went viral (Figure 20).45 It was the second most shared of all posts gathered during the three-month reporting period for this report.

Figure 20: A post by the US Embassy, explaining that it wouldn’t be posting during a government shutdown, was picked up by the Chinese media.

Translation: Due to an unresolved issue with funding, the US embassy’s social media account will cease its regular updates. While the funding issue remains unresolved, all regular and emergency consular, citizen and immigration services will continue as usual. Those seeking visa or citizen services who have secured an appointment in advance should attend as scheduled. With the exception of emergency security and safety information, the embassy website will not resume its regular updates before the full resumption of operations.

However, after the post garnered 1,893 comments, further comments were disabled, despite the Global Times’ gleeful reporting on the incident.46

For China’s overzealous censors, even posts that could be used to show the apparent weaknesses of liberal democracies, such as the US Embassy’s government shutdown post, need to be censored—presumably for fear that discussion of the US Government will prompt users to draw comparisons to their own government. Clearly, the censors, of which Sina Weibo employs an estimated 13,000,47 are highly sensitive to any content that falls outside the boundaries of acceptable CCP-approved discourse.

It follows that a country such as Australia, which claims to be ‘a determined advocate of liberal institutions, universal values and human rights’,48 should expect such advocacy to attract the attention of China’s censors; if it didn’t, something would be odd. However, the Australian Embassy’s Weibo account doesn’t appear to be attracting much CCP censorship. In the three months of data collected for this report, the embassy’s account was censored only three times, each time for mentioning Xi Jinping. Whether this lack of censorship reflects savvy account management, the CCP’s lack of interest in the embassy’s Weibo account or self-censorship by the Australian Government is the important question.

Rising nationalism

Rising Chinese nationalism online has been allowed to ferment amid recent social media campaigns against companies such as South Korean conglomerate Lotte Group, German carmaker Daimler’s Mercedes-Benz brand and Marriott International. The campaigns have received support from both state-run media and the Chinese Government.49

On 17 November 2017, an innocuous post by the German Embassy explaining the meaning of the German word Lückenbüßer (stopgap)50 became a place for nationalists to congregate and protest after pro-Tibetan-independence flags were sighted at a soccer match in Germany involving Chinese players (Figure 21).

Figure 21: The German Embassy Weibo post and angry responses from nationalists.

Translation: Luther invented the word Lückenbüßer while translating the Old Testament. The word is about holes and cracks needing to be mended in the Holy Wall in Jerusalem. This is the origin of the word. Today, it refers to a person who acts as a replacement for the one missing from the original plan, although the plan does not work out in the end. No one wants to be a measure of expediency, but we often cannot do without one. During a period of transition when changes are about to happen, or when a final choice has yet to be made, it usually connects the world together.

Translation of comments:

  • You want freedom of speech? Sure! Next time you Germans want to come to China for any games, we will bombard with swastika flags and photos of Hitler, and salute and chant the name of Hitler throughout, and belt out Nazi songs! Then you’d be happy, be content! A nation that cannot retain its roots is really pathetic, of course, they will treat the territorial integrity of other nations as bullshit!
  • You deserve terrorist attacks in Europe, it’s all your own making!
  • Can we perform Nazi rituals and bear Nazi flags when the German team comes to China?
  • Since some people purposely provoked aggression with flags for Tibetan independence during a China–Germany soccer match, while you brushed it aside with the excuse of freedom of speech, I think it would not be an issue to paste around your embassy all with flags of east Germany!
  • What is freedom of speech? If the separation of China can be counted as freedom of speech, then we sincerely hope that you would again divide Germany into two countries.

The prevalence of such deep nationalism, both real and manufactured, has prompted some, like Adelaide University scholar Ying Jiang, in her pioneering research into ‘Weiplomacy’ efforts, to suggest that the power of Weibo to further the goals of public diplomacy might have been overestimated.51 It’s easy to see how that could be the case. While liberal voices face extra scrutiny from the censors, nationalist voices are allowed to flourish. Even foreigners on Weibo have been tapping into Chinese nationalism as a fast track to viral fame on the platform.

David Gulasi, a China-based Australian English teacher, first attracted attention on the platform with funny videos, but saw his popularity skyrocket when he started parroting nationalist views. State media outlet Xinhua has noted that videos uploaded by Gulasi include one in which he ‘professed his love for China and denounced foreigners who did not share his passion for the country’.52

In 2016, when thousands of China-based trolls attacked Australian Olympic swimmer Mack Horton and his supporters after Horton called his Chinese rival Sun Yang a ‘drug cheat’, Gulasi joined in on Weibo (Figure 22).53

Figure 22: Joining a Chinese nationalist pile-on against Australian Olympian Mack Horton helped David Gulasi achieve viral fame on Weibo.

In another video, Gulasi complains about the slow pace of life in Australia and tells his audience he has come to China to pursue his ‘Chinese Dream’54—a populist slogan introduced by Xi Jinping in 2013. Astoundingly, Gulasi was chosen by the Australian Embassy to feature in its 45 Years, 45 Stories campaign to commemorate the 45th anniversary of Australia–China diplomatic relations.55

Foreign embassies and even national leaders such as India’s Narendra Modi have had their Weibo accounts deluged with angry nationalistic messages.56 But in an increasingly censored and controlled online media environment, foreign embassy accounts can also be a channel for netizens to protest about their own government.

In early February 2018, the comments section on posts sent out by multiple foreign embassies, including the US, Japanese and UK embassies, as well as the United Nations, spontaneously became a space for Weibo users to protest the China Securities Regulatory Commission and its head, Liu Shiyu (Figure 23).57

Figure 23: A screenshot of the US embassy Weibo account from 9 February 2018. The screenshot was censored on Weibo but retrieved by FreeWeibo.com, a censorship monitoring site. Source: 科学自然 ‘科学自然:激动的中国股民涌到美国驻…’, FreeWeibo.com, 10 February 2018, online

Translation:

  • Since the China Securities Regulatory Commission Weibo has banned hundreds of millions of investors from protesting, all we can do is voice our fury here and strongly demand Liu Shiyu to step down.
  • Please have your American reporters go to the CSRC to interview Liu Shiyu, [and ask him] why is the Chinese stock market so unable to take a hit?
  • As our official platform has been censored, I just want to borrow this space to call for Liu Shiyu to step down. The stock market has crashed five times in two years, slaughtering hundreds of millions of investors
  • ‘641’ (a homonym for Liu Shiyu) must step down immediately, you’ve already seriously hurt hundreds of millions of families.

In April 2018, Weibo reversed a ban on content ‘related to’ homosexuality after an unusually fierce backlash from internet users.58

Both incidents reveal the diversity of views and ideological groupings that continue to exist online in China despite the party-state’s efforts to promote nationalism. Research by the Mercator Institute for China Studies (MERICS) demonstrates how those widely differing views coexist on Chinese social media, even after extensive efforts by the CCP to repress liberal voices on the platform.59

Its research shows that, while party-state propaganda plays a dominant role, a number of other distinct ideological clusters exist on Chinese social media sites such as Sina Weibo. Among the groupings it identifies are ‘Market Lovers’, ‘Democratizers’, ‘Humanists’ and ‘US Lovers’.

Furthermore, a survey conducted by MERICS for the report shows that Chinese nationalism isn’t necessarily anti-Western. While 62% of respondents in the online survey said China should be more assertive internationally, 75% also supported the ‘spread of Western values’. As the paper points out, ‘the CCP’s strategy of denouncing so-called Western values has repeatedly backfired when netizens pointed out the lack of better Chinese alternatives.’ Western embassies’ public diplomacy efforts seem to have some fertile ground, despite the censorship.

Israel, the Weibo stand-out

The ICPC’s analysis of three months of posts from the top 10 foreign embassies on Weibo shows that a failure to cut through can’t be blamed only on censorship. Many foreign embassies simply aren’t putting enough resources into ensuring that their content is engaging enough to succeed in a highly competitive online media environment, or creative enough to not be easily spotted by censors.

The Israeli Embassy is a stand-out exception: its content strategy has proved highly popular on the platform.

In her own research into ‘Weiplomacy’ efforts, Adelaide University scholar Ying Jiang captured 2015 data from the top 10 embassies on Weibo, and Israel didn’t make the list. Just a year later, research by Manya Koetse, editor-in-chief of the Chinese social trend tracking website What’s on Weibo, showed that the Israeli Embassy had come out of nowhere to take the top spot (Table 1).

Table 1: The top 10 foreign embassies on Weibo, 2015 to 2017

(Table-1)

Sources:
a) Ying Jiang, ‘Weibo as a public diplomacy platform’, Social Media and e-Diplomacy in China, 10 August 2017, online.
b) Manya Koetse, ‘Digital diplomacy: these foreign embassies are most (un)popular on Weibo’, What’s On Weibo, 20 December 2016, online.
c) Data collected by Fergus Ryan, December 2017.

Of course, a successful digital public diplomacy effort on Weibo shouldn’t be judged only by how many posts are censored; it must also be pragmatic. Above all, any digital diplomacy, or ‘e-diplomacy’, effort is fundamentally about using the internet and new information and communications technologies to help achieve diplomatic objectives.60

Drawing on data from late 2017, this report has Israel maintaining its lead at number 1 (despite losing followers), while the US and Canada continue to vie for second and third place. The UK has recovered from its loss of two places to regain the number 6 slot, while Australia has managed to re-enter the top 10.

However, follower counts can be a somewhat crude metric, as they can be easily gamed.

A 2014 investigation by The Globe and Mail found that large chunks of those followers were fake. According to the online tool used by the paper, 45.8% of the US Embassy’s followers, 39.9% of the UK’s and 51.2% of Japan’s were real. Only 12.9% of the Canadian Embassy’s 1.1 million followers were determined to be real.61

Another, more meaningful, metric is the average number of shares, likes and comments that each embassy’s posts receive, which gives an idea of how ‘influential’ each embassy is (Figure 24).

Figure 24: Top 10 foreign embassies, by shares and likes per post

Using these engagement metrics, the Japanese, UK, US, Israeli and Canadian embassies make up the top five.
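
As an illustration of how such per-post engagement averages can be computed from a collected dataset, a minimal sketch follows. The sample records and field names are hypothetical placeholders; the report’s own rankings were derived from the three-month dataset described above.

```python
# Illustrative sketch: average shares, likes and comments per post for each embassy.
# The sample records are hypothetical placeholders, not figures from the report's dataset.
from collections import defaultdict
from statistics import mean

posts = [
    {"embassy": "Japan",  "shares": 220, "likes": 310, "comments": 95},
    {"embassy": "Japan",  "shares": 180, "likes": 250, "comments": 80},
    {"embassy": "Israel", "shares": 400, "likes": 520, "comments": 150},
]

grouped = defaultdict(list)
for post in posts:
    grouped[post["embassy"]].append(post)

for embassy, records in grouped.items():
    averages = {metric: mean(r[metric] for r in records)
                for metric in ("shares", "likes", "comments")}
    print(embassy, averages)
# Japan {'shares': 200, 'likes': 280, 'comments': 87.5}
# Israel {'shares': 400, 'likes': 520, 'comments': 150}
```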

Central to the success of the top 5 accounts is a tendency to not just promote the image of their own countries, but to engage with and leverage Chinese culture, particularly pop culture. Weibo’s audience skews young (88% of Weibo users are under 33 years of age) and, after its most vocal liberal voices were purged, is now largely dominated by entertainment.62

If the aim of foreign embassies on Weibo is to enhance soft power and shift public opinion towards supporting their foreign policy positions, the Israeli Embassy’s Weibo account is exemplary. Shimi Azar, who worked as social media manager at the embassy from late 2014 to early 2016, says the country received a lot of exposure through state visits by Israel’s leaders to China.

‘The first visit of Israel’s Prime Minister Netanyahu to China in 2013 and the visit of the late president Shimon Peres in 2014 created a big buzz in the media,’ Azar told the Global Times.

‘So the embassy took advantage of this buzz and created a Sina Weibo account for Shimon Peres, which was very successful and soon attracted half a million followers.’63

But the outsized success of the Israeli Embassy’s Weibo account also occurred in the context of a number of deadly terrorist attacks by jihadist-inspired separatist groups in Xinjiang.64 As Peter Cai noted in 2014, the majority of comments under an Israeli Embassy Weibo post that likened Hamas to the Islamic State terrorist group were supportive of Israeli attacks on Hamas.

‘Israel, you must control the population in Gaza, otherwise it’s impossible for you to win. You should ditch your humanitarian principles and the only hope for you is to fight evil with evil,’ read one representative comment under the post.

Chinese netizen support for Israeli foreign policy, which goes against the official Beijing position, is still ongoing. A nine-sentence post sent out by the Israeli Embassy following US President Trump’s decision to recognise Jerusalem as the capital of Israel was the most shared piece of embassy content (shared 2,298 times) in the three-month period covered in this report (Figure 25).65

Figure 25: The most shared piece of embassy content—on the US recognition of Jerusalem as Israel’s capital

The post, which outlines the official Israeli view of the history of Jerusalem, was positively received by Weibo users. ‘The world will rest assured and the people will be satisfied when Jerusalem is given to you,’ reads the most liked comment underneath the post.

‘Put the boot into the cancer of humanity’, reads the second most liked comment—typical of the growing anti-Muslim sentiment online that has gone unchecked by Beijing’s censors. Islamophobia has been given free rein online in China as authorities continue their crackdown in the restive region of Xinjiang. The frequency of anti-Muslim comments under many Israeli Embassy posts suggests that part of the audience perceives the Israeli Embassy Weibo account itself as anti-Muslim.

A lack of coordination and transparency

But the efficacy of even the most well-resourced and strategic use of Chinese social media platforms such as Weibo is ultimately limited by the party-state. On his second official visit to China in December 2017, Canadian Prime Minister Justin Trudeau sought to parlay his image as a ‘Weibo addict’ into a public diplomacy coup when he made his first stop a visit to Sina Weibo headquarters in Beijing.

Promotional material released before Trudeau’s visit to Weibo claimed that the Q&A with the Canadian Prime Minister would be broadcast live, via video stream, on Weibo (Figure 26). But instead of seeing a live-stream of the proceedings, Weibo users at first saw only a delayed 36-second clip of the PM, and it was only hours later that more of his appearance was made available.66 As the Canadian Government intended the event to be live-streamed, a reasonable conclusion is that the abrupt cancellation was due to Weibo censors.

Figure 26: A Sina Weibo poster advertising Canadian Prime Minister Justin Trudeau’s video live-stream from Sina Weibo HQ. The poster refers to Trudeau as a ‘Weibo addict’.

Chinese officials, when questioned about the practice of censoring the comments section on foreign embassy Weibo accounts, pass the buck back to Sina Weibo. An exchange between a foreign journalist and an official at a recent Foreign Ministry press conference provides an illustrative example:

Q: Some Chinese investors were angry about the decline in the domestic stock market last week, and they used the US Embassy’s Weibo account to vent, posting comments to that account. On Saturday, we saw these comments have been blocked. Can you tell us your understanding as to what happened there? Does China see that the US is doing anything incorrect in this matter?

A: You might as well ask the US Embassy in China, whose staff is responsible for the maintenance of their own account.

Follow-up: It appears from our report that they did not take actions to block anything. That may have been the Weibo that blocked them.

A: I have not heard about what you mentioned. As I understand, you need to ask them if there are problems with their Weibo account. If the problem cannot be solved, they may contact relevant competent authorities.67

Conclusion and policy recommendations

It’s estimated that Beijing spends US$10 billion a year on external propaganda, roughly 15 times the US$666 million that the US spent on public diplomacy in 2014.68 Content from Chinese state media has featured in major Western outlets such as The Sydney Morning Herald, The Washington Post, the UK’s Daily Telegraph and Le Figaro, as well as on the social media platforms Twitter and Facebook.

The reverse would be unthinkable in the PRC’s tightly controlled media environment. This is despite the fact that the PRC backed a landmark resolution in July 2012 at the UN Human Rights Council, which affirmed that ‘the same rights that people have offline must also be protected online, in particular freedom of expression, which is applicable regardless of frontiers and through any media of one’s choice.’69

Insisting that the PRC uphold the rights of its citizens to engage freely with the legitimate online public diplomacy efforts of foreign embassies isn’t a boutique concern. It’s a parallel issue to seeking reciprocity from the Chinese state for numerous other things, such as intellectual property regimes and market access. The PRC’s online censorship regime cloisters its netizens in an information environment that’s cut off from the rest of the world and primed with a nationalistic ideology. The more the Chinese party-state controls the media to promote its own narrative, the more it limits its own options for how it can resolve international conflicts.70

While CCP statements at the UN are reassuring, the trendlines for censorship in China are moving in the opposite direction. Under Xi’s rule, China has increasingly tightened its grip on the internet, concerned about the erosion of its ideology and policy by a vibrant online culture and the spectre of so-called ‘hostile foreign forces’. As this paper shows, Beijing’s censors aim to use almost imperceptible amounts of censorship to throttle discussion on Weibo that they deem falls outside the frame of discourse acceptable to the CCP party-state. For foreign governments, the temptation to self-censor is increasing.

Foreign governments should demand that Beijing refrain from censoring their legitimate and overt digital diplomacy efforts. Short of that, and probably more powerful for the netizen community, like-minded governments, in coordination with each other, should commit to publishing transparency reports that reveal the level of censorship they’re experiencing on Weibo. This can itself be influential public diplomacy: it’s important that embassy Weibo accounts speak to China’s diverse netizen groups, and publishing transparency reports about CCP censorship will also inform those groups of their own government’s actions.

The continued meaningful presence of foreign embassy accounts—which occasionally speak outside the bounds of the CCP’s frame of acceptable discourse—will demonstrate those countries’ commitment to presenting Western political norms and values to Chinese civil society.

These accounts can also help reduce misunderstandings between foreign governments and the population of one of the world’s most powerful countries.

Changes need to be made to the way governments engage online in China. Those changes need to include preventive measures to stop governments falling into a cycle of self-censorship. This paper makes the following recommendations:

  1. Governments need to become more assertive and more creative in their messaging on Chinese social media platforms. Of course, some content should be tailored for local audiences. But foreign governments must ensure that they’re communicating the same policy and political messages to the Chinese public as they are to other publics around the world. They are likely to be censored for this.
  2. Foreign governments should use uncensored social media platforms such as Twitter—which, despite being blocked in China, still has an estimated 10 million active users in the country71—to cross-post all of their content. That way, instances of censorship will be transparent and available to global audiences. Cross-posting content elsewhere also gives Chinese netizens an alternative avenue to access and engage with uncensored content. The US Embassy’s Twitter account—which has 738,000 followers—provides other countries with a good model.72
  3. When governments have their official content censored on Chinese online platforms, they should raise this censorship directly with their Chinese Government counterparts. Those countries73 that allow the Chinese Communist Party an open media and cyber environment to communicate all of its official messages should request reciprocity.
  4. The Australian Government needs more avenues to engage the Chinese public and to put different messages forward. Dedicated official accounts for the positions of Prime Minister and Foreign Minister should be established immediately.

Acknowledgements

The author would like to thank Amber Ziye Wang for her help researching this paper. He’d also like to thank Richard McGregor, Peter Cai and Alex Joske for their comments, which greatly improved the final product. He’s also immensely grateful to his colleagues at ASPI, Danielle Cave, Fergus Hanson and Michael Shoebridge, for their crucial assistance.


ASPI International Cyber Policy Centre

The ASPI International Cyber Policy Centre’s mission is to shape debate, policy and understanding on cyber issues, informed by original research and close consultation with government, business and civil society.

It seeks to improve debate, policy and understanding on cyber issues by:

  1. conducting applied, original empirical research
  2. linking government, business and civil society
  3. leading debates and influencing policy in Australia and the Asia–Pacific.

We thank all of those who contribute to the ICPC with their time, intellect and passion for the subject matter. The work of the ICPC would be impossible without the financial support of our various sponsors.

Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.

© The Australian Strategic Policy Institute Limited 2018

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers.

  1. Mareike Ohlberg, Boosting the party voice: China’s quest for global ideological dominance, Mercator Institute for China Studies, 2016, online. ↩︎
  2. Australian Government, 2017 Foreign Policy White Paper, 2017, online. ↩︎
  3. Jon Russell, ‘Twitter estimates that it has 10 million users in China’, TechCrunch, 5 July 2016, online. ↩︎
  4. US Embassy, Weibo post, 7 May 2018, online. ↩︎
  5. Jiayun Feng, ‘US Embassy bashes Chinese “political correctness” on Weibo, sending the Chinese internet into a frenzy’, SupChina, 7 May 2018, online. ↩︎
  6. Hu Xijin (胡锡进), Weibo post, 7 May 2018. ↩︎
  7. Jiayun Feng, ‘US Embassy bashes Chinese “political correctness” on Weibo, sending the Chinese internet into a frenzy’. ↩︎
  8. Sidney Leng, Jane Li, ‘US, China in fresh row as Beijing tells foreign airlines they will be punished for failing to respect territorial claims, report says’ South China Morning Post, 7 May 2018, online. ↩︎
  9. Ying Jiang, ‘Weibo as a public diplomacy platform’, Social Media and e-Diplomacy in China, 10 August 2017, online. ↩︎

Australia’s Offensive Cyber Capability

FOREWORD

The reality of the world we live in today is one in which cyber operations are now the norm. Battlefields no longer exist solely as physical theatres of operation, but now also as virtual ones. Soldiers today can be armed not just with weapons, but also with keyboards. That in the modern world we have woven digital technology so intricately into our businesses, our infrastructure and our lives makes it possible for a nation-state to launch a cyberattack against another and cause immense damage — without ever firing a shot.

ACS’s aim in participating in this policy brief is to improve clarity of communication in this area. For Australia, both defensive and offensive cyber capabilities are now an essential component of our nation’s military arsenal, and a necessary step to ensure that we keep up with global players. The cyber arms race moves fast, so continued investment in cyber capability is pivotal to keep ahead of and defend against the latest threats, while being able to deploy our own capabilities when and where we choose.

So, too, is ensuring that we have the skills and the talent to drive cyber capabilities in Australia. This means attracting and keeping the brightest young minds, the sharpest skilled local talent and the most experienced technology veterans to drive and grow a pipeline of cyber specialists, and in turn help protect and serve Australia’s military and economic interests.

Yohan Ramasundara
President, Australian Computer Society

What’s the problem?

In April 2016, Prime Minister Turnbull confirmed that Australia has an offensive cyber capability. A series of official disclosures have provided further detail, including that Australia will use this capability against offshore cybercriminals.

This was the first time any state had announced such a policy.

However, this commendably transparent approach to telegraphing our capability and intentions hasn’t been without challenges. In some cases, these communications have created confusion and misperceptions. There’s a disconnect between popular perceptions, typified by phrases like ‘cyber Pearl Harbor’, and the reality of offensive cyber operations, and reporting has at times misrepresented how these tools will be used. Public disclosures and the release of the report of the Independent Intelligence Review have also raised questions about how Australia will build and maintain this capability.

What’s the solution?

To reduce the risk of misunderstanding and misperception and to ensure a more informed debate, this policy brief seeks to further clarify the nature of Australia’s offensive cyber capability. It recommends improving communications, using innovative staff recruitment and retention options, deepening industry engagement and reviewing classification levels in some areas. Looking forward, the government could consider increasing its investment in our offensive capability to create an asymmetric capability; that is, a capability that won’t easily be countered by many militaries in our region.

Introduction

Governments routinely engage in a wide spectrum of cyber operations, and researchers have identified more than 100 states with military and intelligence cyber units.1

The cyber units range considerably in both their capability and their compliance with international law. Leaks have highlighted the US unit’s advanced capability, and public documents reveal its size. US Cyber Command’s action arm, the Cyber Mission Force, is building to 6,200 military and civilian personnel, or about 10% of the ADF, and for the 2018 financial year requested a US$647 million budget allocation.2 China has been widely accused of stealing enormous quantities of intellectual property. North Korea has used cyber tools to steal money, including in a US$81 million heist on the Bangladesh central bank. Russia is accused of using a range of online methods to influence the 2016 US presidential election and has engaged in a wide spectrum of actions against its neighbours, such as turning off power stations in Ukraine and bringing down government websites in Georgia and Estonia. Israel is suspected of using a cyber operation in conjunction with its bombing raid on a Syrian nuclear reactor in 2007 by temporarily ‘tricking’ a part of Syria’s air defence system to allow its fighter jets to enter Syria undetected.3

In Australia, the government has been remarkably transparent in declaring the existence of its offensive cyber capability and its applications: to respond to serious cyberattacks, to support military operations, and to counter offshore cybercriminals. It has also established robust structures to ensure its compliance with international law. Three additional disclosures about Australia’s offensive cyber capability have followed the Prime Minister’s initial April 2016 announcement. In November 2016, he announced that the capability was being used to target Islamic State,4 and on 30 June 2017 Australia became the first country to openly admit that its cyber offensive capabilities would be directed at ‘organised offshore cyber criminals’.5 The same day, the then Minister Assisting the Prime Minister for Cyber Security, Dan Tehan, announced the formation of an Information Warfare Division within the ADF.

While these disclosures have raised awareness of Australia’s offensive cyber capability, the limited accompanying detail has meant that the ensuing public debate has often been inaccurate or misleading. One major news site, for example, led a report with the title ‘Australia launches new military information unit to target criminal hackers’.6 Using the ADF to target criminals would have been a radical departure from established protocols.

This policy brief seeks to clarify some of the misunderstandings arising from sensationalist reporting.

The report has the following parts:
1. What’s an offensive cyber operation?
2. Organisation, command and approvals
3. Operations against declared targets
4. Risks
5. Checks, balances and compliance with international law
6. Strengths and weaknesses
7. Future challenges and recommendations.


1. What’s an offensive cyber operation?

For the purposes of this policy brief, we use a draft definition that’s being developed as part of the Department of the Prime Minister and Cabinet’s Cyber Lexicon project. It defines offensive cyber operations as ‘activities in cyberspace that manipulate, deny, disrupt, degrade or destroy targeted computers, information systems, or networks’.7 Given the range of countries with varying capabilities and using examples from open sources, offensive cyber operations could range from the subtle to the destructive: removing computer accounts or changing passwords; altering databases either subtly or destructively; defacing web pages; encrypting or deleting data; or even attacks that affect critical infrastructure, such as electricity networks.

Even though it may use the same tools and techniques, cyber espionage, by contrast, is explicitly designed to gather intelligence without having an effect—ideally without detection. The Global Commission on the Stability of Cyberspace has commissioned ASPI’s International Cyber Policy Centre to do further work on defining offensive cyber capabilities.

2. Organisation, command and approvals

Australia’s offensive cyber capability resides within the Australian Signals Directorate (ASD).8 It can be employed directly in military operations, in support of Australian law enforcement activities, or to deter and respond to serious cyber incidents against Australian networks. While physically housed within ASD, the military and law enforcement applications have different chains of command and approvals processes.

MILITARY

The Information Warfare Division within the Department of Defence was formed in July 2017 and is headed by the Deputy Chief Information Warfare, Major General Marcus Thompson.

Major General Thompson has presented the ADF approach to cyber capabilities as two distinct functions: cybersecurity (consisting of self-defence and passive defence9), and cyber operations (consisting of active defence and offence10).

Figure 1

The Australian Government’s offensive cyber capability sits within ASD and works closely with each of the three services, which embed staff from the ADF’s Joint Cyber Unit in ASD. Offensive cyber in support of military operations is a civil–military partnership. The workforce that conducts offensive cyber operations resides within ASD and is largely civilian. Advice from Defence is that the laws of armed conflict are considered during the development and execution of operations, and that ASD personnel will act in accordance with legally approved instructions. There’s no reason to doubt that, and the Inspector-General of Intelligence and Security has noted, in the context of cyber operations in support of ADF operations in Iraq and Syria, that ‘guidance in place at the time was appropriate and followed by staff, and no issues of legality or propriety were noted’.

The ability to conduct an operational planning process that takes into account the desired outcome, situational awareness and the possible range of effects is a military discipline that resides in the ADF. This arrangement is expected to continue under proposals from the 2017 Intelligence Review to make ASD a statutory authority within the Defence portfolio.

As clarified in Australia’s International Cyber Engagement Strategy, ‘Offensive cyber operations in support of [ADF] operations are planned and executed by ASD and Joint Operations Command under direction of the Chief of Joint Operations.’11 Targeting for offensive cyber operations occurs in the same manner as for kinetic ADF operations. Any offensive cyber operation in support of the ADF is planned and executed under the direction of the Chief of Joint Operations and, as with any other military capability, is governed by ADF rules of engagement.

ADF soldier undergoing cyber training. © Commonwealth of Australia, Department of Defence.

LAW ENFORCEMENT

The announcement that Australia would be using its offensive cyber capability against offshore cybercriminals created considerable confusion. Public messaging was one contributing factor: the announcement about the ADF’s Information Warfare Division bled into the same-day announcement that the government would also be using its offensive cyber capability to deter offshore cybercriminals, making them appear one and the same thing.14

While some media outlets characterised the announcement as Australia potentially attacking the whole suite of ‘organised offshore criminals’, the announcement focused only on offshore actors who commit cybercrimes affecting Australia.

Decisions on which cybercriminal networks to target follow a similar process to those for military operations, including that particularly sensitive operations could require additional approvals, although the exact processes haven’t been disclosed. Again, these operations would have to comply with domestic law and be consistent with Australia’s obligations under international law.

3. Operations against declared targets

Australia has declared that it will use its offensive cyber capabilities to deter and respond to serious cyber incidents against Australian networks; to support military operations, including coalition operations against Daesh in Iraq and Syria; and to counter offshore cybercriminals. Given ASD’s role in intelligence gathering, operations can integrate intelligence with cyber operations—a mission-critical element.

4. Risks

Offensive cyber operations carry several risks that need to be carefully considered. For cyber operations in support of the ADF, as with conventional capabilities, the commander must weigh up the potential for achieving operational goals against the risk of collateral effects and damage.

When offensive cyber capabilities are used, there’s a high chance that their future effectiveness will be compromised. Unlike defences against kinetic weapons, an information system can often be protected from a known cyberattack through relatively simple measures, such as upgrades, patches or configuration changes.

Another risk is that, despite extensive efforts to disguise the origin of the attack, the Australian Government could lose plausible deniability or be identified (including contextually) as the source and face embarrassment or retaliation.

5. Checks, balances and compliance with international law

When the first public disclosure of Australia’s offensive cyber capability was made, the Prime Minister emphasised Australia’s compliance with international law: ‘The use of such a capability is subject to stringent legal oversight and is consistent with our support for the international rules-based order and our obligations under international law.’15

Interviews for this policy brief suggest that the users of the capability take compliance with domestic and international law extremely seriously. The core principles are as follows:

  1. Necessity: ensuring the operation is necessary to accomplish a legitimate military/law enforcement purpose.
  2. Specificity: ensuring the operation is not indiscriminate in who and what it targets.
  3. Proportionality: ensuring the operation is proportionate to the advantage gained.
  4. Harm: considering whether an act causes greater harm than is required to achieve the legitimate military objective.

These capabilities are subject to ASD’s existing legislative and oversight framework, including independent oversight by the Inspector-General of Intelligence and Security. However, there seems to be room for updating these provisions to account for technological developments. Section 7(e) of the Intelligence Services Act 2001, for example, authorises ASD ‘to provide assistance to Commonwealth and State authorities in relation to … (ii) other specialised technologies’—a foundation that could be strengthened for 21st-century technological applications.

When seeking approval for operations from the Minister for Defence, ASD seeks legal, foreign policy and national security advice from sources external to Defence. Every offensive cyber operation is planned and conducted in accordance with domestic law and is consistent with Australia’s obligations under international law.

6. Strengths and weaknesses

Offensive cyber capabilities have both strengths and weaknesses.

STRENGTHS

  • For military tasks, they can be integrated with ADF operations, adding a new capability and creating a force multiplier.
  • They can engage targets that can’t be reached with conventional capabilities without causing unacceptable collateral damage or overt acknowledgement.
  • They provide global reach.
  • They provide an asymmetric advantage against an adversary for a relatively modest cost.
  • They can be overt or clandestine, depending on the intended effect.

WEAKNESSES

  • Capabilities need to be highly tailored to be effective (such as the Stuxnet worm that targeted Iran’s nuclear centrifuges), meaning that they can be expensive to develop and lack flexibility.
  • When used in isolation, they are unlikely to be decisive.
  • Major, blunt attacks (such as WannaCry or NotPetya) are relatively cheap and easy, but are unusable by responsible state actors such as Australia. Achieving the appropriate specificity and proportionality requires investment of time and effort.
  • The capability requires constant, costly investment as cybersecurity evolves.
  • Government must compete for top-tier talent with private industry.
  • For operations short of ‘cyber attacks’,16 the effects can be relatively short-lasting and limited.
  • Capability can’t be showcased as a deterrent in the same way that conventional capability can, because revealing specific capability renders it redundant as defences are repaired.
  • Target development can require intensive intelligence support and can take a very long time.

7. Future challenges and recommendations

Offensive cyber operations are relatively new and developing in a fast-moving environment. Below are issues and recommendations stemming from research for this report.

RECOMMENDATION 1: CAREFULLY STRUCTURE COMMUNICATIONS TO REASSURE NATION-STATES AND ENFORCE NORMS

As Australia’s offensive cyber capability has only recently been publicly acknowledged and is subject to sensationalist reporting, careful communication is required. When he first acknowledged the capability, the Prime Minister said doing so ‘adds to our credibility as we promote norms of good behaviour on the international stage’.17 Poor communications, however, can have the opposite effect. The limited detail and mixed reporting of the announcement that Australia would use offensive cyber capability against offshore cybercriminals inadvertently sent the message that it was acceptable for states to launch cyberattacks against people overseas whom they considered to be criminals. This might encourage some states to use crime as a pretext to launch cyber operations against individuals in Australia.

To address this, the Australian Government should be careful when publicly discussing the offensive capability, particularly to distinguish the military and law enforcement roles. One option to do this would be to have the Attorney-General, the Minister for Justice or the new Home Affairs Minister discuss operations related to law enforcement aspects of the capability and to have the Minister for Defence discuss those related to military capabilities.

RECOMMENDATION 2: USE INNOVATIVE STAFF RECRUITMENT AND RETENTION OPTIONS

Recruiting and retaining Australia’s top technical talent is a major hurdle. In the medium term, ASD will have to continue to invest heavily in training, raise salaries (ASD becoming a statutory authority will help it address this) and develop an alumni network and culture that allow former staff to return in new roles after a stint in private industry. A pool of alumni working as cleared reservists could also be used as an additional workforce without the significant investment required in conducting entirely new clearances.

RECOMMENDATION 3: DEEPEN INDUSTRY ENGAGEMENT

ASD capability being deployed against cybercriminals is likely to generate increased interest from corporate Australia. There’s a policy question about whether or not Australia’s offensive cyber capability should be used in support of Australian corporate interests. Given the finite resources and the tricky situations that could arise, government should consider useful ways industry could engage, clarify the limits of industry engagement and assess how to handle industry requests to use the offensive cyber capability against actors targeting its operations.

RECOMMENDATION 4: CLASSIFY INFORMATION AT LOWER LEVELS

It has long been argued that over-classification of material, such as threat intelligence, by governments prevents easy information exchange with the outside world, including key partners such as industry. The government has recognised this and is positioning ‘Australian Cyber Security Centre (ACSC) 2.0’ to facilitate a more cooperative and informed relationship with the private sector. Similarly, the government should continue to scope the potential benefits from lowering the classification of information associated with offensive cyber operations. In particular, there are benefits in operating at the SECRET level for workforce generation and training, and providing a ‘halfway house’ to usefully employ incoming staff as they wait during vetting procedures. More broadly, excessive classification slows potentially valuable two-way information exchange with the information security community.

RECOMMENDATION 5: INVEST TO CREATE AN ASYMMETRIC CAPABILITY

The 2016 Defence White Paper noted that ‘enhancements in intelligence, space and cyber security will require around 900 ADF positions’.18 Those positions were part of the $400 million19 in spending announced in the White Paper and will be spread across the ADF. While this is significant, given the limits of what can be achieved with current spending on conventional kit, the Australian Government should consider conducting a cost–benefit analysis on the relative value of substantial further spending on cyber to provide it with an asymmetric capability against future adversaries. This would need to include a considerable investment in training.

RECOMMENDATION 6: CONSIDER UPDATING THE POLICY AND LEGISLATIVE FRAMEWORK

There appears to be sufficient legislation, policy and oversight to ensure that ASD and the ADF work together in a lawful, collaborative and cooperative manner to support military operations. The 2017 Independent Intelligence Review noted that ASD’s support to military operations is indispensable, and will remain so.

While those oversight arrangements may be sufficient for now, the ADF will inevitably need to incorporate offensive cyber on the battlefield as a way to create local effects, including force protection measures and the delivery of effects currently generated by electronic warfare (such as jamming communications technology). It should not always be necessary to reach back to national authorities for clear-cut and time-critical battlefield decisions. There appears to be scope to update the existing policy and legislative framework that governs the employment of offensive cyber in deployed operations to support those kinds of activities.


Important disclaimer

This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.

© The Australian Strategic Policy Institute Limited 2018

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers.

  1. Noah Shachtman, Peter W Singer, The wrong war: the insistence on applying Cold War metaphors to cybersecurity is misplaced and counterproductive, Brookings Institution, Washington DC, 15 August 2011, online. ↩︎
  2. Michael S Rogers, Statement of Admiral Michael S Rogers, Commander, United States Cyber Command, before the House Committee on Armed Services Subcommittee on Emerging Threats and Capabilities, 23 May 2017, p. 1, online; Laura Criste, ‘Where’s the cyber money for fiscal 2018?’, Bloomberg Government, 19 July 2017, online. ↩︎
  3. Thomas Rid, Cyber war will not take place, Oxford University Press, 2013, p. 42. ↩︎
  4. Malcolm Turnbull, ‘Address to parliament: national security update on counter terrorism’, 23 November 2016, transcript, online. ↩︎
  5. Malcolm Turnbull, ‘Offensive cyber capability to fight cyber criminals’, media release, 30 June 2017, online. ↩︎
  6. ‘Cyber warfare: Australia launches new military information unit to target criminal hackers’, The Australian, 30 June 2017, online. ↩︎

The Internet of Insecure Things

Introduction

The Internet of Things (IoT) is the term used to describe the growing number of devices being connected to the internet. Some of the more common IoT devices include home appliances such as Google Home, wearable devices, security cameras and smart meters. It’s been predicted that the number of connected devices was close to 8.4 billion in 2017 and that there will be over 20 billion devices connected by 2020.1 Even though the IoT has been developing since the rise of the internet in the early 1990s, there’s no universally accepted definition. Kevin Ashton, who coined the phrase in 1999, says the IoT is much more than just connected appliances and describes it as a ‘ubiquitous sensor network’ in which automation leads to innovation.2 While there are some justifiable cybersecurity concerns about the IoT, there are also many notable advantages to living in a connected world. The IoT is saving lives through advanced healthcare technology, manufacturers are saving time and money through automation and tracking, and a plethora of home devices are adding value to people’s lives by providing a range of different services.

There are many different ways to categorise IoT devices, which makes safeguarding the technology challenging. The IoT can be dissected by industry, such as healthcare, transport, manufacturing and consumer electronics. One major subcategory of the IoT has earned its own acronym: the industrial IoT (IIoT), to which control systems belong. Another way of categorising devices is by looking at their individual capabilities. Devices that can take action pose a different threat from devices that simply collect data to report back to the user.

The IoT offers benefits to all industries, but the connectivity of these once isolated things also introduces new vulnerabilities that can affect our homes and industries. For all its promise of convenience and efficiency, the IoT is also a problem: a vast number of internet-connected devices with poor default security creates a large attack surface that bad actors can exploit for malicious ends. A variety of international organisations and government groups are working on issues pertaining to the IoT, but at present there’s no coordinated vision to implement standards for the IoT on a global scale. Similarly, in Australia, a host of different cyber agencies and industrial groups are working to overcome some of the cybersecurity issues that the IoT presents, but a coordinated strategy detailing how government and industry can collaborate on the IoT is needed.

This issues paper aims to give a broad overview of IoT issues to increase awareness and public discussion on the IoT.

In December 2017, ASPI’s International Cyber Policy Centre produced a discussion draft asking stakeholders key questions about IoT regulation, governance, market incentives and security standards to help inform this issues paper. We received responses from government, industry representatives, technical experts and academics. While those stakeholders were consulted in the research phase of this paper, the views here are those of the authors.

THREAT TO CRITICAL INFRASTRUCTURE

In 2016, a severe storm disrupted crucial services in South Australia, resulting in a loss of power for 850,000 customers.3 Trains and trams stopped working, as did many traffic lights, creating gridlock on flooded roads. The storm, together with the failure of backup processes, resulted in the death of a number of embryos at a fertility clinic in Flinders Hospital.4 The total cost for South Australian businesses as a result of the blackout was estimated to be $367 million.5

Some have noted that, due to the interconnectedness of infrastructure, this event mirrored the potential effects of a large-scale cyberattack.6

Disrupting utilities that power an entire city could cause more damage than traditional terror tactics, and it can be done remotely and with greater anonymity.

Severe storms again demonstrate that a loss of power can cause more deaths than the physical destruction of infrastructure: when Hurricane Irma caused the air conditioning at a Florida nursing home to fail, 12 residents died of suspected heat-related causes.7

Digital weapons are being used intentionally by nation-states to inflict physical destruction or compromise essential services. The now infamous attack on Iran’s nuclear program, known as Stuxnet, used infected USB drives to contaminate computer systems with malware,8 which caused physical damage to a number of uranium centrifuges.9 In 2015, hackers used stolen user credentials to attack a Ukrainian power grid, which resulted in loss of power for more than 230,000 people.10 In 2016, the attackers used malware specifically designed to attack Ukraine’s power grid to disrupt the power supply to Kiev. This indicates that malicious actors have both the resources and the intent to develop cyberattack capabilities targeted at essential services.11

The IoT overlaps with critical infrastructure because many control systems are also now connected to the internet. Kaspersky researchers found more than 3,000 industrial control systems in Australia by using the Shodan and Censys IoT search engines.12 Studies have also revealed vulnerabilities in control systems made by major vendors, such as Schneider Electric and Siemens.13
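For readers unfamiliar with how such internet-wide surveys are done, the sketch below uses the Shodan Python library to count hosts in Australia exposing two common industrial protocols. The queries shown (Modbus on port 502, Siemens S7 on port 102) are illustrative examples rather than the Kaspersky researchers’ methodology, and a valid Shodan API key is assumed.

```python
# Illustrative sketch only: counting internet-exposed industrial control
# systems in Australia with the Shodan search API. The queries below are
# common examples (Modbus, Siemens S7), not the Kaspersky methodology.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # assumption: a valid key is available
api = shodan.Shodan(API_KEY)

queries = {
    "Modbus (port 502)": "port:502 country:AU",
    "Siemens S7 (port 102)": "port:102 country:AU",
}

for label, query in queries.items():
    try:
        result = api.count(query)  # returns a dict including a 'total' count
        print(f"{label}: {result['total']} exposed hosts")
    except shodan.APIError as exc:
        print(f"{label}: query failed ({exc})")
```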

In the discussion version of this paper, several respondents expressed the view that a separate cyber organisation focusing specifically on the security of critical assets and services would be unhelpful. However, many acknowledged a need for greater collaboration between those responsible for protecting these assets to help mitigate IoT-related threats.

The Australian Cyber Security Centre (ACSC) could seek to increase coordination between owners and operators of critical assets, helping with the technical aspects of adopting voluntary industry standards for the IoT. The ACSC has the technical expertise to participate in the formation of international standards and could work with policy experts in the Department of Home Affairs to encourage national adoption.

THE CYBER LANDSCAPE IN AUSTRALIA

The cyber landscape in Australia is complex. Government cybersecurity responsibilities have recently been reorganised through the establishment of the Department of Home Affairs and structural changes to the Australian Signals Directorate and ACSC. Getting a clear picture of roles and responsibilities was difficult, and it would be beneficial to identify any gaps in roles and responsibilities after these recent organisational changes have been properly implemented. Industry roles could be identified in an IoT road map that helps industry and government bodies work together to more effectively mitigate IoT threats. Consumers should be educated on cybersecurity and responsible ownership of IoT devices, including patching and updating, building on initiatives such as Stay Safe Online.

The IoT has exacerbated an already confronting problem: the lack of skilled cybersecurity professionals both nationally and globally.

The Australian Cyber Security Growth Network estimates that a further 11,000 skilled experts will be needed in the next decade.14 In January 2018, the network announced that cybersecurity qualifications will be offered at TAFE institutions around Australia, which is a significant step forward.15 However, cybersecurity is a broad domain that requires not only workers with technical skills but also experts in risk management and policymaking, among other areas. Advances in automation and data analytics could also help to address the skills shortage: as those technologies replace routine technical jobs in other areas, they free skilled workers to move into cybersecurity roles.

We need to think about IoT security as a holistic system that combines practical skills-based training with industry best practice. The under-representation of women in cybersecurity has been widely noted, and overcoming it was listed as a priority in Australia’s Cyber Security Strategy.16 The government has conducted research to better understand the issue and is running workshops to help increase participation.17

SECURITY RATINGS AND CERTIFICATIONS

A number of countries, including Australia, are considering the value of security ratings for IoT devices. In October 2017, Dan Tehan, the then Minister Assisting the Prime Minister on Cybersecurity, suggested in a media interview that such ratings should be created by the private sector, not by the Australian Government.18 The UK Government is also exploring ‘how to encourage the market by providing security ratings for new products’, as outlined in its National Security Strategy.19 Introducing a product security rating for consumer electronics has the potential to improve awareness of cybersecurity issues and to encourage industry to adhere to minimum security standards. But whether the ratings should be initiated by government or industry is only the beginning of the issue, as there are several problems with cybersecurity ratings that need to be addressed.

First, the vulnerability of an IoT device could potentially vary over its lifetime as weaknesses are discovered and then patched. The energy efficiency of a refrigerator or washing machine, by contrast, is relatively fixed, and so energy-efficiency ratings can be trusted over the device’s lifetime. With IoT devices, new vulnerabilities are constantly being exposed. At best, a security rating would reflect the security of a device based on the information available at the time of the security assessment. It would need to be adapted as security standards evolve and new vulnerabilities are discovered.
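To make the contrast with a fixed energy-efficiency label concrete, the sketch below shows one hypothetical way a rating could be recomputed over a device’s lifetime as vulnerabilities are disclosed and patched. The inputs, weights and thresholds are invented for illustration and aren’t a proposed or existing scheme.

```python
# Hypothetical illustration only: a security rating that changes over a
# device's lifetime. The weights and thresholds are invented, not a
# proposed or existing rating scheme.
from dataclasses import dataclass

@dataclass
class DeviceStatus:
    unpatched_vulns: int      # known vulnerabilities with no fix applied
    days_since_update: int    # days since the last firmware/security update
    default_password: bool    # device still runs with a default password

def security_rating(status: DeviceStatus) -> str:
    """Return a coarse rating that must be re-evaluated as new data arrives."""
    score = 100
    score -= 20 * status.unpatched_vulns
    score -= min(status.days_since_update // 90, 4) * 10  # stale-firmware penalty
    if status.default_password:
        score -= 30
    score = max(score, 0)
    if score >= 80:
        return f"A ({score}/100)"
    if score >= 50:
        return f"B ({score}/100)"
    return f"C ({score}/100)"

# The same device can move between ratings as vulnerabilities are found and patched.
print(security_rating(DeviceStatus(unpatched_vulns=0, days_since_update=30, default_password=False)))
print(security_rating(DeviceStatus(unpatched_vulns=2, days_since_update=400, default_password=True)))
```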

Second, it’s worth investigating whether a cyber rating could lull consumers into a false sense of security by negating their own role in protecting themselves from attack. Before implementing a security rating system, we need to research whether purchasing a device that claims to be secure could make consumers less likely to install updates or change default passwords.

Third, as mentioned in the introduction of this report, there’s considerable variation in IoT products. A Jeep Cherokee and a baby monitor (both of which have been compromised) present vastly different dangers, but the compromise of either can have serious consequences. While all IoT devices should include baseline security features in the design phase, devices deemed to be high risk should also require commensurately robust security features. Burdening otherwise cheap, low-risk devices with expensive certifications or strict security regulations, however, could make them commercially unviable in Australia. It’s important to recognise that it will be challenging and expensive to come up with a rating that appropriately addresses all the different categories of IoT devices.

In 2018, the IoT Alliance Australia (IoTAA) is prioritising the introduction of an ‘IoT product security certification program’ as a part of its strategic plan.20 Exactly what this will look like remains unknown, but it’s likely to be performed by accredited independent bodies that evaluate products based on security claims. The Australian Information Industry Association recommends an accreditation scheme that would also certify organisations making IoT devices. The authors’ view is that some manufacturers (for example, Samsung) make so many products that this would be ineffective as a stand-alone tactic, but this idea could be used in collaboration with an individual product rating.

REGULATION AND STANDARDS

Regulation and standardisation are at the forefront of the IoT debate, and positions tend to be polarised, as reflected in the responses to our discussion draft. The respondents acknowledged that regulation isn’t always effective and can impose a significant cost, but some also said that there’s potentially room for government to play a more direct role if a device is deemed to provide a critical service to the community. Some industries, such as transport and healthcare, already have safety standards addressing a wide range of security concerns; those standards need to prioritise current and emerging cybersecurity threats.

Multiple IoT-related bills introduced into the US Congress last year exemplified some of the legislative attempts to enforce IoT security by way of law. The Internet of Things (IoT) Cybersecurity Improvement Act of 2017 stresses the importance of built-in security and the provision of security patches,21 while the Cyber Shield Act of 2017 seeks to introduce a voluntary certification process for IoT devices.22

While US lawmakers have proposed some government regulation, some in Australia believe that IoT security would be more effectively regulated by industry. Legislation takes time to introduce and often struggles to keep pace with the quickly evolving technology it seeks to control. A market-driven approach to IoT security may mean that standards adapt more rapidly to the changing security climate.

Some classes of IoT devices, however, present little threat to their owners, but their poor security allows them to be co-opted in ways that can be used to harm other internet users or internet infrastructure. This is similar to a widget-making factory that causes air pollution; the factory owner and widget buyer both benefit from lower costs of production and neither has a strong incentive to do the work needed to reduce air pollution, as that would raise costs. In economics, this is described as a negative externality, and negative externalities can be effectively dealt with through regulation. The authors’ view is that incentives do not exist for effective industry-led standards to develop, especially for consumer IoT devices.
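In the standard textbook formulation of this argument (our gloss for illustration, not drawn from the responses to the discussion draft), the market ignores the cost imposed on others and over-supplies insecure devices unless regulation or a levy forces producers to face the full cost:

```latex
\[
  \underbrace{MSC(q)}_{\text{marginal social cost}}
  \;=\;
  \underbrace{MPC(q)}_{\text{producer's private cost}}
  \;+\;
  \underbrace{MEC(q)}_{\text{cost borne by other internet users}}
\]
% An unregulated market supplies q_m where demand meets private cost,
%   P(q_m) = MPC(q_m),
% while the social optimum q* satisfies P(q*) = MPC(q*) + MEC(q*),
% so q_m > q*: too many insecure devices are produced. Regulation, or a
% Pigouvian levy t = MEC(q*), makes producers internalise the external cost.
```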

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are the two major global providers of standards. The ISO and IEC have a joint technical committee focusing on information technology and a subcommittee focusing on the IoT and related technologies. Australia is a member of the subcommittee through Standards Australia. ISO/IEC also publishes the 27000 series of standards, which addresses information security management systems.23

The European Union Agency for Network and Information Security released baseline security recommendations for the IoT in late 2017.24 Standards have also been developed in Asia, including a draft policy on the IoT by India25 and a general framework by Japan.26 Other organisations working on IoT standards include the IEEE (Institute of Electrical and Electronics Engineers), The Open Group and SAE International. While a considerable amount of work on IoT standards has been completed, a draft report on the status of global IoT standards by the US National Institute of Standards and Technology indicates that there’s a long way to go. The report reveals several gaps in current standards development and implementation, including network security, IT system security evaluation and system security engineering.27 It also highlights the variety of SDOs (standards development organisations) working in this space. There’s currently a need for international consensus on IoT standards and a clear pathway to implementation.

Locally, the IoTAA has drafted multiple versions of IoT security guidelines to help promote secure designs for manufacturers and to support industry in understanding security and privacy issues. The IoTAA has also outlined key focus areas for 2018 in its Strategic Plan to Strengthen IoT Security. Australia also has iotsec, a non-profit start-up that promotes security in IoT devices to help industry and consumers.

While regulation and standardisation are often thought of in a binary way (enforced by either government or industry), the feedback from the discussion draft highlighted the importance of approaching IoT security in a holistic manner, in which government, industry and consumers all play a role. Furthermore, IoT cybersecurity is a problem of global, not national, proportions. Devices sold in Australia are manufactured all over the world. Being only a small proportion of the IoT market, Australia risks becoming a dead-end market if device makers’ security costs outweigh their income from sales. For this reason, any attempt to introduce standards for IoT devices in Australia must be done with a global mindset. The challenge now is to reach international consensus and to encourage manufacturers to adopt the standards. An IoT definition would help to focus global efforts both to secure and to develop the technology and help to articulate its scope.

CONCLUSION

The IoT offers Australia many economic and social advantages and should be embraced and used to benefit all Australians. However, it also introduces new risks and vulnerabilities that our current regulatory systems aren’t necessarily mitigating effectively.

It’s the authors’ view that our current policy and regulatory settings are almost certainly sub-optimal, but effective management of the IoT from a government policymaking perspective requires many difficult trade-offs, and easy answers aren’t immediately apparent. Corruption of traditional ICT devices such as phones and laptops has resulted in the theft of both personal and corporate data. Connecting more devices, such as watches, whitegoods, automobiles and industrial equipment, has intensified this problem and introduced new types of threats. Other incidents of organised crime and terrorism have shown that malicious actors exploit seams in systems, regulation and security.

For this reason, it is imperative that we continue to address gaps in these areas to limit opportunities for the exploitation of IoT devices.

This paper is intended to illuminate some of the issues involved in managing IoT risk so that industry and government can have a robust discussion and work collaboratively to improve the security of IoT devices.

  1. Gartner, ‘Gartner says 8.4 billion connected “things” will be in use in 2017, up 31 percent from 2016’, 2017, Gartner.com, online. ↩︎
  2. Rain RFID Alliance, ‘RAIN Q&A with Kevin Ashton RFID and the internet of things’, 2015, pp. 1–4 ↩︎
  3. Australian Energy Market Operator, Black System, 2017, p. 5 ↩︎
  4. ‘SA weather: human error to blame for embryo-destroying hospital blackout during wild storms’, ABC News, 23 January 2017 ↩︎
  5. Business SA, Blackout Survey Results, 2016 ↩︎
  6. Roger Bradbury, ‘South Australian power shutdown “just a taste of cyber attack”’, The Australian, 2016. ↩︎
  7. ‘12 of 14 nursing home deaths after Irma ruled homicides’, VOA News ↩︎
  8. European Union Agency for Network and Information Security, Stuxnet analysis ↩︎
  9. Council on Foreign Relations Cyber Operations Tracker, Stuxnet ↩︎
  10. Council on Foreign Relations, Compromise of a power grid in eastern Ukraine ↩︎
  11. ‘CRASHOVERRIDE: analysis of the threat to electric grid operations’, Dragos.com, pp. 10–11 ↩︎
  12. Oxana Andreeva, Sergey Gordeychik, Gleb Britsai, Olga Kochetova, Evgeniya Potseluevskaya, Sergey I Sidorov, Alexander A Timorin, Industrial control systems and their online availability, p. 8 ↩︎
  13. IEEE. Sagar Samtani, Shuo Yu, Hongyi Zhu, Mark Patton, Hsinchun Chen, Identifying SCADA vulnerabilities using passive and active vulnerability assessment techniques, University of Arizona, 2016 ↩︎
  14. Australian Cyber Security Growth Network, Cyber security sector competitiveness plan, 2017 ↩︎
  15. Australian Cyber Security Growth Network, Australian TAFEs join forces to tackle the cyber security skills gap, 2018 ↩︎
  16. Australian Government, Australia’s Cyber Security Strategy, p. 53 ↩︎
  17. PMC. Australian Government, Women in cyber security ↩︎
  18. Denham Sadler, Security ratings for IoT devices?, 2017 ↩︎
  19. UK Government, National Cyber Security Strategy 2016–2021, 2016, pp. 36–37 ↩︎
  20. IoT Alliance Australia, ‘Strategic plan to strengthen IoT security in Australia’, 2017 (unpublished material) ↩︎
  21. Mark Warner, Cory Gardner, Internet of Things Cybersecurity Improvement Act of 2017, 2017 ↩︎
  22. Cyber Shield Act of 2017, 2017 ↩︎
  23. ISO, ISO/IEC 27000 family— Information security management systems ↩︎
  24. European Union Agency for Network and Information Security, Baseline security recommendations for IoT, 2017 ↩︎
  25. Department of Electronics and Information Technology, Draft policy on internet of things, Indian Government, 2015 ↩︎
  26. National Center of Incident Readiness and Strategy for Cybersecurity, General framework for secure IoT systems, Japanese Government, 2016 ↩︎
  27. National Institute of Standards and Technology, Interagency report on status of international cybersecurity standardization for the internet of things (IoT), 2018, pp. 54–55 ↩︎

© The Australian Strategic Policy Institute Limited 2018
This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission. Enquiries should be addressed to the publishers.

Important disclaimer
This publication is designed to provide accurate and authoritative information in relation to the subject matter covered. It is provided with the understanding that the publisher is not engaged in rendering any form of professional or other advice or services. No person should rely on the contents of this publication without first obtaining advice from a qualified professional person.

Acknowledgements
We thank all of those who contribute to the ICPC with their time, intellect and passion for the subject matter. The work of the ICPC would be impossible without the financial support of our various sponsors but special mention in this case should go to JACOBS, which has supported this research.

Preventing another Australia Card fail

Unlocking the potential of digital identity

What’s the problem?

Another major government digitisation scheme—digital identity—is set to cause controversy and risk further disempowering Australians in the absence of clearer policy and legislative controls. That’s problematic because digital identity has the potential to power the 21st-century economy, society and government by providing easy, high-confidence verification of identity that will allow millions of offline transactions to move online and enable a string of enhanced services, such as easy delegation of authority (for example, to pick up prescriptions) and verifications (such as proof of age online).

However, the national digital identity program, known as GovPass, faces obstacles on multiple fronts:

  • Public communication about the scheme and its implications has been wanting, leaving the public largely unaware of the change afoot.
  • A key biometric enabling service for digital identity, the Face Verification Service (FVS), risks being conflated with the far-reaching law enforcement biometric enabler—the Face Identification Service (FIS)—that’s part of the same national facial biometric matching capability agreed to by Australian Government and state and territory government leaders in October 2017. The FIS lacks adequate safeguards and in its current form is likely to attract public opposition far exceeding that directed towards the My Health Record scheme.
  • The government is now building two digital identity schemes that will compete against each other. The first, which is already operational, was built by Australia Post at a cost of $30–50 million and is known as Digital iD. The second scheme, GovPass, secured $92.4 million in the 2018–19 Budget to create the infrastructure that will underpin it and fund its initial rollout.
  • Neither GovPass nor Digital iD is governed by dedicated legislation, beyond existing laws such as the inadequate Privacy Act 1988, leaving Australians vulnerable to having their data misused.
  • The lack of clarity about how the private sector will and will not be able to use the schemes will turbocharge the ability to gather detailed profiles of individual Australians. Controls are needed to prevent a Western version of China’s ‘social credit’ scheme emerging.

What’s the solution?

National multi-use identity schemes have a poor track record in Australia. To gain public approval for this major reform, the government needs a fresh approach that places the citizen at the centre of the system. To help restore public confidence in digital initiatives after a string of failures, the introduction of this reform needs to be accompanied by an overhaul of citizens’ and consumers’ rights so that they’re fit for purpose in the 21st century.

The government should work with civil society to stimulate and lead a national debate on the benefits of digital identity, including medium- to long-term plans for the scheme. It should emphasise the strengthened protections that the public will gain against the encroachment on citizens’ rights that this and other digital reforms are producing.

Proposed legislation enabling the FVS and FIS should be far more tightly drafted, paring back the applications that the FVS and the FIS can be used for and precisely defining their uses. Dedicated legislation should be introduced to govern both government digital identity schemes.

Opportunities should be explored to avoid duplication between the two schemes. Protections for individuals in the schemes should be strengthened to prevent private-sector actors using the service to build profiles of individual citizens and on-selling those profiles in a for-profit version of China’s social credit scheme. While detailed customer profiles can already be built through methods such as loyalty programs, digital identity will enable a vastly expanded range of activities to be linked to verified identities and so exponentially expand the scope for profile building and ranking if left unchecked.

Introduction

The 2014 Financial System Inquiry recommended that the government ‘develop a national strategy for a federated-style model of trusted digital identities’ that would be accessible for both public and private identity verification.1 The recommendation was subsequently agreed to by government.2 Creating this digital identity is a major micro-economic reform. How it’s deployed, structured, understood and protected will fundamentally shape the sort of Australia we end up with.

On 5 October 2017, the then Prime Minister and state and territory leaders laid the foundation for digital identity when they agreed to establish a ‘national facial biometric matching capability’. This connects national, state and territory photographic databases via an exchange. It has two key components. The FVS will use the exchange to allow digital identity verification. This is a one-to-one image-based verification that matches a person’s photo against an image on one of their government records (such as a passport photo) to help verify their identity. The second component, the FIS, is a one-to-many image-based identification service that matches a photo of an unknown person against multiple government records to help establish their identity and is designed for law enforcement purposes.3

What’s digital identity?

Digital identity is essentially a credential scheme allowing you to quickly confirm your personal details, entitlements and authorisations, such as proving you are over 18 years old or an Australian citizen, online or in person via your phone.

It requires a one-off verification—for example, by photographing your driver’s licence with your phone (the details of which are then checked against the relevant government database) or, for higher level verification, taking a selfie (which is then checked against a biometric template of your face that the government has collated).4

The selfie is tested against only one image—the document consented to and nominated by the individual.5 Through the FVS, the selfie is checked against a biometric template derived from the nominated photo, which would be your driver’s licence photo, a passport photo or a visa/citizenship photo.

Stored on a mobile app, you can use this digital identity to transact with government and companies (for example, by entering your phone number on their websites and then providing permission to undertake the identity check via your digital identity mobile app) or in person, without needing to carry a wallet and identity documents.
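
To make the flow concrete, here is a minimal sketch of a consent-based attribute check of the kind described above. It’s illustrative only: the data structures, function names and phone-number lookup are hypothetical and don’t reflect the actual GovPass or Digital iD interfaces.

from dataclasses import dataclass

# Hypothetical records held by an identity provider after a one-off enrolment.
ENROLLED_USERS = {
    "+61400000000": {"over_18": True, "citizen": True},
}

@dataclass
class AttributeRequest:
    relying_party: str    # the company or agency asking for verification
    attributes: tuple     # e.g. ("over_18",) rather than a full date of birth

def user_approves(request: AttributeRequest) -> bool:
    # Stand-in for the push notification where the user taps 'allow' or 'cancel'.
    return True

def request_verification(phone: str, request: AttributeRequest) -> dict:
    # Release nothing unless the user explicitly consents, then return only
    # yes/no confirmations of the requested attributes, never the source documents.
    if phone not in ENROLLED_USERS or not user_approves(request):
        return {}
    record = ENROLLED_USERS[phone]
    return {attr: record.get(attr, False) for attr in request.attributes}

print(request_verification("+61400000000", AttributeRequest("Airtasker", ("over_18",))))
# {'over_18': True}

The point of the sketch is the shape of the exchange: the relying party learns only the answers it asked for, and only after the user approves.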

Australians make more than 800 million transactions with government annually; 26 million of those transactions involve face-to-face verifications, and more than 300 million require phone or other authentications. Some 750,000 applications for tax file numbers are made each year, requiring in-person verification or the sending of certified copies—a process that can take up to 40 days.6

More broadly, the government operates more than 30 different logins for online services.7 A single government digital identity can simplify this landscape, allowing a single login for each individual across governments—federal, state and territory—and also simplify the 800 million transactions. This can significantly reduce irritation on the part of citizens accessing government services, and if done properly should in fact enhance privacy by tailoring the amount of personal information disclosed to the bare minimum required for the specific transaction. It has many other far-reaching applications, such as improving child safety online, reducing cyberbullying and de-anonymising the online experience.

Decoding the jargon

MyGov: the existing common credential for authenticating to many government departments, but without strong identity verification (generally, you have to prove who you are to each department).

MyGovID: the brand name for the Australian Taxation Office’s (ATO’s) new ‘Commonwealth digital identity provider’ (formerly, AUSid). This is the portal through which people can validate their identity under the GovPass scheme.

GovPass: the overall system name for the federated identity scheme of the Digital Transformation Agency (DTA). MyGovID will be one of the components of GovPass and will allow people to validate their identity. GovPass is a DTA-led multiagency program in which the DTA plays an oversight, integration and delivery role, working in collaboration with the ATO, the Department of Human Services (DHS) and the Department of Home Affairs.

Trusted Digital Identity Framework (TDIF): the standards that describe the GovPass identity federation, which include provision for multiple identity providers, subject to their accreditation (currently Australia Post’s Digital iD and the ATO’s MyGovID).8 This creates consumer choice, but also means that all identity providers need to maintain high security standards if citizens’ data is to be protected. The TDIF defines the requirements to be met by government agencies and organisations in order to achieve TDIF accreditation for their identity services (for example, as an identity provider).

Face Verification Service (FVS): a one-to-one image-based verification service that can match a person’s photo against an image on one of their government records, such as a passport photo, to help verify their identity. Often, these transactions occur with the individual’s consent.9

Face Identification Service (FIS): a one-to-many image-based identification service that can match a photo of an unknown person against multiple government records to help establish their identity. Access to the FIS will be restricted to agencies with law enforcement or national security related functions.10

Boston Consulting Group has estimated that digital identity could save $11 billion annually ‘through reduced cost to serve, cost of fraud and improved customer experience’.11 Deloitte Access Economics has estimated ‘productivity and efficiency savings of $17.9 billion over 10 years (if we reduce the number of transactions completed via non-digital channels from 40 percent to 20 percent)’.12 Identity crime is estimated to cost over $2.2 billion annually and affects one in five Australians during their lives.13 While the government estimates that it costs $17–20 each time someone tries to prove their identity to access a service, the cost of doing so digitally is somewhere between $0.40 and $2.00.14 Various schemes are already operational in places such as New Zealand (RealMe), the UK (GOV.UK Verify), India (Aadhaar), Estonia (ID-card), Sweden and Norway (the last two have separate systems, both called BankID).

Digital identity, properly applied, should significantly improve users’ experiences when they deal with the public and private sectors. In 2015, 61% of Australians said they had used the internet for their most recent dealings with local, state or federal government, but only 29% were satisfied with their experience, and 58% encountered some problem with the online service. ‘The most common issue was that the process was long or difficult (21%). 15% had technical difficulties and for 13%, the service they needed was not available online. 11% couldn’t remember their user name or password.’15 Digital identity should help significantly to alleviate these problems.

Meet Digital iD and GovPass

The Australian Government is building two competing digital identity schemes. The first one, known as Digital iD, is already operational. It has been developed by Australia Post, an Australian government-owned corporation, at an estimated cost of $30–50 million.16 The second is GovPass, a scheme being developed by the DTA.

Australia Post’s Digital iD now has a product team actively selling access to the private sector. This identity service is already accepted in licensed venues in the Australian Capital Territory, the Northern Territory, Queensland, Tasmania and Victoria, and by companies such as Travelex and Airtasker.17 For individual users, the scheme is free of charge.

To function, Digital iD uses Australia Post’s access to government identity databases as well as private-sector databases, such as credit header records, and postal records. Creating a digital identity is quick and is done over the Digital iD app.18 It essentially involves verifying your mobile number by entering a code sent to your phone and taking a photo of an identity document (driver’s licence, passport or Medicare card), which is checked against the government databases.

To validate your ID on, say, Airtasker, you click ‘connect’ and input your mobile number, and that sends an alert to your phone (Figure 1). Once you open the app, you’re notified that Airtasker would like to connect and are offered the option of ‘connect’ or ‘cancel’. If you hit ‘connect’, you’re notified that Airtasker is requesting confirmation of your identity plus your date of birth and name, giving you the option to ‘allow’ or ‘cancel’.

Figure 1: Using Digital iD to engage with Airtasker

Parallel to the Australia Post scheme, the Digital Transformation Office (now the DTA) was given the task of developing a second scheme, known as GovPass.19 Underway since 2016 (Australia Post’s foundational research on digital identity was also released in 201620), the scheme was initially intended to start public beta testing in mid-2018, but has been delayed.21 It finally secured $92.4 million in funding in the 2018–19 Budget22 to create the infrastructure that will underpin GovPass and roll out the scheme, initially for grants management, the My Health Record, Youth Allowance, business registration, NewStart, the Unique Student Identifier and tax file numbers. The government aims to roll out pilot services to half a million users by the end of June 2019.23

DHS will operate the exchange or gateway between the services and identity providers, the ATO will be the initial identity service provider,24 and the DTA will oversee the program. DHS will be the scheme administrator and the operator of the interoperability hub that will provide access to verification services run by or on behalf of other government agencies. Australia Post will be seeking accreditation as an identity provider (alongside the ATO), in addition to maintaining its existing Digital iD system. The range of actors involved in GovPass and the complexity of the model will make it difficult to deliver the project on time and without incident.

Digital iD is distinguished from GovPass mainly by the fact that it isn’t a federated model (Australia Post is the only entity through which you can verify your identity for Digital iD). It’s envisaged that multiple entities could provide this service under the GovPass scheme, giving consumers choice about which entity they use to prove their identity.

Some companies, such as Mastercard (and likely others) through its My Digital Life program, are positioning themselves to facilitate access to the rich data pools that the digital identity service will enable by serving as a platform through which third-party attribute vendors can sell data on individual Australians. If poorly regulated, these sorts of schemes could create serious privacy issues involving third-party data access. An indicator of this can be seen in the controversy over Facebook providing personal data to third-party organisations, including Cambridge Analytica. (Australia Post isn’t selling access to personal information; rather, companies that use Digital iD to verify their customers’ identities are being enabled to easily gather related data, such as purchase history, location and so on, and link it to a confirmed individual identity.)

A key enabler for both schemes will be the FVS, which will be vital for the higher level identity checks needed for transactions demanding greater confidence that someone is who they say they are, such as creating tax file numbers (Australia Post’s existing scheme currently performs lower level checks using biographic data). This was made possible by the Intergovernmental Agreement on Identity Matching Services.25 The agreement essentially enabled the federal, state and territory governments to share access to their databases of government-issued photographic identity documents (such as driver’s licence and passport databases) for a broad range of applications spanning road safety, law enforcement and identity checking. For identity checking, this will simplify the process of confirming identity, and the photos will enable higher levels of identity assurance. The FVS’s creation is enabled by the Identity-matching Services Bill 2018, which at the time of writing is still before the House of Representatives.26

As with the Australia Post scheme, it’s envisaged that the private sector will be able to rely on GovPass for identity checking in future. An example of how this would work is Australia Post’s Digital iD, which is already used by Australia’s largest credit union, CUA, for new members applying for some CUA accounts online or via their mobile devices. This allows accounts to be created in minutes without visiting a branch.27

Challenges

The take-up by individuals of digital identity schemes will require the government to overcome challenges in the areas of communication, rights protection, limit setting, coordination, commercialisation and security.

Communication

In all discussions about GovPass, the Australia Card experience looms large, and GovPass has been designed to deliberately distinguish it from previous efforts. The Australia Card was proposed by Prime Minister Bob Hawke in 1985 and eventually led to a double dissolution election before the proposal was dropped. Other failures also overshadow the rollout of GovPass. In 2006, Prime Minister John Howard made another attempt with the Access Card,28 before it too was shut down by the new Rudd government in 2007.

The government’s own polling suggests that it’s right to be fearful of scaring the Australian public.

Sixty-nine per cent of Australians are more concerned about their online privacy than they were five years ago. A majority (58%) of Australians are ‘somewhat concerned’ or ‘very concerned’ about biometric data being used to gain access to a licensed pub, club or hotel (although that percentage is down from 71% in 2013), and 56% are concerned about using biometric information for day-to-day banking and 43% for boarding flights.29 Only a third of Australians are comfortable with the government sharing their personal information with other government agencies, and only 10% are comfortable with businesses sharing their information with other organisations.30 The controversy over police access to the My Health Record and the need to add further privacy protections in that scheme also point to heightened public awareness and concern about digitisation processes, including about losing control of personal information that might be used to cause harm.31

The DTA has issued regular updates on the progress of the GovPass scheme, but, with few exceptions, the updates haven’t been brought to the public’s attention by leaders,32 and there’s been very little discussion of the scheme in the media. When the Council of Australian Governments (COAG) announced the key underlying agreement to share identity information and create a national biometric exchange system, the focus was placed on the counterterrorism potential of the biometric database, not the broad digital identity possibilities for the Australian population. As the then Prime Minister said at the time, ‘Imagine the power of being able to identify, to be looking out for and identify a person suspected of being involved in terrorist activities walking into an airport, walking into a sporting stadium … This is a fundamentally vital piece of technology.’33

Ending the erosion of rights

The shift to a digital world is eroding citizens’ rights. With each new digitisation initiative, people are forced to trade off more of their rights for the convenience offered. Repeatedly, they’re assured that everything’s fine, only to discover that they’ve been hoodwinked. ‘Opt in’ becomes ‘opt out’. ‘Safe and secure’, it’s later discovered, means warrantless police access. Over time, people are being disempowered, but these initiatives could have the opposite effect if properly implemented and communicated.

Instead of thinking about how digital identity can solve a departmental problem and focusing narrowly on users’ experience in that context, a citizen-centric perspective is needed. In a citizen-centred society, the role of government should be as the custodian of citizen data—guaranteeing its security and integrity and the citizen’s inviolable rights to and control of their data.34 

For government, this requires an overhaul in approach. What’s needed is a root-and-branch review of how citizen protections can be made fit for purpose in the 21st century and of opportunities to take advantage of digitisation to simplify the web of rules that we created for our paper-based society. Those rules are often needlessly complicated due to misaligned incentives between competing bureaucracies and rent-seekers who have fed off complexity. The Australian Treasury’s ‘consumer data right’35 is a step in the right direction to empower citizens, but a far more holistic approach is needed.

Clearer limits are needed

The creation of the FVS and FIS is enabled by the Identity-matching Services Bill 2018, but loose drafting leaves so much scope for unexpectedly broad use of the FIS (for law enforcement purposes) that it risks public backlash against the FVS (which is critical for identity matching). As the My Health Record experience demonstrated, sharing data without clear consent is almost certain to provoke a backlash unless it’s supported by well-crafted policy and legislation and an effective public communications campaign.

An important provision of the COAG agreement that establishes the national biometric exchange system is that it can only be used for ‘general law enforcement’ purposes when suspected offences carry ‘a maximum penalty of not less than three years imprisonment’.36 This key provision is missing from the Identity-matching Services Bill.

In practice, this will mean that for requests between jurisdictions (for example, a NSW agency checking a Victorian’s identity), the three-year-penalty rule agreed by COAG would need to be spelled out in interagency agreements. If NSW police wanted to check a photo of a suspect they would need to log the crime the person was suspected of (carrying at least a three-year prison sentence) and then run the check. It’s also possible that they could still run the check if the crime carried at least a three-year penalty in NSW, but less than a three-year sentence in Victoria.37

For intrastate biometric identity searches (such as NSW police searching NSW databases), it’s up to individual states to set any limits on what state police could use the federally run system for (that is, it could potentially be applied to any petty offence). Without clearer restrictions, the FIS in particular is open to serious misuse, especially given the Bill’s stated purpose of allowing it to be used for ‘preventing’ crime.

The parliamentary reviews of the legislation raised multiple concerns about the Bill that are beyond the scope of this paper but point to the need for far tighter controls.38

Competing government schemes and lack of oversight

It’s unfortunate that Australia has ended up with two taxpayer-funded digital identity systems. How this competition will play out is still to be seen. However, given the differences between the schemes and the groups behind them, it’s possible to foresee how it might evolve.

GovPass may dominate for government-linked identity checks, and Digital iD for private-sector identity checks. Australia Post is far more entrepreneurial than most government agencies, and if its scheme continues to operate without dedicated legislation it will also be more attractive to private-sector clients (the private sector’s ability to verify identity using GovPass is likely to be more restricted). Another potential advantage for Australia Post is the prospect of achieving some degree of global harmonisation by working with other international postal services’ digital identity systems39 (although the DTA is considering similar international harmonisation for GovPass40).

While the Identity-matching Services Bill governs the use of the biometric FVS, it isn’t specifically focused on regulating the GovPass scheme. It’s yet to be decided whether dedicated legislation to cover GovPass will be developed. Given the sweeping applications of the scheme and open questions on issues such as liability, potential for misuse and privacy concerns, legislation is needed for both GovPass and Digital iD.

Commercial applications

Both digital identity schemes offer significant potential benefits for the private sector. If used, they should reduce identity fraud and theft. Some 69% of Australians are concerned about becoming victims of those crimes,41 which cost the Australian economy billions of dollars. The schemes will also make it much easier for consumers to transact with businesses and have the potential to better control and manage personal data.

Digital identity will also allow more limited sharing of personal information. At present, most identity checks involve an over-sharing of personal information. The person selling you a beer doesn’t need to know your name, home address, driver’s licence number, or even your date of birth. They just need a yes/no answer that you are 18 years old or older.
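
As a rough sketch of what minimal disclosure looks like in practice (the 18-year threshold and the idea of evaluating the test at the identity provider are the only assumptions here), the provider can compute the answer itself and release nothing but a yes or no:

from datetime import date

def is_over_18(date_of_birth: date, on_date: date = None) -> bool:
    # Toy example: evaluate the age test at the identity provider; only the
    # boolean answer is released, and the date of birth never leaves the provider.
    on_date = on_date or date.today()
    age = on_date.year - date_of_birth.year - (
        (on_date.month, on_date.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= 18

print(is_over_18(date(2003, 5, 1), on_date=date(2022, 6, 1)))   # True: already turned 18
print(is_over_18(date(2010, 5, 1), on_date=date(2022, 6, 1)))   # False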

However, without safeguards, digital identity opens up the possibility of serious misuse. With digital identity, the shop assistant selling you alcohol might see less of your personal information but, because they are able to confirm who you are, your purchase information could be on-sold to interested parties, such as your health insurer (affecting your premium) or DHS (affecting your cashless debit card payments). The DTA has advised that it’s currently considering establishing an oversight authority, oversight rules, or both, to prevent the on-selling of data gathered through digital identity verification.42 This sort of oversight is critical for both GovPass and Digital iD.

As we move to a world where identity can be confirmed easily and cheaply, it opens up the possibility of building up profiles of individuals. If digital identity becomes the de facto way to buy alcohol, log on to social media, buy tickets, travel and shop, all of the data that those transactions collect (such as where you are, how much you spend, what you buy and what you look at) can be linked to an individual identity and sold (via your agreement in fine-print terms and conditions) to a third-party profile builder.

Commercial operators are already exploring this possibility. Mastercard (and no doubt competitors), for example, is considering using Australia as the first country to test and deploy its My Digital Life program. This will be a platform through which third-party ‘attribute vendors’ can confirm different attributes of individual consumers, many of which will be enabled via digital identity. For example, when you engage with a company you have never dealt with before, the company might request half a dozen attributes about you via the My Digital Life app to improve its confidence that you will be a good customer to engage with or are worth offering a higher level of customer service. This might include confirming that you have a perfect credit score, that you always pay your bills on time, that you never gamble, that you purchase fewer than 20 standard drinks of alcohol each week, that you give at least $1,000 a year to charity and that you volunteer. With your consent, My Digital Life will then request confirmation of those attributes from the third parties who have collected this information to on-sell via platforms such as My Digital Life and will send the results to the requesting company.

The private sector has been a leader in the development of ‘know your customer’ best practices and privacy protections, and some sharing of attributes (such as credit scores, police checks, speciality licences and working with children certificates) may facilitate commerce and community engagement. However, without tighter constraints, the potential applications of Westernised versions of China’s social credit scheme could seriously encroach on basic rights.

Security

It’s difficult to provide detailed cybersecurity risk assessments of GovPass (which is still being designed) and Digital iD (for which detailed architectural designs aren’t available). However, one area where risks are likely is in spoofing the FVS. Researchers in the US have demonstrated that wearing specially designed eyeglass frames ‘can effectively fool state-of-the-art face recognition systems’.43 Technical means to overcome these immediate challenges are likely to emerge, but this demonstrates that biometrics won’t be a panacea for identity fraud.

More broadly, this ASPI policy brief has identified several issues of concern, including the security risks presented by having multiple identity providers, each of which will need to maintain rigorous security standards, as well as the potential for the schemes to be used to facilitate vastly more ambitious profile building of Australians. There also appears to be no legislative impediment to the ATO using its existing powers to use the GovPass exchange to request information that would allow for data matching—something likely to attract public concern. Data from the ATO-run MyGovID identity service portal could be used to match a particular user with other government services. The DTA exchange is designed at a technical level to resist an identity provider trying to do this sort of matching but won’t stop an authority with legislative power to demand the data.

A range of other security-related issues remain open. If either or both of the schemes are widely adopted, it’s unclear whether companies could mandate the use of them (for example, for online banking), making them de facto compulsory. It’s also unclear whether companies that have traditionally not required validated identity checks could start to do so. For example, companies such as Facebook that have a real-name policy could adopt mandatory digital identity verification for Australian users to enforce that policy.44

Opportunity ahead

Despite the challenges, digital identity is critical for a 21st-century economy. Done properly, it will allow citizens to enhance their privacy by sharing less personal information and save time by doing more things online with less hassle. If it’s accompanied by an overhaul of citizens’ rights, it could put Australians back in charge of their online lives, allow them to monitor and easily contest inappropriate uses of their data, and remove unnecessary regulatory and legislative complexity as the shift from offline to online proceeds.

Features of GovPass

User-centred design: User-centred design is a key principle for GovPass, and the program is being developed in accordance with the Digital Service Standard, which aims to ensure that digital teams build government services that are simple, clear and fast.45 In addition, the TDIF has a component dealing with usability and accessibility requirements that government agencies and organisations need to meet in order to be accredited under the TDIF.

Privacy: The GovPass platform’s conceptual architecture is designed to be consistent with ‘privacy by design’ principles. Personal information that’s essential to provide the requested service will be collected and used with informed consent.46 GovPass has been designed as a federation of identity providers and an exchange using ‘double-blind’ architecture. Having the exchange means the service doesn’t see your identity documents, the identity provider doesn’t know what service you’re accessing, and your identity attributes aren’t stored centrally. The exchange merely passes those attributes on to the service. It doesn’t retain the attributes, only logs recording what occurred. The DTA advises that its research suggests that there’s community demand for multiple identity providers so citizens have choice for different transactions (for example, using a government provider for government transactions and a private-sector entity for commercial transactions).

Express consent: The GovPass program has been explicitly designed to be ‘opt in’ for users, although other schemes such as My Health Record have transitioned from ‘opt in’ to ‘opt out’. The exchange will be the vehicle for a user to express consent. Once a user has established their identity through an identity provider, the exchange will ask them to consent for their attributes to be passed to the requesting service (relying party). Unless the user gives explicit consent, the attributes can’t be passed on.
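
A minimal sketch of that ‘double-blind’, consent-gated exchange pattern follows. The class and method names are purely illustrative and aren’t the DTA’s actual design; the point is the shape of the flow: the identity provider isn’t told which service is asking, the exchange passes attributes through without storing them, and nothing moves without the user’s consent.

# Illustrative sketch only: names are hypothetical, not the DTA's design.
class IdentityProvider:
    # Holds verified attributes; is never told which relying party is asking.
    def __init__(self, records):
        self._records = records

    def attributes_for(self, user_id, requested):
        record = self._records.get(user_id, {})
        return {attr: record[attr] for attr in requested if attr in record}

class Exchange:
    # Sits between the identity provider and the relying party; forwards
    # consented attributes and keeps only an event log, not the values.
    def __init__(self, provider):
        self._provider = provider
        self.log = []

    def verify(self, user_id, relying_party, requested, user_consents):
        if not user_consents:
            self.log.append((relying_party, user_id, "declined"))
            return {}
        attributes = self._provider.attributes_for(user_id, requested)  # provider never sees relying_party
        self.log.append((relying_party, user_id, "released: " + ", ".join(requested)))
        return attributes   # passed through, not retained

provider = IdentityProvider({"user-1": {"over_18": True, "citizen": True}})
exchange = Exchange(provider)
print(exchange.verify("user-1", "example-service", ["over_18"], user_consents=True))
# {'over_18': True}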

Recommendations

1. Accompany the introduction of digital identity with an overhaul of online citizens’ and consumers’ rights.

In democracies, governments exist to serve the citizenry, so it’s only logical that the citizen be placed at the centre as far-reaching schemes such as digital identity are introduced. Helpfully, this will also provide the most important ingredient needed for the success of digital identity: trust.

The government should conduct a root-and-branch review of how citizen protections can be made fit for purpose in the 21st century and of opportunities to take advantage of digitisation to simplify rules created for our paper-based society. This should include ensuring that minimum security baselines and rules for data use are maintained, regardless of who has custody of the information (government or the private sector).

The review should look at reforms that provide citizens with easy and meaningful control over their data. It should consider providing citizens with an online log every time their personal information is accessed by any arm of government or the private sector, and with a one-click process for contesting any access they believe may be unauthorised. It should allow citizens to decide who can access different components of their data (such as individual records) and provide strong default settings to protect those who don’t bother to adjust their settings.

The Privacy Act should be amended, including to create a principle that all digital identity checks gather only the minimum necessary personal information and where possible in de-identified ways (such as via yes/no answers for proof-of-age verification, rather than date of birth transmission).

2. Communicate with the public about the schemes and the accompanying rights overhaul.

After announcing a review to strengthen online citizen protections, the government should lead a national debate on the benefits of digital identity schemes, including by outlining medium- to long-term plans for the schemes and the strengthened protections that citizens will receive to guard against encroachments on their rights. This should include the production of an issues paper that clearly sets out the major implications and long-term plans for digital identity. The paper should be followed up with traditional consultation mechanisms, such as town hall meetings, industry round-tables and media engagement.

3. Place both Digital iD and GovPass under legislative oversight and protect both schemes from overreach. Expressly prohibit ‘social credit’ schemes that are facilitated by government-enabled digital identity checking.

Given that Digital iD and GovPass rely on government identity databases to operate and have far-reaching applications, both schemes should be brought under dedicated legislative oversight. The legislation should place strict limits on information about individual citizens that can be gathered through the use of digital identity verification and on-sold. The development of social-credit-style schemes should be expressly prohibited.

4. Explore options to join the schemes.

Opportunities should be explored to avoid duplication between the two schemes. This could include reviewing whether Australia Post’s already operational scheme could be adopted as a national scheme (and GovPass scrapped, although the existing FVS would be kept), or strengthened, by drawing on the TDIF, so that it’s suitable for that role. At a minimum, Australia Post should replace the ATO as the government identity provider under the GovPass scheme. This would be consistent with one of the DTA’s own core procurement principles of avoiding duplication by not building platforms that other agencies have already built.47

5. Apply stricter and clear limits on the use of biometrics at the federal, state and territory levels.

The governance of the FIS is largely beyond the scope of this paper, but is still relevant because current overreach threatens to undermine the digital identity schemes. Parliamentary inquiries into the Identity-matching Services Bill have exposed a litany of shortcomings, including inadequate privacy protections, insufficiently precise drafting, potential for overreach, and the key issue that Australians never consented to having the photographs on their government identity documents repurposed for use in the biometric identity matching services now being contemplated.

Identity matching uses a relatively benign one-to-one match of a particular user’s photo against a reference photo via the FVS (although, as this policy brief has outlined, it could still be seriously misused if sufficient controls aren’t in place). The FIS is a one-to-many match of an unknown user against millions of possible matches, which has far-reaching privacy implications and the potential for serious misuse and expansion into many-to-many matching by adjusting the way the FIS works. Specific recommendations to strengthen the Identity-matching Services Bill have been provided in a separate submission to the Parliamentary Joint Committee on Intelligence and Security.48

6. Establish a national taskforce.

Discussions with government agencies working on different applications of face-matching services, which include the FVS and the FIS, suggest that second- and third-order consequences of different aspects of the schemes haven’t been considered because they fall outside specific agency or department remits. Developments at the state and territory level and within the private sector also need to be considered as part of a national approach that puts citizens at the centre. A taskforce (federal, state and territory) that includes key private-sector and civil society actors should be established to ensure that whole-of-nation implications are considered and addressed.49


© The Australian Strategic Policy Institute Limited 2018

This publication is subject to copyright. Except as permitted under the Copyright Act 1968, no part of it may in any form or by any means (electronic, mechanical, microcopying, photocopying, recording or otherwise) be reproduced, stored in a retrieval system or transmitted without prior written permission.

Enquiries should be addressed to the publishers. Notwithstanding the above, educational institutions (including schools, independent colleges, universities and TAFEs) are granted permission to make copies of copyrighted works strictly for educational purposes without explicit permission from ASPI and free of charge.

First published October 2018

Cover image: Illustration by Wes Mountain. ASPI ICPC and Wes Mountain allow this image to be republished under the Creative Commons License Attribution-Share Alike. Users of the image should use this sentence for image attribution: ‘Illustration by Wes Mountain, commissioned by ASPI’s International Cyber Policy Centre’.

  1. Financial System Inquiry, Final report, 7 December 2014 ↩︎
  2. Australian Government, Improving Australia’s financial system: government response to the Financial System Inquiry, 20 October 2015, p. 15 ↩︎
  3. Department of Home Affairs, Face matching services, Australian Government, no date. ↩︎
  4. Financial System Inquiry, Final report ↩︎
  5. Financial System Inquiry, Final report. ↩︎
  6. Michael Keenan, ‘Delivering Australia’s digital future’, address to the Australian Information Industry Association, 13 June 2018. Angus Taylor, ‘National standards to support government digital ID’, media release, 5 October 2017. Sara Howard, Unlocking up to $11 billion of opportunity, Australia Post, 5 December 2016. ↩︎
  7. Angus Taylor, ‘What a Govpass digital ID would look like for Australians’, media release, 17 October 2017 ↩︎
  8. Financial System Inquiry, Final report ↩︎
  9. Financial System Inquiry, Final report ↩︎
  10. Financial System Inquiry, Final report ↩︎
  11. Australia Post, A frictionless future for identity management: a practical solution for Australia’s digital identity challenge, White Paper, December 2016, p. 7 ↩︎
  12. Australia Post, Choice and convenience drive ‘digital first’ success, Insight paper, November 2016, p. 5 ↩︎
  13. Parliament of Australia, Identity-matching Services Bill 2018, Explanatory memorandum, p. 3 ↩︎
  14. Digital Transformation Agency (DTA), ‘Digital identity: enabling transformation’, handout, Australian Government; Keenan, ‘Delivering Australia’s digital future’. ↩︎
  15. Australia Post, Choice and convenience drive ‘digital first’ success, p. 7. ↩︎
  16. DTA, ‘Digital identity: enabling transformation’ and interviews for this research ↩︎
  17. Australia Post, Digital iD ↩︎
  18. One-off versions can also be created on the Australia Post website. ↩︎
  19. Rachel Dixon, ‘Digital identity: early days in the discovery process’, DTA, 8 March 2016 ↩︎
  20. Australia Post, Digital identity white paper: a single digital identity could unlock billions in economic opportunity, no date ↩︎
  21. Taylor, ‘National standards to support government digital ID’. ↩︎
  22. Australian Government, Budget 2018–19: Budget strategy and outlook, Budget paper no. 1, 2018–19, pp. 1–22 ↩︎
  23. Keenan, ‘Delivering Australia’s digital future’. Level 2 identity verifications don’t require biometric verification. Four of the eight services being developed require a Level 2 identity verification and therefore aren’t dependent on the FVS. ↩︎
  24. Keenan, ‘Delivering Australia’s digital future’. ↩︎
  25. Council of Australian Governments (COAG), Intergovernmental Agreement on Identity Matching Services, 5 October 2017 ↩︎
  26. Parliament of Australia, Identity-matching Services Bill 2018 ↩︎
  27. Credit Union Australia Limited, ‘CUA leading the way in bringing Digital iD to banking’, media release, 8 August 2017 ↩︎
  28. Office of Access Card, ‘What is the Access Card?’, Australian Government. ↩︎
  29. Office of the Australian Information Commissioner (OAIC), Australian community attitudes to privacy survey, 2017, Australian Government, 2017, pp. i, 21 ↩︎
  30. OAIC, Australian community attitudes to privacy survey, 2017, p. ii. ↩︎
  31. Dana McCauley, ‘Health Minister backs down on My Health Record’, Sydney Morning Herald, 31 July 2018 ↩︎
  32. Keenan, ‘Delivering Australia’s digital future’. ↩︎
  33. Karen Barlow, ‘Turnbull dismisses privacy concerns in asking for a national facial recognition database’, Huffington Post, 4 October 2017 ↩︎
  34. See David McCabe, ‘Scoop: 20 ways Democrats could crack down on Big Tech’, Axios, 30 July 2018 ↩︎
  35. The Treasury, Consumer data right, Australian Government, 9 May 2018 ↩︎
  36. COAG, Intergovernmental Agreement on Identity Matching Services, p. 12. ↩︎
  37. There’s provision in the COAG agreement to review this after the first 12 months of operation; COAG, Intergovernmental Agreement on Identity Matching Services, section 4.25. ↩︎
  38. Parliament of Australia, Identity-matching Services Bill 2018. ↩︎
  39. Sara Howard, A world without borders, Australia Post, 19 December 2016 ↩︎
  40. Asha McLean, ‘DTA considering international “brokerage” of digital identities’, ZDNet, 9 February 2018 ↩︎
  41. OAIC, Australian community attitudes to privacy survey, 2017, p. 33. ↩︎
  42. The potential oversight authority would have legal authority to enforce operating rules and the TDIF on participants of the identity federation. The operating rules would set out the legal framework for the operation of the identity federation, including the key rights, obligations and liabilities of participants (including relying party services). ↩︎
  43. Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael Reiter, ‘Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition’, CCS 16, 24–28 October 2016, Vienna, p. 12 ↩︎
  44. ‘What names are allowed on Facebook?’, Facebook, 2018 ↩︎
  45. Financial System Inquiry, Final report. ↩︎
  46. Financial System Inquiry, Final report. ↩︎
  47. DTA, Digital sourcing framework for ICT procurement, Australian Government, no date. ↩︎
  48. Parliamentary Joint Committee on Intelligence and Security, Review of the Identity-matching Services Bill 2018 and the Australian Passports Amendment (Identity-matching Services) Bill 2018, ‘Submissions received by the committee’, submission no. 18 ↩︎
  49. GovPass has a steering committee that reports to the Digital Leadership Group and is exploring how to broaden the group. ↩︎

Cyber Maturity in the Asia Pacific Region 2017

The Cyber Maturity in the Asia–Pacific Region report is the flagship annual publication of the ASPI International Cyber Policy Centre.

This report assesses the national approach of Asia–Pacific countries to the challenges and opportunities of cyberspace, taking a holistic approach that assesses governance and legislation, law enforcement, military capacity and policy involvement, and business and social engagement in cyber policy and security issues.

The 2017 report is the fourth annual cyber maturity report. It covers 25 countries and includes assessment of Taiwan and Vanuatu for the first time.

The United States continues its leadership of the country rankings. Although the transition to the Trump administration caused a pause while cyber policy was reviewed, the US military is recognising the importance of cyber capability, elevating US Cyber Command to a unified combatant command to give it increased independence and broader authorities.

Australia has moved up in our rankings from fourth to equal second on the back of continued investment in governance reform and implementation of the 2016 Cyber Security Strategy. Australia’s first International Cyber Engagement Strategy was released, and the 2017 Independent Intelligence Review made a number of recommendations that strengthen Australia’s cyber security posture, including broadening the Australian Cyber Security Centre’s (ACSC) mandate as a national cyber security authority and clarifying ministerial responsibility for cyber security and the ACSC.

Japan (equal second with Australia), Singapore and South Korea round out a very close top five. All countries in this leading group have improved their overall cyber maturity, although very tight margins have seen some changes in the rankings: Australia and Japan moved up to equal second, while Singapore and South Korea dropped to fourth and fifth.

Taiwan and Vanuatu both made strong initial entries into the Cyber Maturity Report. Taiwan ranked ninth, just behind China, hampered by difficulties with international engagement, while Vanuatu came seventeenth, best of the Pacific islands.

https://www.youtube.com/watch?v=nEszlPxaATM

Securing Democracy in the Digital Age

The proliferation of cyberspace and rise of social media have enriched and strengthened the application of democratic governance.

Technological developments have expedited the international flow of information, improved freedom of speech in many areas of the world, and increased the quality of interaction, accountability and service delivery from democratic governments to their citizens. But these benefits must be balanced against a longstanding vulnerability of democracy to manipulation that cyberspace has enhanced in both scope and scale.

The 2016 US presidential election demonstrated the increasingly complex cyber and information environment in which democracies are operating. Using US case study illustrations, this report offers a conceptual framework by which to understand how cybersecurity and information security techniques can be used to compromise a modern-day election.

The report places this case study in its historical context and outlines emerging approaches to this new normal of election interference before identifying associated policy considerations for democracies.


Cyber wrap


Western Australia’s parliament was hacked last Tuesday, with a computer virus forcing the shutdown of its telecommunications systems. According to Speaker Michael Sutherland, the attack impeded a number of house operations, including ‘Hansard publications, the preparation and processing of questions on notice and answers to questions on notice’. Fortunately, the breach didn’t prevent Parliament sitting as usual.

The incident follows a 2015 audit of sections of the WA government’s digital infrastructure. The assessment found that some agencies didn’t adequately protect information to prevent unauthorised access and data loss. Specifically, it noted the lack of basic controls over passwords, patching and the setting of user privileges, copies of sensitive information spread across systems, and poorly configured databases. Cyber security within state governments in Australia often lags behind best practice, but news last week that Queensland is establishing its own cybersecurity unit can be taken as a welcome sign that this trend may soon be reversed.

Last week’s ruling that Apple must assist the FBI to unlock an iPhone linked to San Bernardino gunman Syed Farook has reignited the smouldering discussion on encryption and the difficult balance between privacy and public safety. More public figures have recently come out on one side of the debate or the other. NSA chief Admiral Mike Rogers surprisingly came out on the side of encryption, saying that it’s ‘foundational to the future’, while Microsoft founder Bill Gates has chastised Apple CEO Tim Cook for opposing the court order. Surveys of public opinion in the US have found a roughly 50/50 split between support for the FBI and support for Apple. This is significant, as Apple will reportedly seek to propel the case out of the courts this week and into the hands of Congress to decide.

Also in the US, the Hollywood Presbyterian Medical Centre in LA has paid 40 bitcoins (equivalent to US$17,000) in ransom to regain access to its patient files after a malware attack. The attack prevented access to the hospital’s computer systems and restricted its ability to share communications electronically, forcing a return to manual pen-and-paper patient records. Ransomware locks computer systems by encrypting files and then demands a ransom payment in exchange for the decryption key.

Japanese companies have been targeted by a highly skilled and well-financed state actor, according to cyber security firm Cylance. The campaign, named Operation Dust Storm, previously targeted major industry in Japan, South Korea, the US, Europe and South East Asia, but has now narrowed its target set to Japanese organisations. The intent of the hackers appears to be a long-term presence on networks to exfiltrate data, particularly from electricity, oil, gas and transportation companies. Japan is a frequent target for hackers; however, security consultants to Japanese firms and the government continue to highlight weaknesses in a corporate culture that views breaches as a loss of face, preventing disclosure and cooperation on common threats.

Quantum technologies: investing in our future security


The Australian Government recently announced plans to invest $26 million in the development of quantum computing technology as part of the National Innovation and Science Agenda (NISA). Prime Minister Turnbull has argued that NISA is part of a new ‘ideas boom’ designed to ‘create a modern, dynamic and 21st century economy for Australia’. It emphasises quantum computing as an important area for government investment based on its ability to produce ‘jobs and economic growth’. And while this industry could certainly be ‘worth billions’, it offers much more than financial prosperity: quantum technologies could play a significant role in our future defence and security.

Quantum technology harnesses the obscure properties of subatomic matter to achieve computing processes unobtainable with classical computers. Today’s computers run on binary digits, or bits, which exist as either 1s or 0s. In contrast, quantum bits, or qubits, exploit the bizarre principle of ‘superposition’ that enables them to occupy all possible states (both 1 and 0) at the same time. This allows quantum computers to undertake multiple calculations in parallel, unlocking unprecedented processing power that could ‘solve problems that would take conventional computers centuries’.

Another important quantum quality, ‘entanglement’, means two qubits can become inextricably linked, such that a change in one causes a change in the other. The qubits can remain connected even when separated across large distances. This delicate connection underpins quantum communication schemes, and its sensitivity to interference means that any eavesdropping measurably disturbs the transmission, which is what allows such links to be made provably secure.
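
As a back-of-the-envelope illustration of those two properties, the toy state-vector calculation below (using only NumPy; it’s a mathematical sketch, not a claim about any particular quantum hardware) shows a single qubit in an equal superposition of 0 and 1, and a two-qubit entangled state whose measurement outcomes are perfectly correlated:

import numpy as np

# Single-qubit basis state |0> and the Hadamard gate.
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Superposition: H|0> gives equal probability of measuring 0 or 1.
plus = H @ ket0
print(np.abs(plus) ** 2)        # [0.5 0.5]

# Entanglement: a CNOT after the Hadamard produces the Bell state (|00> + |11>)/sqrt(2).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, ket0)
print(np.abs(bell) ** 2)        # [0.5 0. 0. 0.5]: the two qubits always agree when measured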

NISA asserts that those technological tricks will have a ‘transformational impact on Australian and global businesses’ but fails to mention the revolutionary role they could play in improving Australia’s defence force in three key areas.

Efficiency

The ability of quantum computers to undertake multiple calculations at once makes them an enormous asset for the optimisation of defence logistics. A quantum computer could search enormous numbers of possible options and quickly identify the fastest or most energy-efficient solution, helping to determine preferred routes and increasing the efficiency and speed of military operations.

Increasingly complex weapons systems also rely on ever-growing volumes of activation software. For example, the F-35 Joint Strike Fighter now requires more than 20 million lines of code to be fully operational. The brute force of quantum computers could offer a strategic advantage by improving the efficiency of code validation where defence assets are deployed in time-sensitive scenarios.

Intelligence

Quantum computers are most infamous for their potential to decrypt communications and other data. Current encryption models rely on the limited computing power available to hackers (both state and non-state) and the unreasonable amount of time required to break long encryption keys. However, the immense processing power of quantum computers will be able to solve those previously intractable problems in a fraction of the time, eventually rendering much of the world’s current information security framework useless. The ability to hack an adversary’s (previously secret) communications would provide a government with access to incredibly sensitive intelligence and a decisive strategic advantage.
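
For a rough sense of scale, the sketch below compares operation-count estimates for factoring an RSA-style modulus classically (using the general number field sieve, which is sub-exponential in the key length) and with Shor’s algorithm (polynomial in the key length). The article doesn’t name a specific algorithm, so this is an illustrative aside rather than a claim about any particular system, and the numbers are order-of-magnitude heuristics only:

import math

def classical_gnfs_ops(bits):
    # Rough heuristic for the general number field sieve: sub-exponential in key length.
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_ops(bits):
    # Rough heuristic for Shor's algorithm: polynomial (roughly cubic) in key length.
    return bits ** 3

for bits in (1024, 2048):
    print(f"{bits}-bit key: classical ~{classical_gnfs_ops(bits):.1e} ops, quantum ~{shor_ops(bits):.1e} ops")

The gap, tens of orders of magnitude for common key sizes, is why a sufficiently large quantum computer would change the economics of codebreaking and why ‘quantum-safe’ schemes are already being developed.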

The accuracy of a military’s positioning, navigation and timing intelligence could also be improved through the precision of quantum sensor technologies. The old and expensive Global Positioning System (GPS) is increasingly unreliable and vulnerable to denial and sabotage. However, quantum location technologies are expected to be near impossible to jam and ‘1,000 times more accurate’ than today’s systems.

Security

While the advent of quantum computing may mean ‘some widespread and crucial encryption methods will be rendered obsolete’, quantum technology also promises a whole new generation of secure communication. The quantum property of ‘entanglement’ makes ‘Quantum Key Distribution’ possible, providing the basis of an ‘un-hackable’ encryption model that’s ‘impervious to eavesdroppers’, even quantum computers. With quantum computers potentially ushering in a ‘cryptopocalypse’, investing in enduring information security is a sensible insurance policy.
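
A toy simulation of the idea is sketched below. It models the sifting step of BB84, the most widely cited QKD protocol; the entanglement-based variants the article alludes to rest on the same principle that measurement choices are compared publicly while the key bits themselves are not. The code is illustrative only:

import secrets

def bb84_sift(n=32):
    # Toy model. Alice prepares n qubits with random bit values in randomly
    # chosen bases (0 = rectilinear, 1 = diagonal).
    alice_bits = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]
    # Bob measures each qubit in his own randomly chosen basis; a mismatched
    # basis gives a random result.
    bob_bases = [secrets.randbelow(2) for _ in range(n)]
    bob_results = [
        bit if a == b else secrets.randbelow(2)
        for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
    ]
    # Alice and Bob publicly compare bases (never bit values) and keep only the
    # positions where the bases matched; those bits form the shared key.
    return [bit for bit, a, b in zip(bob_results, alice_bases, bob_bases) if a == b]

print(bb84_sift())

An eavesdropper who measures the qubits in transit unavoidably disturbs some of them, so comparing a random sample of the sifted key reveals the interception, which is the sense in which such schemes are described as provably secure.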

In light of these strategic applications, it’s not only the familiar tech giants such as Intel, IBM, Microsoft and Google that are racing to harness the power of quantum mechanics; governments worldwide are also investing in this area to maintain or obtain strategic advantage.

The US Defense Undersecretary Frank Kendall recently stated that ‘quantum science is an area that could yield fundamental changes in military capabilities’. As such, the US Army, Navy and Air Force are working together with a $45 million grant to establish a secure long-distance quantum communication network ‘for the war-fighter’.

Quantum science also ‘figures centrally in the objectives of the Chinese military’, with the technology having been a focus of the National University of Defense Technology and the People’s Liberation Army’s University of Science and Technology for several years now. In fact, a Chinese project is underway to establish the longest quantum communication network in the world, stretching 2,000km between Shanghai and Beijing and including the world’s first quantum-enabled satellite.

The UK’s National Strategy for Quantum Computing argues that quantum technologies will have a ‘major impact’ on the defence industry, and the Defence Science and Technology Laboratory was already showcasing new quantum navigation technologies early last year.

The good news is that Australia's quantum technology research is 'world leading'. The Centre for Quantum Computation and Communication Technology (CQC2T), recipient of the NISA grant, recently made a breakthrough proof of concept for silicon quantum computing. In fact, lead scientist Michelle Simmons expects the centre to develop a scalable quantum computer within the next five years. The government's recent investment is a great step in ensuring Australia's continued efforts in this field.

There’s no doubt this industry promises enormous economic benefits. However, we mustn’t become complacent by thinking about quantum technology in purely economic terms. It’s also an essential national investment in the context of an ‘international race’ to quantum pre-eminence and the strategic advantage it’s likely to afford. The Australian government must continue to invest in this technology, while broadening its view to see the many benefits that quantum research and innovation brings to our national defence and security.

Cyber wrap


We’re kicking off this week over the ditch with our Kiwi friends who have been very busy on the cyber policy front. In Auckland last Friday, Communications Minister Amy Adams launched an updated version of the country’s national Cyber Security Strategy. The NZ government also produced an accompanying ‘living’ Action Plan that will be updated annually, and a National Plan to Address Cybercrime. The strategy aims to deepen public–private engagement on cyber issues building upon the already successful Connect Smart initiative, which reaches out to private residences, schools and businesses. Other initiatives include a ‘cyber security tick’ scheme, similar to those used to indicate healthy foods, which will recognise businesses with good cyber security practices. New Zealand will also establish a stand-alone national Computer Emergency Response Team (CERT). Currently CERT responsibilities lie within the National Cyber Security Centre, but the decision has been made to bring New Zealand ‘into alignment’ with its key international partners by creating the new body. The decision mirrors that of the UK government, which successfully launched their first national CERT early last year.

Australia’s national CERT has released a survey of the cyber security postures and attitudes present amongst its major Australian businesses partners. The survey found that over half of the respondents had experienced an incident that had compromised ‘confidentiality, integrity or availability of a network’s data or systems in the last year’. Positively, the survey found that in response many businesses had introduced or improved their information security practices including both policy and technical responses. Mirroring stories throughout the media this year, major Australian businesses reported being subject to a substantial amount of Ransomware attacks—four times as many as were reported in 2013.

Twitter has warned a number of its users this week that their accounts may have been targeted by something a bit more malicious than the usual run-of-the-mill spam. The social media giant informed several account holders via email that their accounts were part of 'a small group of accounts that may have been targeted by state-sponsored actors'. Those affected included activists, security specialists and privacy advocates, in what Twitter believes was an attempt to gain access to personal information including phone numbers and email addresses. While Twitter claims there was no evidence that the attempts were successful, it recommended that those affected use identity protection measures, such as the Tor browser.

Joe Nye had an interesting piece published on Project Syndicate on deterrence in cyber space, where he discusses how the traditional difficulties surrounding attribution have hampered effective deterrence and tipped the see-saw in favour of attackers. But he stresses that increased technological capability, more robust encryption and economic enmeshment may tip the advantage back to the defenders and eventually enable more effective cyber deterrence.

And finally, just in time for the holiday break, the US Department of Homeland Security has put out a useful tip sheet on good cybersecurity practices to use while travelling. It includes advice on connecting to Wi-Fi, data protection and maintaining the physical security of personal devices.

Learning lessons from the UK’s confident approach to cyber

An aerial image of the Government Communications Headquarters (GCHQ) in Cheltenham, Gloucestershire.

The launch of the 2015 Strategic Defence and Security Review (SDSR) provided evidence that UK defence and security agencies are being re-invigorated after a period of extensive cuts. Over the next ten years, £178 billion will be spent on a range of military platforms. While this won't elevate the UK to the peak of global military powers, it will reassure allies that the UK is a reliable security partner.

Large sums of money are often associated with 'big ticket' military hardware, yet the UK has spent comparable amounts on its cyber capabilities. At the launch of the 2010 SDSR, the sting of looming cuts was softened by the announcement that the Government would invest £500 million in cyber security. In the intervening period, that's risen to an £860 million investment in a growing area of national security concern and potential advantage.

The 2015 SDSR announced that spending on cyber security will grow again, with a commitment to invest a further £1.9 billion (A$3.9 billion) over the next five years. When that sum is added to the core spending on cyber security capabilities to protect UK networks, the total spend amounts to more than £3.2 billion (A$6.5 billion).

The clear and concise wording of the document is just as significant as the money attached to it. The 2015 SDSR weaves together a clear articulation of the UK's strategic goals in cyberspace with a comprehensive narrative about the importance of cyber security to national and economic security, and introduces measures to enhance capability and skills in both areas. It commits the UK to remaining a world leader in cyber security in order to protect critical networks, to maintaining high levels of confidence in its ability to protect business from cyber threats, and to bolstering the digital economy so it can reap the economic rewards of high-value cyber security technology and skills.

The lead component of the cyber section of the SDSR is the newly formed National Cyber Centre, established under GCHQ's leadership. The centre will have charge of operational responses to cyber incidents. Not only will it take the operational lead, but it will also act as a focal point for companies seeking advice on cyber issues, simplifying previous arrangements.

There are three areas worthy of specific comment. First, the UK has worked hard over the past 10 years to mature the Government's relationship with the private sector on cyber. There's a clear commitment to 'share knowledge with British industry and with allies', 'help companies and the public do more to protect their own data' and 'simplifying private sector access to government cyber security advice'. That's evidenced most strongly in the promise to develop a 'series of measures to actively defend…against cyber attacks', alluding to active defence tactics that aim to disrupt attackers before or while they're attacking a network. The SDSR states that those capabilities will be 'developed and operated by the private sector', which is a leap forward in coordination between the UK's public and private sectors.

Despite efforts to build stronger relationships with the private sector on cyber, Australia is some way off being able to make these kinds of statements, and has a considerable journey ahead to reach the level of maturity that the UK has achieved.

Second, the SDSR details a significant investment in creating highly qualified and skilled personnel, including £20 million to open an Institute of Coding to fill the current gap in higher education. A £165 million Defence and Cyber Innovation Fund was also announced to support innovative procurement across government, alongside two new cyber ‘start-up’ centres where new companies can incubate their tech in the early stages of development.

Finally, one of the most striking aspects of the plan was the emphasis placed on developing offensive cyber capability. The UK has firmly stated that it has this capability and will use it as a tool of national power and to respond to security threats. George Osborne used strong words to underscore this part of the plan:

‘Part of establishing deterrence will be making ourselves a difficult target…We need to destroy the idea that there is impunity in cyberspace…We are building our own offensive cyber capability—a dedicated ability to counter-attack in cyberspace.’

Following on from the US admission in 2010, this further illustrates an emerging trend among Australia's allies to publicly state their capacity to conduct or develop offensive cyber operations. A clear statement of the way Australia views the use of offensive cyber capabilities would be a welcome addition to the Australian Defence White Paper when it emerges.

There are lessons for Australia on the cyber front here. First is the use of committed, firm ideas and language, backed financially; we're yet to see how much the Australian Government will invest in this important area of national security. Second, there's a clear articulation of the linkage between cyber security, economic security, digital innovation and national security. Australian cyber strategy will hopefully follow suit. Finally, there's evidence of a mature and trusted relationship between Government and the private sector built over time, an area where Australia could do much better. With both a Cyber Review and a Defence White Paper due imminently, expectations will be high that Australia can deliver on both fronts.

War and peace in China’s cyber space


China’s top spy and Politburo member, Meng Jianzhu, made a highly unusual four-day visit to the US in early September where he forged an agreement between China and the US to cooperate more deeply on cyber security issues. The Meng visit was intended to smooth the way for the visit of President Xi and to allow him to announce with President Obama on 25 September new progress by senior officials in this area.

The two countries agreed to investigate complaints by each other about malicious cyber activity, to cooperate more on resolving criminal investigations, and not to undertake commercial espionage. On this last issue, they agreed not to ‘conduct or knowingly support cyber-enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or commercial sectors’.

There’s clear distinction, as the two countries agree, between economic espionage for national purposes and commercial espionage intended only to benefit a firm in the civil economy. It’s a declared US policy to conduct economic espionage in order to maintain its technological edge over other countries; and China does it to catch up.

New disputes about the September 2015 cyber agreement will inevitably arise, and sooner rather than later. The saving grace, and the most solid diplomatic achievement on this front, was the agreement to set up at Cabinet level a Working Group to resolve disputes. On the US side, the Secretary of Homeland Security and the Attorney General announced on 25 September that they would be the leads and that their counterparts on the Chinese side would also participate.

This new Working Group follows a failed attempt beginning in April 2013 to set up a working level mechanism between the foreign ministries of the two countries on cyber security. This was suspended by China in May 2014 when the US brought indictments against five personnel from China’s armed services for industrial espionage.

The agreements will provide a brief respite in diplomatic angst around China and cyber space. But some of this concern is misplaced, as leading scholars and I have argued at length elsewhere in respect of the economic impacts of Chinese cyber espionage.

Yet, regardless of the estimates of impact on the US economy, there is little likelihood that China’s PLA will change tack on cyber-enabled economic espionage. Its internal security agencies may be watching more closely any illegal relationships between PLA cyber units and the Chinese ‘commercial sector’.

The US and China are locked in a fierce struggle in cyber space. It’s intensifying, even as the two sides manage to agree occasional elements of détente to defuse the tension. The stakes are high; at the extreme end of the list of concerns, the command and control of nuclear weapons (especially intelligence, surveillance and targeting aspects) depend in part on a securable cyber space.

For its part, China sees the cyber struggle with the US in three dimensions: internal security to support the continued rule of the Communist Party; China's relative military backwardness in cyber space; and its overall technological backwardness in the cyber space aspects of the civil economy. As outlined in my book, Cyber Policy in China, Beijing sees this struggle as asymmetrical, a situation that imposes on it an obligation to 'push the envelope' in all areas of policy while trying to maintain functional cooperation with the US in important areas of economic policy.

While pushing the envelope, there are several reasons why economic considerations force China to maintain a cap on its cyber confrontations with the US. The biggest reason is that China badly needs US investment and advanced know-how in the information technology sector. China has only a weakly developed cyber security industry.

This imperative is captured in a gem of understatement by the Chinese foreign ministry when it said in a summary of the Xi visit that future cyber strategies by both sides ‘should be consistent with WTO agreements, …take into account international norms, be nondiscriminatory, and not impose nationality-based conditions or restrictions’ on bilateral ICT trade and technology transfer—at least not ‘unnecessarily’.

The final sentence above is important for Australia. The US and China have agreed to limit the use of national security as a criterion for evaluating bilateral cyber trade and investments. Australia now needs to abandon its blind belief in US propaganda about the exaggerated economic impact of China’s cyber espionage. It needs to pursue more aggressively the opportunities for investment in China’s ICT sector, including in cyber security.

Most importantly, it needs to emulate the recent US move and set up a high-level working group with China, led by our Attorney-General, to address deeper cyber cooperation as well as malicious activity.

Cybersecurity: escaping future shock


Mike Burgess, Telstra’s Chief Information Security Officer, claims that attributing blame for cyberattacks is a ‘distraction’. It’s hard not to empathise with his views when, according to the Australian Centre for Cyber Security, 85% of the threat of intrusion could be mitigated by the implementation of baseline protection measures. Burgess also pointed out that while attributing blame is an important component of preventing attacks, they are too often discussed in amorphous and hyperbolic terms when describing their sophistication.

Burgess has decades of experience in intelligence and security and unlike many others is well past the future shock of cybersecurity. At present, the capability of actors to penetrate networks is increasing, as is their ability to do damage. If you ever want to induce a sense of utter helplessness in your CEO, just show them this raw feed of cyberattacks from Norse. The reality is that good intelligence on actors and their capabilities is fairly useless unless a company has a strong understanding of what it’s exposed to. Context is everything when it comes to raw data.

It doesn’t help that many leading cybersecurity researchers fail to discuss these events within their wider context; and many of the top cybersecurity programs treat the subject matter as an extension of computer science and engineering. To security studies, cybersecurity is one area, amongst a range of others where threats exist in asymmetric terms. Authors are largely yet to work across these disciplines when discussing such threats. An example of this is found in Sandria’s work on Cyber Threat Metrics, which attempts to reinvent the wheel rather than work within the existing context of existing security threats. In Clausewitzian terms, cybersecurity is just security by other means. The literature on security, threat, perception, signaling and a range of other areas is sitting there waiting to be leveraged rather than reinvented.

There’s a great deal to be gained by discussing cybersecurity as an extension of existing trends. The relative youth of cybersecurity as a subject area means that it hasn’t yet been integrated into the wider literature. While that’s not unexpected it is something that needs to be addressed. In the early days of nuclear weapons, Oppenheimer had to reach for the Bhagavad Gita for an eloquent expression of future shock. Today’s cybersecurity researchers have no need to reach so far back for a meaningful comparison. Science transforming the security environment is nothing new: nuclear weapons were first discussed by the scientists that invented them, then by the military and then finally harnessed by statesmen. A similar thing is naturally occurring in cyberspace.

In reckoning with present trends, cybersecurity faces an uphill battle: opponents are increasing in capability and responses are uneven. While it's all well and good to proactively deter attacks, if a company has a flat network architecture and never updates its software, deterrence probably won't do much to limit its exposure in the medium to long term. A few years ago, US retail giant Target invested over a million dollars in malware detection tools from FireEye. Yet despite possessing functioning notification tools, Target did nothing when those tools detected an attack. The breach compromised 40 million debit cards and the personal details of 70 million people, and cost the company more than US$146 million. Spending money on the right tool is useless if the company isn't getting the basics right.

Another area of potential change is the ongoing debate over when to report an attack. Companies aren't convinced that disclosing attacks will be to their benefit, and an increasing number in Australia don't. Companies also fear a loss of investor confidence if they disclose an attack. There's presently a move in the European Union towards mandatory reporting. In 2013, PricewaterhouseCoopers estimated that in the course of a year some 93% of large British companies had suffered a cybersecurity breach. The same report lists the median number of breaches per company at 113 over the same period, with the average cost of a large company's worst breach coming in at over £400,000.

So we understand, to some degree, the context and scope of the threat. And while it’s a threat that’s increasing, the vast majority of cybersecurity conundrums are manageable at present. New methods of attack aren’t a distraction but they are a second order problem and ought to be treated as such. The first step is to recognise the risk and implement the already identified best practice strategies to manage the threat. Moving on from there, the second order debates of reporting, classifying threats and auditing systems will take place. Burgess is right when he notes that second order problems currently dominate the discussion, but can this be seen as a natural extension of the uneven development of cybersecurity as a field? Many companies, for their part, must work to escape the future shock—cyber threats are real but so are the basic strategies on how to manage the risks they pose.

Australia’s in a strong position to close the gap between awareness and response. Under Burgess, Telstra has proactively produced industry-based reporting on the present situation. Along with this, the Australia Cyber Security Centre was launched in November 2014. That same organisation has produced an unclassified threat report that represents a good first step. Finally, CERT Australia is attempting to develop the space between government and industry where effective collaboration can flourish.

Each of the organisations mentioned above has constructed the beginnings of a collective response to cybersecurity. Being overwhelmed by risks is an extension of not understanding their context. Australia is on a strong path to cooperatively and proactively respond to cyber threats if those problems are tackled in order and in a collaborative manner.

Cyber wrap


Researchers in Singapore have demonstrated how hackers can use a smartphone mounted on a drone to steal data intended for wireless printers. The technology detects an insecure printer and intercepts documents by establishing a fake access point that mimics the printer, tricking the computer into sending potentially sensitive data straight to the hacker's device. Thankfully, this research springs from benevolent motivations, and the 'Cybersecurity Patrol' app that has been produced is a cost-effective way to scan office spaces and alert corporations to any insecure printers. However, it's a good reminder for companies to address a vulnerability that's frequently overlooked. Watch a video exhibiting both the malicious and beneficial uses of this technology here.
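In the defensive spirit of that 'Cybersecurity Patrol' idea, a crude office audit can be approximated in a few lines of code. The sketch below is my own illustration, not the researchers' app: it simply checks which hosts on an assumed office subnet accept unauthenticated connections on TCP port 9100, the raw printing port many network printers leave exposed. The subnet and timeout values are assumptions.

```python
import socket

SUBNET = "192.168.1."      # assumed office subnet (illustrative only)
RAW_PRINT_PORT = 9100      # raw/JetDirect printing port many printers leave open
TIMEOUT_SECONDS = 0.3

def open_printer_ports(subnet: str = SUBNET) -> list:
    """Return hosts on the subnet accepting unauthenticated connections on port 9100."""
    exposed = []
    for host in range(1, 255):
        addr = f"{subnet}{host}"
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(TIMEOUT_SECONDS)
            if sock.connect_ex((addr, RAW_PRINT_PORT)) == 0:  # 0 means the connection succeeded
                exposed.append(addr)
    return exposed

if __name__ == "__main__":
    for addr in open_printer_ports():
        print(f"Unauthenticated printer port open at {addr}")
```

Anything the sketch flags would accept print jobs from any device on the network, which is the kind of exposure the drone demonstration exploited.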

Speaking of hackers, Russian hacker Dmitry Belorossov has been sentenced to four and a half years in prison for distributing and operating part of the infamous 'Citadel' botnet. Also known as 'Rainerfox', Belorossov used the banking Trojan to infect and remotely control more than 7,000 computers belonging to unsuspecting individuals and financial institutions. The US Department of Justice estimates that Citadel reached over 11 million computers worldwide and caused more than US$500 million in losses. The 22-year-old was sentenced this week after being arrested in Spain in 2013 and pleading guilty to conspiracy to commit computer fraud last year.

In Washington DC, Ari Schwartz this week stepped down as Senior Director for Cybersecurity on the National Security Council. Schwartz joined the White House in 2013 as Director for Cybersecurity Privacy, Civil Liberties and Policy, was a vocal advocate of information sharing, and became a trusted advisor to the Obama administration. The administration has a successor in mind, so watch this space for an announcement.

A ruling from the European Court of Justice is pending on the future of 'Safe Harbour', an agreement that enables the transfer of customer data from the EU to the US. Since 2000, Safe Harbour has allowed US companies to self-certify that they meet EU data protection standards, and today it's used by some of the world's biggest technology groups, including Facebook and Amazon. Concerns over the US's laissez-faire approach to privacy, heightened by the revelations of NSA whistle-blower Edward Snowden, have brought the sustainability of the agreement before the EU's highest court. The ruling could give national data protection authorities the power to challenge data transfers or even void the agreement altogether. Those outcomes would have massive implications for international technology companies, and some fear they may contribute to the widening cyber policy gap across the Atlantic.

The personal details of roughly 15 million T-Mobile customers have been compromised in a massive data breach this week. Names, addresses, birthdates, encrypted social security numbers, and drivers' licence and passport numbers were stolen from Experian, a vendor T-Mobile uses to process its customer credit applications. Fortunately, the compromised data contained no credit card or banking information; however, the details could be used to commit identity theft. CEO John Legere has said he will undertake a 'thorough review' of T-Mobile's relationship with Experian and is offering affected customers two years of free credit monitoring.

Ironically for T-Mobile, the first week of October marked the beginning of America's National Cybersecurity Awareness Month. President Obama designated the tenth month of every year as a time to 'engage and educate public and private sector partners' about the importance of cybersecurity. Sponsored by the Department of Homeland Security, this month-long awareness campaign promotes cybersecurity as a 'shared responsibility'. Stay tuned for related events, speeches and weekly themes.

War at sea 1914-15: The virtual unreality (Part 2)


Command and control were key naval unknowns in August 1914. What hadn’t been properly appreciated in set-piece, largely visually conducted exercises before the war were the problems with radio. The full conceptual and practical difficulties associated with its use really only became apparent in the Grand Manoeuvres. These were neither frequent nor long enough to fully make the point. This would have fundamental implications for naval operations.

The potential of wireless to coordinate widely separated forces was appreciated almost from the first, just as the telegraph cable's potential had been recognised in the nineteenth century. Before 1914, the Admiralty made heroic efforts to develop 'network enabled' warfare, with an Admiralty War Room as the operational command centre. But radios had problems of range, reception, wavelength, mutual interference and reliability, while there were difficulties with security, with the encryption and decryption of signals and, above all, with the combined true and relative errors of navigation, which meant that the 'pool of errors' was often very much greater than the prevailing visibility, particularly in the North Sea. Even if you were told where the enemy was and where he was heading, there was no guarantee—or even probability—that you would find him.

The greatest difficulty with radio, however, was that it created a ‘virtual unreality’, an unreality that navies were all too ready to immerse themselves within. Too many acted as though their commander were in sight—and this mattered.

Navies had a bi-polar culture of command, perhaps most extreme at the beginning of the twentieth century. Andrew Gordon has written an extraordinary book called The Rules of the Game examining the failures at Jutland. Gordon presents a compelling picture of the way that an over-controlling approach to tactics and manoeuvres created a system of operating a fleet at sea incapable of managing events under the actual stress of combat.

By their nature, however, navies arguably always operate this way. If ships are in company, then the culture is one of obedience to allow the admiral to coordinate the force to achieve the operational intent. This is still the case, because it generally works—and disobedience by a subordinate, such as Nelson’s apparent disobedience at the Battle of Cape St Vincent in 1797, is the exception that proves the rule. Such control is, of course, more effective if achieved by ensuring prior understanding of the intent, rather than frequent signalling.

Some of the British problems would be caused by more than the tight control of formations at sea, because this actually extended to control of everything. Following the flagship’s movements and routines was compulsory. If the flagship spread awnings, so did you. If the flagship declared a rest afternoon, so did you. This sort of thing, continued for day after day in every fleet or squadron when assembled, created not so much a culture of ‘senior officer veneration’, as one of ‘the senior officer present’.

This idea of ‘presence’ making the difference is important. The other part of the naval split personality was very different. If officers were out of visual contact, then they were expected to exercise their initiative. And they generally did. During the century of the Pax Britannica, such enterprise was consistently demonstrated, creating an expectation summed up by Lord Palmerston’s declaration that he would send a naval officer to solve a problem in distant lands.

There was at least partial awareness of the situation. We tend in 2014 to think of communication as practically instantaneous, but in the navy of the last century it was not, even by radio. One expert estimated in 1906 that visual signalling rarely exceeded an effective speed of two and a half miles per minute, and it was often slower. Early experience of radio showed that its problems—even when ciphers weren't in use—meant that its effective speed was often not much better, and sometimes much worse. The greater the distance, the greater the delay.

Furthermore, neither the language nor the concepts for communication by radio existed. This was why Army observers of naval manoeuvres had good reason to criticise. One senior observer noted that ‘the preparation of orders is not understood in the Navy, making all allowance for the general differences inherent’. The Navy had yet to develop a system for coordinating remote formations in a tactical environment, something with which the Army had been struggling for more than a century.

There were key aspects to be resolved. Before radio, all tactical reporting was visual. This meant that the enemy had to be so close (either on the horizon or just over it) that absolute positional errors didn't matter: what a commander was interested in was what bearing the enemy lay on and in what direction he was steaming. A remote report required not only much more precision (and the greater the distance, the more important that precision became) but also much more information. This was not fully understood. The first British radio format for an enemy contact report included neither the enemy's nor the reporting unit's position, while the concept and practice of a tactical plot would take years to formulate.

However, the ‘virtual unreality’ came in the fact that, despite the limitations of radio, many commanders behaved as though their remote senior officer always knew more than they did, sometimes in direct contradiction of what they themselves were seeing. In the pre-war Grand Manoeuvres there were multiple instances of officers failing to act on their own initiative because they thought that higher authority somehow knew more.

Learning to use the new technology and changing the culture of command would take more than just the First World War to achieve. After the failures of the 1916 Battle of Jutland, an effort to return to the ideals of Nelson would be one of the principal concerns of the Royal Navy between 1919 and 1939. Events of the Second World War, starting with the Battle of the River Plate, suggest that this work to achieve cultural change wasn’t wasted.

History doesn’t repeat itself, but it does rhyme, and one particular rhyme of 2015 with 1914-15 is apparent to me. The ever greater reliance upon networks and the instantaneous exchange of information in what have, since the end of the Cold War, been largely uncontested electronic environments may have created a new ‘virtual unreality’ with an expectation that higher command will always be accessible, not only to give direction but to be consulted. Thus, commanders at sea may complain they are being micro-managed, but at the same time become reluctant to do anything without first clearing it with their seniors.

Will such a culture work in a cyber war?

Cyber wrap


China and the US have stolen the show this week with their negotiations towards what may become the world's first major arms control agreement for cyberspace. Bilateral discussions are focused on establishing a no-first-use policy for attacks on a state's critical national infrastructure during peacetime. While potentially ground-breaking, the agreement would bear no relevance to China's alleged hacking of either US corporations or the Office of Personnel Management.

It’s a promising turn of events for what’s been a highly sensitive topic in the bilateral relationship. Obama has also refrained from enacting the proposed economic sanctions on Chinese corporations for the cyber theft of US intellectual property. There’s been a noticeable drop in the frequency of Chinese cyber attacks against US corporations recently, which may be an effort to build  good will in the lead up to Xi’s first state visit to Washington later this week. Unfortunately, tensions are far from resolved. At the same time as Xi’s visit, China will host a tech forum in Seattle where it’ll pressure US corporations to adopt a ‘pledge of compliance’ regarding company networks within China. The pledge requires companies to make their data ‘secure and controllable’, a condition that may involve providing authorities with backdoors to systems for surveillance. By successfully drawing large players such as Apple, Facebook, IBM, Google and Uber to the forum despite current bilateral tensions, the Chinese are set to demonstrate the leverage they wield over any cybersecurity discussion.

Two US Democratic senators have shone the spotlight on automakers' responsibility to secure their increasingly networked vehicles. Edward Markey and Richard Blumenthal requested information about cyber security policy from 18 large automakers this week, including BMW, Fiat Chrysler and Toyota Motor Co. A similar survey was conducted in December 2013; however, the hacking of a Jeep Cherokee in July has returned attention to the vulnerabilities of vehicle connectivity. Intel, which provides infotainment technology to some of the largest automakers and is a key target for potential hackers, has this week revealed its interest in the issue by establishing a new Automotive Security Review Board. The board will conduct security audits on Intel's products and has already released a 'white paper' outlining automotive cybersecurity best practice.

There was a win for cybercrime fighters this week, with infamous Russian hacker Vladimir Drinkman pleading guilty to criminal charges. Drinkman and four other defendants are on trial in the US for stealing 160 million credit card numbers from corporations including Diners Singapore, Nasdaq, JCP, 7-Eleven, Dow Jones and Jet Blue. The group exploited SQL database vulnerabilities in order to install ‘packet sniffers’—malware that monitors and documents network traffic. Drinkman initially pled not guilty when captured in 2010 but has now confessed to his cybercrimes and is facing up to 30 years in prison. The theft incurred a corporate cost of $300 million, plus enormous private losses, and has been deemed the largest data breach scheme ever prosecuted.

The US has announced plans to post a prosecutor at Europol in order to facilitate greater international cooperation in the fight against cybercrime. US Attorney General Loretta Lynch said that the representative will be a day-to-day presence, aiding investigations into botnet networks and dark web marketplaces. Europol Director, Rob Wainwright, is hopeful that the presence of a US prosecutor will encourage the support of large US technology companies in international cybercrime investigations.

Anonymous has been busy this week, hacking government websites of both Vietnam and the Philippines. The infamous hacktivist group defaced the homepage of the Philippines’ National Telecommunications Commission (NTC) as a demonstration against poor internet service delivery. The group’s hack left a message protesting against the ‘over promised, under delivered system’ that it believes is an obstacle to equality of internet access. The breach came days after the agency’s service test which revealed ISPs falling short of advertised speeds, and has prompted the NTC to guarantee an increase in the monitoring of internet speeds starting in October.

Across the South China Sea, Vietnam suffered a blow to its government portal on its recent National Day thanks to the collaborative hacking efforts of Anonymous, AntiSec and HagashTeam. The cyber vandalism was an attempt to pressure the Vietnamese government to include political activists, journalists, bloggers and human rights defenders in their recent mass pardoning of more than 15,000 prisoners, including drug traffickers and murderers.

We’re (not really) under cyber attack


Last week’s release of the first Australian Cyber Security Centre (ACSC) Threat Report provides some sobering statistics and interesting case studies on the cyber threats facing Australia. It outlines the problem well, but beyond the usual missives to implement ASD’s Top Four Mitigation Strategies, it’s relatively mute on the response. This is a task that has likely been left for the Government’s Cyber Security Review to complete in the coming weeks.

It’s unsurprising to most that Australia endures constant attempts to breach public and private networks. The combined 12,204 incidents that either ASD or CERT Australia responded to in 2014, around 33 each day, provides some insight to the scale of cyber intrusions that the ACSC handles. The threat is also growing in sophistication, as cybercrime groups begin to rival the capability of some state-sponsored actors, demonstrating the enormous resources they’ve accumulated from their successes.

The ACSC has categorised cyber adversaries into three tiers: foreign state-sponsored actors, serious and organised crime, and issue-motivated groups. The motivations of those groups vary, as do their capabilities. State-sponsored actors are the most capable, closely followed by the larger cybercrime syndicates. Those two groups have the most sophisticated capabilities and potentially the biggest effect on our national security and economic wellbeing. Issue-motivated groups use less complex, more readily available capabilities, such as DDoS, to bring attention to their cause without causing serious damage or harm. The ACSC predicts that terrorist groups will continue to be a nuisance in cyberspace, defacing websites and using DDoS to draw attention to their cause, rather than pursuing more destructive cyber capabilities, even as the financial and technical barriers to those more sophisticated tools fall further.

The careful definition of the term 'cyber-attack' is of particular interest to policy wonks. Used colloquially to describe just about any malicious act in cyberspace, for Government—and in particular the Defence-dominated ACSC—the term is defined as an act that seriously compromises national security. The report notes that Australia has never suffered an event that Government would consider to be a cyber-attack, but if it did, that event might be considered an act of war. The imprecision of the common usage of 'cyber-attack' would be unsettling for an agency that's primarily responsible for responding to armed attacks on Australia. Careful definition provides greater clarity about how and when Defence becomes involved in responding to the many thousands of cyber intrusions Australia is subject to.

Government’s efforts appear to be bearing some fruit as the number of incidents ASD responded to has grown at a slower pace than in previous years, and the confirmed number of significant breaches of Australian Government networks has . The biggest hole in the statistics noted in the report is intrusions against the private sector, which the ACSC admits it has a more limited understanding of. This means that there’s potentially more cyber intrusion attempts occurring than is known, with attempts going undetected and unreported.

CERT’s statistics show that the energy, banking and financial services, and the communications, defence industry and transport sectors have reported the most cyber intrusion attempts. These sectors are more likely to have implemented the required capabilities to identify cyber intrusions as they are well aware of the impact of cyber threats on their business. Other industry sectors, like mining and resources and agriculture, also face similar risks, but report far fewer incidents. Government is encouraging the private sector to implement adequate measures and share information. However, without adequate understanding of the risk, there’s often little incentive to invest in expensive cyber security capabilities until a major incident has damaged a business’ reputation and bottom line. This is a shared problem, and many of our key partners such as the United States are struggling with the same issue.

While the Government’s work to build stronger cyber defences appear to be successful in the face of more numerous and sophisticated cyber adversaries, it seems that the private sector is still struggling to come to terms with cyber threats. ACSC offers its usual advice—that implementing ASD’s Top Four Mitigation Strategies will assist in deflecting all but the most determined adversary—but Government can do more. Better two-way information sharing with businesses will highlight the need for investment in cyber defence, a task made difficult by the classified nature of much of the cyber threat intelligence Government holds, and the sensitive nature of ACSC’s current accommodation which it shares with ASIO. This makes it difficult for business to engage with ACSC and to use the information it can furnish. The forthcoming Cyber Security Review should provide greater clarity on how Government intends to address the threats outlined in ACSC’s report, and hopefully how it will work with the private sector to make all of Australia a difficult cyber target.