Getting Australia’s digital Trust Exchange right

To realise the potential of the Digital ID Act and the recently unveiled Trust Exchange (TEx), the government must move past political soundbites and develop a comprehensive identity and credentials strategy that includes building technical architecture and conducting an end-to-end security assessment.

The government is yet to publish the rules and standards for the Digital ID Act, which was finally passed in May. We’re also still waiting to hear details of TEx, a world-leading digital identity verification system, which Government Services Minister Bill Shorten unveiled in August.

The Digital ID Act was the government’s response to the 2022 data breaches at Optus and Medibank, which prompted a fundamental reassessment of what sensitive data should be collected and how long it should be stored. Businesses should still conduct checks on customers—for example, to prevent money-laundering or alcohol sales to minors—but a better solution is needed than simply storing digitised copies of paper identification documents.

A digital ID scheme had been proposed for many years in different guises, but the 2022 breaches finally led to new draft legislation in September 2023, kicking off the process that led to the Digital ID Act.

This Act is a major step in the right direction. It provides a legislated basis for a federated trust system and avoids creating a unique identifier for every citizen or a centralised ‘honeypot’ of data about people and their transactions. The accreditation rules include strong privacy and security safeguards to build trust in the system and put individuals in control of what personal data is disclosed to whom and when. However, as I outline in a recent ASPI report, there are several policy issues which, if left unresolved, could jeopardise successful deployment and adoption of the digital ID system.

Based on the limited details released so far, TEx could be on the verge of repeating many of the same missteps.

TEx appears to be a system that securely shares specific identity attributes for in-person interactions through a digital identity app on a handheld device. One example is proof-of-age checks at licensed premises: in lieu of physical documentation that shows the customer’s date of birth, the app simply verifies whether they are over or under 18. This would prevent data breaches such as the Clubs NSW incident, in which hackers stole data from patrons’ driver’s licences that had been routinely scanned and stored.
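TEx’s technical architecture hasn’t been published, but the data-minimisation principle behind such a check is easy to illustrate. The Python sketch below is illustrative only; the function name and flow are my assumptions, not anything drawn from TEx documentation. The point is that the verifier receives a yes/no answer derived from the date of birth, never the date of birth itself.

```python
# A minimal sketch of the data-minimisation idea behind a TEx-style
# proof-of-age check. Illustrative only: TEx's actual protocol is unpublished.
from datetime import date

def over_18(date_of_birth: date, today: date | None = None) -> bool:
    """Derive the one attribute a venue needs from the DOB it must not see."""
    today = today or date.today()
    years = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return years >= 18

# The wallet app would evaluate this locally against a government-signed
# credential and present only the boolean answer to the venue.
print(over_18(date(2004, 6, 1)))
```

In a real deployment the boolean would be bound to a government-signed credential and presented with a cryptographic proof, so the venue can trust the answer without ever seeing the underlying document.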

But the sparse details about TEx are contradictory and ambiguous, causing some to be sceptical of the scheme. Shorten has suggested that it will ‘build upon digital ID infrastructure’, using the existing identity exchange operated by Services Australia and the myGov app, supported by some sort of record of each identity verification transaction. But this contradicts accreditation rules for the identity exchange, which specifically prohibit it from keeping logs of user activity.

This sort of ambiguity leads some to assume the worst, such as Electronic Frontiers Australia, which claims the system will create the ‘mother of all honeypots’ and enable centralised surveillance. It doesn’t help that a recent Ombudsman report suggested that the myGov app currently falls well short of expectations on security and fraud prevention.

The government is also setting unrealistic expectations about the benefits of TEx, with Shorten suggesting that it will achieve ‘some of the best aspects of the GDPR’. The introduction of GDPR—the European Union’s data privacy and security law—had a dramatic effect on companies’ security and privacy practices because it was backed by massive penalties for non-compliance and encompassed all aspects of data collection, storage and usage. In contrast, Australia’s TEx, a voluntary system that might allow some organisations to opt out of collecting some personal data, is never going to have the same level of impact.

The incentives for companies to opt in are unclear. Big names such as CBA and Seek have apparently offered ‘in-principle’ support, but this may change when they hear more details, particularly about costs.

It is also unclear how these different IT systems, owned and operated by different departments, will fit together to provide end-to-end service, security and privacy. TEx will be built by Services Australia, ‘on top of’ Digital ID infrastructure set up by the Department of Finance. Meanwhile the Attorney-General’s Department is developing a mobile app that alerts users whenever their identity credentials are used.

To execute these systems successfully, the government must develop an overarching identity and credentials strategy across the Commonwealth and the states and territories. This should include technical architecture, based on sound system engineering principles, that outlines how the different systems will work together. There should also be an end-to-end security assessment to ensure data confidentiality and resilience in the system. To achieve this, the government must break down departmental silos and build public support through transparent information and debate.

These new digital ID systems have the potential to increase privacy standards, reduce data breaches and improve the public’s experience of government service delivery—but only if they’re properly executed. This opportunity is too big to squander.

Sovereign data: Australia’s AI shield against disinformation

Any attempt to regulate artificial intelligence is likely to be ineffective without first ensuring the availability of trusted large-scale sovereign data sets.

For the Australian government, AI presents transformative potential, promising to revolutionise the way in which government departments and agencies operate. The allure of AI-driven efficiency, precision and insight is irresistible. Yet, amid the chorus of AI evangelists, a discordant note rings true: establishment of robust AI policy guardrails now would be premature and potentially counterproductive without first addressing the fundamental issue of sovereign trusted data.

This contentious stance is rooted in the understanding that trustworthy AI hinges on the availability of trusted large-scale data sets. Without this bedrock, current attempts to regulate AI could end up being built on quicksand and become ineffective in mitigating the menace of misinformation and disinformation. Proposed Australian regulations are focusing on privacy requirements, labelling of AI-generated work, the legal consequences of AI choices and understanding how the software makes decisions.

AI, in its essence, is a data-driven phenomenon. The algorithms that power AI systems are not imbued with inherent intelligence; rather, they learn and evolve through ingestion and analysis of vast quantities of information. The quality, accuracy and representativeness of this data directly influence the performance and trustworthiness of the resulting AI models. In the absence of robust, verifiable data, AI can purvey misinformation, amplifying biases, perpetuating stereotypes and undermining public trust. Such risks are particularly acute for government agencies, for which stakes are high and the impact of erroneous decisions can be far-reaching.

Regulations that are not grounded in the realities of data quality and provenance risk being toothless and easily circumvented by those seeking to exploit AI for nefarious purposes.

Australian sovereign data is data that is owned, controlled and governed within Australia’s borders. It is subject to Australian laws and regulations, ensuring its collection, storage and use adhere to the highest standards of privacy, security and ethics. This control is crucial in mitigating the risks of foreign interference, data manipulation and the spread of misinformation. By maintaining sovereignty over the source data, the Australian government can ensure that the AI systems it deploys are built on a foundation of trust and transparency.

Sovereign data empowers Australian government agencies to build AI models that are tailored to the unique needs and context of the nation. By training AI systems on data that accurately reflects the diversity and complexity of Australian society, we can ensure that these models are not only effective but also equitable and just. Furthermore, sovereign data fosters transparency and accountability, allowing for independent scrutiny of the data and algorithms that underpin AI decision making. This transparency is essential for building public trust in AI and ensuring its responsible use in government.

Establishing trusted large-scale sovereign data sets is undeniably complex. It requires overcoming four challenges:

Data Collection and Integration. Gathering comprehensive, high-quality data from disparate sources across government agencies is a logistical and technical challenge. Data must be standardised, cleaned and de-identified to ensure its usability and protect privacy (a minimal sketch of the de-identification step follows this list).

Data Governance. Robust data governance frameworks must be established to ensure data quality, security and ethical use. This includes defining clear roles and responsibilities for data management, implementing access controls and establishing mechanisms for addressing data breaches and misuse.

Expertise and Resources. Building and maintaining sovereign data capabilities will require significant investment in infrastructure, technology and skilled people. Data scientists, analysts and governance experts are essential for ensuring effective management and utilisation of sovereign data.

Cultural Shift. A cultural shift towards data-sharing and collaboration is needed across government agencies. Breaking down silos and fostering a culture of open data can accelerate the creation of comprehensive, multi-dimensional data sets that reflect the complexity of real-world challenges.
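To make the de-identification challenge concrete, here is a minimal sketch of one common building block: replacing direct identifiers with keyed hashes so records can be linked across agencies without exposing who they belong to. The field names and salt handling are assumptions for illustration; real schemes need far more, including key management and treatment of quasi-identifiers such as postcode, which salted hashing alone doesn’t address.

```python
# A minimal sketch of one de-identification building block: replacing a
# direct identifier with a keyed hash so the same person links across data
# sets without being named. Field names and salt handling are assumptions.
import hashlib
import hmac

SECRET_SALT = b"replace-with-a-managed-secret"  # hypothetical; never hard-code in practice

def pseudonymise(identifier: str) -> str:
    """Keyed hash of a direct identifier: stable for linkage, hard to reverse."""
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()

record = {"medicare_no": "2123 45670 1", "postcode": "2600", "diagnosis": "..."}
record["medicare_no"] = pseudonymise(record["medicare_no"])  # quasi-identifiers remain a risk
print(record)
```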

Despite these complexities, several countries have made significant strides in establishing sovereign data capabilities. The European Union’s General Data Protection Regulation (GDPR) is a prime example, setting a global standard for data privacy and control. India’s push for data localisation and its efforts to build a national digital infrastructure also highlight the growing recognition of the strategic importance of sovereign data.

The concept of sovereign data linkages with trusted nations also presents an opportunity for Australia. By establishing secure and mutually beneficial data-sharing agreements with like-minded and aligned countries, Australia could expand its access to high-quality data while maintaining control and ensuring ethical use. Such linkages would require careful negotiation and robust governance frameworks to ensure data privacy, security and alignment with shared values.

The Chinese government’s unfettered access to vast amounts of citizen data, coupled with its willingness to deploy AI for surveillance and social control, raises serious concerns about the future of AI ethics and governance. Australian collaboration with like-minded nations can counterbalance China’s AI ambitions.

Responsible and effective deployment of AI in Australian government is not merely a technical challenge but a strategic imperative. Sophisticated sovereign data sets can provide the bedrock for trustworthy AI, mitigating the risks of misinformation and disinformation while unlocking the full potential of AI for public good.

‘Weaponisation’ of religious sentiment in Indonesia’s cyberspace

The announcement that prominent Indonesia Ulema Council chairman and cleric Ma’ruf Amin will be President Joko ‘Jokowi’ Widodo’s vice-presidential running mate for the 2019 election has stimulated fresh debate about the ‘Islamisation’ of Indonesian politics.

Amin is the head of Indonesia’s largest Muslim organisation, the 45-million-member Nahdlatul Ulama (NU). Jokowi’s preferred pick had been former Constitutional Court Chief Justice Mahfud MD, but he and his Indonesian Democratic Party of Struggle bowed to political pressure to choose a running mate with high-level Islamic credentials. The NU-linked National Awakening Party and United Development Party had threatened to leave Jokowi’s governing coalition if Amin were not chosen.

Islamic conservatism has been ascendant in Indonesia ever since Saudi-sponsored theological influence began in the 1980s. Growing Islamic conservatism became even more pronounced after the fall of Indonesia’s second president, Suharto, and his authoritarian ‘New Order’ regime.

Indonesia’s post-Suharto reformasi saw the opening up of public discourse, and subsequent rise of previously suppressed conservative Islamic rhetoric and its ‘hardliner’ proponents. These hardliner Islamists emerged from decades of marginalisation and repression, under the regimes of both Suharto and his predecessor Sukarno, with little appetite for pluralism and tolerance.

The proliferation of social media in Indonesia has allowed greater unrestrained expression of strong religious views. This has enabled groups such as the Muslim Cyber Army, an organisation described as being without structure and similar to the ‘hacktivist’ group Anonymous, to reach a larger audience.

One way the Muslim Cyber Army targets liberal opponents is through ‘doxing’: stealing personal details and publishing them online. Groups such as the far-right Sunni fundamentalist Front Pembela Islam (Islamic Defenders Front) then use those details to hunt down and physically attack their targets.

This ‘weaponisation’ of conservative Islamic sentiment and religious intolerance has involved doctored online content and disinformation, deliberately spread through social media. As more Indonesians have gained access to the internet, mainly through low-cost smartphone technology, Indonesia has developed a disinformation problem.

The most prominent example of this phenomenon was the 2017 jailing of former Jakarta governor Basuki Tjahaja Purnama (Ahok) for two years for alleged blasphemy. In a September 2016 speech, Ahok asserted that politicians shouldn’t mislead voters by misinterpreting the Koran in advising Muslims against voting for non-Muslim political candidates. ‘Ladies and gentlemen … you’ve been lied to by those using [the Koran’s] Surah al-Maidah verse 51’, he said. The speech was then edited to seem as though Ahok was saying that it was the Koran itself that was misleading voters. The resulting video was used by anti-Ahok forces, including Ma’ruf Amin, to mobilise mass demonstrations that forced the government to charge Ahok with blasphemy.

The Indonesian government needs to reduce the effect of disinformation, especially ahead of the 2019 general election. Indonesia’s outdated legislation allows cybercriminals and botnets to thrive. The government also needs to do more to stop Indonesia from being used as a haven for these activities, which enables the spread of the doctored content that is in turn used to ‘weaponise’ Islamic sentiment.

While it’s important that Indonesia create appropriate legislative reform that helps reduce cybercrime and botnets, it should not endanger free speech. The recent draft revision to Article 309 of the Criminal Code proposes six years’ imprisonment for ‘any person who broadcasts fake news or hoaxes resulting in a riot or disturbance’.

While the code needs to be updated, the proposed revision is worrying because it doesn’t define or explain what constitutes a ‘disturbance’ or what is considered to be ‘fake’. That’s a problem because it opens the system up to abuse: anything not approved could be labelled as ‘disturbing’. As it stands, the clause could potentially be used to prosecute journalists, threatening press freedom.

The long-awaited creation of the Badan Siber dan Sandi Negara (BSSN), Indonesia’s new national cyber agency—after years of setbacks and delays—shows that Indonesia is becoming more serious about cybersecurity. In its role as manager of Indonesia’s cyberspace as well as content moderator, the BSSN will play a pivotal role in the run-up to the 2019 general election.

The BSSN will have the difficult task of trying to protect Indonesian voters from disinformation without censoring political expression. One way it could do that is to allow individuals and groups open channels for expressing legitimate political opinion, without the threat of being criminalised as blasphemers. It’s vital that the threat represented by doctored content and disinformation doesn’t supersede the importance of BSSN remaining politically impartial.

There are plenty of opportunities for Australia and Indonesia to increase their engagement on cyber issues, which is consistent with the Australian government’s international cyber engagement strategy. Dialogues and bilateral forums should certainly continue and be increased where appropriate, not just with Indonesia but also with other more open societies in the region like Japan and South Korea.

The recently announced comprehensive strategic partnership between Australia and the Republic of Indonesia and subsequent memorandum of understanding on cyber cooperation are promising engagement strategies. The MoU is a two-year non-legally-binding agreement to share information on cyber strategies and policies, build cyber capacity through training and education programs, promote business links to enable growth in the digital economy, and tackle cybercrime by sharing training opportunities to strengthen forensic and investigation capabilities.

Australia should use both the comprehensive strategic partnership and the MoU as platforms to encourage the Indonesian government to either develop clear definitions in the proposed Criminal Code revision or scrap it altogether.

This would help promote ongoing journalistic freedom in Indonesia as well as freedom of expression more generally. The MoU also stipulates closer cooperation with the BSSN, and Australia should use that opportunity to encourage the BSSN not to fall into the trap of state censorship, damaging Indonesia’s youthful democracy.

The campaign against Huawei

The case against Huawei’s participation in bidding for the 5G network in Australia appears to be based on incomplete information, at least as far as the public record allows us to judge.

For a full picture, there are several fields of knowledge we need to understand and reconcile: espionage, computer science, information and communications technology, cyber security, business studies, foreign policy, China studies, political science, international political economy, and globalisation. But there are also political perspectives and biases. The latter issue was rather brilliantly captured in a recent Norwegian study.

This study saw the Huawei challenge, the Snowden revelations about NSA, and the Volkswagen emissions-monitoring scandal as part of a common problem: assurance of supply chain components in the information age. The study concluded that ‘the problem [of supply chain assurance] should therefore receive considerably more attention from the research community as well as from decision makers than is currently the case’.

The consensus of global scholarly opinion on these issues suggests that those in Australia advocating for a ban on Huawei in the 5G network—mimicking the opinion of US intelligence chiefs expressed in February 2018—have not reviewed all of the available information and perspectives. Public policy analysts in Australia should be wary of their own government when it so closely mirrors senior officials in the Trump administration on any issue of intelligence policy, for two reasons.

The first, and most worrying, is the poor record of the US intelligence community on big issues of analysis if they’re highly politicised. Remember Iraqi WMD as one in a 70-year saga of great US intelligence failures. The second is that internal political disputation within the Trump administration and the US Congress on relations with China is at fever pitch.

So what does the study of espionage tell us about the campaign against Huawei?

There’s no doubt that countries like China, the United States, Russia, Israel and France find it easier to implant back doors in commercially available equipment manufactured by companies domiciled in their territories. For this and a variety of other reasons, wise governments, corporations and citizens should assume that all equipment in their supply chains, regardless of the country of origin, can be compromised from a cyber security point of view. The Norwegian study found that such back doors are often very difficult to detect.

We can add to this the overwhelming evidence that vulnerabilities in Microsoft Windows have been responsible for a very large share of security breaches globally, including in Australia. As argued in a study I co-authored with German scholar Sandro Gaycken for the New York-based EastWest Institute in 2014, ‘highly secure computing’ (that is, non-vulnerable systems) has to be the approach.

The national security damage caused by vulnerabilities in Microsoft Windows puts into the shade the unsubstantiated claims (unsubstantiated in the public domain at least) that Huawei equipment has directly produced security breaches. Moreover, NSA cyber weapons based on the vulnerabilities in Windows, such as EternalBlue, have caused more documented security breaches globally, and in Australia, than any Huawei products. Yet Australia’s Defence Department uses Microsoft Windows.

We also need to assess the relative intelligence value of back doors in Huawei products if they in fact exist. We can assume they do, either by design or by error. But the share of high-grade intelligence collected by this means would be minuscule. Chinese and American spy agencies already have easy access to most unclassified or unencrypted telecommunications from Australia without relying on back doors in telecoms equipment.

If China wanted to use a domiciled company for implanting back doors, it would not rely on the Chinese Communist Party cell in Huawei to set that up. The Huawei party cell would not be in the chain of command for Chinese intelligence operations of this kind. The cell is not oriented towards espionage, though its members would report on internal security issues to the Ministry of Public Security.

If the US wanted to plant back doors in the equipment of a US-domiciled company, it would not need a law to compel the cooperation. It would simply get consent from people at the top of the company, as it did with NSA’s PRISM program, where US telecoms companies, such as AT&T, and information utilities, such as Google, provided a direct feed to NSA headquarters of all communications, according to documents leaked by Snowden.

Beyond intelligence studies, we need industry knowledge. Huawei estimates that 50% of Australians rely on its systems in some form for their telecommunications. This is probably a radical underestimate—I think it would be closer to 95% if we’re talking about all Chinese-made systems. Most of Australia’s unclassified communications today probably depend on systems using at least one component manufactured in China.

I base this very rough estimate on several considerations. According to a 2018 study on smaller countries like Australia, the bulk of our domestic internet traffic and email is probably routed through foreign servers and internet gateways. A large slice goes to countries like the UK, where BT is the provider using Huawei equipment, alongside gear from other Chinese manufacturers like ZTE. Then there’s the share of our communications traffic to and from China itself. According to a 2018 Chinese study, the diversion of internet traffic through other countries is increasing in spite of intensifying claims to internet sovereignty.

The campaign against Huawei imagines that Australia has a cyber border. It does not. It’s deeply entangled in a globalised laissez-faire ICT economy and diffuse internet traffic pathways. Our public policy is still learning the nature and scope of this reality.

ASPI suggests

The world

The United States this week withdrew from the UN Human Rights Council. Vox provides a clear picture of the situation while CNN discusses reactions to the move. This CFR piece investigates how prospects for international development and cooperation are dwindling as Trump continues to retreat from multilateral frameworks.

‘An inconvenient truth’ is usually associated with Al Gore’s 2006 documentary. But World Refugee Day on 20 June brought the sobering realisation that more people than ever before have been forced to flee persecution or war. New research by UNHCR shows that the number of people forced to flee their homes had risen to 68.5 million at the end of 2017. The US administration has dominated headlines with its policy of separating children from their parents as families flee violence in Central and South America. Snopes provides an insightful fact check on the legal situation.

Yemen is another country dealing with violent conflict and internally displaced people. For details on recent turning points, see Al Jazeera’s analysis of why the Saudi coalition is attacking Hodeidah and the humanitarian effects involved. Amnesty International has released alarming details about the attack’s impacts on the devastated Yemeni population.

Migration policy also continues to fuel friction in Europe. Germany’s coalition government is in crisis as Interior Minister Horst Seehofer and Chancellor Angela Merkel clashed over immigration policy ahead of elections in Bavaria. As the New York Times reported, Donald Trump weighed into the debate again, falsely claiming that immigration increases crime. Merkel hopes to find a pan-European solution at the EU leaders’ meeting next week. Politico takes a closer look at that ‘Mother of all EU summits’, The Economist’s Jeremy Cliffe has a great graph showing possible outcomes for Merkel, while Carnegie’s Judy Dempsey looks at the possible effects on future European security.

SIPRI’s new yearbook is out. Key findings include that the number of multinational peacekeepers is declining despite growing demand, and that nuclear weapons are being modernised rather than abandoned. Research by Erin Connolly and Kate Hewitt shows how shockingly little knowledge US students have about nukes, and what they’re doing about it.

With the World Cup in full swing in Russia, Wired has a couple of tips for dealing with Moscow’s approach to cybersecurity. The Atlantic discusses China’s cyber governance plan and intention to dictate the internet’s future rules (and content). This Washington Post article summarises the congressional call to have research collaborations between American universities and Huawei investigated. Sophie-Charlotte Fischer shows in this brief for ETH Zurich’s CSS that China aims to be the world leader in AI by 2030. She argues that Beijing’s drive might set off a new technology race, but that countries should see the potential for mutually beneficial cooperation.

And some more for the cyber fans: as Swedish elections near, the country is preparing for an onslaught of Russian hacking and cyber election tampering. This ABC radio interview with Erik Brattberg contains all you need to know on the situation. It comes a day after Israeli PM Benjamin Netanyahu addressed Cyber Week at Tel Aviv University about the threats and benefits of cyber to both public and private actors.

The tech geek is on leave, contemplating all things techy and geeky, and will return in July. But we still found some satisfyingly geeky material: the OCCRP developed a tool to track the travel of the wealthy—a big help for journos and analysts investigating money laundering and the like.

And one last thing on Trump: he wants a space force as the sixth branch of the US Department of Defense. That might violate the Outer Space Treaty of 1967. National Geographic discusses the legal issues and how existing laws provide some back doors. That said, Trump’s proposal has been met with plenty of opposition in the US, meaning it may not pass Congress in the first place.

Multimedia

The US Energy Information Administration has published over 600 graphs on Flickr showing a broad variety of data and trends covering all things petroleum, oil and other liquids.

Sixty-five years after the failed uprising of East Germans on 17 June 1953, this video recounts the demonstrations and violent crackdown that followed. [7:39]

This fascinating episode of Al Jazeera’s ‘People and Power’ profiles Wahida Mohamed Al-Jamaily, a woman leading a militia in northern Iraq. [25:00]

Podcasts

The APPS Policy Forum podcast talks about the World Cup and its meaning for Russia’s international policy game and about the country’s energy politics and goals in the Asia–Pacific. [56:04]

Pod Save the World hosts former US National Counterterrorism Center chief Nick Rasmussen to discuss the Center’s place in the national security architecture, as well as terrorism propaganda. [50:47, skip the first minute and ads at 16:25-19:50, 34:55-36:45]

The BBC’s How to Invent a Country investigates the beginnings of Amsterdam and how it went from being a swamp to being one of the world’s leading cities in such a short period of time. [30:00]

Caliphate this week interviews an ISIS returnee who has confessed to murder one year after returning to America. [33:00]

Events

Canberra, 24–26 June, ANU Crawford Leadership Forum, ‘Global realities, domestic choices.’ Details here.

Canberra, 27 June, 5.30–7 pm, National Library of Australia, ‘Who will save the world?’ with Jan Fran. More information here.

Sydney, 28 June, 5–6.30 pm, Sydney University, ‘China and global refugee crisis: external and domestic dynamics’. Free registration here.

Canberra, 4 July, 5.30–8 pm, ASPI and Thales, ‘Thales-ASPI Hamel Centenary Oration’, delivered by the incoming Chief of Defence Force, Lieutenant General Angus Campbell, AO, DSC. More here.

The Strategist Six: Chris Painter

Welcome to The Strategist Six, a feature providing a glimpse into the thinking of prominent academics, government officials, military officers, reporters and interesting individuals from around the world.

1. As a top US cyber specialist, you’ve seen the internet shrink the world by allowing people to communicate over vast distances. It’s given us access to massive amounts of information and allowed oppressed people to unite and force change. But it’s also used by terrorists to encourage attacks and by nations to steal commercial and military secrets. Overall, has the net made the world a better or a more dangerous place?

Every new technology from the beginning of mankind has been seized upon by criminals and others who have tried to exploit it. For better or worse, the internet was never conceived as a secure platform. Instead it was designed to ensure communications could survive and be resilient. On balance, it’s been a tremendous force for good in terms of social interaction, global communication and economic growth. So even with the mounting threats, I would definitely say it has made the world a better place.

However, there’s a wide range of threats and threat actors in cyberspace, including criminals, terrorists who predominantly use the internet to communicate and plan, and some nation states that cause disruption and steal sensitive commercial and other information. Cyberspace is also a new domain of warfare where over 100 countries are developing offensive capability. We’ll certainly see these capabilities employed as part of a traditional physical conflict but, as recent cases like the destructive NotPetya worm attributed to Russia illustrate, we’ll also see them outside traditional conflicts. Yet we don’t have a good idea of what escalation is in cyberspace, what the bounds of acceptable state behaviour are, and what the consequences might be if those bounds are breached. We need to work all of those things out.

The US and Australia have been in the vanguard of advancing an international stability framework for cyberspace. That framework includes applying existing international law to cyberspace, getting consensus on certain voluntary norms (or rules of the road) for responsible state action, and transparency and confidence-building measures such as hotlines to help dial down the chances of misperceptions and avert escalation. There has been good progress in promoting this framework but we also need to be better at deterring bad conduct in cyberspace. There have to be timely and credible consequences for bad actors. That means enhancing our law enforcement capabilities and going after more criminals and locking them up to make clear there’s a cost for their actions.

It also means we need to be much better at imposing consequences on disruptive nation states. We must act collectively with like-minded countries, see what tools we have to deter a potential adversary’s behaviour, and be willing to use them. There’s a lot left to do in this area. For example, we still need to explore how existing international law maps to cyberspace, further articulate and gain wider acceptance of voluntary norms and improve collective response.

This work will involve governments, the private sector and civil society. Recently, for example, the Global Commission for the Stability of Cyberspace put forward a proposed norm stating that state and non-state actors shouldn’t take actions that substantially disrupt the general availability of the global core of the internet.

If we hope to make real progress combatting the threats we face and seizing on the opportunities that cyberspace provides, we need to get away from thinking that cyber is this boutique, technical policy area and ingrain it in how we think about national security, economic security and foreign policy.

2. Where are we going with the internet?

We’re not going to go backwards. It’s not a genie you can put back into the bottle. The web is intertwined with everything we do and it’ll continue to evolve and become more useful. You’ll also have more sophisticated attacks and attackers because of that dependency. We’ll be in a cat-and-mouse game to an extent. There’ll be lots of innovation.

Those who are intent on doing evil or using it for their own purposes will find more sophisticated ways to do that. We’ll do what we can to contain the threats because, ultimately, we have the platform to do good things.

The technology that’s allowed us to communicate worldwide and enabled all this social interaction can also be used by more repressive countries to monitor and control their citizens. There’s a range of challenges that extend to cybersecurity, human rights and how the internet itself is governed in the future. Now that it’s become such a big deal, the natural instinct of many states is to want to control it. We need to be vigilant in ensuring wide participation by all stakeholders to ensure that this technology continues to evolve and thrive.

3. Can it ever be completely controlled by anyone or any state?

I don’t think so. You can imagine scenarios where states try to control their piece of the internet. That obviously undermines its global nature and its value to commerce and other things it enables. There’ll be challenges to the architecture of the internet itself but it would be very hard for any one government to control it. But that doesn’t mean there won’t be governments that try.

4. Have we reached a plateau or will we continue to see the same technological leaps in the cyber area as in the past, and what are your main concerns about the internet and computing generally?

In terms of sheer computing speed, people have often said it’ll slow down because it’s reached its practical limits with the size of circuits. But circuits keep getting smaller—down to the atomic level—and faster and more capable. Every time we think it has reached its limits, someone thinks of new things like stacking chips on top of each other. It hasn’t slowed yet. I think innovation and advancement will continue to accelerate.

Quantum computing could bring a vast increase in capability. In addition, there will continue to be great innovation in terms of the architecture of the internet and the applications that run on top of it. I don’t know what the next big leap will be but I’m confident something will happen.

There are a lot of tensions built into the internet. Obviously it would be useful to have easy attribution of internet traffic to go after bad actors, but that’s bad for human rights and privacy so you have to find middle ground. You have a lot of new things coming online, including the internet of things and the promise of everything from self-driving cars to autonomous health systems. That’s great and can lead to amazing innovation, but if security isn’t built in as we’re doing this, they could be vulnerable to attack and the results will be physical as well. You won’t just lose your information but something bad may happen in the physical world—including critical infrastructure disruption. I worry about that.

The other thing I really worry about, and this doesn’t get a lot of discussion, is how we preserve the integrity of information. We worry about data being stolen or deleted but we don’t talk enough about what happens when data is made unreliable by a bad actor. For example, if someone gets into your health records and changes your blood type so that the next time you get a transfusion you die, that’s certainly more serious than not being able to access a webpage because of a distributed denial of service attack.

5. Much military equipment relies on satellites. Could a cyber attack render that inoperable?

Anything that relies on computers and computer networks is potentially vulnerable if the right precautions aren’t taken and protections implemented. Militaries need to be cognizant of how dependent they are on systems that ride on the back of networks—secure networks but also just networks. What would they do if they were unavailable? Some militaries try to train for that—a day or a week without cyber.

We need to be keenly aware of how dependent we are. What’s our resilience? What’s our bounce back? Attacks on critical infrastructure may be low in their probability but high in their impact if they happen.

Many people haven’t done the basic hygiene they need to in order to protect themselves and that’s a problem. You can do basic things to protect yourself from most intrusions and attacks. Even when you do that, there are still the dedicated, usually state, actors who can use tools to try to get into your system. But that allows those protecting systems to focus on that smaller set of actors and their tools.

6. How concerned should we be about advances in artificial intelligence? Are machines going to take over?

There are different camps on this. I think AI, as it’s now constituted, is very helpful. Machine learning and the like can lead to great advances. The dystopian movie view has AI with human characteristics and taking over everything. I suppose that’s a possibility but I don’t think we’re anywhere close to that and we can take precautions. I tend to take a more optimistic view. I don’t think we should shy away from exploring AI because it has so many benefits. Who knows what the future will bring but I don’t think it’ll be a binary thing where suddenly, one day they’re running us.

Is Australia’s national digital identity vulnerable to manipulation?

Just as we need to protect Australia’s critical infrastructure—our banking systems, power supplies, ports and roads—we must protect our digital information assets, particularly those that make us a nation legally, culturally, socially and historically.

Digital and digitised data is part of what makes us Australian. It underpins our democracy, our law, our society and how we see ourselves. It’s essentially the evidence of who we were, are and, probably, will be.

In early January, I started a project looking at the protection and vulnerability of Australia’s digital national identity assets. As part of a six-month fellowship with ASPI’s International Cyber Policy Centre, I’m asking:

  • What, exactly, are those assets?
  • What would happen if they were attacked, destroyed or manipulated?
  • What impact would that have on the nation, and on you and me?

I’m currently in the research and discovery phase, talking with people to identify key digital assets and collections. Some critical assets are obvious: our registries of births, deaths and marriages; immigration data identifying who has entered the country, when and where from; information about who owns what; Hansard.

I focused first on ‘digitally born’ material, but I’m now considering historical print and other content that’s been digitised: cabinet records; court cases; the national and state archives and libraries; the archives of the ABC, the Fairfaxes, the Packers and the Murdochs; and the enlistment and service records of every Australian who served in World War I. There are many more.

In 2018, a record that isn’t digitised and online might as well not exist, and that brings authenticity and reliability into the frame. How do we know that a digital record is a true copy of the original? If a digital record were destroyed, it could be recreated if the original is intact, but what if that happens the other way round? What if the sole image of a destroyed record has been manipulated digitally and then presented as true? We live in the era of Photoshop and fake news, so why not fake history?
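There is a standard archival building block for part of this authenticity question: record a cryptographic digest (a ‘fixity’ value) when a record is digitised, and re-verify it on every access or migration. Below is a minimal sketch, assuming SHA-256 and a local file; real archival systems layer signatures, trusted timestamps and audit trails on top, since a hash only proves a file hasn’t changed since the hash itself was trustworthily recorded.

```python
# A minimal sketch of archival 'fixity' checking: hash a digitised record at
# ingest, re-verify it later. A mismatch means the copy has been altered.
import hashlib
from pathlib import Path

def fixity(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks so large scans fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: the filename is invented for illustration.
# expected = fixity(Path("ww1_service_record_0042.tiff"))   # stored at ingest
# assert fixity(Path("ww1_service_record_0042.tiff")) == expected
```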

In the archival world, we continually decide on the importance of information. Based on those decisions, we develop a hierarchy of value, deciding what needs to be kept and for how long. The values change over time—what was important or sensitive years ago might not be now (think of expletives in broadsheet newspapers), and what was unthought of years ago might now be accepted (think of same-sex marriage).

The digital assets that I identify won’t be a definitive list but will be a solid representation of Australia’s critical information infrastructure as it stands today.

In the next part of the project, I’ll identify the ways critical digital records could be destroyed or manipulated. Australia experiences 47,000 cyber incidents a year. Who would benefit, and how, from targeting our digital heritage?

In a third and closely related element, I’ll explore the consequences. What would happen if any, some or all of those digital assets were destroyed? What would happen to our sovereignty, society, law, rights, entitlements and personal identity—who we are and what we own?

If we lost, say, our immigration and births, deaths and marriages data, how could you prove your citizenship? And what if that information were compromised and unreliable? What would then become the authoritative source of information about Australians and their citizenship? We could either throw our hands in the air and close our borders to all, or allow everyone in unless there’s some other proof that they’re not eligible to come.

Everyone I’ve spoken to so far, from heads of agencies to ASPI colleagues corralled in corridor conversations, has shown a genuine interest in and enthusiasm for my project. I have a wide remit, and I want to start a broader conversation about this issue and create awareness and understanding in different government and community sectors. Ultimately, we must get the protection of our critical data—just like our other critical infrastructure—onto the broader national agenda.

If you have further ideas or thoughts about my project, please contact me at annelyons@aspi.org.au.

Rethinking the security of our critical infrastructure

Many people believe that the internet of things (IoT) is aimed simply at supplying consumers with connected household devices. However, data from Intel shows that over 75% of devices are used in manufacturing, retail and healthcare. In short, the ‘vast majority of IoT devices today are used by businesses, not consumers’.

The introduction of industrial internet of things technology offers businesses many benefits, like production-line tracking and remote worksite management. But it also increases the attack surface for malicious actors. I wrote last year in The Strategist about the scary nature of the IoT and the difficulty in developing IoT security standards. Those issues pale in comparison to the havoc that could be caused by industry-level security breaches.

Major attacks on critical infrastructure have already occurred in Ukraine and Germany. In 2010, information about the now infamous Stuxnet virus came to light, detailing how it had been designed to ruin hundreds of centrifuges used in Iran’s uranium enrichment program. It was the first time a digital weapon was intentionally used by a nation-state to physically damage an adversary’s industrial control system.

The US Department of Homeland Security has identified 16 sectors that it considers to be vital components of critical infrastructure, including such things as ‘commercial facilities’—shopping and convention centres, office and apartment buildings, and other sites where large numbers of people gather—emergency and financial services, and information technology. In May 2017, President Donald Trump issued an executive order to further strengthen the cyber security of the nation’s critical infrastructure.

In Australia, our view of critical infrastructure is generally confined to physical systems that enable telecommunication, water and energy services to operate unimpeded. We need to rethink our approach. Our outdated, horizontal understanding of critical infrastructure downplays the co-dependent relationships between sectors. American cybersecurity expert Melissa Hathaway proposes switching the focus to critical services. Using that approach, energy and the internet (or telecommunications as a whole) would sit atop a hierarchy of other services that rely on the first two to operate.
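Hathaway’s framing can be pictured as a dependency graph with energy and telecommunications at the root. The sketch below is a toy model: the service names and edges are my illustrative assumptions, not her taxonomy.

```python
# A toy rendering of the critical-services framing: services form a
# dependency graph with energy and telecommunications at the top.
DEPENDS_ON = {
    "energy": set(),
    "telecommunications": {"energy"},
    "banking": {"energy", "telecommunications"},
    "water": {"energy", "telecommunications"},
    "healthcare": {"energy", "water", "telecommunications"},
}

def affected_by(failed: str) -> set[str]:
    """Every service that directly or transitively depends on a failed one."""
    hit: set[str] = set()
    changed = True
    while changed:
        changed = False
        for svc, deps in DEPENDS_ON.items():
            if svc not in hit and (failed in deps or deps & hit):
                hit.add(svc)
                changed = True
    return hit

print(affected_by("energy"))  # cascading failure reaches everything downstream
```

Walking the graph from a failed root service shows why a horizontal, sector-by-sector view understates cascading impact.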

In both the US and Australia, a majority of critical infrastructure is privately owned, making common standards difficult to enforce. In addition, many industrial control systems were constructed in the mid- to late 20th century, when the internet was fresh and cybersecurity wasn’t a major concern. Adapting or replacing legacy systems and protocols presents a serious challenge, which has often been used as an excuse to continue to use outdated and unsafe technology.

A campaign against the use of smart meters was launched in Australia in 2013 after a study from the University of Canberra revealed privacy and safety vulnerabilities in similar devices used overseas. Some smart meters collect personal information that could reveal when users are away from home, and even disclose how often appliances are used. Such devices could also prove dangerous for utility providers. Several years ago, hackers cost the Puerto Rican power company as much as $400 million by compromising smart meters.

So what damage could a cyberattack on Australia’s critical infrastructure inflict? Well, we already know. South Australia’s 2016 statewide blackout had effects similar to a cyberattack. A once-in-50-year storm disrupted crucial services such as energy, telecommunications, finance, transport and the internet. Nearly two million people lost power. Trains and trams stopped working, as did many traffic lights, creating gridlocks on flooded roads. An unknown number of embryos died at a fertility clinic in Flinders Hospital when a backup generator failed. The average financial loss to businesses was $5,000, with total losses of $367 million. The incident highlighted the danger of cascading failures in interconnected critical infrastructure.

Disrupting utilities that power an entire city could cause more damage than traditional terror tactics such as bombings, and can be carried out remotely with greater anonymity. Again, severe storms provide an example: a loss of power can cause more deaths than the physical destruction itself. When Hurricane Irma damaged a transformer, for example, and the air conditioning failed, 12 residents at a Florida nursing home died of suspected heat-related causes.

The risks associated with industrial control systems don’t only affect human safety; they threaten the environment as well. In Australia’s first case of industrial hacking in 2000, Vitek Boden compromised the Maroochy Shire Council water system, sending a million litres of sewage into parks and waterways.

Our heavy reliance on connected devices means that exploitation of internet-dependent platforms can cause not only physical disruption, but also financial chaos. Last week the World Economic Forum revealed that the financial damage caused by an attack against a cloud-computing firm could equal or surpass that caused by Hurricane Katrina. That finding further supports the notion of switching the focus from physical infrastructure to critical services. The Australian government’s creation of the Critical Infrastructure Centre, which includes information technologies and communication networks in its definition of critical infrastructure, is a step in the right direction. And in March, ASPI will publish a report detailing IoT vulnerabilities and critical service protection, along with recommendations to address them.

But it’s clear that to safeguard Australia’s critical services from cyberattack, we need to improve communication and coordination between service providers, and to clarify the roles and responsibilities of cyber agencies. We must also prioritise the introduction and adoption of safety guidelines for IoT devices and strengthen international collaboration in this area.

The threats to energy grids, commercial facilities and online platforms vary significantly, yet all share a similar, frightening susceptibility to cyberattack. It’s a worry that’s not going to go away.

Obstacles for the cyber kangaroo

In mid-October, Dan Tehan, the minister assisting the prime minister on cyber security, announced that the Australian government is considering introducing new legislation on the internet of things (IoT; for an introduction to this topic, see my previous post). Under the proposed legislation, IoT device makers would have to include a security rating on their products. The concept is similar to an energy efficiency rating, which became mandatory for certain appliances in Australia in 2012. Introducing a ‘cyber kangaroo’ rating is an appealingly practical measure that, if it’s done well, could improve consumer awareness of cybersecurity issues and encourage industry to adhere to minimum security standards. But there are several reasons why it would be more difficult to implement than an energy rating and could potentially increase consumers’ susceptibility to attack.

First, the vulnerability of an IoT device is likely to vary over its lifetime as weaknesses are discovered and then patched. The energy efficiency of a refrigerator or washing machine, by contrast, is relatively fixed. When UK police chief Mike Barton suggested a security rating for IoT devices earlier this year, tech editor Samuel Gibbs correctly noted that ‘a device’s resilience to attack from cyber criminals can change over time’. Cybercrime is an ever-evolving discipline and new vulnerabilities are constantly being exposed. At best, a security rating would only reflect the security information about a device at the time of manufacture.

The firmware in modern cars is one example of a product whose security may change over time. In 2015, Charlie Miller and Chris Valasek hacked a 2014 Jeep Cherokee and were able to remotely control the steering and brakes and drive the car into a ditch. A notionally safe car had been rendered provably insecure. The vulnerability was then patched, making the car ‘safe’ again, until Miller and Valasek hacked the same car a year later (albeit not remotely). This cycle of hacks and patches could render an initial security rating meaningless and shows that the vulnerabilities of a particular device (or set of devices on wheels) can’t accurately be defined by a manufacturer’s sticker.

Another obstacle that the cyber kangaroo would need to hop over is the variation in IoT products. A Jeep Cherokee and a baby monitor present vastly different dangers, but compromise of either can have serious consequences. While there’s no doubt that the IoT needs security standards, some categories of devices that are safety-critical probably require commensurately robust security features. It will be difficult and expensive to come up with a cyber roo that appropriately rates all the different categories of IoT devices.

Finally, a cyber rating might lull consumers into a false sense of security by downplaying their own role in protecting themselves from attack. Knowing that they purchased an approved device could make consumers less likely to download updates or change the original password. Humans are often the weakest link in the cybersecurity chain. The idea of placing warning labels on IoT devices has been raised and amusingly compared to the warnings on Australian cigarette packages. While increasing the public’s cybersecurity awareness is important and this idea has merit, it would need to be done in a way that doesn’t create legal loopholes for industry to forgo built-in security.

With these concerns in mind, there seem to be four possible avenues for the cyber roo:

  • a pass/fail score that assesses compliance with baseline standards. For example, a product could receive a tick of approval if it has changeable passwords, uses encryption, and uses only approved communication protocols (or whatever the agreed-upon standards are)
  • a pass/fail score that assesses compliance with baseline standards and also tries to assess whether device security will be acceptable in the future. That could include assessing updateability, support lifetimes and a company’s commitment to providing regular and timely updates
  • a graded score that assesses manufacturers’ preparedness to meet basic security principles. For example, 0 = device cannot be patched, 1 = manual capability to patch exists but has never been used in practice, 2 = manufacturer patches occasionally, 3 = manufacturer investigates and patches vulnerabilities promptly
  • a security database that is combined with a warranty repair and recall system. This would involve assigning a virtual rating to a device that is adjustable through its lifetime to take account of the latest vulnerabilities. Customers could be notified of updates or recalls by a subscription service. While it would be expensive to implement, a changeable security rating would encourage manufacturers to provide lifelong security for their devices. A minimal sketch of this option follows the list.
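Here is a minimal sketch of what that fourth option might look like in code, under my own assumptions about scoring: the 0–3 scale from the third option is reused as a baseline, and unresolved vulnerabilities drag the live rating down until the vendor ships a fix.

```python
# A minimal sketch of an adjustable security rating: the device's score lives
# in a database and moves as vulnerabilities are reported and patched, rather
# than being fixed at manufacture. Names and scoring are assumptions.
from dataclasses import dataclass, field

@dataclass
class DeviceRating:
    model: str
    patch_score: int                     # 0 = unpatchable ... 3 = prompt vendor patching
    open_vulns: set[str] = field(default_factory=set)

    @property
    def current_rating(self) -> int:
        # Unresolved vulnerabilities drag a device below its baseline score.
        return max(0, self.patch_score - len(self.open_vulns))

    def report_vuln(self, cve: str) -> None:
        self.open_vulns.add(cve)         # would also notify subscribed owners

    def patch_vuln(self, cve: str) -> None:
        self.open_vulns.discard(cve)

meter = DeviceRating("smart-meter-x1", patch_score=3)
meter.report_vuln("CVE-2017-0001")       # hypothetical identifier
print(meter.current_rating)              # 2 until the vendor ships a fix
```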

The cyber roo concept is so fresh that details about how it might work are scarce, which makes it challenging to definitively support or oppose the move. An advisory committee composed of industry representatives has until the end of 2017 to present ideas to the government about how the security rating system could be adopted.

Ultimately, a well-reasoned IoT rating system has the potential to add value to the cybersecurity domain in Australia. Conversely, a simplistic rating system that fails to differentiate between manufacturers’ and consumers’ responsibilities will have a negligible impact and waste resources. Estimates indicate that 20.4 billion devices will be connected globally by 2020, so the longer it takes to implement a security rating system, the more insecure devices we’ll have in our lives. There are numerous ways that this concept could be executed, but not all paths lead to the same destination. A well-thought-out security rating system will require research and funds, and will involve much more than simply slapping a kangaroo sticker on our kitchen appliances.

You can’t write an algorithm for uncertainty: why advanced analytics may not be the solution to the military ‘big data’ challenge

The proliferation of sensors and data sources available to a modern military like the ADF often swamps the ability of the analyst to find what’s truly relevant in the sea of information. The exponential increase in sensors and data sources hasn’t been matched by an increase in human resources to process them. That imbalance makes aspirations of ‘information superiority’ untenable, leaving militaries vulnerable to promises that they’ll have machine solutions for and certainty about what’s an inherently human and uncertain problem: war.

We must be careful about proclaiming a revolution in military analytics and be cognisant of the failed promises of the last ‘revolution’ that occupied Western military attention. ‘Advanced analytics’ is a bet on computers being able to process the data deluge in a meaningful way to support military decision-making. My concern is that we don’t fully understand how difficult that is to achieve, or the significant changes that such a gamble implies for the workforce charged with implementation.

The fallacy of smart computing. Computers are only as smart as we program them to be. In the absence of Skynet-level AI, they can’t interrogate data holdings to generate links between diffuse pieces of information to predict or assess the military actions of a thinking human adversary. Existing software can’t make sense of complex human interactions in the same way, or with the same time-sensitivity, that a well-trained analyst can or should. Much is made of the ability to assist with pattern recognition, and while analytics can certainly assist with that task, it still relies on someone programming the correct patterns to recognise. But understanding what those patterns might look like implies a degree of certainty about the tactical environment that rarely exists on the battlefield.
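A toy example of the point about pattern recognition: the rule below can flag a pre-specified pattern reliably, but it embodies an analyst’s prior guess about what matters, and it will never flag anything nobody thought to encode. All names and thresholds are invented for illustration.

```python
# A minimal sketch of rule-based 'pattern recognition': the tool only finds
# patterns an analyst has already written down. Names are invented.
from dataclasses import dataclass

@dataclass
class Sighting:
    vehicle_type: str
    location: str

def convoy_rule(events: list[Sighting]) -> bool:
    """Flag three or more heavy-vehicle sightings at any one location."""
    heavy = [e for e in events if e.vehicle_type == "heavy"]
    locations = {e.location for e in heavy}
    return any(
        sum(1 for e in heavy if e.location == loc) >= 3 for loc in locations
    )

reports = [Sighting("heavy", "grid-1234")] * 3 + [Sighting("light", "grid-5678")]
print(convoy_rule(reports))  # True -- but only because someone wrote this rule
```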

Workforce design. The quandary we face is whether to design intelligence architecture around unproven advanced analytics platforms to get the most out of the technology, or to design an architecture that supports the analysts to understand the environment in which they work. Currently, with the personnel and technical overheads required to give advanced analytics systems a fighting chance—particularly in the fields of data entry and algorithm development—those two concerns appear mutually exclusive.

Uniqueness of military data. Advanced analytics tools are seductive when designers conduct demonstrations using carefully calibrated data to show their theoretical capability. But military data is rarely clean and is inherently difficult to control. Analysts deal with everything from UAS feeds to Facebook posts to scraps of paper, and everything in between. Those are unstructured data sources that are ill-suited to the needs of a platform designed to ingest and analyse structured data. Data standards are incredibly hard to control, and ‘cleaning’ data to make it usable is time-consuming and takes an analyst away from trying to fuse their assessments across data sources. When the ‘data in’ is poor, the ‘data out’ will be wrong. Many of the analytics platforms marketed to the military were developed for finance and industry, where there are limited data sources and the data can be structured to suit the purpose of analysis. That isn’t the case in the land warfare domain, and it’s largely impossible at this point to write an algorithm that can bring order to the chaos that’s inherent to war.

Stovepiped development. Powerful software is available to exploit single-source sensors, but those tools are rarely linked into an all-source fusion tool. Many of those systems are also proprietary software, meaning they can’t be exported into more powerful fusion systems. A sensible approach might be to design the all-source fusion system first and have individual sensor requirements nested underneath. However, that implies a level of capability development and acquisition alignment far in advance of existing stovepiped practice. A continuing challenge will be to find tools that work across the myriad defence systems, classified and unclassified, to provide a unified data environment as part of an enterprise approach to intelligence.

The cognitive shift. Military analysts have traditionally relied on qualitative rather than quantitative skills. Their successes have mainly been based on forming judgements from scraps of disparate information, supported by the intuition that comes from hard-won experience. The skills and aptitude needed to operate advanced analytics are largely the opposite. They rely on programming and coding skills—a quantitative aptitude to order and synchronise data. Those skills are more advanced than simply being technology ‘savvy’. If those tools are to be the centrepiece of any future intelligence, surveillance and reconnaissance enterprise, they’ll require a significant cognitive shift from the intelligence workforce. The questions must be asked: What’s lost in the process? And for what measurable gain?

Tail to wag the dog. Advanced analytics platforms require enormous back-end support to make them work, and maintaining an army of dedicated contractors is beyond the scope of most militaries. Data scientists are among the most in-demand professionals in today’s job market, and there’s no guarantee the military can access them in sufficient numbers to ensure the functionality of a chosen system.

Militaries must decide what they need advanced analytics to achieve. Only when that understanding is reached can they partner with industry to design the tools to achieve it. This lack of translation between user need and provider solution is the biggest stumbling block to any meaningful progress in the short term. Ultimately, however, we need to understand whether it’s even feasible to expect computers to make sense of war’s inherent unpredictability. After significant work and investment, computers may be able to assist in ordering and sequencing data to make analysis more efficient, but I’ll wager that they’ll be unable to provide any greater certainty than a team of well-trained and experienced analysts who understand the true difficulty of creating order from chaos.