Tag Archive for: Cyber

Cyber maturity in the Asia-Pacific Region 2014

To make considered, evidence-based cyber policy judgements in the Asia-Pacific, there’s a need for better tools to assess the existing ‘cyber maturity’ of nations in the region.

Over the past twelve months the Australian Strategic Policy Institute’s International Cyber Policy Centre has developed a Maturity Metric which provides an assessment of the regional cyber landscape. This measurement encompasses an evaluation of whole-of-government policy and legislative structures, military organisation, business and digital economic strength and levels of cyber social awareness.

This information is distilled into an accessible format, using metrics to provide a snapshot by which government, business, and the public alike can garner an understanding of the cyber profile of regional actors.

Special Report – Compelled to control: conflicting visions of the future of cyberspace

This report looks at the desire among states for greater control over the digital domain. It considers how that controlling impulse is converging among the major cyberpowers and examines some of the main dynamics of the Russian and Chinese positions, both relative to each other and to the Western consensus.

The paper analyses the potential implications for the global internet and the impact that developing countries may have on the dialogue.

Cyber wrap

Don’t say stupid things online

It’s been a big week for advocates of online OPSEC. On Monday, a Google employee suffered a high-profile firing after he circulated a ‘manifesto’ railing against Google’s institutional ‘political bias’ against conservatives and the need to have an ‘honest discussion’. Google’s leaders—current and former—have universally taken issue with how consistently incorrect the manifesto is in its core argument (about how women aren’t biologically suited for tech jobs) and how damaging it has been to the company’s reputation and to the team. The fired employee is reportedly seeking any and all ‘legal remedies’; power to you, guy.

The Google anti-diversity memo is a great example of what the Australian Public Service Commission (APSC) was trying to protect against when it provided more detailed guidelines about what the APS Code of Conduct requires when it comes to making public comments, including on social media. Ironically, the APSC’s own communication about not saying stupid things online has become the latest example of poor online communication, and what was intended as guidance has been interpreted as a heavy-handed (and unconstitutional) gag order. Whether the confusion’s due to miscommunication or misrepresentation by the media isn’t clear, but it’s a reminder that confusion quickly escalates to fever pitch well before even the most eager 9-to-5 public servant has had their first coffee. And if it’s that hard to communicate guidelines on social media use, it might be impossible to raise cyber hygiene awareness (PDF) and practices.

Stop worrying and love AI

Two Tencent chatbots have been taken offline for revision after they provided politically inflammatory responses to queries about the Communist Party, insulting the party as ‘corrupt and useless’. The shutdown comes shortly after an (overblown) wave of concern about Facebook chatbots ‘inventing their own language’. The two stories are being picked up as the ‘patient zero’ case studies for FUD (fear, uncertainty and doubt) about impending AI doom.

New South Wales is pushing ahead with autonomous vehicles anyway, greenlighting a two-year trial at Sydney Olympic Park. The trial will be going at a snail’s pace, though—the vehicles won’t be allowed to exceed 10 kilometres an hour along a closed-off road. Fingers crossed it all doesn’t go the way Tesla went at this year’s DEF CON.

The US Army has taken a far more cautious (but seriously belated) approach to semi-autonomous vehicles, issuing a memo mandating that all service members cease using DJI drones, software applications and other DJI equipment.

Sharing is caring

The Australian Signals Directorate will be sharing threat intelligence with telcos and internet service providers to help them provide, in turn, cost-effective cyber-security services for small to medium enterprises. This directly addresses the hacking vulnerability of small and medium enterprises, which both the government and the opposition have identified as sorely in need of protection but often lacking the resources or expertise to protect themselves. Weirdly, however, this initiative ignores anti-virus and security software vendors—the companies that are perhaps best placed to immediately use this data to protect customers.

In related news, Telstra has launched the Australian Digital Inclusion Index 2017, which surveys digital access disparities between socioeconomic groups and finds that Australia’s getting better at digital inclusion, a trend that could translate into better cyber-security outcomes for the country. (For the final word on that, keep your eyes peeled for the latest edition of ASPI’s cyber maturity report later this year.)

The federal government has announced that it’ll be building a single ‘super logon’ to consolidate the dog’s breakfast of government accounts, which currently saddles users with managing 10 to 30 accounts. It’s not clear from that exclusive interview whether the initiative is the same as the ‘GovPass’ and ‘Tell Us Once’ initiatives announced in the 2017 budget. It’d be ironic if there were two separate programs under development to consolidate logins and accounts.

Regardless, work on GovPass continues unabated, and Airtasker, Travelex, Credit Union Australia and the Queensland Police Service have signed up for AusPost’s Digital ID service, which is currently serving as a pilot program for later reconciliation with the wider GovPass program. Gavin Slater, the CEO of the Digital Transformation Agency, which is managing the GovPass program, has announced that he’s been working to repair relationships with government agencies, after the then Digital Transformation Office became too ‘disruptive’ for the APS’s tastes.

The Australian Digital Health Agency published Australia’s National Digital Health Strategy (PDF) and outlined an action plan to make sure all Australians have a My Health Record by 2018. The aim is to improve the protection of healthcare data and interoperability between healthcare organisations. However, privacy activists are concerned that the consolidated health data will present an increased privacy risk, which is why it’s a good thing that the agency will be establishing a Digital Health Cyber Security Centre to make sure Australia’s health data security is at the cutting edge of international best practice.

Open data dashboards tied up with strings

Open data dashboards have been popping up like daisies this week. The Alliance for Securing Democracy has launched a new online dashboard, Hamilton 68, tracking bot networks and troll accounts linked (after three years of observation) to Russian influence operations on Twitter. The top hashtag used by these accounts was #MAGA, or Make America Great Again, the campaign slogan of US President Donald Trump. As with most social media analysis projects, the transparency of the methodology has been criticised.

Black hats, white hats, and cyber diplomats

US ‘cyber-diplomat’ Christopher Painter has signed off, writing a parting note on Medium about the continuing importance of diplomacy in cyberspace. Even after working for 26 years in this (highly depressing) space, he’s reportedly still passionate, calling cyber ‘the new black’.

Famed ‘hero’ and supposedly white-hat hacker @MalwareTechBlog, aka Marcus Hutchins, was arrested at Las Vegas Airport shortly after Black Hat and DEF CON 2017. The FBI has accused Hutchins of creating, distributing and updating ‘Kronos’ (the banking trojan that was designed to infect computers and grab online banking credentials for profit) in 2014 and 2015. Hutchins has ponied up US$30,000 in bail, and is set to face a Nevada court on 14 August.

Five fifth-generation warfare dilemmas

The future of the ADF is ‘fifth generation’, or at least the Chiefs of Army, Navy and Air Force think so. It might’ve been just a passing fad, given that the term originated as a company marketing slogan selling a long-delayed fast jet. But in recent years the expression has morphed into a useful buzzword encapsulating several deeper concepts. At its core, ‘fifth generation’ is all about ideas, about how we conceive of waging tomorrow’s wars—and preparing for them. It encompasses four major approaches:

  • Networks. Modern war uses extensive digital networks. Conceptually, four interconnected and interdependent virtual grids—information, sensing, effects and command—overlie the operational theatre. The various force elements are interacting nodes on the grids that can each receive, act on and pass forward data.
  • Combat cloud. Working together, the grids can form a virtual combat cloud—akin to commercial cloud computing—that allows users to pull and add data as necessary. The result is longer-range tactical engagements. It’s no longer ‘Fire when you see the whites of their eyes’, but rather ‘Engage when a symbol labelled “adversary” appears on a shared display’.
  • Multi-domain battle. There are five operational domains: land, sea, air, space and cyber. The key animating idea is cross-domain synergy, where force is applied across two or more domains in a complementary manner (PDF) to achieve an operational advantage.
  • Fusion warfare. The fusion warfare concept addresses command and control concerns arising from additional information flows, software incompatibilities and intrinsic vulnerabilities to attack and deception.

The order of these approaches mostly reflects the sequence in which they’ve been incorporated into the concept of fifth-generation warfare. The oldest is network-centric warfare, dating from the mid-1990s; the others have become increasingly prominent over the last several years. The progression highlights that commercial information technology has often led military developments in the fifth generation. Cloud computing, for example, was initially implemented in the mid-2000s but it was not until the mid-2010s that the concept was embraced by military thinkers.

Each of these four conceptualisations is important, but in fifth-generation warfare they don’t exist individually; they function together as an integrated, interdependent ‘system of systems’ whose whole is greater than the sum of its parts. Fifth-generation warfare is accordingly a dynamic way of war, constantly evolving as the context changes and new demands arise.

Moving to fifth-generation warfare has several implications.

First, there are obviously two in-built technical vulnerabilities. Digital systems are inherently susceptible to cyber intrusions that may steal, delete or change data, or insert false data that can quickly spread across the network. While cybersecurity techniques are steadily improving, so are cyber intrusion methods, with neither remaining in the ascendancy for long. But it’s more than just cyber: electronic and information warfare techniques are designed to deliberately input false data into hostile networks that spreads to all users, confusing and distorting the shared picture.

Moreover, fifth-generation warfare relies on datalinks. Emitters are inherently vulnerable to detection; network participants can be located and tracked—and thereby targeted by precision-guided weapons. Some datalinks are harder to detect than others; however, as with cyber, technology continually improves. Cybersecurity and datalink emission tracking will require constant effort for the operational life of fifth-generation warfare. They are serious Achilles’ heels.

Second, modern wars inevitably involve coalition operations, so on any network there may be actors from many different countries. All involved will be doing their best, but within each country’s forces, and within the coalition overall, there’ll be elements using different intelligence sources, different threat libraries and different electronic signature data to make decisions about the identity and location of hostile and friendly forces, and neutral entities. The operational perils implicit in the ‘garbage in, garbage out’ aphorism suggest that some force elements will be more trusted than others in fifth-generation warfare. ‘Balkanised’ networks (in which some nodes are disregarded or receive degraded data) are likely, leaving some nodes to potentially fight their own separate wars instead of being part of a coherent, carefully coordinated application of coalition military force.

Reducing a force to a collection of small, independent networks undercuts the Metcalfe’s law logic of fifth-generation warfare, which asserts that the ‘power’ of a network is proportional to the square of the number of nodes in the network. The probability of blue-on-blue engagements also increases as the location of friendly forces becomes less certain to all coalition participants.
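To make the Metcalfe’s law arithmetic concrete, here’s a back-of-envelope illustration (not drawn from the original argument): compare a single coalition network of n nodes with the same force balkanised into k isolated sub-networks.

  \[
  V_{\text{single}} \propto n^{2},
  \qquad
  V_{\text{split}} \propto k\left(\frac{n}{k}\right)^{2} = \frac{n^{2}}{k}
  \]

A 100-node network split into four 25-node networks, for example, retains only 4 × 25² = 2,500 of its notional 10,000 units of ‘power’: a fourfold loss before any of the operational problems above are counted.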

Third, individual national sovereignty is diminished, especially in the combat cloud concept, since information is pulled from the digital cloud with perhaps only limited knowledge of its source. Using such off-board information—rather than that derived from one’s own onboard sensors as happens today—to engage targets inherently reduces each nation’s responsibility and accountability. A senior ex-RAF officer complained that ‘this slaughters [the UK’s] legal stance on a clear, unambiguous and sovereign kill chain’.

Fourth, the fifth-generation warfare idea relates to what Edward Luttwak called ‘the technical dimension of strategy’. Technology influences how we fight wars, but there’s more to being successful than technology. Leading-edge technology was insufficient to win the Vietnam, Iraq and Afghanistan wars—and fifth-generation warfare so far doesn’t appear any different.

And lastly, the end of fifth-generation warfare may be in sight. In the 1990s, futurists Alvin and Heidi Toffler argued that ‘how we make war reflects how we make wealth’. They foresaw that the information technology age would necessarily compel changes in warfare. In many respects, fifth-generation warfare is the working out of that idea. Now some see another industrial revolution approaching that will change the way wealth is made. If the Tofflers are right, warfare may change again. Third offset, anyone?

On the inevitable failure of cyber security

While the Australian Government’s Cyber Security Strategy contains many good initiatives, the government’s narrative needs to evolve to account for inevitable failures. Current government rhetoric is decidedly inconsistent: cyber espionage is alive and well, yet at the same time the data of the Australian people is safe and secure.

The Prime Minister has spoken about the importance of meaningful conversations about cybersecurity, but that narrative clearly has some internal inconsistencies and isn’t a realistic or nuanced message. As the Australian Public Service, business and the broader community raise their levels of cyber sophistication, we need to continually reframe government communications to push real cyber resilience.

Services delivered over the internet are exposed to several interesting asymmetries that all but guarantee that there’ll be cybersecurity failures of consequence. Imagine a hypothetical government IT project (let’s call it ‘Project ORCA’) that aims to provide a perfectly secure government portal to deliver vital services to the Australian public.

Our first asymmetry is that the teams building online services have only finite time to deliver their products. This is a good thing, as we want IT projects to be delivered, and infinite timelines aren’t helpful (even though that can feel like standard practice in government at times).

By contrast, malicious actors (baddies and hackers) on the internet are not time bound; their time horizon is effectively infinite. ORCA, for example, while built over a relatively short time, will be exposed to attack for the rest of its working life, which may run for years or even decades. A successful attack on ORCA can be damaging to the government at any time throughout its life.

Second, teams building online services have limited skills and capabilities. The Project ORCA team is limited to the pool of skills available within the team. The very best we can hope for is that it implements the best possible solution at that point in time. But even this best-case scenario isn’t good enough.

Malicious actors can not only access the state of the art at the time when ORCA is built, but are also able to use new vulnerabilities that are discovered after the service has been delivered. In a very real sense, the Project ORCA team is trying to defeat hackers from the future!

Third, the ORCA team is focused on delivering its own unique contribution, building on the best frameworks and architectures available at the time.

Malicious actors, however, can attack not only what the ORCA team builds directly, but all the software and hardware that ORCA relies on and is connected to. The Project ORCA team can deliver its project perfectly, but the security of ORCA overall can still be undermined by factors outside the team’s control. In recent years, for example, there have been several very severe bugs that have affected internet services in totally unexpected ways, and Project ORCA can’t mitigate that class of threats.

Although this sounds pessimistic, it’s broadly understood in private industry: breaches are common and inevitable, and there’s a very real focus on resilience and recovery. The cyber-mettle of an organisation isn’t measured by whether the organisation suffers a compromise, but by how quickly the compromise is discovered, how well it’s contained, and how effectively it’s cleaned up.

Government’s current narrative is focused on implementing the ‘Essential Eight’. These are the eight highest priority actions from the Australian Signals Directorate’s Strategies to Mitigate Cyber Security Incidents that help prevent cybersecurity breaches. The Essential Eight grew out of what were initially branded the ‘Top Four’, and when implemented will prevent a large majority of cyber intrusions that the ASD currently sees.

Even when these strategies are implemented, however, they are still only mitigation strategies. That is, they make things less bad than they were before. They aren’t a guarantee that security is perfect; they are just the first steps to take when your security baseline is very bad.

Real security doesn’t consist of implementing the ASD’s Top Four mitigations, and then a year or two later expanding that to the Essential Eight. Real security is the ongoing work that arises from an acceptance that failure is inevitable: understanding your network; detecting and investigating anomalies; patching, monitoring and alerting; clean-up, backup and disaster recovery.

The Prime Minister has spoken about the importance of meaningful conversations about cybersecurity events. But by denying the scope of the problem our political leaders are preventing the meaningful conversations that they desire and lulling us into a false sense of security. The conversation needs to change to account for the inevitability of failure.

Cyber wrap

The encryption debate has continued to dominate cyber security news this week. German ministers discussed measures to monitor encrypted messaging through so-called source surveillance, which involves installing monitoring software on suspects’ devices. Conversely, the European Parliament is currently considering amending the EU Charter of Fundamental Rights to prohibit decryption or other monitoring of encrypted communications. On top of that, the founder of the encrypted messaging service Telegram, Pavel Durov, has claimed that the US government previously approached Telegram to install ‘backdoors’ in the service, offering hefty financial incentives to boot.

Facebook has finally weighed in on the elephant in the room in the encryption debate—terrorist communications. In an unusually high-profile blog post, the company stated that it has embraced technological solutions, including artificial intelligence, to remove terrorist communications and accounts from Facebook. The blog post will be the first in a series called ‘Hard Questions’, in which Facebook seeks to address complex social issues. It’ll be a series to watch closely as Facebook begins taking a more active role in public debate.

Back in Australia, the government has selected host universities for Australia’s first Academic Centres of Cyber Security Excellence. The University of Melbourne and Edith Cowan University will be the inaugural hosts for the centres, and will receive shares of funding allocated by the Cyber Security Strategy. That positive step in cyber education comes on top of Australia’s high performance in the International Telecommunication Union’s annual Global Cybersecurity Index. Australia placed 7th out of 134 member states in cybersecurity commitments and policy, with our technical certification and standards highlighted as a strong suit. Lastly, the Department of Defence is looking at introducing intelligence analytics tools and techniques to manage natural language data from text, speech and video.

A data firm affiliated with the Republican National Committee, Deep Root Analytics, accidentally left a database full of voter information openly accessible on the internet—potentially exposing private information on 198 million US voters. Election security has been a prominent theme elsewhere this week, with early findings from investigations in Illinois indicating that cyber attackers attempted to delete or alter voter data on software systems across 39 states in the 2016 presidential election—far more than previous reports indicated. North of the border, the Canadian Communications Security Establishment released a report stating that hackers attacked the 2015 general election using a combination of selective leaks and disinformation campaigns. The report found that the attacks were relatively unsophisticated and not conducted by nation-states, but there’s little to suggest the next Canuck election will prove as resilient.

The Trump administration has taken an aggressive approach to government deregulation, issuing a memo instructing government agencies to remove up to 50 outdated reporting requirements, seven of which had forced federal agencies to provide updates on their preparedness for the Y2K bug—17 years after the bug became a non-issue. There’s nothing quite like timeliness…

By letting the registration of a control domain expire, Samsung left phones with the stock Samsung S Suggest app potentially vulnerable to hijacking. The app was discontinued in 2014, but continued to receive instructions from a web domain whose registration lapsed this week. Fortunately, ethically minded cybersecurity researchers bought the domain before harm could be done, and found that it could have been used to push malicious code directly to phones with the app.

Finally, in the US, the Girl Scouts of the USA have announced a partnership with Palo Alto Networks to introduce cybersecurity education, including 18 new cybersecurity badges, starting in 2018. The new focus area was chosen as a result of a survey of young women, who said they wanted to learn technical skills and boost their participation in STEM. The badges provide programs for all skill levels, from the basics of privacy and online safety to learning how to become an ethical hacker.

Australian leadership on new technologies of warfare

Since 2015 Australia—partnering with Switzerland—has built support among 36 countries to address concerns about military and police forces’ interest in the use of highly toxic chemicals, such as anaesthetic and sedative agents, as weapons for law enforcement. This is a great achievement on an issue first brought forward by the International Committee of the Red Cross (ICRC) in 2003, and on which there has been scant multilateral progress. Particularly so given that in recent years the Organisation for the Prohibition of Chemical Weapons has been focused on efforts to dismantle Syria’s chemical weapons and put a halt to the repeated use of chemical weapons in Syria and Iraq.

This development signals two important characteristics of Australia’s approach: a willingness to tackle threats to international law and civilian protection, even where there are significant differences in viewpoints among countries; and an ability to remain attentive to emerging risks, even while embroiled in an ongoing crisis.

Such international leadership is urgently needed in other areas where science and technology collide with international law and humanitarian concerns.

The ICRC, for its part, has always pressed for a realistic assessment of new technologies of warfare to ensure they are not employed prematurely if respect for the law cannot be guaranteed. And so I’m pleased to be in Australia this week to take part in the Symposium on the Ethical, Legal and Social Implications of Emerging Military Technologies at Melbourne Law School.

At the UN, efforts to address the implications of increasing autonomy in weapon systems have moved forward slowly. General agreement among States that ‘views on appropriate human involvement with regard to lethal force and the issue of delegation of its use are of critical importance’ has been an important outcome of three week-long informal discussions at the Convention on Certain Conventional Weapons (CCW). However, this work now needs to step up a gear.

Here there are opportunities for constructive proposals—based on states’ obligation to uphold international humanitarian law (IHL) and minimise risks to civilians and to combatants no longer taking part in hostilities.

Australia’s efforts to promote better implementation of the legal obligation, and policy necessity, for countries to conduct national legal reviews of new weapons prior to their acquisition or use are very welcome. It’s something the ICRC has long advocated. However, the ICRC believes there is a critical need to achieve an understanding at the international level on how to ensure that humans remain in control of weapon systems and the use of force while making the necessary legal decisions on targeting in armed conflict.

What’s needed now is state-driven work by the newly established CCW Group of Governmental Experts to start answering the difficult questions. Recognising the critical importance of human ‘involvement’, ‘control’ and ‘judgement’ in the use of force in armed conflict, the ICRC has suggested that states now determine the type and degree of human control necessary to ensure compliance with IHL, and ethical acceptability. Switzerland’s IHL ‘compliance-based’ approach gained significant support at the CCW Review Conference in December 2016, in particular from Brazil, Chile, Finland, the Netherlands, the Republic of Korea, South Africa and Sweden. Here again, Australia might consider the benefits of joining Switzerland and other concerned States.

Similar arguments for foresight and unity could be made for international debates about other new technologies of warfare. Recently, discussions about robotic weapon systems that are not autonomous but remain remote controlled have focussed on transparency in armed drone operations. With the rapid proliferation of military drones to over 90 countries, and non-State armed groups starting to employ improvised versions, the implications for IHL compliance and humanitarian consequences could evolve considerably. Could a move towards reliance on robotic weapon systems on land lead to new risks for civilian populations?

Elsewhere, international discussions on cyber warfare—notably through another UN GGE, which Australia chaired in 2012–13—have been considering the applicability of international law in cyberspace. Australia has stressed the importance of ‘elaboration of how international law applies to states’ behaviour in cyberspace especially in non-conflict situations.’

Nevertheless, there is also a need to consider the potential humanitarian consequences of the use of cyber weapons in armed conflict and constraints that may be needed in future on cyber weapons development, acquisition and use. Some ideas are also emerging from industry, for example Microsoft’s recent proposal for a ‘Digital Geneva Convention’ for peacetime, which might influence the debate in situations of armed conflict.

The risks from weapons targeting space systems are also of increasing concern. Although the recurring UN General Assembly Resolution on the prevention of an arms race in outer space has almost universal support, there are different views among major powers on the means of prevention. Given these realities, Australia has called for greater focus on voluntary transparency and confidence building measures.

From the ICRC’s perspective, the ever-increasing military attention to the contested domains of cyber and outer space, and the reliance of civilian infrastructure and services on these interconnected networks, bring with them a particular need to consider the potential humanitarian consequences.

There’s much work to do.  Australia—with its government, think-tank, and academic expertise—is well placed to play a greater role.

Securing democracy in the digital age

Earlier this month, the campaign of French presidential candidate Emmanuel Macron fell victim to targeted cyber intrusions by the infamous Russian hacking collective that goes by a number of names, including Pawn Storm, Strontium, Fancy Bear and APT28. Spear-phishing email attacks against Macron’s En Marche! party were followed by the public release of 9 gigabytes of reportedly confidential communications less than 48 hours before ballot boxes opened. While Macron was still able to secure the presidency on 7 May, his campaign said that the efforts had ‘put the vital interests of democracy in jeopardy’.

The French experience is just the most recent development in what appears to be a tide change in international cyber relations. The 2016 US presidential race between Hillary Clinton and Donald Trump was a wake-up call that highlighted democracy’s vulnerability to manipulation in today’s digital world. The hacking of multiple state voter registration databases, the strategic dumping of stolen email communications and the prominent position of social media all played a role in undermining public confidence and shaping public opinion. A US intelligence community assessment controversially asserted that ‘Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election.’ Unsurprisingly, Putin has denied any involvement, but it seems the threat’s here to stay, with countries such as Germany and the UK now concerned for the digital security of their upcoming elections.

As this kind of cyber operation becomes an increasingly attractive tool of statecraft, it’s important to understand the distinct variables at play in modern election security. My new report, Securing democracy in the digital age, presents a conceptual framework through which to understand the challenge. By employing the US election experience as a case study, the report outlines the distinction between the cyber vulnerabilities of election infrastructure and the possibility that public opinion is vulnerable to manipulation.

The most direct way to influence a democratic process is to compromise the practical mechanisms that are used to assess the public will. Vulnerabilities inevitably exist in any digital system, and election infrastructure is no different. Malicious actors can target weak spots in voter registration databases and e-voting machines to influence both who can vote and how their vote is recorded. While this seems like the most obvious point to target, it’s challenging to rely on these points to sway anything other than an extremely close election, especially in countries like the US which have a particularly decentralised electoral system.

A more sophisticated technique is to influence how people decide their vote. Every vote cast is the product of the information ecosystem that individual has been exposed to in the preceding months. This environment can be manipulated in three ways: by strategic disclosure of compromising information, by disseminating ‘fake news’ and by leveraging the echo chambers of social media.

Acquiring and distributing true but previously unavailable facts about a candidate can change the way people make their election choice. Sometimes referred to as ‘doxxing’, this approach involves ‘maliciously disclosing information in a calculated fashion to inflict setbacks in political momentum and unity’. The WikiLeaks dump of 20,000 Democratic National Committee emails in June 2016, followed by 58,000 emails from Clinton’s campaign manager in October, and the targeting of Macron’s campaign emails are the most prominent examples of that tactic in recent times.

But malicious actors don’t even have to go to the effort of stealing authentic compromising information: they can just create fake news. False information can be disseminated online to influence citizens’ decision-making and the democratisation of media means that this type of mass misinformation operation is easier to carry out than ever before. The proliferation of fake news was a defining theme of the 2016 US presidential election. Worryingly, in the final months before the election, trending fake news headlines received higher Facebook engagement rates than the top headlines from traditional media outlets, such as The New York Times and The Washington Post.

The introduction of new information into the public debate, whether real or fake, can also be more impactful than ever before thanks to artificial consensus on social media. Newsfeed algorithms are designed to show people what they want to read, based on their demonstrated preferences. The result is the creation of online silos, or ‘echo chambers’, which reduce the likelihood that an individual will be exposed to views contrary to their own. The volume of those arguments can also be automatically boosted by networks of bot accounts or manually boosted by armies of trolls. Those techniques can give a voter the impression that a particular view receives a greater level of popular support than it really does.
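The mechanism behind those silos is simple enough to sketch. The toy Python example below (an illustration only, not any platform’s actual algorithm) ranks candidate posts purely by how closely their topic matches a user’s past engagement, so the feed quickly converges on content the user already agrees with, and any automated ‘bot’ engagement that skews the history skews the ranking too.

    # Toy model of preference-based feed ranking (illustrative only; not any
    # platform's real algorithm). Posts are scored by how much of the user's
    # past engagement was on the same topic, so dissenting topics sink.
    from collections import Counter

    def rank_feed(candidate_posts, engagement_history):
        """Rank posts by affinity with the user's past engagement (toy model)."""
        prefs = Counter(post["topic"] for post in engagement_history)
        total = sum(prefs.values()) or 1
        return sorted(
            candidate_posts,
            key=lambda post: prefs[post["topic"]] / total,  # share of past engagement
            reverse=True,
        )

    # A user who has engaged with one side nine times out of ten...
    history = [{"topic": "candidate_A"}] * 9 + [{"topic": "candidate_B"}]
    candidates = [
        {"id": 1, "topic": "candidate_A"},
        {"id": 2, "topic": "candidate_B"},
        {"id": 3, "topic": "candidate_A"},
    ]
    # ...sees posts about that side pushed to the top of the feed.
    for post in rank_feed(candidates, history):
        print(post["id"], post["topic"])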

The issue of contemporary election security isn’t going away. Democracies need to consider the vulnerability of their electoral process and craft policy solutions for their specific context, and Australia is no exception. ICPC’s new report outlines a variety of policy questions that governments need to address related to the cybersecurity of election infrastructure, the integrity of the public debate and the development of normative responses. It will pay for Australia to be on the front foot on this issue. Proactive steps should be taken to ensure that our democracy remains secure in the digital age.

Cyber wrap

The fallout from the WannaCry ransomware incident continued this week. ShadowBrokers, who released the EternalBlue exploit used by the WannaCry ransomware, have announced a new subscription program in which members will gain access to new vulnerabilities and tools, as well as information supposedly stolen from Iranian, Chinese and North Korean missile programs. While ShadowBrokers have been linked to Russian intelligence services, it’s noteworthy that Russia itself was significantly affected by the incident. As expected, additional uses of the EternalBlue exploit have been uncovered, including to install software that mines the cryptocurrency Monero.

Speculation over whether the Hermit Kingdom is behind WannaCry has also continued this week. Cybersecurity firm Symantec’s Security Response team has released further evidence that it claims more closely ties WannaCry to the North Korean-linked Lazarus Group of hackers. Symantec notes that the tools used in last week’s attack resemble those used in other cyber incidents attributed to North Korea—including the 2014 Sony hack and last year’s attack on Bangladesh’s central bank. The difference between previous incidents and WannaCry, however, is the malware’s autonomous propagation through networks using the EternalBlue exploit; previous Lazarus Group-linked malware required greater intervention by the hackers, limiting the extent of its spread.

WannaCry has also focused international attention on North Korea’s cyber capabilities. Jim Lewis from CSIS noted that the Sony hack marked a step change in the nature of North Korean cyber espionage and hacking activity. Before Sony, Lewis notes, North Korea focused on espionage and harassment of South Korean political targets; since then, it has increasingly used its skills for criminal activity to generate hard currency for the regime.

Various units of the Korean People’s Army have been identified as being involved in cyber operations, but Unit 180 in the Reconnaissance General Bureau has been most closely linked to WannaCry. Greg Austin from UNSW told a seminar in Canberra last week that over 6,000 North Koreans are involved in various aspects of cyber operations, including disrupting the South’s military critical infrastructure and command and control systems. And over at the UN, the North Korea Sanctions Committee has warned members to be alert to North Korean hacking after one of its panel of experts was hacked. The warning ominously noted that the hackers had gained ‘very detailed insight’ into the work of the committee.

Another infamous hacking group—variously known as APT3, Gothic Panda and UPS—has been linked to the Chinese Ministry of State Security (MSS) in a blog post published by Intrusion Truth, an anonymous cybersecurity blogging group. The post notes links between two directors of the Guangzhou Boyu Information Technology Group (Boyusec) and the domains used by APT3 for its activities. Boyusec is also linked with Chinese technology firm Huawei, and the US Defense Department reportedly noted in an internal investigation in 2016 that Boyusec and Huawei had been cooperating to develop products with ‘backdoors’ installed to enable future espionage activity. Intrusion Truth believes that Boyusec is contracted to the MSS through various intermediary state organs, in keeping with that agency’s conventional practice of using commercial organisations as cover for intelligence collection. APT3 has previously been linked with cyber operations targeting both the US and Hong Kong.

Closer to home, the Australian government has agreed to work with the Information Commissioner to develop a privacy code for Commonwealth agencies. Back in March, Commissioner Tim Pilgrim requested that the new code be developed, spurred by the fact that significant bungles including #censusfail and data breaches from the Health Department and Public Service Commission had the potential to significantly undermine public trust in the government’s ability to manage data appropriately. The code will be implemented in 2018.

Also in Canberra, in an attempt to improve their own skills and attract more tech-savvy people, teams of government cybersecurity personnel will take part in a cyber ‘war game’ this September, hosted by the Department of Human Services. The teams will work on a cyber test range to defend Lego models of trains, bridges and towns.

Cyber information sharing: achieving the Holy Grail of cooperation

When confronting the problems of cybersecurity, it’s often noted that, regardless of time and space, we’re all exposed in some way to the same active and innovative threat actors. Shared threats promote cooperation, and sharing information on cyber threats has long been acknowledged as an efficient way to reduce the effectiveness of cyber threat actors. For this reason, a key initiative of Australia’s Cyber Security Strategy is the establishment of a multilayered, public–private cyber information sharing network, focused on the Australian Cyber Security Centre (ACSC) and new cross-sectoral joint cyber security centres (JCSCs) in state capitals. Cyber information sharing is not new to Australia, but this renewed focus is an opportunity to create an effective national network to share information that assists all participants to improve their security, collectively enhancing Australia’s overall cybersecurity posture and capability.

However, establishing information sharing networks isn’t simple. They can be undermined by a lack of trust, inadequate funding, and poor engagement from contributors who don’t share a common understanding of the vision and objectives of the organisation. In addition, public–private information sharing is often held back by concerns that overclassification of information and slow sharing by government agencies reduces the value and effectiveness of information sharing. This was recently highlighted in the ACSC’s 2016 Cyber Security Survey, which showed that respondents viewed information, intelligence sharing and collaboration as the least important factor in mitigating cyber risks. The survey’s poor results for perceptions of the value of information sharing indicate that the foundations of trusted information sharing networks in Australia remain weak.

As Australia embarks on a process to develop a deeper and wider national cyber information sharing network, careful consideration of the lessons learned by the US and other international partners is necessary to ensure early success and long-term sustainability. This is the focus of my paper, Cyber information sharing: lessons for Australia, which was released today. The paper builds on Building a National Cyber Information-Sharing Ecosystem, a forthcoming report by ASPI’s US partner, the MITRE Corporation.

The US has been pursuing cyber information sharing since the late 1990s, when the federal government directed the creation of public–private partnerships for critical infrastructure protection. The now decades-long development of a variety of information sharing models in the US, and the greater complexity of its industrial and commercial sectors, provide a healthy catalogue of case studies and lessons for the Australian cybersecurity community as it pursues deeper information sharing mechanisms.

MITRE has examined three US cross-sectoral, regionally based information sharing and analysis organisations: the Advanced Cyber Security Center from Massachusetts, the Northeast Ohio CyberConsortium from Ohio, and the National Cyber Exchange from Colorado. From its assessment, MITRE has devised nine questions, dubbed the ‘Gnarly 9’, which must be addressed to build a successful cross-sectoral cyber information sharing organisation. The nine questions can be further distilled into three pillars of a successful information sharing organisation: adequate funding, trust between participants, and a collaboratively developed strategic plan.

Funding and a strategic plan are a function of the time, money and people invested in the initial stages of establishment, but trust is an intangible quality that has to grow between participants. Growing trust takes time and experience of cooperation between individuals and organisations, although there are structural components that can support the growth of trusted relationships and enable effective information sharing. There are several possible models for information sharing ecosystems, but the current approach of the Australian community, building on the ACSC and JCSCs, is leading towards a ‘hub-and-spokes’ model. In this model, the nature and role of the hub is particularly important in enabling the growth of effective sharing and trusted relationships.

Building on the lessons learned from US information sharing organisations as discussed by MITRE, Cyber information sharing: lessons for Australia presents a possible model that meets the Cyber Security Strategy’s call for a multilayered public–private information sharing network. Information from existing sharing organisations and linkages, such as the ACSC and the emerging JCSCs, could be provided to an independent clearing house acting as the hub of the national network and integrating multiple information feeds. That would make it easier to ensure that information is appropriately managed, and would provide a level of anonymity for information providers, supporting the development of the trust in the network that’s necessary for participant buy-in and sustained information sharing. Further investment in automated, secure, standards-based information sharing will also be necessary to provide actionable information in real time.
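To give a sense of what ‘automated, secure, standards-based’ sharing can look like in practice, the sketch below assembles a minimal machine-readable threat indicator in the style of the STIX 2 format widely used for such exchanges. STIX isn’t named in the strategy or the paper, and the indicator details and clearing-house workflow described in the comments are hypothetical; this is an illustration of the idea, not a specification.

    # Minimal sketch of standards-based threat-intelligence sharing, loosely
    # following the STIX 2 JSON style for machine-readable indicators.
    # The indicator details and the clearing-house workflow are hypothetical.
    import json
    import uuid
    from datetime import datetime, timezone

    def make_indicator(name, pattern):
        """Build a STIX-2-style indicator object as plain JSON (illustrative)."""
        now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
        return {
            "type": "indicator",
            "spec_version": "2.1",
            "id": f"indicator--{uuid.uuid4()}",
            "created": now,
            "modified": now,
            "name": name,
            "pattern": pattern,
            "pattern_type": "stix",
            "valid_from": now,
        }

    indicator = make_indicator(
        name="Suspected phishing domain",
        pattern="[domain-name:value = 'example-phish.invalid']",
    )
    # In a hub-and-spokes model, a participant would submit this to the clearing
    # house over an authenticated channel; the hub validates it, strips details
    # identifying the source and redistributes it to other members in real time.
    print(json.dumps(indicator, indent=2))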

A national cyber information sharing network will be an important mechanism to enable the achievement of stronger national cyber defences and resilient networks. The development of this network will be an evolutionary process, but Australia should take heed of the lessons learned by partners in the US and elsewhere.

Cyber wrap

AFP Commissioner Andrew Colvin revealed last week that an AFP officer had, in the course of an investigation, accessed the call record metadata of a journalist without a warrant, putting the error down to poor processes within the AFP. While much of the data retained under the scheme can be accessed by Australian law enforcement agencies without a warrant, data on journalists is specifically exempt. The incident comes only two weeks after the controversial legislation came into force, and has added more fuel to the fire for privacy advocates. Guardian Australia journalist Paul Farrell has called the incident a ‘systemic, structural failure of the AFP’s internal policies’, and Electronic Frontiers Australia has renewed its push for a universal warrant requirement for metadata access. While the affected journalist hasn’t been told that their data was inappropriately accessed and no action has been taken against the officer, the Commonwealth Ombudsman will now conduct its own inquiry into the incident. On the same day news broke about the AFP’s bungle, UNESCO released a report, Protecting Journalism Sources in the Digital Age, which notes the ‘chilling’ effect of data retention schemes, undermining public access to information, the democratic role of the media and journalistic quality.

Last Thursday marked the 10th anniversary of the 2007 cyber attack on Estonia. The three-week spat, sparked when Estonian authorities moved a Russian war memorial, resulted in a massive and coordinated campaign of cyber operations that severely disrupted the highly connected Baltic state. In the decade since, Estonia has championed norms of responsible behaviour in cyberspace, including through the Tallinn Manual, and it hosts NATO’s Cooperative Cyber Defence Centre of Excellence (CCDCoE). To mark the anniversary, the CCDCoE hosted NATO’s annual cyber exercise, Exercise Locked Shields, in which 25 countries fought to defend themselves from a major cyber attack on military assets. Australia’s Foreign Minister, Julie Bishop, observed part of the exercise and has indicated that Australia may join future iterations.

The 2007 attacks on Estonia have been widely attributed to Russia, and Russians remained a key issue for cyber security across Europe this week. In France, evidence has emerged that Russian hacking group APT28 (aka Fancy Bear), also responsible for hacking the DNC last year, has been targeting centrist candidate and current frontrunner Emmanuel Macron’s campaign team since January, trying to steal email credentials. It’s also been revealed that GCHQ has been placed at a higher state of readiness to respond to cyber threats to the UK’s forthcoming general election, forming surge teams to respond to potential cyber incidents affecting the election.

Denmark’s Centre for Cyber Security has released a report into hacking incidents that targeted the Danish defence and foreign ministries in 2015 and 2016. While the report doesn’t attribute the activities to a country, it does brand APT28 as the likely culprit. The Danish defence minister later told a national newspaper that Russia was behind the incident. And the International Olympic Committee has announced a new Digital and Technology Commission charged with strengthening the organisation’s cyber security. The new commission comes several months after the World Anti-Doping Agency claimed it was hacked by APT28.

While the weekend marked President Trump’s first 100 days in office, concerns remain about the role of fake news in the 2016 US election. Facebook has released a report into information operations conducted through the social network to spread misinformation, noting that it will increase its monitoring of suspicious activity in order to reduce the spread of fake news.

Also in the US, CSIS hosted a panel last week on the effects of significant cyber security breaches in the US. The panelists, including former Cyber Czar Michael Daniel and CrowdStrike’s Dmitri Alperovitch, discussed the apparent futility of existing defences, the link between foreign policy and cyber security, and the need to be more forthcoming when attributing cyber incidents.

And in brief news this week, South Australia has followed NSW’s lead by appointing its first state government Chief Information Security Officer. David Goodman, currently the SA government’s cyber risk director, has been appointed to develop and deliver a new government cyber security strategy. British police have charged a man who allegedly provided information to Islamic State on the use of encryption and Tor to hide its activities, including by producing instructional videos. And Rwanda has passed a law to establish a National Cyber Security Authority—hats off to them!

Bitcoin’s road to legitimacy

Bitcoin is often mentioned in the media in the same breath as the now defunct ‘silk road’ dark-web trading site, and usually accompanied by a nonsensical stock photo of a laptop user wearing a balaclava. The perception generated by such reporting is that bitcoin and other cryptocurrencies are purely for shady dealings. While this may once have had some truth to it, cryptocurrencies, and particularly bitcoin, are now on a trajectory towards legitimisation and widespread acceptance. This is because of the real benefits that a decentralised cryptocurrency has for legitimate users, particularly in developing countries and countries with hyperinflation.

Bitcoin is the oldest and best known cryptocurrency; cryptocurrencies being a subcategory of virtual (or digital) currencies. Unlike virtual currencies designed for virtual economies, such as those on which online games rely, cryptocurrencies are designed to be traded for real world goods and services. Bitcoin was devised in late 2008 by the mysterious and possibly pseudonymous Satoshi Nakamoto.

The subversive element of the cryptocurrency concept is the anonymity ‘baked into’ the protocol, with an obvious appeal to criminal enterprises. However, this anonymity applies to the ownership of the cryptocurrency, not to the transaction itself. Fundamental to the integrity of bitcoin is the public register of transactions—the block-chain. So while individuals may hide behind the cryptographic anonymity of their bitcoin address, transactions are in the public domain. If a bitcoin user’s private key is recovered from a seized computer, for example, their complete transaction history is then known. The protocol was also designed to inhibit the mapping of bitcoin addresses to Internet Protocol (IP) addresses, but this anonymising has subsequently been shown to be flawed and users can be traced over the network to some degree. There are technological solutions to the investigation of crime involving cryptocurrencies.
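The distinction between anonymous ownership and public transactions is easy to illustrate with a toy model (a deliberate simplification; real Bitcoin keys, addresses and transactions are considerably more complex):

    # Deliberately simplified model of pseudonymity on a public ledger.
    # It only illustrates why 'anonymous' ownership still leaves a traceable trail.
    import hashlib

    def address_from_pubkey(pubkey: str) -> str:
        """Derive a pseudonymous address as a hash of a public key (toy version)."""
        return hashlib.sha256(pubkey.encode()).hexdigest()[:16]

    # The block-chain is, in effect, a public append-only register of transfers.
    alice = address_from_pubkey("alice-public-key")
    bob = address_from_pubkey("bob-public-key")
    ledger = [
        {"from": alice, "to": bob, "amount": 0.5},
        {"from": bob, "to": alice, "amount": 0.1},
    ]

    # Nothing on the ledger names Alice or Bob. But if an investigator ties one
    # address to a real identity (say, via a private key on a seized computer),
    # that party's entire transaction history becomes readable.
    suspect = alice
    for tx in ledger:
        if suspect in (tx["from"], tx["to"]):
            print(tx)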

Significant trends in bitcoin adoption are occurring in the developing world where the benefits of a decentralised currency are much easier to see. Despite the volatility of bitcoin, it is less volatile than some government backed currencies. In Venezuela, people are increasingly turning to bitcoin to survive the world’s highest inflation rate. India’s demonetisation experiment is likely to drive the adoption of bitcoin in that country. Innovative uses of bitcoin micro-transactions are also appearing in Africa.

There are substantial user benefits to a currency that isn’t controlled by a central bank and that can be transacted without a financial institution as an intermediary. This is particularly apparent in developing countries with hyperinflation or limited banking infrastructure. By December 2016, the value of bitcoins in circulation topped US$14 billion, when bitcoins were valued at around US$850 each. Since then, the bitcoin exchange rate has peaked at over US$1,000 but has most recently dropped back below that historic high.

The reason that governments, law enforcement and intelligence agencies should care about the regulatory and technological challenges of cryptocurrencies is that bitcoin is approaching a tipping point where governments may have little choice but to recognise it as a currency. Germany recognised bitcoin as a unit of account in 2013. The US Internal Revenue Service (IRS) treats bitcoin as property but recognises that it has some of the properties of a currency in certain situations.

At home, the Australian Taxation Office (ATO) treats bitcoin as a method of barter, which can cause the double taxation of some bitcoin transactions. The federal government has undertaken to reform the tax treatment of digital currencies in response to the Senate Economics References Committee’s report on digital currencies. As bitcoin continues to be treated less like a commodity and more like a currency, governments will find it harder to put the cryptocurrency genie back into the bottle. In a world first, the Pirate Party of Iceland, which may yet put together a coalition to govern the island nation, has a policy platform of making bitcoin legal tender.

The widespread adoption of cryptocurrencies like bitcoin will have a disruptive effect on the financial sector, reduce central bank control and, as with any new technology, create new opportunities for criminals. Governments will need to adapt to this disruptive new system of exchange, and this is occurring to some extent. The Australian Senate Economics References Committee’s report on digital currencies also recommended changes to the Anti-Money Laundering and Counter-Terrorism Financing Act 2006 to better deal with digital currencies.

If the current rate of bitcoin adoption and capitalisation continues, governments must implement legislative, regulatory and technological controls as a matter of urgency: controls that retain the benefits of cryptocurrency and block-chain technology while minimising the technology’s utility for money laundering and terrorist financing.

The challenge for law enforcement and intelligence agencies will continue to be obtaining, developing and retaining relevant expertise, and the creation of institutional frameworks to support it. For better or worse, ‘magical internet money’ appears here to stay.

Bitcoin Can’t Save World’s Autocrats From the Sanctions Squeeze

Bloomberg’s David Tweed discusses Bitcoin with Tom Uren, visiting fellow with ICPC

Think about how many U.S. dollars are in circulation and how much each bitcoin would have to be worth to match that value — it would be a ludicrously big number.

Read the full story here

Notorious website with naked photos of Aussie schoolgirls returns months after being shut down

Fergus Hanson of the ICPC talks with Channel 7 News.

Meltdown CPU bug

Sky News spoke to Tom Uren about recent revelations that the Meltdown CPU flaws are widespread and, if left unaddressed, pose significant threats to virtually all computer systems worldwide.

Watch the interview here

Report reveals growing cyber threat in Asia Pacific

Thomas Oriti of the ABC’s The World Today speaks with lead author Tom Uren on the recently released ICPC report Cyber Maturity in the Asia-Pacific 2017.

http://www.abc.net.au/radio/programs/worldtoday/report-reveals-growing-cyber-threat-in-asia-pacific/9250494

Cyber Security: Are we doing enough?

The Australian Cyber Security Centre released their Annual Threat Report on Tuesday.

It paints a bleak picture for Australian Cyber Security in both the public and private sectors.

The Government insists this is not a serious issue but some experts argue we still have a long way to go to keep Australia safe.

In this interview, Fergus Hanson talks with Fran Kelly of ABC Radio National. 

Hacked Defence contractor hadn’t changed its passwords from their default

Fergus Hanson speaking on the ABC 7:30 report about the recent cyber incident which saw a Defence contractor hacked.

Video here: http://www.abc.net.au/7.30/hacked-defence-contractor-had-changed-its/9045122

North Korean Hack of U.S. War Plans Shows Off Cyber Skills

Fergus Hanson interviewed by Bloomberg Technology on the recent North Korean cyber hacks.

“There is no doubt that they are using their capability in creative ways,” said Fergus Hanson, head of the International Cyber Policy Centre at the Australian Strategic Policy Institute in Canberra.

“Stealing battle plans is obviously a good idea from a military point of view and they’re also monetizing their capability to get around sanctions.”

Full report at Bloomberg Technology.

Federal Government launches three-year cyber strategy

The Australian Government is warning that the internet risks becoming a “dark space”, if there are not strict rules in place to govern how it is used.

The Foreign Minister, Julie Bishop, has today launched the Government’s International Cyber Engagement Strategy, outlining its cyber affairs agenda over the next three years.

In this interview, Thomas Oriti of the ABC’s “The World Today” program talks to Foreign Minister Julie Bishop and Fergus Hanson. 

http://www.abc.net.au/radio/programs/worldtoday/federal-government-launches-three-year-cyber-strategy/9014742

Australia’s cyberspace policy

Australia is renewing its push for new rules governing how nations deal with each other in cyberspace.

Foreign Minister Julie Bishop has launched the government’s three-year International Cyber Engagement Strategy.

In this video, Beverley O’Connor of ABC’s “The World” program speaks to Fergus Hanson, head of the International Cyber Policy Centre at the Australian Strategic Policy Institute. 

http://www.abc.net.au/news/programs/the-world/2017-10-04/australia-cyberspace-policy/9016844

Experts question Malcolm Turnbull’s terror crackdown on encrypted messages

Experts have warned that Prime Minister Malcolm Turnbull’s bid to force social media companies to give access to encrypted messages for terror investigations is unrealistic, with the pace and breadth of technological change making it too hard for law enforcement to keep up.

Fergus Hanson speaks with Andrew Tillett

Full article here: http://www.afr.com/news/experts-question-malcolm-turnbulls-terror-crackdown-on-encrypted-messages-20170626-gwyfg3#ixzz4yZRjOTbf