Pegasus spyware and the direction of Australian policing

The US government’s recent ban of Israeli technology firm NSO Group’s Pegasus spyware has significant implications for Australian efforts to regulate digital technologies in the face of new online national security threats.

Putting human rights and democratic freedoms at the centre of US foreign policy was one of Joe Biden’s key election promises. His administration has honoured that promise by blacklisting NSO Group for selling its Pegasus software to governments that used it to abuse those principles. This move put the increasing challenge for states to regulate cyber and digital technologies squarely in the middle of US policy decisions and strategy.

This is a strong statement to the international community, especially given the US’s historical support for Israel. For Australia, it’s particularly significant given Canberra and Washington’s renewed commitment to working together to maintain security in the Indo-Pacific region through the AUKUS agreement, plus their longstanding cooperation as members of the Five Eyes intelligence-sharing arrangement. Australia’s new cross-border communications act, or ‘Cloud Act’, which enables data sharing with partners like the US based on common values, highlights the importance of Australia and the US seeing eye to eye on ethical regulation of digital technology.

Israel is already lobbying the US to remove the ban, arguing that Pegasus is critical to its foreign policy. NSO Group maintains that Pegasus is a national security tool for governments to stop transnational organised crime and violent extremist groups from using the ‘dark’ parts of the internet to conduct business.

The Australian government is not an existing or prospective client for Pegasus. But it used the same justification as NSO Group to pass a swathe of bills significantly increasing the powers of police and intelligence agencies to spy on Australians. In addition to the Cloud Act, legislation was enacted that enables agencies to access encrypted data and to alter data. The speed with which these bills passed the parliament, the uncertain safeguards against scope creep and the rushed consultation with industry sparked serious concerns. The use of the encryption law by the Australian Federal Police in Operation Ironside also raised concerns that Australia could become a policing partner of choice due to the expanded powers and undemocratic government overreach the legislation has allowed. The AFP’s refusal to say how the law’s powers were used further damaged public trust.

Certain conditions and warrants are necessary for Australian agencies to use these laws; it’s not an authoritarian free-for-all in the way Viktor Orban’s government used Pegasus to ‘wage war’ on the media in Hungary. But by overriding Australians’ civil liberties based on arguments about countering unprecedented threats to national security, the government is building a legal framework to enable policing of the internet that’s disturbingly similar to how Pegasus enables spying through the ability to access, decrypt and even alter data in online accounts and apps on devices.

Australia’s laws don’t allow devices to be remotely activated for audio recording. But given the justifications for the powers that have been granted—that new, exceptional threats justify new, exceptional measures—that may yet come.

The problem is not finding new strategies and tools to police cyberspace; those are needed. But when we legalise new security powers based on the argument that a threat landscape is ‘unprecedented’ and ‘exceptional’, it’s difficult to then define what other threats are similarly ‘exceptional’ and what is justified by ‘exceptional circumstances’. We saw this with the pseudo-legal framework built by the George W. Bush administration to allow widespread use of torture during the ‘war on terror’. That policy has been widely condemned on the basis of ethics and of international and US domestic law, and even for its failure to yield actionable intelligence.

If the Morrison government is as serious as the Biden administration is about protecting civil liberties in the digital age, it should spend as much effort on building legal frameworks to regulate and govern the fourth industrial revolution according to democratic principles as it does on policing it. Where is Australia’s equivalent of the European Union’s regulations on data protection and privacy for digital tech, given how quickly we’ve passed these policing bills?

And why is the new artificial intelligence ethics framework for government and business entirely voluntary, when AI products are already being used by Queensland Police to risk-profile possible domestic violence offenders? And that’s despite the well-known limitations of available algorithms in screening police data accurately without exacerbating human bias and discrimination. With the threat this poses to already overpoliced communities, where are the hastily passed bills to protect democratic rights in an age when emerging technologies offer flashy ‘solutions’ that carry their own ethical problems in implementation?

It’s easy to applaud Washington’s decision to ban Pegasus. But are we not hurtling down a similar path, to the same place, propelled by the same ‘exceptional threat’ argument?

Government should better explain need for expanded police powers

When the Australian parliament passed the Surveillance Legislation Amendment (Identify and Disrupt) Bill 2021 (SLAID) in August, research institutes and news media voiced concerns about an erosion of civil liberties—for just a few days.

The law grants new powers to the Australian Federal Police and the Australian Criminal Intelligence Commission in an age where technological advances have provided criminals with more devices, digital tools and dark spaces online. Key threats are the production and trading of child exploitation material, the promotion of violent extremist content and activities, and the conduct of organised crime business including the illicit drug trade.

Just a few years earlier, the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (TOLA) gave the AFP and ACIC limited access to encrypted data to observe and collect evidence online and from the internet of things to prosecute criminal activity.

Home Affairs Minister Karen Andrews justified the expansion of powers by saying it was needed to protect communities, especially children, from transnational and local organised crime groups using the ‘dark web’ to do business.

The threats to Australians from these activities are serious, and policing the intersecting and increasing dangers presented by advances in digitisation and technological innovation is critical.

‘Think of the children’ and ‘keep communities safe’ justifications have been rolled out in Australia since the 9/11 terror attacks catalysed a rapid expansion of national security legislation. The 2015 Data Retention Bill began a wave of ‘hyper-legislation’ to police cyberspace that was vast in terms of the number of bills, the speed with which they passed parliament, the uncertain safeguards against scope creep and the rushed consultation with relevant industry.

The TOLA Bill passed in 2018 and was followed by the International Production Orders Act 2020, the Online Safety Act 2021 and then SLAID.

The security versus liberty debate is, however, no longer tipping in agencies’ favour.

Australians need more clarity on what these powers mean for them, including how the various pieces of legislation might interact and affect them. They need a justification with more nuance than ‘won’t somebody please think of the children’ and hashtag terrorism.

Strong debate is a good indicator of a healthy democracy. But Australians’ trust in government has trended steadily downwards over the past decade (despite a short-term increase in the early stages of the Covid-19 pandemic in 2020). This trend, and the persistent, unresolved concerns voiced about these laws by civil society, academia and the federal opposition (which backed them on condition that there would be a review), show that Australians are no longer compelled by the well-meaning justifications relied on by Andrews in announcing SLAID and by AFP Commissioner Reece Kershaw in using TOLA during Operation Ironside.

Each time a new bill is passed or a new power is used, the debate is framed by government officials through this binary logic. And when civil society disputes the decision in favour of security, the response is an effort to justify that calculation, as though resolving the tension between the two equally critical parts is necessary to figure out how a state can police cyberspace without eroding democracy.

This frames the discussion as one to be resolved (‘Pick which principle you value more’) before we can move on to discuss what we implement based on that decision. That’s become a roadblock to advancing the democratic project into the digital age.

We must acknowledge that a permanent tension between rights and security is fundamental to democracy. It’s not a binary choice to be resolved on a case-by-case or bill-by-bill basis.

Parliamentary debate and government messaging need to move beyond the binary and get into the details. This is an opportunity to progress the security agenda of rising to new cyber threats while building back Australians’ trust.

That might sound lofty and aspirational, but it’s how we’ll maintain the necessary, healthy balance of values, and engage in practical discussions about what we can do to regulate the use of a huge range of new and emerging technologies (on a case-by-case basis), without sacrificing either security or freedom. Artificial intelligence, for example, is a large and growing field, and different types and iterations of AI should be used, regulated and restricted in different ways.

Earlier this year, the head of the Australian Security Intelligence Organisation, Mike Burgess, announced his commitment to making ASIO ‘more open and transparent’. A foundational step in rebuilding trust in government regarding increased security powers is to bring the digital and cyber spheres, plus the legislation that regulates them, similarly out of the shadows.

Government messaging that aims policing powers at the ‘dark web’ makes it hard for people to understand if or how their daily lives and liberty might be affected. The argument makes it sound as if the internet and the internet of things are spaces that have ‘gone dark’ to intelligence and so this legislation will allow agencies to look at everything people do online. What’s really meant is that established surveillance practices can’t follow criminals into cyberspace where they communicate anonymously using encrypted messaging apps or online forums, meaning that agencies can’t tap criminals’ communication the way they could when it took place over the telephone. The actual dark web isn’t visible to search engines, can only be accessed using an anonymising browser and is host to a significant amount of criminal activity.

SLAID introduces four new warrants that the AFP and ACIC can seek to disrupt data, access data on devices and networks, and take over accounts to access data. It doesn’t enable wholesale, unregulated access to all citizens’ data. But it’s not clear where SLAID draws the lines between no access and complete access, and between the ‘dark web’ and non-criminal accounts and devices.

Given the hyper-legislation trend, it’s increasingly difficult to see how different laws might interact to have unforeseen effects. For example, will TOLA plus SLAID allow international partner agencies to access data of third-party nations’ citizens that’s hosted in foreign data centres?

If finding out how these powers can affect you requires the time and resources to read and interpret a 156-page bill (and, ideally, a law degree), then the importance of clear, layperson-friendly government messaging on what the new powers mean for how people live out the security–liberty balance cannot be overstated.

With recent riots showing how widely anti-government, accelerationist far-right narratives are resonating in Australia, the uncertainty about how the technology and the law work, and how ‘authoritarian’ this legislation might be, is a vulnerability the federal government can’t afford.

Australians trust our government less despite 20 years of increases in intelligence and policing powers to keep them safe. This suggests a population becoming more worried about how government impedes daily life, and less worried about terrorism.

Rebuilding this trust while also responding to increased criminal threats requires government messaging that makes digital technology and cyberspace real to everyone.

People don’t trust what they don’t understand, and given the speed of advances in digital technology and of the legislation racing to match them, the conditions for distrust are ripe. I have a PhD in law and research in this area, and I’m still trying to understand it.

Australia has a demonstrated capacity to communicate the real, tangible impacts of policy and to generate buy-in from the electorate in the health and human services field: road safety, AIDS awareness, even Covid lockdowns. The government needs to reconnect security agencies with the electorate by investing in making information on digital technology and regulation clear and intelligible.

The hyper-legislation trend in police and intelligence powers means that parliament adds and amends bills frequently in response to technological advancement. That’s building a network of legislation and powers that is increasingly difficult to understand and slow to adapt as technology changes.

New bills should be rendered fit for purpose through unrushed, considered parliamentary debate and meaningful industry consultation and co-design and should be adaptable to ongoing technological advancement. They should be accompanied by government messaging and research making clear how they’ll work.

Industry consultation is treated as a mandatory box-ticking exercise. A rushed process such as that which produced TOLA suggests it’s not undertaken to gain or incorporate meaningful insights from industry. Meanwhile, industry relies on the government for guidance in designing technology that’s safe, as advocated compellingly by the eSafety Commissioner.

To design and implement regulations according to democratic values, the government needs to bring industry into the process meaningfully. If it doesn’t, it risks never fully understanding the implications of new bills and creating an incessant need for more legislation as industry pushes technology forward.

Learning in the grey zone: how democracies can meet the authoritarian challenge

Australian Parliament House, Canberra

Innovation by authoritarian nations in the ‘grey zone’ is becoming one of the most serious challenges facing contemporary democracies. It has long been recognised that future conflicts might be won before any shots are fired. But knowing that is cold comfort, because authoritarian states are continually evolving their capacity to develop and deploy offensive tools in their cyber-enabled, information and hybrid warfare arsenals.

Meeting this challenge requires democratic nations, including Australia, to reconceptualise how they think about strategy: its core purposes, its main instruments and capabilities, and what success or failure looks like. Democracy’s authoritarian rivals—chiefly China and Russia—play by different rules, have different ideas about vulnerabilities and strengths, and measure outcomes in broad holistic rather than tight linear terms.

Strategy is a long game and democracies must overcome their tendency to view conflict as an end-state with a precipitating cause, rather than an ongoing phenomenon. Australian Defence Force chief Angus Campbell highlighted this, in a 2019 speech to an ASPI conference on future conflict, by tracing General Valery Gerasimov’s stages of war. Importantly, Campbell noted that Western powers tended to only react when a crisis point had been reached—when the war was already half-won.

Although there’s disagreement about a ‘Gerasimov doctrine’, much like China’s ‘three warfares’, authoritarian states do seem more suited than democracies to longer-term political warfare. Authoritarian leaders don’t have to face periodic elections (or if they do, the outcome is hardly in doubt), which aids continuity in strategic planning and execution. But beyond government structures, both Russia and China have long invested in weaponising political, economic, psychological and social tools for use in grey-zone activities.

This includes using civilian assets for quasi-military means, like China’s military–civil fusion efforts, or Russia’s use of ‘little green men’ and the Wagner Group, a private military company, as a transnational proxy defence asset. It also extends to using economic levers as strategic instruments, as witnessed by Beijing’s investment campaigns in the South Pacific, and the Kremlin’s manipulation of gas dependencies in Europe. In the information domain, reflexive control—getting your adversary to act in a way that suits your interests without being aware of it—is coupled with other hybrid tactics like adapting international law, mobilising diasporas, and employing useful idiots and cyber-proxies to spread misinformation and disinformation.

A longer view of conflict gives authoritarian states escalation control, allowing them to dictate the tempo of strategic interaction, and to achieve their objectives by presenting others with a fait accompli—as in the South China Sea and Crimea.

It’s false to argue that the West is helpless against such behaviour. But to combat it more effectively it must learn to intervene earlier, put more effort into ensuring a united approach, seize control of narratives, and be prepared to more frequently use coercive economic and other non-kinetic measures. All of these are necessary from an early stage, rather than eventually offering up sanctions and reprimands as responses to bad behaviour that has already accomplished its objectives. Democracies must act more strategically, and more proactively.

A common lament is that democracies lack freedom of action compared with authoritarian states in countering grey-zone activities because they’re bound by laws and norms. But assuming that the West’s challengers would forever be content to play by those laws and norms too, rather than sidestepping, adapting or ignoring them, remains a glaring oversight. Democracies must be flexible and adaptive.

Russia’s ability, with seeming impunity, to take over Crimea, attempt to kill dissidents abroad, co-opt politicians, bomb munitions depots in the Czech Republic, and launch adaptive cyberattacks and information operations against NATO members serves as a force multiplier. It suggests Russia is strong and assertive while democracies are flat-footed, reactive and incapable of firm united responses. It’s a similar story when Beijing changes the maritime geography of the South China Sea, targets diaspora communities with transnational repression, buys influence in Australian society, and undermines the multilateral trading order with its deliberately bilateral Belt and Road Initiative.

This underscores the need for democracies to find ways to be adaptive without compromising their core values. Indeed, to alter an adversary’s behaviour in the grey zone—to deter it—will require a dynamic process with a variety of partners, rather than a static one in which the game and the players are fixed.

That doesn’t mean that laws and norms are useless, or that the West should abandon them. But it does mean that they’ll be increasingly unreliable as instruments to constrain behaviour, especially in a more fluid environment where varied interpretations of law will allow states to forum-shop to advance their interests.

Like laws, appealing to common values should be done with clear eyes rather than rosy spectacles. In many cases, coalitions to counter authoritarian states will be based on shared threat perceptions, rather than a sense of kinship that some potential allies don’t share (and even sometimes resent). In its recent integrated review, the UK abandoned the term ‘rules-based order’, stressing that while it seeks to work with democracies, it will also cooperate pragmatically with those with different values. We shouldn’t fear this. If liberal democratic theory is correct, then the shared habits learned through cooperation will reinforce stability, not undermine it.

Although Australia is a leader in recognising the threat of grey-zone activities, especially in terms of combating foreign interference, democracies have often been slow to realise that building resilience goes beyond government. Protecting Australia from such pressures can’t be done by regulation and legislation alone. Successfully insulating democracies from cyber-enabled information warfare, attacks on critical infrastructure, attempts to undermine and fragment their societies, and efforts to marginalise them from their allies requires a whole-of-society effort.

Leaders must shore up public trust in government and democratic institutions and avoid instrumentalising disinformation for political purposes, and civil society must promote information hygiene. The business, industry and education sectors need to become engaged stakeholders in ensuring transparency over hostile foreign influence and cyberattacks.

There are no easy ways to generate democratic resilience, but it is a crucial endeavour. Information sharing—rarely a strong suit for siloed government departments and businesses wary of negative press—will need to become the norm rather than the exception.

Counter-hybrid fusion centres, net assessment capabilities and other long-range tools and methodologies will be critical to building knowledge about vulnerabilities, identifying threat vectors, and devising appropriate countermeasures. So too will the experience of other nations with potentially useful models. These might include the Swedish notion of ‘total defence’, or Singapore’s ‘six pillars’ (incorporating military, civil, economic, social, digital and psychological components).

Taken together, a longer-term view of strategy characterised by earlier intervention, a flexible and adaptive approach to coalitions and partnerships, and a more integrated effort to unify governments with societies are the keys for democracies to effectively meet grey-zone challenges.

The task is not easy, but pursuing it may also help democracies rediscover that their supposed weaknesses—responsiveness, openness to change and the ability to build trust—are their greatest strengths.

Southeast Asia on the forefront of disinformation for profit and power

Some of the most egregious examples of disinformation campaigning in recent times have come not from shadowy state operatives but from mainstream political parties. Politicians and their parties in Southeast Asia have long been ahead of the curve, argued Nicole Curato, associate professor at the University of Canberra’s Centre for Deliberative Democracy and Global Governance, in a panel discussion on disinformation in the region at ASPI’s grey zone and disinformation masterclass.

‘Before Trump and Brexit, the Philippines was disinformation patient zero’, she explained. And early disinformation innovations have evolved. Once aimed at mass audiences through the use of ultra-popular online influencers, political disinformation campaigns have refined their targeting. They now use ‘nano influencers’ with fewer than 10,000 followers for political product-placement.

Curato used an example from her recent research into disinformation operations run by opposition parties in the Philippines, which are using this kind of niche audience segmentation to run racist anti-Chinese narratives in an effort to damage President Rodrigo Duterte, who has been known for his close ties to Beijing.

Curato noted that she’s received criticism for calling out opposition tactics against Duterte, given the brutal human rights record of his regime. But she believes it’s important to shine a light on information abuses on both sides.

‘To understand why disinformation in Southeast Asia is so widespread, you have to understand the particular political economy of the region’, said Curato. There’s a huge oversupply of cheap labour; at the same time, Southeast Asia is at the core of the global outsourcing industry, and the skills of these workers are highly transferable to disinformation operations. For such a worker, a disinformation side-hustle is an easy way to augment a low income.

But who’s coordinating these gig workers for disinformation campaigns? According to a recent study on the disinformation ecosystem in the Philippines, elite advertising and public relations agencies conceptualise campaign narratives and then organise influencer and troll armies to deploy messages. Disinformation is a huge source of profit for the PR industry.

Fellow panel member and Southeast Asia specialist at the Australian National University, Ross Tapsell, agreed, pointing to Malaysia, where in 2013 political parties used online influencers or ‘cybertroopers’ to influence the outcome of that year’s national election. This activity has now consolidated into a more permanent industry throughout the region, with ‘trolls’ prominent in the Philippines and ‘buzzers’ in Indonesia.

The now-notorious firm Cambridge Analytica also angled for a large slice of campaign money from the United Malays National Organisation party with the pitch ‘we won it for Trump’. They were unsuccessful in winning a contract in part because UMNO already had a domestic social media campaigning capability.

Tapsell noted that Southeast Asians are well known for their high levels of social media engagement. Jakarta was once declared the ‘capital city of Twitter’ with more users than anywhere else on the planet. The Philippines is known as a Facebook nation and Malaysia has some of the world’s highest usage of WhatsApp.

There are some big stakes here for Australia, Tapsell said. For one, this growing and highly profitable industry of dirty information tricks can be mobilised against Australian interests in the region.

Tapsell went on to say that Australia is still figuring out this terrain, giving an example of the Australian government’s handling of an anti-Australian tweet from a Chinese state official. Tapsell said that Australian government departments often focus on specific regional partnerships but sometimes miss the wider narrative formation about Australia and the region that’s happening on digital platforms across Southeast Asia.

He added that the regional prevalence of disinformation is having a hugely destabilising effect on democracy, eroding the capacity of institutions like the media and undermining elections and, potentially, broader social cohesion.

As for foreign networks, disinformation incursions in the region have been minimal but there’s clear potential for growth. According to Tapsell’s research, most of the disinformation action emanates from Southeast Asian actors focusing on domestic agendas.

So what can be done? Curato noted that the tech platforms have developed menus of responses, which now include fact-checking, labelling and targeted take-downs of disinformation and inauthentic activity.

From her interviews with members of the disinformation industry in Southeast Asia, Curato said that the response they’re most scared of is the take-down, which means months or years of work down the drain. They also don’t like to be named and shamed. No company likes to be associated with electoral manipulation.

Tapsell added that the take-down approach is effective, but companies like Facebook are selective in what they take down, usually focusing on major markets, and have not placed enough emphasis on or hired enough people from Southeast Asia.

Curato agreed that Facebook needs to do a lot better. She’d like to see the disinformation response at Facebook democratised. This means finding a way to bring more user input into disinformation oversight decisions, as well as blacklisting PR companies involved in large-scale disinformation production.

Finally, both panellists agreed that the platforms need to better understand and resist attempts by authoritarian actors to use ‘fake news’ laws to silence legitimate opposition in the region. The platforms are now key players in regional geopolitics and need to devote more resources to their growing civic responsibilities.

Myanmar’s youth demand their future

Half of Myanmar’s population is under the age of 30, and many of these young people have benefited from their country’s fragile, imperfect democratic transition over the past decade. They know the military’s return to power could reverse hard-won gains in human development and fundamental freedoms. Their future is at stake.

So are their lives. On 27 March, General Min Aung Hlaing used the occasion of Armed Forces Day to claim that the military would protect the people and promote democracy. In fact, it turned out to be the bloodiest day since the military coup on 1 February.

And yet, as a father clutching his dying son poignantly noted, ‘On this day, both lives and futures are being lost.’ With their prospects vanishing before their eyes, tens of thousands of young people have taken to the streets across Myanmar. They are refusing to live without hope.

But the country’s backsliding is already being felt acutely. In addition to Covid-19, Myanmar is confronting a compounding economic crisis. The World Bank’s recent regional forecast shows GDP on track to shrink by 10% in 2021, compared to 6.8% growth in 2019 and 1.7% growth in 2020, when the country was reeling from the pandemic.

Late last year, the United Nations Development Programme’s household vulnerability survey signalled that poor households are being pushed further below the poverty line, while many vulnerable households are being dragged towards it. Even previously financially secure households are facing massive shocks from business closures and loss of employment.

On the streets of Yangon, Mandalay and other cities, Myanmar’s citizens have come to experience these crises in the starkest of terms, with countless personal tragedies unfolding behind the data. Young people are watching employment opportunities disappear as investment plummets. International buyers and factory owners are questioning the viability of their local operations, given the lack of worker safety and security. Unable to guarantee continued production and reliable logistics, many businesses have halted operations entirely. With these closures, even more young people will lose their jobs and meagre livelihoods.

The internet is the lifeblood of this generation, but it has been restricted severely. The suppression of information, free speech and internet access threatens to push Myanmar back into isolation from the rest of the world. But this is happening at a time when a whole generation of young people have come to know what it’s like to enjoy better jobs, freedom of speech, access to information and improved education.

These developments shaped younger cohorts’ values and aspirations, instilling them with a sense of civic consciousness. Their expectations reflect the real possibilities that they had previously seen ahead of them: the potential of a different future from the one their parents had known.

History offers stark reminders of what the young are likely to face. In 1950, Myanmar’s per capita income was higher than that of Malaysia or Thailand. But the following decades of ‘closed-door’ policies resulted in severe underinvestment in the economy and public goods and services, with deep and corrosive effects on human capital. Military rule turned Myanmar from one of the most promising economies in Asia into one of its worst performers.

The parents and grandparents of today’s young people know what it was like back then, and their bitter experience has been recounted to their children and grandchildren. Young Myanmar doesn’t want the clock turned back. They will not be silenced. The 19-year-old taekwondo champion and dancer Kyal Sin (also known as Angel) was wearing an ‘Everything will be OK’ T-shirt when security forces gunned her down in Mandalay. Thousands of young people like her continue to stand on the streets each day, full of defiant optimism as they face the military and courageously demand the country they want and deserve.

As UN Secretary-General António Guterres has stressed, there is only one way forward. The security forces must stop the violence, the generals must return Myanmar to the democratic path by respecting the results of last year's election (which the National League for Democracy won decisively) and all political prisoners must be freed.

Only then can there be progress on reforms to deliver fair economic development and human rights—including freedom of movement, the right of safe return for refugees and citizenship—and on controlling the spread of the virus. Only a society that invests in its very humanity can be at peace with itself.

Sir Michael Somare’s passing marks the end of an era for Papua New Guinea

The passing of Sir Michael Somare, the first chief minister and the founding prime minister of Papua New Guinea, ends a remarkable political life that began when he was elected to the pre-independence House of Assembly in 1968 and saw him serve four terms as prime minister.

No PNG prime minister worked with more Australian prime ministers and their governments. As chief minister from 1973 he worked with the Whitlam government on the transition from self-government to independence in 1975. He subsequently worked with the Fraser, Hawke, Howard, Rudd and Gillard governments during his four terms as his nation’s head.

Prior to self-government he had gained the confidence of territories minister, Andrew Peacock, an association that helped secure bipartisan support for self-government.

Before independence could be achieved, Somare had to secure support for it in the House of Assembly after he was elected chief minister in 1973.

History records that while there was bipartisan backing in Australia for the broad independence timetable, there was far from unanimous support in the PNG House of Assembly, or in the wider community.

Somare used his negotiating skills not just to map out with Australia a pathway to independence, but also to form an effective and representative government coalition that supported the independence timetable and would deliver the national constitution necessary to secure stability and, above all, unity.

For his first administration as chief minister he chose wise, respected men as key ministers, such as Sir John Guise as deputy chief minister and Sir Julius Chan as finance minister, as well as Sir Albert Maori Kiki, Donatus Mola and Thomas Kavali. My first employer in PNG politics, Sir Iambakey Okuk, also joined the ministry.

The new administration negotiated with Australia the transition timetable and budgetary and other ongoing support enabling independence to be achieved in 1975, barely three years after self-government.

The personal relationships Somare developed with both the Whitlam government and the Coalition opposition were critical, along with evidence of parliamentary and community support within PNG as a whole, to secure independence harmoniously.

His administration undertook a comprehensive and genuine consultation process that led to the approval of the constitution, a document that has essentially served PNG well since independence.

The constitution included the appointment of Queen Elizabeth II as head of state, a representative national parliament and, importantly, an independent judiciary that would ensure the constitution was upheld by both government and parliament.

The peaceful transition to independence was unquestionably Somare’s greatest achievement.

To that must be added the national unity that has largely been maintained since. Holding the young nation together was not easy. It required compromises that may not have served PNG well, but which maintained national unity. With the overwhelming pro-independence vote on Bougainville in 2019, that unity is being tested and will be a significant challenge for the current and future national governments.

Somare had a remarkable capacity for forgiveness, as the make-up of his administrations in 1975–1980, 1982–1985, 2002–2010 and 2010–2011 demonstrates. Onetime opponents became political partners. He was a master at forming governments by drawing on different parties and groups to secure a parliamentary majority.

Overall, his relations with Australia were harmonious and mutually beneficial. For much of Somare's time in office, his foreign minister was Sir Rabbie Namaliu, who remains widely respected in Canberra.

There were occasional differences, but they did not unduly undermine good relations.

Somare established diplomatic ties with China at independence in 1975 and pursued a ‘one China’ policy. He also maintained close relations with regional neighbours, notably Indonesia, Singapore, Malaysia, Japan and the Philippines. PNG played a positive role in the Commonwealth, the United Nations and the Pacific Islands Forum.

In all my engagements with Sir Michael I found he held Australia, and Australians, in high regard. That was especially demonstrated when he made an historic address to the Cairns session of the Queensland Parliament in 2008.

He followed Australian horse racing, and between his terms as prime minister I had the privilege of taking him to the races in Brisbane where he was absolutely in his element. He used to say to me ‘Jeffrey, please don’t get me invited to the official lunch—it interferes with my enjoyment of the races!’

Apart from his key role in achieving independence, the standout feature of his career was its longevity, from its beginning in 1968 until his retirement in 2017.

PNG has a robust and at times tumultuous parliamentary democracy. That he survived in it is a tribute to his popularity and his political skills.

Somare’s passing marks the end of an era and a unique life that will not be equalled or surpassed in the importance of its contribution to the life of our closest neighbour.

The geopolitics of artificial intelligence

As artificial intelligence technologies become more powerful and deeply integrated in human systems, countries around the world are struggling to understand the benefits and risks they might pose to national security, prosperity and political stability.

These efforts are still very much a work in progress. Australia is developing a whole-of-government AI action plan, led by the Department of Industry, Science, Energy and Resources. The department released a discussion paper this year and finalised its call for submissions in November.

In line with the department’s brief, the paper concentrates on the economic potential of AI, while acknowledging the need for a human-centred, ‘responsible’ AI regime. That reflects a push internationally to conceptualise AI in terms of digital human security and human rights.

But AI technologies also have serious implications for national security and geopolitics, which need to be thoroughly explored in any discussion of what an AI framework for Australia might look like.

It's important to note that definitions of AI are not settled and its applications are vast. But most definitions circle around the idea of machine learning: the ability of a digital technology not just to automate a function, but to learn from interactions with its environment and optimise that function accordingly.
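A minimal sketch can make the 'learn and optimise' idea concrete. The example below is purely illustrative (the data and the target rule are invented): instead of being given a fixed rule, the system infers one by repeatedly adjusting a parameter against feedback.

```python
# Illustrative sketch only: a system 'learns' by nudging a single weight
# against the error it observes, rather than following a hand-coded rule.

def train(samples, lr=0.1, steps=200):
    """Fit y ~ w * x by gradient descent on the squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            error = w * x - y      # feedback from the 'environment'
            w -= lr * error * x    # adjust to reduce future error
    return w

# The system is never told the rule y = 3x; it infers it from examples.
data = [(1, 3), (2, 6), (3, 9)]
print(round(train(data), 2))  # converges towards 3.0
```

The point of the sketch is the feedback loop: behaviour changes as a function of observed outcomes, which is what distinguishes learning systems from mere automation.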

The AI systems that we need to think about in national security terms include surveillance and profiling, the persuasive AI that pervades digital social networks, predictive algorithms and autonomous systems. It is also important to think about the control of the entire AI supply chain, from the human source of the datasets that AI technologies need to learn from, to research and development and technology transfers, and the effects of AI systems on societies.

But the AI geopolitical picture is now a contested tangle of state rivalry, multinational monopoly growth and public anxiety.

That AI is deeply embedded in the discourse of geopolitical competition is well established. The belief that AI will be the key to military, economic and ideological dominance has found voice in a proliferation of grand AI mission statements by the US, China, Russia and other players.

Whether an AI advantage will deliver pre-eminent power to any one nation is arguable. National control over AI technology remains elusive in a world of globalised R&D collaboration and supply chains and transnational digital technology companies.

But the perception, at least, has driven intense national economic competition over establishment of global AI-powered monopolies in almost every sector—energy, infrastructure, health, online gaming, telecommunications, news, social media and entertainment—and the enormous data-harvesting power that goes with them.

Governments are also racing to develop AI military technologies like autonomous lethal weapons and swarming technology as well as the AI-enhanced surveillance, communications and data-exploitation capabilities they hope will give their military forces the decisional edge on the battlefield.

At the same time, countries are trying to unwind the globalisation of AI technology in order to control R&D collaboration and technology transfers. Individual nations and alliance systems are beginning to champion their own versions of AI norms and technology bases.

In the process, the huge datasets held by governments, corporations and various data brokers have become a strategic asset. They are coveted as the raw fuel needed to train machine-learning algorithms.

Governments have been actively exploring the ways in which these datasets can be weaponised, how they might be used to create cyber weapons targeting critical infrastructure, influence the information systems of another country, build better profiles of its elites for influence targeting and form a clearer picture of the internal dynamics of a political system.

As these uses continue to be experimented with, how datasets are collected and where they are housed is becoming a national security issue. The decision by the US and others to ban Huawei and break up TikTok can be seen at least partially in this context.

But as the competition for the AI edge heats up, the initial excitement and uncritical embrace of this technology has darkened to a mood of profound unease.

Democratic governments are being forced to grapple with the fact that the AI algorithms that run social media platforms operate to maximise user engagement and encourage behavioural change which can then be sold to advertisers. And these learning algorithms have supercharged the possibilities for what some analysts have termed ‘sharp power’—the manipulation of public sentiment through computational propaganda, disinformation and conspiracism by foreign actors and their domestic proxies.

Deep fakes—synthetic media created with the help of machine learning—can be fun but are becoming another tool in the burgeoning disinformation arsenal. AI-generated disinformation was reportedly used by China to interfere in the Taiwanese presidential election in January, and by partisan operatives to discredit Democratic candidate Joe Biden's son Hunter ahead of the US election.

The past year has at times seemed like a laboratory for demonstrating the malignant effects of AI-driven communications platforms on politics. The corrosive effect on credible governance and institutional legitimacy, in the case of the US, has threatened democratic norms, the ability of the government to mount a credible pandemic response and its reputational power abroad.

Further, the increasing AI-enabled convergence of the physical and digital worlds is constantly creating new infrastructure vulnerabilities. The development of 5G ‘smart cities’—the mass automation of public infrastructure via sensors and learning algorithms—will open up even more avenues for surveillance, data weaponisation and criminal cyber activity, and will provide foreign adversaries with further means to reach into societies at the granular level. The recent discovery of a massive cyber intelligence campaign against US security systems, enabled through a US government software contractor, is a reminder of what’s possible here.

All of this has governments and publics around the world signalling alarm, if not outright panic, about the destructive power of AI platforms. As the year comes to a close, the US has launched anti-trust actions against Google, which will almost certainly survive into a Biden administration. The EU has opened an investigation into anti-competitive conduct by Amazon. This alarm is no longer confined to democratic countries. China is drawing up new anti-trust measures squarely aimed at its own AI behemoths of Alibaba, Tencent and Baidu.

This fight between states and global platforms will be a defining feature of the next decade, as will be the fight for public trust in AI technologies. China’s pioneering work in deploying state-directed, AI-enhanced surveillance provides an illustration of a chilling totalitarian vision of intimate control of individual citizens through their dependence on integrated digital smart systems.

As citizens feel more and more powerless against the growing use of AI, they will increasingly push both governments and platforms to be more ambitious in designing technologies and enforceable regulatory regimes that are centred on the public interest.

By engaging transparently with the high risk to security, democracy and social cohesion inherent in many AI applications, Australia has the opportunity to develop innovative policy that could help set standards internationally.

Judiciary intervenes to end Papua New Guinea’s political crisis

For the past six weeks or more, both the political system and day-to-day government operations of Papua New Guinea have been paralysed by divisions within the Marape government and attempts to remove James Marape as prime minister.

It has taken the decisive intervention of the nation’s independent judiciary to not only pave the way for a possible (and I put it at no greater than possible) end to the impasse, but also lay down with great clarity some rules that might minimise the risk of such a crisis arising again.

It’s a pity the Australian media has largely ignored the intervention by the PNG Supreme Court because it actually provides a welcome and overdue assurance that, despite a political system that is fractured, PNG has a robust, independent and highly competent national judiciary that ensures the constitution is not breached by prime ministers, governments and parliaments.

I have written about how the judiciary is a vital part of the system of government in PNG. It has consistently ensured that proper parliamentary procedure and process are followed by the government of the day and the national parliament.

The advantages the court’s comprehensive ruling offers PNG may not be fully apparent immediately, but in my view the democratic process, and the functioning of the national parliament, will benefit over time.

The most recent decisions, issued by a five-member bench headed by Chief Justice Gibbs Salika, represent a significant defeat for the Marape government and the speaker of the parliament, Job Pomat. Equally, they are a victory for the opposition, and for former prime minister Peter O’Neill in particular.

The unanimous decisions have been accepted by Marape and his government, even though they put his hold on the office at real risk. Even more welcome is the fact that there have been absolutely zero public disturbances despite the profound impact of the decisions.

The court action initiated by O'Neill came about as a result of the speaker's move to effectively overrule an overwhelming vote by the house just days earlier to adjourn until 1 December. Pomat recalled parliament with just 24 hours' notice—even though the opposition had shifted to Vanimo in the northwest of PNG to prepare for a vote of no confidence in the resumed 1 December session.

The session convened by the speaker, and attended only by government members, hurriedly passed the 2021 budget, and then adjourned until April—by which time a no-confidence motion could not be moved as it would be within 18 months of the next election.

The Supreme Court declared that the recalling of parliament by the speaker was invalid—as was the national budget for 2021. It also ruled that the adjournment to 1 December was lawful.

The court ordered that parliament meet on Monday 14 December.

The immediate effect of this decision is that the opposition will have an opportunity to move a motion of no confidence in the prime minister, though a vote on that will take a week or more to actually take place.

The outcome of any no-confidence motion is far from certain. The numbers on the floor of parliament are close. The result might be as close as 55 on either side, though there is some evidence the opposition might be just in front.

But the position is very fluid—even more fluid than PNG politics usually is.

There can be no question the political turmoil of recent weeks could not have come at a worse time for the country.

The PNG economy is in poor shape, and nothing better illustrates that than the dire state of the nation's finances. The projected deficit for 2021 is eye-watering—even larger than the 2020 deficit will end up being.

The economy and the fiscal position cannot be readily rectified. Indeed, it will take more than the 18 months remaining in the current parliament’s term to make a significant impact on either.

But thanks to the Supreme Court's courageous decisions there is a pathway to restore some semblance of order to the government and the functioning of the national parliament.

The judiciary has laid down some rules that the government of the day and the parliament now have to follow. They won’t work miracles, but they provide a level of stability and certainty that PNG so desperately needs in difficult times.

There are aspects of the parliamentary democracy, and the functioning of executive government, that are in need of urgent repair. That is not going to happen before the 2022 election.

But what might happen, thanks to the judiciary, is a return to a measure of stability that has been sorely lacking in recent weeks. PNG desperately needs a government capable of starting to address the great challenges the nation faces today, challenges that have been put in the too-hard basket or simply mismanaged for too long.

An environment in which living standards are declining and business confidence is lacking, and in which the fiscal position has never been worse, is not just unhealthy, it is dangerous.

The eight million men, women and children of PNG have been remarkably patient.

That patience will continue only if the people see at least a glimmer of improvement in their living standards and the quality and availability of basic services.

That ought to be the bottom line when parliament decides who will lead Papua New Guinea into another challenging year.

What can social media platforms do about disinformation?

On 10 November, a journalist with the right-wing news network One America News (OAN) tweeted the unsubstantiated claim that voting technology used in Michigan and Georgia had ‘glitched’ for Joe Biden. Her tweet was retweeted by President Donald Trump.

The post was dutifully marked by Twitter as a ‘disputed’ claim about election fraud. But on YouTube and Facebook, OAN’s claim didn’t receive the same treatment.

Dominion Voting Systems, whose equipment was used in several states, had some technical difficulties on election day. But the suggestion that the technology switched votes to the president-elect has been debunked multiple times.

Yet this falsehood is one of many seeking to cast doubt on the legitimacy of the election in the hyperactive and interconnected ecosystem of cable news, conservative news websites and social media. Conspiracies about dead voters now flow quickly from politicians on Fox News to YouTube. Rumours about switched votes spread from web forums to right-wing blogs to YouTube, and to the president’s own Twitter account.

But despite how quickly such ideas metastasise, the social media platforms were largely moderated during the election as separate islands, each with its own rules and norms. Twitter, for example, added a warning label to the OAN reporter's tweet as part of its policy to add context to disputed claims about the election. At last check, the post had been retweeted more than 71,000 times. Twitter moderators also added the label to an OAN tweet that contained a link to her report on YouTube.

But on YouTube itself, where the approach to election misinformation has been called ‘light touch’, OAN’s video about the claim appeared without any warning notice and was viewed almost 204,000 times. While YouTube says it removes videos that ‘mislead people about voting’—such as those publicising the wrong voting date—views on the outcome of the election are allowed.

The YouTube video travelled further still. It was shared on OAN’s Facebook page, where it received more than 18,000 interactions in the form of reactions, comments and shares, according to the insight tool CrowdTangle.

Before the election, Facebook said it would attach ‘an informational label’ to content that discussed issues of legitimacy of the election, but one did not appear on OAN’s YouTube clip on Facebook. Overall, the YouTube link received more than 40,000 interactions on Facebook. Just under half of those took place on public pages, suggesting it was also shared widely in private Facebook groups, which are difficult for researchers to access.

This inconsistency, while not surprising, is increasingly problematic. The spread of disinformation aimed at discrediting the results of the US election has again underscored that a disjointed approach is ineffective at mitigating the problem, with potentially disastrous consequences for democratic institutions. In an environment where a video removed from YouTube spreads on Facebook—not to mention TikTok, Parler or dozens of other platforms—some may demand a uniform approach.

The platforms do collaborate on some law enforcement and national security issues. For example, all three companies are part of initiatives to share hashes—unique fingerprints for images and video—to stop the spread of violent imagery and child exploitation material.
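The matching workflow behind hash-sharing can be sketched briefly. Production systems use perceptual hashes (such as PhotoDNA) that survive re-encoding and minor edits; the cryptographic hash below is a simplification used only to illustrate the exchange, and all names and byte strings are hypothetical.

```python
# Simplified illustration of cross-platform hash-sharing: platforms
# exchange compact fingerprints of known-violating media, never the
# media itself, and screen uploads against the pooled list.

import hashlib

def fingerprint(content: bytes) -> str:
    """Derive a compact, shareable identifier from raw media bytes."""
    return hashlib.sha256(content).hexdigest()

# A pooled blocklist of hashes contributed by participating platforms.
shared_blocklist = {fingerprint(b"known-violating-video-bytes")}

def should_block(upload: bytes) -> bool:
    """Check an incoming upload against the shared hash list."""
    return fingerprint(upload) in shared_blocklist

print(should_block(b"known-violating-video-bytes"))  # True
print(should_block(b"some-new-cat-video"))           # False
```

The design choice worth noting is that only fingerprints cross organisational boundaries, which lets companies cooperate on removal without redistributing the harmful content itself.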

But coordinated efforts on disinformation have largely focused on removing foreign and state-backed disinformation campaigns. Domestic misinformation and disinformation, which became the main challenge during the US election, rarely evoke similarly unified efforts from the big three, at least publicly.

Pressure is intensifying for the platforms to ‘do something’. Yet forcing all of the major companies to work together and take the same approach could result in what Evelyn Douek, a lecturer at Harvard Law School, has called ‘content cartels’.

Decisions about what stays online and what gets removed often lack accountability and transparency, and, in some forms, coordination could help power accrue to already very powerful companies—especially if there’s no independent oversight or possibility of remediation. ‘The pressure to do something can lead to the creation of systems and structures that serve the interests of the very tech platforms that they seek to rein in’, Douek has argued.

A circuit breaker is needed to stop the cross-platform spread of deceptive or misleading claims, but even the coordinated removal of posts and videos about issues like election fraud raises concerns about false positives and censorship. And if such an approach were applied globally, the platforms could draw criticism for imposing American norms of speech in other countries.

In any case, given the now embedded use of disinformation as a campaigning tool, a where-goes-one-goes-all approach to domestic disinformation is unlikely to be legislated. Any such measure would be interpreted by critics as collusion against a political party or message, even if it’s only labels on disputed posts. And it’s not clear that such interventions even work to halt the spread of misinformation.

The clash of local sensibilities with universal content moderation practices isn’t deterring some countries from developing national regulatory regimes. Australia, for example, is developing a voluntary code on misinformation with digital platforms. The European Union also has a code of practice on disinformation.

But around the world, claims of conspiracy often flow from the very top of government through friendly media channels. Those statements are then digested, edited and posted on YouTube, and the cycle begins again.

Content moderation is therefore only a partial answer to institutional and social failures. Companies like Facebook and YouTube shouldn’t be let off the hook, but better and more moderation can’t be the only way to halt the erosion of trust in elections.

As Douek has pointed out, what we consider a legitimate political campaign and what we consider manipulative is partly a social question, and not one for the platforms alone. That’s because at the centre of these problems are individuals. People’s political loyalties and desires, expressed by their clicks and shares, have helped spread the baseless idea of a voting machine ‘glitch’—albeit people who were being worked on by a fast-moving system of politicians, pundits, mainstream media and social media algorithms in a way that’s calculated to capture their emotions and attention.

And it works. Over the past seven days, according to CrowdTangle, posts with the phrase 'election glitch' have received more than a million interactions on yet another Facebook-owned platform caught up in the disinformation cycle—Instagram.

Cyber-enabled foreign interference in elections on the rise

Foreign governments’ efforts to interfere in the elections and referendums of other countries, and more broadly to undermine other political systems, are an enduring practice of statecraft. Over the past decade, the scale and methods through which such interference occurs have changed, with state actors exploiting the digitisation of election systems, election administration and election campaigns to influence voters and voter turnout, manipulate the information environment and weaken public trust in democratic processes.

The proliferation of actors involved in elections and the digitisation of election functions have dramatically widened the attack surface available to those who seek to disrupt democracy. This has in large part been facilitated by the pervasive and persistent growth of social media and networking platforms, which has made target populations more accessible to foreign state actors and exposed them ‘in a new, “neutral” medium, to the very old arts of persuasion or agitation’.

Our new research report, Cyber-enabled foreign interference in elections and referendums, published by ASPI’s International Cyber Policy Centre, identifies 41 elections and 7 referendums between January 2010 and October 2020 where cyber-enabled foreign interference has been reported, and it finds that there’s been a significant uptick in this activity since 2017.

Figure 1: Cases of cyber-enabled foreign interference, by year and type of political process

The data we collected (see our map) shows that Russia is the most prolific state actor engaging in cyber-enabled election interference, followed by China, Iran and North Korea. All four have sought to interfere in the 2020 US presidential election using differing cyber-enabled foreign interference tactics.

Their activity can be divided into two attack vectors:

  • cyber operations—covert activities carried out via digital infrastructure to gain access to a server or system in order to compromise its service, identify or introduce vulnerabilities, manipulate information, or perform espionage
  • online information operations—information operations carried out online to covertly distort, confuse, mislead and manipulate targets through the dissemination of deceptive or inaccurate information.

Together, these two attack vectors have been used to disrupt voting infrastructure and target electronic and online voting, including vote tabulation, as well as exploit the digital presence of election campaigns, politicians, journalists and voters. The concern with the strategic use of both attack vectors is that it further complicates the target’s ability to detect, attribute and respond.

While electronic and online voting, vote tabulation and voter registration systems are often presented as the main targets of cyber-enabled interference, it’s the level of trust the public has in the integrity of electoral systems, democratic processes and the information environment that is at stake. In Europe, a 2018 Eurobarometer survey found that 68% of respondents were concerned about the potential for fraud or cyberattack when voting electronically and 61% were concerned about ‘elections being manipulated through cyberattacks’. This figure matched a similar survey conducted by the Pew Research Center in the US, which found that 61% of respondents believed it was likely that cyberattacks would be used in the future to interfere in their country’s elections.

The effectiveness of cyber-enabled interference in the lead-up to an election is overwhelmingly determined by the robustness and integrity of the country’s broader information environment and the extent to which the electoral process has been digitised. This means states vary in their vulnerability. The Netherlands, for example, reverted to using paper ballots to minimise its susceptibility to a cyber operation and help ensure that there wouldn’t be doubts about the electoral outcome.

While it’s difficult to assess the material impact that such efforts have had on the outcome of specific elections and referendums, our report highlights that the same foreign state actors continue to pursue this type of interference, and that for many states cyber-enabled interference has become an expected part of the political process. This perceived threat on its own has the potential to undermine the integrity of elections and referendums and voters’ trust in public and democratic institutions.

Cyber-enabled foreign interference in elections and referendums, or at least the perceived threat of such interference, will persist, and will likely accelerate as more of the world goes online. As the integrity of such processes is key to societal resilience, it’s vital that these events are better protected through greater international collaboration and stronger engagement between governments, the private sector and civil society. However, policymakers must respond to these challenges without adopting undue regulatory measures that could undermine their political systems and create ‘the kind of rigidly controlled environment autocrats seek’.