
Social media as it should be

Mathematician Cathy O’Neil once said that an algorithm is nothing more than someone’s opinion embedded in code. When we speak of the algorithms that power Facebook, X, TikTok, YouTube or Google Search, we are really talking about choices made by their owners about what information we, as users, should see. In these cases, algorithm is just a fancy name for an editorial line. Each outlet has a process of sourcing, filtering and ranking information that is structurally identical to the editorial work carried out in media—except that it is largely automated.

This automated editorial process, far more than its analogue counterpart, is concentrated in the hands of billionaires and monopolies. Moreover, it has contributed to a well-documented list of social ills, including large-scale disinformation, political polarisation and extremism, negative mental-health impacts and the defunding of journalism. Worse, social-media moguls are now doubling down, seizing the opportunity of a regulation-free operating environment under Donald Trump to roll back content-moderation programs.

But regulation alone is not enough, as Europe has discovered. If our traditional media landscape featured only a couple of outlets that each flouted the public interest, we would not think twice about using every available tool to foster media pluralism. There is no reason to accept in social media and search what we would not tolerate in legacy media.

Fortunately, alternatives are emerging. Bluesky, a younger social-media platform that recently surpassed 26 million users, was built for pluralism: anyone can create a feed based on any algorithm they choose, and anyone can subscribe to it. For users, this opens many different windows onto the world, and people can also choose their sources of content moderation to fit their preferences. Bluesky does not use your data to profile you for advertisers, and if you decide you no longer like the platform, you can move your data and followers to another provider without any disruption.

Bluesky’s potential does not stop there. The product is based on an open protocol, which means anyone can build on top of the underlying technology to create their own feeds or even entirely new social applications. While Bluesky created a Twitter-like microblogging app on this protocol, the same infrastructure can be used to run alternatives to Instagram or TikTok, or to create totally novel services—all without users having to create new accounts.

In this emerging digital world, known as the Atmosphere (so named for the underlying AT Protocol), people have begun creating social apps for everything from recipe sharing and book reviews to long-form blogging. And owing to the diversity of feeds and tools that enable communities or third parties to collaborate on content moderation, it will be much harder for harassment and disinformation campaigns to gain traction.
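For a concrete sense of what building on the protocol looks like, here is a minimal sketch that reads a public Bluesky feed over the AT Protocol's XRPC interface. It assumes the unauthenticated read endpoint at public.api.bsky.app and the app.bsky.feed.getAuthorFeed method; the handle passed in is just a placeholder.

```typescript
// A minimal sketch of reading a public Bluesky feed over the AT Protocol's XRPC interface.
// Assumptions: the unauthenticated read endpoint at public.api.bsky.app and the
// app.bsky.feed.getAuthorFeed method; 'bsky.app' below is just a placeholder handle.
async function fetchAuthorFeed(actor: string, limit = 10): Promise<void> {
  const url = new URL('https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed');
  url.searchParams.set('actor', actor);
  url.searchParams.set('limit', String(limit));

  const res = await fetch(url);
  if (!res.ok) throw new Error(`XRPC request failed: ${res.status}`);
  const data = await res.json();

  // Each feed item wraps a post record; print the author's handle and the post text.
  for (const item of data.feed ?? []) {
    console.log(`@${item.post.author.handle}: ${item.post.record.text}`);
  }
}

fetchAuthorFeed('bsky.app').catch(console.error);
```

Because any developer can consume the same data to build a different feed or an entirely different application, new services can plug into the ecosystem without asking users to start from scratch.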

One can compare an open protocol to public roads and related infrastructure. They follow certain parameters but permit a great variety of creative uses. The road network can carry freight or tourists, and be used by cars, buses or trucks. We might collectively decide to give more of it over to public transport, and it generally requires only minimal adjustments to accommodate electric cars, bikes and even vehicles that had not been invented when most of it was built, such as electric scooters.

An open protocol that is operated as public infrastructure has comparable properties: our feeds are free to encompass any number of topics, reflecting any number of opinions. We can tap into social-media channels specialised for knitting, bird watching or book piles, or for more general news consumption. We can decide how our posts may or may not be used to train AI models, and we can ensure that the protocol is collectively governed, rather than being at the mercy of some billionaire’s dictatorial whims. Nobody wants to drive on a road where the fast lane is reserved for cybertrucks and the far right.

Open social media, as it is known, provides the opportunity to realise the internet’s original promise: user agency, not billionaire control. It is also a key component of national security. Many countries are now grappling with the reality that their critical digital infrastructure—social, search, commerce, advertising, browsers, operating systems and more—is subordinated to foreign, increasingly hostile, companies.

But even open protocols can become subject to corporate capture and manipulation. Bluesky itself will certainly have to contend with the usual forms of pressure from venture capitalists. As its CTO, Paul Frazee, points out, every profit-driven social-media company ‘is a future adversary’ of its own users, since it will come under pressure to prioritise profits over users’ welfare. ‘That’s why we did this whole thing, so other apps could replace us if/when it happens.’

Infrastructure may be privately provided, but it can be properly governed only by its stakeholders: openly and democratically. For this reason, we must all set our minds on building institutions that can govern a new, truly social digital infrastructure. That is why I have joined other technology and governance experts to launch the Atlas Project, a foundation whose mission is to establish open, independent social-media governance and to foster a rich ecosystem of new applications on top of the shared AT Protocol. Our goal is to become a countervailing force that can durably support social media operated in the public interest. Our launch is accompanied by the release of an open letter signed by high-profile Bluesky users such as the actor Mark Ruffalo and renowned figures in technology and academia such as Wikipedia founder Jimmy Wales and Shoshana Zuboff.

There is nothing esoteric about our digital predicaments. Despite the technology industry’s claims, social media is media, and it should be held to the same standards we expect from traditional outlets. Digital infrastructure is infrastructure, and it should be governed in the public interest.

Policy, Guns and Money: COP26, Australia’s largest heroin seizure and the Facebook outage

The COP26 climate summit kicks off on Sunday in Glasgow, where it’s expected that leaders will bring bigger commitments to 2030 emissions-reduction targets and outline bolder climate policies. ASPI’s Robert Glasser and Anastasia Kapetas discuss Australia’s climate commitments going into the summit, and whether they are sufficient to address the impacts of climate change in Australia and our region. They also discuss the recent US Department of Defense climate risk analysis.

Australian authorities recently seized 450 kilograms of heroin, the largest shipment of the drug ever detected in Australia. ASPI’s John Coyne and Teagan Westendorf discuss the significance of this seizure and consider whether a seizure of this size leads to less product being available or less consumption.

Earlier this month, a global outage left users unable to access Facebook, Instagram, WhatsApp and Messenger for six to seven hours. Karly Winkler and Jocelinn Kang of ASPI’s International Cyber Policy Centre discuss the causes and impacts of the outage and the potential for such outages to affect critical infrastructure.

China’s information warfare darkens the doorstep of Twitter and Facebook

On Monday, Twitter and Facebook announced that they had taken down a network of accounts that were undertaking covert influence campaigns designed to undermine the protest movement in Hong Kong. In their statements, both companies were sufficiently confident to suggest that these activities were a ‘coordinated state-backed operation’ (Twitter) with ‘links to individuals associated with the Chinese government’ (Facebook). China is the authoritarian state that’s most technologically adept at controlling its domestic information environment. It appears that Beijing is now willing to disruptively contest the information domain elsewhere.

This is significant.

It’s well known that the Chinese Communist Party engages in extensive efforts to control and manipulate the political discourse within China through the Great Firewall, censorship on platforms such as WeChat and Weibo, and control of traditional domestic media. International concern about the CCP’s efforts to manipulate the information environment has focused on its attempts to mobilise diaspora populations, the party’s control over Chinese-language media, the development of international propaganda networks, and the possibility that Chinese social media apps may be used to covertly spread disinformation among diaspora populations. Until now, however, it has not extended to direct and deceptive attempts to manipulate social media audiences on mainstream international platforms.

A preliminary analysis of the 936 accounts that Twitter removed suggests that this was not a long-running, well-incubated influence operation. Across our sampling of these accounts, Chinese-language political commentary on Hong Kong appears to have started on 11 June, the day protesters blocked roads as the extradition bill was debated. Posts in English started on 1 July, the anniversary of the handover of Hong Kong from the British and the day protesters stormed the legislative chamber.

Many of the accounts in the Twitter dataset were set up some time ago, as far back as 2007 (giving them greater credibility under scrutiny from the platform), but were dormant for long periods. Some were completely inactive until a sudden spike of posting from mid-June. While many of these accounts originally posted in different languages, the two main languages used from 2017 onwards were English and Chinese. That suggests that the targeted audience may have been Hong Kong-based (with potential ripple effects through to mainland audiences) and international.
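As an illustration of how such a preliminary check might be run, the sketch below bins each account's posts by month and flags those that were near-silent before a sudden burst of activity. The record shape and the thresholds are assumptions for illustration only, not the actual schema of Twitter's released dataset.

```typescript
// A sketch of flagging accounts that were near-silent for years and then burst into activity.
// The record shape (userId, tweetTime) and the thresholds are assumptions for illustration,
// not the actual schema of the released dataset.
interface TweetRecord { userId: string; tweetTime: Date; }

const monthKey = (d: Date) =>
  `${d.getUTCFullYear()}-${String(d.getUTCMonth() + 1).padStart(2, '0')}`;

function flagDormantThenActive(records: TweetRecord[], burstThreshold = 50): string[] {
  // Count each account's tweets per calendar month.
  const counts = new Map<string, Map<string, number>>();
  for (const r of records) {
    const perMonth = counts.get(r.userId) ?? new Map<string, number>();
    const key = monthKey(r.tweetTime);
    perMonth.set(key, (perMonth.get(key) ?? 0) + 1);
    counts.set(r.userId, perMonth);
  }

  const flagged: string[] = [];
  for (const [userId, perMonth] of counts) {
    const months = [...perMonth.keys()].sort();
    const latestBurst = perMonth.get(months[months.length - 1]) ?? 0;
    const priorTotal = months.slice(0, -1)
      .reduce((sum, m) => sum + (perMonth.get(m) ?? 0), 0);
    // Crude heuristic: a large spike in the most recent month against a thin prior history.
    if (latestBurst >= burstThreshold && priorTotal < burstThreshold / 5) flagged.push(userId);
  }
  return flagged;
}
```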

Let’s take the account @Resilceale as an example. It was created in 2011 and tweeted in English about football until 2012, when it went dormant. It re-emerged in May 2019 tweeting vacuous comments in Chinese. From 1 July, the day the protesters broke into the legislative chamber, @Resilceale began rapidly posting overtly political content about Hong Kong.

The accounts themselves are not well-established personas. There’s a cross-section of spammy, click-bait-type accounts. We’ve also identified at least a couple of innocent bystanders that are likely to have been caught up in the suspensions because of their content (China, 5G, security) and because they purchased bot follower accounts. This suggests that some of the accounts used in this influence operation had been available for purchase.

There are other elements of the Twitter and Facebook announcements that suggest this operation was hastily initiated in response to the growing intensity of the protests in Hong Kong. Twitter’s statement notes that VPNs (virtual private networks, which enable users in China to get around the ban on using non-Chinese social media platforms) were used to run some of these accounts, but that others were accessed from specific unblocked IP addresses originating in mainland China. The use of dedicated infrastructure unhindered by the Great Firewall to manage this activity provides further support to the theory that this influence operation was state sponsored.

According to Facebook, which was tipped off by Twitter, the small network it identified (five accounts, seven pages, three groups) used a familiar disinformation playbook. The fake accounts posed as news outlets and posted content to groups to build their audience. They distributed targeted, emotive, bespoke content to shape sentiment and steer users out of Facebook into other online influence environments.

These are standard disinformation tactics, but they’re not ones we associate with China. This is direct intervention in a way that we haven’t seen from China before. Chinese state media do use mainstream Western platforms for influence, but they make no secret of it. Elements of the CCP’s propaganda apparatus even purchased promoted tweets attacking the protesters. Indeed, more commentary has focused on the efforts the Chinese authorities have put into kicking people off Twitter.

The association of the Chinese government with networks of coordinated inauthentic accounts on Western social media platforms adds a new layer in our understanding of Beijing’s approach to information warfare. It’s worth noting that regimes that are willing to target foreign adversaries with information warfare first hone their techniques on their domestic populations.

There were signals that efforts were being made to disrupt the Hong Kong protests. Telegram, one of the encrypted messaging apps that the protesters were using to organise, was hit by denial-of-service attacks in mid-June. Telegram founder Pavel Durov tweeted that the attacks came primarily from Chinese IP addresses. And around the same time, Hong Kong police arrested a student who acted as the administrator of a Telegram group with 27,000 members that shared information about the protests.

Chinese state media became increasingly vociferous in its criticism of the protesters after they broke into Hong Kong’s legislative council building. State media outlets asserted that the CIA was behind the protests, compared the protestors to terrorists, and circulated videos of PLA troops training for urban warfare and building up their numbers in Shenzhen, close to the Hong Kong border.

The announcements from Twitter and Facebook suggest that the Chinese government is willing to go beyond using its state media apparatus to shape the narrative. They suggest that Beijing has the capability and intent to use Western mainstream social media platforms to manipulate social media audiences at scale. Any state that wishes to defend the sovereignty of its information domain should take note.

Government shouldn’t rush social media regulation

The Morrison government has announced its intention to introduce new legislation into parliament this week to stop content like the video of the Christchurch massacre from proliferating on social media platforms.

The new law would impose heavy fines and up to three years’ imprisonment for executives of social media companies which fail to ‘expeditiously’ take down content flagged to them by the Office of the eSafety Commissioner.

The government’s desire to be seen to be responding swiftly and strongly to the abuse of social media platforms in the wake of the Christchurch tragedy is understandable.

Action on the abuse of social media platforms by terrorists and their supporters is clearly necessary, and regulation may well form one part of a multifaceted response. But rushed regulation is almost inevitably bad regulation.

The government is right to highlight the importance of this issue. But it’s precisely because it is important that, rather than trying to slam this legislation through in the last sitting week before the election, the government—both the current one and the next—should slow down and take the time to get it right.

Here are three things which regulators should consider.

First, conceptual clarity. What are we trying to achieve, and is this the best way to achieve it? It’s not clear from the information currently available whether the goal of the legislation is, for example, to prevent Christchurch-style events in which a single attack is broadcast and amplified globally at lightning speed, or to disrupt long-term terrorist propaganda and influence campaigns. These goals are related but distinct from one another and the best methods for achieving them differ. Google’s senior vice president for global affairs and chief legal officer, Kent Walker, has warned that the government’s proposal may be neither feasible nor an ‘appropriate model’ for managing extremist content online.

It’s also not clear how much consideration has been given to whether imposing criminal penalties on social media companies, and a regulatory focus on content, are better options than alternative approaches such as focusing on the users uploading the content. (An objection that many of these users might be outside Australia is valid, but that also applies to the many social media companies that have no physical presence in Australia.)

Second, technical feasibility. How will it work in practice, and is it really going to be an improvement on the current situation? The mechanism proposed in the new legislation, as far as we know, will involve Australia’s eSafety Commissioner notifying social media companies of ‘abhorrent violent material’ on their platforms, which the companies must then expeditiously take down.

At the height of the online storm which followed the Christchurch shooting, new copies of the video were being uploaded to YouTube at the rate of one per second. Hundreds of new accounts were created just to share it. Facebook says it took down 1.5 million copies in the first 24 hours after the attack.

It’s hard to imagine that the 45-person-strong Office of the eSafety Commissioner would have the capacity to identify, let alone issue notices on, anywhere near that volume of content. Even if they could, it’s not clear how issuing such notices would have helped social media companies respond more effectively, or if those notices would only have sucked up resources and time which could otherwise have gone to addressing the problem directly.

This doesn’t mean that regulation isn’t worth doing. What it does mean is that—coming back to conceptual clarity—we need to recognise what this kind of regulation is, and is not, good for in practice. The government’s proposed legislation would probably not have stopped the Christchurch massacre video from spreading as quickly as it did.

Third, regulators need to consider adverse consequences. I can already tell you how terrorists, extremists and their supporters will respond to laws like the one the government is proposing. Rather than uploading their content to mainstream platforms like Facebook and YouTube, they will upload it to third-party platforms over which Australia has no influence, and then continue to share links to that content on mainstream sites. I know they’ll do this, because it’s what they do already to circumvent the efforts of the mainstream platforms to automatically detect and remove such content. An increased crackdown by the big social media players will not take this content offline; it will simply disperse it more widely.

Again, the regulation may still be worthwhile, but—coming back to conceptual clarity and technical feasibility—we need to consider whether (further) fracturing extremist communications online is something we want to do, and whether it will make it technically easier or more difficult to achieve the ultimate goal of protecting the public.

There’s another major adverse consequence of the proposed law which the government must consider. If it passes, the legislation will give companies like Google and Facebook a very strong incentive to move as many senior executives as possible out of Australia. The government has complained about the social media companies sending their junior executives rather than decision-makers to a meeting on the response to the Christchurch attack held in Brisbane last week. If the government is unhappy with the level of senior representation in the country now, just wait until social media executives foolish enough to turn up in Australia are greeted by the prospect of jail time. The ultimate effect of this regulation could well be to make it much more difficult for the government to establish high-level contacts at the big social media companies.

The Morrison government is absolutely right about the necessity to address the abuse of online spaces by terrorist and extremist movements. The events surrounding the Christchurch massacre clearly call for an effective, powerful and coordinated response. Precisely because it is so important, however, this issue should not become hostage to the electoral cycle. There’s bipartisan support for the fight against online extremism and this is an issue on which the major parties can work together. The government, both now and after the election, needs to slow down, think clearly, consult widely and take the time to get this right.

Algorithms won’t stop the spread of live-streamed terror

In the wake of the Christchurch attacks, the internet giants Facebook, Google and Twitter have come under pressure for failing to prevent the killer from using their platforms to share his message, including live-streaming the shooting.

While some of this criticism may be deserved, what this episode really shows is that automated systems for detecting and removing terrorist content are still no match for humans determined to promote vile messages.

New Zealand Prime Minister Jacinda Ardern said on Sunday that she intended to ask Facebook how the terrorist was able to live-stream his attack on its platform for a full 17 minutes. In a matter of moments, the footage was ricocheting around the internet, not just on Facebook but on YouTube, Google, Twitter, Reddit and a host of smaller social media and video-sharing platforms.

‘I do think there are further questions to be answered’, Ardern said. ‘These social media platforms have wide reach. This is a problem which goes well beyond New Zealand … So whilst we might have seen action taken here, that hasn’t prevented [the video] being circulated beyond New Zealand’s shores.’

The social media giants have been quick to highlight their efforts to stop the footage from spreading. Facebook, which also owns Instagram and WhatsApp, said on Monday that it had removed over 1.5 million copies of the footage in just the first 24 hours after the attack, 80% of which were blocked immediately when the user attempted to upload them.

Google, which also owns YouTube, deleted the terrorist’s YouTube account before he was able to upload footage of the attack, but said the number of copies and related videos being uploaded was ‘unprecedented in scale and speed, at times as fast as a new upload every second’. In response, Google called in its ‘war room’ of crisis responders and removed tens of thousands of videos which were automatically flagged as containing even a 5% match to the footage. For the first time, the company even ‘broke’ parts of YouTube’s own search function to make it more difficult for viewers to find copies of the video.

Twitter has been the least forthcoming of the three major platforms about what action it has taken to prevent the footage from proliferating, simply saying that it is ‘continuously monitoring’ content uploaded to the platform. As of Tuesday morning, it was still possible to find links to the entire, uncensored footage on Twitter within seconds with a simple search.

Under the circumstances, it’s fair to ask whether platforms are really doing enough. At the same time, it’s important to recognise the scale and nature of the challenge they’re up against. We shouldn’t allow (justifiable) anger at the platforms and their fallible algorithms to distract us from where the lion’s share of the blame really lies: the human users who are actively uploading, promoting and sharing this content.

The killer wanted his message to go viral and, as a native of the dark corners of the internet where far-right extremism grows like poisonous mushrooms, he knew how to make it happen.

There is significant evidence that he planned his communications campaign well in advance of the attack. Moments before the shooting began, he posted an 18,300-word ‘manifesto’ on Twitter, uploading it multiple times across three platforms in a clear effort to make it more difficult to take down. (As of Tuesday morning, the manifesto was also still easily accessible online as the first result in a simple Google search, despite the apparent efforts of hosting providers like Document Cloud to take it down.)

The gunman announced his attack and shared the link to the live-stream on 8chan/pol/, one of the mouldy internet corners frequented by the kind of people who could be relied on to not only watch but gleefully promote his video and his message. ‘Please do your part by spreading my message, making memes and shitposting as you usually do’, he wrote.

And they did—or at least some of them did. Others were divided as to whether it was an attempt to entrap them in either a communist, FBI, Islamic or Jewish conspiracy, because that’s how people in these forums think.

Recordings of the initial live-stream were downloaded, saved and shared across a network of far-right and white nationalist forums. Users were uploading new versions faster than the platform providers could take them down. Google says that it deleted hundreds of YouTube accounts which were created after the shooting specifically to share the shooting footage or express sympathy with the perpetrator.

Some of the more tech-savvy users also took deliberate steps to circumvent the platform’s automated systems for recognising terrorist content. Facebook, Google, Twitter and a number of other major digital companies share ‘hashes’ (sort of like digital fingerprints) of problematic content via a specialised database. This is what Google was referring to when it said it was taking down videos with even a 5% match to the footage—its algorithms were comparing every new video to the hash of the original live-stream, and blocking everything with even a partial match.

But hashes are easy to break. Small alterations, such as skewing the video’s size or adding a watermark, can be enough to prevent the system from recognising the video, and that’s precisely what many of the uploaders did. Millions of copies may have been caught on upload, but far more clearly slipped through the net. Realistically, this footage will almost certainly never be entirely scrubbed from the web.
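To see why exact-match fingerprinting is so brittle, consider a minimal sketch: hash a file, flip a single byte, and hash it again; the two digests share nothing. (The hash-sharing systems the platforms actually use are designed to be more tolerant of small changes than a plain SHA-256 comparison, but the cat-and-mouse dynamic is the same: alter the file enough and the fingerprint no longer matches.)

```typescript
import { createHash } from 'node:crypto';

// Flip a single byte and the cryptographic digest changes completely, which is why
// exact-match fingerprinting is easy to defeat with trivial edits.
const sha256Hex = (data: Buffer) => createHash('sha256').update(data).digest('hex');

const original = Buffer.from('pretend these bytes are a video file');
const altered = Buffer.from(original); // copy the bytes...
altered[0] ^= 0x01;                    // ...then flip one bit in the first byte

console.log('original:', sha256Hex(original));
console.log('altered :', sha256Hex(altered));
// The two hex digests are unrelated, so a database of known hashes won't match the altered copy.
```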

This tragedy should serve as a wakeup call to the internet giants. The weaponisation of information networks is not new, but the digital architecture of the major platforms creates unprecedented opportunities for bad actors to spread their message across the globe in minutes, and to keep it out there indefinitely for as long as extremists continue to lurk, like colonies of mould, in the nooks and crannies of the internet.

And as this episode shows, for all their expertise and resources, the platforms’ automated systems are not yet enough to stop the spores from spreading.

Facebook’s and Google’s ad-blocking changes: why the national security community should care

Facebook and Google disclosed last month that they would be making changes to the ways users can interact with, monitor and block advertisements. An independent programmer pointed out that the proposed changes to Google’s Chrome browser would prevent the internet’s most popular and effective ad blockers from being able to block ads at all. It was then revealed that Facebook had effectively blocked tools that hide the ads we see in our newsfeeds or collect data about them.

There’s something much bigger at stake here than advertising revenue. These changes severely limit the ability of cybersecurity firms and other independent organisations to monitor and report on political influence campaigns, and therefore make it harder for the national security community to identify instances of foreign interference. Independent analysts have played a leading role in uncovering and dissecting the social media campaigns of foreign interests since the 2016 US election debacle. But, with these changes, Australia must brace for an election in which independent monitoring of political influence on social media will be next to impossible.

US-based independent newsroom ProPublica reported on 28 January that Facebook had made changes to its website code that prevented ProPublica’s browser extensions (small programs that integrate with web browsers to give added functionality) from discovering, tracking and reporting on the political ads that Facebook was showing its users. ProPublica, which has been curating an open database of political ads and the demographics they targeted, has an army of volunteers who monitor their Facebook feeds and the ads they contain using specially designed browser extensions.

Facebook has added code to the newsfeed that keeps extensions from interacting with ad menus, preventing automated monitoring of the ads that are being shown to users, as well as how and why they’re being shown. With this addition, tools such as ProPublica’s cease to function, unable to collect data crucial to monitoring political influence online. According to ProPublica, Facebook’s response to these concerns was that allowing automated clicks was a security risk and that this function could be used to ‘block’ ads. However, other common automated clicks (such as bot clicks liking a page, or clicking an ad to generate revenue) are unaffected.
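To make concrete what this kind of monitoring involves, here is a rough sketch of the sort of content script such an extension might run: scan the feed for posts marked as sponsored and watch for new ones as the user scrolls. The selector and the ‘Sponsored’ marker are hypothetical placeholders; the real newsfeed markup is obfuscated, changes frequently and, as described above, has now been altered specifically to defeat this kind of tooling.

```typescript
// A rough sketch of the kind of content script an ad-monitoring extension might run.
// The [role="article"] selector and the 'Sponsored' marker are hypothetical placeholders:
// the real newsfeed markup is obfuscated and changes frequently.
function collectSponsoredPosts(root: ParentNode): string[] {
  const found: string[] = [];
  for (const post of root.querySelectorAll<HTMLElement>('[role="article"]')) {
    if (post.innerText.includes('Sponsored')) {
      found.push(post.innerText.slice(0, 200)); // keep a short snippet for logging
    }
  }
  return found;
}

// Watch for posts added as the user scrolls and report anything that looks sponsored.
const observer = new MutationObserver(() => {
  const sponsored = collectSponsoredPosts(document);
  if (sponsored.length > 0) console.log('Possible sponsored posts:', sponsored);
});
observer.observe(document.body, { childList: true, subtree: true });
```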

Likewise, Google’s proposed changes will make it much harder to monitor ads on other websites and platforms for the 60% of total internet users who browse using Google Chrome. The changes would essentially remove the functionality that browser extensions rely on to block requests made by ad servers, replacing it with a system in which the final say on what a user sees is made by the website and browser.

Google claims that the change is about preserving user privacy. But regardless of the intention behind it, this change means that extensions won’t be able to examine or interact with data crucial for monitoring ads and their origins. By removing the ability for extensions to properly examine and filter web requests, Google will further erode internet transparency, essentially asking Chrome users to rely on the good word of websites as to what they contain, and limiting their capacity to peek behind the curtain to discover where what they see is coming from.
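For context, the functionality at issue is the blocking form of Chrome’s webRequest extension API, which lets an extension inspect each outgoing request and decide, in code, whether to allow it. A minimal sketch of that pattern, as it might appear in an extension’s background script, is below; the ad-server URL pattern is a placeholder, and real ad blockers apply long, regularly updated filter lists.

```typescript
// A sketch of the blocking webRequest pattern (a Manifest V2-era Chrome extension API):
// the extension sees every outgoing request and can log, analyse or cancel it in code.
// The URL pattern is a placeholder; real ad blockers apply long, regularly updated filter lists.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    console.log('Request to', details.url); // monitoring tools could record this
    return { cancel: true };                // ad blockers could refuse it outright
  },
  { urls: ['*://ads.example.com/*'] },
  ['blocking']
);
```

Under the proposed replacement model, an extension instead supplies declarative rules up front and the browser applies them itself, so the extension neither observes individual requests nor collects the data that monitoring tools depend on.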

Given these companies’ dominance of the advertising market and the opaque nature of the internet itself, their constraining of independent observers from monitoring how sponsored content is delivered on their platforms should not be taken lightly. These moves run directly counter to the sort of public accountability and transparency needed after the 2016 US election revelations and stand at odds with the efforts of US lawmakers who are attempting to increase the responsibility and accountability of these companies.

Closer to home, despite Australia’s efforts to constrain foreign influence in investments and donations, Facebook’s decision means that we must prepare for an election cycle in which political influence on social media occurs without independent oversight. And even if Facebook reversed that decision, Google’s changes would leave the world’s most popular browser unable to provide data to researchers anyway.

Considering the proven capacity of internet-based political advertising campaigns to serve as an effective vehicle for foreign interference, it’s important that analysts be aware of the power they are about to lose, and that the national security community be aware of how much harder identifying instances of foreign interference will be.

The collection of aggregate data might be the business of Facebook and Google, but monitoring the use of that data by political interests is crucial to the health of our political process. Regardless of whether one sees these changes as justified on the part of Google and Facebook, the negative impact they will have on the monitoring of political messaging on the internet is about to make the battle against foreign influence that much harder.