Fighting deepfakes: what’s next after legislation?

Deepfake technology is weaponising artificial intelligence in a way that disproportionately targets women, especially those in public-facing roles, compromising their dignity, safety, and ability to participate in public life. This digital abuse requires urgent global action: it not only infringes on human rights but also undermines women’s democratic participation.

Britain’s recent decision to criminalise explicit deepfakes is a significant step forward. It follows similar legislation passed in Australia last year and aligns with the European Union’s AI Act, which emphasises accountability. However, regulation alone is not enough; effective enforcement and international collaboration are essential to combat this growing and complex threat.

Britain’s legislation criminalising explicit deepfakes, part of the broader Crime and Policing Bill to be introduced to parliament, marks a pivotal step in addressing technology-facilitated gender-based violence. The move responds to a 400 percent rise in deepfake-related abuse since 2017, as reported by Britain’s Revenge Porn Helpline.

Deepfakes, which fabricate hyper-realistic content, often target women and girls, objectifying them and eroding their public engagement. By criminalising both the creation and sharing of explicit deepfakes, Britain’s law closes loopholes in earlier revenge porn legislation. It also places stricter accountability on platforms hosting these harmful images, reinforcing the message that businesses must play a role in combatting online abuse.

The EU has taken a complementary approach by introducing requirements for transparency in its recently adopted AI Act. The regulation does not ban deepfakes outright but mandates that creators disclose their artificial origins and provide details about the techniques used. This empowers consumers to better identify manipulated content. Furthermore, the EU’s 2024 directive on violence against women explicitly addresses cyberviolence, including non-consensual image-sharing, providing tools for victims to prevent the spread of harmful content.
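
To make the disclosure requirement concrete, the sketch below shows how a creator might attach a machine-readable label to a generated image. It is illustrative only: the AI Act does not prescribe a technical format, the field names are invented for this example, and real deployments would more likely use a provenance standard such as C2PA.

```python
# Minimal, hypothetical sketch: the AI Act mandates disclosure of artificial
# origin but does not prescribe this format. Field names are invented here.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src: str, dst: str, technique: str) -> None:
    """Embed a plain-text disclosure in a PNG's metadata."""
    image = Image.open(src)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")             # artificial origin
    metadata.add_text("generation-technique", technique)  # e.g. "diffusion model"
    image.save(dst, pnginfo=metadata)

def read_disclosure(path: str) -> dict:
    """Read the disclosure back; a viewer could surface this to users."""
    image = Image.open(path)
    image.load()  # ensure the PNG text chunks are parsed
    return dict(image.text)
```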

While these measures are robust, enforcement remains a challenge: national laws are fragmented, and deepfake abuse often transcends borders. The EU is working to harmonise its digital governance and promote AI transparency standards to mitigate these challenges.

In Asia, concern over deepfake technology is growing in countries such as South Korea, Singapore and especially Taiwan, where it not only targets individual women but is increasingly used as a tool for politically motivated disinformation. Similarly, in the United States and Pakistan, female lawmakers have been targeted with sexualised deepfakes designed to discredit and silence them. Italy’s Prime Minister Giorgia Meloni faced a similar attack but successfully brought the perpetrators to court.

Unfortunately, many countries still lack comprehensive legislation to combat deepfake abuse effectively, leaving individuals vulnerable, especially those without the resources and support to fight back. In the United States, for example, similar measures such as the Disrupt Explicit Forged Images and Non-Consensual Edits (Defiance) Bill and the Deepfake Accountability Bill remain stalled in the legislative pipeline.

Australia offers a strong example of legislative action. It faces similar challenges: deepfake abuse, affecting victims from underage students to politicians, has contributed to a chilling effect on women’s activity in public life. This abuse not only violates individual privacy but also deters other women from engaging publicly and pursuing leadership roles, weakening democratic representation.

In August 2024, Australia passed the Criminal Code Amendment (Deepfake Sexual Material) Act, penalising the sharing of non-consensual explicit material.

Formulating legislation is only the first step. To address this issue effectively, governments must enforce the regulation while ensuring that victims have accessible mechanisms to report abuse and seek justice. Digital literacy programs should be expanded to equip individuals with the tools to identify and report manipulated content. Schools and workplaces should incorporate online safety education to build societal resilience against deepfake threats.

Simultaneously, women’s representation in cybersecurity and technology governance needs to increase. Their participation in shaping policies and technologies ensures that the gendered dimensions of digital abuse are adequately addressed.

Although Meta recently decided to cut back on fact-checking, social media platforms need to be held to account for hosting and amplifying harmful content. Platforms must proactively detect and remove deepfakes while maintaining transparency about their AI applications and data practices. The EU AI Act’s transparency requirements serve as a reference point for implementing similar measures globally.

Ultimately, addressing deepfake abuse is about creating a safe and inclusive online space. As digital spaces transcend borders, the fight against deepfake abuse must be inherently global. Countries need to collaborate with international partners to establish shared enforcement mechanisms, harmonise legal frameworks and promote joint research on AI ethics and governance. Regional initiatives, such as the EU AI Act and the Association of Southeast Asian Nations’ guidelines for combatting fake news and disinformation, can serve as a means for building capacity in nations lacking the expertise or resources to tackle these challenges alone.

In a world where AI is advancing rapidly, combatting deepfake abuse is about more than regulating technology: it is about safeguarding human dignity, protecting democratic processes and ensuring that everyone, including women, can participate in society without fear of intimidation or harm. By working together, we can build a safer, more equitable digital environment for all.

Australia needs to consider global perspectives to weed out online deception and disinformation

Fallopia japonica, better known as Japanese Knotweed, is a highly invasive plant that forms dense thickets, outcompeting native vegetation.

Present-day disinformation is a lot like Japanese Knotweed. It takes just one post (or plant) to kick off an infestation. It spreads fast through continuously growing horizontal underground stems, and it’s really hard to eradicate.

Reflecting on the recent inaugural OECD conference addressing the global disinformation challenge, I saw parallels emerge between strategies to combat knotweed and strategies to combat disinformation.

The conference showcased international efforts akin to battling the pervasive and aggressive weed, with different nations sharing their models for managing the complex issue. Just as various methods, including herbicides and encapsulation, are employed against knotweed, governments, alongside academia, civil society and the private sector, must take a multi-pronged approach to control and prevent the spread of disinformation.

At many disinformation forums I’ve attended, the conversation has either admired the problem without resolving anything or focused narrowly on the technological components driving today’s accelerated spread of synthetic media and fake news. This OECD summit, by contrast, was not mere rumination but a focused exploration of practical solutions. Experts, policymakers and industry leaders from around the world converged on the theme of strengthening democracy through information integrity, and the event did not disappoint.

From Europe to the Asia-Pacific and across Latin America, disinformation has emerged as the most significant threat to societies and democracies. Next year, a record 3.2 billion people worldwide will vote in elections across 40 countries. This includes Taiwan, Indonesia, Pakistan and the US. These are consequential elections, the outcomes of which will set the tone for global events for years to come.

Events in Slovakia’s recent election clearly show the danger we already face. Its experience brings the stark reality of deepfakes and disinformation in elections to the fore, serving as a warning for the 40 countries getting ready to vote in 2024. This is not a theoretical concern; it demands our immediate attention.

It’s not just deepfakes we need to worry about. Generative AI combined with data mining is a real threat. In the same way personal data is used for micro-targeting to sell us stuff, it can also be used for personalised disinformation: creating persuasive narratives and convincing dialogue that engage us as individuals, manipulating our beliefs. It’s precise, relevant and fine-tuned to you.

The broad consensus at the conference was that social media incentive structures rewarding clicks over account and content authenticity are a deeply rooted element of the problem. Modern content creation and what makes news were widely discussed, with repeated calls for social media companies to make their algorithms transparent.

Opaque amplification models have created a murky world that benefits only the platforms, advertisers, content distributors and threat actors. This goes to the concept of freedom of speech versus freedom of reach, touted by many of the speakers.
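
To make that reach dynamic concrete, here is a toy sketch, in no way any platform’s actual ranking code and with scores invented for illustration, of how an engagement-only ranker amplifies fabricated content, and how weighting authenticity into the score changes what gets reach without removing any post:

```python
# Toy model, not any platform's real algorithm. Scores are invented:
# ranking purely by predicted clicks maximises the reach of fabricated
# outrage bait; weighting in authenticity changes the ordering.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # engagement model's score, 0..1
    authenticity: float      # 0 (fabricated) .. 1 (verified)

posts = [
    Post("Verified local news report", predicted_clicks=0.05, authenticity=0.95),
    Post("Fabricated outrage bait", predicted_clicks=0.40, authenticity=0.10),
]

# Engagement-only ranking ("freedom of reach"): outrage bait comes first.
engagement_feed = sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

# Authenticity-weighted ranking: the verified report is amplified instead
# (0.05 * 0.95 = 0.0475 beats 0.40 * 0.10 = 0.04), yet nothing is removed.
balanced_feed = sorted(posts, key=lambda p: p.predicted_clicks * p.authenticity,
                       reverse=True)

print([p.text for p in engagement_feed])
print([p.text for p in balanced_feed])
```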

One needn’t go far for a practical example. In Paris, where the conference was held, there’s been mass hysteria over a little bug. Not the cyber kind. Bed bugs. Would there be the same level of real-world panic if a few media posts hadn’t gone viral?

Unfortunately, no one at the conference saw a clear path to upending current incentive structures and obscured algorithms, even with regulation.

The call instead was for social media companies to be more transparent about their data to help researchers better understand social media networks, content distribution, recommendation algorithms and social impact. At least Meta has come to the party.

But a single post or click is not entirely the problem. The real dilemma lies in campaigns and narratives, often pushed in a coordinated and artificial way to sow discord and dissent. They attack the ideas underpinning democracy as well as institutions and individuals with privileged access.

How do we deal with this? Data and algorithmic transparency aside, one view often cited by speakers was the need for generational change: fostering a new breed of critical thinkers. Of course, this doesn’t address the immediate disinformation challenge we face. But it serves to build awareness over time, through media literacy and education in schools and workplaces, about disinformation techniques and targets.

There’s merit in this approach. From Senegal to Colombia, young people are concerned and want to tackle the problem.

The role of independent media and journalism also received much attention, with an emphasis on the need for robust domestic information sources. Though the era of traditional media’s information monopoly is over, there was a view that we could build a monopoly on quality information, prioritising it over quantity and speed.

Locally, the ABC is often criticised for bias. But upon hearing my Australian accent, conference attendees had nothing but praise for the fact that Australia has public service media. Built on values of integrity, respect, collegiality and innovation, the ABC has an implied responsibility to produce fact-based news content—the antithesis of disinformation.

Another approach is to recognise that technology is both a threat and an opportunity, able to generate and amplify content while also aiding in the detection and analysis of disinformation. The newly established Advanced Strategic Capabilities Accelerator’s first focus is on synthetic media and disinformation, indicating that the Australian Defence Force believes technology can be leveraged in information warfare as much as it poses a challenge.

Finally, there were examples aplenty of success found through coordinated government approaches. Underpinned by a commitment to democratic values, transparency, accountability and individual freedoms, several governments are working alongside civil society and the private sector through a central coordination body.

France’s Viginum agency has identified several instances of complex and persistent digital information manipulation campaigns, including some involving Russia. Canada established a protecting-democracy unit in its Privy Council Office, bringing together traditional intelligence and security agencies with government statisticians, communication experts and election agency staff to focus on disinformation undermining democratic institutions. Lithuania has a new crisis management centre to surge against a range of challenges, including disinformation.

The message is that government coordination is vital. Just as local councils are responsible for identifying knotweed infestations, raising awareness and implementing control measures, government plays a vital role in fighting the global disinformation menace.

But in Australia, there is no single responsible body. Instead, responsibility is spread across a myriad of agencies, including Home Affairs, DFAT, Defence, ACMA, ASIO and the ASD. The absence of a coordination body means there’s no centre of excellence that can align interested parties and move with agility. We desperately need this.

Representatives from around the world agree there is no single silver bullet for a problem with many dimensions. Teaching kids to think before they link, bolstering media transparency and regulating algorithms are ineffective on their own. We need to implement not one but all of these approaches, strategically and simultaneously.

Disinformation is proliferating largely unchecked across the digital terrain, infiltrating minds and landscapes, and devaluing the truth. It is the digital knotweed.

And just as knotweed erodes property values, disinformation erodes trust, distorts reality, and undermines the foundations of informed societies.

Total eradication of knotweed has proved elusive, and complete elimination of disinformation may be similarly improbable. But that’s no reason not to act.

We each have an individual responsibility to enable ad blockers, reject web cookies, and encourage our communities to be alert and alarmed by the digital infodemic. Through vigilance and persistent effort, as individuals and as a nation, taking actionable steps such as those proposed at the OECD disinformation conference, there’s hope for effective resistance.