Tag Archive for: neurotechnology

It’s not too late to regulate persuasive technologies

Social media companies such as TikTok have already revolutionised the use of technologies that maximise user engagement. At the heart of TikTok’s success are a predictive algorithm and other extremely addictive design features—or what we call ‘persuasive technologies’. 
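
To make that mechanism concrete, here is a minimal, purely illustrative sketch of an engagement-maximising feed: a trained model predicts engagement signals for each candidate clip, and the feed simply ranks by a weighted engagement score. The class names, signals and weights below are our own illustrative assumptions, not TikTok's actual system.

```python
# Minimal, hypothetical sketch of an engagement-maximising feed.
# The model outputs, features and weights are illustrative; real
# recommender systems are far larger, but the optimisation target
# is the same: rank content by predicted engagement.

from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    predicted_watch_seconds: float  # output of a trained engagement model
    predicted_share_rate: float     # probability the user shares the clip

def engagement_score(clip: Clip) -> float:
    # Weighted blend of engagement signals; the weights are tuning
    # knobs chosen to maximise time-on-app, not informed choice.
    return clip.predicted_watch_seconds + 30.0 * clip.predicted_share_rate

def build_feed(candidates: list[Clip], size: int = 10) -> list[Clip]:
    # The feed is just the top-scoring clips: whatever the model
    # predicts will hold attention longest comes first.
    return sorted(candidates, key=engagement_score, reverse=True)[:size]

if __name__ == "__main__":
    candidates = [
        Clip("a", predicted_watch_seconds=12.0, predicted_share_rate=0.01),
        Clip("b", predicted_watch_seconds=45.0, predicted_share_rate=0.05),
        Clip("c", predicted_watch_seconds=8.0, predicted_share_rate=0.20),
    ]
    for clip in build_feed(candidates, size=3):
        print(clip.clip_id, round(engagement_score(clip), 1))
```

Nothing in the loop asks whether the ranking serves the user; that design choice is what makes such systems persuasive rather than merely predictive.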

But TikTok is only the tip of the iceberg. 

Prominent Chinese tech companies are developing and deploying powerful persuasive tools for the Chinese Communist Party's propaganda, military and public security services, and many of them have already become global leaders in their fields. The persuasive technologies they use are digital systems that shape users' attitudes and behaviours by exploiting physiological and cognitive reactions or vulnerabilities.

These systems span generative artificial intelligence, neurotechnology (including wearable devices and brain–computer interfaces) and ambient technologies. The rapidly advancing tech industry to which these Chinese companies belong is embedded in a political system and ideology that compels companies to align with CCP objectives, driving the creation and use of persuasive technologies for political purposes, at home and abroad.

This means China is developing cutting-edge innovations while directing their use towards maintaining regime stability at home, reshaping the international order abroad, challenging democratic values, and undermining global human rights norms. As we argue in our new report, ‘Persuasive technologies in China: Implications for the future of national security’, many countries and companies are working to harness the power of emerging technologies with persuasive characteristics, but China and its technology companies pose a unique and concerning challenge. 

Regulation is struggling to keep pace with these developments, and we need to act quickly to protect ourselves and our societies. Over the past decade, swift technological development and adoption have outpaced responses by liberal democracies, highlighting the urgent need for more proactive approaches that prioritise privacy and user autonomy. This means protecting and enhancing users' ability to make conscious and informed decisions about how they interact with technology, and for what purpose.

When TikTok's use started spreading like wildfire, it took many observers by surprise. Until then, most had assumed that a successful social media recommendation algorithm required a free internet to gather the diverse data set needed to train the model. It was difficult to fathom how a platform modelled after its Chinese twin, Douyin, and developed under some of the world's toughest information restrictions, censorship and tech regulations, could become one of the world's most popular apps.

Few people had considered the national security implications of social media before its use became ubiquitous. In many countries, the regulations that followed are still inadequate, in part because of the lag between the technology and the legislative response. These regulations don’t fully address the broader societal issues caused by current technologies, which are numerous and complex. Further, they fail to appropriately tackle the national security challenges of emerging technologies developed and controlled by authoritarian regimes. Persuasive technologies will make these overlapping challenges increasingly complex. 

The companies highlighted in the report provide some examples of how persuasive technologies are already being used towards national goals—developing generative AI tools that can enhance the government’s control over public opinion; creating neurotechnology that detects, interprets and responds to human emotions in real time; and collaborating with CCP organs on military-civil fusion projects. 

Most of our case studies focus on domestic uses directed primarily at surveillance and the manipulation of public opinion, as well as at enhancing China's dual-use technology capabilities. But they offer glimpses of how Chinese tech companies and the party-state might deploy persuasive technologies offshore in the future, increasingly in support of an agenda that seeks to reshape the world to better fit its national interests.

Persuasive technologies achieve influence through a more direct connection with intimate physiological and emotional reactions than previous technologies could. The threat is that people's choices about their actions are steered, or removed entirely, without their full awareness. Such technologies won't just shape what we do; they have the potential to influence who we are.

As with social media, the ethical application of persuasive technologies largely depends on the intent of those designing, building, deploying and ultimately controlling the technology. They have positive uses when they align with users’ interests and enable people to make decisions autonomously. But if applied unethically, these technologies can be highly damaging. Unintentional impacts are bad enough, but when deployed deliberately by a hostile foreign state, they could be so much worse. 

The national security implications of technologies that are designed to drive users towards certain behaviours are already becoming clear. In the future, persuasive technologies will become even more sophisticated and pervasive, with the consequences increasingly difficult to predict. Accordingly, the policy recommendations set out in our report focus on preparing for, and countering, the potential malicious use of the next generation of persuasive technologies. 

Emerging persuasive technologies will challenge national security in ways that are difficult to forecast, but we can already see enough indicators to prompt us to take a stronger regulatory stance. 

We still have time to regulate these technologies, but that time is running out for both governments and industry. We must act now.

Editors’ picks for 2021: ‘The big promises and potentially bigger consequences of neurotechnology’

Originally published 28 October 2021.

In September, Chile became the first state in the world to pass legislation regulating the use of neurotechnology. The ‘neuro-rights’ law aims to protect mental privacy, free will of thought and personal identity.

The move comes amid both growing excitement and growing concern about the potential applications of neurotechnology for everything from defence to health care to entertainment.

Neurotechnology is an umbrella term for a range of technologies which interact directly with the brain or nervous system. This can include systems which passively scan, map or interpret brain activity, or systems which actively influence the state of the brain or nervous system.
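
To illustrate the 'passive' end of that spectrum, the sketch below estimates alpha-band (8–12 Hz) power from one window of simulated EEG and maps it to a coarse mental state. The sampling rate, threshold and state labels are illustrative assumptions; real systems use many electrodes, artefact removal, per-user calibration and trained models.

```python
# Illustrative sketch of a passive neurotechnology pipeline:
# read a window of EEG, estimate alpha-band (8-12 Hz) power,
# and interpret it as a coarse mental state. All numbers are
# made up for the example; real systems are far more involved.

import numpy as np

FS = 256  # assumed sampling rate in Hz, typical of consumer EEG headsets

def alpha_band_power(window: np.ndarray, fs: int = FS) -> float:
    # Power spectrum via FFT, then total power in the 8-12 Hz band,
    # which tends to rise when a user is relaxed with eyes closed.
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(power[band].sum())

def interpret(window: np.ndarray, threshold: float = 1e5) -> str:
    # Threshold chosen arbitrarily for the demo; a deployed system
    # would calibrate per user and classify with a trained model.
    return "relaxed" if alpha_band_power(window) > threshold else "alert"

if __name__ == "__main__":
    t = np.arange(FS) / FS  # one second of simulated "EEG"
    relaxed = 50.0 * np.sin(2 * np.pi * 10 * t)        # strong 10 Hz rhythm
    alert = np.random.default_rng(0).normal(0, 2, FS)  # broadband noise
    print(interpret(relaxed), interpret(alert))        # relaxed alert
```

An 'active' system would close the loop, using such a reading to trigger stimulation or feedback that changes the brain state it just measured.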

Governments and the private sector alike are pouring money into research on neurotechnology, in particular into the viability and applications of brain–computer interfaces (BCIs), which allow users to control computers with their thoughts. While the field is still in its infancy, it is advancing at a rapid pace, creating technologies that only a few years ago would have seemed like science fiction.

The implications of these technologies are profound. When fully realised, they have the potential to reshape the most fundamental and most personal element of human experience: our thoughts.

Technological development and design are never neutral. We encode values into every piece of technology we create. The immensely consequential nature of neurotechnology means it's crucial for us to think early and often about the way we're constructing it, and the types of systems we do—and don't—want to build.

A major driver behind research on neurotechnology by governments is its potential applications in defence and combat settings. Unsurprisingly, the United States and China are leading the pack in the race towards effective military neurotechnology.

The US’s Defense Advanced Research Projects Agency (DARPA) has poured many millions of dollars of funding into neurotechnology research over multiple decades. In 2018, DARPA announced a program called ‘next-generation nonsurgical neurotechnology’, or N3, to fund six separate, highly ambitious BCI research projects.

Individual branches of the US military are also developing their own neurotechnology projects. For example, the US Air Force is working on a BCI which will use neuromodulation to alter mood, reduce fatigue and enable more rapid learning.

In comparison to DARPA’s decades of interest in the brain, China’s focus on neurotechnology is relatively recent but advancing rapidly. In 2016, the Chinese government launched the China Brain Project, a 15-year scheme intended to bring China level with and eventually ahead of the US and EU in neuroscience research. In April, Tianjin University and state-owned giant China Electronics Corporation announced they are collaborating on the second generation of ‘Brain Talker’, a chip designed specifically for use in BCIs. Experts have described China’s efforts in this area as an example of civil–military fusion, in which technological advances serve multiple agendas.

Australia is also funding research into neurotechnology for military applications. For example, at the Army Robotics Expo in Brisbane in August, researchers from the University of Technology Sydney demonstrated a vehicle which could be remotely controlled via brainwaves. The project was developed with $1.2 million in funding through the Department of Defence.

Beyond governments, the private-sector neurotechnology industry is also picking up steam; 2021 is already a record year for funding of BCI projects. Estimates put the industry at US$10.7 billion globally in 2020, and it’s expected to reach US$26 billion by 2026.
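
Taken at face value, those figures imply a compound annual growth rate of about 16% a year, since (26 / 10.7)^(1/6) ≈ 1.16.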

In April, Elon Musk’s Neuralink demonstrated a monkey playing Pong using only brainwaves. Gaming company Valve is teaming up with partners to develop a BCI for virtual-reality gaming. After receiving pushback on its controversial trials of neurotechnology on children in schools, BrainCo is now marketing a mood-altering headband.

In Australia, university researchers have worked with biotech company Synchron to develop the Stentrode, a BCI that can be implanted via the jugular vein and allows patients with limb paralysis to use digital devices. It is now undergoing human clinical trials in Australia and the US.

The combination of big money, big promises and, potentially, big consequences should have us all paying attention. The potential benefits from neurotechnology are immense, but they are matched by enormous ethical, legal, social, economic and security concerns.

In 2020, researchers conducted a meta-review of the academic literature on the ethics of BCIs. They identified eight specific ethical concerns: user safety; humanity and personhood; autonomy; stigma and normality; privacy and security (including cybersecurity and the risk of hacking); research ethics and informed consent; responsibility and regulation; and justice. Of these, autonomy, and responsibility and regulation, received the most attention in the existing literature. The researchers also argued that the potential psychological impacts of BCIs on users need to be considered.

While Chile is the first and so far only country to legislate on neurotechnology, groups such as the OECD are looking seriously at the issue. In 2019 the OECD Council adopted a recommendation on responsible innovation in neurotechnology which aimed to set the first international standard to drive ethical research and development of neurotechnology. Next month, the OECD and the Council of Europe will hold a roundtable of international experts to discuss whether neurotechnologies need new kinds of human rights.

In Australia, the interdisciplinary Australian Neuroethics Network has called for a nationally coordinated approach to the ethics of neurotechnology and has proposed a neuroethics framework.

These are the dawning days of neurotechnology. Many of the crucial breakthroughs to come may not yet be so much as a twinkle in a scientist’s eye. That makes now the ideal moment for all stakeholders—governments, regulators, industry and civil society—to be thinking deeply about the role neurotechnology should play in the future, and where the limits should be.
