
Unesco has adopted global ethical standards for neurotechnology, addressing what experts describe as a “wild west” sector driven by AI and the rise of consumer devices.
The UN body’s move is the latest in an international effort to regulate technology that uses data from the brain and nervous system, as investment in the field accelerates.
The standards define “neural data” as a new category requiring protection, offering more than 100 recommendations ranging from personal rights to speculative scenarios — such as companies using neurotechnology to market to people during dreams.
“There is no control,” said Unesco’s chief of bioethics, Dafna Feinholz.
“We have to inform the people about the risks, the potential benefits, the alternatives, so that people have the possibility to say ‘I accept, or I don’t accept’.”
Feinholz said the standards were shaped by two key developments: artificial intelligence, which enables decoding of brain data, and the proliferation of consumer devices such as earbuds that claim to read brain activity and glasses that track eye movements.
“Neurotechnology has the potential to define the next frontier of human progress, but it is not without risks,” said Unesco’s director general, Audrey Azoulay.
She said the new standards would “enshrine the inviolability of the human mind”.
Billions of dollars have been invested in neurotechnology in recent years — from Sam Altman’s stake in Merge Labs, a competitor to Elon Musk’s Neuralink, to Meta’s development of a wristband that reads wrist muscle signals, letting users control phones or the company’s AI Ray-Ban glasses.
The wave of funding has spurred calls for regulation.
The World Economic Forum released a paper last month advocating a privacy-focused framework, and in the US, senator Chuck Schumer introduced the Mind Act in September, following four states that have passed laws protecting “neural data” since 2024.
Unesco’s standards highlight the need for “mental privacy” and “freedom of thought”.
However, sceptics argue that much of the legislation is motivated by fear and could hinder medical research.
“What’s happening with all this legislation is fear. People are afraid of what this technology is capable of. The idea of neurotech reading people’s minds is scary,” said Kristen Mathews, a lawyer who works on mental privacy issues at US firm Cooley.
Neurotechnology has existed for over a century.
The electroencephalogram (EEG) was invented in 1924, and brain-computer interfaces were developed in the 1970s.
The latest surge of investment has been driven by advances in AI, which allow vast amounts of brain data to be analysed — including brainwave activity.
“The thing that has enabled this technology to present perceived privacy issues is the introduction of AI,” said Mathews.
Some AI-powered neurotech developments could prove medically transformative, helping treat conditions such as Parkinson’s disease and amyotrophic lateral sclerosis (ALS), a progressive disease in which the nerve cells controlling movement degenerate.
Research published this summer described an AI-powered brain-computer interface that decoded speech from the brain activity of a patient with paralysis.
Other studies suggest AI could one day reconstruct images people are concentrating on.
The Mind Act warns that AI and “vertical corporate integration” in neurotechnology could result in “cognitive manipulation” and “erosion of personal autonomy”.
“I’m not aware of any company that’s doing any of this stuff. It’s not going to happen. Maybe two decades from now,” Mathews said.
She argues that defining “neural data” as a single category may be too broad.
“That’s the type of thing that we would want to address — monetising, behavioural advertising, using neural data.
“But the laws that are out there, they’re not getting at the stuff we’re worried about. They’re more amorphous.”
The current frontier of neurotechnology lies in improving brain-computer interfaces — still in their infancy despite recent breakthroughs — and in the growth of consumer-oriented devices that raise privacy issues highlighted by the new Unesco standards.