AI restores voice by reading neck muscle movements

Published on: April 21, 2026 | Last updated: April 21, 2026

Researchers have developed an AI neck sensor that uses light to read muscle movements and turn silent speech into an audible voice.

The researchers said the technology could help patients who have lost their voices because of vocal cord disease or laryngeal surgery, which is surgery on the voice box.

They also suggested it could be used in industrial settings where microphones or radios are impractical, and for silent communication in places such as libraries or conference rooms.

The study was carried out by a team at Pohang University of Science and Technology in South Korea, led by Professor Sung-Min Park and Dr Sunguk Hong.

Professor Sung-Min Park said: “We hope this technology will accelerate the day when patients with speech disorders can reclaim their voices.

“It is a noteworthy technology because it has a wide range of potential applications, including assisting laryngectomised patients, communicating in noisy industrial environments, and even supporting silent conversations.”

The technology works by detecting tiny movements in the muscles and skin around the neck that happen when a person speaks.

The vocal cords produce sound, but nearby tissue also moves in patterns that can reveal what the speaker is trying to say.

To capture these subtle movements, the team created what it calls a “multiaxial strain mapping sensor”, a device that combines a miniature camera with small reference markers on soft silicone worn on the neck.

The sensor tracks microscopic skin movements and sends the data to an AI system, which estimates the words or sentences the wearer intends to say.

These are then paired with voice synthesis technology trained on the person’s vocal characteristics to recreate their own voice.
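The pipeline described above (camera-tracked markers on the skin, strain features extracted from their movement, an AI model estimating words, then voice synthesis) can be sketched in outline. The following is a minimal illustrative sketch only: the marker-tracking step, the feature set and the word classifier here are invented stand-ins for demonstration, not the team's published method.

```python
import numpy as np

def track_markers(frame_a, frame_b):
    """Estimate per-marker displacement between two camera frames.
    Each frame is an (n_markers, 2) array of marker pixel positions."""
    return frame_b - frame_a

def strain_features(displacements):
    """Collapse raw marker displacements into a simple strain-style
    feature vector: mean motion along each axis plus mean magnitude.
    (The real sensor maps strain along multiple axes; this is a toy proxy.)"""
    mean_xy = displacements.mean(axis=0)
    magnitude = np.linalg.norm(displacements, axis=1).mean()
    return np.array([mean_xy[0], mean_xy[1], magnitude])

class DummyWordClassifier:
    """Stand-in for the AI model that maps strain features to words."""
    def __init__(self, vocabulary):
        self.vocabulary = vocabulary

    def predict(self, features):
        # Toy rule: hash the feature vector into the vocabulary.
        # A real system would use a trained sequence model here.
        idx = int(abs(features.sum()) * 100) % len(self.vocabulary)
        return self.vocabulary[idx]

# Simulated frames: three markers, all shifted slightly by neck muscle motion.
frame_a = np.array([[10.0, 10.0], [20.0, 10.0], [15.0, 20.0]])
frame_b = frame_a + np.array([0.5, -0.3])

features = strain_features(track_markers(frame_a, frame_b))
word = DummyWordClassifier(["hello", "yes", "no"]).predict(features)
print(word)
```

In the actual system, the predicted words would then be passed to a voice synthesiser trained on the wearer's own vocal characteristics, which is what lets the output sound like the speaker rather than a generic voice.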

Existing voice restoration technologies have typically relied on biological signals such as electromyography, which measures electrical activity in muscles, or electroencephalography, which records brain activity.

However, these approaches have been harder to use in everyday life because they often require complex equipment and can be uncomfortable to wear.

The team said experiments showed its sensor-based approach could reconstruct speech with high accuracy even in noisy settings such as factories.
