Cornell researchers have developed a wearable – or “listenable” – earpiece that bounces sound off the cheeks and transforms the echoes into an avatar of a person’s entire moving face, using acoustic technology to provide better privacy.
A team led by Cheng Zhang, assistant professor of information science, and François Guimbretière, professor of information science, designed the system, named EarIO. It transmits facial movements to a smartphone in real time and is compatible with commercially available headsets for hands-free, wireless video conferencing.
Devices that track facial movements using a camera are “big, heavy and power-hungry, which is a big problem for wearable devices,” Zhang said. “As important, they capture a lot of private information.”
Face tracking using acoustic technology can provide better privacy, affordability, comfort and battery life, he said.
The team described the earpiece in “EarIO: A Low-power Acoustic Sensing Earable for Continuously Tracking Detailed Facial Movements,” published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
The EarIO works like a ship sending out sonar pings. A speaker on each side of the earpiece sends acoustic signals to the sides of the face, and a microphone picks up the echoes. When wearers speak, smile, or raise their eyebrows, the skin moves and stretches, changing the echo profiles. A deep learning algorithm developed by the researchers continuously processes the data and translates the changing echoes into complete facial expressions.
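The sonar analogy can be made concrete with a minimal sketch of echo-profile sensing: emit a known sweep, cross-correlate it with what the microphone records, and read off where the echoes land. This is an illustrative simplification, not the paper's actual signal-processing pipeline; the sample rate, sweep band, and simulated 40-sample echo delay are assumptions for the demo.

```python
import numpy as np

def chirp(fs, duration, f0, f1):
    """Generate a linear frequency sweep -- the 'ping' the speaker emits."""
    t = np.arange(int(fs * duration)) / fs
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration)))

def echo_profile(tx, rx):
    """Cross-correlate the transmitted sweep with the recorded audio.
    Peaks mark echo arrival times; as facial skin moves and stretches,
    the pattern of peaks shifts, which is the signal a model can learn from."""
    corr = np.correlate(rx, tx, mode="valid")
    return corr / (np.linalg.norm(tx) * np.linalg.norm(rx) + 1e-12)

fs = 48_000                                # assumed sample rate
tx = chirp(fs, 0.005, 16_000, 20_000)      # 5 ms high-frequency sweep (assumed band)
# Simulate a recording where the sweep bounces back after 40 samples:
rx = np.concatenate([np.zeros(40), tx, np.zeros(100)])
profile = echo_profile(tx, rx)
delay = int(np.argmax(profile))            # strongest echo arrival, in samples
print(delay)
```

In a real system the echo profiles from successive pings, not the raw delays, would be fed to the learned model that maps them to facial expressions.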
“Using the power of AI, the algorithm finds intricate connections between muscle movements and facial expressions that human eyes cannot identify,” said co-author Ke Li, a doctoral student in the field of information science. “We can use it to infer complex information that’s harder to capture – the whole front of the face.”
Previous efforts by Zhang’s lab to track facial movements using headphones with a camera recreated the entire face based on cheek movements as seen from the ear.
By collecting sound instead of data-rich images, the earpiece can communicate with a smartphone over a wireless Bluetooth connection, keeping user information private. With images, the device would have to connect to a Wi-Fi network and send data to the cloud, which could leave it vulnerable to hackers.
“People may not realize how smart wearables are, what that information says about you, and what companies can do with that information,” Guimbretière said. With facial images, someone could also infer emotions and actions. “The goal of this project is to ensure that all information, which is very valuable for your privacy, is always under your control and calculated locally.”
Using acoustic signals also requires less power than recording images, and the EarIO uses 1/25 the power of a camera-based system the Zhang lab developed previously. Currently, the device lasts about three hours on a wireless earbud battery, but future research will focus on extending that usage time.
The researchers tested the device on 16 participants and used a smartphone camera to verify the accuracy of its facial tracking. Early experiments show it works while users are seated and walking around, and that wind, road noise, and background conversation don’t interfere with its acoustic signaling.
In future versions, the researchers hope to improve the earpiece’s ability to tune out nearby noises and other disturbances.
“The acoustic detection method we use is very sensitive,” said co-author Ruidong Zhang, a doctoral student in the field of information science. “It’s good, because it’s able to track very subtle movements, but it’s also bad because when something changes in the environment, or when your head moves slightly, we pick it up too.”
A limitation of the technology is that before first use, the EarIO must collect 32 minutes of facial data to train the algorithm. “We hope to eventually make this device plug and play,” Zhang said.
Ke Li et al, EarIO: A Low-power Acoustic Sensing Earable for Continuously Tracking Detailed Facial Movements, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2022). DOI: 10.1145/3534621
Citation: Wearable device uses sonar to reconstruct facial expressions (July 19, 2022). Retrieved July 19, 2022 from https://techxplore.com/news/2022-07-wearable-device-sonar-reconstruct-facial.html
This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.