BioFace-3D: Continuous 3D Facial Reconstruction Through Lightweight Single-ear Biosensors


Introduction

Over the last decade, facial landmark tracking and 3D reconstruction have gained considerable attention due to their numerous applications, such as human-computer interaction, facial expression analysis, and emotion recognition. Traditional approaches require users to be confined to a particular location and to face a camera under constrained recording conditions (e.g., without occlusions and under good lighting). This highly restricted setting prevents them from being deployed in many application scenarios involving human motion. In this paper, we propose the first single-earpiece lightweight biosensing system, BioFace-3D, that can unobtrusively, continuously, and reliably sense the full range of facial movements, track 2D facial landmarks, and further render 3D facial animations. Our single-earpiece biosensing system takes advantage of cross-modal transfer learning to transfer the knowledge embodied in a high-grade visual facial landmark detection model to the low-grade biosignal domain. After training, our system can perform continuous 3D facial reconstruction directly from biosignals, without any visual input. By removing the need for a camera positioned in front of the user, this paradigm shift from visual sensing to biosensing opens up new opportunities in many emerging mobile and IoT applications. Extensive experiments involving 16 participants under various settings demonstrate that BioFace-3D can accurately track 53 major facial landmarks with only 1.85 mm average error and 3.38% normalized mean error, which is comparable to most state-of-the-art camera-based solutions. The rendered 3D facial animations, which are consistent with the real facial movements, also validate the system's capability for continuous 3D facial reconstruction.
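For reference, the normalized mean error (NME) reported above is typically computed as the mean per-landmark Euclidean error divided by a normalizing distance. Below is a minimal sketch, assuming the common inter-ocular-distance convention; the paper's exact normalization term may differ, and the eye-corner indices are hypothetical:

```python
import numpy as np

def nme_percent(pred, gt, left_eye=20, right_eye=25):
    """Normalized mean error between predicted and ground-truth landmarks.

    pred, gt: (53, 2) arrays of 2D landmark coordinates (pixels).
    left_eye, right_eye: hypothetical indices of the outer eye corners.
    """
    per_point = np.linalg.norm(pred - gt, axis=1)           # error per landmark
    inter_ocular = np.linalg.norm(gt[left_eye] - gt[right_eye])
    return per_point.mean() / inter_ocular * 100.0          # as a percentage
```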


BioFace-3D Design

  • To circumvent the limitations of existing approaches, this paper provides a wearable biosensing system that can unobtrusively, continuously, and reliably sense the full range of facial movements, track 2D facial landmarks, and further render 3D facial animations by fitting a 3D head model to the 2D facial landmarks (a pose-fitting sketch follows Figure 1).
  • The proposed system has two phases: a training phase, in which the system uses biosignals and visual information in a supervised manner to learn a real-time mapping from the biosignal stream to facial landmarks, and a testing phase, in which the well-trained biosignal network works independently to perform continuous 3D facial reconstruction without any visual input (a training-loop sketch follows this list).
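A minimal sketch of the training phase, written in PyTorch: a pretrained visual landmark detector acts as the teacher on synchronized video frames, while a small biosignal network (the student) learns to regress the same 53 landmarks from multi-channel biosignal windows. The architecture, window length, and channel count below are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

NUM_CHANNELS, WINDOW_LEN, NUM_LANDMARKS = 5, 200, 53  # assumed: 5 electrodes

class BiosignalNet(nn.Module):
    """Student: maps a biosignal window to 53 (x, y) landmark coordinates."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, NUM_LANDMARKS * 2)

    def forward(self, x):                       # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1)).view(-1, NUM_LANDMARKS, 2)

def train_step(student, optimizer, bio_window, frame, teacher):
    # The visual teacher runs only at training time to label each window;
    # at test time the student works alone, with no camera input.
    with torch.no_grad():
        target = teacher(frame)                 # (batch, 53, 2) visual landmarks
    pred = student(bio_window)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time only `BiosignalNet` is needed; the visual teacher is discarded.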


Figure 1: BioFace-3D system overview.
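As a concrete illustration of the final step, fitting a 3D head model to tracked 2D landmarks, the sketch below uses a standard perspective-n-point solve in OpenCV. The six-point 3D template and pinhole intrinsics are illustrative assumptions; the paper's full head-model fitting may use a different formulation:

```python
import numpy as np
import cv2

# A few landmarks of a generic 3D face template, in model coordinates (mm).
model_points = np.array([
    [  0.0,   0.0,   0.0],    # nose tip
    [  0.0, -63.6, -12.5],    # chin
    [-43.3,  32.7, -26.0],    # left eye outer corner
    [ 43.3,  32.7, -26.0],    # right eye outer corner
    [-28.9, -28.9, -24.1],    # left mouth corner
    [ 28.9, -28.9, -24.1],    # right mouth corner
], dtype=np.float64)

def fit_head_pose(image_points, frame_w, frame_h):
    """image_points: (6, 2) tracked 2D landmarks matching model_points."""
    focal = frame_w                             # rough pinhole assumption
    camera_matrix = np.array([[focal, 0, frame_w / 2],
                              [0, focal, frame_h / 2],
                              [0,     0,           1]], dtype=np.float64)
    dist_coeffs = np.zeros(4)                   # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                                  camera_matrix, dist_coeffs)
    return rvec, tvec                           # head rotation and translation
```

The recovered rotation and translation drive the rendered head pose, while the per-landmark residuals drive the expression of the 3D model.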


Prototype Design

  • The BioFace-3D wearable device is customized to the dimensions of the user's head and the user's preferred earpiece side. BioFace-3D uses an ADS1299-based bio-amplifier circuit (i.e., OpenBCI) and five biosensors (i.e., Ag/AgCl surface electrodes) that attach to the user's face (an acquisition sketch follows this list).
  • All components of the headset are manufactured by 3D printing in polylactic acid (PLA).
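A minimal acquisition sketch, assuming an ADS1299-based OpenBCI Cyton board accessed through the BrainFlow SDK; the serial port and the five-channel selection are assumptions for illustration:

```python
import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"             # assumed port; adjust per machine

board_id = BoardIds.CYTON_BOARD.value
board = BoardShim(board_id, params)
board.prepare_session()
board.start_stream()

try:
    time.sleep(1.0)                             # let ~1 s of samples accumulate
    data = board.get_board_data()               # (rows, samples) array
    exg_rows = BoardShim.get_exg_channels(board_id)
    window = data[exg_rows[:5], :]              # first 5 electrode channels
    print("biosignal window shape:", window.shape)
finally:
    board.stop_stream()
    board.release_session()
```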


Figure 2: BioFace-3D prototype.


Potential Applications

  • Enabling a fully immersive user experience by increasing the awareness of the user’s real-time facial expressions and emotional states in virtual reality (VR) scenarios.
  • Tracking facial expressions or speech-related facial movements during the COVID-19 pandemic, when people wear face masks in their daily activities.
  • Long-term facial monitoring for driver fatigue detection and student engagement evaluation in virtual learning environments.

Figure 3: Potential applications enabled by BioFace-3D.


Reconstruction Results

Representative reconstruction results of BioFace-3D when the wearer performs different expressions are shown below. The reconstructed facial landmarks are derived by BioFace-3D from biosignals only, while the ground truth is captured by a state-of-the-art camera-based solution.

Expressions shown: Anger, Contempt, Sad, Happy, and Surprise.


Demo Video (Photo-realistic Animation Rendering)

Expressions shown: Smile and Fear.


Demo Video (Rendered 3D Facial Animations)


Contact

Team Members

  • Yi Wu - University of Tennessee, Knoxville
  • Vimal Kakaraparthi - University of Colorado Boulder
  • Zhuohang Li - University of Tennessee, Knoxville
  • Tien Pham - University of Texas at Arlington
  • Xiande Zhang - University of Tennessee, Knoxville
  • Tianhao Wu - University of Tennessee, Knoxville
  • Phuc Nguyen - University of Texas at Arlington
  • Jian Liu - University of Tennessee, Knoxville

Paper

