Homepage | MoSIS Lab @ UTK

Zhuohang is presenting the paper at CVPR 2022 in New Orleans.

June, 2022

Our MobiCom 2021 paper "BioFace-3D: Continuous 3D Facial Reconstruction Through Lightweight Single-ear Biosensors" is highlighted in GetMobile.

June, 2022

Congratulations to Zhuohang for receiving the UTK EECS Gonzalez Outstanding GRA Award!

Apr., 2022

Our MobiCom 2021 paper "BioFace-3D: Continuous 3D Facial Reconstruction Through Lightweight Single-ear Biosensors" has been selected as an ACM SIGMOBILE Research Highlight for 2022.

Apr., 2022

Our work Fair and Privacy-Preserving Alzheimer's Disease Diagnosis Based on Spontaneous Speech Analysis via Federated Learning has been accepted to IEEE EMBC 2022.

In this work, we propose the first federated-learning-based approach to automatic AD diagnosis via spontaneous speech analysis while ensuring the subjects' data privacy. Extensive experiments under various federated learning settings on the ADReSS challenge dataset show that the proposed model can achieve high accuracy for AD detection while preserving privacy. To ensure fairness of the model performance across clients in federated settings, we further deploy fair aggregation mechanisms, particularly q-FedAvg and q-FedSGD, which greatly reduce the algorithmic biases caused by data heterogeneity among the clients. (Led by Irfan).
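The fair-aggregation idea can be illustrated with a minimal sketch (this is a simplification of q-FedAvg, not the paper's implementation; the function name and the loss-based reweighting rule here are illustrative):

```python
import numpy as np

def q_fedavg_weights(client_losses, q=1.0):
    """Sketch of q-FedAvg-style reweighting: clients with higher local
    loss receive proportionally larger aggregation weight, so no client
    is left far behind. q = 0 recovers plain uniform averaging."""
    losses = np.asarray(client_losses, dtype=float)
    raw = losses ** q
    return raw / raw.sum()

# With q = 1, a client whose loss is twice the others' gets twice the weight.
weights = q_fedavg_weights([1.0, 2.0, 1.0], q=1.0)
```

Larger q pushes the aggregate further toward the worst-off clients, trading a little average accuracy for fairness across clients.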

Apr., 2022

Our work Privacy-preserving Speech-based Depression Diagnosis via Federated Learning has been accepted to IEEE EMBC 2022.

In this work, we demonstrate for the first time that speech-based depression diagnosis models can be trained in a privacy-preserving way using federated learning, which enables collaborative model training while keeping the private speech data decentralized on clients' devices. Extensive experiments under various FL settings on the DAIC-WOZ dataset show that our FL model can achieve high performance without sacrificing much utility compared with centralized-learning approaches, while ensuring users' speech data privacy. (Led by Yue).

Apr., 2022

Yi is presenting the BioFace-3D paper at ACM MobiCom 2021 in New Orleans.

Mar., 2022

Our work Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage has been accepted to CVPR 2022.

In this work, we validate that the private training data can still be leaked under certain defense settings with a new type of leakage, i.e., Generative Gradient Leakage (GGL). Unlike existing methods that only rely on gradient information to reconstruct data, our method leverages the latent space of generative adversarial networks (GAN) learned from public image datasets as a prior to compensate for the informational loss during gradient degradation. We hope the proposed method can serve as a tool for empirically measuring the amount of privacy leakage to facilitate the design of more robust defense mechanisms. (Led by Zhuohang).
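The core of the latent-space idea can be sketched in a toy setting (assumptions: a fixed random linear map stands in for the pretrained GAN generator, and the leaked gradient is modeled as a linear observation of the private datum; none of this is the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: G plays the role of a pretrained generator, and the
# "leaked gradient" is a linear observation of the private datum.
G = rng.normal(size=(16, 4))         # hypothetical generator (latent dim 4)
x_private = G @ rng.normal(size=4)   # private datum on the generator's manifold
g_obs = x_private                    # observed gradient signal (toy model)

def matching_loss(z):
    """Distance between the signal induced by G(z) and the observed one."""
    return float(np.sum((G @ z - g_obs) ** 2))

# Optimize over the low-dimensional latent code z instead of raw pixels;
# the generator acts as the prior that compensates for lost information.
z = np.zeros(4)
for _ in range(500):
    z -= 0.01 * (2.0 * G.T @ (G @ z - g_obs))  # analytic gradient step on z
x_reconstructed = G @ z
```

Because the search space is the generator's latent space rather than pixel space, the reconstruction stays on the manifold of plausible images even when the gradient signal is degraded.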

Mar., 2022

Our work Invisible and Efficient Backdoor Attacks for Compressed Deep Neural Networks has been accepted to ICASSP 2022.

In this paper, we study the feasibility of practical backdoor attacks against compressed DNNs. More specifically, we propose a universal adversarial perturbation (UAP)-based approach that achieves both high attack stealthiness and high attack efficiency simultaneously.

Feb., 2022

Our work Robust Continuous Authentication Using Cardiac Biometrics from Wrist-worn Wearables has been accepted to the IEEE Internet of Things Journal (IoT-J).

We devised a low-cost system that can exploit users’ pulsatile signals from photoplethysmography (PPG) sensors in commodity wearable devices to perform continuous authentication. Our system requires zero user effort and applies to practical scenarios that have non-clinical PPG measurements with human motion artifacts (MA).

Nov., 2021

Our work mPose: Environment- and Subject-Agnostic 3D Skeleton Posture Reconstruction Leveraging a Single mmWave Device has been accepted to IEEE/ACM CHASE 2021.

This paper proposes a low-cost contactless skeleton posture reconstruction system, mPose, which can reconstruct a user’s 3D skeleton postures using a single mmWave device. Particularly, the system extracts multidimensional spatial information from mmWave signals which characterizes the skeleton postures in a 3D space. To mitigate the impacts of environmental changes, mPose dynamically detects the user location and extracts spatial features from the mmWave signals reflected only from the user.

Oct., 2021

Our work Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates has been accepted to IEEE ICPADS 2021.

In this paper, we propose to mitigate the failures and attacks in federated learning systems from a spatial-temporal perspective. Specifically, we use a clustering-based method to detect and exclude incorrect updates by leveraging their geometric properties in the parameter space. Moreover, to further handle malicious clients with time-varying behaviors, we propose to adaptively adjust the learning rate according to momentum-based update speculation. (Led by Zhuohang).
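The spatial part of the idea, excluding updates that are geometric outliers in parameter space, can be sketched as follows (a simplified distance-to-median filter, not the paper's clustering method; all names are illustrative):

```python
import numpy as np

def filter_client_updates(updates, thresh=4.0):
    """Keep only updates close to the coordinate-wise median of all
    client updates; average the survivors. Returns (aggregate, mask)."""
    U = np.asarray(updates, dtype=float)
    center = np.median(U, axis=0)               # robust geometric center
    dist = np.linalg.norm(U - center, axis=1)   # each update's deviation
    mask = dist <= thresh * np.median(dist)     # flag geometric outliers
    return U[mask].mean(axis=0), mask

# Four benign clients near [1, 1] and one Byzantine client far away.
updates = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.05, 0.95], [100.0, -100.0]]
agg, mask = filter_client_updates(updates)  # mask[-1] is False: excluded
```

The temporal component described above (momentum-based update speculation) would sit on top of such a filter, adapting the learning rate when a client's behavior changes over rounds.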

Sep., 2021

Our work Robust Detection of Machine-induced Audio Attacks in Intelligent Audio Systems with Microphone Array has been accepted to ACM CCS 2021.

This paper builds a holistic solution for detecting machine-induced audio attacks leveraging multi-channel microphone arrays on modern intelligent audio systems. We utilize magnitude and phase spectrograms of multi-channel audio to extract spatial information and leverage a deep learning model to detect the fundamental difference between human speech and adversarial audio generated by the playback machines. Moreover, we adopt an unsupervised domain adaptation training framework to further improve the model’s generalizability in unseen acoustic environments. (Led by Zhuohang).

Sep., 2021

Our work Time to Rethink the Design of Qi Standard? Security and Privacy Vulnerability Analysis of Qi Wireless Charging has been accepted to ACSAC 2021.

In this paper, we conducted the first thorough study to explore the potential security and privacy vulnerabilities of Qi wireless charging. We demonstrated that due to the open propagation characteristic of electromagnetic signals, the Qi communication channel can be easily hijacked by injecting malicious Qi messages through stealthy placement of an adversarial coil on the charger. Additionally, an adversary is capable of snooping Qi messages transmitted between the wireless charger and the charging device to further detect and identify the device’s activities while being charged. (Led by Yi).

Aug., 2021

Our work BioFace-3D: Continuous 3D Facial Reconstruction Through Lightweight Single-ear Biosensors has been accepted to ACM MobiCom 2021.

In this paper, we propose the first single-earpiece lightweight biosensing system, BioFace-3D, that can unobtrusively, continuously, and reliably sense the entire facial movements, track 2D facial landmarks, and further render 3D facial animations. After training, our BioFace-3D can directly perform continuous 3D facial reconstruction from the biosignals, without any visual input. Without requiring a camera positioned in front of the user, this paradigm shift from visual sensing to biosensing would introduce new opportunities in many emerging mobile and IoT applications. (Led by Yi).

Aug., 2021

Our work Face-Mic: Inferring Live Speech and Speaker Identity via Subtle Facial Dynamics Captured by AR/VR Motion Sensors has been accepted to ACM MobiCom 2021.

In this work, we show a serious privacy risk of using voice interfaces while the user is wearing the face-mounted AR/VR devices. Specifically, we design an eavesdropping attack, Face-Mic, which leverages speech-associated subtle facial dynamics captured by zero-permission motion sensors in AR/VR headsets to infer highly sensitive information from live human speech, including speech content, speaker gender, and identity.

Aug., 2021

Irfan joins the MoSIS lab. Welcome!

Aug., 2021

Yue joins the MoSIS lab. Welcome!

Aug., 2021

Our work Spearphone: A Speech Privacy Exploit via Accelerometer-Sensed Reverberations from Smartphone Loudspeakers has been accepted to ACM WiSec 2021.

In this paper, we build a speech privacy attack that exploits speech reverberations generated from a smartphone's inbuilt loudspeaker captured via a zero-permission motion sensor (accelerometer). We demonstrate that speech reverberations from inbuilt loudspeakers, at an appropriate loudness, can impact the accelerometer, leaking sensitive information about the speech.

May, 2021

Our work Enabling Fast and Universal Audio Adversarial Attack Using Generative Model has been accepted to AAAI 2021.

In this paper, we propose the fast audio adversarial perturbation generator (FAPG), which uses a generative model to generate adversarial perturbations for the audio input in a single forward pass, thereby drastically improving the perturbation generation speed. Built on top of FAPG, we further propose the universal audio adversarial perturbation generator (UAPG), a scheme to craft a universal adversarial perturbation that can be imposed on arbitrary benign audio input to cause misclassification.

Dec., 2020

Our work EchoVib: Exploring Voice Authentication via Unique Non-Linear Vibrations of Short Replayed Speech has been accepted to ACM AsiaCCS 2021.

In this paper, we propose EchoVib, a novel voice-based authentication system, showing that the vibrations generated from a person's speech and captured via the accelerometer on a smartphone are unique and can be used to identify the speaker and thereby reject voice synthesis attacks.

Oct., 2020

Our work HVAC: Evading Classifier-based Defenses in Hidden Voice Attacks has been accepted to ACM AsiaCCS 2021.

In this paper, we proposed a more advanced hidden voice attack, HVAC, which can bypass existing learning-based defense classifiers while preserving all the essential characteristics of hidden voice attacks (e.g., unintelligible to humans, recognizable to machines). Specifically, we proposed a fusion-based method to combine the normal sample and corresponding obfuscated sample as a hybrid command for bypassing these defense classifiers. (Led by Yi).

Oct., 2020

Our work BatComm: Enabling Inaudible Acoustic Communication with High-throughput for Mobile Devices has been accepted to ACM SenSys 2020.

In this work, we proposed a high-throughput and inaudible acoustic communication system for mobile devices, capable of throughput 12× higher than the contemporary state of the art in mobile acoustic communication.

Sep., 2020

Our work AdvPulse: Universal, Synchronization-free, and Targeted Audio Adversarial Attacks via Subsecond Perturbations has been accepted to ACM CCS 2020.

In this work, we proposed AdvPulse, a practical adversarial audio attack against intelligent audio systems in the scenario where the system takes streaming audio inputs (e.g., live human speech). Unlike existing attacks that require the adversary to have prior knowledge of the entire audio input, we generated input-agnostic universal subsecond audio adversarial perturbations that can be injected anywhere in the streaming audio input. (Led by Zhuohang).

Sep., 2020

Zhuohang is presenting the paper at ACM HotMobile 2020.

Mar., 2020

Zhuohang received the one-time UTK EECS Fellowship Award! Congratulations!

Feb., 2020

One paper has been accepted to ICASSP 2020.

In this paper, we propose the first real-time, universal, and robust adversarial attack against the state-of-the-art deep neural network (DNN) based speaker recognition system. By adding an audio-agnostic universal perturbation to an arbitrary enrolled speaker's voice input, the attack makes the DNN-based speaker recognition system identify the speaker as any targeted (i.e., adversary-desired) speaker label.

Jan., 2020

One paper has been accepted to IEEE Transactions on Mobile Computing (IEEE TMC).

We propose the first low-cost sign language gesture recognition system that can differentiate fine-grained finger movements using the photoplethysmography (PPG) and motion sensors in commodity wearables.

Dec., 2019

Our work Practical Adversarial Attacks Against Speaker Recognition Systems has been accepted to ACM HotMobile 2020.

In this paper, we propose a practical adversarial attack against the state-of-the-art speaker recognition system. By adding a well-crafted inconspicuous noise to the original audio, our attack can fool the speaker recognition system into making false predictions and even force the audio to be recognized as any adversary-desired speaker. Moreover, our attack integrates the estimated room impulse response (RIR) into the adversarial example training process toward practical audio adversarial examples that remain effective while being played over the air in the physical world. (Led by Zhuohang).
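The RIR step amounts to convolving the candidate adversarial audio with an estimated impulse response inside the training loop, so the perturbation is optimized against how it will actually sound after playback (a minimal sketch; the two-tap RIR below is made up for illustration, not estimated from a real room):

```python
import numpy as np

def simulate_over_air(audio, rir):
    """Approximate over-the-air playback: the microphone hears the
    source signal convolved with the room impulse response."""
    return np.convolve(audio, rir)[: len(audio)]

audio = np.array([1.0, 0.0, 0.5, 0.0])
rir = np.array([1.0, 0.0, 0.3])   # direct path plus one delayed echo
received = simulate_over_air(audio, rir)
```

Running the recognizer on `received` rather than `audio` during optimization is what lets the resulting adversarial example survive physical playback.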

Dec., 2019

Three papers have been accepted to IEEE INFOCOM'20.

The three papers are about using PPG sensors, mmWave, or WiFi signals to capture humans' unique behavioral and physiological characteristics (e.g., respiratory, heartbeat, and gait patterns) for continuous user authentication.

Dec., 2019

Yi is presenting the paper at IEEE DySPAN'19.

Nov., 2019

Our work Defeating Hidden Audio Channel Attacks on Voice Assistants via Audio-Induced Surface Vibrations has been accepted to ACSAC'19.

In this work, we show that hidden voice commands that mimic the voice features of normal commands, while remaining incomprehensible to humans, can be detected by comparing their speech features in the vibration domain with a sufficient degree of accuracy.

Aug., 2019

Our work Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples has been accepted to IEEE DySPAN'19.

In this paper, we propose a semi-black-box adversarial attack that can embed malicious voice commands into audio clips; these embedded commands can be recognized by the ASR system Kaldi while remaining unnoticeable to humans. (Led by Yi).

Aug., 2019

Yi joins the MoSIS lab. Welcome!

Aug., 2019

Zhuohang joins the MoSIS lab. Welcome!

Aug., 2019

MoSIS Lab website is up!

Aug., 2019