Homepage | MoSIS Lab @ UTK

Our work SmarCyPad: A Smart Seat Pad for Cycling Fitness Tracking Leveraging Low-cost Conductive Fabric Sensors has been accepted to UbiComp 2023.

In this paper, we propose an innovative smart seat pad that can continuously and unobtrusively track five cycling-specific metrics: cadence, per-leg stability, leg strength balance, riding position, and knee joint angle. Specifically, we embed conductive fabric sensors in the seat pad to sense the pressure exerted on the bike's seat by the cyclist's gluteal muscles. We develop a series of signal processing algorithms to estimate the pedaling period from the sensed pressure signal and further derive the cycling cadence, per-leg stability, and leg strength balance. Additionally, we leverage a deep learning model to detect the cyclist's riding position and reconstruct the cyclist's knee joint angles via linear regression. (Led by Yi and Luis).
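The core idea of deriving cadence from a periodic pressure signal can be illustrated with a minimal autocorrelation sketch. This is not the paper's actual pipeline; the function name, sampling rate, and synthetic signal below are illustrative assumptions.

```python
import numpy as np

def estimate_cadence(pressure, fs):
    """Estimate pedaling cadence (RPM) from a seat-pad pressure signal.

    Illustrative autocorrelation sketch, not the paper's actual algorithm.
    pressure: 1-D pressure samples; fs: sampling rate in Hz.
    """
    x = pressure - np.mean(pressure)
    # Autocorrelation; the first strong peak after lag 0 marks the pedaling period.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    # Search beyond a minimum plausible lag (i.e., cadence below ~200 RPM).
    min_lag = int(fs * 60 / 200)
    lag = min_lag + int(np.argmax(ac[min_lag:]))
    period_s = lag / fs           # duration of one full pedal revolution
    return 60.0 / period_s        # revolutions per minute

# Synthetic 90-RPM pressure signal sampled at 100 Hz
fs = 100
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * (90 / 60) * t)
print(estimate_cadence(signal, fs))  # close to 90
```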

Jul., 2023

Yue is presenting our RecUP-FL paper at AsiaCCS'23 in Melbourne, Australia.

Jul., 2023

MoSIS & UT students at S&P/Oakland '23!

May, 2023

Yi is presenting our VR keystroke-snooping paper at S&P/Oakland 2023 in San Francisco.

May, 2023

Our work DroneChase: A Mobile and Automated Cross-Modality System for Continuous Drone Tracking has been accepted to DRONET 2023.

This paper presents DroneChase, an automated sensing system that monitors acoustic and visual signals captured from a nearby flying drone to track its trajectory in both line-of-sight and non-line-of-sight conditions under mobility settings. Inspired by the human ability to localize objects in the environment using both visual and auditory signals, we develop a mobile system that integrates the information from multiple modalities into a reference scenario and performs real-time drone detection and trajectory monitoring.

May, 2023

Congratulations to Yi for receiving UTK EECS Gonzalez Outstanding GRA Award!

May, 2023

2023 Senior Design Showcase

May, 2023

Had an amazing bowling event @ Maple Hall.

Mar., 2023

Our work Privacy Leakage via Unrestricted Motion-Position Sensors in the Age of Virtual Reality: A Study of Snooping Typed Input on Virtual Keyboards has been accepted to IEEE S&P 2023.

In this paper, we find that accessing most on-board sensors through VR SDKs/APIs requires no security permission, exposing a huge attack surface for an adversary to steal the user's private information. We validate this vulnerability by developing malware programs and malicious websites and specifically explore to what extent it exposes the user's information in the context of keystroke snooping. Extensive experiments demonstrate that our proof-of-concept attack can recognize the user's virtual typing with over 89.7% accuracy. We hope this study will help the community gain awareness of the vulnerability in the sensor management of current VR systems and provide insights to facilitate the future design of more comprehensive and restrictive sensor access control mechanisms. (Led by Yi).

Mar., 2023

Our work Speech Privacy Leakage from Shared Gradients in Distributed Learning has been accepted to IEEE ICASSP 2023.

In this paper, we explore methods for recovering private speech/speaker information from the shared gradients in distributed learning settings. We conduct experiments on a keyword spotting model with two different types of speech features to quantify the amount of leaked information by measuring the similarity between the original and recovered speech signals. We further demonstrate the feasibility of inferring various levels of side-channel information, including speech content and speaker identity, under the distributed learning framework without accessing the user’s data. (Led by Zhuohang).
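Quantifying leakage by comparing original and recovered waveforms can be sketched with a simple similarity metric. Cosine similarity here is a hypothetical stand-in; the paper's actual metrics may differ.

```python
import numpy as np

def waveform_similarity(original, recovered):
    """Cosine similarity between an original and a recovered waveform.

    A hypothetical stand-in for the similarity measures used to quantify
    how much speech information the shared gradients leak.
    """
    a = np.asarray(original, dtype=float)
    b = np.asarray(recovered, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

orig = np.sin(np.linspace(0, 20, 1000))
noisy = orig + 0.1 * np.random.default_rng(0).standard_normal(1000)
print(waveform_similarity(orig, orig))          # identical signals: ~1.0
print(waveform_similarity(orig, noisy) > 0.9)   # a close reconstruction scores high
```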

Mar., 2023

Our work RecUP-FL: Reconciling Utility and Privacy in Federated learning via User-configurable Privacy Defense has been accepted to ACM AsiaCCS 2023.

In this paper, we seek to reconcile utility and privacy in FL by proposing a user-configurable privacy defense, RecUP-FL, that can better focus on the user-specified sensitive attributes while obtaining significant improvements in utility over traditional defenses. Moreover, we observe that existing inference attacks often rely on a machine learning model to extract the private information (e.g., attributes). We thus formulate such a privacy defense as an adversarial learning problem, where RecUP-FL generates slight perturbations that can be added to the gradients before sharing to fool adversary models. To improve the transferability to un-queryable black-box adversary models, inspired by the idea of meta-learning, RecUP-FL forms a model zoo containing a set of substitute models and iteratively alternates between simulations of the white-box and the black-box adversarial attack scenarios to generate perturbations. (Led by Yue).

Jan., 2023

BioFace-3D has been selected for the 2023 UTRF Maturation Grant Award!

Jan., 2023

Our work HeatDeCam: Detecting Hidden Spy Cameras via Thermal Emissions has been accepted to ACM CCS 2022.

Unlawful video surveillance of unsuspecting individuals using spy cameras has become an increasing concern. To mitigate these threats, there are both commercial products and research prototypes designed to detect hidden spy cameras in household and office environments. However, existing work often relies heavily on user expertise and only applies to wireless cameras. To bridge this gap, we propose HeatDeCam, a thermal-imagery-based spy camera detector, capable of detecting hidden spy cameras with or without built-in wireless connectivity. To reduce the reliance on user expertise, HeatDeCam leverages a compact neural network deployed on a smartphone to recognize unique heat dissipation patterns of spy cameras.

Aug., 2022

Our work RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN has been accepted to ECCV 2022.

In this paper, we propose to study and develop Robust and Imperceptible Backdoor Attack against Compact DNN models (RIBAC). By performing systematic analysis and exploration of the important design knobs, we propose a framework that can learn the proper trigger patterns, model parameters, and pruning masks in an efficient way, thereby achieving high trigger stealthiness, a high attack success rate, and high model efficiency simultaneously.

Jul., 2022

Irfan is presenting the paper at EMBC 2022 in the UK.

Jul., 2022

Congratulations to Yue for being selected as a Student Paper Competition Finalist (North America) at EMBC 2022.

Jul., 2022

Our work Audio-domain Position-independent Backdoor Attack via Unnoticeable Triggers has been accepted to ACM MobiCom 2022.

In this work, we explore the severity of audio-domain backdoor attacks and demonstrate their feasibility under practical scenarios of voice user interfaces, where an adversary injects (plays) an unnoticeable audio trigger into live speech to launch the attack. To realize such attacks, we consider jointly optimizing the audio trigger and the target model in the training phase, deriving a position-independent, unnoticeable, and robust audio trigger. We design new data poisoning techniques and penalty-based algorithms that inject the trigger into randomly generated temporal positions in the audio input during training, rendering the trigger resilient to any temporal position variations. We further design an environmental sound mimicking technique to make the trigger resemble unnoticeable situational sounds and simulate played over-the-air distortions to improve the trigger’s robustness during the joint optimization process.

Jun., 2022

Zhuohang is presenting our Generative Gradient Leakage paper at CVPR 2022 in New Orleans.

Jun., 2022

Our MobiCom 2021 paper "BioFace-3D: Continuous 3D Facial Reconstruction Through Lightweight Single-ear Biosensors" is highlighted in GetMobile.

Jun., 2022

Congratulations to Zhuohang for receiving UTK EECS Gonzalez Outstanding GRA Award!

Apr., 2022

Our MobiCom 2021 paper "BioFace-3D: Continuous 3D Facial Reconstruction Through Lightweight Single-ear Biosensors" has been selected as a 2022 ACM SIGMOBILE Research Highlight.

Apr., 2022

Our work Fair and Privacy-Preserving Alzheimer's Disease Diagnosis Based on Spontaneous Speech Analysis via Federated Learning has been accepted to IEEE EMBC 2022.

In this work, we propose the first federated-learning-based approach for achieving automatic AD diagnosis via spontaneous speech analysis while ensuring the subjects' data privacy. Extensive experiments under various federated learning settings on the ADReSS challenge dataset show that the proposed model can achieve high accuracy for AD detection while preserving privacy. To ensure fairness of the model performance across clients in federated settings, we further deploy fair aggregation mechanisms, particularly q-FedAvg and q-FedSGD, which greatly reduce the algorithmic biases due to the data heterogeneity among the clients. (Led by Irfan).

Apr., 2022

Our work Privacy-preserving Speech-based Depression Diagnosis via Federated Learning has been accepted to IEEE EMBC 2022.

In this work, we demonstrate for the first time that speech-based depression diagnosis models can be trained in a privacy-preserving way using federated learning, which enables collaborative model training while keeping the private speech data decentralized on clients' devices. Extensive experiments under various FL settings on the DAIC-WOZ dataset show that our FL model can achieve a high performance without sacrificing much utility compared with centralized-learning approaches while ensuring users' speech data privacy. (Led by Yue).

Apr., 2022

Yi is presenting the BioFace-3D paper at ACM MobiCom 2021 in New Orleans.

Mar., 2022

Our work Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage has been accepted to CVPR 2022.

In this work, we validate that the private training data can still be leaked under certain defense settings with a new type of leakage, i.e., Generative Gradient Leakage (GGL). Unlike existing methods that only rely on gradient information to reconstruct data, our method leverages the latent space of generative adversarial networks (GANs) learned from public image datasets as a prior to compensate for the information loss during gradient degradation. We hope the proposed method can serve as a tool for empirically measuring the amount of privacy leakage to facilitate the design of more robust defense mechanisms. (Led by Zhuohang).

Mar., 2022

Our work Invisible and Efficient Backdoor Attacks for Compressed Deep Neural Networks has been accepted to ICASSP 2022.

In this paper, we study the feasibility of practical backdoor attacks on compressed DNNs. More specifically, we propose a universal adversarial perturbation (UAP)-based approach to achieve both high attack stealthiness and high attack efficiency simultaneously.

Feb., 2022

Our work Robust Continuous Authentication Using Cardiac Biometrics from Wrist-worn Wearables has been accepted to the IEEE Internet of Things Journal (IEEE IoT-J).

We devised a low-cost system that can exploit users’ pulsatile signals from photoplethysmography (PPG) sensors in commodity wearable devices to perform continuous authentication. Our system requires zero user effort and applies to practical scenarios that have non-clinical PPG measurements with human motion artifacts (MA).

Nov., 2021

Our work mPose: Environment- and Subject-Agnostic 3D Skeleton Posture Reconstruction Leveraging a Single mmWave Device has been accepted to IEEE/ACM CHASE 2021.

This paper proposes a low-cost contactless skeleton posture reconstruction system, mPose, which can reconstruct a user’s 3D skeleton postures using a single mmWave device. Particularly, the system extracts multidimensional spatial information from mmWave signals which characterizes the skeleton postures in a 3D space. To mitigate the impacts of environmental changes, mPose dynamically detects the user location and extracts spatial features from the mmWave signals reflected only from the user.

Oct., 2021

Our work Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates has been accepted to IEEE ICPADS 2021.

In this paper, we propose to mitigate the failures and attacks in federated learning systems from a spatial-temporal perspective. Specifically, we use a clustering-based method to detect and exclude incorrect updates by leveraging their geometric properties in the parameter space. Moreover, to further handle malicious clients with time-varying behaviors, we propose to adaptively adjust the learning rate according to momentum-based update speculation. (Led by Zhuohang).
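The geometric filtering idea above can be sketched with a simplified distance-based stand-in for the paper's clustering step; the function name, threshold, and toy data are illustrative assumptions.

```python
import numpy as np

def filter_updates(updates, keep_frac=0.8):
    """Exclude geometrically outlying client updates before averaging.

    A simplified stand-in for clustering in parameter space: keep the
    updates closest to the coordinate-wise median, drop the rest.
    """
    u = np.asarray(updates, dtype=float)   # shape: (clients, params)
    center = np.median(u, axis=0)          # robust reference point
    dist = np.linalg.norm(u - center, axis=1)
    keep = np.argsort(dist)[: max(1, int(len(u) * keep_frac))]
    return u[keep].mean(axis=0), sorted(keep.tolist())

# Nine honest clients near [1, 1]; one Byzantine client far away.
honest = [[1.0, 1.0]] * 9
byzantine = [[100.0, -100.0]]
agg, kept = filter_updates(honest + byzantine)
print(kept)  # the Byzantine client (index 9) is excluded
print(agg)   # aggregate stays near [1.0, 1.0]
```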

Sep., 2021

Our work Robust Detection of Machine-induced Audio Attacks in Intelligent Audio Systems with Microphone Array has been accepted to ACM CCS 2021.

This paper builds a holistic solution for detecting machine-induced audio attacks leveraging multi-channel microphone arrays on modern intelligent audio systems. We utilize magnitude and phase spectrograms of multi-channel audio to extract spatial information and leverage a deep learning model to detect the fundamental difference between human speech and adversarial audio generated by the playback machines. Moreover, we adopt an unsupervised domain adaptation training framework to further improve the model’s generalizability in unseen acoustic environments. (Led by Zhuohang).

Sep., 2021

Our work Time to Rethink the Design of Qi Standard? Security and Privacy Vulnerability Analysis of Qi Wireless Charging has been accepted to ACSAC 2021.

In this paper, we conducted the first thorough study to explore the potential security and privacy vulnerabilities of Qi wireless charging. We demonstrated that due to the open propagation characteristic of electromagnetic signals, the Qi communication channel can be easily hijacked by injecting malicious Qi messages through stealthy placement of an adversarial coil on the charger. Additionally, an adversary is capable of snooping Qi messages transmitted between the wireless charger and the charging device to further detect and identify the device’s activities while being charged. (Led by Yi).

Aug., 2021

Our work BioFace-3D: Continuous 3D Facial Reconstruction Through Lightweight Single-ear Biosensors has been accepted to ACM MobiCom 2021.

In this paper, we propose the first single-earpiece lightweight biosensing system, BioFace-3D, that can unobtrusively, continuously, and reliably sense the entire facial movements, track 2D facial landmarks, and further render 3D facial animations. After training, our BioFace-3D can directly perform continuous 3D facial reconstruction from the biosignals, without any visual input. Without requiring a camera positioned in front of the user, this paradigm shift from visual sensing to biosensing would introduce new opportunities in many emerging mobile and IoT applications. (Led by Yi).

Aug., 2021

Our work Face-Mic: Inferring Live Speech and Speaker Identity via Subtle Facial Dynamics Captured by AR/VR Motion Sensors has been accepted to ACM MobiCom 2021.

In this work, we show a serious privacy risk of using voice interfaces while the user is wearing the face-mounted AR/VR devices. Specifically, we design an eavesdropping attack, Face-Mic, which leverages speech-associated subtle facial dynamics captured by zero-permission motion sensors in AR/VR headsets to infer highly sensitive information from live human speech, including speech content, speaker gender, and identity.

Aug., 2021

Our work Spearphone: A Speech Privacy Exploit via Accelerometer-Sensed Reverberations from Smartphone Loudspeakers has been accepted to ACM WiSec 2021.

In this paper, we build a speech privacy attack that exploits speech reverberations generated from a smartphone's inbuilt loudspeaker captured via a zero-permission motion sensor (accelerometer). We demonstrate that speech reverberations from inbuilt loudspeakers, at an appropriate loudness, can impact the accelerometer, leaking sensitive information about the speech.

May, 2021

Our work Enabling Fast and Universal Audio Adversarial Attack Using Generative Model has been accepted to AAAI 2021.

In this paper, we propose the fast audio adversarial perturbation generator (FAPG), which uses a generative model to generate adversarial perturbations for the audio input in a single forward pass, thereby drastically improving the perturbation generation speed. Built on top of FAPG, we further propose the universal audio adversarial perturbation generator (UAPG), a scheme to craft a universal adversarial perturbation that can be imposed on arbitrary benign audio input to cause misclassification.

Dec., 2020

Our work EchoVib: Exploring Voice Authentication via Unique Non-Linear Vibrations of Short Replayed Speech has been accepted to ACM AsiaCCS 2021.

In this paper, we proposed a novel voice-based authentication system, EchoVib, showing that vibrations generated from a person’s speech and captured via the accelerometer on a smartphone are unique and can be used to identify the speaker, thereby rejecting voice synthesis attacks.

Oct., 2020

Our work HVAC: Evading Classifier-based Defenses in Hidden Voice Attacks has been accepted to ACM AsiaCCS 2021.

In this paper, we proposed a more advanced hidden voice attack, HVAC, which can bypass existing learning-based defense classifiers while preserving all the essential characteristics of hidden voice attacks (e.g., unintelligible to humans, recognizable to machines). Specifically, we proposed a fusion-based method to combine the normal sample and corresponding obfuscated sample as a hybrid command for bypassing these defense classifiers. (Led by Yi).

Oct., 2020

Our work BatComm: Enabling Inaudible Acoustic Communication with High-throughput for Mobile Devices has been accepted to ACM SenSys 2020.

In this work, we proposed a high-throughput and inaudible acoustic communication system for mobile devices, achieving throughput rates 12× higher than contemporary state-of-the-art acoustic communication systems.

Sep., 2020

Our work AdvPulse: Universal, Synchronization-free, and Targeted Audio Adversarial Attacks via Subsecond Perturbations has been accepted to ACM CCS 2020.

In this work, we proposed AdvPulse, a practical adversarial audio attack against intelligent audio systems in the scenario where the system takes streaming audio inputs (e.g., live human speech). Unlike existing attacks that require the adversary to have prior knowledge of the entire audio input, we generated input-agnostic universal subsecond audio adversarial perturbations that can be injected anywhere in the streaming audio input. (Led by Zhuohang).

Sep., 2020

Zhuohang is presenting our Practical Adversarial Attacks paper at ACM HotMobile 2020.

Mar., 2020

Zhuohang received a one-time UTK EECS Fellowship Award! Congratulations!

Feb., 2020

One paper has been accepted to ICASSP 2020.

In this paper, we propose the first real-time, universal, and robust adversarial attack against state-of-the-art deep neural network (DNN)-based speaker recognition systems. By adding an audio-agnostic universal perturbation to an arbitrary enrolled speaker's voice input, the DNN-based speaker recognition system can be made to identify the speaker as any targeted (i.e., adversary-desired) speaker label.
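The attack-time step of an audio-agnostic universal perturbation can be sketched as follows. Crafting the perturbation itself requires the full adversarial training loop, so `delta` below is a random stand-in and the function name and bound are illustrative assumptions.

```python
import numpy as np

def apply_universal_perturbation(audio, delta, epsilon=0.02):
    """Add a precomputed universal perturbation to arbitrary audio input.

    Hypothetical sketch of the application step only; `delta` would be
    learned adversarially against the target speaker recognition model.
    """
    audio = np.asarray(audio, dtype=float)
    # Bound the perturbation so it stays quiet relative to full scale [-1, 1].
    delta = np.clip(delta, -epsilon, epsilon)
    # Tile/trim the universal perturbation to match any input length.
    reps = int(np.ceil(len(audio) / len(delta)))
    d = np.tile(delta, reps)[: len(audio)]
    return np.clip(audio + d, -1.0, 1.0)

rng = np.random.default_rng(0)
speech = 0.5 * np.sin(np.linspace(0, 100, 16000))  # stand-in for any voice input
delta = rng.uniform(-0.05, 0.05, 8000)             # stand-in universal perturbation
adv = apply_universal_perturbation(speech, delta)
print(np.max(np.abs(adv - speech)) <= 0.02)  # the added perturbation stays small
```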

Jan., 2020

One paper has been accepted to IEEE Transactions on Mobile Computing (IEEE TMC).

We propose the first low-cost sign language gesture recognition system that can differentiate fine-grained finger movements using the photoplethysmography (PPG) and motion sensors in commodity wearables.

Dec., 2019

Our work Practical Adversarial Attacks Against Speaker Recognition Systems has been accepted to ACM HotMobile 2020.

In this paper, we propose a practical adversarial attack against the state-of-the-art speaker recognition system. By adding a well-crafted inconspicuous noise to the original audio, our attack can fool the speaker recognition system into making false predictions and even force the audio to be recognized as any adversary-desired speaker. Moreover, our attack integrates the estimated room impulse response (RIR) into the adversarial example training process toward practical audio adversarial examples, which remain effective while being played over the air in the physical world. (Led by Zhuohang).

Dec., 2019

Three papers have been accepted to IEEE INFOCOM'20.

The three papers are about using PPG sensors, mmWave, or WiFi signals to capture humans' unique behavioral and physiological characteristics (e.g., respiration, heartbeat, and gait patterns) for continuous user authentication.

Dec., 2019

Yi is presenting our Semi-black-box Attacks paper at IEEE DySPAN'19.

Nov., 2019

Our work Defeating Hidden Audio Channel Attacks on Voice Assistants via Audio-Induced Surface Vibrations has been accepted to ACSAC'19.

In this work, we show that hidden voice commands that mimic the voice features of normal commands, while remaining incomprehensible to humans, can be detected by comparing their speech features in the vibration domain with a sufficient degree of accuracy.

Aug., 2019

Our work Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples has been accepted to IEEE DySPAN'19.

In this paper, we propose a semi-black-box adversarial attack that can embed malicious voice commands into audio clips; these embedded commands can be recognized by the ASR system Kaldi while remaining unnoticeable to humans. (Led by Yi).

Aug., 2019

MoSIS Lab website is up!

Aug., 2019