Publications | MoSIS Lab @ UTK

# Publications

* Bold authors are faculty members of the MoSIS Lab.

* Underlined authors are student members of the MoSIS Lab.

B1

# Proactive User Authentication Using WiFi Signals in Dynamic Networks

Hongbo Liu, Yan Wang, Jian Liu, Yingying Chen
Proactive and Dynamic Network Defense, Springer, 2018. (In Press).

P4

# In-Baggage Object Detection Using Commodity Wi-Fi

Yingying Chen, Chen Wang, Jian Liu, Hongbo Liu, Yan Wang
U.S. Provisional Patent Application No. 62/828,151, April 2019.

P3

# Systems and Methods for User Input and Authentication Using Vibration Analysis

Yingying Chen, Jian Liu, Chen Wang, Nitesh Saxena
U.S. Patent Application No. 16/432,558, June 2019.

P2

# Device-free Activity Identification Using Fine-grained WiFi Signatures

Yingying Chen, Jie Yang, Yan Wang, Jian Liu, Marco Gruteser
U.S. Patent No. US10104195B2, March 2016.

P1

# Vital Signs Monitoring using WiFi

Yingying Chen, Jian Liu, Yan Wang, Jie Yang, Jerry Cheng
U.S. Provisional Patent Application No. 62/180,696, July 2015.

C54

# HeatDeCam: Detecting Hidden Spy Cameras via Thermal Emissions CCS 2022

Zhiyuan Yu, Zhuohang Li, Yuanhaur Chang, Skylar Fong, Jian Liu, Ning Zhang
in Proceedings of the 29th ACM Conference on Computer and Communications Security (CCS 2022), Los Angeles, USA, November 2022.

Unlawful video surveillance of unsuspecting individuals using spy cameras has become an increasing concern. To mitigate these threats, there are both commercial products and research prototypes designed to detect hidden spy cameras in household and office environments. However, existing work often relies heavily on user expertise and only applies to wireless cameras. To bridge this gap, we propose HeatDeCam, a thermal-imagery-based spy camera detector capable of detecting hidden spy cameras with or without built-in wireless connectivity. To reduce the reliance on user expertise, HeatDeCam leverages a compact neural network deployed on a smartphone to recognize the unique heat dissipation patterns of spy cameras. To evaluate the proposed system, we have collected and open-sourced a dataset of 22,506 thermal and visual images, covering 11 spy cameras deployed in 6 rooms under different environmental conditions. Using this dataset, we found that HeatDeCam can achieve over 95% accuracy in detecting hidden cameras. We have also conducted a usability evaluation involving a total of 416 participants, using both an online survey and an in-person usability test, to validate HeatDeCam.
C53

# RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN ECCV 2022

Huy Phan, Cong Shi, Yi Xie, Tianfang Zhang, Zhuohang Li, Tianming Zhao, Jian Liu, Yan Wang, Yingying Chen, Bo Yuan
in Proceedings of the 18th European Conference on Computer Vision (ECCV 2022), Tel Aviv, October 2022.
(Acceptance rate: 28%)

Recently, backdoor attacks have become an emerging threat to the security of deep neural network (DNN) models. To date, most existing studies focus on backdoor attacks against uncompressed models, while the vulnerability of compressed DNNs, which are widely used in practical applications, remains largely unexplored. In this paper, we propose to study and develop Robust and Imperceptible Backdoor Attack against Compact DNN models (RIBAC). By performing systematic analysis and exploration of the important design knobs, we propose a framework that can learn the proper trigger patterns, model parameters, and pruning masks in an efficient way, thereby achieving high trigger stealthiness, high attack success rate, and high model efficiency simultaneously. Extensive evaluations across different datasets, including tests against state-of-the-art defense mechanisms, demonstrate the high robustness, stealthiness, and model efficiency of RIBAC. Code is available at https://github.com/huyvnphan/ECCV2022-RIBAC.
@article{phan2022ribac, title={RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN}, author={Phan, Huy and Shi, Cong and Xie, Yi and Zhang, Tianfang and Li, Zhuohang and Zhao, Tianming and Liu, Jian and Wang, Yan and Chen, Yingying and Yuan, Bo}, journal={arXiv preprint arXiv:2208.10608}, year={2022} }
C52

# Audio-domain Position-independent Backdoor Attack via Unnoticeable Triggers MobiCom 2022

Cong Shi, Tianfang Zhang, Zhuohang Li, Huy Phan, Tianming Zhao, Yan Wang, Jian Liu, Bo Yuan, Yingying Chen
in Proceedings of the 28th Annual International Conference On Mobile Computing And Networking (MobiCom 2022), InterContinental Sydney, Australia, October 2022.
(Acceptance rate: 17.8%)

Deep learning models have become key enablers of voice user interfaces. With the growing trend of adopting outsourced training of these models, backdoor attacks, stealthy yet effective training-phase attacks, have gained increasing attention. They inject hidden trigger patterns through training set poisoning and overwrite the model's predictions in the inference phase. Research in backdoor attacks has been focusing on image classification tasks, while there have been few studies in the audio domain. In this work, we explore the severity of audio-domain backdoor attacks and demonstrate their feasibility under practical scenarios of voice user interfaces, where an adversary injects (plays) an unnoticeable audio trigger into live speech to launch the attack. To realize such attacks, we consider jointly optimizing the audio trigger and the target model in the training phase, deriving a position-independent, unnoticeable, and robust audio trigger. We design new data poisoning techniques and penalty-based algorithms that inject the trigger into randomly generated temporal positions in the audio input during training, rendering the trigger resilient to any temporal position variations. We further design an environmental sound mimicking technique to make the trigger resemble unnoticeable situational sounds, and simulate played over-the-air distortions to improve the trigger's robustness during the joint optimization process. Extensive experiments on two important applications (i.e., speech command recognition and speaker recognition) demonstrate that our attack can achieve an average success rate of over 99% under both digital and physical attack settings.
C51

# Fair and Privacy-Preserving Alzheimer's Disease Diagnosis Based on Spontaneous Speech Analysis via Federated Learning EMBC 2022

Syed Irfan Ali Meerza, Zhuohang Li, Luyang Liu, Jiaxin Zhang, Jian Liu
in Proceedings of the 44th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2022), Glasgow, Scotland, UK, July 2022.

As the most common neurodegenerative disease among older adults, Alzheimer's disease (AD) leads to loss of memory, impaired language and judgment, gait disorders, and other cognitive deficits severe enough to interfere with daily activities and significantly diminish quality of life. Recent research has shown promising results in automatic AD diagnosis via speech, leveraging the advances of deep learning in the audio domain. However, most existing studies rely on a centralized learning framework, which requires subjects' voice data to be gathered on a central server, raising severe privacy concerns. To resolve this, in this paper, we propose the first federated-learning-based approach for achieving automatic AD diagnosis via spontaneous speech analysis while ensuring the subjects' data privacy. Extensive experiments under various federated learning settings on the ADReSS challenge dataset show that the proposed model can achieve high accuracy for AD detection while preserving privacy. To ensure fairness of the model performance across clients in federated settings, we further deploy fair aggregation mechanisms, particularly q-FedAvg and q-FedSGD, which greatly reduce the algorithmic biases caused by data heterogeneity among the clients.
@inproceedings{meerza2022fair, title={Fair and Privacy-Preserving Alzheimer's Disease Diagnosis Based on Spontaneous Speech Analysis via Federated Learning}, author={Meerza, Syed Irfan Ali and Li, Zhuohang and Liu, Luyang and Zhang, Jiaxin and Liu, Jian}, booktitle={2022 44th Annual International Conference of the IEEE Engineering in Medicine \& Biology Society (EMBC)}, pages={1362--1365}, year={2022}, organization={IEEE} }
C50

# Privacy-preserving Speech-based Depression Diagnosis via Federated Learning EMBC 2022

Yue Cui, Zhuohang Li, Luyang Liu, Jiaxin Zhang, Jian Liu
in Proceedings of the 44th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2022), Glasgow, Scotland, UK, July 2022.

Mental health disorders, such as depression, affect a large and growing number of people worldwide, and they may cause severe emotional, behavioral, and physical health problems if left untreated. As depression affects a patient's speech characteristics, recent studies have proposed leveraging deep-learning-powered speech analysis models for depression diagnosis, which often require centralized learning on the collected voice data. However, this centralized training, which requires data to be stored at a server, raises the risk of severe voice data breaches, and people may not be willing to share their speech data with third parties due to privacy concerns. To address these issues, in this paper, we demonstrate for the first time that speech-based depression diagnosis models can be trained in a privacy-preserving way using federated learning, which enables collaborative model training while keeping the private speech data decentralized on clients' devices. To ensure the model's robustness under attacks, we also integrate different FL defenses into the system, such as norm bounding, differential privacy, and secure aggregation mechanisms. Extensive experiments under various FL settings on the DAIC-WOZ dataset show that our FL model can achieve high performance without sacrificing much utility compared with centralized-learning approaches, while ensuring users' speech data privacy.
@inproceedings{cui2022privacy, title={Privacy-preserving Speech-based Depression Diagnosis via Federated Learning}, author={Cui, Yue and Li, Zhuohang and Liu, Luyang and Zhang, Jiaxin and Liu, Jian}, booktitle={2022 44th Annual International Conference of the IEEE Engineering in Medicine \& Biology Society (EMBC)}, pages={1371--1374}, year={2022}, organization={IEEE} }
C49

# Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage CVPR 2022

Zhuohang Li, Jiaxin Zhang, Luyang Liu, Jian Liu
in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), New Orleans, Louisiana, June 2022.
(Acceptance rate: 25.3%)

@article{li2022auditing, title={Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage}, author={Li, Zhuohang and Zhang, Jiaxin and Liu, Luyang and Liu, Jian}, journal={arXiv preprint arXiv:2203.15696}, year={2022} }
C48

# Invisible and Efficient Backdoor Attacks for Compressed Deep Neural Networks ICASSP 2022

Huy Phan, Yi Xie, Jian Liu, Yingying Chen, Bo Yuan
in Proceedings of the 47th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2022), Virtual, May 2022.

Compressed deep neural network (DNN) models have been widely deployed in many resource-constrained platforms and devices. However, the security issue of the compressed models, especially their vulnerability against backdoor attacks, is not well explored yet. In this paper, we study the feasibility of practical backdoor attacks for the compressed DNNs. More specifically, we propose a universal adversarial perturbation (UAP)-based approach to achieve both high attack stealthiness and high attack efficiency simultaneously. Evaluation results across different DNN models and datasets with various compression ratios demonstrate our approach’s superior performance compared with the existing solutions.
@inproceedings{phan2022invisible, title={Invisible and Efficient Backdoor Attacks for Compressed Deep Neural Networks}, author={Phan, Huy and Xie, Yi and Liu, Jian and Chen, Yingying and Yuan, Bo}, booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={96--100}, year={2022}, organization={IEEE} }
C47

# mPose: Environment- and Subject-Agnostic 3D Skeleton Posture Reconstruction Leveraging a Single mmWave Device CHASE 2021

Cong Shi, Li Lu, Jian Liu, Yan Wang, Yingying Chen, Jiadi Yu
in Proceedings of the IEEE/ACM Conference on Connected Health Applications, Systems, and Engineering Technologies (CHASE 2021), Washington DC, USA, December 2021.

Human skeleton posture reconstruction is an essential component for human-computer interactions (HCI) in various application domains. Traditional approaches usually rely on either cameras or on-body sensors, which induce privacy concerns or inconvenient practical setups. To address these practical concerns, this paper proposes a low-cost contactless skeleton posture reconstruction system, mPose, which can reconstruct a user's 3D skeleton postures using a single mmWave device. mPose does not require the user to wear any sensors and can enable a broad range of emerging mobile applications (e.g., VR gaming and pervasive user input) via mmWave-5G-ready Internet of Things (IoT) devices. Particularly, the system extracts multi-dimensional spatial information from mmWave signals, which characterizes the skeleton postures in a 3D space. To mitigate the impacts of environmental changes, mPose dynamically detects the user location and extracts spatial features from the mmWave signals reflected only from the user. Furthermore, we develop a deep regression method with a domain discriminator to learn a mapping between the spatial features and the joint coordinates of the human body while removing subject-specific characteristics, realizing robust posture reconstruction across users. Extensive experiments, involving 17 representative body postures, 7 subjects, and 3 indoor environments, show that mPose outperforms contemporary state-of-the-art RF-based solutions with a lower average joint error of only ∼30 mm, while achieving transferability across environments and subjects at the same time.
@article{shi2022mpose, title={mPose: Environment-and subject-agnostic 3D skeleton posture reconstruction leveraging a single mmWave device}, author={Shi, Cong and Lu, Li and Liu, Jian and Wang, Yan and Chen, Yingying and Yu, Jiadi}, journal={Smart Health}, volume={23}, pages={100228}, year={2022}, publisher={Elsevier} }
C46

# Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates ICPADS 2021

Zhuohang Li, Luyang Liu, Jiaxin Zhang, Jian Liu
in Proceedings of the 27th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2021), Beijing, China, December 2021.

Federated Learning (FL) enables multiple distributed clients (e.g., mobile devices) to collaboratively train a centralized model while keeping the training data locally on the clients' devices. Compared to traditional centralized machine learning, FL offers many favorable features, such as offloading operations that would usually be performed by a central server and reducing the risk of serious privacy leakage. However, Byzantine clients that send incorrect or disruptive updates due to system failures or adversarial attacks may disturb the joint learning process, consequently degrading the performance of the resulting model. In this paper, we propose to mitigate these failures and attacks from a spatial-temporal perspective. Specifically, we use a clustering-based method to detect and exclude incorrect updates by leveraging their geometric properties in the parameter space. Moreover, to further handle malicious clients with time-varying behaviors, we propose to adaptively adjust the learning rate according to momentum-based update speculation. Extensive experiments on 4 public datasets demonstrate that our algorithm achieves enhanced robustness compared to existing methods under both cross-silo and cross-device FL settings with faulty/malicious clients.
@article{li2021byzantine, title={Byzantine-robust Federated Learning through Spatial-temporal Analysis of Local Model Updates}, author={Li, Zhuohang and Liu, Luyang and Zhang, Jiaxin and Liu, Jian}, journal={arXiv preprint arXiv:2107.01477}, year={2021} }
C45

# Robust Detection of Machine-induced Audio Attacks in Intelligent Audio Systems with Microphone Array CCS 2021

Zhuohang Li, Cong Shi, Tianfang Zhang, Yi Xie, Jian Liu, Bo Yuan, Yingying Chen
in Proceedings of the 28th ACM Conference on Computer and Communications Security (CCS 2021), November 2021.
(Acceptance rate: 196/879 = 22.3%)

With the popularity of intelligent audio systems in recent years, their vulnerabilities have become an increasing public concern. Existing studies have designed a set of machine-induced audio attacks, such as replay attacks, synthesis attacks, hidden voice commands, inaudible attacks, and audio adversarial examples, which could expose users to serious security and privacy threats. To defend against these attacks, existing efforts have been treating them individually. While they have yielded reasonably good performance in certain cases, they can hardly be combined into an all-in-one solution to be deployed on the audio systems in practice. Additionally, modern intelligent audio devices, such as Amazon Echo and Apple HomePod, usually come equipped with microphone arrays for far-field voice recognition and noise reduction. Existing defense strategies have been focusing on single- and dual-channel audio, while only a few studies have explored using a multi-channel microphone array for defending against specific types of audio attacks. Motivated by the lack of systematic research on defending against miscellaneous audio attacks and the potential benefits of multi-channel audio, this paper builds a holistic solution for detecting machine-induced audio attacks leveraging multi-channel microphone arrays on modern intelligent audio systems. Specifically, we utilize magnitude and phase spectrograms of multi-channel audio to extract spatial information and leverage a deep learning model to detect the fundamental difference between human speech and adversarial audio generated by playback machines. Moreover, we adopt an unsupervised domain adaptation training framework to further improve the model's generalizability in new acoustic environments. Evaluation is conducted under various settings on a public multi-channel replay attack dataset and a self-collected multi-channel audio attack dataset involving 5 types of advanced audio attacks. The results show that our method can achieve an equal error rate (EER) as low as 6.6% in detecting a variety of machine-induced attacks. Even in new acoustic environments, our method can still achieve an EER as low as 8.8%.
@inproceedings{li2021robust, title={Robust Detection of Machine-induced Audio Attacks in Intelligent Audio Systems with Microphone Array}, author={Li, Zhuohang and Shi, Cong and Zhang, Tianfang and Xie, Yi and Liu, Jian and Yuan, Bo and Chen, Yingying}, booktitle={Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security}, pages={1884--1899}, year={2021} }
C44

# Time to Rethink the Design of Qi Standard? Security and Privacy Vulnerability Analysis of Qi Wireless Charging ACSAC 2021

Yi Wu, Zhuohang Li, Nicholas Van Nostrand, Jian Liu
in Proceedings of the 37th Annual Computer Security Applications Conference (ACSAC 2021), December 2021.
(Acceptance rate: 80/326 = 24.5%)

With the ever-growing deployment of Qi wireless charging for mobile devices, the potential impact of its vulnerabilities is an increasing concern. In this paper, we conduct the first thorough study to explore its potential security and privacy vulnerabilities. Due to the open propagation property of electromagnetic signals as well as the non-encrypted Qi communication channel, we demonstrate that the Qi communication established between the charger (i.e., a charging pad) and the charging device (i.e., a smartphone) can be non-intrusively interfered with and eavesdropped on. In particular, we build two types of attacks: 1) Hijacking Attack: by stealthily placing an ultra-thin adversarial coil on the wireless charger's surface, we show that an adversary is capable of hijacking the communication channel via injecting malicious Qi messages to further control the entire charging process as they desire; and 2) Eavesdropping Attack: by sticking an adversarial coil underneath the surface (e.g., a table) on which the charger is placed, the adversary can eavesdrop on Qi messages and further infer the device's running activities while it is being charged. We validate these proof-of-concept attacks using multiple commodity smartphones and 14 commonly used calling and messaging apps. The results show that our designed hijacking attack can cause overcharging, undercharging, and paused charging, potentially leading to more significant damage to the battery (e.g., overheating, reduced battery life, or explosion). In addition, the designed eavesdropping attack can achieve high accuracy in detecting and identifying the running app activities (e.g., over 95.56% and 85.80% accuracy for calling apps and messaging apps, respectively). Our work brings to light a fundamental design vulnerability in the currently-deployed wireless charging architecture, which may put people's security and privacy at risk while wirelessly recharging their smartphones.
@inproceedings{wu2021time, title={Time to Rethink the Design of Qi Standard? Security and Privacy Vulnerability Analysis of Qi Wireless Charging}, author={Wu, Yi and Li, Zhuohang and Van Nostrand, Nicholas and Liu, Jian}, booktitle={Annual Computer Security Applications Conference}, pages={916--929}, year={2021} }
C43

# BioFace-3D: Continuous 3D Facial Reconstruction Through Lightweight Single-ear Biosensors MobiCom 2021

Yi Wu, Vimal Kakaraparthi, Zhuohang Li, Tien Pham, Jian Liu, Phuc Nguyen
in Proceedings of the 27th Annual International Conference on Mobile Computing and Networking (MobiCom 2021), New Orleans, United States, January 2022.
(Acceptance rate: 52/299 = 17.4%)

Over the last decade, facial landmark tracking and 3D reconstruction have gained considerable attention due to their numerous applications, such as human-computer interactions, facial expression analysis, and emotion recognition. Traditional approaches require users to be confined to a particular location and face a camera under constrained recording conditions (e.g., without occlusions and under good lighting conditions). This highly restricted setting prevents them from being deployed in many application scenarios involving human motions. In this paper, we propose the first single-earpiece lightweight biosensing system, BioFace-3D, that can unobtrusively, continuously, and reliably sense the entire facial movements, track 2D facial landmarks, and further render 3D facial animations. Our single-earpiece biosensing system takes advantage of a cross-modal transfer learning model to transfer the knowledge embodied in a high-grade visual facial landmark detection model to the low-grade biosignal domain. After training, our BioFace-3D can directly perform continuous 3D facial reconstruction from the biosignals, without any visual input. Without requiring a camera positioned in front of the user, this paradigm shift from visual sensing to biosensing would introduce new opportunities in many emerging mobile and IoT applications. Extensive experiments involving 16 participants under various settings demonstrate that BioFace-3D can accurately track 53 major facial landmarks with only 1.85 mm average error and 3.38% normalized mean error, which is comparable with most state-of-the-art camera-based solutions. The rendered 3D facial animations, which are consistent with the real human facial movements, also validate the system's capability in continuous 3D facial reconstruction.
@inproceedings{wu2021bioface, title={BioFace-3D: continuous 3d facial reconstruction through lightweight single-ear biosensors}, author={Wu, Yi and Kakaraparthi, Vimal and Li, Zhuohang and Pham, Tien and Liu, Jian and Nguyen, Phuc}, booktitle={Proceedings of the 27th Annual International Conference on Mobile Computing and Networking}, pages={350--363}, year={2021} }
C42

# Face-Mic: Inferring Live Speech and Speaker Identity via Subtle Facial Dynamics Captured by AR/VR Motion Sensors MobiCom 2021

Cong Shi, Xiangyu Xu, Tianfang Zhang, Payton R. Walker, Yi Wu, Jian Liu, Nitesh Saxena, Yingying Chen, Jiadi Yu
in Proceedings of the 27th Annual International Conference on Mobile Computing and Networking (MobiCom 2021), New Orleans, United States, January 2022.
(Acceptance rate: 52/299 = 17.4%)

Augmented reality/virtual reality (AR/VR) has extended beyond 3D immersive gaming to a broader array of applications, such as shopping, tourism, and education. Recently, there has been a large shift from handheld-controller-dominated interactions to headset-dominated interactions via voice interfaces. In this work, we show a serious privacy risk of using voice interfaces while the user is wearing a face-mounted AR/VR device. Specifically, we design an eavesdropping attack, Face-Mic, which leverages speech-associated subtle facial dynamics captured by zero-permission motion sensors in AR/VR headsets to infer highly sensitive information from live human speech, including speaker gender, identity, and speech content. Face-Mic is grounded on a key insight that AR/VR headsets are closely mounted on the user's face, allowing a potentially malicious app on the headset to capture underlying facial dynamics as the wearer speaks, including movements of facial muscles and bone-borne vibrations, which encode private biometrics and speech characteristics. To mitigate the impacts of body movements, we develop a signal source separation technique to identify and separate the speech-associated facial dynamics from other types of body movements. We further extract representative features with respect to the two types of facial dynamics. We successfully demonstrate the privacy leakage through AR/VR headsets by deriving the user's gender/identity and extracting speech information via the development of a deep learning-based framework. Extensive experiments using four mainstream VR headsets validate the generalizability, effectiveness, and high accuracy of Face-Mic.
@inproceedings{shi2021face, title={Face-Mic: inferring live speech and speaker identity via subtle facial dynamics captured by AR/VR motion sensors}, author={Shi, Cong and Xu, Xiangyu and Zhang, Tianfang and Walker, Payton and Wu, Yi and Liu, Jian and Saxena, Nitesh and Chen, Yingying and Yu, Jiadi}, booktitle={Proceedings of the 27th Annual International Conference on Mobile Computing and Networking}, pages={478--490}, year={2021} }
C41

# Spearphone: A Speech Privacy Exploit via Accelerometer-Sensed Reverberations from Smartphone Loudspeakers WiSec 2021

S Abhishek Anand, Chen Wang, Jian Liu, Nitesh Saxena, Yingying Chen
in Proceedings of the 14th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec 2021), July 2021.
(Acceptance rate: 22.3%)

In this paper, we build a speech privacy attack that exploits speech reverberations generated from a smartphone's inbuilt loudspeaker captured via a zero-permission motion sensor (accelerometer). We design our attack, called Spearphone, and demonstrate that speech reverberations from inbuilt loudspeakers, at an appropriate loudness, can impact the accelerometer, leaking sensitive information about the speech. In particular, we show that by exploiting the affected accelerometer readings and carefully selecting feature sets along with off-the-shelf machine learning techniques, Spearphone can successfully perform gender classification (accuracy over 90%) and speaker identification (accuracy over 80%). In addition, we perform speech recognition and speech reconstruction to extract more information about the eavesdropped speech to an extent. Our work brings to light a fundamental design vulnerability in many currently-deployed smartphones, which may put people's speech privacy at risk while using the smartphone in loudspeaker mode during phone calls, media playback, or voice assistant interactions.
@inproceedings{anand2021spearphone, title={Spearphone: a lightweight speech privacy exploit via accelerometer-sensed reverberations from smartphone loudspeakers}, author={Anand, S Abhishek and Wang, Chen and Liu, Jian and Saxena, Nitesh and Chen, Yingying}, booktitle={Proceedings of the 14th ACM Conference on Security and Privacy in Wireless and Mobile Networks}, pages={288--299}, year={2021} }
C40

# Enabling Fast and Universal Audio Adversarial Attack Using Generative Model AAAI 2021

Yi Xie, Zhuohang Li, Cong Shi, Jian Liu, Yingying Chen, Bo Yuan
in Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI 2021), February 2021.
(Acceptance rate: 21%)

@inproceedings{xie2021enabling, title={Enabling Fast and Universal Audio Adversarial Attack Using Generative Model}, author={Xie, Yi and Li, Zhuohang and Shi, Cong and Liu, Jian and Chen, Yingying and Yuan, Bo}, booktitle={Proceedings of the AAAI Conference on Artificial Intelligence}, volume={35}, number={16}, pages={14129--14137}, year={2021} }
C39

# HVAC: Evading Classifier-based Defenses in Hidden Voice Attacks AsiaCCS 2021

Yi Wu, Xiangyu Xu, Payton R. Walker, Jian Liu, Nitesh Saxena, Yingying Chen, Jiadi Yu
in Proceedings of the 16th ACM ASIA Conference on Computer and Communications Security (AsiaCCS 2021), Hong Kong, China, June 2021.
(Acceptance rate: 18.5%)

Recent years have witnessed the rapid development of automatic speech recognition (ASR) systems, providing a practical voice-user interface for widely deployed smart devices. With the ever-growing deployment of such an interface, several voice-based attack schemes have been proposed against current ASR systems to exploit certain vulnerabilities. Posing one of the more serious threats, the hidden voice attack uses the human-machine perception gap to generate obfuscated/hidden voice commands that are unintelligible to human listeners but can be interpreted as commands by machines. However, due to the nature of hidden voice commands (i.e., normal and obfuscated samples exhibit a significant difference in their acoustic features), recent studies show that they can be easily detected and defended against by a pre-trained classifier, thereby making them less threatening. In this paper, we validate that such a defense strategy can be circumvented with a more advanced type of hidden voice attack called HVAC. Our proposed HVAC attack can easily bypass the existing learning-based defense classifiers while preserving all the essential characteristics of hidden voice attacks (i.e., unintelligible to humans and recognizable to machines). Specifically, we find that all classifier-based defenses build on top of classification models that are trained with acoustic features extracted from the entire audio of normal and obfuscated samples. However, only the speech parts (i.e., human voice parts) of these samples contain the useful linguistic information needed for machine transcription. We thus propose a fusion-based method to combine the normal sample and the corresponding obfuscated sample into a hybrid HVAC command, which can effectively cheat the defense classifiers. Moreover, to make the command more unintelligible to humans, we tune the speed and pitch of the sample and make it even more distorted in the time domain while ensuring it can still be recognized by machines.
Extensive physical over-the-air experiments demonstrate the robustness and generalizability of our HVAC attack under different realistic attack scenarios. Results show that our HVAC commands can achieve an average 94.1% success rate of bypassing machine-learning-based defense approaches under various realistic settings.
@inproceedings{wu2021hvac, title={HVAC: Evading Classifier-based Defenses in Hidden Voice Attacks}, author={Wu, Yi and Xu, Xiangyu and Walker, Payton R and Liu, Jian and Saxena, Nitesh and Chen, Yingying and Yu, Jiadi}, booktitle={Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security}, pages={82--94}, year={2021} }
C38

# EchoVib: Exploring Voice Authentication via Unique Non-Linear Vibrations of Short Replayed Speech AsiaCCS 2021

S Abhishek Anand, Jian Liu, Chen Wang, Maliheh Shirvanian, Nitesh Saxena, Yingying Chen
in Proceedings of the 16th ACM ASIA Conference on Computer and Communications Security (AsiaCCS 2021), Hong Kong, China, June 2021.
(Acceptance rate: 18.5%)

Recent advances in speaker verification and speech processing technology have seen voice authentication adopted on a wide scale in commercial applications like online banking and customer care support, and on devices such as smartphones and IoT voice assistant systems. However, it has been shown that current voice authentication systems can be ineffective against voice synthesis attacks that mimic a user’s voice to high precision. In this work, we suggest a paradigm shift away from traditional voice authentication systems, which operate in the audio domain and are therefore susceptible to speech synthesis attacks in that same domain. We leverage a motion sensor’s capability to pick up phonatory vibrations, which can uniquely identify a user via voice signatures in the vibration domain. The user’s speech is played/echoed back by a device’s speaker for a short duration (hence our method is termed EchoVib), and the resulting non-linear phonatory vibrations are picked up by the motion sensor for speaker recognition. The uniqueness of the device’s speaker and its accelerometer results in a device-specific fingerprint in response to the echoed speech. The use of the vibration domain and its non-linear relationship with audio allows EchoVib to resist state-of-the-art voice synthesis attacks shown to be successful in the audio domain. We develop an instance of EchoVib, using the onboard loudspeaker and the accelerometer embedded in smartphones as the authenticator, based on machine learning techniques. Our evaluation shows that even with a low-quality loudspeaker and the low sampling rate of accelerometer recordings, EchoVib can identify users with an accuracy of over 90%. We also analyze our system against state-of-the-art voice synthesis attacks and show that it can distinguish between the morphed and the original speaker’s voice samples, correctly rejecting the morphed samples with a success rate of 85% for voice conversion and voice modeling attacks.
We believe that using the vibration domain to detect synthesized speech attacks is effective because the unique phonatory vibration signatures are hard to preserve and difficult to mimic, owing to the non-linear mapping between the unique speaker-accelerometer response in the vibration domain and the voice in the audio domain.
@inproceedings{anand2021echovib, title={EchoVib: Exploring Voice Authentication via Unique Non-Linear Vibrations of Short Replayed Speech}, author={Anand, S Abhishek and Liu, Jian and Wang, Chen and Shirvanian, Maliheh and Saxena, Nitesh and Chen, Yingying}, booktitle={Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security}, pages={67--81}, year={2021} }
C37

# BatComm: Enabling Inaudible Acoustic Communication with High-throughput for Mobile Devices SenSys 2020

Yang Bai, Jian Liu, Li Lu, Yilin Yang, Yingying Chen, Jiadi Yu
in Proceedings of the 18th ACM Conference on Embedded Networked Sensor Systems (SenSys 2020), Yokohama, Japan, November 2020.
(Acceptance rate: 20.6%)

Acoustic communication is an increasingly popular alternative to existing short-range wireless communication technologies for mobile devices, such as NFC and QR codes. Unlike these current standards, it requires no extra hardware, special lighting conditions, or Internet connection. However, the audibility and limited throughput of existing approaches hinder their deployment in a wide range of applications. In this paper, we aim to redesign the acoustic communication mechanism to push the boundary of achievable throughput while preserving inaudibility. Specifically, we propose BatComm, a high-throughput and inaudible acoustic communication system for mobile devices capable of throughput rates 12× higher than the contemporary state of the art in acoustic communication for mobile devices. We theoretically model the non-linearity of the microphone and use orthogonal frequency division multiplexing (OFDM) to transmit data bits over multiple orthogonal channels on an ultrasound frequency carrier. We also design a series of techniques to mitigate interference from sources such as the signal’s unbalanced frequency response, ambient noise, and unrelated residual signals created through OFDM, amplitude modulation (AM), and related processes. Extensive evaluations under multiple realistic settings demonstrate that our inaudible acoustic communication system can achieve over 47 kbps within a 10 cm communication range. We also show the possibility of increasing the communication range to room scale (i.e., around 2 m) while maintaining high throughput and inaudibility. Our findings offer a new direction for future inaudible acoustic communication techniques in emerging mobile and IoT applications.
@inproceedings{bai2020batcomm, title={BatComm: enabling inaudible acoustic communication with high-throughput for mobile devices}, author={Bai, Yang and Liu, Jian and Lu, Li and Yang, Yilin and Chen, Yingying and Yu, Jiadi}, booktitle={Proceedings of the 18th Conference on Embedded Networked Sensor Systems}, pages={205--217}, year={2020} }
C36

# AdvPulse: Universal, Synchronization-free, and Targeted Audio Adversarial Attacks via Subsecond Perturbations CCS 2020

Zhuohang Li, Yi Wu, Jian Liu, Yingying Chen, Bo Yuan
in Proceedings of the 27th ACM Conference on Computer and Communications Security (CCS 2020), November 2020.
(Acceptance rate: 16.9%)

@inproceedings{li2020advpulse, title={AdvPulse: Universal, Synchronization-free, and Targeted Audio Adversarial Attacks via Subsecond Perturbations}, author={Li, Zhuohang and Wu, Yi and Liu, Jian and Chen, Yingying and Yuan, Bo}, booktitle={Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security}, pages={1121--1134}, year={2020} }
C35

# Towards Environment-independent Behavior-based User Authentication Using WiFi MASS 2020

Cong Shi, Jian Liu, Nick Borodinov, Bruno Leao, Yingying Chen
in Proceedings of the 17th IEEE International Conference on Mobile Ad-Hoc and Smart Systems (MASS 2020), Delhi NCR, India, December 2020.

With the increasing prevalence of smart mobile and Internet of Things (IoT) environments, user authentication has become a critical component not only for preventing unauthorized access to security-sensitive systems but also for providing customized services to individual users. Unlike traditional approaches relying on tedious passwords or specialized biometric/wearable sensors, this paper presents a device-free user authentication system built on daily human behavioral patterns captured by existing WiFi infrastructures. Specifically, our system exploits readily available channel state information (CSI) in WiFi signals to capture the unique behavioral biometrics residing in the user’s daily activities, without requiring any dedicated sensors or wearable device attachment. One major challenge in building such a system is that wireless signals always carry substantial information specific to the user’s location and surrounding environment, rendering the trained model less effective when applied to data collected in a new location or environment. This issue could lead to significant authentication errors and may quickly ruin the whole system in practice. To disentangle the behavioral biometrics for practical environment-independent user authentication, we propose an end-to-end deep-learning-based approach with domain adaptation techniques to remove the environment- and location-specific information contained in the collected WiFi measurements. Extensive experiments in a residential apartment and an office with various scales of user location variations and environmental changes demonstrate the effectiveness and generalizability of the proposed authentication system.
@inproceedings{shitowards, title={Towards Environment-independent Behavior-based User Authentication Using WiFi}, author={Shi, Cong and Liu, Jian and Borodinov, Nick and Leao, Bruno and Chen, Yingying}, booktitle={Proceedings of the 17th IEEE International Conference on Mobile Ad-Hoc and Smart Systems (MASS 2020)}, year={2020} }
C34

# Mobile Device Usage Recommendation based on User Context Inference Using Embedded Sensors ICCCN 2020

Cong Shi, Xiaonan Guo, Ting Yu, Yingying Chen, Yucheng Xie, Jian Liu
in Proceedings of the 29th International Conference on Computer Communications and Networks (ICCCN 2020), Honolulu, Hawaii, USA, August 2020.

The proliferation of mobile devices, along with their rich functionalities and applications, has led people to form addictive and potentially harmful usage behaviors. Though this problem has drawn considerable attention, existing solutions (e.g., text notifications or usage limits) are insufficient and cannot provide timely recommendations or control of inappropriate mobile device usage. This paper proposes a generalized context inference framework that supports timely usage recommendations using low-power sensors in mobile devices. Compared to existing schemes that rely on detecting a single type of user context (e.g., merely location or activity), our framework derives a much larger scale of user contexts that characterize phone usage, especially those causing distraction or leading to dangerous situations. We propose to uniformly describe the general user context with context fundamentals, i.e., physical environments, social situations, and human motions, which are the underlying constituent units of diverse general user contexts. To mitigate the profiling efforts across different environments, devices, and individuals, we develop a deep-learning-based architecture to learn transferable representations derived from sensor readings associated with the context fundamentals. Based on the derived context fundamentals, our framework quantifies how likely an inferred user context would lead to distractions/dangerous situations and provides timely recommendations for mobile device access/usage. Extensive experiments over a period of 7 months demonstrate that the system can achieve 95% accuracy on user context inference while offering transferability among different environments, devices, and users.
@inproceedings{shimobile, title={Mobile Device Usage Recommendation based on User Context Inference Using Embedded Sensors}, author={Shi, Cong and Guo, Xiaonan and Yu, Ting and Chen, Yingying and Xie, Yucheng and Liu, Jian}, booktitle={Proceedings of the 29th International Conference on Computer Communications and Networks (ICCCN 2020)}, year={2020} }
C33

# Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems ICASSP 2020

Yi Xie, Cong Shi, Zhuohang Li, Jian Liu, Yingying Chen, Bo Yuan
in Proceedings of the 45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), Barcelona, Spain, May 2020.

As the popularity of voice user interfaces (VUIs) has exploded in recent years, speaker recognition systems have emerged as an important medium for identifying a speaker in many security-sensitive applications and services. In this paper, we propose the first real-time, universal, and robust adversarial attack against state-of-the-art deep neural network (DNN) based speaker recognition systems. By adding an audio-agnostic universal perturbation to an arbitrary enrolled speaker’s voice input, the DNN-based speaker recognition system will identify the speaker as any target (i.e., adversary-desired) speaker label. In addition, we improve the robustness of our attack by modeling the sound distortions caused by physical over-the-air propagation through estimating the room impulse response (RIR). Experiments using a public dataset of 109 English speakers demonstrate the effectiveness and robustness of our proposed attack, with a high attack success rate of over 90%. The attack launching time also achieves a 100× speedup over contemporary non-universal attacks.
@inproceedings{xie2020real, title={Real-Time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems}, author={Xie, Yi and Shi, Cong and Li, Zhuohang and Liu, Jian and Chen, Yingying and Yuan, Bo}, booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={1738--1742}, year={2020}, organization={IEEE} }
C32

# Practical Adversarial Attacks Against Speaker Recognition Systems HotMobile 2020

Zhuohang Li, Cong Shi, Yi Xie, Jian Liu, Bo Yuan, Yingying Chen
in Proceedings of the 21st International Workshop on Mobile Computing Systems and Applications (ACM HotMobile 2020), Austin, Texas, March 2020.
(Acceptance rate: 16/48 = 33.3%)

Unlike other biometric-based user identification methods (e.g., fingerprint and iris), speaker recognition systems can identify individuals by their unique voice biometrics without requiring users to be physically present. As a result, speaker recognition systems have become increasingly popular in various domains, such as remote access control, banking services, and criminal investigation. In this paper, we study the vulnerability of such systems by launching a practical and systematic adversarial attack against X-vector, the state-of-the-art deep neural network (DNN) based speaker recognition system. In particular, by adding well-crafted inconspicuous noise to the original audio, our attack can fool the speaker recognition system into making false predictions and even force the audio to be recognized as any adversary-desired speaker. Moreover, our attack integrates the estimated room impulse response (RIR) into the adversarial example training process, yielding practical audio adversarial examples that remain effective when played over the air in the physical world. Extensive experiments using a public dataset of 109 speakers show the effectiveness of our attack, with a high attack success rate for both the digital attack (98%) and the practical over-the-air attack (50%).
@inproceedings{li2020practical, title={Practical adversarial attacks against speaker recognition systems}, author={Li, Zhuohang and Shi, Cong and Xie, Yi and Liu, Jian and Yuan, Bo and Chen, Yingying}, booktitle={Proceedings of the 21st International Workshop on Mobile Computing Systems and Applications}, pages={9--14}, year={2020} }
C31

# Continuous User Verification via Respiratory Biometrics INFOCOM 2020

Jian Liu, Yingying Chen, Yudi Dong, Yan Wang, Tianming Zhao, Yu-Dong Yao
in Proceedings of IEEE International Conference on Computer Communications (IEEE INFOCOM 2020), Toronto, ON, Canada, July 2020.
(Acceptance rate: 268/1354 = 19.8%)

The ever-growing security issues in various mobile applications and smart devices create an urgent demand for a reliable and convenient user verification method. Traditional verification methods require users to provide their secrets (e.g., entering passwords and collecting fingerprints). We envision that the essential trend of user verification is to free users from active participation in the verification process. Toward this end, we propose a continuous user verification system that reuses the widely deployed WiFi infrastructure to capture the unique physiological characteristics rooted in a user’s respiratory motions. Unlike existing continuous verification approaches, which depend on restricted scenarios or user behaviors (e.g., keystrokes and gaits), our system can be easily integrated into any WiFi infrastructure to provide non-intrusive continuous verification. Specifically, we extract the respiration-related signals from the channel state information (CSI) of WiFi. We then derive user-specific respiratory features based on waveform morphology analysis and fuzzy wavelet transformation of the respiration signals. Additionally, a deep-learning-based user verification scheme is developed to identify legitimate users accurately and detect the existence of spoofing attacks. Extensive experiments involving 20 participants demonstrate that the proposed system can robustly verify/identify users and detect spoofers under various types of attacks.
@inproceedings{liu2020continuous, title={Continuous user verification via respiratory biometrics}, author={Liu, Jian and Chen, Yingying and Dong, Yudi and Wang, Yan and Zhao, Tianming and Yao, Yu-Dong}, booktitle={Proceedings of the IEEE Conference on Computer Communications (INFOCOM'20), Toronto, ON, Canada}, year={2020} }
C30

# MU-ID: Multi-user Identification Through Gaits Using Millimeter Wave Radios INFOCOM 2020

Xin Yang, Jian Liu, Yingying Chen, Xiaonan Guo
in Proceedings of IEEE International Conference on Computer Communications (IEEE INFOCOM 2020), Toronto, ON, Canada, July 2020.
(Acceptance rate: 268/1354 = 19.8%)

Multi-user identification could facilitate various large-scale identity-based services such as access control, automatic surveillance systems, and personalized services. Although existing solutions can identify multiple users using cameras, such vision-based approaches usually raise serious privacy concerns and require the presence of line-of-sight. In contrast, in this paper we propose MU-ID, a gait-based multi-user identification system leveraging a single commercial off-the-shelf (COTS) millimeter-wave (mmWave) radar. Particularly, MU-ID takes as input frequency-modulated continuous-wave (FMCW) signals from the radar sensor. By analyzing the mmWave signals in the range-Doppler domain, MU-ID examines the users’ lower limb movements and captures their distinct gait patterns, which vary in step length, duration, instantaneous lower limb velocity, and inter-lower-limb distance. Additionally, an effective spatial-temporal silhouette analysis is proposed to segment each user’s walking steps. The system then identifies steps using a Convolutional Neural Network (CNN) classifier and further identifies the users in the area of interest. We implement MU-ID with the TI AWR1642BOOST mmWave sensor and conduct extensive experiments involving 10 people. The results show that MU-ID achieves up to 97% single-person identification accuracy, and over 92% identification accuracy for up to four people, while maintaining a low false positive rate.
@inproceedings{yangmu, title={MU-ID: Multi-user Identification Through Gaits Using Millimeter Wave Radios}, author={Yang, Xin and Liu, Jian and Chen, Yingying and Guo, Xiaonan}, booktitle={Proceedings of IEEE International Conference on Computer Communications (IEEE INFOCOM 2020)}, year={2020} }
C29

# TrueHeart: Continuous Authentication on Wrist-worn Wearables Using PPG-based Biometrics INFOCOM 2020

Tianming Zhao, Yan Wang, Jian Liu, Yingying Chen, Jerry Cheng, Jiadi Yu
in Proceedings of IEEE International Conference on Computer Communications (IEEE INFOCOM 2020), Toronto, ON, Canada, July 2020.
(Acceptance rate: 268/1354 = 19.8%)

Traditional one-time user authentication processes might cause friction and an unfavorable user experience in many widely used applications. This is a severe problem in particular for security-sensitive facilities if an adversary could obtain unauthorized privileges after a user’s initial login. Recently, continuous user authentication (CA) has shown great potential by enabling seamless user authentication with little active participation. We devise a low-cost system exploiting a user’s pulsatile signals from the photoplethysmography (PPG) sensor in commercial wrist-worn wearables for CA. Compared to existing approaches, our system requires zero user effort and is applicable to practical scenarios with non-clinical PPG measurements containing motion artifacts (MA). We explore the uniqueness of the human cardiac system and design an MA filtering method to mitigate the impacts of daily activities. Furthermore, we identify general fiducial features and develop an adaptive classifier using the gradient boosting tree (GBT) method. As a result, our system can authenticate users continuously based on their cardiac characteristics, requiring little training effort. Experiments with our wrist-worn PPG sensing platform on 20 participants under practical scenarios demonstrate that our system can achieve a high CA accuracy of over 90% and a low false detection rate of 4% in detecting random attacks.
@inproceedings{zhao2020trueheart, title={TrueHeart: Continuous Authentication on Wrist-worn Wearables Using PPG-based Biometrics}, author={Zhao, Tianming and Wang, Yan and Liu, Jian and Chen, Yingying and Cheng, Jerry and Yu, Jiadi}, booktitle={Proceedings of IEEE International Conference on Computer Communications (IEEE INFOCOM 2020)}, year={2020} }
C28

# Defeating Hidden Audio Channel Attacks on Voice Assistants via Audio-Induced Surface Vibrations ACSAC 2019

Chen Wang, S Abhishek Anand, Jian Liu, Payton R. Walker, Yingying Chen, Nitesh Saxena
in Proceedings of the 35th Annual Computer Security Applications Conference (ACSAC 2019), San Juan, December 2019.
(Acceptance rate: 60/266 = 22.6%)

Voice access technologies are widely adopted in mobile devices and voice assistant systems as a convenient way of user interaction. Recent studies have demonstrated a potentially serious vulnerability of the existing voice interfaces on these systems to “hidden voice commands”. This attack uses synthetically rendered adversarial sounds embedded within a voice command to trick the speech recognition process into executing malicious commands, without being noticed by legitimate users.
In this paper, we employ low-cost motion sensors, in a novel way, to detect these hidden voice commands. In particular, our proposed system extracts and examines the unique audio signatures of the issued voice commands in the vibration domain. We show that such signatures of normal commands vs. synthetic hidden voice commands are distinctive, leading to the detection of the attacks. The proposed system, which benefits from a speaker-motion sensor setup, can be easily deployed on smartphones by reusing existing on-board motion sensors or utilizing a cloud service that provides the relevant setup environment. The system is based on the premise that while the crafted audio features of hidden voice commands may fool an authentication system in the audio domain, their unique audio-induced surface vibrations captured by the motion sensor are hard to forge. Our proposed system creates a harder challenge for the attacker, who must now forge the acoustic features in both the audio and vibration domains simultaneously. We extract time- and frequency-domain statistical features, as well as acoustic features (e.g., chroma vectors and MFCCs), from the motion sensor data and use learning-based methods to distinguish normal commands from hidden voice commands. The results show that our system can detect hidden voice commands vs. normal commands with 99.9% accuracy by simply using low-cost motion sensors with very low sampling frequencies.
@inproceedings{wang2019defeating, title={Defeating hidden audio channel attacks on voice assistants via audio-induced surface vibrations}, author={Wang, Chen and Anand, S Abhishek and Liu, Jian and Walker, Payton and Chen, Yingying and Saxena, Nitesh}, booktitle={Proceedings of the 35th Annual Computer Security Applications Conference}, pages={42--56}, year={2019} }
C27

# Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples DySPAN 2019

Yi Wu, Jian Liu, Yingying Chen, Jerry Cheng
in Proceedings of the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN 2019), Newark, New Jersey, November 2019.

As automatic speech recognition (ASR) systems have been integrated into a diverse set of devices around us in recent years, their security vulnerabilities have become an increasing concern for the public. Existing studies have demonstrated that deep neural networks (DNNs), acting as the computation core of ASR systems, are vulnerable to deliberately designed adversarial attacks. Based on the gradient descent algorithm, prior work has successfully generated adversarial samples that disturb ASR systems and produce adversary-desired transcripts. Most of these studies simulated white-box attacks, which require knowledge of all the components in the targeted ASR systems. In this work, we propose the first semi-black-box attack against the ASR system Kaldi. Requiring only partial information from Kaldi and none from the DNN, we can embed malicious commands into a single audio clip using a gradient-independent genetic algorithm. The crafted audio clip is recognized as the embedded malicious commands by Kaldi while remaining unnoticeable to humans. Experiments show that our attack achieves a high success rate with unnoticeable perturbations on three types of audio clips (pop music, pure music, and human commands) without requiring the underlying DNN model parameters or architecture.
@inproceedings{wu2019semi, title={Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples}, author={Wu, Yi and Liu, Jian and Chen, Yingying and Cheng, Jerry}, booktitle={2019 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN)}, pages={1--5}, year={2019}, organization={IEEE} }
C26

# CardioCam: Leveraging Camera on Mobile Devices to Verify Users While Their Heart is Pumping MobiSys 2019

Jian Liu, Cong Shi, Yingying Chen, Hongbo Liu, Marco Gruteser
in Proceedings of the 17th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2019), Seoul, South Korea, June 2019.
(Acceptance rate: 39/172 = 22.7%)

With the increasing prevalence of mobile and IoT devices (e.g., smartphones, tablets, smart-home appliances), a massive amount of private and sensitive information is stored on these devices. To prevent unauthorized access to these devices, existing user verification solutions either rely on the complexity of user-defined secrets (e.g., passwords) or resort to specialized biometric sensors (e.g., fingerprint readers), but users may still suffer from various attacks, such as password theft, shoulder surfing, smudge, and forged biometrics attacks. In this paper, we propose CardioCam, a low-cost, general, hard-to-forge user verification system leveraging the unique cardiac biometrics extracted from the readily available built-in cameras in mobile and IoT devices. We demonstrate that unique cardiac features can be extracted from the cardiac motion patterns in a fingertip pressed on the built-in camera. To mitigate the impacts of various ambient lighting conditions and human movements under practical scenarios, CardioCam develops a gradient-based technique to optimize the camera configuration, and dynamically selects the most sensitive pixels in a camera frame to extract reliable cardiac motion patterns. Furthermore, morphological characteristic analysis is deployed to derive user-specific cardiac features, and a feature transformation scheme grounded in Principal Component Analysis (PCA) is developed to enhance the robustness of cardiac biometrics for effective user verification. With the prototyped system, extensive experiments involving 25 subjects demonstrate that CardioCam can achieve effective and reliable user verification with over 99% average true positive rate (TPR) while maintaining a false positive rate (FPR) as low as 4%.
@inproceedings{liu2019cardiocam, title={CardioCam: Leveraging Camera on Mobile Devices to Verify Users While Their Heart is Pumping}, author={Liu, Jian and Shi, Cong and Chen, Yingying and Liu, Hongbo and Gruteser, Marco}, booktitle={Proceedings of the 17th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2019)}, year={2019} }
C25

# WristSpy: Snooping Passcodes in Mobile Payment Using Wrist-worn Wearables INFOCOM 2019

Chen Wang, Jian Liu, Xiaonan Guo, Yan Wang, Yingying Chen
in Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2019), Paris, France, April-May 2019.
(Acceptance rate: 288/1464 = 19.7%)

Mobile payment has drawn considerable attention due to the convenience of paying via personal mobile devices anytime and anywhere, and passcodes (i.e., PINs or patterns) are the first choice of most consumers to authorize a payment. This paper demonstrates a serious security breach and aims to raise public awareness that the passcodes authorizing transactions in mobile payments can be leaked by exploiting the embedded sensors in wearable devices (e.g., smartwatches). We present a passcode inference system, WristSpy, which examines to what extent the user’s PIN/pattern during mobile payment can be revealed from a single wrist-worn wearable device under different passcode input scenarios involving either two hands or a single hand. In particular, WristSpy can accurately reconstruct fine-grained hand movement trajectories and infer PINs/patterns when the mobile and wearable devices are on two hands, by building a Euclidean-distance-based model and developing a training-free parallel PIN/pattern inference algorithm. When both devices are on the same hand, a highly challenging case, WristSpy extracts multi-dimensional features by capturing the dynamics of minute hand vibrations and performs machine-learning-based classification to identify PIN entries. Extensive experiments with 15 volunteers and 1,600 passcode inputs demonstrate that an adversary is able to recover a user’s PIN/pattern with up to 92% success rate within 5 tries under various input scenarios.
@inproceedings{wang2019wristspy, title={WristSpy: Snooping Passcodes in Mobile Payment Using Wrist-worn Wearables}, author={Wang, Chen and Liu, Jian and Guo, Xiaonan and Wang, Yan and Chen, Yingying}, booktitle={Proceedings of IEEE International Conference on Computer Communications (IEEE INFOCOM 2019)}, year={2019} }
C24

# Device-free Personalized Fitness Assistant Using WiFi UbiComp 2019

Xiaonan Guo, Jian Liu, Cong Shi, Hongbo Liu, Yingying Chen, Mooi Choo Chuah
in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), presented at UbiComp 2019. (Acceptance rate: ~21%)

@article{guo2018device, title={Device-free Personalized Fitness Assistant Using WiFi}, author={Guo, Xiaonan and Liu, Jian and Shi, Cong and Liu, Hongbo and Chen, Yingying and Chuah, Mooi Choo}, journal={Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies}, volume={2}, number={4}, pages={165}, year={2018}, publisher={ACM} }
C23

# VPad: Virtual Writing Tablet for Laptops Leveraging Acoustic Signals ICPADS 2018

Li Lu, Jian Liu, Jiadi Yu, Yingying Chen, Yanmin Zhu, Xiangyu Xu, Minglu Li
in Proceedings of the 24th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2018), Sentosa, Singapore, December 2018.

Human-computer interaction based on touch screens plays an increasing role in our daily lives. Besides smartphones and tablets, laptops are the most popular mobile devices used in both work and leisure. To satisfy the requirements of many emerging applications, it becomes desirable to equip laptop screens with both writing and drawing functions. In this paper, we design a virtual writing tablet system, VPad, for traditional laptops without touch screens. VPad leverages two speakers and one microphone, which are available in most commodity laptops, for trajectory tracking without additional hardware. It employs acoustic signals to accurately track hand movements and recognize characters users write in the air. Specifically, VPad emits inaudible acoustic signals from the two speakers of a laptop. VPad then applies a Sliding-window Overlap Fourier Transformation technique to find Doppler frequency shifts with higher resolution and accuracy in real time. Furthermore, we analyze the frequency shifts and energy features of the acoustic signals received by the microphone to track the trajectory of hand movements. Finally, we employ a stroke direction sequence model based on probability estimation to recognize the characters users write in the air. Our experimental results show that VPad achieves an average trajectory tracking error of only 1.55 cm and a character recognition accuracy above 90%, merely using two speakers and one microphone on a laptop.
@inproceedings{lu2018vpad, title={VPad: Virtual Writing Tablet for Laptops Leveraging Acoustic Signals}, author={Lu, Li and Liu, Jian and Yu, Jiadi and Chen, Yingying and Zhu, Yanmin and Xu, Xiangyu and Li, Minglu}, booktitle={2018 IEEE 24th International Conference on Parallel and Distributed Systems (ICPADS)}, pages={244--251}, year={2018}, organization={IEEE} }
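As a rough illustration of the Doppler-shift step described in the VPad abstract, the sketch below scans candidate frequencies around an emitted tone and reports the shift of the strongest spectral component. It is a minimal pure-Python stand-in for the paper's Sliding-window Overlap Fourier Transformation; the carrier frequency, sampling rate, and window length are illustrative assumptions, not values from the paper.

```python
import cmath
import math

def dominant_freq(samples, fs, f_lo, f_hi, step):
    """Scan candidate frequencies and return the one whose DFT
    magnitude is largest (a brute-force stand-in for an FFT bin search)."""
    n = len(samples)
    best_f, best_mag = f_lo, -1.0
    f = f_lo
    while f <= f_hi:
        mag = abs(sum(samples[k] * cmath.exp(-2j * math.pi * f * k / fs)
                      for k in range(n)))
        if mag > best_mag:
            best_f, best_mag = f, mag
        f += step
    return best_f

# Simulated received tone: a 17.9 kHz carrier shifted +40 Hz by hand motion.
fs, carrier = 48000, 17900.0
sig = [math.cos(2 * math.pi * (carrier + 40.0) * k / fs) for k in range(2048)]
doppler = dominant_freq(sig, fs, carrier - 100.0, carrier + 100.0, 5.0) - carrier
```

In a real pipeline this scan would run per overlapping window, with the per-window shift estimates feeding the trajectory tracker.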
C22

# Towards In-baggage Suspicious Object Detection Using Commodity WiFi CNS 2018

Chen Wang, Jian Liu, Yingying Chen, Hongbo Liu, Yan Wang
in Proceedings of the IEEE Conference on Communications and Network Security (CNS 2018), Beijing, China, May/June 2018.
(Acceptance rate: 51/181 = 28.2%; Best paper rate: 2/181 = 1.1%)

The growing needs of public safety urgently require scalable and low-cost techniques for detecting dangerous objects (e.g., lethal weapons, homemade bombs, explosive chemicals) hidden in baggage. Traditional baggage checks involve either high manpower for manual examinations or expensive and specialized instruments, such as X-ray and CT. As such, many public places (e.g., museums and schools) that lack strict security checks are exposed to high risk. In this work, we propose to utilize the fine-grained channel state information (CSI) from off-the-shelf WiFi to detect suspicious objects that are suspected to be dangerous (i.e., defined as any metal or liquid object) without compromising the user's privacy by physically opening the baggage. Our suspicious object detection system significantly reduces the deployment cost and is easy to set up in public venues. Towards this end, our system is realized by two major components: it first detects the existence of suspicious objects and identifies the dangerous material type based on the reconstructed CSI complex value (including both amplitude and phase information); it then determines the risk level of the object by examining the object's dimensions (i.e., liquid volume and metal object shape) based on the reconstructed CSI complex values of the signals reflected by the object. Extensive experiments are conducted with 15 metal and liquid objects and 6 types of bags over a 6-month period. The results show that our system can detect over 95% of suspicious objects in different types of bags and successfully identify 90% of dangerous material types. In addition, our system achieves average errors of 16 ml and 0.5 cm when estimating the volume of liquid and the shape (i.e., width and height) of metal objects, respectively.
@inproceedings{wang2018towards, title={Towards In-baggage Suspicious Object Detection Using Commodity WiFi}, author={Wang, Chen and Liu, Jian and Chen, Yingying and Liu, Hongbo and Wang, Yan}, booktitle={2018 IEEE Conference on Communications and Network Security (CNS)}, pages={1--9}, year={2018}, organization={IEEE} }
C21

# RF-Kinect: A Wearable RFID-based Approach Towards 3D Body Movement Tracking IMWUT / UbiComp 2018

Chuyu Wang, Jian Liu, Yingying Chen, Lei Xie, Hongbo Liu, Sanglu Lu
in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), presented at UbiComp 2018.
(Acceptance rate: ~21%)

The rising popularity of electronic devices with gesture recognition capabilities makes gesture-based human-computer interaction more attractive. Along this direction, tracking body movement in 3D space is desirable to further facilitate behavior recognition in various scenarios. Existing solutions attempt to track body movement based on computer vision or wearable sensors, but they either depend on lighting conditions or incur high energy consumption. This paper presents RF-Kinect, a training-free system that tracks body movement in 3D space by analyzing the phase information of wearable RFID tags attached to the limbs. Instead of locating each tag independently in 3D space to recover body postures, RF-Kinect treats each limb as a whole and estimates the corresponding orientations by extracting two types of phase features: the Phase Difference between Tags (PDT) on the same part of a limb and the Phase Difference between Antennas (PDA) of the same tag. It then reconstructs the body posture based on the determined limb orientations using a geometric model of the human body, and exploits a Kalman filter to smooth the body movement results, i.e., the temporal sequence of body postures. Real-world experiments with 5 volunteers show that RF-Kinect achieves an 8.7° angle error for determining the orientation of limbs and a 4.4 cm relative position error for the position estimation of joints compared with a Kinect 2.0 testbed.
@article{wang2018rf, title={RF-Kinect: A wearable RFID-based approach towards 3D body movement tracking}, author={Wang, Chuyu and Liu, Jian and Chen, Yingying and Xie, Lei and Liu, Hongbo and Lu, Sanglu}, journal={Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies}, volume={2}, number={1}, pages={41}, year={2018}, publisher={ACM} }
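The PDT feature described in the abstract lends itself to a compact sketch: two tags spaced along a limb see a path difference of `tag_spacing * cos(theta)`, so the wrapped phase difference constrains the limb orientation. The wavelength and spacing below are illustrative assumptions, and the sketch ignores the phase-wrapping ambiguity and the PDA fusion the paper handles.

```python
import math

WAVELENGTH = 0.326  # meters; roughly a 920 MHz UHF RFID carrier (assumed)

def wrap(phase):
    """Wrap a phase value into [-pi, pi)."""
    return (phase + math.pi) % (2.0 * math.pi) - math.pi

def limb_angle_deg(phase_a, phase_b, tag_spacing):
    """Limb orientation from the Phase Difference between Tags (PDT):
    tags spaced tag_spacing apart see a path difference of
    tag_spacing * cos(theta), i.e. a phase difference 2*pi*path/WAVELENGTH."""
    pdt = wrap(phase_a - phase_b)
    cos_theta = pdt * WAVELENGTH / (2.0 * math.pi * tag_spacing)
    cos_theta = max(-1.0, min(1.0, cos_theta))  # guard numeric overshoot
    return math.degrees(math.acos(cos_theta))
```

Feeding these per-frame angle estimates into a Kalman filter, as the paper does, would then smooth the recovered posture sequence.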
C20

# PPG-based Finger-level Gesture Recognition Leveraging Wearables INFOCOM 2018

Tianming Zhao, Jian Liu, Yan Wang, Hongbo Liu, Yingying Chen
in Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2018), Honolulu, HI, USA, April 2018.
(Acceptance rate: 309/1606 = 19.2%)

This paper subverts the traditional understanding of Photoplethysmography (PPG) and opens up a new direction for the utility of PPG in commodity wearable devices, especially in the domain of human-computer interaction for fine-grained gesture recognition. We demonstrate that it is possible to leverage the widely deployed PPG sensors in wrist-worn wearable devices to enable finger-level gesture recognition, which could facilitate many emerging human-computer interactions (e.g., sign-language interpretation and virtual reality). While prior solutions in gesture recognition require dedicated devices (e.g., video cameras or IR sensors) or leverage various signals in the environment (e.g., sound, RF or ambient light), this paper introduces the first PPG-based gesture recognition system that can differentiate fine-grained hand gestures at the finger level using commodity wearables. Our innovative system harnesses the unique blood flow changes in a user's wrist area to distinguish the user's finger and hand movements. The insight is that hand gestures involve a series of muscle and tendon movements that compress the arterial geometry to different degrees, resulting in significant motion artifacts in the blood flow with different intensities and durations. By leveraging the unique characteristics of these motion artifacts in PPG, our system can accurately extract the gesture-related signals from the significant background noise (i.e., pulses) and identify different minute finger-level gestures. Extensive experiments are conducted with over 3600 gestures collected from 10 adults. Our prototype study using two commodity PPG sensors can differentiate nine finger-level gestures from American Sign Language with an average recognition accuracy over 87%, suggesting that our PPG-based finger-level gesture recognition system is promising as one of the most critical components in sign language translation using wearables.
@inproceedings{zhao2018ppg, title={PPG-based finger-level gesture recognition leveraging wearables}, author={Zhao, Tianming and Liu, Jian and Wang, Yan and Liu, Hongbo and Chen, Yingying}, booktitle={IEEE INFOCOM 2018-IEEE Conference on Computer Communications}, pages={1457--1465}, year={2018}, organization={IEEE} }
C19

# Multi-Touch in the Air: Device-Free Finger Tracking and Gesture Recognition via COTS RFID INFOCOM 2018

Chuyu Wang, Jian Liu, Yingying Chen, Hongbo Liu, Lei Xie, Wei Wang, Bingbing He, Sanglu Lu
in Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2018), Honolulu, HI, USA, April 2018.
(Acceptance rate: 309/1606 = 19.2%)

Recently, gesture recognition has gained considerable attention in emerging applications (e.g., AR/VR systems) to provide a better user experience for human-computer interaction. Existing solutions usually recognize gestures based on wearable sensors or specialized signals (e.g., WiFi, acoustic and visible light), but they either incur high energy consumption or are susceptible to the ambient environment, which prevents them from efficiently sensing fine-grained finger movements. In this paper, we present RF-finger, a device-free system based on Commercial-Off-The-Shelf (COTS) RFID, which leverages a tag array on a letter-size paper to sense the fine-grained finger movements performed in front of the paper. In particular, we focus on two sensing modes: finger tracking recovers the moving trace of finger writings; multi-touch gesture recognition identifies multi-touch gestures involving multiple fingers. Specifically, we build a theoretical model to extract the fine-grained reflection feature from the raw RF signal, which describes the finger's influence on the tag array at cm-level resolution. For finger tracking, we leverage K-Nearest Neighbors (KNN) to pinpoint the finger position based on the fine-grained reflection features, and obtain a smoothed trace via a Kalman filter. Additionally, we construct the reflection image of each multi-touch gesture from the reflection features by regarding the multiple fingers as a whole. Finally, we use a Convolutional Neural Network (CNN) to identify the multi-touch gestures based on these images. Extensive experiments validate that RF-finger can achieve as high as 87% and 92% accuracy for finger tracking and multi-touch gesture recognition, respectively.
@inproceedings{wang2018multi, title={Multi-Touch in the Air: Device-Free Finger Tracking and Gesture Recognition via COTS RFID}, author={Wang, Chuyu and Liu, Jian and Chen, Yingying and Liu, Hongbo and Xie, Lei and Wang, Wei and He, Bingbing and Lu, Sanglu}, booktitle={IEEE INFOCOM 2018-IEEE Conference on Computer Communications}, pages={1691--1699}, year={2018}, organization={IEEE} }
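The tracking pipeline in the abstract (KNN over reflection features, then Kalman smoothing) can be sketched minimally as below. The fingerprint grid, feature vectors, and noise parameters are all invented for illustration; the paper's actual reflection features and filter tuning differ.

```python
import math

def knn_position(feature, fingerprints, k=3):
    """Average the 2-D positions of the k reference points whose
    reflection-feature vectors are closest to the observed one."""
    ranked = sorted(fingerprints, key=lambda fp: math.dist(feature, fp[0]))
    pts = [pos for _, pos in ranked[:k]]
    return (sum(p[0] for p in pts) / k, sum(p[1] for p in pts) / k)

def kalman_smooth(track, q=0.01, r=0.25):
    """Scalar constant-position Kalman filter; applied per coordinate to
    smooth the raw KNN trace (q = process noise, r = measurement noise)."""
    est, p = track[0], 1.0
    out = [est]
    for z in track[1:]:
        p += q
        g = p / (p + r)          # Kalman gain
        est += g * (z - est)     # correct with the new measurement
        p *= 1.0 - g
        out.append(est)
    return out
```

A fuller treatment would use a constant-velocity state model rather than the constant-position one sketched here.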
C18

# VibWrite: Towards Finger-input Authentication on Ubiquitous Surfaces via Physical Vibration CCS 2017

Jian Liu, Chen Wang, Yingying Chen, Nitesh Saxena
in Proceedings of the 24th ACM Conference on Computer and Communications Security (CCS 2017), Dallas, USA, October-November 2017.
(Acceptance rate: 151/843 = 17.9%)

The goal of this work is to enable user authentication via finger inputs on ubiquitous surfaces leveraging low-cost physical vibration. We propose VibWrite, which extends finger-input authentication beyond touch screens to any solid surface for smart access systems (e.g., access to apartments, vehicles or smart appliances). It integrates passcode, behavioral and physiological characteristics, and surface dependency together to provide a low-cost, tangible and enhanced security solution. VibWrite builds upon a touch sensing technique with vibration signals that can operate on surfaces constructed from a broad range of materials. It is significantly different from traditional password-based approaches, which only authenticate the password itself rather than the legitimate user, and from behavioral-biometrics-based solutions, which usually involve specific or expensive hardware (e.g., a touch screen or fingerprint reader), incur privacy concerns and suffer from smudge attacks. VibWrite is based on new algorithms to discriminate fine-grained finger inputs and supports three independent passcode secrets, including PIN number, lock pattern, and simple gestures, by extracting unique features in the frequency domain to capture both behavioral and physiological characteristics such as contact area and touching force. VibWrite is implemented using a single pair of low-cost vibration motor and receiver that can be easily attached to any surface (e.g., a door panel, a desk or an appliance). Our extensive experiments demonstrate that VibWrite can authenticate users with high accuracy (e.g., over 95% within two trials) and a low false positive rate (e.g., less than 3%), and is robust to various types of attacks.
@inproceedings{liu2017vibwrite, title={VibWrite: Towards finger-input authentication on ubiquitous surfaces via physical vibration}, author={Liu, Jian and Wang, Chen and Chen, Yingying and Saxena, Nitesh}, booktitle={Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security}, pages={73--87}, year={2017}, organization={ACM} }
C17

# SalsaAsst: Beat Counting System Empowered by Mobile Devices to Assist Salsa Dancers MASS 2017

Yudi Dong, Jian Liu, Yingying Chen, Woo Lee
in Proceedings of the 14th IEEE International Conference on Mobile Ad hoc and Sensor Systems (MASS 2017), Orlando, Florida, USA, October 2017.

Dancing is always challenging, especially for beginners who may lack a sense of rhythm. Salsa, a popular style of dancing, is even harder to learn due to its unique overlapping rhythmic patterns created by different Latin instruments (e.g., Clave sticks, Conga drums, Timbale drums) playing together. In order to dance in synchronization with the Salsa beats, beginners always need prompts (e.g., a beat counting voice) to remind them of the beat timing. The traditional way to generate Salsa music with beat counting voice prompts requires professional dancers or musicians to count Salsa beats manually, which is only possible in dance studios. Additionally, existing music beat tracking solutions cannot capture Salsa beats well due to the intricacy of its rhythms. In this work, we propose a mobile-device-enabled beat counting system, SalsaAsst, which can perform rhythm deciphering and fine-grained Salsa beat tracking to assist Salsa dancers with beat counting voice/vibration prompts. The proposed system can be used conveniently in many scenarios: it can not only help Salsa beginners make accelerated learning progress during practice at home but also significantly reduce professional dancers' errors during live performances. The developed Salsa beat counting algorithm has the capability to track beats accurately in both real-time and offline modes. Our extensive tests using 40 Salsa songs under 8 evaluation metrics demonstrate that SalsaAsst can accurately track the beats of Salsa music and achieve much better performance compared to existing beat tracking approaches.
@inproceedings{dong2017salsaasst, title={SalsaAsst: Beat Counting System Empowered by Mobile Devices to Assist Salsa Dancers}, author={Dong, Yudi and Liu, Jian and Chen, Yingying and Lee, Woo Y}, booktitle={2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS)}, pages={81--89}, year={2017}, organization={IEEE} }
C16

# SubTrack: Enabling Real-time Tracking of Subway Riding on Mobile Devices MASS 2017

Guo Liu, Jian Liu, Fangmin Li, Xiaolin Ma, Yingying Chen, Hongbo Liu
in Proceedings of the 14th IEEE International Conference on Mobile Ad hoc and Sensor Systems (MASS 2017), Orlando, Florida, USA, October 2017.

Real-time tracking of subway rides can provide great convenience to millions of commuters in metropolitan areas. Traditional approaches using timetables require continuous attention from subway riders and suffer from poor accuracy in estimating travel time. Recent approaches using mobile devices rely on GSM and WiFi, which are not always available underground. In this work, we present SubTrack, which utilizes sensors on mobile devices to provide automatic tracking of subway rides in real time. This real-time automatic tracking covers three major aspects for a passenger: detecting entry into a station, tracking the passenger's position, and estimating the arrival time at subway stops. In particular, SubTrack employs the cell ID to first detect a passenger entering a station and exploits inertial sensors on the passenger's mobile device to track the train ride. Our algorithm takes advantage of the unique vibration patterns in acceleration and the typical moving patterns of the train to estimate the train's velocity and the corresponding position, and further predicts the arrival time in real time. Our extensive experiments in two cities, one in China and one in the USA, demonstrate that our system can accurately track the position of subway riders, predict the arrival time and push arrival notifications in a timely manner.
@inproceedings{liu2017subtrack, title={SubTrack: Enabling Real-time Tracking of Subway Riding on Mobile Devices}, author={Liu, Guo and Liu, Jian and Li, Fangmin and Ma, Xiaolin and Chen, Yingying and Liu, Hongbo}, booktitle={2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS)}, pages={90--98}, year={2017}, organization={IEEE} }
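The velocity-and-position estimation step described in the abstract boils down to integrating along-track acceleration twice. The sketch below uses the trapezoidal rule on raw samples; real accelerometer data would first need gravity removal and the vibration-based corrections the paper applies.

```python
def integrate_motion(accel, dt):
    """Dead-reckon velocity (m/s) and travelled distance (m) from
    along-track acceleration samples (m/s^2) via the trapezoidal rule."""
    v, dist = 0.0, 0.0
    vel = [0.0]
    for a0, a1 in zip(accel, accel[1:]):
        dv = 0.5 * (a0 + a1) * dt
        dist += (v + 0.5 * dv) * dt
        v += dv
        vel.append(v)
    return vel, dist
```

For a train accelerating at a constant 1 m/s² for 2 s, this recovers the closed-form v = at and s = at²/2.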
C15

# Smart User Authentication through Actuation of Daily Activities Leveraging WiFi-enabled IoT MobiHoc 2017

Cong Shi, Jian Liu, Hongbo Liu, Yingying Chen
in Proceedings of the 18th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2017), IIT Madras, Chennai, India, July 2017.
(Acceptance rate: 27/179 = 15.1%)

User authentication is a critical process in both corporate and home environments due to ever-growing security and privacy concerns. With the advancement of smart cities and home environments, the concept of user authentication has evolved to carry broader implications: not only preventing unauthorized users from accessing confidential information but also providing opportunities for services customized to a specific user. Traditional approaches to user authentication either require specialized device installation or inconvenient wearable sensor attachment. This paper supports the extended concept of user authentication with a device-free approach by leveraging the prevalent WiFi signals made available by IoT devices, such as smart refrigerators, smart TVs and thermostats. The proposed system utilizes the WiFi signals to capture unique human physiological and behavioral characteristics inherited from their daily activities, including both walking and stationary ones. In particular, we extract representative features from channel state information (CSI) measurements of WiFi signals, and develop a deep-learning-based user authentication scheme to accurately identify each individual user. Extensive experiments in two typical indoor environments, a university office and an apartment, are conducted to demonstrate the effectiveness of the proposed authentication system. In particular, our system can achieve over 94% and 91% authentication accuracy with 11 subjects through walking and stationary activities, respectively.
@inproceedings{shi2017smart, title={Smart user authentication through actuation of daily activities leveraging WiFi-enabled IoT}, author={Shi, Cong and Liu, Jian and Liu, Hongbo and Chen, Yingying}, booktitle={Proceedings of the 18th ACM International Symposium on Mobile Ad Hoc Networking and Computing}, pages={5}, year={2017}, organization={ACM} }
C14

# VibSense: Sensing Touches on Ubiquitous Surfaces through Vibration SECON 2017

Jian Liu, Yingying Chen, Marco Gruteser, Yan Wang
in Proceedings of the 14th IEEE International Conference on Sensing, Communication and Networking (SECON 2017), San Diego, CA, USA, June 2017.
(Acceptance rate: 45/170 = 26.5%; Best paper rate: 1/170 = 0.7%)

VibSense pushes the limits of vibration-based sensing to determine the location of a touch on extended surface areas as well as identify the object touching the surface, leveraging a single sensor. Unlike capacitive sensing, it does not require conductive materials, and compared to audio sensing it is more robust to acoustic noise. It supports a broad array of applications through either passive or active sensing using only a single sensor. In VibSense's passive sensing, the received vibration signals are determined by the location of the touch impact. This allows location discrimination of touches precise enough to enable emerging applications such as virtual keyboards on ubiquitous surfaces for mobile devices. Moreover, in the active mode, the received vibration signals carry richer information about the touching object's characteristics (e.g., weight, size, location and material). This further enables VibSense to match the signals to trained profiles and allows it to differentiate personal objects in contact with any surface. VibSense is evaluated extensively in the use cases of localizing touches (i.e., virtual keyboards), object localization and identification. Our experimental results demonstrate that VibSense can achieve high accuracy, over 95%, in all these use cases.
@inproceedings{liu2017vibsense, title={VibSense: Sensing Touches on Ubiquitous Surfaces through Vibration}, author={Liu, Jian and Chen, Yingying and Gruteser, Marco and Wang, Yan}, booktitle={2017 14th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)}, pages={1--9}, year={2017}, organization={IEEE} }
C13

# BigRoad: Scaling Massive Road Data Acquisition for Dependable Self-Driving MobiSys 2017

Luyang Liu, Hongyu Li, Jian Liu, Cagdas Karatas, Yan Wang, Marco Gruteser, Yingying Chen, Richard Martin
in Proceedings of the 15th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2017), Niagara Falls, NY, USA, June 2017.
(Acceptance rate: 34/191 = 17.7%)

@inproceedings{liu2017bigroad, title={Bigroad: Scaling road data acquisition for dependable self-driving}, author={Liu, Luyang and Li, Hongyu and Liu, Jian and Karatas, Cagdas and Wang, Yan and Gruteser, Marco and Chen, Yingying and Martin, Richard P}, booktitle={Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services}, pages={371--384}, year={2017}, organization={ACM} }
C12

# FitCoach: Virtual Fitness Coach Empowered by Wearable Mobile Devices INFOCOM 2017

Xiaonan Guo, Jian Liu, Yingying Chen
in Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2017), Atlanta, GA, USA, May 2017.
(Acceptance rate: 292/1395 = 20.93%)

Building on the powerful sensors in wearables and smartphones that enable various applications to improve users' lifestyles and quality of life (e.g., sleep monitoring and running rhythm tracking), this paper takes one step forward by developing FitCoach, a virtual fitness coach that leverages users' wearable mobile devices (including wrist-worn wearables and arm-mounted smartphones) to assess dynamic postures (movement patterns & positions) in workouts. FitCoach aims to help users achieve effective workouts and prevent injury by dynamically depicting the short-term and long-term picture of a user's workout based on various sensors in wearable mobile devices. In particular, FitCoach recognizes different types of exercises and interprets fine-grained fitness data (i.e., motion strength and speed) into an easy-to-understand exercise review score, which provides a comprehensive workout performance evaluation and recommendation. FitCoach has the ability to align the sensor readings from wearable devices to the human coordinate system, ensuring the accuracy and robustness of the system. Extensive experiments with over 5000 repetitions of 12 types of exercises involve 12 participants doing both anaerobic and aerobic exercises, indoors as well as outdoors. Our results demonstrate that FitCoach can provide meaningful reviews and recommendations to users by accurately measuring their workout performance, achieving 93% accuracy for workout analysis.
@inproceedings{guo2017fitcoach, title={FitCoach: Virtual fitness coach empowered by wearable mobile devices}, author={Guo, Xiaonan and Liu, Jian and Chen, Yingying}, booktitle={IEEE INFOCOM 2017-IEEE Conference on Computer Communications}, pages={1--9}, year={2017}, organization={IEEE} }
C11

# Towards Safer Texting While Driving Through Stop Time Prediction CarSys 2016

Hongyu Li, Luyang Liu, Cagdas Karatas, Jian Liu, Marco Gruteser, Yingying Chen, Yan Wang, Richard P. Martin, Jie Yang
in The First ACM International Workshop on Connected and Automated Vehicle Mobility (CarSys 2016), New York, NY, USA, October 2016.

Driver distraction due to in-vehicle device use is an increasing concern and has drawn national attention. We ask whether it would not be more effective to channel drivers' device and information system use into safer periods, rather than attempt a complete prohibition of mobile device use. This paper aims to start the discussion by examining the feasibility of automatically identifying safer periods for operating mobile devices. We propose a movement-based architecture design to identify relatively safe periods, estimate the duration and safety level of each period, and delay notifications until a safer period arrives. To further explore the feasibility of such a system architecture, we design and implement a prediction algorithm for one safe period, long traffic signal stops, that relies on crowdsourced position data. Simulations and experimental evaluation show that the system can achieve a low prediction error, and that its coverage and prediction accuracy increase with the amount of crowdsourced data.
@inproceedings{li2016towards, title={Towards safer texting while driving through stop time prediction}, author={Li, Hongyu and Liu, Luyang and Karatas, Cagdas and Liu, Jian and Gruteser, Marco and Chen, Yingying and Wang, Yan and Martin, Richard P and Yang, Jie}, booktitle={Proceedings of the First ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services}, pages={14--21}, year={2016}, organization={ACM} }
C10

# MotionScale: A Body Motion Monitoring System Using Bed-Mounted Wireless Load Cells CHASE 2016

Musaab Alaziz, Zhenhua Jia, Jian Liu, Richard Howard, Yingying Chen, Yanyong Zhang
in Proceedings of IEEE International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE 2016), Washington DC, USA, June 2016.

In-bed motion detection is an important technique that can enable an array of applications, among which are sleep monitoring and abnormal movement detection. In this paper, we present a low-cost, low-overhead, and highly robust system for in-bed movement detection and classification that uses low-end load cells. By observing the forces sensed by the load cells, placed under each bed leg, we can detect many different types of movements, and further classify them as big or small depending on the magnitude of the force changes on the load cells. We have designed three different features, which we refer to as Log-Peak, Energy-Peak, and ZeroX-Valley, that can effectively extract body movement signals from load cell data collected through wireless links in an energy-efficient manner. After establishing the feature values, we employ a simple threshold-based algorithm to detect and classify movements. We have conducted a thorough evaluation that involves collecting data from 30 subjects performing 27 pre-defined movements in an experiment. By comparing our detection and classification results against the ground truth captured by a video camera, we show that the Log-Peak strategy can detect these 27 types of movements at an error rate of 6.3% while classifying them as big or small movements at an error rate of 4.2%.
@inproceedings{alaziz2016motion, title={Motion scale: A body motion monitoring system using bed-mounted wireless load cells}, author={Alaziz, Musaab and Jia, Zhenhua and Liu, Jian and Howard, Richard and Chen, Yingying and Zhang, Yanyong}, booktitle={2016 IEEE first international conference on connected health: applications, systems and engineering technologies (CHASE)}, pages={183--192}, year={2016}, organization={IEEE} }
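The threshold-based detect-and-classify step described above can be sketched in a few lines. This is only a loose reading of the paper's Log-Peak feature: the log-scaled frame-to-frame force change is thresholded into none/small/big events, and both thresholds below are arbitrary placeholders, not values from the paper.

```python
import math

def classify_movements(forces, small_thr=0.5, big_thr=2.0):
    """Label each frame-to-frame load-cell force change as 'none',
    'small' or 'big' by thresholding its log-scaled magnitude."""
    labels = []
    for prev, cur in zip(forces, forces[1:]):
        peak = math.log1p(abs(cur - prev))  # log-compress the force delta
        if peak >= big_thr:
            labels.append("big")
        elif peak >= small_thr:
            labels.append("small")
        else:
            labels.append("none")
    return labels
```

The paper's full pipeline additionally combines the per-leg signals and the Energy-Peak and ZeroX-Valley features before thresholding.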
C9

# Leveraging Wearables for Steering and Driver Tracking INFOCOM 2016

Cagdas Karatas, Luyang Liu, Hongyu Li, Jian Liu, Yan Wang, Sheng Tan, Jie Yang, Yingying Chen, Marco Gruteser, Richard Martin
in Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2016), San Francisco, USA, April 2016.
(Acceptance rate: 300/1644 = 18.25%)

Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. Tracking steering wheel usage and turning angle provides fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that allow distinguishing steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.
@inproceedings{karatas2016leveraging, title={Leveraging wearables for steering and driver tracking}, author={Karatas, Cagdas and Liu, Luyang and Li, Hongyu and Liu, Jian and Wang, Yan and Tan, Sheng and Yang, Jie and Chen, Yingying and Gruteser, Marco and Martin, Richard}, booktitle={IEEE INFOCOM 2016-The 35th Annual IEEE International Conference on Computer Communications}, pages={1--9}, year={2016}, organization={IEEE} }
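Once steering is detected, inferring the turning angle from wrist rotation is, at its core, an integration of angular rate. The sketch below accumulates gyroscope readings about the wheel axis with the trapezoidal rule; it assumes the wrist frame is already aligned to the wheel axis, which is the hard part the paper actually solves.

```python
def steering_angle_deg(gyro_dps, dt):
    """Accumulate wrist rotation rate (deg/s) about the steering-wheel
    axis into a turning angle (deg) via the trapezoidal rule."""
    angle = 0.0
    for w0, w1 in zip(gyro_dps, gyro_dps[1:]):
        angle += 0.5 * (w0 + w1) * dt
    return angle
```

For instance, a steady 10 deg/s rotation sampled every 0.5 s for 2 s accumulates to a 20-degree turn.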
C8

# Snooping Keystrokes with mm-level Audio Ranging on a Single Phone MobiCom 2015

Jian Liu, Yan Wang, Gorkem Kar, Yingying Chen, Jie Yang, Marco Gruteser
in Proceedings of the 21st Annual International Conference on Mobile Computing and Networking (MobiCom 2015), Paris, France, September 2015.
(Acceptance rate: 38/207 = 18.3%)

This paper explores the limits of audio ranging on mobile devices in the context of a keystroke snooping scenario. Acoustic keystroke snooping is challenging because it requires distinguishing and labeling sounds generated by tens of keys in very close proximity. Existing work on acoustic keystroke recognition relies on training with labeled data, linguistic context, or multiple phones placed around a keyboard --- requirements that limit usefulness in an adversarial context. In this work, we show that mobile audio hardware advances can be exploited to discriminate mm-level position differences and that this makes it feasible to locate the origin of keystrokes from only a single phone behind the keyboard. The technique clusters keystrokes using time-difference-of-arrival measurements as well as acoustic features to identify multiple strokes of the same key. It then computes the origin of these sounds precisely enough to identify and label each key. By locating keystrokes, this technique avoids the need for labeled training data or linguistic context. Experiments with three types of keyboards and off-the-shelf smartphones demonstrate scenarios where our system can recover 94% of keystrokes, which, to our knowledge, makes this the first single-device technique that enables acoustic snooping of passwords.
@inproceedings{liu2015snooping, title={Snooping keystrokes with mm-level audio ranging on a single phone}, author={Liu, Jian and Wang, Yan and Kar, Gorkem and Chen, Yingying and Yang, Jie and Gruteser, Marco}, booktitle={Proceedings of the 21st Annual International Conference on Mobile Computing and Networking}, pages={142--154}, year={2015}, organization={ACM} }
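The time-difference-of-arrival measurement underlying the clustering step can be illustrated with a brute-force cross-correlation between the phone's two microphone channels. The pulse shapes, sampling rate, and speed-of-sound constant below are illustrative; the paper's actual signal processing is considerably more refined.

```python
SOUND_SPEED = 343.0  # m/s at room temperature (assumed)

def tdoa_samples(ch_a, ch_b, max_lag):
    """Lag (in samples) of channel B relative to channel A, found by
    brute-force cross-correlation over [-max_lag, max_lag]."""
    n = len(ch_a)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        c = sum(ch_a[i] * ch_b[i + lag] for i in range(n) if 0 <= i + lag < n)
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag

def path_difference_mm(lag, fs):
    """Convert a sample lag to a propagation path difference in mm."""
    return lag / fs * SOUND_SPEED * 1000.0
```

At a 192 kHz sampling rate, one sample of lag corresponds to under 2 mm of path difference, which is what makes mm-level ranging plausible on phone hardware.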
C7

# Tracking Vital Signs During Sleep Leveraging Off-the-shelf WiFi MobiHoc 2015

Jian Liu, Yan Wang, Yingying Chen, Jie Yang, Xu Chen, Jerry Cheng
in Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2015), Hangzhou, China, June 2015.
(Acceptance rate: 37/250 = 14.7%)

Tracking the human vital signs of breathing and heart rates during sleep is important, as it can help to assess a person's general physical health and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinical usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system reuses an existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heartbeats. Our system thus has the potential to be widely deployed and to perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both the time and frequency domains to estimate breathing and heart rates, and it works well whether one or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieves comparable or even better performance compared to traditional and existing approaches, which is a strong indication that it can provide non-invasive, continuous, fine-grained vital signs monitoring without any additional cost.
@inproceedings{liu2015tracking, title={Tracking vital signs during sleep leveraging off-the-shelf wifi}, author={Liu, Jian and Wang, Yan and Chen, Yingying and Yang, Jie and Chen, Xu and Cheng, Jerry}, booktitle={Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing}, pages={267--276}, year={2015}, organization={ACM} }
C6

# E-eyes: Device-free Location-oriented Activity Identification Using Fine-grained WiFi Signatures MobiCom 2014

Yan Wang, Jian Liu, Yingying Chen, Marco Gruteser, Jie Yang, Hongbo Liu
in Proceedings of the 20th Annual International Conference on Mobile Computing and Networking (MobiCom 2014), Maui, Hawaii, USA, September 2014.
(Acceptance rate: 36/220 = 16.4%)

Activity monitoring in home environments has become increasingly important and has the potential to support a broad array of applications including elder care, well-being management, and latchkey child safety. Traditional approaches involve wearable sensors and specialized hardware installations. This paper presents device-free location-oriented activity identification at home through the use of existing WiFi access points and WiFi devices (e.g., desktops, thermostats, refrigerators, smartTVs, laptops). Our low-cost system takes advantage of the ever more complex web of WiFi links between such devices and the increasingly fine-grained channel state information that can be extracted from such links. It examines channel features and can uniquely identify both in-place activities and walking movements across a home by comparing them against signal profiles. Signal profile construction can be semi-supervised, and the profiles can be adaptively updated to accommodate the movement of mobile devices and day-to-day signal calibration. Our experimental evaluation in two apartments of different sizes demonstrates that our approach can achieve an average true positive rate of over 97% and an average false positive rate of less than 1% in distinguishing a set of in-place and walking activities with only a single WiFi access point. Our prototype also shows that our system can work with a wider signal band (802.11ac) with even higher accuracy.
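The profile-comparison step sketched above can be illustrated with a minimal nearest-profile matcher. The histograms, activity names, and the Euclidean distance metric below are hypothetical stand-ins for the system's actual channel features and comparison method:

```python
import numpy as np

def match_activity(sample_hist, profiles):
    """Return the activity whose stored profile is closest to the sample.

    Profiles are per-activity CSI feature histograms; matching here uses
    plain Euclidean distance as a stand-in for the system's metric.
    """
    best, best_d = None, float("inf")
    for activity, hist in profiles.items():
        d = np.linalg.norm(sample_hist - hist)
        if d < best_d:
            best, best_d = activity, d
    return best

# Hypothetical normalized histograms for two in-place activities.
profiles = {
    "cooking": np.array([0.1, 0.3, 0.4, 0.2]),
    "watching_tv": np.array([0.4, 0.3, 0.2, 0.1]),
}
observed = np.array([0.12, 0.28, 0.42, 0.18])  # noisy "cooking"-like sample
```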
@inproceedings{wang2014eyes, title={E-eyes: device-free location-oriented activity identification using fine-grained wifi signatures}, author={Wang, Yan and Liu, Jian and Chen, Yingying and Gruteser, Marco and Yang, Jie and Liu, Hongbo}, booktitle={Proceedings of the 20th annual international conference on Mobile computing and networking}, pages={617--628}, year={2014}, organization={ACM} }
C5

# Practical User Authentication Leveraging Channel State Information (CSI) ASIACCS 2014

Hongbo Liu, Yan Wang, Jian Liu, Jie Yang, Yingying Chen
in Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security (ASIACCS 2014), Kyoto, Japan, June 2014.
(Acceptance rate: 52/260 = 20%)

User authentication is the critical first step in detecting identity-based attacks and preventing subsequent malicious attacks. However, increasingly dynamic mobile environments make it harder to always apply cryptographic methods for user authentication due to their infrastructural and key management overhead. Exploiting non-cryptographic techniques grounded in physical-layer properties to perform user authentication appears promising. In this work, we explore the use of channel state information (CSI), which is available from off-the-shelf WiFi devices, to conduct fine-grained user authentication. We propose a user-authentication framework that can build user profiles resilient to the presence of a spoofer. Our machine-learning-based user-authentication techniques can distinguish two users even when they possess similar signal fingerprints, and can detect the existence of a spoofer. Our experiments in both office building and apartment environments show that our framework can filter out signal outliers and achieve higher authentication accuracy than existing approaches using received signal strength (RSS).
@inproceedings{liu2014practical, title={Practical user authentication leveraging channel state information (CSI)}, author={Liu, Hongbo and Wang, Yan and Liu, Jian and Yang, Jie and Chen, Yingying}, booktitle={Proceedings of the 9th ACM symposium on Information, computer and communications security}, pages={389--400}, year={2014}, organization={ACM} }
C4

# WSF-MAC: A Weight-based Spatially Fair MAC Protocol for Underwater Sensor Networks CECNet 2012

Fei Dou, Zhigang Jin, Yishan Su, Jian Liu
in Proceedings of the 2nd International Conference on Consumer Electronic, Communications and Networks (CECNet 2012), Hubei, China, April 2012.

The high propagation delay in Underwater Sensor Networks (UWSNs) causes space-time uncertainty, making spatial fairness a challenging problem in UWSNs. In this paper, we propose a weight-based spatially fair MAC protocol (WSF-MAC) for UWSNs. Upon receiving underwater-request (UW-REQ) packets, WSF-MAC postpones sending the underwater-reply (UW-REP) packet for a silence duration, then determines which node should transmit first according to the sending times and competition counts of the UW-REQ packets, and sends a UW-REP to that node to get ready for transmission. The simulation results show that WSF-MAC improves spatial fairness by about 10%.
@inproceedings{dou2012wsf, title={WSF-MAC: A weight-based spatially fair MAC protocol for underwater sensor networks}, author={Dou, Fei and Jin, Zhigang and Su, Yishan and Liu, Jian}, booktitle={2012 2nd International Conference on Consumer Electronics, Communications and Networks (CECNet)}, pages={3708--3711}, year={2012}, organization={IEEE} }
C3

# An Improved RED Algorithm with Sinusoidal Packet-marking Probability and Dynamic Weight ICEICE 2011

Songpo Zhang, Jiming Sa, Jian Liu, Shaoyun Wu
in Proceedings of the International Conference on Electric Information and Control Engineering (ICEICE 2011), Wuhan, China, April 2011.

Congestion control has become a research hotspot because of the rapid growth of the Internet. The Random Early Detection (RED) algorithm is among the most effective active queue management (AQM) techniques. This paper describes the RED algorithm and its derivatives and then presents a new algorithm. A packet-marking probability that grows only linearly with the average queue length is improper for packets arriving at the gateway, so we present an improved algorithm named SW-RED, which uses a sinusoidal packet-marking probability and adjusts the queue weight dynamically, making packet marking more reasonable. Simulations in NS2 show that SW-RED has better performance and stability compared with RED.
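For reference, the baseline RED behavior that SW-RED modifies can be sketched as follows. The sinusoidal marking curve and dynamic weight of SW-RED itself are not reproduced here; only classic RED's linear ramp and EWMA queue average are shown:

```python
def red_mark_prob(avg_q, min_th, max_th, max_p):
    """Classic RED packet-marking probability.

    Linear in the average queue length between the two thresholds;
    SW-RED replaces this linear ramp with a sinusoidal one.
    """
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def ewma_avg(prev_avg, q_len, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue
    length; SW-RED adapts the weight w dynamically."""
    return (1 - w) * prev_avg + w * q_len
```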
@inproceedings{zhang2011improved, title={An improved RED algorithm with sinusoidal packet-marking probability and dynamic weight}, author={Zhang, Songpo and Sa, Jiming and Liu, Jian and Wu, Shaoyun}, booktitle={2011 International Conference on Electric Information and Control Engineering}, pages={1160--1163}, year={2011}, organization={IEEE} }
C2

# An Adaptive Cross-layer Mechanism of Multi-Channel Multi-Interface Wireless Networks for Real-Time Video Streaming UIC/ATC 2010

Jian Liu, Fangmin Li, Fei Dou, Xu He, Zhigang Luo, Hong Xiong
in Proceedings of the 7th International Conference on Autonomic & Trusted Computing (UIC/ATC 2010), Xi'an, China, October 2010.

Real-time video streaming over wireless links imposes strong demands on video codecs and network quality. Many efforts have been made to design proper routing protocols and channel assignments (CAs) for multi-channel multi-interface (MCMI) wireless networks, since they can provide higher performance than a single channel. However, there is still no well-studied proposal to guarantee real-time video quality in this setting, which motivates us to explore the potential synergies of exchanging information between different layers to support real-time video streaming over MCMI wireless networks. In this article we jointly consider three layers of the protocol stack, the application, data link and physical layers, and propose an adaptive cross-layer mechanism for real-time video streaming (ACMRV) for this scenario, which includes both an efficient CA and an adaptive FEC mechanism. We analyze the performance of the proposed architecture and extensively evaluate it via NS-2. The results show that real-time video quality can be greatly improved by our proposal.
@inproceedings{liu2010adaptive, title={An adaptive cross-layer mechanism of multi-channel multi-interface wireless networks for real-time video streaming}, author={Liu, Jian and Li, Fangmin and Dou, Fei and He, Xu and Luo, Zhigang and Xiong, Hong}, booktitle={2010 7th International Conference on Ubiquitous Intelligence \& Computing and 7th International Conference on Autonomic \& Trusted Computing}, pages={165--170}, year={2010}, organization={IEEE} }
C1

# An Improvement of AODV Protocol Based on Reliable Delivery in Mobile Ad hoc Networks IAS 2009

Jian Liu, Fangmin Li
in Proceedings of the 5th International Conference on Information Assurance and Security (IAS 2009), Xi'an, China, August 2009.

The AODV protocol is a comparatively mature on-demand routing protocol for mobile ad hoc networks. However, traditional AODV seems less than satisfactory in terms of delivery reliability. This paper presents AODV with reliable delivery (AODV-RD), which builds on AODV-BR with a link-failure fore-warning mechanism, a metric for better selection of alternate nodes, and a repair action after the primary route breaks. Performance comparison of AODV-RD with AODV-BR and traditional AODV using ns-2 simulations shows that AODV-RD significantly increases the packet delivery ratio (PDR) and has a much shorter end-to-end delay than AODV-BR. It both optimizes network performance and guarantees communication quality.
@inproceedings{liu2009improvement, title={An improvement of AODV protocol based on reliable delivery in mobile ad hoc networks}, author={Liu, Jian and Li, Fang-min}, booktitle={2009 Fifth International Conference on Information Assurance and Security}, volume={1}, pages={507--510}, year={2009}, organization={IEEE} }

J21

# BioFace-3D: 3D Facial Tracking and Animation via Single-ear Wearable Biosensors

Yi Wu, Vimal Kakaraparthi, Zhuohang Li, Tien Pham, Jian Liu, Phuc Nguyen
ACM GetMobile, 2022.

Over the last decade, facial landmark tracking and 3D reconstruction have gained considerable attention due to their numerous applications, such as human-computer interaction, facial expression analysis, emotion recognition, etc. However, existing camera-based solutions require users to be confined to a particular location and face a camera at all times without occlusions, which largely limits their usage in practice. To overcome these limitations, we propose the first single-earpiece lightweight biosensing system, BioFace-3D, that can unobtrusively, continuously, and reliably sense entire facial movements, track 2D facial landmarks, and further render 3D facial animations. Without requiring a camera positioned in front of the user, this paradigm shift from visual sensing to biosensing would introduce new opportunities in many emerging mobile and IoT applications.
@article{wu2022bioface, title={BioFace-3D: 3D Facial Tracking and Animation via Single-ear Wearable Biosensors}, author={Wu, Yi and Kakaraparthi, Vimal and Li, Zhuohang and Pham, Tien and Liu, Jian and Nguyen, VP}, journal={GetMobile: Mobile Computing and Communications}, volume={26}, number={1}, pages={21--24}, year={2022}, publisher={ACM New York, NY, USA} }
J19

# Robust Continuous Authentication Using Cardiac Biometrics from Wrist-worn Wearables

Tianming Zhao, Yan Wang, Jian Liu, Jerry Cheng, Yingying Chen, Jiadi Yu
IEEE Internet of Things Journal (IEEE IoT), 2021.

Traditional one-time user authentication is vulnerable to attacks in which an adversary obtains unauthorized privileges after a user's initial login. Continuous user authentication (CA) has recently shown great potential by enabling seamless user authentication with little user participation. We devise a low-cost system that exploits users' pulsatile signals from photoplethysmography (PPG) sensors in commodity wearable devices to perform CA. Our system requires zero user effort and applies to practical scenarios with non-clinical PPG measurements and human motion artifacts (MA). We explore the uniqueness of the human cardiac system and develop adaptive MA filtering methods to mitigate the impacts of transient and continuous activities in daily life. Furthermore, we identify general fiducial features and develop an adaptive classifier that can authenticate users continuously based on their cardiac characteristics with little additional training effort. Experiments with our wrist-worn PPG sensing platform on 20 participants under practical scenarios demonstrate that our system can achieve a high CA accuracy of over 90% and a low false detection rate of 4% in detecting random attacks. We show that our MA mitigation approaches can improve CA accuracy by around 39% under both transient and continuous daily activity scenarios.
@article{zhao2021robust, title={Robust Continuous Authentication Using Cardiac Biometrics from Wrist-worn Wearables}, author={Zhao, Tianming and Wang, Yan and Liu, Jian and Cheng, Jerry and Chen, Yingying and Yu, Jiadi}, journal={IEEE Internet of Things Journal}, year={2021}, publisher={IEEE} }
J18

# Wifi-enabled User Authentication through Deep Learning in Daily Activities

Cong Shi, Jian Liu, Hongbo Liu, Yingying Chen
ACM Transactions on Internet of Things, 2021.

User authentication is a critical process in both corporate and home environments due to ever-growing security and privacy concerns. With the advancement of smart cities and home environments, the concept of user authentication has evolved to carry a broader implication: not only preventing unauthorized users from accessing confidential information but also providing opportunities for customized services tailored to a specific user. Traditional approaches to user authentication either require specialized device installation or inconvenient wearable sensor attachment. This article supports the extended concept of user authentication with a device-free approach by leveraging the prevalent WiFi signals made available by IoT devices, such as smart refrigerators, smart TVs, and smart thermostats. The proposed system utilizes WiFi signals to capture unique human physiological and behavioral characteristics inherent in users' daily activities, including both walking and stationary ones. Particularly, we extract representative features from channel state information (CSI) measurements of WiFi signals, and develop a deep-learning-based user authentication scheme to accurately identify each individual user. To mitigate the signal distortion caused by surrounding people's movements, our deep learning model exploits a CNN-based architecture that constructively combines features from multiple receiving antennas and derives more reliable feature abstractions. Furthermore, a transfer-learning-based mechanism is developed to reduce the training cost for new users and environments. Extensive experiments in various indoor environments are conducted to demonstrate the effectiveness of the proposed authentication system. In particular, our system can achieve over 94% authentication accuracy with 11 subjects across different activities.
@article{shi2021wifi, title={WiFi-Enabled User Authentication through Deep Learning in Daily Activities}, author={Shi, Cong and Liu, Jian and Liu, Hongbo and Chen, Yingying}, journal={ACM Transactions on Internet of Things}, volume={2}, number={2}, pages={1--25}, year={2021}, publisher={ACM New York, NY, USA} }
J17

# Enabling Finger-touch-based Mobile User Authentication via Physical Vibrations on IoT Devices

Xin Yang, Song Yang, Jian Liu, Chen Wang, Yingying Chen, Nitesh Saxena
IEEE Transactions on Mobile Computing (IEEE TMC), January 2021.

This work enables mobile user authentication via finger inputs on ubiquitous surfaces leveraging low-cost physical vibration. The proposed system extends finger-input authentication beyond touch screens to any solid surface for IoT devices (e.g., smart access systems and IoT appliances). Unlike passcode- or biometrics-based solutions, it integrates passcode, behavioral and physiological characteristics, and surface dependency together to provide a low-cost, tangible and enhanced security solution. The proposed system builds upon a touch sensing technique with vibration signals that can operate on surfaces constructed from a broad range of materials. New algorithms are developed to discriminate fine-grained finger inputs and support three independent passcode secrets, including PIN, lock pattern, and simple gestures, by extracting unique features in the frequency domain that capture both behavioral and physiological characteristics such as contact area and touching force. The system is implemented using a single pair of low-cost, portable vibration motor and receiver that can be easily attached to any surface (e.g., a door panel, a stovetop or an appliance). Extensive experiments demonstrate that our system can authenticate users with high accuracy (e.g., over 97% within two trials) and a low false positive rate (e.g., less than 2%), and is robust to various types of attacks.
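A generic sketch of extracting a frequency-domain feature vector from one touch-induced vibration snippet, in the spirit of the features named above. The band layout and normalization are illustrative assumptions, not the system's actual feature set:

```python
import numpy as np

def vibration_features(snippet, n_bands=8):
    """Coarse frequency-domain feature vector for one vibration snippet.

    Averages the FFT magnitude spectrum into a few bands and normalizes
    to unit length so the features are scale-invariant (the band count
    is a hypothetical choice).
    """
    spectrum = np.abs(np.fft.rfft(snippet - np.mean(snippet)))
    bands = np.array_split(spectrum, n_bands)
    feats = np.array([band.mean() for band in bands])
    return feats / (np.linalg.norm(feats) + 1e-12)
```

Such per-touch vectors would then feed a classifier trained per user and surface.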
@article{yang2021enabling, title={Enabling Finger-touch-based Mobile User Authentication via Physical Vibrations on IoT Devices}, author={Yang, Xin and Yang, Song and Liu, Jian and Wang, Chen and Chen, Yingying and Saxena, Nitesh}, journal={IEEE Transactions on Mobile Computing}, year={2021}, publisher={IEEE} }
J16

# Real-time, Robust and Adaptive Universal Adversarial Attacks Against Speaker Recognition Systems

Yi Xie, Zhuohang Li, Cong Shi, Jian Liu, Yingying Chen, Bo Yuan
Journal of Signal Processing Systems (Springer JSPS), 2020.

Voice user interfaces (VUIs) have become increasingly popular in recent years. Speaker recognition systems, among the most common VUIs, have emerged as an important technique to facilitate security-required applications and services. In this paper, we propose to design, for the first time, a real-time, robust, and adaptive universal adversarial attack against state-of-the-art deep neural network (DNN) based speaker recognition systems in the white-box scenario. By developing an audio-agnostic universal perturbation, we can make DNN-based speaker recognition systems misidentify the speaker as the adversary-desired target label, using a single perturbation that can be applied to any enrolled speaker's voice. In addition, we improve the robustness of our attack by modeling the sound distortions caused by physical over-the-air propagation through estimating the room impulse response (RIR). Moreover, we propose to adaptively adjust the magnitude of the perturbations according to each individual utterance via spectral gating, which further improves the imperceptibility of the adversarial perturbations with only a minor increase in attack-generation time. Experiments on a public dataset of 109 English speakers demonstrate the effectiveness and robustness of the proposed attack. Our attack method achieves an average 90% attack success rate on both X-vector and d-vector speaker recognition systems. Meanwhile, our method achieves a 100× speedup in attack launching time compared to conventional non-universal attacks.
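The core idea of a universal perturbation, one shared delta crafted so that every input is misidentified as the target, can be sketched against a toy white-box linear scorer standing in for the DNN speaker model. The projection radius, step size, and loss below are illustrative, not the paper's optimization:

```python
import numpy as np

def universal_perturbation(samples, w_true, w_target, eps=1.5, step=0.05, iters=200):
    """Craft one shared perturbation that pushes every sample from the
    true class toward the target class of a toy linear scorer."""
    delta = np.zeros_like(samples[0])
    grad = w_target - w_true                  # ascent direction for target margin
    for _ in range(iters):
        for x in samples:
            if grad @ (x + delta) <= 0:       # this sample not yet misidentified
                delta = delta + step * grad
                n = np.linalg.norm(delta)
                if n > eps:                   # project back into the eps-ball
                    delta = delta * (eps / n)
    return delta

# Toy 2-D "utterance features", all initially scored as the true speaker.
w_true = np.array([1.0, 0.0])
w_target = np.array([0.0, 1.0])
samples = [np.array([1.0, -0.2]), np.array([0.8, -0.5]), np.array([1.2, 0.1])]
delta = universal_perturbation(samples, w_true, w_target)
```

The eps-ball projection caps the perturbation's norm, the toy analogue of keeping the audio perturbation imperceptible.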
@article{xie2021real, title={Real-time, Robust and Adaptive Universal Adversarial Attacks Against Speaker Recognition Systems}, author={Xie, Yi and Li, Zhuohang and Shi, Cong and Liu, Jian and Chen, Yingying and Yuan, Bo}, journal={Journal of Signal Processing Systems}, pages={1--14}, year={2021}, publisher={Springer} }
J15

# An End-to-End Network for Continuous Human Motion Recognition via Radar Radios

Running Zhao, Xiaolin Ma, Xinhua Liu, Jian Liu
IEEE Sensors Journal, 2020.

Micro-Doppler-based continuous human motion recognition (HMR) has gained considerable attention recently. However, existing methods mainly rely on individual recurrent neural networks or sliding-window-based approaches, which makes it hard for them to effectively exploit all the temporal information to predict motions. Additionally, they need to transform the raw radar data into other domains before performing feature extraction and classification. Thus, the representation cannot be optimized, and its high computational complexity and independence from the learning model make the network consume significant time. In this paper, to address these issues, we propose a new end-to-end network that uses radar radios to recognize continuous motion. Specifically, the fusion layer fuses the raw I & Q radar data without the need for such representations, and it is integrated with the subsequent networks in an end-to-end manner for joint optimization. Moreover, the attention-based encoder-decoder structure encodes the fused data and selects useful temporal information for recognition, which guarantees the effective use of all the temporal information. The experiments show that in continuous HMR, the proposed network outperforms existing methods in terms of accuracy and inference time.
@article{zhao2020end, title={An End-to-End Network for Continuous Human Motion Recognition via Radar Radios}, author={Zhao, Running and Ma, Xiaolin and Liu, Xinhua and Liu, Jian}, journal={IEEE Sensors Journal}, year={2020}, publisher={IEEE} }
J14

# Acoustic-based Sensing and Applications: a Survey

Yang Bai, Li Lu, Jerry Cheng, Jian Liu, Yingying Chen, Jiadi Yu
Computer Networks, Volume 181, August 2020.

With advancements of wireless and sensing technologies, recent studies have demonstrated technical feasibility and effectiveness of using acoustic signals for sensing. In the past decades, low-cost audio infrastructures are widely-deployed and integrated into mobile and Internet of Things (IoT) devices to facilitate a broad array of applications including human activity recognition, tracking, localization, and security monitoring. The technology underpinning these applications lies in the analysis of propagation properties of acoustic signals (e.g., reflection, diffraction, and scattering) when they encounter human bodies. As a result, these applications serve as the foundation to support various daily functionalities such as safety protection, smart healthcare, and smart appliance interaction. The already-existing acoustic infrastructure could also complement RF-based localization and other approaches based on short-range communications such as Near-Field Communication (NFC) and Quick Response (QR) code. In this paper, we provide a comprehensive review on acoustic-based sensing in terms of hardware infrastructure, technical approaches, and its broad applications. First we describe different methodologies and techniques of using acoustic signals for sensing including Time-of-Arrival (ToA), Frequency Modulated Continuous Wave (FMCW), Time-Difference-of-Arrival (TDoA), and Channel Impulse Response (CIR). 
Then we classify various applications and compare different acoustic-based sensing approaches: in recognition and tracking, we review daily activity recognition, human health and behavioral monitoring, hand gesture recognition, hand movement tracking, and speech recognition; in localization and navigation, we discuss ranging and direction finding, indoor and outdoor localization, and floor map construction; in security and privacy, we survey user authentication, keystroke snooping attacks, audio adversarial attacks, acoustic vibration attacks, and privacy protection schemes. Lastly, we discuss future research directions and limitations of acoustic-based sensing.
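Two of the ranging principles surveyed above, ToA and FMCW, reduce to short formulas. This sketch uses the speed of sound for the acoustic setting; the chirp parameters in the examples are hypothetical:

```python
def toa_distance(round_trip_s, c=343.0):
    """Time-of-Arrival: distance from a round-trip echo delay."""
    return c * round_trip_s / 2.0

def fmcw_range(beat_freq_hz, sweep_bw_hz, sweep_time_s, c=343.0):
    """FMCW: a chirp sweeping sweep_bw_hz over sweep_time_s, mixed with
    its echo, yields beat frequency f_b = 2*R*B/(c*T), so R = f_b*c*T/(2*B).
    """
    return beat_freq_hz * c * sweep_time_s / (2.0 * sweep_bw_hz)
```

For example, with a 4 kHz sweep over 40 ms, a 200 Hz beat corresponds to a reflector about 0.34 m away.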
@article{bai2020acoustic, title={Acoustic-based sensing and applications: A survey}, author={Bai, Yang and Lu, Li and Cheng, Jerry and Liu, Jian and Chen, Yingying and Yu, Jiadi}, journal={Computer Networks}, volume={181}, pages={107447}, year={2020}, publisher={Elsevier} }
J13

# When Your Wearables Become Your Fitness Mate

Xiaonan Guo, Jian Liu, Yingying Chen
Smart Health, Volume 16, May 2020.

Acknowledging that the powerful sensors on wearables and smartphones enable various applications to improve users' lifestyles and quality of life (e.g., sleep monitoring and running rhythm tracking), this paper takes one step forward by developing FitCoach, a virtual fitness coach that leverages users' wearable mobile devices (including wrist-worn wearables and arm-mounted smartphones) to assess dynamic postures (movement patterns & positions) in workouts. FitCoach aims to help the user achieve effective workouts and prevent injury by dynamically depicting the short-term and long-term picture of a user's workout based on various sensors in wearable mobile devices. In particular, FitCoach recognizes different types of exercises and interprets fine-grained fitness data (i.e., motion strength and speed) into an easy-to-understand exercise review score, which provides a comprehensive workout performance evaluation and recommendation. Our system further enables contactless device control during workouts (e.g., a gesture to pick up an incoming call) by distinguishing customized gestures from regular exercise movements. In addition, FitCoach has the ability to align the sensor readings from wearable devices to the human coordinate system, ensuring the accuracy and robustness of the system. Extensive experiments involve 12 participants performing over 5000 repetitions of 12 types of exercises, both anaerobic and aerobic, indoors as well as outdoors. Our results demonstrate that FitCoach can provide meaningful reviews and recommendations to users by accurately measuring their workout performance, achieving 93% and 90% accuracy for workout analysis and customized control gesture recognition, respectively.
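The coordinate-alignment step can be illustrated with a generic gravity-based rotation built from Rodrigues' formula. This is a sketch of the idea, not FitCoach's exact alignment procedure:

```python
import numpy as np

def align_to_gravity(accel, gravity):
    """Rotate a device-frame accelerometer sample so gravity maps to -z.

    Builds the shortest rotation taking the measured gravity direction
    to (0, 0, -1) via Rodrigues' rotation formula.
    """
    g = gravity / np.linalg.norm(gravity)
    target = np.array([0.0, 0.0, -1.0])
    v = np.cross(g, target)            # rotation axis (unnormalized)
    c = float(g @ target)              # cos of the rotation angle
    s = np.linalg.norm(v)              # sin of the rotation angle
    if s < 1e-12:
        # gravity already along ±z; 180° flip about x if pointing the wrong way
        return accel if c > 0 else accel * np.array([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)
    return R @ accel
```

A full alignment to the human frame would additionally need a heading reference (e.g., from arm-swing direction); gravity alone fixes only the vertical axis.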
@article{guo2020your, title={When your wearables become your fitness mate}, author={Guo, Xiaonan and Liu, Jian and Chen, Yingying}, journal={Smart Health}, volume={16}, pages={100114}, year={2020}, publisher={Elsevier} }
J12

# User Authentication on Mobile Devices: Approaches, Threats and Trends

Chen Wang, Yan Wang, Yingying Chen, Hongbo Liu, Jian Liu
Computer Networks, Volume 170, April 2020.

Mobile devices have brought great convenience to us in recent years, allowing users to enjoy various applications anytime and anywhere, such as online shopping, Internet banking, navigation, and mobile media. While users enjoy the convenience and flexibility of the "Go Mobile" trend, their sensitive private information (e.g., name and credit card number) on the mobile devices could be disclosed. An adversary could access the sensitive private information stored on a mobile device by unlocking it. Moreover, the user's mobile services and applications are all exposed to security threats. For example, the adversary could utilize the user's mobile device to conduct non-permitted actions (e.g., making online transactions and installing malware). Authentication on mobile devices plays a significant role in protecting the user's sensitive information and preventing any non-permitted access to the device. This paper surveys the existing authentication methods on mobile devices. In particular, based on the basic authentication metrics (i.e., knowledge, ownership and biometrics) used in existing mobile authentication methods, we categorize them into four categories: knowledge-based authentication (e.g., passwords and lock patterns), physiological biometric-based authentication (e.g., fingerprint and iris), behavioral biometric-based authentication (e.g., gait and hand gesture), and two/multi-factor authentication. We compare the usability and security level of the existing authentication approaches among these categories. Moreover, we review existing attacks on these authentication approaches to reveal their vulnerabilities. The paper points out that the trend of authentication on mobile devices is toward multi-factor authentication, which determines the user's identity using the integration (not the simple combination) of more than one authentication metric. For example, the user's behavioral biometrics (e.g., keystroke dynamics) could be extracted simultaneously when he/she inputs the knowledge-based secrets (e.g., PIN), which can provide enhanced authentication while sparing the user the trouble of conducting multiple inputs for different authentication metrics.
@article{wang2020user, title={User authentication on mobile devices: Approaches, threats and trends}, author={Wang, Chen and Wang, Yan and Chen, Yingying and Liu, Hongbo and Liu, Jian}, journal={Computer Networks}, volume={170}, pages={107118}, year={2020}, publisher={Elsevier} }
J11

# Enable Traditional Laptops with Virtual Writing Capability Leveraging Acoustic Signals

Li Lu, Jian Liu, Jiadi Yu, Yingying Chen, Yanmin Zhu, Linghe Kong, Minglu Li
The Computer Journal, January 2020.

Human–computer interaction through touch screens plays an increasingly important role in our daily lives. Besides smartphones and tablets, laptops are the most prevalent mobile devices for both work and leisure. To satisfy the requirements of some applications, it is desirable to re-equip a typical laptop with both handwriting and drawing capability. In this paper, we design a virtual writing tablet system, VPad, for traditional laptops without touch screens. VPad leverages two speakers and one microphone, which are available in most commodity laptops, to accurately track hand movements and recognize characters written in the air without additional hardware. Specifically, VPad emits inaudible acoustic signals from the two speakers of a laptop and then analyzes energy features and Doppler shifts of the acoustic signals received by the microphone to track the trajectory of hand movements. Furthermore, we propose a state-machine-based trajectory optimization method to correct unexpected trajectories and employ a stroke direction sequence model based on probability estimation to recognize the characters users write in the air. Experimental results show that VPad achieves an average error of 1.55 cm for trajectory tracking and an accuracy of over 90% for character recognition, merely through the built-in audio devices of a laptop.
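The Doppler analysis above rests on a one-line relation between frequency shift and radial hand velocity. A sketch, assuming an inaudible 20 kHz tone and the speed of sound in air:

```python
def hand_velocity_from_doppler(f_emit_hz, f_observed_hz, c=343.0):
    """Radial hand velocity from the Doppler shift of a reflected tone.

    For sound reflected off a hand moving toward the microphone,
    f_d = f_obs - f_emit ≈ 2*v*f_emit/c, so v ≈ f_d*c / (2*f_emit).
    """
    f_d = f_observed_hz - f_emit_hz
    return f_d * c / (2.0 * f_emit_hz)
```

For a 20 kHz tone, a 40 Hz upward shift corresponds to a hand approaching at roughly 0.34 m/s; the sign of the shift gives the direction of motion.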
@article{lu2020enable, title={Enable Traditional Laptops with Virtual Writing Capability Leveraging Acoustic Signals}, author={Lu, Li and Liu, Jian and Yu, Jiadi and Chen, Yingying and Zhu, Yanmin and Kong, Linghe and Li, Minglu}, journal={The Computer Journal}, year={2020} }
J10

# Towards Low-cost Sign Language Gesture Recognition Leveraging Wearables

Tianming Zhao, Jian Liu, Yan Wang, Hongbo Liu, Yingying Chen
IEEE Transactions on Mobile Computing (IEEE TMC), December 2019.

Different from traditional gestures, sign language gestures involve many finger-level gestures without wrist or arm movements, which are hard to detect using existing motion-sensor-based approaches. We introduce the first low-cost sign language gesture recognition system that can differentiate fine-grained finger movements using the photoplethysmography (PPG) and motion sensors in commodity wearables. By leveraging the motion artifacts in PPG, our system can accurately recognize sign language gestures when there are large body movements, which cannot be handled by traditional motion-sensor-based approaches. We further explore the feasibility of using both PPG and motion sensors in wearables to improve the sign language gesture recognition accuracy when there are limited body movements. We develop a gradient boosted tree (GBT) model and a deep neural network-based model (i.e., ResNet) for classification. The transfer learning technique is applied to the ResNet-based model to reduce the training effort. We develop a prototype using low-cost PPG and motion sensors and conduct extensive experiments, collecting over 7000 gestures from 10 adults in static and body-motion scenarios. Results demonstrate that our system can differentiate nine finger-level gestures from American Sign Language with an average recognition accuracy of over 98%.
@article{zhao2019towards, title={Towards Low-cost Sign Language Gesture Recognition Leveraging Wearables}, author={Zhao, Tianming and Liu, Jian and Wang, Yan and Liu, Hongbo and Chen, Yingying}, journal={IEEE Transactions on Mobile Computing}, year={2019}, publisher={IEEE} }
J9

# Wireless Sensing for Human Activity: A Survey

Jian Liu, Hongbo Liu, Yingying Chen, Yan Wang, Chen Wang
IEEE Communications Surveys and Tutorials, 2019. (IF=22.97).

With the advancement of wireless technologies and sensing methodologies, many studies have shown the success of re-using wireless signals (e.g., WiFi) to sense human activities and thereby realize a set of emerging applications, ranging from intrusion detection, daily activity recognition, and gesture recognition to vital signs monitoring and user identification involving even finer-grained motion sensing. These applications can serve various domains in smart home and office environments, including safety protection, well-being monitoring/management, smart healthcare and smart-appliance interaction. The movements of the human body impact the wireless signal propagation (e.g., reflection, diffraction and scattering), which provides great opportunities to capture human motions by analyzing the received wireless signals. Researchers take advantage of the existing wireless links among mobile/smart devices (e.g., laptops, smartphones, smart thermostats, smart refrigerators and virtual assistance systems) by either extracting the ready-to-use signal measurements or adopting frequency-modulated signals to detect the frequency shift. Due to its low-cost and non-intrusive sensing nature, wireless-based human activity sensing has drawn considerable attention and become a prominent research field over the past decade. In this paper, we survey the existing wireless sensing systems in terms of their basic principles, techniques and system structures. Particularly, we describe how the wireless signals can be utilized to facilitate an array of applications including intrusion detection, room occupancy monitoring, daily activity recognition, gesture recognition, vital signs monitoring, user identification and indoor localization. The future research directions and limitations of using wireless signals for human activity sensing are also discussed.
@article{liu2019wireless, title={Wireless sensing for human activity: A survey}, author={Liu, Jian and Liu, Hongbo and Chen, Yingying and Wang, Yan and Wang, Chen}, journal={IEEE Communications Surveys \& Tutorials}, year={2019}, publisher={IEEE} }
J8

# Good Vibrations: Accessing ‘Smart’ Systems by Touching Any Solid Surface

Jian Liu, Chen Wang, Yingying Chen, Nitesh Saxena
Biometric Technology Today (BTT), Issue 4, Pages 7-10, 2018.

The process of people authenticating themselves to verify their identity is now commonplace across many areas of our daily life. It's no longer just users of touchscreen devices like mobile phones – the growing use of smart systems means people need to identify themselves to access many other devices and daily activities, like entering their apartment, driving a vehicle and using smart appliances.
@article{liu2018good, title={Good vibrations: accessing ‘smart’ systems by touching any solid surface}, author={Liu, Jian and Wang, Chen and Chen, Yingying and Saxena, Nitesh}, journal={Biometric Technology Today}, volume={2018}, number={4}, pages={7--10}, year={2018}, publisher={Elsevier} }
J7

# Monitoring Vital Signs and Postures During Sleep Using WiFi Signals

Jian Liu, Yingying Chen, Yan Wang, Xu Chen, Jerry Cheng, Jie Yang
IEEE Internet of Things Journal (IEEE IoT), Volume 5, Issue 3, Pages 2071-2084, 2018. (IF = 7.596).

Tracking human sleeping postures and vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., polysomnography) are limited to clinic usage. Recent radio frequency-based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this paper, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system reuses the existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heartbeats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domains to estimate breathing and heart rates, and it works well when either one or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance compared with traditional and existing approaches, which is a strong indication of providing noninvasive, continuous fine-grained vital signs monitoring without any additional cost.
@article{liu2018monitoring, title={Monitoring vital signs and postures during sleep using WiFi signals}, author={Liu, Jian and Chen, Yingying and Wang, Yan and Chen, Xu and Cheng, Jerry and Yang, Jie}, journal={IEEE Internet of Things Journal}, volume={5}, number={3}, pages={2071--2084}, year={2018}, publisher={IEEE} }
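The band-limited spectral peak picking behind such breathing-rate estimation can be sketched on a synthetic CSI amplitude stream; the 50 Hz sampling rate, 0.25 Hz breathing tone, and band edges are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 50.0                         # CSI sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)      # one minute of samples

# Synthetic CSI amplitude: a slow breathing oscillation plus sensor noise.
csi = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t) \
          + 0.01 * rng.standard_normal(t.size)

# Find the dominant frequency inside the human breathing band (0.1-0.5 Hz).
spectrum = np.abs(np.fft.rfft(csi - csi.mean()))
freqs = np.fft.rfftfreq(csi.size, 1 / fs)
band = (freqs >= 0.1) & (freqs <= 0.5)
breaths_per_min = 60.0 * freqs[band][spectrum[band].argmax()]
print(breaths_per_min)  # → 15.0 for the 0.25 Hz tone
```

The same peak-picking idea, applied in a higher band (roughly 1-2 Hz), would recover heart rate from the finer CSI fluctuations.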
J6

# Authenticating Users through Fine-grained Channel Information

Hongbo Liu, Yan Wang, Jian Liu, Jie Yang, Yingying Chen, H. Vincent Poor
IEEE Transactions on Mobile Computing (IEEE TMC), Volume 17, Issue 2, Pages 251-264, 2018.

User authentication is the critical first step in detecting identity-based attacks and preventing subsequent malicious attacks. However, increasingly dynamic mobile environments make it harder to always apply cryptographic-based methods for user authentication due to their infrastructural and key management overhead. Exploiting non-cryptographic techniques grounded on physical layer properties to perform user authentication appears promising. In this work, the use of channel state information (CSI), which is available from off-the-shelf WiFi devices, to perform fine-grained user authentication is explored. Particularly, a user-authentication framework that can work with both stationary and mobile users is proposed. When the user is stationary, the proposed framework builds a user profile for user authentication that is resilient to the presence of a spoofer. The proposed machine learning based user-authentication techniques can distinguish between two users even when they possess similar signal fingerprints and detect the existence of a spoofer. When the user is mobile, it is proposed to detect the presence of a spoofer by examining the temporal correlation of CSI measurements. Experiments in both office building and apartment environments show that the proposed framework can filter out signal outliers and achieve higher authentication accuracy compared with existing approaches using received signal strength (RSS).
@article{liu2018authenticating, title={Authenticating users through fine-grained channel information}, author={Liu, Hongbo and Wang, Yan and Liu, Jian and Yang, Jie and Chen, Yingying and Poor, H Vincent}, journal={IEEE Transactions on Mobile Computing}, volume={17}, number={2}, pages={251--264}, year={2018}, publisher={IEEE} }
J5

# 3D Tracking via Shoe Sensing

Fangmin Li, Guo Liu, Jian Liu, Xiaochuang Chen, Xiaolin Ma
Sensors (MDPI), 2016, 16(11), 1809.

Most location-based services are based on a global positioning system (GPS), which only works well in outdoor environments. Compared to outdoor environments, indoor localization has created more buzz in recent years as people spend most of their time indoors, working at offices and shopping at malls, etc. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful due to the mobile devices’ random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing stairs, a state classifier is designed to distinguish the walking status, including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the vertical distance accumulation error. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy.
@article{li20163d, title={3D Tracking via Shoe Sensing}, author={Li, Fangmin and Liu, Guo and Liu, Jian and Chen, Xiaochuang and Ma, Xiaolin}, journal={Sensors}, volume={16}, number={11}, pages={1809}, year={2016}, publisher={Multidisciplinary Digital Publishing Institute} }
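The short-time energy gait extraction mentioned above can be sketched as follows; the window length, threshold, and synthetic accelerometer trace are illustrative assumptions, not the paper's parameters.

```python
import math

def short_time_energy(signal, win=25):
    """Energy of each non-overlapping window: E_n = sum of x^2 over the window."""
    return [sum(x * x for x in signal[i:i + win])
            for i in range(0, len(signal) - win + 1, win)]

def detect_steps(signal, win=25, threshold=5.0):
    """Indices of windows whose energy exceeds the threshold (candidate steps)."""
    return [n for n, e in enumerate(short_time_energy(signal, win)) if e > threshold]

# Synthetic accelerometer magnitude: quiet stance interleaved with two
# energetic bursts standing in for foot strikes.
trace = [0.1] * 25 + [2.0 * math.sin(i / 2.0) for i in range(25)] \
      + [0.1] * 25 + [2.0 * math.sin(i / 2.0) for i in range(25)]
print(detect_steps(trace))  # → [1, 3]: only the burst windows exceed the threshold
```

In the full system, each detected stride window would then feed the plane/upstairs/downstairs state classifier before distances are accumulated.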
J4

# Fusion of Different Height Pyroelectric Infrared Sensors for Person Identification

Ji Xiong, Fangmin Li, Jian Liu
IEEE Sensors Journal, Volume 16, Issue 2, Pages 436-446, 2016.

Due to the instability and poor identification ability of a single pyroelectric infrared (PIR) detector for human target identification, this paper presents a PIR detection and identification system that collects thermal infrared features from different parts of human targets through multiple PIR sensors. First, fast Fourier transform, short-time Fourier transform, and wavelet packet transform algorithms are adopted to extract thermal infrared features of human targets. Then, the canonical correlation analysis algorithm is used to fuse the features of the different algorithms in the feature layer. Finally, a support vector machine is used to classify the human targets. In the decision-making layer, Dempster/Shafer evidence theory is adopted to optimize the recognition results from PIR sensors located at different heights. Extensive experimental results demonstrate that fusion in the feature layer improves the average recognition rate for human targets at closer distances compared with a single sensor, and that fusion in the decision-making layer improves the recognition ability of the identification system as well. When the detection distance is 6 m, the correct recognition rate of the fusion system still reaches 88.75%. Compared with the system using a single sensor, the recognition rate is increased by an average of 22.67%.
@article{xiong2016fusion, title={Fusion of different height pyroelectric infrared sensors for person identification}, author={Xiong, Ji and Li, Fangmin and Liu, Jian}, journal={IEEE Sensors Journal}, volume={16}, number={2}, pages={436--446}, year={2016}, publisher={IEEE} }
J3

# Throughput-Delay Tradeoff for Wireless Multi-Channel Multi-Interface Random Networks

Xiaolin Ma, Fangmin Li, Jian Liu, Xinhua Liu
Canadian Journal of Electrical and Computer Engineering (CJECE), Volume 38, Issue 2, Pages 162-169, 2015.

Capturing the throughput-delay tradeoff in wireless networks has drawn considerable attention, as it could bring a better usage experience by considering different throughput/delay demands. Traditional works consider only typical single-channel single-interface networks, whereas multichannel multi-interface (MCMI) networks will become mainstream since they provide concurrent transmissions in different channels, which in turn helps each node obtain better performance. Unlike previous works, this paper investigates the throughput-delay tradeoff for MCMI random networks. Two queuing systems, i.e., the M/M/m queuing system and the m M/M/1 queuing system, are established for MCMI nodes, and a routing-implementation parameter named routing deviation is also considered in the analytical model. This paper studies concurrent transmission capacity (CTC) using the physical interference model and also explores the impact of different physical parameters on CTC. Moreover, the relations between throughput and delay are derived using the two queuing systems in MCMI random networks, respectively. The deterministic results obtained with a group of real network configuration parameters demonstrate that the proposed tradeoff model could be applied to real network scenarios.
@article{ma2015throughput, title={Throughput--Delay Tradeoff for Wireless Multichannel Multi-Interface Random Networks}, author={Ma, Xiaolin and Li, Fangmin and Liu, Jian and Liu, Xinhua}, journal={Canadian Journal of Electrical and Computer Engineering}, volume={38}, number={2}, pages={162--169}, year={2015}, publisher={IEEE} }
J2

# The Capacity of Multi-channel Multi-interface Wireless Networks with Multi-packet Reception and Directional Antenna

Jian Liu, Fangmin Li, Xinhua Liu, Hao Wang
Wireless Communications and Mobile Computing (WCMC, Wiley), Volume 14, Issue 8, Pages 803-817, 2014.

The capacity of wireless networks can be improved by the use of multi-channel multi-interface (MCMI) technology, multi-packet reception (MPR), and directional antennas (DA). MCMI provides concurrent transmissions in different channels for each node with multiple interfaces; MPR offers an increased number of concurrent transmissions on the same channel; a DA can be more effective than an omnidirectional antenna by reducing interference and increasing spatial reuse. This paper explores the capacity of wireless networks that integrate MCMI, MPR, and DA technologies. Unlike previous research, which employed only one or two of the aforementioned technologies to improve network capacity, this research captures the capacity bound of networks with all three technologies in arbitrary and random wireless networks. The research shows that such three-technology networks can achieve at most $$\frac{2\pi}{\theta}\sqrt{k}$$ capacity gain in arbitrary networks and $$(\frac{2\pi}{\theta})^2{k}$$ capacity gain in random networks compared with MCMI wireless networks without DA and MPR. The paper also explores and analyzes the impact on the network capacity gain of different $$\frac{c}{m}$$, θ, and k-MPR ability.
@article{liu2014capacity, title={The capacity of multi-channel multi-interface wireless networks with multi-packet reception and directional antenna}, author={Liu, Jian and Li, Fangmin and Liu, Xinhua and Wang, Hao}, journal={Wireless Communications and Mobile Computing}, volume={14}, number={8}, pages={803--817}, year={2014}, publisher={Wiley Online Library} }
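As a worked instance of the stated bounds (the beamwidth θ and MPR ability k below are illustrative values, not from the paper):

```python
import math

def gain_arbitrary(theta: float, k: int) -> float:
    """Capacity gain bound (2*pi/theta) * sqrt(k) for arbitrary networks."""
    return (2 * math.pi / theta) * math.sqrt(k)

def gain_random(theta: float, k: int) -> float:
    """Capacity gain bound ((2*pi/theta)^2) * k for random networks."""
    return (2 * math.pi / theta) ** 2 * k

# A 60-degree antenna beamwidth (theta = pi/3) combined with 4-MPR:
print(round(gain_arbitrary(math.pi / 3, 4), 6))  # → 12.0
print(round(gain_random(math.pi / 3, 4), 6))     # → 144.0
```

Narrowing the beam (smaller θ) or raising k grows the random-network bound quadratically in 2π/θ but the arbitrary-network bound only linearly.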
J1

# Routing Optimization of Wireless Sensor Network Based on Hello Mechanism

Jian Liu, Fangmin Li
Computer Engineering, Volume 36, Issue 7, Pages 99-101, 2010.

This paper analyzes a shortcoming of the Ad hoc On-Demand Distance Vector (AODV) protocol: its fixed protocol overhead occupies network bandwidth under network topologies with different degrees of stability. It puts forward a method that transmits the Hello message with an adaptive interval to control the power, increase the usable network bandwidth, and reduce the convergence time. The algorithm adjusts the Hello-message interval automatically according to how rapidly the network topology changes. Simulation tests indicate that, with all other conditions unchanged, network bandwidth is used more reasonably and network performance is optimized by applying the adaptive interval to Hello-message transmission.
@article{liu2010routing, title={Routing Optimization of Wireless Sensor Network Based on Hello Mechanism}, author={LIU, Jian and LI, Fang-min}, journal={Computer Engineering}, volume={36}, number={7}, pages={99--101}, year={2010} }

O15

# Poster: Deploying a Human Robot Interaction Model for Dementia Care in Federated Learning CHASE 2022

Xiaowen Su, Fengpei Yuan, Ran Zhang, Jian Liu, Marie Boltz, Xiaopeng Zhao
in Proceedings of the IEEE/ACM Conference on Connected Health Applications, Systems, and Engineering Technologies (CHASE 2022), Washington DC, USA, November 2022.

O14

# Poster Abstract: Security and Privacy in the Age of Cordless Power World SenSys 2020

Yi Wu, Zhuohang Li, Nicholas Van Nostrand, Jian Liu
Poster Session, in Proceedings of the 18th ACM Conference on Embedded Networked Sensor Systems (SenSys 2020), Yokohama, Japan, November 2020.

In this work, we conduct the first study to explore the potential security and privacy vulnerabilities of cordless power transfer techniques, particularly Qi wireless charging for mobile devices. We demonstrate that the communication established between the charger and the charging device can be easily interfered with and eavesdropped on. Specifically, by stealthily placing an adversarial coil on the wireless charger, an adversary can hijack the communication channel and inject malicious data bits to take control of the charging process. Moreover, by simply taping two wires on the wireless charger, an adversary can eavesdrop on Qi messages, which carry rich information highly correlated with the charging device’s activities, from the measured primary-coil voltage. We examine the extent to which this side channel leaks private information about the smartphone’s activities while being charged (e.g., detecting and identifying incoming calls and messages from different apps). Experimental results demonstrate the capability of an adversary to inject any desired malicious packets to take over the charging process, and that the primary-coil voltage side channel can leak private information about the smartphone’s activities while it is being charged.
@inproceedings{wu2020security, title={Security and privacy in the age of cordless power world}, author={Wu, Yi and Li, Zhuohang and Van Nostrand, Nicholas and Liu, Jian}, booktitle={Proceedings of the 18th Conference on Embedded Networked Sensor Systems}, pages={717--718}, year={2020} }
O13

# Demo: Device-free Activity Monitoring Through Real-time Analysis on Prevalent WiFi Signals DySPAN 2019

Cong Shi, Justin Esposito, Sachin Mathew, Amit Patel, Rishika Sakhuja, Jian Liu and Yingying Chen
Demo Session, in Proceedings of the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN 2019), Newark, New Jersey, November 2019.

In this demo, we present a device-free activity monitoring platform exploiting the prevalent WiFi signals to enable real-time activity recognition and user identification in indoor environments. It supports a broad array of real-world applications, such as senior assistance services, fitness tracking, and building surveillance. In particular, the proposed platform takes advantage of channel state information (CSI), which is sensitive to environmental changes introduced by human body movements. To enable immediate response of the platform, we design a real-time mechanism that continuously monitors the WiFi signals and promptly analyzes the CSI readings when human activity is detected. For each detected activity, we extract representative features from CSI and exploit a deep neural network (DNN) based scheme to accurately identify the activity type/user identity. Our experimental results demonstrate that the proposed platform can perform activity/user identification with high accuracy while offering low latency.
O12

# Demo: Hands-Free Human Activity Recognition Using Millimeter-Wave Sensors DySPAN 2019

Soo Min Kwon, Song Yang, Jian Liu, Xin Yang, Wesam Saleh, Shreya Patel, Christine Mathews, Yingying Chen
Demo Session, in Proceedings of the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN 2019), Newark, New Jersey, November 2019.

In this demo, we introduce a hands-free human activity recognition framework leveraging millimeter-wave (mmWave) sensors. Compared to other existing approaches, our network protects user privacy and can reconstruct a skeleton of the human performing the activity. Moreover, we show that both tasks can be achieved in one architecture, which can be further optimized to reach higher accuracy than networks that produce only a single result (i.e., only pose estimation or only activity recognition). To demonstrate the practicality and robustness of our model, we will demonstrate it in different settings (e.g., against different backgrounds) and effectively show the accuracy of our network.
O11

# Demo: Monitoring Movement Dynamics of Robot Cars and Drones Using Smartphone’s Built-in Sensors DySPAN 2019

Yang Bai, Xin Yang, ChenHao Liu, Justin Wain, Ryan Wang, Jeffery Cheng, Chen Wang, Jian Liu, Yingying Chen
Demo Session, in Proceedings of the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN 2019), Newark, New Jersey, November 2019.

In this demo, we present a smart system that can monitor the movement dynamics of any carrying platform (e.g., robot car and drone) leveraging the inertial sensors of the attached smartphone. Through the measured inertial sensor readings, we can monitor the movement dynamics of the carrying platform in real-time, such as the platform’s moving speed, displacement, and position. Unlike the Global Positioning System (GPS), which shows severe accuracy degradation when GPS signals are weak (e.g., in indoor or urban environments), our system tracks the platform’s movements and performs positioning without receiving external signals. Thus, our system can be an effective alternative approach to monitor the movement dynamics of indoor objects (e.g., sweeping robots, indoor drones). Specifically, we exploit the motion-sensing capabilities of the smartphone’s inertial sensors to measure the carrying platform’s movement dynamics. The smartphone's magnetometer allows us to orient the sensors with respect to the cardinal directions; the gyroscope and accelerometer enable measuring the velocity and displacement of the platform. Our experimental results demonstrate that our system can accurately measure the movement dynamics of the carrying platform with easy-to-access smartphone sensors, as a substitute for GPS-based positioning in indoor environments.
O10

# Demo: Toward Continuous User Authentication Using PPG in Commodity Wrist-worn Wearables MobiCom 2019

Tianming Zhao, Yan Wang, Jian Liu, Yingying Chen
Demo Session, in Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (MobiCom 2019), Los Cabos, Mexico, October 2019.

We present a photoplethysmography (PPG)-based continuous user authentication (CA) system leveraging the pervasively equipped PPG sensor in commodity wrist-worn wearables such as the smartwatch. Compared to existing approaches, our system does not require any users’ interactions (e.g., performing specific gestures) and is applicable to practical scenarios where the user’s daily activities cause motion artifacts (MA). Notably, we design a robust MA removal method to mitigate the impact of MA. Furthermore, we explore the uniqueness of the human cardiac system and extract the fiducial features in the PPG measurements to train the gradient boosting tree (GBT) classifier, which can effectively differentiate users continuously using low training effort. In particular, we build the prototype of our system using a commodity smartwatch and a WebSocket server running on a laptop for CA. In order to demonstrate the practical use of our system, we will demo our prototype under different scenarios (i.e., static and moving) to show it can effectively detect MA caused by daily activities and achieve a high authentication success rate.
O9

# Poster: Inaudible High-throughput Communication Through Acoustic Signals MobiCom 2019

Yang Bai, Jian Liu, Yingying Chen, Li Lu, Jiadi Yu
Poster Session, in Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (MobiCom 2019), Los Cabos, Mexico, October 2019.

In recent decades, countless efforts have been put into the research and development of short-range wireless communication, which offers a convenient way for numerous applications (e.g., mobile payments, mobile advertisement). Regarding the design of acoustic communication, throughput and inaudibility are the most vital aspects, which greatly affect available applications that can be supported and their user experience. Existing studies on acoustic communication either use audible frequency band (e.g., <20kHz) to achieve a relatively high throughput or realize inaudibility using near-ultrasonic frequency band (e.g., 18-20kHz) which however can only achieve limited throughput. Leveraging the non-linearity of microphones, voice commands can be demodulated from the ultrasound signals, and further recognized by the speech recognition systems. In this poster, we design an acoustic communication system, which achieves high-throughput and inaudibility at the same time, and the highest throughput we achieve is over 17× higher than the state-of-the-art acoustic communication systems.
O8

# Poster: Leveraging Breathing for Continuous User Authentication MobiCom 2018

Jian Liu, Yudi Dong, Yingying Chen, Yan Wang, Tianming Zhao
Poster Session, in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking (MobiCom 2018), New Delhi, India, October 2018.

This work proposes a continuous user verification system based on unique human respiratory-biometric characteristics extracted from the off-the-shelf WiFi signals. Our system innovatively re-uses widely available WiFi signals to capture the unique physiological characteristics rooted in respiratory motions for continuous authentication. Different from existing continuous authentication approaches having limited applicable scenarios due to their dependence on restricted user behaviors (e.g., keystrokes and gaits) or dedicated sensing infrastructures, our approach can be easily integrated into any existing WiFi infrastructure to provide non-invasive continuous authentication independent of user behaviors. Specifically, we extract representative features leveraging waveform morphology analysis and fuzzy wavelet transformation of respiration signals derived from the readily available channel state information (CSI) of WiFi. A respiration-based user authentication scheme is developed to accurately identify users and reject spoofers. Extensive experiments involving 20 subjects demonstrate that the proposed system can achieve a high authentication success rate of over 93% and robustly defend against various types of attacks.
@inproceedings{liu2018poster, title={Poster: Leveraging Breathing for Continuous User Authentication}, author={Liu, Jian and Dong, Yudi and Chen, Yingying and Wang, Yan and Zhao, Tianming}, booktitle={Proceedings of the 24th Annual International Conference on Mobile Computing and Networking}, pages={786--788}, year={2018}, organization={ACM} }
O7

# Poster: Inferring Mobile Payment Passcodes Leveraging Wearable Devices MobiCom 2018

Chen Wang, Jian Liu, Xiaonan Guo, Yan Wang, Yingying Chen
Poster Session, in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking (MobiCom 2018), New Delhi, India, October 2018.

Mobile payment has drawn considerable attention due to its convenience of paying via personal mobile devices at anytime and anywhere, and passcodes (i.e., PINs) are the first choice of most consumers to authorize the payment. This work demonstrates a serious security breach and aims to raise the awareness of the public that the passcodes for authorizing transactions in mobile payments can be leaked by exploiting the embedded sensors in wearable devices (e.g., smartwatches). We present a passcode inference system, which examines to what extent the user's PIN during mobile payment could be revealed from a single wrist-worn wearable device under different input scenarios involving either two hands or a single hand. Extensive experiments with 15 volunteers demonstrate that an adversary is able to recover a user's PIN with high success rate within 5 tries under various input scenarios.
@inproceedings{wang2018poster, title={Poster: Inferring Mobile Payment Passcodes Leveraging Wearable Devices}, author={Wang, Chen and Liu, Jian and Guo, Xiaonan and Wang, Yan and Chen, Yingying}, booktitle={Proceedings of the 24th Annual International Conference on Mobile Computing and Networking}, pages={789--791}, year={2018}, organization={ACM} }
O6

# Poster: Your Heart Won't Lie: PPG-based Continuous Authentication on Wrist-worn Wearable Devices MobiCom 2018

Tianming Zhao, Yan Wang, Jian Liu, Yingying Chen
Poster Session, in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking (MobiCom 2018), New Delhi, India, October 2018.

This paper presents a photoplethysmography (PPG)-based continuous user authentication (CA) system, which especially leverages the PPG sensors in wrist-worn wearable devices to identify users. We explore the uniqueness of the human cardiac system captured by the PPG sensing technology. Existing CA systems require either the dedicated sensing hardware or specific gestures, whereas our system does not require any users' interactions but only the wearable device, which has already been pervasively equipped with PPG sensors. Notably, we design a robust motion artifacts (MA) removal method to mitigate the impact of MA from wrist movements. Additionally, we explore the characteristic fiducial features from PPG measurements to efficiently distinguish the human cardiac system. Furthermore, we develop a cardiac-based classifier for user identification using the Gradient Boosting Tree (GBT). Experiments with the prototype of the wrist-worn PPG sensing platform and 10 participants in different scenarios demonstrate that our system can effectively remove MA and achieve a high average authentication success rate over 90%.
@inproceedings{zhao2018your, title={Your Heart Won't Lie: PPG-based Continuous Authentication on Wrist-worn Wearable Devices}, author={Zhao, Tianming and Wang, Yan and Liu, Jian and Chen, Yingying}, booktitle={Proceedings of the 24th Annual International Conference on Mobile Computing and Networking}, pages={783--785}, year={2018}, organization={ACM} }
O5

# Poster: Sensing on Ubiquitous Surfaces via Vibration Signals MobiCom 2016

Jian Liu, Yingying Chen, Marco Gruteser
Poster Session, in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking (MobiCom 2016), New York, NY, USA, October 2016.

This work explores vibration-based sensing to determine the location of a touch on extended surface areas as well as identify the object touching the surface, leveraging a single sensor. It supports a broad array of applications through either passive or active sensing using only a single sensor. In the passive mode, the received vibration signals are determined by the location of the touch impact. This allows location discrimination of touches precise enough to enable emerging applications such as virtual keyboards on ubiquitous surfaces for mobile devices. Moreover, in the active mode, the received vibration signals carry richer information about the touching object's characteristics (e.g., weight, size, location and material). This further enables our work to match the signals to trained profiles and allows it to differentiate personal objects in contact with any surface. We evaluated the system extensively in the use cases of touch localization (i.e., virtual keyboards) and object localization and identification. Our experimental results demonstrate that the proposed vibration-based solution can achieve high accuracy, over 95%, in all these use cases.
@inproceedings{liu2016sensing, title={Sensing on ubiquitous surfaces via vibration signals: poster}, author={Liu, Jian and Chen, Yingying and Gruteser, Marco}, booktitle={Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking}, pages={424--425}, year={2016}, organization={ACM} }
O4

# Poster: PIN Number-based Authentication Leveraging Physical Vibration MobiCom 2016

Jian Liu, Chen Wang, Yingying Chen
Poster Session, in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking (MobiCom 2016), New York, NY, USA, October 2016.

In this work, we propose the first PIN-number-based authentication system that can be deployed on ubiquitous surfaces, leveraging physical vibration signals. The proposed system aims to integrate PIN numbers with behavioral and physiological characteristics to provide enhanced security. Unlike existing password-based approaches, the proposed system builds upon a touch-sensing technique using vibration signals that can operate on any solid surface. In this poster, we explore the feasibility of using vibration signals for ubiquitous user authentication and develop algorithms that identify fine-grained finger inputs with different password secrets (e.g., PIN sequences). We build a prototype using a vibration transceiver that can be easily attached to any surface (e.g., a door or a desk). Our experiments in office environments with multiple users demonstrate that we can achieve high authentication accuracy with a low false negative rate.
@inproceedings{liu2016pin, title={PIN number-based authentication leveraging physical vibration: poster}, author={Liu, Jian and Wang, Chen and Chen, Yingying}, booktitle={Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking}, pages={426--427}, year={2016}, organization={ACM} }
O3

# Poster: Automatic Personal Fitness Assistance through Wearable Mobile Devices MobiCom 2016

Xiaonan Guo, Jian Liu, Yingying Chen
Poster Session, in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking (MobiCom 2016), New York, NY, USA, October 2016.

Acknowledging the powerful sensors on wearable mobile devices that enable various applications to improve users' lifestyles and quality of life, this paper takes a step forward by developing automatic personal fitness assistance through wearable mobile devices to assess dynamic postures in workouts. In particular, our system recognizes different types of exercises and interprets fine-grained fitness data into an easy-to-understand exercise review score. The system can align the sensor readings from wearable devices to the earth coordinate system, ensuring the accuracy and robustness of the system. Experiments with 12 types of exercises involved multiple participants performing both anaerobic and aerobic exercises indoors as well as outdoors. Our results demonstrate that the proposed system can provide meaningful reviews and recommendations to users by accurately measuring their workout performance, achieving 93% accuracy for workout analysis.
@inproceedings{guo2016automatic, title={Automatic personal fitness assistance through wearable mobile devices: poster}, author={Guo, Xiaonan and Liu, Jian and Chen, Yingying}, booktitle={Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking}, pages={437--438}, year={2016}, organization={ACM} }
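The abstract mentions aligning wearable sensor readings to the earth coordinate system so that features are independent of how the device is worn. One common way to do this — offered here only as an illustrative sketch, not the paper's actual method — is to estimate the gravity direction from the accelerometer and rotate readings so gravity maps to the earth z-axis, using the Rodrigues rotation formula.

```python
import numpy as np

def gravity_alignment(accel_g):
    """Rotation matrix taking the measured gravity direction to the
    earth z-axis (0, 0, 1), via the Rodrigues rotation formula."""
    g = accel_g / np.linalg.norm(accel_g)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)           # rotation axis (scaled by sin of angle)
    c = np.dot(g, z)             # cosine of the rotation angle
    if np.linalg.norm(v) < 1e-9: # already aligned, or exactly opposite
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # R = I + [v]x + [v]x^2 * (1 - c) / |v|^2, with (1 - c)/|v|^2 = 1/(1 + c)
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

def to_earth_frame(samples, accel_g):
    """Rotate an (N, 3) array of device-frame readings into the
    gravity-aligned frame."""
    R = gravity_alignment(np.asarray(accel_g, dtype=float))
    return np.asarray(samples, dtype=float) @ R.T
```

Gravity alone fixes only the vertical axis; a full earth-frame alignment would additionally use the magnetometer to resolve heading, which this sketch omits.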
O2

# Demo: VibKeyboard: Virtual Keyboard Leveraging Physical Vibration MobiCom 2016

Jian Liu, Yingying Chen, Marco Gruteser
Demo Session, in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking (MobiCom 2016), New York, NY, USA, October 2016.

VibKeyboard can accurately determine the location of a keystroke on extended surface areas leveraging a single vibration sensor. Unlike capacitive sensing, it does not require conductive materials, and compared to audio sensing it is more robust to acoustic noise. In VibKeyboard, the received vibration signals are determined by the location of the touch impact. This allows location discrimination of touches precise enough to enable emerging applications such as virtual keyboards on ubiquitous surfaces for mobile devices. VibKeyboard extracts unique frequency-domain features embedded in the vibration signal attenuation and interference to perform fine-grained localization. Our experimental results demonstrate that VibKeyboard can accurately distinguish keystrokes on close-by keys of a virtual keyboard.
@inproceedings{liu2016vibkeyboard, title={VibKeyboard: virtual keyboard leveraging physical vibration}, author={Liu, Jian and Chen, Yingying and Gruteser, Marco}, booktitle={Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking}, pages={507--508}, year={2016}, organization={ACM} }
O1

# Autologger: A Driving Input Logging Application MobiCom 2016

Luyang Liu, Cagdas Karatas, Hongyu Li, Jian Liu, Marco Gruteser, Yan Wang, Yingying Chen, Richard P. Martin
App Contest Session, in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking (MobiCom 2016), New York, NY, USA, October 2016.