Publications | MoSIS Lab @ UTK

Publications

Book Chapters

B1

Proactive User Authentication Using WiFi Signals in Dynamic Networks

Hongbo Liu, Yan Wang, Jian Liu, Yingying Chen
Proactive and Dynamic Network Defense, Springer, 2018. (In Press).

U.S. Patents

P4

In-Baggage Object Detection Using Commodity Wi-Fi

Yingying Chen, Chen Wang, Jian Liu, Hongbo Liu, Yan Wang
U.S. Provisional Patent Application No. 62/828,151, April 2019.

P3

Systems and Methods for User Input and Authentication Using Vibration Analysis

Yingying Chen, Jian Liu, Chen Wang, Nitesh Saxena
U.S. Patent Application No. 16/432,558, June 2019.

P2

Device-free Activity Identification Using Fine-grained WiFi Signatures

Yingying Chen, Jie Yang, Yan Wang, Jian Liu, Marco Gruteser
U.S. Patent No. US10104195B2, March 2016.

P1

Vital Signs Monitoring using WiFi

Yingying Chen, Jian Liu, Yan Wang, Jie Yang, Jerry Cheng
U.S. Provisional Patent Application No. 62/180,696, July 2015.

Refereed Conference & Workshop Papers

C34

Mobile Device Usage Recommendation based on User Context Inference Using Embedded Sensors ICCCN 2020

Cong Shi, Xiaonan Guo, Ting Yu, Yingying Chen, Yucheng Xie, Jian Liu
in Proceedings of the 29th International Conference on Computer Communications and Networks (ICCCN 2020), Honolulu, Hawaii, USA, August 2020.

The proliferation of mobile devices along with their rich functionalities/applications has made people form addictive and potentially harmful usage behaviors. Though this problem has drawn considerable attention, existing solutions (e.g., text notification or setting usage limits) are insufficient and cannot provide timely recommendations or control of inappropriate usage of mobile devices. This paper proposes a generalized context inference framework, which supports timely usage recommendations using low-power sensors in mobile devices. Compared to existing schemes that rely on detecting a single type of user context (e.g., merely location or activity), our framework derives a much larger scale of user contexts that characterize phone usage, especially usage causing distraction or leading to dangerous situations. We propose to uniformly describe the general user context with context fundamentals, i.e., physical environments, social situations, and human motions, which are the underlying constituent units of diverse general user contexts. To mitigate the profiling efforts across different environments, devices, and individuals, we develop a deep learning-based architecture to learn transferable representations derived from sensor readings associated with the context fundamentals. Based on the derived context fundamentals, our framework quantifies how likely an inferred user context would lead to distractions/dangerous situations, and provides timely recommendations for mobile device access/usage. Extensive experiments over a period of 7 months demonstrate that the system can achieve 95% accuracy on user context inference while offering transferability among different environments, devices, and users.
@article{shimobile, title={Mobile Device Usage Recommendation based on User Context Inference Using Embedded Sensors}, author={Shi, Cong and Guo, Xiaonan and Yu, Ting and Chen, Yingying and Xie, Yucheng and Liu, Jian} }
C33

Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems ICASSP 2020

Yi Xie, Cong Shi, Zhuohang Li, Jian Liu, Yingying Chen, Bo Yuan
in Proceedings of the 45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), Barcelona, Spain, May 2020.

As the popularity of voice user interfaces (VUIs) has exploded in recent years, speaker recognition systems have emerged as an important means of identifying a speaker in many security-critical applications and services. In this paper, we propose the first real-time, universal, and robust adversarial attack against state-of-the-art deep neural network (DNN) based speaker recognition systems. By adding an audio-agnostic universal perturbation to an arbitrary enrolled speaker's voice input, the DNN-based speaker recognition system will identify the speaker as any target (i.e., adversary-desired) speaker label. In addition, we improve the robustness of our attack by modeling the sound distortions caused by physical over-the-air propagation through estimating the room impulse response (RIR). Experiments using a public dataset of 109 English speakers demonstrate the effectiveness and robustness of our proposed attack with a high attack success rate of over 90%. The attack launching time also achieves a 100× speedup over contemporary non-universal attacks.
@inproceedings{xie2020real, title={Real-Time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems}, author={Xie, Yi and Shi, Cong and Li, Zhuohang and Liu, Jian and Chen, Yingying and Yuan, Bo}, booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={1738--1742}, year={2020}, organization={IEEE} }
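
The attack above hinges on optimizing a single audio-agnostic perturbation that pushes a DNN speaker classifier toward an adversary-chosen label. The sketch below is only a minimal illustration of that idea, not the authors' implementation: the toy linear model, the random "utterances", and the hyper-parameters (EPS, STEPS, LR) are all assumptions.

```python
# Minimal sketch of a universal, targeted audio perturbation (not the paper's code).
# A single perturbation `delta` is optimized over a batch of utterances so that a
# toy speaker classifier predicts the adversary-chosen target label for all of them.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_SPEAKERS, AUDIO_LEN, BATCH = 10, 16000, 8   # assumed sizes
EPS, STEPS, LR = 0.01, 200, 1e-3              # assumed attack budget and schedule

# Stand-in for a pretrained DNN speaker-recognition model (e.g., an x-vector network).
model = nn.Sequential(nn.Linear(AUDIO_LEN, 128), nn.ReLU(), nn.Linear(128, N_SPEAKERS))
model.eval()

voices = torch.randn(BATCH, AUDIO_LEN)              # placeholder enrolled utterances
target = torch.full((BATCH,), 3, dtype=torch.long)  # adversary-desired speaker label

delta = torch.zeros(AUDIO_LEN, requires_grad=True)  # the universal perturbation
opt = torch.optim.Adam([delta], lr=LR)
loss_fn = nn.CrossEntropyLoss()

for _ in range(STEPS):
    opt.zero_grad()
    logits = model(voices + delta)   # the same delta is added to every utterance
    loss = loss_fn(logits, target)   # push all predictions toward the target label
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-EPS, EPS)      # keep the perturbation imperceptibly small

print("fooled:", (model(voices + delta).argmax(1) == target).float().mean().item())
```
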
C32

Practical Adversarial Attacks Against Speaker Recognition Systems HotMobile 2020

Zhuohang Li, Cong Shi, Yi Xie, Jian Liu, Bo Yuan, Yingying Chen
in Proceedings of the 21st International Workshop on Mobile Computing Systems and Applications (ACM HotMobile 2020), Austin, Texas, March 2020.
(Acceptance rate: 16/48 = 33.3%)

Unlike other biometric-based user identification methods (e.g., fingerprint and iris), speaker recognition systems can identify individuals by their unique voice biometrics without requiring users to be physically present. Therefore, speaker recognition systems have become increasingly popular in various domains, such as remote access control, banking services, and criminal investigation. In this paper, we study the vulnerability of such systems by launching a practical and systematic adversarial attack against X-vector, the state-of-the-art deep neural network (DNN) based speaker recognition system. In particular, by adding a well-crafted inconspicuous noise to the original audio, our attack can fool the speaker recognition system into making false predictions and even force the audio to be recognized as any adversary-desired speaker. Moreover, our attack integrates the estimated room impulse response (RIR) into the adversarial example training process toward practical audio adversarial examples that remain effective while being played over the air in the physical world. Extensive experiments using a public dataset of 109 speakers show the effectiveness of our attack with a high attack success rate for both the digital attack (98%) and the practical over-the-air attack (50%).
@inproceedings{li2020practical, title={Practical adversarial attacks against speaker recognition systems}, author={Li, Zhuohang and Shi, Cong and Xie, Yi and Liu, Jian and Yuan, Bo and Chen, Yingying}, booktitle={Proceedings of the 21st International Workshop on Mobile Computing Systems and Applications}, pages={9--14}, year={2020} }
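
A key ingredient of the over-the-air robustness described above is accounting for room acoustics during training. The fragment below sketches only the signal-level step of convolving an adversarial clip with a room impulse response (RIR); the synthetic exponentially decaying RIR and the 16 kHz sample rate are placeholders, not values from the paper.

```python
# Sketch: simulate over-the-air distortion by convolving audio with a room impulse
# response (RIR). A real attack would use measured or estimated RIRs; this one is synthetic.
import numpy as np
from scipy.signal import fftconvolve

fs = 16000                                          # assumed sample rate (Hz)
t = np.arange(0, 0.3, 1 / fs)
rir = np.exp(-t / 0.05) * np.random.randn(t.size)   # toy exponentially decaying RIR
rir /= np.abs(rir).max()

adv_audio = np.random.randn(fs)                     # stands in for a crafted adversarial clip
played_back = fftconvolve(adv_audio, rir)[: adv_audio.size]  # what the microphone would hear

# Training against many such RIRs encourages perturbations that remain
# effective after physical propagation through the room.
print(played_back.shape)
```
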
C31

Continuous User Verification via Respiratory Biometrics INFOCOM 2020

Jian Liu, Yingying Chen, Yudi Dong, Yan Wang, Tianming Zhao, Yu-Dong Yao
in Proceedings of IEEE International Conference on Computer Communications (IEEE INFOCOM 2020), Beijing, China, April 2020.
(Acceptance rate: 268/1354 = 19.8%)

The ever-growing security issues in various mobile applications and smart devices create an urgent demand for a reliable and convenient user verification method. Traditional verification methods request users to provide their secrets (e.g., entering passwords and collecting fingerprints). We envision that the essential trend of user verification is to free users from active participation in the verification process. Toward this end, we propose a continuous user verification system, which re-uses the widely deployed WiFi infrastructure to capture the unique physiological characteristics rooted in users' respiratory motions. Different from existing continuous verification approaches, which depend on restricted scenarios/user behaviors (e.g., keystrokes and gaits), our system can be easily integrated into any WiFi infrastructure to provide non-intrusive continuous verification. Specifically, we extract the respiration-related signals from the channel state information (CSI) of WiFi. We then derive the user-specific respiratory features based on the waveform morphology analysis and fuzzy wavelet transformation of the respiration signals. Additionally, a deep learning based user verification scheme is developed to identify legitimate users accurately and detect the existence of spoofing attacks. Extensive experiments involving 20 participants demonstrate that the proposed system can robustly verify/identify users and detect spoofers under various types of attacks.
@inproceedings{liu2020continuous, title={Continuous user verification via respiratory biometrics}, author={Liu, Jian and Chen, Yingying and Dong, Yudi and Wang, Yan and Zhao, Tianming and Yao, Yu-Dong}, booktitle={Proceedings of the IEEE Conference on Computer Communications (INFOCOM’20), Toronto, ON, Canada}, pages={6--9}, year={2020} }
C30

MU-ID: Multi-user Identification Through Gaits Using Millimeter Wave Radios INFOCOM 2020

Xin Yang, Jian Liu, Yingying Chen, Xiaonan Guo
in Proceedings of IEEE International Conference on Computer Communications (IEEE INFOCOM 2020), Beijing, China, April 2020.
(Acceptance rate: 268/1354 = 19.8%)

Multi-user identification could facilitate various large-scale identity-based services such as access control, automatic surveillance system, and personalized services, etc. Although existing solutions can identify multiple users using cameras, such vision-based approaches usually raise serious privacy concerns and require the presence of line-of-sight. Differently, in this paper, we propose MU-ID, a gait-based multi-user identification system leveraging a single commercial off-the-shelf (COTS) millimeter-wave (mmWave) radar. Particularly, MU-ID takes as input frequency-modulated continuous-wave (FMCW) signals from the radar sensor. Through analyzing the mmWave signals in the range-Doppler domain, MU-ID examines the users’ lower limb movements and captures their distinct gait patterns varying in terms of step length, duration, instantaneous lower limb velocity, and inter-lower limb distance, etc. Additionally, an effective spatial-temporal silhouette analysis is proposed to segment each user’s walking steps. Then, the system identifies steps using a Convolutional Neural Network (CNN) classifier and further identifies the users in the area of interest. We implement MU-ID with the TI AWR1642BOOST mmWave sensor and conduct extensive experiments involving 10 people. The results show that MU-ID achieves up to 97% single-person identification accuracy, and over 92% identification accuracy for up to four people, while maintaining a low false positive rate.
@article{yangmu, title={MU-ID: Multi-user Identification Through Gaits Using Millimeter Wave Radios}, author={Yang, Xin and Liu, Jian and Chen, Yingying and Guo, Xiaonan and Xie, Yucheng} }
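
MU-ID analyzes FMCW returns in the range-Doppler domain. The snippet below is a generic, minimal sketch of how such a range-Doppler map is formed with a 2-D FFT; the frame dimensions are assumed and the data are random rather than real mmWave chirps.

```python
# Sketch: build a range-Doppler map from an FMCW radar frame with a 2-D FFT.
# The data cube below is random; a real system would use chirps from the mmWave sensor.
import numpy as np

N_CHIRPS, N_SAMPLES = 64, 256                 # slow-time x fast-time (assumed sizes)
frame = np.random.randn(N_CHIRPS, N_SAMPLES)  # placeholder beat-signal frame

range_fft = np.fft.fft(frame, axis=1)                                  # fast-time FFT -> range bins
doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)   # slow-time FFT -> velocity bins
range_doppler = np.abs(doppler_fft)           # magnitude map; gait appears as time-varying
                                              # lower-limb velocity patterns across frames
print(range_doppler.shape)                    # (Doppler bins, range bins)
```
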
C29

TrueHeart: Continuous Authentication on Wrist-worn Wearables Using PPG-based Biometrics INFOCOM 2020

Tianming Zhao, Yan Wang, Jian Liu, Yingying Chen, Jerry Cheng, Jiadi Yu
in Proceedings of IEEE International Conference on Computer Communications (IEEE INFOCOM 2020), Beijing, China, April 2020.
(Acceptance rate: 268/1354 = 19.8%)

Traditional one-time user authentication processes might cause friction and unfavorable user experience in many widely-used applications. This is a severe problem, in particular for security-sensitive facilities, if an adversary could obtain unauthorized privileges after a user's initial login. Recently, continuous user authentication (CA) has shown its great potential by enabling seamless user authentication with little active participation. We devise a low-cost system exploiting a user's pulsatile signals from the photoplethysmography (PPG) sensor in commercial wrist-worn wearables for CA. Compared to existing approaches, our system requires zero user effort and is applicable to practical scenarios with non-clinical PPG measurements having motion artifacts (MA). We explore the uniqueness of the human cardiac system and design an MA filtering method to mitigate the impacts of daily activities. Furthermore, we identify general fiducial features and develop an adaptive classifier using the gradient boosting tree (GBT) method. As a result, our system can authenticate users continuously based on their cardiac characteristics, so little training effort is required. Experiments with our wrist-worn PPG sensing platform on 20 participants under practical scenarios demonstrate that our system can achieve a high CA accuracy of over 90% and a low false detection rate of 4% in detecting random attacks.
@article{zhao2020trueheart, title={Trueheart: Continuous authentication on wrist-worn wearables using ppg-based biometrics}, author={Zhao, Tianming and Wang, Yan and Liu, Jian and Chen, Yingying and Cheng, Jerry and Yu, Jiadi}, year={2020} }
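
The classification stage described above uses gradient boosting trees over fiducial PPG features. As a rough, hedged illustration (not the paper's pipeline), the following sketch trains a scikit-learn GradientBoostingClassifier on synthetic per-beat feature vectors; the feature dimension and hyper-parameters are assumptions.

```python
# Sketch: continuous-authentication decision with a gradient boosting tree (GBT)
# classifier over PPG fiducial features. Features and labels are random stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))        # e.g., per-beat fiducial features (assumed dimension)
y = rng.integers(0, 2, size=400)      # 1 = legitimate user, 0 = other / attacker

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)                   # adapt the classifier to the enrolled user's beats
print("accuracy on held-out beats:", clf.score(X_te, y_te))
```
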
C28

Defeating Hidden Audio Channel Attacks on Voice Assistants via Audio-Induced Surface Vibrations ACSAC 2019

Chen Wang, S Abhishek Anand, Jian Liu, Payton R. Walker, Yingying Chen, Nitesh Saxena
in Proceedings of the 35th Annual Computer Security Applications Conference (ACSAC 2019), San Juan, Puerto Rico, December 2019.
(Acceptance rate: 60/266 = 22.6%)

Voice access technologies are widely adopted in mobile devices and voice assistant systems as a convenient way of user interaction. Recent studies have demonstrated a potentially serious vulnerability of the existing voice interfaces on these systems to “hidden voice commands”. This attack uses synthetically rendered adversarial sounds embedded within a voice command to trick the speech recognition process into executing malicious commands, without being noticed by legitimate users.
In this paper, we employ low-cost motion sensors, in a novel way, to detect these hidden voice commands. In particular, our proposed system extracts and examines the unique audio signatures of the issued voice commands in the vibration domain. We show that such signatures of normal commands vs. synthetic hidden voice commands are distinctive, leading to the detection of the attacks. The proposed system, which benefits from a speaker-motion sensor setup, can be easily deployed on smartphones by reusing existing on-board motion sensors or utilizing a cloud service that provides the relevant setup environment. The system is based on the premise that while the crafted audio features of the hidden voice commands may fool an authentication system in the audio domain, their unique audio-induced surface vibrations captured by the motion sensor are hard to forge. Our proposed system creates a harder challenge for the attacker as now it has to forge the acoustic features in both the audio and vibration domains, simultaneously. We extract the time and frequency domain statistical features, and the acoustic features (e.g., chroma vectors and MFCCs) from the motion sensor data and use learning-based methods for uniquely determining both normal commands and hidden voice commands. The results show that our system can detect hidden voice commands vs. normal commands with 99.9% accuracy by simply using the low-cost motion sensors that have very low sampling frequencies.
@inproceedings{wang2019defeating, title={Defeating hidden audio channel attacks on voice assistants via audio-induced surface vibrations}, author={Wang, Chen and Anand, S Abhishek and Liu, Jian and Walker, Payton and Chen, Yingying and Saxena, Nitesh}, booktitle={Proceedings of the 35th Annual Computer Security Applications Conference}, pages={42--56}, year={2019} }
C27

Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples DySPAN 2019

Yi Wu, Jian Liu, Yingying Chen, Jerry Cheng
in Proceedings of the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN 2019), Newark, New Jersey, November 2019.

As automatic speech recognition (ASR) systems have been integrated into a diverse set of devices around us in recent years, their security vulnerabilities have become an increasing concern for the public. Existing studies have demonstrated that deep neural networks (DNNs), acting as the computation core of ASR systems, are vulnerable to deliberately designed adversarial attacks. Based on the gradient descent algorithm, existing studies have successfully generated adversarial samples that can disturb ASR systems and produce adversary-expected transcript texts. Most of these studies simulate white-box attacks, which require knowledge of all the components in the targeted ASR system. In this work, we propose the first semi-black-box attack against the ASR system Kaldi. Requiring only partial information from Kaldi and none from the DNN, we can embed malicious commands into a single audio clip using a gradient-independent genetic algorithm. The crafted audio clip is recognized as the embedded malicious commands by Kaldi while remaining unnoticeable to humans. Experiments show that our attack achieves a high attack success rate with unnoticeable perturbations to three types of audio clips (pop music, pure music, and human command) without needing the underlying DNN model parameters and architecture.
@inproceedings{wu2019semi, title={Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples}, author={Wu, Yi and Liu, Jian and Chen, Yingying and Cheng, Jerry}, booktitle={2019 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN)}, pages={1--5}, year={2019}, organization={IEEE} }
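
Because the attack above is gradient-independent, a genetic algorithm searches for the perturbation instead of backpropagation. The loop below is a minimal, generic GA sketch under assumed parameters; in particular, score() is a placeholder fitness function standing in for querying the ASR system, not the paper's actual objective.

```python
# Sketch: gradient-free (genetic-algorithm) search for an audio perturbation.
# `score()` is a placeholder for evaluating how well the perturbed clip decodes to
# the malicious command; here it is random, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
AUDIO_LEN, POP, GENERATIONS, SIGMA = 16000, 30, 100, 0.002   # assumed parameters
clip = rng.normal(size=AUDIO_LEN)            # placeholder carrier audio clip

def score(candidate):
    # Placeholder fitness: a real attack would query the recognizer and compare
    # its transcript with the embedded malicious command.
    return -np.linalg.norm(candidate) * rng.uniform(0.9, 1.1)

population = [rng.normal(scale=SIGMA, size=AUDIO_LEN) for _ in range(POP)]
for _ in range(GENERATIONS):
    fitness = np.array([score(clip + p) for p in population])
    elite = [population[i] for i in np.argsort(fitness)[-POP // 2:]]   # keep the best half
    children = [0.5 * (elite[rng.integers(len(elite))] + elite[rng.integers(len(elite))])
                + rng.normal(scale=SIGMA / 10, size=AUDIO_LEN)          # crossover + mutation
                for _ in range(POP - len(elite))]
    population = elite + children

best = max(population, key=lambda p: score(clip + p))
print("perturbation energy:", float(np.linalg.norm(best)))
```
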
C26

CardioCam: Leveraging Camera on Mobile Devices to Verify Users While Their Heart is Pumping MobiSys 2019

Jian Liu, Cong Shi, Yingying Chen, Hongbo Liu, Marco Gruteser
in Proceedings of the 17th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2019), Seoul, South Korea, June 2019.
(Acceptance rate: 39/172 = 22.7%)

With the increasing prevalence of mobile and IoT devices (e.g., smartphones, tablets, smart-home appliances), a massive amount of private and sensitive information is stored on these devices. To prevent unauthorized access to these devices, existing user verification solutions either rely on the complexity of user-defined secrets (e.g., passwords) or resort to specialized biometric sensors (e.g., fingerprint readers), but users may still suffer from various attacks, such as password theft, shoulder surfing, smudge, and forged biometrics attacks. In this paper, we propose CardioCam, a low-cost, general, hard-to-forge user verification system leveraging the unique cardiac biometrics extracted from the readily available built-in cameras in mobile and IoT devices. We demonstrate that unique cardiac features can be extracted from the cardiac motion patterns in fingertips, by pressing on the built-in camera. To mitigate the impacts of various ambient lighting conditions and human movements under practical scenarios, CardioCam develops a gradient-based technique to optimize the camera configuration, and dynamically selects the most sensitive pixels in a camera frame to extract reliable cardiac motion patterns. Furthermore, morphological characteristic analysis is deployed to derive user-specific cardiac features, and a feature transformation scheme grounded on Principal Component Analysis (PCA) is developed to enhance the robustness of cardiac biometrics for effective user verification. With the prototyped system, extensive experiments involving 25 subjects are conducted to demonstrate that CardioCam can achieve effective and reliable user verification with over 99% average true positive rate (TPR) while maintaining a false positive rate (FPR) as low as 4%.
@inproceedings{liu2019cardiocam, title={CardioCam: Leveraging Camera on Mobile Devices to Verify Users While Their Heart is Pumping}, author={Liu, Jian and Shi, Cong and Chen, Yingying and Liu, Hongbo and Gruteser, Marco}, booktitle={Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services}, year={2019} }
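
One step described above is a PCA-grounded feature transformation of the cardiac features. The snippet below is a minimal sketch of that generic step using scikit-learn on synthetic data; the feature dimension and the number of retained components are assumptions.

```python
# Sketch: PCA-based transformation of cardiac features before verification,
# mirroring the feature-transformation step described above. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
cardiac_features = rng.normal(size=(200, 30))   # e.g., morphological features per heartbeat

pca = PCA(n_components=10)                      # assumed number of retained components
robust_features = pca.fit_transform(cardiac_features)

# The lower-dimensional, decorrelated representation is then fed to the verifier.
print(robust_features.shape, pca.explained_variance_ratio_.sum())
```
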
C25

WristSpy: Snooping Passcodes in Mobile Payment Using Wrist-worn Wearables INFOCOM 2019

Chen Wang, Jian Liu, Xiaonan Guo, Yan Wang, Yingying Chen
in Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2019), Paris, France, April-May 2019.
(Acceptance rate: 288/1464 = 19.7%)

Mobile payment has drawn considerable attention due to its convenience of paying via personal mobile devices anytime and anywhere, and passcodes (i.e., PINs or patterns) are the first choice of most consumers to authorize the payment. This paper demonstrates a serious security breach and aims to raise the awareness of the public that the passcodes for authorizing transactions in mobile payments can be leaked by exploiting the embedded sensors in wearable devices (e.g., smartwatches). We present a passcode inference system, WristSpy, which examines to what extent the user's PIN/pattern during the mobile payment could be revealed from a single wrist-worn wearable device under different passcode input scenarios involving either two hands or a single hand. In particular, WristSpy has the capability to accurately reconstruct fine-grained hand movement trajectories and infer PINs/patterns when mobile and wearable devices are on two hands through building a Euclidean distance-based model and developing a training-free parallel PIN/pattern inference algorithm. When both devices are on the same single hand, a highly challenging case, WristSpy extracts multi-dimensional features by capturing the dynamics of minute hand vibrations and performs machine-learning based classification to identify PIN entries. Extensive experiments with 15 volunteers and 1600 passcode inputs demonstrate that an adversary is able to recover a user's PIN/pattern with up to 92% success rate within 5 tries under various input scenarios.
@inproceedings{wang2019wristspy, title={WristSpy: Snooping Passcodes in Mobile Payment Using Wrist-worn Wearables}, author={Wang, Chen and Liu, Jian and Guo, Xiaonan and Wang, Yan and Chen, Yingying}, booktitle={IEEE INFOCOM 2019-IEEE Conference on Computer Communications}, year={2019} }
C24

Device-free Personalized Fitness Assistant Using WiFi UbiComp 2019

Xiaonan Guo, Jian Liu, Cong Shi, Hongbo Liu, Yingying Chen, Mooi Choo Chuah
in PACM on Interactive, Mobile, Wearable, and Ubiquitous Computing (IMWUT), to be presented at UbiComp 2019. (Acceptance rate: ~21%)

@article{guo2018device, title={Device-free Personalized Fitness Assistant Using WiFi}, author={Guo, Xiaonan and Liu, Jian and Shi, Cong and Liu, Hongbo and Chen, Yingying and Chuah, Mooi Choo}, journal={Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies}, volume={2}, number={4}, pages={165}, year={2018}, publisher={ACM} }
C23

VPad: Virtual Writing Tablet for Laptops Leveraging Acoustic Signals ICPADS 2018

Li Lu, Jian Liu, Jiadi Yu, Yingying Chen, Yanmin Zhu, Xiangyu Xu, Minglu Li
in Proceedings of the 24th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2018), Sentosa, Singapore, December 2018.

Human-computer interaction based on touch screens plays an increasing role in our daily lives. Besides smartphones and tablets, laptops are the most popular mobile devices used in both work and leisure. To satisfy the requirements of many emerging applications, it becomes desirable to equip both writing and drawing functions directly on laptop screens. In this paper, we design a virtual writing tablet system, VPad, for traditional laptops without touch screens. VPad leverages two speakers and one microphone, which are available in most commodity laptops, for trajectory tracking without additional hardware. It employs acoustic signals to accurately track hand movements and recognize characters users write in the air. Specifically, VPad emits inaudible acoustic signals from two speakers in a laptop. Then VPad applies a Sliding-window Overlap Fourier Transformation technique to find the Doppler frequency shift with higher resolution and accuracy in real time. Furthermore, we analyze frequency shifts and energy features of acoustic signals received by the microphone to track the trajectory of hand movements. Finally, we employ a stroke direction sequence model based on probability estimation to recognize characters users write in the air. Our experimental results show that VPad achieves an average trajectory tracking error of only 1.55cm and a character recognition accuracy above 90% merely through two speakers and one microphone on a laptop.
@inproceedings{lu2018vpad, title={VPad: Virtual Writing Tablet for Laptops Leveraging Acoustic Signals}, author={Lu, Li and Liu, Jian and Yu, Jiadi and Chen, Yingying and Zhu, Yanmin and Xu, Xiangyu and Li, Minglu}, booktitle={2018 IEEE 24th International Conference on Parallel and Distributed Systems (ICPADS)}, pages={244--251}, year={2018}, organization={IEEE} }
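
VPad's tracking rests on measuring the Doppler shift of an inaudible tone reflected by the moving hand. The sketch below illustrates the generic idea with a plain STFT on a synthetic signal; it does not reproduce the paper's Sliding-window Overlap Fourier Transformation, and the sample rate, tone frequency, and window sizes are assumptions.

```python
# Sketch: track the Doppler shift of an inaudible tone reflected by a moving hand.
# The "received" signal is synthesized with a known time-varying Doppler shift.
import numpy as np
from scipy.signal import stft

fs, f0 = 48000, 19000                       # assumed sample rate and emitted tone (Hz)
t = np.arange(0, 1.0, 1 / fs)
doppler = 40 * np.sin(2 * np.pi * 0.5 * t)  # toy time-varying Doppler shift (Hz)
received = np.cos(2 * np.pi * (f0 * t + np.cumsum(doppler) / fs))

f, frames, Z = stft(received, fs=fs, nperseg=4096, noverlap=3584)
band = (f > f0 - 200) & (f < f0 + 200)      # look only around the emitted tone
peak_freq = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
shift = peak_freq - f0                      # estimated Doppler shift per time frame
print(shift[:5])
```
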
C22

Towards In-baggage Suspicious Object Detection Using Commodity WiFi CNS 2018

Chen Wang, Jian Liu, Yingying Chen, Hongbo Liu, Yan Wang
in Proceedings of the IEEE Conference on Communications and Network Security (CNS 2018), Beijing, China, May-June 2018.
(Acceptance rate: 51/181 = 28.2%; Best paper rate: 2/181 = 1.1%)

The growing needs of public safety urgently require scalable and low-cost techniques for detecting dangerous objects (e.g., lethal weapons, homemade bombs, explosive chemicals) hidden in baggage. Traditional baggage checks involve either high manpower for manual examination or expensive and specialized instruments, such as X-ray and CT. As such, many public places (e.g., museums and schools) that lack strict security checks are exposed to high risk. In this work, we propose to utilize the fine-grained channel state information (CSI) from off-the-shelf WiFi to detect suspicious objects that are suspected to be dangerous (i.e., defined as any metal and liquid object) without intruding on the user's privacy by physically opening the baggage. Our suspicious object detection system significantly reduces the deployment cost and is easy to set up in public venues. Towards this end, our system is realized by two major components: it first detects the existence of suspicious objects and identifies the dangerous material type based on the reconstructed CSI complex value (including both amplitude and phase information); it then determines the risk level of the object by examining the object's dimensions (i.e., liquid volume and metal object's shape) based on the reconstructed CSI complex value of the signals reflected by the object. Extensive experiments are conducted with 15 metal and liquid objects and 6 types of bags over a 6-month period. The results show that our system can detect over 95% of suspicious objects in different types of bags and successfully identify 90% of dangerous material types. In addition, our system can achieve average errors of 16ml and 0.5cm when estimating the volume of liquid and the shape (i.e., width and height) of metal objects, respectively.
@inproceedings{wang2018towards, title={Towards In-baggage Suspicious Object Detection Using Commodity WiFi}, author={Wang, Chen and Liu, Jian and Chen, Yingying and Liu, Hongbo and Wang, Yan}, booktitle={2018 IEEE Conference on Communications and Network Security (CNS)}, pages={1--9}, year={2018}, organization={IEEE} }
C21

RF-Kinect: A Wearable RFID-based Approach Towards 3D Body Movement Tracking IMWUT UbiComp 2018

Chuyu Wang, Jian Liu, Yingying Chen, Lei Xie, Hongbo Liu, Sanglu Lu
in PACM on Interactive, Mobile, Wearable, and Ubiquitous Computing (IMWUT), presented at UbiComp 2018.
(Acceptance rate: ~21%)

The rising popularity of electronic devices with gesture recognition capabilities makes gesture-based human-computer interaction more attractive. Along this direction, tracking body movement in 3D space is desirable to further facilitate behavior recognition in various scenarios. Existing solutions attempt to track body movement based on computer vision or wearable sensors, but they either depend on lighting conditions or incur high energy consumption. This paper presents RF-Kinect, a training-free system which tracks body movement in 3D space by analyzing the phase information of wearable RFID tags attached to the limbs. Instead of locating each tag independently in 3D space to recover the body postures, RF-Kinect treats each limb as a whole, and estimates the corresponding orientations through extracting two types of phase features, Phase Difference between Tags (PDT) on the same part of a limb and Phase Difference between Antennas (PDA) of the same tag. It then reconstructs the body posture based on the determined orientation of limbs grounded on the human body geometric model, and exploits a Kalman filter to smooth the body movement results, which is the temporal sequence of the body postures. Real experiments with 5 volunteers show that RF-Kinect achieves 8.7° angle error for determining the orientation of limbs and 4.4cm relative position error for the position estimation of joints compared with a Kinect 2.0 testbed.
@article{wang2018rf, title={RF-kinect: A wearable RFID-based approach towards 3D body movement tracking}, author={Wang, Chuyu and Liu, Jian and Chen, Yingying and Xie, Lei and Liu, Hongbo and Lu, Sanglu}, journal={Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies}, volume={2}, number={1}, pages={41}, year={2018}, publisher={ACM} }
C20

PPG-based Finger-level Gesture Recognition Leveraging Wearables INFOCOM 2018

Tianming Zhao, Jian Liu, Yan Wang, Hongbo Liu, Yingying Chen
in Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2018), Honolulu, HI, USA, April 2018.
(Acceptance rate: 309/1606 = 19.2%)

This paper subverts the traditional understanding of Photoplethysmography (PPG) and opens up a new direction for the utility of PPG in commodity wearable devices, especially in the domain of human-computer interaction for fine-grained gesture recognition. We demonstrate that it is possible to leverage the widely deployed PPG sensors in wrist-worn wearable devices to enable finger-level gesture recognition, which could facilitate many emerging human-computer interactions (e.g., sign-language interpretation and virtual reality). While prior solutions in gesture recognition require dedicated devices (e.g., video cameras or IR sensors) or leverage various signals in the environment (e.g., sound, RF or ambient light), this paper introduces the first PPG-based gesture recognition system that can differentiate fine-grained hand gestures at the finger level using commodity wearables. Our innovative system harnesses the unique blood flow changes in a user's wrist area to distinguish the user's finger and hand movements. The insight is that hand gestures involve a series of muscle and tendon movements that compress the arterial geometry to different degrees, resulting in significant motion artifacts to the blood flow with different intensities and time durations. By leveraging the unique characteristics of these motion artifacts in PPG, our system can accurately extract the gesture-related signals from the significant background noise (i.e., pulses), and identify different minute finger-level gestures. Extensive experiments are conducted with over 3600 gestures collected from 10 adults. Our prototype study using two commodity PPG sensors can differentiate nine finger-level gestures from American Sign Language with an average recognition accuracy over 87%, suggesting that our PPG-based finger-level gesture recognition system is promising to be one of the most critical components in sign language translation using wearables.
@inproceedings{zhao2018ppg, title={PPG-based finger-level gesture recognition leveraging wearables}, author={Zhao, Tianming and Liu, Jian and Wang, Yan and Liu, Hongbo and Chen, Yingying}, booktitle={IEEE INFOCOM 2018-IEEE Conference on Computer Communications}, pages={1457--1465}, year={2018}, organization={IEEE} }
C19

Multi-Touch in the Air: Device-Free Finger Tracking and Gesture Recognition via COTS RFID INFOCOM 2018

Chuyu Wang, Jian Liu, Yingying Chen, Hongbo Liu, Lei Xie, Wei Wang, Bingbing He, Sanglu Lu
in Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2018), Honolulu, HI, USA, April 2018.
(Acceptance rate: 309/1606 = 19.2%)

Recently, gesture recognition has gained considerable attention in emerging applications (e.g., AR/VR systems) to provide a better user experience for human-computer interaction. Existing solutions usually recognize gestures based on wearable sensors or specialized signals (e.g., WiFi, acoustic and visible light), but they either incur high energy consumption or are susceptible to the ambient environment, which prevents them from efficiently sensing fine-grained finger movements. In this paper, we present RF-finger, a device-free system based on Commercial-Off-The-Shelf (COTS) RFID, which leverages a tag array on a letter-size paper to sense the fine-grained finger movements performed in front of the paper. Particularly, we focus on two kinds of sensing modes: finger tracking recovers the moving trace of finger writings; multi-touch gesture recognition identifies the multi-touch gestures involving multiple fingers. Specifically, we build a theoretical model to extract the fine-grained reflection feature from the raw RF signal, which describes the finger's influence on the tag array at cm-level resolution. For finger tracking, we leverage K-Nearest Neighbors (KNN) to pinpoint the finger position relying on the fine-grained reflection features, and obtain a smoothed trace via a Kalman filter. Additionally, we construct the reflection image of each multi-touch gesture from the reflection features by regarding the multiple fingers as a whole. Finally, we use a Convolutional Neural Network (CNN) to identify the multi-touch gestures based on the images. Extensive experiments validate that RF-finger can achieve as high as 87% and 92% accuracy for finger tracking and multi-touch gesture recognition, respectively.
@inproceedings{wang2018multi, title={Multi-Touch in the Air: Device-Free Finger Tracking and Gesture Recognition via COTS RFID}, author={Wang, Chuyu and Liu, Jian and Chen, Yingying and Liu, Hongbo and Xie, Lei and Wang, Wei and He, Bingbing and Lu, Sanglu}, booktitle={IEEE INFOCOM 2018-IEEE Conference on Computer Communications}, pages={1691--1699}, year={2018}, organization={IEEE} }
C18

VibWrite: Towards Finger-input Authentication on Ubiquitous Surfaces via Physical Vibration CCS 2017

Jian Liu, Chen Wang, Yingying Chen, Nitesh Saxena
in Proceedings of the 24th ACM Conference on Computer and Communications Security (CCS 2017), Dallas, USA, October-November 2017.
(Acceptance rate: 151/843 = 17.9%)

The goal of this work is to enable user authentication via finger inputs on ubiquitous surfaces leveraging low-cost physical vibration. We propose VibWrite, which extends finger-input authentication beyond touch screens to any solid surface for smart access systems (e.g., access to apartments, vehicles or smart appliances). It integrates passcode, behavioral and physiological characteristics, and surface dependency together to provide a low-cost, tangible and enhanced security solution. VibWrite builds upon a touch sensing technique with vibration signals that can operate on surfaces constructed from a broad range of materials. It is significantly different from traditional password-based approaches, which only authenticate the password itself rather than the legitimate user, and from behavioral biometrics-based solutions, which usually involve specific or expensive hardware (e.g., touch screens or fingerprint readers), incur privacy concerns, and suffer from smudge attacks. VibWrite is based on new algorithms to discriminate fine-grained finger inputs and supports three independent passcode secrets, including PIN number, lock pattern, and simple gestures, by extracting unique features in the frequency domain to capture both behavioral and physiological characteristics such as contact area and touching force. VibWrite is implemented using a single pair of low-cost vibration motor and receiver that can be easily attached to any surface (e.g., a door panel, a desk or an appliance). Our extensive experiments demonstrate that VibWrite can authenticate users with high accuracy (e.g., over 95% within two trials), a low false positive rate (e.g., less than 3%), and robustness to various types of attacks.
@inproceedings{liu2017vibwrite, title={VibWrite: Towards finger-input authentication on ubiquitous surfaces via physical vibration}, author={Liu, Jian and Wang, Chen and Chen, Yingying and Saxena, Nitesh}, booktitle={Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security}, pages={73--87}, year={2017}, organization={ACM} }
C17

SalsaAsst: Beat Counting System Empowered by Mobile Devices to Assist Salsa Dancers MASS 2017

Yudi Dong, Jian Liu, Yingying Chen, Woo Lee
in Proceedings of the 14th IEEE International Conference on Mobile Ad hoc and Sensor Systems (MASS 2017), Orlando, Florida, USA, October 2017.

Dancing is always challenging, especially for beginners who may lack a sense of rhythm. Salsa, as a popular style of dancing, is even harder to learn due to its unique overlapped rhythmic patterns made by different Latin instruments (e.g., Clave sticks, Conga drums, Timbale drums) together. In order to dance in synchronization with the Salsa beats, beginners always need prompts (e.g., a beat counting voice) to remind them of the beat timing. The traditional way to generate Salsa music with beat counting voice prompts requires professional dancers or musicians to count Salsa beats manually, which is only possible in dance studios. Additionally, existing music beat tracking solutions cannot capture the Salsa beats well due to the intricacy of its rhythms. In this work, we propose a mobile device enabled beat counting system, SalsaAsst, which can perform rhythm deciphering and fine-grained Salsa beat tracking to assist Salsa dancers with beat counting voice/vibration prompts. The proposed system can be used conveniently in many scenarios: it can not only help Salsa beginners make accelerated learning progress during practice at home but also significantly reduce professional dancers' errors during live performances. The developed Salsa beat counting algorithm has the capability to track beats accurately in both real-time and offline manners. Our extensive tests using 40 Salsa songs under 8 evaluation metrics demonstrate that SalsaAsst can accurately track the beats of Salsa music and achieve much better performance compared to existing beat tracking approaches.
@inproceedings{dong2017salsaasst, title={SalsaAsst: Beat Counting System Empowered by Mobile Devices to Assist Salsa Dancers}, author={Dong, Yudi and Liu, Jian and Chen, Yingying and Lee, Woo Y}, booktitle={2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS)}, pages={81--89}, year={2017}, organization={IEEE} }
C16

SubTrack: Enabling Real-time Tracking of Subway Riding on Mobile Devices MASS 2017

Guo Liu, Jian Liu, Fangmin Li, Xiaolin Ma, Yingying Chen, Hongbo Liu
in Proceedings of the 14th IEEE International Conference on Mobile Ad hoc and Sensor Systems (MASS 2017), Orlando, Florida, USA, October 2017.

Real-time tracking of subway riding will provide great convenience to millions of commuters in metropolitan areas. Traditional approaches using timetables require continuous attention from subway riders and suffer from poor accuracy in estimating travel time. Recent approaches using mobile devices rely on GSM and WiFi, which are not always available underground. In this work, we present SubTrack, which utilizes sensors on mobile devices to provide automatic tracking of subway riding in real time. The real-time automatic tracking covers three major aspects of a passenger's trip: detecting entry into a station, tracking the passenger's position, and estimating the arrival time at subway stops. In particular, SubTrack employs the cell ID to first detect a passenger entering a station and exploits inertial sensors on the passenger's mobile device to track the train ride. Our algorithm takes advantage of the unique vibrations in acceleration and typical moving patterns of the train to estimate the train's velocity and the corresponding position, and further predict the arrival time in real time. Our extensive experiments in two cities, one in China and one in the USA, demonstrate that our system can accurately track the position of subway riders, predict the arrival time and push arrival notifications in a timely manner.
@inproceedings{liu2017subtrack, title={SubTrack: Enabling Real-time Tracking of Subway Riding on Mobile Devices}, author={Liu, Guo and Liu, Jian and Li, Fangmin and Ma, Xiaolin and Chen, Yingying and Liu, Hongbo}, booktitle={2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS)}, pages={90--98}, year={2017}, organization={IEEE} }
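
The inertial tracking step above amounts to integrating along-track acceleration into velocity and position between stations. The following is a minimal dead-reckoning sketch on a synthetic accelerate-cruise-brake trace; the sampling rate and acceleration profile are assumptions, and a real system would additionally correct drift using the train's characteristic motion pattern.

```python
# Sketch: dead-reckoning a train's velocity and position from along-track acceleration.
# The acceleration trace is synthetic (accelerate, cruise, brake between two stations).
import numpy as np

fs = 50.0                                    # assumed accelerometer sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)                # a two-minute inter-station segment
accel = np.where(t < 20, 1.0, np.where(t > 100, -1.0, 0.0))  # m/s^2

velocity = np.cumsum(accel) / fs             # m/s, by numerical integration
position = np.cumsum(velocity) / fs          # metres travelled since the last station

print(f"peak speed {velocity.max():.1f} m/s, distance {position[-1]:.0f} m")
```
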
C15

Smart User Authentication through Actuation of Daily Activities Leveraging WiFi-enabled IoT MobiHoc 2017

Cong Shi, Jian Liu, Hongbo Liu, Yingying Chen
in Proceedings of the 18th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2017), IIT Madras, Chennai, India, July 2017.
(Acceptance rate: 27/179 = 15.1%)

User authentication is a critical process in both corporate and home environments due to the ever-growing security and privacy concerns. With the advancement of smart cities and home environments, the concept of user authentication has evolved to have broader implications, not only preventing unauthorized users from accessing confidential information but also providing opportunities for customized services corresponding to a specific user. Traditional approaches to user authentication either require specialized device installation or inconvenient wearable sensor attachment. This paper supports the extended concept of user authentication with a device-free approach by leveraging the prevalent WiFi signals made available by IoT devices, such as smart refrigerators, smart TVs and thermostats. The proposed system utilizes the WiFi signals to capture unique human physiological and behavioral characteristics inherited from their daily activities, including both walking and stationary ones. Particularly, we extract representative features from channel state information (CSI) measurements of WiFi signals, and develop a deep learning based user authentication scheme to accurately identify each individual user. Extensive experiments in two typical indoor environments, a university office and an apartment, are conducted to demonstrate the effectiveness of the proposed authentication system. In particular, our system can achieve over 94% and 91% authentication accuracy with 11 subjects through walking and stationary activities, respectively.
@inproceedings{shi2017smart, title={Smart user authentication through actuation of daily activities leveraging WiFi-enabled IoT}, author={Shi, Cong and Liu, Jian and Liu, Hongbo and Chen, Yingying}, booktitle={Proceedings of the 18th ACM International Symposium on Mobile Ad Hoc Networking and Computing}, pages={5}, year={2017}, organization={ACM} }
C14

VibSense: Sensing Touches on Ubiquitous Surfaces through Vibration SECON 2017

Jian Liu, Yingying Chen, Marco Gruteser, Yan Wang
in Proceedings of the 14th IEEE International Conference on Sensing, Communication and Networking (SECON 2017), San Diego, CA, USA, June 2017.
(Acceptance rate: 45/170 = 26.5%; Best paper rate: 1/170 = 0.7%)

VibSense pushes the limits of vibration-based sensing to determine the location of a touch on extended surface areas as well as identify the object touching the surface leveraging a single sensor. Unlike capacitive sensing, it does not require conductive materials and compared to audio sensing it is more robust to acoustic noise. It supports a broad array of applications through either passive or active sensing using only a single sensor. In VibSense's passive sensing, the received vibration signals are determined by the location of the touch impact. This allows location discrimination of touches precise enough to enable emerging applications such as virtual keyboards on ubiquitous surfaces for mobile devices. Moreover, in the active mode, the received vibration signals carry richer information of the touching object's characteristics (e.g., weight, size, location and material). This further enables VibSense to match the signals to the trained profiles and allows it to differentiate personal objects in contact with any surface. VibSense is evaluated extensively in the use cases of localizing touches (i.e., virtual keyboards), object localization and identification. Our experimental results demonstrate that VibSense can achieve high accuracy, over 95%, in all these use cases.
@inproceedings{liu2017vibsense, title={VibSense: Sensing Touches on Ubiquitous Surfaces through Vibration}, author={Liu, Jian and Chen, Yingying and Gruteser, Marco and Wang, Yan}, booktitle={2017 14th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)}, pages={1--9}, year={2017}, organization={IEEE} }
C13

BigRoad: Scaling Massive Road Data Acquisition for Dependable Self-Driving MobiSys 2017

Luyang Liu, Hongyu Li, Jian Liu, Cagdas Karatas, Yan Wang, Marco Gruteser, Yingying Chen, Richard Martin
in Proceedings of the 15th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2017), Niagara Falls, NY, USA, June 2017.
(Acceptance rate: 34/191 = 17.7%)

Advanced driver assistance systems and, in particular, automated driving offer an unprecedented opportunity to transform the safety, efficiency, and comfort of road travel. Developing such safety technologies requires an understanding of not just common highway and city traffic situations but also a plethora of widely different unusual events (e.g., objects on the roadway and pedestrians crossing a highway). While each such event may be rare, in aggregate they represent a significant risk that technology must address to develop truly dependable automated driving and traffic safety technologies. By developing technology to scale road data acquisition to a large number of vehicles, this paper introduces a low-cost yet reliable solution, BigRoad, that can derive internal driver inputs (i.e., steering wheel angles, driving speed and acceleration) and external perceptions of road environments (i.e., road conditions and front-view video) using a smartphone and an IMU mounted in a vehicle. We evaluate the accuracy of the collected internal and external data using over 140 real-driving trips collected over a 3-month period. Results show that BigRoad can accurately estimate the steering wheel angle with 0.69 degree median error, and derive the vehicle speed with 0.65 km/h deviation. The system is also able to determine binary road conditions with 95% accuracy by capturing a small number of brakes. We further validate the usability of BigRoad by pushing the collected video feed and steering wheel angle to a deep neural network steering wheel angle predictor, showing the potential of massive data acquisition for training self-driving systems using BigRoad.
@inproceedings{liu2017bigroad, title={Bigroad: Scaling road data acquisition for dependable self-driving}, author={Liu, Luyang and Li, Hongyu and Liu, Jian and Karatas, Cagdas and Wang, Yan and Gruteser, Marco and Chen, Yingying and Martin, Richard P}, booktitle={Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services}, pages={371--384}, year={2017}, organization={ACM} }
C12

FitCoach: Virtual Fitness Coach Empowered by Wearable Mobile Devices INFOCOM 2017

Xiaonan Guo, Jian Liu, Yingying Chen
in Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2017), Atlanta, GA, USA, May 2017.
(Acceptance rate: 292/1395 = 20.93%)

Acknowledging the powerful sensors on wearables and smartphones that enable various applications to improve users' lifestyles and quality of life (e.g., sleep monitoring and running rhythm tracking), this paper takes one step forward by developing FitCoach, a virtual fitness coach leveraging users' wearable mobile devices (including wrist-worn wearables and arm-mounted smartphones) to assess dynamic postures (movement patterns & positions) in workouts. FitCoach aims to help the user achieve an effective workout and prevent injury by dynamically depicting the short-term and long-term picture of a user's workout based on various sensors in wearable mobile devices. In particular, FitCoach recognizes different types of exercises and interprets fine-grained fitness data (i.e., motion strength and speed) into an easy-to-understand exercise review score, which provides a comprehensive workout performance evaluation and recommendation. FitCoach has the ability to align the sensor readings from wearable devices to the human coordinate system, ensuring the accuracy and robustness of the system. Extensive experiments with over 5000 repetitions of 12 types of exercises involve 12 participants performing both anaerobic and aerobic exercises, indoors as well as outdoors. Our results demonstrate that FitCoach can provide meaningful reviews and recommendations to users by accurately measuring their workout performance, achieving 93% accuracy for workout analysis.
@inproceedings{guo2017fitcoach, title={FitCoach: Virtual fitness coach empowered by wearable mobile devices}, author={Guo, Xiaonan and Liu, Jian and Chen, Yingying}, booktitle={IEEE INFOCOM 2017-IEEE Conference on Computer Communications}, pages={1--9}, year={2017}, organization={IEEE} }
C11

Towards Safer Texting While Driving Through Stop Time prediction CarSys 2016

Hongyu Li, Luyang Liu, Cagdas Karatas, Jian Liu, Marco Gruteser, Yingying Chen, Yan Wang, Richard P. Martin, Jie Yang
in The First ACM International Workshop on Connected and Automated Vehicle Mobility (CarSys 2016), New York, NY, USA, October 2016.

Driver distraction due to in-vehicle device use is an increasing concern and has drawn national attention. We ask whether it is not more effective to channel drivers' device and information system use into safer periods, rather than attempt a complete prohibition of mobile device use. This paper aims to start the discussion by examining the feasibility of automatically identifying safer periods for operating mobile devices. We propose a movement-based architecture design to identify relatively safe periods, estimate the duration and safety level of each period, and delay notifications until a safer period arrives. To further explore the feasibility of such a system architecture, we design and implement a prediction algorithm for one safe period, long traffic signal stops, that relies on crowd-sourced position data. Simulations and experimental evaluation show that the system can achieve a low prediction error, and its coverage and prediction accuracy increase with the amount of crowd-sourced data available.
@inproceedings{li2016towards, title={Towards safer texting while driving through stop time prediction}, author={Li, Hongyu and Liu, Luyang and Karatas, Cagdas and Liu, Jian and Gruteser, Marco and Chen, Yingying and Wang, Yan and Martin, Richard P and Yang, Jie}, booktitle={Proceedings of the First ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services}, pages={14--21}, year={2016}, organization={ACM} }
C10

MotionScale: A Body Motion Monitoring System Using Bed-Mounted Wireless Load Cells CHASE 2016

Musaab Alaziz, Zhenhua Jia, Jian Liu, Richard Howard, Yingying Chen, Yanyong Zhang
in Proceedings of IEEE International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE 2016), Washington DC, USA, June 2016.

In-bed motion detection is an important technique that can enable an array of applications, among which are sleep monitoring and abnormal movement detection. In this paper, we present a low-cost, low-overhead, and highly robust system for in-bed movement detection and classification that uses low-end load cells. By observing the forces sensed by the load cells, placed under each bed leg, we can detect many different types of movements, and further classify them as big or small depending on the magnitude of the force changes on the load cells. We have designed three different features, which we refer to as Log-Peak, Energy-Peak, and ZeroX-Valley, that can effectively extract body movement signals from load cell data that are collected through wireless links in an energy-efficient manner. After establishing the feature values, we employ a simple threshold-based algorithm to detect and classify movements. We have conducted a thorough evaluation that involves collecting data from 30 subjects who perform 27 pre-defined movements in an experiment. By comparing our detection and classification results against the ground truth captured by a video camera, we show that the Log-Peak strategy can detect these 27 types of movements at an error rate of 6.3% while classifying them as big or small movements at an error rate of 4.2%.
@inproceedings{alaziz2016motion, title={Motion scale: A body motion monitoring system using bed-mounted wireless load cells}, author={Alaziz, Musaab and Jia, Zhenhua and Liu, Jian and Howard, Richard and Chen, Yingying and Zhang, Yanyong}, booktitle={2016 IEEE first international conference on connected health: applications, systems and engineering technologies (CHASE)}, pages={183--192}, year={2016}, organization={IEEE} }
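
The detection stage above applies simple thresholds to features such as Log-Peak. The snippet below sketches that idea on synthetic load-cell data; the thresholds, sampling rate, and injected "movements" are illustrative assumptions rather than the paper's tuned values.

```python
# Sketch: threshold-based movement detection on load-cell data, in the spirit of the
# Log-Peak feature (peaks of the log-scaled frame-to-frame force change).
import numpy as np

fs = 20                                        # assumed load-cell sampling rate (Hz)
force = np.full(60 * fs, 800.0)                # one minute of total force on the bed legs (N)
force[300:320] += 60                           # inject a "big" movement
force[700:705] += 10                           # inject a "small" movement
force += np.random.default_rng(0).normal(scale=1.0, size=force.size)

delta = np.abs(np.diff(force))                 # frame-to-frame force change
log_peak = np.log1p(delta)                     # compress the dynamic range

movement = log_peak > 1.5                      # detection threshold (assumed)
big = log_peak > 3.0                           # big-vs-small classification threshold (assumed)
print("movement frames:", movement.sum(), "big-movement frames:", big.sum())
```
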
C9

Leveraging Wearables for Steering and Driver Tracking INFOCOM 2016

Cagdas Karatas, Luyang Liu, Hongyu Li, Jian Liu, Yan Wang, Sheng Tan, Jie Yang, Yingying Chen, Marco Gruteser, Richard Martin
in Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2016), San Francisco, USA, April 2016.
(Acceptance rate: 300/1644 = 18.25%)

Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. Tracking steering wheel usage and turning angle provides fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that allow distinguishing steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.
@inproceedings{karatas2016leveraging, title={Leveraging wearables for steering and driver tracking}, author={Karatas, Cagdas and Liu, Luyang and Li, Hongyu and Liu, Jian and Wang, Yan and Tan, Sheng and Yang, Jie and Chen, Yingying and Gruteser, Marco and Martin, Richard}, booktitle={IEEE INFOCOM 2016-The 35th Annual IEEE International Conference on Computer Communications}, pages={1--9}, year={2016}, organization={IEEE} }
C8

Snooping Keystrokes with mm-level Audio Ranging on a Single Phone MobiCom 2015

Jian Liu, Yan Wang, Gorkem Kar, Yingying Chen, Jie Yang, Marco Gruteser
in Proceedings of the 21st Annual International Conference on Mobile Computing and Networking (MobiCom 2015), Paris, France, September 2015.
(Acceptance rate: 38/207 = 18.3%)

This paper explores the limits of audio ranging on mobile devices in the context of a keystroke snooping scenario. Acoustic keystroke snooping is challenging because it requires distinguishing and labeling sounds generated by tens of keys in very close proximity. Existing work on acoustic keystroke recognition relies on training with labeled data, linguistic context, or multiple phones placed around a keyboard --- requirements that limit usefulness in an adversarial context. In this work, we show that advances in mobile audio hardware can be exploited to discriminate mm-level position differences and that this makes it feasible to locate the origin of keystrokes from only a single phone behind the keyboard. The technique clusters keystrokes using time-difference-of-arrival measurements as well as acoustic features to identify multiple strokes of the same key. It then computes the origin of these sounds precisely enough to identify and label each key. By locating keystrokes, this technique avoids the need for labeled training data or linguistic context. Experiments with three types of keyboards and off-the-shelf smartphones demonstrate scenarios where our system can recover 94% of keystrokes; to our knowledge, this is the first single-device technique that enables acoustic snooping of passwords.
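A rough sketch of the time-difference-of-arrival measurement between the phone's two microphones, the quantity the clustering step relies on; the cross-correlation estimator and the 192 kHz sampling rate are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tdoa_seconds(mic1, mic2, fs=192000):
    """Estimate the time-difference of arrival of the same keystroke sound
    between two microphone channels via cross-correlation.

    With this convention a positive lag means mic1 heard the sound later than
    mic2. Sampling rate and estimator are illustrative assumptions.
    """
    corr = np.correlate(mic1, mic2, mode="full")
    lag = np.argmax(corr) - (len(mic2) - 1)   # lag in samples
    return lag / fs

# Synthetic check: mic1 hears the same waveform 5 samples later than mic2.
sig = np.random.randn(1000)
m2 = sig
m1 = np.roll(sig, 5)
print(tdoa_seconds(m1, m2))   # ~ 5 / 192000 s

# At 192 kHz, one sample of lag corresponds to roughly 1.8 mm of extra path
# (343 m/s / 192000 Hz), which is what makes mm-level ranging plausible.
```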
@inproceedings{liu2015snooping, title={Snooping keystrokes with mm-level audio ranging on a single phone}, author={Liu, Jian and Wang, Yan and Kar, Gorkem and Chen, Yingying and Yang, Jie and Gruteser, Marco}, booktitle={Proceedings of the 21st Annual International Conference on Mobile Computing and Networking}, pages={142--154}, year={2015}, organization={ACM} }
C7

Tracking Vital Signs During Sleep Leveraging Off-the-shelf WiFi MobiHoc 2015

Jian Liu, Yan Wang, Yingying Chen, Jie Yang, Xu Chen, Jerry Cheng
in Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc 2015), Hangzhou, China, June 2015.
(Acceptance rate: 37/250 = 14.7%)

Tracking human vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinic usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system reuses the existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domains to estimate breathing and heart rates, and it works well when either one or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance compared with traditional and existing approaches, which is a strong indication of providing non-invasive, continuous fine-grained vital signs monitoring without any additional cost.
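As a minimal illustration of extracting a breathing rate from periodic channel variations, the sketch below finds the dominant spectral peak in the typical adult breathing band of a single amplitude stream; subcarrier selection, denoising, heart-rate extraction, and the two-person case handled in the paper are omitted.

```python
import numpy as np

def breathing_rate_bpm(csi_amp, fs=20, fmin=0.1, fmax=0.5):
    """Estimate breathing rate (breaths/min) from one CSI-amplitude time series.

    Looks for the dominant spectral peak in the fmin-fmax Hz band; sampling
    rate and band limits are illustrative assumptions.
    """
    x = csi_amp - np.mean(csi_amp)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    peak_freq = freqs[band][np.argmax(spec[band])]
    return peak_freq * 60.0

# Synthetic 0.25 Hz "breathing" ripple on a noisy carrier -> about 15 breaths/min.
t = np.arange(0, 60, 1 / 20)
csi = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t) + 0.01 * np.random.randn(len(t))
print(breathing_rate_bpm(csi))
```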
@inproceedings{liu2015tracking, title={Tracking vital signs during sleep leveraging off-the-shelf wifi}, author={Liu, Jian and Wang, Yan and Chen, Yingying and Yang, Jie and Chen, Xu and Cheng, Jerry}, booktitle={Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing}, pages={267--276}, year={2015}, organization={ACM} }
C6

E-eyes: Device-free Location-oriented Activity Identification Using Fine-grained WiFi Signatures MobiCom 2014

Yan Wang, Jian Liu, Yingying Chen, Marco Gruteser, Jie Yang, Hongbo Liu
in Proceedings of the 20th Annual International Conference on Mobile Computing and Networking (MobiCom 2014), Maui, Hawaii, USA, September 2014.
(Acceptance rate: 36/220 = 16.4%)

Activity monitoring in home environments has become increasingly important and has the potential to support a broad array of applications including elder care, well-being management, and latchkey child safety. Traditional approaches involve wearable sensors and specialized hardware installations. This paper presents device-free location-oriented activity identification at home through the use of existing WiFi access points and WiFi devices (e.g., desktops, thermostats, refrigerators, smartTVs, laptops). Our low-cost system takes advantage of the ever more complex web of WiFi links between such devices and the increasingly fine-grained channel state information that can be extracted from such links. It examines channel features and can uniquely identify both in-place activities and walking movements across a home by comparing them against signal profiles. Signal profile construction can be semi-supervised, and the profiles can be adaptively updated to accommodate the movement of the mobile devices and day-to-day signal calibration. Our experimental evaluation in two apartments of different sizes demonstrates that our approach can achieve over 97% average true positive rate and less than 1% average false positive rate to distinguish a set of in-place and walking activities with only a single WiFi access point. Our prototype also shows that our system can work with a wider signal band (802.11ac) with even higher accuracy.
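Profile matching over channel measurements can be pictured as a nearest-profile search over amplitude distributions. The distance metric and data below are illustrative assumptions for a quick sketch; the paper's actual channel features and matching procedure may differ.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def identify_activity(csi_window, profiles):
    """Match a window of CSI amplitudes against pre-built activity profiles by
    comparing amplitude distributions and returning the closest profile.

    `profiles` maps an activity name to a reference array of CSI amplitudes.
    Using the 1-D Wasserstein (earth mover's) distance is an illustrative
    choice, not necessarily the paper's exact metric.
    """
    scores = {name: wasserstein_distance(csi_window, ref)
              for name, ref in profiles.items()}
    return min(scores, key=scores.get), scores

# Toy profiles and a test window drawn near one of them.
profiles = {
    "cooking": np.random.normal(10, 2.0, 2000),
    "watching_tv": np.random.normal(14, 0.5, 2000),
}
window = np.random.normal(13.8, 0.6, 400)
print(identify_activity(window, profiles)[0])   # -> "watching_tv"
```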
@inproceedings{wang2014eyes, title={E-eyes: device-free location-oriented activity identification using fine-grained wifi signatures}, author={Wang, Yan and Liu, Jian and Chen, Yingying and Gruteser, Marco and Yang, Jie and Liu, Hongbo}, booktitle={Proceedings of the 20th annual international conference on Mobile computing and networking}, pages={617--628}, year={2014}, organization={ACM} }
C5

Practical User Authentication Leveraging Channel State Information (CSI) ASIACCS 2014

Hongbo Liu, Yan Wang, Jian Liu, Jie Yang, Yingying Chen
in Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security (ASIACCS 2014), Kyoto, Japan, June 2014.
(Acceptance rate: 52/260 = 20%)

User authentication is the critical first step to detect identity-based attacks and prevent subsequent malicious attacks. However, increasingly dynamic mobile environments make it harder to always apply cryptographic methods for user authentication due to their infrastructural and key management overhead. Exploiting non-cryptographic techniques grounded on physical layer properties to perform user authentication appears promising. In this work, we explore the use of channel state information (CSI), which is available from off-the-shelf WiFi devices, to conduct fine-grained user authentication. We propose a user-authentication framework that has the capability to build a user profile resilient to the presence of a spoofer. Our machine learning based user-authentication techniques can distinguish two users even when they possess similar signal fingerprints and can detect the existence of a spoofer. Our experiments in both office building and apartment environments show that our framework can filter out signal outliers and achieve higher authentication accuracy compared with existing approaches using received signal strength (RSS).
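A minimal sketch of the machine-learning authentication step, assuming per-packet CSI amplitude vectors as features and an SVM as the classifier; the specific classifier and feature choice are assumptions for illustration, since the abstract only states a machine learning based technique.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Train a classifier on per-packet CSI amplitude vectors (one row = amplitudes
# across subcarriers) labeled by user, then score new packets. The data below
# is synthetic placeholder data.
rng = np.random.default_rng(0)
user_a = rng.normal(10, 1.0, (200, 30))          # 200 packets, 30 subcarriers
user_b = rng.normal(12, 1.0, (200, 30))
X = np.vstack([user_a, user_b])
y = np.array([0] * 200 + [1] * 200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
print(clf.predict(rng.normal(12, 1.0, (1, 30))))  # -> [1], i.e., user_b
```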
@inproceedings{liu2014practical, title={Practical user authentication leveraging channel state information (CSI)}, author={Liu, Hongbo and Wang, Yan and Liu, Jian and Yang, Jie and Chen, Yingying}, booktitle={Proceedings of the 9th ACM symposium on Information, computer and communications security}, pages={389--400}, year={2014}, organization={ACM} }
C4

WSF-MAC: A Weight-based Spatially Fair MAC Protocol for Underwater Sensor Networks CECNet 2012

Fei Dou, Zhigang Jin, Yishan Su, Jian Liu
in Proceedings of the 2nd International Conference on Consumer Electronic, Communications and Networks (CECNet 2012), Hubei, China, April 2012.

The high propagation delay in Underwater Sensor Networks (UWSNs) causes space-time uncertainty, making spatial fairness a challenging problem in UWSNs. In this paper, we propose a weight-based spatially fair MAC protocol (WSF-MAC) for UWSNs. WSF-MAC postpones sending the underwater reply (UW-REP) packet for a silence duration, then determines which node sent its underwater request (UW-REQ) first according to the sending times and competition counts of the received UW-REQ packets, and sends a UW-REP to get ready for transmission. The simulation results show that WSF-MAC improves spatial fairness by about 10%.
@inproceedings{dou2012wsf, title={WSF-MAC: A weight-based spatially fair MAC protocol for underwater sensor networks}, author={Dou, Fei and Jin, Zhigang and Su, Yishan and Liu, Jian}, booktitle={2012 2nd International Conference on Consumer Electronics, Communications and Networks (CECNet)}, pages={3708--3711}, year={2012}, organization={IEEE} }
C3

An Improved RED Algorithm with Sinusoidal Packet-marking Probability and Dynamic Weight ICEICE 2011

Songpo Zhang, Jiming Sa, Jian Liu, Shaoyun Wu
in Proceedings of the International Conference on Electric Information and Control Engineering (ICEICE 2011), Wuhan, China, April 2011.

Congestion control has become a research hotspot because of the rapid growth of the Internet. The Random Early Detection (RED) algorithm is one of the most effective active queue management (AQM) techniques. This paper reviews the RED algorithm and its derivatives and then presents a new algorithm. A packet-marking probability that grows linearly with the average queue length is improper for packets arriving at the gateway, so we present an improved algorithm named SW-RED, which adjusts the weight dynamically and makes packet marking more reasonable. Simulations in NS-2 show that SW-RED achieves better performance and stability compared with RED.
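For context, the sketch below shows the classic RED update that SW-RED modifies: an exponentially weighted moving average of the queue length with a linear marking probability between two thresholds. SW-RED's dynamic weight and sinusoidal marking curve are not reproduced here, and the constants are illustrative.

```python
def red_mark_probability(queue_len, avg, w=0.002, min_th=5, max_th=15, max_p=0.1):
    """Classic RED update: EWMA of the queue length plus a linear marking
    probability between min_th and max_th (all constants illustrative).

    SW-RED replaces the fixed weight `w` and the linear marking curve with a
    dynamically adjusted weight and a sinusoidal marking probability; only the
    baseline it modifies is shown here.
    """
    avg = (1 - w) * avg + w * queue_len          # smoothed average queue length
    if avg < min_th:
        p = 0.0
    elif avg > max_th:
        p = 1.0
    else:
        p = max_p * (avg - min_th) / (max_th - min_th)
    return avg, p

avg, p = 0.0, 0.0
for q in [2, 8, 20, 20, 20]:                     # instantaneous queue lengths
    avg, p = red_mark_probability(q, avg)
print(round(avg, 3), round(p, 4))
```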
@inproceedings{zhang2011improved, title={An improved RED algorithm with sinusoidal packet-marking probability and dynamic weight}, author={Zhang, Songpo and Sa, Jiming and Liu, Jian and Wu, Shaoyun}, booktitle={2011 International Conference on Electric Information and Control Engineering}, pages={1160--1163}, year={2011}, organization={IEEE} }
C2

An Adaptive Cross-layer Mechanism of Multi-Channel Multi-Interface Wireless Networks for Real-Time Video Streaming UIC/ATC 2010

Jian Liu, Fangmin Li, Fei Dou, Xu He, Zhigang Luo, Hong Xiong
in Proceedings of the 7th International Conference on Autonomic & Trusted Computing (UIC/ATC 2010), Xi'an, China, October 2010.

Real-time video streaming over wireless links imposes strong demands on video codecs and network quality. Many efforts have been made to design proper routing protocols and channel assignments (CAs) for multi-channel multi-interface (MCMI) wireless networks, since they can provide higher performance than a single channel. However, there is still no well-studied proposal to guarantee real-time video quality in this setting, which motivates us to explore the potential synergies of exchanging information between different layers to support real-time video streaming over MCMI wireless networks. In this article we jointly consider three layers of the protocol stack: the application, data link, and physical layers, and propose an adaptive cross-layer mechanism for real-time video streaming (ACMRV) in this scenario, which includes both an efficient CA and an adaptive FEC mechanism. We analyze the performance of the proposed architecture and extensively evaluate it via NS-2. The results show that real-time video quality can be greatly improved by our proposal.
@inproceedings{liu2010adaptive, title={An adaptive cross-layer mechanism of multi-channel multi-interface wireless networks for real-time video streaming}, author={Liu, Jian and Li, Fangmin and Dou, Fei and He, Xu and Luo, Zhigang and Xiong, Hong}, booktitle={2010 7th International Conference on Ubiquitous Intelligence \& Computing and 7th International Conference on Autonomic \& Trusted Computing}, pages={165--170}, year={2010}, organization={IEEE} }
C1

An Improvement of AODV Protocol Based on Reliable Delivery in Mobile Ad hoc Networks IAS 2009

Jian Liu, Fangmin Li
in Proceedings of the 5th International Conference on Information Assurance and Security (IAS 2009), Xi'an, China, August 2009.

AODV is a comparatively mature on-demand routing protocol for mobile ad hoc networks. However, the traditional AODV protocol seems less than satisfactory in terms of delivery reliability. This paper presents AODV with reliable delivery (AODV-RD), which builds on AODV-BR and adds a link-failure forewarning mechanism, an alternate-node metric for better route selection, and a repair action taken after the primary route breaks. A performance comparison of AODV-RD with AODV-BR and traditional AODV using ns-2 simulations shows that AODV-RD significantly increases the packet delivery ratio (PDR) and has a much shorter end-to-end delay than AODV-BR. It both optimizes network performance and guarantees communication quality.
@inproceedings{liu2009improvement, title={An improvement of AODV protocol based on reliable delivery in mobile ad hoc networks}, author={Liu, Jian and Li, Fang-min}, booktitle={2009 Fifth International Conference on Information Assurance and Security}, volume={1}, pages={507--510}, year={2009}, organization={IEEE} }

Journal Papers & Magazine Articles

J15

Acoustic-based Sensing and Applications: a Survey

Yang Bai, Li Lu, Jerry Cheng, Jian Liu, Yingying Chen, Jiadi Yu
Computer Networks, 2020. (To appear)

J14

When Your Wearables Become Your Fitness Mate

Xiaonan Guo, Jian Liu, Yingying Chen
Smart Health, Volume 16, May 2020.

Acknowledging the powerful sensors on wearables and smartphones enabling various applications to improve users' life styles and qualities (e.g., sleep monitoring and running rhythm tracking), this paper takes one step forward by developing FitCoach, a virtual fitness coach that leverages users' wearable mobile devices (including wrist-worn wearables and arm-mounted smartphones) to assess dynamic postures (movement patterns & positions) in workouts. FitCoach aims to help the user achieve an effective workout and prevent injury by dynamically depicting the short-term and long-term picture of a user's workout based on various sensors in wearable mobile devices. In particular, FitCoach recognizes different types of exercises and interprets fine-grained fitness data (i.e., motion strength and speed) into an easy-to-understand exercise review score, which provides a comprehensive workout performance evaluation and recommendation. Our system further enables contactless device control during workouts (e.g., a gesture to pick up an incoming call) by distinguishing customized gestures from regular exercise movements. In addition, FitCoach has the ability to align the sensor readings from wearable devices to the human coordinate system, ensuring the accuracy and robustness of the system. Extensive experiments involve 12 participants performing over 5000 repetitions of 12 types of exercises, covering both anaerobic and aerobic exercises indoors as well as outdoors. Our results demonstrate that FitCoach can provide meaningful reviews and recommendations to users by accurately measuring their workout performance, achieving 93% and 90% accuracy for workout analysis and customized control gesture recognition, respectively.
@article{guo2020your, title={When your wearables become your fitness mate}, author={Guo, Xiaonan and Liu, Jian and Chen, Yingying}, journal={Smart Health}, volume={16}, pages={100114}, year={2020}, publisher={Elsevier} }
J13

User Authentication on Mobile Devices: Approaches, Threats and Trends

Chen Wang, Yan Wang, Yingying Chen, Hongbo Liu, Jian Liu
Computer Networks, Volume 170, April 2020.

Mobile devices have brought great convenience in recent years, allowing users to enjoy various applications anytime and anywhere, such as online shopping, Internet banking, navigation, and mobile media. While users enjoy the convenience and flexibility of the "Go Mobile" trend, their sensitive private information (e.g., name and credit card number) on mobile devices could be disclosed. An adversary could access the sensitive private information stored on a mobile device by unlocking it. Moreover, the user's mobile services and applications are all exposed to security threats. For example, the adversary could utilize the user's mobile device to conduct non-permitted actions (e.g., making online transactions and installing malware). Authentication on mobile devices therefore plays a significant role in protecting the user's sensitive information and preventing any non-permitted access to the device. This paper surveys the existing authentication methods on mobile devices. In particular, based on the basic authentication metrics (i.e., knowledge, ownership, and biometrics) used in existing mobile authentication methods, we categorize them into four categories: knowledge-based authentication (e.g., passwords and lock patterns), physiological biometric-based authentication (e.g., fingerprint and iris), behavioral biometric-based authentication (e.g., gait and hand gesture), and two/multi-factor authentication. We compare the usability and security level of the existing authentication approaches among these categories. Moreover, we review the existing attacks on these authentication approaches to reveal their vulnerabilities. The paper points out that the trend in authentication on mobile devices is multi-factor authentication, which determines the user's identity using the integration (not the simple combination) of more than one authentication metric. For example, the user's behavioral biometrics (e.g., keystroke dynamics) can be extracted simultaneously while he/she inputs knowledge-based secrets (e.g., a PIN), which provides enhanced authentication while sparing the user the trouble of providing multiple inputs for different authentication metrics.
@article{wang2020user, title={User authentication on mobile devices: Approaches, threats and trends}, author={Wang, Chen and Wang, Yan and Chen, Yingying and Liu, Hongbo and Liu, Jian}, journal={Computer Networks}, volume={170}, pages={107118}, year={2020}, publisher={Elsevier} }
J12

Enable Traditional Laptops with Virtual Writing Capability Leveraging Acoustic Signals

Li Lu, Jian Liu, Jiadi Yu, Yingying Chen, Yanmin Zhu, Linghe Kong, Minglu Li
The Computer Journal, January 2020.

Human–computer interaction through touch screens plays an increasingly important role in our daily lives. Besides smartphones and tablets, laptops are the most prevalent mobile devices for both work and leisure. To satisfy the requirements of some applications, it is desirable to re-equip a typical laptop with both handwriting and drawing capability. In this paper, we design a virtual writing tablet system, VPad, for traditional laptops without touch screens. VPad leverages two speakers and one microphone, which are available in most commodity laptops, to accurately track hand movements and recognize characters written in the air without additional hardware. Specifically, VPad emits inaudible acoustic signals from the two speakers of a laptop and then analyzes energy features and Doppler shifts of the acoustic signals received by the microphone to track the trajectory of hand movements. Furthermore, we propose a state-machine-based trajectory optimization method to correct unexpected trajectories and employ a stroke direction sequence model based on probability estimation to recognize the characters users write in the air. Experimental results show that VPad achieves an average error of 1.55 cm for trajectory tracking and over 90% character recognition accuracy using only the built-in audio devices of a laptop.
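A minimal sketch of the Doppler-shift measurement underlying the tracking step, assuming a single inaudible pilot tone and frame-based FFT processing; the pilot frequency and search band are illustrative, and the energy features, trajectory optimization, and stroke model from the paper are not shown.

```python
import numpy as np

def doppler_shift_hz(mic_frame, fs=48000, pilot=19000):
    """Estimate the Doppler shift of an inaudible pilot tone in one audio frame
    by locating the strongest spectral peak near the emitted frequency.

    The pilot frequency and the +/-200 Hz search band are illustrative
    assumptions; the mapping of the sign to movement direction depends on the
    speaker/microphone geometry.
    """
    spec = np.abs(np.fft.rfft(mic_frame * np.hanning(len(mic_frame))))
    freqs = np.fft.rfftfreq(len(mic_frame), d=1.0 / fs)
    band = (freqs > pilot - 200) & (freqs < pilot + 200)
    peak = freqs[band][np.argmax(spec[band])]
    return peak - pilot

# A 100 ms frame containing a tone at 19.02 kHz -> shift of roughly +20 Hz.
t = np.arange(0, 0.1, 1 / 48000)
frame = np.sin(2 * np.pi * 19020 * t)
print(round(doppler_shift_hz(frame), 1))
```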
@article{lu2020enable, title={Enable Traditional Laptops with Virtual Writing Capability Leveraging Acoustic Signals}, author={Lu, Li and Liu, Jian and Yu, Jiadi and Chen, Yingying and Zhu, Yanmin and Kong, Linghe and Li, Minglu}, journal={The Computer Journal}, year={2020} }
J11

Towards Low-cost Sign Language Gesture Recognition Leveraging Wearables

Tianming Zhao, Jian Liu, Yan Wang, Hongbo Liu, Yingying Chen
IEEE Transactions on Mobile Computing (IEEE TMC), December 2019.

Different from traditional gestures, sign language gestures involve many finger-level gestures without wrist or arm movements, which are hard to detect using existing motion sensor-based approaches. We introduce the first low-cost sign language gesture recognition system that can differentiate fine-grained finger movements using the photoplethysmography (PPG) and motion sensors in commodity wearables. By leveraging the motion artifacts in PPG, our system can accurately recognize sign language gestures when there are large body movements, which cannot be handled by traditional motion sensor-based approaches. We further explore the feasibility of using both PPG and motion sensors in wearables to improve the sign language gesture recognition accuracy when there are limited body movements. We develop a gradient boosting tree (GBT) model and a deep neural network-based model (i.e., ResNet) for classification. A transfer learning technique is applied to the ResNet-based model to reduce the training effort. We develop a prototype using low-cost PPG and motion sensors, conduct extensive experiments, and collect over 7000 gestures from 10 adults in static and body-motion scenarios. Results demonstrate that our system can differentiate nine finger-level gestures from American Sign Language with an average recognition accuracy of over 98%.
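The GBT branch of the classification stage can be sketched with an off-the-shelf gradient boosting classifier. The feature dimensionality and the random placeholder data below are assumptions purely for illustrating the training/evaluation flow, so the printed accuracy will be near chance; real PPG/motion features are the hard part and are not shown.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Each row stands in for a feature vector extracted from a PPG/motion-sensor
# segment of one gesture; each label is one of the nine finger-level gestures.
rng = np.random.default_rng(1)
X = rng.normal(size=(900, 24))       # 900 segments, 24 features (assumed size)
y = rng.integers(0, 9, size=900)     # 9 gesture classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
gbt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
gbt.fit(X_tr, y_tr)
print("held-out accuracy (near chance on random data):", gbt.score(X_te, y_te))
```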
@article{zhao2019towards, title={Towards Low-cost Sign Language Gesture Recognition Leveraging Wearables}, author={Zhao, Tianming and Liu, Jian and Wang, Yan and Liu, Hongbo and Chen, Yingying}, journal={IEEE Transactions on Mobile Computing}, year={2019}, publisher={IEEE} }
J10

Spearphone: A Speech Privacy Exploit via Accelerometer-Sensed Reverberations from Smartphone Loudspeakers

S Abhishek Anand, Chen Wang, Jian Liu, Nitesh Saxena, Yingying Chen
arXiv (unpublished), 2019.

In this paper, we build a speech privacy attack that exploits speech reverberations generated from a smartphone's inbuilt loudspeaker and captured via a zero-permission motion sensor (the accelerometer). We design our attack, called Spearphone, and demonstrate that speech reverberations from inbuilt loudspeakers, at an appropriate loudness, can impact the accelerometer, leaking sensitive information about the speech. In particular, we show that by exploiting the affected accelerometer readings and carefully selecting feature sets along with off-the-shelf machine learning techniques, Spearphone can successfully perform gender classification (accuracy over 90%) and speaker identification (accuracy over 80%). In addition, we perform speech recognition and speech reconstruction to extract, to an extent, more information about the eavesdropped speech. Our work brings to light a fundamental design vulnerability in many currently-deployed smartphones, which may put people's speech privacy at risk while using the smartphone in loudspeaker mode during phone calls, media playback or voice assistant interactions.
@article{anand2019spearphone, title={Spearphone: A Speech Privacy Exploit via Accelerometer-Sensed Reverberations from Smartphone Loudspeakers}, author={Anand, S Abhishek and Wang, Chen and Liu, Jian and Saxena, Nitesh and Chen, Yingying}, journal={arXiv preprint arXiv:1907.05972}, year={2019} }
J9

Wireless Sensing for Human Activity: A Survey

Jian Liu, Hongbo Liu, Yingying Chen, Yan Wang, Chen Wang
IEEE Communications Surveys and Tutorials, 2019. (IF=22.97).

With the advancement of wireless technologies and sensing methodologies, many studies have shown the success of re-using wireless signals (e.g., WiFi) to sense human activities and thereby realize a set of emerging applications, ranging from intrusion detection, daily activity recognition, and gesture recognition to vital signs monitoring and user identification involving even finer-grained motion sensing. These applications can arguably support various domains of smart home and office environments, including safety protection, well-being monitoring/management, smart healthcare, and smart-appliance interaction. The movements of the human body impact wireless signal propagation (e.g., reflection, diffraction, and scattering), which provides great opportunities to capture human motions by analyzing the received wireless signals. Researchers take advantage of the existing wireless links among mobile/smart devices (e.g., laptops, smartphones, smart thermostats, smart refrigerators and virtual assistance systems) by either extracting the ready-to-use signal measurements or adopting frequency modulated signals to detect the frequency shift. Due to their low-cost and non-intrusive sensing nature, wireless-based human activity sensing has drawn considerable attention and become a prominent research field over the past decade. In this paper, we survey the existing wireless sensing systems in terms of their basic principles, techniques, and system structures. In particular, we describe how wireless signals can be utilized to facilitate an array of applications including intrusion detection, room occupancy monitoring, daily activity recognition, gesture recognition, vital signs monitoring, user identification, and indoor localization. The future research directions and limitations of using wireless signals for human activity sensing are also discussed.
@article{liu2019wireless, title={Wireless sensing for human activity: A survey}, author={Liu, Jian and Liu, Hongbo and Chen, Yingying and Wang, Yan and Wang, Chen}, journal={IEEE Communications Surveys \& Tutorials}, year={2019}, publisher={IEEE} }
J8

Good Vibrations: Accessing ‘Smart’ Systems by Touching Any Solid Surface

Jian Liu, Chen Wang, Yingying Chen, Nitesh Saxena
Biometric Technology Today (BTT), Issue 4, Pages 7-10, 2018.

The process of people authenticating themselves to verify their identity is now commonplace across many areas of our daily life. It's no longer just users of touchscreen devices like mobile phones – the growing use of smart systems means people need to identify themselves to access many other devices and daily activities, like entering their apartment, driving a vehicle and using smart appliances.
@article{liu2018good, title={Good vibrations: accessing ‘smart’systems by touching any solid surface}, author={Liu, Jian and Wang, Chen and Chen, Yingying and Saxena, Nitesh}, journal={Biometric Technology Today}, volume={2018}, number={4}, pages={7--10}, year={2018}, publisher={Elsevier} }
J7

Monitoring Vital Signs and Postures During Sleep Using WiFi Signals

Jian Liu, Yingying Chen, Yan Wang, Xu Chen, Jerry Cheng, Jie Yang
IEEE Internet of Things Journal (IEEE IoT), Volume 5, Issue 3, Pages 2071-2084, 2018. (IF = 7.596).

Tracking human sleeping postures and vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., polysomnography) are limited to clinic usage. Recent radio frequency-based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this paper, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system reuses the existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domains to estimate breathing and heart rates, and it works well when either one or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance compared with traditional and existing approaches, which is a strong indication of providing noninvasive, continuous fine-grained vital signs monitoring without any additional cost.
@article{liu2018monitoring, title={Monitoring vital signs and postures during sleep using WiFi signals}, author={Liu, Jian and Chen, Yingying and Wang, Yan and Chen, Xu and Cheng, Jerry and Yang, Jie}, journal={IEEE Internet of Things Journal}, volume={5}, number={3}, pages={2071--2084}, year={2018}, publisher={IEEE} }
J6

Authenticating Users through Fine-grained Channel Information

Hongbo Liu, Yan Wang, Jian Liu, Jie Yang, Yingying Chen, H. Vincent Poor
IEEE Transactions on Mobile Computing (IEEE TMC), Volume 17, Issue 2, Pages 251-264, 2018.

User authentication is the critical first step in detecting identity-based attacks and preventing subsequent malicious attacks. However, increasingly dynamic mobile environments make it harder to always apply cryptographic methods for user authentication due to their infrastructural and key management overhead. Exploiting non-cryptographic techniques grounded on physical layer properties to perform user authentication appears promising. In this work, the use of channel state information (CSI), which is available from off-the-shelf WiFi devices, to perform fine-grained user authentication is explored. In particular, a user-authentication framework that can work with both stationary and mobile users is proposed. When the user is stationary, the proposed framework builds a user profile for user authentication that is resilient to the presence of a spoofer. The proposed machine learning based user-authentication techniques can distinguish between two users even when they possess similar signal fingerprints and can detect the existence of a spoofer. When the user is mobile, the presence of a spoofer is detected by examining the temporal correlation of CSI measurements. Experiments in both office building and apartment environments show that the proposed framework can filter out signal outliers and achieve higher authentication accuracy compared with existing approaches using received signal strength (RSS).
@article{liu2018authenticating, title={Authenticating users through fine-grained channel information}, author={Liu, Hongbo and Wang, Yan and Liu, Jian and Yang, Jie and Chen, Yingying and Poor, H Vincent}, journal={IEEE Transactions on Mobile Computing}, volume={17}, number={2}, pages={251--264}, year={2018}, publisher={IEEE} }
J5

3D Tracking via Shoe Sensing

Fangmin Li, Guo Liu, Jian Liu, Xiaochuang Chen, Xiaolin Ma
Sensors (MDPI), 2016, 16(11), 1809.

Most location-based services are based on a global positioning system (GPS), which only works well in outdoor environments. Compared to outdoor environments, indoor localization has created more buzz in recent years as people spend most of their time indoors, working at offices and shopping at malls, etc. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful due to the mobile devices' random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing stairs, a state classifier is designed to distinguish the walking status, including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the vertical distance accumulation error. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy.
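The short-time energy-based gait extraction can be sketched as windowed energy thresholding on the shoe accelerometer signal; the window length and threshold below are illustrative assumptions, and the stair/plane-motion classifier and drift-reduction mechanism are not shown.

```python
import numpy as np

def count_steps(accel_mag, fs=100, win_s=0.2, energy_thresh=2.0):
    """Short-time-energy step extraction from a shoe-mounted accelerometer.

    accel_mag: acceleration magnitude with gravity removed (m/s^2). A step is
    counted at each rising edge where the windowed energy crosses the
    threshold; all constants are illustrative.
    """
    win = int(win_s * fs)
    energy = np.array([np.sum(accel_mag[i:i + win] ** 2)
                       for i in range(0, len(accel_mag) - win, win)])
    active = energy > energy_thresh
    return int(np.sum(active[1:] & ~active[:-1]))   # quiet -> active transitions

# Two synthetic "steps": short bursts of 3 m/s^2 on a quiet baseline.
sig = np.zeros(400)
sig[100:110] = 3.0
sig[250:260] = 3.0
print(count_steps(sig))   # -> 2
```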
@article{li20163d, title={3D Tracking via Shoe Sensing}, author={Li, Fangmin and Liu, Guo and Liu, Jian and Chen, Xiaochuang and Ma, Xiaolin}, journal={Sensors}, volume={16}, number={11}, pages={1809}, year={2016}, publisher={Multidisciplinary Digital Publishing Institute} }
J4

Fusion of Different Height Pyroelectric Infrared Sensors for Person Identification

Ji Xiong, Fangmin Li, Jian Liu
IEEE Sensors Journal, Volume 16, Issue 2, Pages 436-446, 2016.

Due to the instability and poor identification ability of a single pyroelectric infrared (PIR) detector for human target identification, this paper presents a PIR detection and identification system that collects thermal infrared features from different parts of human targets through multiple PIR sensors for human identification. First, fast Fourier transform, short-time Fourier transform, and wavelet packet transform algorithms are adopted to extract thermal infrared features of human targets. Then, the canonical correlation analysis algorithm is used to fuse the features of the different algorithms in the feature layer. Finally, a support vector machine is used to classify the human targets. In the decision-making layer, Dempster-Shafer evidence theory is adopted to combine the recognition results from the PIR sensors located at different heights. Extensive experimental results demonstrate that feature-layer fusion improves the average recognition rate of human targets at close range compared with a single sensor, and that decision-making layer fusion improves the recognition ability of the identification system as well. When the detection distance is 6 m, the correct recognition rate of the fusion system still reaches 88.75%. Compared with a system using a single sensor, the recognition rate is increased by an average of 22.67%.
@article{xiong2016fusion, title={Fusion of different height pyroelectric infrared sensors for person identification}, author={Xiong, Ji and Li, Fangmin and Liu, Jian}, journal={IEEE Sensors Journal}, volume={16}, number={2}, pages={436--446}, year={2016}, publisher={IEEE} }
J3

Throughput-Delay Tradeoff for Wireless Multi-Channel Multi-Interface Random Networks

Xiaolin Ma, Fangmin Li, Jian Liu, Xinhua Liu
Canadian Journal of Electrical and Computer Engineering (CJECE), Volume 38, Issue 2, Pages 162-169, 2015.

Capturing the throughput-delay tradeoff in wireless networks has drawn considerable attention, as it can bring a better usage experience by considering different throughput/delay requirements. Traditional works consider only typical single-channel single-interface networks, whereas multi-channel multi-interface (MCMI) networks will become mainstream since they provide concurrent transmissions in different channels, which in turn helps each node obtain better performance. Unlike previous works, this paper investigates the throughput-delay tradeoff for MCMI random networks. Two queuing systems, i.e., the M/M/m queuing system and the m M/M/1 queuing system, are established for MCMI nodes, and a routing parameter named routing deviation is also considered in the analytical model. This paper studies the concurrent transmission capacity (CTC) using the physical interference model and also explores the impact of different physical parameters on the CTC. Moreover, the relations between throughput and delay are derived using the two queuing systems in MCMI random networks, respectively. The deterministic results obtained with a group of real network configuration parameters demonstrate that the proposed tradeoff model can be applied to real network scenarios.
@article{ma2015throughput, title={Throughput--Delay Tradeoff for Wireless Multichannel Multi-Interface Random Networks}, author={Ma, Xiaolin and Li, Fangmin and Liu, Jian and Liu, Xinhua}, journal={Canadian Journal of Electrical and Computer Engineering}, volume={38}, number={2}, pages={162--169}, year={2015}, publisher={IEEE} }
J2

The Capacity of Multi-channel Multi-interface Wireless Networks with Multi-packet Reception and Directional Antenna

Jian Liu, Fangmin Li, Xinhua Liu, Hao Wang
Wireless Communications and Mobile Computing (WCMC, Wiley), Volume 14, Issue 8, Pages 803-817, 2014.

The capacity of wireless networks can be improved by the use of multi-channel multi-interface (MCMI), multi-packet reception (MPR), and directional antenna (DA) technologies. MCMI provides concurrent transmission in different channels for each node with multiple interfaces; MPR offers an increased number of concurrent transmissions on the same channel; DAs can be more effective than omnidirectional antennas by reducing interference and increasing spatial reuse. This paper explores the capacity of wireless networks that integrate MCMI, MPR, and DA technologies. Unlike previous research, which only employed one or two of the aforementioned technologies to improve network capacity, this research captures the capacity bound of networks with all three technologies in arbitrary and random wireless networks. The research shows that such three-technology networks can achieve at most a \(\frac{2\pi}{\theta}\sqrt{k}\) capacity gain in arbitrary networks and a \(\left(\frac{2\pi}{\theta}\right)^{2} k\) capacity gain in random networks compared with MCMI wireless networks without DA and MPR. The paper also explores and analyzes the impact on the network capacity gain of different \(\frac{c}{m}\), \(\theta\), and k-MPR abilities.
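As a quick numerical illustration of the stated bounds (the example values of \(\theta\) and \(k\) below are chosen arbitrarily, not taken from the paper):

```latex
% Example values (not from the paper): beamwidth theta = pi/3, k-MPR with k = 4.
% Arbitrary networks:  (2*pi/theta) * sqrt(k) = 6 * 2  = 12x capacity gain.
% Random networks:     (2*pi/theta)^2 * k     = 36 * 4 = 144x capacity gain.
\[
  \left.\frac{2\pi}{\theta}\sqrt{k}\right|_{\theta=\pi/3,\,k=4} = 6\cdot 2 = 12,
  \qquad
  \left.\left(\frac{2\pi}{\theta}\right)^{2} k\right|_{\theta=\pi/3,\,k=4} = 36\cdot 4 = 144.
\]
```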
@article{liu2014capacity, title={The capacity of multi-channel multi-interface wireless networks with multi-packet reception and directional antenna}, author={Liu, Jian and Li, Fangmin and Liu, Xinhua and Wang, Hao}, journal={Wireless Communications and Mobile Computing}, volume={14}, number={8}, pages={803--817}, year={2014}, publisher={Wiley Online Library} }
J1

Routing Optimization of Wireless Sensor Network Based on Hello Mechanism

Jian Liu, Fangmin Li
Computer Engineering, Volume 36, Issue 7, Pages 99-101, 2010.

This paper analyzes a shortcoming of the Ad hoc On-demand Distance Vector (AODV) protocol: its fixed protocol overhead, which occupies network bandwidth under network topologies with different degrees of stability. It puts forward a method that transmits the Hello message with an adaptive interval in order to control power, improve network bandwidth utilization, and reduce convergence time. The algorithm adjusts the Hello interval automatically according to how rapidly the network topology changes. Simulation tests indicate that, with other conditions held equal, network bandwidth is used more reasonably and network performance is optimized by applying the adaptive interval to transmit the Hello message.
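A minimal sketch of an adaptive Hello interval of the kind the abstract describes; the back-off rule below is an illustrative assumption, not the paper's exact formula.

```python
def next_hello_interval(link_changes, base=1.0, min_iv=0.5, max_iv=5.0):
    """Adapt the Hello broadcast interval (seconds) to topology stability.

    link_changes: number of neighbor-table changes observed during the last
    interval. No changes -> back off toward max_iv (stable topology, fewer
    Hellos); more changes -> shorten toward min_iv (unstable topology, probe
    more often). The scaling rule is an illustrative assumption.
    """
    if link_changes == 0:
        return min(max_iv, base * 2)                    # stable: send less often
    return max(min_iv, base / (1 + link_changes))       # unstable: send more often

print(next_hello_interval(0))   # 2.0 s
print(next_hello_interval(3))   # 0.5 s
```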
@article{liu2010routing, title={Routing Optimization of Wireless Sensor Network Based on Hello Mechanism}, author={LIU, Jian and LI, Fang-min}, journal={Computer Engineering}, volume={36}, number={7}, pages={99--101}, year={2010} }

Others (Posters, Demos & Apps)

O13

Demo: Device-free Activity Monitoring Through Real-time Analysis on Prevalent WiFi Signals DySPAN 2019

Cong Shi, Justin Esposito, Sachin Mathew, Amit Patel, Rishika Sakhuja, Jian Liu and Yingying Chen
Demo Session, in Proceedings of the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN 2019), Newark, New Jersey, November 2019.

In this demo, we present a device-free activity monitoring platform exploiting prevalent WiFi signals to enable real-time activity recognition and user identification in indoor environments. It supports a broad array of real-world applications, such as senior assistance services, fitness tracking, and building surveillance. In particular, the proposed platform takes advantage of channel state information (CSI), which is sensitive to environmental changes introduced by human body movements. To enable immediate response, we design a real-time mechanism that continuously monitors the WiFi signals and promptly analyzes the CSI readings when human activity is detected. For each detected activity, we extract representative features from CSI and exploit a deep neural network (DNN) based scheme to accurately identify the activity type/user identity. Our experimental results demonstrate that the proposed platform can perform activity/user identification with high accuracy while offering low latency.
O12

Demo: Hands-Free Human Activity Recognition Using Millimeter-Wave Sensors DySPAN 2019

Soo Min Kwon, Song Yang, Jian Liu, Xin Yang, Wesam Saleh, Shreya Patel, Christine Mathews, Yingying Chen
Demo Session, in Proceedings of the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN 2019), Newark, New Jersey, November 2019.

In this demo, we introduce a hands-free human activity recognition framework leveraging millimeter-wave (mmWave) sensors. Compared to other existing approaches, our network protects user privacy and can reconstruct a human skeleton performing the activity. Moreover, we show that both tasks can be achieved in one architecture and further optimized to reach higher accuracy than networks that produce only a single result (i.e., only pose estimation or only activity recognition). To demonstrate the practicality and robustness of our model, we will demonstrate it in different settings (i.e., against different backgrounds) and show the accuracy of our network.
O11

Demo: Monitoring Movement Dynamics of Robot Cars and Drones Using Smartphone’s Built-in Sensors DySPAN 2019

Yang Bai, Xin Yang, ChenHao Liu, Justin Wain, Ryan Wang, Jeffery Cheng, Chen Wang, Jian Liu, Yingying Chen
Demo Session, in Proceedings of the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN 2019), Newark, New Jersey, November 2019.

In this demo, we present a smart system that can monitor the movement dynamics of any carrying platform (e.g., a robot car or drone) leveraging the inertial sensors of an attached smartphone. Through the measured inertial sensor readings, we can monitor the movement dynamics of the carrying platform in real time, such as the platform's moving speed, displacement, and position. Unlike the Global Positioning System (GPS), which shows severe accuracy degradation when GPS signals are weak (e.g., in indoor or urban environments), our system tracks the platform's movements and performs positioning without receiving external signals. Thus, our system can be an effective alternative approach to monitor the movement dynamics of indoor objects (e.g., a sweeping robot or indoor drone). Specifically, we exploit the motion-sensing capabilities of the smartphone's inertial sensors to measure the carrying platform's movement dynamics. The magnetometer of the smartphone allows us to align the sensor readings with the cardinal directions; the gyroscope and accelerometer enable measuring the velocity and displacement of the platform. Our experimental results demonstrate that our system can accurately measure the movement dynamics of the carrying platform with easy-to-access smartphone sensors, as a substitute for GPS-based positioning in indoor environments.
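The velocity/displacement step reduces to dead reckoning from the aligned accelerometer axis; the sketch below shows only that basic double integration (which drifts quickly in practice), not any correction the demo system may apply.

```python
import numpy as np

def velocity_and_displacement(accel, fs=100):
    """Dead-reckon speed and displacement along one axis from accelerometer
    samples, assuming gravity is already removed and axes are already aligned
    to the cardinal directions (the demo uses the magnetometer for alignment).

    Plain double integration drifts quickly; this only shows the basic
    relationship between acceleration, velocity, and displacement.
    """
    dt = 1.0 / fs
    velocity = np.cumsum(accel) * dt             # m/s
    displacement = np.cumsum(velocity) * dt      # m
    return velocity, displacement

# Constant 0.5 m/s^2 for 2 s -> final speed ~1 m/s, displacement ~1 m.
v, d = velocity_and_displacement(np.full(200, 0.5))
print(round(v[-1], 2), round(d[-1], 2))
```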
O10

Demo: Toward Continuous User Authentication Using PPG in Commodity Wrist-worn Wearables MobiCom 2019

Tianming Zhao, Yan Wang, Jian Liu, Yingying Chen
Demo Session, in Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (MobiCom 2019), Los Cabos, Mexico, October 2019.

We present a photoplethysmography (PPG)-based continuous user authentication (CA) system leveraging the PPG sensor pervasively equipped in commodity wrist-worn wearables such as smartwatches. Compared to existing approaches, our system does not require any user interaction (e.g., performing specific gestures) and is applicable to practical scenarios where the user's daily activities cause motion artifacts (MA). Notably, we design a robust MA removal method to mitigate the impact of MA. Furthermore, we explore the uniqueness of the human cardiac system and extract the fiducial features in the PPG measurements to train a gradient boosting tree (GBT) classifier, which can effectively differentiate users continuously with low training effort. In particular, we build the prototype of our system using a commodity smartwatch and a WebSocket server running on a laptop for CA. In order to demonstrate the practical use of our system, we will demo our prototype under different scenarios (i.e., static and moving) to show it can effectively detect MA caused by daily activities and achieve a high authentication success rate.
O9

Poster: Inaudible High-throughput Communication Through Acoustic Signals MobiCom 2019

Yang Bai, Jian Liu, Yingying Chen, Li Lu, Jiadi Yu
Poster Session, in Proceedings of the 25th Annual International Conference on Mobile Computing and Networking (MobiCom 2019), Los Cabos, Mexico, October 2019.

In recent decades, countless efforts have been put into the research and development of short-range wireless communication, which offers a convenient way for numerous applications (e.g., mobile payments, mobile advertisement). Regarding the design of acoustic communication, throughput and inaudibility are the most vital aspects, which greatly affect available applications that can be supported and their user experience. Existing studies on acoustic communication either use audible frequency band (e.g., <20kHz) to achieve a relatively high throughput or realize inaudibility using near-ultrasonic frequency band (e.g., 18-20kHz) which however can only achieve limited throughput. Leveraging the non-linearity of microphones, voice commands can be demodulated from the ultrasound signals, and further recognized by the speech recognition systems. In this poster, we design an acoustic communication system, which achieves high-throughput and inaudibility at the same time, and the highest throughput we achieve is over 17× higher than the state-of-the-art acoustic communication systems.
O8

Poster: Leveraging Breathing for Continuous User Authentication MobiCom 2018

Jian Liu, Yudi Dong, Yingying Chen, Yan Wang, Tianming Zhao
Poster Session, in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking (MobiCom 2018), New Delhi, India, October 2018.

This work proposes a continuous user verification system based on unique human respiratory-biometric characteristics extracted from the off-the-shelf WiFi signals. Our system innovatively re-uses widely available WiFi signals to capture the unique physiological characteristics rooted in respiratory motions for continuous authentication. Different from existing continuous authentication approaches having limited applicable scenarios due to their dependence on restricted user behaviors (e.g., keystrokes and gaits) or dedicated sensing infrastructures, our approach can be easily integrated into any existing WiFi infrastructure to provide non-invasive continuous authentication independent of user behaviors. Specifically, we extract representative features leveraging waveform morphology analysis and fuzzy wavelet transformation of respiration signals derived from the readily available channel state information (CSI) of WiFi. A respiration-based user authentication scheme is developed to accurately identify users and reject spoofers. Extensive experiments involving 20 subjects demonstrate that the proposed system can achieve a high authentication success rate of over 93% and robustly defend against various types of attacks.
@inproceedings{liu2018poster, title={Poster: Leveraging Breathing for Continuous User Authentication}, author={Liu, Jian and Dong, Yudi and Chen, Yingying and Wang, Yan and Zhao, Tianming}, booktitle={Proceedings of the 24th Annual International Conference on Mobile Computing and Networking}, pages={786--788}, year={2018}, organization={ACM} }
O7

Poster: Inferring Mobile Payment Passcodes Leveraging Wearable Devices MobiCom 2018

Chen Wang, Jian Liu, Xiaonan Guo, Yan Wang, Yingying Chen
Poster Session, in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking (MobiCom 2018), New Delhi, India, October 2018.

Mobile payment has drawn considerable attention due to its convenience of paying via personal mobile devices at anytime and anywhere, and passcodes (i.e., PINs) are the first choice of most consumers to authorize the payment. This work demonstrates a serious security breach and aims to raise the awareness of the public that the passcodes for authorizing transactions in mobile payments can be leaked by exploiting the embedded sensors in wearable devices (e.g., smartwatches). We present a passcode inference system, which examines to what extent the user's PIN during mobile payment could be revealed from a single wrist-worn wearable device under different input scenarios involving either two hands or a single hand. Extensive experiments with 15 volunteers demonstrate that an adversary is able to recover a user's PIN with high success rate within 5 tries under various input scenarios.
@inproceedings{wang2018poster, title={Poster: Inferring Mobile Payment Passcodes Leveraging Wearable Devices}, author={Wang, Chen and Liu, Jian and Guo, Xiaonan and Wang, Yan and Chen, Yingying}, booktitle={Proceedings of the 24th Annual International Conference on Mobile Computing and Networking}, pages={789--791}, year={2018}, organization={ACM} }
O6

Poster: Your Heart Won't Lie: PPG-based Continuous Authentication on Wrist-worn Wearable Devices MobiCom 2018

Tianming Zhao, Yan Wang, Jian Liu, Yingying Chen
Poster Session, in Proceedings of the 24th Annual International Conference on Mobile Computing and Networking (MobiCom 2018), New Delhi, India, October 2018.

This paper presents a photoplethysmography (PPG)-based continuous user authentication (CA) system, which especially leverages the PPG sensors in wrist-worn wearable devices to identify users. We explore the uniqueness of the human cardiac system captured by the PPG sensing technology. Existing CA systems require either the dedicated sensing hardware or specific gestures, whereas our system does not require any users' interactions but only the wearable device, which has already been pervasively equipped with PPG sensors. Notably, we design a robust motion artifacts (MA) removal method to mitigate the impact of MA from wrist movements. Additionally, we explore the characteristic fiducial features from PPG measurements to efficiently distinguish the human cardiac system. Furthermore, we develop a cardiac-based classifier for user identification using the Gradient Boosting Tree (GBT). Experiments with the prototype of the wrist-worn PPG sensing platform and 10 participants in different scenarios demonstrate that our system can effectively remove MA and achieve a high average authentication success rate over 90%.
@inproceedings{zhao2018your, title={Your Heart Won't Lie: PPG-based Continuous Authentication on Wrist-worn Wearable Devices}, author={Zhao, Tianming and Wang, Yan and Liu, Jian and Chen, Yingying}, booktitle={Proceedings of the 24th Annual International Conference on Mobile Computing and Networking}, pages={783--785}, year={2018}, organization={ACM} }
O5

Poster: Sensing on Ubiquitous Surfaces via Vibration Signals MobiCom 2016

Jian Liu, Yingying Chen, Marco Gruteser
Poster Session, in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking (MobiCom 2016), New York, NY, USA, October 2016.

This work explores vibration-based sensing to determine the location of a touch on extended surface areas as well as to identify the object touching the surface, leveraging a single sensor. It supports a broad array of applications through either passive or active sensing using only a single sensor. In passive sensing, the received vibration signals are determined by the location of the touch impact. This allows location discrimination of touches precise enough to enable emerging applications such as virtual keyboards on ubiquitous surfaces for mobile devices. Moreover, in the active mode, the received vibration signals carry richer information about the touching object's characteristics (e.g., weight, size, location, and material). This further enables our work to match the signals to trained profiles and allows it to differentiate personal objects in contact with any surface. We evaluated the system extensively in the use cases of touch localization (i.e., virtual keyboards) and object localization and identification. Our experimental results demonstrate that the proposed vibration-based solution can achieve high accuracy, over 95%, in all of these use cases.
@inproceedings{liu2016sensing, title={Sensing on ubiquitous surfaces via vibration signals: poster}, author={Liu, Jian and Chen, Yingying and Gruteser, Marco}, booktitle={Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking}, pages={424--425}, year={2016}, organization={ACM} }
O4

Poster: PIN Number-based Authentication Leveraging Physical Vibration MobiCom 2016

Jian Liu, Chen Wang, Yingying Chen
Poster Session, in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking (MobiCom 2016), New York, NY, USA, October 2016.

In this work, we propose the first PIN number based authentication system, which can be deployed on ubiquitous surfaces, leveraging physical vibration signals. The proposed system aims to integrate PIN number, behavioral and physiological characteristics together to provide enhanced security. Different from the existing password-based approaches, the proposed system builds upon a touch sensing technique using vibration signals that can operate on any solid surface. In this poster, we explore the feasibility of using vibration signals for ubiquitous user authentication and develop algorithms that identify fine-grained finger inputs with different password secrets (e.g., PIN sequences). We build a prototype using a vibration transceiver that can be attached to any surface (e.g., a door or a desk) easily. Our experiments in office environments with multiple users demonstrate that we can achieve high authentication accuracy with a low false negative rate.
@inproceedings{liu2016pin, title={PIN number-based authentication leveraging physical vibration: poster}, author={Liu, Jian and Wang, Chen and Chen, Yingying}, booktitle={Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking}, pages={426--427}, year={2016}, organization={ACM} }
O3

Poster: Automatic Personal Fitness Assistance through Wearable Mobile Devices MobiCom 2016

Xiaonan Guo, Jian Liu, Yingying Chen
Poster Session, in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking (MobiCom 2016), New York, NY, USA, October 2016.

Acknowledging the powerful sensors on wearable mobile devices enabling various applications to improve users' life styles and qualities, this paper takes one step forward by developing an automatic personal fitness assistant that uses wearable mobile devices to assess dynamic postures in workouts. In particular, our system recognizes different types of exercises and interprets fine-grained fitness data into an easy-to-understand exercise review score. The system has the ability to align the sensor readings from wearable devices to the earth coordinate system, ensuring the accuracy and robustness of the system. Experiments with 12 types of exercises involve multiple participants performing both anaerobic and aerobic exercises indoors as well as outdoors. Our results demonstrate that the proposed system can provide meaningful reviews and recommendations to users by accurately measuring their workout performance, achieving 93% accuracy for workout analysis.
@inproceedings{guo2016automatic, title={Automatic personal fitness assistance through wearable mobile devices: poster}, author={Guo, Xiaonan and Liu, Jian and Chen, Yingying}, booktitle={Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking}, pages={437--438}, year={2016}, organization={ACM} }
O2

Demo: VibKeyboard: Virtual Keyboard Leveraging Physical Vibration MobiCom 2016

Jian Liu, Yingying Chen, Marco Gruteser
Demo Session, in Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking (MobiCom 2016), New York, NY, USA, October 2016.

VibKeyboard can accurately determine the location of a keystroke on extended surface areas leveraging a single vibration sensor. Unlike capacitive sensing, it does not require conductive materials, and compared to audio sensing it is more robust to acoustic noise. In VibKeyboard, the received vibration signals are determined by the location of the touch impact. This allows location discrimination of touches precise enough to enable emerging applications such as virtual keyboards on ubiquitous surfaces for mobile devices. VibKeyboard seeks to extract unique frequency-domain features embedded in the vibration signal attenuation and interference and to perform fine-grained localization. Our experimental results demonstrate that VibKeyboard can accurately recognize keystrokes from close-by keys on a nearby virtual keyboard.
@inproceedings{liu2016vibkeyboard, title={VibKeyboard: virtual keyboard leveraging physical vibration}, author={Liu, Jian and Chen, Yingying and Gruteser, Marco}, booktitle={Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking}, pages={507--508}, year={2016}, organization={ACM} }
O1

Autologger: A Driving Input Logging Application MobiCom 2016

Luyang Liu, Cagdas Karatas, Hongyu Li, Jian Liu, Marco Gruteser, Yan Wang, Yingying Chen, Richard P. Martin
App Contest Session, in Proceedings of the 22nd Annual International Conference on Computing and Networking (MobiCom 2016), New York, NY, USA, October 2016.