Ubiquitous Sensing Research Group

Summary

The Ubiquitous Sensing research group conducts basic research on extracting meaningful information about human behavior and characteristics, and on developing novel system- and platform-level solutions for performing sensing tasks as accurately, robustly, and resource-efficiently as possible. Examples of the kinds of information that the sensing research focuses on include personal characteristics (e.g., personality), social behavior (e.g., co-presence of people), and a person's competence or skillfulness (e.g., fuel-efficiency in driving behavior). Most of the research uses smartphones as the sensing platform, but the potential of new sensing technologies, such as Kinect or smartwatches, is also a topic of investigation.

The Ubiquitous Sensing research group operates within the NODES research programme led by Prof. Sasu Tarkoma. The group actively participates in teaching for the Networking and Services and the Algorithms and Machine Learning programmes. The group is part of the Helsinki Institute for Information Technology HIIT.

List of publications

Research portfolio

Examples of the group's activities are listed below. Other topics that we work on include mobile social sensing, sensing technologies for supporting computer security, sensing collective urban phenomena, and robust on-device positioning technologies.

Platform-level solutions for energy-efficient sensing and tracking of location-related information

We have developed an on-device GSM fingerprinting algorithm for mobile phones that can provide coarse-grained position information. The algorithm relies on a radio map that is constructed in an online fashion on the mobile device.
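To make the fingerprinting idea concrete, the sketch below maintains a radio map of cell-ID/RSSI fingerprints keyed to known positions and answers queries with the most similar stored fingerprints. The class, method names, and distance metric are illustrative assumptions, not the group's actual algorithm.

```python
# Illustrative sketch (not the group's implementation) of on-device GSM
# fingerprinting. A fingerprint is a dict of observed cell IDs -> RSSI (dBm);
# the radio map is grown online whenever a position fix is available.
from math import sqrt

class GsmFingerprintMap:
    def __init__(self):
        self.entries = []  # list of (fingerprint, (lat, lon)) pairs

    def add_observation(self, fingerprint, position):
        """Grow the radio map online: record a fingerprint seen at a known fix."""
        self.entries.append((dict(fingerprint), position))

    def estimate(self, fingerprint, k=3):
        """Coarse position: average the positions of the k most similar fingerprints."""
        if not self.entries:
            return None
        nearest = sorted(self.entries,
                         key=lambda e: self._distance(fingerprint, e[0]))[:k]
        lat = sum(pos[0] for _, pos in nearest) / len(nearest)
        lon = sum(pos[1] for _, pos in nearest) / len(nearest)
        return (lat, lon)

    @staticmethod
    def _distance(fp_a, fp_b):
        """Euclidean distance over RSSI values of the union of observed cells;
        cells missing from one fingerprint count as a weak -110 dBm reading."""
        cells = set(fp_a) | set(fp_b)
        return sqrt(sum((fp_a.get(c, -110) - fp_b.get(c, -110)) ** 2
                        for c in cells))
```

In the on-device setting the map grows incrementally as the phone obtains occasional position fixes (e.g., from GPS), so no offline site survey is needed.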

We have also developed sensing and uploading strategies for minimizing energy consumption during location and/or position tracking. The strategies rely on effective switching between different sensors (accelerometer, GPS, compass) and on intelligent scheduling of data uploads.
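The core of such sensor switching can be sketched as a simple duty-cycling policy: keep the cheap accelerometer sampling continuously and only power up the expensive GPS when movement is detected. The threshold and function names below are illustrative assumptions, not the group's actual policy.

```python
# Illustrative duty-cycling sketch (threshold and names are assumptions):
# a low-variance accelerometer window suggests the device is stationary,
# so the GPS stays off and the last fix is reused.

def choose_sensor(accel_variance, moving_threshold=0.5):
    """Pick the next sensor to sample from the accelerometer variance."""
    return "gps" if accel_variance > moving_threshold else "accelerometer"

def plan_schedule(accel_windows, moving_threshold=0.5):
    """Map a sequence of accelerometer-variance windows to a sampling plan."""
    return [choose_sensor(v, moving_threshold) for v in accel_windows]
```

Because the accelerometer draws an order of magnitude less power than GPS, gating the expensive sensor on detected movement is what makes continuous tracking affordable on a phone battery.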

Facilitating Deployment and Usage of Indoor Positioning Solutions

Location is a key source of information about human behavior. In indoor environments, current technical solutions either require additional hardware on the user's end or are expensive and cumbersome to set up. In our research, we have developed algorithmic solutions, based on non-linear embedding techniques, for reducing the calibration effort of wireless indoor positioning technologies.
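Embedding techniques place fingerprints in space from pairwise dissimilarities alone, which is what lets them cut calibration effort. As a hedged illustration, the sketch below implements classical multidimensional scaling (MDS), the building block underlying non-linear embeddings such as Isomap; it is not the group's actual algorithm.

```python
# Illustrative sketch (not the group's algorithm): classical MDS recovers
# point coordinates from a matrix of pairwise dissimilarities. In indoor
# positioning, dissimilarities can be derived from RSS fingerprints, so
# relative positions emerge with far fewer surveyed calibration points.
import numpy as np

def classical_mds(D, dims=2):
    """Embed points in `dims` dimensions so that Euclidean distances
    between embedded points approximate the dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:dims]      # keep the largest components
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))
```

For genuinely non-linear structure (e.g., fingerprints collected along corridors), Isomap-style methods replace D with graph-based geodesic distances before the same MDS step.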

Advances in Activity Recognition

We have developed novel algorithms for increasing the amount of relevant information that can be extracted from accelerometer signals. In particular, we have developed a novel gravity estimation algorithm that preserves movement-related information even during periods of sustained movement (e.g., when the person is in a motorized transportation modality). We have also developed an algorithm for estimating so-called peak features that characterize acceleration and deceleration patterns, thus providing essential information, e.g., about vehicular movement patterns. Building on these techniques, we have developed a novel transportation mode detection system that improves the accuracy and robustness of transportation mode detection compared to hybrid GPS-and-accelerometer approaches, despite relying solely on the accelerometer of the mobile device.
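For context, the textbook baseline for gravity estimation is a simple low-pass filter, sketched below: an exponential moving average tracks the slowly changing gravity component, and subtracting it leaves the movement-related linear acceleration. The group's algorithm improves on exactly this baseline (which loses movement information during sustained acceleration); the smoothing factor here is an illustrative assumption.

```python
# Baseline low-pass gravity estimation (one axis), for context only;
# the group's actual algorithm is more sophisticated. alpha close to 1
# means the gravity estimate changes slowly.

def split_gravity(samples, alpha=0.9):
    """Split a 1-axis accelerometer stream into (gravity, linear) lists."""
    gravity, linear = [], []
    g = samples[0]                        # initialize from the first reading
    for a in samples:
        g = alpha * g + (1 - alpha) * a   # low-pass: gravity estimate
        gravity.append(g)
        linear.append(a - g)              # high-pass residual: movement
    return gravity, linear
```

The weakness the group's work addresses is visible in this sketch: during a long, steady acceleration (e.g., a train pulling away), the low-pass estimate slowly absorbs the movement component, erasing it from the linear signal.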

Collecting labels for training data is a major hurdle in much of activity recognition work. In this work we have examined how to bootstrap the performance of activity recognizers without extensive amounts of training data. Our work builds on self-taught learning, where unlabeled data is used to learn patterns that characterize the behavior of the person, and then a small set of training data is mapped onto these patterns and used to train a recognizer. Experimental evaluations demonstrate that our solution is capable of reducing the need for labeled data significantly without impacting recognizer accuracy. Moreover, the results demonstrate that our approach has better generalization performance than previous data-driven feature learning methods.
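The self-taught pipeline can be sketched in three steps: learn a feature dictionary from unlabeled data, re-encode points against that dictionary, and attach labels to dictionary elements using only a few labeled examples. The sketch below uses plain k-means centroids as a stand-in dictionary on 1-D data; the group's actual basis-learning method differs, and all names and data here are illustrative.

```python
# Illustrative self-taught-learning sketch (not the group's method):
# (1) unlabeled data defines a dictionary of k-means centroids,
# (2) points are encoded as their nearest centroid,
# (3) a handful of labeled examples names each learned feature.
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over 1-D points; returns the learned centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: abs(p - centroids[i]))].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def encode(point, centroids):
    """Map a raw point to the index of its nearest dictionary element."""
    return min(range(len(centroids)), key=lambda i: abs(point - centroids[i]))

# Usage: plentiful unlabeled data shapes the features; two labels suffice.
unlabeled = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9]       # e.g., accel variance values
centroids = kmeans(unlabeled, k=2)
label_of_feature = {encode(0.15, centroids): "still",
                    encode(5.0, centroids): "walking"}
```

The point of the design is the division of labor: the expensive part (shaping the feature space) consumes only unlabeled data, while the labeled data only has to name the features, which is why so little of it is needed.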

Sensing using RGB+D cameras

We have also examined how to extract relevant information from data provided by an RGB+D camera. In particular, we have looked at algorithmic solutions for estimating the direction of the user's gaze, as well as for recognizing different types of gestures. The algorithms for eye gaze estimation have been integrated into a low-cost eye-tracking solution that relies on a Kinect sensor and is targeted at evaluations of interactive technologies in the wild. The gesture recognition solutions have been integrated into a novel gesture authentication system.

Group Members

Group leader:

  • Docent, Petteri Nurmi, PhD

PhD Students:

  • Samuli Hemminki
  • Teemu Pulkkinen (currently working at Ekahau)

MSc Students:

  • Farbod Faghihi
  • Jarno Leppänen
  • Mikko Pelkonen
  • Francesco Concas

Affiliates / collaborators:

  • Ella Peltonen
  • Eemil Lagerspetz
  • Johannes Verwijnen

Alumni:

  • Dr. Sourav Bhattacharya, Bell-Labs, Ireland
  • Dr. Jara Uitto
  • M.Sc. Yina Ye
  • Haipeng Guo
06.11.2015 - 14:22 Petteri Nurmi