Activity recognition systems must be robust to the changes and failures in the sensor network that are typical of open-ended environments. We developed machine learning techniques and heuristics that provide this robustness, including detection of sensor failure or degradation, dynamic fusion mechanisms, and handling of missing data. These methods have been applied to activity recognition from wearable and ambient sensors, as well as to BMI scenarios.
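
As an illustration of the failure-detection part, the sketch below flags channels whose short-term statistics drift away from a calibration baseline, or that stop delivering data altogether. The thresholding heuristic and the function shown are illustrative assumptions, not the methods developed in the project.

```python
import numpy as np

def degraded_channels(window, baseline_mean, baseline_std, z_thresh=4.0):
    """Return indices of channels that look failed or degraded: their
    windowed mean drifts far from calibration statistics, or they stop
    delivering data entirely."""
    dead = np.all(np.isnan(window), axis=0)                     # no samples at all
    filled = np.where(np.isnan(window), baseline_mean, window)  # ignore gaps for the drift test
    z = np.abs(filled.mean(axis=0) - baseline_mean) / (baseline_std + 1e-9)
    return np.where((z > z_thresh) | dead)[0]

# Usage: baseline statistics come from a calibration recording.
rng = np.random.default_rng(0)
calib = rng.normal(size=(500, 8))
window = rng.normal(size=(50, 8))
window[:, 3] += 10.0        # simulate a drifting channel
window[:, 6] = np.nan       # simulate a disconnected channel
print(degraded_channels(window, calib.mean(axis=0), calib.std(axis=0)))  # -> [3 6]
```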

Activity recognition in real applications is challenging because the conditions of the environment cannot be easily controlled. Although multiple sensory modalities can help make decisions more robust, they also increase complexity and multiply the possibilities of failure. We focused on developing recognition systems that leverage the available data and the redundancy in the system to provide graceful degradation and self-correcting behaviour.
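
To make the graceful-degradation idea concrete, here is a minimal sketch of reliability-weighted fusion of per-sensor classifier outputs: sensors that stop producing output are excluded from the decision, so the system degrades gradually instead of failing outright. The weighting scheme is an illustrative assumption, not the fusion mechanism published in the project.

```python
import numpy as np

def fuse(posteriors, reliability):
    """posteriors: (n_sensors, n_classes) per-sensor class probabilities,
    with rows of NaN for sensors that produced no output.
    reliability: (n_sensors,) weights in [0, 1], e.g. recent accuracy."""
    valid = ~np.isnan(posteriors).any(axis=1)
    weights = reliability * valid                  # failed sensors get weight zero
    if weights.sum() == 0:
        return None, None                          # every sensor is down
    fused = (weights[:, None] * np.nan_to_num(posteriors)).sum(axis=0) / weights.sum()
    return int(np.argmax(fused)), fused

# Usage: three sensors vote over four activity classes; sensor 1 has dropped out.
p = np.array([[0.7, 0.1, 0.1, 0.1],
              [np.nan] * 4,
              [0.2, 0.5, 0.2, 0.1]])
label, probs = fuse(p, reliability=np.array([0.9, 0.8, 0.6]))
print(label, probs)  # -> 0 [0.5  0.26 0.14 0.1 ]
```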

A major difficulty in this line of research is the lack of standard datasets and evaluation tools for benchmarking novel methods. We devoted significant effort to creating a large database as a reference resource for the community: we collected, curated, and annotated a dataset for benchmarking multimodal activity recognition systems. It contains data from 23 body-worn sensors, 12 object-placed sensors, and 21 ambient sensors recording the daily living activities of four subjects. Multiple scientific papers have resulted from this work: the paper describing the dataset has been cited 192 times, and the dataset page at the UCI Machine Learning Repository has been visited more than 80,000 times since 2012.
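
As a hedged sketch of how such a multimodal recording might be handled in practice, the snippet below loads a whitespace-separated sensor file and fills short gaps of missing samples by linear interpolation while leaving long outages untouched. The file name, column layout, and gap threshold are illustrative assumptions rather than the dataset's documented format.

```python
import pandas as pd

def load_recording(path, max_gap=10):
    """Read a whitespace-separated sensor file and fill short runs of
    missing values by linear interpolation, leaving longer outages as NaN."""
    df = pd.read_csv(path, sep=r"\s+", header=None)
    return df.interpolate(method="linear", limit=max_gap, limit_direction="both")

# Usage (hypothetical file name and column layout):
# data = load_recording("S1-ADL1.dat")
# print(data.isna().mean())   # fraction of samples still missing per channel
```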

Opportunity architecture

Resources

Partners

Project funded by the 7th Framework Programme (FP7) of the European Commission from Feb. 2009 to Jan. 2012.

Selected publications