Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.
In: Sensors, Vol. 16 (2016), No. 1, pp. 115-139
Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation. [ABSTRACT FROM AUTHOR]
Copyright of Sensors (14248220) is the property of MDPI and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
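The abstract describes a pipeline in which convolutional layers extract features from raw multi-channel sensor windows and LSTM recurrent units model their temporal dynamics before classification. A minimal numpy sketch of that general Conv+LSTM pattern is shown below; all layer sizes, weight initialisations, and variable names here are illustrative assumptions, not the authors' actual DeepConvLSTM architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution along time over all sensor channels.
    x: (T, C_in), w: (K, C_in, C_out) -> (T-K+1, C_out), with ReLU."""
    K, T = w.shape[0], x.shape[0]
    out = np.stack([np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1]))
                    for t in range(T - K + 1)])
    return np.maximum(out, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_last(x, Wx, Wh, b):
    """Single-layer LSTM; returns the final hidden state.
    x: (T, D), Wx: (D, 4H), Wh: (H, 4H), gates packed as [i, f, o, g]."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

# Hypothetical sizes: 24 time steps, 6 sensor channels (e.g. two
# 3-axis IMUs), 8 conv filters of width 5, 16 LSTM units, 4 classes.
T, C, F, K, H, N = 24, 6, 8, 5, 16, 4
x = rng.standard_normal((T, C))                 # one window of raw sensor data
w_conv = rng.standard_normal((K, C, F)) * 0.1   # conv feature extractor
Wx = rng.standard_normal((F, 4 * H)) * 0.1
Wh = rng.standard_normal((H, 4 * H)) * 0.1
b = np.zeros(4 * H)
W_out = rng.standard_normal((H, N)) * 0.1       # linear classifier

feats = conv1d(x, w_conv)            # automatic feature extraction
h_last = lstm_last(feats, Wx, Wh, b) # temporal dynamics of feature activations
logits = h_last @ W_out
probs = np.exp(logits) / np.exp(logits).sum()   # per-class probabilities
```

Because the convolution runs jointly over all input channels, concatenating heterogeneous sensor streams into the channel dimension is one simple way such a network can fuse multimodal sensors, as the abstract notes.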
| Title: | Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition |
|---|---|
| Author(s) / Contributors: | Ordóñez, Francisco Javier; Roggen, Daniel |
| Journal: | Sensors, Vol. 16 (2016), No. 1, pp. 115-139 |
| Published: | 2016 |
| Media type: | Academic journal |
| ISSN: | 1424-8220 (print) |
| DOI: | 10.3390/s16010115 |