Recurrent Neural Networks for Emotion Recognition in Video

Deep Learning for Video
Nov 2015

Deep-learning-based approaches to facial analysis and video analysis have recently demonstrated high performance on a variety of key tasks such as face recognition, emotion recognition and activity recognition. In the case of video, information often must be aggregated across a variable-length sequence of frames to produce a classification result. Prior work using convolutional neural networks (CNNs) for emotion recognition in video has relied on temporal averaging and pooling operations reminiscent of widely used approaches for the spatial aggregation of information. Recurrent neural networks (RNNs) have seen an explosion of recent interest, as they yield state-of-the-art performance on a variety of sequence analysis tasks. RNNs provide an attractive framework for propagating information over a sequence using a continuous-valued hidden-layer representation. In this work we present a complete system for the 2015 Emotion Recognition in the Wild (EmotiW) Challenge. We focus our presentation and experimental analysis on a hybrid CNN-RNN architecture for facial expression analysis that can outperform a previously applied CNN approach using temporal averaging for aggregation.
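The two aggregation strategies contrasted in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes each video frame has already been reduced by a CNN to a fixed-size feature vector (here stood in by random data), then compares temporal averaging of those features against a vanilla RNN that propagates a hidden state across the frame sequence. All shapes, weights, and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T frames, each reduced by a CNN to a d-dim feature vector.
T, d, h = 10, 64, 32
frame_features = rng.standard_normal((T, d))  # stand-in for per-frame CNN outputs

# Baseline aggregation: temporal averaging of the per-frame features.
avg_feature = frame_features.mean(axis=0)  # shape (d,)

# RNN aggregation: a continuous-valued hidden state carried across frames.
W_xh = 0.1 * rng.standard_normal((d, h))  # input-to-hidden weights (illustrative)
W_hh = 0.1 * rng.standard_normal((h, h))  # hidden-to-hidden weights (illustrative)
b_h = np.zeros(h)

hidden = np.zeros(h)
for x_t in frame_features:
    hidden = np.tanh(x_t @ W_xh + hidden @ W_hh + b_h)

# `hidden` now summarizes the whole sequence and could feed a classifier,
# whereas `avg_feature` discards the temporal ordering of the frames.
print(avg_feature.shape, hidden.shape)
```

The key design difference is that averaging treats frames as an unordered bag, while the recurrent update makes the final summary depend on frame order.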

Reference

Samira E. Kahou, Vincent Michalski, Kishore Konda, Roland Memisevic, Christopher Pal, "Recurrent Neural Networks for Emotion Recognition in Video," in: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 467-474, 2015.
