

Deep Learning for Video
Dec 2015

Describing Videos by Exploiting Temporal Structure


Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application to video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatio-temporal 3-D convolutional neural network (3-D CNN) representation of short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second, we propose a temporal attention mechanism that goes beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the state of the text-generating RNN. Our approach exceeds the current state of the art on both the BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger, and more challenging dataset of paired videos and natural language descriptions.
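To make the temporal attention mechanism concrete, below is a minimal sketch (not the authors' code) of one soft-attention step as described in the abstract: the text-generating RNN's hidden state scores each video segment's feature vector, and a softmax-weighted sum of segment features forms the context for the next word. The class name, layer sizes, and the assumption that segment features come from a 3-D CNN are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttention(nn.Module):
    """Soft attention over per-segment video features, conditioned on the decoder state."""

    def __init__(self, feat_dim: int, hidden_dim: int, attn_dim: int = 128):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)      # projects segment features
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)  # projects decoder RNN state
        self.score = nn.Linear(attn_dim, 1)                 # scalar relevance per segment

    def forward(self, feats: torch.Tensor, hidden: torch.Tensor):
        # feats:  (batch, n_segments, feat_dim) -- e.g. 3-D CNN clip features
        # hidden: (batch, hidden_dim)           -- current text-generating RNN state
        energy = torch.tanh(self.feat_proj(feats) + self.hidden_proj(hidden).unsqueeze(1))
        alpha = F.softmax(self.score(energy).squeeze(-1), dim=1)   # attention weights
        context = (alpha.unsqueeze(-1) * feats).sum(dim=1)         # weighted feature sum
        return context, alpha

# Usage for one decoding step: 26 segments with hypothetical 4096-d features.
feats = torch.randn(2, 26, 4096)
hidden = torch.randn(2, 512)
attn = TemporalAttention(feat_dim=4096, hidden_dim=512)
context, alpha = attn(feats, hidden)
print(context.shape, alpha.shape)  # torch.Size([2, 4096]) torch.Size([2, 26])

At each word-generation step the weights alpha are recomputed from the current decoder state, which is how the model learns to attend to different temporal segments for different parts of the sentence.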

Reference

Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing Videos by Exploiting Temporal Structure. In 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, pp. 4507-4515, 2015.

