
Recurrent Convolutional Networks for Video

Delving Deeper into Convolutional Networks for Learning Video Representations

Deep Learning for Video
Nov 2015

We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call “percepts”, using Gated Recurrent Unit (GRU) recurrent networks. Our method relies on percepts extracted from all levels of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have low spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. However, using low-level percepts can lead to high-dimensional video representations. To mitigate this effect and control the number of model parameters, we introduce a variant of the GRU model that leverages convolution operations to enforce sparse connectivity of the model units and share parameters across the input's spatial locations.
We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to the state of the art on the YouTube2Text dataset using a simpler text-decoder model and without extra 3D CNN features.
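As an illustration (not the authors' released code), the following is a minimal PyTorch-style sketch of a convolutional GRU cell of the kind described above: the fully connected transformations of a standard GRU are replaced by 2-D convolutions, so hidden units are sparsely connected and parameters are shared across the spatial locations of the percept maps. All names and hyperparameters (kernel size, channel counts) are illustrative assumptions.

# Minimal sketch of a convolutional GRU cell (illustrative, not the paper's code).
# Fully connected GRU transforms are replaced by 2-D convolutions so that
# parameters are shared across spatial locations of the input percept maps.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2  # preserve the spatial resolution of the percepts
        # Update (z) and reset (r) gates, computed jointly from input and hidden state.
        self.conv_gates = nn.Conv2d(in_channels + hidden_channels,
                                    2 * hidden_channels, kernel_size,
                                    padding=padding)
        # Candidate hidden state.
        self.conv_cand = nn.Conv2d(in_channels + hidden_channels,
                                   hidden_channels, kernel_size,
                                   padding=padding)

    def forward(self, x, h):
        # x: (batch, in_channels, H, W) percept at time t
        # h: (batch, hidden_channels, H, W) previous hidden state
        gates = torch.sigmoid(self.conv_gates(torch.cat([x, h], dim=1)))
        z, r = gates.chunk(2, dim=1)
        h_tilde = torch.tanh(self.conv_cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

Unrolling one such cell over the video frames, with one cell per percept level, yields a spatio-temporal representation whose spatial resolution matches that of the underlying convolutional feature maps.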

Reference

Nicolas Ballas, Li Yao, Christopher Pal, Aaron Courville, Delving Deeper into Convolutional Networks for Learning Video Representations, in: International Conference on Learning Representations (ICLR), 2016

[arXiv:1511.06432] [ICLR 2016]
