
# Video Classification using Two Stream CNNs

We use a spatial and a temporal stream, with VGG-16 and CNN-M respectively, to model video information. LSTMs are stacked on top of the CNNs to model long-term dependencies between video frames. A rough sketch of this architecture follows the reference list below. For more information, see these papers:

- Two-Stream Convolutional Networks for Action Recognition in Videos
- Fusing Multi-Stream Deep Networks for Video Classification
- Modeling Spatial-Temporal Clues in a Hybrid Deep Learning Framework for Video Classification
- Towards Good Practices for Very Deep Two-Stream ConvNets
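
As a rough illustration of the architecture described above, here is a minimal `tf.keras` sketch of the two-stream idea with an LSTM stacked on the spatial stream. The frame counts, layer sizes, fusion strategy, and the simplified CNN-M-style temporal network are illustrative assumptions, not the exact configuration used in this repository (the repo also stacks an LSTM on the temporal stream; it is kept clip-level here only for brevity).

```python
# A minimal sketch of the two-stream + LSTM idea, assuming tf.keras.
# All shapes and sizes below are illustrative, not the repo's configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 20   # CCV defines 20 semantic categories
FRAMES = 16        # RGB frames sampled per clip (assumption)
FLOW_STACK = 20    # 10 stacked x/y optical-flow pairs, as in the two-stream paper

# Spatial stream: VGG-16 per frame, then an LSTM over the frame features.
vgg = tf.keras.applications.VGG16(include_top=False, pooling="avg",
                                  input_shape=(224, 224, 3))
rgb_in = layers.Input(shape=(FRAMES, 224, 224, 3))
rgb_feats = layers.TimeDistributed(vgg)(rgb_in)   # -> (FRAMES, 512)
rgb_out = layers.LSTM(256)(rgb_feats)

# Temporal stream: a small CNN-M-style convnet over a stack of flow fields.
flow_in = layers.Input(shape=(224, 224, FLOW_STACK))
x = layers.Conv2D(96, 7, strides=2, activation="relu")(flow_in)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(256, 5, strides=2, activation="relu")(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(512, 3, activation="relu")(x)
flow_out = layers.GlobalAveragePooling2D()(x)

# Late fusion by concatenation, then a softmax over the CCV classes.
merged = layers.concatenate([rgb_out, flow_out])
pred = layers.Dense(NUM_CLASSES, activation="softmax")(merged)
model = models.Model(inputs=[rgb_in, flow_in], outputs=pred)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Late fusion by concatenation is only one of the strategies discussed in the papers above; they also study score-level and multi-stream fusion.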


Here are the steps to run the project on the CCV dataset:

## Creating a virtual environment

First create a directory named `env`, then run the following inside it to create and activate a virtual environment. A `requirements.txt` file at the project root lists the modules the project needs.

```bash
$ mkdir env
$ cd env
$ virtualenv venv-video-classification
$ source venv-video-classification/bin/activate
$ cd ..
$ pip install -r requirements.txt
```

## Setting up the dataset

1. Get the YouTube data, remove broken videos and negative instances, and finally create a pickle file of the dataset by running the scripts in the `utility_scripts` folder (a hedged sketch of this step appears after this list).

2. Temporal stream (in the `temporal` folder):
   - Run `temporal_vid2img` to create the optical flow frames and the related files (see the optical-flow sketch after this list).
   - Run `temporal_stream_cnn` to start the temporal stream training.

3. Spatial stream (in the `spatial` folder):
   - Run `spatial_vid2img` to create the static frames and related files (see the static-frame sketch after this list).
   - Download the `vgg16_weights.h5` file from here and put it in the `spatial` folder.
   - Run `spatial_stream_cnn` to start the spatial stream training.

4. Temporal stream LSTM: the code will be added soon.

5. Spatial stream LSTM: the code will be added soon.
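
For step 1, a hypothetical sketch of what the dataset-preparation utilities might do: skip broken videos and negative instances, then pickle an index of `(path, label)` pairs. The function names, the label convention (negative labels mean negative instances), and the output filename are assumptions made for illustration; the actual logic lives in the `utility_scripts` folder.

```python
# Hypothetical sketch of the dataset-preparation step: filter out broken
# videos and negative instances, then pickle an index of (path, label) pairs.
import os
import pickle

import cv2  # OpenCV, used here only to verify that a video opens

def is_readable(path):
    """A video counts as broken if OpenCV cannot open it or read a frame."""
    cap = cv2.VideoCapture(path)
    ok = cap.isOpened() and cap.read()[0]
    cap.release()
    return ok

def build_index(video_dir, labels):
    """labels: dict mapping video filename -> class id (negative = drop)."""
    index = []
    for name, label in labels.items():
        if label < 0:            # drop negative instances
            continue
        path = os.path.join(video_dir, name)
        if not os.path.exists(path) or not is_readable(path):
            continue             # drop missing/broken videos
        index.append((path, label))
    return index

if __name__ == "__main__":
    labels = {"v1.mp4": 3, "v2.mp4": -1}   # placeholder annotations
    with open("dataset.pickle", "wb") as f:
        pickle.dump(build_index("videos", labels), f)
```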
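For the `temporal_vid2img` step, a hedged sketch of a typical optical-flow extraction pipeline using OpenCV's dense Farnebäck flow. The repo's script may use a different flow algorithm, frame-naming scheme, or normalization.

```python
# Compute dense optical flow between consecutive frames and save the
# x/y components as grayscale images, one pair per frame transition.
import cv2
import numpy as np

def video_to_flow_frames(video_path, out_prefix):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        cap.release()
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        # Rescale each flow component from roughly [-20, 20] to [0, 255].
        for c, tag in ((0, "x"), (1, "y")):
            comp = np.clip(flow[..., c], -20, 20)
            img = ((comp + 20) * (255.0 / 40)).astype(np.uint8)
            cv2.imwrite(f"{out_prefix}_{i:05d}_{tag}.jpg", img)
        prev_gray = gray
        i += 1
    cap.release()
```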
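Similarly, for `spatial_vid2img`, a minimal sketch that samples every k-th RGB frame from a video; the real script may select and resize frames differently.

```python
# Write every k-th RGB frame of a video to disk as a JPEG.
import cv2

def video_to_static_frames(video_path, out_prefix, every=10):
    cap = cv2.VideoCapture(video_path)
    i = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every == 0:
            cv2.imwrite(f"{out_prefix}_{saved:05d}.jpg", frame)
            saved += 1
        i += 1
    cap.release()
```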