This is a TensorFlow implementation of the VIS + LSTM visual question answering model from the paper Exploring Models and Data for Image Question Answering by Mengye Ren, Ryan Kiros & Richard Zemel. The model architecture varies slightly from the original: the image embedding is fed into the last LSTM step (after the question) instead of the first. The LSTM model uses the same hyperparameters as the Torch implementation of neural-VQA.
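To make that ordering concrete, here is a minimal sketch of the idea in tf.keras. This is an illustration only, not the repo's actual graph (which is built with lower-level TensorFlow ops); the vocabulary size is a placeholder, and the other sizes follow the defaults listed under Training.

```python
import tensorflow as tf

MAX_Q_LEN, VOCAB_SIZE, NUM_ANSWERS = 22, 15000, 1000   # VOCAB_SIZE is a placeholder
EMBEDDING_SIZE, RNN_SIZE = 512, 512                     # defaults from the Training options below

question = tf.keras.Input(shape=(MAX_Q_LEN,), dtype="int32")   # zero-padded word indices
fc7 = tf.keras.Input(shape=(4096,))                            # VGG-16 fc7 feature

word_emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMBEDDING_SIZE)(question)   # (batch, 22, 512)
img_emb = tf.keras.layers.Dense(EMBEDDING_SIZE, activation="tanh")(fc7)      # project image to embedding size
img_emb = tf.keras.layers.Reshape((1, EMBEDDING_SIZE))(img_emb)              # (batch, 1, 512)

# The image embedding is appended as the *last* timestep, after the question words.
sequence = tf.keras.layers.Concatenate(axis=1)([word_emb, img_emb])
hidden = tf.keras.layers.LSTM(RNN_SIZE, return_sequences=True)(sequence)     # two LSTM layers, as in the default config
hidden = tf.keras.layers.LSTM(RNN_SIZE)(hidden)
answer = tf.keras.layers.Dense(NUM_ANSWERS, activation="softmax")(hidden)

model = tf.keras.Model(inputs=[question, fc7], outputs=answer)
model.summary()
```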
- Python 2.7.6
- TensorFlow
- h5py
- Download the MSCOCO train+val images and VQA data using `Data/download_data.sh`. Extract all the downloaded zip files inside the `Data` folder.
- Download the pretrained VGG-16 TensorFlow model and save it in the `Data` folder.
- Extract the fc7 image features using the following (a quick way to inspect the output is shown after this list):

  ```
  python extract_fc7.py --split=train
  python extract_fc7.py --split=val
  ```
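Once extraction finishes, you can sanity check the feature files with h5py. The filename here is an assumption; use whatever file `extract_fc7.py` writes into `Data/`.

```python
from __future__ import print_function  # keeps the snippet Python 2/3 compatible
import h5py

# List every dataset in the extracted feature file along with its shape.
# Each fc7 feature set should have a trailing dimension of 4096.
with h5py.File("Data/fc7_features_train.h5", "r") as f:   # filename is an assumption
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```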
- Training
  - Basic usage

    ```
    python train.py
    ```
  - Options
    - `rnn_size`: Size of the LSTM internal state. Default is 512.
    - `num_lstm_layers`: Number of LSTM layers. Default is 2.
    - `embedding_size`: Size of the word embeddings. Default is 512.
    - `learning_rate`: Learning rate. Default is 0.001.
    - `batch_size`: Batch size. Default is 200.
    - `epochs`: Number of full passes through the training data. Default is 50.
    - `img_dropout`: Dropout for the image embedding network. Probability of dropping the input. Default is 0.5.
    - `word_emb_dropout`: Dropout for the word embeddings. Default is 0.5.
    - `data_dir`: Directory containing the data h5 files. Default is `Data/`.
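  - For example, assuming the options above map directly to command-line flags of the same names, a run with a smaller batch size and learning rate would look like:

    ```
    python train.py --batch_size=128 --learning_rate=0.0005
    ```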
- Prediction
  - Basic usage

    ```
    python predict.py --image_path="sample_image.jpg" --question="What is the color of the animal shown?" --model_path="Data/Models/model2.ckpt"
    ```
  - Models are saved in `Data/Models` during training after each complete pass over the training data. Supply the path of the trained model via the `model_path` option.
- Evaluation
  - Run `python evaluate.py` with the same options as used in train.py, if you did not train with the defaults.
- fc7 ReLU-layer features from the pretrained VGG-16 model are used as image embeddings. I did not scale these features, and am not sure whether that makes a difference.
- Questions are zero padded to a fixed length so that batch training can be used. They are represented as word indices into a question vocabulary built during preprocessing (see the sketch after this list).
- Answers are mapped to a 1000-word vocabulary, covering 87% of the answers across the training and validation datasets.
- The VIS + LSTM model is defined in vis_lstm.py. The input tensors for training are the fc7 features, the questions (word indices, up to 22 words), and the answers (one-hot vectors of size 1000). The model is implemented with 2 LSTM layers by default (`num_lstm_layers` is configurable).
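As a rough illustration of the question and answer encoding described above, here is a small standalone sketch. The tiny vocabularies, variable names, and trailing-zero padding are made up for the example; the real vocabularies are built during preprocessing.

```python
import numpy as np

MAX_Q_LEN, NUM_ANSWERS = 22, 1000
word_to_idx = {"which": 1, "animal": 2, "is": 3, "this": 4}   # toy stand-in for the question vocabulary
answer_to_idx = {"cat": 0, "dog": 1}                          # toy stand-in for the 1000-answer vocabulary

def encode_question(question):
    # Map words to vocabulary indices and zero pad to a fixed length so batches can be stacked.
    idx = [word_to_idx.get(w, 0) for w in question.lower().rstrip("?").split()][:MAX_Q_LEN]
    return np.array(idx + [0] * (MAX_Q_LEN - len(idx)), dtype=np.int32)

def encode_answer(answer):
    # One-hot vector over the answer vocabulary.
    one_hot = np.zeros(NUM_ANSWERS, dtype=np.float32)
    one_hot[answer_to_idx[answer]] = 1.0
    return one_hot

print(encode_question("Which animal is this?"))   # [1 2 3 4 0 0 ... 0]
print(encode_answer("dog").argmax())              # 1
```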
The model achieved an accuracy of 50.8% on the validation dataset after 12 epochs of training over the entire training dataset.
The fun part! Try it for yourself. Make sure you have TensorFlow installed. Download the data files/trained model from this link and save them in the `Data/` directory. Also download the pretrained VGG-16 model and save it as `Data/vgg16.tfmodel`. You can test any sample image using:

```
python predict.py --image_path="Data/sample.jpg" --question="Which animal is this?" --model_path="Data/model2.ckpt"
```