Code and datasets for the paper https://arxiv.org/abs/2006.10079
To install requirements:
pip install -r requirements.txt
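If you want to keep the dependencies isolated, a standard virtual environment works first (a generic setup sketch, not specific to this repository; the environment name .venv is arbitrary):
python3 -m venv .venv            # create an isolated environment (hypothetical name)
source .venv/bin/activate        # activate it before installing
pip install -r requirements.txt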
The code for our SCN model is located in the file counting/models/networks/attcount_mlb.py.
The code to create our ablated versions of TallyQA is located in counting/datasets/tallyqa.py.
The loss we use is located in counting/models/criterions/counting_regression.py.
The datasets are available at https://github.com/manoja328/TallyQA_dataset
Download our ablated versions of TallyQA by running the script ./counting/datasets/scripts/download_mcd.sh
Download the image features by running the script ./counting/datasets/scripts/download_features.sh
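For convenience, both downloads can be chained in a single command (same scripts as above):
bash ./counting/datasets/scripts/download_mcd.sh && \
bash ./counting/datasets/scripts/download_features.sh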
To train the models in the paper, run the following commands:
python -m bootstrap.run \
-o counting/options/tallyqa-odd-even-val2-0.1/scn.yaml \
--exp.dir logs/tallyqa-odd-even-val2-0.1/scn
python -m bootstrap.run \
-o counting/options/tallyqa-even-odd-val2-0.1/scn.yaml \
--exp.dir logs/tallyqa-even-odd-val2-0.1/scn
python -m bootstrap.run \
-o counting/options/tallyqa/scn.yaml \
--exp.dir logs/tallyqa/scn
This will run training, evaluation and testing.
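If you prefer to launch the three runs back to back, the commands above can be wrapped in a simple shell loop (a convenience sketch; the option files and log directories are exactly those listed above):
# train SCN on the two ablated splits and on the original TallyQA, one after the other
for split in tallyqa-odd-even-val2-0.1 tallyqa-even-odd-val2-0.1 tallyqa; do
  python -m bootstrap.run \
    -o counting/options/${split}/scn.yaml \
    --exp.dir logs/${split}/scn
done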
To compare the results of the three runs, run:
python -m counting.compare-tally-val -d logs/tallyqa-odd-even-val2-0.1/scn logs/tallyqa-even-odd-val2-0.1/scn logs/tallyqa/scn
Download the COCOGrounding dataset by running the script ./counting/datasets/scripts/download_coco_ground.sh
You can then run the evaluation on COCOGrounding with the following command:
python -m bootstrap.run \
-o path/to/trained/model/options.yaml \
--exp.resume "best_eval_epoch.accuracy_top1" \
--dataset.train_split \
--dataset.params.path_questions data/vqa/tallyqa/coco-ground.json \
--misc.logs_name "coco_ground_0.2" \
--model.metric.score_threshold_grounding 0.2 \
--dataset.name "counting.datasets.tallyqa.TallyQA"
To perform early stopping on the validation set (if you use an ablated MCD dataset), use --exp.resume "best_validation_epoch.tally_acc.overall" instead.
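If you want grounding results at several score thresholds, the same command can be wrapped in a loop over --model.metric.score_threshold_grounding (a sketch reusing only the flags shown above; the threshold values are illustrative, and swap --exp.resume as noted for ablated MCD datasets):
# evaluate several grounding score thresholds, one log name per threshold
for t in 0.1 0.2 0.3; do
  python -m bootstrap.run \
    -o path/to/trained/model/options.yaml \
    --exp.resume "best_eval_epoch.accuracy_top1" \
    --dataset.train_split \
    --dataset.params.path_questions data/vqa/tallyqa/coco-ground.json \
    --misc.logs_name "coco_ground_${t}" \
    --model.metric.score_threshold_grounding ${t} \
    --dataset.name "counting.datasets.tallyqa.TallyQA"
done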
To check the grounding results, run the command:
python -m counting.compare-grounding -d <exp-dir>
Download a pretrained model:
- TallyQA Odd-Even-90%:
- TallyQA Even-Odd-90%:
- Original TallyQA: