# SiamIRCA Training Tutorial

This document describes how to train SiamIRCA.

## Add SiamIRCA to your PYTHONPATH

```bash
export PYTHONPATH=/path/to/pysot:$PYTHONPATH
```
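A quick sanity check — assuming the repository exposes a `pysot` package, as the path above suggests — is to import it from any directory:

```bash
# Should print the package location inside /path/to/pysot; an ImportError
# means PYTHONPATH is not set correctly (the package name `pysot` is assumed).
python -c "import pysot; print(pysot.__file__)"
```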

## Prepare training dataset

Detailed preparation instructions for each dataset are listed in the `training_dataset` directory.

## Download pretrained backbones

Download the pretrained backbones and put them in the `pretrained_models` directory.
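For example (the backbone filename below is only a placeholder; use whatever files the download provides):

```bash
mkdir -p pretrained_models
# Example placement; the actual filename depends on the downloaded backbone.
mv ~/Downloads/resnet50.model pretrained_models/
```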

## Training

To train a model, run `train.py` with the desired config. The commands below are run from an experiment directory two levels below the repository root, so that the relative path `../../tools/train.py` resolves (the directory name is a placeholder; use the one containing your `config.yaml`):

```bash
cd experiments/<experiment_name>
```

### Multi-processing Distributed Data Parallel Training

Refer to the PyTorch distributed training documentation for a detailed description.

Single node, multiple GPUs (we use 3 GPUs):

```bash
CUDA_VISIBLE_DEVICES=0,1,2 \
python -m torch.distributed.launch \
    --nproc_per_node=3 \
    --master_port=2333 \
    ../../tools/train.py --cfg config.yaml
```
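Training progress can be followed with TensorBoard, assuming the training script writes event files under `./logs` (the usual default in pysot-style configs; adjust if your `config.yaml` overrides the log directory):

```bash
# Point TensorBoard at the (assumed) log directory, then open http://localhost:6006
tensorboard --logdir ./logs --port 6006
```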

## Testing

After training, you can test the saved snapshots on the VOT dataset. For example, to test the snapshots from epoch 10 to epoch 20:

```bash
START=10
END=20
seq $START 1 $END | \
    xargs -I {} echo "snapshot/checkpoint_e{}.pth" | \
    xargs -I {} \
    python -u ../../tools/test.py \
        --snapshot {} \
        --config config.yaml \
        --dataset VOT2018 2>&1 | tee logs/test_dataset.log
```
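The same sweep written as a plain shell loop, which is easier to read and to modify (it reuses `START` and `END` from above):

```bash
# Equivalent to the xargs pipeline: test one snapshot per epoch, in sequence.
for e in $(seq $START $END); do
    python -u ../../tools/test.py \
        --snapshot "snapshot/checkpoint_e${e}.pth" \
        --config config.yaml \
        --dataset VOT2018
done 2>&1 | tee logs/test_dataset.log
```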

Or, to test the epochs in parallel with MPI:

```bash
mpiexec -n 3 python ../../tools/test_epochs.py \
    --start_epoch 10 \
    --end_epoch 20 \
    --gpu_nums 3 \
    --threads 3 \
    --dataset VOT2018
```
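Here `mpiexec -n 3` launches three worker processes, and the `--gpu_nums`/`--threads` flags let `test_epochs.py` spread the snapshots in the `[--start_epoch, --end_epoch]` range across the three GPUs, so several epochs are evaluated at once instead of sequentially.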

## Evaluation

```bash
# --tracker_path:   result path
# --dataset:        dataset name
# --num:            number of threads used for evaluation
# --tracker_prefix: tracker name
python ../../tools/eval.py \
    --tracker_path ./results \
    --dataset VOT2018 \
    --num 4 \
    --tracker_prefix 'ch*'
```
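The quoted `'ch*'` suggests `--tracker_prefix` is matched as a shell-style glob against the result directories under `--tracker_path`, picking up every tracker whose name starts with `ch`; replace it with the prefix of the result directories your own test runs produced.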

## Hyper-parameter Search

The tuning toolkit will not stop unless you stop it: it keeps searching until you interrupt it.

```bash
python ../../tools/tune.py \
    --dataset VOT2018 \
    --snapshot snapshot/checkpoint_e20.pth \
    --gpu_id 0
```
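Since the search runs until interrupted, it is convenient to run it detached and stop it explicitly once the logged results plateau (the `tune.log` filename below is our own redirect, not something `tune.py` creates):

```bash
# Run the search in the background, capturing its output; stop it with `kill`
# (or Ctrl-C if run in the foreground) when results stop improving.
nohup python ../../tools/tune.py \
    --dataset VOT2018 \
    --snapshot snapshot/checkpoint_e20.pth \
    --gpu_id 0 > tune.log 2>&1 &
tail -f tune.log
```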