One useful feature of UniTrack is that it allows easy evaluation of pre-trained models (used as appearance models) on diverse tracking tasks. So far we have tested the following models, most of them self-supervised:
| Pre-training Method | Architecture | Link |
|---|---|---|
| ImageNet classification | ResNet-50 | torchvision |
| InsDist | ResNet-50 | Google Drive |
| MoCo-V1 | ResNet-50 | Google Drive |
| PCL-V1 | ResNet-50 | Google Drive |
| PIRL | ResNet-50 | Google Drive |
| PCL-V2 | ResNet-50 | Google Drive |
| SimCLR-V1 | ResNet-50 | Google Drive |
| MoCo-V2 | ResNet-50 | Google Drive |
| SimCLR-V2 | ResNet-50 | Google Drive |
| SeLa-V2 | ResNet-50 | Google Drive |
| InfoMin | ResNet-50 | Google Drive |
| BarlowTwins | ResNet-50 | Google Drive |
| BYOL | ResNet-50 | Google Drive |
| DeepCluster-V2 | ResNet-50 | Google Drive |
| SwAV | ResNet-50 | Google Drive |
| PixPro | ResNet-50 | Google Drive |
| DetCo | ResNet-50 | Google Drive |
| TimeCycle | ResNet-50 | Google Drive |
| ImageNet classification | ResNet-18 | torchvision |
| Colorization + memory | ResNet-18 | Google Drive |
| UVC | ResNet-18 | Google Drive |
| CRW | ResNet-18 | Google Drive |
After downloading an appearance model, please place it under `$UNITRACK_ROOT/weights`. Most of the model checkpoints are adapted from ssl-transfer, many thanks to linusericsson!
If your model uses the standard ResNet architecture, you can test it with UniTrack directly, without any modification. If you use a ResNet whose parameter names do not follow the standard naming, simply rename the parameter groups and load your weights into a standard ResNet (see the first sketch below). Other architectures can also be tested with UniTrack; the only hack needed is to make the model output 8x down-sampled feature maps. You can check out `models/hrnet.py` for an example, or the second sketch below.
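For the renaming case, here is a minimal sketch of remapping checkpoint keys so that a non-standard ResNet-50 checkpoint loads into torchvision's ResNet. The file names, the `state_dict` nesting, and the `module.`/`encoder.` prefixes are assumptions (the first is a common DataParallel artifact); adjust the mapping to whatever naming your checkpoint actually uses.

```python
import torch
from torchvision.models import resnet50

ckpt = torch.load("my_pretrained.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest the weights

renamed = {}
for k, v in state_dict.items():
    k = k.replace("module.", "")   # strip a DataParallel prefix, if present
    k = k.replace("encoder.", "")  # strip a hypothetical encoder prefix
    renamed[k] = v

model = resnet50()
# strict=False tolerates keys that tracking does not need (e.g. the fc head)
missing, unexpected = model.load_state_dict(renamed, strict=False)
print("missing:", missing, "unexpected:", unexpected)
torch.save(model.state_dict(), "weights/my_pretrained_renamed.pth")
```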
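For other architectures, the sketch below shows one way to expose a stride-8 feature map. It illustrates the idea with torchvision's ResNet-18 truncated after `layer2` (which has stride 8); for your own backbone, return whichever intermediate feature map is 8x down-sampled. This is only an illustration of the hack, not UniTrack's actual model interface.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class Stride8Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        net = resnet18()
        # conv1 + maxpool give stride 4; layer2 brings the total to stride 8
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.layer1 = net.layer1
        self.layer2 = net.layer2

    def forward(self, x):
        x = self.stem(x)
        x = self.layer1(x)
        return self.layer2(x)  # 8x down-sampled feature map

# sanity check: a 256x256 input should yield a 32x32 feature map
feat = Stride8Backbone()(torch.randn(1, 3, 256, 256))
assert feat.shape[-2:] == (32, 32)
```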