Hi! Thank you for the great work.
I’m fine-tuning this model, and starting from the checkpoint trained with the focal backbone seems to lower performance on COCO and RefCOCOg. Could the batch size also be influencing this?
I am training on 4 GPUs with `TRAIN.BATCH_SIZE_TOTAL 20 \ TRAIN.BATCH_SIZE_PER_GPU 5 \ DATALOADER_NUM_WORKERS 4`, and I get ~62-63% cIoU on RefCOCOg and ~38.1% mAP / ~60.7% mIoU on COCO.
Would increasing the number of epochs, reducing the LR_MULTIPLIER for the backbone (to 0.05), or lowering WARMUP_ITERS (to 5) be helpful?
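For the batch-size point, the adjustment I am considering is the linear LR scaling rule. Below is a minimal sketch of that rule; `reference_base_lr` and `reference_batch_size` are hypothetical placeholders, since I do not know the values used for the released checkpoint:

```python
# Sketch: scale the base learning rate linearly with the total batch size.
# `reference_base_lr` and `reference_batch_size` are placeholders, not values
# taken from this repo's configs.

def scaled_lr(reference_base_lr: float,
              reference_batch_size: int,
              new_batch_size: int) -> float:
    """Linear LR scaling: lr_new = lr_ref * (bs_new / bs_ref)."""
    return reference_base_lr * new_batch_size / reference_batch_size

# My setup: 4 GPUs x 5 images per GPU = 20 total.
new_total_batch_size = 4 * 5

# Example with placeholder reference values (to be replaced with the values
# actually used to train the released checkpoint).
print(scaled_lr(reference_base_lr=1e-4,
                reference_batch_size=32,
                new_batch_size=new_total_batch_size))
```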
By the way, the demo code uses davit.py, but how can we calculate the number of parameters when using davitd5_unicl_lang_v1.yaml? Also, I was wondering why BACKBONE_NAME is set to 'davit' instead of 'davitd5'?
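For the parameter question, this is the generic count I had in mind, assuming the backbone built from davitd5_unicl_lang_v1.yaml ends up as a standard torch.nn.Module; the `build_backbone` call in the comments is a hypothetical placeholder, not this repo's actual API:

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> tuple[int, int]:
    """Return (total, trainable) parameter counts for any nn.Module."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total, trainable

# Placeholder usage: however the repo builds the backbone from
# davitd5_unicl_lang_v1.yaml, the resulting object should be an nn.Module.
# backbone = build_backbone(cfg)   # hypothetical builder, not the repo's API
# total, trainable = count_parameters(backbone)
# print(f"total: {total / 1e6:.1f}M, trainable: {trainable / 1e6:.1f}M")
```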