During training of the appearance part I've encountered rather long delays (15-20 min) in the calculation of the codebook values, apparently because of the number of frames in the training set.
In the published code the codebook is calculated for every frame (in my case 30 videos with 3000-4000 frames each). By contrast, codebook_a in the published model contains only 125 arrays, each only a few dozen elements long, even though you mention ~2000 videos with hundreds of frames each in your dataset.
Could you please clarify how this codebook should be properly constructed?
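For reference, here is a minimal sketch of the kind of construction I am guessing at: subsample the per-frame appearance features and cluster them into a small codebook with k-means, rather than keeping one entry per frame. The function name, feature dimensions, sample limits, and cluster count below are my own assumptions and are not taken from the published code.

```python
# Hypothetical sketch: build a compact appearance codebook by clustering a
# subsample of per-frame feature vectors, instead of storing one entry per
# training frame. Shapes and parameter values are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(per_frame_features, n_entries=125, max_samples=50_000, seed=0):
    """per_frame_features: (N_frames, D) array of appearance features."""
    rng = np.random.default_rng(seed)
    feats = np.asarray(per_frame_features)
    # Subsample frames so clustering stays fast even with ~100k frames.
    if len(feats) > max_samples:
        idx = rng.choice(len(feats), size=max_samples, replace=False)
        feats = feats[idx]
    km = KMeans(n_clusters=n_entries, n_init=10, random_state=seed).fit(feats)
    return km.cluster_centers_  # (n_entries, D) compact codebook

if __name__ == "__main__":
    # Example: ~90k frames of 64-dim features reduced to 125 codebook vectors.
    dummy = np.random.rand(90_000, 64).astype(np.float32)
    codebook_a = build_codebook(dummy)
    print(codebook_a.shape)  # (125, 64)
```

Is something along these lines what was used to produce the released codebook_a, or is the per-frame computation in the repository the intended procedure?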