I have tried running the sample images through the network pretrained on the ChaLearn dataset, but the output is terrible (each joint confidence map just has high activations around the edges of the image). Is there anything I have to change in the demo code to get the ChaLearn-trained model to work? The FLIC-trained network seems to work well, though it has issues with legs (which makes sense given the dataset). Thanks!
The currently released ChaLearn model expects a background-subtracted image of the human -- i.e. set all non-human pixels to 0. The per-frame masks are included in the original ChaLearn dataset. We plan to release another model without this limitation -- meanwhile please background-subtract your input images.
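A minimal sketch of that preprocessing step, assuming the ChaLearn per-frame mask is a binary image at the same resolution as the frame (the file names below are placeholders, not part of the demo code):

```python
import cv2
import numpy as np

# Hypothetical paths -- substitute a frame and its corresponding ChaLearn mask.
frame = cv2.imread("frame_0001.jpg")                       # H x W x 3, BGR
mask = cv2.imread("mask_0001.png", cv2.IMREAD_GRAYSCALE)   # H x W

# Treat any nonzero mask value as "human", zero out everything else.
binary = (mask > 0).astype(np.uint8)
masked_frame = frame * binary[:, :, None]

cv2.imwrite("frame_0001_masked.jpg", masked_frame)
```

Feeding `masked_frame` (rather than the raw frame) to the ChaLearn model should give confidence maps comparable to the FLIC results.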