Setting of VideoCutLER's baseline #61

Open
so45jj45 opened this issue Mar 28, 2024 · 0 comments
Thank you for your excellent research.

In the VideoCutLER paper, the baseline is described as follows:
‡: "We train a CutLER [35] model with Mask2Former as a detector on ImageNet-1K, following CutLER's official training recipe, and use it as a strong baseline."

Could you please clarify whether the "strong baseline" mentioned here involves training Mask2Former at the image level only once, or whether it involves multi-round self-training? Also, could you specify whether DropLoss was used?

Thanks.
