Replies: 7 comments
-
Hi, I just reread your question. To the best of my knowledge, there is no direct relation between anchor and template.
-
I've reviewed my question and yes, there should be no relation between anchor and template; I will edit that part later.
-
But my main concern still holds: if there is not enough diversity in anchor scales, samples with very large or very small bounding boxes will be excluded during training. That means using more scales should improve the results of this work.
-
Anchors are generated based on the bounding box. It might improve performance, but it would also increase processing time very significantly, which is unacceptable.
-
Anchors are NOT generated based on the bounding box. They are generated only once, for all training samples; only the positive/negative activation depends on each bounding box.
It may slightly increase training time, but definitely not significantly, since there is always an upper bound on the number of activated anchors. And having more positive anchors will definitely help training, since for each sample negative anchors always dominate. At inference time the performance difference is marginal too. I have to admit that in my experiments a single anchor scale seemed to work even for objects with very large bounding boxes, sizes that should have been excluded during training; the network seems able to generalize to those cases. But that is just my case, so it is not conclusive, and I ended up using multiple anchor scales to better support various object sizes, at least in theory.
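To illustrate the point about per-sample activation on a fixed anchor set, here is a minimal sketch of IoU-based labeling. The thresholds (0.6/0.3) and the `max_pos` cap are illustrative values, not necessarily pysot's exact configuration; `iou` and `label_anchors` are hypothetical helpers, not functions from the repo.

```python
import numpy as np

def iou(boxes, gt):
    """IoU between an (N, 4) array of [x1, y1, x2, y2] boxes and one gt box."""
    x1 = np.maximum(boxes[:, 0], gt[0])
    y1 = np.maximum(boxes[:, 1], gt[1])
    x2 = np.minimum(boxes[:, 2], gt[2])
    y2 = np.minimum(boxes[:, 3], gt[3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (area_b + area_g - inter)

def label_anchors(anchors, gt, pos_thr=0.6, neg_thr=0.3, max_pos=16):
    """Label each fixed anchor per sample: 1 = positive, 0 = negative, -1 = ignore.

    The anchors themselves never change; only these labels depend on the
    ground-truth box, and the max_pos cap bounds the per-sample cost."""
    overlaps = iou(anchors, gt)
    labels = -np.ones(len(anchors), dtype=np.int64)
    labels[overlaps < neg_thr] = 0
    pos = np.where(overlaps > pos_thr)[0]
    if len(pos) > max_pos:
        pos = np.random.choice(pos, max_pos, replace=False)
    labels[pos] = 1
    return labels
```

Because negatives vastly outnumber positives under any realistic threshold, capping (or subsampling) both sides is what keeps the loss balanced.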
-
Yes, that's correct: anchors are generated on the feature maps. Sorry about the wrong reply.
It is actually a hyperparameter and highly depends on your actual scenario; experimentation is the only way to find out.
-
Since no further questions have been asked, I'm closing this issue for now.
-
Please correct me if I'm wrong for the following observations:
Based on @zyc-ai's description in issue #129, as well as the paper, anchors should be generated based on the size of the template bounding box, but in the code, wherever `Anchors` is instantiated it is always through code like this:

```python
self.anchors = Anchors(cfg.ANCHOR.STRIDE,
                       cfg.ANCHOR.RATIOS,
                       cfg.ANCHOR.SCALES)
```
which gives anchors of constant size (64x64, before applying ratios). For an anchor to be classified as positive, it needs >0.6 IoU with the ground truth; that means if the ground-truth bounding box is much bigger or smaller than 64x64, no anchor will be considered positive during training, and that object won't be detected during tracking at all.
A workaround is to define multiple anchor scales, but the paper claims only one scale is needed, which is very confusing. That can only work if all tracked objects are around the same 64x64 size in the search image.
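To make the exclusion concrete, here is a quick sketch. The 0.6 threshold is from the description above; the 192x192 second scale is a hypothetical addition for illustration, not a value from the repo, and `best_centered_iou` is a made-up helper that computes the best-case IoU (anchor and target sharing a center).

```python
def best_centered_iou(anchor_wh, gt_wh):
    """IoU of an anchor box and a ground-truth box sharing the same center.

    This is the best case: any offset between centers only lowers the IoU."""
    iw = min(anchor_wh[0], gt_wh[0])
    ih = min(anchor_wh[1], gt_wh[1])
    inter = iw * ih
    union = anchor_wh[0] * anchor_wh[1] + gt_wh[0] * gt_wh[1] - inter
    return inter / union

# A single 64x64 scale against a 200x200 target: best-case IoU ~ 0.10,
# so no anchor can ever pass the 0.6 positive threshold.
single = best_centered_iou((64, 64), (200, 200))

# A hypothetical extra 192x192 scale recovers it: best-case IoU ~ 0.92.
multi = best_centered_iou((192, 192), (200, 200))
```

So under a single scale, the best achievable IoU is capped by the area ratio, which is what excludes very large and very small targets.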
Am I missing something? Please point out if there is code somewhere I've missed, thanks~!
Originally posted by @qinc in #129 (comment)