Question about the optimizer #26
Comments
I noticed that, compared with BiSeNet's optimizer, STDC's optimizer is additionally passed boundary_loss_func, and I inspected its output. Is this because the initially set fusion weights 0.6, 0.3, 0.1 are optimized during training?

Yes, because it contains trainable parameters.

Thanks.

@MichaelFan01 Hello, could you explain why a weighted re-weighting mechanism is applied to the Detail GT? What would the result be if the loss were computed directly between a fine-grained Detail GT and the predicted Boundary?

It enriches the Detail information. Computing the loss directly also works; in practice the difference is small.

Hello, is the trainable parameter this one? self.fuse_kernel = torch.nn.Parameter(torch.tensor([[6./10], [3./10], [1./10]],

You can treat it as a fixed, given parameter, because it is never put into the computation graph at all. If you print the trained checkpoints from the author and from PaddleSeg (which follows the original), neither contains it. In summary, the paper's statement "Then we upsample the detail feature maps to the original size and fuse it with a trainable 1×1 convolution for dynamic re-weighting." is inaccurate, because the kernel is not fed into the computation graph for training.

Anyway, we would support STDC-Seg on MMSegmentation for its relatively high speed and good performance on the Cityscapes dataset.
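As a hedged sketch of the fixed re-weighting step discussed above (plain PyTorch, not the STDC-Seg source; the detail maps here are random stand-ins): the weights 0.6, 0.3, 0.1 can be applied as an ordinary 1×1 convolution kernel with no nn.Parameter and no gradient tracking at all.

```python
import torch
import torch.nn.functional as F

# The fixed weights from the thread, shaped as a 1x1 conv kernel:
# out_channels=1, in_channels=3, kernel size 1x1.
fuse_kernel = torch.tensor([[6. / 10], [3. / 10], [1. / 10]]).reshape(1, 3, 1, 1)

# Hypothetical stand-in for three detail maps of the GT at different scales,
# stacked as channels (N, 3, H, W).
gt_details = torch.rand(1, 3, 64, 64)

# Fixed re-weighting: computed outside the autograd graph on purpose.
with torch.no_grad():
    fused = F.conv2d(gt_details, fuse_kernel)  # -> (1, 1, 64, 64)
```

If such fixed weights are wanted inside a module, `register_buffer` is the idiomatic alternative to `nn.Parameter`: it is saved in the state dict but never treated as trainable.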
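The claim about the computation graph can be checked directly: a parameter that never participates in forward() receives no gradient after backward(), so gradient-based updates leave it at its initial value. A minimal sketch with a hypothetical Toy module (not the STDC-Seg source):

```python
import torch

class Toy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(1))
        # Registered like fuse_kernel, but never used in forward():
        self.unused = torch.nn.Parameter(
            torch.tensor([[6. / 10], [3. / 10], [1. / 10]]))

    def forward(self, x):
        return self.weight * x

m = Toy()
m(torch.ones(1)).sum().backward()
assert m.weight.grad is not None  # participates in the graph, gets a gradient
assert m.unused.grad is None      # never entered the computation graph
```

Note that the unused parameter would still appear in `m.state_dict()`; it is only the gradient, and hence any optimizer update, that never reaches it.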