Dear Sir or Madam,
In your code, the input x has shape [batch size, digitCap size, primaryCap size, 1, digital_capslen], and the softmax is computed as:
softmaxed_output = F.softmax(transposed_input.contiguous().view(-1, transposed_input.size(-1)))
I am confused about why you keep only the 'primaryCap size' dimension, flatten all of the other dimensions (including the batch dimension), and then apply the softmax along dimension 0.
In the standard CapsuleNetwork routing algorithm, I think we only need to transpose dimensions 1 and 2 of x and then apply torch.nn.functional.softmax along the 'digitCap size' dimension.
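To make the alternative concrete, here is a minimal sketch of what I mean. The sizes (batch_size, num_digit_caps, num_primary_caps, caps_len) are made-up placeholders, not the values from your repo, and applying softmax with dim=1 on the untransposed tensor is equivalent to transposing dimensions 1 and 2 first and then softmaxing over dimension 2:

```python
import torch
import torch.nn.functional as F

# Placeholder sizes, only for illustration (not the values used in your code).
batch_size, num_digit_caps, num_primary_caps, caps_len = 2, 10, 1152, 16

# Stand-in for the routing logits, laid out as described above:
# [batch size, digitCap size, primaryCap size, 1, caps_len]
logits = torch.zeros(batch_size, num_digit_caps, num_primary_caps, 1, caps_len)

# Softmax over the digit-capsule dimension, so that for each primary capsule
# the coupling coefficients to all digit capsules sum to 1.
coupling = F.softmax(logits, dim=1)

# Sanity check: summing over the digitCap dimension gives all ones.
print(coupling.sum(dim=1)[0, 0, 0])  # tensor of ones with length caps_len
```

As far as I understand, this matches Eq. (3) in the Sabour et al. paper, where the coupling coefficients of each lower-level capsule are normalized over all capsules in the layer above.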