Saving and loading escnn models #1
Hi,

Firstly, thanks for making such an accessible library for implementing equivariant models, alongside such informative documentation!

I wanted to train a toy model on MNIST before moving on to a bigger architecture, and chose the model provided in the model.ipynb notebook in the 'examples' folder. After plugging it into my training script, I saved it using the regular PyTorch procedure:

torch.save(model.state_dict(), 'mnist_model_e2cnn_{}.pt'.format(n_orientations))

In my test script, I then try to load it with:

model.load_state_dict(torch.load('mnist_model_e2cnn_{}.pt'.format(n_orientations), map_location='cpu'))

However, loading the model throws the following error:

RuntimeError: Error(s) in loading state_dict for MNISTE2CNN:
Missing key(s) in state_dict: "block1.1.filter", "block2.0.filter", "block3.0.filter", "block4.0.filter", "block5.0.filter", "block6.0.filter".

I'm not sure what I'm doing incorrectly. Is there a special procedure involved in saving models that use escnn.nn.SequentialModule to stack ops?

EDIT: The torch version I am using is 1.7.0.

Cheers,
Ishaan

Comments
Hi @kilgore92, thanks for opening the first issue! :) You can probably solve this issue by calling […]

Best,
Hi @Gabri95, thanks! That did the trick :)

Cheers,
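The call that solved this is cut off in the reply above, so the following is a plausible reconstruction rather than the confirmed fix. In e2cnn/escnn, convolution modules expand their filters and register buffers such as filter only in eval mode, and delete them again in train mode, so the missing "...filter" keys suggest a train/eval mismatch between when the state dict was saved and when it was loaded. A minimal sketch under that assumption, where MNISTE2CNN and n_orientations come from the example above and the constructor call is hypothetical:

import torch

# Saving side: switch to eval mode first, so the expanded 'filter'
# buffers are included in the state dict.
model.eval()
torch.save(model.state_dict(), 'mnist_model_e2cnn_{}.pt'.format(n_orientations))

# Loading side: rebuild the same architecture, match the mode used
# when saving, then load.
model = MNISTE2CNN()  # hypothetical constructor; use the notebook's model
model.eval()
state_dict = torch.load('mnist_model_e2cnn_{}.pt'.format(n_orientations),
                        map_location='cpu')
model.load_state_dict(state_dict)

Keeping the model in train mode on both sides would work just as well, since the 'filter' buffers then exist in neither the saved dict nor the target model; the key point is that the mode must match.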
Hi, I wanted to save my escnn model using the […]. I think there are some issues with saving it directly this way, due to the library using its bespoke GeometricTensor datatype. Is there a way to save the entire model directly, rather than just the state dict?
No, there's no way to save the whole model right now. The issue is that […]
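The explanation here is also truncated. Given the mention of GeometricTensor, a reasonable guess (an assumption, not confirmed in this thread) is that pickling whole escnn modules is not supported, so the state-dict route from the first answer remains the way to persist models. A small helper sketch along those lines, where save_model, load_model, and the MNISTE2CNN constructor are illustrative names, not part of escnn's API:

import torch

def save_model(model, path):
    # Keep save and load modes consistent; eval mode includes the
    # expanded 'filter' buffers in the state dict.
    model.eval()
    torch.save(model.state_dict(), path)

def load_model(path):
    model = MNISTE2CNN()  # hypothetical: re-instantiate the same architecture
    model.eval()
    model.load_state_dict(torch.load(path, map_location='cpu'))
    return model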