Classifier
Currently `load` and `save` operate directly over pickles. This causes issues when trying to load models across devices (GPU -> CPU). These calls should wrap `torch.load` and the module's `load_state_dict` in some configuration based on what `use_cuda` flag is provided to the model. See https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-across-devices
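A minimal sketch of what such a wrapper could look like, following the pattern from the linked PyTorch tutorial. The function names `save_model`/`load_model` and the `use_cuda` handling are illustrative assumptions, not the project's actual API:

```python
import torch
import torch.nn as nn

def save_model(model, path):
    # Hypothetical helper: save only the state dict (plain tensors),
    # rather than pickling the whole module object.
    torch.save(model.state_dict(), path)

def load_model(model, path, use_cuda=False):
    # Hypothetical helper: map_location remaps tensors saved on GPU
    # onto CPU when use_cuda is False or CUDA is unavailable, which
    # avoids the GPU->CPU loading failure described above.
    device = torch.device("cuda" if use_cuda and torch.cuda.is_available() else "cpu")
    state_dict = torch.load(path, map_location=device)
    model.load_state_dict(state_dict)
    return model.to(device)
```

With this shape, a model trained on GPU can be restored on a CPU-only machine simply by passing `use_cuda=False`.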
Bumping this: saving/logging models with `pickle` currently fails for any model larger than 4 GB (https://stackoverflow.com/questions/29704139/pickle-in-python3-doesnt-work-for-large-data-saving). This is especially problematic with high-dimensional outputs, e.g. label models that emit ~1000 different types of labels.
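For context on the 4 GB limit from the linked Stack Overflow question: the older default pickle protocols cannot frame objects larger than 4 GB, while protocol 4 (Python 3.4+) can. A small sketch of the difference (the payload here is tiny, just to show the call shape):

```python
import pickle

# Protocols below 4 use 32-bit length fields in places, which is what
# breaks serialization of objects larger than 4 GB. Explicitly passing
# protocol=4 (or using torch.save, which serializes tensor storages
# itself) sidesteps the limit.
data = {"weights": list(range(10))}
blob = pickle.dumps(data, protocol=4)
restored = pickle.loads(blob)
```

Switching `save`/`load` to `torch.save`/`torch.load` would avoid the issue entirely, since PyTorch serializes tensor storages outside the pickled object graph.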
ajratner