We can use `torch.cuda.empty_cache()` to release memory that is held by PyTorch's auto-growth caching allocator but no longer occupied by tensors. This way, the memory consumption reported by `nvidia-smi` is accurate.
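For reference, a minimal sketch of how the caching allocator behaves (guarded so it only runs where CUDA is available; the tensor size is an arbitrary example):

```python
import torch

if torch.cuda.is_available():
    x = torch.empty(256, 1024, 1024, device="cuda")  # ~1 GiB of float32
    print(torch.cuda.memory_allocated())  # bytes held by live tensors
    print(torch.cuda.memory_reserved())   # bytes held by the caching allocator

    del x
    # The freed block stays cached, so nvidia-smi still counts it as used,
    # until we return cached blocks to the driver:
    torch.cuda.empty_cache()
    print(torch.cuda.memory_reserved())
```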
We are trying to compare the GPU memory consumption of GoTorch and PyTorch using the ResNet50 model. The scripts are located at https://github.com/wangkuiyi/gotorch/tree/develop/example/resnet.
The GPU is a P100 with 16 GB of memory.
Experiment 1:
The following results were measured with the `nvidia-smi` command. In the Only Forward scenario, we removed three lines of code:
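A hedged sketch of what the Only Forward scenario might look like. The issue does not say which three lines were removed; we assume they are the backward and optimizer calls of the training loop, and we use a small stand-in model here in place of ResNet50:

```python
import torch

# Stand-in model (the actual scripts use torchvision.models.resnet50()).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

loss = criterion(model(images), labels)

# Assumed to be the three lines removed in the Only Forward scenario:
# loss.backward()
# optimizer.step()
# optimizer.zero_grad()
```

Without the backward pass, no gradient buffers are allocated, which is why the Only Forward scenario should use noticeably less GPU memory.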
Experiment 2:
GPU memory with different batch size:
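To record GPU memory at each batch size, one can poll `nvidia-smi` programmatically. A small helper sketch (the query flags are standard `nvidia-smi` options; the function name is ours):

```python
import shutil
import subprocess

def gpu_memory_mib():
    """Return a list of (used, total) memory in MiB per GPU, or None if
    nvidia-smi is not available on this machine."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [tuple(int(v) for v in line.split(","))
            for line in out.strip().splitlines()]

print(gpu_memory_mib())
```

Running this after each batch-size configuration gives the numbers to fill in the table above.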