The first run completed without any issues (`bash run_inference_custom.sh`). However, repeating the same command fails. Before launching the process I checked GPU memory with nvidia-smi to make sure the card wasn't full; it showed only 300 MiB of 16376 MiB in use.
```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 15.99 GiB total capacity; 5.76 GiB already allocated; 8.75 GiB free; 5.87 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
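The error message itself points at the likely culprit: the allocation fails even though 8.75 GiB is reported free, which suggests allocator fragmentation rather than the GPU actually being full (consistent with the near-empty nvidia-smi reading). Below is a minimal diagnostic sketch, not from the original report; the `max_split_size_mb:128` value is an assumption to experiment with, and the environment variable must be set before the first CUDA allocation (equivalently, from the shell: `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 bash run_inference_custom.sh`).

```python
import os

# Assumption: 128 MiB is a starting point to tune, not a known-good value for
# this model. Must be set before the first CUDA allocation to take effect.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch

if torch.cuda.is_available():
    # Compare what PyTorch has handed out to tensors vs. what its caching
    # allocator is holding from the driver; a large gap suggests fragmentation.
    print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")
    print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.0f} MiB")

    # Return cached-but-unused blocks to the driver between inference passes.
    torch.cuda.empty_cache()
```

If `reserved` is much larger than `allocated` at the point of failure, restricting the split size as above (or calling `torch.cuda.empty_cache()` between passes) is worth trying before anything more invasive.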