Hello,
Thank you for sharing your work!
While training on ImageNet1K, I noticed that GPU memory consumption increases by approximately 254 MB with each epoch. If this trend continues, the total usage over 300 epochs would reach roughly 254 MB × 300 ≈ 76.2 GB.
Is this the intended behavior?
Thank you!
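For reference, this is a minimal sketch of how such per-epoch numbers can be checked with PyTorch's built-in memory counters; the `log_gpu_memory` helper below is only illustrative, not code from this repo:

```python
import torch

def log_gpu_memory(tag: str) -> None:
    """Print allocated vs. reserved CUDA memory (in MiB) for the current device."""
    if not torch.cuda.is_available():
        print(f"[{tag}] CUDA not available")
        return
    allocated = torch.cuda.memory_allocated() / 2**20  # memory occupied by live tensors
    reserved = torch.cuda.memory_reserved() / 2**20    # memory held by the caching allocator
    print(f"[{tag}] allocated: {allocated:.1f} MiB | reserved: {reserved:.1f} MiB")

# Example: call once per epoch, e.g. log_gpu_memory(f"epoch {epoch}")
```

Comparing the allocated and reserved values each epoch would show whether live tensors are actually accumulating or whether only the allocator's cached (reserved) memory is growing.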
jhkwag970 changed the title from "GPU memory consumption increases every epoch" to "GPU memory consumption increases every epoch during training" on Jan 23, 2025
@ahatamiz Hello, thank you for your response. I am using the current MambaVision repo. After validation, when memory is reallocated for the next training epoch, consumption ends up higher than in the previous epoch. For now I am calling torch.cuda.empty_cache() as a workaround; I just wanted to make sure this will not cause any problems during training.
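A minimal, self-contained sketch of the workaround (the tiny model and random data are placeholders, not the actual MambaVision / ImageNet1K pipeline):

```python
import torch
import torch.nn as nn

# Placeholder model, optimizer, and loss; only the epoch structure matters here.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(1024, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(3):
    # --- training step on dummy data ---
    model.train()
    x = torch.randn(32, 1024, device=device)
    y = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    # --- validation step without gradient tracking ---
    model.eval()
    with torch.no_grad():
        val_loss = criterion(model(x), y).item()

    # Release cached allocator blocks so reserved memory does not keep
    # growing from epoch to epoch; tensors still referenced are unaffected.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        print(f"epoch {epoch}: reserved {torch.cuda.memory_reserved() / 2**20:.1f} MiB")
```

Note that torch.cuda.empty_cache() only returns cached blocks to the driver; it does not free tensors that are still referenced, so if memory keeps climbing even with this call, something (e.g. accumulated metrics or stored outputs) is probably still holding references.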