I have a PC with two GTX 1080 Ti GPUs and 32 GB of RAM. I want to train a model from scratch on my own dataset and utilize my GPUs during training.
Whenever I try to train with a width and height above 544, I get an out-of-memory (OOM) error:
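For reference, here is a minimal diagnostic sketch, assuming the training script uses the TF 1.x Session API (which the trace below suggests). It only enables on-demand GPU memory allocation and device-placement logging so you can see which GPU the ops land on and how much memory is actually consumed; it is not a fix for the OOM itself.

```python
import tensorflow as tf

# Assumption: the repo builds a TF 1.x graph and opens its own Session.
# allow_growth makes TensorFlow allocate GPU memory on demand instead of
# reserving the full 11 GB of each 1080 Ti up front, and
# log_device_placement prints the device each op actually runs on.
config = tf.ConfigProto(log_device_placement=True)
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    # ... run the existing training loop here ...
```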
[[Node: mul_31/_397 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_5148_mul_31", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
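Following that hint, a minimal sketch of passing report_tensor_allocations_upon_oom to Session.run in TF 1.x looks like this; train_op and feed are hypothetical stand-ins for whatever the training script actually fetches and feeds.

```python
import tensorflow as tf

# Ask TensorFlow to dump the list of live tensor allocations when an OOM
# occurs, so the largest activations show up in the error message.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

# Hypothetical fetch and feed; only the options= argument matters here:
# sess.run(train_op, feed_dict=feed, options=run_options)
```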
I have tried various combinations of batch size and subdivisions, but I get the error whenever the width and height are greater than 544.
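As a rough illustration of why the combinations behave this way (assuming a darknet-style config, with hypothetical numbers): the images processed per forward/backward pass are batch / subdivisions, and activation memory grows roughly with that mini-batch times width * height, so raising subdivisions or lowering the resolution are the usual levers.

```python
# Hypothetical numbers, just to show the scaling, not measured values.
def relative_activation_cost(batch, subdivisions, width, height):
    # Images held in memory per forward/backward pass.
    mini_batch = batch // subdivisions
    # Activation memory scales roughly with mini_batch * spatial resolution.
    return mini_batch * width * height

base = relative_activation_cost(batch=64, subdivisions=16, width=544, height=544)
big = relative_activation_cost(batch=64, subdivisions=16, width=608, height=608)
print(big / base)  # ~1.25, i.e. ~25% more activation memory from the resolution bump alone
```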