When I run training, I see the process using GPU memory in `nvidia-smi`, but GPU utilization stays at 0% and training is extremely slow. When I look at the devices returned by `libml/utils.py:get_available_gpus`, the `local_device_protos` are all `XLA_GPU` instead of `GPU`. Any ideas on what might be going on and how to fix it? Presumably this is some kind of version issue?

(Apologies that this is a more general TF question, but I wasn't able to find a working fix by Googling.)
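For anyone hitting the same symptom, here is a minimal way to inspect what TensorFlow sees. This is a sketch assuming TF 1.x and the usual `device_lib`-based implementation of `get_available_gpus`; in my experience, seeing only `XLA_GPU` entries typically means the installed TF build does not match the local CUDA/cuDNN versions.

```python
# Minimal diagnostic sketch, assuming TF 1.x (which this codebase targets).
from tensorflow.python.client import device_lib

# List every device TensorFlow can see on this machine.
local_device_protos = device_lib.list_local_devices()
for d in local_device_protos:
    print(d.device_type, d.name)

# get_available_gpus-style helpers typically keep only plain 'GPU' entries:
gpus = [d.name for d in local_device_protos if d.device_type == 'GPU']
print('usable GPUs:', gpus)
# If only XLA_GPU devices show up, this list is empty and ops intended for
# '/gpu:0' fall back to the CPU, which is consistent with 0% GPU utilization.
```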