RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 #4
Comments
Hi, have you solved it? I've come across the same problem.
Hi @Phoebe-ovo @xzebin775, thanks for reporting this issue! We are not able to reproduce this error on the GPUs we have. Could you please let me know which GPUs you were using?
The GPU I used is a V100. Which GPUs were you using?
It is a GTX 1080 Ti.
Thanks for confirming. Can you check whether setting `load_in_8bit` to `False` here solves the problem?
I set `load_in_8bit` to `False`, but I get the error below. It seems I can't load the model onto the GPU:

`torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 10.92 GiB total capacity; 10.44 GiB already allocated; 22.62 MiB free; 10.45 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF`
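As the error message suggests, fragmentation can sometimes be reduced via `PYTORCH_CUDA_ALLOC_CONF`. A minimal sketch (the 128 MiB split size is an illustrative value, not a project recommendation, and on an ~11 GiB card this may still not be enough to fit the fp32 model):

```shell
# Set before launching the Python process; tune the value for your GPU.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
```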
Upon thorough investigation, we are not able to reproduce the error on the GPUs we have (NVIDIA A100 and 3090), but it might be related to other issues. I suggest you try these:
Also, we noticed the base model we used
I met the same problem, and I set `do_sample=False`; then it worked. I don't know what impact this will have. (Also on a V100.)
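A plausible explanation for why `do_sample=False` sidesteps the error: with half-precision inference, a large logit can overflow in the softmax, producing `inf`/`nan` probabilities that the sampler rejects, while greedy decoding takes an argmax and never validates the probability tensor. A hypothetical pure-Python sketch (the function names are illustrative, not the project's code):

```python
import math

def naive_softmax(logits):
    # No max-subtraction: a large logit overflows to inf,
    # and inf / inf then produces nan in the probabilities.
    exps = []
    for x in logits:
        try:
            exps.append(math.exp(x))
        except OverflowError:
            exps.append(math.inf)
    total = sum(exps)
    return [e / total for e in exps]

def pick_next_token(logits, do_sample):
    if do_sample:
        probs = naive_softmax(logits)
        # Mimics the validation that raises the RuntimeError in this issue.
        if any(math.isnan(p) or math.isinf(p) or p < 0 for p in probs):
            raise RuntimeError(
                "probability tensor contains either inf, nan or element < 0")
        # (a real sampler would draw from probs here)
        return max(range(len(probs)), key=lambda i: probs[i])
    # do_sample=False: greedy argmax over logits, probabilities never checked
    return max(range(len(logits)), key=lambda i: logits[i])

overflowing = [1000.0, 5.0, 2.0]  # fp16-style overflow-prone logits
print(pick_next_token(overflowing, do_sample=False))  # greedy still works
```

Greedy decoding changes the outputs (no randomness), which is the "impact" the commenter is unsure about.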
Change 'decapoda-research/llama-7b-hf' to 'huggyllama/llama-7b' and set `load_in_8bit=False`. It works for me. (My env: V100.)
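The workaround above amounts to a loading configuration roughly like the following sketch (assumes `transformers` with 8-bit support installed and enough GPU memory for fp16 weights; the `torch_dtype` and `device_map` arguments are my additions, not from this thread):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "huggyllama/llama-7b"  # instead of decapoda-research/llama-7b-hf

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    load_in_8bit=False,         # the workaround reported above
    torch_dtype=torch.float16,  # halves memory vs fp32; may matter on a V100
    device_map="auto",          # let accelerate place the weights
)
```

This is a config fragment rather than a runnable test: it downloads ~13 GB of weights and needs a GPU.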
I get this error:
Hello, when I evaluate for Perception and Action Prediction, I get this error with decapoda-research/llama-7b-hf.
How can I fix this? Thanks!