doc_embedder raised 'AssertionError: Torch not compiled with CUDA enabled' #6410

Closed Answered by julian-risch
keebeegee asked this question in Questions

Hi @keebeegee, the example code you shared uses 4-bit quantization, which is only available with CUDA installed. That's why you are seeing this error. Almost all other parts of Haystack do not require CUDA: while they benefit from running on a GPU, you can run them on a CPU too.
If you want to try the open-source LLM from the example, I suggest you keep the 4-bit quantization and switch to an EC2 instance with a GPU and a torch build compiled with CUDA, yes.
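The guard the answer describes can be sketched as a small helper that only enables 4-bit loading when CUDA is actually available. This is a hypothetical sketch, not Haystack code: the function name `quantization_kwargs` is made up for illustration, and in practice you would pass `torch.cuda.is_available()` as the argument and forward the returned kwargs to the model loader (e.g. transformers' `from_pretrained`, where 4-bit loading is configured via bitsandbytes).

```python
def quantization_kwargs(cuda_available: bool) -> dict:
    """Return model-loading kwargs depending on CUDA availability.

    Hypothetical helper: 4-bit quantization (bitsandbytes) requires a
    CUDA-enabled torch build, so only request it when CUDA is present.
    """
    if cuda_available:
        # GPU path: request 4-bit weights and let the loader place layers.
        return {"load_in_4bit": True, "device_map": "auto"}
    # CPU-only fallback: load full-precision weights, no quantization.
    return {}


# On a CPU-only machine (torch.cuda.is_available() == False),
# no quantization kwargs are emitted, avoiding the AssertionError.
print(quantization_kwargs(False))  # -> {}
```

On a machine without CUDA, this fallback avoids the `AssertionError: Torch not compiled with CUDA enabled` at the cost of loading the model in full precision, which needs considerably more memory.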
