Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
Please provide a detailed written description of what you were trying to do, and what you expected llama-cpp-python to do.

When loading a model from the HF Hub that is split into multiple GGUF files inside a subfolder (e.g. https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-Q2_K_XS) and specifying the additional_files to download, Llama.from_pretrained should download all the specified files.

Example code:

model = Llama.from_pretrained(
    repo_id="unsloth/DeepSeek-R1-GGUF",
    filename="DeepSeek-R1-Q2_K_XS/DeepSeek-R1-Q2_K_XS-00001-of-00005.gguf",
    additional_files=["DeepSeek-R1-Q2_K_XS/DeepSeek-R1-Q2_K_XS-00002-of-00005.gguf", ...],
)
Current Behavior
Please provide a detailed written description of what llama-cpp-python did, instead.

When attempting to load the additional files, it prepends an extra directory to the download URL: it attempts to download https://huggingface.co/unsloth/DeepSeek-R1-GGUF/resolve/main/DeepSeek-R1-Q2_K_XS/DeepSeek-R1-Q2_K_XS/DeepSeek-R1-Q2_K_XS-00002-of-00005.gguf (note the duplicated DeepSeek-R1-Q2_K_XS/ path component) instead of https://huggingface.co/unsloth/DeepSeek-R1-GGUF/resolve/main/DeepSeek-R1-Q2_K_XS/DeepSeek-R1-Q2_K_XS-00002-of-00005.gguf. This is probably caused by this line:

llama-cpp-python/llama_cpp/llama.py
Line 2301 in 710e19a

subfolder is not updated in the additional_files branch.
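The doubled-directory behavior described above is what happens when a full repo-relative path is passed as hf_hub_download's filename argument while subfolder is also set. A minimal sketch of the path splitting that avoids it (the helper name is illustrative, not code from llama.py):

```python
import posixpath


def split_repo_path(repo_relative_path: str):
    """Split a Hub repo-relative path into (subfolder, basename).

    huggingface_hub's hf_hub_download expects the directory part in its
    `subfolder` argument and only the basename in `filename`; passing the
    full path as `filename` while `subfolder` is also set duplicates the
    directory in the resolved download URL, as described in this report.
    """
    # Hub repo paths always use forward slashes, so use posixpath
    # regardless of the local operating system.
    subfolder = posixpath.dirname(repo_relative_path)
    basename = posixpath.basename(repo_relative_path)
    return subfolder, basename


# Each additional file must be split individually, rather than reusing the
# main file's subfolder while the full path is still passed as `filename`.
extra = "DeepSeek-R1-Q2_K_XS/DeepSeek-R1-Q2_K_XS-00002-of-00005.gguf"
print(split_repo_path(extra))
# → ('DeepSeek-R1-Q2_K_XS', 'DeepSeek-R1-Q2_K_XS-00002-of-00005.gguf')
```

With the two parts separated this way, the resolved URL contains the subfolder exactly once.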