-
The script should download anything you find on Hugging Face. Press the "copy model name" button to get something like "anon8231489123/vicuna-13b-GPTQ-4bit-128g" and pass that to the script.

This example had a mismatched file name, though: the .safetensors file needs to be renamed to match the folder. I also had to run it with --model-type llama.

You can even feed download-model.py a LoRA link and it'll detect that and place it where it needs to be. As long as it's on Hugging Face, download-model.py should support it.

P.S. LLaMA is a leak. It's supposed to be for researchers that have been approved by Meta. That's probably why it and its derivatives are mostly a find-it-yourself kind of thing.
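If it helps, here's a minimal Python sketch of that rename workaround. The folder path and the `models/<user>_<model>` naming scheme are assumptions about the local layout, not something documented by download-model.py, so adjust them to whatever the script actually created on your machine:

```python
from pathlib import Path

# Assumed local layout: adjust to wherever download-model.py put the files.
model_dir = Path("models/anon8231489123_vicuna-13b-GPTQ-4bit-128g")

# The loader expects the .safetensors file name to match the folder name,
# so rename it when there is exactly one weights file and its name differs.
expected = model_dir / f"{model_dir.name}.safetensors"
weights = list(model_dir.glob("*.safetensors"))

if len(weights) == 1 and weights[0] != expected:
    print(f"Renaming {weights[0].name} -> {expected.name}")
    weights[0].rename(expected)
elif not weights:
    print("No .safetensors file found; nothing to rename.")
else:
    print("Name already matches the folder (or multiple files found); leaving it alone.")
```

If you'd rather not script it, a plain manual rename of the file to `<folder name>.safetensors` does the same thing.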
-
There's a lot of running around finding different models; it would be nice if they were all available from that list.