
Can't load models: got an unexpected keyword argument 'n_threads_batch' #61

Open
ParetoOptimalDev opened this issue Oct 31, 2023 · 1 comment

Comments

@ParetoOptimalDev

I think models are just not loading for me, period. Is there some known working model I should be able to use? I tried:

TheBloke/orca_mini_v3_7B-GGUF

It downloaded fine, but I get this error when trying to load it with the llama.cpp loader:

TypeError: Llama.__init__() got an unexpected keyword argument 'n_threads_batch'
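
If it helps narrow this down, here is the minimal check I've been using (my own sketch, run inside the same Python environment the web UI uses) to see whether the installed llama-cpp-python even knows about that keyword:

import inspect

import llama_cpp

# The web UI passes n_threads_batch to Llama.__init__; builds of
# llama-cpp-python that predate that parameter raise the TypeError above.
print("llama-cpp-python version:", getattr(llama_cpp, "__version__", "unknown"))
print("Llama.__init__ accepts n_threads_batch:",
      "n_threads_batch" in inspect.signature(llama_cpp.Llama.__init__).parameters)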
@ParetoOptimalDev
Author

Full error:
2023-11-02 12:34:13 INFO:Loading dolphin-2.1-mistral-7b.Q6_K.gguf...
2023-11-02 12:34:13 INFO:llama.cpp weights detected: /home/pod/.textgen/state/models/dolphin-2.1-mistral-7b.Q6_K.gguf
2023-11-02 12:34:13 INFO:Cache capacity is 0 bytes
Exception ignored in: <function Llama.__del__ at 0x7f6b44ef4550>
Traceback (most recent call last):
  File "/nix/store/zg0mhc43d2sd7lryfa86rvjz6cmxgina-python3-3.10.12-env/lib/python3.10/site-packages/llama_cpp/llama.py", line 1247, in __del__
    if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
2023-11-02 12:34:13 ERROR:Failed to load the model.
Traceback (most recent call last):
  File "/nix/store/ypar8406iyb6r22n755ygvfbplwjs050-textgen-patchedSrc/modules/ui_model_menu.py", line 201, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "/nix/store/ypar8406iyb6r22n755ygvfbplwjs050-textgen-patchedSrc/modules/models.py", line 79, in load_model
    output = load_func_map[loader](model_name)
  File "/nix/store/ypar8406iyb6r22n755ygvfbplwjs050-textgen-patchedSrc/modules/models.py", line 225, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
  File "/nix/store/ypar8406iyb6r22n755ygvfbplwjs050-textgen-patchedSrc/modules/llamacpp_model.py", line 91, in from_pretrained
    result.model = Llama(**params)
TypeError: Llama.__init__() got an unexpected keyword argument 'n_threads_batch'

Exception ignored in: <function LlamaCppModel.__del__ at 0x7f6b44ef4dc0>
Traceback (most recent call last):
  File "/nix/store/ypar8406iyb6r22n755ygvfbplwjs050-textgen-patchedSrc/modules/llamacpp_model.py", line 49, in __del__
    self.model.__del__()
AttributeError: 'LlamaCppModel' object has no attribute 'model'
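
As an aside, I think the two "Exception ignored in ... __del__" tracebacks are just fallout from the failed constructor, not separate bugs: __init__ raises before self.ctx / self.model are ever assigned, so the destructors trip over the missing attributes. A tiny repro of the pattern (my own sketch, not code from either project):

class Demo:
    def __init__(self):
        # Fails before self.ctx is assigned, like Llama.__init__ rejecting
        # the unexpected keyword argument.
        raise TypeError("got an unexpected keyword argument 'n_threads_batch'")

    def __del__(self):
        if self.ctx is not None:  # AttributeError: no attribute 'ctx'
            pass

try:
    Demo()
except TypeError:
    pass  # Python reports the __del__ failure as "Exception ignored in ..."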

This piece:

  File "/nix/store/ypar8406iyb6r22n755ygvfbplwjs050-textgen-patchedSrc/modules/llamacpp_model.py", line 91, in from_pretrained
    result.model = Llama(**params)
TypeError: Llama.__init__() got an unexpected keyword argument 'n_threads_batch'

is odd because I see n_threads_batch in that file:

        params = {
            # ...
            'n_threads': shared.args.threads or None,
            'n_threads_batch': shared.args.threads_batch or None,
            # ...
        }
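
My current guess: the dict above is built by text-generation-webui, but it gets unpacked into llama_cpp.Llama(), which comes from the separately installed llama-cpp-python package that Nix pins. If that package is older than what the web UI expects, its Llama.__init__ simply has no n_threads_batch parameter yet. Roughly (illustration only, not the web UI's actual code; the commented-out workaround is hypothetical):

import inspect

from llama_cpp import Llama

def filter_supported(params):
    # Drop any keyword the installed Llama.__init__ does not accept,
    # e.g. n_threads_batch on an older llama-cpp-python build.
    accepted = inspect.signature(Llama.__init__).parameters
    return {k: v for k, v in params.items() if k in accepted}

# result.model = Llama(**filter_supported(params))  # hypothetical workaround;
# the real fix would be updating the pinned llama-cpp-python to a version
# whose Llama.__init__ accepts n_threads_batch.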

I also have this installed on a Mac without Nix, and with whatever versions that environment has, this model loads fine.
