llama 3 tokenizer no longer works - updated eos token #44

The official llama 3 70b instruct repo has updated the eos token:

"eos_token": "<|eot_id|>",

Yet when using this library with that eos token, no output is produced, because the library still stops on the old eos token.

Suggesting to fix this @npuichigo
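For anyone verifying the change locally, a minimal check with Hugging Face transformers (assuming access to the gated meta-llama checkpoint; any Llama 3 instruct repo with the updated tokenizer_config.json behaves the same) shows the new token and the two ids involved:

```python
from transformers import AutoTokenizer

# Model id is an example; substitute whichever Llama 3 instruct checkpoint you use.
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")

print(tok.eos_token)                                 # <|eot_id|> after the upstream update
print(tok.convert_tokens_to_ids("<|eot_id|>"))       # 128009, the new end-of-turn id
print(tok.convert_tokens_to_ids("<|end_of_text|>"))  # 128001, the old eos id
```

A backend that still stops only on `<|end_of_text|>` (128001) will run straight through every `<|eot_id|>` turn boundary instead of ending the turn.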
Comments
Which part do you mean? The Triton backend should have a parameter like stop_words.
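For reference, a request against the TensorRT-LLM Triton ensemble can pass the stop word per call. This is a sketch: the endpoint path and field names follow the TensorRT-LLM backend examples, and the model name ("ensemble") and exact field types may differ in a given deployment:

```python
import requests

# Prompt uses the Llama 3 chat format; "ensemble" is the TensorRT-LLM
# backend's default model name, adjust for your deployment.
payload = {
    "text_input": "<|start_header_id|>user<|end_header_id|>\n\nHello<|eot_id|>"
                  "<|start_header_id|>assistant<|end_header_id|>\n\n",
    "max_tokens": 128,
    "stop_words": ["<|eot_id|>"],  # stop on the new end-of-turn token
}
resp = requests.post("http://localhost:8000/v2/models/ensemble/generate", json=payload)
print(resp.json()["text_output"])
```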
"chat_template": "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}", |
Hi @avianion, I am hitting the issue you are describing. Using the Liquid template defined in the repo, the model returns an empty response. I also tried converting your chat template from Jinja to Liquid, but without results.
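In case it helps the port, here is one possible Liquid rendition of the Jinja template above. It is an untested sketch: Liquid has no `set` or `loop.index0`, so it leans on `forloop.first`, and Jinja's `trim` filter becomes Liquid's `strip`. It is checked here with the python-liquid package, not the template engine the repo actually uses, so dialect differences are possible:

```python
from liquid import Template  # pip install python-liquid

# Hypothetical Liquid port; variable names (messages, bos_token,
# add_generation_prompt) mirror the Jinja version above.
LIQUID_TEMPLATE = (
    "{% for message in messages %}"
    "{% if forloop.first %}{{ bos_token }}{% endif %}"
    "<|start_header_id|>{{ message.role }}<|end_header_id|>\n\n"
    "{{ message.content | strip }}<|eot_id|>"
    "{% endfor %}"
    "{% if add_generation_prompt %}"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
    "{% endif %}"
)

out = Template(LIQUID_TEMPLATE).render(
    messages=[{"role": "user", "content": "Hello"}],
    bos_token="<|begin_of_text|>",
    add_generation_prompt=True,
)
print(out)  # should match the Jinja rendering shown earlier
```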
@poddamatt98 I will take a look at this when I have time.
Problem solved by modifying