Issues: EleutherAI/lm-evaluation-harness
Llama3.1-8B-Instruct evaluation fails
[asking questions: for clarification / support on library usage] #2428 · opened Oct 25, 2024 by Isaaclgz
How is the MMLU accuracy calculated here?
[asking questions] #2425 · opened Oct 24, 2024 by yuqinan
Test speculative decode accuracy
[asking questions] #2424 · opened Oct 24, 2024 by baoqianmagik
Question on how to use the validation and training splits
#2423 · opened Oct 24, 2024 by sorobedio
bbh_zeroshot fails due to a custom filter issue
[bug: something isn't working] #2422 · opened Oct 23, 2024 by shamanez
vllm mode shows much slower inference speed than normal (hf) mode
#2418 · opened Oct 21, 2024 by 95jinchul
Clarification needed on interface implementation
[asking questions] #2415 · opened Oct 21, 2024 by sorobedio
vllm mode returns "[A"
[asking questions] #2414 · opened Oct 21, 2024 by 95jinchul
AnthropicChat fails when "until" is not provided explicitly in "generation_kwargs" or contains whitespace-only options
[bug] #2412 · opened Oct 20, 2024 by maaxap
Can I use lm-eval for training?
[asking questions] #2411 · opened Oct 20, 2024 by yaolu-zjut
mgsm tasks not found when using Accelerate
[bug] #2405 · opened Oct 15, 2024 by Mugariya
How to evaluate a local model with local-completions?
[asking questions] #2402 · opened Oct 14, 2024 by liuzhuotao-teresa
How to fix the token length of the model input?
[asking questions] #2398 · opened Oct 12, 2024 by lonleyodd
How to run MMLU with CoT
[asking questions] #2392 · opened Oct 9, 2024 by brando90
Tasks not found when using vllm
[asking questions] #2386 · opened Oct 8, 2024 by Mugariya
lm_eval --model vllm does not work when data_parallel_size > 1
[bug] #2379 · opened Oct 3, 2024 by wukaixingxp
Is the LLaMA3.2-Vision-90B/11B result on mmmu_val reproducible?
[validation: for validation of task implementations] #2377 · opened Oct 2, 2024 by jybbjybb