Hello, I would like to know how I can use the validation split to evaluate the models, and similarly, how to use the training split for evaluation if needed. I haven't found an option where the user can specify the dataset split they want to use for model evaluation. Could you provide guidance on how to set this up?
Thank you.
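If this thread concerns EleutherAI's lm-evaluation-harness (an assumption; the repository is not named in the thread), task YAML configs expose per-split keys, and one workaround is to declare the split you want scored as the `test_split`. A hypothetical sketch, with all task and dataset names illustrative:

```yaml
# Hypothetical task config; key names follow lm-evaluation-harness conventions.
task: my_task_on_validation        # hypothetical task name
dataset_path: glue                  # illustrative Hugging Face dataset path
dataset_name: mrpc
training_split: train
validation_split: validation
# Forcing evaluation on the validation split by declaring it as the test split:
test_split: validation
doc_to_text: "{{sentence1}} {{sentence2}}"   # illustrative prompt template
doc_to_target: label
```

By the same token, setting `test_split: train` would score the training split. This is a sketch of the config mechanism, not a confirmed recommendation from the maintainers.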
I see. So the validation set is never used when both the training and test sets are present. Is the Open LM leaderboard following the same approach?
Thank you.