adjust_predicts in eval_method.py #65

Open
Zihan-Zhou opened this issue Jan 11, 2024 · 0 comments
@Zihan-Zhou

This function currently adjusts predictions based on the ground truth labels, which inflates metrics such as the F1 score. However, during the inference stage no labels are available. Should the predictions be adjusted by some other method, or should they be left as they are?
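For context, the label-based adjustment being described is the common "point-adjust" convention in time-series anomaly detection: if any point inside a labeled anomalous segment is flagged, the whole segment is counted as detected. Below is a minimal sketch of that convention, assuming 1-D binary arrays; the function name and signature are hypothetical and not the repository's exact code.

```python
import numpy as np

def point_adjust_sketch(pred: np.ndarray, label: np.ndarray) -> np.ndarray:
    """Hypothetical point-adjustment sketch.

    If any point inside a ground-truth anomaly segment is predicted as
    anomalous, mark the entire segment as detected. `pred` and `label`
    are 1-D binary arrays of equal length.
    """
    adjusted = pred.copy()
    in_segment = False
    segment_start = 0
    for i in range(len(label)):
        if label[i] == 1 and not in_segment:
            # entering a labeled anomaly segment
            in_segment = True
            segment_start = i
        elif label[i] == 0 and in_segment:
            # leaving the segment: if any point inside was detected,
            # mark the whole segment as detected
            in_segment = False
            if adjusted[segment_start:i].any():
                adjusted[segment_start:i] = 1
    # handle a segment that runs to the end of the series
    if in_segment and adjusted[segment_start:].any():
        adjusted[segment_start:] = 1
    return adjusted
```

Because this adjustment requires `label`, it can only be applied during evaluation, not at inference time, which is exactly the concern raised above.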
