Filter Evaluation Metrics #85

Open · 5 tasks

memeeerit opened this issue Jun 10, 2023 · 0 comments

Comments

@memeeerit
Contributor

memeeerit commented Jun 10, 2023

We need to know how well our filters are doing.

  • Determine what proportion of data coming from the generic parser typically needs to be rejected.
  • Determine the accuracy and false positive/negative rates of the filter pipeline as a whole.
  • Determine the accuracy and false positive/negative rates of the local filters as a whole.
  • Determine the accuracy and false positive/negative rates of the OpenAI filter. These should be computed in two settings: one on a thus-far unfiltered dataset, and one using only data that the local filters passed through.
  • Any other metrics you think are useful

Save any tools developed to accomplish this and track them with git, as we may want to reuse them for strategy-specific evaluations.
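A minimal sketch of what such a metrics tool could look like, assuming we have a hand-labeled sample where the ground truth says whether each item should be rejected and the filter's decision is recorded alongside it (the function name and dict keys here are hypothetical, not from any existing code in the repo):

```python
def filter_metrics(labels, predictions):
    """Compute accuracy and false positive/negative rates for a filter.

    labels[i] is True if item i *should* be rejected (ground truth);
    predictions[i] is True if the filter actually rejected item i.
    """
    pairs = list(zip(labels, predictions))
    tp = sum(1 for l, p in pairs if l and p)          # correctly rejected
    tn = sum(1 for l, p in pairs if not l and not p)  # correctly kept
    fp = sum(1 for l, p in pairs if not l and p)      # good data rejected
    fn = sum(1 for l, p in pairs if l and not p)      # bad data kept
    total = len(pairs)
    return {
        "accuracy": (tp + tn) / total,
        # False positive rate: fraction of good items wrongly rejected.
        "fpr": fp / (fp + tn) if (fp + tn) else 0.0,
        # False negative rate: fraction of bad items wrongly kept.
        "fnr": fn / (fn + tp) if (fn + tp) else 0.0,
        # Overall rejection rate, for the first bullet above.
        "rejection_rate": (tp + fp) / total,
    }
```

The same function could be run once per local filter, once on the whole pipeline, and twice for the OpenAI filter (unfiltered vs. pre-filtered input) to fill in the checklist above.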
