
t2ranking #229

Open · 8 tasks

seanmacavaney (Collaborator) opened this issue Apr 20, 2023 · 0 comments

Dataset Information:

T2Ranking: A large-scale Chinese Benchmark for Passage Ranking

Links to Resources:

Dataset ID(s) & supported entities:

  • <propose dataset ID(s), and where they fit in the hierarchy, and specify which entity types each will provide (docs, queries, qrels, scoreddocs, docpairs, qlogs)>
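
For reference, a rough sketch of how whichever entities the dataset ends up providing would surface through the ir_datasets Python API once it is registered. The `t2ranking/dev` ID is only a placeholder for the ID(s) to be proposed above, and the field names shown are the generic ones (`GenericDoc`, `GenericQuery`, `TrecQrel`); the actual types for this dataset are still to be decided.

```python
import ir_datasets

# "t2ranking/dev" is a hypothetical ID -- the real ID(s) and hierarchy are proposed above.
dataset = ir_datasets.load("t2ranking/dev")

# Each supported entity type maps to a *_iter() accessor on the loaded dataset.
for query in dataset.queries_iter():                  # queries
    print(query.query_id, query.text)
    break

for doc in dataset.docs_iter():                       # docs (passages)
    print(doc.doc_id, doc.text[:80])
    break

for qrel in dataset.qrels_iter():                     # qrels
    print(qrel.query_id, qrel.doc_id, qrel.relevance)
    break
```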

Checklist

Mark each task once completed. All should be checked prior to merging a new dataset.

  • Dataset definition
  • Tests (in tests/integration/[topid].py)
  • Metadata generated (using ir_datasets generate_metadata command, should appear in ir_datasets/etc/metadata.json)
  • Documentation (in ir_datasets/etc/[topid].yaml)
  • Downloadable content (in ir_datasets/etc/downloads.json)
    • Download verification action (in .github/workflows/verify_downloads.yml). Only one needed per topid.
    • Any small public files from NIST (or other potentially troublesome files) mirrored in https://github.com/seanmacavaney/irds-mirror/. Mirrored status properly reflected in downloads.json.
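
Not part of the template, but a minimal smoke test of the kind one might run locally while working through the checklist above: load the (still hypothetical) dataset ID and touch every entity type it claims to provide, which is roughly what the integration tests and the generate_metadata step end up exercising. The defensive `getattr` fallback is just a sketch-level precaution, not project tooling.

```python
import ir_datasets

dataset = ir_datasets.load("t2ranking")  # placeholder ID from the proposal above

# For each entity type the dataset claims to support, pull one record to confirm
# that the download, decompression, and parsing paths work end to end.
for entity in ("docs", "queries", "qrels", "scoreddocs", "docpairs", "qlogs"):
    has_entity = getattr(dataset, f"has_{entity}", lambda: False)
    if has_entity():
        record = next(iter(getattr(dataset, f"{entity}_iter")()))
        print(f"{entity}: ok ({type(record).__name__})")
    else:
        print(f"{entity}: not provided")
```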

Additional comments/concerns/ideas/etc.
