- [Oct 2024] Leaderboard: We have launched the official leaderboard on Hugging Face and are calling for submissions!
- [Oct 2024] The camera-ready paper is out! We added multiple retrieval models, including BM25, Colbertv2, and GritLM.
- [Sep 2024] STaRK is accepted to 2024 NeurIPS Dataset & Benchmark Track!
- [Jun 2024] We released our benchmark as the pip package stark-qa. You can now load the data directly from the package!
- [Jun 2024] We migrated our data to Hugging Face! You don't need to change anything; the data will be downloaded automatically.
- [May 2024] We augmented our benchmark with three high-quality, human-generated query datasets, which are openly accessible. See our updated arXiv paper for details!
- [May 9th 2024] We released the STaRK SKB Explorer, an interactive interface for exploring our knowledge bases!
- [May 7th 2024] We presented STaRK at the 2024 Stanford Annual Affiliates Meeting and the 2024 Stanford Data Science Conference.
- [May 5th 2024] STaRK was covered by Marketpost and the BAAI community (智源社区). Thanks for writing about our work!
- [Apr 21st 2024] We released the STaRK benchmark.
STaRK is a large-scale Semi-structured Retrieval Benchmark on Textual and Relational Knowledge bases, covering applications in product search, academic paper search, and biomedical inquiries.
Featuring diverse, natural-sounding, and practical queries that require context-specific reasoning, STaRK sets a new standard for assessing real-world retrieval systems driven by LLMs and presents significant challenges for future research.
🔥 Check out our website for an overview!
With Python >=3.8 and <3.12:
pip install stark-qa
Create a conda environment with Python >=3.8 and <3.12 and install the required packages from requirements.txt:
conda create -n stark python=3.11
conda activate stark
pip install -r requirements.txt
from stark_qa import load_qa, load_skb
dataset_name = 'amazon'
# Load the retrieval dataset
qa_dataset = load_qa(dataset_name)
idx_split = qa_dataset.get_idx_split()
# Load the semi-structured knowledge base
skb = load_skb(dataset_name, download_processed=True, root=None)
The root argument of load_skb specifies where the SKB data is stored. With the default value None, the data is stored in the Hugging Face cache.
Question-answer pairs for the retrieval task are automatically downloaded to data/{dataset}/stark_qa by default. We provide the official split in data/{dataset}/split.
There are two ways to load the knowledge base data:
- (Recommended) Instant downloading: the knowledge base data of all three benchmarks will be automatically downloaded and loaded when setting download_processed=True.
- Process data from raw: we also provide all of our preprocessing code for transparency, so you can process the raw data from scratch by setting download_processed=False. In this case, STaRK-PrimeKG takes around 5 minutes to download and process, while STaRK-Amazon and STaRK-MAG may take around an hour to process from the raw data.
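Once the QA set and SKB are loaded, a quick sanity check is to look at one query and its ground-truth answer nodes. The snippet below is a minimal sketch assuming the indexing behavior of the stark_qa package (each QA item yields the query text, its id, the answer node ids, and optional metadata) and a get_doc_info accessor on the SKB; consult the package documentation if your version differs.

```python
from stark_qa import load_qa, load_skb

qa_dataset = load_qa('amazon')
skb = load_skb('amazon', download_processed=True)

# Official train/val/test indices (stored under data/{dataset}/split).
idx_split = qa_dataset.get_idx_split()
test_idx = int(idx_split['test'][0])

# Assumption: indexing a QA item returns (query, query_id, answer_ids, meta_info).
query, query_id, answer_ids, meta_info = qa_dataset[test_idx]
print(query)
print(answer_ids)  # ground-truth node ids in the semi-structured knowledge base

# Assumption: get_doc_info(node_id) returns the node's text document.
first_answer = int(answer_ids[0])
print(skb.get_doc_info(first_answer)[:300])
```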
If you are running evaluation, you may need to install the following packages:
pip install llm2vec gritlm bm25
- Our evaluation requires embedding the node documents into candidate_emb_dict.pt, which is a dictionary mapping node_id -> torch.Tensor. Query embeddings will be generated automatically if they are not available. You can either run the following Python script to download the query and document embeddings generated by text-embedding-ada-002 (we provide them so you can run on our benchmark right away; a conceptual sketch of this dictionary's format follows the argument list below):
python emb_download.py --dataset amazon --emb_dir emb/
Alternatively, you can generate the query or document embeddings yourself. E.g.,
python emb_generate.py --dataset amazon --mode query --emb_dir emb/ --emb_model text-embedding-ada-002
- dataset: one of amazon, mag, or prime.
- mode: the content to embed, one of query or doc (node documents).
- emb_dir: the directory to store embeddings.
- emb_model: the name of the LLM used to generate embeddings, such as text-embedding-ada-002, text-embedding-3-large, voyage-large-2-instruct, GritLM/GritLM-7B, or McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp.
- See emb_generate.py for other arguments.
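For reference, candidate_emb_dict.pt is simply a node_id -> torch.Tensor dictionary saved with torch.save. The sketch below illustrates building such a file with OpenAI embeddings; it assumes the SKB exposes candidate_ids and get_doc_info(node_id) (as used by emb_generate.py) and that OPENAI_API_KEY is set. It is an illustration only; emb_generate.py remains the authoritative script for naming, batching, and the folder layout expected by eval.py.

```python
import torch
from openai import OpenAI
from stark_qa import load_skb

client = OpenAI()  # reads OPENAI_API_KEY from the environment
skb = load_skb('amazon', download_processed=True)

candidate_emb_dict = {}
# Assumption: skb.candidate_ids lists retrievable node ids and
# skb.get_doc_info(node_id) returns the node's text document.
for node_id in list(skb.candidate_ids)[:100]:  # small slice for illustration
    doc = skb.get_doc_info(node_id)
    resp = client.embeddings.create(model='text-embedding-ada-002', input=doc)
    candidate_emb_dict[node_id] = torch.tensor(resp.data[0].embedding)

# Place the file under your --emb_dir following the doc{surfix} layout used by eval.py.
torch.save(candidate_emb_dict, 'candidate_emb_dict.pt')
```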
- Run the Python script for evaluation (a conceptual sketch of the VSS scoring step follows the key args below). E.g.,
python eval.py --dataset amazon --model VSS --emb_dir emb/ --output_dir output/ --emb_model text-embedding-ada-002 --split test --save_pred
python eval.py --dataset amazon --model VSS --emb_dir emb/ --output_dir output/ --emb_model GritLM/GritLM-7B --split test-0.1 --save_pred
python eval.py --dataset amazon --model LLMReranker --emb_dir emb/ --output_dir output/ --emb_model text-embedding-ada-002 --split human_generated_eval --llm_model gpt-4-1106-preview --save_pred
Key args:
- dataset: the dataset to evaluate on, one of amazon, mag, or prime.
- model: the model to be evaluated, one of BM25, Colbertv2, VSS, MultiVSS, or LLMReranker.
  - Please specify the name of the embedding model with the argument --emb_model.
  - If you are using LLMReranker, please specify the LLM name with the argument --llm_model.
  - Specify API keys on the command line, e.g.,
    export ANTHROPIC_API_KEY=YOUR_API_KEY
    or
    export OPENAI_API_KEY=YOUR_API_KEY
    export OPENAI_ORG=YOUR_ORGANIZATION
    or
    export VOYAGE_API_KEY=YOUR_API_KEY
- emb_dir: the directory to store embeddings.
- split: the split to evaluate on, one of train, val, test, test-0.1 (a 10% random sample), and human_generated_eval (evaluation on the human-generated query dataset).
- output_dir: the directory to store evaluation outputs.
- surfix: specify this when the stored embeddings are in a folder named doc{surfix} or query{surfix}, e.g., _no_compact.
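For intuition, the VSS baseline boils down to similarity ranking between a query embedding and the candidate embeddings in candidate_emb_dict.pt, followed by standard retrieval metrics such as Hit@k. The sketch below illustrates that idea under those assumptions; it is not the repository's eval.py, which should be used for official numbers.

```python
import torch
import torch.nn.functional as F

# candidate_emb_dict.pt: dict of node_id -> torch.Tensor (see the embedding step above).
candidate_emb_dict = torch.load('candidate_emb_dict.pt')
node_ids = list(candidate_emb_dict.keys())
cand_emb = torch.stack([candidate_emb_dict[i] for i in node_ids])  # [num_candidates, dim]

def vss_rank(query_emb: torch.Tensor, k: int = 20):
    """Rank candidate nodes by cosine similarity to the query embedding."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), cand_emb, dim=-1)
    topk = torch.topk(sims, k=min(k, sims.numel()))
    return [node_ids[i] for i in topk.indices.tolist()]

def hit_at_k(ranked_ids, answer_ids, k: int = 1) -> float:
    """1.0 if any ground-truth answer node appears in the top-k predictions."""
    return float(any(a in ranked_ids[:k] for a in answer_ids))

# Hypothetical query embedding of matching dimension, for illustration only.
query_emb = torch.randn(cand_emb.size(-1))
ranked = vss_rank(query_emb, k=20)
print(hit_at_k(ranked, answer_ids=[node_ids[0]], k=5))
```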
Please consider citing our paper if you use our benchmark or code in your work:
@inproceedings{wu24stark,
title = {STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases},
author = {
Shirley Wu and Shiyu Zhao and
Michihiro Yasunaga and Kexin Huang and
Kaidi Cao and Qian Huang and
Vassilis N. Ioannidis and Karthik Subbian and
James Zou and Jure Leskovec
},
booktitle = {NeurIPS Datasets and Benchmarks Track},
year = {2024}
}