This project provides tools for "CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking", accepted to NAACL 2022 as a long paper.
A GPU is recommended to speed up training.
The code is based on PyTorch 1.6+; tutorials are available on the official PyTorch website.
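Before training, you can verify your environment with a quick sanity check (this snippet is not part of the original repository, just a convenience):

```python
import torch

# Confirm the installed PyTorch version meets the 1.6+ requirement
print("PyTorch version:", torch.__version__)

# Training is much faster on a GPU; check whether CUDA is visible
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```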
Our models are in the Joint directory, and the baseline models are under the Pipeline directory. Specific usage instructions are given in the corresponding directories.
./data
└── CHEF
├── train.json
├── dev.json
└── test.json
For the Joint model (ours), download the data and put it in the Data directory. For the Pipeline models, the data needs to be preprocessed; we provide the preprocessed data in the Data directory.
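Each split is a JSON file that can be loaded with the standard library. The sketch below is only an illustration; the field names printed at the end depend on the released data and are not guaranteed here.

```python
import json

# Load the training split of CHEF (path follows the layout shown above)
with open("data/CHEF/train.json", encoding="utf-8") as f:
    train = json.load(f)

print(f"{len(train)} training instances")

# Inspect the keys of one instance; the actual field names (e.g. claim,
# evidence, label) may differ in the released files.
print(train[0].keys())
```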
The Pipeline baselines build on the Kernel Graph Attention Network (KGAT).
If you have any problems with our code, feel free to contact: [email protected]
If you use the code in your research, please cite our paper as follows:
@inproceedings{hu2022chef,
  abbr = {NAACL},
  title = {CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking},
  author = {Hu, Xuming and Guo, Zhijiang and Wu, Guanyu and Liu, Aiwei and Wen, Lijie and Yu, Philip S.},
  booktitle = {Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics},
  year = {2022},
  code = {https://github.com/THU-BPM/CHEF}
}