
Extrinsic task code #2

Open
stephantul opened this issue Aug 12, 2022 · 6 comments

Comments

@stephantul

Hi,

I was wondering where I could find the testing/training code for the extrinsic tasks, i.e., SST-2 and CoNLL-03.
Also, are the models included in the repository the models for which you report the scores in the paper?

Thanks!

@tigerchen52
Owner


Hi,

Thanks for your interest in our work!

For the extrinsic evaluations, I have added some files for this purpose.
You can follow the instructions in the README to reproduce the reported scores:

  1. SST-2: cnn_text_classification
  2. CoNLL-03: rnn_ner

In addition, we provide two model files, love_fasttext and love_bert_base_uncased.
Note that the love_fasttext model is the same as the file output/model_merge.pt.
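For illustration, here is a minimal sketch of how such a .pt file can be inspected with PyTorch, assuming it is a standard torch checkpoint; the state-dict contents below are a stand-in, not the actual LOVE weights:

```python
# Sketch: save and load a PyTorch checkpoint the way a .pt model
# file is typically handled. The state dict here is made-up sample
# data, not the real model.
import torch

# Stand-in "model weights", as produced by model.state_dict()
state = {"embedding.weight": torch.zeros(4, 3)}
torch.save(state, "model_merge.pt")

# Load on CPU and inspect the stored parameter names
loaded = torch.load("model_merge.pt", map_location="cpu")
print(sorted(loaded.keys()))
```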

Best,
Lihu

@stephantul
Author

Hi,

Thanks for the quick response. I'll check out the code! The model namespace mentions that it was trained using data/wiki_100.vec, but is this correct? I'm assuming this is a placeholder file rather than the real data the model was trained on.

Thanks!

@tigerchen52
Owner


Yes, your understanding is correct.
data/wiki_100.vec is just a small example file for testing the code. You can download whichever pre-trained word embeddings you want to target, e.g., FastText.
As mentioned before, LOVE can mimic various existing word embeddings.
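For reference, files like data/wiki_100.vec use the word2vec-style .vec text format: a header line with the vocabulary size and dimension, followed by one word per line with its vector. A small self-contained sketch (the tiny vocabulary below is made up for illustration):

```python
# Write and parse a tiny embedding file in the word2vec-style .vec
# text format: a "<vocab_size> <dim>" header, then one
# "<word> <v1> ... <vdim>" line per word. Sample data only.
sample = """3 4
the 0.1 0.2 0.3 0.4
cat -0.5 0.0 0.25 1.0
sat 0.9 -0.1 0.0 0.2
"""
with open("tiny.vec", "w", encoding="utf-8") as f:
    f.write(sample)

embeddings = {}
with open("tiny.vec", encoding="utf-8") as f:
    vocab_size, dim = map(int, f.readline().split())
    for line in f:
        parts = line.rstrip().split(" ")
        word, vector = parts[0], [float(x) for x in parts[1:]]
        assert len(vector) == dim
        embeddings[word] = vector

print(len(embeddings), dim)  # → 3 4
```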

@pestrstr

Hi!
Are you also planning to add the code for reproducing results for extrinsic tasks on MR and BC2GM datasets?
Thanks!

@tigerchen52
Owner


Hi, for those two datasets, the only thing you need to do is put the files into the corresponding task path.
For example, MR is a text classification dataset, so just replace the file in cnn_text_classification;
the same goes for BC2GM. Note that the file format must also match.
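As a sketch of that file swap (all directory names, file names, and the line format below are illustrative stand-ins, not the repository's exact layout):

```python
# Sketch: drop a new dataset file into an existing task directory,
# keeping the file name the training code expects. All paths here
# are hypothetical, not the repository's actual layout.
import shutil
from pathlib import Path

task_dir = Path("cnn_text_classification/data")  # hypothetical data path
task_dir.mkdir(parents=True, exist_ok=True)

# Pretend this is the MR dataset, already converted to the same
# format as the file it replaces (e.g., one labeled example per line).
new_dataset = Path("mr_train.txt")
new_dataset.write_text("1\tan example positive review\n0\tan example negative review\n")

# Replace the task's training file with the new dataset.
shutil.copy(new_dataset, task_dir / "train.txt")
print((task_dir / "train.txt").exists())
```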

@pestrstr

Thank you for your answer!
