
Performance of AMPLIFY for subsequent prediction tasks #13

Open
philippschw opened this issue Nov 7, 2024 · 2 comments

@philippschw

I greatly appreciate you providing the full training recipes for a protein language model. However, our own test on a binding prediction task showed very poor performance for embeddings obtained from the AMPLIFY_120M model (see the extraction sketch after the task list below). Did you test the performance of AMPLIFY on downstream prediction tasks and benchmark it against ProstT5, ESM-2, and others?

Example prediction tasks:

  • per-residue prediction of secondary structure
  • binding residues
  • conservation
  • per-protein prediction of subcellular location
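
For reference, a minimal sketch of how per-residue embeddings can be pulled from AMPLIFY through Hugging Face Transformers. The checkpoint id chandar-lab/AMPLIFY_120M, the trust_remote_code flag, and the output_hidden_states attribute are assumptions based on the public Hub release, not details confirmed in this thread:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed Hub checkpoint id and custom-code flag; adjust to the actual release.
name = "chandar-lab/AMPLIFY_120M"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModel.from_pretrained(name, trust_remote_code=True).eval()

sequence = "MSVVGIDLGFQSCYVAVARAGGIETIANEYSDRCTPACISFGPKNR"  # toy sequence
input_ids = tokenizer.encode(sequence, return_tensors="pt")

with torch.no_grad():
    # Assumes the model's custom code exposes hidden states via this flag.
    output = model(input_ids, output_hidden_states=True)

# Per-residue embeddings from the last layer: (seq_len, hidden_dim).
embeddings = output.hidden_states[-1].squeeze(0)
```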
@qfournier
Collaborator

I don't have results to share on downstream tasks like binding or secondary structure prediction at this time. The main reason is that designing biologically relevant tasks is essential but challenging and time-consuming. You are welcome to fine-tune and evaluate the pre-trained AMPLIFY models! There are several datasets on Hugging Face for the tasks you mentioned, but be sure to validate them as they may be biased and underestimate or overestimate the model's performance.
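
To make that suggestion concrete, here is a minimal linear-probe sketch over frozen, pooled embeddings. The arrays X and y are random placeholders standing in for real embeddings and labels from one of those (to-be-validated) datasets:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 640)).astype(np.float32)  # placeholder pooled embeddings
y = rng.integers(0, 2, size=200)                        # placeholder binary labels

# Fit a logistic-regression probe on a held-out split of the frozen features.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", accuracy_score(y_te, probe.predict(X_te)))
```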

@philippschw
Author

Actually, I have to walk back my comment from a few days ago. When the embeddings are normalized properly, the performance on our specific downstream task jumps up and exceeds T5, which was our previous best-performing base model. I am happy to contribute a PR, because others may run into the same issue when trying to use AMPLIFY. Nevertheless, this project would greatly benefit from a systematic evaluation of the models on many downstream tasks. Reach out to me if you would like to collaborate on this.
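
The comment above does not specify which normalization was used; one common choice, shown here purely as an illustrative assumption rather than the author's actual fix, is L2-normalizing each embedding vector before it reaches the downstream model:

```python
import torch
import torch.nn.functional as F

def l2_normalize(emb: torch.Tensor) -> torch.Tensor:
    """Scale each embedding vector (last dim) to unit L2 norm."""
    return F.normalize(emb, p=2, dim=-1)

# Example: mean-pool per-residue embeddings into a per-protein vector,
# then normalize so the downstream model sees unit-length features.
residue_emb = torch.randn(64, 640)  # toy (seq_len, hidden_dim) tensor
protein_emb = l2_normalize(residue_emb.mean(dim=0))
```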
