I greatly appreciate you providing the full training recipes for a protein language model; however, our own test on a binding prediction task showed very poor performance for embeddings obtained from the AMPLIFY_120M model. Did you evaluate AMPLIFY on downstream prediction tasks and benchmark it against ProstT5, ESM-2, and others?
Example prediction tasks:
- per-residue prediction of secondary structure
- binding residues
- conservation
- per-protein prediction of subcellular location
I don't have results to share on downstream tasks like binding or secondary structure prediction at this time. The main reason is that designing biologically relevant tasks is essential but challenging and time-consuming. You are welcome to fine-tune and evaluate the pre-trained AMPLIFY models! There are several datasets on Hugging Face for the tasks you mentioned, but be sure to validate them as they may be biased and underestimate or overestimate the model's performance.
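For anyone who wants to try this, a minimal sketch of pulling mean-pooled per-protein embeddings from a pre-trained checkpoint is below. It assumes the model is loaded from the Hugging Face Hub under `chandar-lab/AMPLIFY_120M` with `trust_remote_code=True`, and that the forward pass exposes hidden states via `output_hidden_states=True`; the exact repository name and output attributes may differ, so check the model card.

```python
# Sketch: extract mean-pooled AMPLIFY embeddings for a list of protein sequences.
# Assumptions: checkpoint at "chandar-lab/AMPLIFY_120M" on the Hugging Face Hub,
# and the custom model class returns hidden states via output_hidden_states=True.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "chandar-lab/AMPLIFY_120M"  # assumption: adjust to the actual repository

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True)
model.eval()

def embed(sequences):
    """Return one mean-pooled embedding per protein sequence."""
    embeddings = []
    with torch.no_grad():
        for seq in sequences:
            inputs = tokenizer(seq, return_tensors="pt")
            outputs = model(**inputs, output_hidden_states=True)
            # Last hidden layer has shape (1, seq_len, hidden_dim); average over residues.
            last_hidden = outputs.hidden_states[-1]
            embeddings.append(last_hidden.mean(dim=1).squeeze(0))
    return torch.stack(embeddings)

# Example usage on toy sequences:
# X = embed(["MKTAYIAKQR", "GSHMAELAC"])
```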
Actually, I have to backpedal on my comment from a few days ago. When the embeddings are normalized properly, the performance on our specific downstream task jumps up and exceeds T5, which was our previous best-performing base model. I am happy to contribute a PR, because others may run into the same issue when trying to use AMPLIFY. Nevertheless, this project would greatly benefit from a systematic evaluation of the models on many downstream tasks. Reach out to me if you would like to collaborate on this.
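The comment above does not spell out which normalization recovered the performance, so the sketch below shows one common recipe as an assumption: standardize each embedding dimension on the training split before fitting a lightweight downstream classifier (here a logistic regression stand-in for a binding predictor). L2 or layer normalization of the embeddings are reasonable alternatives.

```python
# Hypothetical downstream evaluation: standardize AMPLIFY embeddings per dimension
# on the training split, then fit a simple classifier. The normalization choice
# (StandardScaler) is an assumption, not the exact fix described in the thread.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_downstream(train_embeddings, train_labels):
    """train_embeddings: (n_proteins, hidden_dim) array; train_labels: binary labels."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(train_embeddings, train_labels)
    return clf

# Example usage with random placeholder data:
# X_train = np.random.randn(100, 640)   # 640 = assumed AMPLIFY_120M hidden size
# y_train = np.random.randint(0, 2, 100)
# model = fit_downstream(X_train, y_train)
```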