This repository contains the implementation of various protein language models trained on reduced amino acid alphabets, along with the notebooks to recreate the figures found in the paper.
For more details, see: Link after publishing.
Motivation: Protein Language Models (PLMs), which borrowed ideas for modelling and inference from Natural Language Processing, have demonstrated the ability to extract meaningful representations in an unsupervised way, leading to significant performance improvements on several downstream tasks. Clustering amino acids based on their physicochemical properties to obtain reduced alphabets has been of interest in past research, but the application of such alphabets to PLMs or folding models remains unexplored.
Results: Here, we investigate the efficacy of PLMs trained on reduced amino acid alphabets in capturing evolutionary information, and we explore how the loss of protein sequence information impacts learned representations and downstream task performance. Our empirical work shows that PLMs trained on the full alphabet and a large number of sequences capture fine details that are lost in alphabet reduction methods. We further show the ability of a structure prediction model (ESMFold) to fold CASP14 protein sequences translated using a reduced alphabet. For 10 of the 50 targets, reduced alphabets improve structural predictions, with LDDT-Cα differences of up to 19%.
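As an illustration of what alphabet reduction means in practice, the snippet below translates a sequence into a reduced alphabet using a hypothetical physicochemical grouping; this grouping is illustrative only and is not one of the alphabets used in the paper.

```python
# Minimal sketch of translating a sequence into a reduced amino acid alphabet.
# The grouping below is a hypothetical physicochemical clustering, not an
# alphabet from the paper.
REDUCED_GROUPS = {
    "AVLIMC": "A",   # small / hydrophobic
    "FWY":    "F",   # aromatic
    "KRH":    "K",   # positively charged
    "DE":     "D",   # negatively charged
    "STNQ":   "S",   # polar
    "GP":     "G",   # special conformations
}
AA_TO_REDUCED = {aa: rep for group, rep in REDUCED_GROUPS.items() for aa in group}

def reduce_sequence(seq: str) -> str:
    """Map each residue to the representative letter of its cluster."""
    return "".join(AA_TO_REDUCED.get(aa, aa) for aa in seq)

print(reduce_sequence("MKTAYIAKQR"))  # -> "AKSAFAAKSK" under this hypothetical grouping
```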
The models are trained and evaluated using the following publicly available datasets:
- PLM pretraining dataset: Uniref90
- Structure prediction dataset: CASP14
- Enzyme Commission (EC) dataset: IEConv_proteins
- Fold recognition dataset: TAPE
- FLIP benchmark datasets: FLIP
All of these datasets can be downloaded using the release feature on GitHub, apart from Uniref90, which is very large; it can be downloaded separately and then processed with our dataset script.
To pretrain the protein language model, run `train_prose_multitask.py`. The implementation uses multiple GPUs and can be run on a single machine or on a cluster; the scripts for running the file on a cluster can be found at iridis-scripts. The progress of the training can be monitored using `tensorboard.sh`. All trained models can be downloaded in the release section.
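For context, the sketch below shows a single masked-token pretraining step on a protein sequence. It is a simplified stand-in for the multitask objectives implemented in `train_prose_multitask.py`, and the tiny model at the end is purely illustrative.

```python
# Conceptual sketch of a masked-token pretraining step on a protein sequence.
# The stand-in model and the single objective are illustrative only; the actual
# multitask objectives and architecture live in train_prose_multitask.py.
import torch
import torch.nn as nn
import torch.nn.functional as F

AA_VOCAB = "ACDEFGHIKLMNPQRSTVWY"   # 20 standard amino acids
MASK_ID = len(AA_VOCAB)             # extra index used as the mask token
VOCAB_SIZE = len(AA_VOCAB) + 1

def encode(seq: str) -> torch.Tensor:
    return torch.tensor([AA_VOCAB.index(aa) for aa in seq])

def masked_lm_loss(model: nn.Module, seq: str, mask_prob: float = 0.15) -> torch.Tensor:
    tokens = encode(seq)
    labels = tokens.clone()
    mask = torch.rand(tokens.shape) < mask_prob
    tokens[mask] = MASK_ID                                  # corrupt the selected positions
    logits = model(tokens.unsqueeze(0))                     # (1, L, vocab)
    return F.cross_entropy(logits[0][mask], labels[mask])   # loss only at masked positions

# Tiny stand-in model so the sketch runs end to end; the real PLM is far larger.
model = nn.Sequential(nn.Embedding(VOCAB_SIZE, 64), nn.Linear(64, VOCAB_SIZE))
loss = masked_lm_loss(model, "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
loss.backward()
```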
After pretraining the protein language model, you can fine-tune it on downstream tasks by running one of the following scripts (a conceptual sketch of the fine-tuning setup follows the list):
- `train_enzyme.py` for the EC dataset
- `train_fold.py` for the Fold recognition dataset
- `train_flip.py` for the FLIP benchmark datasets
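The sketch below illustrates the general fine-tuning setup assumed here: a pretrained encoder producing per-residue embeddings, mean-pooled and fed to a linear classification head. The encoder stand-in, pooling strategy, and class count are hypothetical; refer to the scripts above for the actual implementation.

```python
# Conceptual sketch of downstream fine-tuning: a pretrained encoder with a small
# classification head. All concrete choices below are illustrative placeholders.
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                        # pretrained PLM (per-residue embeddings)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        residue_emb = self.encoder(tokens)            # (batch, length, embed_dim)
        pooled = residue_emb.mean(dim=1)              # mean-pool over residues
        return self.head(pooled)                      # (batch, num_classes)

# Stand-in encoder so the sketch runs; in practice this is the pretrained PLM.
encoder = nn.Embedding(21, 128)
classifier = SequenceClassifier(encoder, embed_dim=128, num_classes=10)  # task-specific class count
logits = classifier(torch.randint(0, 21, (2, 100)))                      # two sequences of length 100
```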
If you want to run these experiments on a cluster, take a look at the iridis-scripts folder.
To reproduce the plots for the amino acid embedding projection using PCA, use the notebook `aa_embeddings.ipynb`.
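As a rough outline of what the notebook does, the following sketch projects a stand-in amino acid embedding matrix to two dimensions with PCA and labels each point; the random matrix takes the place of the (20 x d) learned embedding weights of a trained model.

```python
# Minimal sketch of projecting amino acid embeddings to 2-D with PCA.
# The random matrix is a stand-in for a trained model's embedding weights.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

amino_acids = list("ACDEFGHIKLMNPQRSTVWY")
embedding_matrix = np.random.randn(len(amino_acids), 128)     # stand-in for learned embeddings

coords = PCA(n_components=2).fit_transform(embedding_matrix)  # (20, 2)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), aa in zip(coords, amino_acids):
    plt.annotate(aa, (x, y))
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```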
For experiments involving protein structure prediction using reduced amino acid alphabets, use the notebook `esm-structure-prediction.ipynb`. This notebook contains the code for generating the structures with ESMFold and everything else needed to recreate the results.
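As a minimal example of the folding step, the snippet below runs ESMFold through the fair-esm package (it assumes fair-esm is installed with the ESMFold extras); the notebook contains the full pipeline used for the paper's results.

```python
# Sketch of folding a (possibly reduced-alphabet) sequence with ESMFold via fair-esm.
import torch
import esm

model = esm.pretrained.esmfold_v1().eval()
if torch.cuda.is_available():
    model = model.cuda()

# Example sequence; translate it to a reduced alphabet first if desired.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)

with open("prediction.pdb", "w") as handle:
    handle.write(pdb_string)
```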
For more information on the steps taken to create the WASS14 alphabet, take a look at `surface_plots.ipynb`.
If you want to embed a set of protein sequences using any of the models, you can use the `embedd.py` script. You only need to provide a FASTA file.
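Conceptually, the script parses the FASTA file and embeds each sequence with the chosen model. In the sketch below, `model.embed` is a hypothetical placeholder for the repository's actual embedding call.

```python
# Conceptual sketch of embedding every sequence in a FASTA file.
# `model.embed` is a hypothetical placeholder, not the repository's real API.
from Bio import SeqIO

def embed_fasta(fasta_path: str, model) -> dict:
    embeddings = {}
    for record in SeqIO.parse(fasta_path, "fasta"):
        embeddings[record.id] = model.embed(str(record.seq))  # hypothetical embedding call
    return embeddings
```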
This repository contains various bits of code taken from other sources. If you find the repo useful, please also cite the following works:
- Surface generation code: MASIF
- LDDT calculation: AlphaFold
- Model architecture and UniProt tokenization: Prose
- MSA plot generation: ColabFold
Ioan Ieremie, Rob M. Ewing, Mahesan Niranjan
to be added
ii1g17 [at] soton [dot] ac [dot] uk