https://github.com/bioinfomachinelearning/transpro
1D transformer for predicting protein structural features (secondary structure, solvent accessibility)
- Host: GitHub
- URL: https://github.com/bioinfomachinelearning/transpro
- Owner: BioinfoMachineLearning
- Created: 2022-06-10T02:55:16.000Z (almost 4 years ago)
- Default Branch: main
- Last Pushed: 2022-07-12T23:18:53.000Z (almost 4 years ago)
- Last Synced: 2025-09-09T16:34:17.743Z (8 months ago)
- Language: Python
- Size: 33.4 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# TransPross: 1D transformer for protein secondary structure prediction

## Description
1D transformer for predicting protein structural features (secondary structure).

## Installation
```bash
git clone https://github.com/BioinfoMachineLearning/TransPro.git
cd TransPro
mkdir env
python3.6 -m venv env/ss_virenv
source env/ss_virenv/bin/activate
pip install --upgrade pip
pip install -r requirments.txt
```
## Training data
The training protein targets were extracted from the Protein Data Bank (PDB) before May 2019, filtered to sequence identity < 90% and sequence length in the range [50, 500].
All the data required for training are listed below and available at [Zenodo](https://doi.org/10.5281/zenodo.6762376):
* Protein sequences in FASTA format (fasta.tar.gz)
* Target ID list for training
* MSAs in a3m format (a3m.tar.gz is too large and is stored at /bml/TransPro/a3m.tar.gz)
* True secondary-structure labels in 3 states (ss_3.tar.gz)
* True 3D structures in PDB format (atom.tar.gz)
* 5 trained TransPross models (model.tar.gz)
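The FASTA sequences and 3-state labels above can be loaded with a short parser. This is a minimal sketch, assuming single-record FASTA files and one plain-text label string per target; the exact layout inside the tarballs may differ:

```python
from pathlib import Path

def read_fasta(path):
    """Return (header, sequence) from a single-record FASTA file."""
    lines = Path(path).read_text().splitlines()
    header = lines[0].lstrip(">").strip()
    seq = "".join(line.strip() for line in lines[1:] if not line.startswith(">"))
    return header, seq

def read_ss3(path):
    """Return the 3-state secondary-structure label string for a target."""
    return "".join(Path(path).read_text().split())

# Usage: lengths must match so each residue has exactly one label.
# header, seq = read_fasta("fasta/T1026.fasta")
# labels = read_ss3("ss_3/T1026.ss")
# assert len(seq) == len(labels)
```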
## Testing data
All the data used for evaluation are listed below:
* CASP test sets (CASP13, CASP14)
## Training
```bash
python MSA_transformer2_train.py --model_num 1 --N 6 --max_positions 1500 --BATCH_SIZE 5 --data_dir <data_dir> --dataset <dataset>
```
* `model_num`: model number in the training list
* `N`: number of attention layers
* `max_positions`: maximum number of sequences allowed in the input MSA
* `BATCH_SIZE`: batch size
* `data_dir`: folder path for storing data
* `dataset`: training set name
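Since five trained models are distributed with the data, training is presumably repeated once per `model_num`. The sketch below builds (but does not launch) the five command lines; the flag names come from the README, while the `data/` and `pdb_train` values are placeholders:

```python
import shlex

def build_train_cmd(model_num, data_dir, dataset, n_layers=6,
                    max_positions=1500, batch_size=5):
    """Assemble the training command for one TransPross model."""
    return [
        "python", "MSA_transformer2_train.py",
        "--model_num", str(model_num),
        "--N", str(n_layers),
        "--max_positions", str(max_positions),
        "--BATCH_SIZE", str(batch_size),
        "--data_dir", data_dir,
        "--dataset", dataset,
    ]

# One command per ensemble member; each could be run with subprocess.run(cmd).
cmds = [build_train_cmd(i, "data/", "pdb_train") for i in range(1, 6)]
print("\n".join(shlex.join(c) for c in cmds))
```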
## Inference
**Predicting with a single a3m file as the input:**
```bash
python MSA_transformer2_predict_batch.py -i <a3m_file>
```
e.g. `python MSA_transformer2_predict_batch.py -i T1026.a3m`
**Predicting multiple targets at one time:**
```bash
python MSA_transformer2_predict_batch.py --data_dir <data_dir> --dataset <dataset>
```
To predict multiple targets, create a test.lst file under the path /data_dir/dataset/test.lst in the format: length (e.g. test/casp13/test.lst).
* `data_dir`: folder path for storing data
* `dataset`: testing set name