
gcn-prot
==============================

[![Build Status](https://travis-ci.com/carrascomj/gcn-prot.svg?branch=master)](https://travis-ci.com/carrascomj/gcn-prot)
[![Coverage](https://codecov.io/gh/carrascomj/gcn-prot/branch/master/graph/badge.svg)](https://codecov.io/gh/carrascomj/gcn-prot)
[![Code Style](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)

Graph convolutional networks to perform structural learning of proteins. The
starting point is the work of [Zamora-Resendiz and Crivelli, 2019](https://www.biorxiv.org/content/10.1101/610444v1.full)
(repository on [GitHub](https://github.com/CrivelliLab/Protein-Structure-DL)).
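The cited work applies the standard graph-convolution propagation rule to protein graphs, where residues are nodes and spatial contacts are edges. The snippet below is a minimal NumPy sketch of one such layer (the symmetric normalization of Kipf & Welling, 2017) — it is illustrative only, not this repository's implementation, and all names and the toy graph are hypothetical:

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution layer: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt             # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0.0)  # linear map + ReLU

# Toy protein graph: 4 residues in a chain, 3 input features, 2 output features.
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
features = np.random.rand(4, 3)
weights = np.random.rand(3, 2)
out = gcn_layer(adjacency, features, weights)
print(out.shape)  # (4, 2): one embedding per residue
```

Each layer mixes every residue's features with those of its spatial neighbors, so stacking layers lets information propagate along the protein's contact graph.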

Authors
-------
- Bjorn Hansen
- Jorge Carrasco Muriel (@carrascomj)

This project was developed for the Advanced Machine Learning course at DTU.

Project Organization
--------------------

    ├── LICENSE
    ├── Makefile           <- Makefile with commands like `make data` or `make train`
    ├── README.md          <- The top-level README for developers using this project.
    ├── data
    │   ├── external       <- Data from third party sources.
    │   ├── interim        <- Intermediate data that has been transformed.
    │   ├── processed      <- The final, canonical data sets for modeling.
    │   └── raw            <- The original, immutable data dump.
    │
    ├── docs               <- A default Sphinx project; see sphinx-doc.org for details
    │
    ├── models             <- Trained and serialized models, model predictions, or model summaries
    │
    ├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
    │                         the creator's initials, and a short `-` delimited description, e.g.
    │                         `1.0-jqp-initial-data-exploration`.
    │
    ├── references         <- Data dictionaries, manuals, and all other explanatory materials.
    │
    ├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
    │   └── figures        <- Generated graphics and figures to be used in reporting
    │
    ├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
    │                         generated with `pip freeze > requirements.txt`
    │
    ├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
    ├── src                <- Source code for use in this project.
    │   ├── __init__.py    <- Makes src a Python module
    │   │
    │   ├── data           <- Scripts to download or generate data
    │   │   └── make_dataset.py
    │   │
    │   ├── features       <- Scripts to turn raw data into features for modeling
    │   │   └── build_features.py
    │   │
    │   ├── models         <- Scripts to train models and then use trained models to make
    │   │   │                 predictions
    │   │   ├── predict_model.py
    │   │   └── train_model.py
    │   │
    │   └── visualization  <- Scripts to create exploratory and results oriented visualizations
    │       └── visualize.py
    │
    └── tox.ini            <- tox file with settings for running tox; see tox.testrun.org
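Given the `setup.py` and `Makefile` described above, a typical workflow would look like the following. This is a sketch assuming the default cookiecutter targets; check the Makefile for the targets actually defined:

```shell
# install the src package in editable mode so it can be imported
pip install -e .

# Makefile targets mentioned above (names assumed from the template)
make data     # download or generate the data sets
make train    # train the model
```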

--------

Project based on the [cookiecutter data science project template](https://drivendata.github.io/cookiecutter-data-science/). #cookiecutterdatascience