https://github.com/bcgsc/rsempipeline
A pipeline for running rsem analysis on thousands of samples
- Host: GitHub
- URL: https://github.com/bcgsc/rsempipeline
- Owner: bcgsc
- Created: 2014-09-12T17:48:09.000Z (over 10 years ago)
- Default Branch: master
- Last Pushed: 2019-02-22T17:54:50.000Z (about 6 years ago)
- Last Synced: 2025-01-13T00:47:12.938Z (4 months ago)
- Topics: geo, ncbi, python, rsem, sra, transcript-quantification
- Language: Python
- Size: 331 KB
- Stars: 1
- Watchers: 15
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.rst
README
|build| |cov|
rsempipeline
============

rsempipeline is a pipeline for analyzing `GEO
<https://www.ncbi.nlm.nih.gov/geo/>`_ data using `RSEM
<https://github.com/deweylab/RSEM>`_. The typical analysis process is as
follows.

The input to the pipeline comes mainly from two sources:

- soft files for all Series (aka GSE)
- a GSE_species_GSM.csv file, which contains a list of all samples of
  interest (aka GSM) to be processed (see the reading sketch after this
  list)
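As an illustration of the second input, here is a minimal sketch of reading
such a CSV. The three-column layout (GSE, species, GSM) is an assumption
inferred from the file name, not confirmed by the repository:

.. code:: python

    # Sketch: read a GSE_species_GSM.csv listing the samples to process.
    # The (GSE, species, GSM) column order is assumed from the file name;
    # the real file format may differ.
    import csv
    from collections import defaultdict

    def read_sample_list(path="GSE_species_GSM.csv"):
        """Group the GSMs of interest by (GSE, species)."""
        gsms_by_gse = defaultdict(list)
        with open(path, newline="") as fh:
            for row in csv.reader(fh):
                if not row or row[0].startswith("#"):
                    continue  # skip blank and comment lines
                gse, species, gsm = (field.strip() for field in row[:3])
                gsms_by_gse[(gse, species)].append(gsm)
        return gsms_by_gse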
There are three steps in this pipeline:

1. Download the sra files for all GSMs from the `GEO
   <https://www.ncbi.nlm.nih.gov/geo/>`_ website using aspc from Aspera, or
   `wget <https://www.gnu.org/software/wget/>`_ when aspc fails (a minimal
   sketch of this fallback appears below). aspc and wget use different URLs
   which point to copies of the same file.
2. Convert the sra files to fastq.gz files using fastq-dump from the `SRA
   Toolkit <https://github.com/ncbi/sra-tools>`_.
3. Run rsem-calculate-expression from the RSEM package with all fastq.gz
   files for all GSMs.

The pipeline is designed to run the first two steps (computationally cheap)
on a localhost, while Step 3 (computationally expensive) is run on an HPC
cluster (e.g. the genesis or westgrid clusters).
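The Step 1 fallback can be pictured as below. This is a sketch only: the
command-line arguments and the paired URLs are placeholders, not the
pipeline's actual invocations:

.. code:: python

    # Sketch of the Step 1 idea: try aspc first, then fall back to wget,
    # which fetches a copy of the same file from a different URL. The
    # flags shown here are illustrative.
    import subprocess

    def download_sra(aspera_url, http_url, dest_dir):
        """Try aspc; on any failure, retry with wget."""
        try:
            subprocess.run(["aspc", aspera_url, dest_dir], check=True)
            return "aspc"
        except (subprocess.CalledProcessError, FileNotFoundError):
            # -c resumes partial downloads; -P sets the target directory
            subprocess.run(["wget", "-c", "-P", dest_dir, http_url],
                           check=True)
            return "wget"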
Typically, about 100 GSEs and a few thousand GSMs are picked by our
collaborators and grouped into a batch. Steps 1 and 2 are done in a
sub-batch-by-sub-batch fashion, where all GSMs of a sub-batch are processed
in parallel until finished. Each sub-batch of GSMs is selected based on
their file sizes (estimated from the sizes of the sra files and their
resultant fastq.gz files) and on how much disk space is available on the
localhost, as specified in a configuration file (``rp_config.yml``,
illustrated below). At the end of the second step, a submission script is
generated for each GSM, and at Step 3 a new job is submitted to the cluster
to process the GSM using RSEM. A control mechanism has also been implemented
to avoid overuse of cluster resources such as compute nodes and disk space.
The first two steps are run by the command ``rp-run``, while the generation
of the submission script and job submission are handled by the command
``rp-transfer``...
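The two disk-space limits named in this README can be pictured as follows
(using PyYAML); the example values, comments, and any structure beyond the
``LOCAL_MAX_USAGE`` and ``LOCAL_MIN_FREE`` keys are invented for
illustration:

.. code:: python

    # Sketch: load rp_config.yml and read the two disk-space limits
    # mentioned in this README. The example values are made up.
    import yaml

    EXAMPLE_CONFIG = """
    LOCAL_MAX_USAGE: 500 GB   # most space the pipeline may occupy locally
    LOCAL_MIN_FREE: 100 GB    # never let free disk space drop below this
    """

    config = yaml.safe_load(EXAMPLE_CONFIG)
    print(config["LOCAL_MAX_USAGE"], config["LOCAL_MIN_FREE"])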
It will create all folders for all GSMs according to a designated
structure, i.e. ``//``, then fetch information about the sra files for each
GSM from the NCBI FTP server and save it to a file named ``sras_info.yaml``
in each GSM directory (sketched below). The fetching process will take a
while, depending on how many GSMs are to be processed...
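Caching the fetched sra information might look like the sketch below. The
field names inside ``sras_info.yaml`` and the shape of ``gsm_dir`` are
assumptions; the README elides the exact directory structure:

.. code:: python

    # Sketch: cache per-GSM sra file information in sras_info.yaml so
    # later steps can size sub-batches without re-querying the FTP
    # server. The field names are assumptions.
    import os
    import yaml

    def save_sras_info(gsm_dir, sras):
        """sras: list of dicts like {"name": ..., "size": <bytes>}."""
        os.makedirs(gsm_dir, exist_ok=True)
        with open(os.path.join(gsm_dir, "sras_info.yaml"), "w") as fh:
            yaml.safe_dump(sras, fh, default_flow_style=False)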
3. It will filter the samples generated from Step 1 and produce a sublist
of samples to be processed right away, based on the sizes of the sra files
and the estimated sizes of their fastq.gz files (~1.5x), as well as the
space available for use as specified in ``rp_config.yml`` (mainly
``LOCAL_MAX_USAGE`` and ``LOCAL_MIN_FREE``); a sketch of this filter
follows. Processed samples are recorded in a file named
``sra2fastqed_GSMs.txt``...
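A minimal sketch of that size-based filter, under the stated ~1.5x
fastq.gz estimate; the function name and bookkeeping are illustrative, not
the pipeline's actual API:

.. code:: python

    # Sketch of the filter: keep GSMs whose sra file plus the estimated
    # fastq.gz output (~1.5x the sra size) fits within the space budget
    # derived from LOCAL_MAX_USAGE and LOCAL_MIN_FREE.
    def select_sub_batch(gsms, sra_sizes, usable_bytes):
        """gsms: sample ids; sra_sizes: id -> sra size in bytes;
        usable_bytes: disk-space budget for this sub-batch."""
        selected, needed = [], 0
        for gsm in gsms:
            # the sra file itself plus ~1.5x for its fastq.gz files
            cost = sra_sizes[gsm] * (1 + 1.5)
            if needed + cost > usable_bytes:
                continue  # too big for this sub-batch; defer to a later one
            selected.append(gsm)
            needed += cost
        return selected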
For installation and usage instructions, please refer to ``INSTALL.rst``
and ``USAGE.rst``.

If you have found any bugs, or have questions or comments, please contact
Zhuyi Xue ([email protected])...

.. |build| image:: https://travis-ci.org/bcgsc/rsempipeline.svg?branch=master
   :alt: Build Status
   :target: https://travis-ci.org/bcgsc/rsempipeline
.. |cov| image:: https://coveralls.io/repos/bcgsc/rsempipeline/badge.svg?branch=master&service=github
   :alt: Coverage Status
   :target: https://coveralls.io/github/bcgsc/rsempipeline?branch=master