https://github.com/projectglow/glow
An open-source toolkit for large-scale genomic analysis
- Host: GitHub
- URL: https://github.com/projectglow/glow
- Owner: projectglow
- License: apache-2.0
- Created: 2019-10-04T21:26:47.000Z (about 6 years ago)
- Default Branch: main
- Last Pushed: 2024-11-10T00:12:18.000Z (12 months ago)
- Last Synced: 2024-11-16T12:48:01.422Z (12 months ago)
- Topics: delta, genomics, gwas, machine-learning, population-genetics, regression, spark
- Language: Scala
- Homepage: https://projectglow.io
- Size: 96.7 MB
- Stars: 273
- Watchers: 20
- Forks: 111
- Open Issues: 36
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE.txt
- Code of conduct: CODE-OF-CONDUCT.md
Awesome Lists containing this project
- awesome-python-machine-learning-resources - (GitHub · 40% open · ⏱️ 09.05.2022) - (medical domain)
README
An open-source toolkit for large-scale genomic analyses
Explore the docs » · Issues
Glow is an open-source toolkit to enable bioinformatics at biobank-scale and beyond.
[Tests](https://github.com/projectglow/glow/actions/workflows/tests.yml)
[Documentation](https://glow.readthedocs.io/en/latest/?badge=latest)
[PyPI](https://pypi.org/project/glow.py/)
[Maven Central](https://mvnrepository.com/artifact/io.projectglow)
[Coverage](https://codecov.io/gh/projectglow/glow)
[Conda](https://anaconda.org/conda-forge/glow)
[DOI](https://zenodo.org/badge/latestdoi/212904926)
# Easy to get started
The toolkit includes building blocks to perform common analyses right away:
- Load VCF, BGEN, and Plink files into distributed DataFrames (see the sketch after this list)
- Perform quality control and data manipulation with built-in functions
- Perform variant normalization and liftOver
- Perform genome-wide association studies
- Integrate with Spark ML libraries for population stratification
- Parallelize command line tools to scale existing workflows
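As a minimal Scala sketch of the first two building blocks, assuming Glow and Spark are on the classpath (the file path is a placeholder; `call_summary_stats` and `expand_struct` are Glow built-in SQL functions used here for illustration):

```
// A minimal sketch (placeholder path): load a VCF into a Spark DataFrame and
// compute per-variant call statistics with Glow's built-in functions.
import io.projectglow.Glow
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

val spark = SparkSession.builder().appName("glow-sketch").getOrCreate()
val sess = Glow.register(spark) // registers Glow's SQL functions and transformers

val vcf = sess.read.format("vcf").load("/data/genotypes.vcf.gz")
val qc = vcf.select(expr("expand_struct(call_summary_stats(genotypes))"))
qc.show()
```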
# Built to scale
Glow makes genomic data work with Spark, the leading engine for working with large structured
datasets. It fits natively into the ecosystem of tools that have enabled thousands of organizations
to scale their workflows. Glow bridges the gap between bioinformatics and the
Spark ecosystem.
# Flexible
Glow works with datasets in common file formats like VCF, BGEN, and Plink as well as
high-performance big data
standards. You can write queries using the native Spark SQL APIs in Python, SQL, R, Java, and Scala.
The same APIs allow you to bring your genomic data together with other datasets such as electronic
health records, real world evidence, and medical images. Glow makes it easy to parallelize existing
tools and libraries implemented as command line tools or Pandas functions.
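As a hedged illustration of bringing genomic data together with other datasets, the sketch below flattens per-sample genotype records out of a VCF DataFrame and joins them with a hypothetical clinical table. Here `sess` is the Glow-registered session from the previous sketch, and the EHR path and its columns are assumptions:

```
import org.apache.spark.sql.functions.{col, explode}

// Sketch only: the EHR path and its sampleId key are hypothetical.
val vcf = sess.read.format("vcf").load("/data/genotypes.vcf.gz")
val ehr = sess.read.parquet("/data/ehr.parquet")

// Explode the per-variant genotypes array into one row per (variant, sample).
val perSample = vcf
  .select(col("contigName"), col("start"), explode(col("genotypes")).as("gt"))
  .select(col("contigName"), col("start"), col("gt.sampleId").as("sampleId"), col("gt.calls"))

// Join with clinical covariates on the shared sample identifier.
val joined = perSample.join(ehr, Seq("sampleId"))
```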
# Building and Testing
This project is built using [sbt](https://www.scala-sbt.org/1.0/docs/Setup.html) and Java 8.
To build and run Glow, you must [install conda](https://docs.conda.io/en/latest/miniconda.html) and
activate the environment in `python/environment.yml`.
```
conda env create -f python/environment.yml
conda activate glow
```
When the environment file changes, you must update the environment:
```
conda env update -f python/environment.yml
```
Start an sbt shell using the `sbt` command.
> **FYI**: The following SBT projects are built on Spark 3.5.1/Scala 2.12.19 by default. To change the Spark version and
Scala version, set the environment variables `SPARK_VERSION` and `SCALA_VERSION`.
To compile the main code:
```
compile
```
To run all Scala tests:
```
core/test
```
To test a specific suite:
```
core/testOnly *VCFDataSourceSuite
```
To run all Python tests:
```
python/test
```
These tests will run with the same Spark classpath as the Scala tests.
To test a specific Python test file:
```
python/pytest python/test_render_template.py
```
When using the `pytest` key, all arguments are passed directly to the
[pytest runner](https://docs.pytest.org/en/latest/usage.html).
To run documentation tests:
```
docs/test
```
To run the Scala, Python and documentation tests:
```
test
```
To run Scala tests against the staged Maven artifact with the current stable version:
```
stagedRelease/test
```
## Testing code on a Databricks cluster
You can use the [build](https://github.com/projectglow/glow/blob/main/bin/build) script to create artifacts that you can install on a Databricks cluster.
To build Python and Scala artifacts:
```
bin/build --scala --python
```
To build only Python (no sbt installation required):
```
bin/build --python
```
To install the artifacts on a Databricks cluster after building:
```
bin/build --python --scala --install MY_CLUSTER_ID
```
## IntelliJ Tips
If you use IntelliJ, you'll want to:
- Download library and SBT sources; use SBT shell for imports and build from [IntelliJ](https://www.jetbrains.com/help/idea/sbt.html)
- Set up [scalafmt on save](https://scalameta.org/scalafmt/docs/installation.html)
To run Python unit tests from inside IntelliJ, you must:
- Open the "Terminal" tab in IntelliJ
- Activate the glow conda environment (`conda activate glow`)
- Start an sbt shell from inside the terminal (`sbt`)
The "sbt shell" tab in IntelliJ will NOT work since it does not use the glow conda environment.
To run `test` or `testOnly` in remote debug mode with IntelliJ IDEA, set the remote debug configuration in IntelliJ to 'Attach to remote JVM' mode with a specific port number (the default port 5005 is used here), and then modify `testJavaOptions` in `build.sbt` to include:
```
"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
```
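For orientation only, the flag might be appended to a `testJavaOptions` sequence along these lines; the actual definition in the project's `build.sbt` may be shaped differently:

```
// Hypothetical shape; adapt to the existing testJavaOptions definition in build.sbt.
lazy val testJavaOptions = Seq(
  "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
)
```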