Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/janeliascicomp/zarrcade
Create web-based OME-Zarr galleries
- Host: GitHub
- URL: https://github.com/janeliascicomp/zarrcade
- Owner: JaneliaSciComp
- License: bsd-3-clause
- Created: 2024-03-08T19:48:41.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2024-10-24T07:56:19.000Z (15 days ago)
- Last Synced: 2024-10-24T22:27:13.938Z (15 days ago)
- Topics: microscopy-images, ome-zarr, webapp
- Language: Jupyter Notebook
- Homepage:
- Size: 4.86 MB
- Stars: 3
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Zarrcade
![logoz@0 1x](https://github.com/user-attachments/assets/21e45ddf-f53b-4391-9014-e1cad0243e7e)
![Python CI](https://github.com/JaneliaSciComp/zarrcade/actions/workflows/python-ci.yml/badge.svg)
Zarrcade is a web application for easily browsing, searching, and visualizing collections of [OME-NGFF](https://github.com/ome/ngff) (i.e. OME-Zarr) images. It implements the following features:
* Automatic discovery of OME-Zarr images on [any storage backend supported by fsspec](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations) including file system, AWS S3, Azure Blob, Google Cloud Storage, Dropbox, etc.
* MIP (maximum intensity projection) and thumbnail generation
* Web-based MIP gallery with convenient viewing links to NGFF-compliant viewers
* Searchable/filterable metadata and annotations
* Neuroglancer state generation for multichannel images
* Built-in file proxy for non-public storage backends
* Integration with external file proxies (e.g. [x2s3](https://github.com/JaneliaSciComp/x2s3))
![screenshot](https://github.com/user-attachments/assets/15ff03b4-2c90-4307-9771-fb6041676588)

## Getting Started
### 1. Install miniforge
[Install miniforge](https://docs.conda.io/en/latest/miniforge.html) if you don't already have it.
### 2. Clone this repo
```bash
git clone https://github.com/JaneliaSciComp/zarrcade.git
cd zarrcade
```

### 3. Initialize the conda environment
```bash
conda env create -f environment.yml
conda activate zarrcade
```

### (Optional) Try an example
See the [Example](#example) section below to try out the example before working with your own data.
### 4. Create OME-Zarr images
If your images are not already in OME-Zarr format, you will need to convert them, e.g. using bioformats2raw:
```bash
bioformats2raw -w 128 -h 128 -z 64 --compression zlib /path/to/input /path/to/zarr
```

If you have many images to convert, we recommend the [nf-omezarr Nextflow pipeline](https://github.com/JaneliaSciComp/nf-omezarr), which runs bioformats2raw efficiently over a collection of images and lets you scale the conversion to your available compute resources (cluster, cloud, etc.).
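For just a few images, a simple shell loop over the input files may suffice (a minimal sketch; the paths and the `.czi` extension are placeholders, and the options simply reuse those from the example above):

```bash
# Convert each input file into its own OME-Zarr store
for f in /path/to/input/*.czi; do
    name=$(basename "$f" .czi)
    bioformats2raw -w 128 -h 128 -z 64 --compression zlib "$f" "/path/to/zarr/${name}.zarr"
done
```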
### 5. Import images and metadata into Zarrcade
You can import images into Zarrcade using the provided command line script:
```bash
bin/import.py -d /root/data/dir -c mycollection
```

This will automatically create a local SQLite database containing a Zarrcade **collection** named "mycollection" and populate it with information about the images in the specified directory. By default, it will also create MIPs and thumbnails for each image in `./static/.zarrcade`.
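To verify the import, you can list the generated MIPs and thumbnails (the SQLite database location depends on your configuration, so only the default thumbnail directory is shown here):

```bash
# MIPs and thumbnails are written under ./static/.zarrcade by default
ls static/.zarrcade
```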
To add extra metadata about the images, you can provide a CSV file with the `-i` flag:
```bash
./bin/import.py -d /root/data/dir -c collection_name -i input.csv
```

The CSV file's first column must be a relative path to the OME-Zarr image within the root data directory. The remaining columns can be any annotations that will be searched and displayed within the gallery, e.g.:
```csv
Path,Line,Marker
relative/path/to/ome1.zarr,JKF6363,Blu
relative/path/to/ome2.zarr,JDH3562,Blu
```

Read more about the import options in the [Data Import](./docs/DataImport.md) section of the documentation.
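If you have many images, one way to bootstrap a CSV like the one above is to list every `.zarr` store under the data root and fill in the annotation columns afterwards (a minimal sketch; the `Line` and `Marker` column names simply mirror the example above):

```bash
# Write a starter CSV with one row per OME-Zarr store, paths relative to the data root
root=/root/data/dir
echo "Path,Line,Marker" > input.csv
(cd "$root" && find . -name '*.zarr' -type d) | while read -r z; do
    # strip the leading "./" so the Path column is relative to the data root
    echo "${z#./},," >> input.csv
done
```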
### 6. Run the Zarrcade web application
Start the development server, pointing it to your OME-Zarr data:
```bash
uvicorn zarrcade.serve:app --host 0.0.0.0 --reload
```

Your images and annotations will be indexed and browseable at [http://0.0.0.0:8000](http://0.0.0.0:8000). Read the documentation below for more details on how to configure the web UI and deploy the service in production.
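Once the server is running, a quick request from another terminal confirms it is reachable:

```bash
# Expect a successful HTTP response from the gallery's landing page
curl -I http://0.0.0.0:8000/
```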
## Example
To try an example, follow steps 1–3 above to set up the environment, then use the following command to import the example data:
```bash
./bin/import.py -d s3://janelia-data-examples/fly-efish -c flyefish -m docs/flyefish-example.csv
```

Copy the example `settings.yaml` file to your working directory and start the server:
```bash
cp docs/settings.yaml.example settings.yaml
uvicorn zarrcade.serve:app --host 0.0.0.0 --reload
```

The example should be visible at [http://0.0.0.0:8000](http://0.0.0.0:8000).
## Documentation
* [Overview](./docs/Overview.md) - learn about the data model and overall architecture
* [Configuration](./docs/Configuration.md) - configure the Zarrcade service using settings.yaml or environment variables
* [Deployment](./docs/Deployment.md) - instructions for deploying the service with Docker and in production mode
* [Development Notes](./docs/Development.md) - technical notes for developers working on Zarrcade itself

## Known Limitations
* Zarrcade has so far only been tested with OME-Zarr images generated by the [bioformats2raw](https://github.com/ome/bioformats2raw) tool.
* The `OmeZarrAgent` does not currently support the full OME-Zarr specification and may fail with certain types of images. If you encounter an error with your data, please open an issue on the [GitHub repository](https://github.com/JaneliaSciComp/zarrcade/issues).

## Attributions