{"id":13483270,"url":"https://github.com/azavea/raster-vision","last_synced_at":"2025-05-13T15:09:13.217Z","repository":{"id":37423243,"uuid":"80733109","full_name":"azavea/raster-vision","owner":"azavea","description":"An open source library and framework for deep learning on satellite and aerial imagery.","archived":false,"fork":false,"pushed_at":"2025-04-04T16:56:35.000Z","size":94752,"stargazers_count":2134,"open_issues_count":38,"forks_count":391,"subscribers_count":70,"default_branch":"master","last_synced_at":"2025-04-23T18:54:59.848Z","etag":null,"topics":["classification","computer-vision","deep-learning","geospatial","machine-learning","object-detection","pytorch","remote-sensing","semantic-segmentation"],"latest_commit_sha":null,"homepage":"https://docs.rastervision.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/azavea.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"docs/CONTRIBUTING.rst","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2017-02-02T14:31:54.000Z","updated_at":"2025-04-18T13:10:54.000Z","dependencies_parsed_at":"2024-11-15T08:04:01.836Z","dependency_job_id":"39cc43fe-2fd0-4adf-9be9-768b2540ca38","html_url":"https://github.com/azavea/raster-vision","commit_stats":{"total_commits":2932,"total_committers":41,"mean_commits":71.51219512195122,"dds":0.6265347885402456,"last_synced_commit":"59149f840ebe02fa6afabfaaa71bd9e8149f175e"},"previous_names":[],"tags_count":25,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/azavea%2Frast
er-vision","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/azavea%2Fraster-vision/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/azavea%2Fraster-vision/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/azavea%2Fraster-vision/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/azavea","download_url":"https://codeload.github.com/azavea/raster-vision/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253969239,"owners_count":21992262,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["classification","computer-vision","deep-learning","geospatial","machine-learning","object-detection","pytorch","remote-sensing","semantic-segmentation"],"created_at":"2024-07-31T17:01:09.563Z","updated_at":"2025-05-13T15:09:08.200Z","avatar_url":"https://github.com/azavea.png","language":"Python","readme":"![Raster Vision Logo](docs/img/raster-vision-logo.png)\n\u0026nbsp;\n\n[![Pypi](https://img.shields.io/pypi/v/rastervision.svg)](https://pypi.org/project/rastervision/)\n[![Documentation Status](https://readthedocs.org/projects/raster-vision/badge/?version=latest)](https://docs.rastervision.io/en/stable/?badge=stable)\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![Build 
Status](https://github.com/azavea/raster-vision/actions/workflows/release.yml/badge.svg)](https://github.com/azavea/raster-vision/actions/workflows/release.yml)\n[![codecov](https://codecov.io/gh/azavea/raster-vision/branch/master/graph/badge.svg)](https://codecov.io/gh/azavea/raster-vision)\n\nRaster Vision is an open source Python **library** and **framework** for building computer vision models on satellite, aerial, and other large imagery sets (including oblique drone imagery).\n\nIt has built-in support for chip classification, object detection, and semantic segmentation with backends using PyTorch.\n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"docs/img/cv-tasks.png\" alt=\"Examples of chip classification, object detection and semantic segmentation\" width=\"60%\"\u003e\n\u003c/div\u003e\n\n**As a library**, Raster Vision provides a full suite of utilities for dealing with all aspects of a geospatial deep learning workflow: reading geo-referenced data, training models, making predictions, and writing out predictions in geo-referenced formats.\n\n**As a low-code framework**, Raster Vision allows users (who don't need to be experts in deep learning!) 
to quickly and repeatably configure experiments that execute a machine learning pipeline including: analyzing training data, creating training chips, training models, creating predictions, evaluating models, and bundling the model files and configuration for easy deployment.\n![Overview of Raster Vision workflow](docs/img/rv-pipeline-overview.png)\n\nRaster Vision also has built-in support for running experiments in the cloud using [AWS Batch](https://docs.rastervision.io/en/stable/setup/aws.html#running-on-aws-batch) as well as [AWS Sagemaker](https://docs.rastervision.io/en/stable/setup/aws.html#running-on-aws-sagemaker).\n\nSee the [documentation](https://docs.rastervision.io/en/stable/) for more details.\n\n## Installation\n\n*For more details, see the [Setup documentation](https://docs.rastervision.io/en/stable/setup/)*.\n\n### Install via `pip`\n\nYou can install Raster Vision directly via `pip`.\n\n```sh\npip install rastervision\n```\n\n### Use Pre-built Docker Image\n\nAlternatively, you may use a Docker image. Docker images are published to [quay.io](https://quay.io/repository/azavea/raster-vision) (see the *tags* tab).\n\nWe publish a new tag per merge into `master`, which is tagged with the first 7 characters of the commit hash. To use the latest version, pull the tag with the `latest` suffix, e.g. `raster-vision:pytorch-latest`. Git tags are also published, with the GitHub tag name as the Docker tag suffix.\n\n### Build Docker Image\n\nYou can also build a Docker image from scratch yourself. After cloning this repo, run `docker/build`, and then run the container using `docker/run`.\n\n## Usage Examples and Tutorials\n\n**Non-developers** may find it easiest to use Raster Vision as a low-code framework where Raster Vision handles all the complexities and the user only has to configure a few parameters. The [*Quickstart guide*](https://docs.rastervision.io/en/stable/framework/quickstart.html) is a good entry-point into this. 
More advanced examples can be found on the [*Examples*](https://docs.rastervision.io/en/stable/framework/examples.html) page.\n\nFor **developers** and those looking to dive deeper or combine Raster Vision with their own code, the best starting point is [*Usage Overview*](https://docs.rastervision.io/en/stable/usage/overview.html), followed by [*Basic Concepts*](https://docs.rastervision.io/en/stable/usage/basics.html) and [*Tutorials*](https://docs.rastervision.io/en/stable/usage/tutorials/index.html).\n\n\n## Contact and Support\n\nYou can ask questions and talk to developers (let us know what you're working on!) at:\n* [Discussion Forum](https://github.com/azavea/raster-vision/discussions)\n* [Mailing List](https://groups.google.com/forum/#!forum/raster-vision)\n\n## Developing\n\nTo set up the development environment:\n- Fork and clone the repo and navigate to it.\n- Create and activate a new Python virtual environment via your environment manager of choice (`mamba`, `uv`, `pyenv`, etc.).\n- Run `scripts/setup_dev_env.sh` to install all Raster Vision plugins in editable mode along with all the dependencies.\n\n## Contributing\n\n*For more information, see [Contributing](https://docs.rastervision.io/en/stable/CONTRIBUTING.html).*\n\nWe are happy to take contributions! It is best to get in touch with the maintainers\nabout larger features or design changes *before* starting the work,\nas it will make the process of accepting changes smoother.\n\nEveryone who contributes code to Raster Vision will be asked to sign a Contributor License Agreement. See [Contributing](https://docs.rastervision.io/en/stable/CONTRIBUTING.html) for instructions.\n\n## Licenses\n\nRaster Vision is licensed under the Apache 2 license. 
See license [here](./LICENSE).\n\n3rd party licenses for all dependencies used by Raster Vision can be found [here](./THIRD_PARTY_LICENSES.txt).\n","funding_links":[],"categories":["Python","Deep Learning","Remote Sensing and Imagery"],"sub_categories":["Deep Learning Framework for Geospatial","Shapefiles"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fazavea%2Fraster-vision","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fazavea%2Fraster-vision","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fazavea%2Fraster-vision/lists"}