# CitySurfaces: City-scale Semantic Segmentation of Sidewalk Surfaces
CitySurfaces is a framework that combines active learning and semantic segmentation to locate, delineate, and classify sidewalk paving materials from street-level images. Our framework adopts a recent high-performing semantic segmentation model (Tao et al., 2020), which uses hierarchical multi-scale attention combined with object-contextual representations.
The framework was presented in our [paper](https://www.sciencedirect.com/science/article/pii/S2210670721008933), published in the journal *Sustainable Cities and Society* (arXiv link [here](https://arxiv.org/abs/2201.02260)).
**CitySurfaces: City-scale semantic segmentation of sidewalk materials**\
Maryam Hosseini, Fabio Miranda, Jianzhe Lin, Claudio T. Silva,\
*Sustainable Cities and Society, 2022*
```
@article{HOSSEINI2022103630,
  title = {CitySurfaces: City-scale semantic segmentation of sidewalk materials},
  journal = {Sustainable Cities and Society},
  volume = {79},
  pages = {103630},
  year = {2022},
  issn = {2210-6707},
  doi = {10.1016/j.scs.2021.103630},
  url = {https://www.sciencedirect.com/science/article/pii/S2210670721008933},
  author = {Maryam Hosseini and Fabio Miranda and Jianzhe Lin and Claudio T. Silva},
  keywords = {Sustainable built environment, Surface materials, Urban heat island, Semantic segmentation, Sidewalk assessment, Urban analytics, Computer vision}
}
```
You can use our pre-trained model to run inference on your own street-level images. Our extended model can classify eight different classes of paving materials.
The team includes:
* [Maryam Hosseini](https://maryamhosseini.me) (MIT / NYU)
* [Fabio Miranda](https://fmiranda.me) (UIC)
* Jianzhe Lin (NYU)
* [Cláudio T. Silva](https://vgc.poly.edu/~csilva/) (NYU)
## Table of contents
* [Updates](#updates)
* [Installing prerequisites](#installing-prerequisites)
* [Run inference on your own data](#run-inference-on-your-own-data)
## Updates
New weights from our updated model, trained on more cities (now including DC, Chicago, and Philadelphia), are available in our [Google Drive](https://drive.google.com/drive/folders/1W5STd9JmVZkAsSN3TidOMMrv7aZtR-4_?usp=sharing).
## Installing prerequisites
The framework is based on [NVIDIA Semantic Segmentation](https://github.com/NVIDIA/semantic-segmentation). The code is tested with PyTorch 1.7 and Python 3.9. You can use the provided `Dockerfile` to build an image.
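For example, a minimal Docker workflow from the repository root might look like the sketch below; the `citysurfaces` image tag and the `/workspace` mount point are illustrative choices, not names defined by the repository:
```bash
# Build an image from the provided Dockerfile; the tag name is arbitrary
> docker build -t citysurfaces .

# Start a container with GPU access and the repository mounted inside it
> docker run --gpus all -it -v $(pwd):/workspace citysurfaces
```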
## Run inference on your own data
Follow the instructions below to segment your own image data. Most of the steps are based on NVIDIA's original instructions, with modifications to the weights and dataset names.
### Download Weights
* Create a directory where you can keep large files.
```bash
> mkdir <large_asset_dir>
```
* Update `__C.ASSETS_PATH` in `config.py` to point at that directory: `__C.ASSETS_PATH=<large_asset_dir>`
* Download our pretrained weights from [Google Drive](https://drive.google.com/drive/folders/1W5STd9JmVZkAsSN3TidOMMrv7aZtR-4_?usp=sharing). Weights should be placed under `<large_asset_dir>/seg_weights`.
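As a sketch, one way to fetch the weights from the command line is with the third-party `gdown` utility (an assumption on our part; downloading through the browser works just as well). All paths below are placeholders:
```bash
# Install gdown and download the shared Drive folder
> pip install gdown
> gdown --folder https://drive.google.com/drive/folders/1W5STd9JmVZkAsSN3TidOMMrv7aZtR-4_

# Place the checkpoints where config.py expects them
> mkdir -p <large_asset_dir>/seg_weights
> mv <downloaded_folder>/*.pth <large_asset_dir>/seg_weights/
```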
### Running the code
The instructions below make use of a tool called `runx`, which we find useful for automating experiment running and summarization. For more information about this tool, please see [runx](https://github.com/NVIDIA/runx).
In general, you can either use the runx-style command lines shown below, or call `python train.py` directly if you like.
### Inference
Update `inference-citysurfaces.yml` in the `scripts` directory with the path to the image folder you would like to run inference on.
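As a rough sketch of what to edit: runx scripts follow a `CMD`/`HPARAMS` layout, but the exact field names in `inference-citysurfaces.yml` may differ, so treat the snippet below as illustrative only:
```bash
> cat scripts/inference-citysurfaces.yml
CMD: "python -m val"
HPARAMS: [
  {
    dataset: citysurfaces,
    eval: test,
    eval_folder: /path/to/your/images,   # <-- point this at your image folder
    ...
  }
]
```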
Run
```bash
> python -m runx.runx scripts/inference-citysurfaces.yml -i
```
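The `-i` flag tells `runx` to run the job interactively in the foreground rather than submitting it to a batch scheduler.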
OR
Run directly from the command line with one GPU:
```bash
> python -m val --dataset citysurfaces --bs_val 1 --eval test \
    --eval_folder <path_to_your_images> \
    --snapshot <path_to_weights> \
    --arch ocrnet.HRNet_Mscale --trunk hrnetv2 \
    --result_dir <output_dir>
```
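For instance, a filled-in invocation could look like the following; every path here is a hypothetical placeholder, and the checkpoint file name will depend on which weights you downloaded:
```bash
# Illustrative paths only; substitute your own directories and checkpoint
> python -m val --dataset citysurfaces --bs_val 1 --eval test \
    --eval_folder ./my_street_images \
    --snapshot <large_asset_dir>/seg_weights/citysurfaces.pth \
    --arch ocrnet.HRNet_Mscale --trunk hrnetv2 \
    --result_dir ./citysurfaces_results
```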
The results should look like the examples below, with the input image and the segmentation mask shown side by side.