Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/andreaconti/torch_kitti
PyTorch utilities to handle the KITTI Vision Benchmark Suite
- Host: GitHub
- URL: https://github.com/andreaconti/torch_kitti
- Owner: andreaconti
- License: mit
- Created: 2020-11-02T21:18:50.000Z (about 4 years ago)
- Default Branch: main
- Last Pushed: 2022-07-21T11:00:53.000Z (over 2 years ago)
- Last Synced: 2024-11-02T12:25:15.312Z (18 days ago)
- Topics: computer-vision, deep-learning, kitti-dataset, machine-learning, pytorch
- Language: Python
- Homepage:
- Size: 6.98 MB
- Stars: 9
- Watchers: 0
- Forks: 0
- Open Issues: 3
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
# Pytorch KITTI
[![code style](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
![Python version](https://img.shields.io/badge/python-3.6|3.7|3.8|3.9|3.10-green.svg)
[![PyPI version](https://badge.fury.io/py/torch-kitti.svg)](https://badge.fury.io/py/torch-kitti)
![License](https://img.shields.io/pypi/l/torch-kitti)

This project aims to provide a simple yet effective way to scaffold and load the [KITTI Vision Benchmark Dataset](http://www.cvlibs.net/datasets/kitti/raw_data.php), providing:
- **Datasets**: Pytorch datasets to load each dataset
- **Scaffolding**: to download the datasets
- **Metrics**: common metrics used for each dataset
- **Transformations**: utilities to manipulate samples
## Installation
To install `torch-kitti`:
```bash
$ pip install torch-kitti
```

## Scaffolding datasets
To manually download the datasets the `torch-kitti` command line utility comes in handy:
```bash
$ torch_kitti download --help
usage: Torch Kitti download [-h]
                            {sync_rectified,depth_completion,depth_prediction}
                            path

positional arguments:
  {sync_rectified,depth_completion,depth_prediction}
                        name of the dataset to download
  path                  path where to scaffold the dataset

optional arguments:
  -h, --help            show this help message and exit
```

The currently available datasets are:
- KITTI Depth Completion Dataset
- KITTI Depth Prediction Dataset
- KITTI Raw Sync+Rect Dataset

## Loading Datasets
All datasets return dictionaries; utilities to manipulate them can be found in the `torch_kitti.transforms` module. Each dataset also provides options to include optional fields: for instance, `KittiDepthCompletionDataset` by default provides the image `img`, its sparse depth ground truth `gt`, and the sparse LiDAR hints `lidar`, but with `load_stereo=True` stereo images are included for each example.
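Since every sample is a plain dictionary, per-field transforms are straightforward. The sketch below mimics what a feature-wise transform such as `ApplyToFeatures` does, using stand-in NumPy arrays; the field names come from the description above, but the shapes, values, and the `apply_to_features` helper are illustrative, not the library's actual implementation:

```python
import numpy as np

# Stand-in for one dataset sample: a dict of arrays (shapes illustrative).
sample = {
    "img": np.zeros((375, 1242, 3), dtype=np.uint8),
    "gt": np.zeros((375, 1242), dtype=np.float32),
    "lidar": np.zeros((375, 1242), dtype=np.float32),
}

def apply_to_features(fn, sample, features):
    """Apply fn only to the listed fields, leaving the rest untouched."""
    return {k: fn(v) if k in features else v for k, v in sample.items()}

# Crop the top-left 256x512 region of every listed field.
cropped = apply_to_features(lambda a: a[:256, :512], sample, ["img", "gt", "lidar"])
print(cropped["img"].shape)  # (256, 512, 3)
```

The real `ApplyToFeatures` shown below plays the same role, but composes `torchvision` transforms instead of a plain lambda.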
```python
from torchvision.transforms import Compose, RandomCrop, ToTensor

from torch_kitti.depth_completion import KittiDepthCompletionDataset
from torch_kitti.transforms import ApplyToFeatures

transform = ApplyToFeatures(
    Compose(
        [
            ToTensor(),
            RandomCrop([256, 512]),
        ]
    ),
    features=["img", "gt", "lidar"],
)

ds = KittiDepthCompletionDataset(
    "kitti_raw_sync_rect_root",
    "kitti_depth_completion_root",
    load_stereo=False,
    load_sequence=3,
    transform=transform,
    download=True,  # download if not found
)
```

## Customizing Datasets
Each dataset exposes an ``elems`` attribute containing the paths used by each loaded example; it can be modified to customize the loaded data. It is composed of ``DataGroup``s, each containing many ``DataElem``s. The latter can load many different file types, and each one is loaded automatically when added to a ``DataGroup``; the supported data types are image, depth, pcd, calib, imu, rt and intrinsics.
For example, to load only a specific drive sequence for depth completion you could:
```python
class SeqKittiDepthCompletionDataset(KittiDepthCompletionDataset):
    def __init__(self, drive_name: str, *args, **kwargs):
        super().__init__(*args, subset="all", **kwargs)
        self.elems = sorted(
            filter(
                lambda group: group.drive == drive_name and group.cam == 2,
                self.elems,
            ),
            key=lambda group: group.idx,
        )
```
## Contributing
### Developing setup
Clone the repository and `cd` into the folder, then prepare a virtual environment (1), install the `dev` and `doc` dependencies (2), and install `pre-commit` (3).
```bash
$ git clone https://github.com/andreaconti/torch_kitti.git
$ cd torch_kitti
$ python3 -m virtualenv .venv && source .venv/bin/activate # (1)
$ pip install ".[dev,doc]" # (2)
$ pre-commit install # (3)
$ python3 setup.py develop
$ pytest
```

Feel free to open an issue on [GitHub](https://github.com/andreaconti/torch_kitti/issues), fork the [repository](https://github.com/andreaconti/torch_kitti) and submit a pull request to solve bugs, improve docs, or add datasets and features. All new features must be tested.
## Disclaimer on KITTI Vision Benchmark Suite
This library is a utility that downloads and prepares the dataset. The KITTI Vision Benchmark Suite is not hosted by this project, nor does this project claim that you have a license to use the dataset; it is your responsibility to determine whether you have permission to use it under its license. You can find more details [here](http://www.cvlibs.net/datasets/kitti/).