
## Google Summer of Code (GSoC) 2022

### Project Details

- **Contributor:** Neelay Shah
- **Mentors:** Dr. James Turner and Prof. Thomas Nowotny
- **Project Title:** Creating Benchmark Datasets for Object Recognition with Event-based Cameras
- **Project-related links:**
  - [Code Repository](https://github.com/NeelayS/event_aug)
  - [Documentation](https://event-aug.readthedocs.io/)
  - [Tutorials](https://github.com/NeelayS/event_aug/tree/main/tutorial_ntbks)
  - [GSoC Project Page](https://summerofcode.withgoogle.com/programs/2022/projects/dSlJsb1g)
  - [Work Log](https://neelays.github.io/gsoc-2022/)

### Project Abstract

Event-based vision is a subfield of computer vision that deals with data from event-based cameras. Event cameras, also known as neuromorphic cameras, are bio-inspired imaging sensors that work differently from traditional cameras: instead of capturing images at a fixed rate, they measure pixel-wise brightness changes asynchronously. Because event cameras capture data in a fundamentally different way, novel methods are required to process their output. Beyond data capture, event-based vision also encompasses techniques for processing the captured data (events), including learning-based methods and models such as spiking neural networks (SNNs). This project aims to create benchmark datasets for object recognition tasks with event-based cameras. Applying machine learning to such tasks requires a sufficiently large and varied collection of data, so the primary goal of this project is to develop Python utilities that augment event-camera recordings of objects, captured in an academic setting, in various ways to create benchmark datasets.

### Progress Log

#### Pre-coding Period

- Met with mentors (virtually), got to know about each other, and discussed project goals
- Set up GitHub code repository for the project
- Received data relevant to the project from mentors along with some starter code

#### Week 0 (6th June - 12th June)

- Added code for generating 2D Perlin noise
- Added code for spike encoding of image / video data using rate coding
- Set up continuous integration (CI) for the package which includes unit tests and linting and formatting checks
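The rate-coding approach mentioned above can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: it assumes 8-bit grayscale frames and treats each pixel's normalized intensity as a per-timestep Bernoulli spike probability.

```python
import numpy as np

def rate_encode(frames, n_steps=100, seed=0):
    """Rate-code grayscale frames (N, H, W) into binary spike trains.

    Each pixel's normalized intensity is treated as its per-timestep
    spike probability, sampled independently over `n_steps` timesteps.
    """
    rng = np.random.default_rng(seed)
    probs = frames.astype(np.float64) / 255.0
    # Bernoulli sampling: a pixel spikes at each timestep with
    # probability proportional to its brightness
    return (rng.random((n_steps, *frames.shape)) < probs[None]).astype(np.uint8)

frames = np.array([[[0, 128], [255, 64]]], dtype=np.uint8)  # one 2x2 frame
spikes = rate_encode(frames)
# the 255-intensity pixel spikes at every timestep; the 0 pixel never does
```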

#### Week 1 (13th June - 19th June)

- Added code coverage check to CI workflow
- Created a GitHub pages site for the project
- Added code for spike encoding of video data using a method based on thresholding differences in intensities of pixels in neighbouring frames
- Added code to generate streams of progressive Perlin noise
- Worked on creating an automatic changelog generation workflow using GitHub Actions
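The threshold-based encoding described above can be illustrated with a short sketch. Assumptions (not taken from the repository): the video is a (T, H, W) grayscale array, and ON/OFF polarities are represented as +1/-1, mimicking how an event camera reports brightness increases and decreases.

```python
import numpy as np

def threshold_encode(frames, threshold=15):
    """Encode a video (T, H, W) into ON/OFF spikes by thresholding
    intensity differences between neighbouring frames."""
    diffs = np.diff(frames.astype(np.int16), axis=0)
    on = diffs > threshold    # brightness increased past the threshold
    off = diffs < -threshold  # brightness decreased past the threshold
    # +1 = ON spike, -1 = OFF spike, 0 = no spike
    return on.astype(np.int8) - off.astype(np.int8)
```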

#### Week 2 (20th June - 26th June)

- Added code for end-to-end spike encoding of videos using the threshold-based method
- Tested the above method on real-world videos; happy with the results!

#### Week 3 (27th June - 3rd July)

- Added code for injecting events in spike-encoded custom videos into existing event recordings to augment them
- Worked on analyzing spike-encoded 3D Perlin noise
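Injecting synthetic events into an existing recording amounts to merging two event streams and restoring chronological order. A minimal sketch, assuming events are stored as NumPy structured arrays with (x, y, t, p) fields (the actual storage format in the repository may differ):

```python
import numpy as np

EVENT_DTYPE = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("t", np.float64), ("p", np.int8)])

def inject_events(recording, extra):
    """Merge synthetic events (e.g. from a spike-encoded video) into an
    existing event recording, keeping the stream sorted by timestamp."""
    merged = np.concatenate([recording, extra])
    # stable sort preserves the relative order of simultaneous events
    return merged[np.argsort(merged["t"], kind="stable")]
```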

#### Week 4 (4th July - 10th July)

- Added code for downloading clips of desired duration from YouTube videos
- Experimented with different kinds of resampling methods with 3D Perlin noise to get smoother spike encodings
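One simple resampling strategy for smoothing a noise volume is linear interpolation along the time axis, so that consecutive frames change gradually before spike encoding. This sketch is only illustrative of the idea; the resampling methods actually tried in the project are not specified here.

```python
import numpy as np

def temporally_upsample(vol, factor=4):
    """Linearly resample a 3D noise volume (T, H, W) along the time axis,
    giving smoother frame-to-frame transitions before spike encoding."""
    t = vol.shape[0]
    pos = np.linspace(0, t - 1, t * factor)  # new frame positions on the old axis
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, t - 1)
    w = (pos - lo)[:, None, None]            # interpolation weights in [0, 1)
    return (1 - w) * vol[lo] + w * vol[hi]
```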

#### Week 5 (11th July - 17th July)

- Experimented with different settings and parameters for generation of 3D Perlin noise
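The main parameter in Perlin noise generation is the gradient-grid resolution, which sets the spatial frequency of the noise. A standard 2D implementation (a sketch of the general algorithm, not the repository's code; function name and signature are illustrative) looks like this:

```python
import numpy as np

def perlin_2d(shape, res, seed=0):
    """Generate 2D Perlin noise of size `shape` from a gradient grid of
    resolution `res`; `res` must divide `shape` evenly."""
    rng = np.random.default_rng(seed)
    delta = (res[0] / shape[0], res[1] / shape[1])
    d = (shape[0] // res[0], shape[1] // res[1])
    # fractional position of each pixel within its grid cell
    grid = np.mgrid[0:res[0]:delta[0], 0:res[1]:delta[1]].transpose(1, 2, 0) % 1
    # random unit gradient vectors at the grid corners
    angles = 2 * np.pi * rng.random((res[0] + 1, res[1] + 1))
    gradients = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = gradients[:-1, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = gradients[1:, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = gradients[:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    # dot products of cell-local offsets with the corner gradients
    n00 = np.sum(grid * g00, 2)
    n10 = np.sum(np.dstack((grid[:, :, 0] - 1, grid[:, :, 1])) * g10, 2)
    n01 = np.sum(np.dstack((grid[:, :, 0], grid[:, :, 1] - 1)) * g01, 2)
    n11 = np.sum(np.dstack((grid[:, :, 0] - 1, grid[:, :, 1] - 1)) * g11, 2)
    # smoothstep interpolation between the four corner contributions
    t = 6 * grid**5 - 15 * grid**4 + 10 * grid**3
    n0 = n00 * (1 - t[:, :, 0]) + t[:, :, 0] * n10
    n1 = n01 * (1 - t[:, :, 0]) + t[:, :, 0] * n11
    return np.sqrt(2) * ((1 - t[:, :, 1]) * n0 + t[:, :, 1] * n1)
```

A coarser `res` gives large, slowly varying blobs; a finer `res` gives higher-frequency texture, which is the kind of trade-off these experiments explore.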

#### Week 6 (18th July - 24th July)

- Worked on writing code for converting encoded videos / 3D noise to event sequence representation

#### Week 7 (25th July - 31st July)

- Completed the functionality for transforming spike encodings to the event sequence format
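The transformation amounts to flattening a dense spike tensor into a sparse list of timestamped events. A minimal sketch, assuming spikes are a polar (T, H, W) tensor with values in {-1, 0, +1} and a fixed timestep `dt` (the repository's actual format and field layout may differ):

```python
import numpy as np

def spikes_to_events(spike_tensor, dt=1e-3):
    """Convert a polar spike tensor (T, H, W) with values in {-1, 0, +1}
    into a flat (x, y, t, p) event sequence, the representation used by
    common event-camera formats."""
    t_idx, y_idx, x_idx = np.nonzero(spike_tensor)
    events = np.empty(t_idx.size, dtype=[("x", np.uint16), ("y", np.uint16),
                                         ("t", np.float64), ("p", np.int8)])
    events["x"], events["y"] = x_idx, y_idx
    events["t"] = t_idx * dt  # frame index -> timestamp
    events["p"] = spike_tensor[t_idx, y_idx, x_idx]
    return events  # np.nonzero iterates in C order, so t is already sorted
```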

#### Week 8 (1st August - 7th August)

- Augmented multiple pre-existing object recordings using Perlin noise and videos from YouTube

#### Week 9 (8th August - 14th August)

- Created an IPython notebook to demonstrate the entire workflow for augmenting event-camera datasets using movies / videos

#### Week 10 (15th August - 21st August)

- Created another similar IPython notebook for augmentation using Perlin / Fractal noise

#### Week 11 (22nd August - 28th August)

- Created documentation website
- Set up continuous deployment (CD) workflow