Repository concerning a feature detector and tracker developed for the Computer Vision course of the master's degree in Computer Science at the University of Trento.
- Host: GitHub
- URL: https://github.com/samuelebortolotti/feature-detection-and-tracking
- Owner: samuelebortolotti
- Created: 2022-05-06T12:36:10.000Z (almost 3 years ago)
- Default Branch: master
- Last Pushed: 2022-07-05T09:34:14.000Z (almost 3 years ago)
- Last Synced: 2025-01-16T06:12:58.482Z (3 months ago)
- Topics: blob-detector, computer-vision, harris-corner-detector, kalman-filter, orb, python, shi-tomasi, sift
- Language: Python
- Size: 105 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Feature detection and tracking
Feature detection and tracking is a collection of methods concerning feature detection and tracking in videos, developed for the `Computer Vision` course of the master's degree program in Computer Science at the University of Trento.
## Author
| Name | Surname | MAT |
| :-----: | :--------: | :--------: |
| Samuele | Bortolotti | **229326** |

## Requirements
The code as-is runs in Python 3.9 with the following dependencies:
- [opencv](https://opencv.org/)
- [matplotlib](https://matplotlib.org/)

And the following development dependencies:
- [Sphinx](https://www.sphinx-doc.org/en/master/)
## Getting Started
Follow these instructions to set up the project on your PC.
Moreover, to facilitate the use of the application, a `Makefile` has been provided; to see its available targets, simply call the help command with [GNU/Make](https://www.gnu.org/software/make/):
```bash
make help
```

### 1. Clone the repository
```bash
git clone https://github.com/samuelebortolotti/feature-detection-and-tracking.git
cd feature-detection-and-tracking
```

### 2. Install the requirements
```bash
pip install --upgrade pip
pip install -r requirements.txt
```

> **Note**: it might be convenient to create a virtual environment to handle the dependencies.
>
> The `Makefile` provides a simple and convenient way to manage Python virtual environments (see [venv](https://docs.python.org/3/tutorial/venv.html)).
> To create the virtual environment and install the requirements, be sure you have Python 3.9 (more recent versions should also work, but I have only tested 3.9):
> ```bash
> make env
> source ./venv/fdt/bin/activate
> make install
> ```
> Remember to deactivate the virtual environment once you have finished working with the project:
> ```bash
> deactivate
> ```

### 3. Generate the code documentation
The automatic code documentation is provided by [Sphinx v4.5.0](https://www.sphinx-doc.org/en/master/).
In order to have the code documentation available, you need to install the development requirements
```bash
pip install --upgrade pip
pip install -r requirements.dev.txt
```

Since the Sphinx commands are quite verbose, I suggest using the corresponding `Makefile` targets:
```bash
make doc-layout
make layout
```

The generated documentation will be accessible by opening `docs/build/html/index.html` in your browser, or equivalently by running:
```bash
make open-doc
```

However, for the sake of completeness, one may want to run the full Sphinx commands listed here:
```bash
sphinx-quickstart docs --sep --no-batchfile --project feature-detection-and-tracking --author "Samuele Bortolotti" -r 0.1 --language en --extensions sphinx.ext.autodoc --extensions sphinx.ext.napoleon --extensions sphinx.ext.viewcode --extensions myst_parser
sphinx-apidoc -P -o docs/source .
cd docs; make html
```

> **Note**: executing this second list of commands will lead to slightly different documentation with respect to the one generated by the `Makefile`.
> This is because the above listed commands do not customise the index file of Sphinx.

### 4. Run the SIFT feature detection
To run the [SIFT](https://en.wikipedia.org/wiki/Scale-invariant_feature_transform) feature detector on an image you can type:
```bash
python -m fdt sift path_to_image [--n-features 100]
```

where `path_to_image` is the path to the image you want to process with the SIFT algorithm and `--n-features` is the number of features you want to obtain from the detection phase.
As output, the algorithm will plot the original image with the SIFT keypoints drawn on top of it.
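For reference, the detection step essentially boils down to a few OpenCV calls; below is a minimal sketch of the idea (not the repository's exact code), assuming `opencv-python` 4.4 or newer, where SIFT is part of the main module:

```python
# Minimal, illustrative SIFT detection sketch (not the repository's exact code)
import cv2

image = cv2.imread("path_to_image", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create(nfeatures=100)  # cf. the --n-features option
keypoints = sift.detect(image, None)
# draw rich keypoints (size and orientation) on top of the image
output = cv2.drawKeypoints(
    image, keypoints, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS
)
cv2.imshow("SIFT keypoints", output)
cv2.waitKey(0)
```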
Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the `Makefile` and then running:
```bash
make sift
```

### 5. Run the ORB feature detection
To run the [ORB](https://en.wikipedia.org/wiki/Oriented_FAST_and_rotated_BRIEF) feature detector on an image you can type:
```bash
python -m fdt orb path_to_image [--n-features 100]
```

where `path_to_image` is the path to the image you want to process with the ORB algorithm and `--n-features` is the number of features you want to obtain from the detection phase.
As output, the algorithm will plot the original image with the ORB keypoints drawn on top of it.
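The corresponding OpenCV calls are analogous to the SIFT ones; again, an illustrative sketch rather than the repository's exact code:

```python
# Minimal, illustrative ORB detection sketch
import cv2

image = cv2.imread("path_to_image", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=100)  # cf. the --n-features option
keypoints = orb.detect(image, None)
output = cv2.drawKeypoints(image, keypoints, None, color=(0, 255, 0))
cv2.imshow("ORB keypoints", output)
cv2.waitKey(0)
```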
Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the `Makefile` and then running:
```bash
make orb
```

### 6. Run the Harris corner detector
To run the [Harris corner detector](https://en.wikipedia.org/wiki/Harris_corner_detector) on an image you can type:
```bash
python -m fdt harris path_to_image [--config-file]
```

where `path_to_image` is the path to the image you want to process with the Harris corner detector and `--config-file` loads the configuration defined in `fdt/config/harris_conf.py`.
As output, the algorithm will plot the original image with the Harris corners drawn on top of it.
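For reference, a minimal Harris detection sketch with OpenCV follows; the parameter values here are illustrative placeholders, the actual ones being those in `fdt/config/harris_conf.py`:

```python
# Illustrative Harris corner detection sketch (not the repository's exact code)
import cv2
import numpy as np

image = cv2.imread("path_to_image")
gray = np.float32(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))
# corner response map: blockSize = neighbourhood size,
# ksize = Sobel aperture, k = Harris free parameter
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
# mark pixels with a strong corner response in red
image[response > 0.01 * response.max()] = [0, 0, 255]
cv2.imshow("Harris corners", image)
cv2.waitKey(0)
```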
Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the `Makefile` and then running:
```bash
make harris
```

### 7. Run the Simple Blob detector
To run the [Simple blob detector](https://docs.opencv.org/3.4/d0/d7a/classcv_1_1SimpleBlobDetector.html) on an image you can type:
```bash
python -m fdt blob path_to_image [--config-file]
```

where `path_to_image` is the path to the image you want to process with the Simple Blob detector and `--config-file` loads the configuration defined in `fdt/config/blob_conf.py`.
As output, the algorithm will plot the original image with the blob-center keypoints drawn on top of it.
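For reference, a minimal sketch of OpenCV's Simple Blob detector follows; the filter values are illustrative placeholders, the actual ones being those in `fdt/config/blob_conf.py`:

```python
# Illustrative Simple Blob detection sketch (not the repository's exact code)
import cv2

image = cv2.imread("path_to_image", cv2.IMREAD_GRAYSCALE)
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True  # example filter: keep blobs by area
params.minArea = 100
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(image)
# draw the blob centers with circles proportional to the blob size
output = cv2.drawKeypoints(
    image, keypoints, None, (0, 0, 255),
    cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS,
)
cv2.imshow("Blob centers", output)
cv2.waitKey(0)
```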
Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the `Makefile` and then running:
```bash
make blob
```

### 8. Run the feature matching
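Feature matching pairs the descriptors found by one of the detectors above across frames of a video. As a rough sketch of the underlying idea (illustrative, not the repository's code; it assumes ORB descriptors with a brute-force Hamming matcher, while the `--flann` flag presumably switches to a FLANN-based matcher):

```python
# Illustrative descriptor-matching sketch between two video frames
import cv2

cap = cv2.VideoCapture("material/Contesto_industriale1.mp4")
_, frame1 = cap.read()
_, frame2 = cap.read()
cap.release()

# detect keypoints and compute binary descriptors with ORB
orb = cv2.ORB_create(nfeatures=100)
kp1, des1 = orb.detectAndCompute(cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = orb.detectAndCompute(cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY), None)

# brute-force matcher with Hamming distance (suited to binary descriptors)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
# keep only the close matches, cf. the --matching-distance option
good = [m for m in matches if m.distance < 60]

output = cv2.drawMatches(frame1, kp1, frame2, kp2, good, None)
cv2.imshow("Matches", output)
cv2.waitKey(0)
```

To run the feature matching on a video you can type: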
```bash
python -m fdt matcher matcher_method [--n-features 100 --flann --matching-distance 60 --video material/Contesto_industriale1.mp4 --frame-update 30]
```

Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the `Makefile` and then running:
```bash
make matcher
```

### 9a. Run the feature detection with the Kalman filter as tracking algorithm
```bash
python -m fdt kalman matcher_method [--n-features 100 --flann --matching-distance 60 --video material/Contesto_industriale1.mp4 --frame-update 30 --output-video-name videoname]
```

If `--output-video-name` is passed, then the program saves the video in [AVI](https://it.wikipedia.org/wiki/Audio_Video_Interleave) format in the `output` folder.
The video is generated with the [XVID](https://www.xvid.com/) codec; therefore, you may need to install it unless you already have it.
Otherwise, feel free to change it or to suggest a better alternative.

Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the `Makefile` and then running:
```bash
make kalman
```

You can customise the Kalman filter matrices by modifying the `current_conf` Python dictionary in the `fdt/config/kalman_config.py` file.
The current configuration is depicted here:
```python
import numpy as np

"""Legend:
A (np.ndarray): state transition matrix
w (np.ndarray): process noise
H (np.ndarray): measurement matrix
v (np.ndarray): measurement noise
B (np.ndarray): additional and optional control input
"""# Configuration which is running at the moment
current_conf = {
"dynamic_params": 6,
"measure_params": 2,
"control_params": 0,
"A": np.array(
[
[1, 0, 1, 0, 1 / 33, 0],
[0, 1, 0, 1, 0, 1 / 33],
[0, 0, 1, 0, 1, 0],
[0, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1],
],
np.float32,
),
"w": np.eye(6, dtype=np.float32) * 50,
"H": np.array(
[
[1, 1 / 33, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
],
dtype=np.float32,
),
"v": np.eye(2, dtype=np.float32) * 50,
"B": None,
}
```

> **Note:** if the program raises an error when the name of the output video is passed, it may be a codec issue; consider changing the `cv2.VideoWriter_fourcc(...)` line in the code (`tracking/kalman.py`).
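For reference, a dictionary like the one above maps naturally onto OpenCV's `cv2.KalmanFilter`; the following is a minimal sketch of the wiring (illustrative, not the repository's exact tracking code), reusing `current_conf` from `fdt/config/kalman_config.py`:

```python
# Illustrative wiring of the configuration into cv2.KalmanFilter
import cv2
import numpy as np

from fdt.config.kalman_config import current_conf

# state presumably (x, y, vx, vy, ax, ay); measurement (x, y)
kalman = cv2.KalmanFilter(
    current_conf["dynamic_params"],
    current_conf["measure_params"],
    current_conf["control_params"],
)
kalman.transitionMatrix = current_conf["A"]     # A: state transition matrix
kalman.processNoiseCov = current_conf["w"]      # w: process noise
kalman.measurementMatrix = current_conf["H"]    # H: measurement matrix
kalman.measurementNoiseCov = current_conf["v"]  # v: measurement noise

# at each frame: predict the new state, then correct it with the
# measured keypoint position
prediction = kalman.predict()
measurement = np.array([[100.0], [200.0]], np.float32)  # e.g. a tracked keypoint
estimate = kalman.correct(measurement)
```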
### 9b. Run the feature detection with the Lucas-Kanade optical flow as tracking algorithm
```bash
python -m fdt lukas-kanade matcher_method [--nfeatures 100 --video material/Contesto_industriale1.mp4 --frameupdate 30]
```

If `--output-video-name` is passed, then the program saves the video in [AVI](https://it.wikipedia.org/wiki/Audio_Video_Interleave) format in the `output` folder.
The video is generated with the [XVID](https://www.xvid.com/) codec; therefore, you may need to install it unless you already have it.
Otherwise, feel free to change it or to suggest a better alternative.

Alternatively, you can obtain the same result in a less verbose manner by tuning the flags in the `Makefile` and then running:
```bash
make lukas-kanade
```

> **Note:** if the program raises an error when the name of the output video is passed, it may be a codec issue; consider changing the `cv2.VideoWriter_fourcc(...)` line in the code (`tracking/lucas_kanade.py`).
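For reference, the heart of this tracker is OpenCV's pyramidal Lucas-Kanade optical flow; the following minimal sketch (illustrative, not the repository's exact code) tracks Shi-Tomasi corners across the frames of the sample video:

```python
# Illustrative sparse optical-flow tracking with the Lucas-Kanade method
import cv2

cap = cv2.VideoCapture("material/Contesto_industriale1.mp4")
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
# points to track, here Shi-Tomasi corners
prev_pts = cv2.goodFeaturesToTrack(
    prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7
)

while True:
    ok, frame = cap.read()
    if not ok or prev_pts is None or len(prev_pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # track the points from the previous frame into the current one
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good = next_pts[status.flatten() == 1]
    for x, y in good.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("Lucas-Kanade tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # press ESC to stop
        break
    prev_gray, prev_pts = gray, good.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()
```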
## 10. Report
The report covering the implementation details and my considerations regarding the feature detectors can be found in the `report/paper` folder.
Moreover, a simple LaTeX beamer presentation is available in the `report/presentation` folder.