https://github.com/rehglab/tracking_objectness
- Host: GitHub
- URL: https://github.com/rehglab/tracking_objectness
- Owner: RehgLab
- Created: 2024-08-13T22:02:07.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-12-06T23:21:05.000Z (over 1 year ago)
- Last Synced: 2024-12-07T00:25:18.494Z (over 1 year ago)
- Language: Python
- Size: 1.1 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 1
# tracking_objectness
Code for the ECCV-ILR 2024 Workshop paper **Leveraging Object Priors for Point Tracking**.
# Requirements
Create a conda environment for this code base:
```
conda create -n mask_pips python=3.8
conda activate mask_pips
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
```
# Training
Use the `export_mp4_dataset.py` script to generate clips from the PointOdyssey training set, following [Pips++](https://github.com/aharley/pips2), then run `python train.py` to start training.
# Testing
To evaluate performance on the datasets reported in the paper, use the testing scripts in the `pips2` directory with the saved model:
- For **TAP-VID-DAVIS**: `pips2/test_on_tap.py`
- For **CroHD**: `pips2/test_on_cro.py`
- For **PointOdyssey**: `pips2/test_on_pod.py`
To replicate the results reported in the paper, use our [trained weights](https://drive.google.com/drive/folders/1NStVTvo3iMRKcA3vaat7yFjwKHgpGrIH?usp=sharing) for the reference model. For TAP-VID-DAVIS we load the full sequence into memory at once; for the other datasets we evaluate with a window length of `S=36`.
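The `S=36` setting means long videos are evaluated in fixed-length windows rather than all at once. A minimal sketch of that windowing logic (our illustration only, not the repository's actual evaluation code; `chunk_indices` is a hypothetical helper):

```python
def chunk_indices(num_frames, S=36):
    """Split a video of num_frames into consecutive windows of at most S frames.

    Returns (start, end) index pairs; the last window may be shorter than S.
    """
    return [(start, min(start + S, num_frames)) for start in range(0, num_frames, S)]

# A 100-frame video is evaluated in three windows:
print(chunk_indices(100))  # [(0, 36), (36, 72), (72, 100)]
```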