![Dataset samples](doc/sample.png)
## TUB CrowdFlow Dataset
Optical Flow Dataset and Evaluation Kit for Visual Crowd Analysis, developed at the [Communication Systems Group](https://www.nue.tu-berlin.de/) at TU Berlin and described in the AVSS 2018 paper
[Optical Flow Dataset and Benchmark for Visual Crowd Analysis](http://elvera.nue.tu-berlin.de/files/1548Schr%C3%B6der2018.pdf) or [TUBCrowdFlow@arxiv.org](https://arxiv.org/abs/1811.07170).

The dataset contains 10 sequences showing 5 scenes. Each scene is rendered twice: once with a static point of view and once with a dynamic camera to simulate drone/UAV-based surveillance. All sequences are rendered at HD resolution (1280x720) and 25 fps, which is typical for current commercial CCTV surveillance systems. The total number of frames is 3200.

For each sequence we provide the following **ground-truth** data:
 * **Optical flow fields**
 * **Person trajectories (up to 1451)**
 * **Dense pixel trajectories**

This evaluation framework is released under the MIT License (details in [LICENSE](LICENSE)).
If you use the dataset or evaluation kit, or think our work is useful in your research, please consider citing:

```
@INPROCEEDINGS{TUBCrowdFlow2018,
	AUTHOR = {Gregory Schr{\"o}der and Tobias Senst and Erik Bochinski and Thomas Sikora},
	TITLE = {Optical Flow Dataset and Benchmark for Visual Crowd Analysis},
	BOOKTITLE = {IEEE International Conference on Advanced Video and Signal-based Surveillance},
	YEAR = {2018},
}
```

Download the dataset via the following direct link:
 - [https://hidrive.ionos.com/lnk/LUiCHfYG](https://hidrive.ionos.com/lnk/LUiCHfYG)

The password is the case-sensitive name of the repository.

Unpack the dataset by:
```
sudo apt-get install unrar
unrar x TUBCrowdFlow
```
**The TUB CrowdFlow dataset is made available for academic use only.** If you wish to use this dataset commercially, please contact [sikora@nue.tu-berlin.de](mailto:sikora@nue.tu-berlin.de).

## Contact
If you have any questions, encounter problems regarding the method/code, or want to send us your optical flow benchmark results, feel free to contact me at [tobias.senst@gmail.com](mailto:tobias.senst@gmail.com)
### Installation
Minimum required Python version: 3.5

**Install dependencies on Ubuntu:**
```
sudo apt-get install python3-dev python3-virtualenv virtualenv
```
Create a virtual environment and install the Python requirements:
```
virtualenv -p python3 crowdflow_env
source crowdflow_env/bin/activate
pip3 install numpy progressbar2 opencv-contrib-python
```

## Evaluation Framework
To evaluate an optical flow method with the provided framework, perform these steps:
 * Create a new directory in the `/TUBCrowdFlow/estimate` directory.
 * Compute flow fields and save them in the *.flo* file format, mirroring the structure of the `/TUBCrowdFlow/images` directory. For example, the optical flow result for the image pair `/TUBCrowdFlow/images/IM01/frame_0000.png` and `/TUBCrowdFlow/images/IM01/frame_0001.png` must be stored as `/estimate/[mymethod]/images/IM01/frame_0000.flo`.
 * Run `opticalflow_evaluate.py` to compute the EPE and R2 short-term metrics.
 * Run `trajectory_evaluate.py` to compute the tracking-accuracy long-term metrics.

**Optical Flow Samples**

`
opticalflow_estimate.py <dataset_root_path> <flow_method_name_1> <flow_method_name_2> ...
`

The following program estimates optical flow fields for the TUB CrowdFlow dataset, here with the Dual TV-L1 (`dual`), Farneback (`farneback`) and pyramidal Lucas-Kanade (`plk`) methods:
```
source crowdflow_env/bin/activate
python3 opticalflow_estimate.py TUBCrowdFlow/ dual farneback plk
```
The optical flow files will be stored in the directory `/estimate/dual/` (and correspondingly for the other methods).
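The *.flo* files above use the Middlebury optical flow format: a float32 magic value 202021.25 ("PIEH"), the width and height as int32, then row-major interleaved (u, v) float32 components. A minimal NumPy round-trip sketch, for illustration only (the `write_flo`/`read_flo` helpers are not part of the kit):

```python
import numpy as np

FLO_MAGIC = 202021.25  # Middlebury sanity-check value ("PIEH")

def write_flo(path, flow):
    """Write an (H, W, 2) flow field to a Middlebury .flo file."""
    h, w = flow.shape[:2]
    with open(path, "wb") as f:
        np.float32(FLO_MAGIC).tofile(f)
        np.int32(w).tofile(f)
        np.int32(h).tofile(f)
        flow.astype(np.float32).tofile(f)  # row-major, interleaved u/v

def read_flo(path):
    """Read a Middlebury .flo file back into an (H, W, 2) float32 array."""
    with open(path, "rb") as f:
        magic = np.fromfile(f, np.float32, count=1)[0]
        assert magic == FLO_MAGIC, "invalid .flo file"
        w = int(np.fromfile(f, np.int32, count=1)[0])
        h = int(np.fromfile(f, np.int32, count=1)[0])
        data = np.fromfile(f, np.float32, count=2 * w * h)
    return data.reshape(h, w, 2)
```

OpenCV's `cv2.readOpticalFlow`/`cv2.writeOpticalFlow` (in opencv-contrib-python) handle the same format if you prefer not to roll your own.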
**Short-Term Evaluation**

Short-term evaluation performs the classical approach to optical flow evaluation, i.e. measures based on the ground-truth optical flow fields (e.g. end-point error and RX measures).

`
opticalflow_evaluate.py <dataset_root_path> <dir_name_method_1> <dir_name_method_2> ... <dir_name_method_n>
`

Example:
```
source crowdflow_env/bin/activate
python3 opticalflow_evaluate.py TUBCrowdFlow/ dual plk farneback
```
After execution, the file *short_term_results.tex* will contain the evaluation results (*method 1 - method n*) in the form of a LaTeX table.
*short_term_results.pb* will contain the evaluation results stored with pickle.

**Long-Term Evaluation**

Long-term evaluation is based on the ground-truth trajectories, i.e. the **person trajectories** and the **dense pixel trajectories** (see paper).

`
trajectory_evaluate.py <dataset_root_path> <dir_name_method_1> <dir_name_method_2> ... <dir_name_method_n>
`

Example:
```
source crowdflow_env/bin/activate
python3 trajectory_evaluate.py TUBCrowdFlow/ dual plk farneback
```
After execution, the file *long_term_results.tex* will contain the evaluation results (*method 1 - method n*) in the form of a LaTeX table.
*long_term_results.pb* will contain the evaluation results stored with pickle.
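For intuition, the short-term measures reduce to simple per-pixel statistics over a flow field. A sketch of the average end-point error and the RX outlier percentage, assuming the usual definitions (R2 counts pixels whose end-point error exceeds 2 px); this is illustrative, not the kit's implementation:

```python
import numpy as np

def epe(flow_est, flow_gt):
    """Average end-point error between two (H, W, 2) flow fields."""
    return float(np.mean(np.linalg.norm(flow_est - flow_gt, axis=-1)))

def rx(flow_est, flow_gt, x=2.0):
    """RX measure: percentage of pixels whose end-point error exceeds x pixels."""
    err = np.linalg.norm(flow_est - flow_gt, axis=-1)
    return float(100.0 * np.mean(err > x))
```

Foreground/background scores as in the tables below are obtained by restricting these means to the corresponding pixel masks.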
## Results
To assess the quality of the optical flow, we use two types of metrics: i) common optical flow metrics, i.e. the average end-point error (EPE) and the percentage of erroneous pixels (RX), and ii) long-term motion metrics based on trajectories. A detailed overview of the optical flow parameters can be found in [Supplemental_material.pdf](./Supplemental_material.pdf).
### Common optical flow metrics (short-term)

|            | FG (Static) | FG (Static) | BG (Static) | BG (Static) | FG (Dynamic) | FG (Dynamic) | BG (Dynamic) | BG (Dynamic) | FG (Avg.) | FG (Avg.) | BG (Avg.) | BG (Avg.) | Avg. | Avg. | |
| ---------- | ----- | ----- | ----- | ---- | ----- | ----- | ------ | ----- | ----- | ------ | ----- | ----- | ----- | ----- | ----- |
|            | EPE | R2[%] | EPE | R2[%] | EPE | R2[%] | EPE | R2[%] | EPE | R2[%] | EPE | R2[%] | EPE | R2[%] | t[sec] |
| **[FlowFields (Bailer2015)](https://av.dfki.de/publications/flow-fields-dense-correspondence-fields-for-highly-accurate-large-displacement-optical-flow-estimation/)** | 0.756 | 8.27 | 0.213 | 2.79 | 1.069 | 14.92 | 2.571 | 51.42 | 0.913 | 11.595 | 1.392 | 27.10 | 0.915 | 11.74 | 43.53 |
| **[RIC (Hu2017)](https://github.com/YinlinHu/Ric)** | 0.859 | 8.64 | 0.243 | 3.31 | 1.166 | 15.69 | 2.623 | 53.58 | 1.013 | 12.164 | 1.433 | 28.45 | 1.015 | 12.32 | 8.30 |
| **[CPM (Li2018)](https://github.com/YinlinHu/CPM)** | 0.701 | 7.09 | 0.247 | 3.63 | 1.026 | 13.94 | 2.585 | 51.78 | 0.864 | 10.517 | 1.416 | 27.71 | 0.868 | 10.69 | 14.74 |
| **[DeepFlow (Weinzaepfel2013)](https://thoth.inrialpes.fr/src/deepflow/)** | 0.629 | 6.19 | 0.237 | 3.67 | 1.005 | 13.95 | 2.594 | 51.67 | 0.817 | 10.069 | 1.416 | 27.67 | 0.822 | 10.25 | 39.63 |
| **[RLOF6 (Geistert2016)](https://github.com/tsenst/RLOFLib)** | 0.753 | 8.61 | 0.315 | 5.00 | 1.088 | 15.61 | 2.655 | 53.47 | 0.921 | 12.112 | 1.485 | 29.23 | 0.924 | 12.27 | 1.49 |
| **[RLOF10 (Geistert2016)](https://github.com/tsenst/RLOFLib)** | 0.772 | 8.80 | 0.324 | 5.10 | 1.104 | 15.80 | 2.658 | 53.60 | 0.938 | 12.303 | 1.491 | 29.35 | 0.941 | 12.46 | 0.80 |
| **[DIS4 (Kroeger2016)](https://github.com/tikroeger/OF_DIS)** | 0.627 | 5.72 | 0.356 | 5.85 | 0.928 | 11.86 | 2.665 | 53.67 | 0.777 | 8.790 | 1.511 | 29.76 | 0.784 | 9.01 | 1.70 |
| **[DIS2 (Kroeger2016)](https://github.com/tikroeger/OF_DIS)** | 1.441 | 20.40 | 0.528 | 8.24 | 1.726 | 27.41 | 3.001 | 64.01 | 1.583 | 23.903 | 1.765 | 36.13 | 1.579 | 23.92 | 0.28 |
| **[Farneback (Farneback2003)](http://www.diva-portal.org/smash/get/diva2:273847/FULLTEXT01.pdf)** | 0.737 | 7.21 | 0.441 | 7.30 | 0.996 | 12.67 | 2.491 | 50.60 | 0.867 | 9.940 | 1.466 | 28.95 | 0.872 | 10.13 | |
| **[Sparse to Dense PLK (Bouguet2000)](http://robots.stanford.edu/cs223b04/algo_tracking.pdf)** | 0.793 | 8.07 | 0.563 | 9.12 | 1.041 | 13.24 | 2.875 | 56.29 | 0.917 | 10.653 | 1.719 | 32.71 | 0.925 | 10.88 | |

### Tracking Accuracy (long-term)

**Dense Trajectories**

|           | IM01 | (Dyn) | IM02 | (Dyn) | IM03 | (Dyn) | IM04 | (Dyn) | IM05 | (Dyn) | Avg. |
| --------- | --- | ---- | ---- | ---- | ---- | ---- | ---- | --- | ---- | ---- | --- |
| **[FlowFields (Bailer2015)](https://av.dfki.de/publications/flow-fields-dense-correspondence-fields-for-highly-accurate-large-displacement-optical-flow-estimation/)** | 70.63 | 61.79 | 56.69 | 45.93 | 71.46 | 68.35 | 42.27 | 37.63 | 65.15 | 59.61 | 57.95 |
| **[RIC (Hu2017)](https://github.com/YinlinHu/Ric)** | 74.39 | 69.41 | 58.72 | 50.33 | 54.18 | 73.80 | 44.21 | 39.52 | 60.23 | 60.28 | 58.51 |
| **[CPM (Li2018)](https://github.com/YinlinHu/CPM)** | 73.41 | 65.16 | 58.31 | 47.57 | 74.41 | 71.13 | 46.23 | 41.15 | 67.97 | 61.68 | 60.70 |
| **[DeepFlow (Weinzaepfel2013)](https://thoth.inrialpes.fr/src/deepflow/)** | 83.84 | 81.90 | 63.33 | 55.52 | 83.38 | 80.87 | 57.08 | 56.65 | 71.25 | 64.67 | 69.85 |
| **[RLOF6 (Geistert2016)](https://github.com/tsenst/RLOFLib)** | 82.80 | 78.31 | 63.16 | 57.68 | 87.46 | 86.76 | 50.56 | 50.53 | 69.86 | 68.73 | 69.59 |
| **[RLOF10 (Geistert2016)](https://github.com/tsenst/RLOFLib)** | 80.14 | 73.95 | 62.05 | 55.54 | 85.44 | 84.39 | 48.80 | 47.84 | 67.53 | 67.41 | 67.31 |
| **[DIS4 (Kroeger2016)](https://github.com/tikroeger/OF_DIS)** | 80.44 | 76.19 | 64.11 | 56.99 | 82.89 | 82.24 | 53.91 | 52.75 | 72.11 | 70.71 | 69.23 |
| **[DIS2 (Kroeger2016)](https://github.com/tikroeger/OF_DIS)** | 47.55 | 33.03 | 36.52 | 25.32 | 22.59 | 19.76 | 26.79 | 20.89 | 27.63 | 27.91 | 28.80 |
| **[Farneback (Farneback2003)](http://www.diva-portal.org/smash/get/diva2:273847/FULLTEXT01.pdf)** | 78.69 | 74.24 | 65.22 | 59.43 | 86.89 | 87.17 | 52.85 | 55.29 | 70.22 | 68.94 | 69.89 |
| **[Sparse to Dense PLK (Bouguet2000)](http://robots.stanford.edu/cs223b04/algo_tracking.pdf)** | 75.15 | 68.54 | 64.71 | 57.88 | 84.71 | 84.11 | 50.08 | 49.26 | 68.45 | 69.75 | 67.26 |

**Person Trajectories**

|           | IM01 | (Dyn) | IM02 | (Dyn) | IM03 | (Dyn) | IM04 | (Dyn) | IM05 | (Dyn) | Avg. |
| --------- | --- | --- | --- | --- | --- | --- | ---- | --- | ---- | ---- | ------ |
| **[FlowFields (Bailer2015)](https://av.dfki.de/publications/flow-fields-dense-correspondence-fields-for-highly-accurate-large-displacement-optical-flow-estimation/)** | 77.94 | 62.68 | 52.35 | 38.22 | 66.76 | 63.17 | 30.09 | 25.24 | 65.67 | 68.20 | 55.03 |
| **[RIC (Hu2017)](https://github.com/YinlinHu/Ric)** | 87.88 | 80.87 | 56.56 | 48.14 | 43.49 | 70.98 | 32.48 | 27.81 | 57.47 | 68.56 | 57.42 |
| **[CPM (Li2018)](https://github.com/YinlinHu/CPM)** | 82.17 | 68.82 | 54.56 | 40.99 | 70.37 | 66.69 | 35.98 | 30.00 | 69.64 | 71.58 | 59.08 |
| **[DeepFlow (Weinzaepfel2013)](https://thoth.inrialpes.fr/src/deepflow/)** | 99.19 | 95.32 | 68.60 | 63.04 | 83.18 | 81.20 | 53.82 | 52.22 | 76.32 | 79.15 | 75.20 |
| **[RLOF6 (Geistert2016)](https://github.com/tsenst/RLOFLib)** | 97.70 | 92.37 | 66.70 | 65.08 | 88.73 | 90.22 | 43.56 | 46.47 | 72.60 | 80.12 | 74.36 |
| **[RLOF10 (Geistert2016)](https://github.com/tsenst/RLOFLib)** | 96.00 | 85.02 | 63.08 | 59.77 | 85.97 | 86.69 | 39.41 | 40.48 | 69.09 | 78.70 | 70.42 |
| **[DIS4 (Kroeger2016)](https://github.com/tikroeger/OF_DIS)** | 92.22 | 85.98 | 63.97 | 56.35 | 81.59 | 81.61 | 44.58 | 42.64 | 74.95 | 82.09 | 70.60 |
| **[DIS2 (Kroeger2016)](https://github.com/tikroeger/OF_DIS)** | 40.81 | 22.39 | 22.86 | 15.37 | 9.05 | 6.72 | 13.63 | 9.72 | 17.86 | 18.10 | 17.65 |
| **[Farneback (Farneback2003)](http://www.diva-portal.org/smash/get/diva2:273847/FULLTEXT01.pdf)** | 88.75 | 81.33 | 64.69 | 59.05 | 85.92 | 87.44 | 42.42 | 45.35 | 71.51 | 79.63 | 70.61 |
| **[Sparse to Dense PLK (Bouguet2000)](http://robots.stanford.edu/cs223b04/algo_tracking.pdf)** | 79.31 | 66.83 | 61.05 | 52.41 | 82.63 | 83.11 | 37.92 | 36.81 | 67.53 | 76.18 | 64.38 |
| **[NMC (IDREES2014)](https://www.sciencedirect.com/science/article/pii/S0262885613001637)** | 96.96 | 90.33 | 72.18 | 71.44 | 92.28 | 20.70 | 32.72 | 42.38 | 60.15 | 56.02 | 63.52 |

## References - Optical Flow Algorithms

```
@inproceedings{Bailer2015,
  title = {Flow Fields: Dense Correspondence Fields for Highly Accurate Large Displacement Optical Flow Estimation},
  author = {Bailer, C. and Taetz, B. and Stricker, D.},
  booktitle = {International Conference on Computer Vision},
  pages = {4015--4023},
  year = {2015}
}
```
```
@inproceedings{Hu2017,
  title = {Robust Interpolation of Correspondences for Large Displacement Optical Flow},
  author = {Hu, Y. and Li, Y. and Song, R.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  pages = {4791--4799},
  year = {2017},
}
```
```
@article{Li2018,
  author = {Y. Li and Y. Hu and R. Song and P. Rao and Y. Wang},
  journal = {IEEE Transactions on Circuits and Systems for Video Technology},
  title = {Coarse-to-Fine PatchMatch for Dense Correspondence},
  year = {2018},
  volume = {28},
  number = {9},
  pages = {2233--2245},
}
```
```
@inproceedings{Weinzaepfel2013,
  author = {Weinzaepfel, Philippe and Revaud, Jerome and Harchaoui, Zaid and Schmid, Cordelia},
  title = {{DeepFlow: Large Displacement Optical Flow with Deep Matching}},
  booktitle = {International Conference on Computer Vision},
  year = {2013},
}
```
```
@inproceedings{Geistert2016,
  author = {Jonas Geistert and Tobias Senst and Thomas Sikora},
  title = {Robust Local Optical Flow: Dense Motion Vector Field Interpolation},
  booktitle = {Picture Coding Symposium},
  pages = {1--5},
  year = {2016},
}
```
```
@inproceedings{Kroeger2016,
  author = {Till Kroeger and Radu Timofte and Dengxin Dai and Luc Van Gool},
  title = {Fast Optical Flow using Dense Inverse Search},
  booktitle = {European Conference on Computer Vision},
  year = {2016}
}
```
```
@inproceedings{Farneback2003,
  author = {Gunnar Farneb{\"a}ck},
  title = {Two-Frame Motion Estimation Based on Polynomial Expansion},
  booktitle = {Proceedings of the 13th Scandinavian Conference on Image Analysis},
  pages = {363--370},
  year = {2003},
}
```
```
@techreport{Bouguet2000,
  author = {J.-Y. Bouguet},
  title = {Pyramidal Implementation of the Lucas Kanade Feature Tracker},
  institution = {Intel Corporation Microprocessor Research Labs},
  year = {2000},
  type = {Technical Report},
}
```

## References - Person Tracking Algorithm

```
@article{IDREES2014,
  title = {Tracking in Dense Crowds Using Prominence and Neighborhood Motion Concurrence},
  journal = {Image and Vision Computing},
  volume = {32},
  number = {1},
  pages = {14--26},
  year = {2014},
  author = {Haroon Idrees and Nolan Warner and Mubarak Shah},
}
```