{"id":18514311,"url":"https://github.com/super3/pytrack","last_synced_at":"2025-04-09T06:34:12.390Z","repository":{"id":4960980,"uuid":"6118477","full_name":"super3/PyTrack","owner":"super3","description":"A simplified Computer Vision framework for object and motion tracking, using Python and Pygame.","archived":false,"fork":false,"pushed_at":"2013-02-14T04:57:44.000Z","size":4780,"stargazers_count":5,"open_issues_count":0,"forks_count":1,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-03-24T00:51:32.374Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":"rakyll/ticktock","license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/super3.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2012-10-08T02:04:51.000Z","updated_at":"2019-03-17T19:21:08.000Z","dependencies_parsed_at":"2022-07-09T07:46:24.896Z","dependency_job_id":null,"html_url":"https://github.com/super3/PyTrack","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/super3%2FPyTrack","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/super3%2FPyTrack/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/super3%2FPyTrack/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/super3%2FPyTrack/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/super3","download_url":"https://codeload.github.com/super3/PyTrack/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories
_count":247993599,"owners_count":21030043,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-06T15:42:50.173Z","updated_at":"2025-04-09T06:34:07.358Z","avatar_url":"https://github.com/super3.png","language":"Python","readme":"PyTrack\r\n========\r\nA simplified computer vision framework for object and motion tracking, using Python and Pygame. \r\n\r\nDependencies and Pre-Setup\r\n-------\r\n**You must have [PyGame](http://pygame.org/) and [NumPy](http://numpy.scipy.org/) installed.** Developed in Python 3.2 (earlier versions of Python may work if you're lucky).\r\n\r\n
PyTrack accepts a folder of image frames from a video. I suggest you use [IrfanView](http://www.irfanview.com/) or another tool to extract your images. \r\nThese files can be in any sequential format, but must all be the same pixel dimensions. PyTrack will accept any image format accepted by Pygame's\r\n[image module](http://www.pygame.org/docs/ref/image.html) (these include: JPG, PNG, GIF (non-animated), BMP, PCX, TGA (uncompressed), TIF, LBM, PBM, PGM, PPM, and XPM). \r\n\r\n
If you want to get PyTrack up and running right away, [download this sample image set](https://github.com/downloads/super3/PyTrack/SampleAnt.zip). \r\nExtract `ant_maze` into the root PyTrack directory. You should be able to run `viewer.py` or `process.py` now. See the setup below for more detailed instructions.\r\n\r\n
Setup\r\n-------\r\n1. Open `Classes/Config.py` in your text editor or IDE.\r\n2. Change the FOLDER variable to the path of the image folder.\r\n3. Select the best TOLERANCE for the data set. Default is 840000.\r\n4. Save your changes.\r\n5. Run `viewer.py` or `process.py`.\r\n\r\n
#### Notes:\r\n* Files must be named in a sequential format. Example: image0001.jpg, image0002.jpg, etc.\r\n* Files must all be the same pixel dimensions. \r\n* PyTrack will not track more than one object at a time.\r\n\r\nModules\r\n-------\r\n\r\n
#### Viewer.py\r\nThis module displays annotated results. \r\n* `Forward` and `Back` arrows move forward or back 1 frame.\r\n* `Up` and `Down` arrows move forward or back 10 frames.\r\n* `S` key toggles between the source images and PyTrack's pixel differencing.\r\n* `1` key toggles the search area box around the object.\r\n* `2` key toggles the search area box center.\r\n\r\n
#### Process.py\r\nThis module quickly processes image data. \r\n* Loads and processes images without any input from the user.\r\n* Outputs the coordinates of the dataset to `data.txt`.\r\n\r\n
Info and Thanks\r\n-------\r\n* Author: Shawn Wilkinson \u003cme@super3.org\u003e\r\n* Author Website: http://super3.org/\r\n* Project GitHub: https://github.com/super3/PyTrack\r\n* License: GPLv3 \u003chttp://gplv3.fsf.org/\u003e\r\n\r\n
This project was created during the summer of 2012 at the [University of Washington](https://www.washington.edu/) in the [Tom Daniel Lab](http://faculty.washington.edu/danielt/), \r\nas part of the [Center for Sensorimotor Neural Engineering](http://www.csne-erc.org/) Research Experiences for Undergraduates (REU) program. \r\nSpecial thanks to the National Science Foundation (NSF). This project is currently part of ongoing computer vision research in\r\n[Morehouse College](https://morehouse.edu/)'s computer science department under Dr. Amos Johnson.","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsuper3%2Fpytrack","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsuper3%2Fpytrack","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsuper3%2Fpytrack/lists"}