{"id":20632537,"url":"https://github.com/omimo/pymo","last_synced_at":"2025-04-06T04:11:29.206Z","repository":{"id":41322457,"uuid":"109879364","full_name":"omimo/PyMO","owner":"omimo","description":"A library for machine learning research on motion capture data ","archived":false,"fork":false,"pushed_at":"2022-09-01T10:37:02.000Z","size":6152,"stargazers_count":357,"open_issues_count":10,"forks_count":70,"subscribers_count":22,"default_branch":"master","last_synced_at":"2025-03-30T03:06:07.037Z","etag":null,"topics":["library","machine-learning","mocap","motion-capture","python","scikit-learn"],"latest_commit_sha":null,"homepage":"https://omid.al/projects/pymo/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/omimo.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-11-07T19:13:29.000Z","updated_at":"2025-03-24T11:23:29.000Z","dependencies_parsed_at":"2023-01-17T19:01:11.955Z","dependency_job_id":null,"html_url":"https://github.com/omimo/PyMO","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/omimo%2FPyMO","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/omimo%2FPyMO/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/omimo%2FPyMO/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/omimo%2FPyMO/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/omimo","download_url":"https://codeload.github.com/omimo/PyMO/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://
github.com","kind":"github","repositories_count":247430871,"owners_count":20937874,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["library","machine-learning","mocap","motion-capture","python","scikit-learn"],"created_at":"2024-11-16T14:16:33.822Z","updated_at":"2025-04-06T04:11:29.131Z","avatar_url":"https://github.com/omimo.png","language":"Python","readme":"# PyMO\nA library for using motion capture data for machine learning\n\n**This library is currently highly experimental and everything is subject to change :)**\n\n\n## Roadmap\n* Mocap Data Parsers and Writers\n* Common mocap pre-processing algorithms\n* Feature extraction library\n* Visualization tools\n\n## Current Features\n* [Read BVH Files](#read-bvh-files)\n* Write BVH Files\n* Pre-processing pipelines\n    * [Supporting `scikit-learn` API](#scikit-learn-pipeline-api)\n    * Convert data representations\n        * [Euler angles to positions](#convert-to-positions)\n        * Euler angles to exponential maps\n        * Exponential maps to Euler angles\n    * Body-oriented global translation and rotation calculation with inverse transform\n    * Root-centric position normalizer with inverse transform\n    * Standard scaler\n    * Joint selectors\n* Visualization tools\n    * [Skeleton hierarchy](#get-skeleton-info)\n    * [2D frame visualization](#visualize-a-single-2d-frame)\n    * [3D WebGL-based animation](#animate-in-3d-inside-a-jupyter-notebook)\n* Annotations\n    * Foot/ground contact detector\n\n\n### Read BVH Files\n\n```python\nfrom pymo.parsers import BVHParser\n\nparser = 
BVHParser()\n\nparsed_data = parser.parse('demos/data/AV_8Walk_Meredith_HVHA_Rep1.bvh')\n```\n\n### Get Skeleton Info\n\n```python\nfrom pymo.viz_tools import *\n\nprint_skel(parsed_data)\n```\nWill print the skeleton hierarchy:\n```\n- Hips (None)\n| | - RightUpLeg (Hips)\n| | - RightLeg (RightUpLeg)\n| | - RightFoot (RightLeg)\n| | - RightToeBase (RightFoot)\n| | - RightToeBase_Nub (RightToeBase)\n| - LeftUpLeg (Hips)\n| - LeftLeg (LeftUpLeg)\n| - LeftFoot (LeftLeg)\n| - LeftToeBase (LeftFoot)\n| - LeftToeBase_Nub (LeftToeBase)\n- Spine (Hips)\n| | - RightShoulder (Spine)\n| | - RightArm (RightShoulder)\n| | - RightForeArm (RightArm)\n| | - RightHand (RightForeArm)\n| | | - RightHand_End (RightHand)\n| | | - RightHand_End_Nub (RightHand_End)\n| | - RightHandThumb1 (RightHand)\n| | - RightHandThumb1_Nub (RightHandThumb1)\n| - LeftShoulder (Spine)\n| - LeftArm (LeftShoulder)\n| - LeftForeArm (LeftArm)\n| - LeftHand (LeftForeArm)\n| | - LeftHand_End (LeftHand)\n| | - LeftHand_End_Nub (LeftHand_End)\n| - LeftHandThumb1 (LeftHand)\n| - LeftHandThumb1_Nub (LeftHandThumb1)\n- Head (Spine)\n- Head_Nub (Head)\n```\n\n\n### scikit-learn Pipeline API\n\n```python\n\nfrom pymo.preprocessing import *\nfrom sklearn.pipeline import Pipeline\n\ndata_pipe = Pipeline([\n    ('param', MocapParameterizer('position')),\n    ('rcpn', RootCentricPositionNormalizer()),\n    ('delta', RootTransformer('abdolute_translation_deltas')),\n    ('const', ConstantsRemover()),\n    ('np', Numpyfier()),\n    ('down', DownSampler(2)),\n    ('stdscale', ListStandardScaler())\n])\n\npiped_data = data_pipe.fit_transform([parsed_data])\n```\n\n### Convert to Positions\n\n```python\nmp = MocapParameterizer('position')\n\npositions = mp.fit_transform([parsed_data])\n```\n\n### Visualize a single 2D Frame\n\n```python\ndraw_stickfigure(positions[0], frame=10)\n```\n\n![2D Skeleton Viz](assets/viz_skel_2d.png)\n\n### Animate in 3D (inside a Jupyter Notebook)\n\n```python\nnb_play_mocap(positions[0], 'pos', 
\n              scale=2, camera_z=800, frame_time=1/120, \n              base_url='pymo/mocapplayer/playBuffer.html')\n```\n\n![Mocap Player](assets/mocap_player.png)\n\n\n### Foot/Ground Contact Detector\n```python\nfrom pymo.features import *\n\nplot_foot_up_down(positions[0], 'RightFoot_Yposition')\n```\n\n![Foot Contact](assets/foot_updown.png)\n\n```python\nimport matplotlib.pyplot as plt\n\nsignal = create_foot_contact_signal(positions[0], 'RightFoot_Yposition')\nplt.figure(figsize=(12,5))\nplt.plot(signal, 'r')\nplt.plot(positions[0].values['RightFoot_Yposition'].values, 'g')\n```\n\n![Foot Contact Signal](assets/footcontact_signal.png)\n\n## Feedback, Bugs, and Questions\nFor any questions, feedback, and bug reports, please use the [GitHub Issues](https://github.com/omimo/PyMO/issues).\n\n## Credits\nCreated by [Omid Alemi](https://omid.al/projects/)\n\n\n## License\nThis code is available under the [MIT license](http://opensource.org/licenses/MIT).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fomimo%2Fpymo","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fomimo%2Fpymo","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fomimo%2Fpymo/lists"}