# Structured Prediction Helps 3D Human Motion Modelling
Code repository for [our paper](https://ait.ethz.ch/spl) presented at ICCV '19.

We provide data preprocessing scripts, a training pipeline, and evaluation and visualization tools. The model implementation and pre-trained models will come soon.

### Required packages
We recommend creating a virtual environment and installing the required packages by running:
```
pip install -r requirements.txt
```
We used `Tensorflow 1.12.0` in our experiments. We also checked that Tensorflow versions up to `1.14.0` work, but they emit many deprecation warnings due to the upcoming TF 2.0.
Please note that having `numpy 1.14.5` is important. If you use other versions of TF or other packages, make sure that you still have the correct numpy version.

### Preparing the Data
Download the data from the [DIP website](http://dip.is.tue.mpg.de/) and unzip it into a folder of your choice. Let's call that folder `<RAW_DATA>`. Create a folder where you want to store the processed data, `<SPL_DATA>`.
In the folder [`preprocessing`](./preprocessing), run the script

```
cd preprocessing
python preprocess_dip.py --input_dir <RAW_DATA> --output_dir <SPL_DATA>
```

By default the script generates the data using rotation matrix representations. If you want to convert the data to angle-axis or quaternions, use the `--as_aa` or `--as_quat` flags.

This script creates the training, validation, and test splits used to produce the results in the paper. Note that the data split is deterministic, determined by the files `training_fnames.txt`, `validation_fnames.txt`, and `test_fnames.txt` under [`preprocessing`](./preprocessing).

The script creates two versions of the validation and test splits: one where each motion sequence is split into subsequences of 180 frames (3 seconds) using a sliding window, and one where sequences are not split (referred to as the `dynamic` split). When we load data during training or evaluation, we always extract only one window of size `W` from each sequence. Hence, the sliding-window splitting "blows up" the number of samples, while the dynamic split effectively contains fewer samples, which is sometimes convenient (e.g., for debugging or visualization).

A note on the data: the data published on the DIP website is an early version of the official AMASS dataset. When we submitted the paper, the official [AMASS dataset](https://amass.is.tue.mpg.de/) had not been published yet. We plan to evaluate our model and the baseline models on the official AMASS dataset and report the results here.
If you plan to use the latest version of AMASS, we are happy to provide assistance if required. However, it shouldn't be too hard to adapt `preprocess_dip.py` to parse the AMASS data.

### Training
You can pass the data and save directories via command-line arguments every time you run an experiment. Alternatively, you can set the `AMASS_DATA` and `AMASS_EXPERIMENTS` environment variables.
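The sliding-window splitting described in the Preparing the Data section above can be sketched as follows. This is only an illustration: the helper name, the stride, and the feature dimension are assumptions, and only the 180-frame window size comes from the text; the repository's actual preprocessing may differ.

```python
import numpy as np

def extract_windows(seq, window=180, stride=60):
    """Split a motion sequence (frames x features) into fixed-size
    subsequences using a sliding window. The stride is illustrative;
    only the 180-frame window size is taken from the README."""
    n_frames = seq.shape[0]
    if n_frames < window:
        return []  # the "dynamic" split would instead keep the sequence whole
    return [seq[s:s + window] for s in range(0, n_frames - window + 1, stride)]

# A 400-frame dummy sequence yields several overlapping windows,
# which is how the sliding-window split "blows up" the sample count.
seq = np.zeros((400, 135))  # e.g., 15 joints x flattened 3x3 rotation matrices
windows = extract_windows(seq)
print(len(windows), windows[0].shape)  # 4 windows of shape (180, 135)
```

At load time, only one window of size `W` would then be drawn from each such subsequence, as described above.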
You can run the following commands:
```
export AMASS_DATA=<SPL_DATA>
export AMASS_EXPERIMENTS=<path-to-experiment-directory>
export PYTHONPATH=$PYTHONPATH:<path-to-this-repository>
```
Please note that updating `PYTHONPATH` is required, while `AMASS_DATA` and `AMASS_EXPERIMENTS` are optional.

You can train the `zero_velocity` model with the rotation matrix representation as follows:
```
cd <path-to-this-repository>
python spl/training.py --model_type zero_velocity --data_type rotmat
```
The experiment is stored with a unique timestamp under `AMASS_EXPERIMENTS`, or under the given target directory if you run the training command with the `--data_dir` flag.
See the available flags and choices in `spl/training.py`. We will add our model and the baselines soon.

You can easily extend this repository by implementing a new model. Please see the docstring of `spl.model.base_model.py` to read about the interface.

### Evaluation
You can evaluate and/or visualize models after training. The following command visualizes clips of 60 frames by evaluating the model on the test dataset with full sequences.
See the available flags and choices in `spl/evaluation.py`.
```
python spl/evaluation.py --model_id <experiment-timestamp> --visualize --seq_length_out 60 --dynamic_test_split
```

Please note that by default the visualization code displays interactive animations using matplotlib. To make interactive frame rates possible, only the skeleton is displayed. You can also create videos of the full SMPL mesh or the skeleton by adding the `--to_video` option. However, in order to render videos with the SMPL mesh, you need the SMPL model, which we cannot provide due to licensing issues. If you are interested in using SMPL, the best option is to download the latest code from the AMASS repository and integrate it with ours.
Feel free to contact us if you have questions about this.

### Pre-trained Models
The `pretrained_configs` folder contains the configurations we used. To re-run an experiment, you can simply run:
```
python spl/training.py --from_config <path-to-a-model-config.json>
```
Due to the stochastic nature of training, you may not get exactly the same results.
However, the resulting models should be only marginally better or worse. If this is not the case, please contact us.
The models we used in the paper can be [downloaded from here](https://files.ait.ethz.ch/projects/spl/spl_models.zip).
You can run evaluation with them or visualize their results. Note that the `QuaterNet` models are not available yet.

### Sample scripts
Under `spl/test/`, we share sample scripts showing how to use individual components of this repository (i.e., metrics, visualization, tfrecord data) without requiring the entire pipeline.

### Citation
If you use code from this repository, please cite

```
@inproceedings{Aksan_2019_ICCV,
  title={Structured Prediction Helps 3D Human Motion Modelling},
  author={Aksan, Emre and Kaufmann, Manuel and Hilliges, Otmar},
  booktitle={The IEEE International Conference on Computer Vision (ICCV)},
  month={Oct},
  year={2019},
  note={First two authors contributed equally.}
}
```

If you use data from DIP or AMASS, please cite the original papers as detailed on their websites.

### Contact
Please file an issue or contact [Emre Aksan (emre.aksan@inf.ethz.ch)](mailto:emre.aksan@inf.ethz.ch) or [Manuel Kaufmann (manuel.kaufmann@inf.ethz.ch)](mailto:manuel.kaufmann@inf.ethz.ch).