{"id":18303568,"url":"https://github.com/ericguo5513/tm2t","last_synced_at":"2025-04-11T03:35:29.949Z","repository":{"id":42560718,"uuid":"510459934","full_name":"EricGuo5513/TM2T","owner":"EricGuo5513","description":"Official implementation of \"TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts (ECCV2022)\"","archived":false,"fork":false,"pushed_at":"2024-08-18T00:23:39.000Z","size":15983,"stargazers_count":114,"open_issues_count":4,"forks_count":14,"subscribers_count":6,"default_branch":"main","last_synced_at":"2025-02-12T19:15:09.727Z","etag":null,"topics":["motion-generation","motion-generator","motion-to-text","pytorch-implementation","text-to-motion"],"latest_commit_sha":null,"homepage":"https://ericguo5513.github.io/TM2T","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/EricGuo5513.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-07-04T18:10:28.000Z","updated_at":"2025-02-06T14:51:48.000Z","dependencies_parsed_at":"2024-01-14T03:49:35.930Z","dependency_job_id":"c9fcbfab-a535-4b25-8918-94a9122b54db","html_url":"https://github.com/EricGuo5513/TM2T","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EricGuo5513%2FTM2T","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EricGuo5513%2FTM2T/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EricGuo5513%2FTM2T/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EricGuo5513%2FTM2T/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/EricGuo5513","download_url":"https://codeload.github.com/EricGuo5513/TM2T/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":239727074,"owners_count":19687098,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["motion-generation","motion-generator","motion-to-text","pytorch-implementation","text-to-motion"],"created_at":"2024-11-05T15:26:03.602Z","updated_at":"2025-02-19T20:09:25.780Z","avatar_url":"https://github.com/EricGuo5513.png","language":"Python","readme":"# TM2T: Stochastical and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts (ECCV 2022)\n## [[Project Page]](https://ericguo5513.github.io/TM2T) [[Paper]](https://arxiv.org/abs/2207.01696.pdf)\n\n![teaser_image](https://github.com/EricGuo5513/TM2T/blob/main/docs/teaser_image.png)\n  \n## Python Virtual Environment\n\nAnaconda is recommended to create this virtual environment.\n  \n  ```sh\n  conda create -f environment.yaml\n  conda activate 
## Download Data & Pre-trained Models

**If you just want to play with our pre-trained models, you don't need to download the datasets.**
### Datasets
We are using two 3D human motion-language datasets: HumanML3D and KIT-ML. For both datasets, you can find the details as well as the download links [[here]](https://github.com/EricGuo5513/HumanML3D).
Please note that you don't need to clone that git repository, since all related code has already been included in the current git project.

Download and unzip the dataset files -> create a dataset folder -> place the related data files in the dataset folder:
```sh
mkdir ./dataset/
```
Taking HumanML3D as an example, the file directory should look like this:
```
./dataset/
./dataset/HumanML3D/
./dataset/HumanML3D/new_joint_vecs/
./dataset/HumanML3D/texts/
./dataset/HumanML3D/Mean.npy
./dataset/HumanML3D/Std.npy
./dataset/HumanML3D/test.txt
./dataset/HumanML3D/train.txt
./dataset/HumanML3D/train_val.txt
./dataset/HumanML3D/val.txt
./dataset/HumanML3D/all.txt
```
### Pre-trained Models
Create a checkpoint folder to place the pre-trained models:
```sh
mkdir ./checkpoints
```

#### Download models for HumanML3D from [[here]](https://drive.google.com/file/d/1OXy2FBhXrswT6zE4SBSPpVfQhxmI8Zzy/view?usp=sharing). Unzip and place them under the checkpoint directory, which should look like this:
```
./checkpoints/t2m/
./checkpoints/t2m/Comp_v6_KLD005/                  # A dummy folder containing information for evaluation dataloading
./checkpoints/t2m/VQVAEV3_CB1024_CMT_H1024_NRES3/  # Motion discretizer
./checkpoints/t2m/M2T_EL4_DL4_NH8_PS/              # Motion (token)-to-Text translation model
./checkpoints/t2m/T2M_Seq2Seq_NML1_Ear_SME0_N/     # Text-to-Motion (token) generation model
./checkpoints/t2m/text_mot_match/                  # Motion & Text feature extractors for evaluation
```
#### Download models for KIT-ML from [[here]](https://drive.google.com/file/d/1ied_KWvqXXsP2Gls-SvzjXIZtHHZ5zpi/view?usp=sharing). Unzip and place them under the checkpoint directory.
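Since a misplaced file is the most common setup mistake, a small script can verify the expected layout before training. The paths below simply mirror the directory trees above; the script itself is a hypothetical helper, not part of the repository:

```python
# verify_layout.py -- hypothetical helper that mirrors the directory trees above
import os

EXPECTED = [
    "./dataset/HumanML3D/new_joint_vecs",
    "./dataset/HumanML3D/texts",
    "./dataset/HumanML3D/Mean.npy",
    "./dataset/HumanML3D/Std.npy",
    "./dataset/HumanML3D/train.txt",
    "./dataset/HumanML3D/val.txt",
    "./dataset/HumanML3D/test.txt",
    "./checkpoints/t2m/VQVAEV3_CB1024_CMT_H1024_NRES3",
    "./checkpoints/t2m/M2T_EL4_DL4_NH8_PS",
    "./checkpoints/t2m/T2M_Seq2Seq_NML1_Ear_SME0_N",
]

missing = [p for p in EXPECTED if not os.path.exists(p)]
if missing:
    print("Missing paths:")
    for p in missing:
        print("  " + p)
else:
    print("Dataset and checkpoint layout looks complete.")
```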
## Training Models

All intermediate meta files/animations/models will be saved to the checkpoint directory, under the folder specified by the argument `--name`.
### Training the motion discretizer
#### HumanML3D
```sh
python train_vq_tokenizer_v3.py --gpu_id 0 --name VQVAEV3_CB1024_CMT_H1024_NRES3 --dataset_name t2m --n_resblk 3
```
#### KIT-ML
```sh
python train_vq_tokenizer_v3.py --gpu_id 0 --name VQVAEV3_CB1024_CMT_H1024_NRES3 --dataset_name kit --n_resblk 3
```
### Tokenizing all motion data for the following training
#### HumanML3D
```sh
python tokenize_script.py --gpu_id 0 --name VQVAEV3_CB1024_CMT_H1024_NRES3 --dataset_name t2m
```

#### KIT-ML
```sh
python tokenize_script.py --gpu_id 0 --name VQVAEV3_CB1024_CMT_H1024_NRES3 --dataset_name kit
```

### Training the motion2text model
#### HumanML3D
```sh
python train_m2t_transformer.py --gpu_id 0 --name M2T_EL4_DL4_NH8_PS --n_enc_layers 4 --n_dec_layers 4 --proj_share_weight --dataset_name t2m
```
#### KIT-ML
```sh
python train_m2t_transformer.py --gpu_id 0 --name M2T_EL3_DL3_NH8_PS --n_enc_layers 3 --n_dec_layers 3 --proj_share_weight --dataset_name kit
```
### Training the text2motion model
#### HumanML3D
```sh
python train_t2m_joint_seq2seq.py --gpu_id 0 --name T2M_Seq2Seq_NML1_Ear_SME0_N --start_m2t_ep 0 --dataset_name t2m
```
#### KIT-ML
```sh
python train_t2m_joint_seq2seq.py --gpu_id 0 --name T2M_Seq2Seq_NML1_Ear_SME0_N --start_m2t_ep 0 --dataset_name kit
```
### Motion & text feature extractors
We use the same extractors provided by https://github.com/EricGuo5513/text-to-motion.


## Generating and Animating 3D Motions (HumanML3D)
### Translating motions into language (using the test set)
With beam search:
```sh
python evaluate_m2t_transformer.py --name M2T_EL4_DL4_NH8_PS --gpu_id 2 --num_results 20 --n_enc_layers 4 --n_dec_layers 4 --proj_share_weight --ext beam_search
```

With sampling:
```sh
python evaluate_m2t_transformer.py --name M2T_EL4_DL4_NH8_PS --gpu_id 2 --num_results 20 --n_enc_layers 4 --n_dec_layers 4 --proj_share_weight --sample --top_k 3 --ext top_3
```

### Generating motions from texts (using the test set)
```sh
python evaluate_t2m_seq2seq.py --name T2M_Seq2Seq_NML1_Ear_SME0_N --num_results 10 --repeat_times 3 --sample --ext sample
```
where *--repeat_times* specifies how many sampling rounds are carried out for each description. This script will result in 3x10 animations under the directory *./eval_results/t2m/T2M_Seq2Seq_NML1_Ear_SME0_N/sample/*.

### Sampling results from customized descriptions
```sh
python gen_script_t2m_seq2seq.py --name T2M_Seq2Seq_NML1_Ear_SME0_N --repeat_times 3 --sample --ext customized --text_file ./input.txt
```
This will generate 3 animated motions for each description given in the text file *./input.txt*.
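For reference, the input file could look like the sketch below, assuming one plain-text description per line (an assumption inferred from the command above; check the repository's own *input.txt* for the authoritative format):

```
a person walks forward and then turns around.
a person jumps over an obstacle and keeps running.
a person sits down and crosses their legs.
```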
If you have problems installing ffmpeg, you may not be able to animate the 3D results as mp4. Try gif instead.

## Quantitative Evaluations
### Evaluating Motion2Text
```sh
python final_evaluation_m2t.py
```
### Evaluating Text2Motion
```sh
python final_evaluation_t2m.py
```
These scripts evaluate the model performance on the HumanML3D dataset by default. You can also run them on the KIT-ML dataset by uncommenting certain lines in the corresponding *./final_evaluation_(m2t/t2m).py* script. The statistical results will be saved to *./m2t(t2m)_evaluation.log*.

### Misc
Contact Chuan Guo at cguo2@ualberta.ca for any questions or comments.