{"id":19883000,"url":"https://github.com/modelscope/motionagent","last_synced_at":"2025-04-06T00:10:42.345Z","repository":{"id":191657364,"uuid":"683069767","full_name":"modelscope/motionagent","owner":"modelscope","description":"MotionAgent is your AI assistent to convert ideas into motion pictures.","archived":false,"fork":false,"pushed_at":"2024-09-02T10:47:22.000Z","size":16,"stargazers_count":293,"open_issues_count":4,"forks_count":36,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-03-29T23:09:46.690Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/modelscope.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-08-25T14:21:38.000Z","updated_at":"2025-03-21T08:05:42.000Z","dependencies_parsed_at":"2024-11-12T17:19:24.102Z","dependency_job_id":"9cafe1fb-c7e0-4b75-98e3-5ca8e2a785a8","html_url":"https://github.com/modelscope/motionagent","commit_stats":null,"previous_names":["modelscope/motionagent"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/modelscope%2Fmotionagent","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/modelscope%2Fmotionagent/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/modelscope%2Fmotionagent/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/modelscope%2Fmotionagent/manifests","owner_url":"https://repos.ecosyste.ms/api/
v1/hosts/GitHub/owners/modelscope","download_url":"https://codeload.github.com/modelscope/motionagent/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247415973,"owners_count":20935387,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-12T17:19:07.209Z","updated_at":"2025-04-06T00:10:42.314Z","avatar_url":"https://github.com/modelscope.png","language":"Python","funding_links":[],"categories":[],"sub_categories":[],"readme":"\u003cp align=\"center\"\u003e\n    \u003cbr\u003e\n    \u003cimg src=\"https://modelscope.oss-cn-beijing.aliyuncs.com/modelscope.gif\" width=\"400\"/\u003e\n    \u003cbr\u003e\n    \u003ch1\u003eMotionAgent\u003c/h1\u003e\n\u003cp\u003e\n\n\n\n# Introduction\n\n如果您熟悉中文，可以阅读[中文版本的README](./README_ZH.md)。\n\nMotionAgent is a deep learning model tool that can generate videos from user-created scripts. 
With the provided toolset, users can write scripts, generate movie stills, turn images into videos, and compose background music.\n\nThe models behind MotionAgent are provided by the open-source model community [ModelScope](https://github.com/modelscope/modelscope).\n\n\n# Features\n- Script Generation\n  - Users can generate scripts by specifying the story theme and background\n  - The script generation model is based on an LLM (such as Qwen-7B-Chat) and can generate scripts in various styles\n- Movie Still Generation\n  - Generate movie still images for the corresponding scenes\n- Video Generation\n  - Generate videos from images\n  - Support high-resolution video generation\n- Music Generation\n  - Generate background music in a custom style\n\n\n\n# Quick Start\n\n## Compatibility Verification\n\nVerified environments:\n- Python 3.8\n- PyTorch 2.0.1\n- CUDA 11.7\n- OS: Ubuntu 20.04\n- NVIDIA A100 40GB\n\n\n## Resource Requirements\n- GPU memory: 36GB\n- Disk: it is recommended to reserve more than 50GB of storage space\n\n\n## Installation Guide\n\n### conda virtual environment\n\nUse a conda virtual environment to manage your dependencies (see [Anaconda](https://docs.anaconda.com/anaconda/install/) for installation instructions). After installation, execute the following commands:\n\n```shell\nconda create -n motion_agent python=3.8\nconda activate motion_agent\n\nGIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/motionagent.git --depth 1\ncd motionagent\n\n# Install dependencies\npip3 install -r requirements.txt\n\n# Run the application\npython3 app.py\n\n# Note: MotionAgent currently runs on a single GPU. If your environment has multiple GPUs, use the following command instead:\n# CUDA_VISIBLE_DEVICES=0 python3 app.py\n# Note: If you are using the ModelScope community notebook, or your free disk space is less than 100GB, turn on the clear_cache switch. Each run will then re-download the models, which significantly reduces speed. 
Please be patient and wait.\n# python3 app.py --clear_cache\n\n# Finally, click on the URL generated in the log to access the page.\n```\n\n\n## Model List\n\n[1]  Qwen-7B-Chat: [Model](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary)  |  [Space](https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary)\n\n[2]  SDXL 1.0: [Model](https://modelscope.cn/models/AI-ModelScope/stable-diffusion-xl-base-1.0/summary)  |  [Space](https://modelscope.cn/studios/AI-ModelScope/Stable_Diffusion_XL_1.0/summary)\n\n[3]  I2VGen-XL: [Model](https://modelscope.cn/models/damo/Image-to-Video/summary)  |  [Space](https://modelscope.cn/models/damo/Video-to-Video/summary)\n\n[4]  MusicGen: [Model](https://modelscope.cn/models/AI-ModelScope/musicgen-large/summary)  |  [Space](https://modelscope.cn/studios/AI-ModelScope/MusicGen/summary)\n\n\n# More Information\n\n- [ModelScope library](https://github.com/modelscope/modelscope/)\n\n  The ModelScope Library is a model ecosystem repository hosted on GitHub, part of the DAMO Academy ModelScope project.\n\n- [Contribute models to ModelScope](https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88)\n\n# License\n\nThis project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmodelscope%2Fmotionagent","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmodelscope%2Fmotionagent","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmodelscope%2Fmotionagent/lists"}