{"id":23233937,"url":"https://github.com/ExponentialML/ComfyUI_ModelScopeT2V","last_synced_at":"2025-08-19T19:32:14.562Z","repository":{"id":226659529,"uuid":"769317580","full_name":"ExponentialML/ComfyUI_ModelScopeT2V","owner":"ExponentialML","description":"Allows native usage of ModelScope based Text To Video Models in ComfyUI","archived":false,"fork":false,"pushed_at":"2024-03-09T00:02:47.000Z","size":41,"stargazers_count":25,"open_issues_count":1,"forks_count":3,"subscribers_count":1,"default_branch":"main","last_synced_at":"2024-04-28T05:04:57.844Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ExponentialML.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2024-03-08T19:48:58.000Z","updated_at":"2024-04-19T00:36:45.000Z","dependencies_parsed_at":"2024-03-10T09:40:38.033Z","dependency_job_id":null,"html_url":"https://github.com/ExponentialML/ComfyUI_ModelScopeT2V","commit_stats":null,"previous_names":["exponentialml/comfyui_modelscopet2v"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ExponentialML%2FComfyUI_ModelScopeT2V","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ExponentialML%2FComfyUI_ModelScopeT2V/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ExponentialML%2FComfyUI_ModelScopeT2V/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ExponentialML%2FComfyUI_ModelScopeT2V/manifests","ow
ner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ExponentialML","download_url":"https://codeload.github.com/ExponentialML/ComfyUI_ModelScopeT2V/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":230367781,"owners_count":18215326,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-12-19T03:02:07.651Z","updated_at":"2025-08-19T19:32:14.544Z","avatar_url":"https://github.com/ExponentialML.png","language":"Python","readme":"# ComfyUI_ModelScopeT2V\n![image](https://github.com/ExponentialML/ComfyUI_ModelScopeT2V/assets/59846140/724b8150-eb30-4f1f-9f3f-c3dc17233825)\n\n\nAllows native usage of ModelScope based Text To Video Models in ComfyUI\n\n## Getting Started\n\n### Clone The Repository\n```bash\ncd /your/path/to/ComfyUI/custom_nodes\ngit clone https://github.com/ExponentialML/ComfyUI_ModelScopeT2V.git\n```\n\n### Preparation\nCreate a folder in your ComfyUI `models` folder named `text2video`.\n\n## Download Models\nModels that were converted to A1111 format will work. 
\n\n### Modelscope\nhttps://huggingface.co/kabachuha/modelscope-damo-text2video-pruned-weights/tree/main\n\n### Zeroscope\nhttps://huggingface.co/cerspense/zeroscope_v2_1111models\n\n### Instructions\nPlace the `text2video_pytorch_model.pth` model file in the `text2video` directory.\n\nYou must also use the accompanying `open_clip_pytorch_model.bin`, and place it in the `clip` folder under your `models` directory.\n\nThis is optional if you're not using the attention layers and are using something like AnimateDiff (more on this in Usage).\n\n## Usage\n\n- `model_path`: The path to your ModelScope model.\n\n- `enable_attn`: Enables the temporal attention of the ModelScope model. If this is disabled, you must apply a 1.5 based model. If this option is enabled and you apply a 1.5 based model, this parameter will be disabled by default. This is due to ModelScope's usage of the SD 2.0 based CLIP model instead of the 1.5 one.\n\n- `enable_conv`: Enables the temporal convolution modules of the ModelScope model. Enabling this option with a 1.5 based model as input allows you to leverage temporal convolutions alongside other modules (such as AnimateDiff).\n\n- `temporal_attn_strength`: Controls the strength of the temporal attention; lower values bring the result closer to the input without temporal properties.\n\n- `temporal_conv_strength`: Controls the strength of the temporal convolution; lower values bring the result closer to the input without temporal properties.\n\n- `sd_15_model`: Optional. If left blank, pure ModelScope will be used.\n\n### Tips\n1. Use the recently released [ResAdapter](https://github.com/bytedance/res-adapter) LoRA for better quality at lower resolutions.\n2. If you're using pure ModelScope, try a higher CFG (around 15) for better coherence. You may also try any other rescale nodes.\n3. When using pure ModelScope, ensure that you use a minimum of 24 frames.\n4. If using with AnimateDiff, make sure to use 16 frames if you're not using context options.\n5. 
You **must** use the same CLIP model as the 1.5 checkpoint if you have `enable_attn` disabled.\n\n## TODO\n\n- [ ] Unconditional guidance (CFG 1) is not yet implemented.\n- [ ] Explore ensembling 1.5 models with the 2.0 CLIP encoder to use all modules.\n\n## Attributions\n\nThe temporal code was borrowed and leveraged from https://github.com/kabachuha/sd-webui-text2video. Thanks @kabachuha!\n\nThanks to the ModelScope team for open sourcing. Check out their [existing works](https://github.com/modelscope).\n","funding_links":[],"categories":["Workflows (3395) sorted by GitHub Stars","All Workflows Sorted by GitHub Stars"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FExponentialML%2FComfyUI_ModelScopeT2V","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FExponentialML%2FComfyUI_ModelScopeT2V","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FExponentialML%2FComfyUI_ModelScopeT2V/lists"}