{"id":14342945,"url":"https://github.com/littlespray/VE-Bench","last_synced_at":"2025-08-20T03:33:17.293Z","repository":{"id":254856285,"uuid":"845439756","full_name":"littlespray/VE-Bench","owner":"littlespray","description":"[AAAI 25] Official Implementation for ”E-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment“","archived":false,"fork":false,"pushed_at":"2024-12-18T09:54:55.000Z","size":11116,"stargazers_count":27,"open_issues_count":1,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2024-12-18T10:39:17.313Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/littlespray.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-08-21T08:55:38.000Z","updated_at":"2024-12-18T09:54:58.000Z","dependencies_parsed_at":"2024-12-18T10:39:21.599Z","dependency_job_id":"ee065b94-9723-49d8-8bce-60ffaeeb2675","html_url":"https://github.com/littlespray/VE-Bench","commit_stats":null,"previous_names":["littlespray/e-bench","littlespray/ve-bench"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/littlespray%2FVE-Bench","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/littlespray%2FVE-Bench/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/littlespray%2FVE-Bench/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/littlespray%2FVE
-Bench/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/littlespray","download_url":"https://codeload.github.com/littlespray/VE-Bench/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":230391243,"owners_count":18218320,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-26T16:00:54.938Z","updated_at":"2025-08-20T03:33:17.288Z","avatar_url":"https://github.com/littlespray.png","language":"Python","readme":"# [\\[AAAI 25\\] VE-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment](https://arxiv.org/abs/2408.11481)\r\n\r\n\u003cdiv align=\"center\"\u003e\r\nShangkun Sun, Xiaoyu Liang, Songlin Fan, Wenxu Gao, Wei Gao* \u003cbr\u003e\r\n\r\n(* Corresponding author)\u003cbr\u003e\r\n\r\nfrom MMCAL, Peking University\r\n\u003c/div\u003e\r\n\r\n\u003c!-- \u003cdiv align=\"center\"\u003e\r\n\u003cvideo src=\"assets/demo.mp4\"\u003e\u003c/video\u003e\r\n\u003c/div\u003e --\u003e\r\n\r\n## 🎦 Introduction\r\nTL;DR: VE-Bench is an evaluation suite for text-driven video editing, consisting of a quality assessment model to provide a human-aligned metric for edited videos, and a database containing rich video-prompt pairs and the corresponding human scores.\r\n\r\n\u003cdiv align=\"center\"\u003e\r\n\u003cimg src=\"assets/overview.jpg\" width = 50% height = 50%/\u003e\r\n\u003cbr\u003e\r\nOverview of the VE-Bench Suite\r\n\u003c/div\u003e\r\n\r\nVE-Bench DB contains a rich collection of source videos, including real-world videos, AIGC videos, and 
CG videos, covering various aspects such as people, objects, animals, and landscapes. It also includes a variety of editing instructions across different categories: semantic edits such as addition, removal, and replacement; structural changes such as size and shape; and stylizations such as color and texture. Additionally, it features editing results produced by different video editing models. We conducted a subjective experiment involving 24 participants from diverse backgrounds, resulting in 28,080 score samples. We further trained the VE-Bench QA model on this data. The left image below shows the box plot of average scores obtained by each model during the subjective experiment, while the right image illustrates the scores for each model across different types of prompts.\r\n\r\n\u003cdiv align=\"center\"\u003e\r\n\u003cimg src=\"assets/scores.jpg\" width = 70% height = 70%/\u003e\r\n\u003cbr\u003e\r\nLeft: Average score distributions of 8 editing methods. \u0026emsp; \u0026emsp; Right: Performance on different types of prompts from previous video-editing methods.\r\n\u003c/div\u003e\r\n\r\n## Easy Use\r\nVE-Bench can be installed with a single ``pip`` command.\r\n```\r\npip install vebench\r\n```\r\nTo compare edited videos, run ``python test.py``, which does the following:\r\n```\r\nfrom vebench import VEBenchModel\r\n\r\nevaluator = VEBenchModel()\r\n\r\nscore1 = evaluator.evaluate('A black-haired boy is turning his head', 'assets/src.mp4', 'assets/dst.mp4')\r\nscore2 = evaluator.evaluate('A black-haired boy is turning his head', 'assets/src.mp4', 'assets/dst2.mp4')\r\nprint(score1, score2) # Score1: 1.3563, Score2: 0.66194\r\n```\r\nBecause the model normalizes scores during training, its output is a relative quality score rather than an absolute 1 \\~ 10 rating, as the example above demonstrates.\r\n\r\n## Database\r\nVE-Bench DB is available here: 
[baidu netdisk](https://pan.baidu.com/s/1D5y6ADXgz8PPHGCxROlNIQ?pwd=sggc) | [google drive](https://drive.google.com/file/d/1SBmXK6XKuyGTaV9LUQXfy5w82bsA3Nve/view?usp=sharing)\r\n\r\n\r\n## Local Inference\r\n\r\n### 💼 Preparation\r\n```\r\ncd vebench\r\n```\r\n\r\nDownload all checkpoints from [google drive](https://drive.google.com/drive/folders/1kD82Ex90VP9A_AqjYV1J5DYvBQW-hkXa?usp=sharing) and put them into ``ckpts``.\r\n\r\n### ✨ Usage\r\nTo evaluate a single video:\r\n```\r\npython infer.py --single_test --src_path ${path_to_source_video} --dst_path ${path_to_dst_video} --prompt ${editing_prompt}\r\n\r\n# Run on example videos\r\n# python infer.py --single_test --src_path \"./data/src/00433tokenflow_baby_gaze.mp4\" --dst_path \"./data/edited/00433tokenflow_baby_gaze.mp4\" --prompt \"A black-haired boy is turning his head\"\r\n```\r\n\r\n\r\nTo evaluate a set of videos:\r\n```\r\npython infer.py --data_path ${path_to_data_folder} --label_path ${path_to_prompt_txt_file}\r\n```\r\n\r\n## 🙏 Acknowledgements\r\nParts of the code are based on [DOVER](https://github.com/VQAssessment/DOVER) and [BLIP](https://github.com/salesforce/BLIP). 
We would like to thank the authors for their contributions to the community.\r\n\r\n\r\n## 📭 Contact\r\nIf you have any comments or questions, feel free to contact [sunshk@stu.pku.edu.cn](mailto:sunshk@stu.pku.edu.cn).\r\n\r\n\r\n\r\n## 📖 BibTeX\r\n```bibtex\r\n@article{sun2024bench,\r\n  title={VE-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment},\r\n  author={Sun, Shangkun and Liang, Xiaoyu and Fan, Songlin and Gao, Wenxu and Gao, Wei},\r\n  journal={arXiv preprint arXiv:2408.11481},\r\n  year={2024}\r\n}\r\n```\r\n","funding_links":[],"categories":["Video Editing"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flittlespray%2FVE-Bench","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Flittlespray%2FVE-Bench","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Flittlespray%2FVE-Bench/lists"}