{"id":46551547,"url":"https://github.com/makefinks/manim-generator","last_synced_at":"2026-03-07T03:30:44.536Z","repository":{"id":279310504,"uuid":"923825886","full_name":"makefinks/manim-generator","owner":"makefinks","description":"Automatic LLM-based video generation using the manim library. Usage of a code-writer and code-reviewer feedback loop with execution logs.","archived":false,"fork":false,"pushed_at":"2026-02-15T15:52:15.000Z","size":7871,"stargazers_count":76,"open_issues_count":1,"forks_count":14,"subscribers_count":2,"default_branch":"main","last_synced_at":"2026-02-15T22:50:50.361Z","etag":null,"topics":["agents","ai","anthropic","deepseek-r1","gemini","large-language-models","llms","manim","manim-python","openai"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/makefinks.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2025-01-28T22:13:55.000Z","updated_at":"2026-02-15T15:52:19.000Z","dependencies_parsed_at":"2025-03-19T14:32:27.290Z","dependency_job_id":"6be1e78b-9d4a-4c0d-a837-4dd927c9868b","html_url":"https://github.com/makefinks/manim-generator","commit_stats":null,"previous_names":["makefinks/manim-generator"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/makefinks/manim-generator","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/makefinks%2Fmanim-generator","tags_url":"https://repos.ecosys
te.ms/api/v1/hosts/GitHub/repositories/makefinks%2Fmanim-generator/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/makefinks%2Fmanim-generator/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/makefinks%2Fmanim-generator/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/makefinks","download_url":"https://codeload.github.com/makefinks/manim-generator/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/makefinks%2Fmanim-generator/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30206560,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-07T03:24:23.086Z","status":"ssl_error","status_checked_at":"2026-03-07T03:23:11.444Z","response_time":53,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","ai","anthropic","deepseek-r1","gemini","large-language-models","llms","manim","manim-python","openai"],"created_at":"2026-03-07T03:30:43.974Z","updated_at":"2026-03-07T03:30:44.522Z","avatar_url":"https://github.com/makefinks.png","language":"Python","readme":"# AI - Manim Video Generator\n\nAutomatic video generation using an agentic LLM flow in combination with the [manim](https://www.manim.community/) python library.\n\n## Overview\n\nThe project 
experiments with automated Manim video creation. An agent workflow delegates code drafting to a `Code Writer` and validation to a `Code Reviewer`, using LiteLLM for model routing so different models from different providers can be compared on the same task. The flow focuses on reducing render failures and improving visual consistency through iterative feedback and optional vision inputs.\n\n## Manim Bench - Leaderboard 📊 \n\n[![Manim Bench](https://img.shields.io/badge/Manim%20Bench-Visit%20Leaderboard-blue?style=for-the-badge)](https://manim-bench.makefinks.dev)\n\nThe manim-generator harness now drives **Manim Bench**, a public leaderboard showcasing how the latest AI models compare on full Manim video generation.\n\n\u003e [manim-bench.makefinks.dev](https://manim-bench.makefinks.dev) \n\nIf you want to see examples of videos created with this project, or are interested in how different models stack up, visit the link above. Feedback appreciated!\n\n## Current Video Generation Process\n\n![Creation flow](images/flow.png)\n\n## Installation\n\n### 1. Clone the repository\n\n```bash\ngit clone https://github.com/makefinks/manim-generator.git\ncd manim-generator\n```\n\n### 2. Install the requirements\n\nWith [uv](https://github.com/astral-sh/uv) (recommended):\n\n```bash\nuv sync\n```\n\nOr using pip directly:\n\n```bash\npython -m venv .venv\nsource .venv/bin/activate # or .venv/Scripts/activate on Windows\npip install -e .\n```\n\n### 3. Additional dependencies\n\nInstall ffmpeg and, if you plan to render LaTeX, a LaTeX distribution.\n\nWindows (using Chocolatey):\n\n```bash\nchoco install ffmpeg\nchoco install miktex\n```\n\nmacOS (using Homebrew):\n\n```bash\nbrew install ffmpeg\nbrew install --cask mactex\n```\n\nLinux (Debian/Ubuntu):\n\n```bash\nsudo apt-get update\nsudo apt-get install texlive texlive-latex-extra texlive-fonts-extra texlive-science\n```\n\n### 4. Configure environment variables\n\nCreate a `.env` file from the provided template:\n\n```bash\ncp .env.example .env\n```\n\nThen edit `.env` and add your API keys. Providers available via [openrouter](https://openrouter.ai/) are supported through LiteLLM using the `openrouter/` prefix, for example `openrouter/openai/gpt-5.1`.\n\nIf you already have an OpenAI or Anthropic API key configured, you can use their APIs directly: `openai/gpt-5.1` / `anthropic/claude-sonnet-4-5`\n\n## Usage\n\n### 1. Execute the script\n\nWith uv (recommended):\n\n```bash\nuv run manim-generate\n```\n\nOr if you've activated the virtual environment:\n\n```bash\nsource .venv/bin/activate\nmanim-generate\n```\n\nOr using Python directly:\n\n```bash\npython -m manim_generator.main\n```\n\n### 2. CLI Arguments\n\nThe script supports the following command-line arguments:\n\n#### Video Data Input\n\n| Argument            | Description                                        | Default          |\n| ------------------- | -------------------------------------------------- | ---------------- |\n| `--video-data`      | Description of the video to generate (text string) | -                |\n| `--video-data-file` | Path to file containing video description          | \"video_data.txt\" |\n\n#### Model Configuration\n\n| Argument         | Description                                                                              | Default                                |\n| ---------------- | ---------------------------------------------------------------------------------------- | -------------------------------------- |\n| `--manim-model`  | Model to use for generating Manim code                                                   | \"openrouter/anthropic/claude-sonnet-4\" |\n| `--review-model` | Model to use for reviewing code                                                          | \"openrouter/anthropic/claude-sonnet-4\" |\n| `--streaming`    | Enable streaming responses from the model                                                | False                                  |\n| `--temperature`  | Temperature for the LLM model                                                            | 0.4                                    |\n| `--force-vision` | Adds images to the review process, even if LiteLLM reports that vision is not supported  | -                                      |\n| `--provider`     | Specific provider to use for OpenRouter requests (e.g., 'anthropic', 'openai')           | -                                      |\n\n#### Process Configuration\n\n| Argument                  | Description                                                                                 | Default                                        |\n| ------------------------- | ------------------------------------------------------------------------------------------- | ---------------------------------------------- |\n| `--review-cycles`         | Number of review cycles to perform                                                          | 5                                              |\n| `--manim-logs`            | Show Manim execution logs                                                                   | False                                          |\n| `--output-dir`            | Directory for generated artifacts (overrides auto-naming)                                   | Auto (e.g., `manim_animation_20250101_120000`) |\n| `--success-threshold`     | Percentage of scenes that must render successfully to trigger enhanced visual review mode   | 100                                            |\n| `--frame-extraction-mode` | Frame extraction mode: highest_density (single best frame) or fixed_count (multiple frames) | \"highest_density\"                              |\n| `--frame-count`           | Number of frames to extract when using fixed_count mode                                     | 3                                              |\n| 
`--scene-timeout`         | Maximum seconds allowed for a single scene render (set to 0 to disable)                     | 400                                            |\n| `--headless`              | Suppress most output and show only a single progress bar                                    | False                                          |\n\n#### Reasoning Tokens Configuration\n\n| Argument                 | Description                                                                                  | Default |\n| ------------------------ | -------------------------------------------------------------------------------------------- | ------- |\n| `--reasoning-effort`     | Reasoning effort level for OpenAI-style models (choices: \"none\", \"minimal\", \"low\", \"medium\", \"high\", \"xhigh\") | -       |\n| `--reasoning-max-tokens` | Maximum tokens for reasoning (Anthropic-style)                                               | -       |\n| `--hide-reasoning`       | Hide reasoning tokens from response output (model still uses reasoning internally). 
| -       |\n\n\u003e Note: You cannot use both `--reasoning-effort` and `--reasoning-max-tokens` at the same time.\n\nProviding `--output-dir` skips the automatic descriptor-based folder name and uses the supplied path instead.\n\n### Example\n\n```bash\nuv run manim-generate --video-data \"Explain the concept of neural networks with visual examples\" --manim-model \"openrouter/anthropic/claude-sonnet-4\" --review-model \"openrouter/anthropic/claude-sonnet-4\" --review-cycles 3\n```\n\nOr with the command directly (if virtual environment is activated):\n\n```bash\nmanim-generate --video-data \"Explain the concept of neural networks with visual examples\" --manim-model \"openrouter/anthropic/claude-sonnet-4\" --review-model \"openrouter/anthropic/claude-sonnet-4\" --review-cycles 3\n```\n\nSome standard prompts for benchmarking different models are in the directory `bench_prompts/`:\n\n```bash\nmanim-generate --video-data-file bench_prompts/llm_explainer.txt\n```\n\n### 3. Configure image support\n\nImages are available only when the reviewer model supports multimodal input. You can check model capabilities here:\n\n- https://openrouter.ai/models?modality=text+image-%3Etext\n- https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json\n\n## Contributing\n\nFocus areas include prompt improvements, review loop refinements, code quality, and new features or optimizations.\n\n### Known issues\n\n- **Streaming**: the current streaming implementation does not provide syntax highlighting\n- **Prompting / environment setup**: the Manim version assumed by the selected LLM may not match the local installation.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmakefinks%2Fmanim-generator","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmakefinks%2Fmanim-generator","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmakefinks%2Fmanim-generator/lists"}