# Anime Frame Optimizer (AFOptimizer)

## Overview
AFOptimizer removes redundant frames from anime and stylised video sources.
The project ships with two entry points:
- **Web Dashboard** (`app.py`): A Flask-based web interface that lets you upload footage, watch live progress, and download processed clips.
- **Command-Line Interface** (`cli.py`): A flexible CLI with per-method fine-tuning flags for automation and batch work.

Under the hood, AFOptimizer bundles four complementary pruning strategies (Optical Flow, Frame Difference, SSIM, and an advanced Unsupervised Deduplication pipeline), so you can mix accuracy, speed, and perceptual quality to fit each project.

## Key Features
- **Multi-method optimisation**: Dense motion analysis, adaptive pixel differencing, perceptual similarity, and a three-stage unsupervised deduper.
- **GPU acceleration**: Automatic detection and utilization of NVIDIA, AMD, Intel, and Apple Silicon GPUs for faster processing.
- **Hardware-accelerated encoding**: Uses NVENC, AMF, QuickSync, or VideoToolbox when available for faster video encoding.
- **Web interface**: Browser workflow with background job queue, live progress updates, downloadable artefacts, and automatic upload cleanup.
- **Command-line execution**: Reusable defaults, opt-in presets, and granular override flags for every method parameter.
- **Video encoding**: Automatic H.264 fast-start transcode plus optional encoding controls for CRF and preset.
- **Built-in safeguards**: Safety windows between keyframes and adaptive thresholding to limit over-pruning.

## Supported Formats
Input formats: `.mp4`, `.mov`, `.avi`, `.mkv`, `.webm`  
Output format: H.264 MP4 (compatible with all modern players)

## Setup
1. Create and activate a Python environment (Python 3.9+ recommended):
   ```bash
   python3 -m venv myenv
   source myenv/bin/activate  # On Windows: myenv\Scripts\activate
   ```

2. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
3. Install `ffmpeg` and ensure it is discoverable on your `PATH` (required for video encoding):
   - **Linux**: `sudo apt-get install ffmpeg` or `sudo yum install ffmpeg`
   - **macOS**: `brew install ffmpeg`
   - **Windows**: Download from [ffmpeg.org](https://ffmpeg.org/download.html)

4. **(Optional) GPU Acceleration Setup**:
   AFOptimizer automatically detects and uses GPU acceleration when available. No configuration needed!

   **For NVIDIA GPUs (CUDA)**:
   - Install CUDA toolkit and drivers
   - Install OpenCV with CUDA support (or build from source)
   - Optional: `pip install cupy-cuda11x` or `cupy-cuda12x` (matching your CUDA version)

   **For AMD GPUs**:
   - Install AMD drivers and OpenCL runtime
   - Optional: `pip install pyopencl`

   **For Intel integrated GPUs**:
   - Install Intel OpenCL drivers
   - Optional: `pip install pyopencl`

   **For Apple Silicon (M1/M2/M3)**:
   - No additional setup needed! VideoToolbox is automatically used if available.

   The system gracefully falls back to CPU processing if a GPU is not available. You can test GPU detection by running:
   ```bash
   python3 test_gpu_detection.py
   ```
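A quick way to confirm the `ffmpeg` prerequisite from Python is to look it up on `PATH` with the standard library (a minimal sketch; AFOptimizer performs its own checks and does not require this step):

```python
import shutil

def ffmpeg_available() -> bool:
    """Return True if an ffmpeg executable is discoverable on PATH."""
    return shutil.which("ffmpeg") is not None

if __name__ == "__main__":
    print("ffmpeg found" if ffmpeg_available() else "ffmpeg missing - install it first")
```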
## Web Dashboard (Flask App)

### Starting the Server
Launch the Flask application:
```bash
python3 app.py
```

The server starts on `http://0.0.0.0:5000` (accessible at `http://localhost:5000` in your browser).

### Using the Web Interface

1. **Upload Video**:
   - Drag and drop a video file or click to browse
   - Supported formats: MP4, MOV, AVI, MKV, WEBM

2. **Choose Method**:
   - **Optical Flow**: Detects motion by tracking pixel flow for smooth transitions
   - **Frame Difference**: Flags static frames by comparing brightness deltas
   - **SSIM**: Keeps only frames with meaningful structural change
   - **Unsupervised Dedup**: Three-stage hashing, features, and motion flow to drop redundant frames

3. **Fine-Tune Parameters**:
   - **Optical Flow**: Adjust flow magnitude threshold (default: 0.4)
   - **Frame Difference**: Adjust base threshold (default: 10)
   - **SSIM**: Adjust SSIM threshold (default: 0.9587)
   - **Unsupervised Dedup**: Choose profile (Gentle/Balanced/Aggressive)

4. **Monitor Progress**:
   - Real-time progress bar with percentage completion
   - Live FPS (frames per second) processing rate
   - Elapsed time and estimated time remaining (ETA)
   - Current processing stage information

5. **Download Results**:
   - Once processing completes, a download link appears
   - Processed videos are saved in the `outputs/` directory
   - Original uploads are automatically cleaned up after processing

### Web Interface Features
- **Background Processing**: Jobs run in separate threads, allowing multiple uploads
- **Live Updates**: Progress updates via AJAX polling every few seconds
- **Automatic Cleanup**: Temporary upload files are removed after job completion
- **Error Handling**: Clear error messages displayed if processing fails
- **Responsive Design**: Works on desktop and mobile browsers

### Directory Structure
- `uploads/`: Temporary storage for uploaded videos (auto-cleaned after processing)
- `outputs/`: Final processed videos (persistent, available for download)
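If you script against the dashboard rather than use a browser, the AJAX-style polling can be reproduced with a small loop. This sketch abstracts the HTTP call into a callable; the status-dict shape is an assumption for illustration, so check `app.py` for the actual route and payload:

```python
import time

def poll_until_done(fetch_status, interval=2.0, timeout=3600):
    """Poll a status callable every `interval` seconds until the job finishes.

    `fetch_status` returns a dict such as {"state": "running", "progress": 42};
    in practice it would wrap an HTTP GET against the Flask app's status route.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status.get("state") in ("done", "error"):
            return status
        time.sleep(interval)
    raise TimeoutError("job did not finish in time")

# Example with a stubbed status source instead of a live server:
states = iter([{"state": "running", "progress": 50}, {"state": "done", "progress": 100}])
print(poll_until_done(lambda: next(states), interval=0.0))
```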
## Command-Line Interface

The CLI provides full control over all optimization methods with fine-grained parameter tuning. It's ideal for batch processing, automation, and integration into larger workflows.

### Basic Usage
Run `python3 cli.py --help` to see the full reference. The general pattern is:

```bash
python3 cli.py [GLOBAL OPTIONS] INPUT_VIDEO METHOD [METHOD OPTIONS]
```

### Global Options
- `-o, --output PATH`: Specify the output file path (defaults to the input stem plus a method suffix in the same directory)
- `--encoding-crf VALUE`: Re-encode the final file with the given ffmpeg CRF value (lower = higher quality; defaults to 18 when used)
  - Recommended range: 18-28 (18 = high quality, 28 = smaller file)
- `--encoding-preset NAME`: ffmpeg encoding preset (defaults to `medium` when used)
  - Options: `ultrafast`, `superfast`, `veryfast`, `faster`, `fast`, `medium`, `slow`, `slower`, `veryslow`
  - Faster presets = larger files, slower presets = smaller files

### Methods and Options

#### Optical Flow (`optical-flow`)
**Description**: Keeps frames that show meaningful pixel-wise motion between consecutive frames. Best for action-heavy sequences with smooth motion.

**How it works**: Runs Farnebäck dense optical flow, computes the mean vector magnitude, and writes frames whose magnitude exceeds the threshold.

**Options**:
- `--flow-mag-threshold FLOAT` (default: `0.4`)
  - Lower values = keep more frames (less aggressive)
  - Higher values = prune subtle motion (more aggressive)
  - Recommended range: 0.2-0.6

**Example**:
```bash
python3 cli.py ~/videos/episode01.mp4 optical-flow --flow-mag-threshold 0.35 -o ~/outputs/episode01_of.mp4
```
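The keep/drop rule reduces to a mean-magnitude test over the dense flow field. A minimal NumPy sketch (the `flow` array stands in for `cv2.calcOpticalFlowFarneback` output, which has shape `(H, W, 2)` with x/y displacement per pixel):

```python
import numpy as np

def keep_frame(flow: np.ndarray, threshold: float = 0.4) -> bool:
    """Keep the frame if the mean optical-flow vector magnitude exceeds threshold."""
    magnitude = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    return float(magnitude.mean()) > threshold

# A static scene (all-zero flow) is dropped; a panning scene is kept.
static = np.zeros((4, 4, 2))
pan = np.full((4, 4, 2), 1.0)  # uniform 1 px/frame motion in x and y
print(keep_frame(static), keep_frame(pan))  # → False True
```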
#### Frame Difference (`frame-difference`)
**Description**: Compares brightness changes between frames and drops segments with negligible deltas. Fast and effective for static scenes.

**How it works**: Adapts the supplied base threshold using an initial sampling window, then counts high-difference pixels to decide whether to preserve a frame.

**Options**:
- `--base-threshold FLOAT` (default: `10.0`)
  - Increase to demand larger pixel swings before keeping a frame
  - Recommended range: 5-20

**Example**:
```bash
python3 cli.py ~/videos/episode01.mp4 frame-difference --base-threshold 14 -o ~/outputs/episode01_fd.mp4
```

#### SSIM (`ssim`)
**Description**: Focuses on perceptual similarity: it removes frames that look virtually identical to the previous one. Good balance between quality and speed.

**How it works**: Calculates grayscale Structural Similarity (SSIM) for each frame pair and writes frames whose SSIM falls below the cutoff. Always appends the final frame.

**Options**:
- `--ssim-threshold FLOAT` (default: `0.9587`)
  - Lower thresholds = stricter pruning (more frames removed)
  - Higher thresholds = retain more visually similar frames
  - Recommended range: 0.90-0.99

**Example**:
```bash
python3 cli.py ~/videos/episode01.mp4 ssim --ssim-threshold 0.97 -o ~/outputs/episode01_ssim.mp4 --encoding-crf 20
```
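The SSIM cutoff compares each grayscale frame to its predecessor. A toy single-window SSIM in NumPy illustrates the decision; production implementations such as `skimage.metrics.structural_similarity` use local sliding windows, so this is a simplification:

```python
import numpy as np

def global_ssim(a: np.ndarray, b: np.ndarray, data_range: float = 255.0) -> float:
    """Single-window SSIM over two grayscale frames (a simplification)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)
    )

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (64, 64)).astype(float)
print(global_ssim(frame, frame))           # identical frames score 1.0
# write the frame only when similarity drops below the threshold:
print(global_ssim(frame, frame) < 0.9587)  # → False (frame would be dropped)
```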
3. **Motion gating** using downscaled optical flow with safety keyframe spacing

**Presets**:
- `--profile {gentle|balanced|aggressive}` (default: `balanced`)
  - `gentle`: Keeps more frames, prunes least
  - `balanced`: Recommended mix for most videos
  - `aggressive`: Prunes hardest, produces the smallest output files

**Fine-tune Options** (all optional; override preset values):
- `--hash-threshold INT`: Hamming distance for hash matches (default varies by profile)
- `--ordinal-footrule-threshold FLOAT`: Maximum footrule distance between ordinal signatures
- `--feature-similarity FLOAT`: ORB match ratio required to treat frames as equivalent (0.0-1.0)
- `--flow-static-threshold FLOAT`: Mean flow magnitude treated as static
- `--flow-low-ratio FLOAT`: Fraction of low-motion pixels necessary for static gating (0.0-1.0)
- `--pan-orientation-std FLOAT`: Orientation spread threshold for detecting pans
- `--safety-keep-seconds FLOAT`: Minimum seconds between forced keyframes to avoid over-pruning

**Examples**:
```bash
# Using preset
python3 cli.py ~/videos/season01.mkv unsupervised-dedup --profile aggressive -o ~/outputs/season01_dedup.mp4

# Customizing preset parameters
python3 cli.py ~/videos/season01.mkv unsupervised-dedup --profile balanced --safety-keep-seconds 2.0 --hash-threshold 10 -o ~/outputs/season01_custom.mp4
```
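The `--hash-threshold` flag is a Hamming distance: two frames whose perceptual hashes differ in at most that many bits become duplicate candidates. A minimal sketch of that comparison (the hash values below are illustrative integers, not real Walsh–Hadamard output):

```python
def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two integer perceptual hashes."""
    return bin(h1 ^ h2).count("1")

def is_duplicate_candidate(h1: int, h2: int, hash_threshold: int = 10) -> bool:
    return hamming_distance(h1, h2) <= hash_threshold

a = 0b1111000011110000
b = 0b1111000011110011  # differs in the last two bits
print(hamming_distance(a, b), is_duplicate_candidate(a, b))  # → 2 True
```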
### Output
All methods produce H.264 MP4 files. Use `--encoding-crf` and `--encoding-preset` for tighter control over bitrate/quality trade-offs.

## Method Comparison

### Performance Characteristics
- **Optical Flow**: Most accurate for complex motion but slower. Best for action sequences with smooth camera movement.
- **Frame Difference**: Fastest method, lightweight. Best for static scenes with obvious changes.
- **SSIM**: Good balance between quality and speed. Perceptually aware, suitable for most content.
- **Unsupervised Dedup**: Most sophisticated; slower but most accurate for long-form content. Best for episodic videos with repeated scenes.

### Performance Benchmarks
Sample throughput from benchmarks (4 vCPU, 8 GB RAM environment):
- **Frame Difference**: ≈ 37.1 frames/s (fastest)
- **SSIM**: ≈ 2.1 frames/s (moderate)
- **Optical Flow**: ≈ 1.2 frames/s (slower)
- **Unsupervised Dedup**: ≈ 0.8-1.5 frames/s (slowest, varies by profile)

**Note**: Actual speeds vary significantly with:
- Video resolution (1080p vs 4K)
- Source codec and bitrate
- Hardware (CPU cores, RAM, disk I/O)
- Video length and complexity

### Choosing the Right Method

| Use Case | Recommended Method | Why |
|----------|-------------------|-----|
| Fast processing needed | Frame Difference | Fastest method, good for quick previews |
| Action sequences | Optical Flow | Best motion detection |
| General purpose | SSIM | Good balance of quality and speed |
| Long episodes/series | Unsupervised Dedup | Most accurate for removing repeated content |
| Static scenes | Frame Difference | Efficient for minimal motion |
| High quality needed | SSIM or Optical Flow | Better perceptual quality |
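The benchmark figures translate directly into wall-clock estimates: total frame count divided by throughput. A back-of-the-envelope helper (the 2.1 frames/s rate is the sample SSIM figure from the benchmarks above, not a guarantee):

```python
def estimated_seconds(duration_s: float, video_fps: float, throughput_fps: float) -> float:
    """Rough processing-time estimate: frame count / benchmark throughput."""
    return (duration_s * video_fps) / throughput_fps

# A 24-minute episode at 23.976 fps processed with SSIM (~2.1 frames/s):
eta = estimated_seconds(24 * 60, 23.976, 2.1)
print(f"{eta / 3600:.1f} hours")  # → 4.6 hours
```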
## Workflow Examples

### Web Interface Workflow
1. Start the server: `python3 app.py`
2. Open a browser to `http://localhost:5000`
3. Upload a video file
4. Select an optimization method
5. Adjust parameters (or use the defaults)
6. Click "Process Video"
7. Monitor progress in real-time
8. Download the completed video when ready

### CLI Batch Processing Example
```bash
# Process multiple videos with SSIM
for video in ~/videos/*.mp4; do
    python3 cli.py "$video" ssim --ssim-threshold 0.96 -o ~/outputs/$(basename "$video" .mp4)_optimized.mp4
done

# Process with custom encoding settings
python3 cli.py input.mp4 optical-flow --flow-mag-threshold 0.4 \
    --encoding-crf 20 --encoding-preset slow -o output.mp4
```

## Troubleshooting

### Common Issues

**"ffmpeg not found" error**
- Ensure ffmpeg is installed and available in your PATH
- Test with: `ffmpeg -version`
- On Linux, you may need to install it: `sudo apt-get install ffmpeg`

**"No video file provided" (Web Interface)**
- Ensure you've selected a file before clicking "Process Video"
- Check that the file format is supported (MP4, MOV, AVI, MKV, WEBM)

**Processing is very slow**
- This is normal for high-resolution videos or complex methods
- Try Frame Difference for faster processing
- Consider reducing video resolution before processing
- Check available system resources (CPU, RAM)

**Output file is too large/small**
- Adjust method-specific thresholds (lower = keep more frames)
- Use `--encoding-crf` in the CLI to control file size (higher CRF = smaller file)
- Try different methods to find the right balance

**Job fails or shows an error**
- Check that the input video is not corrupted
- Ensure sufficient disk space in the `uploads/` and `outputs/` directories
- Review the error message in the web interface or CLI output
- Try a different optimization method

## Technical Details

### Architecture
- **Backend**: Flask web server with threading for background job processing
- **Video Processing**: OpenCV for frame extraction and analysis
- **Encoding**: ffmpeg for H.264 encoding and transcoding
- **Progress Tracking**: Real-time updates via an in-memory job queue
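The background-job pattern described above (a worker thread publishing progress into an in-memory registry that a status route reads) can be sketched as follows. The job fields and flow are illustrative, not AFOptimizer's actual internals:

```python
import threading
import uuid

jobs = {}  # job_id -> {"state": ..., "progress": ...}; in-memory, like the dashboard

def run_job(job_id: str, total_frames: int) -> None:
    """Worker: process frames and publish progress into the shared dict."""
    for i in range(total_frames):
        jobs[job_id]["progress"] = int(100 * (i + 1) / total_frames)
    jobs[job_id]["state"] = "done"

def submit(total_frames: int = 100) -> str:
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"state": "running", "progress": 0}
    t = threading.Thread(target=run_job, args=(job_id, total_frames), daemon=True)
    t.start()
    t.join()  # in the real app the request returns immediately instead of joining
    return job_id

job = submit()
print(jobs[job])  # → {'state': 'done', 'progress': 100}
```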
### File Structure
```
AFOptimizer/
├── app.py                 # Flask web application
├── cli.py                 # Command-line interface
├── frame_optimization_methods/
│   ├── opticalFlow.py     # Optical flow method
│   ├── frameDifference.py # Frame difference method
│   ├── ssim.py            # SSIM method
│   ├── unsupervised_dedup.py  # Unsupervised deduplication
│   └── video_encoding.py  # H.264 encoding utilities
├── templates/
│   └── index.html         # Web interface template
├── static/
│   ├── css/style.css      # Stylesheet
│   └── js/main.js         # Frontend JavaScript
├── uploads/               # Temporary upload storage
└── outputs/               # Processed video output
```

## GPU Acceleration

AFOptimizer includes comprehensive GPU acceleration support that automatically detects and utilizes available hardware:

### Supported GPUs
- **NVIDIA GPUs**: CUDA acceleration for frame processing, NVENC for encoding
- **AMD GPUs**: OpenCL/ROCm support, AMF encoder for video encoding
- **Intel integrated GPUs**: QuickSync Video (QSV) encoding, OpenCL support
- **Apple Silicon**: Metal acceleration, VideoToolbox encoding

### How It Works
1. **Automatic Detection**: On startup, AFOptimizer detects available GPU hardware
2. **Smart Fallback**: If the GPU is unavailable or fails, the system automatically falls back to CPU
3. **Hardware Encoding**: Video encoding uses hardware encoders (NVENC, AMF, QSV, VideoToolbox) when available
4. **GPU-Accelerated Operations**: Frame processing operations (color conversion, resizing, blur, optical flow preprocessing) use GPU when available
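The detect-then-fall-back behaviour amounts to probing optional GPU backends in priority order and catching failures. A simplified sketch under that assumption (the probe order and error handling are illustrative; `cupy` and `pyopencl` are the optional dependencies named in the Setup section):

```python
def detect_backend() -> str:
    """Probe optional GPU backends in priority order, falling back to CPU."""
    try:
        import cupy  # NVIDIA CUDA arrays, optional dependency
        cupy.cuda.runtime.getDeviceCount()  # raises if no CUDA device is present
        return "cuda"
    except Exception:
        pass
    try:
        import pyopencl  # AMD/Intel OpenCL, optional dependency
        if pyopencl.get_platforms():
            return "opencl"
    except Exception:
        pass
    return "cpu"  # always available

print(f"Using {detect_backend()} processing")
```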
### Performance Benefits
GPU acceleration can provide significant speedups:
- **Frame processing**: 2-5x faster on supported GPUs
- **Video encoding**: 5-10x faster with hardware encoders
- **Overall workflow**: 3-7x faster end-to-end processing

### Testing GPU Detection
Run the included test script to check your GPU setup:
```bash
python3 test_gpu_detection.py
```

This will show:
- Detected GPU backend (CUDA, OpenCL, Metal, etc.)
- Device name
- Availability status
- Hardware encoder support

### Troubleshooting GPU Issues
- **"Using CPU processing" message**: This is normal if no GPU is available or GPU libraries aren't installed
- **GPU detection fails**: Ensure GPU drivers are installed and OpenCV has GPU support
- **Encoding fails with a hardware encoder**: The system automatically falls back to CPU encoding
- **Performance not improved**: Some operations (like SSIM calculation) still use the CPU; overall speedup depends on workload

## Contributing
Issues and pull requests are welcome! Focus areas include:
- New pruning heuristics and optimization methods
- Improved progress reporting and UI enhancements
- Additional GPU acceleration optimizations
- Performance optimizations
- Documentation improvements

## License
See the [LICENSE](LICENSE) file for details.

## Support
Questions or feedback? Reach out at `hello@karanprasad.com`.