# jellybench_py

jellybench_py is a benchmarking tool that measures how hardware performs when handling simultaneous ffmpeg
transcoding processes. It tests how many parallel ffmpeg transcodes a system can sustain, providing detailed insight into hardware performance.

The benchmark results can be uploaded to the central Jellyfin Hardware Survey Server, allowing users to compare their hardware's performance with other systems. This makes the results easy to visualize and serves as a valuable resource for Jellyfin users looking to optimize their transcoding setups.

## [jellybench_py](https://github.com/BotBlake/jellybench_py) QuickStart Guide

> [!WARNING]
> This is an alpha version of the client. It has not been thoroughly tested, nor implemented for all platforms yet. Use at your own risk.

> [!NOTE]
> This hardware benchmark will use all available system resources.

> [!NOTE]
> The benchmark takes multiple hours to finish. Make sure to run it while the system is not otherwise in use.

> [!WARNING]
> By default, the client uses the official Jellyfin Hardware Survey Server at <https://hwa.jellyfin.org/>. The script will not upload any test results without separate user confirmation. It only downloads the tests and test files matching your operating system and architecture.

### Software Requirements

jellybench_py is built as a Python module via uv, so you need uv installed on your system. To install uv, follow the [official install guide](https://docs.astral.sh/uv/getting-started/installation/).

### Installing jellybench_py

1. Clone the GitHub repository: `git clone https://github.com/BotBlake/jellybench_py`
2. Enter the jellybench_py folder: `cd jellybench_py`
3. Switch to the development branch: `git switch develop`
4. Sync dependencies: `uv sync`
5. Activate the venv: `source .venv/bin/activate` (instead of steps 4+5, you can prefix every command with `uv run`)
6. Install pre-commit hooks: `pre-commit install`
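Before working through the steps above, it can help to confirm that the tools the guide relies on (`git` and `uv`) are actually on your `PATH`. The helper below is an illustrative sketch, not part of jellybench_py itself:

```python
# Sketch: report which prerequisite commands are missing from PATH.
# "git" and "uv" are the tools the install steps above rely on;
# this helper is illustrative and not part of jellybench_py.
import shutil


def missing_tools(tools):
    """Return the subset of `tools` not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]


if __name__ == "__main__":
    missing = missing_tools(["git", "uv"])
    if missing:
        print("Missing prerequisites:", ", ".join(missing))
    else:
        print("All prerequisites found.")
```

If anything is reported missing, install it first; the remaining steps assume both commands work.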
> [!IMPORTANT]
> Since the state of the software changes often, you might have to take some additional steps to ensure it runs correctly. They are explained below in the [Additional Steps](https://github.com/BotBlake/jellybench_py?tab=readme-ov-file#additional-steps) section.

### Running jellybench_py

1. Activate the venv
2. Run the script: `jellybench`

> [!IMPORTANT]
> By default this uses the official Jellyfin Hardware Survey Server <https://hwa.jellyfin.org/>. To run against a custom server, use the `--server {url}` option.

_If you need details on all the CLI arguments, run `jellybench -h`._

### Hardware Control

_To reduce test runtime, you can disable certain hardware, reducing the number of tests you run._

- CPU-based tests can be disabled with the `--nocpu` flag
- GPU-based tests can be disabled with the `--gpu 0` option or by selecting 0 in the interactive GPU selector
- If both CPU and GPU are disabled, the program errors out with "ERROR: All Hardware Disabled"

### Path Specification

Since the script downloads both ffmpeg and the video files, you can specify a path for each. If the files already exist there, they will not be redownloaded.

- Path to the video directory via `--videos {path}`
- Path to the portable ffmpeg directory via `--ffmpeg {path}`

### Additional Steps

During development, jellybench_py may require you to set up specific things manually; these will change over time.

- Make sure you are on the latest version: `git pull`
- Take a look at the "Current Issues" section

## Current Issues

You will find a list of currently known issues below. These will change over time, so please check this section regularly for any
changes.
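The core measurement idea described above (keep adding simultaneous transcodes until the hardware can no longer keep up) can be sketched roughly as follows. This is an illustration with a generic worker command, not the client's actual logic; the real benchmark drives ffmpeg and inspects its transcoding speed:

```python
# Illustrative sketch of a parallel-capacity ramp-up, NOT jellybench_py's
# actual algorithm: run N copies of a worker command at once and raise N
# until a batch no longer finishes within the deadline (a stand-in for
# "transcoding in real time").
import subprocess
import time


def max_parallel(worker_cmd, deadline, limit=8):
    """Return the largest N for which N simultaneous copies of
    `worker_cmd` all finish within `deadline` seconds."""
    best = 0
    for n in range(1, limit + 1):
        start = time.monotonic()
        procs = [subprocess.Popen(worker_cmd) for _ in range(n)]
        for p in procs:
            p.wait()
        if time.monotonic() - start > deadline:
            break  # this batch fell behind; the previous N is the answer
        best = n
    return best
```

In the real client the worker is an ffmpeg transcode and the pass/fail criterion is ffmpeg's reported speed rather than a fixed wall-clock deadline, which is why a full run over many codecs and resolutions takes hours.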