# LLM-Benchmark

A concurrent performance testing tool for LLM services, with support for automated stress testing and performance report generation.

Repository: https://github.com/lework/llm-benchmark (Python, MIT License)

## Features

- Multi-stage concurrency testing (ramping gradually from low to high concurrency)
- Automated test data collection and analysis
- Detailed performance metric statistics and visual reports
- Short-text and long-text test scenarios
- Flexible configuration options
- JSON output for further analysis or visualization

## Project Structure

```
llm-benchmark/
├── run_benchmarks.py     # Automation script that runs multiple rounds of stress tests
├── llm_benchmark.py      # Core concurrency test implementation
├── README.md             # Project documentation
└── assets/               # Asset folder
```

## Components

- **run_benchmarks.py**:

  - Runs multiple rounds of automated stress tests
  - Automatically adjusts the concurrency configuration (1-300 concurrent requests)
  - Collects and aggregates test data
  - Generates a polished performance report

- **llm_benchmark.py**:
  - Implements the core concurrency testing logic
  - Manages concurrent requests and the connection pool
  - Collects detailed performance metrics
  - Supports streaming-response testing

## Usage

Run the full performance test suite:

```bash
python run_benchmarks.py \
    --llm_url "http://your-llm-server" \
    --api_key "your-api-key" \
    --model "your-model-name" \
    --use_long_context
```

Run a single concurrency test:

```bash
python llm_benchmark.py \
    --llm_url "http://your-llm-server" \
    --api_key "your-api-key" \
    --model "your-model-name" \
    --num_requests 100 \
    --concurrency 10
```

### Command-Line Arguments

#### run_benchmarks.py

| Argument           | Description                 | Default     |
| ------------------ | --------------------------- | ----------- |
| --llm_url          | LLM server URL              | required    |
| --api_key          | API key                     | optional    |
| --model            | Model name                  | deepseek-r1 |
| --use_long_context | Use the long-text test mode | False       |

#### llm_benchmark.py

| Argument          | Description               | Default     |
| ----------------- | ------------------------- | ----------- |
| --llm_url         | LLM server URL            | required    |
| --api_key         | API key                   | optional    |
| --model           | Model name                | deepseek-r1 |
| --num_requests    | Total number of requests  | required    |
| --concurrency     | Concurrency level         | required    |
| --output_tokens   | Output token limit        | 50          |
| --request_timeout | Request timeout (seconds) | 60          |
| --output_format   | Output format (json/line) | line        |

## Sample Test Report

![Sample performance test report](./assets/image-20250220155605371.png)

## License

This project is released under the [MIT License](LICENSE).
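The README does not show the tool's internal implementation, but the core idea it describes (firing `--num_requests` requests while capping in-flight requests at `--concurrency`) can be sketched with an `asyncio.Semaphore`. The function and variable names below are hypothetical, and a sleeping coroutine stands in for the real HTTP call to the LLM server:

```python
import asyncio
import time

async def bounded_request(sem, worker, payload):
    # The semaphore caps how many requests are in flight at once.
    async with sem:
        return await worker(payload)

async def run_stage(worker, payloads, concurrency):
    # One "stage" of the sweep: submit every request, but allow at
    # most `concurrency` of them to run concurrently.
    sem = asyncio.Semaphore(concurrency)
    start = time.perf_counter()
    results = await asyncio.gather(
        *(bounded_request(sem, worker, p) for p in payloads)
    )
    elapsed = time.perf_counter() - start
    return results, elapsed

# Demo worker: sleeps instead of calling an LLM, and tracks the
# peak number of simultaneously active calls so we can verify the cap.
peak = 0
active = 0

async def fake_llm_call(payload):
    global peak, active
    active += 1
    peak = max(peak, active)
    await asyncio.sleep(0.01)  # stand-in for the HTTP round trip
    active -= 1
    return len(payload)

results, elapsed = asyncio.run(run_stage(fake_llm_call, ["hi"] * 20, concurrency=5))
```

A multi-stage run like `run_benchmarks.py` would simply call `run_stage` repeatedly with increasing `concurrency` values (the README mentions a 1-300 range) and collect the per-stage results.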
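The report-generation step aggregates per-request measurements into summary statistics. A minimal sketch of that kind of aggregation is below; the metric names and dictionary schema are illustrative assumptions, not the tool's actual output format:

```python
import statistics

def summarize(latencies_s, tokens_out, wall_time_s):
    """Aggregate per-request samples into common benchmark metrics:
    average and p95 latency, request throughput, and token throughput."""
    lat_sorted = sorted(latencies_s)
    # Nearest-rank p95 (one simple convention among several).
    p95_idx = max(0, int(round(0.95 * len(lat_sorted))) - 1)
    return {
        "requests": len(latencies_s),
        "avg_latency_s": statistics.mean(latencies_s),
        "p95_latency_s": lat_sorted[p95_idx],
        "throughput_rps": len(latencies_s) / wall_time_s,
        "tokens_per_s": sum(tokens_out) / wall_time_s,
    }

# Four requests, 50 output tokens each, finished in 2 seconds of wall time.
stats = summarize([0.5, 1.0, 1.5, 2.0], [50, 50, 50, 50], wall_time_s=2.0)
```

With `--output_format json`, a dictionary like this could be dumped directly with `json.dumps` for downstream analysis, which matches the README's point about JSON output enabling further visualization.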