{"id":43902650,"url":"https://github.com/judge0/judge0-python","last_synced_at":"2026-02-06T19:11:35.079Z","repository":{"id":260535578,"uuid":"859514391","full_name":"judge0/judge0-python","owner":"judge0","description":"The official Python SDK for Judge0.","archived":false,"fork":false,"pushed_at":"2026-01-21T19:09:21.000Z","size":21618,"stargazers_count":30,"open_issues_count":2,"forks_count":4,"subscribers_count":2,"default_branch":"master","last_synced_at":"2026-01-22T06:12:17.823Z","etag":null,"topics":["code-execution","code-execution-engine","code-interpreter","devtools","judge0","online-compiler","python","python-sdk"],"latest_commit_sha":null,"homepage":"https://python.docs.judge0.com","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/judge0.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-09-18T19:43:10.000Z","updated_at":"2026-01-21T19:08:06.000Z","dependencies_parsed_at":"2026-01-21T20:06:07.907Z","dependency_job_id":null,"html_url":"https://github.com/judge0/judge0-python","commit_stats":null,"previous_names":["judge0/judge0-py","judge0/judge0-python"],"tags_count":10,"template":false,"template_full_name":null,"purl":"pkg:github/judge0/judge0-python","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/judge0%2Fjudge0-python","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/judge0%2Fjudge0-python/tags","releases_url":"https:
//repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/judge0%2Fjudge0-python/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/judge0%2Fjudge0-python/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/judge0","download_url":"https://codeload.github.com/judge0/judge0-python/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/judge0%2Fjudge0-python/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29173019,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-06T16:33:35.550Z","status":"ssl_error","status_checked_at":"2026-02-06T16:33:30.716Z","response_time":59,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["code-execution","code-execution-engine","code-interpreter","devtools","judge0","online-compiler","python","python-sdk"],"created_at":"2026-02-06T19:11:34.386Z","updated_at":"2026-02-06T19:11:35.071Z","avatar_url":"https://github.com/judge0.png","language":"Python","readme":"# Judge0 Python SDK\n\n[![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/Judge0HQ)](https://x.com/Judge0HQ)\n[![X (formerly Twitter) 
Follow](https://img.shields.io/twitter/follow/hermanzvonimir)](https://x.com/hermanzvonimir)\n\n[![License](https://img.shields.io/github/license/judge0/judge0-python)](LICENSE)\n[![Release](https://img.shields.io/github/v/release/judge0/judge0-python)](https://github.com/judge0/judge0-python/releases)\n[![Stars](https://img.shields.io/github/stars/judge0/judge0-python)](https://github.com/judge0/judge0-python/stargazers)\n![PyPI - Downloads](https://img.shields.io/pypi/dw/judge0)\n\nThe official Python SDK for Judge0.\n```python\n\u003e\u003e\u003e import judge0\n\u003e\u003e\u003e result = judge0.run(source_code=\"print('hello, world')\")\n\u003e\u003e\u003e result.stdout\n'hello, world\\n'\n\u003e\u003e\u003e result.time\n0.987\n\u003e\u003e\u003e result.memory\n52440\n\u003e\u003e\u003e for f in result:\n...     f.name\n...     f.content\n...\n'script.py'\nb\"print('hello, world')\"\n```\n\n## Installation\n\n```bash\npip install judge0\n```\n\n### Requirements\n\n- Python 3.10+\n\n## Quick Start\n\n### Getting The API Key\n\nGet your API key from [Rapid](https://rapidapi.com/organization/judge0) or [ATD](https://www.allthingsdev.co/publisher/profile/Herman%20Zvonimir%20Do%C5%A1ilovi%C4%87).\n\n#### Notes\n\n* Judge0 comes in two flavors, Judge0 CE and Judge0 Extra CE; the only difference between them is the set of languages they support. 
Whether you choose Rapid or ATD, you will need to subscribe to each flavor separately if you want to use both.\n\n### Using Your API Key\n\n#### Option 1: Explicit Client Object\n\nExplicitly create a client object with your API key and pass it to Judge0 Python SDK functions.\n\n```python\nimport judge0\nclient = judge0.RapidJudge0CE(api_key=\"xxx\")\nresult = judge0.run(client=client, source_code=\"print('hello, world')\")\nprint(result.stdout)\n```\n\nThe available client classes are:\n- `judge0.RapidJudge0CE`\n- `judge0.ATDJudge0CE`\n- `judge0.RapidJudge0ExtraCE`\n- `judge0.ATDJudge0ExtraCE`\n\n#### Option 2: Implicit Client Object\n\nPut your API key in one of the following environment variables, corresponding to the provider that issued your API key: `JUDGE0_RAPID_API_KEY` or `JUDGE0_ATD_API_KEY`.\n\nIf you do not explicitly pass a client object, the Judge0 Python SDK will automatically detect the environment variable and use it to create a client object for all API calls.\n\n```python\nimport judge0\nresult = judge0.run(source_code=\"print('hello, world')\")\nprint(result.stdout)\n```\n\n## Examples\n\n### hello, world\n\n```python\nimport judge0\nresult = judge0.run(source_code=\"print('hello, world')\", language=judge0.PYTHON)\nprint(result.stdout)\n```\n\n### Running C Programming Language\n\n```python\nimport judge0\n\nsource_code = \"\"\"\n#include \u003cstdio.h\u003e\n\nint main() {\n    printf(\"hello, world\\\\n\");\n    return 0;\n}\n\"\"\"\n\nresult = judge0.run(source_code=source_code, language=judge0.C)\nprint(result.stdout)\n```\n\n### Running Java Programming Language\n\n```python\nimport judge0\n\nsource_code = \"\"\"\npublic class Main {\n    public static void main(String[] args) {\n        System.out.println(\"hello, world\");\n    }\n}\n\"\"\"\n\nresult = judge0.run(source_code=source_code, language=judge0.JAVA)\nprint(result.stdout)\n```\n\n### Reading From Standard Input\n\n```python\nimport judge0\n\nsource_code = \"\"\"\n#include 
\u003cstdio.h\u003e\n\nint main() {\n    int a, b;\n    scanf(\"%d %d\", \u0026a, \u0026b);\n    printf(\"%d\\\\n\", a + b);\n\n    char name[10];\n    scanf(\"%s\", name);\n    printf(\"Hello, %s!\\\\n\", name);\n\n    return 0;\n}\n\"\"\"\n\nstdin = \"\"\"\n3 5\nBob\n\"\"\"\n\nresult = judge0.run(source_code=source_code, stdin=stdin, language=judge0.C)\nprint(result.stdout)\n```\n\n### Test Cases\n\n```python\nimport judge0\n\nresults = judge0.run(\n    source_code=\"print(f'Hello, {input()}!')\",\n    test_cases=[\n        (\"Bob\", \"Hello, Bob!\"), # Test Case #1. Tuple with first value as standard input, second value as expected output.\n        { # Test Case #2. Dictionary with \"input\" and \"expected_output\" keys.\n            \"input\": \"Alice\",\n            \"expected_output\": \"Hello, Alice!\"\n        },\n        [\"Charlie\", \"Hello, Charlie!\"], # Test Case #3. List with first value as standard input and second value as expected output.\n    ],\n)\n\nfor i, result in enumerate(results):\n    print(f\"--- Test Case #{i + 1} ---\")\n    print(result.stdout)\n    print(result.status)\n```\n\n### Test Cases And Multiple Languages\n\n```python\nimport judge0\n\nsubmissions = [\n    judge0.Submission(\n        source_code=\"print(f'Hello, {input()}!')\",\n        language=judge0.PYTHON,\n    ),\n    judge0.Submission(\n        source_code=\"\"\"\n#include \u003cstdio.h\u003e\n\nint main() {\n    char name[10];\n    scanf(\"%s\", name);\n    printf(\"Hello, %s!\\\\n\", name);\n    return 0;\n}\n\"\"\",\n        language=judge0.C,\n    ),\n]\n\ntest_cases=[\n    (\"Bob\", \"Hello, Bob!\"),\n    (\"Alice\", \"Hello, Alice!\"),\n    (\"Charlie\", \"Hello, Charlie!\"),\n]\n\nresults = judge0.run(submissions=submissions, test_cases=test_cases)\n\nfor i in range(len(submissions)):\n    print(f\"--- Submission #{i + 1} ---\")\n\n    for j in range(len(test_cases)):\n        result = results[i * len(test_cases) + j]\n\n        print(f\"--- Test Case #{j + 1} 
---\n        print(result.stdout)\n        print(result.status)\n```\n\n### Asynchronous Execution\n\n```python\nimport judge0\n\nsubmission = judge0.async_run(source_code=\"print('hello, world')\")\nprint(submission.stdout) # Prints None because the submission has not finished yet.\n\njudge0.wait(submissions=submission) # Wait for the submission to finish.\n\nprint(submission.stdout) # Prints 'hello, world'\n```\n\n### Get Languages\n\n```python\nimport judge0\nclient = judge0.get_client()\nprint(client.get_languages())\n```\n\n### Running LLM-Generated Code\n\n#### Simple Example With Ollama\n\n```python\n# pip install judge0 ollama\nimport os\n\nfrom ollama import Client\nimport judge0\n\n# Get your free tier Ollama Cloud API key at https://ollama.com.\nclient = Client(\n    host=\"https://ollama.com\",\n    headers={\"Authorization\": \"Bearer \" + os.environ.get(\"OLLAMA_API_KEY\")},\n)\n\nsystem = \"\"\"\nYou are a helpful assistant that can execute code written in the C programming language.\nOnly respond with the code written in the C programming language that needs to be executed and nothing else.\nStrip the backticks in code blocks.\n\"\"\"\nprompt = \"How many r's are in the word 'strawberry'?\"\n\nresponse = client.chat(\n    model=\"gpt-oss:120b-cloud\",\n    messages=[\n        {\"role\": \"system\", \"content\": system},\n        {\"role\": \"user\", \"content\": prompt},\n    ],\n)\n\ncode = response[\"message\"][\"content\"]\nprint(f\"CODE GENERATED BY THE MODEL:\\n{code}\\n\")\n\nresult = judge0.run(source_code=code, language=judge0.C)\nprint(f\"CODE EXECUTION RESULT:\\n{result.stdout}\")\n```\n\n#### Tool Calling (a.k.a. 
Function Calling) With Ollama\n\n```python\n# pip install judge0 ollama\nimport os\n\nfrom ollama import Client\nimport judge0\n\n# Get your free tier Ollama Cloud API key at https://ollama.com.\nclient = Client(\n    host=\"https://ollama.com\",\n    headers={\"Authorization\": \"Bearer \" + os.environ.get(\"OLLAMA_API_KEY\")},\n)\n\nmodel=\"qwen3-coder:480b-cloud\"\n\nmessages=[\n    {\"role\": \"user\", \"content\": \"How many r's are in the word 'strawberry'?\"},\n]\n\ntools = [{\n    \"type\": \"function\",\n    \"function\": {\n        \"name\": \"execute_c\",\n        \"description\": \"Execute the C programming language code.\",\n        \"parameters\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"code\": {\n                    \"type\": \"string\",\n                    \"description\": \"The code written in the C programming language.\"\n                }\n            },\n            \"required\": [\"code\"]\n        }\n    }\n}]\n\nresponse = client.chat(model=model, messages=messages, tools=tools)\n\nresponse_message = response[\"message\"]\nmessages.append(response_message)\n\nif response_message.tool_calls:\n    for tool_call in response_message.tool_calls:\n        if tool_call.function.name == \"execute_c\":\n            code = tool_call.function.arguments[\"code\"]\n            print(f\"CODE GENERATED BY THE MODEL:\\n{code}\\n\")\n\n            result = judge0.run(source_code=code, language=judge0.C)\n            print(f\"CODE EXECUTION RESULT:\\n{result.stdout}\\n\")\n\n            messages.append({\n                \"role\": \"tool\",\n                \"tool_name\": \"execute_c\",\n                \"content\": result.stdout,\n            })\n\nfinal_response = client.chat(model=model, messages=messages)\nprint(f'FINAL RESPONSE BY THE MODEL:\\n{final_response[\"message\"][\"content\"]}')\n```\n\n#### Multi-Agent System For Iterative Code Generation, Execution, And Debugging\n\n```python\n# pip install 
judge0 ag2[openai]\nimport os\nfrom typing import Annotated, Optional\n\nfrom autogen import ConversableAgent, LLMConfig, register_function\nfrom autogen.tools import Tool\nfrom pydantic import BaseModel, Field\nimport judge0\n\n\nclass PythonCodeExecutionTool(Tool):\n    def __init__(self) -\u003e None:\n        class CodeExecutionRequest(BaseModel):\n            code: Annotated[str, Field(description=\"Python code to execute\")]\n\n        async def execute_python_code(\n            code_execution_request: CodeExecutionRequest,\n        ) -\u003e Optional[str]:\n            result = judge0.run(\n                source_code=code_execution_request.code,\n                language=judge0.PYTHON,\n                redirect_stderr_to_stdout=True,\n            )\n            return result.stdout\n\n        super().__init__(\n            name=\"python_execute_code\",\n            description=\"Executes Python code and returns the result.\",\n            func_or_tool=execute_python_code,\n        )\n\n\npython_executor = PythonCodeExecutionTool()\n\n# Get your free tier Ollama Cloud API key at https://ollama.com.\nllm_config = LLMConfig(\n    {\n        \"api_type\": \"openai\",\n        \"base_url\": \"https://ollama.com/v1\",\n        \"api_key\": os.environ.get(\"OLLAMA_API_KEY\"),\n        \"model\": \"qwen3-coder:480b-cloud\",\n    }\n)\n\ncode_runner = ConversableAgent(\n    name=\"code_runner\",\n    system_message=\"You are a code executor agent. When you don't execute code, write the message 'TERMINATE' by itself.\",\n    human_input_mode=\"NEVER\",\n    llm_config=llm_config,\n)\n\nquestion_agent = ConversableAgent(\n    name=\"question_agent\",\n    system_message=(\n        \"You are a developer AI agent. \"\n        \"Send all your code suggestions to the python_executor tool, where they will be executed and the results returned to you. 
\"\n        \"Keep refining the code until it works.\"\n    ),\n    llm_config=llm_config,\n)\n\nregister_function(\n    python_executor,\n    caller=question_agent,\n    executor=code_runner,\n    description=\"Run Python code\",\n)\n\nresult = code_runner.initiate_chat(\n    recipient=question_agent,\n    message=(\n        \"Write Python code to print the current Python version followed by the numbers 1 to 11. \"\n        \"Make a syntax error in the first version and fix it in the second version.\"\n    ),\n    max_turns=5,\n)\n\nprint(f\"Result: {result.summary}\")\n```\n\n#### Kaggle Dataset Visualization With LLM-Generated Code Using Ollama And Judge0\n\n```python\n# pip install judge0 ollama requests\nimport os\nimport zipfile\n\nimport judge0\nimport requests\nfrom judge0 import File, Filesystem\nfrom ollama import Client\n\n# Step 1: Download the dataset from Kaggle.\ndataset_url = \"https://www.kaggle.com/api/v1/datasets/download/gregorut/videogamesales\"\ndataset_zip_path = \"vgsales.zip\"\ndataset_csv_path = \"vgsales.csv\"  # P.S.: We know the CSV file name inside the zip.\n\nif not os.path.exists(dataset_csv_path):  # Download only if not already downloaded.\n    with requests.get(dataset_url) as response:\n        with open(dataset_zip_path, \"wb\") as f:\n            f.write(response.content)\n    # Extract only after the zip file has been fully written and closed.\n    with zipfile.ZipFile(dataset_zip_path, \"r\") as zf:\n        zf.extractall(\".\")\n\n# Step 2: Prepare the submission for Judge0.\nwith open(dataset_csv_path, \"r\") as f:\n    submission = judge0.Submission(\n        language=judge0.PYTHON_FOR_ML,\n        additional_files=Filesystem(\n            content=[\n                File(name=dataset_csv_path, content=f.read()),\n            ]\n        ),\n    )\n\n# Step 3: Initialize Ollama Client. 
Get your free tier Ollama Cloud API key at https://ollama.com.\nclient = Client(\n    host=\"https://ollama.com\",\n    headers={\"Authorization\": \"Bearer \" + os.environ.get(\"OLLAMA_API_KEY\")},\n)\n\n# Step 4: Prepare the prompt, messages, tools, and choose the model.\nprompt = f\"\"\"\nI have a CSV that contains a list of video games with sales greater than 100,000 copies. It's saved in the file {dataset_csv_path}.\nThese are the columns:\n- 'Rank': Ranking of overall sales\n- 'Name': The game's name\n- 'Platform': Platform of the game's release (e.g., PC, PS4)\n- 'Year': Year of the game's release\n- 'Genre': Genre of the game\n- 'Publisher': Publisher of the game\n- 'NA_Sales': Sales in North America (in millions)\n- 'EU_Sales': Sales in Europe (in millions)\n- 'JP_Sales': Sales in Japan (in millions)\n- 'Other_Sales': Sales in the rest of the world (in millions)\n- 'Global_Sales': Total worldwide sales.\n\nI want to better understand how the sales are distributed across different genres over the years.\nWrite Python code that analyzes the dataset based on my request, produces the right chart, and saves it as an image file.\n\"\"\"\nmessages = [{\"role\": \"user\", \"content\": prompt}]\ntools = [\n    {\n        \"type\": \"function\",\n        \"function\": {\n            \"name\": \"execute_python\",\n            \"description\": \"Execute the Python programming language code.\",\n            \"parameters\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"code\": {\n                        \"type\": \"string\",\n                        \"description\": \"The code written in the Python programming language.\",\n                    }\n                },\n                \"required\": [\"code\"],\n            },\n        },\n    }\n]\nmodel = \"qwen3-coder:480b-cloud\"\n\n# Step 5: Start the interaction with the model.\nresponse = client.chat(model=model, messages=messages, tools=tools)\nresponse_message = 
response[\"message\"]\n\nif response_message.tool_calls:\n    for tool_call in response_message.tool_calls:\n        if tool_call.function.name == \"execute_python\":\n            code = tool_call.function.arguments[\"code\"]\n            print(f\"CODE GENERATED BY THE MODEL:\\n{code}\\n\")\n\n            submission.source_code = code\n            result = judge0.run(submissions=submission)\n\n            for f in result.post_execution_filesystem:\n                if f.name.endswith((\".png\", \".jpg\", \".jpeg\")):\n                    with open(f.name, \"wb\") as img_file:\n                        img_file.write(f.content)\n                    print(f\"Generated image saved as: {f.name}\\n\")\n```\n\n#### Minimal Example Using `smolagents` With Ollama And Judge0\n\n```python\n# pip install judge0 smolagents[openai]\nimport os\nfrom typing import Any\n\nimport judge0\nfrom smolagents import CodeAgent, OpenAIServerModel, Tool\nfrom smolagents.local_python_executor import CodeOutput, PythonExecutor\n\n\nclass Judge0PythonExecutor(PythonExecutor):\n    def send_tools(self, tools: dict[str, Tool]) -\u003e None:\n        pass\n\n    def send_variables(self, variables: dict[str, Any]) -\u003e None:\n        pass\n\n    def __call__(self, code_action: str) -\u003e CodeOutput:\n        source_code = f\"final_answer = lambda x : print(x)\\n{code_action}\"\n        result = judge0.run(source_code=source_code, language=judge0.PYTHON_FOR_ML)\n        return CodeOutput(\n            output=result.stdout,\n            logs=result.stderr or \"\",\n            is_final_answer=result.exit_code == 0,\n        )\n\n\n# Get your free tier Ollama Cloud API key at https://ollama.com.\nmodel = OpenAIServerModel(\n    model_id=\"gpt-oss:120b-cloud\",\n    api_base=\"https://ollama.com/v1\",\n    api_key=os.environ[\"OLLAMA_API_KEY\"],\n)\n\nagent = CodeAgent(tools=[], model=model)\nagent.python_executor = Judge0PythonExecutor()\n\nresult = agent.run(\"How many r's are in the word 
'strawberry'?\")\nprint(result)\n```\n\n### Filesystem\n\nThis example shows how to use the Judge0 Python SDK to:\n1. Create a submission with additional files that will be available in the filesystem during the execution.\n2. Read files that were created during the execution.\n\n```python\n# pip install judge0\nimport judge0\nfrom judge0 import Filesystem, File, Submission\n\nfs = Filesystem(\n    content=[\n        File(name=\"./my_dir1/my_file1.txt\", content=\"hello from my_file1.txt\"),\n    ]\n)\n\nsource_code = \"\"\"\ncat ./my_dir1/my_file1.txt\n\nmkdir my_dir2\necho \"hello, world\" \u003e ./my_dir2/my_file2.txt\n\"\"\"\n\nsubmission = Submission(\n    source_code=source_code,\n    language=judge0.BASH,\n    additional_files=fs,\n)\n\nresult = judge0.run(submissions=submission)\n\nprint(result.stdout)\nprint(result.post_execution_filesystem.find(\"./my_dir2/my_file2.txt\"))\n```\n\n### Custom Judge0 Client\n\nThis example shows how to use the Judge0 Python SDK with your own Judge0 instance.\n\n```python\n# pip install judge0\nimport judge0\n\nclient = judge0.Client(\"http://127.0.0.1:2358\")\n\nsource_code = \"\"\"\n#include \u003cstdio.h\u003e\n\nint main() {\n    printf(\"hello, world\\\\n\");\n    return 0;\n}\n\"\"\"\n\nresult = judge0.run(client=client, source_code=source_code, language=judge0.C)\nprint(result.stdout)\n```\n\n### Generating And Saving An Image File\n\n```python\n# pip install judge0\nimport judge0\n\nsource_code = \"\"\"\nimport matplotlib.pyplot as plt\n\nplt.plot([x for x in range(10)], [x**2 for x in range(10)])\nplt.savefig(\"chart.png\")\n\"\"\"\n\nresult = judge0.run(source_code=source_code, language=judge0.PYTHON_FOR_ML)\n\nimage = result.post_execution_filesystem.find(\"chart.png\")\nwith open(image.name, \"wb\") as f:\n    f.write(image.content)\nprint(f\"Generated image saved as: {image.name}\\n\")\n```\n\n## Contributors\n\nThanks to all 
[contributors](https://github.com/judge0/judge0-python/graphs/contributors) for contributing to this project.\n\n[![](https://contributors-img.web.app/image?repo=judge0/judge0-python)](https://github.com/judge0/judge0-python/graphs/contributors)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjudge0%2Fjudge0-python","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjudge0%2Fjudge0-python","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjudge0%2Fjudge0-python/lists"}