{"id":17033092,"url":"https://github.com/chuloai/code-it","last_synced_at":"2025-10-06T19:44:56.145Z","repository":{"id":162889960,"uuid":"637384751","full_name":"ChuloAI/code-it","owner":"ChuloAI","description":null,"archived":false,"fork":false,"pushed_at":"2023-05-19T19:29:44.000Z","size":158,"stargazers_count":64,"open_issues_count":1,"forks_count":15,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-04-12T12:57:49.128Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ChuloAI.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-05-07T11:55:25.000Z","updated_at":"2024-09-23T06:36:20.000Z","dependencies_parsed_at":"2023-09-03T01:53:04.580Z","dependency_job_id":null,"html_url":"https://github.com/ChuloAI/code-it","commit_stats":null,"previous_names":["chuloai/code-it"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/ChuloAI/code-it","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ChuloAI%2Fcode-it","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ChuloAI%2Fcode-it/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ChuloAI%2Fcode-it/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ChuloAI%2Fcode-it/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ChuloAI","download_url":"https://codeload.github.com/ChuloAI/code-it/tar.gz/refs/heads/main","sbom_url":"h
ttps://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ChuloAI%2Fcode-it/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":278671685,"owners_count":26025741,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-06T02:00:05.630Z","response_time":65,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-10-14T08:32:22.653Z","updated_at":"2025-10-06T19:44:56.116Z","avatar_url":"https://github.com/ChuloAI.png","language":"Python","readme":"# code-it\n\nCode-it is simultaneously:\n\n1. A standalone package to generate and execute code with **local** LLMs.\n2. An importable tool for LangChain.\n\n\nThis is a highly experimental project; the quality of the generations may not be high enough for production usage.\n\nCode-it leverages LLMs to generate code. Unlike other solutions, it doesn't try to rely on the smartness of LLMs; rather, it assumes they are fairly limited and make several mistakes along the way. It applies a simple algorithm to iteratively code towards its task objective, much as a programmer might. 
This algorithm is implemented with control statements and different prompts to steer the LLM towards performing the correct action.\n\nIt is **not** an autonomous agent - at most, we could call it semi-autonomous.\n\n\n## Overview Idea\n![Overview Diagram](/overview_diagram.jpg?raw=true \"Optional Title\")\n\n\n## Installation\n\n1. Set up https://github.com/oobabooga/text-generation-webui with the API enabled\n2. Install it through pip / git in your project. For example, you can add this line to your project's requirements.txt:\n```text\ncode_it @ git+https://github.com/paolorechia/code-it\n```\nNote that there are no tags or a PyPI package yet, as I'm not sure how useful this package will be in the future.\n\n3. Alternatively, install it locally as a standalone program with your current Python shell / virtualenv:\n\n```bash\ngit clone https://github.com/paolorechia/code-it\ncd code-it\npip install -r requirements.txt\n```\n\n## Running it as a standalone program (using the package `__main__.py`)\nWARNING: the LLM will run arbitrary code; use it at your own risk.\nExecute the main module:\n`python3 -m code_it`\n\nThis will save the generated code in `persistent_source.py`.\n\nChange the task in the `task.txt` file to perform another task.\n\n## Using it as a package in your program\nYou can reuse the code from https://github.com/paolorechia/code-it/blob/main/code_it/__main__.py\n\nHere's the bare minimum code to use this library:\n```python\nfrom code_it.code_editor.python_editor import PythonCodeEditor\nfrom code_it.models import build_text_generation_web_ui_client_llm, build_llama_base_llm\nfrom code_it.task_executor import TaskExecutor, TaskExecutionConfig\n\n\ncode_editor = PythonCodeEditor()\nmodel_builder = build_llama_base_llm\nconfig = TaskExecutionConfig()\n\ntask_executor = TaskExecutor(code_editor, model_builder, config)\n\nwith open(\"task.txt\", \"r\") as fp:\n    task = fp.read()\n    task_executor.execute(task)\n```\n\nHere we import the `PythonCodeEditor`, currently the only supported editor, 
along with a LLaMA LLM builder. Notice that this assumes a server running on 0.0.0.0:8000, which comes from my other repo: https://github.com/paolorechia/learn-langchain/blob/main/servers/vicuna_server.py\n\nYou can easily switch to the text-generation-webui tool from oobabooga instead, by importing the builder: `build_text_generation_web_ui_client_llm`. Implementing your own model client should also be straightforward. Look at the source code in: https://github.com/paolorechia/code-it/blob/main/code_it/models.py\n\n### Modifying the behavior\nNotice that in the example above we imported `TaskExecutionConfig`; let's look at this class:\n\n```python\n@dataclass\nclass TaskExecutionConfig:\n    execute_code = True\n    install_dependencies = True\n    apply_linter = True\n    check_package_is_in_pypi = True\n    log_to_stdout = True\n    coding_samples = 3\n    code_sampling_strategy = \"PYLINT\"\n    sampling_temperature_multipler = 0.1\n    dependency_samples = 3\n    max_coding_attempts = 5\n    dependency_install_attempts = 5\n    planner_temperature = 0\n    coder_temperature = 0\n    linter_temperature = 0.3\n    dependency_tracker_temperature = 0.2\n```\n\nYou can change these parameters to alter how the program behaves. 
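For example, a minimal config fragment (using only the fields shown in the dataclass above) might override the defaults before building the executor:\n\n```python\nconfig = TaskExecutionConfig()\nconfig.coding_samples = 5  # sample more code candidates\nconfig.apply_linter = False  # skip the linting step\nconfig.execute_code = False  # generate code without running it\ntask_executor = TaskExecutor(code_editor, model_builder, config)\n```\n\n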
Not all settings are applied at the same time; for instance, if you change `code_sampling_strategy` to `NO_SAMPLING`, then of course the `sampling_temperature_multipler` parameter is not used.\n\nTo understand these settings better, you should read the task execution code directly, as there is no detailed documentation for this yet: https://github.com/paolorechia/code-it/blob/main/code_it/task_executor.py\n\n\n## Using it with Langchain\n\n### Task Execution Tool\nHere's an example from my other repo: https://github.com/paolorechia/learn-langchain/blob/main/langchain_app/executor_tests/chuck_norris_joke.py\n\n\n```python\nfrom langchain.agents import initialize_agent, AgentType\nfrom langchain_app.models.vicuna_request_llm import VicunaLLM\n\nfrom code_it.models import build_llama_base_llm\nfrom code_it.langchain.code_it_tool import CodeItTool\nfrom code_it.task_executor import TaskExecutionConfig\n\nllm = VicunaLLM()\nconfig = TaskExecutionConfig()\nprint(config)\nconfig.install_dependencies = True\nconfig.execute_code = True\ncode_editor = CodeItTool(build_llama_base_llm, config)\n\ntools = [\n    code_editor.build_execute_task(),\n]\n\nagent = initialize_agent(\n    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n)\n\nagent.run(\n    \"\"\"\nRemember to use the following format:\nAction: \u003c\u003e\nAction Input:\n\u003c\u003e\n\nQuestion: Extract a joke from https://api.chucknorris.io/jokes/random - access the key 'value' from the returned JSON.\n\"\"\"\n)\n```\n\n\n### Using the Mixin class\nThe Mixin gives the option to use the pip install command from the `code_it` virtualenv manager, effectively adding package-installation powers to your LLM inside LangChain.\n\n**Note that the Mixin does not work as well as the task execution tool.**\n\nLocal models quite often fail to use the new actions appropriately, so even this example does not yet work as you would expect.\n\nCode from: 
https://github.com/paolorechia/learn-langchain/blob/main/langchain_app/agents/coder_plot_chart_mixin_test.py\n\n```python\nfrom langchain.agents import (\n    AgentExecutor,\n    LLMSingleActionAgent,\n    Tool,\n    AgentOutputParser,\n)\nfrom langchain.prompts import StringPromptTemplate\nfrom langchain import LLMChain\nfrom langchain_app.models.vicuna_request_llm import VicunaLLM\nfrom langchain.schema import AgentAction, AgentFinish\n\nfrom code_it.langchain.python_langchain_tool_mixin import LangchainPythonToolMixin\n\nimport re\nfrom typing import List, Union\n\n\nllm = VicunaLLM()\n\ncode_editor = LangchainPythonToolMixin()\n\ntools = [\n    code_editor.build_add_code_tool(),\n    code_editor.build_run_tool(),\n    code_editor.build_pip_install(),\n]\n\ntemplate = \"\"\"You're a programmer AI.\n\nYou are asked to code a certain task.\nYou have access to a Code Editor that can be used through the following tools:\n\n{tools}\n\n\nYou should ALWAYS think about what to do next.\n\nUse the following format:\n\nTask: the input task you must implement\nCurrent Source Code: the current code state that you are editing\nThought: you should always think about what to code next\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of your last action\n... (this Thought/Action/Action Input/Observation can repeat N times)\n\nThought: I have finished the task\nTask Completed: the task has been implemented\n\nExample task:\nTask: the input task you must implement\n\nThought: To start, we need to add the line of code to print 'hello world'\nAction: CodeEditorAddCode\nAction Input: \nprint(\"hello world\")\nObservation:None\n\nThought: I have added the line of code to print 'hello world'. 
I should execute the code to test the output\nAction: CodeEditorRunCode\nAction Input: \n\nObservation:Program Succeeded\nStdout:b'hello world\\n'\nStderr:b''\n\nThought: The output is correct, it should be 'hello world'\nAction: None\nAction Input:\nOutput is correct\n\nObservation:None is not a valid tool, try another one.\n\nThought: I have concluded that the output is correct\nTask Completed: the task is completed.\n\n\nREMEMBER: don't install the same package more than once\n\nNow we begin with a real task!\n\nTask: {input}\nSource Code: {source_code}\n\n{agent_scratchpad}\n\nThought:\"\"\"\n\n\n# Set up a prompt template\nclass CodeEditorPromptTemplate(StringPromptTemplate):\n    # The template to use\n    template: str\n    code_editor: LangchainPythonToolMixin\n    tools: List[Tool]\n\n    def format(self, **kwargs) -\u003e str:\n        # Get the intermediate steps (AgentAction, Observation tuples)\n        # Format them in a particular way\n        intermediate_steps = kwargs.pop(\"intermediate_steps\")\n        thoughts = \"\"\n        for action, observation in intermediate_steps:\n            thoughts += action.log\n            thoughts += f\"\\nObservation: {observation}\\nThought: \"\n        # Set the agent_scratchpad variable to that value\n        kwargs[\"agent_scratchpad\"] = thoughts\n        kwargs[\"source_code\"] = self.code_editor.display_code()\n        kwargs[\"tools\"] = \"\\n\".join(\n            [f\"{tool.name}: {tool.description}\" for tool in self.tools]\n        )\n        kwargs[\"tool_names\"] = \", \".join([tool.name for tool in self.tools])\n        return self.template.format(**kwargs)\n\n\nprompt = CodeEditorPromptTemplate(\n    template=template,\n    code_editor=code_editor,\n    tools=tools,\n    input_variables=[\"input\", \"intermediate_steps\"],\n)\n\n\nclass CodeEditorOutputParser(AgentOutputParser):\n    def parse(self, llm_output: str) -\u003e Union[AgentAction, AgentFinish]:\n        print(\"llm output: \", llm_output, 
\"end of llm ouput\")\n        # Check if agent should finish\n        if \"Task Completed:\" in llm_output:\n            return AgentFinish(\n                # Return values is generally always a dictionary with a single `output` key\n                # It is not recommended to try anything else at the moment :)\n                return_values={\"output\": llm_output},\n                log=llm_output,\n            )\n        # Parse out the action and action input\n        regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n        match = re.search(regex, llm_output, re.DOTALL)\n        if not match:\n            raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n        action = match.group(1).strip()\n        action_input = match.group(2)\n        # Return the action and action input\n        return AgentAction(\n            tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output\n        )\n\n\noutput_parser = CodeEditorOutputParser()\n\nllm_chain = LLMChain(llm=llm, prompt=prompt)\nllm = VicunaLLM()\n\ntool_names = [tool.name for tool in tools]\nagent = LLMSingleActionAgent(\n    llm_chain=llm_chain,\n    output_parser=output_parser,\n    stop=[\"\\nObservation:\"],\n    allowed_tools=tool_names,\n)\n\nagent_executor = AgentExecutor.from_agent_and_tools(\n    agent=agent, tools=tools, verbose=True\n)\n\nagent_executor.run(\n    \"\"\"\nYour job is to plot an example chart using matplotlib. Create your own random data.\nRun this code only when you're finished.\nDO NOT add code and run into a single step.\n\"\"\"\n)\n```\n\n\n\n\n\n\n\n\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchuloai%2Fcode-it","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fchuloai%2Fcode-it","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchuloai%2Fcode-it/lists"}