https://github.com/vndee/llm-sandbox
Lightweight and portable LLM sandbox runtime (code interpreter) Python library.
- Host: GitHub
- URL: https://github.com/vndee/llm-sandbox
- Owner: vndee
- License: mit
- Created: 2024-06-27T03:58:23.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-05-09T06:48:38.000Z (6 months ago)
- Last Synced: 2025-05-09T07:42:58.646Z (6 months ago)
- Topics: code-generation, code-interpreter, large-language-models, llm-sandbox
- Language: Python
- Homepage: https://blog.duy.dev/the-easiest-way-to-add-code-interpreter-into-your-llm-apps/
- Size: 565 KB
- Stars: 264
- Watchers: 3
- Forks: 28
- Open Issues: 3
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
## LLM Sandbox
*Securely Execute LLM-Generated Code with Ease*
LLM Sandbox is a lightweight and portable sandbox environment designed to run large language model (LLM) generated code in a safe and isolated manner using Docker containers. It provides an easy-to-use interface for setting up, managing, and executing code in a controlled Docker environment, simplifying the process of running code generated by LLMs.

### Features
- **Easy Setup:** Quickly create sandbox environments with minimal configuration.
- **Isolation:** Run your code in isolated Docker containers to prevent interference with your host system.
- **Flexibility:** Support for multiple programming languages.
- **Portability:** Use predefined Docker images or custom Dockerfiles.
- **Scalability:** Supports Kubernetes and remote Docker hosts.

### Installation
#### Using Poetry
1. Ensure you have [Poetry](https://python-poetry.org/docs/#installation) installed.
2. Add the package to your project:

```sh
poetry add llm-sandbox
# or with an optional backend extra:
# poetry add llm-sandbox[kubernetes]
# poetry add llm-sandbox[podman]
# poetry add llm-sandbox[docker]
```

#### Using pip
1. Ensure you have [pip](https://pip.pypa.io/en/stable/installation/) installed.
2. Install the package:

```sh
pip install llm-sandbox
# or with an optional backend extra:
# pip install llm-sandbox[kubernetes]
# pip install llm-sandbox[podman]
# pip install llm-sandbox[docker]
```

See [CHANGELOG.md](CHANGELOG.md) for more details.
### Usage
#### Session Lifecycle
The `SandboxSession` class manages the lifecycle of the sandbox environment, including the creation and destruction of Docker containers. Here's a typical lifecycle:
1. **Initialization:** Create a `SandboxSession` object with the desired configuration.
2. **Open Session:** Call the `open()` method to build/pull the Docker image and start the Docker container.
3. **Run Code:** Use the `run()` method to execute code inside the sandbox. Currently, it supports Python, Java, JavaScript, C++, Go, and Ruby. See [examples](examples) for more details.
4. **Close Session:** Call the `close()` method to stop and remove the Docker container. If the `keep_template` flag is set to `True`, the Docker image will not be removed, and the last container state will be committed to the image.

#### Basic Example
```python
from llm_sandbox import SandboxSession

# Create a new sandbox session
with SandboxSession(image="python:3.9.19-bullseye", keep_template=True, lang="python") as session:
    result = session.run("print('Hello, World!')")
    print(result)

# With a custom Dockerfile
with SandboxSession(dockerfile="Dockerfile", keep_template=True, lang="python") as session:
    result = session.run("print('Hello, World!')")
    print(result)

# Or with the default image for the language
with SandboxSession(lang="python", keep_template=True) as session:
    result = session.run("print('Hello, World!')")
    print(result)
```

LLM Sandbox also supports copying files between the host and the sandbox:
```python
from llm_sandbox import SandboxSession

with SandboxSession(lang="python", keep_template=True) as session:
    # Copy a file from the host to the sandbox
    session.copy_to_runtime("test.py", "/sandbox/test.py")

    # Run the copied Python code in the sandbox
    result = session.execute_command("python /sandbox/test.py")
    print(result)

    # Copy a file from the sandbox to the host
    session.copy_from_runtime("/sandbox/output.txt", "output.txt")
```

#### Custom runtime configs
```python
from llm_sandbox import SandboxSession

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "test",
        "namespace": "test",
        "labels": {"app": "sandbox"},
    },
    "spec": {
        "containers": [
            {
                "name": "sandbox-container",
                "image": "test",
                "tty": True,
                "volumeMounts": [
                    {
                        "name": "tmp",
                        "mountPath": "/tmp",
                    }
                ],
            }
        ],
        "volumes": [{"name": "tmp", "emptyDir": {"sizeLimit": "5Gi"}}],
    },
}

with SandboxSession(
    backend="kubernetes",
    image="python:3.9.19-bullseye",
    dockerfile=None,
    lang="python",
    keep_template=False,
    verbose=False,
    pod_manifest=pod_manifest,
) as session:
    result = session.run("print('Hello, World!')")
    print(result)
```

#### Remote Docker Host
```python
import docker
from llm_sandbox import SandboxSession

tls_config = docker.tls.TLSConfig(
    client_cert=("path/to/cert.pem", "path/to/key.pem"),
    ca_cert="path/to/ca.pem",
    verify=True,
)
docker_client = docker.DockerClient(base_url="tcp://<host>:<port>", tls=tls_config)

with SandboxSession(
    client=docker_client,
    image="python:3.9.19-bullseye",
    keep_template=True,
    lang="python",
) as session:
    result = session.run("print('Hello, World!')")
    print(result)
```

#### Kubernetes Support
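A note on the `pod_manifest` dict used with this backend: in the Kubernetes Pod schema, `containers` and `volumeMounts` are both lists, which is easy to get wrong when hand-writing the dict. A quick structural self-check can catch that before anything is sent to a cluster (plain Python, no cluster required; `check_pod_manifest` is an illustrative helper, not part of llm-sandbox):

```python
# Pared-down Pod manifest: "containers" and "volumeMounts" must be lists.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "test", "namespace": "test"},
    "spec": {
        "containers": [
            {
                "name": "sandbox-container",
                "image": "python:3.9.19-bullseye",
                "tty": True,
                "volumeMounts": [{"name": "tmp", "mountPath": "/tmp"}],
            }
        ],
        "volumes": [{"name": "tmp", "emptyDir": {"sizeLimit": "5Gi"}}],
    },
}


def check_pod_manifest(manifest: dict) -> None:
    """Cheap structural checks before handing the manifest to a cluster."""
    spec = manifest["spec"]
    assert isinstance(spec["containers"], list), "'containers' must be a list"
    volume_names = {v["name"] for v in spec.get("volumes", [])}
    for container in spec["containers"]:
        mounts = container.get("volumeMounts", [])
        assert isinstance(mounts, list), "'volumeMounts' must be a list"
        # Every volumeMount must refer to a volume declared in the Pod spec.
        assert {m["name"] for m in mounts} <= volume_names, "unmatched volumeMount"


check_pod_manifest(pod_manifest)
print("manifest structure OK")
```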
```python
from kubernetes import client, config
from llm_sandbox import SandboxSession

# Use local kubeconfig
config.load_kube_config()
k8s_client = client.CoreV1Api()

with SandboxSession(
    client=k8s_client,
    backend="kubernetes",
    image="python:3.9.19-bullseye",
    lang="python",
    pod_manifest=pod_manifest,  # None by default
) as session:
    result = session.run("print('Hello from Kubernetes!')")
    print(result)
```

#### Podman Support
```python
from llm_sandbox import SandboxSession

with SandboxSession(
    backend="podman",
    lang="python",
    image="python:3.9.19-bullseye",
) as session:
    result = session.run("print('Hello from Podman!')")
    print(result)
```

### Integration with AI Frameworks
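Both integrations below follow the same pattern: wrap `SandboxSession` in a plain function that opens a short-lived session per call and returns `result.text`, then register that function with the framework as a tool. Here is that shape sketched against a stand-in session class, so the pattern is runnable without Docker (`FakeSession` and `RunResult` are illustrative stand-ins, not part of llm-sandbox):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class RunResult:
    text: str  # llm-sandbox run results expose their output via .text


class FakeSession:
    """Stand-in for SandboxSession: same context-manager shape, no Docker needed."""

    def __init__(self, lang: str, verbose: bool = False):
        self.lang = lang

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

    def run(self, code: str, libraries: Optional[List] = None) -> RunResult:
        return RunResult(text=f"[{self.lang}] executed {len(code)} chars")


def run_code(lang: str, code: str, libraries: Optional[List] = None) -> str:
    """Tool function an agent framework can call: one session per invocation."""
    with FakeSession(lang=lang, verbose=False) as session:
        return session.run(code, libraries).text


print(run_code("python", "print('hi')"))
```

Opening a fresh session per tool call keeps each generated snippet isolated; in the real integrations below, `FakeSession` is simply replaced by `SandboxSession`.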
#### Langchain Integration
```python
from typing import Optional, List
from llm_sandbox import SandboxSession
from langchain import hub
from langchain_openai import ChatOpenAI
from langchain.tools import tool
from langchain.agents import AgentExecutor, create_tool_calling_agent


@tool
def run_code(lang: str, code: str, libraries: Optional[List] = None) -> str:
    """
    Run code in a sandboxed environment.

    :param lang: The language of the code.
    :param code: The code to run.
    :param libraries: The libraries to use (optional).
    :return: The output of the code.
    """
    with SandboxSession(lang=lang, verbose=False) as session:  # type: ignore[attr-defined]
        return session.run(code, libraries).text


if __name__ == "__main__":
    llm = ChatOpenAI(model="gpt-4o", temperature=0)
    prompt = hub.pull("hwchase17/openai-functions-agent")
    tools = [run_code]

    agent = create_tool_calling_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

    output = agent_executor.invoke(
        {"input": "Write python code to calculate Pi number by Monte Carlo method then run it."}
    )
    print(output)

    output = agent_executor.invoke(
        {"input": "Write python code to calculate the factorial of a number then run it."}
    )
    print(output)

    output = agent_executor.invoke(
        {"input": "Write python code to calculate the Fibonacci sequence then run it."}
    )
    print(output)
```

#### LlamaIndex Integration
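The chat prompts in this section, like the Langchain ones above, repeatedly ask the model for a Monte Carlo estimate of Pi. For reference, the kind of program the sandbox then ends up executing looks roughly like this (a plain-Python sketch, not output captured from the library):

```python
import random


def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Monte Carlo estimate of Pi: the fraction of uniform random points in the
    unit square that fall inside the unit quarter-circle approaches pi/4."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples


pi_estimate = estimate_pi(100_000)
print(pi_estimate)  # close to 3.14159 for large n_samples
```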
```python
from typing import Optional, List
from llm_sandbox import SandboxSession
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from llama_index.core.agent import FunctionCallingAgentWorker
import nest_asyncio

nest_asyncio.apply()


def run_code(lang: str, code: str, libraries: Optional[List] = None) -> str:
    """
    Run code in a sandboxed environment.

    :param lang: The language of the code, must be one of ['python', 'java', 'javascript', 'cpp', 'go', 'ruby'].
    :param code: The code to run.
    :param libraries: The libraries to use (optional).
    :return: The output of the code.
    """
    with SandboxSession(lang=lang, verbose=False) as session:  # type: ignore[attr-defined]
        return session.run(code, libraries).text


if __name__ == "__main__":
    llm = OpenAI(model="gpt-4o", temperature=0)
    code_execution_tool = FunctionTool.from_defaults(fn=run_code)

    agent_worker = FunctionCallingAgentWorker.from_tools(
        [code_execution_tool],
        llm=llm,
        verbose=True,
        allow_parallel_tool_calls=False,
    )
    agent = agent_worker.as_agent()

    response = agent.chat(
        "Write python code to calculate Pi number by Monte Carlo method then run it."
    )
    print(response)

    response = agent.chat(
        "Write python code to calculate the factorial of a number then run it."
    )
    print(response)

    response = agent.chat(
        "Write python code to calculate the Fibonacci sequence then run it."
    )
    print(response)

    response = agent.chat("Calculate the sum of the first 10000 numbers.")
    print(response)
```

### Contributing
We welcome contributions! Here are some areas we're looking to improve:
- [ ] Add support for more programming languages
- [ ] Enhance security scanning patterns
- [ ] Improve resource monitoring accuracy
- [ ] Add more AI framework integrations
- [ ] Implement container pooling for better performance
- [ ] Add support for distributed execution
- [ ] Enhance logging and monitoring features

### License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
### Changelog
See [CHANGELOG.md](CHANGELOG.md) for a list of changes and improvements in each version.