Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/e2b-dev/fragments
Open-source Next.js template for building apps that are fully generated by AI. By E2B.
ai ai-code-generation anthropic claude claude-ai code-interpreter e2b javascript llm nextjs react sandbox typescript
- Host: GitHub
- URL: https://github.com/e2b-dev/fragments
- Owner: e2b-dev
- License: apache-2.0
- Created: 2024-07-10T20:31:32.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-11-21T20:24:05.000Z (21 days ago)
- Last Synced: 2024-11-27T14:03:55.666Z (15 days ago)
- Topics: ai, ai-code-generation, anthropic, claude, claude-ai, code-interpreter, e2b, javascript, llm, nextjs, react, sandbox, typescript
- Language: TypeScript
- Homepage: https://fragments.e2b.dev
- Size: 4.26 MB
- Stars: 3,507
- Watchers: 30
- Forks: 457
- Open Issues: 21
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- AiTreasureBox - e2b-dev/fragments - Open-source Next.js template for building apps that are fully generated by AI. By E2B. (Repos)
- awesome - e2b-dev/fragments - Open-source Next.js template for building apps that are fully generated by AI. By E2B. (TypeScript)
README
# Fragments by E2B
This is an open-source version of apps like [Anthropic's Claude Artifacts](https://www.anthropic.com/news/claude-3-5-sonnet), Vercel [v0](https://v0.dev), or [GPT Engineer](https://gptengineer.app).
Powered by the [E2B SDK](https://github.com/e2b-dev/code-interpreter).
![Preview](preview.png)
[→ Try on fragments.e2b.dev](https://fragments.e2b.dev)
## Features
- Based on Next.js 14 (App Router, Server Actions), shadcn/ui, TailwindCSS, Vercel AI SDK.
- Uses the [E2B SDK](https://github.com/e2b-dev/code-interpreter) by [E2B](https://e2b.dev) to securely execute code generated by AI.
- Streaming in the UI.
- Can install and use any npm or pip package.
- Supported stacks ([add your own](#adding-custom-personas)):
- 🔸 Python interpreter
- 🔸 Next.js
- 🔸 Vue.js
- 🔸 Streamlit
- 🔸 Gradio
- Supported LLM Providers ([add your own](#adding-custom-llm-models)):
- 🔸 OpenAI
- 🔸 Anthropic
- 🔸 Google AI
- 🔸 Mistral
- 🔸 Groq
- 🔸 Fireworks
- 🔸 Together AI
- 🔸 Ollama

**Make sure to give us a star!**
## Get started
### Prerequisites
- [git](https://git-scm.com)
- A recent version of [Node.js](https://nodejs.org) and the npm package manager
- [E2B API Key](https://e2b.dev)
- LLM Provider API Key

### 1. Clone the repository
In your terminal:
```
git clone https://github.com/e2b-dev/fragments.git
```

### 2. Install the dependencies
Enter the repository:
```
cd fragments
```

Run the following to install the required dependencies:
```
npm i
```

### 3. Set the environment variables
Create a `.env.local` file and set the following:
```sh
# Get your API key here - https://e2b.dev/
E2B_API_KEY="your-e2b-api-key"

# OpenAI API Key
OPENAI_API_KEY=

# Other providers
ANTHROPIC_API_KEY=
GROQ_API_KEY=
FIREWORKS_API_KEY=
TOGETHER_API_KEY=
GOOGLE_AI_API_KEY=
GOOGLE_VERTEX_CREDENTIALS=
MISTRAL_API_KEY=
XAI_API_KEY=

### Optional env vars

# Domain of the site
NEXT_PUBLIC_SITE_URL=

# Disabling API key and base URL input in the chat
NEXT_PUBLIC_NO_API_KEY_INPUT=
NEXT_PUBLIC_NO_BASE_URL_INPUT=

# Rate limit
RATE_LIMIT_MAX_REQUESTS=
RATE_LIMIT_WINDOW=

# Vercel/Upstash KV (short URLs, rate limiting)
KV_REST_API_URL=
KV_REST_API_TOKEN=

# Supabase (auth)
SUPABASE_URL=
SUPABASE_ANON_KEY=

# PostHog (analytics)
NEXT_PUBLIC_POSTHOG_KEY=
NEXT_PUBLIC_POSTHOG_HOST=
```

### 4. Start the development server
```
npm run dev
```

### 5. Build the web app
```
npm run build
```

## Customize
### Adding custom personas
1. Make sure [E2B CLI](https://e2b.dev/docs/cli/installation) is installed and you're logged in.
2. Add a new folder under [sandbox-templates/](sandbox-templates/)
3. Initialize a new template using E2B CLI:
```
e2b template init
```

This will create a new file called `e2b.Dockerfile`.
4. Adjust the `e2b.Dockerfile`
Here's an example streamlit template:
```Dockerfile
# You can use most Debian-based base images
FROM python:3.12-slim

RUN pip3 install --no-cache-dir streamlit pandas numpy matplotlib requests seaborn plotly
# Copy the code to the container
WORKDIR /home/user
COPY . /home/user
```

5. Specify a custom start command in `e2b.toml`:
```toml
start_cmd = "cd /home/user && streamlit run app.py"
```

6. Deploy the template with the E2B CLI:
```
e2b template build --name <template-name>
```

After the build has finished, you should get the following message:
```
✅ Building sandbox template finished.
```

7. Open [lib/templates.json](lib/templates.json) in your code editor.
Add your new template to the list. Here's an example for Streamlit:
```json
"streamlit-developer": {
  "name": "Streamlit developer",
  "lib": [
    "streamlit",
    "pandas",
    "numpy",
    "matplotlib",
    "requests",
    "seaborn",
    "plotly"
  ],
  "file": "app.py",
  "instructions": "A Streamlit app that reloads automatically.",
  "port": 8501 // can be null
},
```

Provide a template id (as the key), a name, a list of dependencies, an entrypoint file, and an optional port. You can also add extra instructions that will be passed to the LLM.
8. Optionally, add a new logo under [public/thirdparty/templates](public/thirdparty/templates)
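The shape of a `templates.json` entry can be modeled in a few lines of TypeScript. This is an illustrative sketch based on the fields shown above; the `TemplateEntry` interface and the `pipInstallCommand` helper are hypothetical names, not types or functions from the repo:

```typescript
// Illustrative shape of a lib/templates.json entry (field names taken from
// the Streamlit example above; this interface is not part of the repo).
interface TemplateEntry {
  name: string
  lib: string[]
  file: string
  instructions: string
  port: number | null // null when the template exposes no port
}

// Hypothetical helper: turn an entry's dependency list into a pip command.
function pipInstallCommand(t: TemplateEntry): string {
  return `pip install ${t.lib.join(' ')}`
}

const streamlit: TemplateEntry = {
  name: 'Streamlit developer',
  lib: ['streamlit', 'pandas'],
  file: 'app.py',
  instructions: 'A Streamlit app that reloads automatically.',
  port: 8501,
}

console.log(pipInstallCommand(streamlit)) // pip install streamlit pandas
```

Typing the entry this way makes it easy to spot a missing field (for example, a forgotten `file`) before the template is offered to the LLM.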
### Adding custom LLM models
1. Open [lib/models.json](lib/models.json) in your code editor.
2. Add a new entry to the models list:
```json
{
"id": "mistral-large",
"name": "Mistral Large",
"provider": "Ollama",
"providerId": "ollama"
}
```

Here, `id` is the model id, `name` is the model name shown in the UI, `provider` is the provider's display name, and `providerId` is the provider tag (see [adding providers](#adding-custom-llm-providers) below).
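To illustrate how these fields relate, here is a small sketch; `ModelEntry` and `byProvider` are hypothetical names for this example, and the key point is that `providerId` is what ties a model entry to its provider configuration:

```typescript
// Illustrative shape of a lib/models.json entry (fields from the example above).
type ModelEntry = { id: string; name: string; provider: string; providerId: string }

const models: ModelEntry[] = [
  { id: 'gpt-4o', name: 'GPT-4o', provider: 'OpenAI', providerId: 'openai' },
  { id: 'mistral-large', name: 'Mistral Large', provider: 'Ollama', providerId: 'ollama' },
]

// Hypothetical lookup: resolve all models served by a given provider tag.
const byProvider = (pid: string) => models.filter((m) => m.providerId === pid)

console.log(byProvider('ollama').map((m) => m.id)) // [ 'mistral-large' ]
```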
### Adding custom LLM providers
1. Open [lib/models.ts](lib/models.ts) in your code editor.
2. Add a new entry to the `providerConfigs` list:
Example for fireworks:
```ts
fireworks: () =>
  createOpenAI({
    apiKey: apiKey || process.env.FIREWORKS_API_KEY,
    baseURL: baseURL || 'https://api.fireworks.ai/inference/v1',
  })(modelNameString),
```

3. Optionally, adjust the default structured output mode in the `getDefaultMode` function:
```ts
if (providerId === 'fireworks') {
return 'json'
}
```

4. Optionally, add a new logo under [public/thirdparty/logos](public/thirdparty/logos)
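As a rough sketch of what such a per-provider default might look like: this assumes the function maps a `providerId` to a structured-output mode string; the `OutputMode` type and the `'auto'` fallback are assumptions for illustration, not the repo's actual implementation:

```typescript
// Hypothetical sketch: choose a structured-output mode per provider.
// Some providers are more reliable with plain JSON mode than tool calling.
type OutputMode = 'auto' | 'json'

function getDefaultMode(providerId: string): OutputMode {
  if (providerId === 'fireworks') {
    return 'json'
  }
  // Assumed fallback: let the SDK pick the mode for other providers.
  return 'auto'
}

console.log(getDefaultMode('fireworks')) // json
```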
## Contributing
As an open-source project, we welcome contributions from the community. If you are experiencing any bugs or want to add some improvements, please feel free to open an issue or pull request.