Presenton


Quickstart ·
Docs ·
Youtube ·
Discord



# Open-Source AI Presentation Generator and API (Gamma, Beautiful AI, Decktopus Alternative)

### ✨ Why Presenton

No SaaS lock-in · No forced subscriptions · Full control over models and data

What makes Presenton different?

- Fully **self-hosted**; Web (Docker) & Desktop (Mac, Windows & Linux)
- Works with OpenAI, Gemini, Anthropic, Ollama, or custom models
- Deployable as an API service
- Fully open-source (Apache 2.0)
- Use your **existing PPTX files as templates** _(coming soon)_



### 🎛 Features



### 💻 Presenton Desktop

Create AI-powered presentations using your own model provider (BYOK) or run everything locally on your own machine for full control and data privacy.




**Available Platforms**

| Platform | Architecture | Package | Download |
| --- | --- | --- | --- |
| macOS | Apple Silicon / Intel | `.dmg` | Download ↗ |
| Windows | x64 | `.exe` | Download ↗ |
| Linux | x64 | `.deb` | Download ↗ |

Presenton gives you complete control over your AI presentation workflow. Choose your models, customize your experience, and keep your data private.

- Custom Templates & Themes — Create unlimited presentation designs with HTML and Tailwind CSS
- AI Template Generation — Create presentation templates from existing PowerPoint documents
- Flexible Generation — Build presentations from prompts or uploaded documents
- Export Ready — Save as PowerPoint (PPTX) and PDF with professional formatting
- Built-In MCP Server — Generate presentations over Model Context Protocol
- Bring Your Own Key — Use your own API keys for OpenAI, Google Gemini, Anthropic Claude, or any compatible provider. Only pay for what you use, no hidden fees or subscriptions.
- Ollama Integration — Run open-source models locally with full privacy
- OpenAI API Compatible — Connect to any OpenAI-compatible endpoint with your own models
- Multi-Provider Support — Mix and match text and image generation providers
- Versatile Image Generation — Choose from DALL-E 3, Gemini Flash, Pexels, or Pixabay
- Rich Media Support — Icons, charts, and custom graphics for professional presentations
- Runs Locally — All processing happens on your device, no cloud dependencies
- API Deployment — Host as your own API service for your team
- Fully Open-Source — Apache 2.0 licensed, inspect, modify, and contribute
- Docker Ready — One-command deployment with GPU support for local models
- Electron Desktop App — Run Presenton as a native desktop application on Windows, macOS, and Linux (no browser required)
- Sign in with ChatGPT — Use your free or paid ChatGPT account to sign in and start creating presentations instantly — no separate API key required


### ☁️ Presenton Cloud

Run Presenton directly in your browser — no installation, no setup required. Start creating presentations instantly from anywhere.




### ⚡ Running Presenton


You can run Presenton in two ways: **Docker** for a one-command setup without installing a local dev stack, or the **Electron** desktop app for a native experience (ideal for development or offline use).

**Option 1: Electron (Desktop App)**


Run Presenton as a native desktop application. The LLM and image providers (API keys, etc.) can be configured in the app. The same environment variables used for Docker apply when running the bundled backend.


Prerequisites: Node.js (LTS), npm, Python 3.11, and uv (for the Electron FastAPI backend in `electron/servers/fastapi`).

- **Setup (First Time)**

  ```bash
  cd electron
  npm run setup:env
  ```

  This installs Node dependencies, runs `uv sync` in the FastAPI server, and installs Next.js dependencies.

- Run in Development

npm run dev


This compiles TypeScript and starts Electron. The backend and UI run locally
inside the desktop window.

- **Build Distributables (Optional)**

  To create installers for Windows, macOS, or Linux:

  ```bash
  npm run build:all
  npm run dist
  ```

  Output files are written to `electron/dist` (or as configured in your electron-builder settings).

**Option 2: Docker**

- **Start Presenton**

  Linux/macOS (Bash/Zsh):

  ```bash
  docker run -it --name presenton -p 5000:80 -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
  ```

  Windows (PowerShell):

  ```powershell
  docker run -it --name presenton -p 5000:80 -v "${PWD}\app_data:/app_data" ghcr.io/presenton/presenton:latest
  ```

- **Open Presenton**

  Open http://localhost:5000 in the browser of your choice to use Presenton.

> Note: You can replace 5000 with any other port of your choice to run Presenton on a different port.


### ⚙️ Deployment Configurations

These settings apply to both Docker and the Electron app's backend. You may want to provide your API keys directly as environment variables so they stay hidden from users; the variables below let you do that.

- CAN_CHANGE_KEYS=[true/false]: Set this to **false** if you want to keep API Keys hidden and make them unmodifiable.
- LLM=[openai/google/anthropic/ollama/custom]: Select **LLM** of your choice.
- OPENAI_API_KEY=[Your OpenAI API Key]: Provide this if **LLM** is set to **openai**
- OPENAI_MODEL=[OpenAI Model ID]: Provide this if **LLM** is set to **openai** (default: "gpt-4.1")
- GOOGLE_API_KEY=[Your Google API Key]: Provide this if **LLM** is set to **google**
- GOOGLE_MODEL=[Google Model ID]: Provide this if **LLM** is set to **google** (default: "models/gemini-2.0-flash")
- ANTHROPIC_API_KEY=[Your Anthropic API Key]: Provide this if **LLM** is set to **anthropic**
- ANTHROPIC_MODEL=[Anthropic Model ID]: Provide this if **LLM** is set to **anthropic** (default: "claude-3-5-sonnet-20241022")
- OLLAMA_URL=[Custom Ollama URL]: Provide this if you want to use a custom Ollama URL and **LLM** is set to **ollama**
- OLLAMA_MODEL=[Ollama Model ID]: Provide this if **LLM** is set to **ollama**
- CUSTOM_LLM_URL=[Custom OpenAI Compatible URL]: Provide this if **LLM** is set to **custom**
- CUSTOM_LLM_API_KEY=[Custom OpenAI Compatible API KEY]: Provide this if **LLM** is set to **custom**
- CUSTOM_MODEL=[Custom Model ID]: Provide this if **LLM** is set to **custom**
- TOOL_CALLS=[true/false]: If **true**, the custom **LLM** will use tool calls instead of JSON Schema for structured output.
- DISABLE_THINKING=[true/false]: If **true**, thinking will be disabled on the custom **LLM**.
- WEB_GROUNDING=[true/false]: If **true**, the LLM will be able to search the web for better results (OpenAI, Google, and Anthropic only).

You can also set the following environment variables to customize the image generation provider and API keys:

- DISABLE_IMAGE_GENERATION: If **true**, Image Generation will be disabled for slides.
- IMAGE_PROVIDER=[dall-e-3/gpt-image-1.5/gemini_flash/nanobanana_pro/pexels/pixabay/comfyui]: Select the image provider of your choice. Required if **DISABLE_IMAGE_GENERATION** is not set to **true**.
- OPENAI_API_KEY=[Your OpenAI API Key]: Required if using **dall-e-3** or **gpt-image-1.5** as the image provider.
- DALL_E_3_QUALITY=[standard/hd]: Optional quality setting for **dall-e-3** (default: `standard`).
- GPT_IMAGE_1_5_QUALITY=[low/medium/high]: Optional quality setting for **gpt-image-1.5** (default: `medium`).
- GOOGLE_API_KEY=[Your Google API Key]: Required if using **gemini_flash** or **nanobanana_pro** as the image provider.
- PEXELS_API_KEY=[Your Pexels API Key]: Required if using **pexels** as the image provider.
- PIXABAY_API_KEY=[Your Pixabay API Key]: Required if using **pixabay** as the image provider.
- COMFYUI_URL=[Your ComfyUI server URL] and COMFYUI_WORKFLOW=[Workflow JSON]: Required if using **comfyui** to route prompts to a self-hosted ComfyUI workflow.

You can disable anonymous telemetry using the following environment variable:

- DISABLE_ANONYMOUS_TRACKING=[true/false]: Set this to **true** to disable anonymous telemetry.

> Note: You can freely choose both the LLM (text generation) and the image provider. Supported image providers: **dall-e-3**, **gpt-image-1.5** (OpenAI), **gemini_flash**, **nanobanana_pro** (Google), **pexels**, **pixabay**, and **comfyui** (self-hosted).
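If you prefer to keep keys out of your shell history and `docker run` command lines, the same variables can be collected in an env file and passed to the container with Docker's standard `--env-file` flag. A minimal sketch (the file name and key value are placeholders):

```shell
# Write the configuration to an env file; the API key below is a placeholder.
cat > presenton.env <<'EOF'
LLM=openai
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4.1
IMAGE_PROVIDER=dall-e-3
CAN_CHANGE_KEYS=false
DISABLE_ANONYMOUS_TRACKING=true
EOF

# Pass the whole file to the container instead of repeating -e flags:
# docker run -it --name presenton -p 5000:80 --env-file presenton.env \
#   -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```

Combined with `CAN_CHANGE_KEYS=false`, this keeps the keys both out of the UI and out of your command history.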




**Docker Run Examples by Provider**

- Using OpenAI

```bash
docker run -it --name presenton -p 5000:80 -e LLM="openai" -e OPENAI_API_KEY="******" -e IMAGE_PROVIDER="dall-e-3" -e CAN_CHANGE_KEYS="false" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```

- Using Google

```bash
docker run -it --name presenton -p 5000:80 -e LLM="google" -e GOOGLE_API_KEY="******" -e IMAGE_PROVIDER="gemini_flash" -e CAN_CHANGE_KEYS="false" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```

- Using Ollama

```bash
docker run -it --name presenton -p 5000:80 -e LLM="ollama" -e OLLAMA_MODEL="llama3.2:3b" -e IMAGE_PROVIDER="pexels" -e PEXELS_API_KEY="*******" -e CAN_CHANGE_KEYS="false" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```

- Using Anthropic

```bash
docker run -it --name presenton -p 5000:80 -e LLM="anthropic" -e ANTHROPIC_API_KEY="******" -e IMAGE_PROVIDER="pexels" -e PEXELS_API_KEY="******" -e CAN_CHANGE_KEYS="false" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```

- Using OpenAI Compatible API

```bash
docker run -it -p 5000:80 -e CAN_CHANGE_KEYS="false" -e LLM="custom" -e CUSTOM_LLM_URL="http://*****" -e CUSTOM_LLM_API_KEY="*****" -e CUSTOM_MODEL="llama3.2:3b" -e IMAGE_PROVIDER="pexels" -e PEXELS_API_KEY="********" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```

- Running Presenton with GPU Support
To use GPU acceleration with Ollama models, you need to install and configure the NVIDIA Container Toolkit. This allows Docker containers to access your NVIDIA GPU.
Once the NVIDIA Container Toolkit is installed and configured, you can run Presenton with GPU support by adding the `--gpus=all` flag:

```bash
docker run -it --name presenton --gpus=all -p 5000:80 -e LLM="ollama" -e OLLAMA_MODEL="llama3.2:3b" -e IMAGE_PROVIDER="pexels" -e PEXELS_API_KEY="*******" -e CAN_CHANGE_KEYS="false" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```


### ✨ Generate Presentation via API

**Generate Presentation**


- **Endpoint:** `/api/v1/ppt/presentation/generate`
- **Method:** `POST`
- **Content-Type:** `application/json`

**Request Body**

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `content` | string | Yes | Main content used to generate the presentation. |
| `slides_markdown` | string[] \| null | No | Provide custom slide markdown instead of auto-generation. |
| `instructions` | string \| null | No | Additional generation instructions. |
| `tone` | string | No | Text tone (default: `"default"`). Options: `default`, `casual`, `professional`, `funny`, `educational`, `sales_pitch` |
| `verbosity` | string | No | Content density (default: `"standard"`). Options: `concise`, `standard`, `text-heavy` |
| `web_search` | boolean | No | Enable web search grounding (default: `false`). |
| `n_slides` | integer | No | Number of slides to generate (default: 8). |
| `language` | string | No | Presentation language (default: `"English"`). |
| `template` | string | No | Template name (default: `"general"`). |
| `include_table_of_contents` | boolean | No | Include table of contents slide (default: `false`). |
| `include_title_slide` | boolean | No | Include title slide (default: `true`). |
| `files` | string[] \| null | No | Files to use in generation. Upload first via `/api/v1/ppt/files/upload`. |
| `export_as` | string | No | Export format (default: `"pptx"`). Options: `pptx`, `pdf` |
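Before calling the endpoint, the request body can be assembled in a file and checked for well-formed JSON locally. A minimal sketch using only fields from the table above (the field values are illustrative):

```shell
# Build a generate request body and verify it parses before sending it.
cat > request.json <<'EOF'
{
  "content": "Introduction to Machine Learning",
  "tone": "professional",
  "verbosity": "concise",
  "n_slides": 5,
  "include_title_slide": true,
  "export_as": "pdf"
}
EOF

# python -m json.tool exits non-zero on malformed JSON.
python3 -m json.tool request.json > /dev/null && echo "request.json is valid JSON"

# Send it (assumes Presenton is running on localhost:5000):
# curl -X POST http://localhost:5000/api/v1/ppt/presentation/generate \
#   -H "Content-Type: application/json" -d @request.json
```

Keeping the body in a file and posting it with `-d @request.json` avoids shell-quoting mistakes in longer payloads.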

**Response**

```json
{
  "presentation_id": "string",
  "path": "string",
  "edit_path": "string"
}
```

**Example Request**

```bash
curl -X POST http://localhost:5000/api/v1/ppt/presentation/generate \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Introduction to Machine Learning",
    "n_slides": 5,
    "language": "English",
    "template": "general",
    "export_as": "pptx"
  }'
```

**Example Response**

```json
{
  "presentation_id": "d3000f96-096c-4768-b67b-e99aed029b57",
  "path": "/app_data/d3000f96-096c-4768-b67b-e99aed029b57/Introduction_to_Machine_Learning.pptx",
  "edit_path": "/presentation?id=d3000f96-096c-4768-b67b-e99aed029b57"
}
```


> Note: Prepend your server’s root URL to `path` and `edit_path` to construct valid links.
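Constructing the full links from the example response is plain string concatenation. A sketch assuming the default local deployment on port 5000:

```shell
# The API returns relative paths; prepend the server's root URL to get usable links.
BASE_URL="http://localhost:5000"
PATH_FROM_API="/app_data/d3000f96-096c-4768-b67b-e99aed029b57/Introduction_to_Machine_Learning.pptx"
EDIT_PATH_FROM_API="/presentation?id=d3000f96-096c-4768-b67b-e99aed029b57"

DOWNLOAD_URL="${BASE_URL}${PATH_FROM_API}"
EDIT_URL="${BASE_URL}${EDIT_PATH_FROM_API}"

echo "$DOWNLOAD_URL"
echo "$EDIT_URL"
```

If you deploy behind a reverse proxy or on a different port, set `BASE_URL` to that public root instead.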



### 🚀 Roadmap

- [x] Support for custom HTML templates by developers
- [x] Support for accessing custom templates over API
- [x] Implement MCP server
- [ ] Ability for users to change system prompt
- [x] Support external SQL database

Track the public roadmap on GitHub Projects: [https://github.com/orgs/presenton/projects/2](https://github.com/orgs/presenton/projects/2)
