https://github.com/mzac/apt-ui
A docker container that will let you manage your apt package updates on all your Debian based systems through a simple GUI
# apt-ui
Self-hosted apt fleet manager – one dashboard, every server, real terminal output.
A focused alternative to AWX / Ansible Tower for Ubuntu, Debian, Raspbian, and Proxmox fleets.
[Architecture](ARCHITECTURE.md) · [Security](SECURITY.md) · [Changelog](CHANGELOG.md) · [Releases](https://github.com/mzac/apt-ui/releases)
---
> **This project was entirely written by [Claude](https://claude.ai) (Anthropic's AI assistant) via [Claude Code](https://claude.ai/code).** All code, configuration, and documentation – from the FastAPI backend and asyncssh integration to the React frontend and Docker setup – was generated through an iterative, conversation-driven development process with no manual coding.
---
## Why apt-ui
**The fleet is the unit, not the host.** Most apt UIs are per-server. apt-ui treats your Ubuntu / Debian / Raspbian / Proxmox fleet as one thing: Check All, Upgrade All, Reboot All, and Autoremove All are multiplexed into one terminal stream with per-server filter chips and live status. No more SSH'ing to twelve boxes to roll a security patch.
**One container, zero agents.** Single Docker image (under 250 MB). Talks to managed servers over plain SSH – no daemons on the targets, no message bus, no Postgres, no Redis. The whole control plane is FastAPI + SQLite + APScheduler, designed to run on a Pi 4 and manage 50 servers comfortably.
**Built for staged rollouts.** Tag servers with `ring:test` / `ring:prod` and auto-upgrade promotes through them in alphabetical ring order, aborting the rollout if any host fails. Maintenance windows block scheduled work outside approved hours. Pre/post-upgrade hooks let you take a BTRFS / ZFS snapshot first. Rolling reboot orchestrates kernel reboots in batches with reachability checks between them.
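The ring logic above can be sketched in a few lines of Python. This is a hypothetical illustration of the ordering and abort semantics only – the function name and data shapes are assumptions, not the project's actual scheduler code:

```python
from typing import Callable

def staged_rollout(servers: dict[str, str], upgrade: Callable[[str], bool]) -> list[str]:
    """Promote upgrades through ring:* tags in alphabetical order.

    `servers` maps hostname -> ring tag (e.g. "ring:test", "ring:prod").
    The whole rollout aborts as soon as any host fails, so later rings
    never see a change that broke an earlier ring.
    """
    upgraded: list[str] = []
    for ring in sorted(set(servers.values())):  # alphabetical ring order
        for host in sorted(h for h, r in servers.items() if r == ring):
            if not upgrade(host):               # any failure aborts the rollout
                return upgraded
            upgraded.append(host)
    return upgraded
```

With `ring:dev` sorting before `ring:prod`, every dev host is upgraded (or the rollout is aborted) before the first prod host is touched.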
**Security-aware, not just scheduling.** A daily CVE matcher annotates every pending package with USN / CVE-IDs sourced from the Ubuntu USN database. The fleet-wide CVE inventory pivots that data into "which hosts are exposed to CVE-2025-XXXXX." A Prometheus `/metrics` endpoint feeds Grafana. Notifications cover daily summaries, weekly digests, security alerts, and reboot-required events across email / Telegram / Slack / webhook. Auth includes TOTP 2FA, scrypt-hashed API tokens, and admin / read-only RBAC.
---
## What's in the box
> One single-container control plane for an apt fleet – fleet-wide actions, scheduled automation, security visibility, and integrations to keep it honest.
### Fleet management
| Feature | Highlights |
|---|---|
| **Dashboard & fleet view** | server card grid · update / security / reboot / autoremove counts · clickable filters · search across hostnames + tags |
| **Groups & tags** | colour-coded groups (many-to-many) · freeform tags · auto-tagging by OS and virt type · ring tags drive staged rollouts |
| **Fleet-wide actions** | Check All · Refresh All · Upgrade All · Autoremove All · Rolling Reboot – all multiplexed via WebSocket with per-server filter chips |
| **Reachability monitor** | TCP ping every 5 minutes (independent of SSH) · offline servers dimmed and banner-flagged · `is_reachable` + `last_seen` per server |
| **Docker host detection** | identifies the host running the dashboard and blocks upgrades of container-runtime packages mid-flight |
| **Fleet-wide package search** | five match modes (exact / contains / starts-with / ends-with / regex) · pivoted CVE-style table · diverged-version highlight |
| **Multi-server compare** | side-by-side installed-package inventory across any combination of servers · Diverged / Common / All filter modes |
### Update & upgrade
| Feature | Highlights |
|---|---|
| **Upgradable list** | full version deltas · repo source · security flag · phased-update column · package descriptions on hover |
| **Selective upgrade** | check the boxes for individual packages instead of upgrading everything |
| **Dist-upgrade detection** | parallel `apt-get dist-upgrade --dry-run` surfaces new dependency packages and "kept back" rows that plain `upgrade` would skip |
| **Live terminal** | WebSocket stream of `apt-get` output with carriage-return progress lines updating in place; ANSI colour preserved |
| **Package install** | search the apt cache and install new packages on any host from the UI |
| **.deb installs** | URL (validated, `wget`-pulled) or browser upload (SFTP'd via asyncssh) – both stream `dpkg -i` + `apt-get install -f` live |
| **Templates** | named package sets applied to one or more hosts in one click – useful for provisioning identical roles |
| **Held packages** | per-package hold / unhold from the Packages tab; held-package chips with one-click unhold |
| **Apt sources editor** | tabbed editor for `/etc/apt/sources.list*` files · save / delete / create · "Test with apt-get update" streams live |
### Security
| Feature | Highlights |
|---|---|
| **CVE matcher** | daily Ubuntu USN sync · per-package severity-coloured badge · USN + CVE links in tooltips |
| **Fleet CVE inventory** | `/security` page pivots CVE → servers · severity / status / group filters · CSV export · nav badge with critical-CVE count |
| **Per-server SSH keys** | Fernet-encrypted in DB · falls back to global `SSH_PRIVATE_KEY` or `SSH_AUTH_SOCK` |
| **Auto security updates** | per-server `unattended-upgrades` toggle with shield-badge state · streams live SSH output when toggling |
| **TOTP 2FA** | QR enrolment in Settings → Account · login flow asks for a 6-digit code when enabled |
| **API tokens** | `aptui_<32 url-safe bytes>` format · scrypt-hashed · raw value shown only once · for `curl` / CI / scripts |
| **RBAC** | admin / read-only roles · `require_admin` on ~28 mutation endpoints · "read-only" badge in the nav |
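The scrypt-hashed token scheme in the table can be illustrated with Python's standard library. The cost parameters and helper names below are assumptions for the sketch, not apt-ui's actual values:

```python
import hashlib
import secrets

# Illustrative scrypt cost parameters (n, r, p); apt-ui's real values may differ.
SCRYPT_PARAMS = {"n": 2**14, "r": 8, "p": 1}

def new_token() -> tuple[str, bytes, bytes]:
    """Generate an aptui_-style token plus the salted scrypt hash to store."""
    raw = "aptui_" + secrets.token_urlsafe(32)   # shown to the user exactly once
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(raw.encode(), salt=salt, **SCRYPT_PARAMS)
    return raw, salt, digest                     # persist only salt + digest

def check_token(raw: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.scrypt(raw.encode(), salt=salt, **SCRYPT_PARAMS)
    return secrets.compare_digest(candidate, digest)
```

Because only the salted hash is stored, a leaked database does not leak usable tokens.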
### Automation & scheduling
| Feature | Highlights |
|---|---|
| **Scheduled checks** | configurable cron for fleet-wide update checks |
| **Auto-upgrade** | optional hands-off upgrades on a schedule · concurrency cap · phased-update toggle · conffile-action choice |
| **Maintenance windows** | global or per-server time windows where auto-upgrades are blocked · midnight-wrap · iCal feed for ops calendars |
| **Pre/post-upgrade hooks** | shell commands run before / after every upgrade · pre-hook failure aborts · global or per-server scope |
| **Staged rollout (rings)** | `ring:*` tags promote upgrades through environments in alphabetical order · per-batch failure aborts the rollout |
| **Rolling reboot** | fleet-wide reboot of `reboot_required` servers in ring order with per-batch waits and reachability checks |
| **Reboot-after-upgrade** | optional checkbox auto-reboots after a successful upgrade if `/var/run/reboot-required` exists |
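The "midnight-wrap" behaviour of maintenance windows needs a containment check that differs from a same-day window. A minimal sketch of the idea (hypothetical helper, not the project's code):

```python
from datetime import time

def in_window(now: time, start: time, end: time) -> bool:
    """True if `now` falls inside [start, end), handling windows that wrap midnight."""
    if start <= end:
        # Same-day window, e.g. 01:00-05:00
        return start <= now < end
    # Midnight-wrapping window, e.g. 22:00-04:00: inside if after start OR before end
    return now >= start or now < end
```

A scheduler would call this before dispatching an auto-upgrade and skip the run when the current time is outside every approved window.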
### Notifications
| Channel | Notes |
|---|---|
| **Email** | aiosmtplib · STARTTLS / SSL · HTML + text fallback |
| **Telegram** | Bot API · auto-chunk for messages over 4 K |
| **Slack** | incoming webhook · Block Kit messages with header + section blocks |
| **Webhook** | JSON POST · optional `X-Hub-Signature-256` HMAC-SHA256 |
| **Events** | upgrade complete · upgrade error · security updates found · reboot required · daily summary · weekly digest |
| **Per-channel × per-event toggles** | independently enable each event on each channel |
| **Weekly patch digest** | opt-in summary on a configurable cron · headline counters · by-server table · still-pending list · CVE summary · health flags |
| **Notification log** | every send recorded – channel, event, summary, success/failure |
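On the receiving end, a webhook consumer can verify the optional `X-Hub-Signature-256` header with a constant-time HMAC comparison. A minimal Python sketch – the secret and payload here are illustrative, not values from the project:

```python
import hashlib
import hmac

def verify_signature(payload: bytes, secret: bytes, header: str) -> bool:
    """Check an X-Hub-Signature-256 header ("sha256=<hex>") against the raw body."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header)

# Example: what a sender would compute for a JSON body
payload = b'{"event": "upgrade_complete", "server": "web-01"}'
secret = b"shared-webhook-secret"
signature = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
```

Always compare against the raw request bytes (before any JSON re-serialisation), and use `hmac.compare_digest` rather than `==` to avoid timing side-channels.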
### Visibility & reporting
| Feature | Highlights |
|---|---|
| **Upgrade history** | per-server and fleet-wide log · filterable by server / status · full terminal output expandable per run |
| **SSH audit log** | every command apt-ui dispatches recorded (command, exit, duration, 4 KB output excerpt) · sub-tab in History |
| **dpkg log** | parses `/var/log/dpkg.log` + rotated `.gz` archives · filter by package / action / time |
| **Reports** | Patch Coverage · Upgrade Success Rate · Security SLA – all CSV-exportable |
| **Prometheus /metrics** | fleet-state counters / gauges for Grafana · optional `METRICS_TOKEN` bearer auth |
| **Public /status.json** | opt-in fleet health snapshot for embedding · disabled by default |
| **iCal feed** | subscribable maintenance-window calendar at `/api/calendar.ics?token=…` |
| **OS EOL countdown** | dashboard badge when OS reaches end-of-life within 365 days · severity-coloured · ESM note for Ubuntu LTS |
### Server detail
> Each managed server gets its own page with tabs:
`Packages` · `Upgrade` · `Health` · `Apt Repos` · `dpkg Log` · `History` · `Stats` · `Shell`
| Feature | Highlights |
|---|---|
| **OS detection** | Ubuntu · Debian · Raspbian · Armbian · Proxmox VE · Proxmox Backup Server · Proxmox Mail Gateway · bare-metal / VM / LXC / Docker via `systemd-detect-virt` |
| **Proxmox VE awareness** | dedicated `pveupgrade` button · PVE-managed packages highlighted in the Packages tab |
| **Health panel** | on-demand probe of `systemctl --failed`, last 20 boot-priority `journalctl` errors, recent reboot history · restart-service per failed unit |
| **Raspberry Pi EEPROM** | firmware update detection for Pi 4 / 400 / CM4 / 5 · one-click apply |
| **Disk + boot health** | red badge when `/boot` free < 100 MB or < 10% · kernel install date with 60d / 180d age tinting |
| **Snapshot capability** | BTRFS / ZFS / LXC detected · banner with copy-pastable pre-hook command suggestion in the Upgrade tab |
| **apt proxy** | detect + manage `apt-cacher-ng` proxy or `auto-apt-proxy` · live SSH output when toggling |
### Deployment
| Path | Status |
|---|---|
| **Docker Compose** | `docker compose up -d` – `docker-compose.ghcr.yml` pulls the prebuilt image |
| **Kubernetes** | `k8s/deployment.yaml` – Deployment + ClusterIP Service + Longhorn PVC |
| **Tailscale sidecar** | optional overlay – joins the container/pod to your tailnet · automatic HTTPS via `tailscale serve` |
| **Build from source** | `./build-run.sh` – dev workflow with hot rebuild |
| **Multi-arch images** | `linux/amd64` + `linux/arm64` published to GHCR every release |
---
## Quick start
> ⚠️ Requires Docker + Docker Compose v2 and SSH access to the target servers.
### 1. Set up your `.env`
```bash
cat > .env <<EOF
SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
EOF
```
### 2. Start the container
```bash
docker compose -f docker-compose.ghcr.yml up -d
```
apt-ui needs SSH access on each managed server. Two options:
### Option A – Root login
```bash
# Run on each managed server
cat ~/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
sudo chmod 600 /root/.ssh/authorized_keys
```
Then set `username = root` when adding each server in the dashboard. No sudo configuration required.
### Option B – Regular user with passwordless sudo for apt-get
```bash
# Run on each managed server
echo "youruser ALL=(ALL) NOPASSWD: /usr/bin/apt-get" | sudo tee /etc/sudoers.d/apt-ui
```
### Key delivery
| Mode | When to use |
|---|---|
| **Inline `SSH_PRIVATE_KEY`** | simplest; key must have no passphrase |
| **`SSH_AUTH_SOCK` (agent)** | passphrase-protected key; forwards your host's agent into the container – the key never leaves your host |
| **Per-server key in DB** | upload a dedicated key per managed server via the Add Server form; Fernet-encrypted at rest |
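The three delivery modes resolve in a fixed fallback order – per-server key, then SSH agent, then global key. A hypothetical helper illustrating that resolution (not apt-ui's actual code; the function name and return shape are assumptions):

```python
import os
from typing import Optional

def pick_credential(per_server_key: Optional[str]) -> tuple[str, str]:
    """Resolve which SSH credential to use, mirroring the documented
    fallback order: per-server key -> SSH agent -> global SSH_PRIVATE_KEY.
    Returns (mode, value); raises if nothing is configured."""
    if per_server_key:
        return ("per-server-key", per_server_key)
    if os.environ.get("SSH_AUTH_SOCK"):
        return ("agent", os.environ["SSH_AUTH_SOCK"])
    if os.environ.get("SSH_PRIVATE_KEY"):
        return ("global-key", os.environ["SSH_PRIVATE_KEY"])
    raise RuntimeError("no SSH credential configured")
```

The point of the ordering is that a dedicated per-server key always wins, so one compromised global key never has to be the only option.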
---
## Configuration
All runtime configuration (SMTP / Telegram / Slack / schedules / server list / users) is managed in the web UI and stored in the SQLite database at `/data/apt-ui.db`. **No restart required to change settings.**
| Variable | Default | Description |
|---|---|---|
| `SSH_PRIVATE_KEY` | – | Full PEM content of the private key. Required unless using SSH agent. |
| `SSH_AUTH_SOCK` | – | Path to SSH agent socket inside the container (e.g. `/run/ssh-agent.sock`). Alternative to `SSH_PRIVATE_KEY` – allows passphrase-protected keys. |
| `JWT_SECRET` | random | JWT signing secret. Set it explicitly to persist sessions across restarts. |
| `ENCRYPTION_KEY` | – | Master key used to encrypt per-server SSH keys in the DB. Falls back to `JWT_SECRET`. |
| `DATABASE_PATH` | `/data/apt-ui.db` | SQLite file path. |
| `TZ` | `America/Montreal` | Timezone for scheduled jobs. |
| `LOG_LEVEL` | `INFO` | Python log level. |
| `ENABLE_TERMINAL` | `false` | Set `true` to enable the interactive SSH shell tab. Only enable for trusted users. |
| `METRICS_TOKEN` | – | Optional bearer token protecting the `/metrics` endpoint. If unset, the endpoint is unauthenticated. |
| `STATUS_PAGE_PUBLIC` | `false` | Set `true` to enable the unauthenticated `/status.json` fleet health endpoint. |
| `STATUS_PAGE_SHOW_NAMES` | `false` | Include server names (not hostnames) in `/status.json`. |
| `STATUS_PAGE_TITLE` | `apt-ui Fleet Status` | Custom title returned by `/status.json`. |
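`JWT_SECRET` and `ENCRYPTION_KEY` are just high-entropy strings you generate once and put in `.env`. One way to generate them with Python's standard library (equivalent to `openssl rand -hex 32`):

```python
import secrets

# 64 hex characters = 256 bits of entropy each
jwt_secret = secrets.token_hex(32)
encryption_key = secrets.token_hex(32)
print(f"JWT_SECRET={jwt_secret}")
print(f"ENCRYPTION_KEY={encryption_key}")
```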
---
## CLI tool
Admin operations can be run from inside the container:
```bash
# Reset password (interactive prompt)
docker compose exec apt-ui python -m backend.cli reset-password
# Reset password inline
docker compose exec apt-ui python -m backend.cli reset-password --username admin --password newpass123
# Create a user (admin by default; --readonly for non-admin)
docker compose exec apt-ui python -m backend.cli create-user --username zac --password mypass
# List all users
docker compose exec apt-ui python -m backend.cli list-users
```
---
## Tailscale
The dashboard can join your [Tailscale](https://tailscale.com) tailnet via an optional sidecar container. This gives you:
- Secure remote access without exposing a port to the public internet
- Automatic HTTPS with a Let's Encrypt cert via `tailscale serve`
- Connection status (tailnet IP, hostname, DNS name) visible in Settings β Infrastructure
- Works the same way in Kubernetes β the sidecar joins the pod to the tailnet
### Enable Tailscale (Docker Compose)
Add to your `.env`:
```
TS_AUTHKEY=tskey-client-... # generate at tailscale.com/settings/keys
TS_HOSTNAME=apt-ui # how it appears on your tailnet
```
Run with the overlay:
```bash
docker compose -f docker-compose.yml -f docker-compose.tailscale.yml up -d
```
Tailscale runs as a separate `tailscale/tailscale:latest` container, **not baked into the app image** – `docker compose pull` updates it independently of the app.
### Enable `tailscale serve` (HTTPS on your tailnet)
`tailscale serve` proxies HTTPS `:443` → app `:8000` and provisions a Let's Encrypt cert automatically for your node's DNS name (e.g. `apt-ui.your-tailnet.ts.net`).
In `docker-compose.tailscale.yml`, uncomment the two lines under the `tailscale` service:
```yaml
- TS_SERVE_CONFIG=/serve-config.json
- ./tailscale-serve.json:/serve-config.json:ro
```
The bundled `tailscale-serve.json` uses `${TS_CERT_DOMAIN}` which the Tailscale container resolves to your node's DNS name at runtime.
---
## Kubernetes
A ready-to-use manifest is provided at [`k8s/deployment.yaml`](k8s/deployment.yaml):
- 1-replica Deployment
- ClusterIP Service on port 8000
- PersistentVolumeClaim (Longhorn storage class – change if needed)
- Secret references for `SSH_PRIVATE_KEY` and `JWT_SECRET`
- Liveness + readiness probes against `GET /health`
- Resource limits: 128–256 Mi RAM, 100m–500m CPU
```bash
# Create the secret
kubectl create secret generic apt-ui-secrets \
--from-literal=ssh-private-key="$(cat ~/.ssh/id_rsa)" \
--from-literal=jwt-secret="$(openssl rand -hex 32)"
# Deploy
kubectl apply -f k8s/deployment.yaml
```
The manifest also has a ready-to-uncomment Tailscale sidecar block – uncomment it and add the auth key to the secret:
```bash
kubectl create secret generic apt-ui-secrets \
--from-literal=ssh-private-key="$(cat ~/.ssh/id_rsa)" \
--from-literal=jwt-secret="$(openssl rand -hex 32)" \
--from-literal=ts-authkey="tskey-client-..."
```
---
## Architecture
```mermaid
flowchart LR
SPA["React 18 SPA
10 pages · Zustand · Tailwind"]
subgraph backend["apt-ui container – :8000"]
direction TB
API["FastAPI
23 routers · 70+ REST endpoints
17 WebSocket streams"]
WORKERS["APScheduler
check_all · auto_upgrade · ping_all
weekly_digest · daily_summary · log_purge"]
DB[("SQLite
20 tables · 50+ migrations
Fernet-encrypted SSH keys")]
SSH["asyncssh
Fresh connection per command
Per-server key → agent → global"]
end
SERVERS["Managed Linux fleet
Ubuntu · Debian · Raspbian · Armbian
Proxmox VE / PBS / PMG · Raspberry Pi"]
NOTIF["Notifications
SMTP · Telegram
Slack · Webhook (HMAC)"]
SPA -- "REST + WebSocket
JWT cookie" --> API
API <--> DB
API --> SSH
WORKERS <--> DB
WORKERS --> SSH
WORKERS --> NOTIF
API --> NOTIF
SSH -- "SSH :22" --> SERVERS
classDef frontend fill:#3b82f6,stroke:#1d4ed8,color:#fff,stroke-width:2px
classDef api fill:#10b981,stroke:#047857,color:#fff,stroke-width:2px
classDef sched fill:#f59e0b,stroke:#b45309,color:#fff,stroke-width:2px
classDef data fill:#8b5cf6,stroke:#6d28d9,color:#fff,stroke-width:2px
classDef transport fill:#6366f1,stroke:#4338ca,color:#fff,stroke-width:2px
classDef external fill:#475569,stroke:#1e293b,color:#fff,stroke-width:2px
classDef notif fill:#ec4899,stroke:#be185d,color:#fff,stroke-width:2px
class SPA frontend
class API api
class WORKERS sched
class DB data
class SSH transport
class SERVERS external
class NOTIF notif
```
> See [ARCHITECTURE.md](ARCHITECTURE.md) for full diagrams, request-flow details, data model, and CI/CD pipeline documentation.
### Tech stack
| Layer | Library / Tool |
|---|---|
| **Backend** | Python 3.12 · FastAPI · Uvicorn |
| **Auth** | passlib[bcrypt] · PyJWT (HS256, 24 h httpOnly cookie) · pyotp (TOTP) · scrypt (API tokens) |
| **SSH** | asyncssh – fresh connection per command, `known_hosts=None` (trusted LAN) |
| **Encryption** | Fernet (AES-128-CBC + HMAC-SHA256) – per-server SSH keys + TOTP secrets at rest |
| **Database** | SQLite · SQLAlchemy 2.x async · aiosqlite |
| **Scheduler** | APScheduler 3.x AsyncIOScheduler – live reconfiguration, no restart needed |
| **Notifications** | aiosmtplib (email) · httpx (Telegram / Slack / webhook with HMAC-SHA256) |
| **Frontend** | React 18 · TypeScript · Vite · Tailwind CSS |
| **State** | Zustand (auth + job store + servers store) |
| **Charts** | Recharts |
| **Terminal** | ansi-to-html (apt output) · @xterm/xterm (interactive shell) |
| **Container** | Multi-stage Dockerfile – `node:20-alpine` build → `python:3.12-slim` runtime |
| **Registry** | GitHub Container Registry – `linux/amd64` + `linux/arm64` |
| **CI/CD** | GitHub Actions · CodeQL · Dependabot · multi-arch release pipeline |
---
## Development
### Backend
```bash
python -m venv venv && source venv/bin/activate
pip install -r backend/requirements.txt
export SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
export DATABASE_PATH="./data/dev.db"
export PYTHONPATH=$(pwd)
uvicorn backend.main:app --reload --port 8000
```
### Frontend
```bash
cd frontend
npm ci
npm run dev # Vite dev server on :5173, proxies /api/* to :8000
```
### Local CI
```bash
make ci # mirrors GitHub Actions: Python syntax + import check + frontend build
make venv # bootstrap a Python venv
make help # list all targets
```
---
## Project status
apt-ui ships on a **calendar versioning** cadence (`YYYY.MM.DD-NN`) – releases happen when a wave of features is ready, not on a fixed schedule. Every release publishes multi-arch (`linux/amd64` + `linux/arm64`) images to GHCR.
| Area | Status |
|---|---|
| Core fleet management (Check / Upgrade / Reboot / Autoremove All, dashboard, groups, tags) | ✅ Stable |
| Auth + RBAC + 2FA + API tokens | ✅ Stable |
| Notifications (email / Telegram / Slack / webhook · daily summary · weekly digest) | ✅ Stable |
| Maintenance windows · pre/post hooks · staged rollouts · rolling reboot | ✅ Stable |
| CVE matcher + fleet CVE inventory + Prometheus `/metrics` + status page | ✅ Stable |
| dpkg log · upgrade history · SSH audit log · reports (Patch Coverage / Success Rate / Security SLA) | ✅ Stable |
| Proxmox VE / PBS / PMG awareness · Raspberry Pi EEPROM · OS EOL countdown | ✅ Stable |
| Deployment: Docker Compose · Kubernetes · Tailscale sidecar | ✅ Stable |
| WebAuthn / passkeys | ❌ Out of scope (TOTP covers the 2FA need; WebAuthn requires HTTPS and a stable origin, which a homelab apt-ui rarely has) |
| Full automated snapshot/rollback | ❌ Out of scope (snapshot capability + banner shipped; full automation deferred – pre-upgrade hooks let users wire whatever fits their layout) |
See [CHANGELOG.md](CHANGELOG.md) for the per-release feature list.
---
## Documentation
| Document | Description |
|---|---|
| [ARCHITECTURE.md](ARCHITECTURE.md) | Full architecture diagram, router → file map, data model, CI/CD pipeline |
| [CHANGELOG.md](CHANGELOG.md) | Per-release feature list and bug-fix history |
| [SECURITY.md](SECURITY.md) | Security policy, vulnerability disclosure, threat model notes |
| [CLAUDE.md](CLAUDE.md) | Authoritative spec for future Claude Code sessions |
---
## License
Released under the [MIT License](LICENSE).
---
Built with ❤️ by Claude – entirely AI-written via Claude Code.