https://github.com/d-buckner/bloud
your home cloud made easier
- Host: GitHub
- URL: https://github.com/d-buckner/bloud
- Owner: d-buckner
- License: agpl-3.0
- Created: 2026-01-12T03:35:57.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2026-03-08T21:50:31.000Z (18 days ago)
- Last Synced: 2026-03-09T02:41:19.331Z (18 days ago)
- Topics: cloud, cloud-os, go, nixos, operating-system, self-hosted, self-hosting, selfhosted, svelte
- Language: Go
- Homepage: https://bloud.co
- Size: 1.69 MB
- Stars: 1
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Roadmap: docs/roadmap.md
# Bloud
**Home Cloud Operating System**
An opinionated, zero-config home server OS that makes self-hosting accessible. Install apps with automatic SSO integration. No manual OAuth configuration, reverse proxy setup, or hand-wired app integrations.
> **Status:** Early alpha. Core infrastructure and web UI working.
## The Problem
Self-hosting is overwhelming. Setting up Immich, Nextcloud, and Jellyfin takes hours of configuring reverse proxies, SSL certificates, SSO, and making apps talk to each other.
## The Vision
- Flash USB drive, boot on any x86_64 hardware
- Access web UI, install apps with one click
- Everything pre-integrated: SSO automatic, related apps pre-configured
- Multi-host orchestration for scaling across machines
## Quick Start
```bash
# Install Lima (macOS: brew install lima, Linux: see lima-vm.io)
git clone https://github.com/d-buckner/bloud.git
cd bloud
npm run setup # Check prerequisites, download VM image
./bloud start # Start dev environment
```
Access the web UI at **http://localhost:8080**
## Apps
| Category | Apps |
|----------|------|
| **Infrastructure** | PostgreSQL, Redis, Traefik, Authentik |
| **Media** | Jellyfin |
| **Productivity** | Miniflux (RSS), Actual Budget |
| **Network** | AdGuard Home |
---
## How It Works
Bloud makes self-hosting accessible through three core ideas:
1. **Dependency Graph** - Apps declare what they need ("I require a database"). Bloud figures out what to install and wire together.
2. **Declarative Deployment** - NixOS handles the actual containers. Enable an app, rebuild, and NixOS creates systemd services, volumes, networking - atomically.
3. **Idempotent Configuration** - PreStart (ExecStartPre) handles config files and directories. PostStart (ExecStartPost) handles API calls and integrations. Both run on every service start.
### The Big Picture
Here's what happens when you install Miniflux:
```
┌─────────────────────────────────────────────────────────────────────┐
│ User clicks "Install Miniflux" │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ 1. PLANNING │
│ │
│ Graph analyzes: "Miniflux needs a database" │
│ PostgreSQL is the only compatible option │
│ → Auto-select postgres (no user choice needed) │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ 2. NIX GENERATION │
│ │
│ Generate configuration: │
│ • apps.nix → NixOS enables postgres + miniflux │
│ • apps-routes.yml → Traefik routing for /embed/miniflux │
│ • blueprints/ → Authentik OAuth2 provider for miniflux │
│ • secrets/ → Database URL, OAuth credentials │
│ │
│ Run: nixos-rebuild switch │
│ → Containers created, systemd services started │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ 3. CONFIGURATION │
│ │
│ PreStart: Create directories, write Traefik SSO redirect │
│ HealthCheck: Wait for /healthcheck to respond │
│ PostStart: Set admin user theme via Miniflux API │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ 4. RUNNING │
│ │
│ Miniflux is live, connected to postgres, SSO configured │
└─────────────────────────────────────────────────────────────────────┘
```
## Project Structure
```
bloud/
├── apps/ # App definitions
│ ├── miniflux/
│ │ ├── metadata.yaml # What Miniflux needs (integrations, SSO, port)
│ │ ├── module.nix # How to run the container
│ │ ├── configurator.go # PreStart/PostStart configuration hooks
│ ├── postgres/
│ ├── authentik/
│ └── ...
│
├── services/host-agent/ # The Go server
│ ├── cmd/host-agent/ # Entry points
│ ├── internal/
│ │ ├── orchestrator/ # Install/uninstall coordination
│ │ ├── catalog/ # App graph and dependency resolution
│ │ ├── nixgen/ # Generates apps.nix
│ │ ├── store/ # Database layer
│ │ └── api/ # HTTP server
│ ├── pkg/configurator/ # Configurator interface
│ └── web/ # Svelte frontend
│
├── nixos/
│ ├── bloud.nix # Main NixOS module
│ ├── lib/
│ │ ├── bloud-app.nix # Helper for app modules
│ │ └── podman-service.nix # Systemd service generator
│ └── generated/
│ └── apps.nix # Generated by host-agent
│
└── docs/
```
## How Apps Are Defined
Each app has three files that work together:
### metadata.yaml - The Catalog Entry
This tells Bloud what the app is and what it needs:
```yaml
name: miniflux
displayName: Miniflux
description: Minimalist and opinionated feed reader
category: productivity
port: 8085
integrations:
database:
required: true # Must have a database
multi: false # Only one at a time
compatible:
- app: postgres
default: true
healthCheck:
path: /embed/miniflux/healthcheck
interval: 2
timeout: 60
sso:
strategy: native-oidc # Miniflux handles OAuth2 itself
callbackPath: /oauth2/oidc/callback
providerName: Bloud SSO
userCreation: true
env:
clientId: OAUTH2_CLIENT_ID
clientSecret: OAUTH2_CLIENT_SECRET
discoveryUrl: OAUTH2_OIDC_DISCOVERY_ENDPOINT
redirectUrl: OAUTH2_REDIRECT_URL
routing:
stripPrefix: false # Miniflux serves at /embed/miniflux when BASE_URL is set
```
The `integrations` section is key. It declares dependencies without hardcoding them. Miniflux needs a database, and postgres is the only compatible option, so Bloud selects it automatically.
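The auto-selection rule from the planning step can be sketched in a few lines of Go. The `Integration` struct and `autoSelect` helper here are illustrative, not the real catalog types:

```go
package main

import "fmt"

// Integration mirrors the shape of a metadata.yaml integrations entry.
// (Hypothetical struct; the real catalog types in internal/catalog may differ.)
type Integration struct {
	Required   bool
	Compatible []string // apps that can provide this integration
}

// autoSelect returns the provider to use for an integration, and whether
// the choice could be made without asking the user. With exactly one
// compatible option, the planner can pick it automatically.
func autoSelect(in Integration) (provider string, auto bool) {
	if len(in.Compatible) == 1 {
		return in.Compatible[0], true
	}
	return "", false // multiple (or zero) options: needs a user choice
}

func main() {
	db := Integration{Required: true, Compatible: []string{"postgres"}}
	provider, auto := autoSelect(db)
	fmt.Println(provider, auto) // postgres true
}
```

If a second compatible database were ever added to the catalog, `autoSelect` would return `false` and the install plan would surface the choice to the user instead.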
### module.nix - The Container Definition
This is NixOS configuration for running the container:
```nix
mkBloudApp {
name = "miniflux";
description = "Miniflux RSS reader";
image = "miniflux/miniflux:latest";
port = 8085;
database = "miniflux"; # Auto-creates postgres DB
environment = cfg: {
BASE_URL = "${cfg.externalHost}/embed/miniflux";
};
}
```
The `mkBloudApp` helper handles the boilerplate - creating systemd services, setting up podman, managing volumes. When you specify `database = "miniflux"`, it automatically creates that database in the shared postgres instance.
### configurator.go - App Configuration
Configurators run as systemd hooks (ExecStartPre and ExecStartPost):
```go
type Configurator struct {
	traefikDir string // Traefik dynamic-config directory
	port       int    // Miniflux port
}
func (c *Configurator) Name() string { return "miniflux" }
// PreStart runs before container starts - creates config files, directories
func (c *Configurator) PreStart(ctx context.Context, state *AppState) error {
// Determine desired config
var desired []byte
if _, hasSSO := state.Integrations["sso"]; hasSSO {
desired = []byte(traefikSSOConfig)
}
// Write config (idempotent - overwrites with same content)
configPath := filepath.Join(c.traefikDir, "miniflux-sso.yml")
return os.WriteFile(configPath, desired, 0644)
}
// HealthCheck waits for the app to be ready
func (c *Configurator) HealthCheck(ctx context.Context) error {
url := fmt.Sprintf("http://localhost:%d/embed/miniflux/healthcheck", c.port)
return configurator.WaitForHTTP(ctx, url, 60*time.Second)
}
// PostStart runs as ExecStartPost - after container is healthy
func (c *Configurator) PostStart(ctx context.Context, state *AppState) error {
// Wait for app to be ready
if err := c.HealthCheck(ctx); err != nil {
return err
}
// Set light theme for admin user via Miniflux API
return c.setUserTheme(ctx, 1, "light_serif")
}
```
## The Dependency Graph
When you install an app, Bloud builds a graph of what's needed:
```
User wants: Miniflux
│
▼
┌─────────────────────────────────────┐
│ Miniflux.integrations: │
│ database: required │
│ compatible: [postgres] │
└─────────────────────────────────────┘
│
▼
Is postgres installed? No
│
▼
┌─────────────────────────────────────┐
│ Install Plan: │
│ AutoConfig (only 1 option): │
│ - database → postgres │
│ No user choices needed │
└─────────────────────────────────────┘
```
The graph also determines **execution order**. Apps that provide services must be configured before apps that consume them:
```
Level 0: postgres, redis ← Infrastructure, no dependencies
│
Level 1: authentik ← Depends on postgres + redis
│
Level 2: miniflux, jellyfin ← Depend on postgres and authentik
```
Configuration runs level by level. Miniflux's PostStart can assume postgres is already healthy.
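One way to compute those levels is a depth-first pass over the dependency map: each app lands one level after its deepest dependency. This is a sketch of the ordering described above, not the orchestrator's actual implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// levels groups apps into configuration waves: an app appears one level
// after the deepest of its dependencies.
func levels(deps map[string][]string) [][]string {
	depth := map[string]int{}
	var visit func(app string) int
	visit = func(app string) int {
		if d, ok := depth[app]; ok {
			return d
		}
		d := 0
		for _, dep := range deps[app] {
			if v := visit(dep) + 1; v > d {
				d = v
			}
		}
		depth[app] = d
		return d
	}
	max := 0
	for app := range deps {
		if d := visit(app); d > max {
			max = d
		}
	}
	out := make([][]string, max+1)
	for app, d := range depth {
		out[d] = append(out[d], app)
	}
	for _, lvl := range out {
		sort.Strings(lvl) // stable output for display
	}
	return out
}

func main() {
	deps := map[string][]string{
		"postgres":  {},
		"redis":     {},
		"authentik": {"postgres", "redis"},
		"miniflux":  {"postgres", "authentik"},
		"jellyfin":  {"postgres", "authentik"},
	}
	for i, lvl := range levels(deps) {
		fmt.Printf("Level %d: %v\n", i, lvl)
	}
}
```

Running this reproduces the three levels shown above: infrastructure first, then authentik, then the apps that consume both.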
## The Configuration Lifecycle
Configurators run as systemd hooks during service start:
```
┌─────────────────────────────────────────────────────────────────────┐
│ ExecStartPre: PreStart │
│ ────────────────────── │
│ Runs BEFORE container starts │
│ │
│ • Create directories (container will mount them) │
│ • Write config files (container reads on startup) │
│ • Generate Traefik routing rules │
│ │
│ If this fails, container won't start │
└─────────────────────────────────────────────────────────────────────┘
│
▼
[ Container starts ]
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ HealthCheck (built into ExecStartPost) │
│ ─────────────────────────────────────── │
│ Waits for the app to be ready │
│ │
│ • Poll an HTTP endpoint until it responds │
│ • Check database connectivity │
│ • Verify the app initialized properly │
└─────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ ExecStartPost: PostStart │
│ ──────────────────────── │
│ Runs AFTER container is healthy │
│ │
│ • Configure app via its REST API │
│ • Set up integrations │
│ • Register OAuth clients │
│ │
│ This is where the "magic wiring" happens │
└─────────────────────────────────────────────────────────────────────┘
```
### Why PreStart vs PostStart?
The key distinction is timing relative to the container lifecycle:
| Phase | When It Runs | Examples |
| --------- | ----------------------- | --------------------------------------------- |
| PreStart | Before container starts | Config files, directories, certificates |
| PostStart | After container healthy | API calls, database records, runtime settings |
**PreStart** handles things the app reads on startup — config files, directories, environment setup. If PreStart fails, the container won't start.
**PostStart** handles things that require the app to be running. API calls modify the app's internal state immediately, no restart needed.
Example: Miniflux's SSO redirect config is written to a Traefik config file — that's PreStart. But setting the admin user's theme via the Miniflux API applies immediately — that's PostStart.
### Idempotency
Every phase must be safe to run repeatedly. Write the desired state every time — don't assume previous runs succeeded:
```go
// GOOD: Idempotent - writes desired config state
func (c *Configurator) PreStart(ctx context.Context, state *AppState) error {
desired := map[string]string{
"DATABASE_URL": state.Database.URL,
"OAUTH_CLIENT_ID": state.OAuth.ClientID,
"OAUTH_CLIENT_SECRET": state.OAuth.ClientSecret,
}
// Write only our managed keys (merge with existing file)
return c.writeManagedKeys(configPath, desired)
}
// GOOD: Idempotent - API call sets to desired state
func (c *Configurator) PostStart(ctx context.Context, state *AppState) error {
return c.setUserTheme(ctx, 1, "light_serif") // Always sets to light
}
```
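The `writeManagedKeys` helper referenced above isn't shown in this README; one plausible merge strategy is to rewrite only Bloud-managed keys in a `KEY=VALUE` file and leave user-added lines alone. A sketch under that assumption:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// mergeManagedKeys rewrites only the keys Bloud manages in a KEY=VALUE
// env file, preserving any unmanaged lines. Running it repeatedly with
// the same desired map is a no-op, which is what idempotency requires.
// (Hypothetical helper; the real writeManagedKeys may differ.)
func mergeManagedKeys(existing string, desired map[string]string) string {
	seen := map[string]bool{}
	var out []string
	for _, line := range strings.Split(existing, "\n") {
		if key, _, ok := strings.Cut(line, "="); ok {
			if val, managed := desired[key]; managed {
				out = append(out, key+"="+val) // overwrite managed key
				seen[key] = true
				continue
			}
		}
		if line != "" {
			out = append(out, line) // preserve unmanaged line
		}
	}
	// Append managed keys missing from the file, in stable order.
	var missing []string
	for key := range desired {
		if !seen[key] {
			missing = append(missing, key)
		}
	}
	sort.Strings(missing)
	for _, key := range missing {
		out = append(out, key+"="+desired[key])
	}
	return strings.Join(out, "\n") + "\n"
}

func main() {
	got := mergeManagedKeys("CUSTOM=keep\nDATABASE_URL=old\n",
		map[string]string{"DATABASE_URL": "postgres://new", "OAUTH_CLIENT_ID": "id"})
	fmt.Print(got)
}
```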
## Orchestration
Systemd is the single source of truth for service dependencies and lifecycle. Configurators run as **systemd hooks**:
```
┌─────────────────────────────────────────────────────────────────────┐
│ podman-app-a.service │
│ │
│ [Unit] │
│ After=podman-database.service podman-auth-provider.service │
│ Requires=podman-database.service │
│ │
│ [Service] │
│ ExecStartPre=bloud-agent configure prestart app-a │
│ ExecStart=podman run app-a-image ... │
│ ExecStartPost=bloud-agent configure poststart app-a │
└─────────────────────────────────────────────────────────────────────┘
```
**How it works:**
1. Systemd starts services in dependency order (After=/Requires=)
2. Before container starts: `ExecStartPre` runs PreStart
- Writes config files, creates directories
3. Container starts and becomes healthy
4. After container healthy: `ExecStartPost` runs PostStart
- Configures integrations via API
Systemd handles everything: ordering, lifecycle, and running configurators at the right time.
### Container Invalidation
When a new app is installed that provides an integration, existing apps that consume that integration need to be reconfigured. This is handled through **container invalidation**.
**Example: Installing an auth provider when app-a is already running**
```
State: app-a running (no auth)
Action: User installs auth-provider
1. auth-provider installed and started
2. System checks: "Who has auth integration that auth-provider provides?"
→ app-a has auth integration, auth-provider is compatible
3. app-a marked as invalidated
4. Orchestrator restarts app-a (in dependency order)
5. ExecStartPre runs PreStart (writes updated config)
6. Container starts
7. ExecStartPost runs PostStart (configures via API)
8. Auth configured
```
### Configuration State in Database
Integration state is tracked explicitly in postgres, not derived from NixOS configuration hashes. This gives us explicit control over what triggers reconfiguration.
**Schema concept:**
```sql
-- Tracks current integration state per app
CREATE TABLE app_integrations (
app_name TEXT,
integration_name TEXT,
source_app TEXT, -- Which app provides this integration
configured_at TIMESTAMP, -- When PostStart last configured this
PRIMARY KEY (app_name, integration_name)
);
```
**Flow when auth-provider is installed:**
```
┌─────────────────────────────────────────────────────────────────────┐
│ 1. Orchestrator installs auth-provider │
│ │ │
│ ▼ │
│ 2. Query graph: "What integrations does auth-provider provide?" │
│ → auth │
│ │ │
│ ▼ │
│ 3. Query installed apps: "Who has auth integration?" │
│ → [app-a, app-b] │
│ │ │
│ ▼ │
│ 4. Mark apps as invalidated in database │
│ │ │
│ ▼ │
│ 5. Restart invalidated apps (in dependency order) │
│ Orchestrator queries systemd D-Bus for dependencies │
│ Topological sort → restart dependencies before dependents │
│ │ │
│ ▼ │
│ 6. ExecStartPre runs PreStart (writes updated config) │
│ ExecStartPost runs PostStart after container healthy │
│ UPDATE app_integrations SET configured_at = NOW() │
└─────────────────────────────────────────────────────────────────────┘
```
**Benefits of database-tracked state:**
- Explicit record of what's configured vs. what needs configuration
- Configurators can check: "Has my integration state changed?"
- Go controls restarts directly, no NixOS hash tricks
- Can track configuration failures and retry
### Deferred Restart
We **mark** apps for invalidation immediately, but **defer** the actual restart. This provides two benefits:
**1. Deduplication**
If multiple changes affect the same app, it only restarts once:
```
Install service-x + service-y simultaneously
│
├── app-a marked for invalidation (integration-x available)
├── app-a marked for invalidation (integration-y available) ← deduplicated
│
▼
Restart app-a once
│
▼
PreStart + PostStart configure both integrations
```
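The deduplication above is just set semantics: marking is additive and repeat marks collapse. A minimal sketch (the real orchestrator persists these marks in postgres rather than in memory):

```go
package main

import "fmt"

// invalidationSet deduplicates restart marks: however many new
// integrations affect an app, it is queued for restart exactly once.
type invalidationSet map[string]bool

// mark records that an app needs reconfiguration; repeats are no-ops.
func (s invalidationSet) mark(app string) { s[app] = true }

// drain returns the pending apps and clears the set.
func (s invalidationSet) drain() []string {
	var apps []string
	for app := range s {
		apps = append(apps, app)
		delete(s, app)
	}
	return apps
}

func main() {
	pending := invalidationSet{}
	pending.mark("app-a") // integration-x became available
	pending.mark("app-a") // integration-y became available: deduplicated
	fmt.Println(len(pending.drain())) // one restart
}
```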
**2. Correct Ordering**
The orchestrator builds a dependency graph from `app_integrations` table and restarts in topological order:
```
┌─────────────────────────────────────────────────────────────────────┐
│ Install auth-provider │
│ │
│ Mark for invalidation: │
│ - app-a (needs auth) │
│ - app-b (needs auth) │
│ │
│ Orchestrator builds dependency graph from app_integrations: │
│ - app-a depends on auth-provider (source_app) │
│ - app-b depends on auth-provider (source_app) │
│ - Topological sort │
│ │
│ Restart order (dependencies first): │
│ auth-provider (already running, healthy) │
│ app-a, app-b (can restart in parallel, no deps between them) │
│ │
│ app-a and app-b don't restart until auth-provider is healthy. │
└─────────────────────────────────────────────────────────────────────┘
```
**The rule:** Mark immediately, restart in dependency order. PreStart and PostStart handle reconfiguration.
---
## Generated Configuration
When you install an app, Bloud generates several configuration files:
### 1. NixOS Configuration (apps.nix)
Enables apps in NixOS. Each app's `module.nix` defines what `enable = true` means - usually creating a systemd service that runs a podman container.
```nix
# Generated by Bloud - DO NOT EDIT
{
bloud.apps.postgres.enable = true;
bloud.apps.authentik.enable = true;
bloud.apps.miniflux.enable = true;
}
```
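Emitting that file from the installed-app set is a small string-building exercise. A sketch of what the `nixgen` package might do; the real generator's details may differ:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// renderAppsNix emits the generated apps.nix from the set of installed
// apps. Sorting keeps the output stable, so unchanged installs produce
// byte-identical files and no spurious rebuild diffs.
// (Sketch only; see internal/nixgen for the real generator.)
func renderAppsNix(installed []string) string {
	sorted := append([]string(nil), installed...)
	sort.Strings(sorted)
	var b strings.Builder
	b.WriteString("# Generated by Bloud - DO NOT EDIT\n{\n")
	for _, app := range sorted {
		fmt.Fprintf(&b, "  bloud.apps.%s.enable = true;\n", app)
	}
	b.WriteString("}\n")
	return b.String()
}

func main() {
	fmt.Print(renderAppsNix([]string{"postgres", "authentik", "miniflux"}))
}
```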
### 2. Traefik Routing (apps-routes.yml)
Dynamic routing configuration with routers, middlewares, and services for each app:
```yaml
# Generated by Bloud - DO NOT EDIT
http:
routers:
miniflux-backend:
rule: "PathPrefix(`/embed/miniflux`)"
middlewares:
- miniflux-stripprefix
- iframe-headers
- embed-isolation
service: miniflux
middlewares:
miniflux-stripprefix:
stripPrefix:
prefixes:
- "/embed/miniflux"
services:
miniflux:
loadBalancer:
servers:
- url: "http://localhost:8085"
```
### 3. Authentik Blueprints
SSO configuration for each app - OAuth providers, forward-auth configs, or LDAP:
```yaml
# miniflux.yaml - Generated by Bloud
version: 1
metadata:
name: miniflux-sso-blueprint
labels:
managed-by: bloud
entries:
- model: authentik_providers_oauth2.oauth2provider
identifiers:
name: Miniflux OAuth2 Provider
attrs:
client_id: miniflux-client
client_secret:
redirect_uris:
- url: "http://localhost:8080/embed/miniflux/oauth2/oidc/callback"
```
### 4. Secrets & Environment Files
Per-app `.env` files with database URLs, OAuth secrets, and admin passwords:
```bash
# miniflux.env
DATABASE_URL=postgres://apps:xxx@localhost:5432/miniflux?sslmode=disable
OAUTH2_CLIENT_SECRET=xxx
ADMIN_PASSWORD=xxx
```
### The Flow
```
Orchestrator.Install()
│
├── Write apps.nix (NixOS config)
├── Write apps-routes.yml (Traefik routing)
├── Write authentik blueprints (SSO)
└── Write secret env files
│
▼
nixos-rebuild switch
│
├── Evaluates all NixOS modules
├── Builds new system configuration
├── Creates/updates systemd services
└── Activates new configuration
│
▼
systemd starts containers
│
▼
Systemd hooks configure apps (PreStart/PostStart)
```
NixOS provides atomic deploys - if something fails, the previous generation still exists. You can always `nixos-rebuild --rollback`.
## SSO Integration
Apps can use SSO three ways:
### Native OIDC
The app handles OAuth2 itself. Bloud generates Authentik blueprints to create the OAuth client, and passes credentials via environment variables.
```yaml
sso:
strategy: native-oidc
callbackPath: /oauth2/callback
env:
clientId: OAUTH2_CLIENT_ID
clientSecret: OAUTH2_CLIENT_SECRET
```
Miniflux uses this - it has built-in OIDC support.
### Forward Auth
Traefik intercepts requests and checks authentication with Authentik. The app never sees auth - it just gets `X-Remote-User` headers.
```yaml
sso:
strategy: forward-auth
```
Good for apps that don't speak OAuth2.
### None
App handles its own auth or doesn't need it.
```yaml
sso:
strategy: none
```
## Shared Infrastructure
Instead of each app running its own database, all apps share:
- **PostgreSQL** - One instance, apps get separate databases
- **Redis** - Session storage, caching
- **Traefik** - Reverse proxy, routing, SSO middleware
- **Authentik** - Identity provider
This reduces resource usage and simplifies backups.
When an app declares `database: "miniflux"` in its module.nix, the postgres module automatically creates that database and user.
## Key Design Principles
### 1. Declarative Over Imperative
Apps declare what they need, not how to get it. The system figures out the how.
### 2. Idempotent Everything
Every operation can run repeatedly without side effects. Configurators run on every service start.
### 3. Fail Open, Log Clearly
If configuration fails, log it clearly. The next service restart will retry automatically via the systemd hooks.
### 4. Atomic Deploys
NixOS rebuilds are all-or-nothing. No partial states.
### 5. Single Source of Truth
`apps.nix` defines what's installed. Systemd hooks configure apps on every start to match.
---
## Implementation Status
This section tracks what's implemented vs. what's planned.
### Done
- [x] **App definitions** - `metadata.yaml` and `module.nix` structure
- [x] **Dependency graph** - Planning installs/uninstalls with auto-selection
- [x] **NixOS integration** - `apps.nix` generation and `nixos-rebuild`
- [x] **Orchestrator** - Install/uninstall coordination with queuing
- [x] **Configurator interface** - `PreStart`, `HealthCheck`, `PostStart` methods
- [x] **App configurators** - Miniflux, Authentik, Arr stack, etc.
- [x] **Shared infrastructure** - Single postgres/redis per host
- [x] **SSO integration** - Native OIDC and forward-auth strategies
- [x] **Traefik routing** - Dynamic config generation for apps
### In Progress / Planned
- [ ] **PreStart returns changed** - `PreStart() (changed bool, err error)` to detect if restart needed
- [ ] **Container invalidation** - Mark apps for restart when new integrations become available
- [ ] **app_integrations table** - Track integration state in database
- [ ] **Dependency graph traversal** - Build graph from app_integrations, restart in topological order
- [x] **Remove watchdog references** - Cleaned up old self-healing code/comments from `pkg/configurator/interface.go`
### Not Planned
- ~~Periodic reconciliation~~ - Replaced by systemd hooks (event-driven)
- ~~Self-healing watchdog~~ - Configuration runs on service start only
### What's Not Built Yet
- Bootable USB image
- Multi-host orchestration
- Automatic backups
---
## Development
Development uses [Lima](https://lima-vm.io/) to run a NixOS VM on your local machine. The `./bloud` CLI manages the VM and development services.
### Prerequisites
| Requirement | macOS | Linux |
|-------------|-------|-------|
| **Lima** | `brew install lima` | [See install guide](https://lima-vm.io/docs/installation/) |
| **Node.js 18+** | `brew install node` | `sudo apt install nodejs npm` |
| **Go 1.21+** | `brew install go` | `sudo apt install golang` |
### CLI Commands
```bash
./bloud setup # Check prerequisites, download VM image
./bloud start # Start dev environment (VM + services)
./bloud stop # Stop dev services (VM stays running)
./bloud status # Check what's running
./bloud logs # View recent output
./bloud attach # Attach to tmux session (Ctrl-B D to detach)
./bloud shell # SSH into VM
./bloud rebuild # Apply NixOS config changes
```
### Troubleshooting
**"Lima is not installed"**
- macOS: `brew install lima`
- Linux: `curl -fsSL https://lima-vm.io/install.sh | bash`
**"VM image not found"**
- Run `./bloud setup` to download the pre-built image
- Image location: `lima/imgs/nixos-24.11-lima.img`
**VM boots but services don't start**
- Check logs: `./bloud logs`
- Rebuild NixOS: `./bloud rebuild`
- Nuclear option: `./bloud destroy && ./bloud start`
### Debugging
```bash
# Check host-agent logs
journalctl --user -u bloud-host-agent -f
# Check app container logs (includes PreStart/PostStart output)
journalctl --user -u podman-miniflux -f
# Check systemd service status
systemctl --user status podman-miniflux
# Restart a service to re-run configurators
systemctl --user restart podman-miniflux
# See what would be installed (includes dependencies)
curl http://localhost:8080/api/apps/miniflux/plan-install
# See what would be removed (includes dependents)
curl http://localhost:8080/api/apps/postgres/plan-remove
```
### Adding a New App
1. Create `apps/myapp/metadata.yaml` with integrations and port
2. Create `apps/myapp/module.nix` with container definition
3. Create `apps/myapp/configurator.go` implementing PreStart, HealthCheck, and PostStart
4. Register the configurator in `internal/appconfig/register.go`
See [apps/adding-apps.md](apps/adding-apps.md) for details.
---
## Contributing
Contributions welcome! See:
- [apps/adding-apps.md](apps/adding-apps.md) - Adding new apps
### Getting Started
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Open a Pull Request
### Reporting Issues
[Open an issue](https://github.com/d-buckner/bloud/issues) with:
- Clear description of the problem
- Steps to reproduce (for bugs)
- Your environment
## Further Reading
- [docs/design/graph-configurator-system.md](docs/design/graph-configurator-system.md) - Detailed configurator design
- [docs/embedded-app-routing.md](docs/embedded-app-routing.md) - How apps are served in iframes
- [docs/design/authentication.md](docs/design/authentication.md) - SSO and auth flows
- [docs/design/production-architecture.md](docs/design/production-architecture.md) - Production deployment
## Philosophy
- **Simplicity Over Features** - Opinionated defaults for 80% of users
- **Privacy by Default** - Everything runs locally on your hardware
## License
AGPL v3 - See [LICENSE](LICENSE) for details. If you modify Bloud and offer it as a service, you must share your changes.
---
**Built with NixOS, Podman, Systemd, Go, and Svelte.**