{"id":15136231,"url":"https://github.com/nextcloud/whiteboard","last_synced_at":"2026-02-09T13:17:32.462Z","repository":{"id":246904572,"uuid":"787787462","full_name":"nextcloud/whiteboard","owner":"nextcloud","description":"Create \u0026 collaborate on an infinite canvas!","archived":false,"fork":false,"pushed_at":"2025-03-29T02:47:59.000Z","size":2593,"stargazers_count":99,"open_issues_count":71,"forks_count":19,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-03-29T18:05:46.570Z","etag":null,"topics":["nextcloud","nextcloud-app","whiteboard"],"latest_commit_sha":null,"homepage":"https://apps.nextcloud.com/apps/whiteboard","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"agpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/nextcloud.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-04-17T07:24:42.000Z","updated_at":"2025-03-29T02:09:36.000Z","dependencies_parsed_at":"2024-09-09T15:11:14.177Z","dependency_job_id":"3546a924-325c-4095-bee6-d54fd2ca6539","html_url":"https://github.com/nextcloud/whiteboard","commit_stats":{"total_commits":176,"total_committers":9,"mean_commits":"19.555555555555557","dds":0.5738636363636364,"last_synced_commit":"5fdd789ae45c3e78f2d0d741bd979ee9639c40a0"},"previous_names":["nextcloud/whiteboard"],"tags_count":8,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nextcloud%2Fwhiteboard","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nextcloud%2Fwhiteboard/tags","releases_url":"https://repos
.ecosyste.ms/api/v1/hosts/GitHub/repositories/nextcloud%2Fwhiteboard/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nextcloud%2Fwhiteboard/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/nextcloud","download_url":"https://codeload.github.com/nextcloud/whiteboard/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247386263,"owners_count":20930618,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["nextcloud","nextcloud-app","whiteboard"],"created_at":"2024-09-26T06:04:37.978Z","updated_at":"2026-02-09T13:17:32.384Z","avatar_url":"https://github.com/nextcloud.png","language":"JavaScript","readme":"\u003c!--\n  - SPDX-FileCopyrightText: 2024 Nextcloud GmbH and Nextcloud contributors\n  - SPDX-License-Identifier: AGPL-3.0-or-later\n--\u003e\n\n# Nextcloud Whiteboard\n\n[![REUSE status](https://api.reuse.software/badge/github.com/nextcloud/whiteboard)](https://api.reuse.software/info/github.com/nextcloud/whiteboard)\n\nThe official whiteboard app for Nextcloud. 
It allows users to create and share whiteboards with other users and collaborate in real-time.\n\nYou can create whiteboards in the files app and share and collaborate on them.\n\n## Features\n\n- 🎨 Drawing shapes, writing text, connecting elements\n- 📝 Real-time collaboration\n- 💪 Strong foundation: We use [Excalidraw](https://github.com/excalidraw/excalidraw) as our base library\n\n## Backend\n\n### Standalone websocket server for Nextcloud Whiteboard\n\nRunning the whiteboard server is required for the whiteboard to work. The server will handle real-time collaboration events and broadcast them to all connected clients, which means that the server must be accessible from the user's browser, so exposing it, for example through a reverse proxy, is necessary. It is intended to be used as a standalone service that can be run in a container.\n\nWe require the following connectivity:\n\n- The whiteboard server needs to be able to reach the Nextcloud server over HTTP(S)\n- The Nextcloud server needs to be able to reach the whiteboard server over HTTP(S)\n- The user's browser needs to be able to reach the whiteboard server over HTTP(S)\n- Nextcloud and the whiteboard server share a secret key to sign and verify JWTs\n\nOn the Nextcloud side, the server must be configured through:\n\n```bash\nocc config:app:set whiteboard collabBackendUrl --value=\"http://nextcloud.local:3002\"\nocc config:app:set whiteboard jwt_secret_key --value=\"some-random\"\n```\n\n### Running the server\n\n#### Local node\n\nThis mode requires at least Node 20 and NPM 10 to be installed. 
Clone this repository and check out the release version matching your whiteboard app.\nThe server can be run locally using the following command:\n\n```bash\nnpm ci\nJWT_SECRET_KEY=\"some-random\" NEXTCLOUD_URL=http://nextcloud.local npm run server:start\n```\n\n#### Docker\n\nThe server requires the `NEXTCLOUD_URL` environment variable to be set to the URL of the Nextcloud instance that the Whiteboard app is installed on. The server will connect to the Nextcloud instance and listen for whiteboard events.\n\nThe server can be run in a container using the following command:\n\n```bash\ndocker run -e JWT_SECRET_KEY=some-random -e NEXTCLOUD_URL=https://nextcloud.local --rm ghcr.io/nextcloud-releases/whiteboard:release\n```\n\nDocker compose can also be used to run the server:\n\n```yaml\nversion: '3.7'\nservices:\n  nextcloud-whiteboard-server:\n    image: ghcr.io/nextcloud-releases/whiteboard:release\n    ports:\n      - 3002:3002\n    environment:\n      - NEXTCLOUD_URL=https://nextcloud.local\n      - JWT_SECRET_KEY=some-random-key\n```\n\n#### Building the image locally\n\nWhile we publish images on the GitHub Container Registry, you can build the image locally using the following command:\n\n```bash\ndocker build -t nextcloud-whiteboard-server -f Dockerfile .\n```\n\n### Reverse proxy\n\n#### Apache \u003c 2.4.47\n\n```apache\nProxyPass /whiteboard http://localhost:3002/\nRewriteEngine on\nRewriteCond %{HTTP:Upgrade} websocket [NC]\nRewriteCond %{HTTP:Connection} upgrade [NC]\nRewriteRule ^/?whiteboard/(.*) \"ws://localhost:3002/$1\" [P,L]\n```\n\n#### Apache \u003e= 2.4.47\n\n```apache\nProxyPass /whiteboard http://localhost:3002/ upgrade=websocket\n```\n\n#### Nginx\n\n```nginx\nlocation /whiteboard/ {\n\tproxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\tproxy_set_header Host $host;\n\t\n\tproxy_pass http://localhost:3002/;\n\t\n\tproxy_http_version 1.1;\n\tproxy_set_header Upgrade $http_upgrade;\n\tproxy_set_header Connection 
\"upgrade\";\n}\n```\n\n#### Caddy v2\n\n```caddy\nhandle_path /whiteboard/* {\n    reverse_proxy http://127.0.0.1:3002\n}\n```\n\n#### Traefik v3\n\nAs lables for Docker \u0026 Swarm:\n\n```yaml\n- traefik.http.services.whiteboard.loadbalancer.server.port=3002\n- traefik.http.middlewares.strip-whiteboard.stripprefix.prefixes=/whiteboard\n- traefik.http.routers.whiteboard.rule=Host(`nextcloud.example.com`) \u0026\u0026 PathPrefix(`/whiteboard`)\n- traefik.http.routers.whiteboard.middlewares=strip-whiteboard\n```\n\n## Storage Strategies and Scaling\n\nThe whiteboard application supports two storage strategies: LRU (Least Recently Used) cache and Redis. Each strategy has its own characteristics and is suitable for different deployment scenarios.\n\n### Storage Strategies\n\n#### 1. LRU (Least Recently Used) Cache\n\nThe LRU cache strategy is an in-memory storage solution that keeps the most recently used items in memory while automatically removing the least recently used items when the cache reaches its capacity.\n\n**Advantages:**\n- Simple setup with no additional infrastructure required\n- Fast read and write operations\n- Suitable for single-node deployments\n\n**Limitations:**\n- Limited by available memory on the server\n- Data is not persistent across server restarts\n- Not suitable for multi-node deployments\n\n**Configuration:**\nTo use the LRU cache strategy, set the following in your `.env` file:\n\n```\nSTORAGE_STRATEGY=lru\n```\n\n**Resources:**\n- [LRU Cache in Node.js](https://www.npmjs.com/package/lru-cache)\n\n#### 2. Redis\n\nRedis is an in-memory data structure store that can be used as a database, cache, and message broker. 
It provides persistence and supports distributed setups.\n\n**Advantages:**\n- Persistent storage\n- Supports multi-node deployments\n- Allows for horizontal scaling\n\n**Limitations:**\n- Requires additional infrastructure setup and maintenance\n- Slightly higher latency compared to LRU cache for single-node setups\n\n**Configuration:**\nTo use the Redis strategy, set the following in your `.env` file:\n\n```\nSTORAGE_STRATEGY=redis\nREDIS_URL=redis://[username:password@]host[:port][/database_number]\n```\n\nReplace the `REDIS_URL` with your actual Redis server details.\n\n### Scaling and Deployment\n\n#### Single-Node Deployment\n\nFor small to medium-sized deployments, a single-node setup can be sufficient:\n\n1. Choose either LRU or Redis strategy based on your persistence needs.\n2. Configure the `.env` file with the appropriate `STORAGE_STRATEGY`.\n3. If using Redis, ensure the Redis server is accessible and configure the `REDIS_URL`.\n4. Start the whiteboard server.\n\n#### Multi-Node Deployment (Clustered Setup)\n\nFor larger deployments requiring high availability and scalability, a multi-node setup is recommended:\n\n1. Use the Redis storage strategy.\n2. Set up a Redis cluster or a managed Redis service.\n3. Configure each node's `.env` file with:\n   ```\n   STORAGE_STRATEGY=redis\n   REDIS_URL=redis://[username:password@]host[:port][/database_number]\n   ```\n4. Set up a load balancer to distribute traffic across the nodes.\n5. Ensure all nodes can access the same Redis instance or cluster.\n\n#### Scaling WebSocket Connections\n\nThe whiteboard application uses the Redis Streams adapter for scaling WebSocket connections across multiple nodes. This adapter leverages Redis Streams, not the Redis Pub/Sub mechanism, for improved performance and scalability.\n\nWhen using the Redis strategy, the application automatically sets up the Redis Streams adapter for WebSocket scaling. 
This allows multiple server instances to share WebSocket connections and real-time updates.\n\n**Resources:**\n- [Socket.IO Redis Streams Adapter](https://socket.io/docs/v4/redis-streams-adapter/)\n\n#### Considerations for Multi-Node Setups\n\n- **Load Balancing:** Set up a load balancer to distribute incoming connections across your server nodes.\n- **Session Stickiness:** While not strictly required for WebSocket transport, it's recommended to configure your load balancer to use session stickiness. This ensures that requests from a client are routed to the same server for the duration of a session, which can be beneficial if falling back to long polling.\n- **WebSocket Support:** Ensure your load balancer is configured to support WebSocket connections and maintain long-lived connections.\n- **Redis Setup:** The current implementation does not configure Redis Cluster. So if you need to use a Redis Cluster for high availability, you'll need to set up your own load balancer in front of your Redis Cluster nodes.\n- **Redis Connection:** The application currently supports only one Redis connection for both the storage layer and streaming/scaling the WebSocket server.\n- **Redis Persistence:** Configure Redis with appropriate persistence settings (e.g., RDB snapshots or AOF logs) to prevent data loss in case of Redis server restarts.\n- **Monitoring:** Implement monitoring for both your application nodes and Redis servers to quickly identify and respond to issues.\n\n### Choosing the Right Strategy\n\n- **LRU Cache:** Ideal for small deployments, development environments, or scenarios where data persistence across restarts is not critical.\n- **Redis:** Recommended for production environments, especially when scaling horizontally or when data persistence is required.\n\nBy carefully considering your deployment needs and choosing the appropriate storage strategy, you can ensure optimal performance and scalability for your whiteboard application.\n\n### Known 
issues\n\nIf the [integration_whiteboard](https://github.com/nextcloud/integration_whiteboard) app was previously installed there might be a leftover non-standard mimetype configured. In this case opening the whiteboard may fail and a file is downloaded instead. Make sure to remove any entry in config/mimetypealiases.json mentioning whiteboard and run `occ maintenance:mimetype:update-db` and `occ maintenance:mimetype:update-js`.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnextcloud%2Fwhiteboard","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnextcloud%2Fwhiteboard","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnextcloud%2Fwhiteboard/lists"}