Brev Setup Scripts
https://github.com/brevdev/setup-scripts
- Host: GitHub
- URL: https://github.com/brevdev/setup-scripts
- Owner: brevdev
- License: apache-2.0
- Created: 2022-04-05T03:30:20.000Z (about 4 years ago)
- Default Branch: main
- Last Pushed: 2025-10-18T05:45:55.000Z (6 months ago)
- Last Synced: 2025-10-18T21:42:05.640Z (6 months ago)
- Language: Shell
- Homepage: https://developer.nvidia.com/brev
- Size: 106 KB
- Stars: 0
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Brev Setup Scripts
Simple, practical setup scripts for common developer environments.
**What Brev already provides:** NVIDIA drivers, CUDA toolkit, Docker, NVIDIA Container Toolkit
## Available Scripts
### Python Development
```bash
cd python-dev && bash setup.sh
```
**Installs:** pyenv, Python 3.11, Jupyter Lab, common packages (pandas, numpy, matplotlib)
**Time:** ~3-5 minutes
### Node.js Development
```bash
cd nodejs-dev && bash setup.sh
```
**Installs:** nvm, Node LTS, pnpm, TypeScript, ESLint, Prettier
**Time:** ~2-3 minutes
### Terminal Setup
```bash
cd terminal-setup && bash setup.sh
```
**Installs:** zsh, oh-my-zsh, fzf, ripgrep, bat, eza (modern CLI tools)
**Time:** ~2-3 minutes
**Note:** Automatically switches to zsh when complete
### Local Kubernetes
```bash
cd k8s-local && bash setup.sh
```
**Installs:** microk8s, kubectl, helm, k9s, GPU operator
**Time:** ~3-5 minutes
**Note:** All tools work immediately, no group membership or logout needed
### ML Quickstart
```bash
cd ml-quickstart && bash setup.sh
```
**Installs:** Miniconda, PyTorch with CUDA, Jupyter Lab, transformers
**Time:** ~5-8 minutes (PyTorch is large)
### RAPIDS
```bash
cd rapids && bash setup.sh
```
**Installs:** GPU-accelerated pandas (cuDF), scikit-learn (cuML), example notebooks
**Time:** ~8-12 minutes
**Note:** 10-50x faster data processing on GPU. Requires NVIDIA GPU
### Ollama
```bash
cd ollama && bash setup.sh
```
**Installs:** Ollama with GPU support, llama3.2 model (pre-downloaded)
**Time:** ~3-5 minutes
**Port:** 11434/tcp for API access
### Unsloth
```bash
cd unsloth && bash setup.sh
```
**Installs:** Unsloth for fast fine-tuning, PyTorch with CUDA, LoRA/QLoRA support
**Time:** ~5-8 minutes
**Note:** Requires NVIDIA GPU
### LiteLLM
```bash
cd litellm && bash setup.sh
```
**Installs:** Universal LLM proxy (use any LLM with OpenAI API format)
**Time:** ~1-2 minutes
**Port:** 4000/tcp for API access
### Qdrant
```bash
cd qdrant && bash setup.sh
```
**Installs:** Vector database for RAG and semantic search
**Time:** ~1-2 minutes
**Port:** 6333/tcp for API + dashboard
### ComfyUI
```bash
cd comfyui && bash setup.sh
```
**Installs:** Node-based UI for Stable Diffusion, SD 1.5 model
**Time:** ~5-10 minutes
**Port:** 8188/tcp for web interface
**Note:** Requires NVIDIA GPU
### Databases
```bash
cd databases && bash setup.sh
```
**Installs:** PostgreSQL 16, Redis 7 (in Docker containers)
**Time:** ~1-2 minutes
### Marimo
```bash
cd marimo && bash setup.sh
```
**Installs:** Marimo reactive notebooks as systemd service
**Time:** ~2-3 minutes
**Port:** 8080/tcp for web access
### earlyoom
```bash
cd earlyoom && bash setup.sh
```
**Installs:** Early OOM daemon to prevent system freezes
**Time:** ~1-2 minutes
**Note:** Monitors memory/swap and kills processes before OOM hangs
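earlyoom's kill thresholds can be tuned through its defaults file. A hedged sketch, assuming the Debian/Ubuntu packaging (the exact path and flag support depend on your distribution's package):

```shell
# /etc/default/earlyoom (path is an assumption for Debian/Ubuntu packages)
# -m: act when available memory falls below this percentage
# -s: act when free swap falls below this percentage
EARLYOOM_ARGS="-m 5 -s 10"
```

After editing, restart the service (`sudo systemctl restart earlyoom`) for the new thresholds to take effect.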
## Quick Start
**Pick what you need:**
```bash
# Python ML developer
cd ml-quickstart && bash setup.sh
# Web developer
cd nodejs-dev && bash setup.sh
cd databases && bash setup.sh
# Terminal power user
cd terminal-setup && bash setup.sh
# Kubernetes developer
cd k8s-local && bash setup.sh
```
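Several picks can be chained in one pass. A minimal sketch (directory names assumed to match this repo's layout; each script runs in a subshell so the working directory is restored, and missing directories are skipped rather than aborting the run):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run each named setup script in sequence, skipping directories
# that are not present in the current checkout.
run_setups() {
  local dir
  for dir in "$@"; do
    if [ -d "$dir" ] && [ -f "$dir/setup.sh" ]; then
      echo "running $dir/setup.sh"
      (cd "$dir" && bash setup.sh)
    else
      echo "skipping $dir (not found)"
    fi
  done
}

run_setups terminal-setup nodejs-dev databases
```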
## Design Philosophy
Each script is:
- ✅ **Simple** - One purpose, no complexity
- ✅ **Short** - Under 150 lines each
- ✅ **Fast** - Takes 2-8 minutes
- ✅ **Standalone** - No dependencies between scripts
- ✅ **Practical** - Installs what developers actually use
We don't:
- ❌ Install what Brev already provides (NVIDIA drivers, CUDA, Docker)
- ❌ Add complex GPU detection logic
- ❌ Support multi-node/HPC scenarios
- ❌ Over-engineer solutions
## Examples
**Python data science:**
```bash
cd python-dev && bash setup.sh
# Then:
ipython
jupyter lab --ip=0.0.0.0
```
**Machine learning with GPU:**
```bash
cd ml-quickstart && bash setup.sh
# Then:
conda activate ml
python gpu_check.py
```
**GPU-accelerated data science with RAPIDS:**
```bash
cd rapids && bash setup.sh
# Then:
conda activate rapids
python ~/rapids-examples/benchmark.py # See 20x+ speedup!
```
**Local LLM with Ollama:**
```bash
cd ollama && bash setup.sh
# Then:
ollama run llama3.2
ollama list
```
**Fast LLM fine-tuning with Unsloth:**
```bash
cd unsloth && bash setup.sh
# Then:
conda activate unsloth
python ~/unsloth-examples/test_install.py
```
**Universal LLM proxy with LiteLLM:**
```bash
cd litellm && bash setup.sh
# Then use any LLM with OpenAI SDK:
# openai.api_base = "http://localhost:4000"
```
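The LiteLLM proxy is typically driven by a `config.yaml` that maps client-facing model names to backends. A hypothetical example routing an OpenAI-style request to the local Ollama instance from the section above (the model name, backend, and ports here are assumptions, not this repo's shipped config):

```yaml
model_list:
  - model_name: local-llama            # name clients send in the OpenAI-format request
    litellm_params:
      model: ollama/llama3.2           # provider/model for the backend
      api_base: http://localhost:11434 # Ollama's default API port
```

Started with a config like this, the proxy on port 4000 accepts standard OpenAI SDK calls and forwards them to Ollama.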
**Vector database with Qdrant:**
```bash
cd qdrant && bash setup.sh
# Then:
pip install qdrant-client
python ~/qdrant_example.py
```
**Image generation with ComfyUI:**
```bash
cd comfyui && bash setup.sh
# Then open: http://localhost:8188
```
**Modern terminal:**
```bash
cd terminal-setup && bash setup.sh
# Automatically drops you into zsh, then:
ll # Better ls
cat file.txt # Syntax highlighting
fzf # Fuzzy finder
```
**Local database:**
```bash
cd databases && bash setup.sh
# Then:
docker exec -it postgres psql -U postgres
docker exec -it redis redis-cli
```
**OOM protection with earlyoom:**
```bash
cd earlyoom && bash setup.sh
# Then:
sudo systemctl status earlyoom
sudo journalctl -u earlyoom -f # Watch memory monitoring
```
## File Structure
```
brev-setup-scripts/
├── README.md               # This file
├── python-dev/
│   ├── setup.sh            # Python development environment
│   └── README.md
├── nodejs-dev/
│   ├── setup.sh            # Node.js development environment
│   └── README.md
├── terminal-setup/
│   ├── setup.sh            # Modern terminal with zsh
│   └── README.md
├── k8s-local/
│   ├── setup.sh            # Local Kubernetes
│   └── README.md
├── ml-quickstart/
│   ├── setup.sh            # PyTorch ML environment
│   └── README.md
├── ollama/
│   ├── setup.sh            # Ollama LLM inference
│   └── README.md
├── unsloth/
│   ├── setup.sh            # Unsloth fast fine-tuning
│   └── README.md
├── litellm/
│   ├── setup.sh            # Universal LLM proxy
│   └── README.md
├── qdrant/
│   ├── setup.sh            # Vector database
│   └── README.md
├── comfyui/
│   ├── setup.sh            # ComfyUI for Stable Diffusion
│   └── README.md
├── databases/
│   ├── setup.sh            # PostgreSQL + Redis
│   └── README.md
├── marimo/
│   ├── setup.sh            # Marimo reactive notebooks
│   └── README.md
├── earlyoom/
│   ├── setup.sh            # Early OOM daemon
│   └── README.md
└── rapids/
    ├── setup.sh            # RAPIDS GPU-accelerated data science
    └── README.md
```
## Contributing
Want to add a script? Keep it simple:
1. **One purpose** - Install one thing well
2. **Short** - Under 150 lines
3. **Fast** - Completes in < 10 minutes
4. **Verify** - Include a verification step
5. **Document** - Show quick start commands
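The guidelines above can be sketched as a hypothetical skeleton for a new `setup.sh` (the `verify` helper and its messages are illustrative, not an existing convention in this repo):

```shell
#!/usr/bin/env bash
# Skeleton for a new setup script: one purpose, short, with a verification step.
set -euo pipefail

# Verification step: confirm a command landed on PATH.
verify() {
  local cmd="$1"
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "OK: $cmd installed"
  else
    echo "FAIL: $cmd not found" >&2
    return 1
  fi
}

# ... install steps for your one tool would go here ...

# Verify and print a quick-start hint.
verify bash
echo "Done. Quick start: bash --version"
```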
## License
Apache 2.0