# Aster - React Router Chat App with WebLLM

[![CI](https://github.com/epicweb-dev/aster/actions/workflows/ci.yml/badge.svg)](https://github.com/epicweb-dev/aster/actions/workflows/ci.yml)

A modern chat application built with React Router and powered by WebLLM for
client-side AI inference.

## Features

- 🤖 **Real AI Chat**: Powered by WebLLM with local model inference
- 🚀 **Streaming Responses**: Real-time streaming of AI responses
- 📱 **Modern UI**: Clean, responsive interface with dark mode support
- 🔄 **Dynamic Model Selection**: Choose from 100+ available models organized by
  category
- 💾 **Local Processing**: All AI processing happens in your browser
- ⚡ **Fast Loading**: Optimized model loading with progress tracking

## Available Models

The app dynamically loads all available models from WebLLM, including:

### Llama Models

- **Llama 3.1**: 8B and 70B parameter models
- **Llama 3.2**: 1B and 3B parameter models
- **Llama 3**: 8B and 70B parameter models
- **Llama 2**: 7B and 13B parameter models

### Qwen Models

- **Qwen 3**: 0.6B, 1.7B, 4B, and 8B parameter models
- **Qwen 2.5**: 0.5B, 1.5B, 3B, and 7B parameter models (including Math and
  Coder variants)
- **Qwen 2**: 0.5B, 1.5B, and 7B parameter models

### Other Popular Models

- **Phi**: 1.5, 2, 3 Mini, and 3.5 Mini models (including vision capabilities)
- **Gemma**: 2B and 9B parameter models
- **Mistral**: 7B parameter models
- **Hermes**: Various instruction-tuned models
- **DeepSeek**: R1 models
- **SmolLM**: Lightweight models (135M, 360M, and 1.7B parameters)
- **TinyLlama**: 1.1B parameter models
- **StableLM**: Zephyr models
- **WizardMath**: Math-focused models

### Model Categories

Models are organized into categories and include information about:

- **VRAM Requirements**: Memory needed to run the model
- **Resource Level**: Whether the model is optimized for low-resource devices
- **Specialization**: Math, coding, vision, or general-purpose models

## Getting Started

1. **Install dependencies**:

   ```bash
   npm install
   ```

2. **Start the development server**:

   ```bash
   npm run dev
   ```

3. **Open your browser** and navigate to the chat page

## How It Works

The app uses [WebLLM](https://github.com/mlc-ai/web-llm) to run large language
models directly in your browser. This means:

- No server costs or API keys required
- Complete privacy - your conversations stay on your device
- Works offline after the initial model download
- Real-time streaming responses
- Access to 100+ pre-trained models

## Model Selection

The app automatically loads all available models from WebLLM and organizes them
by category. You can:

- **Browse by Category**: Models are grouped by family (Llama, Qwen, Phi, etc.)
- **Filter by Size**: Choose from small models (135M parameters) to large models
  (70B parameters)
- **Select by Specialization**: Pick models optimized for math, coding, vision,
  or general chat
- **Consider Resources**: Models are marked as low-resource or standard based on
  VRAM requirements

## First Load

On first load, the app downloads and initializes the selected model. This may
take a few minutes depending on your internet connection. The model is cached
for faster subsequent loads.

## Development

### Tech Stack

- React Router v7
- TypeScript for type safety
- Tailwind CSS for styling
- WebLLM for AI inference
- Vitest for testing
- Prettier for code formatting

### Common Commands

```bash
# Install dependencies
npm install

# Start development server
npm run dev

# Run tests
npm test

# Type check
npm run typecheck

# Format code
npm run format

# Build for production
npm run build
```

### Chat Implementation

The app includes two chat implementations:

- `/chat` - Original implementation using XState for state management
- `/chat-new` - New implementation using useReducer for simpler state management

### CI/CD

The project includes a GitHub Actions workflow that:

- ✅ Runs all tests with verbose output
- ✅ Checks code formatting with Prettier
- ✅ Performs TypeScript type checking
- ✅ Builds the application for production
- ✅ Uploads build artifacts

The workflow runs on pushes and pull requests to the `main` and `develop`
branches.

## Deployment

This app is configured for Cloudflare Pages deployment:

```bash
npm run deploy
```

## License

MIT
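The "Streaming Responses" feature described above can be sketched as a small
accumulator over streamed deltas. The chunk shape below is a simplified,
assumed mirror of the OpenAI-style chunks WebLLM emits, and `mockStream` is a
stand-in for the real engine stream; neither is the app's actual code.

```typescript
// Simplified shape of one streamed chunk, assumed to mirror the
// OpenAI-style delta format that WebLLM's chat completion stream emits.
type StreamChunk = { choices: { delta: { content?: string } }[] }

// Accumulate streamed deltas into the full assistant reply, calling
// onUpdate with the partial text so the UI can re-render per token.
async function accumulateStream(
  stream: AsyncIterable<StreamChunk>,
  onUpdate: (partial: string) => void,
): Promise<string> {
  let text = ''
  for await (const chunk of stream) {
    const delta = chunk.choices[0]?.delta.content ?? ''
    if (delta) {
      text += delta
      onUpdate(text)
    }
  }
  return text
}

// Mock stream standing in for the real one, which in the app would come
// from the WebLLM engine's streaming chat completion call.
async function* mockStream(): AsyncIterable<StreamChunk> {
  for (const piece of ['Hel', 'lo', ' world']) {
    yield { choices: [{ delta: { content: piece } }] }
  }
}
```

Because the accumulator only depends on the chunk shape, it can be unit-tested
with a mock stream and reused unchanged against the real engine.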
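The "Browse by Category" behavior in the Model Selection section amounts to
bucketing model IDs by family prefix. A minimal sketch, assuming WebLLM's
`<Family>-<size>-...-MLC` ID naming convention; the family list here is an
illustrative subset, not the app's actual grouping code:

```typescript
// Model families shown in the picker. TinyLlama is listed before Llama so
// the more specific prefix wins during matching.
const FAMILIES = [
  'TinyLlama', 'Llama', 'Qwen', 'Phi', 'Gemma', 'Mistral',
  'Hermes', 'DeepSeek', 'SmolLM', 'stablelm', 'WizardMath',
]

// Resolve a model ID to its family by case-insensitive prefix match.
function familyOf(modelId: string): string {
  const id = modelId.toLowerCase()
  return FAMILIES.find((f) => id.startsWith(f.toLowerCase())) ?? 'Other'
}

// Group a flat list of model IDs into family buckets for the category UI.
function groupByFamily(modelIds: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>()
  for (const id of modelIds) {
    const family = familyOf(id)
    groups.set(family, [...(groups.get(family) ?? []), id])
  }
  return groups
}
```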
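The "Consider Resources" marking in the Model Selection section can be
sketched as a predicate over each model record. The field names below are
assumed to match the model records in `@mlc-ai/web-llm`'s prebuilt app config,
and the 2 GB cutoff is an assumption for illustration, not a value from the
app or the library:

```typescript
// Subset of the fields a WebLLM model record carries (names assumed to
// match the library's ModelRecord; only what this sketch needs).
type ModelInfo = {
  model_id: string
  vram_required_MB?: number
  low_resource_required?: boolean
}

// Assumed cutoff for treating a model as low-resource when the record
// does not say so explicitly.
const LOW_RESOURCE_CUTOFF_MB = 2048

// A model is low-resource if the record flags it, or if its reported VRAM
// requirement falls under the cutoff. Missing VRAM info means "not low".
function isLowResource(m: ModelInfo): boolean {
  if (m.low_resource_required) return true
  return (m.vram_required_MB ?? Number.POSITIVE_INFINITY) <= LOW_RESOURCE_CUTOFF_MB
}
```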
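The `/chat-new` implementation described under Chat Implementation uses
useReducer for state. A minimal sketch of such a reducer; the action names
and state shape here are assumptions for illustration, not the app's actual
code:

```typescript
type Message = { role: 'user' | 'assistant'; content: string }
type ChatState = { messages: Message[]; isStreaming: boolean }

// Hypothetical action vocabulary: one action per chat event.
type ChatAction =
  | { type: 'userMessage'; content: string }
  | { type: 'streamToken'; token: string }
  | { type: 'streamDone' }

function chatReducer(state: ChatState, action: ChatAction): ChatState {
  switch (action.type) {
    case 'userMessage':
      // Record the user turn and open an empty assistant turn to stream into.
      return {
        isStreaming: true,
        messages: [
          ...state.messages,
          { role: 'user', content: action.content },
          { role: 'assistant', content: '' },
        ],
      }
    case 'streamToken': {
      // Append the token to the trailing assistant message, immutably.
      const messages = state.messages.slice()
      const last = messages[messages.length - 1]
      messages[messages.length - 1] = {
        ...last,
        content: last.content + action.token,
      }
      return { ...state, messages }
    }
    case 'streamDone':
      return { ...state, isStreaming: false }
  }
}
```

Keeping every transition in one pure function is what makes this simpler than
the XState version: the component just dispatches events, and the reducer is
trivially unit-testable without rendering anything.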