Browser-LLM Auto-Scaling Technology
https://github.com/stanford-mast/blast
- Host: GitHub
- URL: https://github.com/stanford-mast/blast
- Owner: stanford-mast
- License: mit
- Created: 2025-04-05T00:03:34.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2025-05-08T18:59:10.000Z (7 months ago)
- Last Synced: 2025-05-08T19:46:08.777Z (7 months ago)
- Topics: ai-agents, browser-automation, llm-inference, python
- Language: Python
- Homepage: http://blastproject.org/
- Size: 3.23 MB
- Stars: 487
- Watchers: 7
- Forks: 17
- Open Issues: 4
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-llm-os - BLAST: Browser-LLM Auto-Scaling Technology
- awesome-LangGraph - stanford-mast/blast - High-performance serving engine for browser-augmented LLM applications with auto-scaling and an OpenAI-compatible API (🌐 Web Automation & Scraping / 🟩 Development Tools 🛠️)
README
A high-performance serving engine for web browsing AI.
[Website](https://blastproject.org)
[Documentation](https://docs.blastproject.org)
[Discord](https://discord.gg/NqrkJwYYh4)
[Twitter/X](https://x.com/realcalebwin)
## ❓ Use Cases
1. **I want to add web browsing AI to my app...** BLAST serves web browsing AI with an OpenAI-compatible API and concurrency and streaming baked in.
2. **I need to automate workflows...** BLAST will automatically cache and parallelize to keep costs down and enable interactive-level latencies.
3. **Just want to use this locally...** BLAST keeps you under budget and avoids hogging your computer's memory.
## 🚀 Quick Start
```bash
pip install blastai && blastai serve
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="not-needed",
    base_url="http://127.0.0.1:8000"
)

# Stream real-time browser actions
stream = client.responses.create(
    model="not-needed",
    input="Compare fried chicken reviews for top 10 fast food restaurants",
    stream=True
)
for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta if " " in event.delta else "", end="", flush=True)
```
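The streaming loop above filters for `response.output_text.delta` events and prints each text fragment as it arrives. A self-contained sketch of that consumption pattern, with a mocked event list standing in for the live stream (running the real loop requires `blastai serve` to be up; the `Event` class and sample deltas here are illustrative assumptions, not part of BLAST's API):

```python
from dataclasses import dataclass

@dataclass
class Event:
    # Mocked stand-in for a streamed event; assumes real events
    # expose .type and .delta as in the Quick Start loop above.
    type: str
    delta: str = ""

# A fake stream: only the delta events carry user-visible text.
stream = [
    Event("response.created"),
    Event("response.output_text.delta", "Top"),
    Event("response.output_text.delta", " 10 results..."),
    Event("response.completed"),
]

# Accumulate text from delta events, ignoring lifecycle events.
text = ""
for event in stream:
    if event.type == "response.output_text.delta":
        text += event.delta
print(text)  # -> Top 10 results...
```

The same filter-and-accumulate shape applies to the real stream; only the event source changes.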
## ✨ Features
- 🔄 **OpenAI-Compatible API** Drop-in replacement for OpenAI's API
- 🚄 **High Performance** Automatic parallelism and prefix caching
- 📡 **Streaming** Stream browser-augmented LLM output to users
- 📊 **Concurrency** Out-of-the-box support for many users with efficient resource management
## 📚 Documentation
Visit the [documentation](https://docs.blastproject.org) to learn more.
## 🤝 Contributing
Awesome! See our [Contributing Guide](https://docs.blastproject.org/development/contributing) for details.
## 📄 MIT License
As it should be!