https://github.com/wassim249/fastapi-langgraph-agent-production-ready-template
A production-ready FastAPI template for building AI agent applications with LangGraph integration. This template provides a robust foundation for building scalable, secure, and maintainable AI agent services.
- Host: GitHub
- URL: https://github.com/wassim249/fastapi-langgraph-agent-production-ready-template
- Owner: wassim249
- Created: 2025-04-07T16:54:25.000Z (6 months ago)
- Default Branch: master
- Last Pushed: 2025-04-09T18:01:46.000Z (6 months ago)
- Last Synced: 2025-04-12T17:54:11.193Z (6 months ago)
- Topics: agent, agentic-ai, docker, fastapi, fastapi-template, langchain, langchain-python, langgraph, langgraph-python, llm, memory
- Language: Python
- Homepage:
- Size: 124 KB
- Stars: 51
- Watchers: 1
- Forks: 8
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-LangGraph: wassim249/fastapi-langgraph-agent-production-ready-template (📋 Templates & Starters / 🟩 Development Tools 🛠️)
README
# FastAPI LangGraph Agent Template
A production-ready FastAPI template for building AI agent applications with LangGraph integration. This template provides a robust foundation for building scalable, secure, and maintainable AI agent services.
## 🌟 Features
- **Production-Ready Architecture**
  - FastAPI for high-performance async API endpoints
  - LangGraph integration for AI agent workflows (a minimal sketch follows this list)
  - Langfuse for LLM observability and monitoring
  - Structured logging with environment-specific formatting
  - Rate limiting with configurable rules
  - PostgreSQL for data persistence
  - Docker and Docker Compose support
  - Prometheus metrics and Grafana dashboards for monitoring
- **Security**
  - JWT-based authentication
  - Session management
  - Input sanitization
  - CORS configuration
  - Rate limiting protection
- **Developer Experience**
  - Environment-specific configuration
  - Comprehensive logging system
  - Clear project structure
  - Type hints throughout
  - Easy local development setup
- **Model Evaluation Framework**
  - Automated metric-based evaluation of model outputs
  - Integration with Langfuse for trace analysis
  - Detailed JSON reports with success/failure metrics
  - Interactive command-line interface
  - Customizable evaluation metrics
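As a rough illustration of the LangGraph integration mentioned above, here is a minimal, hypothetical sketch of a LangGraph workflow served from a FastAPI endpoint. This is not the template's actual code; the state shape, node name, and `/chat` route are stand-ins.

```python
# Hypothetical sketch: a one-node LangGraph graph behind a FastAPI route.
from typing import TypedDict

from fastapi import FastAPI
from langgraph.graph import END, START, StateGraph


class AgentState(TypedDict):
    question: str
    answer: str


def answer_node(state: AgentState) -> dict:
    # Placeholder node; a real agent would call an LLM here.
    return {"answer": f"You asked: {state['question']}"}


# Compile a trivial graph: START -> answer -> END.
builder = StateGraph(AgentState)
builder.add_node("answer", answer_node)
builder.add_edge(START, "answer")
builder.add_edge("answer", END)
graph = builder.compile()

app = FastAPI()


@app.post("/chat")
async def chat(question: str) -> dict:
    # Run the graph once for this request and return the final state.
    result = graph.invoke({"question": question, "answer": ""})
    return {"answer": result["answer"]}
```

The features listed above (authentication, rate limiting, persistence, Langfuse tracing) are what the template layers around a workflow of this kind.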
## 🚀 Quick Start

### Prerequisites
- Python 3.13+
- PostgreSQL
- Docker and Docker Compose (optional)

### Environment Setup
1. Clone the repository:
```bash
git clone https://github.com/wassim249/fastapi-langgraph-agent-production-ready-template.git
cd fastapi-langgraph-agent-production-ready-template
```

2. Create and activate a virtual environment:
```bash
uv sync
```

3. Copy the example environment file:
```bash
cp .env.example .env.[development|staging|production] # e.g. .env.development
```

4. Update the `.env` file with your configuration (see `.env.example` for reference). A hypothetical example is sketched below.
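The variable names below are purely illustrative; the authoritative list lives in `.env.example` and may differ. A development file might look something like:

```bash
# Hypothetical .env.development contents; use the keys from .env.example,
# the names here are examples only.
LLM_API_KEY=sk-your-key
POSTGRES_URL=postgresql://app:app@localhost:5432/app
LANGFUSE_PUBLIC_KEY=pk-lf-your-key
LANGFUSE_SECRET_KEY=sk-lf-your-key
JWT_SECRET_KEY=change-me
LOG_LEVEL=DEBUG
```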
### Running the Application
#### Local Development
1. Install dependencies:
```bash
uv sync
```

2. Run the application:
```bash
make [dev|staging|production] # e.g. make dev
```

3. Go to the Swagger UI:
```bash
http://localhost:8000/docs
```
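Besides the interactive docs, you can sanity-check the service from the command line. The `/openapi.json` route is served by FastAPI by default (the template may expose additional health endpoints of its own):

```bash
# Fetch the generated OpenAPI schema; a 200 response means the app is serving.
curl -s http://localhost:8000/openapi.json | head
```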
#### Using Docker

1. Build and run with Docker Compose:
```bash
make docker-build-env ENV=[development|staging|production] # e.g. make docker-build-env ENV=development
make docker-run-env ENV=[development|staging|production] # e.g. make docker-run-env ENV=development
```

2. Access the monitoring stack:
```bash
# Prometheus metrics
http://localhost:9090

# Grafana dashboards
http://localhost:3000
Default credentials:
- Username: admin
- Password: admin
```

The Docker setup includes:
- FastAPI application
- PostgreSQL database
- Prometheus for metrics collection
- Grafana for metrics visualization
- Pre-configured dashboards for:
  - API performance metrics
  - Rate limiting statistics
  - Database performance
  - System resource usage
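Once the stack is up, a quick way to confirm Prometheus is scraping is its standard HTTP query API; the built-in `up` metric reports one sample per scrape target:

```bash
# Query Prometheus for the `up` metric (1 = target is being scraped successfully).
curl -s 'http://localhost:9090/api/v1/query?query=up'
```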
## 📊 Model Evaluation

The project includes a robust evaluation framework for measuring and tracking model performance over time. The evaluator automatically fetches traces from Langfuse, applies evaluation metrics, and generates detailed reports.
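As a rough sketch of what such a loop looks like (this is not the template's evaluator; it assumes the Langfuse v2 Python SDK's `fetch_traces` call, and the pass/fail metric is a placeholder):

```python
# Hedged sketch: fetch recent traces from Langfuse, score them with a
# placeholder metric, and dump a JSON report. Field names are illustrative.
import json
from datetime import datetime

from langfuse import Langfuse


def passes_metric(trace) -> bool:
    # Stand-in check; the template derives its criteria from markdown prompt files.
    return bool(trace.output)


langfuse = Langfuse()  # reads LANGFUSE_* keys from the environment
traces = langfuse.fetch_traces(limit=50).data

results = [{"trace_id": t.id, "passed": passes_metric(t)} for t in traces]
report = {
    "generated_at": datetime.utcnow().isoformat(),
    "total_traces": len(results),
    "success_rate": sum(r["passed"] for r in results) / max(len(results), 1),
    "results": results,
}

with open("evaluation_report.json", "w") as f:
    json.dump(report, f, indent=2)
```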
### Running Evaluations
You can run evaluations with different options using the provided Makefile commands:
```bash
# Interactive mode with step-by-step prompts
make eval [ENV=development|staging|production]

# Quick mode with default settings (no prompts)
make eval-quick [ENV=development|staging|production]

# Evaluation without report generation
make eval-no-report [ENV=development|staging|production]
```

### Evaluation Features
- **Interactive CLI**: User-friendly interface with colored output and progress bars
- **Flexible Configuration**: Set default values or customize at runtime
- **Detailed Reports**: JSON reports with comprehensive metrics including:
  - Overall success rate
  - Metric-specific performance
  - Duration and timing information
  - Trace-level success/failure details

### Customizing Metrics
Evaluation metrics are defined in `evals/metrics/prompts/` as markdown files:
1. Create a new markdown file (e.g., `my_metric.md`) in the prompts directory
2. Define the evaluation criteria and scoring logic
3. The evaluator will automatically discover and apply your new metric (a hypothetical example file is sketched below)
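For example, a hypothetical `evals/metrics/prompts/conciseness.md` might read as follows; check the existing prompt files for the exact structure the evaluator expects:

```markdown
# Conciseness

Score the response from 0 to 1:

- 1: the answer addresses the user's question directly, with no filler.
- 0: the answer is padded, repetitive, or off-topic.

Return only the numeric score.
```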
### Viewing Reports

Reports are automatically generated in the `evals/reports/` directory with timestamps in the filename:
```
evals/reports/evaluation_report_YYYYMMDD_HHMMSS.json
```

Each report includes:
- High-level statistics (total trace count, success rate, etc.)
- Per-metric performance metrics
- Detailed trace-level information for debugging (an illustrative report shape is sketched below)
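The exact schema is defined by the evaluator, but a report covering the points above might look roughly like this (field names and values are illustrative only):

```json
{
  "generated_at": "2025-04-09T18:00:00Z",
  "total_traces": 120,
  "success_rate": 0.85,
  "metrics": {
    "conciseness": { "passed": 102, "failed": 18 }
  },
  "traces": [
    { "id": "example-trace-id", "metric": "conciseness", "passed": false }
  ]
}
```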
## 🔧 Configuration

The application uses a flexible configuration system with environment-specific settings:
- `.env.development`
- `.env.staging`
- `.env.production`