https://github.com/wahyudesu/fastapi-ai-production-template
Simple starter template for your ML/AI projects (uv package manager, RestAPI with FastAPI and Dockerfile support)
- Host: GitHub
- URL: https://github.com/wahyudesu/fastapi-ai-production-template
- Owner: wahyudesu
- License: mit
- Created: 2025-05-29T07:22:32.000Z (4 months ago)
- Default Branch: master
- Last Pushed: 2025-07-20T10:11:52.000Z (3 months ago)
- Last Synced: 2025-07-20T11:19:10.663Z (3 months ago)
- Topics: ai, fastapi, fastapi-boilerplate, fastapi-template, machine-learning, python
- Language: Jupyter Notebook
- Homepage:
- Size: 4.85 MB
- Stars: 75
- Watchers: 1
- Forks: 10
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# FastAPI AI Production Boilerplate
Simple starter repo for your Machine Learning/AI projects
## Use Case
- Build and serve machine learning models via production-ready APIs
- Create scalable and easily deployable AI/ML backend services
- Develop AI Agent applications based on FastAPI
- Support end-to-end model experimentation, serving, and deployment

## Features
- ✅ Built-in security and API endpoint protection
- ✅ Lightweight Dockerfile following best practices
- ✅ Router-based serving for ML models, AI models, and AI agents
- ✅ Project dependencies and environments managed with uv
- ✅ Simple logging using loguru
- ✅ Kubernetes manifests: Deployment, Service, HPA, Ingress
- ✅ Ready for production and educational use
- ✅ Linting and formatting using ruff
- ✅ Jupyter notebooks for ML experiments and building AI agents
- ✅ Rate limiting and logging middleware
- ✅ Well-written documentation for easy understanding
- ... Adding MCP features

## Project Structure
```
root-project/
├── app/
│ ├── main.py # FastAPI entrypoint
│ ├── logger.py # Logging
│ ├── middleware.py # Middleware logging and rate limiter
│ ├── model/ # Model artifacts (e.g., pickle files)
│ └── routers/ # API routers (chatbot, predict, etc.)
│ ├── agent.py # Agent research endpoints
│ ├── chatbot.py # Chatbot endpoints (file upload, entity extraction, etc.)
│ └── predict.py # Prediction endpoints (ML, summarization, etc.)
├── data/ # Dataset
├── k8s/ # Kubernetes
└── notebook/ # Jupyter notebooks for experiments
```

This structure makes code management and feature development easier.
- For LLM work, use notebooks such as `notebook/langgraph.ipynb` for experiments.
- For ML, use notebooks like `notebook/bayesian-regression.ipynb`, the `data` folder for datasets, and the `model` folder for models and training/prediction code.
- Model serving and API endpoints are organized in the `app/routers` folder.
> For more details, see the [FastAPI Documentation](https://fastapi.tiangolo.com/).
## Installation & Setup
Make sure you have [`uv` installed](https://docs.astral.sh/uv/getting-started/installation/).
```powershell
# Clone repository
git clone https://github.com/wahyudesu/fastapi-ai-production-template
cd fastapi-ai-production-template
# Development
uv venv
.venv\Scripts\activate
uv sync
# Copy and edit .env file
cp .env.example .env
# Edit .env according to your needs
# Set the security token, and the Groq API key if you use an LLM
```

Run the linter:
```
uv run ruff check
```

## Run locally
```powershell
uv run uvicorn app.main:app --reload
```

After running the command above, your FastAPI application will be available at [http://localhost:8000](http://localhost:8000?token=token).
You can access the interactive API documentation at [http://localhost:8000/scalar](http://localhost:8000/scalar), or the default Swagger UI at [http://localhost:8000/docs](http://localhost:8000/docs).
To access the MCP endpoint, go to [http://localhost:8000/mcp](http://localhost:8000/mcp).
## Docker
Build the Docker image with:

```powershell
docker build -t fastapi-app .
```
Run the Docker container locally with:

```powershell
docker run -p 8000:80 fastapi-app
```

## Deployment
You can use virtually any cloud provider to deploy your FastAPI application. Before deploying, make sure you understand the basic concepts.
You can read more about deployment concepts [here](https://fastapi.tiangolo.com/deployment/concepts).
> This project is developed for modern LLMOps/ML pipelines and is ready for deployment on both cloud platforms and VPS.
## 🤝 Contributing
1. Fork this repository;
2. Create your branch: `git checkout -b my-new-feature`;
3. Commit your changes: `git commit -m 'Add some feature'`;
4. Push to the branch: `git push origin my-new-feature`;
5. After your pull request is merged, you can safely delete your branch.

## ⏭️ What's Next?
I made this repo as minimal and simple as possible while keeping the feature set fairly complete, so that beginners can easily build their projects on top of it.
If you find this repo useful, please star it and share it with friends who might need it. If it gains enough traction, I plan to build a more advanced follow-up version with additional features such as JWT security, an ORM, Grafana, and deeper ML integration, focused specifically on ML and LLMOps.

## FAQ
Why FastAPI?
- FastAPI is a modern, high-performance web framework for building APIs with Python. For AI apps, it serves as the interface between your AI models and the outside world, allowing external systems to send data to your models and receive predictions or processing results. What makes FastAPI particularly appealing is its simplicity and elegance: it provides everything you need without unnecessary complexity.
What is Uvicorn?
- Uvicorn is a lightning-fast ASGI server implementation for Python, commonly used to run FastAPI applications in production. It enables asynchronous request handling and is well-suited for modern web frameworks.
Is this boilerplate connected to a database?
- You can add a database such as PostgreSQL, MySQL, or SQLite depending on your use case. If you are only serving models, a database may not be necessary. This repository is designed to be as simple as possible so users can get started quickly.
How about security?
- The project includes built-in security features such as API endpoint protection, authentication, and rate limiting. You can further enhance security by configuring environment variables and using HTTPS in production.
What can I develop with this?
- It depends on your project use case. For serving AI or ML models, this boilerplate is more than sufficient. If you need more features, you can add observability and monitoring tools such as Opik, Comet, or MLflow.