{"id":17880527,"url":"https://github.com/igorbenav/sqlmodel-boilerplate","last_synced_at":"2025-04-04T18:05:07.486Z","repository":{"id":238947051,"uuid":"798040027","full_name":"igorbenav/SQLModel-boilerplate","owner":"igorbenav","description":"An extendable async API using FastAPI, SQLModel, PostgreSQL and Redis.","archived":false,"fork":false,"pushed_at":"2025-02-11T00:11:00.000Z","size":105,"stargazers_count":178,"open_issues_count":3,"forks_count":18,"subscribers_count":3,"default_branch":"main","last_synced_at":"2025-03-28T17:03:44.490Z","etag":null,"topics":["alembic","api","arq","async","cache","docker","docker-compose","fastapi","fastapi-boilerplate","postgres","python","redis","sqlmodel","uvicorn"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/igorbenav.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE.md","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"github":"igorbenav"}},"created_at":"2024-05-09T01:15:52.000Z","updated_at":"2025-03-28T14:44:56.000Z","dependencies_parsed_at":"2024-05-09T03:46:27.604Z","dependency_job_id":"e5a64e5d-a2e8-4a8d-bbca-60088000d747","html_url":"https://github.com/igorbenav/SQLModel-boilerplate","commit_stats":{"total_commits":9,"total_committers":3,"mean_commits":3.0,"dds":0.4444444444444444,"last_synced_commit":"2ead0414113a4f5b2b28183cef427c4e5b864284"},"previous_names":["igorbenav/sqlmodel-boilerplate"],"tags_count":1,"template":true,"template_full_name":"igorbenav/FastAPI-boilerplate","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/igorbenav%2FSQLModel-boilerplate","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/igorbenav%2FSQLModel-boilerplate/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/igorbenav%2FSQLModel-boilerplate/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/igorbenav%2FSQLModel-boilerplate/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/igorbenav","download_url":"https://codeload.github.com/igorbenav/SQLModel-boilerplate/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247226213,"owners_count":20904465,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["alembic","api","arq","async","cache","docker","docker-compose","fastapi","fastapi-boilerplate","postgres","python","redis","sqlmodel","uvicorn"],"created_at":"2024-10-28T12:18:50.321Z","updated_at":"2025-04-04T18:05:07.465Z","avatar_url":"https://github.com/igorbenav.png","language":"Python","readme":"\u003ch1 align=\"center\"\u003e Fast FastAPI boilerplate (SQLmodel version)\u003c/h1\u003e\n\u003cp align=\"center\" 
<p align="center" markdown=1>
  <i>Yet another template to speed up your FastAPI development. This time, using SQLModel.</i>
</p>

<p align="center">
  <a href="https://github.com/igormagalhaesr/SQLModel-boilerplate">
    <img src="https://raw.githubusercontent.com/igorbenav/docs-images/main/sqlmodelboilerplate.png?raw=true" alt="Purple Rocket with FastAPI Logo as its window. There is a word FAST written" width="35%" height="auto">
  </a>
</p>

<p align="center">
  <a href="">
      <img src="https://img.shields.io/badge/Python-3776AB?style=for-the-badge&logo=python&logoColor=white" alt="Python">
  </a>
  <a href="https://fastapi.tiangolo.com">
      <img src="https://img.shields.io/badge/FastAPI-005571?style=for-the-badge&logo=fastapi" alt="FastAPI">
  </a>
  <a href="https://www.postgresql.org">
      <img src="https://img.shields.io/badge/PostgreSQL-316192?style=for-the-badge&logo=postgresql&logoColor=white" alt="PostgreSQL">
  </a>
  <a href="https://redis.io">
      <img src="https://img.shields.io/badge/Redis-DC382D?logo=redis&logoColor=fff&style=for-the-badge" alt="Redis">
  </a>
  <a href="https://docs.docker.com/compose/">
      <img src="https://img.shields.io/badge/Docker-2496ED?logo=docker&logoColor=fff&style=for-the-badge" alt="Docker">
  </a>
  <a href="https://nginx.org/en/">
      <img src="https://img.shields.io/badge/NGINX-009639?logo=nginx&logoColor=fff&style=for-the-badge" alt="NGINX">
  </a>
</p>

## 0. About

**FastAPI boilerplate** creates an extendable async API using FastAPI, SQLModel and PostgreSQL:

- [`FastAPI`](https://fastapi.tiangolo.com): modern Python web framework for building APIs
- [`SQLModel`](https://sqlmodel.tiangolo.com): SQL databases in Python, designed for simplicity, compatibility, and robustness
- [`PostgreSQL`](https://www.postgresql.org): the world's most advanced open source relational database
- [`Redis`](https://redis.io): source-available, in-memory data store used by millions as a cache, message broker and more
- [`ARQ`](https://arq-docs.helpmanual.io): job queues and RPC in Python with asyncio and Redis
- [`Docker Compose`](https://docs.docker.com/compose/): with a single command, create and start all the services from your configuration
- [`NGINX`](https://nginx.org/en/): high-performance, low-resource-consumption web server used for reverse proxy and load balancing

> [!TIP]
> If you want the `SQLAlchemy + Pydantic` version instead, head to [FastAPI-boilerplate](https://github.com/igorbenav/FastAPI-boilerplate).
## 1. Features

- ⚡️ Fully async
- 🚀 SQLModel with Pydantic V2 and SQLAlchemy 2.0 support
- 🔐 User authentication with JWT
- 🍪 Cookie-based refresh token
- 🏬 Easy Redis caching
- 👜 Easy client-side caching
- 🚦 ARQ integration for task queues
- ⚙️ Efficient and robust queries with <a href="https://github.com/igorbenav/fastcrud">fastcrud</a>
- ⎘ Out-of-the-box offset and cursor pagination support with <a href="https://github.com/igorbenav/fastcrud">fastcrud</a>
- 🛑 Rate Limiter dependency
- 👮 FastAPI docs behind authentication and hidden based on the environment
- 🦾 Easily extendable
- 🤸‍♂️ Flexible
- 🚚 Easy running with docker compose
- ⚖️ NGINX reverse proxy and load balancing

## 2. Contents

0. [About](#0-about)
1. [Features](#1-features)
1. [Contents](#2-contents)
1. [Prerequisites](#3-prerequisites)
   1. [Environment Variables (.env)](#31-environment-variables-env)
   1. [Docker Compose](#32-docker-compose-preferred)
   1. [From Scratch](#33-from-scratch)
1. [Usage](#4-usage)
   1. [Docker Compose](#41-docker-compose)
   1. [From Scratch](#42-from-scratch)
      1. [Packages](#421-packages)
      1. [Running PostgreSQL With Docker](#422-running-postgresql-with-docker)
      1. [Running Redis with Docker](#423-running-redis-with-docker)
      1. [Running the API](#424-running-the-api)
   1. [Creating the first superuser](#43-creating-the-first-superuser)
   1. [Database Migrations](#44-database-migrations)
1. [Extending](#5-extending)
   1. [Project Structure](#51-project-structure)
   1. [Database Model](#52-database-model)
   1. [DB Models](#53-db-models)
   1. [Validation Models](#54-validation-models)
   1. [Alembic Migrations](#55-alembic-migrations)
   1. [CRUD](#56-crud)
   1. [Routes](#57-routes)
      1. [Paginated Responses](#571-paginated-responses)
      1. [HTTP Exceptions](#572-http-exceptions)
   1. [Caching](#58-caching)
   1. [More Advanced Caching](#59-more-advanced-caching)
   1. [ARQ Job Queues](#510-arq-job-queues)
   1. [Rate Limiting](#511-rate-limiting)
   1. [JWT Authentication](#512-jwt-authentication)
   1. [Running](#513-running)
   1. [Create Application](#514-create-application)
   1. [Opting Out of Services](#515-opting-out-of-services)
1. [Running in Production](#6-running-in-production)
   1. [Uvicorn Workers with Gunicorn](#61-uvicorn-workers-with-gunicorn)
   1. [Running With NGINX](#62-running-with-nginx)
      1. [One Server](#621-one-server)
      1. [Multiple Servers](#622-multiple-servers)
1. [Testing](#7-testing)
1. [Contributing](#8-contributing)
1. [References](#9-references)
1. [License](#10-license)
1. [Contact](#11-contact)

______________________________________________________________________
## 3. Prerequisites

### 3.0 Start

Start by using the template, naming the repository whatever you want.

<p align="left">
    <img src="https://user-images.githubusercontent.com/43156212/277866726-975d1c98-b1c9-4c8e-b4bd-001c8a5728cb.png" alt="clicking use this template button, then create a new repository option" width="35%" height="auto">
</p>

Then clone your created repository (I'm using the base for the example):

```sh
git clone https://github.com/igormagalhaesr/SQLModel-boilerplate
```

> [!TIP]
> If you are in a hurry, you may use one of the following templates (containing a `.env`, `docker-compose.yml` and `Dockerfile`):

- [Running locally with uvicorn](https://gist.github.com/igorbenav/48ad745120c3f77817e094f3a609111a)
- [Running in staging with gunicorn managing uvicorn workers](https://gist.github.com/igorbenav/d0518d4f6bdfb426d4036090f74905ee)
- [Running in production with NGINX](https://gist.github.com/igorbenav/232c3b73339d6ca74e2bf179a5ef48a1)

> [!WARNING]
> Do not forget to place `docker-compose.yml` and `Dockerfile` in the `root` folder, while `.env` should be in the `src` folder.

### 3.1 Environment Variables (.env)

Then create a `.env` file inside the `src` directory:

```sh
touch .env
```

Inside of `.env`, create the following app settings variables:

```
# ------------- app settings -------------
APP_NAME="Your app name here"
APP_DESCRIPTION="Your app description here"
APP_VERSION="0.1"
CONTACT_NAME="Your name"
CONTACT_EMAIL="Your email"
LICENSE_NAME="The license you picked"
```

For the database ([`if you don't have a database yet, click here`](#422-running-postgresql-with-docker)), create:

```
# ------------- database -------------
POSTGRES_USER="your_postgres_user"
POSTGRES_PASSWORD="your_password"
POSTGRES_SERVER="your_server" # default "localhost", if using docker compose you should use "db"
POSTGRES_PORT=5432 # default "5432", if using docker compose you should use "5432"
POSTGRES_DB="your_db"
```

For database administration using PGAdmin, create the following variables in the `.env` file:

```
# ------------- pgadmin -------------
PGADMIN_DEFAULT_EMAIL="your_email_address"
PGADMIN_DEFAULT_PASSWORD="your_password"
PGADMIN_LISTEN_PORT=80
```

To connect to the database, log into the PGAdmin console with the values specified in `PGADMIN_DEFAULT_EMAIL` and `PGADMIN_DEFAULT_PASSWORD`.

Once in the main PGAdmin screen, click Add Server:

![pgadmin-connect](https://github.com/igorbenav/docs-images/blob/main/289698727-e15693b6-fae9-4ec6-a597-e70ab6f44133-3.png?raw=true)

1. Hostname/address: `db` (if using containers)
1. Port: the value you specified in `POSTGRES_PORT`
1. Maintenance database: leave this value as `postgres`
1. Username: the value you specified in `POSTGRES_USER`
1. Password: the value you specified in `POSTGRES_PASSWORD`
For crypt, start by running:

```sh
openssl rand -hex 32
```

And then create in `.env`:

```
# ------------- crypt -------------
SECRET_KEY= # result of openssl rand -hex 32
ALGORITHM= # pick an algorithm, default HS256
ACCESS_TOKEN_EXPIRE_MINUTES= # minutes until token expires, default 30
REFRESH_TOKEN_EXPIRE_DAYS= # days until token expires, default 7
```

Then for the first admin user:

```
# ------------- admin -------------
ADMIN_NAME="your_name"
ADMIN_EMAIL="your_email"
ADMIN_USERNAME="your_username"
ADMIN_PASSWORD="your_password"
```

For Redis caching:

```
# ------------- redis cache -------------
REDIS_CACHE_HOST="your_host" # default "localhost", if using docker compose you should use "redis"
REDIS_CACHE_PORT=6379 # default "6379", if using docker compose you should use "6379"
```

And for client-side caching:

```
# ------------- redis client-side cache -------------
CLIENT_CACHE_MAX_AGE=30 # default "30"
```

For ARQ Job Queues:

```
# ------------- redis queue -------------
REDIS_QUEUE_HOST="your_host" # default "localhost", if using docker compose you should use "redis"
REDIS_QUEUE_PORT=6379 # default "6379", if using docker compose you should use "6379"
```

> [!WARNING]
> You may use the same Redis instance for both caching and the queue while developing, but the recommendation is to use two separate containers in production.

To create the first tier:

```
# ------------- first tier -------------
TIER_NAME="free"
```

For the rate limiter:

```
# ------------- redis rate limit -------------
REDIS_RATE_LIMIT_HOST="localhost"   # default="localhost", if using docker compose you should use "redis"
REDIS_RATE_LIMIT_PORT=6379          # default=6379, if using docker compose you should use "6379"


# ------------- default rate limit settings -------------
DEFAULT_RATE_LIMIT_LIMIT=10         # default=10
DEFAULT_RATE_LIMIT_PERIOD=3600      # default=3600
```

For tests (optional to run):

```
# ------------- test -------------
TEST_NAME="Tester User"
TEST_EMAIL="test@tester.com"
TEST_USERNAME="testeruser"
TEST_PASSWORD="Str1ng$t"
```

And finally the environment:

```
# ------------- environment -------------
ENVIRONMENT="local"
```

`ENVIRONMENT` can be one of `local`, `staging` and `production`. It defaults to `local` and changes the behavior of the API `docs` endpoints:

- **local:** `/docs`, `/redoc` and `/openapi.json` available
- **staging:** `/docs`, `/redoc` and `/openapi.json` available for superusers
- **production:** `/docs`, `/redoc` and `/openapi.json` not available
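Roughly, this works by conditionally disabling FastAPI's built-in documentation routes. A minimal sketch of the idea (simplified; the boilerplate's actual logic lives in `src/app/core/setup.py` and also re-serves the docs behind a superuser check for `staging`):

```python
# A simplified sketch -- not the boilerplate's exact code.
from fastapi import FastAPI


def docs_kwargs(environment: str) -> dict:
    """Hide the docs endpoints outside the local environment."""
    if environment == "local":
        return {}  # keep FastAPI's default /docs, /redoc and /openapi.json
    # for "staging" the boilerplate serves the docs behind authentication instead;
    # for "production" they are disabled entirely
    return {"docs_url": None, "redoc_url": None, "openapi_url": None}


app = FastAPI(**docs_kwargs("production"))
```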
### 3.2 Docker Compose (preferred)

To run with docker compose, ensure you have docker and docker compose installed. Then, while in the base project directory (`FastAPI-boilerplate` here), run:

```sh
docker compose up
```

You should have a `web` container, a `postgres` container, a `worker` container and a `redis` container running.
Then head to `http://127.0.0.1:8000/docs`.

### 3.3 From Scratch

Install poetry:

```sh
pip install poetry
```

## 4. Usage

### 4.1 Docker Compose

If you used docker compose, your setup is done. You just need to ensure that when you run (while in the base folder):

```sh
docker compose up
```

You get the following outputs (in addition to many other outputs):

```sh
fastapi-boilerplate-worker-1  | ... redis_version=x.x.x mem_usage=999K clients_connected=1 db_keys=0
...
fastapi-boilerplate-db-1      | ... [1] LOG:  database system is ready to accept connections
...
fastapi-boilerplate-web-1     | INFO:     Application startup complete.
```

So you may skip to [5. Extending](#5-extending).

### 4.2 From Scratch

#### 4.2.1 Packages

In the `root` directory (`FastAPI-boilerplate` if you didn't change anything), run the following to install the required packages:

```sh
poetry install
```

Ensure it runs without any problems.

#### 4.2.2 Running PostgreSQL With Docker

> [!NOTE]
> If you already have PostgreSQL running, you may skip this step.

Install docker if you don't have it yet, then run:

```sh
docker pull postgres
```

And pick the port, name, user and password, replacing the fields:

```sh
docker run -d \
    -p {PORT}:{PORT} \
    --name {NAME} \
    -e POSTGRES_PASSWORD={PASSWORD} \
    -e POSTGRES_USER={USER} \
    postgres
```

Such as:

```sh
docker run -d \
    -p 5432:5432 \
    --name postgres \
    -e POSTGRES_PASSWORD=1234 \
    -e POSTGRES_USER=postgres \
    postgres
```

#### 4.2.3 Running Redis With Docker

> [!NOTE]
> If you already have Redis running, you may skip this step.

Install docker if you don't have it yet, then run:

```sh
docker pull redis:alpine
```

And pick the name and port, replacing the fields:

```sh
docker run -d \
  --name {NAME} \
  -p {PORT}:{PORT} \
  redis:alpine
```

Such as:

```sh
docker run -d \
  --name redis \
  -p 6379:6379 \
  redis:alpine
```

#### 4.2.4 Running the API

While in the `root` folder, run the following to start the application with the uvicorn server:

```sh
poetry run uvicorn src.app.main:app --reload
```

> [!TIP]
> The `--reload` flag enables auto-reload once you change (and save) something in the project.

### 4.3 Creating the first superuser

#### 4.3.1 Docker Compose

> [!WARNING]
> Make sure the DB and tables are created before running `create_superuser` (the db should be running and the api should have run at least once before).

If you are using docker compose, you should uncomment this part of the `docker-compose.yml`:

```yml
  #-------- uncomment to create first superuser --------
  # create_superuser:
  #   build:
  #     context: .
  #     dockerfile: Dockerfile
  #   env_file:
  #     - ./src/.env
  #   depends_on:
  #     - db
  #   command: python -m src.scripts.create_first_superuser
  #   volumes:
  #     - ./src:/code/src
```

Getting:

```yml
  #-------- uncomment to create first superuser --------
  create_superuser:
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - ./src/.env
    depends_on:
      - db
    command: python -m src.scripts.create_first_superuser
    volumes:
      - ./src:/code/src
```

While in the base project folder, run the following to start the services:

```sh
docker-compose up -d
```

It will automatically run the `create_superuser` script as well, but if you want to rerun it eventually:

```sh
docker-compose run --rm create_superuser
```

To stop the `create_superuser` service:

```sh
docker-compose stop create_superuser
```

#### 4.3.2 From Scratch

While in the `root` folder, run (after you started the application at least once to create the tables):

```sh
poetry run python -m src.scripts.create_first_superuser
```
#### 4.3.3 Creating the first tier

> [!WARNING]
> Make sure the DB and tables are created before running `create_tier` (the db should be running and the api should have run at least once before).

Creating the first tier is similar: just replace the `create_superuser` service with `create_tier` (or the `create_first_superuser` script with `create_first_tier`). If using `docker compose`, do not forget to uncomment the `create_tier` service in `docker-compose.yml`.

### 4.4 Database Migrations

> [!WARNING]
> To create the tables if you did not create the endpoints, ensure that you import the models in `src/app/models/__init__.py`. This step is crucial to create the new tables.

If you are using the db in docker, you need to change this in `docker-compose.yml` to run migrations:

```yml
  db:
    image: postgres:13
    env_file:
      - ./src/.env
    volumes:
      - postgres-data:/var/lib/postgresql/data
    # -------- replace with comment to run migrations with docker --------
    expose:
      - "5432"
    # ports:
    #  - 5432:5432
```

Getting:

```yml
  db:
    ...
    # expose:
    #  - "5432"
    ports:
      - 5432:5432
```

While in the `src` folder, run Alembic migrations:

```sh
poetry run alembic revision --autogenerate
```

And to apply the migration:

```sh
poetry run alembic upgrade head
```

> [!NOTE]
> If you do not have poetry, you may run it without poetry after running `pip install alembic`.

## 5. Extending

### 5.1 Project Structure

First, you may want to take a look at the project structure and understand what each file is doing.
```sh
.
├── Dockerfile                        # Dockerfile for building the application container.
├── docker-compose.yml                # Docker Compose file for defining multi-container applications.
├── pyproject.toml                    # Poetry configuration file with project metadata and dependencies.
├── README.md                         # Project README providing information and instructions.
├── LICENSE.md                        # License file for the project.
│
├── tests                             # Unit and integration tests for the application.
│   ├── __init__.py
│   ├── conftest.py                   # Configuration and fixtures for pytest.
│   ├── helper.py                     # Helper functions for tests.
│   └── test_user.py                  # Test cases for user-related functionality.
│
└── src                               # Source code directory.
    ├── __init__.py                   # Initialization file for the src package.
    ├── alembic.ini                   # Configuration file for Alembic (database migration tool).
    ├── poetry.lock                   # Poetry lock file specifying exact versions of dependencies.
    │
    ├── app                           # Main application directory.
    │   ├── __init__.py               # Initialization file for the app package.
    │   ├── main.py                   # Main entry point of the FastAPI application.
    │   │
    │   │
    │   ├── api                       # Folder containing API-related logic.
    │   │   ├── __init__.py
    │   │   ├── dependencies.py       # Defines dependencies for use across API endpoints.
    │   │   │
    │   │   └── v1                    # Version 1 of the API.
    │   │       ├── __init__.py
    │   │       ├── login.py          # API route for user login.
    │   │       ├── logout.py         # API route for user logout.
    │   │       ├── posts.py          # API routes for post operations.
    │   │       ├── rate_limits.py    # API routes for rate limiting functionalities.
    │   │       ├── tasks.py          # API routes for task management.
    │   │       ├── tiers.py          # API routes for user tier functionalities.
    │   │       └── users.py          # API routes for user management.
    │   │
    │   ├── core                      # Core utilities and configurations for the application.
    │   │   ├── __init__.py
    │   │   ├── config.py             # Configuration settings for the application.
    │   │   ├── logger.py             # Configuration for application logging.
    │   │   ├── schemas.py            # Pydantic schemas for data validation.
    │   │   ├── security.py           # Security utilities, such as password hashing.
    │   │   ├── setup.py              # Setup file for the FastAPI app instance.
    │   │   │
    │   │   ├── db                    # Core Database related modules.
    │   │   │   ├── __init__.py
    │   │   │   ├── crud_token_blacklist.py  # CRUD operations for token blacklist.
    │   │   │   ├── database.py       # Database connectivity and session management.
    │   │   │   ├── models.py         # Core Database models.
    │   │   │   └── token_blacklist.py  # Model for token blacklist functionality.
    │   │   │
    │   │   ├── exceptions            # Custom exception classes.
    │   │   │   ├── __init__.py
    │   │   │   ├── cache_exceptions.py   # Exceptions related to cache operations.
    │   │   │   └── http_exceptions.py    # HTTP-related exceptions.
    │   │   │
    │   │   ├── utils                 # Utility functions and helpers.
    │   │   │   ├── __init__.py
    │   │   │   ├── cache.py          # Cache-related utilities.
    │   │   │   ├── queue.py          # Utilities for task queue management.
    │   │   │   └── rate_limit.py     # Rate limiting utilities.
    │   │   │
    │   │   └── worker                # Worker script for background tasks.
    │   │       ├── __init__.py
    │   │       ├── settings.py       # Worker configuration and settings.
    │   │       └── functions.py      # Async task definitions and management.
    │   │
    │   ├── crud                      # CRUD operations for the application.
    │   │   ├── __init__.py
    │   │   ├── crud_base.py          # Base class for CRUD operations.
    │   │   ├── crud_posts.py         # CRUD operations for posts.
    │   │   ├── crud_rate_limit.py    # CRUD operations for rate limiting.
    │   │   ├── crud_tier.py          # CRUD operations for user tiers.
    │   │   ├── crud_users.py         # CRUD operations for users.
    │   │   └── helper.py             # Helper functions for CRUD operations.
    │   │
    │   ├── logs                      # Directory for log files.
    │   │   └── app.log               # Log file for the application.
    │   │
    │   ├── middleware                # Middleware components for the application.
    │   │   └── client_cache_middleware.py  # Middleware for client-side caching.
    │   │
    │   └── models                    # SQLModel db and validation models for the application.
    │       ├── __init__.py
    │       ├── post.py               # SQLModel models for posts.
    │       ├── rate_limit.py         # SQLModel models for rate limiting.
    │       ├── tier.py               # SQLModel models for user tiers.
    │       └── user.py               # SQLModel models for users.
    │
    ├── migrations                    # Alembic migration scripts for database changes.
    │   ├── README
    │   ├── env.py                    # Environment configuration for Alembic.
    │   ├── script.py.mako            # Template script for Alembic migrations.
    │   │
    │   └── versions                  # Individual migration scripts.
    │       └── README.MD
    │
    └── scripts                       # Utility scripts for the application.
        ├── __init__.py
        ├── create_first_superuser.py # Script to create the first superuser.
        └── create_first_tier.py      # Script to create the first user tier.
```

### 5.2 Database Model

Create the new entities and relationships and add them to the model <br>
![diagram](https://user-images.githubusercontent.com/43156212/284426387-bdafc637-0473-4b71-890d-29e79da288cf.png)
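If your new entity relates to an existing one, SQLModel models can declare the link directly. A hypothetical one-to-many sketch (the `Team`/`Hero` names are illustrative, following SQLModel's documented `Relationship` pattern):

```python
from sqlmodel import Field, Relationship, SQLModel


class Team(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    name: str

    heroes: list["Hero"] = Relationship(back_populates="team")


class Hero(SQLModel, table=True):
    id: int | None = Field(default=None, primary_key=True)
    name: str
    team_id: int | None = Field(default=None, foreign_key="team.id")

    team: Team | None = Relationship(back_populates="heroes")
```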
#### 5.2.1 Token Blacklist

Note that this table is used to blacklist the `JWT` tokens (it's how you log a user out). <br>
![diagram](https://user-images.githubusercontent.com/43156212/284426382-b2f3c0ca-b8ea-4f20-b47e-de1bad2ca283.png)

### 5.3 DB Models

Inside `app/models`, create a new `entity.py` for each new entity (replacing entity with the name) and define the attributes according to [SQLModel standards](https://sqlmodel.tiangolo.com/tutorial/create-db-and-table/):

```python
from sqlmodel import Field, SQLModel


class Entity(SQLModel, table=True):
    __tablename__ = "entity"

    id: int | None = Field(default=None, primary_key=True)
    name: str = Field(max_length=30)
    ...
```

### 5.4 Validation Models

Inside each `entity.py` in `app/models`, create your SQLModel data validation models for each new entity. You'll now use `table=False` (or just leave it blank, as it's the default):

```python
from datetime import datetime

from sqlmodel import Field, SQLModel


# this should be here already
class Entity(SQLModel, table=True):
    __tablename__ = "entity"

    id: int | None = Field(default=None, primary_key=True)
    name: str = Field(max_length=30)
    ...


# now we'll create the other validation models
class EntityBase(SQLModel):
    name: str
    ...


class EntityRead(EntityBase):
    ...


class EntityCreate(EntityBase):
    ...


class EntityCreateInternal(EntityCreate):
    ...


class EntityUpdate(SQLModel):
    ...


class EntityUpdateInternal(SQLModel):
    ...


class EntityDelete(SQLModel):
    model_config = {"extra": "forbid"}

    is_deleted: bool
    deleted_at: datetime
```

### 5.5 Alembic Migrations

> [!WARNING]
> To create the tables if you did not create the endpoints, ensure that you import the models in `src/app/models/__init__.py`. This step is crucial to create the new models.
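For example, a plausible `src/app/models/__init__.py` (a sketch; the class names are assumed to match the modules listed in the project structure above):

```python
# src/app/models/__init__.py
# Import every model module so Alembic's autogenerate can see the tables.
from .entity import Entity  # your new model from section 5.3
from .post import Post
from .rate_limit import RateLimit
from .tier import Tier
from .user import User
```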
Then, while in the `src` folder, run Alembic migrations:

```sh
poetry run alembic revision --autogenerate
```

And to apply the migration:

```sh
poetry run alembic upgrade head
```

### 5.6 CRUD

Inside `app/crud`, create a new `crud_entities.py` inheriting from `FastCRUD` for each new entity:

```python
from fastcrud import FastCRUD

from app.models.entity import Entity, EntityCreateInternal, EntityUpdate, EntityUpdateInternal, EntityDelete

CRUDEntity = FastCRUD[Entity, EntityCreateInternal, EntityUpdate, EntityUpdateInternal, EntityDelete]
crud_entity = CRUDEntity(Entity)
```

So, for users:

```python
# crud_users.py
from fastcrud import FastCRUD

from app.models.user import User, UserCreateInternal, UserUpdate, UserUpdateInternal, UserDelete

CRUDUser = FastCRUD[User, UserCreateInternal, UserUpdate, UserUpdateInternal, UserDelete]
crud_users = CRUDUser(User)
```

To understand the methods that `crud_users` (a `FastCRUD` instance) provides, head to [FastCRUD's documentation](https://igorbenav.github.io/fastcrud/).
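As a quick taste of what the instance gives you, here is a sketch under the entity schemas from above (method names follow FastCRUD's documented API; check the docs for exact signatures):

```python
from sqlalchemy.ext.asyncio import AsyncSession

from app.crud.crud_entity import crud_entity
from app.models.entity import EntityCreateInternal, EntityRead, EntityUpdate


async def entity_examples(db: AsyncSession) -> None:
    entity = await crud_entity.get(db=db, schema_to_select=EntityRead, id=1)  # one record (as a dict) or None
    exists = await crud_entity.exists(db=db, name="some name")  # lightweight existence check
    total = await crud_entity.count(db=db)  # number of matching records
    await crud_entity.create(db=db, object=EntityCreateInternal(name="new name"))
    await crud_entity.update(db=db, object=EntityUpdate(name="renamed"), id=1)
    await crud_entity.delete(db=db, id=1)  # soft-deletes when the model has is_deleted
```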
### 5.7 Routes

Inside `app/api/v1`, create a new `entities.py` file and create the desired routes:

```python
from typing import Annotated

import fastapi
from fastapi import Depends, Request
from sqlalchemy.ext.asyncio import AsyncSession

from app.models.entity import EntityRead
from app.core.db.database import async_get_db
from app.crud.crud_entity import crud_entity

...

router = fastapi.APIRouter(tags=["entities"])


@router.get("/entities/{id}", response_model=EntityRead)
async def read_entity(request: Request, id: int, db: Annotated[AsyncSession, Depends(async_get_db)]):
    entity = await crud_entity.get(db=db, id=id)

    return entity


...
```

Then in `app/api/v1/__init__.py` add the router such as:

```python
from fastapi import APIRouter

from app.api.v1.entity import router as entity_router

...

router = APIRouter(prefix="/v1")  # this should be there already
...
router.include_router(entity_router)
```

#### 5.7.1 Paginated Responses

With the `get_multi` method we get a python `dict` with full support for pagination:

```javascript
{
  "data": [
    {
      "id": 4,
      "name": "User Userson",
      "username": "userson4",
      "email": "user.userson4@example.com",
      "profile_image_url": "https://profileimageurl.com"
    },
    {
      "id": 5,
      "name": "User Userson",
      "username": "userson5",
      "email": "user.userson5@example.com",
      "profile_image_url": "https://profileimageurl.com"
    }
  ],
  "total_count": 2,
  "has_more": false,
  "page": 1,
  "items_per_page": 10
}
```

And in the endpoint, we can import the following functions and Pydantic schema from `fastcrud.paginated`:

```python
from fastcrud.paginated import (
    PaginatedListResponse,  # What you'll use as a response_model to validate
    paginated_response,  # Creates a paginated response based on the parameters
    compute_offset,  # Calculate the offset for pagination ((page - 1) * items_per_page)
)
```

Then let's create the endpoint:

```python
import fastapi

from app.models.entity import EntityRead
from app.core.db.database import async_get_db
from app.crud.crud_entity import crud_entity

...


@router.get("/entities", response_model=PaginatedListResponse[EntityRead])
async def read_entities(
    request: Request, db: Annotated[AsyncSession, Depends(async_get_db)], page: int = 1, items_per_page: int = 10
):
    entities_data = await crud_entity.get_multi(
        db=db,
        offset=compute_offset(page, items_per_page),
        limit=items_per_page,
        schema_to_select=EntityRead,
        is_deleted=False,
    )

    return paginated_response(crud_data=entities_data, page=page, items_per_page=items_per_page)
```

#### 5.7.2 HTTP Exceptions

To add exceptions you may just import from `app/core/exceptions/http_exceptions` and optionally add a detail:

```python
from app.core.exceptions.http_exceptions import NotFoundException

# If you want to specify the detail, just add the message
if not user:
    raise NotFoundException("User not found")

# Or you may just use the default message
if not post:
    raise NotFoundException()
```

**The predefined possibilities in http_exceptions are the following:**

- `CustomException`: 500 internal error
- `BadRequestException`: 400 bad request
- `NotFoundException`: 404 not found
- `ForbiddenException`: 403 forbidden
- `UnauthorizedException`: 401 unauthorized
- `UnprocessableEntityException`: 422 unprocessable entity
- `DuplicateValueException`: 422 unprocessable entity
- `RateLimitException`: 429 too many requests

### 5.8 Caching

The `cache` decorator allows you to cache the results of FastAPI endpoint functions, enhancing response times and reducing the load on your application by storing and retrieving data in a cache.

Caching the response of an endpoint is really simple: just apply the `cache` decorator to the endpoint function.

> [!WARNING]
> Note that you should always pass `request` as a variable to your endpoint function if you plan to use the `cache` decorator.

```python
...
from app.core.utils.cache import cache


@app.get("/sample/{my_id}")
@cache(key_prefix="sample_data", expiration=3600, resource_id_name="my_id")
async def sample_endpoint(request: Request, my_id: int):
    # Endpoint logic here
    return {"data": "my_data"}
```

The way it works is:

- the data is saved in redis with the following cache key: `sample_data:{my_id}`
- then the time to expire is set as 3600 seconds (that's the default)

Another option is not passing the `resource_id_name`, but passing the `resource_id_type` (default int):

```python
...
from app.core.utils.cache import cache


@app.get("/sample/{my_id}")
@cache(key_prefix="sample_data", resource_id_type=int)
async def sample_endpoint(request: Request, my_id: int):
    # Endpoint logic here
    return {"data": "my_data"}
```

In this case, what will happen is:

- the `resource_id` will be inferred from the keyword arguments (`my_id` in this case)
- the data is saved in redis with the following cache key: `sample_data:{my_id}`
- then the time to expire is set as 3600 seconds (that's the default)

Passing `resource_id_name` is usually preferred.
### 5.9 More Advanced Caching

The behaviour of the `cache` decorator changes based on the request method of your endpoint.
It caches the result if you are passing it to a **GET** endpoint, and it invalidates the cache with this key_prefix and id if passed to other endpoints (**PATCH**, **DELETE**).

#### Invalidating Extra Keys

If you also want to invalidate cache with a different key, you can use the decorator with the `to_invalidate_extra` variable.

In the following example, I want to invalidate the cache for a certain `user_id`, since I'm deleting it, but I also want to invalidate the cache for the list of users, so it will not be out of sync.

```python
# The cache here will be saved as "{username}_posts:{username}":
@router.get("/{username}/posts", response_model=List[PostRead])
@cache(key_prefix="{username}_posts", resource_id_name="username")
async def read_posts(request: Request, username: str, db: Annotated[AsyncSession, Depends(async_get_db)]):
    ...


...

# Invalidating cache for the former endpoint by just passing the key_prefix and id as a dictionary:
@router.delete("/{username}/post/{id}")
@cache(
    "{username}_post_cache",
    resource_id_name="id",
    to_invalidate_extra={"{username}_posts": "{username}"},  # also invalidate "{username}_posts:{username}" cache
)
async def erase_post(
    request: Request,
    username: str,
    id: int,
    current_user: Annotated[UserRead, Depends(get_current_user)],
    db: Annotated[AsyncSession, Depends(async_get_db)],
):
    ...


# And now I'll also invalidate when I update the user:
@router.patch("/{username}/post/{id}", response_model=PostRead)
@cache("{username}_post_cache", resource_id_name="id", to_invalidate_extra={"{username}_posts": "{username}"})
async def patch_post(
    request: Request,
    username: str,
    id: int,
    values: PostUpdate,
    current_user: Annotated[UserRead, Depends(get_current_user)],
    db: Annotated[AsyncSession, Depends(async_get_db)],
):
    ...
```

> [!WARNING]
> Note that adding `to_invalidate_extra` will not work for **GET** requests.

#### Invalidate Extra By Pattern

Let's assume we have an endpoint with a paginated response, such as:

```python
@router.get("/{username}/posts", response_model=PaginatedListResponse[PostRead])
@cache(
    key_prefix="{username}_posts:page_{page}:items_per_page:{items_per_page}",
    resource_id_name="username",
    expiration=60,
)
async def read_posts(
    request: Request,
    username: str,
    db: Annotated[AsyncSession, Depends(async_get_db)],
    page: int = 1,
    items_per_page: int = 10,
):
    db_user = await crud_users.get(db=db, schema_to_select=UserRead, username=username, is_deleted=False)
    if not db_user:
        raise HTTPException(status_code=404, detail="User not found")

    posts_data = await crud_posts.get_multi(
        db=db,
        offset=compute_offset(page, items_per_page),
        limit=items_per_page,
        schema_to_select=PostRead,
        created_by_user_id=db_user["id"],
        is_deleted=False,
    )

    return paginated_response(crud_data=posts_data, page=page, items_per_page=items_per_page)
```

Just passing `to_invalidate_extra` will not work to invalidate this cache, since the key will change based on the `page` and `items_per_page` values.
To overcome this we may use the `pattern_to_invalidate_extra` parameter:

```python
@router.patch("/{username}/post/{id}")
@cache("{username}_post_cache", resource_id_name="id", pattern_to_invalidate_extra=["{username}_posts:*"])
async def patch_post(
    request: Request,
    username: str,
    id: int,
    values: PostUpdate,
    current_user: Annotated[UserRead, Depends(get_current_user)],
    db: Annotated[AsyncSession, Depends(async_get_db)],
):
    ...
```

Now it will invalidate all caches with a key that matches the pattern `"{username}_posts:*"`, which will work for the paginated responses.

> [!CAUTION]
> Using `pattern_to_invalidate_extra` can be resource-intensive on large datasets. Use it judiciously and consider the potential impact on Redis performance. Be cautious with patterns that could match a large number of keys, as deleting many keys simultaneously may impact the performance of the Redis server.
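Under the hood, pattern invalidation boils down to scanning the keyspace for matching keys and deleting them, which is why its cost scales with the number of keys. A rough sketch of the idea with `redis.asyncio` (not the boilerplate's exact code):

```python
import redis.asyncio as redis


async def invalidate_pattern(client: redis.Redis, pattern: str) -> None:
    # SCAN iterates the keyspace in chunks instead of blocking like KEYS,
    # but it still has to visit and delete every matching key.
    async for key in client.scan_iter(match=pattern):
        await client.delete(key)
```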
#### Client-side Caching

For `client-side caching`, all you have to do is let the `Settings` class defined in `app/core/config.py` inherit from the `ClientSideCacheSettings` class. You can set the `CLIENT_CACHE_MAX_AGE` value in `.env`; it defaults to 60 (seconds).
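An abridged sketch of what that inheritance looks like (the real `Settings` in `src/app/core/config.py` mixes in many more settings classes, as shown in section 5.15, and the field default here is an assumption mirroring the `.env` section above):

```python
# src/app/core/config.py -- abridged sketch, not the full file
from pydantic_settings import BaseSettings


class ClientSideCacheSettings(BaseSettings):
    CLIENT_CACHE_MAX_AGE: int = 30  # assumption: overridden by the value in .env


class Settings(
    ClientSideCacheSettings,  # this inheritance is what enables client-side caching
):
    pass


settings = Settings()
```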
### 5.10 ARQ Job Queues

Depending on the problem your API is solving, you might want to implement a job queue. A job queue allows you to run tasks in the background, and is usually aimed at functions that require longer run times and don't directly impact user response in your frontend. As a rule of thumb, if a task takes more than 2 seconds to run, can be executed asynchronously, and its result is not needed for the next step of the user's interaction, then it is a good candidate for the job queue.

> [!TIP]
> Very common candidates for background functions are calls to and from LLM endpoints (e.g. OpenAI or OpenRouter), since they span tens of seconds and their results often need to be further parsed and saved.

#### Background task creation

For simple background tasks, you can just create a function in the `app/core/worker/functions.py` file. For more complex tasks, we recommend you create a new file in the `app/core/worker` directory.

```python
import asyncio


async def sample_background_task(ctx, name: str) -> str:
    await asyncio.sleep(5)
    return f"Task {name} is complete!"
```

Then add the function to the `functions` variable of the `WorkerSettings` class in `app/core/worker/settings.py` to make it available to the worker. If you created a new file in the `app/core/worker` directory, simply import this function in the `app/core/worker/settings.py` file:

```python
from .functions import sample_background_task
from .your_module import sample_complex_background_task

class WorkerSettings:
    functions = [sample_background_task, sample_complex_background_task]
    ...
```

#### Add the task to an endpoint

Once you have created the background task, you can add it to any endpoint of your choice to be enqueued. The best practice is to enqueue the task in a **POST** endpoint, while having a **GET** endpoint to get more information on the task. For more details on how job results are handled, check the [ARQ docs](https://arq-docs.helpmanual.io/#job-results).

```python
@router.post("/task", response_model=Job, status_code=201)
async def create_task(message: str):
    job = await queue.pool.enqueue_job("sample_background_task", message)
    return {"id": job.job_id}


@router.get("/task/{task_id}")
async def get_task(task_id: str):
    job = ArqJob(task_id, queue.pool)
    return await job.info()
```

And finally run the worker in parallel to your FastAPI application.

> [!IMPORTANT]
> For any change to `sample_background_task` to be reflected in the worker, you need to restart the worker (e.g. the docker container).

If you are using `docker compose`, the worker is already running.
If you are doing it from scratch, run the following while in the `root` folder:

```sh
poetry run arq src.app.core.worker.settings.WorkerSettings
```

#### Database session with background tasks

With time, your background functions will become 'workflows', increasing in complexity and requirements. Probably, you will need to use a database session to get, create, update, or delete data as part of this workflow.

To do this, you can add the database session to the `ctx` object in the `startup` and `shutdown` functions in `app/core/worker/functions.py`, like in the example below:

```python
from arq.worker import Worker

from ...core.db.database import async_get_db


async def startup(ctx: Worker) -> None:
    ctx["db"] = await anext(async_get_db())
    logging.info("Worker Started")


async def shutdown(ctx: Worker) -> None:
    await ctx["db"].close()
    logging.info("Worker end")
```

This will allow you to have the async database session always available in any background function and automatically close it on worker shutdown. Once you have this database session, you can use it as follows:

```python
from arq.worker import Worker


async def your_background_function(
    ctx: Worker,
    post_id: int,
    ...
) -> Any:
    db = ctx["db"]
    post = await crud_posts.get(db=db, schema_to_select=PostRead, id=post_id)
    ...
```

> [!WARNING]
> When using database sessions, you will want to use Pydantic objects. However, these objects don't mingle well with the serialization required by ARQ tasks and will be retrieved as a dictionary.
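In practice that means passing plain data across the queue boundary and re-validating inside the task. A sketch of the pattern (hypothetical names, assuming Pydantic v2's `model_dump`/`model_validate`):

```python
from app.core.utils import queue
from app.models.post import PostRead


# enqueue side: pass JSON/pickle-safe data, not model instances
async def enqueue_post_processing(post: PostRead) -> None:
    await queue.pool.enqueue_job("process_post", post.model_dump())


# worker side: the argument arrives as a plain dict, so re-validate it
async def process_post(ctx, post_data: dict) -> None:
    post = PostRead.model_validate(post_data)
    ...
```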
### 5.11 Rate Limiting

To limit how many times a user can make a request in a certain interval of time (very useful to create subscription plans or just to protect your API against DDoS), you may just use the `rate_limiter` dependency:

```python
from fastapi import Depends

from app.api.dependencies import rate_limiter
from app.core.utils import queue
from app.models.job import Job


@router.post("/task", response_model=Job, status_code=201, dependencies=[Depends(rate_limiter)])
async def create_task(message: str):
    job = await queue.pool.enqueue_job("sample_background_task", message)
    return {"id": job.job_id}
```

By default, if no token is passed in the header (that is, the user is not authenticated), the user will be limited by their IP address with the default `limit` (how many times the user can make this request every period) and `period` (time in seconds) defined in `.env`.

Even though this is useful, the real power comes from creating `tiers` (categories of users) and standard `rate_limits` (`limits` and `periods` defined for specific `paths`, that is, endpoints) for these tiers.

All of the `tier` and `rate_limit` models and endpoints are already created in the respective folders (and usable only by superusers). You may use the `create_tier` script to create the first tier (it uses the `.env` variable `TIER_NAME`, which is all you need to create a tier) or just use the api.

Here I'll create a `free` tier:

<p align="left">
    <img src="https://user-images.githubusercontent.com/43156212/282275103-d9c4f511-4cfa-40c6-b882-5b09df9f62b9.png" alt="passing name = free to api request body" width="70%" height="auto">
</p>

And a `pro` tier:

<p align="left">
    <img src="https://user-images.githubusercontent.com/43156212/282275107-5a6ca593-ccc0-4965-b2db-09ec5ecad91c.png" alt="passing name = pro to api request body" width="70%" height="auto">
</p>

Then I'll associate a `rate_limit` with the path `api/v1/tasks/task` for each of them.

> [!WARNING]
> Do not forget to add `api/v1/...` or any other prefix to the beginning of your path. For the structure of the boilerplate, use `api/v1/<rest_of_the_path>`.

1 request every hour (3600 seconds) for the free tier:

<p align="left">
    <img src="https://user-images.githubusercontent.com/43156212/282275105-95d31e19-b798-4f03-98f0-3e9d1844f7b3.png" alt="passing path=api/v1/tasks/task, limit=1, period=3600, name=api_v1_tasks:1:3600 to free tier rate limit" width="70%" height="auto">
</p>

10 requests every hour for the pro tier:

<p align="left">
    <img src="https://user-images.githubusercontent.com/43156212/282275108-deec6f46-9d47-4f01-9899-ca42da0f0363.png" alt="passing path=api/v1/tasks/task, limit=10, period=3600, name=api_v1_tasks:10:3600 to pro tier rate limit" width="70%" height="auto">
</p>

Now let's read all the tiers available (`GET api/v1/tiers`):

```javascript
{
  "data": [
    {
      "name": "free",
      "id": 1,
      "created_at": "2023-11-11T05:57:25.420360"
    },
    {
      "name": "pro",
      "id": 2,
      "created_at": "2023-11-12T00:40:00.759847"
    }
  ],
  "total_count": 2,
  "has_more": false,
  "page": 1,
  "items_per_page": 10
}
```

And read the `rate_limits` for the `pro` tier to ensure it's working (`GET api/v1/tier/pro/rate_limits`):

```javascript
{
  "data": [
    {
      "path": "api_v1_tasks_task",
      "limit": 10,
      "period": 3600,
      "id": 1,
      "tier_id": 2,
      "name": "api_v1_tasks:10:3600"
    }
  ],
  "total_count": 1,
  "has_more": false,
  "page": 1,
  "items_per_page": 10
}
```

Now, whenever an authenticated user makes a `POST` request to `api/v1/tasks/task`, they'll use the quota that is defined by their tier.
You may check this by getting the token from the `api/v1/login` endpoint, then passing it in the request header:

```sh
curl -X POST 'http://127.0.0.1:8000/api/v1/tasks/task?message=test' \
-H 'Authorization: Bearer <your-token-here>'
```

> [!TIP]
> Since the `rate_limiter` dependency uses the `get_optional_user` dependency instead of `get_current_user`, it will not require authentication to be used, but will behave accordingly if the user is authenticated (and the token is passed in the header). If you want to enforce authentication, also use `get_current_user`.

To change a user's tier, you may just use the `PATCH api/v1/user/{username}/tier` endpoint.
Note that for flexibility (since this is a boilerplate), it's not necessary to inform a tier_id when creating a user, but you probably should set every user to a certain tier (let's say `free`) once they are created.

> [!WARNING]
> If a user does not have a `tier` or the tier does not have a defined `rate limit` for the path and the token is still passed to the request, the default `limit` and `period` will be used, and this will be logged in `app/logs`.
### 5.12 JWT Authentication

#### 5.12.1 Details

The JWT in this boilerplate is created in the following way:

1. **JWT Access Tokens:** how you actually access protected resources: you pass this token in the request header.
1. **Refresh Tokens:** you use this type of token to get an `access token`, which you'll then use to access protected resources.

The `access token` is short-lived (default 30 minutes) to reduce the damage of a potential leak. The `refresh token`, on the other hand, is long-lived (default 7 days), and you use it to renew your `access token` without the need to provide username and password every time it expires.

Since the `refresh token` lasts for a longer time, it's stored as a cookie in a secure way:

```python
# app/api/v1/login

...
response.set_cookie(
    key="refresh_token",
    value=refresh_token,
    httponly=True,  # Prevent access through JavaScript
    secure=True,  # Ensure cookie is sent over HTTPS only
    samesite="Lax",  # Default to Lax for reasonable balance between security and usability
    max_age=number_of_seconds,  # Set a max age for the cookie
)
...
```

You may change it to suit your needs. The possible options for `samesite` are:

- `Lax`: Cookies will be sent in top-level navigations (like clicking on a link to go to another site), but not in API requests or images loaded from other sites.
- `Strict`: Cookies are sent only on top-level navigations from the same site that set the cookie, enhancing privacy but potentially disrupting user sessions.
- `None`: Cookies will be sent with both same-site and cross-site requests.
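A sketch of how two tokens with these lifetimes are typically minted (assuming the `python-jose` library, a common choice for FastAPI projects; the boilerplate's actual helpers live in `src/app/core/security.py` and may differ):

```python
from datetime import datetime, timedelta, timezone

from jose import jwt  # assumption: python-jose

SECRET_KEY = "result-of-openssl-rand-hex-32"  # from your .env
ALGORITHM = "HS256"


def create_token(subject: str, lifetime: timedelta) -> str:
    expire = datetime.now(timezone.utc) + lifetime
    return jwt.encode({"sub": subject, "exp": expire}, SECRET_KEY, algorithm=ALGORITHM)


access_token = create_token("some_username", timedelta(minutes=30))  # short-lived
refresh_token = create_token("some_username", timedelta(days=7))  # long-lived, stored as the cookie above
```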
#### 5.12.2 Usage

What you should do with the client is:

- `Login`: Send credentials to `/api/v1/login`. Store the returned access token in memory for subsequent requests.
- `Accessing Protected Routes`: Include the access token in the Authorization header.
- `Token Renewal`: On access token expiry, the frontend should automatically call `/api/v1/refresh` for a new token.
- `Login Again`: If the refresh token is expired, credentials should be sent to `/api/v1/login` again, storing the new access token in memory.
- `Logout`: Call `/api/v1/logout` to end the session securely.

This authentication setup provides a robust, secure, and user-friendly way to handle user sessions in your API applications.
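A sketch of that flow from a Python client's perspective (using `httpx`; the payload and response shapes are assumptions based on the list above, so adjust to your endpoints):

```python
import httpx

BASE = "http://127.0.0.1:8000/api/v1"

with httpx.Client() as client:
    # Login: the access token comes back in the body; the refresh token
    # is set as an httponly cookie that the client jar keeps for us.
    resp = client.post(f"{BASE}/login", data={"username": "some_username", "password": "some_password"})
    access_token = resp.json()["access_token"]

    # Access a protected route with the access token in the Authorization header.
    headers = {"Authorization": f"Bearer {access_token}"}
    me = client.get(f"{BASE}/users", headers=headers)

    # On expiry, renew: the cookie jar re-sends the refresh_token cookie automatically.
    if me.status_code == 401:
        access_token = client.post(f"{BASE}/refresh").json()["access_token"]
```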
### 5.13 Running

If you are using docker compose, just running the following command should ensure everything is working:

```sh
docker compose up
```

If you are doing it from scratch, ensure your postgres and your redis are running, then, while in the `root` folder, run the following to start the application with the uvicorn server:

```sh
poetry run uvicorn src.app.main:app --reload
```

And for the worker:

```sh
poetry run arq src.app.core.worker.settings.WorkerSettings
```

### 5.14 Create Application

If you want to stop tables from being created every time you run the api, you should disable this here:

```python
# app/main.py

from .api import router
from .core.config import settings
from .core.setup import create_application

# create_tables_on_start defaults to True
app = create_application(router=router, settings=settings, create_tables_on_start=False)
```

This `create_application` function is defined in `app/core/setup.py`, and it's a flexible way to configure the behavior of your application.

A few examples:

- Deactivate or password-protect `/docs`
- Add client-side cache middleware
- Add startup and shutdown event handlers for cache, queue and rate limit

### 5.15 Opting Out of Services

To opt out of services (like `Redis`, `Queue`, `Rate Limiter`), head to the `Settings` class in `src/app/core/config`:

```python
# src/app/core/config
import os
from enum import Enum

from pydantic_settings import BaseSettings
from starlette.config import Config

current_file_dir = os.path.dirname(os.path.realpath(__file__))
env_path = os.path.join(current_file_dir, "..", "..", ".env")
config = Config(env_path)
...

class Settings(
    AppSettings,
    PostgresSettings,
    CryptSettings,
    FirstUserSettings,
    TestSettings,
    RedisCacheSettings,
    ClientSideCacheSettings,
    RedisQueueSettings,
    RedisRateLimiterSettings,
    DefaultRateLimitSettings,
    EnvironmentSettings,
):
    pass


settings = Settings()
```

And remove the Settings of the services you do not need. For example, without using redis (removed `Cache`, `Queue` and `Rate limit`):

```python
class Settings(
    AppSettings,
    PostgresSettings,
    CryptSettings,
    FirstUserSettings,
    TestSettings,
    ClientSideCacheSettings,
    DefaultRateLimitSettings,
    EnvironmentSettings,
):
    pass
```

Then comment out or remove the services you do not want from `docker-compose.yml`. Here, I removed the `redis` and `worker` services:

```yml
version: '3.8'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    # -------- replace with comment to run with gunicorn --------
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
    # command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
    env_file:
      - ./src/.env
    # -------- replace with comment if you are using nginx --------
    ports:
      - "8000:8000"
    # expose:
    #   - "8000"
    depends_on:
      - db
    volumes:
      - ./src/app:/code/app
      - ./src/.env:/code/.env
  db:
    image: postgres:13
    env_file:
      - ./src/.env
    volumes:
      - postgres-data:/var/lib/postgresql/data
    # -------- replace with comment to run migrations with docker --------
    expose:
      - "5432"
    # ports:
    #  - 5432:5432

volumes:
  postgres-data:
  #pgadmin-data:
```

## 6. Running in Production

### 6.1 Uvicorn Workers with Gunicorn

In production you may want to run using gunicorn to manage uvicorn workers:

```sh
command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
```

Here it's running with 4 workers, but you should tune it depending on how many cores your machine has; a common starting point is `(2 x cores) + 1`.
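For instance, to compute that starting point on the machine you're deploying to (the `(2 x cores) + 1` rule of thumb comes from the gunicorn docs; benchmark for your actual workload):

```python
import multiprocessing

# gunicorn's suggested default: two workers per core, plus one
workers = multiprocessing.cpu_count() * 2 + 1
print(f"gunicorn app.main:app -w {workers} -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000")
```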
To do this with docker compose, just swap the comment. This part in `docker-compose.yml`:

```YAML
# docker-compose.yml

# -------- replace with comment to run with gunicorn --------
command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
# command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
```

Should be changed to:

```YAML
# docker-compose.yml

# -------- replace with comment to run with uvicorn --------
# command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
```

And the same in `Dockerfile`. This part:

```Dockerfile
# Dockerfile

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
# CMD ["gunicorn", "app.main:app", "-w", "4", "-k", "uvicorn.workers.UvicornWorker", "-b", "0.0.0.0:8000"]
```

Should be changed to:

```Dockerfile
# Dockerfile

# CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
CMD ["gunicorn", "app.main:app", "-w", "4", "-k", "uvicorn.workers.UvicornWorker", "-b", "0.0.0.0:8000"]
```

> [!CAUTION]
> Do not forget to set the `ENVIRONMENT` in `.env` to `production` unless you want the API docs to be public.

### 6.2 Running with NGINX

NGINX is a high-performance web server, known for its stability, rich feature set, simple configuration, and low resource consumption. NGINX acts as a reverse proxy: it receives client requests, forwards them to the FastAPI server (running via Uvicorn or Gunicorn), and then passes the responses back to the clients.

To run with NGINX, you start by uncommenting the following part in your `docker-compose.yml`:

```YAML
# docker-compose.yml

...
# -------- uncomment to run with nginx --------
# nginx:
#   image: nginx:latest
#   ports:
#     - "80:80"
#   volumes:
#     - ./default.conf:/etc/nginx/conf.d/default.conf
#   depends_on:
#     - web
...
```

Which should be changed to:

```YAML
# docker-compose.yml

...
  #-------- uncomment to run with nginx --------
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - web
...
```

Then comment the following part:

```YAML
# docker-compose.yml

services:
  web:
    ...
    # -------- Both of the following should be commented to run with nginx --------
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
    # command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
```

Which becomes:

```YAML
# docker-compose.yml

services:
  web:
    ...
    # -------- Both of the following should be commented to run with nginx --------
    # command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
    # command: gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
```

Then pick the way you want to run (uvicorn or gunicorn managing uvicorn workers) in `Dockerfile`.
The one you want should be uncommented; comment the other one.

```Dockerfile
# Dockerfile

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
# CMD ["gunicorn", "app.main:app", "-w", "4", "-k", "uvicorn.workers.UvicornWorker", "-b", "0.0.0.0:8000"]
```

And finally head to `http://localhost/docs`.
#### 6.2.1 One Server

If you want to run with one server only, your setup should be ready. Just make sure the only part that is not commented out in `default.conf` is:

```conf
# default.conf

# ---------------- Running With One Server ----------------
server {
    listen 80;

    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

So just type in your browser: `http://localhost/docs`.

#### 6.2.2 Multiple Servers

NGINX can distribute incoming network traffic across multiple servers, improving the efficiency and capacity utilization of your application.

To run with multiple servers, just comment the `Running With One Server` part in `default.conf` and uncomment the other one:

```conf
# default.conf

# ---------------- Running With One Server ----------------
...

# ---------------- To Run with Multiple Servers, Uncomment below ----------------
upstream fastapi_app {
    server fastapi1:8000;  # Replace with actual server names or IP addresses
    server fastapi2:8000;
    # Add more servers as needed
}

server {
    listen 80;

    location / {
        proxy_pass http://fastapi_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

And finally, in your browser: `http://localhost/docs`.

> [!WARNING]
> Note that we are using `fastapi1:8000` and `fastapi2:8000` as examples; you should replace them with the actual names of your services and the ports they are running on.

## 7. Testing

For tests, ensure you have in `.env`:

```
# ------------- test -------------
TEST_NAME="Tester User"
TEST_EMAIL="test@tester.com"
TEST_USERNAME="testeruser"
TEST_PASSWORD="Str1ng$t"
```

While in the tests folder, create your test file with the name `test_{entity}.py`, replacing entity with what you're testing:

```sh
touch test_items.py
```

Finally, create your tests (you may want to copy the structure in `test_user.py`).
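A minimal sketch of what such a test can look like (illustrative only; the real fixtures live in `tests/conftest.py` and `tests/helper.py`, so mirror `test_user.py` rather than this):

```python
# tests/test_items.py -- hypothetical example
from fastapi.testclient import TestClient

from src.app.main import app

client = TestClient(app)


def test_read_items() -> None:
    response = client.get("/api/v1/items")
    # adjust the expectation to your route's auth and data rules
    assert response.status_code in {200, 401, 404}
```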
Now, to run:

### 7.1 Docker Compose

First you need to uncomment the following part in the `docker-compose.yml` file:

```YAML
  #-------- uncomment to run tests --------
  # pytest:
  #   build:
  #     context: .
  #     dockerfile: Dockerfile
  #   env_file:
  #     - ./src/.env
  #   depends_on:
  #     - db
  #     - create_superuser
  #     - redis
  #   command: python -m pytest ./tests
  #   volumes:
  #     - .:/code
```

You'll get:

```YAML
  #-------- uncomment to run tests --------
  pytest:
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - ./src/.env
    depends_on:
      - db
      - create_superuser
      - redis
    command: python -m pytest ./tests
    volumes:
      - .:/code
```

Start the Docker Compose services:

```sh
docker-compose up -d
```

It will automatically run the tests, but if you want to run them again later:

```sh
docker-compose run --rm pytest
```

### 7.2 From Scratch

While in the `root` folder, run:

```sh
poetry run python -m pytest
```

## 8. Contributing

Read [contributing](CONTRIBUTING.md).

## 9. References

This project is a SQLModel version of FastAPI-boilerplate:

- [`FastAPI-boilerplate`](https://github.com/igorbenav/FastAPI-boilerplate)

## 10. License

[`MIT`](LICENSE.md)

## 11. Contact

Igor Magalhaes – [@igormagalhaesr](https://twitter.com/igormagalhaesr) – igormagalhaesr@gmail.com
[github.com/igorbenav](https://github.com/igorbenav/)