{"id":31447931,"url":"https://github.com/qdrant/webinar-cloud-inference","last_synced_at":"2025-10-01T02:14:32.009Z","repository":{"id":308413466,"uuid":"1030404896","full_name":"qdrant/webinar-cloud-inference","owner":"qdrant","description":"How to Build a Multimodal Search Stack with One API","archived":false,"fork":false,"pushed_at":"2025-08-01T15:49:07.000Z","size":190,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-09-26T10:49:03.660Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/qdrant.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-08-01T15:24:54.000Z","updated_at":"2025-08-01T15:49:11.000Z","dependencies_parsed_at":"2025-08-05T20:52:55.234Z","dependency_job_id":"e619c905-afc7-45f5-b462-1958d93ae841","html_url":"https://github.com/qdrant/webinar-cloud-inference","commit_stats":null,"previous_names":["qdrant/webinar-cloud-inference"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/qdrant/webinar-cloud-inference","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/qdrant%2Fwebinar-cloud-inference","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/qdrant%2Fwebinar-cloud-inference/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/qdrant%2Fwebinar-cloud-inference/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/qdrant%2Fwebinar-cloud-inference/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/qdrant","download_url":"https://codeload.github.com/qdrant/webinar-cloud-inference/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/qdrant%2Fwebinar-cloud-inference/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":277782799,"owners_count":25876209,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-01T02:00:09.286Z","response_time":88,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-10-01T02:14:29.602Z","updated_at":"2025-10-01T02:14:32.000Z","avatar_url":"https://github.com/qdrant.png","language":"JavaScript","funding_links":[],"categories":[],"sub_categories":[],"readme":"---\ntitle: \"Qdrant Webinar: Cloud Inference\"\nemoji: 🖼️\ncolorFrom: blue\ncolorTo: yellow\nsdk: docker\npinned: false\nlicense: apache-2.0\nshort_description: \"Personal Image Catalog\"\n---\n\n# webinar-cloud-inference\n\nThis repository contains materials from the hands-on webinar \"[How to Build a Multimodal Search Stack with One \nAPI](https://www.youtube.com/watch?v=A8BBdGC2xKs)\". It implements a personal image catalog that can be searched by image \nor text, with additional object detection capabilities.\n\n## Software Stack\n\nThe project is built using Python and JavaScript, integrating with external services through their APIs. **The \nmultimodal search capabilities are implemented using [Qdrant Cloud Inference](https://cloud.qdrant.io/), with \n[FastAPI](https://fastapi.tiangolo.com/) serving as the backend API and a modern frontend built with \n[Vite](https://vitejs.dev/) and [DaisyUI](https://daisyui.com/).**\n\n## Prerequisites\n\n**This project requires [Qdrant Cloud](https://cloud.qdrant.io/) as it uses Cloud Inference features that are not available in local Qdrant \ninstances.**\n\nYou'll need to sign up for a free account on [Qdrant Cloud](https://cloud.qdrant.io/) to get:\n- A cluster URL to connect to your instance\n- An API key for authentication\n- Access to Cloud Inference capabilities\n\nYou'll need Python 3.10 or higher installed, and we recommend using [uv](https://docs.astral.sh/uv/) for dependency management. For the \nfrontend, you'll need Node.js 18 or higher.\n\nSince we'll be working with cloud inference, no GPU access is required. Install all necessary libraries with a single \ncommand:\n\n```shell\n# Backend dependencies\ncd backend-app\nuv sync\n\n# Frontend dependencies\ncd ../frontend-app\nnpm install\n```\n\nThe project uses different models for different tasks, specifically:\n- **YOLO** for object detection\n- **CLIP** for vision-language embeddings\n- **Qdrant Cloud Inference** for vector operations\n\n**While you can swap these models with alternatives in the code, we cannot guarantee the system will maintain the same \nperformance level.**\n\n### Configuration\n\nCreate a `.env` file in the `backend-app` directory with the following entries:\n\n```dotenv\n# Qdrant configuration\nQDRANT_URL=your_qdrant_cluster_url\nQDRANT_API_KEY=your_qdrant_api_key\nQDRANT_COLLECTION_NAME=your_collection_name\n\n# YOLO Model Configuration (optional)\nYOLO_MODEL=yolo11s.pt\n\n# Feature Flags (optional)\nENABLE_INGEST=true\n```\n\nYou can rename `.env.example` to `.env` and fill in the values, or set these as environment variables.\n\n### Feature Flags\n\nThe application supports feature flags to control functionality:\n\n- **ENABLE_INGEST**: Controls whether the ingest functionality is available. When set to `false`, both the frontend and backend will hide the ingest features. Defaults to `true`.\n\n#### Qdrant Cloud Setup\n\nSince this project uses Cloud Inference features, you must use Qdrant Cloud. After signing up:\n\n1. Create a new cluster in your Qdrant Cloud dashboard\n2. Note your cluster URL and API key\n3. Use these credentials in your `.env` file\n\nThe Cloud Inference feature allows creating vectors from raw data by sending it directly to the Qdrant cloud instance, \neliminating the need for local model inference.\n\n## Usage\n\n### Development Setup\n\nFor development, you can run the services separately:\n\n#### Backend Setup\n\n1. **Navigate to backend directory**\n   ```bash\n   cd backend-app\n   ```\n\n2. **Install dependencies**\n   ```bash\n   uv sync\n   ```\n\n3. **Configure environment variables**\n   Create a `.env` file in the `backend-app` directory with your Qdrant credentials.\n\n4. **Setup Qdrant collection**\n   ```bash\n   uv run python setup.py\n   ```\n\n5. **Start the backend server**\n   ```bash\n   # For development with uvicorn\n   uv run uvicorn main:app --reload --host 0.0.0.0 --port 7860\n   \n   # For production with gunicorn\n   uv run gunicorn main:app -c gunicorn.conf.py\n   ```\n\n#### Frontend Setup\n\n1. **Navigate to frontend directory**\n   ```bash\n   cd frontend-app\n   ```\n\n2. **Install dependencies**\n   ```bash\n   npm install\n   ```\n\n3. **Start the development server**\n   ```bash\n   npm run dev\n   ```\n\n### Production Deployment\n\nFor production, use the unified Docker container (see Docker Deployment section above).\n\n## API Documentation\n\nOnce the backend is running, visit:\n- **Interactive API docs**: http://localhost:7860/docs\n- **ReDoc documentation**: http://localhost:7860/redoc\n\n## Usage Examples\n\n### Ingesting Images\n\n**Via API:**\n```bash\ncurl -X POST \"http://localhost:7860/api/v1/ingest\" \\\n     -H \"Content-Type: application/json\" \\\n     -d '{\"url\": \"https://example.com/image.jpg\"}'\n```\n\n### Searching Images\n\n**Via API:**\n\n```bash\ncurl \"http://localhost:7860/api/v1/search?query=cat%20sitting%20on%20a%20chair\u0026limit=5\"\n```\n\n**Via Frontend:**\n\nOpen http://localhost:7860 and use the web interface.\n\n## Docker Deployment\n\nThe project uses a unified Docker container that combines both the backend API and frontend application. This simplifies \ndeployment and reduces resource usage.\n\n### Quick Start with Docker\n\n1. **Build the unified container:**\n   ```bash\n   # Using the build script\n   ./build.sh\n   \n   # Or manually\n   docker build -t webinar-app .\n   ```\n\n2. **Run the application:**\n   ```bash\n   # With environment variables\n   docker run -p 7860:7860 --env-file .env webinar-app\n   \n   # Or with inline environment variables\n   docker run -p 7860:7860 \\\n     -e QDRANT_URL=your_qdrant_cluster_url \\\n     -e QDRANT_API_KEY=your_qdrant_api_key \\\n     -e QDRANT_COLLECTION_NAME=your_collection_name \\\n     webinar-app\n   ```\n\n3. **Access the application:**\n   - **Frontend**: http://localhost:7860\n   - **API Documentation**: http://localhost:7860/docs\n   - **Health Check**: http://localhost:7860/health\n\n### Environment Variables\n\nCreate a `.env` file in the root directory with your Qdrant credentials:\n\n```dotenv\nQDRANT_URL=your_qdrant_cluster_url\nQDRANT_API_KEY=your_qdrant_api_key\nQDRANT_COLLECTION_NAME=your_collection_name\n```\n\n\nFor a detailed explanation of the system and its components, refer to the webinar recording and the source code in this \nrepository.","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fqdrant%2Fwebinar-cloud-inference","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fqdrant%2Fwebinar-cloud-inference","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fqdrant%2Fwebinar-cloud-inference/lists"}