https://github.com/attarmau/styleclip
A mockup full stack app built with React, FastAPI, MongoDB, and Docker, powered by AWS Rekognition & CLIP for multi-tagging and clothing recommendations
- Host: GitHub
- URL: https://github.com/attarmau/styleclip
- Owner: attarmau
- License: apache-2.0
- Created: 2025-04-15T01:47:58.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2025-06-06T09:12:40.000Z (4 months ago)
- Last Synced: 2025-06-06T10:24:51.521Z (4 months ago)
- Topics: clip, fastapi, mcp, recommender-system
- Language: Python
- Homepage:
- Size: 352 KB
- Stars: 0
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# FastMCP_RecSys
This is a CLIP-based fashion recommender with MCP.

### Sample Components for UI
1. Image upload
2. Submit button
3. Display clothing tags + recommendations

# Mockup
A user uploads a clothing image → YOLO detects clothing → CLIP encodes → similar items are recommended
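The "recommend similar" step can be sketched as a nearest-neighbour search over CLIP embeddings. A minimal NumPy sketch (the catalog and query vectors below are invented for illustration; in the real app they would come from CLIP's image encoder):

```python
import numpy as np

def recommend(query_emb: np.ndarray, catalog_embs: np.ndarray, k: int = 3):
    """Return indices of the k catalog items most similar to the query.

    Both sides are L2-normalised first, so the dot product is cosine similarity.
    """
    q = query_emb / np.linalg.norm(query_emb)
    c = catalog_embs / np.linalg.norm(catalog_embs, axis=1, keepdims=True)
    sims = c @ q                        # cosine similarity per catalog item
    return np.argsort(sims)[::-1][:k]   # best matches first

# Toy example: item 0 points the same way as the query, item 2 is opposite.
catalog = np.array([[1.0, 0.0], [0.7, 0.7], [-1.0, 0.0]])
query = np.array([1.0, 0.1])
print(recommend(query, catalog, k=2))  # -> [0 1]
```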
# Folder Structure
```
/project-root
│
├── /backend
│   ├── Dockerfile
│   ├── /app
│   │   ├── /aws
│   │   │   └── rekognition_wrapper.py      # AWS Rekognition logic
│   │   ├── /utils
│   │   │   └── image_utils.py              # Bounding box crop utils
│   │   ├── /controllers
│   │   │   ├── clothing_detector.py        # Coordinates Rekognition + cropping
│   │   │   ├── clothing_controller.py
│   │   │   ├── clothing_tagging.py
│   │   │   └── tag_extractor.py            # Pending: define core CLIP functionality
│   │   ├── /tests
│   │   │   ├── test_rekognition_wrapper.py
│   │   │   └── test_clothing_tagging.py
│   │   ├── /routes
│   │   │   └── clothing_routes.py
│   │   ├── /schemas
│   │   │   └── clothing_schemas.py
│   │   ├── /config
│   │   │   ├── tag_list_en.py              # Tag list; mapping tool: https://jsoncrack.com/editor
│   │   │   ├── database.py
│   │   │   ├── settings.py
│   │   │   └── api_keys.py
│   │   ├── server.py                       # FastAPI app code
│   │   └── requirements.txt
│   └── .env
│
├── /frontend
│   ├── Dockerfile
│   ├── package.json
│   ├── package-lock.json
│   ├── /public
│   │   └── index.html
│   ├── /src
│   │   ├── /components
│   │   │   ├── ImageUpload.jsx
│   │   │   ├── DetectedTags.jsx
│   │   │   └── Recommendations.jsx
│   │   ├── /utils
│   │   │   └── api.js
│   │   ├── App.js                          # Main React component
│   │   ├── index.js
│   │   ├── index.css
│   │   ├── tailwind.config.js
│   │   └── postcss.config.js
│   └── .env
├── docker-compose.yml
└── README.md
```

## Quick Start Guide
### Step 1: Clone the GitHub Project
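For example, using the repository URL above (the checkout directory name follows the repository name):

```shell
git clone https://github.com/attarmau/styleclip.git
cd styleclip
```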
### Step 2: Set Up the Python Environment
```
python -m venv venv
source venv/bin/activate # On macOS or Linux
venv\Scripts\activate # On Windows
```
### Step 3: Install Dependencies
```
pip install -r backend/app/requirements.txt
```
### Step 4: Start the FastAPI Server (Backend)
```
uvicorn backend.app.server:app --reload
```
Once the server is running and the database is connected, you should see the following message in the console:
```
Database connected
INFO: Application startup complete.
```

### Step 5: Install Dependencies (Frontend)
```
cd frontend
npm install
```
### Step 6: Start the Development Server (Frontend)
```
npm start
```
Once running, the server logs a confirmation and opens the app in your browser: [http://localhost:3000/](http://localhost:3000/)
# What's completed so far:
1. FastAPI server is up and running (24 Apr)
2. Database connection is set up (24 Apr)
3. Backend architecture is functional (24 Apr)
4. Basic front-end UI for uploading picture (25 Apr)
## 5. Mock Testing for AWS Rekognition → Bounding Box (15 May)
```
PYTHONPATH=. pytest backend/app/tests/test_rekognition_wrapper.py
```
- Tested the Rekognition integration logic independently using a mock → verified it correctly extracts bounding boxes only when labels match the garment set
- Confirmed the folder structure and `PYTHONPATH=.` work smoothly with pytest from the project root

## 6. Mock Testing for AWS Rekognition → CLIP (20 May)
```
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
```
- Detecting garments using AWS Rekognition
- Cropping the image around detected bounding boxes
- Tagging the cropped image using CLIP
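The cropping step has one fiddly detail: Rekognition returns relative coordinates (`{"Left", "Top", "Width", "Height"}`, each in [0, 1]), which must be scaled to pixel coordinates before cropping. A sketch of that conversion (the function name is hypothetical; the real logic lives in `image_utils.py`):

```python
def box_to_pixels(box: dict, img_width: int, img_height: int) -> tuple:
    """Convert a Rekognition-style relative bounding box to a pixel box
    (left, top, right, bottom), the format PIL's Image.crop() expects."""
    left = int(box["Left"] * img_width)
    top = int(box["Top"] * img_height)
    right = int((box["Left"] + box["Width"]) * img_width)
    bottom = int((box["Top"] + box["Height"]) * img_height)
    return (left, top, right, bottom)

# The sample box from the mock tests below, applied to a 200x100 image:
sample = {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}
print(box_to_pixels(sample, 200, 100))  # -> (20, 10, 120, 60)
```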
## 7. Mock Testing for the Full Image Tagging Pipeline: Image bytes → AWS Rekognition (detect garments) → Crop images → CLIP (predict tags), plus Error Handling (25 May)
| **Negative Test Case** | **Description** |
| -------------------------------| ------------------------------------------------------------------------------- |
| No Detection Result | AWS doesn't detect any garments → should return an empty list. |
| Image Not Clothing | CLIP returns vague or empty tags → verify fallback behavior. |
| AWS Returns Exception | Simulate `rekognition.detect_labels` throwing an error → check `try-except`. |
| Corrupted Image File | Simulate a broken (non-JPEG) image → verify it raises an error or gives a hint. |

```
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
```
- detect_garments: simulates AWS Rekognition returning one bounding box: {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}
- crop_by_bounding_box: simulates the cropping step returning a dummy "cropped_image" object
- get_tags_from_clip: simulates CLIP returning a list of tags: ["T-shirt", "Cotton", "Casual"]

## 8. Run Testing for CLIP Output (30 May)
```
python3 -m venv venv
source venv/bin/activate
pip install -r backend/app/requirements.txt
pip install git+https://github.com/openai/CLIP.git
python -m backend.app.tests.test_tag_extractor
```

# Next Steps
1. Evaluate CLIP's tagging accuracy on sample clothing images
2. Fine-tune the tagging system for better recommendations
3. Test the backend integration with real-time user data
4. Set up monitoring for model performance
5. Front-end demo
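Next step 1 boils down to zero-shot classification: embed each candidate tag from `config/tag_list_en.py` as text, score it against the image embedding, and keep the top k. The model call itself needs the CLIP checkpoint, but the ranking half is plain arithmetic and can be sketched standalone (the scores below are invented; the real ones would come from the CLIP similarity logits):

```python
import numpy as np

def top_k_tags(scores, tags, k=3):
    """Softmax the similarity scores and return the k highest-probability tags."""
    s = np.asarray(scores, dtype=float)
    probs = np.exp(s - s.max()) / np.exp(s - s.max()).sum()
    order = np.argsort(probs)[::-1][:k]
    return [(tags[i], float(probs[i])) for i in order]

# Invented similarity scores for a few tags from config/tag_list_en.py:
tags = ["T-shirt", "Cotton", "Casual", "Formal"]
scores = [25.1, 24.3, 23.8, 11.0]
for tag, p in top_k_tags(scores, tags):
    print(f"{tag}: {p:.2f}")
```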