https://github.com/vishalrk1/seefood
A web app and mobile app to classify food dishes and display the recipe for the identified food
computer-vision deep-learning flask-api flask-restful flutter-apps food101 pytorch recipe-app streamlit-webapp
Last synced: 4 months ago
- Host: GitHub
- URL: https://github.com/vishalrk1/seefood
- Owner: vishalrk1
- Created: 2021-11-12T06:21:29.000Z (almost 4 years ago)
- Default Branch: main
- Last Pushed: 2022-12-16T05:49:25.000Z (almost 3 years ago)
- Last Synced: 2025-03-26T15:55:09.285Z (7 months ago)
- Topics: computer-vision, deep-learning, flask-api, flask-restful, flutter-apps, food101, pytorch, recipe-app, streamlit-webapp
- Language: Jupyter Notebook
- Homepage:
- Size: 116 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# SeeFood
A web app to classify food dishes and display the recipe for the identified food.

### HuggingFace Demo: **[Web App](https://huggingface.co/spaces/Vrk/SeeFood)**
# Project Flow Chart
# SeeFood Flask API Response
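A minimal sketch of parsing this response shape in Python. The field names follow the sample response in this section, and the `raw` payload here is a trimmed subset for brevity; it is illustrative, not the full API output:

```python
import json

# Trimmed, illustrative subset of the API's response shape.
raw = """
{"foodName": "hamburger", "status": 200,
 "foodDetails": {"title": "Falafel Burger",
                 "percentCarbs": 41.95, "percentFat": 46.97,
                 "percentProtein": 11.08, "readyTime": 45}}
"""

resp = json.loads(raw)
if resp["status"] == 200:
    details = resp["foodDetails"]
    # The macronutrient percentages should sum to roughly 100.
    macros = details["percentCarbs"] + details["percentFat"] + details["percentProtein"]
    print(f"{resp['foodName']} -> {details['title']} "
          f"(ready in {details['readyTime']} min, macros sum to {macros:.1f}%)")
```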
```json
{
"foodName": "hamburger",
"status": 200,
"foodDetails": {
"id": 642539,
"ingridents": [
"chickpeas, drained and rinsed",
"tahini",
"sriracha sauce",
"garlic",
"fresh parsley, roughly chopped",
"red onion, diced",
"peanut oil",
"cucumber",
"tomato",
"hamburger buns (I used President's Choice multi-grain thins)",
"Tzatziki for topping"
],
"instructions": "Pat the chickpeas dry with a paper towel and throw them into a food processor along with the garlic.\nPuree until smooth.\nUsing your clean hands incorporate tahini, sriracha, parsley and onion into the chickpea mixture.\nForm mixture into four patties and set aside.\nHeat peanut oil in a large skillet over medium heat.\nOnce the oil begins to shimmer add the patties and cook for three minutes per side or until golden brown.\nRemove from and place in a hamburger bun.\nTop each burger with 2 slices of tomato, 2 slices of cucumber and a dollop of tzatziki.\nServe immediately.",
"percentCarbs": 41.95,
"percentFat": 46.97,
"percentProtein": 11.08,
"readyTime": 45,
"soureUrl": "https://www.foodista.com/recipe/DYNQRHMK/falafel-burger",
"summary": "You can never have too many middl eastern recipes, so give Falafel Burger a try. For $1.37 per serving, this recipe covers 19% of your daily requirements of vitamins and minerals. One portion of this dish contains around 12g of protein, 20g of fat, and a total of 402 calories. This recipe serves 4. It is brought to you by Foodista. 4 people were impressed by this recipe. Head to the store and pick up onion, garlic, sriracha sauce, and a few other things to make it today. Only a few people really liked this main course. From preparation to the plate, this recipe takes about about 45 minutes. It is a good option if you're following a dairy free and lacto ovo vegetarian diet. With a spoonacular score of 81%, this dish is spectacular. Try Clean eating falafel burger, Clean eating falafel burger, and Falafel Veggie Burger with Feta Yogurt Sauce for similar recipes.",
"title": "Falafel Burger"
}
}
```

# Datasets Used
- **[FOOD 101](https://www.kaggle.com/dansbecker/food-101)** for the general food model
- For **Indian food data**, I created my own dataset

# Classification Models
| Classifier      | Accuracy |
|-----------------|----------|
| EfficientNet-B0 | 0.69     |
| Xception        | 0.76     |
| RexNet-200      | 0.75     |
| ViT             | **0.85** |

- The model used for food / not-food image classification is the CLIP zero-shot image classification model ([HuggingFace documentation](https://huggingface.co/docs/transformers/model_doc/clip))
```python
from transformers import CLIPProcessor, CLIPModel

def food_not_food(input_image):
    # Zero-shot classification with CLIP: score the image against two text prompts.
    model = CLIPModel.from_pretrained("flax-community/clip-rsicd-v2")
    processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2")

    labels = ["food", "not food"]
    inputs = processor(
        text=[f"a photo of a {l}" for l in labels],
        images=input_image,
        return_tensors="pt",
        padding=True,
    )
    outputs = model(**inputs)

    # Image-to-text similarity scores; pick the higher-scoring label.
    logits_per_image = outputs.logits_per_image
    pred = logits_per_image.softmax(dim=1).detach().cpu().numpy().argmax(axis=1)
    return labels[pred[0]]
```

## ViT (Vision Transformer)
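ViT gave the best accuracy in the table above (0.85). A minimal sketch of how a ViT image classifier for Food-101 can be set up with HuggingFace `transformers` — the config values and the dummy training step here are illustrative assumptions (a deliberately small, randomly initialized model), not the repo's exact setup, which would start from pretrained ViT weights:

```python
import torch
from transformers import ViTConfig, ViTForImageClassification

# Scaled-down config so the sketch runs quickly; real training would load
# pretrained weights, e.g. via ViTForImageClassification.from_pretrained(...).
config = ViTConfig(
    hidden_size=96,
    num_hidden_layers=2,
    num_attention_heads=3,
    intermediate_size=192,
    image_size=224,
    patch_size=16,
    num_labels=101,  # Food-101 has 101 dish categories
)
model = ViTForImageClassification(config)

# One dummy forward/backward pass on a random batch of two 224x224 RGB images.
pixel_values = torch.randn(2, 3, 224, 224)
labels = torch.tensor([0, 1])
outputs = model(pixel_values=pixel_values, labels=labels)
outputs.loss.backward()
print(outputs.logits.shape)  # torch.Size([2, 101])
```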
## Contact Me