# YOLO-World + EfficientViT SAM
🤗 [HuggingFace Space](https://huggingface.co/spaces/curt-park/yolo-world-with-efficientvit-sam)

## Prerequisites
This project is developed and tested on Python 3.10.
```bash
# Create and activate a Python 3.10 environment.
conda create -n yolo-world-with-efficientvit-sam python=3.10 -y
conda activate yolo-world-with-efficientvit-sam
# Set up the packages.
make setup
```
## How to Run
```bash
python app.py
```
Open http://127.0.0.1:7860/ in your web browser.

## Core Components
### YOLO-World
[YOLO-World](https://github.com/AILab-CVC/YOLO-World) is a highly efficient open-vocabulary object detection model.
On the challenging LVIS dataset, YOLO-World achieves 35.4 AP at 52.0 FPS on a V100 GPU,
outperforming many state-of-the-art methods in both accuracy and speed.
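
For a sense of how open-vocabulary detection looks in code, here is a minimal sketch using the Ultralytics YOLO-World wrapper; the weight file and class list are illustrative, and this repo's `app.py` may load the model differently (e.g., via the official YOLO-World code).

```python
# Hypothetical example: open-vocabulary detection via the Ultralytics
# YOLO-World wrapper. The weight name and prompts are placeholders.
from ultralytics import YOLOWorld

model = YOLOWorld("yolov8l-worldv2.pt")           # downloaded on first use
model.set_classes(["person", "backpack", "dog"])  # free-form text vocabulary

results = model.predict("example.jpg", conf=0.25)
boxes = results[0].boxes.xyxy  # (N, 4) boxes for the prompted classes
```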


### EfficientViT SAM
[EfficientViT SAM](https://github.com/mit-han-lab/efficientvit) is a new family of accelerated Segment Anything models.
Thanks to its lightweight, hardware-efficient core building block,
it delivers a 48.9× measured TensorRT speedup over SAM-ViT-H on an A100 GPU without sacrificing performance.
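
As a sketch of how detector boxes can prompt EfficientViT SAM, the snippet below assumes the model-zoo API shown in the mit-han-lab/efficientvit repository; the variant name, checkpoint path, and box coordinates are illustrative.

```python
# Hypothetical example: box-prompted segmentation with EfficientViT SAM.
import cv2
import numpy as np
from efficientvit.sam_model_zoo import create_sam_model
from efficientvit.models.efficientvit.sam import EfficientViTSamPredictor

sam = create_sam_model(name="l0", weight_url="assets/checkpoints/sam/l0.pt")
predictor = EfficientViTSamPredictor(sam.cuda().eval())

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with an (x1, y1, x2, y2) box, e.g. a YOLO-World detection.
masks, scores, _ = predictor.predict(
    box=np.array([100, 100, 400, 400]),
    multimask_output=False,
)
```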

## Powered By
```bibtex
@misc{zhang2024efficientvitsam,
  title={EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss},
  author={Zhuoyang Zhang and Han Cai and Song Han},
  year={2024},
  eprint={2402.05008},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
@article{cheng2024yolow,
  title={YOLO-World: Real-Time Open-Vocabulary Object Detection},
  author={Cheng, Tianheng and Song, Lin and Ge, Yixiao and Liu, Wenyu and Wang, Xinggang and Shan, Ying},
  journal={arXiv preprint arXiv:2401.17270},
  year={2024}
}
@article{cai2022efficientvit,
  title={EfficientViT: Enhanced Linear Attention for High-Resolution Low-Computation Visual Recognition},
  author={Cai, Han and Gan, Chuang and Han, Song},
  journal={arXiv preprint arXiv:2205.14756},
  year={2022}
}
```