Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/mnahkies/recognize-anything-api
Dockerized FastAPI wrapper around the recognize-anything image recognition models
- Host: GitHub
- URL: https://github.com/mnahkies/recognize-anything-api
- Owner: mnahkies
- License: mit
- Created: 2024-02-24T12:41:03.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2024-03-18T08:57:47.000Z (8 months ago)
- Last Synced: 2024-05-15T15:39:43.823Z (6 months ago)
- Language: Python
- Size: 12.7 KB
- Stars: 25
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# recognize-anything-api
Dockerized FastAPI wrapper around the impressive [recognize-anything](https://github.com/xinyu1205/recognize-anything)
image recognition models.

All model weights, etc. are baked into the docker image rather than fetched at runtime.
This means it's possible to run this image without granting it internet access, and
hopefully means it will continue to work in 6 months' time. You can verify this by
running the image with `--net none` and making a request from inside the container via `docker exec`:

```shell
curl --verbose -F file=@/opt/app/recognize_anything/images/demo/demo1.jpg localhost:8000/
```
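For reference, here is a sketch of that offline check end to end. The detached run and the container name `ram-api` are illustrative choices rather than anything this repository prescribes, and `--gpus all` assumes the NVIDIA container runtime mentioned below.

```shell
# Start the API with networking disabled; the name "ram-api" is arbitrary
docker run -d --rm --net none --gpus all --name ram-api mnahkies/recognize-anything-api

# Query it from inside the container using one of the bundled demo images
docker exec ram-api \
  curl --verbose -F file=@/opt/app/recognize_anything/images/demo/demo1.jpg localhost:8000/
```

Depending on how long the model weights take to load, the first request may need a short wait after the container starts.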
Caveat: the image is huge (~20GB, of which ~13GB is model weights and ~6GB is pip dependencies) as a result - though it could probably be slimmed down a bit.

## Dockerhub
This repository is published to Docker Hub. You can run it like so:
```shell
docker run -it --rm --gpus all -p 8000:8000 mnahkies/recognize-anything-api
```

**Note:** this assumes you have
the [nvidia container runtime](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
installed, but omitting `--gpus all` should still work fine, running inference on the CPU.
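For instance, a CPU-only run is the same command minus the GPU flag; nothing else in this sketch is new:

```shell
# CPU-only variant of the command above; inference will be slower
docker run -it --rm -p 8000:8000 mnahkies/recognize-anything-api
```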
Then make requests using your client of choice, e.g.:

```shell
curl --verbose -F file=@/path/to/image.jpg localhost:8000/
```

## Build
Prerequisites:
- Docker/equivalent installed and running
Clone this repository with submodules:
```shell
git clone --recurse-submodules https://github.com/mnahkies/recognize-anything-api.git
```

Then run:
```shell
./bin/docker-build.sh
```

## Usage
Simply run:
```shell
./bin/docker-run.sh
```

Then you can make requests like:
```shell
curl --verbose -F file=@/path/to/image.jpg localhost:8000/
```

You can choose which model is used by setting the `MODEL_NAME` environment variable to one of:
- `ram_plus` (default)
- `ram`
- `tag2text`

See [./server.py](./server.py) for other options.
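As an illustration, when running the published image directly you can pass the variable with `docker run -e`; this sketch assumes the server reads `MODEL_NAME` at startup (as described above) and bypasses `./bin/docker-run.sh`, which may or may not forward environment variables:

```shell
# Use the smaller "ram" model instead of the default "ram_plus"
docker run -it --rm --gpus all -p 8000:8000 \
  -e MODEL_NAME=ram \
  mnahkies/recognize-anything-api
```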
## License
See [./LICENSE](./LICENSE) and [./recognize_anything/NOTICE.txt](./recognize_anything/NOTICE.txt)
## Contributing
This is a very scrappy project that I created to experiment with https://github.com/xinyu1205/recognize-anything,
and there is plenty of scope for improvement!

PRs to improve the configurability, packaging, efficiency, etc. are welcome.