# recognize-anything-api

Dockerized FastAPI wrapper around the impressive [recognize-anything](https://github.com/xinyu1205/recognize-anything)
image recognition models.

All model weights, etc. are baked into the Docker image rather than fetched at runtime.

This means it's possible to run the image without granting it internet access, and
hopefully means it will continue to work in 6 months' time. You can verify this by
running the image with `--net none` and, via `docker exec`, trying:

```shell
curl --verbose -F file=@/opt/app/recognize_anything/images/demo/demo1.jpg localhost:8000/
```
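
For reference, an end-to-end offline check might look like the following sketch (the container name `ram-offline` is arbitrary, and it assumes the image from the Docker Hub section below has already been pulled or built locally):

```shell
# Start the container with networking disabled, then run the demo request from inside it
# (give the server a few seconds to start up first).
docker run -d --rm --net none --name ram-offline mnahkies/recognize-anything-api
docker exec ram-offline curl --verbose -F file=@/opt/app/recognize_anything/images/demo/demo1.jpg localhost:8000/
```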

One caveat: the image is huge as a result (~20 GB, of which ~13 GB is model weights and ~6 GB is pip dependencies), though it could probably be slimmed down a bit.

## Docker Hub

This repository is published to Docker Hub. You can run it like so:

```shell
docker run -it --rm --gpus all -p 8000:8000 mnahkies/recognize-anything-api
```

**Note:** this assumes you have
the [nvidia container runtime](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
installed; omitting `--gpus all` should still work fine, running inference on the CPU instead.
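
For example, a CPU-only run is the same command without the GPU flag:

```shell
docker run -it --rm -p 8000:8000 mnahkies/recognize-anything-api
```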

Then make requests using your client of choice, e.g.:

```shell
curl --verbose -F file=@/path/to/image.jpg localhost:8000/
```
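
If you'd rather not use curl, any HTTP client that can send a multipart form upload works; for instance, with HTTPie (assuming it's installed) the equivalent request would look like:

```shell
http --form POST localhost:8000/ file@/path/to/image.jpg
```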

## Build

Prerequisites:

- Docker/equivalent installed and running

Clone this repository with submodules:

```shell
git clone --recurse-submodules https://github.com/mnahkies/recognize-anything-api.git
```
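
If you already cloned without submodules, you can pull them in afterwards with the standard git command:

```shell
git submodule update --init --recursive
```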

Then run:

```shell
./bin/docker-build.sh
```

## Usage

Simply run:

```shell
./bin/docker-run.sh
```

Then you can make requests like:

```shell
curl --verbose -F file=@/path/to/image.jpg localhost:8000/
```

You can choose which model is used by setting the `MODEL_NAME` environment variable to one of:

- `ram_plus` (default)
- `ram`
- `tag2text`

See [./server.py](./server.py) for other options.
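
For example, assuming you invoke the container directly as in the Docker Hub section above (rather than via `./bin/docker-run.sh`), the model can be selected with `-e`:

```shell
docker run -it --rm --gpus all -p 8000:8000 -e MODEL_NAME=tag2text mnahkies/recognize-anything-api
```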

## License

See [./LICENSE](./LICENSE) and [./recognize_anything/NOTICE.txt](./recognize_anything/NOTICE.txt)

## Contributing

This is a very scrappy project that I created to experiment with https://github.com/xinyu1205/recognize-anything,
and there is plenty of scope for improvement!

PRs to improve the configurability, packaging, efficiency, etc. are welcome.