https://github.com/seonglae/tei
Text Embeddings Inference (TEI)'s unofficial python wrapper library for batch processing with asyncio
- Host: GitHub
- URL: https://github.com/seonglae/tei
- Owner: seonglae
- Created: 2023-10-28T08:15:50.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-11-27T15:13:24.000Z (over 1 year ago)
- Last Synced: 2025-02-06T02:57:40.879Z (3 months ago)
- Topics: aiohttp, asyncio, embedding, embedding-vectors, embeddings, tei, text-embeddings, text-embeddings-inference
- Language: Python
- Homepage:
- Size: 9.77 KB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# TEI Python
Text Embeddings Inference (TEI)'s unofficial python wrapper library for batch processing with asyncio.
# Get Started
```sh
pip install teicli
```

```py
from tei import TEIClient

client = TEIClient()
client.embed_sync("Hello world!")
# [0.010536194, 0.05859375, 0.022262....

routine = client.embed_batch(["Hello world!", "Hello world!", "Hello world!"])
# [[0.010536194, 0.05859375, 0.022262....
```

You need to run your own text-embeddings-inference server. Check [here](https://github.com/huggingface/text-embeddings-inference).
```sh
docker run --gpus all -p 8080:80 -v $volume:/data --pull always \
  ghcr.io/huggingface/text-embeddings-inference:0.3.0 \
  --model-id $model --revision $revision
```
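To illustrate the kind of batch processing this wrapper performs, here is a minimal sketch of calling a TEI server's `/embed` endpoint concurrently with `aiohttp` and `asyncio.gather`. It assumes a server listening at `http://localhost:8080` (as started by the `docker run` above); the helper names `embed_one` and `embed_batch` are hypothetical and not part of the `teicli` API.

```python
# A minimal sketch, not the teicli implementation: concurrent requests
# against TEI's HTTP /embed endpoint, which accepts {"inputs": ...}
# and returns a list of embedding vectors.
import asyncio
import aiohttp

TEI_URL = "http://localhost:8080/embed"  # assumed local TEI server


async def embed_one(session: aiohttp.ClientSession, text: str) -> list[float]:
    # TEI returns one embedding per input; a single string yields [[...]].
    async with session.post(TEI_URL, json={"inputs": text}) as resp:
        resp.raise_for_status()
        return (await resp.json())[0]


async def embed_batch(texts: list[str]) -> list[list[float]]:
    # Fire all requests concurrently; gather preserves input order.
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(embed_one(session, t) for t in texts))


# Usage (requires a running server):
# vectors = asyncio.run(embed_batch(["Hello world!"] * 3))
```

Concurrency, rather than a single bulk request, is what lets a thin client like this saturate the server's dynamic batching.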