{"id":13472392,"url":"https://github.com/qdrant/fastembed","last_synced_at":"2025-03-26T15:32:06.261Z","repository":{"id":185003143,"uuid":"666260877","full_name":"qdrant/fastembed","owner":"qdrant","description":"Fast, Accurate, Lightweight Python library to make State of the Art Embedding","archived":false,"fork":false,"pushed_at":"2024-10-30T03:40:45.000Z","size":2548,"stargazers_count":1461,"open_issues_count":60,"forks_count":104,"subscribers_count":13,"default_branch":"main","last_synced_at":"2024-10-30T04:14:00.262Z","etag":null,"topics":["embeddings","openai","rag","retrieval","retrieval-augmented-generation","vector-search"],"latest_commit_sha":null,"homepage":"https://qdrant.github.io/fastembed/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":"NirantK/fastvector","license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/qdrant.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-07-14T04:59:33.000Z","updated_at":"2024-10-29T22:12:45.000Z","dependencies_parsed_at":"2023-09-26T19:37:19.924Z","dependency_job_id":"236bb33c-88f9-42df-8a0c-fb7f26c4ae88","html_url":"https://github.com/qdrant/fastembed","commit_stats":null,"previous_names":["qdrant/fastembed"],"tags_count":45,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/qdrant%2Ffastembed","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/qdrant%2Ffastembed/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/qdrant%2Ffastembed/releases","manifests_url":"https://repos.ecosyste.ms
/api/v1/hosts/GitHub/repositories/qdrant%2Ffastembed/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/qdrant","download_url":"https://codeload.github.com/qdrant/fastembed/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245681440,"owners_count":20655198,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["embeddings","openai","rag","retrieval","retrieval-augmented-generation","vector-search"],"created_at":"2024-07-31T16:00:54.330Z","updated_at":"2025-03-26T15:32:06.255Z","avatar_url":"https://github.com/qdrant.png","language":"Python","readme":"# ⚡️ What is FastEmbed?\n\nFastEmbed is a lightweight, fast Python library built for embedding generation. We [support popular text models](https://qdrant.github.io/fastembed/examples/Supported_Models/). Please [open a GitHub issue](https://github.com/qdrant/fastembed/issues/new) if you want us to add a new model.\n\nThe default text embedding (`TextEmbedding`) model is Flag Embedding, presented in the [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard. It supports \"query\" and \"passage\" prefixes for the input text. Here is an example of [Retrieval Embedding Generation](https://qdrant.github.io/fastembed/qdrant/Retrieval_with_FastEmbed/) and how to use [FastEmbed with Qdrant](https://qdrant.github.io/fastembed/qdrant/Usage_With_Qdrant/).\n\n## 📈 Why FastEmbed?\n\n1. Light: FastEmbed is a lightweight library with few external dependencies. We don't require a GPU and don't download gigabytes of PyTorch dependencies; instead, we use the ONNX Runtime. This makes it a great candidate for serverless runtimes like AWS Lambda.\n\n2. Fast: FastEmbed is designed for speed. We use the ONNX Runtime, which is faster than PyTorch. We also use data parallelism for encoding large datasets.\n\n3. Accurate: FastEmbed produces more accurate embeddings than OpenAI Ada-002. We also [support](https://qdrant.github.io/fastembed/examples/Supported_Models/) an ever-expanding set of models, including a few multilingual models.\n\n## 🚀 Installation\n\nTo install the FastEmbed library, pip works best. You can install it with or without GPU support:\n\n```bash\npip install fastembed\n\n# or with GPU support\n\npip install fastembed-gpu\n```\n\n## 📖 Quickstart\n\n```python\nfrom fastembed import TextEmbedding\n\n# Example list of documents\ndocuments: list[str] = [\n    \"This is built to be faster and lighter than other embedding libraries e.g. Transformers, Sentence-Transformers, etc.\",\n    \"fastembed is supported by and maintained by Qdrant.\",\n]\n\n# This will trigger the model download and initialization\nembedding_model = TextEmbedding()\nprint(\"The model BAAI/bge-small-en-v1.5 is ready to use.\")\n\nembeddings_generator = embedding_model.embed(documents)  # reminder: this is a lazy generator\nembeddings_list = list(embeddings_generator)  # you can also materialize the generator into a list, and that into a numpy array\nlen(embeddings_list[0])  # vector of 384 dimensions\n```\n\nFastEmbed supports a variety of models for different tasks and modalities.\nThe list of all the available models can be found [here](https://qdrant.github.io/fastembed/examples/Supported_Models/).\n\n### 🎒 Dense text embeddings\n\n```python\nfrom fastembed import TextEmbedding\n\nmodel = TextEmbedding(model_name=\"BAAI/bge-small-en-v1.5\")\nembeddings = list(model.embed(documents))\n\n# [\n#   array([-0.1115,  0.0097,  0.0052,  0.0195, ...], dtype=float32),\n#   array([-0.1019,  0.0635, -0.0332,  0.0522, ...], dtype=float32)\n# ]\n```\n\nDense text embeddings can also be extended with models that are not in the list of supported models.\n\n```python\nfrom fastembed import TextEmbedding\nfrom fastembed.common.model_description import PoolingType, ModelSource\n\nTextEmbedding.add_custom_model(\n    model=\"intfloat/multilingual-e5-small\",\n    pooling=PoolingType.MEAN,\n    normalization=True,\n    sources=ModelSource(hf=\"intfloat/multilingual-e5-small\"),  # can be used with a `url` to load files from private storage\n    dim=384,\n    model_file=\"onnx/model.onnx\",  # can be used to load an already supported model with another optimization or quantization, e.g. onnx/model_O4.onnx\n)\nmodel = TextEmbedding(model_name=\"intfloat/multilingual-e5-small\")\nembeddings = list(model.embed(documents))\n```\n\n### 🔱 Sparse text embeddings\n\n* SPLADE++\n\n```python\nfrom fastembed import SparseTextEmbedding\n\nmodel = SparseTextEmbedding(model_name=\"prithivida/Splade_PP_en_v1\")\nembeddings = list(model.embed(documents))\n\n# [\n#   SparseEmbedding(indices=[ 17, 123, 919, ... ], values=[0.71, 0.22, 0.39, ...]),\n#   SparseEmbedding(indices=[ 38,  12,  91, ... ], values=[0.11, 0.22, 0.39, ...])\n# ]\n```\n\n\u003c!--\n* BM42 - ([link](ToDo))\n\n```\nfrom fastembed import SparseTextEmbedding\n\nmodel = SparseTextEmbedding(model_name=\"Qdrant/bm42-all-minilm-l6-v2-attentions\")\nembeddings = list(model.embed(documents))\n\n# [\n#   SparseEmbedding(indices=[ 17, 123, 919, ... ], values=[0.71, 0.22, 0.39, ...]),\n#   SparseEmbedding(indices=[ 38,  12,  91, ... 
], values=[0.11, 0.22, 0.39, ...])\n# ]\n```\n--\u003e\n\n### 🦥 Late interaction models (aka ColBERT)\n\n\n```python\nfrom fastembed import LateInteractionTextEmbedding\n\nmodel = LateInteractionTextEmbedding(model_name=\"colbert-ir/colbertv2.0\")\nembeddings = list(model.embed(documents))\n\n# [\n#   array([\n#       [-0.1115,  0.0097,  0.0052,  0.0195, ...],\n#       [-0.1019,  0.0635, -0.0332,  0.0522, ...],\n#   ]),\n#   array([\n#       [-0.9019,  0.0335, -0.0032,  0.0991, ...],\n#       [-0.2115,  0.8097,  0.1052,  0.0195, ...],\n#   ]),  \n# ]\n```\n\n### 🖼️ Image embeddings\n\n```python\nfrom fastembed import ImageEmbedding\n\nimages = [\n    \"./path/to/image1.jpg\",\n    \"./path/to/image2.jpg\",\n]\n\nmodel = ImageEmbedding(model_name=\"Qdrant/clip-ViT-B-32-vision\")\nembeddings = list(model.embed(images))\n\n# [\n#   array([-0.1115,  0.0097,  0.0052,  0.0195, ...], dtype=float32),\n#   array([-0.1019,  0.0635, -0.0332,  0.0522, ...], dtype=float32)\n# ]\n```\n\n### Late interaction multimodal models (ColPali)\n\n```python\nfrom fastembed import LateInteractionMultimodalEmbedding\n\ndoc_images = [\n    \"./path/to/qdrant_pdf_doc_1_screenshot.jpg\",\n    \"./path/to/colpali_pdf_doc_2_screenshot.jpg\",\n]\n\nquery = \"What is Qdrant?\"\n\nmodel = LateInteractionMultimodalEmbedding(model_name=\"Qdrant/colpali-v1.3-fp16\")\ndoc_images_embeddings = list(model.embed_image(doc_images))\n# shape (2, 1030, 128)\n# [array([[-0.03353882, -0.02090454, ..., -0.15576172, -0.07678223]], dtype=float32)]\nquery_embedding = model.embed_text(query)\n# shape (1, 20, 128)\n# [array([[-0.00218201,  0.14758301, ...,  -0.02207947,  0.16833496]], dtype=float32)]\n```\n\n### 🔄 Rerankers\n```python\nfrom fastembed.rerank.cross_encoder import TextCrossEncoder\n\nquery = \"Who is maintaining Qdrant?\"\ndocuments: list[str] = [\n    \"This is built to be faster and lighter than other embedding libraries e.g. 
Transformers, Sentence-Transformers, etc.\",\n    \"fastembed is supported by and maintained by Qdrant.\",\n]\nencoder = TextCrossEncoder(model_name=\"Xenova/ms-marco-MiniLM-L-6-v2\")\nscores = list(encoder.rerank(query, documents))\n\n# [-11.48061752319336, 5.472434997558594]\n```\n\nText cross encoders can also be extended with models that are not in the list of supported models.\n\n```python\nfrom fastembed.rerank.cross_encoder import TextCrossEncoder\nfrom fastembed.common.model_description import ModelSource\n\nTextCrossEncoder.add_custom_model(\n    model=\"Xenova/ms-marco-MiniLM-L-4-v2\",\n    model_file=\"onnx/model.onnx\",\n    sources=ModelSource(hf=\"Xenova/ms-marco-MiniLM-L-4-v2\"),\n)\nmodel = TextCrossEncoder(model_name=\"Xenova/ms-marco-MiniLM-L-4-v2\")\nscores = list(model.rerank_pairs(\n    [(\"What is AI?\", \"Artificial intelligence is ...\"), (\"What is ML?\", \"Machine learning is ...\")]\n))\n```\n\n## ⚡️ FastEmbed on a GPU\n\nFastEmbed supports running on GPU devices.\nIt requires installation of the `fastembed-gpu` package.\n\n```bash\npip install fastembed-gpu\n```\n\nCheck our [example](https://qdrant.github.io/fastembed/examples/FastEmbed_GPU/) for detailed instructions, CUDA 12.x support, and troubleshooting of common issues.\n\n```python\nfrom fastembed import TextEmbedding\n\nembedding_model = TextEmbedding(\n    model_name=\"BAAI/bge-small-en-v1.5\",\n    providers=[\"CUDAExecutionProvider\"]\n)\nprint(\"The model BAAI/bge-small-en-v1.5 is ready to use on a GPU.\")\n```\n\n## Usage with Qdrant\n\nInstallation with Qdrant Client in Python:\n\n```bash\npip install qdrant-client[fastembed]\n```\n\nor\n\n```bash\npip install qdrant-client[fastembed-gpu]\n```\n\nOn zsh you might have to quote the package name: `pip install 'qdrant-client[fastembed]'`.\n\n```python\nfrom qdrant_client import QdrantClient\n\n# Initialize the client\nclient = QdrantClient(\"localhost\", port=6333) # For production\n# client = QdrantClient(\":memory:\") # For small experiments\n\n# Prepare your documents, metadata, and IDs\ndocs = [\"Qdrant has Langchain integrations\", \"Qdrant also has Llama Index integrations\"]\nmetadata = [\n    {\"source\": \"Langchain-docs\"},\n    {\"source\": \"Llama-index-docs\"},\n]\nids = [42, 2]\n\n# If you want to change the model:\n# client.set_model(\"sentence-transformers/all-MiniLM-L6-v2\")\n# List of supported models: https://qdrant.github.io/fastembed/examples/Supported_Models\n\n# Use the new add() instead of upsert()\n# This internally calls embed() of the configured embedding model\nclient.add(\n    collection_name=\"demo_collection\",\n    documents=docs,\n    metadata=metadata,\n    ids=ids\n)\n\nsearch_result = client.query(\n    collection_name=\"demo_collection\",\n    query_text=\"This is a query document\"\n)\nprint(search_result)\n```\n","funding_links":[],"categories":["SDKs \u0026 Libraries","Python","Vector Databases, Vector Search, Nearest-Neighbor Search","Openai","vector-search","📋 Contents","Tools \u0026 Evaluation"],"sub_categories":["Web Services_Other","🔍 5. Retrieval-Augmented Generation (RAG) \u0026 Knowledge","Local Inference"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fqdrant%2Ffastembed","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fqdrant%2Ffastembed","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fqdrant%2Ffastembed/lists"}