{"id":9478083,"url":"https://github.com/redis/redis-vl-python","last_synced_at":"2025-05-14T20:10:07.710Z","repository":{"id":69891618,"uuid":"561914940","full_name":"redis/redis-vl-python","owner":"redis","description":"Redis Vector Library (RedisVL) -- the AI-native Python client for Redis.","archived":false,"fork":false,"pushed_at":"2025-05-01T18:30:55.000Z","size":82189,"stargazers_count":291,"open_issues_count":12,"forks_count":46,"subscribers_count":10,"default_branch":"main","last_synced_at":"2025-05-08T00:13:39.579Z","etag":null,"topics":["embedding","large-language-models","llm","llmcache","openai","python","redis","retrieval-augmented-generation","semantic-cache","vector-database","vector-search"],"latest_commit_sha":null,"homepage":"https://docs.redisvl.com","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/redis.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2022-11-04T19:36:02.000Z","updated_at":"2025-05-06T11:08:17.000Z","dependencies_parsed_at":"2024-02-13T16:41:12.960Z","dependency_job_id":"8b9b4f70-a1f1-4431-963e-e274e8629d82","html_url":"https://github.com/redis/redis-vl-python","commit_stats":{"total_commits":179,"total_committers":14,"mean_commits":"12.785714285714286","dds":0.4916201117318436,"last_synced_commit":"468ecd4fe2ec46056c8807170ded1f36cd6c75ac"},"previous_names":["redis/redis-vl-python","redisventures/redisvl"],"tags_count":28,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/redis%2Fre
dis-vl-python","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/redis%2Fredis-vl-python/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/redis%2Fredis-vl-python/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/redis%2Fredis-vl-python/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/redis","download_url":"https://codeload.github.com/redis/redis-vl-python/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254219374,"owners_count":22034397,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["embedding","large-language-models","llm","llmcache","openai","python","redis","retrieval-augmented-generation","semantic-cache","vector-database","vector-search"],"created_at":"2024-05-12T01:07:37.874Z","updated_at":"2025-05-14T20:10:07.703Z","avatar_url":"https://github.com/redis.png","language":"Python","readme":"\u003cdiv align=\"center\" dir=\"auto\"\u003e\n    \u003cimg width=\"300\" src=\"https://raw.githubusercontent.com/redis/redis-vl-python/main/docs/_static/Redis_Logo_Red_RGB.svg\" style=\"max-width: 100%\" alt=\"Redis\"\u003e\n    \u003ch1\u003e🔥 Vector Library\u003c/h1\u003e\n\u003c/div\u003e\n\n\n\u003cdiv align=\"center\" style=\"margin-top: 20px;\"\u003e\n    \u003cspan style=\"display: block; margin-bottom: 10px;\"\u003eThe *AI-native* Redis Python client\u003c/span\u003e\n    \u003cbr /\u003e\n\n\n[![License: 
MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n![Language](https://img.shields.io/github/languages/top/redis/redis-vl-python)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n![GitHub last commit](https://img.shields.io/github/last-commit/redis/redis-vl-python)\n[![pypi](https://badge.fury.io/py/redisvl.svg)](https://pypi.org/project/redisvl/)\n![PyPI - Downloads](https://img.shields.io/pypi/dm/redisvl)\n[![GitHub stars](https://img.shields.io/github/stars/redis/redis-vl-python)](https://github.com/redis/redis-vl-python/stargazers)\n\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n\u003cdiv display=\"inline-block\"\u003e\n    \u003ca href=\"https://github.com/redis/redis-vl-python\"\u003e\u003cb\u003eHome\u003c/b\u003e\u003c/a\u003e\u0026nbsp;\u0026nbsp;\u0026nbsp;\n    \u003ca href=\"https://docs.redisvl.com\"\u003e\u003cb\u003eDocumentation\u003c/b\u003e\u003c/a\u003e\u0026nbsp;\u0026nbsp;\u0026nbsp;\n    \u003ca href=\"https://github.com/redis-developer/redis-ai-resources\"\u003e\u003cb\u003eRecipes\u003c/b\u003e\u003c/a\u003e\u0026nbsp;\u0026nbsp;\u0026nbsp;\n  \u003c/div\u003e\n    \u003cbr /\u003e\n\u003c/div\u003e\n\n\n# Introduction\n\nRedis Vector Library is the ultimate Python client designed for AI-native applications harnessing the power of [Redis](https://redis.io).\n\n[redisvl](https://pypi.org/project/redisvl/) is your go-to client for:\n\n- Lightning-fast information retrieval \u0026 vector similarity search\n- Real-time RAG pipelines\n- Agentic memory structures\n- Smart recommendation engines\n\n\n# 💪 Getting Started\n\n## Installation\n\nInstall `redisvl` into your Python (\u003e=3.8) environment using `pip`:\n\n```bash\npip install redisvl\n```\n\u003e For more detailed instructions, visit the [installation guide](https://docs.redisvl.com/en/stable/overview/installation.html).\n\n## Setting up Redis\n\nChoose from multiple 
Redis deployment options:\n\n\n1. [Redis Cloud](https://redis.io/try-free): Managed cloud database (free tier available)\n2. [Redis Stack](https://redis.io/docs/getting-started/install-stack/docker/): Docker image for development\n    ```bash\n    docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest\n    ```\n3. [Redis Enterprise](https://redis.io/enterprise/): Commercial, self-hosted database\n4. [Azure Managed Redis](https://azure.microsoft.com/en-us/products/managed-redis): Fully managed Redis Enterprise on Azure\n\n\u003e Enhance your experience and observability with the free [Redis Insight GUI](https://redis.io/insight/).\n\n\n# Overview\n\n\n## 🗃️ Redis Index Management\n1. [Design a schema for your use case](https://docs.redisvl.com/en/stable/user_guide/01_getting_started.html#define-an-indexschema) that models your dataset with built-in Redis indexable fields (*e.g. text, tags, numerics, geo, and vectors*). [Load a schema](https://docs.redisvl.com/en/stable/user_guide/01_getting_started.html#example-schema-creation) from a YAML file:\n    ```yaml\n    index:\n      name: user-idx\n      prefix: user\n      storage_type: json\n\n    fields:\n      - name: user\n        type: tag\n      - name: credit_score\n        type: tag\n      - name: embedding\n        type: vector\n        attrs:\n          algorithm: flat\n          dims: 4\n          distance_metric: cosine\n          datatype: float32\n    ```\n    ```python\n    from redisvl.schema import IndexSchema\n\n    schema = IndexSchema.from_yaml(\"schemas/schema.yaml\")\n    ```\n    Or load directly from a Python dictionary:\n    ```python\n    schema = IndexSchema.from_dict({\n        \"index\": {\n            \"name\": \"user-idx\",\n            \"prefix\": \"user\",\n            \"storage_type\": \"json\"\n        },\n        \"fields\": [\n            {\"name\": \"user\", \"type\": \"tag\"},\n            {\"name\": \"credit_score\", \"type\": \"tag\"},\n           
 {\n                \"name\": \"embedding\",\n                \"type\": \"vector\",\n                \"attrs\": {\n                    \"algorithm\": \"flat\",\n                    \"datatype\": \"float32\",\n                    \"dims\": 4,\n                    \"distance_metric\": \"cosine\"\n                }\n            }\n        ]\n    })\n    ```\n\n2. [Create a SearchIndex](https://docs.redisvl.com/en/stable/user_guide/01_getting_started.html#create-a-searchindex) class with an input schema to perform admin and search operations on your index in Redis:\n    ```python\n    from redisvl.index import SearchIndex\n\n    # Define the index\n    index = SearchIndex(schema, redis_url=\"redis://localhost:6379\")\n\n    # Create the index in Redis\n    index.create()\n    ```\n    \u003e An async-compatible index class is also available: [AsyncSearchIndex](https://docs.redisvl.com/en/stable/api/searchindex.html#redisvl.index.AsyncSearchIndex).\n\n3. [Load](https://docs.redisvl.com/en/stable/user_guide/01_getting_started.html#load-data-to-searchindex)\nand [fetch](https://docs.redisvl.com/en/stable/user_guide/01_getting_started.html#fetch-an-object-from-redis) data to/from your Redis instance:\n    ```python\n    data = {\"user\": \"john\", \"credit_score\": \"high\", \"embedding\": [0.23, 0.49, -0.18, 0.95]}\n\n    # load list of dictionaries, specify the \"id\" field\n    index.load([data], id_field=\"user\")\n\n    # fetch by \"id\"\n    john = index.fetch(\"john\")\n    ```\n\n## 🔍 Retrieval\n\nDefine queries and perform advanced searches over your indices, combining vectors, metadata filters, and more.\n\n- [VectorQuery](https://docs.redisvl.com/en/stable/api/query.html#vectorquery) - Flexible vector queries with customizable filters enabling semantic search:\n\n    ```python\n    from redisvl.query import VectorQuery\n\n    query = VectorQuery(\n      vector=[0.16, -0.34, 0.98, 0.23],\n      
vector_field_name=\"embedding\",\n      num_results=3\n    )\n    # run the vector search query against the embedding field\n    results = index.query(query)\n    ```\n\n    Incorporate complex metadata filters on your queries:\n    ```python\n    from redisvl.query.filter import Tag\n\n    # define a tag match filter\n    tag_filter = Tag(\"user\") == \"john\"\n\n    # update query definition\n    query.set_filter(tag_filter)\n\n    # execute query\n    results = index.query(query)\n    ```\n\n- [RangeQuery](https://docs.redisvl.com/en/stable/api/query.html#rangequery) - Vector search within a defined range paired with customizable filters\n- [FilterQuery](https://docs.redisvl.com/en/stable/api/query.html#filterquery) - Standard search using filters and full-text search\n- [CountQuery](https://docs.redisvl.com/en/stable/api/query.html#countquery) - Count the number of indexed records given attributes\n\n\u003e Read more about building [advanced Redis queries](https://docs.redisvl.com/en/stable/user_guide/02_hybrid_queries.html).\n\n\n## 🔧 Utilities\n\n### Vectorizers\nIntegrate with popular embedding providers to greatly simplify the process of vectorizing unstructured data for your index and queries:\n- [AzureOpenAI](https://docs.redisvl.com/en/stable/api/vectorizer.html#azureopenaitextvectorizer)\n- [Cohere](https://docs.redisvl.com/en/stable/api/vectorizer.html#coheretextvectorizer)\n- [Custom](https://docs.redisvl.com/en/stable/api/vectorizer.html#customtextvectorizer)\n- [GCP VertexAI](https://docs.redisvl.com/en/stable/api/vectorizer.html#vertexaitextvectorizer)\n- [HuggingFace](https://docs.redisvl.com/en/stable/api/vectorizer.html#hftextvectorizer)\n- [Mistral](https://docs.redisvl.com/en/stable/api/vectorizer.html#mistralaitextvectorizer)\n- [OpenAI](https://docs.redisvl.com/en/stable/api/vectorizer.html#openaitextvectorizer)\n- [VoyageAI](https://docs.redisvl.com/en/stable/api/vectorizer.html#voyageaitextvectorizer)\n\n```python\n
from redisvl.utils.vectorize import CohereTextVectorizer\n\n# set COHERE_API_KEY in your environment\nco = CohereTextVectorizer()\n\nembedding = co.embed(\n    text=\"What is the capital city of France?\",\n    input_type=\"search_query\"\n)\n\nembeddings = co.embed_many(\n    texts=[\"my document chunk content\", \"my other document chunk content\"],\n    input_type=\"search_document\"\n)\n```\n\n\u003e Learn more about using [vectorizers](https://docs.redisvl.com/en/stable/user_guide/04_vectorizers.html) in your embedding workflows.\n\n\n### Rerankers\n[Integrate with popular reranking providers](https://docs.redisvl.com/en/stable/user_guide/06_rerankers.html) to improve the relevancy of the initial search results from Redis.\n\n### Threshold Optimization\n[Optimize distance thresholds for cache and router](https://docs.redisvl.com/en/stable/user_guide/09_threshold_optimization.html) with the `ThresholdOptimizer` utility classes.\n\n**Note:** only available for `python \u003e 3.9`.\n\n\n\n## 💫 Extensions\nWe're excited to announce support for **RedisVL Extensions**. These modules implement interfaces exposing best practices and design patterns for working with LLM memory and agents. We've taken the best of what we've learned from our users (that's you) as well as bleeding-edge customers, and packaged it up.\n\n*Have an idea for another extension? Open a PR or reach out to us at applied.ai@redis.com. 
We're always open to feedback.*\n\n### LLM Semantic Caching\nIncrease application throughput and reduce the cost of using LLMs in production by leveraging previously generated knowledge with the [`SemanticCache`](https://docs.redisvl.com/en/stable/api/cache.html#semanticcache).\n\n```python\nfrom redisvl.extensions.cache.llm import SemanticCache\n\n# init cache with TTL and semantic distance threshold\nllmcache = SemanticCache(\n    name=\"llmcache\",\n    ttl=360,\n    redis_url=\"redis://localhost:6379\",\n    distance_threshold=0.1\n)\n\n# store user queries and LLM responses in the semantic cache\nllmcache.store(\n    prompt=\"What is the capital city of France?\",\n    response=\"Paris\"\n)\n\n# quickly check the cache with a slightly different prompt (before invoking an LLM)\nresponse = llmcache.check(prompt=\"What is France's capital city?\")\nprint(response[0][\"response\"])\n```\n```stdout\n\u003e\u003e\u003e Paris\n```\n\n\u003e Learn more about [semantic caching](https://docs.redisvl.com/en/stable/user_guide/03_llmcache.html) for LLMs.\n\n### LLM Message History\n\nImprove personalization and accuracy of LLM responses by providing user message history as context. 
Manage access to message history data using recency or relevancy, *powered by vector search* with the [`MessageHistory`](https://docs.redisvl.com/en/stable/api/message_history.html).\n\n```python\nfrom redisvl.extensions.message_history import SemanticMessageHistory\n\nhistory = SemanticMessageHistory(\n    name=\"my-session\",\n    redis_url=\"redis://localhost:6379\",\n    distance_threshold=0.7\n)\n\nhistory.add_messages([\n    {\"role\": \"user\", \"content\": \"hello, how are you?\"},\n    {\"role\": \"assistant\", \"content\": \"I'm doing fine, thanks.\"},\n    {\"role\": \"user\", \"content\": \"what is the weather going to be today?\"},\n    {\"role\": \"assistant\", \"content\": \"I don't know\"}\n])\n```\nGet recent chat history:\n```python\nhistory.get_recent(top_k=1)\n```\n```stdout\n\u003e\u003e\u003e [{\"role\": \"assistant\", \"content\": \"I don't know\"}]\n```\nGet relevant chat history (powered by vector search):\n```python\nhistory.get_relevant(\"weather\", top_k=1)\n```\n```stdout\n\u003e\u003e\u003e [{\"role\": \"user\", \"content\": \"what is the weather going to be today?\"}]\n```\n\u003e Learn more about [LLM message history](https://docs.redisvl.com/en/stable/user_guide/07_message_history.html).\n\n\n### LLM Semantic Routing\nBuild fast decision models that run directly in Redis and route user queries to the nearest \"route\" or \"topic\".\n\n```python\nfrom redisvl.extensions.router import Route, SemanticRouter\n\nroutes = [\n    Route(\n        name=\"greeting\",\n        references=[\"hello\", \"hi\"],\n        metadata={\"type\": \"greeting\"},\n        distance_threshold=0.3,\n    ),\n    Route(\n        name=\"farewell\",\n        references=[\"bye\", \"goodbye\"],\n        metadata={\"type\": \"farewell\"},\n        distance_threshold=0.3,\n    ),\n]\n\n# build semantic router from routes\nrouter = SemanticRouter(\n    name=\"topic-router\",\n    routes=routes,\n    redis_url=\"redis://localhost:6379\",\n)\n\n\nrouter(\"Hi, good 
morning\")\n```\n```stdout\n\u003e\u003e\u003e RouteMatch(name='greeting', distance=0.273891836405)\n```\n\u003e Learn more about [semantic routing](https://docs.redisvl.com/en/stable/user_guide/08_semantic_router.html).\n\n## 🖥️ Command Line Interface\nCreate, destroy, and manage Redis index configurations from a purpose-built CLI interface: `rvl`.\n\n```bash\n$ rvl -h\n\nusage: rvl \u003ccommand\u003e [\u003cargs\u003e]\n\nCommands:\n        index       Index manipulation (create, delete, etc.)\n        version     Obtain the version of RedisVL\n        stats       Obtain statistics about an index\n```\n\n\u003e Read more about [using the CLI](https://docs.redisvl.com/en/latest/overview/cli.html).\n\n## 🚀 Why RedisVL?\n\nIn the age of GenAI, **vector databases** and **LLMs** are transforming information retrieval systems. With emerging and popular frameworks like [LangChain](https://github.com/langchain-ai/langchain) and [LlamaIndex](https://www.llamaindex.ai/), innovation is rapid. Yet, many organizations face the challenge of delivering AI solutions **quickly** and at **scale**.\n\nEnter [Redis](https://redis.io) – a cornerstone of the NoSQL world, renowned for its versatile [data structures](https://redis.io/docs/data-types/) and [processing engines](https://redis.io/docs/interact/). Redis excels in real-time workloads like caching, session management, and search. It's also a powerhouse as a vector database for RAG, an LLM cache, and a chat session memory store for conversational AI.\n\nThe Redis Vector Library bridges the gap between the AI-native developer ecosystem and Redis's robust capabilities. With a lightweight, elegant, and intuitive interface, RedisVL makes it easy to leverage Redis's power. 
Built on the [Redis Python](https://github.com/redis/redis-py/tree/master) client, `redisvl` transforms Redis's features into a grammar perfectly aligned with the needs of today's AI/ML Engineers and Data Scientists.\n\n\n## 😁 Helpful Links\n\nFor additional help, check out the following resources:\n - [Getting Started Guide](https://docs.redisvl.com/en/stable/user_guide/01_getting_started.html)\n - [API Reference](https://docs.redisvl.com/en/stable/api/index.html)\n - [Example Gallery](https://docs.redisvl.com/en/stable/examples/index.html)\n - [Redis AI Recipes](https://github.com/redis-developer/redis-ai-resources)\n - [Official Redis Vector API Docs](https://redis.io/docs/interact/search-and-query/advanced-concepts/vectors/)\n\n\n## 🫱🏼‍🫲🏽 Contributing\n\nPlease help us by contributing PRs, opening GitHub issues for bugs or new feature ideas, improving documentation, or increasing test coverage. [Read more about how to contribute!](CONTRIBUTING.md)\n\n## 🚧 Maintenance\nThis project is supported by [Redis, Inc](https://redis.io) on a good faith effort basis. To report bugs, request features, or receive assistance, please [file an issue](https://github.com/redis/redis-vl-python/issues).\n","funding_links":[],"categories":["Python","Integrations","CLIs"],"sub_categories":["☕️ Java AI Recipes"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fredis%2Fredis-vl-python","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fredis%2Fredis-vl-python","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fredis%2Fredis-vl-python/lists"}