{"id":13644863,"url":"https://github.com/neumtry/neumai","last_synced_at":"2025-10-29T03:23:44.508Z","repository":{"id":194669209,"uuid":"691317919","full_name":"NeumTry/NeumAI","owner":"NeumTry","description":"Neum AI is a best-in-class framework to manage the creation and synchronization of vector embeddings at large scale.","archived":false,"fork":false,"pushed_at":"2024-01-15T23:00:58.000Z","size":4019,"stargazers_count":854,"open_issues_count":8,"forks_count":47,"subscribers_count":10,"default_branch":"main","last_synced_at":"2025-04-19T22:54:28.625Z","etag":null,"topics":["ai","chatgpt","data","data-engineering","database","embeddings","etl","llm","llmops","mlops","ops","pipeline","python","rag","retrieval","vector-database","vectors"],"latest_commit_sha":null,"homepage":"https://neum.ai","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/NeumTry.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2023-09-14T00:04:50.000Z","updated_at":"2025-04-14T13:48:57.000Z","dependencies_parsed_at":"2023-09-15T02:17:45.005Z","dependency_job_id":"5a97f240-144f-41c8-a9b6-1ee3d821d151","html_url":"https://github.com/NeumTry/NeumAI","commit_stats":null,"previous_names":["neumtry/neumai-tools","neumtry/neum-tools"],"tags_count":9,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NeumTry%2FNeumAI","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NeumTry%2FNeumAI/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NeumTry%2FNe
umAI/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NeumTry%2FNeumAI/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/NeumTry","download_url":"https://codeload.github.com/NeumTry/NeumAI/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250040893,"owners_count":21365178,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","chatgpt","data","data-engineering","database","embeddings","etl","llm","llmops","mlops","ops","pipeline","python","rag","retrieval","vector-database","vectors"],"created_at":"2024-08-02T01:02:16.758Z","updated_at":"2025-10-29T03:23:44.440Z","avatar_url":"https://github.com/NeumTry.png","language":"Python","readme":"\u003ch1 align=\"center\"\u003eNeum AI\u003c/h1\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \n  [Homepage](https://www.neum.ai) | [Documentation](https://docs.neum.ai) | [Blog](https://neum.ai/blog) | [Discord](https://discord.gg/mJeNZYRz4m) | [Twitter](https://twitter.com/neum_ai)\n  \n  \u003ca href=\"https://www.ycombinator.com/companies/neum-ai\"\u003e\u003cimg src=\"https://badgen.net/badge/Y%20Combinator/S23/orange\"/\u003e\u003c/a\u003e \n  \u003ca href=\"https://pypi.org/project/neumai/\"\u003e\n    \u003cimg src=\"https://img.shields.io/pypi/v/neumai\" alt=\"PyPI\"\u003e\n  \u003c/a\u003e\n\u003c/div\u003e\n\n![Neum AI Hero](https://uploads-ssl.webflow.com/6552c062a6c96c60086c77df/6557cfde1ff0648321e5d3ba_Group%2066.png)\n\n**[Neum AI](https://neum.ai) is a data platform that helps developers leverage their 
data to contextualize Large Language Models through Retrieval Augmented Generation (RAG).** This includes extracting data from existing data sources like document storage and NoSQL, processing the contents into vector embeddings, and ingesting the vector embeddings into vector databases for similarity search.\n\nIt provides a comprehensive solution for RAG that can scale with your application and reduce the time spent integrating services like data connectors, embedding models and vector databases.\n\n## Features\n\n- 🏭 **High throughput distributed architecture** to handle billions of data points. Allows high degrees of parallelization to optimize embedding generation and ingestion.\n- 🧱 **Built-in data connectors** to common data sources, embedding services and vector stores.\n- 🔄 **Real-time synchronization** of data sources to ensure your data is always up-to-date.\n- ♻ **Customizable data pre-processing** in the form of loading, chunking and selecting.\n- 🤝 **Cohesive data management** to support hybrid retrieval with metadata. Neum AI automatically augments and tracks metadata to provide a rich retrieval experience.\n\n## Talk to us\n\nYou can reach our team either through email ([founders@tryneum.com](mailto:founders@tryneum.com)), on [Discord](https://discord.gg/mJeNZYRz4m) or by [scheduling a call with us](https://calendly.com/neum-ai/neum-ai-demo?month=2023-12).\n\n## Getting Started\n\n### Neum AI Cloud\n\nSign up today at [dashboard.neum.ai](https://dashboard.neum.ai). See our [quickstart](https://docs.neum.ai/get-started/quickstart) to get started.\n\nThe Neum AI Cloud supports a large-scale, distributed architecture to run millions of documents through vector embedding. 
For the full set of features, see [Cloud vs Local](https://neumai.mintlify.app/get-started/cloud-vs-local).\n\n### Local Development\n\nInstall the [`neumai`](https://pypi.org/project/neumai/) package:\n\n```bash\npip install neumai\n```\n\nTo create your first data pipeline, visit our [quickstart](https://docs.neum.ai/get-started/quickstart).\n\nAt a high level, a pipeline consists of one or multiple sources to pull data from, one embed connector to vectorize the content, and one sink connector to store said vectors.\nThe snippet below creates each of these components and runs a pipeline:\n\u003cdetails open\u003e\u003csummary\u003e\n  \n  ### Creating and running a pipeline\n  \u003c/summary\u003e\n  \n  ```python\n  \n  from neumai.DataConnectors.WebsiteConnector import WebsiteConnector\n  from neumai.Shared.Selector import Selector\n  from neumai.Loaders.HTMLLoader import HTMLLoader\n  from neumai.Chunkers.RecursiveChunker import RecursiveChunker\n  from neumai.Sources.SourceConnector import SourceConnector\n  from neumai.EmbedConnectors import OpenAIEmbed\n  from neumai.SinkConnectors import WeaviateSink\n  from neumai.Pipelines import Pipeline\n\n  website_connector = WebsiteConnector(\n      url = \"https://www.neum.ai/post/retrieval-augmented-generation-at-scale\",\n      selector = Selector(\n          to_metadata=['url']\n      )\n  )\n  source = SourceConnector(\n      data_connector = website_connector,\n      loader = HTMLLoader(),\n      chunker = RecursiveChunker()\n  )\n\n  openai_embed = OpenAIEmbed(\n      api_key = \"\u003cOPEN AI KEY\u003e\",\n  )\n\n  weaviate_sink = WeaviateSink(\n      url = \"your-weaviate-url\",\n      api_key = \"your-api-key\",\n      class_name = \"your-class-name\",\n  )\n\n  pipeline = Pipeline(\n      sources=[source],\n      embed=openai_embed,\n      sink=weaviate_sink\n  )\n  pipeline.run()\n\n  results = pipeline.search(\n      query=\"What are the challenges with scaling RAG?\",\n      number_of_results=3\n 
 )\n\n  for result in results:\n      print(result.metadata)\n  ```\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\n\n  ### Creating and running a pipeline - Postgres connector\n  \u003c/summary\u003e\n\n  ```python\n  \n  from neumai.DataConnectors.PostgresConnector import PostgresConnector\n  from neumai.Shared.Selector import Selector\n  from neumai.Loaders.JSONLoader import JSONLoader\n  from neumai.Chunkers.RecursiveChunker import RecursiveChunker\n  from neumai.Sources.SourceConnector import SourceConnector\n  from neumai.EmbedConnectors import OpenAIEmbed\n  from neumai.SinkConnectors import WeaviateSink\n  from neumai.Pipelines import Pipeline\n\n  postgres_connector = PostgresConnector(\n      connection_string = 'postgres',\n      query = 'SELECT * FROM ...'\n  )\n  source = SourceConnector(\n      data_connector = postgres_connector,\n      loader = JSONLoader(\n          id_key='\u003cthe id key of your JSON objects\u003e',\n          selector=Selector(\n              to_embed=['property1_to_embed','property2_to_embed'],\n              to_metadata=['property3_to_include_in_metadata_in_vector']\n          )\n      ),\n      chunker = RecursiveChunker()\n  )\n\n  openai_embed = OpenAIEmbed(\n      api_key = \"\u003cOPEN AI KEY\u003e\",\n  )\n\n  weaviate_sink = WeaviateSink(\n      url = \"your-weaviate-url\",\n      api_key = \"your-api-key\",\n      class_name = \"your-class-name\",\n  )\n\n  pipeline = Pipeline(\n      sources=[source],\n      embed=openai_embed,\n      sink=weaviate_sink\n  )\n\n  pipeline.run()\n\n  results = pipeline.search(\n      query=\"...\",\n      number_of_results=3\n  )\n\n  for result in results:\n      print(result.metadata)\n  ```\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\n  \n  ### Publishing a pipeline to Neum Cloud\n  \u003c/summary\u003e\n  \n  ```python\n  from neumai.Client.NeumClient import NeumClient\n  client = NeumClient(\n      api_key='\u003cyour neum api key, get it from 
https://dashboard.neum.ai\u003e',\n  )\n  client.create_pipeline(pipeline=pipeline)\n  ```\n\u003c/details\u003e\n\n### Self-host\n\nIf you are interested in deploying Neum AI to your own cloud, contact us at [founders@tryneum.com](mailto:founders@tryneum.com).\n\nWe have a sample backend architecture published on [GitHub](https://github.com/NeumTry/neum-at-scale), which you can use as a starting point.\n\n## Available Connectors\nFor an up-to-date list, please visit our [docs](https://docs.neum.ai/components/sourceConnector).\n\n\u003cdetails\u003e\n\n### Source connectors\n1. Postgres\n2. Hosted Files\n3. Websites\n4. S3\n5. Azure Blob\n6. SharePoint\n7. SingleStore\n8. Supabase Storage\n\n### Embed Connectors\n1. OpenAI embeddings\n2. Azure OpenAI embeddings\n\n### Sink Connectors\n1. Supabase Postgres\n2. Weaviate\n3. Qdrant\n4. Pinecone\n5. SingleStore\n\n\u003c/details\u003e\n\n## Roadmap\nOur roadmap evolves with user requests, so if anything is missing, feel free to open an issue or send us a message.\n\n\u003cdetails\u003e\n  \nConnectors\n- [ ]  MySQL - Source\n- [ ]  GitHub - Source\n- [ ]  Google Drive - Source\n- [ ]  Hugging Face - Embedding\n- [x]  LanceDB - Sink\n- [x]  Marqo - Sink\n- [ ]  Milvus - Sink\n- [ ]  Chroma - Sink\n\nSearch\n- [x]  Retrieval feedback\n- [x]  Filter support\n- [x]  Unified Neum AI filters\n- [ ]  Smart routing (w/ embedding-based classification)\n- [ ]  Smart routing (w/ LLM-based classification)\n- [ ]  Self-Query Retrieval (w/ metadata attribute generation)\n\nExtensibility\n- [x]  LangChain / LlamaIndex Document to Neum Document converter\n- [ ]  Custom chunking and loading\n\nExperimental\n- [ ]  Async metadata augmentation\n- [ ]  Chat history connector\n- [ ]  Structured (SQL and GraphQL) search connector\n\u003c/details\u003e\n\n## Neum Tools\nAdditional tooling for Neum AI can be found here:\n\n- [neumai-tools](https://pypi.org/project/neumai-tools/): contains pre-processing tools for loading and chunking data before 
generating vector embeddings.\n","funding_links":[],"categories":["NLP"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fneumtry%2Fneumai","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fneumtry%2Fneumai","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fneumtry%2Fneumai/lists"}