{"id":13435470,"url":"https://github.com/activeloopai/deeplake","last_synced_at":"2026-02-11T04:01:16.359Z","repository":{"id":36966293,"uuid":"201403923","full_name":"activeloopai/deeplake","owner":"activeloopai","description":"Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, \u0026 visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai","archived":false,"fork":false,"pushed_at":"2026-02-08T16:57:24.000Z","size":131159,"stargazers_count":8996,"open_issues_count":62,"forks_count":705,"subscribers_count":96,"default_branch":"main","last_synced_at":"2026-02-08T17:31:13.698Z","etag":null,"topics":["ai","computer-vision","cv","data-science","datalake","datasets","deep-learning","image-processing","langchain","large-language-models","llm","machine-learning","ml","mlops","multi-modal","python","pytorch","tensorflow","vector-database","vector-search"],"latest_commit_sha":null,"homepage":"https://activeloop.ai","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/activeloopai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2019-08-09T06:17:59.000Z","updated_at":"2026-02-08T16:47:46.000Z","dependencies_parsed_at":"2023-11-08T23:22:23.363Z","dependency_job_id":"e15d860e-b80d-4f28-9ce8-6dd9cf97a937","html_url":"https://github.com/activeloopai/deeplake","commit_stats":{"total_commits":7218,"total_committers":1
44,"mean_commits":50.125,"dds":0.8369354391798283,"last_synced_commit":"47e9f45c5091af07275152ea27f92894d6c9fd1a"},"previous_names":["activeloopai/hub"],"tags_count":292,"template":false,"template_full_name":null,"purl":"pkg:github/activeloopai/deeplake","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/activeloopai%2Fdeeplake","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/activeloopai%2Fdeeplake/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/activeloopai%2Fdeeplake/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/activeloopai%2Fdeeplake/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/activeloopai","download_url":"https://codeload.github.com/activeloopai/deeplake/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/activeloopai%2Fdeeplake/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29326801,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-11T03:52:29.695Z","status":"ssl_error","status_checked_at":"2026-02-11T03:52:23.094Z","response_time":97,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","computer-vision","cv","data-science","datalake","datasets","deep-learning","image-processing","langchain","large-language-models","llm","machine-learning","ml","mlops","multi-modal","python","pytorch","tensorflow","vector-database","vector-search"],"created_at":"2024-07-31T03:00:35.963Z","updated_at":"2026-02-11T04:01:16.332Z","avatar_url":"https://github.com/activeloopai.png","language":"C++","readme":"\u003cimg src=\"https://static.scarf.sh/a.png?x-pxid=bc3c57b0-9a65-49fe-b8ea-f711c4d35b82\" /\u003e\u003cp align=\"center\"\u003e\n     \u003cimg src=\"https://i.postimg.cc/rsjcWc3S/deeplake-logo.png\" width=\"400\"/\u003e\n\u003c/h1\u003e\n\n\u003c/br\u003e\n\n\u003ch1 align=\"center\"\u003eDeep Lake: Database for AI\u003c/h1\u003e\n\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"https://pypi.org/project/deeplake/\"\u003e\u003cimg src=\"https://badge.fury.io/py/deeplake.svg\" alt=\"PyPI version\" height=\"18\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://pepy.tech/project/deeplake\"\u003e\u003cimg src=\"https://static.pepy.tech/badge/deeplake\" alt=\"PyPI version\" height=\"18\"\u003e\u003c/a\u003e\n  \u003ch3 align=\"center\"\u003e\n   \u003ca href=\"https://docs.deeplake.ai/?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003e\u003cb\u003eDocs\u003c/b\u003e\u003c/a\u003e \u0026bull;\n   \u003ca 
href=\"https://docs.deeplake.ai/latest/getting-started/quickstart/?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003e\u003cb\u003eGet Started\u003c/b\u003e\u003c/a\u003e \u0026bull;\n   \u003ca href=\"https://docs.deeplake.ai/latest/api/dataset/?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003e\u003cb\u003eAPI Reference\u003c/b\u003e\u003c/a\u003e \u0026bull;  \n   \u003ca href=\"http://learn.activeloop.ai\"\u003e\u003cb\u003eLangChain \u0026 VectorDBs Course\u003c/b\u003e\u003c/a\u003e \u0026bull;\n   \u003ca href=\"https://www.activeloop.ai/resources/?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003e\u003cb\u003eBlog\u003c/b\u003e\u003c/a\u003e \u0026bull;\n   \u003ca href=\"https://www.deeplake.ai/?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003e\u003cb\u003eWhitepaper\u003c/b\u003e\u003c/a\u003e \u0026bull;  \n  \u003ca href=\"http://slack.activeloop.ai\"\u003e\u003cb\u003eSlack\u003c/b\u003e\u003c/a\u003e \u0026bull;\n  \u003ca href=\"https://twitter.com/intent/tweet?url=https%3A%2F%2Factiveloop.ai%2F\u0026via=activeloopai\u0026text=Deep%20Lake%20is%20the%20Database%20for%20all%20AI%20data.%20Check%20it%20out%21\u0026hashtags=DeepLake%2Cactiveloop%2Copensource\"\u003e\u003cb\u003eTwitter\u003c/b\u003e\u003c/a\u003e\n \u003c/h3\u003e\n\n## What is Deep Lake?\n\nDeep Lake is a Database for AI powered by a storage format optimized for deep-learning applications. Deep Lake can be used for:\n\n1. Storing and searching data plus vectors while building LLM applications\n2. 
Managing datasets while training deep learning models\n   \nDeep Lake simplifies the deployment of enterprise-grade LLM-based products by offering storage for all data types (embeddings, audio, text, videos, images, DICOM, PDFs, annotations, [and more](https://docs.deeplake.ai/latest/api/types/)), querying and vector search, data streaming while training models at scale, data versioning and lineage, and integrations with popular tools such as LangChain, LlamaIndex, Weights \u0026 Biases, and many more. Deep Lake works with data of any size; it is serverless, and it enables you to store all of your data in your own cloud and in one place. Deep Lake is used by Intel, Bayer Radiology, Matterport, ZERO Systems, Red Cross, Yale, \u0026 Oxford. \n\n### Deep Lake includes the following features:\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eMulti-Cloud Support (S3, GCP, Azure)\u003c/b\u003e\u003c/summary\u003e\nUse one API to upload, download, and stream datasets to/from S3, Azure, GCP, Activeloop cloud, local storage, or in-memory storage. Compatible with any S3-compatible storage such as MinIO. \n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eNative Compression with Lazy NumPy-like Indexing\u003c/b\u003e\u003c/summary\u003e\nStore images, audio, and videos in their native compression. Slice, index, iterate, and interact with your data like a collection of NumPy arrays in your system's memory. Deep Lake lazily loads data only when needed, e.g., when training a model or running queries.\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDataloaders for Popular Deep Learning Frameworks\u003c/b\u003e\u003c/summary\u003e\nDeep Lake comes with built-in dataloaders for PyTorch and TensorFlow. Train your model with a few lines of code - we even take care of dataset shuffling. 
:)\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eIntegrations with Powerful Tools\u003c/b\u003e\u003c/summary\u003e\nDeep Lake has integrations with \u003ca href=\"https://github.com/hwchase17/langchain\"\u003eLangChain\u003c/a\u003e and \u003ca href=\"https://github.com/jerryjliu/llama_index\"\u003eLlamaIndex\u003c/a\u003e as a vector store for LLM apps, \u003ca href=\"https://wandb.ai/\"\u003eWeights \u0026 Biases\u003c/a\u003e for data lineage during model training, \u003ca href=\"https://github.com/open-mmlab/mmdetection\"\u003eMMDetection\u003c/a\u003e for training object detection models, and \u003ca href=\"https://github.com/open-mmlab/mmsegmentation\"\u003eMMSegmentation\u003c/a\u003e for training semantic segmentation models.\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003e100+ most-popular image, video, and audio datasets available in seconds\u003c/b\u003e\u003c/summary\u003e\nThe Deep Lake community has uploaded \u003ca href=\"https://app.activeloop.ai/datasets/activeloop?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003e100+ image, video and audio datasets\u003c/a\u003e like \u003ca href=\"https://app.activeloop.ai/activeloop/mnist-train?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003eMNIST\u003c/a\u003e, \u003ca href=\"https://app.activeloop.ai/activeloop/coco-train?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003eCOCO\u003c/a\u003e, \u003ca href=\"https://app.activeloop.ai/activeloop/imagenet-train?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003eImageNet\u003c/a\u003e, \u003ca href=\"https://app.activeloop.ai/activeloop/cifar100-test?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003eCIFAR\u003c/a\u003e, \u003ca 
href=\"https://app.activeloop.ai/activeloop/gtzan-genre?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003eGTZAN\u003c/a\u003e and others.\n\u003c/details\u003e\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eInstant Visualization Support in the \u003ca href=\"https://app.activeloop.ai/?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003eDeep Lake App\u003c/a\u003e\u003c/b\u003e\u003c/summary\u003e\nDeep Lake datasets are instantly visualized with bounding boxes, masks, annotations, etc. in \u003ca href=\"https://app.activeloop.ai/?utm_source=github\u0026utm_medium=github\u0026utm_campaign=github_readme\u0026utm_id=readme\"\u003eDeep Lake Visualizer\u003c/a\u003e (see below).\n\u003c/details\u003e\n\n[![Visualizer](https://www.linkpicture.com/q/ReadMe.gif \"Visualizer\")](https://www.youtube.com/watch?v=SxsofpSIw3k)\n\n## 🚀 How to install Deep Lake\nDeep Lake can be installed using pip:\n```sh\npip install deeplake\n```\n\n### To access all of Deep Lake's features, please register in the [Deep Lake App](https://app.activeloop.ai/register/).\n\n## 🧠 Deep Lake Code Examples by Application\n\n### Vector Store Applications\nUsing Deep Lake as a Vector Store for building LLM applications:\n- [Vector Store Quickstart](https://docs.deeplake.ai/latest/guides/rag/)\n- [Vector Store Tutorials](https://docs-v3.activeloop.ai/examples/rag/tutorials)\n- [LangChain Integration](https://docs-v3.activeloop.ai/examples/rag/langchain-integration)\n- [LlamaIndex Integration](https://docs-v3.activeloop.ai/examples/rag/llamaindex-integration)\n- [Image Similarity Search with Deep Lake](https://docs.deeplake.ai/latest/guides/rag/#5-integrating-image-embeddings-for-multi-modal-search)\n\n### Deep Learning Applications\nUsing Deep Lake for managing data while training Deep Learning models:\n- [Deep Learning 
Quickstart](https://docs.deeplake.ai/latest/guides/deep-learning/deep-learning/)\n- [Tutorials for Training Models](https://docs-v3.activeloop.ai/examples/dl/tutorials/training-models)\n\n## ⚙️ Integrations\n\nDeep Lake offers integrations with other tools to streamline your deep learning workflows. Current integrations include:\n\n* **LLM Apps**\n  * Use [Deep Lake as a vector store for LLM apps](https://www.activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/). Our integration combines the [LangChain](https://github.com/hwchase17/langchain) [VectorStores API](https://python.langchain.com/en/latest/reference/modules/vectorstore.html?highlight=pinecone#langchain.vectorstores.DeepLake) with Deep Lake datasets as the underlying data storage. The integration is a serverless vector store that can be deployed locally or in a cloud of your choice.\n\n## 📚 Documentation\n\nGetting started guides, examples, tutorials, API reference, and other useful information can be found on our [documentation page](http://docs.deeplake.ai/?utm_source=github\u0026utm_medium=repo\u0026utm_campaign=readme).\n\n## 🎓 For Students and Educators\nDeep Lake users can access and visualize a variety of popular datasets through a free integration with Deep Lake's App. Universities can get up to 1TB of data storage and 100,000 queries on the Tensor Database free per month. Reach out on [our website](https://activeloop.ai) to claim access!\n\n## 👩‍💻 Comparisons to Familiar Tools\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDeep Lake vs Chroma\u003c/b\u003e\u003c/summary\u003e\n  \nBoth Deep Lake \u0026 ChromaDB enable users to store and search vectors (embeddings) and offer integrations with LangChain and LlamaIndex. However, they are architecturally very different. 
ChromaDB is a Vector Database that can be deployed locally or on a server using Docker and will offer a hosted solution shortly. Deep Lake is a serverless Vector Store deployed on the user’s own cloud, locally, or in-memory. All computations run client-side, which enables users to support lightweight production apps in seconds. Unlike ChromaDB, Deep Lake’s data format can store raw data such as images, videos, and text, in addition to embeddings. ChromaDB is limited to light metadata on top of the embeddings and has no visualization. Deep Lake datasets can be visualized and version controlled. Deep Lake also has a performant dataloader for fine-tuning your Large Language Models.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDeep Lake vs Pinecone\u003c/b\u003e\u003c/summary\u003e\n  \nBoth Deep Lake and Pinecone enable users to store and search vectors (embeddings) and offer integrations with LangChain and LlamaIndex. However, they are architecturally very different. Pinecone is a fully managed Vector Database that is optimized for highly demanding applications requiring search across billions of vectors. Deep Lake is serverless. All computations run client-side, which enables users to get started in seconds. Unlike Pinecone, Deep Lake’s data format can store raw data such as images, videos, and text, in addition to embeddings. Deep Lake datasets can be visualized and version controlled. Pinecone is limited to light metadata on top of the embeddings and has no visualization. Deep Lake also has a performant dataloader for fine-tuning your Large Language Models.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDeep Lake vs Weaviate\u003c/b\u003e\u003c/summary\u003e\n  \nBoth Deep Lake and Weaviate enable users to store and search vectors (embeddings) and offer integrations with LangChain and LlamaIndex. However, they are architecturally very different. 
Weaviate is a Vector Database that can be deployed in a managed service or by the user via Kubernetes or Docker. Deep Lake is serverless. All computations run client-side, which enables users to support lightweight production apps in seconds. Unlike Weaviate, Deep Lake’s data format can store raw data such as images, videos, and text, in addition to embeddings. Deep Lake datasets can be visualized and version controlled. Weaviate is limited to light metadata on top of the embeddings and has no visualization. Deep Lake also has a performant dataloader for fine-tuning your Large Language Models.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDeep Lake vs DVC\u003c/b\u003e\u003c/summary\u003e\n  \nDeep Lake and DVC both offer Git-like version control for data, but their methods for storing data differ significantly. Deep Lake converts and stores data as chunked compressed arrays, which enables rapid streaming to ML models, whereas DVC operates on top of data stored in less efficient traditional file structures. The Deep Lake format makes dataset versioning significantly easier than DVC's file-based approach when datasets are composed of many files (i.e., many images). An additional distinction is that DVC primarily uses a command-line interface, whereas Deep Lake is a Python package. Lastly, Deep Lake offers an API to easily connect datasets to ML frameworks and other common ML tools and enables instant dataset visualization through [Activeloop's visualization tool](http://app.activeloop.ai/?utm_source=github\u0026utm_medium=repo\u0026utm_campaign=readme).\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDeep Lake vs MosaicML MDS format\u003c/b\u003e\u003c/summary\u003e\n  \n* **Data Storage Format:** Deep Lake operates on a columnar storage format, whereas MDS utilizes a row-wise storage approach. 
This fundamentally impacts how data is read, written, and organized in each system.\n* **Compression:** Deep Lake offers a more flexible compression scheme, allowing control over both chunk-level and sample-level compression for each column or tensor. This feature eliminates the need for additional compressions like zstd, which would otherwise demand more CPU cycles for decompressing on top of formats like JPEG.\n* **Shuffling:** MDS currently offers more advanced shuffling strategies.\n* **Version Control \u0026 Visualization Support:** A notable feature of Deep Lake is its native version control and in-browser data visualization, a feature not present in the MosaicML data format. This can provide significant advantages in managing, understanding, and tracking different versions of the data.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDeep Lake vs TensorFlow Datasets (TFDS)\u003c/b\u003e\u003c/summary\u003e\n  \nDeep Lake and TFDS seamlessly connect popular datasets to ML frameworks. Deep Lake datasets are compatible with both PyTorch and TensorFlow, whereas TFDS is only compatible with TensorFlow. A key difference between Deep Lake and TFDS is that Deep Lake datasets are designed for streaming from the cloud, whereas TFDS must be downloaded locally prior to use. As a result, with Deep Lake, one can import datasets directly from TensorFlow Datasets and stream them to either PyTorch or TensorFlow. In addition to providing access to popular publicly available datasets, Deep Lake also offers powerful tools for creating custom datasets, storing them on a variety of cloud storage providers, and collaborating with others via a simple API. TFDS is primarily focused on giving the public easy access to commonly available datasets, and management of custom datasets is not the primary focus. 
A full comparison article can be found [here](https://www.activeloop.ai/resources/tensor-flow-tf-data-activeloop-hub-how-to-implement-your-tensor-flow-data-pipelines-with-hub/).\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDeep Lake vs HuggingFace\u003c/b\u003e\u003c/summary\u003e\nDeep Lake and HuggingFace offer access to popular datasets, but Deep Lake primarily focuses on computer vision, whereas HuggingFace focuses on natural language processing. HuggingFace Transformers and other computational tools for NLP are not analogous to features offered by Deep Lake.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDeep Lake vs WebDatasets\u003c/b\u003e\u003c/summary\u003e\nDeep Lake and WebDatasets both offer rapid data streaming across networks. They have nearly identical streaming speeds because the underlying network requests and data structures are very similar. However, Deep Lake offers superior random access and shuffling, its simple API is in Python rather than the command line, and Deep Lake enables simple indexing and modification of the dataset without having to recreate it.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDeep Lake vs Zarr\u003c/b\u003e\u003c/summary\u003e\nDeep Lake and Zarr both offer storage of data as chunked arrays. However, Deep Lake is primarily designed for returning data as arrays using a simple API, rather than actually storing raw arrays (even though that's also possible). Deep Lake stores data in use-case-optimized formats, such as JPEG or PNG for images, or MP4 for video, and the user treats the data as if it's an array, because Deep Lake handles all the data processing in between. 
Deep Lake offers more flexibility for storing arrays with dynamic shape (ragged tensors), and it provides several features that are not natively available in Zarr, such as version control, data streaming, and connecting data to ML frameworks.\n\n\u003c/details\u003e\n\n## Community\n\nJoin our [**Slack community**](https://slack.activeloop.ai) to learn more about unstructured dataset management using Deep Lake and to get help from the Activeloop team and other users.\n\nWe'd love your feedback; please complete our 3-minute [**survey**](https://forms.gle/rLi4w33dow6CSMcm9).\n\nAs always, thanks to our amazing contributors!\n\n\u003ca href=\"https://github.com/activeloopai/deeplake/graphs/contributors\"\u003e\n  \u003cimg src=\"https://contrib.rocks/image?repo=activeloopai/hub\" /\u003e\n\u003c/a\u003e\n\nMade with [contributors-img](https://contrib.rocks).\n\nPlease read [CONTRIBUTING.md](CONTRIBUTING.md) to get started with making contributions to Deep Lake.\n\n## README Badge\n\nUsing Deep Lake? Add a README badge to let everyone know:\n\n[![deeplake](https://img.shields.io/badge/powered%20by-Deep%20Lake%20-ff5a1f.svg)](https://github.com/activeloopai/deeplake)\n\n```markdown\n[![deeplake](https://img.shields.io/badge/powered%20by-Deep%20Lake%20-ff5a1f.svg)](https://github.com/activeloopai/deeplake)\n```\n\n## Disclaimers\n\n\u003cdetails\u003e\n  \u003csummary\u003e\u003cb\u003eDataset Licenses\u003c/b\u003e\u003c/summary\u003e\n  \nDeep Lake users may have access to a variety of publicly available datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have a license to use the datasets. It is your responsibility to determine whether you have permission to use the datasets under their license.\n\nIf you're a dataset owner and do not want your dataset to be included in this library, please get in touch through a [GitHub issue](https://github.com/activeloopai/deeplake/issues/new). 
Thank you for your contribution to the ML community!\n\n\u003c/details\u003e\n\n## Citation\n\nIf you use Deep Lake in your research, please cite Activeloop using:\n\n```bibtex\n@inproceedings{deeplake,\n  title = {Deep Lake: a Lakehouse for Deep Learning},\n  author = {Hambardzumyan, Sasun and Tuli, Abhinav and Ghukasyan, Levon and Rahman, Fariz and Topchyan, Hrant and Isayan, David and Harutyunyan, Mikayel and Hakobyan, Tatevik and Stranic, Ivo and Buniatyan, Davit},\n  url = {https://www.cidrdb.org/cidr2023/papers/p69-buniatyan.pdf},\n  booktitle = {Proceedings of CIDR},\n  year = {2023},\n}\n```\n\n## Acknowledgment\n\nThis technology was inspired by our research work at Princeton University. We would like to thank William Silversmith @SeungLab for his awesome [cloud-volume](https://github.com/seung-lab/cloud-volume) tool.\n","funding_links":[],"categories":["Sample Codes / Projects \u003ca name=\"sample\" /\u003e ⛏️📐📁","Python","Tools for Self-Hosting","LLMOps","Observability","Data Management \u0026 Processing","AI","Industry Strength Computer Vision","ai","Data versioning","Tools (GitHub)","C++","Runtime","Agent Integration \u0026 Deployment Tools","Data Pipelines \u0026 Streaming","Vector Database"],"sub_categories":["General 🚧 \u003ca name=\"GeneralCode\" /\u003e","Development","Observability","Popular-LLM","Streaming Data Management","LLMOps vs MLOps","Database","AI Developer Toolkit"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Factiveloopai%2Fdeeplake","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Factiveloopai%2Fdeeplake","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Factiveloopai%2Fdeeplake/lists"}