{"id":24556103,"url":"https://github.com/deepfates/memery","last_synced_at":"2025-10-07T06:24:12.640Z","repository":{"id":40420872,"uuid":"347659727","full_name":"deepfates/memery","owner":"deepfates","description":"Search over large image datasets with natural language and computer vision!","archived":false,"fork":false,"pushed_at":"2024-10-15T23:12:04.000Z","size":313559,"stargazers_count":554,"open_issues_count":15,"forks_count":29,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-04-11T23:57:59.962Z","etag":null,"topics":["computer-vision","image-search","local-search","natural-language","python"],"latest_commit_sha":null,"homepage":"https://deepfates.com/memery","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/deepfates.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-03-14T14:29:44.000Z","updated_at":"2025-04-06T13:57:33.000Z","dependencies_parsed_at":"2024-10-17T04:43:23.888Z","dependency_job_id":null,"html_url":"https://github.com/deepfates/memery","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deepfates%2Fmemery","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deepfates%2Fmemery/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deepfates%2Fmemery/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deepfates%2Fmemery/manifests","owner_url":"https:/
/repos.ecosyste.ms/api/v1/hosts/GitHub/owners/deepfates","download_url":"https://codeload.github.com/deepfates/memery/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254394720,"owners_count":22063984,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","image-search","local-search","natural-language","python"],"created_at":"2025-01-23T04:38:31.665Z","updated_at":"2025-10-07T06:24:07.617Z","avatar_url":"https://github.com/deepfates.png","language":"Jupyter Notebook","readme":"# Memery\n\u003e Use human language to search your image folders!\n\n## What is memery?\n\n![meme about having too many memes](images/E2GoeMyWEAAkcLz.jpeg)\n\nThe problem: you have a huge folder of images. Memes, screenshots, datasets, product photos, inspo albums, anything. You know that somewhere in that folder is the exact image you want, but you can't remember the filename or what day you saved it. There's nothing you can do but scroll through the folder, skimming hundreds of thumbnails, hoping you don't accidentally miss it, and that you'll recognize it when you do see it. \n\nHumans do this amazingly well. But even with computers, local image search is still a manual effort - you're still sorting through folders of images, like an archivist of old.\n\n**Now there's Memery**.\n\nThe `memery` package provides natural language search over local images. 
You can use it to search for things like \"a line drawing of a woman facing to the left\" and get _reasonably good results!_ \n\nYou can do this over thousands of images (it's not optimized for performance yet, but search times scale well under O(n)). \n\nYou can view the images in a browser GUI, or pipe them through command line tools. \n\nYou can use `memery` or its modules in Jupyter notebooks, including GUI functions! \n\nUnder the hood, `memery` makes use of **CLIP**, the [Contrastive Language-Image Pretraining transformer](https://github.com/openai/CLIP), released by OpenAI in 2021. CLIP trains a vision transformer and a language transformer to find the same latent space for images and their captions. This makes it perfect for the purpose of natural language image search. CLIP is a giant achievement, and `memery` stands on its shoulders.\n\nOutline:\n- Usage\n  - Install locally\n  - Use GUI\n  - Use CLI\n  - Use the library\n- Development\n- Contributing\n  - Who works on this project\n\n## Quickstart (Windows)\n- Run the `windows-install.bat` file\n- Run the `windows-run.bat` file\n\n## Installation\n\nWith Python 3.9 or greater:\n\nFrom GitHub (recommended)\n```\npip install git+https://github.com/deepfates/memery.git\n```\nor\n```\ngit clone https://github.com/deepfates/memery.git\ncd memery\npoetry install\n```\nFrom PyPI\n```\npip install memery\npip install git+https://github.com/openai/CLIP.git\n```\n\nCurrently memery defaults to GPU installation. This will probably be switched in a future version.\n\nFor now, if you want to run CPU-only, run the following command after installing memery:\n\n`pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html`\n\nSomeday memery will be packaged in an easy-to-use format, but since this is a Python project it is hard to predict when that day will be.\n\nIf you want to help develop memery, you'll need to clone the repo. 
See below.\n\n## Usage\n\nWhat's your use case? \n\n**I have images and want to search them with a GUI app**\n   \n   ↳  Use the Browser GUI\n   \n**I have a program/workflow and want to use image search as one part of it**\n   \n   ↳ Use as a Python module\n   \n   ↳ Use from command line or shell scripts\n   \n**I want to improve on and/or contribute to memery development**\n \n   ↳ Start by cloning the repo \n\n### Use GUI\n\nCurrently memery has a rough browser-based GUI. To launch it, run the following in a command line: \n\n```memery serve```\n\nor set up a desktop shortcut that points to the above command.\n\nOptionally, you can pass a directory to open on startup, like so:\n\n```memery serve /home/user/Pictures/memes```\n\nRelative directories will also work:\n\n```\ncd ~/Pictures\nmemery serve memes\n```\n\nIf you don't pass a directory, the default is `./images`, memery's example meme directory.\n\nMemery will open in a browser window. The interface is pretty straightforward, but it has some quirks.\n\n![screenshot of memery GUI displaying wholesome memes](images/streamlit-screenshot.png)\n\nThe sidebar on the left controls the location and query for the search. The \"Directory\" box requires a full directory path; unfortunately, Streamlit does not yet have a folder-picker component. The path is relative to your current working directory when you run `memery serve`.\n\nThe search will run once you enter a text or image query. If you enter both text and image queries, memery will search for the combination.\n\nBeneath these widgets is the output area for temporary messages displayed with each search. Mostly this can be ignored.\n\nThe right-hand panel displays the images and associated options. Major errors will appear here as giant stack traces; sometimes, changing variables in the other widgets will fix these errors live. 
If you get a large error here, it's helpful to take a screenshot and share it with us in GitHub Issues.\n\n### Use CLI\n\nThe memery command line exposes the core functionality of memery.\n\nUse the `recall` command to search for images, passing the folder path and optionally the `-n` flag to control how many images are returned (default 10). Use the `-t` flag to pass a text query, the `-i` flag to pass an image query, or both.\n\n```\nmemery recall PATH/TO/IMAGE/FOLDER -t 'text_query' -i 'PATH/TO/IMAGE.jpg' -n 20\n```\n\nYou can encode and index all the images with the `build` command, optionally specifying the number of workers to build the dataset with (default 0):\n\n```\nmemery build PATH/TO/IMAGE/FOLDER --workers 4\n```\n\nClear out the encodings and index using the `purge` command:\n\n```\nmemery purge PATH/TO/IMAGE/FOLDER\n```\n\n### Use as a library\n\nThe core functionality of memery is wrapped in the `Memery` class:\n\n```\nfrom memery.core import Memery\nmemery = Memery()\n```\n\nThe function currently called `query_flow` accepts a folder name and a query and returns a ranked list of image files. You can query with text, with a filepath to an image, or with both. \n\n\n```\nranked = memery.query_flow('./images', 'dad joke')\n\nprint(ranked[:5])\n```\n```\nConverting query\nSearching 82 images\nDone in 4.014755964279175 seconds\n['images/memes/Wholesome-Meme-68.jpg', 'images/memes/Wholesome-Meme-74.jpg', 'images/memes/Wholesome-Meme-88.jpg', 'images/memes/Wholesome-Meme-78.jpg', 'images/memes/Wholesome-Meme-23.jpg']\n```\n\nHere's the first result from that list:\n\n![](images/memes/Wholesome-Meme-68.jpg)\n\n\nSo that's how to use memery. 
Let's look at how you can help make it better.\n\n## Development\n\n### Pull the repo\n\nClone this repository from GitHub:\n\n`git clone https://github.com/deepfates/memery.git`\n\n\n### Install dependencies and memery\n\nEnter the `memery` folder and install requirements:\n\n```\ncd memery\npoetry install\n```\n\nAnd finally install your local, editable copy of memery with \n\n`pip install -e .`\n\n## Contributing\n\nMemery is open source and you can contribute. See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines on how you can help.\n\n### Who works on this project\n\nMemery was first written by Max Anton Brewer aka @deepfates in the summer of 2021. Some commits are listed from @robotface-io, but that was just me using the wrong account when I first started. \n\nMany UI and back-end improvements were added by @wkrettek in 2022! 🙌🎉🌟 \n\nI wrote this to solve my own needs and learn notebook-based development. I hope it helps other people too. If you can help me make it better, please do. I welcome any contribution, guidance or criticism.\n\n**The ideal way to get support is to open an issue on GitHub**. However, the *fastest* way to get a response from me is probably to [direct message me on Twitter](https://twitter.com/deepfates).\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeepfates%2Fmemery","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeepfates%2Fmemery","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeepfates%2Fmemery/lists"}
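The README above explains that memery works by putting images and text queries into CLIP's shared latent space and returning a ranked list. As an illustrative sketch only (this is not memery's actual code, and the 3-dimensional toy vectors stand in for real 512-dimensional CLIP embeddings), the ranking step reduces to cosine similarity in that space:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_images(query_vec, image_vecs):
    # Sort filenames by descending similarity to the query embedding,
    # which is conceptually what `recall` / `query_flow` do after CLIP
    # has encoded the query and the image folder.
    return sorted(
        image_vecs,
        key=lambda name: cosine(query_vec, image_vecs[name]),
        reverse=True,
    )

# Hypothetical stand-ins for CLIP embeddings of three indexed images.
images = {
    "dog.jpg": [0.9, 0.1, 0.0],
    "cat.jpg": [0.1, 0.9, 0.0],
    "car.jpg": [0.0, 0.1, 0.9],
}
# Pretend embedding of the text query "a dog".
query = [1.0, 0.0, 0.1]

print(rank_images(query, images))  # "dog.jpg" ranks first
```

Real CLIP embeddings are produced by the model's image and text encoders; memery additionally caches them on disk (the `build` step) so that only the query needs encoding at search time.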