https://github.com/living-with-machines/image-search
Materials for a workshop on image search for heritage data
- Host: GitHub
- URL: https://github.com/living-with-machines/image-search
- Owner: Living-with-machines
- License: cc-by-4.0
- Created: 2022-04-16T10:19:26.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2022-04-25T12:39:18.000Z (over 3 years ago)
- Last Synced: 2025-04-01T22:05:52.733Z (7 months ago)
- Topics: clip, computer-vision, glam, huggingface, image-search, sentence-transformers, workshop
- Language: Jupyter Notebook
- Homepage:
- Size: 33.1 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Image search with 🤗 datasets
[DOI: 10.5281/zenodo.6473465](https://doi.org/10.5281/zenodo.6473465)
Materials for a workshop on image search with a focus on heritage data. The workshop is based on a blog post [Image search with 🤗 datasets](https://huggingface.co/blog/image-search-datasets) but goes into a *little* bit more detail.
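The workflow the blog post and notebooks walk through has roughly the shape sketched below: embed an image collection with a CLIP model loaded via `sentence-transformers`, index the embeddings with the FAISS support built into 🤗 `datasets`, and query with free text. This is only an illustrative sketch, not the workshop's exact code; the dataset name and column names are placeholders, and it assumes `faiss-cpu` is installed alongside `datasets` and `sentence-transformers`.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# CLIP checkpoint available through sentence-transformers.
model = SentenceTransformer("clip-ViT-B-32")

# Placeholder dataset with PIL images in an "image" column.
dataset = load_dataset("some-user/some-image-dataset", split="train")

# Encode every image into the shared text/image embedding space.
dataset = dataset.map(
    lambda batch: {"embedding": model.encode(batch["image"])},
    batched=True,
    batch_size=32,
)

# Build a FAISS index over the embeddings and search with a text query.
dataset.add_faiss_index(column="embedding")
query_embedding = model.encode("a steam locomotive in a railway station")
scores, retrieved = dataset.get_nearest_examples("embedding", query_embedding, k=5)
```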
## Contents
- The [Slides](image_search.pdf) introduce [🤗 `datasets`](https://huggingface.co/docs/datasets/index), [`sentence-transformers`](https://www.sbert.net/index.html), and [CLIP](https://openai.com/blog/clip/), give a broader conceptual overview of image search and embeddings, and conclude with a discussion of ethical considerations around deployment.
- [Notebook 1](01_sentence-transformers-intro.ipynb) [Open in Colab](https://colab.research.google.com/github/Living-with-machines/image-search/blob/main/01_sentence-transformers-intro.ipynb) gives a rapid overview of how `sentence-transformers` can be used to 'encode' text and images for tasks like image search (a minimal encoding sketch follows this list).
- [Notebook 2](02_image_search_demo.ipynb) [Open in Colab](https://colab.research.google.com/github/Living-with-machines/image-search/blob/main/02_image_search_demo.ipynb) lets participants explore the outputs of a CLIP model. It is intended to help people *begin* interrogating the strengths, weaknesses, and issues of using CLIP with heritage material.
- [Notebook 3](03_hf_blog_image_search.ipynb) [Open in Colab](https://colab.research.google.com/github/Living-with-machines/image-search/blob/main/03_hf_blog_image_search.ipynb) is the original notebook that accompanied the blog post; it gives an overview of the steps involved from start to finish.
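To give a flavour of the 'encoding' idea Notebook 1 introduces, here is a minimal sketch (the image path and query text are placeholders): a CLIP checkpoint loaded through `sentence-transformers` maps text and images into the same embedding space, so a text query can be scored directly against an image.

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

# Encode a text query and an image into the shared embedding space.
text_embedding = model.encode("a photograph of a factory chimney")
image_embedding = model.encode(Image.open("example.jpg"))  # placeholder path

# Cosine similarity measures how well the image matches the query;
# ranking a collection of images by this score is the core of image search.
score = util.cos_sim(text_embedding, image_embedding)
print(float(score))
```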
This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].
[![CC BY 4.0][cc-by-image]][cc-by]
[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
### Acknowledgment
> This work was supported by Living with Machines. This project, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library and the Universities of Cambridge, East Anglia, Exeter, and Queen Mary University of London.