Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ashvardanian/swiftsemanticsearch
Real-time on-device text-to-image and image-to-image Semantic Search with video stream camera capture using USearch & UForm AI Swift SDKs for Apple devices 🍏
- Host: GitHub
- URL: https://github.com/ashvardanian/swiftsemanticsearch
- Owner: ashvardanian
- Created: 2023-08-28T13:54:18.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-12-20T13:32:05.000Z (about 1 month ago)
- Last Synced: 2025-01-14T02:07:00.988Z (8 days ago)
- Topics: coreml, coreml-models, image-search, ios, mobile-app, ondeviceai, onnx, rag, semantic-search, swift, swiftui, vector-search, vector-search-engine, video-search
- Language: Swift
- Homepage: https://github.com/unum-cloud/uform
- Size: 623 KB
- Stars: 107
- Watchers: 5
- Forks: 7
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
# Swift Semantic Search 🍏
![Preview](https://github.com/ashvardanian/ashvardanian/blob/master/repositories/SwiftSemanticSearch.jpg?raw=true#center)
This Swift demo app shows you how to build real-time native AI-powered apps for Apple devices using Unum's Swift libraries and quantized models.
Under the hood, it uses [UForm](https://github.com/unum-cloud/uform) to understand and "embed" multimodal data, like multilingual texts and images, processing them on the fly from a camera feed.
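Before any search happens, every caption and frame is reduced to a fixed-size vector. The sketch below shows roughly what that looks like with UForm's Swift SDK; the encoder types, `encode` calls, and model identifier are assumptions based on UForm's documentation, not code from this repo:

```swift
import UIKit
import UForm

// Sketch only: encoder APIs and the model identifier are assumptions
// based on UForm's docs; see the UForm Swift docs linked below.
func embedQueryAndFrame() async throws {
    // Embed a natural-language query.
    let textEncoder = try await TextEncoder(
        modelName: "unum-cloud/uform3-image-text-english-small"
    )
    let textVector: [Float32] = try textEncoder.encode("a dog on the beach").asFloats()

    // Embed a camera frame (any CGImage works).
    let imageEncoder = try await ImageEncoder(
        modelName: "unum-cloud/uform3-image-text-english-small"
    )
    guard let frame = UIImage(named: "example")?.cgImage else { return }
    let imageVector: [Float32] = try imageEncoder.encode(frame).asFloats()

    // Both vectors live in the same multimodal space, so cosine similarity
    // between them measures how well the text describes the image.
    _ = (textVector, imageVector)
}
```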
Once the vector embeddings are computed, it uses [USearch](https://github.com/unum-cloud/usearch) to provide a real-time search over the semantic space.
The same engine also enables geo-spatial search over the images' coordinates and has been shown to scale to 100M+ entries even on an 🍏 iPhone.
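Here is a minimal sketch of the indexing and querying side, approximating the USearch Swift bindings (argument labels and enum spellings may differ across versions; the 256-dimensional vectors are an assumption matching the small UForm model):

```swift
import USearch

// Sketch only: API shape approximated from the USearch Swift README.
let index = USearchIndex.make(
    metric: .cos,        // cosine distance over normalized embeddings
    dimensions: 256,     // assumed output size of uform3-...-small
    connectivity: 16,
    quantization: .F32
)
index.reserve(1_000)

// Stand-ins for real UForm embeddings:
let imageVector = [Float32](repeating: 0.1, count: 256)
let textVector = [Float32](repeating: 0.2, count: 256)

// Key each image, e.g. by its row number in `images.names.txt`.
index.add(key: 42, vector: imageVector)

// Nearest-neighbor query: keys and distances come back sorted.
let (keys, distances) = index.search(vector: textVector, count: 10)
```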
The demo app supports both text-to-image and image-to-image search, and uses the `vmanot/Media` library to fetch the camera feed, embedding and searching frames on the fly.
To test the demo:

```bash
# Clone the repo
git clone https://github.com/ashvardanian/SwiftSemanticSearch.git

# Change directory & decompress the images dataset.zip, which contains:
# - `images.names.txt` with newline-separated image names
# - `images.uform3-image-text-english-small.fbin` - precomputed embeddings
# - `images.uform3-image-text-english-small.usearch` - precomputed index
# - `images` - directory with images
cd SwiftSemanticSearch
unzip dataset.zip
```

After that, fire up the Xcode project and run the app on your fruity device!
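Inside the app, those unpacked artifacts can be loaded directly. Below is a hypothetical loader assuming the common `.fbin` layout (a 4-byte vector count, a 4-byte dimensionality, then the raw `Float32` matrix) and a `load(path:)` method on the USearch index; verify both against the preprocessing notebook and the USearch Swift docs linked below:

```swift
import Foundation
import USearch

// Hypothetical .fbin reader: 4-byte little-endian count, 4-byte dims,
// then count * dims Float32 values. Verify the layout in images.ipynb.
func loadFbin(_ url: URL) throws -> (count: Int, dims: Int, vectors: [Float32]) {
    let data = try Data(contentsOf: url)
    let count = data.withUnsafeBytes { Int($0.load(fromByteOffset: 0, as: Int32.self)) }
    let dims = data.withUnsafeBytes { Int($0.load(fromByteOffset: 4, as: Int32.self)) }
    let vectors = data.dropFirst(8).withUnsafeBytes { Array($0.bindMemory(to: Float32.self)) }
    return (count, dims, vectors)
}

// Map result keys back to file names via the newline-separated list.
let names = try String(contentsOfFile: "images.names.txt", encoding: .utf8)
    .split(separator: "\n")

// Restoring the prebuilt index avoids re-inserting every vector on launch;
// `load(path:)` is assumed from USearch's save/load pair.
let index = USearchIndex.make(metric: .cos, dimensions: 256, connectivity: 16, quantization: .F32)
index.load(path: "images.uform3-image-text-english-small.usearch")
```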
---
Links:
- [Preprocessing datasets](https://github.com/ashvardanian/SwiftSemanticSearch/blob/main/images.ipynb)
- [USearch Swift docs](https://unum-cloud.github.io/usearch/swift)
- [UForm Swift docs](https://unum-cloud.github.io/uform/swift)