Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/autodistill/autodistill-remote-clip
RemoteCLIP module for use with Autodistill.
- Host: GitHub
- URL: https://github.com/autodistill/autodistill-remote-clip
- Owner: autodistill
- License: apache-2.0
- Created: 2023-11-17T09:16:05.000Z (12 months ago)
- Default Branch: main
- Last Pushed: 2024-06-11T07:36:49.000Z (5 months ago)
- Last Synced: 2024-06-11T08:57:55.932Z (5 months ago)
- Language: Python
- Size: 16.6 KB
- Stars: 2
- Watchers: 4
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Autodistill RemoteCLIP Module
This repository contains the code supporting the RemoteCLIP base model for use with [Autodistill](https://github.com/autodistill/autodistill).
[RemoteCLIP](https://github.com/ChenDelong1999/RemoteCLIP) is a vision-language CLIP model trained on remote sensing data. According to the RemoteCLIP README:
> RemoteCLIP outperforms previous SoTA by 9.14% mean recall on the RSITMD dataset and by 8.92% on the RSICD dataset. For zero-shot classification, our RemoteCLIP outperforms the CLIP baseline by up to 6.39% average accuracy on 12 downstream datasets.
Read the full [Autodistill documentation](https://autodistill.github.io/autodistill/).
Read the [RemoteCLIP Autodistill documentation](https://autodistill.github.io/autodistill/base_models/remoteclip/).
## Installation
To use RemoteCLIP with Autodistill, you need to install the following dependency:
```bash
pip3 install autodistill-remote-clip
```

## Quickstart
```python
from autodistill_remote_clip import RemoteCLIP
from autodistill.detection import CaptionOntology

# define an ontology to map class names to our RemoteCLIP prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = RemoteCLIP(
    ontology=CaptionOntology(
        {
            "airport runway": "runway",
            "countryside": "countryside",
        }
    )
)

predictions = base_model.predict("runway.jpg")
print(predictions)
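
# beyond single-image prediction, Autodistill base models expose label()
# for auto-labeling a whole folder of images; the folder path and extension
# below are hypothetical examples, not taken from the original README
dataset = base_model.label(
    input_folder="./images",
    extension=".jpg",
)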
```

## License
This project is covered under an [Apache 2.0 license](https://github.com/ChenDelong1999/RemoteCLIP/blob/main/LICENSE).
## 🏆 Contributing
We love your input! Please see the core Autodistill [contributing guide](https://github.com/autodistill/autodistill/blob/main/CONTRIBUTING.md) to get started. Thank you 🙏 to all our contributors!