Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/not-lain/loadimg
a python package for loading and converting images
- Host: GitHub
- URL: https://github.com/not-lain/loadimg
- Owner: not-lain
- License: apache-2.0
- Created: 2024-03-26T00:33:32.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2025-01-21T19:15:18.000Z (15 days ago)
- Last Synced: 2025-01-22T14:05:38.837Z (15 days ago)
- Topics: base64, image, pillow, python, requests
- Language: Python
- Homepage:
- Size: 67.4 KB
- Stars: 25
- Watchers: 2
- Forks: 5
- Open Issues: 1
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
README
# loadimg
[![Downloads](https://static.pepy.tech/badge/loadimg)](https://pepy.tech/project/loadimg)
A python package for loading and converting images
## How to use
Installation
```
pip install loadimg
```
Usage
```python
from loadimg import load_img
load_img(any_img_type_here, output_type="pil", input_type="auto")
```
Supported types
- Currently supported input types: numpy, pillow, str (both path and URL), base64, **auto**
- Currently supported output types: numpy, pillow, str, base64
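When a base64 image is consumed by a chat-completion API, it is typically passed as a data URL. A minimal stdlib sketch of that wrapping, assuming loadimg's base64 output follows this shape (the `to_data_url` helper is illustrative, not part of the package):

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Wrap raw image bytes in a base64 data URL, the form that
    OpenAI-style chat APIs accept in image_url fields."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Any raw image bytes become an inline URL, no upload required.
inline_url = to_data_url(b"\x89PNG...")  # placeholder bytes
```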
The base64 output is compatible with most APIs, including Hugging Face, OpenAI, and FAL:
```python
from loadimg import load_img
from huggingface_hub import InferenceClient

# or load a local image
my_b64_img = load_img(
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
    output_type="base64",
)

client = InferenceClient(api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_b64_img  # base64 allows using images without uploading them to the web
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
```

## Contributions
- [x] thanks to [@KingNish24](https://github.com/KingNish24) for improving base64 support and adding the `input_type` parameter
- [x] thanks to [@Saptarshi-Bandopadhyay](https://github.com/Saptarshi-Bandopadhyay) for supporting base64 and improving the docstrings
- [x] thanks to [@Abbhiishek](https://github.com/Abbhiishek) for improving image naming