Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/simonw/llm-gemini
LLM plugin to access Google's Gemini family of models
- Host: GitHub
- URL: https://github.com/simonw/llm-gemini
- Owner: simonw
- License: apache-2.0
- Created: 2023-12-13T20:17:45.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-10-29T18:03:42.000Z (4 months ago)
- Last Synced: 2024-10-30T04:48:49.699Z (4 months ago)
- Language: Python
- Size: 43 KB
- Stars: 116
- Watchers: 6
- Forks: 10
- Open Issues: 7
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# llm-gemini
[PyPI](https://pypi.org/project/llm-gemini/)
[Changelog](https://github.com/simonw/llm-gemini/releases)
[Tests](https://github.com/simonw/llm-gemini/actions?query=workflow%3ATest)
[License](https://github.com/simonw/llm-gemini/blob/main/LICENSE)

API access to Google's Gemini models
## Installation
Install this plugin in the same environment as [LLM](https://llm.datasette.io/).
```bash
llm install llm-gemini
```
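Once installed, you can optionally confirm that the plugin registered its models. This check is not part of the original instructions and assumes a recent LLM version that exposes `llm.get_models()` in its Python API:

```python
import llm

# Rough check (assumes llm.get_models() is available in your LLM version):
# list the model IDs registered by installed plugins that mention "gemini"
gemini_ids = [m.model_id for m in llm.get_models() if "gemini" in m.model_id]
print(gemini_ids)
```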
## Usage

Configure the model by setting a key called "gemini" to your [API key](https://aistudio.google.com/app/apikey):
```bash
llm keys set gemini
```
You can also set the API key by assigning it to the environment variable `LLM_GEMINI_KEY`.

Now run the model using `-m gemini-1.5-pro-latest`, for example:
```bash
llm -m gemini-1.5-pro-latest "A joke about a pelican and a walrus"
```

> A pelican walks into a seafood restaurant with a huge fish hanging out of its beak. The walrus, sitting at the bar, eyes it enviously.
>
> "Hey," the walrus says, "That looks delicious! What kind of fish is that?"
>
> The pelican taps its beak thoughtfully. "I believe," it says, "it's a billfish."

Other models are:
- `gemini-1.5-flash-latest`
- `gemini-1.5-flash-8b-latest` - the least expensive
- `gemini-exp-1114` - recent experimental #1
- `gemini-exp-1121` - recent experimental #2
- `gemini-exp-1206` - recent experimental #3
- `gemini-2.0-flash-exp` - [Gemini 2.0 Flash](https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#gemini-2-0-flash)
- `learnlm-1.5-pro-experimental` - "an experimental task-specific model that has been trained to align with learning science principles" - [more details here](https://ai.google.dev/gemini-api/docs/learnlm).
- `gemini-2.0-flash-thinking-exp-1219` - experimental "thinking" model from December 2024
- `gemini-2.0-flash-thinking-exp-01-21` - experimental "thinking" model from January 2025
- `gemini-2.0-flash` - Gemini 2.0 Flash
- `gemini-2.0-flash-lite-preview-02-05` - Gemini 2.0 Flash-Lite
- `gemini-2.0-pro-exp-02-05` - experimental release of Gemini 2.0 Pro

### Images, audio and video
Gemini models are multi-modal. You can provide images, audio or video files as input like this:
```bash
llm -m gemini-1.5-flash-latest 'extract text' -a image.jpg
```
Or with a URL:
```bash
llm -m gemini-1.5-flash-8b-latest 'describe image' \
-a https://static.simonwillison.net/static/2024/pelicans.jpg
```
Audio works too:

```bash
llm -m gemini-1.5-pro-latest 'transcribe audio' -a audio.mp3
```

And video:
```bash
llm -m gemini-1.5-pro-latest 'describe what happens' -a video.mp4
```
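The same multi-modal prompts can be sent from LLM's Python API. This is a minimal sketch using the documented `llm.get_model()` and `llm.Attachment` interface; the file name is just a placeholder:

```python
import llm

# Load one of the Gemini models registered by this plugin
model = llm.get_model("gemini-1.5-flash-latest")

# Attach a local file (an image here; audio and video work the same way)
response = model.prompt(
    "extract text",
    attachments=[llm.Attachment(path="image.jpg")],
)
print(response.text())
```

`llm.Attachment(url=...)` can be used instead of a path for remote files.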
The Gemini prompting guide includes [extensive advice](https://ai.google.dev/gemini-api/docs/file-prompting-strategies) on multi-modal prompting.

### JSON output
Use `-o json_object 1` to force the output to be JSON:
```bash
llm -m gemini-1.5-flash-latest -o json_object 1 \
'3 largest cities in California, list of {"name": "..."}'
```
Outputs:
```json
{"cities": [{"name": "Los Angeles"}, {"name": "San Diego"}, {"name": "San Jose"}]}
```

### Code execution
Gemini models can [write and execute code](https://ai.google.dev/gemini-api/docs/code-execution) - they can decide to write Python code, execute it in a secure sandbox and use the result as part of their response.
To enable this feature, use `-o code_execution 1`:
```bash
llm -m gemini-1.5-pro-latest -o code_execution 1 \
'use python to calculate (factorial of 13) * 3'
```
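If you are calling the model from Python, LLM generally accepts model options as keyword arguments to `prompt()`, so a sketch like the following should enable the same behaviour; treat the option pass-through as an assumption rather than documented plugin behaviour:

```python
import llm

model = llm.get_model("gemini-1.5-pro-latest")

# Assumption: the plugin's code_execution option maps to the -o code_execution 1 CLI flag
response = model.prompt(
    "use python to calculate (factorial of 13) * 3",
    code_execution=True,
)
print(response.text())
```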
### Google search

Some Gemini models support [Grounding with Google Search](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/ground-gemini#web-ground-gemini), where the model can run a Google search and use the results as part of answering a prompt.
Using this feature may incur additional requirements in terms of how you use the results. Consult [Google's documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/ground-gemini#web-ground-gemini) for more details.
To run a prompt with Google search enabled, use `-o google_search 1`:
```bash
llm -m gemini-1.5-pro-latest -o google_search 1 \
'What happened in Ireland today?'
```

Use `llm logs -c --json` after running a prompt to see the full JSON response, which includes [additional information](https://github.com/simonw/llm-gemini/pull/29#issuecomment-2606201877) about grounded results.
### Chat
To chat interactively with the model, run `llm chat`:
```bash
llm chat -m gemini-1.5-pro-latest
```

## Embeddings
The plugin also adds support for the `text-embedding-004` embedding model.
Run that against a single string like this:
```bash
llm embed -m text-embedding-004 -c 'hello world'
```
This returns a JSON array of 768 numbers.

This command will embed every `README.md` file in child directories of the current directory and store the results in a SQLite database called `embed.db` in a collection called `readmes`:
```bash
llm embed-multi readmes --files . '*/README.md' -d embed.db -m text-embedding-004
```
You can then run similarity searches against that collection like this:
```bash
llm similar readmes -c 'upload csvs to stuff' -d embed.db
```

See the [LLM embeddings documentation](https://llm.datasette.io/en/stable/embeddings/cli.html) for further details.
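The embedding model can also be used from Python through LLM's embedding API; a brief sketch, using the same model ID as the CLI examples above:

```python
import llm

# Load the embedding model registered by this plugin
embedding_model = llm.get_embedding_model("text-embedding-004")

# embed() returns a list of floats - 768 of them, per the note above
vector = embedding_model.embed("hello world")
print(len(vector))
```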
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-gemini
python3 -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:
```bash
llm install -e '.[test]'
```
To run the tests:
```bash
pytest
```

This project uses [pytest-recording](https://github.com/kiwicom/pytest-recording) to record Gemini API responses for the tests.
If you add a new test that calls the API you can capture the API response like this:
```bash
PYTEST_GEMINI_API_KEY="$(llm keys get gemini)" pytest --record-mode once
```
You will need to have stored a valid Gemini API key using this command first:
```bash
llm keys set gemini
# Paste key here
```