https://github.com/ariya/gamal
Research tool leveraging LLM for answers
- Host: GitHub
- URL: https://github.com/ariya/gamal
- Owner: ariya
- License: mit
- Created: 2024-07-01T14:32:20.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2024-11-15T02:10:19.000Z (7 months ago)
- Last Synced: 2025-01-08T06:46:52.357Z (5 months ago)
- Topics: copilot, deepinfra, gemma, groq, llama, llm, mistral, openai, perplexity, telegram-bot
- Language: JavaScript
- Homepage:
- Size: 201 KB
- Stars: 30
- Watchers: 1
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Gamal
Gamal is a simple, zero-dependency tool designed to quickly provide answers to questions. It finds relevant web pages and uses an LLM to summarize the content, delivering concise answers. Gamal is accessible via the terminal (as a CLI tool), through its minimalist web interface, or as a Telegram bot.
[Demo recording on asciinema](https://asciinema.org/a/668554)
Gamal utilizes [SearXNG](https://searxng.org) for web searches and requires an LLM to generate responses based on search results. By default, Gamal integrates with [OpenRouter](https://openrouter.ai) as its LLM service, requiring the configuration of an API key in the `LLM_API_KEY` environment variable. Please continue reading for detailed instructions on configuring Gamal to use either a local LLM ([llama.cpp](https://github.com/ggerganov/llama.cpp), [Jan](https://jan.ai), and [Ollama](https://ollama.com)) or other managed LLM services (offering over half a dozen options, including [OpenAI](https://platform.openai.com), [Fireworks](https://fireworks.ai), and [Groq](https://groq.com)).
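For instance, with the default OpenRouter backend, the only required configuration before launching Gamal is the API key (the value shown is a placeholder, not a working key):

```bash
# Placeholder: substitute your own OpenRouter API key.
export LLM_API_KEY="yourownapikey"
```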
To execute Gamal as a CLI tool, run it with [Node.js](https://nodejs.org) (version >= 18) or [Bun](https://bun.sh):
```bash
./gamal.js
```

For instant answers, pipe the questions directly into Gamal:
```bash
echo "List 5 Indonesia's best travel destinations" | ./gamal.js
```

Gamal also includes a minimalist front-end web interface. To launch it, specify the environment variable `GAMAL_HTTP_PORT`, for example:
```bash
GAMAL_HTTP_PORT=5000 ./gamal.js
```

Then, open a web browser and go to `localhost:5000`.
Gamal is capable of functioning as a [Telegram bot](https://core.telegram.org/bots). Obtain a token (refer to [Telegram documentation](https://core.telegram.org/bots/tutorial#obtain-your-bot-token) for details) and set it as the environment variable `GAMAL_TELEGRAM_TOKEN` before launching Gamal. Note that conversation history in Telegram chats is stored in memory and not persisted to disk.
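As a sketch, assuming the bot token has already been obtained from Telegram, running Gamal as a bot amounts to setting the token and starting the process (the token below is a placeholder):

```bash
# Placeholder token: replace with the one issued by Telegram.
export GAMAL_TELEGRAM_TOKEN="123456:ABC-DEF"
./gamal.js
```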
## Multi-language Support
Gamal can converse in many languages besides English. It always tries to respond in the same language as the question. You can freely switch languages between questions, as shown in the following example:
```
>> Which planet in our solar system is the biggest?
Jupiter is the largest planet in our solar system [1].
[1] https://science.nasa.gov/jupiter/
>> ¿Y el más caliente?
Venus es el planeta más caliente, con hasta 475°C [1].
[1] https://www.redastronomy.com/sistema-solar/el-planeta-venus/
```

Gamal's continuous integration workflows include evaluation tests in English, Spanish, German, French, Italian, and Indonesian:
[English](https://github.com/ariya/gamal/actions/workflows/english.yml) ·
[Spanish](https://github.com/ariya/gamal/actions/workflows/spanish.yml) ·
[French](https://github.com/ariya/gamal/actions/workflows/french.yml) ·
[German](https://github.com/ariya/gamal/actions/workflows/german.yml) ·
[Italian](https://github.com/ariya/gamal/actions/workflows/italian.yml) ·
[Indonesian](https://github.com/ariya/gamal/actions/workflows/indonesian.yml) ·
[Language switch](https://github.com/ariya/gamal/actions/workflows/lang-switch.yml)

## Conversational Interface
With the integration of third-party tools, Gamal can engage in conversations using voice (both input and output) rather than just text.
For automatic speech recognition (ASR), also known as speech-to-text (STT), Gamal leverages the streaming tool from [whisper.cpp](https://github.com/ggerganov/whisper.cpp). Ensure that `whisper-cpp-stream`, or the custom executable specified in the `WHISPER_STREAM` environment variable, is available in your system's path. Whisper requires a GGML model, which can be downloaded from [Hugging Face](https://huggingface.co/ggerganov/whisper.cpp). The [base model](https://huggingface.co/ggerganov/whisper.cpp/blob/main/ggml-base.en-q5_1.bin) (60 MB) is generally a good balance between accuracy and speed for most modern computers. Set the `WHISPER_MODEL` environment variable to the full path of the downloaded model.
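As an illustrative setup (the download command and target directory are assumptions; only the environment variable names come from Gamal itself), the base English model can be fetched and configured like this:

```bash
# Download the quantized base English model for whisper.cpp (~60 MB).
mkdir -p ~/models
curl -L -o ~/models/ggml-base.en-q5_1.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en-q5_1.bin

# Tell Gamal where the model lives.
export WHISPER_MODEL=~/models/ggml-base.en-q5_1.bin

# WHISPER_STREAM is only needed when the executable is not named whisper-cpp-stream:
# export WHISPER_STREAM=/path/to/custom/stream-binary
```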
To enable Gamal to respond with voice instead of just text, install [Piper](https://github.com/rhasspy/piper) for text-to-speech (TTS) conversion. Piper can be installed via Nixpkg (the `piper-tts` package). Piper also requires a [voice model](https://huggingface.co/rhasspy/piper-voices), which can be downloaded from sources like [ryan-medium](https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/ryan/medium). Make sure to download both the ONNX model file (63 MB) and the corresponding config JSON. Before running Gamal, set the `PIPER_MODEL` environment variable to the full path of the voice model.
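A possible Piper setup, assuming the ryan-medium voice (the exact file names and paths on Hugging Face are assumptions and may need adjusting):

```bash
# Download the ONNX voice model (~63 MB) and its companion config JSON.
mkdir -p ~/models
curl -L -o ~/models/en_US-ryan-medium.onnx \
  https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/ryan/medium/en_US-ryan-medium.onnx
curl -L -o ~/models/en_US-ryan-medium.onnx.json \
  https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/ryan/medium/en_US-ryan-medium.onnx.json

# Piper expects the .onnx.json config next to the model file.
export PIPER_MODEL=~/models/en_US-ryan-medium.onnx
```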
The synthesized audio will be played back through the speaker or other audio output using the `play` utility from the [SOX (Sound eXchange project)](https://sourceforge.net/projects/sox/). Ensure that SOX is installed and available in your system's path.
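Before starting a voice session, it may help to verify that the external tools are reachable (the `piper` executable name is an assumption based on the `piper-tts` package):

```bash
# Each command should print a path; no output means the tool is missing from PATH.
command -v whisper-cpp-stream
command -v piper
command -v play
```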
## Using Other LLM Services
Gamal uses OpenRouter by default, but it can also be configured to work with other LLM services by adjusting a few environment variables: the API base URL, the API key, and a suitable chat model.
Compatible LLM services include [Deep Infra](https://deepinfra.com), [Fireworks](https://fireworks.ai), [Gemini](https://ai.google.dev/gemini-api), [Groq](https://groq.com), [Hyperbolic](https://www.hyperbolic.xyz), [Lepton](https://lepton.ai), [Novita](https://novita.ai), [OpenAI](https://platform.openai.com), and [Together](https://www.together.ai).
CI test workflows: [Deep Infra](https://github.com/ariya/gamal/actions/workflows/test-deepinfra.yml) ·
[Fireworks](https://github.com/ariya/gamal/actions/workflows/test-fireworks.yml) ·
[Gemini](https://github.com/ariya/gamal/actions/workflows/test-gemini.yml) ·
[Groq](https://github.com/ariya/gamal/actions/workflows/test-groq.yml) ·
[Hyperbolic](https://github.com/ariya/gamal/actions/workflows/test-hyperbolic.yml) ·
[Lepton](https://github.com/ariya/gamal/actions/workflows/test-lepton.yml) ·
[Novita](https://github.com/ariya/gamal/actions/workflows/test-novita.yml) ·
[OpenAI](https://github.com/ariya/gamal/actions/workflows/test-openai.yml) ·
[Together](https://github.com/ariya/gamal/actions/workflows/test-together.yml)

Refer to the relevant section below for configuration details. The examples use Llama-3.1 8B, though any instruction-following LLM with about 7B parameters or more should also work, such as Mistral 7B, Qwen-2 7B, or Gemma-2 9B.
### Deep Infra
```bash
export LLM_API_BASE_URL=https://api.deepinfra.com/v1/openai
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/Meta-Llama-3.1-8B-Instruct"
```

### Fireworks
```bash
export LLM_API_BASE_URL=https://api.fireworks.ai/inference/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="accounts/fireworks/models/llama-v3p1-8b-instruct"
```

### Google Gemini
```bash
export LLM_API_BASE_URL=https://generativelanguage.googleapis.com/v1beta
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="gemini-1.5-flash-8b"
export LLM_JSON_SCHEMA=1
```

### Groq
```bash
export LLM_API_BASE_URL=https://api.groq.com/openai/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="llama-3.1-8b-instant"
```

### Hyperbolic
```bash
export LLM_API_BASE_URL=https://api.hyperbolic.xyz/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/Meta-Llama-3.1-8B-Instruct"
```

### Lepton
```bash
export LLM_API_BASE_URL=https://llama3-1-8b.lepton.run/api/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="llama3-1-8b"
```

### Novita
```bash
export LLM_API_BASE_URL=https://api.novita.ai/v3/openai
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/llama-3.1-8b-instruct"
```

### OpenAI
```bash
export LLM_API_BASE_URL=https://api.openai.com/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="gpt-4o-mini"
```

### Together
```bash
export LLM_API_BASE_URL=https://api.together.xyz/v1
export LLM_API_KEY="yourownapikey"
export LLM_CHAT_MODEL="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"
```

## Using Local LLM Servers
Gamal is compatible with local LLM inference tools such as [llama.cpp](https://github.com/ggerganov/llama.cpp), [Jan](https://jan.ai), and [Ollama](https://ollama.com). Refer to the relevant section for configuration details.
The example provided uses Llama-3.1 8B. For optimal performance, an instruction-following LLM with 7B parameters or more is recommended. Suitable models include Mistral 7B, Qwen-2 7B, and Gemma-2 9B.
### llama.cpp
First, load a quantized model such as [Llama-3.1 8B](https://huggingface.co/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF). Then, adjust the `LLM_API_BASE_URL` environment variable accordingly:
```bash
/path/to/llama-server -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf
export LLM_API_BASE_URL=http://127.0.0.1:8080/v1
```

### Jan
Refer to [the documentation](https://jan.ai/docs/local-api) and load a model like [Llama-3.1 8B](https://huggingface.co/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF). Then, set the environment variable:
```bash
export LLM_API_BASE_URL=http://127.0.0.1:1337/v1
export LLM_CHAT_MODEL='llama3.1'
```

### Ollama
Load a model and configure the environment variables:
```bash
ollama pull llama3.1
export LLM_API_BASE_URL=http://127.0.0.1:11434/v1
export LLM_CHAT_MODEL='llama3.1'
```

## Evaluating Questions
Gamal includes a built-in evaluation tool. For instance, if a text file named `qa.txt` contains pairs of `User` and `Assistant` messages:
```
User: Which planet is the largest?
Assistant: The largest planet is /Jupiter/.
User: and the smallest?
Assistant: The smallest planet is /Mercury/.
```

then executing the following command will sequentially search for these questions and verify the answers using regular expressions:
```bash
./gamal.js qa.txt
```

Additional examples can be found in the `tests/` subdirectory.
Two environment variables can modify the behavior (see the example after this list):
* `LLM_DEBUG_FAIL_EXIT`: When set, Gamal will exit immediately upon encountering an incorrect answer, and subsequent questions in the file will not be processed.
* `LLM_DEBUG_PIPELINE`: When set, if the expected regular expression does not match the answer, the internal LLM pipeline will be printed to stdout.
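For example, to stop at the first wrong answer and dump the internal pipeline whenever an expected pattern does not match:

```bash
# Both variables are optional and can be combined.
LLM_DEBUG_FAIL_EXIT=1 LLM_DEBUG_PIPELINE=1 ./gamal.js qa.txt
```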
## Improving Search Quality
By default, Gamal uses a [public SearXNG instance](https://searx.space/). However, public instances typically apply aggressive rate limiting. To avoid this, it is recommended to run a local SearXNG instance, for example as a Docker container, following the steps in [its documentation](https://docs.searxng.org/admin/installation-docker.html). Ensure that the local instance is configured to support the JSON format by including the following block in the `settings.yml` file:
```yaml
search:
  formats:
    - html
    - json
```

Before starting Gamal, the `SEARXNG_URL` environment variable should be set to the URL of the local SearXNG instance (e.g., `localhost:8080` if running locally).
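A minimal sketch of this setup (the Docker image name and port follow the SearXNG documentation; a real deployment would also mount a volume so `settings.yml` can be edited):

```bash
# Start a local SearXNG instance on port 8080, then point Gamal at it.
docker run -d --name searxng -p 8080:8080 searxng/searxng
export SEARXNG_URL=localhost:8080
./gamal.js
```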
Additionally, connecting Gamal to a private SearXNG instance allows integration with [custom data sources](https://docs.searxng.org/dev/engines/offline/search-indexer-engines.html), enabling enhanced search capabilities.