[![Follow on X](https://img.shields.io/twitter/follow/LeetTools?logo=X&color=%20%23f5f5f5)](https://twitter.com/intent/follow?screen_name=LeetTools)
[![GitHub license](https://img.shields.io/badge/License-Apache_2.0-blue.svg?labelColor=%20%23155EEF&color=%20%23528bff)](https://github.com/leettools-dev/leettools)

- [AI Search Assistant with Local Knowledge Bases](#ai-search-assistant-with-local-knowledge-bases)
- [Quick Start](#quick-start)
- [Use Different LLM and Search Providers](#use-different-llm-and-search-providers)
- [Use local Ollama service for inference and embedding](#use-local-ollama-service-for-inference-and-embedding)
- [Use DeepSeek API with different embedding services](#use-deepseek-api-with-different-embedding-services)
- [Use Google / FireCrawl as the default web retriever](#use-google--firecrawl-as-the-default-web-retriever)
- [Usage Examples](#usage-examples)
- [Build a local knowledge base using PDFs from the web](#build-a-local-knowledge-base-using-pdfs-from-the-web)
- [Generate analytical research reports like OpenAI/Google's Deep Research](#generate-analytical-research-reports-like-openaigoogles-deep-research)
- [Generate news list from web search results](#generate-news-list-from-web-search-results)
- [Main Components](#main-components)
- [Community](#community)

# AI Search Assistant with Local Knowledge Bases

LeetTools is an AI search assistant that can perform highly customizable search workflows
and generate results in customized formats based on both web and local knowledge bases. With an
automated document pipeline that handles data ingestion, indexing, and storage, we can
focus on implementing the workflow without worrying about the underlying infrastructure.

LeetTools can run with minimal resource requirements on the command line with a
DuckDB backend and configurable LLM settings. It can also use other dedicated
databases for different functions, e.g., we can use MongoDB for document storage,
Milvus for vector search, and Neo4j for graph search. We can configure different
functions in the same workflow to use different LLM providers and models.

Here is an illustration of the LeetTools **digest** flow, which searches the web
(or a local KB) and generates a digest article from the search results:

![LeetTools Digest Flow](docs/assets/process-digest.drawio.svg)

And here is an example output article generated by the **digest** flow for the query
[How does Ollama work?](docs/examples/ollama.md).

Currently LeetTools provides the following workflows:

* answer: Answer the query directly with source references (similar to Perplexity). [📖](https://leettools-dev.github.io/Flow/answer)
* digest: Generate a multi-section digest article from search results (similar to Google Deep Research). [📖](https://leettools-dev.github.io/Flow/digest)
* search: Search for top segments that match the query. [📖](https://leettools-dev.github.io/Flow/search)
* news: Generate a list of news items for the specified topic. [📖](https://leettools-dev.github.io/Flow/news)
* extract: Extract and store structured data for a given schema. [📖](https://leettools-dev.github.io/Flow/extract)
* opinions: Generate sentiment analysis and facts from the search results. [📖](https://leettools-dev.github.io/Flow/opinions)
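
All workflows share the same command-line shape, so switching between them is mostly a
matter of changing the `-t` option. A minimal sketch using the flags documented in the
Quick Start below (the knowledge base name here is hypothetical):

```bash
# answer a question and keep the scraped pages in the "mykb" knowledge base
% leet flow -t answer -q "How does GraphRAG work?" -k mykb -l info

# reuse the same KB to find the top matching segments
% leet flow -t search -q "GraphRAG indexing" -k mykb -l info
```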

# Quick Start

**Before you start**

- .env file: We can use any OpenAI-compatible LLM endpoint, such as a local Ollama service
or a public provider such as Gemini or DeepSeek. We can switch services easily by
[defining environment variables or switching .env files](#use-different-llm-and-search-providers).

- LeetHome: By default the data is saved under `${HOME}/leettools`; you can set the
`LEET_HOME` environment variable to change the location:

```bash
% export LEET_HOME=
% mkdir -p ${LEET_HOME}
```

**🚀 New: Run LeetTools Web UI with Docker 🚀**

LeetTools now provides a Docker container that includes the web UI. You can start the
container by running the following command:

```bash
docker/start.sh
```

This will start the LeetTools service and the web UI. You can access the web UI at
[http://localhost:3000](http://localhost:3000). The web UI app is currently under development
and not yet open source; we plan to open source it in the near future.
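
Before opening a browser, you can check that the UI is serving (assuming `curl` is
installed on the host):

```bash
# expect an HTTP response once the container has finished starting
curl -I http://localhost:3000
```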

**Run with pip**

If you are using an OpenAI-compatible LLM endpoint, you can install and run LeetTools
with pip as follows (using Conda/venv is recommended):

```bash
% conda create -y -n leettools python=3.11
% conda activate leettools
% pip install leettools
% export EDS_LLM_API_KEY=
% leet flow -t answer -q "How does GraphRAG work?" -k graphrag -l info
```

The above `flow -t answer` command will run the `answer` flow with the query "How does
GraphRAG work?" and save the scraped web pages to the knowledge base `graphrag`. The
`-l info` option will show the essential log messages.

The default API endpoint is set to the OpenAI API endpoint, which you can modify by
changing the `EDS_DEFAULT_LLM_BASE_URL` environment variable:

```bash
% export EDS_DEFAULT_LLM_BASE_URL=https://api.openai.com/v1
```
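
For example, to switch to one of the public providers mentioned above, only the base URL,
API key, and model name need to change. A hypothetical sketch for Gemini's
OpenAI-compatible endpoint (verify the endpoint URL and model name against Google's
current documentation before use):

```bash
# hypothetical values for Gemini's OpenAI-compatible endpoint
% export EDS_DEFAULT_LLM_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai/
% export EDS_DEFAULT_INFERENCE_MODEL=gemini-2.0-flash
% export EDS_LLM_API_KEY=
% leet flow -t answer -q "How does GraphRAG work?" -k graphrag -l info
```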

**Run with source code**

```bash
% git clone https://github.com/leettools-dev/leettools.git
% cd leettools

% conda create -y -n leettools python=3.11
% conda activate leettools
% pip install -r requirements.txt
% pip install -e .
# add the scripts directory to the PATH
% export PATH=`pwd`/scripts:${PATH}
% export EDS_LLM_API_KEY=

% leet flow -t answer -q "How does GraphRAG work?" -k graphrag -l info
```

# Use Different LLM and Search Providers

We can run LeetTools with different env files to use different LLM providers and other
related settings.

## Use local Ollama service for inference and embedding

```bash
# you may need to pull the models first
% ollama pull llama3.2
% ollama pull nomic-embed-text
% ollama serve

% cat > .env.ollama <<EOF
EDS_DEFAULT_LLM_BASE_URL=http://localhost:11434/v1
# Ollama does not validate the API key; any placeholder value works
EDS_LLM_API_KEY=dummy-key
EDS_DEFAULT_INFERENCE_MODEL=llama3.2
EDS_DEFAULT_DENSE_EMBEDDER=dense_embedder_openai
EDS_DEFAULT_EMBEDDING_BASE_URL=http://localhost:11434/v1
EDS_DEFAULT_EMBEDDING_MODEL=nomic-embed-text
EDS_EMBEDDING_MODEL_DIMENSION=768
EOF

# Then run the command with the -e option to specify the .env file to use
% leet flow -e .env.ollama -t answer -q "How does GraphRAG work?" -k graphrag -l info
```

## Use DeepSeek API with different embedding services

Since the DeepSeek API does not provide an embedding service, we can use a local
in-memory embedder for the knowledge base:

```bash
% cat > .env.deepseek <<EOF
EDS_DEFAULT_LLM_BASE_URL=https://api.deepseek.com/v1
EDS_LLM_API_KEY=
EDS_DEFAULT_INFERENCE_MODEL=deepseek-chat
EDS_DEFAULT_DENSE_EMBEDDER=dense_embedder_local_mem
EOF

# Then run the command with the -e option to specify the .env file to use
% leet flow -e .env.deepseek -t answer -q "How does GraphRAG work?" -k graphrag -l info
```

If you want to use another OpenAI-compatible API provider for embedding, say a local
Ollama embedder, you can set the embedding endpoint and model separately as follows:

```bash
% cat > .env.deepseek <<EOF
EDS_DEFAULT_LLM_BASE_URL=https://api.deepseek.com/v1
EDS_LLM_API_KEY=
EDS_DEFAULT_INFERENCE_MODEL=deepseek-chat

# this specifies to use an OpenAI compatible embedding endpoint
EDS_DEFAULT_DENSE_EMBEDDER=dense_embedder_openai

# the following specifies the embedding endpoint URL and model to use
EDS_DEFAULT_EMBEDDING_BASE_URL=http://localhost:11434/v1
EDS_DEFAULT_EMBEDDING_MODEL=nomic-embed-text
EDS_EMBEDDING_MODEL_DIMENSION=768
EOF
```
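
After updating `.env.deepseek`, re-running the earlier command with the `-e` option
picks up the new embedder settings, so documents added afterwards should be embedded
through the Ollama endpoint:

```bash
% leet flow -e .env.deepseek -t answer -q "How does GraphRAG work?" -k graphrag -l info
```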

## Use Google / FireCrawl as the default web retriever

The default web retriever is `google`; it can be configured with the following environment
variables:

```bash
export EDS_WEB_RETRIEVER=google
export EDS_SEARCH_API_URL=https://www.googleapis.com/customsearch/v1
export EDS_GOOGLE_CX_KEY=
export EDS_GOOGLE_API_KEY=
```

We can also use FireCrawl as the default web retriever instead of Google search by
setting the following environment variables:

```bash
export EDS_WEB_RETRIEVER=firecrawl
export EDS_FIRECRAWL_API_URL=https://api.firecrawl.dev
export EDS_FIRECRAWL_API_KEY=your_firecrawl_api_key
```
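
The retriever settings are read from the environment at run time, so once the variables
above are exported, any flow command uses the configured retriever without extra flags
(the query and KB name here are hypothetical):

```bash
% leet flow -t answer -q "What is FireCrawl?" -k firecrawl -l info
```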

Here is a detailed example of [using FireCrawl with Ollama to run a deep research](docs/use_firecrawl.md).

By default we provide a shared proxy search service that can be used for testing purposes.
Users should use their own search services for production use.

# Usage Examples

## Build a local knowledge base using PDFs from the web

We can build a local knowledge base from PDFs on the web. Suppose we have set up
the local Ollama service as described [above](#use-local-ollama-service-for-inference-and-embedding);
we can then use the following commands to build the knowledge base:

```bash
# create a KB with a URL
# the book downloaded here is "Foundations of Large Language Models"
# it has 231 pages and may take some time to process
% leet kb add-url -e .env.ollama -k llmbook -r "https://arxiv.org/pdf/2501.09223"

# now you can query the KB with any topic you want to explore
% leet kb flow -e .env.ollama -t answer -k llmbook -l info \
-q "How does LLM Finetuning process work?"
```
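
Once the KB is built, any of the flows listed earlier can run against it; for example,
the `search` flow returns the top matching segments instead of a synthesized answer
(a sketch reusing the same KB and env file):

```bash
% leet flow -e .env.ollama -t search -k llmbook -l info \
-q "LLM finetuning"
```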

We have a more [detailed example](docs/run_ollama_with_deepseek_r1.md) to show how to
use the local Ollama service with the DeepSeek-r1:1.5B model to build a local knowledge
base.

## Generate analytical research reports like OpenAI/Google's Deep Research

We can generate analytical research reports like OpenAI/Google's Deep Research by using
the `digest` flow. Here is an example:

```bash
% leet flow -e .env.fireworks -t digest -k aijob.fireworks \
-p search_max_results=30 -p days_limit=360 \
-q "How will agentic AI and generative AI affect our non-tech jobs?" \
-l info -o outputs/aijob.fireworks.md
```

An example of the output is available [here](docs/examples/deepseek/aijob.fireworks.md),
and the tutorial to use the DeepSeek API from fireworks.ai for the above command is
available [here](docs/run_deepsearch_with_firework_deepseek.md).

## Generate news list from web search results

We can create a knowledge base from a web search with a date limit and then generate
a list of news items from the KB. Here is an example:

```bash
% leet flow -t news -q "LLM GenAI Startups" -k genai -l info \
-p days_limit=3 -p search_iteration=3 -p search_max_results=100 \
-o llm_genai_news.md
```

The query retrieves up to 100 search results from web pages published in the past 3 days
and generates a list of news items from them. The output is saved to the
`llm_genai_news.md` file. An example of the output is available [here](docs/examples/llm_genai_news.md).

# Main Components

The main components of the backend include:
* 🚀 Automated document pipeline to ingest, convert, chunk, embed, and index documents.
* 🗂️ Knowledge base to manage and serve the indexed documents.
* 🔍 Search and retrieval library to fetch documents from the web or local KB.
* 🤖 Workflow engine to implement search-based AI workflows.
* ⚙️ Configuration system to support dynamic configurations for every component.
* 📝 Query history system to manage the history and the context of the queries.
* 💻 Scheduler for automatic execution of the pipeline tasks.
* 🧩 Accounting system to track the usage of the LLM APIs.

The architecture of the document pipeline is shown below:

![LeetTools Document Pipeline](https://gist.githubusercontent.com/pengfeng/4b2e36bda389e0a3c338b5c42b5d09c1/raw/6bc06db40dadf995212270d914b46281bf7edae9/leettools-eds-arch.svg)

See the [Documentation](docs/documentation.md) for more details.

# Community

**Acknowledgements**

LeetTools currently builds on the following open-source libraries and tools (among others):

- [DuckDB](https://github.com/duckdb/duckdb)
- [Docling](https://github.com/DS4SD/docling)
- [Chonkie](https://github.com/bhavnicksm/chonkie)
- [Ollama](https://github.com/ollama/ollama)
- [Jinja2](https://jinja.palletsprojects.com/en/3.0.x/)
- [BS4](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)
- [FastAPI](https://github.com/fastapi/fastapi)
- [Pydantic](https://github.com/pydantic/pydantic)

We plan to add more plugins for different components to support different workloads.

**Get help and support**

Please feel free to connect with us using the [discussion section](https://github.com/leettools-dev/leettools/discussions).

**Contributing**

Please read [Contributing to LeetTools](CONTRIBUTING.md) for details.

**License**

LeetTools is licensed under the Apache License, Version 2.0. See [LICENSE](LICENSE)
for the full license text.