
# Local Low-Code Chatbot with Ollama and Flowise

![logo](flowise_chatbot.png)

Create a fully functional, local chatbot using [Flowise](https://flowiseai.com/) and [Ollama](https://ollama.com/). This project provides a low-code, privacy-friendly solution for building intelligent conversational bots. 🚀

It accompanies a blog post, which you can find [here](https://www.gpt-labs.ai/post/how-to-build-your-own-local-low-code-chatbot-using-ollama-and-flowise).

## Features
- **Low-Code Workflow:** Build chatbots visually without heavy coding.
- **Local Hosting:** Keeps your data private and secure.
- **Customizable:** Fully adjustable to your needs.
- **Powered by Open-Source Models:** Utilizes Ollama's LLMs for AI capabilities.

## Workflow Overview
![Workflow Diagram](chatbot_workflow.png)

### Core Components
1. **ChatOllama:** Provides AI responses using Ollama's models.
2. **Buffer Memory:** Retains chat history for continuity.
3. **Conversation Chain:** Integrates ChatOllama and Buffer Memory to enable interactive conversations.
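
Conceptually, this wiring maps onto Ollama's chat API: ChatOllama issues the model call, and Buffer Memory amounts to replaying prior turns in the `messages` array on each request. A minimal sketch against a locally running Ollama (the model name `llama3.2` is an assumption; substitute whichever model you pulled):

```shell
# Build the request body: prior turns are resent verbatim, which is
# effectively what Buffer Memory does for the Conversation Chain.
PAYLOAD='{
  "model": "llama3.2",
  "stream": false,
  "messages": [
    {"role": "user", "content": "My name is Sam."},
    {"role": "assistant", "content": "Nice to meet you, Sam!"},
    {"role": "user", "content": "What is my name?"}
  ]
}'
# Send it to Ollama's chat endpoint (default port 11434).
curl -s --max-time 5 http://localhost:11434/api/chat -d "$PAYLOAD" \
  || echo "Ollama is not reachable"
```

Flowise handles this bookkeeping for you; the sketch just shows what each node contributes.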

## Getting Started

### Prerequisites
- [Docker](https://www.docker.com/) (for running Ollama locally)
- [Node.js](https://nodejs.org/) and npm (for Flowise)
- A compatible LLM model (e.g., `SARA-llama3.2`)

### Installation
1. Clone the repository:
```bash
git clone https://github.com/dwain-barnes/local-low-code-chatbot-ollama-flowise.git
cd local-low-code-chatbot-ollama-flowise
```
2. Start Ollama (the official Docker image is `ollama/ollama`):
```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
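
After the container starts, you can confirm the API is reachable (this assumes the default port mapping above):

```shell
# /api/tags lists the models Ollama has locally; an empty "models" array
# just means nothing has been pulled yet. A healthy server replies with
# JSON like: {"models":[{"name":"llama3.2:latest", ...}]}
curl -s --max-time 2 http://localhost:11434/api/tags || echo "Ollama is not reachable"
```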
3. Install Flowise:
```bash
npm install -g flowise
flowise start
```
4. Import the provided Flowise JSON (`basic-local-chatbot-flowise.json`) into Flowise.

### Running the Chatbot
1. Access the Flowise editor at `http://localhost:3000`.
2. Load and activate the workflow.
3. Start chatting with the bot in the interface.
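
Beyond the chat widget, an active chatflow can also be queried programmatically through Flowise's prediction REST endpoint. The chatflow ID below is a placeholder; use the real ID of your imported chatflow, which Flowise displays in the editor:

```shell
# CHATFLOW_ID is a placeholder; substitute the ID of your imported chatflow.
CHATFLOW_ID="your-chatflow-id"
PAYLOAD='{"question": "Hello! What can you do?"}'
# Flowise exposes each chatflow at /api/v1/prediction/<chatflow-id>.
curl -s --max-time 5 "http://localhost:3000/api/v1/prediction/$CHATFLOW_ID" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "Flowise is not reachable"
```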