https://github.com/dwain-barnes/local-low-code-chatbot-ollama-flowise
Create a local, customizable chatbot using Flowise's visual builder and Ollama's open-source LLMs. This low-code project ensures privacy, scalability, and interactive AI features through an easy-to-use workflow. Ideal for developers seeking a secure, efficient chatbot solution.
- Host: GitHub
- URL: https://github.com/dwain-barnes/local-low-code-chatbot-ollama-flowise
- Owner: dwain-barnes
- License: apache-2.0
- Created: 2024-11-21T13:33:42.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2024-11-21T13:59:48.000Z (7 months ago)
- Last Synced: 2025-02-09T08:38:36.718Z (5 months ago)
- Topics: ai-chatbot, ai-workflow, chatbot, data-privacy, flowise, llm, local-chatbot, low-code, ollama, open-source, privacy-focused
- Homepage: https://www.gpt-labs.ai
- Size: 2.09 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Local Low-Code Chatbot with Ollama and Flowise

Create a fully functional, local chatbot using [Flowise](https://flowiseai.com/) and [Ollama](https://ollama.com/). This project is designed to provide a low-code, privacy-friendly solution for building intelligent conversational bots. 🚀
This project accompanies a blog post, which you can read [here](https://www.gpt-labs.ai/post/how-to-build-your-own-local-low-code-chatbot-using-ollama-and-flowise).
## Features
- **Low-Code Workflow:** Build chatbots visually without heavy coding.
- **Local Hosting:** Keeps your data private and secure.
- **Customisable:** Fully adjustable to your needs.
- **Powered by Open-Source Models:** Utilizes Ollama's LLMs for AI capabilities.

## Workflow Overview
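Conceptually, the workflow boils down to two things: keep a rolling message history, and send that history plus the new question to Ollama. A minimal Python sketch of that idea (the class and function names here are illustrative, not Flowise internals):

```python
import json


class BufferMemory:
    """Rolling chat history -- roughly what Flowise's Buffer Memory node provides (illustrative)."""

    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns
        self.messages: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        # Keep only the most recent turns (one turn = a user message plus an assistant message).
        self.messages = self.messages[-2 * self.max_turns:]


def build_chat_payload(memory: BufferMemory, question: str, model: str = "llama3.2") -> str:
    """JSON body a conversation chain would POST to Ollama's /api/chat endpoint."""
    return json.dumps({
        "model": model,
        "messages": memory.messages + [{"role": "user", "content": question}],
        "stream": False,
    })
```

In Flowise you never write this code yourself; the nodes described below wire the same pieces together visually.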
### Core Components
1. **ChatOllama:** Provides AI responses using Ollama's models.
2. **Buffer Memory:** Retains chat history for continuity.
3. **Conversation Chain:** Integrates ChatOllama and Buffer Memory to enable interactive conversations.

## Getting Started
### Prerequisites
- [Docker](https://www.docker.com/) (for running Ollama locally)
- [Node.js](https://nodejs.org/) and npm (for Flowise)
- A compatible LLM model (e.g., `SARA-llama3.2`)

### Installation
1. Clone the repository:
```bash
git clone https://github.com/dwain-barnes/local-low-code-chatbot-ollama-flowise.git
cd local-low-code-chatbot-ollama-flowise
```
2. Start Ollama:
```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
3. Install Flowise:
```bash
npm install -g flowise
flowise start
```
4. Import the provided Flowise JSON (`basic-local-chatbot-flowise.json`) into Flowise.

### Running the Chatbot
1. Access the Flowise editor at `http://localhost:3000`.
2. Load and activate the workflow.
3. Start chatting with the bot in the interface.
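Besides the built-in chat interface, an active Flowise workflow is also reachable over its REST prediction endpoint (`POST /api/v1/prediction/<chatflow-id>`), so you can talk to the bot from your own scripts. A minimal Python sketch, assuming the default port and using a placeholder chatflow ID:

```python
import json
import urllib.request

FLOWISE_URL = "http://localhost:3000"
CHATFLOW_ID = "your-chatflow-id"  # placeholder: copy the real ID from the Flowise editor


def build_request(question: str) -> urllib.request.Request:
    """Build a POST request for Flowise's prediction endpoint."""
    body = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def ask(question: str) -> str:
    """Send a question to the active chatflow and return the bot's reply text."""
    with urllib.request.urlopen(build_request(question)) as resp:
        return json.loads(resp.read())["text"]


# Requires Flowise running with the workflow active:
# print(ask("Hello! Who are you?"))
```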