Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/taofiqsulayman/local-ai-assistant
A Streamlit app that uses Ollama and the LangChain library to create a chat interface with a local AI assistant.
- Host: GitHub
- URL: https://github.com/taofiqsulayman/local-ai-assistant
- Owner: taofiqsulayman
- Created: 2024-08-13T20:40:31.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2024-08-13T21:57:46.000Z (6 months ago)
- Last Synced: 2025-01-31T06:51:17.699Z (19 days ago)
- Topics: ai, langchain, llms, ollama, python, streamlit
- Language: Python
- Homepage:
- Size: 10.7 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: readme.md
# Local AI Assistant
## Unlock the Power of Conversational AI
Imagine having a personal AI assistant at your fingertips, ready to answer your questions, provide information, and even entertain you. Welcome to Local AI Assistant, a Streamlit app that harnesses the power of Ollama (for serving LLMs locally) and LangChain to bring conversational AI to your local machine.
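The moving parts are simple to sketch: Ollama serves pulled models over a local HTTP API (port 11434 by default), and a chat UI keeps appending user and assistant turns to a running history that it resends with each request so the model sees prior context. Below is a minimal, framework-free sketch of that flow using only the standard library. This is hypothetical illustration code, not the app's own implementation — the actual app talks to Ollama through LangChain and renders the chat with Streamlit.

```python
import json
import urllib.request

# Ollama's default local chat endpoint; the Ollama server must be running
# (and the model pulled) before any request will succeed.
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def append_turn(history, role, content):
    """Record one chat turn as a {'role', 'content'} dict -- the message
    shape Ollama's chat API expects."""
    history.append({"role": role, "content": content})
    return history

def build_payload(model, history):
    """Build the JSON body for Ollama's /api/chat endpoint: the model name
    plus the full message history, with streaming disabled for simplicity."""
    return {"model": model, "messages": history, "stream": False}

def chat(model, history):
    """POST the history to the local Ollama server; return the reply text."""
    data = json.dumps(build_payload(model, history)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_CHAT_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Build a request payload without sending it (sending requires a live server):
history = append_turn([], "user", "Hello!")
payload = build_payload("llama3", history)
```

In the real app, LangChain's Ollama integration handles this request/response plumbing, and Streamlit's session state plays the role of the `history` list across reruns.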
## Table of Contents
- [Installation and Setup](#installation-and-setup)
- [Usage](#usage)
- [Troubleshooting](#troubleshooting)
- [Contributing](#contributing)

## Installation and Setup
### Step 1: Install Ollama
To use this project, you need to install Ollama on your system. Follow these steps:
1. Install Ollama by downloading the installer for your platform from [ollama.com](https://ollama.com/download). On Linux, you can instead run the official install script:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
   (Note that `pip install ollama` installs the Python client library, not the Ollama server and CLI themselves.)
2. Verify the installation by running:
```bash
ollama --version
```

### Step 2: Pull Models
Pull the required models for the project by running:
```bash
ollama pull <model-name>
```

For example, `ollama pull llama3`. Any model you want to chat with must be pulled before it can be used.

### Step 3: Clone the Repository
Clone this repository using Git:
```bash
git clone [email protected]:taofiqsulayman/local-ai-assistant.git
```

### Step 4: Create and Activate a Virtual Environment
Create a new virtual environment for the project:
```bash
python -m venv venv
```

Activate the virtual environment:
- On Windows:
```bash
venv\Scripts\activate
```

- On macOS/Linux:
```bash
source venv/bin/activate
```

### Step 5: Install Requirements
Install the required dependencies by running:
```bash
pip install -r requirements.txt
```

### Step 6: Run the App
Run the Streamlit app by executing:
```bash
streamlit run app.py
```

## Usage
1. Select a model from the dropdown list.
2. Type your message in the chat input field.
3. Press Enter to send the message and receive a response from the AI assistant.

## Troubleshooting
- If you encounter any issues with Ollama, refer to the [Ollama documentation](https://ollama.com).
- For Streamlit-related issues, check the [Streamlit documentation](https://docs.streamlit.io/).

## Contributing
Contributions are welcome! If you'd like to contribute to this project, please fork the repository, make changes, and submit a pull request.