Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
A full-stack application that allows you to chat with open-source language models in a ChatGPT-like interface
https://github.com/praneethravuri/open-llms
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/praneethravuri/open-llms
- Owner: praneethravuri
- License: mit
- Created: 2024-05-20T11:36:27.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2024-09-10T16:59:44.000Z (4 months ago)
- Last Synced: 2024-09-11T04:16:50.559Z (4 months ago)
- Topics: ai, axios, chatbot, chatgpt, fastapi, huggingface, llm, llm-aggregator, llms, machine-learning, nextjs, nlp, python, react, tailwindcss, transformers, typescript
- Language: TypeScript
- Homepage:
- Size: 305 KB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# OpenLLMs 💬🤖
A chat application that lets users interact with pre-trained open-source LLMs for question answering. Users type questions into a chat interface, and the application responds with answers generated by the selected model. The aim of this project is to demonstrate how to integrate pre-trained transformer models with a modern web frontend built on Next.js, and how to work with multiple LLMs simultaneously.

![Demo of the OpenLLMs chat interface](demo.gif)
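At a high level, the frontend sends the user's question and chosen model to the FastAPI backend, which generates an answer and returns it as JSON. A minimal stdlib-only sketch of that request/response contract — the field names, model names, and stubbed generators are illustrative assumptions, not taken from this repo (the real backend would call Hugging Face pipelines instead of returning canned text):

```python
import json

# Hypothetical per-model generators; the real app would invoke
# Hugging Face models here instead of returning canned strings.
MODEL_REGISTRY = {
    "distilbert-qa": lambda q: f"[distilbert-qa] answer to: {q}",
    "flan-t5-small": lambda q: f"[flan-t5-small] answer to: {q}",
}

def handle_chat(raw_body: str) -> str:
    """Parse a chat request, dispatch to the selected model, return JSON."""
    body = json.loads(raw_body)
    question = body["question"]
    model = body.get("model", "distilbert-qa")  # assumed default model
    if model not in MODEL_REGISTRY:
        return json.dumps({"error": f"unknown model: {model}"})
    answer = MODEL_REGISTRY[model](question)
    return json.dumps({"answer": answer, "model": model})

if __name__ == "__main__":
    request = json.dumps({"question": "What is an LLM?", "model": "flan-t5-small"})
    print(handle_chat(request))
```

Keeping the model name in both the request and the response lets the UI label each answer with the LLM that produced it, which matters once several models are in play.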
## Table of Contents
- [Tech Stack](#tech-stack)
- [Installation](#installation)
- [Running the Application](#running-the-application)
- [Usage](#usage)

## Tech Stack
- **Frontend**: Next.js 14, React, TypeScript, Tailwind CSS
- **Backend**: FastAPI, Python, SearXNG
- **Machine Learning**: Hugging Face Transformers for LLMs
- **Other Libraries**: Axios (for HTTP requests), CORS middleware

## Installation
Follow these steps to set up and run the application on your local machine.
### Prerequisites
- Node.js 18.17+ (required by Next.js 14)
- Python 3.7+
- Git
- Docker

### Steps
1. **Clone the Repository**
```bash
git clone https://github.com/praneethravuri/open-llms.git
```

2. **Backend Setup**
Navigate to the backend directory, create a virtual environment, and install the required dependencies.
```bash
cd backend
python -m venv venv
source venv/bin/activate # On Windows use `venv\Scripts\activate`
pip install -r requirements.txt
```

If the `requirements.txt` file does not exist, create it with the following content:
```txt
fastapi
uvicorn
transformers
torch
tensorflow
sentence_transformers
nltk
tf-keras
language_tool_python
textblob
pymongo
```

Additionally, if you have a CUDA-capable NVIDIA GPU, install PyTorch with CUDA support:
```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

3. **Frontend Setup**
Navigate to the frontend directory and install the required dependencies.
```bash
cd frontend
npm install
```

## Running the Application 🚀
### Backend
1. **Start the FastAPI Server**
```bash
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

2. **Start SearXNG**
Navigate to the `searxng-docker` directory and start SearXNG using Docker.
```bash
cd searxng-docker
docker-compose up
```

### Frontend
1. **Start the Next.js Development Server**
```bash
cd frontend
npm run dev
```

### Access the Application
Open your browser and navigate to `http://localhost:3000` to see the chat interface.
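With everything running, you can sanity-check that each service is actually listening before opening the browser. A small stdlib-only port probe — ports 8000 and 3000 come from the steps above, while the SearXNG port depends on your docker-compose configuration (8080 is only a common default, an assumption here):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    services = {
        "FastAPI backend": 8000,
        "Next.js frontend": 3000,
        "SearXNG (assumed port)": 8080,
    }
    for name, port in services.items():
        status = "up" if port_open("127.0.0.1", port) else "DOWN"
        print(f"{name} on port {port}: {status}")
```

If the backend shows DOWN, check the terminal running uvicorn for import errors before debugging the frontend.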
## Usage
1. **Interact with the Chat Interface**
- Open the chat interface in your browser.
- Type a question in the input box at the bottom.
- Press the send button or hit enter to send your question.
- The application will respond with an answer generated by the selected model.

2. **Interact with Different LLMs**
- Select a different pre-trained LLM.
- Type a question in the input box at the bottom.
- Press the send button or hit enter to send your question.
- The application will respond with an answer generated by the newly selected model.

By following these steps, you will be able to interact with various pre-trained language models through a modern, intuitive web interface. Enjoy exploring the capabilities of LLMs! 🎉