Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/thejasmeetsingh/moody-llm
An LLM whose mood keeps changing.
asynchronous-programming fastapi genai highlightjs langchain llama3 llm ollama pydantic python3 reactjs restful-api supabase tailwindcss websockets
- Host: GitHub
- URL: https://github.com/thejasmeetsingh/moody-llm
- Owner: thejasmeetsingh
- Created: 2024-06-21T08:41:26.000Z (7 months ago)
- Default Branch: master
- Last Pushed: 2024-07-11T15:35:10.000Z (6 months ago)
- Last Synced: 2024-11-08T21:03:39.167Z (about 2 months ago)
- Topics: asynchronous-programming, fastapi, genai, highlightjs, langchain, llama3, llm, ollama, pydantic, python3, reactjs, restful-api, supabase, tailwindcss, websockets
- Language: JavaScript
- Homepage:
- Size: 552 KB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Moody LLM
Moody LLM is an interactive chat application in which the language model's mood keeps changing, so users receive varied responses depending on the LLM's current mood. The project simulates conversations with a moody AI, providing a unique and dynamic user experience.
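The README doesn't spell out the mood mechanism, but the idea can be sketched as a randomly chosen mood injected into the system prompt before each reply. Below is a minimal, illustrative sketch assuming the Ollama-served llama3 model is accessed through LangChain's `ChatOllama`; the mood list, prompt wording, and package layout are assumptions, not the project's actual code:

```python
import random

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_ollama import ChatOllama  # assumption: the langchain-ollama integration package

# Illustrative moods; the project's real set may differ.
MOODS = ["cheerful", "grumpy", "sarcastic", "melancholic", "overly formal"]

# Requires `ollama serve` running locally with the llama3 model pulled.
llm = ChatOllama(model="llama3")


def moody_reply(user_message: str) -> str:
    """Generate a reply whose tone is driven by a randomly picked mood."""
    mood = random.choice(MOODS)
    messages = [
        SystemMessage(
            content=f"You are a chat assistant whose current mood is {mood}. "
                    "Let that mood clearly color your answer."
        ),
        HumanMessage(content=user_message),
    ]
    return llm.invoke(messages).content


print(moody_reply("How is the weather today?"))
```

Each call picks a fresh mood, so the same question can come back cheerful one moment and grumpy the next.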
## Overview
![](./assets/overview.png)
## Demo
[![](./assets/thumbnail.png)](https://moody-llm.s3.ap-south-1.amazonaws.com/demo.mp4)
## Getting Started
### Prerequisites:
1. [Supabase](https://supabase.com/) Account:
- Create an account on Supabase and set up a table.
- Obtain the Supabase Key and Supabase URL from the Supabase dashboard.
- Configure these details in the backend `.env` file (a minimal sketch of loading them in the backend appears after the setup steps below).
**Table Schema:**
```sql
id UUID PRIMARY KEY
created_at TIMESTAMPTZ NOT NULL
user_id UUID NOT NULL
message JSON NOT NULL
```

2. Ollama Installation:
- [Install](https://ollama.com/download) Ollama on your system.
- Once installed, do the following:
  * Run the command `ollama serve` to start the Ollama server.
  * In a new terminal tab, run `ollama pull llama3` to pull the llama3 model onto your system.

### Steps:
- Clone the project repository to your local machine.
- **Backend:**
- Navigate to the backend folder.
- Install requirements: `pip install -r requirements.txt`
- Run the backend service: `fastapi dev`
- Access the backend services at: http://localhost:8000/
- **Frontend:**
- Navigate to the frontend folder.
- Install libraries: `npm install`
- Run the frontend app: `npm run dev`
- Access the frontend app at: http://localhost:5173/
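For reference, the Supabase credentials configured in the prerequisites are presumably read from the backend `.env` and used to store chat messages. A rough sketch with the `supabase` and `python-dotenv` packages; the environment variable names and the `messages` table name are assumptions, so match them to your own setup:

```python
import os
import uuid
from datetime import datetime, timezone

from dotenv import load_dotenv       # python-dotenv
from supabase import create_client   # supabase-py

load_dotenv()  # reads the backend .env; SUPABASE_URL / SUPABASE_KEY names are assumptions

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# Store one chat message, matching the table schema from the prerequisites.
# The table name "messages" is an assumption; use whatever name you created.
supabase.table("messages").insert({
    "id": str(uuid.uuid4()),
    "created_at": datetime.now(timezone.utc).isoformat(),
    "user_id": str(uuid.uuid4()),
    "message": {"role": "user", "content": "Hello, moody LLM!"},
}).execute()
```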
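Once both services are running, the chat endpoint can also be exercised without the frontend. A minimal sketch using the `websockets` package; the `/ws` path and plain-text message format are assumptions, so check the backend's routes for the real endpoint:

```python
import asyncio

import websockets  # pip install websockets


async def main():
    # The /ws path is an assumption; replace it with the backend's actual WebSocket route.
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        await ws.send("Hello, how are you feeling today?")
        reply = await ws.recv()
        print("LLM replied:", reply)


asyncio.run(main())
```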