LLM Q&A and Summarization App
- Host: GitHub
- URL: https://github.com/benitomartin/youtube-llm
- Owner: benitomartin
- Created: 2023-10-12T10:10:48.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-04-20T18:08:40.000Z (over 1 year ago)
- Last Synced: 2024-12-31T14:28:45.636Z (9 months ago)
- Topics: chromadb, langchain, python, streamlit, whisper
- Language: Python
- Homepage:
- Size: 658 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# YOUTUBE Q&A AND SUMMARIZATION APP 📺
This repository hosts an app built with **Whisper** and **LangChain** that provides a Q&A assistant and video summarization. The model's maximum context length is 4,097 tokens (gpt-3.5-turbo).
The app can be run locally, but it requires an `OPENAI_API_KEY` in the `.env` file. Feel free to ⭐ and clone this repo 😉
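As a minimal sketch of that setup (assuming the common `python-dotenv` package; only the `OPENAI_API_KEY` variable name comes from this README):

```python
# .env in the project root (never commit this file):
# OPENAI_API_KEY=sk-...

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env into the process environment
api_key = os.environ["OPENAI_API_KEY"]  # raises KeyError if the key is missing
```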
## 👨‍💻 **Tech Stack**

Python · Whisper · LangChain · ChromaDB · Streamlit
## 💬 Set Up
I recommend installing the modules in the following order. `ffmpeg` is required for the application to run properly. You can install it using Conda as follows:
```bash
conda install -c conda-forge ffmpeg
```

```bash
pip install git+https://github.com/openai/whisper.git
```

```bash
pip install -r requirements.txt
```

## 🫵 App Deployment
The app can be run with `streamlit run app.py` in the terminal. There are two options in the sidebar: Q&A or Summarize. I recommend using videos with no more than 5 minutes of speech due to the model's token limit.
The first option provides a Q&A assistant for asking questions about the video.
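For illustration, a rough sketch of how such a pipeline is typically wired with this repo's stack (Whisper for transcription, LangChain with a Chroma vector store for retrieval). The file name, chunk sizes, and model settings below are assumptions, not the repo's actual code:

```python
# Hypothetical Q&A pipeline sketch: transcribe -> chunk -> embed -> retrieve -> answer.
# Uses classic (2023-era) LangChain imports matching this repo's dependencies.
import whisper
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

# Transcribe the downloaded audio (Whisper needs ffmpeg on the PATH).
transcript = whisper.load_model("base").transcribe("video_audio.mp4")["text"]

# Chunk the transcript so the retrieved context fits the 4,097-token window.
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_text(transcript)

# Embed the chunks into an in-memory Chroma vector store.
store = Chroma.from_texts(chunks, OpenAIEmbeddings())

# Answer questions with gpt-3.5-turbo over the retrieved chunks.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=store.as_retriever(),
)
print(qa.run("What is the video about?"))
```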
The second option produces a summary of the video.
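Again as a hedged sketch (not the repo's exact code), a map-reduce summarize chain in LangChain keeps each model call within the token limit; the chunk size and chain type here are assumptions:

```python
# Hypothetical summarization sketch: split the transcript, then map-reduce summarize.
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain

transcript = "..."  # output of the Whisper transcription step shown above

# Wrap each chunk in a Document so the chain can process them.
docs = [
    Document(page_content=chunk)
    for chunk in RecursiveCharacterTextSplitter(chunk_size=2000).split_text(transcript)
]

# map_reduce: summarize each chunk, then summarize the partial summaries.
chain = load_summarize_chain(
    ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    chain_type="map_reduce",
)
print(chain.run(docs))
```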