Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/natanielf/lecsum
Automatically transcribe and summarize lecture recordings completely on-device using AI.
- Host: GitHub
- URL: https://github.com/natanielf/lecsum
- Owner: natanielf
- Created: 2024-10-05T19:04:10.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2024-10-30T22:25:11.000Z (15 days ago)
- Last Synced: 2024-10-30T23:24:10.120Z (15 days ago)
- Topics: ollama, ollama-python, whisper, whisper-ai
- Language: Python
- Homepage:
- Size: 3.91 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# lecsum
Automatically transcribe and summarize lecture recordings completely on-device using AI.
## Environment Setup
Install [Ollama](https://ollama.com/download).
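
The default configuration (see the table under Configuration below) references the `llama3.1:8b` Ollama model; you can pull it ahead of time so it is available locally:

```sh
ollama pull llama3.1:8b
```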
Create a virtual Python environment:
```sh
python3 -m venv venv
```

Activate the virtual environment:
```sh
source venv/bin/activate
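# On Windows, use venv\Scripts\activate (cmd) or venv\Scripts\Activate.ps1 (PowerShell) instead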
```

Install dependencies:
```sh
pip install -r requirements.txt
```

## Configuration (optional)
Edit `lecsum.yaml`:
| **Field** | **Default Value** | **Possible Values** | **Description** |
| --------------- | ----------------- | -------------------------------------------------------------------------------------- | ---------------------------------------------------------------- |
| `whisper_model` | "base.en" | [Whisper model name](https://github.com/openai/whisper#available-models-and-languages) | Specifies which Whisper model to use for transcription |
| `ollama_model` | "llama3.1:8b" | [Ollama model name](https://ollama.com/library) | Specifies which Ollama model to use for summarization |
| `prompt`        | "Summarize: "     | Any string                                                                               | Instructs the large language model during the summarization step |
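
For reference, here is a minimal `lecsum.yaml` sketch built from the defaults in the table above (assuming the three fields are top-level keys; written from the shell for convenience, but any editor works):

```sh
cat > lecsum.yaml <<'EOF'
whisper_model: "base.en"
ollama_model: "llama3.1:8b"
prompt: "Summarize: "
EOF
```

## Run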
Run the Ollama server:
```sh
ollama serve
```

In a new terminal, run:
```sh
./lecsum.py -c [CONFIG_FILE] [AUDIO_FILE]
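# Example (file names are illustrative), using the config file above:
#   ./lecsum.py -c lecsum.yaml lecture01.mp3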
```

## References
- https://pyyaml.org/wiki/PyYAMLDocumentation
- https://github.com/openai/whisper
- https://github.com/ollama/ollama-python