https://github.com/run-llama/notebookllama
A fully open-source, LlamaCloud-backed alternative to NotebookLM
- Host: GitHub
- URL: https://github.com/run-llama/notebookllama
- Owner: run-llama
- License: MIT
- Created: 2025-06-27T20:48:40.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2025-07-17T17:33:54.000Z (8 months ago)
- Last Synced: 2025-07-22T23:57:21.513Z (8 months ago)
- Language: Python
- Size: 1.48 MB
- Stars: 1,229
- Watchers: 8
- Forks: 158
- Open Issues: 6
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
- StarryDivineSky - run-llama/notebookllama
- Awesome-GitHub-Repo - NotebookLlama - LlamaIndex's official Jupyter Notebook learning project; its example tutorials teach you how to build RAG applications and LLM workflows, making it a best-practice entry point to LlamaIndex. (Learning Resources / Learning Projects)
README
# NotebookLlaMa🦙
## A fluffy and open-source alternative to NotebookLM!
https://github.com/user-attachments/assets/7e9cca45-8a4c-4dfa-98d2-2cef147422f2
A fully open-source alternative to NotebookLM, backed by LlamaCloud.
### Prerequisites
This project uses `uv` to manage dependencies. Before you begin, make sure you have `uv` installed.
On macOS and Linux:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
On Windows:
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
For more install options, see `uv`'s [official documentation](https://docs.astral.sh/uv/getting-started/installation/).
---
### Get it up and running!
**1. Clone the Repository**
```bash
git clone https://github.com/run-llama/notebookllama
cd notebookllama/
```
**2. Install Dependencies**
```bash
uv sync
```
**3. Configure API Keys**
First, create your `.env` file by renaming the example file:
```bash
mv .env.example .env
```
Next, open the `.env` file and add your API keys:
- `OPENAI_API_KEY`: find it [on OpenAI Platform](https://platform.openai.com/api-keys)
- `ELEVENLABS_API_KEY`: find it [on ElevenLabs Settings](https://elevenlabs.io/app/settings/api-keys)
- `LLAMACLOUD_API_KEY`: find it [on LlamaCloud Dashboard](https://cloud.llamaindex.ai?utm_source=demo&utm_medium=notebookLM)
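For reference, the finished `.env` should contain one line per key. The values below are placeholders, not real keys (the `sk-`/`llx-` prefixes reflect the typical format of OpenAI and LlamaCloud keys; substitute your own):

```
OPENAI_API_KEY=sk-your-openai-key
ELEVENLABS_API_KEY=your-elevenlabs-key
LLAMACLOUD_API_KEY=llx-your-llamacloud-key
```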
**4. Activate the Virtual Environment**
On macOS and Linux:
```bash
source .venv/bin/activate
```
On Windows:
```powershell
.\.venv\Scripts\activate
```
**5. Create LlamaCloud Agent & Pipeline**
You will now execute two scripts to configure your backend agents and pipelines.
First, create the data extraction agent:
```bash
uv run tools/create_llama_extract_agent.py
```
Next, run the interactive setup wizard to configure your index pipeline.
> **⚡ Quick Start (Default OpenAI):**
> For the fastest setup, select **"With Default Settings"** when prompted. This will automatically create a pipeline using OpenAI's `text-embedding-3-small` embedding model.
> **🧠 Advanced (Custom Embedding Models):**
> To use a different embedding model, select **"With Custom Settings"** and follow the on-screen instructions.
Run the wizard with the following command:
```bash
uv run tools/create_llama_cloud_index.py
```
**6. Launch Backend Services**
This command will start the required Postgres and Jaeger containers.
```bash
docker compose up -d
```
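To confirm the containers came up, you can run `docker compose ps`, or probe the ports from Python. A minimal sketch using only the standard library; the ports are assumptions based on the services' usual defaults (Postgres on 5432, Jaeger UI on 16686), so check `docker-compose.yml` if yours differ:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed defaults; verify against docker-compose.yml.
for name, port in [("Postgres", 5432), ("Jaeger UI", 16686)]:
    status = "up" if port_open("localhost", port) else "down"
    print(f"{name} ({port}): {status}")
```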
**7. Run the Application**
First, run the **MCP** server:
```bash
uv run src/notebookllama/server.py
```
Then, in a **new terminal window**, launch the Streamlit app:
```bash
streamlit run src/notebookllama/Home.py
```
> [!IMPORTANT]
>
> _You might need to install `ffmpeg` if you do not have it installed already (e.g. `brew install ffmpeg` on macOS, or `sudo apt install ffmpeg` on Debian/Ubuntu)._
Then open `http://localhost:8501/` and start exploring the app.
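If you script the startup, a small readiness probe saves guessing when Streamlit is ready. A sketch using only the standard library; the URL and timeouts are assumptions for illustration, not part of the project:

```python
import time
import urllib.request
from urllib.error import URLError

def wait_for(url: str, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll url until it returns HTTP 200, or give up after timeout seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status == 200:
                    return True
        except (URLError, OSError):
            pass
        time.sleep(interval)
    return False

# e.g. wait_for("http://localhost:8501/") before opening a browser tab
```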
---
### Contributing
Contribute to this project following the [guidelines](./CONTRIBUTING.md).
### License
This project is provided under the [MIT License](./LICENSE).
