https://github.com/nathadriele/llm-zoomcamp
The Zoomcamp LLM Course focuses on tools for working with LLMs and RAG, including OpenAI API, HuggingFace, Elasticsearch, and Streamlit. It covers vector search, embedding creation, data ingestion with Mage, and monitoring using Grafana, emphasizing practical applications and best practices.
- Host: GitHub
- URL: https://github.com/nathadriele/llm-zoomcamp
- Owner: nathadriele
- Created: 2024-07-01T17:22:59.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-02-18T15:36:45.000Z (8 months ago)
- Last Synced: 2025-02-18T16:33:37.227Z (8 months ago)
- Topics: elasticsearch, grafana, huggingface, llms, mageai, ollama, openai-api, rag, streamlit
- Language: Jupyter Notebook
- Homepage:
- Size: 1.82 MB
- Stars: 7
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Zoomcamp LLM Course
[More details](https://github.com/DataTalksClub/llm-zoomcamp?tab=readme-ov-file)
🟢 **Final project developed**
https://github.com/nathadriele/biophenotype-rag

## 1. Introduction to LLMs and RAG
- 1.1 LLMs and RAG
- 1.2 Preparing the environment
- 1.3 Retrieval and the basics of search
- 1.4 OpenAI API
- 1.5 Simple RAG with OpenAI (see the sketch below)
- 1.6 Text search with Elasticsearch
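A minimal sketch of how modules 1.3–1.6 fit together: keyword search over an Elasticsearch index feeding a chat completion through the OpenAI API. The index name, document field, and model are assumptions, not taken from this repository.

```python
from elasticsearch import Elasticsearch
from openai import OpenAI

es = Elasticsearch("http://localhost:9200")
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

def retrieve(query: str, index: str = "course-questions", size: int = 5) -> list[str]:
    """Keyword search over an assumed 'text' field; returns the top matching documents."""
    resp = es.search(index=index, query={"match": {"text": query}}, size=size)
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]

def rag(query: str) -> str:
    """Build a prompt from the retrieved context and ask the LLM to answer from it."""
    context = "\n\n".join(retrieve(query))
    prompt = f"Answer the question using only this context:\n{context}\n\nQuestion: {query}"
    completion = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

print(rag("How do I run Elasticsearch in Docker?"))
```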
## 2. Open-source LLMs
- 2.1 Getting an environment with a GPU
- 2.2 Open-source models from HuggingFace Hub
- 2.3 Running LLMs on a CPU with Ollama
- 2.4 Creating a simple UI with Streamlit (see the sketch below)
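A minimal sketch in the spirit of modules 2.3–2.4: a local model served by Ollama, reached through its OpenAI-compatible endpoint and wrapped in a small Streamlit page. The model name is an assumption; Ollama must already be running with that model pulled.

```python
import streamlit as st
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API on port 11434; the key is ignored.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

st.title("Course assistant (local LLM)")

question = st.text_input("Ask a question")
if question:
    response = client.chat.completions.create(
        model="phi3",  # assumed model, e.g. pulled with `ollama pull phi3`
        messages=[{"role": "user", "content": question}],
    )
    st.write(response.choices[0].message.content)
```

Launch it with `streamlit run app.py` once the Ollama server is up.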
## 3. Vector databases
- 3.1 Vector search
- 3.2 Creating and indexing embeddings
- 3.3 Vector search with Elasticsearch (see the sketch below)
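A minimal sketch of module 3: embed documents with sentence-transformers, index them into an Elasticsearch `dense_vector` field, and query with kNN. The index name, field names, and embedding model are assumptions.

```python
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

es = Elasticsearch("http://localhost:9200")
model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

# Index with a dense_vector field for approximate kNN search (fails if it already exists).
es.indices.create(
    index="course-vectors",
    mappings={
        "properties": {
            "text": {"type": "text"},
            "vector": {"type": "dense_vector", "dims": 384, "index": True, "similarity": "cosine"},
        }
    },
)

docs = ["Elasticsearch supports approximate kNN search.", "Embeddings map text to vectors."]
for i, text in enumerate(docs):
    es.index(index="course-vectors", id=str(i),
             document={"text": text, "vector": model.encode(text).tolist()})
es.indices.refresh(index="course-vectors")

query_vec = model.encode("How does vector search work?").tolist()
resp = es.search(
    index="course-vectors",
    knn={"field": "vector", "query_vector": query_vec, "k": 2, "num_candidates": 10},
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["text"])
```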
## 4. Monitoring
- 4.1 Monitoring
- 4.2 Computing metrics to monitor the quality of LLM answers
- 4.3 Tracking chat history and user feedback
- 4.4 Creating dashboards with Grafana for visualization (see the sketch below)
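A minimal sketch of the kind of metric module 4.2 covers: cosine similarity between an LLM answer and a reference answer. Scores like this, together with chat history and user feedback, can be written to a database that Grafana dashboards read from. The embedding model is an assumption.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def answer_similarity(llm_answer: str, reference_answer: str) -> float:
    """Cosine similarity between the two answers' embeddings (higher means closer)."""
    a, b = model.encode([llm_answer, reference_answer], normalize_embeddings=True)
    return float(np.dot(a, b))

print(answer_similarity("Use docker run to start Elasticsearch.",
                        "Start Elasticsearch with docker run."))
```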
## 5. LLM orchestration and ingestion
- 5.1 Ingesting data with Mage (see the sketch below)
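A minimal sketch of what a Mage data-loader block for module 5.1 might look like. The source URL is a placeholder, and the wiring of the rest of the pipeline (chunking, embedding, and indexing in later blocks) is an assumption.

```python
import requests

# Standard Mage block scaffold: the decorator is injected when the block runs in a pipeline.
if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader

@data_loader
def load_documents(*args, **kwargs):
    """Download the raw FAQ documents that downstream blocks chunk, embed, and index."""
    url = "https://example.com/documents.json"  # placeholder source
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()
```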
## 6. Best practices
- 6.1 Best practices