https://github.com/edaaydinea/llm_mastery
Learn to master the latest in Large Language Models (LLMs) including ChatGPT, Gemini, Claude, Llama3, and OpenAI APIs.
- Host: GitHub
- URL: https://github.com/edaaydinea/llm_mastery
- Owner: edaaydinea
- License: apache-2.0
- Created: 2025-03-13T12:50:21.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2025-03-13T13:08:29.000Z (7 months ago)
- Last Synced: 2025-03-13T14:23:36.680Z (7 months ago)
- Topics: chatgpt, large-language-models, openai
- Language: Jupyter Notebook
- Homepage:
- Size: 16.6 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
# LLM Mastery: ChatGPT, Gemini, Claude, Llama3, OpenAI & APIs
Learn to master the latest in Large Language Models (LLMs) including ChatGPT, Gemini, Claude, Llama3, and OpenAI APIs. This course provides comprehensive insights and practical applications.
[Course Link](https://www.udemy.com/course/llm-mastery-chatgpt-gemini-claude-llama3-openai-apis)
## Table of Contents
- [LLM Mastery: ChatGPT, Gemini, Claude, Llama3, OpenAI \& APIs](#llm-mastery-chatgpt-gemini-claude-llama3-openai--apis)
- [Table of Contents](#table-of-contents)
- [Day 1: How LLMs Work: Parameters, Weights, Inference, Neural Networks and More](#day-1-how-llms-work-parameters-weights-inference-neural-networks-and-more)
- [Day 2: Additional Capabilities of LLMs \& Future Developments](#day-2-additional-capabilities-of-llms--future-developments)
- [Day 3: Prompt Engineering: Effective Use of LLMs in the Standard Interface](#day-3-prompt-engineering-effective-use-of-llms-in-the-standard-interface)

## Day 1: How LLMs Work: Parameters, Weights, Inference, Neural Networks and More
**What I did today:**
- Reviewed the basics of Language Models (LMs), including pre-training and fine-tuning.
- Learned about the transformer architecture and reinforcement learning.
- Explored the components of an LM: parameter file and run file.
- Understood the process of creating parameter files using GPUs and text data.
- Studied neural networks and how they learn through forward and back propagation (see the first sketch after this list).
- Examined how neural networks operate on word tokens in LLMs (see the tokenizer example below).
- Discussed the transformer architecture and its current limitations.
- Learned about the mixture-of-experts approach to the transformer architecture (see the routing sketch below).
- Reviewed the fine-tuning process to create assistant models.
- Understood reinforcement learning and its application in LLMs.
- Discussed the scaling laws of LLMs and the importance of GPUs and data (see the scaling-law snippet below).

**Resources**:
- [day1 notes.ipynb](./week1/notes/day1.ipynb)
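
As a companion to the notes above, here is a minimal NumPy sketch of forward and back propagation on a one-hidden-layer network. It is not from the course materials; the layer sizes, toy data, and learning rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples with 3 features each, and a binary target.
X = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Randomly initialized weights ("parameters") for a 3 -> 4 -> 1 network.
W1, b1 = rng.normal(size=(3, 4)) * 0.5, np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)) * 0.5, np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward propagation: inputs flow through the layers to a prediction.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Back propagation: gradients of the squared error flow backwards
    # through the same layers via the chain rule.
    dp = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)

    # Gradient descent: nudge every parameter against its gradient.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))  # predictions approach y
```

The same two passes, at vastly larger scale, are what produce an LLM's parameter file: the forward pass computes predictions over text and the backward pass adjusts billions of weights.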
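The token bullet can be made concrete with OpenAI's `tiktoken` library (assuming it is installed via `pip install tiktoken`):

```python
import tiktoken

# cl100k_base is the byte-pair encoding used by several OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("Large Language Models predict the next token.")
print(ids)                              # the integer ids the network actually sees
print([enc.decode([i]) for i in ids])   # the text fragment behind each id
```

Common words map to single tokens while rarer strings break into several pieces; the network never sees words, only these integer ids.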
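For the mixture-of-experts bullet, a minimal sketch of the routing idea, with tiny linear "experts" for readability. Real MoE layers use full feed-forward experts plus load-balancing losses, both omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a small weight matrix; a router scores experts per token.
experts = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts)) * 0.1

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                            # (tokens, n_experts) scores
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of the k best experts
    out = np.zeros_like(x)
    for t, token in enumerate(x):
        weights = np.exp(logits[t, top[t]])
        weights /= weights.sum()                   # softmax over chosen experts only
        for w, e in zip(weights, top[t]):
            out[t] += w * (token @ experts[e])     # only k of n experts run per token
    return out

tokens = rng.normal(size=(5, d_model))  # 5 token embeddings
print(moe_layer(tokens).shape)          # (5, 8)
```

The payoff is that each token pays the compute cost of only `top_k` experts while the model keeps the capacity of all `n_experts`.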
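The scaling-laws bullet follows the Chinchilla-style parametric form `L(N, D) = E + A/N^alpha + B/D^beta`, where `N` is the parameter count and `D` the number of training tokens. The constants below are the Hoffmann et al. (2022) fits quoted from memory, so treat them as illustrative:

```python
# Chinchilla-style scaling law: predicted loss from parameters N and tokens D.
# Constants are the published fits as I recall them; illustrative, not gospel.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params, n_tokens):
    return E + A / n_params**alpha + B / n_tokens**beta

# More parameters or more data each push the loss toward the irreducible term E,
# which is why GPUs (bigger N) and data (bigger D) both matter.
print(round(loss(70e9, 1.4e12), 3))  # roughly Chinchilla's own training budget
print(round(loss(7e9, 1.4e12), 3))   # 10x smaller model, same data -> higher loss
```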
## Day 2: Additional Capabilities of LLMs & Future Developments
**What I did today:**
- Explored the multimodal capabilities of LLMs, including processing images, audio, and video.
- Learned about ChatGPT's ability to use external tools such as calculators and Python libraries (see the function-calling sketch below).
- Studied the vision capabilities of LLMs, including image recognition and visual processing (see the image-input example below).
- Discussed the potential for LLMs to engage in natural, conversational speech.
- Reviewed the concept of System 1 and System 2 thinking in LLMs.
- Learned about recent updates to ChatGPT, including real-time web searches and content creation tools.
- Examined the advancements in OpenAI's o3 model toward AGI.
- Discussed the concept of self-improvement in AI inspired by AlphaGo.
- Explored methods for improving LLM performance, such as RAG and prompt engineering (see the RAG sketch below).
- Envisioned the future of LLMs as comprehensive operating systems.

**Resources**:
- [day2 notes.ipynb](./week1/notes/day2.ipynb)
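
A sketch of the tool-use idea via the OpenAI Python SDK's function-calling interface. The `calculator` tool is hypothetical and the model name is an assumption; note that the model only *proposes* the call, and your code must execute it and send the result back in a follow-up message.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Describe a hypothetical calculator tool the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any tool-capable chat model works
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
    tools=tools,
)

# Instead of free text, the model answers with a structured tool call.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)
```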
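For the vision bullet, a minimal sketch of passing an image to a vision-capable chat model through the same SDK; the image URL and model name are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed vision-capable model name
    messages=[{
        "role": "user",
        "content": [  # mixed text + image content in a single user turn
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)
```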
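Finally, a self-contained sketch of the RAG idea: embed documents, retrieve the ones most similar to a question, and prepend them to the prompt. The `embed` function is a deliberately crude hashing stand-in for a real embedding model, and the documents are invented for illustration.

```python
import numpy as np

def embed(text, dim=64):
    """Crude stand-in embedding: hash words into a fixed-size unit vector.
    A real pipeline would call an embedding model instead."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

documents = [
    "The parameter file stores the trained weights of the model.",
    "Fine-tuning adapts a pre-trained base model into an assistant.",
    "Scaling laws relate model quality to parameters, data, and compute.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def rag_prompt(question, k=2):
    """Retrieve the k most similar documents and prepend them as context."""
    scores = doc_vectors @ embed(question)  # cosine similarity (unit vectors)
    top = np.argsort(scores)[-k:][::-1]     # best matches first
    context = "\n".join(documents[i] for i in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(rag_prompt("What do scaling laws describe?"))
```

The assembled prompt would then be sent to a chat model; grounding answers in retrieved text is what helps on private or recent data the model was never trained on.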
## Day 3: Prompt Engineering: Effective Use of LLMs in the Standard Interface