# AIE3
# What is [AI Engineering](https://maven.com/aimakerspace/ai-eng-bootcamp)?

AI Engineering refers to the industry-relevant skills that data science and engineering teams need to successfully **build, deploy, operate, and improve Large Language Model (LLM) applications in production environments**.
In practice, this requires understanding both prototyping and production deployments.
During the *prototyping* phase, Prompt Engineering, [Retrieval Augmented Generation (RAG)](https://www.youtube.com/playlist?list=PLrSHiQgy4VjFlWgcLHatJCBgfeE8smVyP), and [Fine-Tuning](https://www.youtube.com/playlist?list=PLrSHiQgy4VjGMzyXsSlvN-TjPaqFFsAGP) are all essential tools to understand and leverage (a minimal sketch of the RAG pattern follows the list below). Prototyping includes:
- Building RAG Applications
- Fine-Tuning LLMs & Embedding Models
- Building Agent/Reasoning Applications
- LLM Application Deployment
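To make the RAG piece of the prototyping phase concrete, here is a minimal, dependency-free sketch of the retrieve-augment-generate loop. The toy keyword scoring and the `call_llm` placeholder are assumptions for illustration only; a real prototype would typically use embedding-based retrieval and an actual LLM provider client.

```python
# Minimal sketch of the Retrieval Augmented Generation (RAG) pattern:
# 1) retrieve the documents most relevant to a question,
# 2) augment the prompt with that context,
# 3) generate an answer with an LLM.
# The scoring below is a toy bag-of-words overlap, and `call_llm` is a
# hypothetical stand-in for whatever model endpoint you actually use.

from collections import Counter

documents = [
    "RAG grounds LLM answers in retrieved context documents.",
    "Fine-tuning adapts model weights to a specific task or domain.",
    "Agents combine LLM reasoning with tool calls and memory.",
]

def score(question: str, doc: str) -> int:
    """Toy relevance score: count tokens shared by question and document."""
    q_tokens = Counter(question.lower().split())
    d_tokens = Counter(doc.lower().split())
    return sum((q_tokens & d_tokens).values())

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the top-k documents by the toy score."""
    return sorted(documents, key=lambda d: score(question, d), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client here."""
    return f"[model response to a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    """Retrieve context, build an augmented prompt, and generate an answer."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("What does RAG do for LLM applications?"))
```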
When *productionizing* LLM application prototypes, there are many important aspects to ensuring helpful, harmless, honest, reliable, and scalable solutions for your customers or stakeholders. Productionizing includes:
- RAG Evaluation and Improvement
- Monitoring and Visibility Tooling
- Efficient Inference and Serving
- Scaling LLM Applications

This bootcamp builds on our two previous courses, [LLM Engineering](https://maven.com/aimakerspace/llm-engineering) and [LLM Operations](https://maven.com/aimakerspace/llmops):
- Large Language Model Engineering (LLM Engineering) refers to the emerging best practices and tools for training, fine-tuning, and aligning LLMs before production deployment. LLM Engineering is the counterpart to ML Engineering (MLE), focusing on LLM-specific techniques like prompt engineering, reinforcement learning, quantization, and other methods required by this new paradigm.
- Large Language Model Ops (LLM Ops, or LLMOps, as used by [WandB](https://docs.wandb.ai/guides/prompts) and [a16z](https://a16z.com/emerging-architectures-for-llm-applications/)) refers to the emerging best practices, tooling, and improvement processes used to manage production LLM applications throughout the AI product lifecycle. LLM Ops is a subset of Machine Learning Operations (MLOps) that focuses on the LLM-specific infrastructure and ops capabilities required to build, deploy, monitor, and scale complex LLM applications in production environments.

# Grading and Certification
To become **AI-Makerspace Certified**, which will open you up to additional opportunities for full and part-time work within our community and network, you must:
1. Complete all project assignments.
2. Complete a project and present during Demo Day.
3. Receive at least an 85% total grade in the course.

If you do not complete all assignments, participate in Demo Day, or maintain a high-quality standard of work, you may still be eligible for a *certificate of completion* if you miss no more than 2 live sessions.
# About
This GitHub repository is your gateway to mastering the art of AI Engineering. ***All assignments for the course will be released here for your building, shipping, and sharing adventures!***
# Contributions
We believe in the power of collaboration. Contributions, ideas, and feedback are highly encouraged! Let's build the ultimate resource for AI Engineering together.
Please reach out with any questions or suggestions.
Happy coding!