
# AIE3
# πŸ§‘β€πŸ’»Β What is [AI Engineering](https://maven.com/aimakerspace/ai-eng-bootcamp)?

AI Engineering refers to the industry-relevant skills that data science and engineering teams need to successfully **build, deploy, operate, and improve Large Language Model (LLM) applications in production environments**.

In practice, this requires understanding both prototyping and production deployments.

During the *prototyping* phase, Prompt Engineering, [Retrieval Augmented Generation (RAG)](https://www.youtube.com/playlist?list=PLrSHiQgy4VjFlWgcLHatJCBgfeE8smVyP), and [Fine-Tuning](https://www.youtube.com/playlist?list=PLrSHiQgy4VjGMzyXsSlvN-TjPaqFFsAGP) are all necessary tools to understand and leverage (a minimal RAG sketch follows the list below). Prototyping includes:
- Building RAG Applications
- Fine-Tuning LLMs & Embedding Models
- Building Agent/Reasoning Applications
- LLM Application Deployment
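
To make the RAG piece of prototyping concrete, here is a minimal, dependency-free sketch of the retrieve-then-generate loop. The toy corpus, the word-overlap `retrieve` function, and the `call_llm` stub are illustrative assumptions, not the stack used in the course:

```python
# Minimal RAG sketch: retrieve relevant context, then prompt an LLM with it.
# The corpus, the word-overlap retriever, and the call_llm stub are toy
# placeholders standing in for a real embedding model, vector index, and LLM.
from collections import Counter

CORPUS = [
    "RAG augments an LLM prompt with documents retrieved from a knowledge base.",
    "Fine-tuning adapts a pretrained model's weights to a specific task or domain.",
    "LLM Ops covers monitoring, evaluation, and scaling of LLM apps in production.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared lowercase words."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g., a hosted chat-completions API)."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

def rag_answer(question: str) -> str:
    """Assemble retrieved context into a prompt and generate an answer."""
    context = "\n".join(retrieve(question, k=2))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(rag_answer("What does RAG add to an LLM prompt?"))
```

The point of the sketch is the shape of the pipeline, retrieval feeding generation, which stays the same once the toy pieces are swapped for real embedding models and LLM APIs.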

When *productionizing* LLM application prototypes, there are many important aspects to ensuring helpful, harmless, honest, reliable, and scalable solutions for your customers or stakeholders (a small evaluation sketch follows this list). Productionizing includes:
- RAG Evaluation and Improvement
- Monitoring and Visibility Tooling
- Efficient Inference and Serving
- Scaling LLM Applications
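
To give a flavor of RAG evaluation, here is a minimal sketch that scores generated answers against reference answers with a simple token-overlap F1. The eval set and metric below are illustrative assumptions; production evaluation relies on richer metrics and monitoring tooling.

```python
# Minimal sketch of offline RAG evaluation, assuming a hand-built eval set of
# question / prediction / reference triples (the rows below are made up).
# Real evaluation suites use richer metrics (faithfulness, context relevance,
# LLM-as-judge, etc.); this only illustrates the measure-then-improve loop.

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of unique-token precision and recall."""
    pred, ref = set(prediction.lower().split()), set(reference.lower().split())
    common = len(pred & ref)
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

eval_set = [
    {"question": "What is RAG?",
     "prediction": "RAG retrieves documents and adds them to the prompt.",
     "reference": "RAG augments the prompt with retrieved documents."},
    {"question": "What is fine-tuning?",
     "prediction": "Fine-tuning updates model weights on task-specific data.",
     "reference": "Fine-tuning adapts a pretrained model to a specific task."},
]

scores = [token_f1(row["prediction"], row["reference"]) for row in eval_set]
print(f"mean token F1 over {len(scores)} examples: {sum(scores) / len(scores):.2f}")
```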

This bootcamp builds on our two previous courses, [LLM Engineering](https://maven.com/aimakerspace/llm-engineering) and [LLM Operations](https://maven.com/aimakerspace/llmops) πŸ‘‡

- Large Language Model Engineering (LLM Engineering) refers to the emerging best practices and tools for training, fine-tuning, and aligning LLMs before production deployment. LLM Engineering is the counterpart to ML Engineering (MLE), focusing on LLM-specific techniques like prompt engineering, reinforcement learning, quantization, and other methods required by this new paradigm.

- Large Language Model Ops (LLM Ops, or LLMOps, as used by [WandB](https://docs.wandb.ai/guides/prompts) and [a16z](https://a16z.com/emerging-architectures-for-llm-applications/)) refers to the emerging best practices, tooling, and improvement processes used to manage production LLM applications throughout the AI product lifecycle. LLM Ops is a subset of Machine Learning Operations (MLOps) that focuses on the LLM-specific infrastructure and ops capabilities required to build, deploy, monitor, and scale complex LLM applications in production environments.

# πŸ† **Grading and Certification**

To become **AI-Makerspace Certified**, which will open you up to additional opportunities for full and part-time work within our community and network, you must:

1. Complete all project assignments.
2. Complete a project and present during Demo Day.
3. Receive at least an 85% total grade in the course.

If you do not complete all assignments, participate in Demo Day, or maintain a high-quality standard of work, you may still be eligible for a *certificate of completion*, provided you miss no more than 2 live sessions.

# πŸ“š About

This GitHub repository is your gateway to mastering the art of AI Engineering. ***All assignments for the course will be released here for your building, shipping, and sharing adventures!***

# πŸ™ Contributions

We believe in the power of collaboration. Contributions, ideas, and feedback are highly encouraged! Let's build the ultimate resource for AI Engineering together.

Please feel free to reach out with any questions or suggestions.

Happy coding! πŸš€πŸš€πŸš€