Projects in Awesome Lists tagged with machine-learning-operations
A curated list of projects in awesome lists tagged with machine-learning-operations.
https://github.com/seldonio/seldon-core
An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
aiops deployment kubernetes machine-learning machine-learning-operations mlops production-machine-learning serving
Last synced: 14 May 2025
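Seldon Core models are commonly written as plain Python classes exposing a predict method, which the framework wraps into an HTTP/gRPC microservice. The sketch below follows that convention; the model file name and exact signature are illustrative, and packaging details vary by Seldon version.

```python
# A minimal sketch of the Seldon Core Python-wrapper pattern: a plain class
# with a predict method, loading a pre-trained model with joblib (the model
# path is illustrative). Treat this as an outline, not the definitive API.
import joblib


class SklearnServer:
    def __init__(self):
        # Load a model artifact shipped alongside the class (e.g. in the image).
        self.model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # Seldon invokes predict with the request payload converted to an array.
        return self.model.predict_proba(X)
```

Such a class is typically packaged into a container (for example via the seldon-core-microservice entrypoint or s2i) and then referenced from a SeldonDeployment resource on Kubernetes.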
https://github.com/IFCA-Advanced-Computing/frouros
Frouros: an open-source Python library for drift detection in machine learning systems.
change-detection concept-drift data-drift dataset-drift dataset-shift distribution-shift drift-detection machine-learning machine-learning-engineering machine-learning-operations mle mlops python statistics
Last synced: 11 May 2025
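To show the kind of check Frouros packages up, the sketch below runs a two-sample Kolmogorov-Smirnov test between reference and production data using scipy; it only illustrates the data-drift idea and does not use Frouros's own detector classes.

```python
# A conceptual sketch of data-drift detection: compare a reference feature
# distribution against production data with a two-sample KS test. This uses
# scipy directly and is NOT Frouros's API; it illustrates the underlying idea.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time data
production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted live data

statistic, p_value = ks_2samp(reference, production)
if p_value < 0.01:
    print(f"Drift suspected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```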
https://github.com/openlayer-ai/examples-gallery
Sample notebooks that use the Openlayer Python API
ai data-centric machine-learning machine-learning-operations ml ml-infrastructure mlops model-deployment model-explainability synthetic-dataset-generation tensorflow unbox
Last synced: 22 Apr 2025
https://github.com/juliaai/mljflow.jl
Connecting MLJ and MLflow
data-science julia machine-learning machine-learning-operations machine-learning-ops mlflow mlj mlops statistics
Last synced: 10 Apr 2025
https://github.com/openlayer-ai/openlayer-python
The official Python library for Openlayer, the Continuous Model Improvement Platform for AI. 📈
ai data-centric machine-learning machine-learning-operations ml ml-infrastructure mlops model-deployment model-explainability openlayer synthetic-dataset-generation tensorflow unbox
Last synced: 13 Apr 2025
https://github.com/ksm26/efficiently-serving-llms
Learn the ins and outs of efficiently serving Large Language Models (LLMs). Dive into optimization techniques, including KV caching and Low-Rank Adapters (LoRA), and gain hands-on experience with Predibase's LoRAX inference-serving framework.
batch-processing deep-learning-techniques inference-optimization large-scale-deployment machine-learning-operations model-acceleration model-inference-service model-serving optimization-techniques performance-enhancement scalability-strategies server-optimization serving-infrastructure text-generation
Last synced: 28 Mar 2025
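For context on one of the techniques the course covers, the sketch below is a toy numpy illustration of KV caching: per-token keys and values are computed once and reused across decoding steps, so each new token only attends over the cache. Shapes and projections are stand-ins, not a real LLM.

```python
# A minimal sketch of KV caching in autoregressive decoding: past keys/values
# are stored so each new token needs only its own projections plus one
# attention pass over the cache. Toy numpy stand-ins, not a real model.
import numpy as np

d_model = 8
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))

k_cache, v_cache = [], []  # grows by one row per generated token


def decode_step(x_new):
    """x_new: embedding of the single newest token, shape (d_model,)."""
    q = x_new @ W_q
    k_cache.append(x_new @ W_k)   # compute K/V once, reuse on later steps
    v_cache.append(x_new @ W_v)
    K = np.stack(k_cache)         # (seq_len, d_model)
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(d_model)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V            # attention output for the new token only


for _ in range(5):
    out = decode_step(rng.standard_normal(d_model))
print("last attention output:", out)
```

Without the cache, every decoding step would recompute keys and values for the entire prefix, which is the quadratic cost that KV caching avoids.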