# OptiMate

**[Legacy]**

This repository is now in a legacy phase and is no longer actively maintained. Although the source code is still available in the Git history, there will be no additional updates or official support.

**[About Nebuly]**

Our team is fully committed to creating the best user-experience platform for LLMs, so that companies can understand at scale how users behave when interacting with their LLM-based products.
- To learn how to get started, visit our [official documentation](https://docs.nebuly.com/welcome/overview)
- If you need enterprise support, please contact us [here](https://www.nebuly.com/nebuly-book-a-demo)

**[About optimate]**

We have open-sourced a couple of internal projects to the community, but we are not currently maintaining them. Optimate is a collection of libraries, developed by Nebuly, designed to help you optimize your AI models; it is **not actively maintained**.

The tools available to assist with optimization are:

✅ [Speedster](https://github.com/nebuly-ai/optimate/tree/main/optimization/speedster): reduce inference costs by leveraging SOTA optimization techniques that best couple your AI models with the underlying hardware (GPUs and CPUs)

✅ [Nos](https://github.com/nebuly-ai/nos): reduce infrastructure costs by leveraging real-time dynamic partitioning and elastic quotas to maximize the utilization of your Kubernetes GPU cluster

✅ [ChatLLaMA](https://github.com/nebuly-ai/optimate/tree/main/optimization/chatllama): reduce hardware and data costs by leveraging fine-tuning optimization techniques and RLHF alignment