https://github.com/mrqadeer/huggingface_togetherai
A powerful LangChain wrapper for integrating the Hugging Face API with Together AI, enabling seamless access to cutting-edge DeepSeek-R1 and Meta LLaMA models (e.g., meta-llama/Llama-3.3-70B-Instruct-Turbo).
- Host: GitHub
- URL: https://github.com/mrqadeer/huggingface_togetherai
- Owner: mrqadeer
- License: MIT
- Created: 2025-02-12T11:59:37.000Z (8 months ago)
- Default Branch: master
- Last Pushed: 2025-02-12T14:08:09.000Z (8 months ago)
- Last Synced: 2025-02-12T15:27:34.893Z (8 months ago)
- Topics: deepseek-r1, huggingface, langchain-python, meta-llama, togetherai
- Language: Python
- Homepage: https://pypi.org/project/huggingface-togetherai/0.1.0/
- Size: 61.5 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
# 🚀 Huggingface-TogetherAI LangChain Wrapper
A **LangChain** integration for **DeepSeek-R1** and **Meta Llama-3.3-70B-Instruct-Turbo** models via **Hugging Face's Inference API**, enabling seamless interaction with state-of-the-art language models.
## ✨ Features
- 🚀 **Custom LangChain Chat Model** – Optimized for Hugging Face + Together AI.
- ⚡ **Sync & Async Support** – Run queries in synchronous or asynchronous mode.
- 🌊 **Streaming Capabilities** – Supports token streaming for real-time responses.
- 🛠️ **Tool Calling & Structured Output** – Enables function calling and JSON outputs.
- 🔧 **Configurable Model Parameters** – Fine-tune temperature, max tokens, etc.

## 📦 Installation
```bash
pip install huggingface-togetherai
```

## 🚀 Quick Start
```python
from huggingface_togetherai import ChatHuggingFaceTogetherAI

hf_token = "your_huggingface_token"

hf_llm = ChatHuggingFaceTogetherAI(
    model="deepseek-ai/DeepSeek-R1",
    hf_token=hf_token,
)

response = hf_llm.invoke("Hi!")
print(response)
```

## 🤔 Why Use Huggingface-TogetherAI?
In **LangChain**, the `HuggingFaceEndpoint` class is typically used for Hugging Face models:
```python
from langchain_huggingface import HuggingFaceEndpoint
from langchain_huggingface.chat_models import ChatHuggingFace

hf_endpoint = HuggingFaceEndpoint(
    repo_id="deepseek-ai/DeepSeek-R1",
    task="text-generation",
    huggingfacehub_api_token=hf_token,
)

langchain_llm = ChatHuggingFace(llm=hf_endpoint)
langchain_llm.invoke("Hello")
langchain_llm.invoke("Hello")
```

However, this results in an error:
```
The model deepseek-ai/DeepSeek-R1 is too large to be loaded automatically (688GB > 10GB).
```

### ✅ The Better Alternative: Huggingface-TogetherAI
With **Huggingface-TogetherAI**, you can seamlessly use large models without running into memory issues:
```python
from huggingface_togetherai import ChatHuggingFaceTogetherAI

hf_llm = ChatHuggingFaceTogetherAI(
    model="deepseek-ai/DeepSeek-R1",
    hf_token=hf_token,
    # other_params...
)

response = hf_llm.invoke("Hello")
print(response) # Output: '\n\n\n\nHello! How can I assist you today? 😊'
```

### 🎉 Good News!
✅ You can leverage **all [LangChain](https://www.langchain.com/) functionalities** available to standard chat models with this package.
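For example, since the features above advertise sync/async support and token streaming, the standard `stream` and `ainvoke` methods from LangChain's `BaseChatModel` interface should apply. The sketch below assumes that interface (the method names come from LangChain's chat-model API, not from this package's own docs) and requires a valid Hugging Face token:

```python
import asyncio

from huggingface_togetherai import ChatHuggingFaceTogetherAI

hf_llm = ChatHuggingFaceTogetherAI(
    model="deepseek-ai/DeepSeek-R1",
    hf_token="your_huggingface_token",
)

# Streaming: iterate over chunks as tokens arrive (BaseChatModel.stream).
for chunk in hf_llm.stream("Write a haiku about autumn."):
    print(chunk.content, end="", flush=True)

# Async: await a single completion (BaseChatModel.ainvoke).
async def main() -> None:
    response = await hf_llm.ainvoke("Hello")
    print(response.content)

asyncio.run(main())
```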
## 📜 License
This project is licensed under the **MIT License**. See the [LICENSE](LICENSE) file for more details.