# MindNLP

**Run HuggingFace Models on MindSpore with Zero Code Changes**

The easiest way to use 200,000+ HuggingFace models on Ascend NPU, GPU, and CPU

Quick Start • Features • Installation • Why MindNLP • Documentation

---
## What is MindNLP?
**MindNLP** bridges the gap between HuggingFace's massive model ecosystem and MindSpore's hardware acceleration. With just `import mindnlp`, you can run any HuggingFace model on **Ascend NPU**, **NVIDIA GPU**, or **CPU** - no code changes required.
```python
import mindnlp # That's it! HuggingFace now runs on MindSpore
from transformers import pipeline
pipe = pipeline("text-generation", model="Qwen/Qwen2-0.5B")
print(pipe("Hello, I am")[0]["generated_text"])
```
## Quick Start
### Text Generation with LLMs
```python
import mindspore
import mindnlp
from transformers import pipeline
pipe = pipeline(
    "text-generation",
    model="Qwen/Qwen3-8B",
    ms_dtype=mindspore.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Write a haiku about coding"}]
print(pipe(messages, max_new_tokens=100)[0]["generated_text"][-1]["content"])
```
### Image Generation with Stable Diffusion
```python
import mindspore
import mindnlp
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    ms_dtype=mindspore.float16,
)
image = pipe("A sunset over mountains, oil painting style").images[0]
image.save("sunset.png")
```
### BERT for Text Classification
```python
import mindnlp
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
inputs = tokenizer("MindNLP is awesome!", return_tensors="pt")  # "pt" tensors are backed by mindtorch
outputs = model(**inputs)
```
## Features
### Full HuggingFace Compatibility
- **200,000+ models** from HuggingFace Hub
- **Transformers** - All model architectures
- **Diffusers** - Stable Diffusion, SDXL, ControlNet
- **Zero code changes** - Just `import mindnlp`
### Hardware Acceleration
- **Ascend NPU** - Full support for Huawei AI chips
- **NVIDIA GPU** - CUDA acceleration
- **CPU** - Optimized CPU execution
- **Multi-device** - Automatic device placement
### Advanced Capabilities
- **Mixed precision** - FP16/BF16 training & inference
- **Quantization** - INT8/INT4 with BitsAndBytes
- **Distributed** - Multi-GPU/NPU training
- **PEFT/LoRA** - Parameter-efficient fine-tuning
### Easy Integration
- **PyTorch-compatible API** via mindtorch
- **Safetensors** support for fast loading
- **Model Hub mirrors** for faster downloads
- **Comprehensive documentation**
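To make the Hub-mirror point concrete: `huggingface_hub` reads the `HF_ENDPOINT` environment variable, so pointing it at a mirror before any Hub-backed library is imported redirects model downloads. This is a minimal sketch; the mirror URL below is a commonly cited example, not an endorsement — substitute whatever mirror your environment uses.

```python
import os

# Redirect Hub downloads to a mirror. huggingface_hub picks up HF_ENDPOINT
# when its client is created, so set it before importing mindnlp/transformers.
# The URL is a placeholder example; use your preferred mirror.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

print(os.environ["HF_ENDPOINT"])  # -> https://hf-mirror.com
```

After this, `from_pretrained(...)` calls resolve against the mirror instead of the main Hub.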
## Mindtorch NPU Debugging
Mindtorch NPU ops are async by default. Use `torch.npu.synchronize()` when you need to block on results.
For debugging, set `ACL_LAUNCH_BLOCKING=1` to force per-op synchronization.
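A minimal sketch of that debugging workflow follows. Only the environment-variable step is executed here; the synchronization call is shown in comments because it needs an actual Ascend device. One assumption worth flagging: the runtime reads `ACL_LAUNCH_BLOCKING` at startup, so it must be set before the framework is imported.

```python
import os

# Force per-op (blocking) kernel launches so a failing NPU op raises at the
# op itself rather than at a later synchronization point. Set this before
# importing mindnlp / the framework.
os.environ["ACL_LAUNCH_BLOCKING"] = "1"

# In a normal (async) run on an Ascend device you would instead block
# explicitly wherever you need completed results, e.g.:
#
#   import mindnlp
#   import torch               # the mindtorch shim
#   out = model(**inputs)      # ops are queued asynchronously on the NPU
#   torch.npu.synchronize()    # wait for all queued NPU work to finish
#
print(os.environ["ACL_LAUNCH_BLOCKING"])  # -> 1
```

Leave `ACL_LAUNCH_BLOCKING` unset in production runs: blocking launches serialize the device and cost throughput.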
## Installation
```bash
# From PyPI (recommended)
pip install mindnlp

# From source (latest features)
pip install git+https://github.com/mindspore-lab/mindnlp.git
```
### Version Compatibility
| MindNLP | MindSpore | Python |
|---------|-----------|--------|
| 0.6.x | ≥2.7.1 | 3.10-3.11 |
| 0.5.x | 2.5.0-2.7.0 | 3.10-3.11 |
| 0.4.x | 2.2.x-2.5.0 | 3.9-3.11 |
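The table can double as a quick preflight check. The helper below is hypothetical (not part of MindNLP) and encodes only the 0.6.x row; adjust the bounds for other releases.

```python
import sys

# Hypothetical helper encoding the 0.6.x row of the compatibility table:
# MindNLP 0.6.x supports Python 3.10-3.11.
def python_ok_for_mindnlp_06(major: int, minor: int) -> bool:
    return major == 3 and 10 <= minor <= 11

print(python_ok_for_mindnlp_06(3, 10))  # -> True  (supported)
print(python_ok_for_mindnlp_06(3, 9))   # -> False (too old for 0.6.x)
print(python_ok_for_mindnlp_06(*sys.version_info[:2]))  # check the current interpreter
```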
## Why MindNLP?
| Feature | MindNLP | PyTorch + HF | TensorFlow + HF |
|---------|---------|--------------|-----------------|
| HuggingFace Models | ✅ 200K+ | ✅ 200K+ | ⚠️ Limited |
| Ascend NPU Support | ✅ Native | ❌ | ❌ |
| Zero Code Migration | ✅ | - | ❌ |
| Unified API | ✅ | ✅ | ❌ |
| Chinese Model Support | ✅ Excellent | ✅ Good | ⚠️ Limited |
### Key Advantages
1. **Instant Migration**: Your existing HuggingFace code works immediately
2. **Ascend Optimization**: Native support for Huawei NPU hardware
3. **Production Ready**: Battle-tested in enterprise deployments
4. **Active Community**: Regular updates and responsive support
## Supported Models
MindNLP supports **all models** from HuggingFace Transformers and Diffusers. Here are some popular ones:
| Category | Models |
|----------|--------|
| **LLMs** | Qwen, Llama, ChatGLM, Mistral, Phi, Gemma, BLOOM, Falcon |
| **Vision** | ViT, CLIP, Swin, ConvNeXt, SAM, BLIP |
| **Audio** | Whisper, Wav2Vec2, HuBERT, MusicGen |
| **Diffusion** | Stable Diffusion, SDXL, ControlNet |
| **Multimodal** | LLaVA, Qwen-VL, ALIGN |
[View all supported models](https://mindnlp.cqu.ai/supported_models)
## Resources

- [Documentation](https://mindnlp.cqu.ai)
- [Quick Start Guide](https://mindnlp.cqu.ai/quick_start)
- [Tutorials](https://mindnlp.cqu.ai/tutorials/quick_start)
- [GitHub Discussions](https://github.com/mindspore-lab/mindnlp/discussions)
- [Issue Tracker](https://github.com/mindspore-lab/mindnlp/issues)
## Contributing
We welcome contributions! See our [Contributing Guide](https://mindnlp.cqu.ai/contribute) for details.
```bash
# Clone and install for development
git clone https://github.com/mindspore-lab/mindnlp.git
cd mindnlp
pip install -e ".[dev]"
```
## Community
Join the **MindSpore NLP SIG** (Special Interest Group) for discussions, events, and collaboration.
## Star History
**If you find MindNLP useful, please consider giving it a star - it helps the project grow!**
## License
MindNLP is released under the [Apache 2.0 License](LICENSE).
## Citation
```bibtex
@misc{mindnlp2022,
  title={MindNLP: Easy-to-use and High-performance NLP and LLM Framework Based on MindSpore},
  author={MindNLP Contributors},
  howpublished={\url{https://github.com/mindspore-lab/mindnlp}},
  year={2022}
}
```
---
Made with ❤️ by the MindSpore Lab team