{"id":15034898,"url":"https://github.com/nvidia/transformerengine","last_synced_at":"2026-02-24T02:09:58.886Z","repository":{"id":60787210,"uuid":"539057023","full_name":"NVIDIA/TransformerEngine","owner":"NVIDIA","description":"A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Blackwell GPUs, to provide better performance with lower memory utilization in both training and inference.","archived":false,"fork":false,"pushed_at":"2025-05-11T00:34:35.000Z","size":10289,"stargazers_count":2400,"open_issues_count":254,"forks_count":417,"subscribers_count":35,"default_branch":"main","last_synced_at":"2025-05-11T01:27:49.627Z","etag":null,"topics":["cuda","deep-learning","fp8","gpu","jax","machine-learning","python","pytorch"],"latest_commit_sha":null,"homepage":"https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/NVIDIA.png","metadata":{"files":{"readme":"README.rst","changelog":null,"contributing":"CONTRIBUTING.rst","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2022-09-20T15:20:26.000Z","updated_at":"2025-05-11T00:34:39.000Z","dependencies_parsed_at":"2023-02-15T11:31:41.294Z","dependency_job_id":"d54d8789-a4da-468f-9231-b974a77d48a2","html_url":"https://github.com/NVIDIA/TransformerEngine","commit_stats":null,"previous_names":[],"tags_count":36,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA%2FTransformerEngine","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA%2FTransformerEngine/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA%2FTransformerEngine/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA%2FTransformerEngine/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/NVIDIA","download_url":"https://codeload.github.com/NVIDIA/TransformerEngine/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254044020,"owners_count":22005058,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cuda","deep-learning","fp8","gpu","jax","machine-learning","python","pytorch"],"created_at":"2024-09-24T20:26:44.441Z","updated_at":"2026-02-24T02:09:58.878Z","avatar_url":"https://github.com/NVIDIA.png","language":"Python","readme":"..\n    Copyright (c) 2022-2026, NVIDIA CORPORATION \u0026 AFFILIATES. 

What is Transformer Engine?
===========================
.. overview-begin-marker-do-not-remove

Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including
using 8-bit floating point (FP8) precision on Hopper, Ada, and Blackwell GPUs, to provide better
performance with lower memory utilization in both training and inference. TE provides a collection
of highly optimized building blocks for popular Transformer architectures and an automatic mixed
precision-like API that can be used seamlessly with your framework-specific code. TE also includes a
framework-agnostic C++ API that can be integrated with other deep learning libraries to enable FP8
support for Transformers.

As the number of parameters in Transformer models continues to grow, training and inference for
architectures such as BERT, GPT, and T5 become increasingly memory- and compute-intensive. Most deep
learning frameworks train with FP32 by default, even though full FP32 precision is not essential to
achieve full accuracy for many models. Mixed-precision training, which combines single-precision
(FP32) with a lower-precision format (e.g. FP16) when training a model, yields significant speedups
with minimal differences in accuracy compared to FP32 training. The Hopper GPU architecture
introduced FP8 precision, which offers improved performance over FP16 with no degradation in
accuracy. Although all major deep learning frameworks support FP16, FP8 support is not available
natively in frameworks today.

TE addresses the problem of FP8 support by providing APIs that integrate with popular Large Language
Model (LLM) libraries. It provides a Python API consisting of modules to easily build a Transformer
layer, as well as a framework-agnostic C++ library including the structs and kernels needed for FP8
support. Modules provided by TE internally maintain the scaling factors and other values needed for
FP8 training, greatly simplifying mixed precision training for users.

Highlights
==========

* Easy-to-use modules for building Transformer layers with FP8 support
* Optimizations (e.g. fused kernels) for Transformer models
* Support for FP8 on NVIDIA Hopper, Ada, and Blackwell GPUs
* Support for optimizations across all precisions (FP16, BF16) on NVIDIA Ampere GPU architecture generations and later

Examples
========

PyTorch
^^^^^^^

.. code-block:: python

  import torch
  import transformer_engine.pytorch as te
  from transformer_engine.common import recipe

  # Set dimensions.
  in_features = 768
  out_features = 3072
  hidden_size = 2048

  # Initialize model and inputs.
  model = te.Linear(in_features, out_features, bias=True)
  inp = torch.randn(hidden_size, in_features, device="cuda")

  # Create an FP8 recipe. Note: All input args are optional.
  fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

  # Enable autocasting for the forward pass
  with te.autocast(enabled=True, recipe=fp8_recipe):
      out = model(inp)

  loss = out.sum()
  loss.backward()
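
Beyond individual modules such as ``Linear``, TE also ships a full ``TransformerLayer`` building
block (self-attention plus MLP with fused kernels). Below is a minimal sketch following the same
pattern as the example above; the layer dimensions are illustrative, and the module's defaults
(sequence-first input, parameters initialized on the GPU) are assumed:

.. code-block:: python

  import torch
  import transformer_engine.pytorch as te
  from transformer_engine.common import recipe

  # Illustrative dimensions; TE layers expect (seq, batch, hidden) input by default.
  hidden_size, ffn_hidden_size, num_heads = 1024, 4096, 16
  seq_len, batch_size = 128, 4

  # A complete Transformer block: attention + MLP.
  layer = te.TransformerLayer(hidden_size, ffn_hidden_size, num_heads)
  inp = torch.randn(seq_len, batch_size, hidden_size, device="cuda")

  fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

  # Same autocast pattern as above: FP8 for the forward (and backward) pass.
  with te.autocast(enabled=True, recipe=fp8_recipe):
      out = layer(inp)

  out.sum().backward()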

JAX
^^^

Flax
~~~~

.. code-block:: python

  import flax
  import jax
  import jax.numpy as jnp
  import transformer_engine.jax as te
  import transformer_engine.jax.flax as te_flax
  from transformer_engine.common import recipe

  BATCH = 32
  SEQLEN = 128
  HIDDEN = 1024

  # Initialize RNG and inputs.
  rng = jax.random.PRNGKey(0)
  init_rng, data_rng = jax.random.split(rng)
  inp = jax.random.normal(data_rng, [BATCH, SEQLEN, HIDDEN], jnp.float32)

  # Create an FP8 recipe. Note: All input args are optional.
  fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

  # Enable autocasting for the forward pass
  with te.autocast(enabled=True, recipe=fp8_recipe):
      model = te_flax.DenseGeneral(features=HIDDEN)

      def loss_fn(params, other_vars, inp):
          out = model.apply({'params': params, **other_vars}, inp)
          return jnp.mean(out)

      # Initialize models.
      variables = model.init(init_rng, inp)
      other_variables, params = flax.core.pop(variables, 'params')

      # Construct the forward and backward function
      fwd_bwd_fn = jax.value_and_grad(loss_fn, argnums=(0, 1))

      for _ in range(10):
          loss, (param_grads, other_grads) = fwd_bwd_fn(params, other_variables, inp)
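
Note that the two examples above use different FP8 formats. A minimal sketch of the distinction
(variable names are illustrative):

.. code-block:: python

  from transformer_engine.common import recipe

  # E4M3 keeps more mantissa bits (higher precision); E5M2 trades mantissa
  # for a wider dynamic range, which suits gradients.
  # HYBRID uses E4M3 in the forward pass and E5M2 in the backward pass.
  hybrid_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)
  e4m3_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.E4M3)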

For a more comprehensive tutorial, check out our `Getting Started Guide <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/getting_started.html>`_.

.. overview-end-marker-do-not-remove

Installation
============

System Requirements
^^^^^^^^^^^^^^^^^^^

* **Hardware:** Blackwell, Hopper, Grace Hopper/Blackwell, Ada, Ampere

* **OS:** Linux (official), WSL2 (limited support)

* **Software:**

  * CUDA: 12.1+ (Hopper/Ada/Ampere), 12.8+ (Blackwell) with compatible NVIDIA drivers
  * cuDNN: 9.3+
  * Compiler: GCC 9+ or Clang 10+ with C++17 support
  * Python: 3.12 recommended

* **Source Build Requirements:** CMake 3.18+, Ninja, Git 2.17+, pybind11 2.6.0+

* **Notes:** FP8 features require Compute Capability 8.9+ (Ada/Hopper/Blackwell); a quick way to verify this is shown after this list
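
The compute-capability requirement above can be checked with PyTorch alone (a minimal sketch, not
a TE API):

.. code-block:: python

    import torch

    # FP8 needs compute capability 8.9+: Ada is 8.9, Hopper 9.0, Blackwell 10.0+.
    major, minor = torch.cuda.get_device_capability()
    supported = (major, minor) >= (8, 9)
    print(f"Compute capability {major}.{minor}: FP8 {'supported' if supported else 'not supported'}")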

Installation Methods
^^^^^^^^^^^^^^^^^^^^

Docker (Recommended)
~~~~~~~~~~~~~~~~~~~~

The quickest way to get started with Transformer Engine is by using Docker images on the
`NVIDIA GPU Cloud (NGC) Catalog <https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch>`_.

For example, to use the NGC PyTorch container interactively,

.. code-block:: bash

    docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:26.01-py3

and to use the NGC JAX container interactively,

.. code-block:: bash

    docker run --gpus all -it --rm nvcr.io/nvidia/jax:26.01-py3

where 26.01 is the container version, corresponding to the January 2026 release.

We recommend updating to the latest NGC container available here:

* https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
* https://catalog.ngc.nvidia.com/orgs/nvidia/containers/jax

If you run any examples, please ensure you are using a matching version of Transformer Engine. TE is
pre-built and packaged inside the containers, with examples available at ``/opt/transformerengine``
or ``/opt/transformer-engine``. If you would like to use examples from the TE main branch and are
running into import errors, please try the latest pip package or building from source, although NGC
containers are recommended for ease of use for most users.

**Benefits of using NGC containers:**

* All dependencies pre-installed with compatible versions and optimized configurations
* NGC PyTorch 23.08+ containers include FlashAttention-2

pip Installation
~~~~~~~~~~~~~~~~

**Prerequisites for pip installation:**

* A compatible C++ compiler
* CUDA Toolkit with cuDNN and NVCC (NVIDIA CUDA Compiler) if installing from source

To install the latest stable version with pip:

.. code-block:: bash

    # For PyTorch integration
    pip install --no-build-isolation transformer_engine[pytorch]

    # For JAX integration
    pip install --no-build-isolation transformer_engine[jax]

    # For both frameworks
    pip install --no-build-isolation transformer_engine[pytorch,jax]

Alternatively, install directly from the GitHub repository:

.. code-block:: bash

    pip install --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@stable

When installing from GitHub, you can explicitly specify frameworks using the environment variable:

.. code-block:: bash

    NVTE_FRAMEWORK=pytorch,jax pip install --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@stable

conda Installation
~~~~~~~~~~~~~~~~~~

To install the latest stable version with conda from conda-forge:

.. code-block:: bash

    # For PyTorch integration
    conda install -c conda-forge transformer-engine-torch

    # JAX integration (coming soon)

Source Installation
~~~~~~~~~~~~~~~~~~~

`See the installation guide <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html#installation-from-source>`_.

Environment Variables
^^^^^^^^^^^^^^^^^^^^^

These environment variables can be set before installation to customize the build process:

* **CUDA_PATH**: Path to the CUDA installation
* **CUDNN_PATH**: Path to the cuDNN installation
* **CXX**: Path to the C++ compiler
* **NVTE_FRAMEWORK**: Comma-separated list of frameworks to build for (e.g., ``pytorch,jax``)
* **MAX_JOBS**: Limit the number of parallel build jobs (default varies by system)
* **NVTE_BUILD_THREADS_PER_JOB**: Control the number of threads per build job
* **NVTE_CUDA_ARCHS**: Semicolon-separated list of CUDA compute architectures to compile for (e.g., ``80;90`` for A100 and H100). If not set, this is determined automatically based on the CUDA version. Setting it can significantly reduce build time and binary size.

Compiling with FlashAttention
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Transformer Engine supports both FlashAttention-2 and FlashAttention-3 in PyTorch for improved performance. FlashAttention-3 was added in release v1.11 and is prioritized over FlashAttention-2 when both are present in the environment.

You can verify which FlashAttention version is being used by setting these environment variables:

.. code-block:: bash

    NVTE_DEBUG=1 NVTE_DEBUG_LEVEL=1 python your_script.py

It is a known issue that FlashAttention-2 compilation is resource-intensive and requires a large amount of RAM (see `this bug report <https://github.com/Dao-AILab/flash-attention/issues/358>`_), which may lead to out-of-memory errors during the installation of Transformer Engine. Please try setting **MAX_JOBS=1** in the environment to circumvent the issue.

.. troubleshooting-begin-marker-do-not-remove

Troubleshooting
^^^^^^^^^^^^^^^

**Common Issues and Solutions:**

1. **ABI Compatibility Issues:**

   * **Symptoms:** ``ImportError`` with undefined symbols when importing transformer_engine
   * **Solution:** Ensure PyTorch and Transformer Engine are built with the same C++ ABI setting (see the sketch after this list). Rebuild PyTorch from source with a matching ABI if necessary.
   * **Context:** If you're using PyTorch built with a different C++ ABI than your system's default, you may encounter these undefined symbol errors. This is particularly common with pip-installed PyTorch outside of containers.

2. **Missing Headers or Libraries:**

   * **Symptoms:** CMake errors about missing headers (``cudnn.h``, ``cublas_v2.h``, ``filesystem``, etc.)
   * **Solution:** Install the missing development packages or set environment variables to point to the correct locations:

     .. code-block:: bash

         export CUDA_PATH=/path/to/cuda
         export CUDNN_PATH=/path/to/cudnn

   * If CMake can't find a C++ compiler, set the ``CXX`` environment variable.
   * Ensure all paths are correctly set before installation.

3. **Build Resource Issues:**

   * **Symptoms:** Compilation hangs, system freezes, or out-of-memory errors
   * **Solution:** Limit parallel builds:

     .. code-block:: bash

         MAX_JOBS=1 NVTE_BUILD_THREADS_PER_JOB=1 pip install ...

4. **Verbose Build Logging:**

   * For detailed build logs to help diagnose issues:

     .. code-block:: bash

         cd transformer_engine
         pip install -v -v -v --no-build-isolation .
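
For the ABI issue in item 1 above, you can inspect which C++ ABI your PyTorch build uses (a
minimal sketch; a Transformer Engine build must match this setting):

.. code-block:: python

    import torch

    # True means PyTorch was built with the C++11 ABI (_GLIBCXX_USE_CXX11_ABI=1);
    # Transformer Engine must be compiled with the same setting to link correctly.
    print("Built with C++11 ABI:", torch.compiled_with_cxx11_abi())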

**Problems using UV or Virtual Environments:**

1. **Import Error:**

   * **Symptoms:** Cannot import ``transformer_engine``
   * **Solution:** Ensure your UV environment is active and that you have used ``uv pip install --no-build-isolation <te_pypi_package_or_wheel_or_source_dir>`` instead of a regular pip install into your system environment.

2. **cuDNN Sublibrary Loading Failed:**

   * **Symptoms:** Errors at runtime with ``CUDNN_STATUS_SUBLIBRARY_LOADING_FAILED``
   * **Solution:** This can occur when TE is built against the container's system installation of cuDNN, but packages inside the virtual environment pull in their own ``nvidia-cudnn-cu12``/``nvidia-cudnn-cu13`` pip packages. To resolve this, when building TE from source, set the following environment variables to point to the cuDNN in your virtual environment:

     .. code-block:: bash

        export CUDNN_PATH=$(pwd)/.venv/lib/python3.12/site-packages/nvidia/cudnn
        export CUDNN_HOME=$CUDNN_PATH
        export LD_LIBRARY_PATH=$CUDNN_PATH/lib:$LD_LIBRARY_PATH

3. **Building Wheels:**

   * **Symptoms:** Regular TE installs work correctly, but UV wheel builds fail at runtime.
   * **Solution:** Ensure that ``uv build --wheel --no-build-isolation -v`` is used during the wheel build as well as during the pip installation of the wheel. Use ``-v`` for verbose output to verify that TE is not pulling in a version of PyTorch or JAX that differs from the UV environment's version.

**JAX-specific Common Issues and Solutions:**

1. **FFI Issues:**

   * **Symptoms:** ``No registered implementation for custom call to <some_te_ffi> for platform CUDA``
   * **Solution:** Ensure ``--no-build-isolation`` is used during installation. If pre-building wheels, ensure that the wheel is both built and installed with ``--no-build-isolation``. See "Problems using UV or Virtual Environments" above if using UV.

.. troubleshooting-end-marker-do-not-remove

Breaking Changes
================

v1.7: Padding mask definition for PyTorch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In an effort to unify the definition and usage of the attention mask across all three frameworks in Transformer Engine, the padding mask definition in our PyTorch implementation has changed: ``True`` used to mean inclusion of the corresponding position in attention, and now means exclusion of that position. Since v1.7, all attention mask types follow the same definition, where ``True`` means masking out the corresponding position and ``False`` means including that position in the attention calculation.

An example of this change is,

.. code-block:: python

    # for a batch of 3 sequences where `a`s, `b`s and `c`s are the useful tokens
    # and `0`s are the padding tokens,
    [a, a, a, 0, 0,
     b, b, 0, 0, 0,
     c, c, c, c, 0]
    # the padding mask for this batch before v1.7 is,
    [ True,  True,  True, False, False,
      True,  True, False, False, False,
      True,  True,  True,  True, False]
    # and for v1.7 onwards it should be,
    [False, False, False,  True,  True,
     False, False,  True,  True,  True,
     False, False, False, False,  True]
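
If you are migrating code written for pre-v1.7 releases, inverting the old mask is sufficient. A
minimal sketch (variable names are illustrative), which also shows building a v1.7-style mask
directly from per-sequence token counts:

.. code-block:: python

    import torch

    # Old-style (pre-v1.7) mask: True = keep the position.
    old_mask = torch.tensor([[True, True, True, False, False],
                             [True, True, False, False, False],
                             [True, True, True, True, False]])

    # v1.7+ convention: True = mask the position out.
    new_mask = ~old_mask

    # Equivalently, build it directly from sequence lengths.
    seq_lens = torch.tensor([3, 2, 4])
    padding_mask = torch.arange(5)[None, :] >= seq_lens[:, None]
    assert torch.equal(padding_mask, new_mask)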

FP8 Convergence
===============

FP8 has been tested extensively across different model architectures and configurations, and we
found **no significant difference** between FP8 and BF16 training loss curves. FP8 has also been
validated for accuracy on downstream LLM tasks (e.g. LAMBADA and WikiText). Below are examples of
models tested for convergence across different frameworks.

+------------+------------------+---------------------------------------------------------------------------------------------------------+
| Model      | Framework        | Source                                                                                                  |
+============+==================+=========================================================================================================+
| T5-770M    |  JAX/T5x         | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/t5x#convergence-and-performance|
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| MPT-1.3B   |  Mosaic Composer | https://www.mosaicml.com/blog/coreweave-nvidia-h100-part-1                                              |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-5B     |  JAX/Paxml       | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/pax#h100-results               |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-5B     |  NeMo Framework  | Available on request                                                                                    |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| Llama2-7B  |  Alibaba Pai     | https://mp.weixin.qq.com/s/NQT0uKXLbXyh5031zBdeBQ                                                       |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| T5-11B     |  JAX/T5x         | Available on request                                                                                    |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| MPT-13B    |  Mosaic Composer | https://www.databricks.com/blog/turbocharged-training-optimizing-databricks-mosaic-ai-stack-fp8         |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-22B    |  NeMo Framework  | Available on request                                                                                    |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| Llama2-70B |  Alibaba Pai     | https://mp.weixin.qq.com/s/NQT0uKXLbXyh5031zBdeBQ                                                       |
+------------+------------------+---------------------------------------------------------------------------------------------------------+
| GPT-175B   |  JAX/Paxml       | https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/pax#h100-results               |
+------------+------------------+---------------------------------------------------------------------------------------------------------+

Integrations
============

Transformer Engine has been integrated with popular LLM frameworks such as:

* `DeepSpeed <https://github.com/deepspeedai/DeepSpeed/blob/master/tests/unit/runtime/half_precision/test_fp8.py>`_
* `Hugging Face Accelerate <https://huggingface.co/docs/accelerate/main/en/usage_guides/low_precision_training#configuring-transformersengine>`_
* `Lightning <https://github.com/Lightning-AI/lightning/issues/17172>`_
* `MosaicML Composer <https://github.com/mosaicml/composer/releases/tag/v0.13.1>`_
* `NVIDIA JAX Toolbox <https://github.com/NVIDIA/JAX-Toolbox>`_
* `NVIDIA Megatron-LM <https://github.com/NVIDIA/Megatron-LM>`_
* `NVIDIA NeMo Framework <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_
* `Amazon SageMaker Model Parallel Library <https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features-v2-tensor-parallelism.html>`_
* `Levanter <https://github.com/stanford-crfm/levanter>`_
* `GPT-NeoX <https://github.com/EleutherAI/gpt-neox>`_
* `Hugging Face Nanotron <https://github.com/huggingface/nanotron>`_ - Coming soon!
* `Colossal-AI <https://github.com/hpcaitech/ColossalAI>`_ - Coming soon!
* `PeriFlow <https://github.com/friendliai/periflow-python-sdk>`_ - Coming soon!

Contributing
============

We welcome contributions to Transformer Engine! To contribute and make pull requests, follow the
guidelines outlined in the `CONTRIBUTING.rst <CONTRIBUTING.rst>`_ guide.

Papers
======

* `Attention Is All You Need <https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf>`_ (the original Transformer paper)
* `Megatron-LM tensor parallel <https://arxiv.org/pdf/1909.08053.pdf>`_
* `Megatron-LM sequence parallel <https://arxiv.org/pdf/2205.05198.pdf>`_
* `FP8 Formats for Deep Learning <https://arxiv.org/abs/2209.05433>`_

Videos
======

* `Stable and Scalable FP8 Deep Learning Training on Blackwell | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc24-s62457/>`__
* `Blackwell Numerics for AI | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72458/>`_
* `Building LLMs: Accelerating Pretraining of Foundational Models With FP8 Precision | GTC 2025 <https://www.nvidia.com/gtc/session-catalog/?regcode=no-ncid&ncid=no-ncid&tab.catalogallsessionstab=16566177511100015Kus&search=zoho#/session/1726152813607001vnYK>`_
* `From FP8 LLM Training to Inference: Language AI at Scale | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72799/>`_
* `What's New in Transformer Engine and FP8 Training | GTC 2024 <https://www.nvidia.com/en-us/on-demand/session/gtc24-s62457/>`_
* `FP8 Training with Transformer Engine | GTC 2023 <https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s51393>`_
* `FP8 for Deep Learning | GTC 2023 <https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s52166/>`_
* `Inside the Hopper Architecture | GTC 2022 <https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s42663/>`_

.. |License| image:: https://img.shields.io/badge/License-Apache%202.0-blue.svg
   :target: https://opensource.org/licenses/Apache-2.0

Previous News
=============

* [06/2025] `Floating Point 8: An Introduction to Efficient, Lower-Precision AI Training <https://developer.nvidia.com/blog/floating-point-8-an-introduction-to-efficient-lower-precision-ai-training/>`_
* [05/2025] `Advanced Optimization Strategies for LLM Training on NVIDIA Grace Hopper <https://developer.nvidia.com/blog/advanced-optimization-strategies-for-llm-training-on-nvidia-grace-hopper/>`_
* [03/2025] `Stable and Scalable FP8 Deep Learning Training on Blackwell | GTC 2025 <https://www.nvidia.com/en-us/on-demand/session/gtc25-s72778/>`_
* [03/2025] `Measure and Improve AI Workload Performance with NVIDIA DGX Cloud Benchmarking <https://developer.nvidia.com/blog/measure-and-improve-ai-workload-performance-with-nvidia-dgx-cloud-benchmarking/>`_

.. image:: docs/examples/comparison-fp8-bf16-training-nvidia-dgx-cloud-benchmarking-performance-explorer.jpg
  :width: 600
  :alt: Comparison of FP8 versus BF16 training, as seen in NVIDIA DGX Cloud Benchmarking Performance Explorer

* [02/2025] `Understanding the Language of Life's Biomolecules Across Evolution at a New Scale with Evo 2 <https://developer.nvidia.com/blog/understanding-the-language-of-lifes-biomolecules-across-evolution-at-a-new-scale-with-evo-2/>`_
* [02/2025] `NVIDIA DGX Cloud Introduces Ready-To-Use Templates to Benchmark AI Platform Performance <https://developer.nvidia.com/blog/nvidia-dgx-cloud-introduces-ready-to-use-templates-to-benchmark-ai-platform-performance/>`_
* [01/2025] `Continued Pretraining of State-of-the-Art LLMs for Sovereign AI and Regulated Industries with iGenius and NVIDIA DGX Cloud <https://developer.nvidia.com/blog/continued-pretraining-of-state-of-the-art-llms-for-sovereign-ai-and-regulated-industries-with-igenius-and-nvidia-dgx-cloud/>`_
* [11/2024] `Developing a 172B LLM with Strong Japanese Capabilities Using NVIDIA Megatron-LM <https://developer.nvidia.com/blog/developing-a-172b-llm-with-strong-japanese-capabilities-using-nvidia-megatron-lm/>`_
* [11/2024] `How FP8 boosts LLM training by 18% on Amazon SageMaker P5 instances <https://aws.amazon.com/blogs/machine-learning/how-fp8-boosts-llm-training-by-18-on-amazon-sagemaker-p5-instances/>`_
* [11/2024] `Efficiently train models with large sequence lengths using Amazon SageMaker model parallel <https://aws.amazon.com/blogs/machine-learning/efficiently-train-models-with-large-sequence-lengths-using-amazon-sagemaker-model-parallel/>`_
* [09/2024] `Reducing AI large model training costs by 30% requires just a single line of code from FP8 mixed precision training upgrades <https://company.hpc-ai.com/blog/reducing-ai-large-model-training-costs-by-30-requires-just-a-single-line-of-code-from-fp8-mixed-precision-training-upgrades>`_
* [05/2024] `Accelerating Transformers with NVIDIA cuDNN 9 <https://developer.nvidia.com/blog/accelerating-transformers-with-nvidia-cudnn-9/>`_
* [03/2024] `Turbocharged Training: Optimizing the Databricks Mosaic AI stack with FP8 <https://www.databricks.com/blog/turbocharged-training-optimizing-databricks-mosaic-ai-stack-fp8>`_
* [03/2024] `FP8 Training Support in SageMaker Model Parallelism Library <https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-release-notes.html>`_
* [12/2023] `New NVIDIA NeMo Framework Features and NVIDIA H200 <https://developer.nvidia.com/blog/new-nvidia-nemo-framework-features-and-nvidia-h200-supercharge-llm-training-performance-and-versatility/>`_

.. image:: docs/examples/H200-NeMo-performance.png
  :width: 600
  :alt: H200

* [11/2023] `Inflection-2: The Next Step Up <https://inflection.ai/inflection-2>`_
* [11/2023] `Unleashing The Power Of Transformers With NVIDIA Transformer Engine <https://lambdalabs.com/blog/unleashing-the-power-of-transformers-with-nvidia-transformer-engine>`_
* [11/2023] `Accelerating PyTorch Training Workloads with FP8 <https://towardsdatascience.com/accelerating-pytorch-training-workloads-with-fp8-5a5123aec7d7>`_
* [09/2023] `Transformer Engine added to AWS DL Container for PyTorch Training <https://github.com/aws/deep-learning-containers/pull/3315>`_
* [06/2023] `Breaking MLPerf Training Records with NVIDIA H100 GPUs <https://developer.nvidia.com/blog/breaking-mlperf-training-records-with-nvidia-h100-gpus/>`_
* [04/2023] `Benchmarking Large Language Models on NVIDIA H100 GPUs with CoreWeave (Part 1) <https://www.mosaicml.com/blog/coreweave-nvidia-h100-part-1>`_