https://github.com/nvidia-merlin/distributed-embeddings
distributed-embeddings is a library for building large embedding based models in Tensorflow 2.
- Host: GitHub
- URL: https://github.com/nvidia-merlin/distributed-embeddings
- Owner: NVIDIA-Merlin
- License: apache-2.0
- Created: 2022-03-10T21:53:27.000Z (almost 4 years ago)
- Default Branch: main
- Last Pushed: 2023-10-17T12:53:02.000Z (about 2 years ago)
- Last Synced: 2025-04-18T00:57:59.320Z (9 months ago)
- Language: Python
- Homepage:
- Size: 1.79 MB
- Stars: 44
- Watchers: 9
- Forks: 12
- Open Issues: 8
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
README
# [Distributed Embeddings](https://github.com/NVIDIA-Merlin/distributed-embeddings)
[Documentation](https://nvidia-merlin.github.io/distributed-embeddings/Introduction.html)
[License](https://github.com/NVIDIA-Merlin/distributed-embeddings/blob/main/LICENSE)
distributed-embeddings is a library for building large embedding-based (e.g. recommender) models in TensorFlow 2. It provides a scalable model parallel wrapper that automatically distributes embedding tables across multiple GPUs, as well as efficient embedding operations that cover and extend TensorFlow's embedding functionality.
Refer to the [NVIDIA Developer blog post](https://developer.nvidia.com/blog/fast-terabyte-scale-recommender-training-made-easy-with-nvidia-merlin-distributed-embeddings/) on terabyte-scale recommender training for more details.
## Features
### Distributed Model Parallel Wrappers
`dist_model_parallel` contains tools that enable model parallel training by changing only three lines of your script. It can also be combined with data parallelism to form hybrid-parallel training. Users can easily experiment with large-scale embeddings beyond a single GPU's memory capacity without writing complex code to handle cross-worker communication.
To enable model parallelism, simply wrap a list of Keras Embedding layers with `dist_model_parallel.DistributedEmbedding`.
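Below is a minimal sketch of that wrapping step, assuming a simple Keras model; the model class and table sizes are illustrative, and only `DistributedEmbedding` itself is the library API named above:

```python
import tensorflow as tf
# Import path is an assumption based on typical usage; consult the
# User Guide for the exact module location.
from distributed_embeddings.python.layers import dist_model_parallel as dmp

class MyEmbeddingModel(tf.keras.Model):
    def __init__(self, table_sizes):
        super().__init__()
        embedding_layers = [
            tf.keras.layers.Embedding(input_dim, output_dim)
            for input_dim, output_dim in table_sizes
        ]
        # Wrap the list of Keras Embedding layers so the tables are
        # automatically distributed across the available GPUs.
        self.embedding_layers = dmp.DistributedEmbedding(embedding_layers)

    def call(self, inputs):
        # One input tensor per table; the wrapper handles cross-worker
        # communication for the lookups internally.
        return self.embedding_layers(inputs)
```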
### Embedding Layers
`distributed_embeddings.Embedding` combines functionalities of `tf.keras.layers.Embedding` and `tf.nn.embedding_lookup_sparse` under a unified Keras layer API. The backend is designed to achieve high GPU efficiency.
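As a rough illustration, assuming the layer accepts a `combiner` argument in the spirit of `tf.nn.embedding_lookup_sparse` (the exact constructor signature and import path should be checked against the API docs), it can serve both dense and reducing lookups:

```python
import tensorflow as tf
import distributed_embeddings as de  # top-level import assumed for brevity

# Dense lookup, like tf.keras.layers.Embedding: one vector per id.
dense_emb = de.Embedding(input_dim=1000, output_dim=16)
print(dense_emb(tf.constant([[1, 2, 3], [4, 5, 6]])).shape)  # (2, 3, 16)

# Reducing lookup, like tf.nn.embedding_lookup_sparse: variable-length
# ids per example are summed into a single vector.
sum_emb = de.Embedding(input_dim=1000, output_dim=16, combiner='sum')
print(sum_emb(tf.ragged.constant([[1, 7, 4], [2]])).shape)  # (2, 16)
```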
### Input Key Mapping with IntegerLookup Layers
`distributed_embeddings.IntegerLookup` extends `tf.keras.layers.IntegerLookup` with on-the-fly vocabulary building. This allows users to start training directly from raw input keys without offline preprocessing. A highly optimized GPU backend is provided along with CPU support.
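A hypothetical sketch of what on-the-fly vocabulary building enables (the constructor and import below mirror `tf.keras.layers.IntegerLookup` and are assumptions, not the exact distributed-embeddings signature):

```python
import tensorflow as tf
import distributed_embeddings as de  # top-level import assumed for brevity

# Raw, unpreprocessed categorical keys, e.g. hashed user/item ids.
raw_keys = tf.constant([[1001, 42], [1001, 7]])

# The lookup layer builds its vocabulary as new keys arrive, so no
# offline vocabulary pass over the dataset is required before training.
lookup = de.IntegerLookup()
indices = lookup(raw_keys)  # contiguous indices, ready for an Embedding layer
```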
**See the [User Guide](https://nvidia-merlin.github.io/distributed-embeddings/userguide.html) for more details**
## Installation
### Requirements
Python 3, CUDA 11 or newer, TensorFlow 2
### Containers
You can build inside a 22.03 or later NGC TF2 [container image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow):
Note: Horovod v0.27 and TensorFlow 2.10 (alternatively, the NGC 23.03 container) are required to build v0.3+.
```bash
docker pull nvcr.io/nvidia/tensorflow:23.06-tf2-py3
```
### Build from source
After cloning this repository, run:
```bash
git submodule update --init --recursive
make pip_pkg && pip install artifacts/*.whl
```
Test installation with:
```bash
python -c "import distributed_embeddings"
```
You can also run the [synthetic model](https://github.com/NVIDIA-Merlin/distributed-embeddings/tree/main/examples/benchmarks/synthetic_models) and [DLRM](https://github.com/NVIDIA-Merlin/distributed-embeddings/blob/main/examples/dlrm/main.py) examples.
## Feedback and Support
If you'd like to contribute to the library directly, see the [CONTRIBUTING.md](https://github.com/NVIDIA-Merlin/distributed-embeddings/blob/main/CONTRIBUTING.md). We're particularly interested in contributions or feature requests for our feature engineering and preprocessing operations. To further advance our Merlin Roadmap, we encourage you to share all the details regarding your recommender system pipeline in this [survey](https://developer.nvidia.com/merlin-devzone-survey).
If you're interested in learning more about how distributed-embeddings works, see the [documentation](https://nvidia-merlin.github.io/distributed-embeddings/Introduction.html).