{"id":13717574,"url":"https://github.com/webdataset/webdataset","last_synced_at":"2025-12-11T22:45:07.782Z","repository":{"id":38221681,"uuid":"201089222","full_name":"webdataset/webdataset","owner":"webdataset","description":"A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch.","archived":false,"fork":false,"pushed_at":"2025-05-06T23:27:22.000Z","size":54074,"stargazers_count":2586,"open_issues_count":115,"forks_count":205,"subscribers_count":24,"default_branch":"main","last_synced_at":"2025-05-08T22:37:39.106Z","etag":null,"topics":["data-augmentation","deep-learning","pytorch","webdataset","webdataset-format"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/webdataset.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2019-08-07T16:42:04.000Z","updated_at":"2025-05-08T22:17:59.000Z","dependencies_parsed_at":"2022-07-10T14:17:23.221Z","dependency_job_id":"a2e18b9f-43a8-4d89-bd4e-10bbd0094068","html_url":"https://github.com/webdataset/webdataset","commit_stats":{"total_commits":715,"total_committers":34,"mean_commits":"21.029411764705884","dds":0.5034965034965035,"last_synced_commit":"039d74319ae55e5696dcef89829be9671802cf70"},"previous_names":["tmbdev/webdataset"],"tags_count":106,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/webdataset%2Fwebdataset","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/rep
ositories/webdataset%2Fwebdataset/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/webdataset%2Fwebdataset/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/webdataset%2Fwebdataset/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/webdataset","download_url":"https://codeload.github.com/webdataset/webdataset/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253166473,"owners_count":21864467,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["data-augmentation","deep-learning","pytorch","webdataset","webdataset-format"],"created_at":"2024-08-03T00:01:24.269Z","updated_at":"2025-12-11T22:45:07.775Z","avatar_url":"https://github.com/webdataset.png","language":"Python","readme":"[![Test](https://github.com/tmbdev/webdataset/workflows/CI/badge.svg)](https://github.com/tmbdev/webdataset/actions?query=workflow%3ACI)\n[![DeepSource](https://static.deepsource.io/deepsource-badge-light-mini.svg)](https://deepsource.io/gh/tmbdev/webdataset/?ref=repository-badge)\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport torch.utils.data\nimport torch.nn\nfrom random import randrange\nimport os\nos.environ[\"WDS_VERBOSE_CACHE\"] = \"1\"\nos.environ[\"GOPEN_VERBOSE\"] = \"0\"\n```\n\n# The WebDataset Format\n\nWebDataset format files are tar files, with two conventions:\n\n- within each tar file, files that belong together and make up a training sample share the same basename when stripped of all filename extensions\n- 
the shards of a tar file are numbered like `something-000000.tar` to `something-012345.tar`, usually specified using brace notation `something-{000000..012345}.tar`\n\nYou can find a longer, more detailed specification of the WebDataset format in the [WebDataset Format Specification](https://docs.google.com/document/d/18OdLjruFNX74ILmgrdiCI9J1fQZuhzzRBCHV9URWto0/edit?usp=sharing).\n\nWebDataset can read files from local disk or from any pipe, which allows it to access files using common cloud object stores. WebDataset can also read concatenated MsgPack and CBOR sources.\n\nThe WebDataset representation allows writing purely sequential I/O pipelines for large-scale deep learning. This is important for achieving high I/O rates from local storage (3x-10x for local drives compared to random access) and for using object stores and cloud storage for training.\n\nThe WebDataset format represents images, movies, audio, etc. in their native file formats, making the creation of WebDataset format data as easy as just creating a tar archive. 
Because data is aligned on predictable boundaries, WebDataset also works well with block deduplication.\n\nStandard tools can be used for accessing and processing WebDataset-format files.\n\n\n```python\nbucket = \"https://storage.googleapis.com/webdataset/testdata/\"\ndataset = \"publaynet-train-{000000..000009}.tar\"\n\nurl = bucket + dataset\n!curl -s {bucket}publaynet-train-000000.tar | dd count=5000 2\u003e /dev/null | tar tf - 2\u003e /dev/null | sed 10q\n```\n\n    PMC4991227_00003.json\r\n\n\n    PMC4991227_00003.png\r\n    PMC4537884_00002.json\r\n\n\n    PMC4537884_00002.png\r\n\n\n    PMC4323233_00003.json\r\n\n\n    PMC4323233_00003.png\r\n    PMC5429906_00004.json\r\n\n\n    PMC5429906_00004.png\r\n    PMC5592712_00002.json\r\n\n\n    PMC5592712_00002.png\r\n\n\nNote that in these `.tar` files, we have pairs of `.json` and `.png` files; each such pair makes up a training sample.\n\n# WebDataset Libraries\n\nThere are several libraries supporting the WebDataset format:\n\n- `webdataset` for Python 3 (includes the `wids` library), this repository\n- [Webdataset.jl](https://github.com/webdataset/WebDataset.jl), a Julia implementation\n- [tarp](https://github.com/webdataset/tarp), a Golang implementation and command line tool\n- Ray Data sources and sinks\n\nThe `webdataset` library can be used with PyTorch, TensorFlow, and JAX.\n\n# The `webdataset` Library\n\nThe `webdataset` library is an implementation of PyTorch `IterableDataset` (or a mock implementation thereof if you aren't using PyTorch). It implements a form of stream processing. 
Some of its features are:\n\n- large scale parallel data access through sharding\n- high performance disk I/O due to purely sequential reads\n- latency insensitive due to big fat pipes\n- no local storage required\n- instant startup for training jobs\n- only requires reading from file descriptors/network streams, no special APIs\n- its API encourages high performance I/O pipelines\n- scalable from tiny desktop datasets to petascale datasets\n- provides local caching if desired\n- requires no dataset metadata; any collection of shards can be read and used instantly\n\nThe main limitations people run into are that `IterableDataset` is less commonly used in PyTorch, so some existing code may not support it as well, and that achieving an exactly balanced number of training samples across many compute nodes for a fixed epoch size is tricky; for multinode training, `webdataset` is therefore usually used with shard resampling.\n\nThere are two interfaces: the concise \"fluid\" interface and a longer \"pipeline\" interface. We'll show examples using the fluid interface, which is usually what you want.\n\n\n```python\nimport webdataset as wds\nshuffle_buffer = 10  # usually, pick something bigger, like 1000\npil_dataset = wds.WebDataset(url).shuffle(shuffle_buffer).decode(\"pil\").to_tuple(\"png\", \"json\")\n```\n\n    /Users/tbreuel/proj/webdataset/src/webdataset/compat.py:379: UserWarning: WebDataset(shardshuffle=...) is None; set explicitly to False or a number\n      warnings.warn(\"WebDataset(shardshuffle=...) 
is None; set explicitly to False or a number\")\n\n\nThe resulting datasets are standard PyTorch `IterableDataset` instances.\n\n\n```python\nisinstance(pil_dataset, torch.utils.data.IterableDataset)\n```\n\n\n\n\n    True\n\n\n\n\n```python\nfor image, json in pil_dataset:\n    break\nplt.imshow(image)\n```\n\n\n\n\n    \u003cmatplotlib.image.AxesImage at 0x12fec5050\u003e\n\n\n\n\n    \n![png](readme_files/readme_11_1.png)\n    \n\n\nWe can add onto the existing pipeline for augmentation and data preparation.\n\n\n```python\nimport torchvision.transforms as transforms\nfrom PIL import Image\n\npreproc = transforms.Compose([\n    transforms.Resize((224, 224)),\n    transforms.ToTensor(),\n    lambda x: 1-x,\n])\n\ndef preprocess(sample):\n    image, json = sample\n    try:\n        label = json[\"annotations\"][0][\"category_id\"]\n    except Exception:\n        label = 0\n    return preproc(image), label\n\ndataset = pil_dataset.map(preprocess)\n\nfor image, label in dataset:\n    break\nplt.imshow(image.numpy().transpose(1, 2, 0))\n```\n\n\n\n\n    \u003cmatplotlib.image.AxesImage at 0x1677a1250\u003e\n\n\n\n\n    \n![png](readme_files/readme_13_1.png)\n    \n\n\n`WebDataset` is just an instance of a standard `IterableDataset`. It's a single-threaded way of iterating over a dataset. Since image decompression and data augmentation can be compute intensive, PyTorch usually uses the `DataLoader` class to parallelize data loading and preprocessing. 
`WebDataset` is fully compatible with the standard `DataLoader`.\n\nHere are a number of notebooks showing how to use WebDataset for image classification and LLM training:\n\n- [train-resnet50-wds](examples/out/train-resnet50-wds.ipynb) -- simple, single GPU training from Imagenet\n- [train-resnet50-multiray-wds](examples/out/train-resnet50-multiray-wds.ipynb) -- multinode training using webdataset\n- [generate-text-dataset](examples/out/generate-text-dataset.ipynb) -- initial dataset generation\n- [tesseract-wds](examples/out/tesseract-wds.ipynb) -- shard-to-shard transformations, here for OCR running over large datasets\n- [train-ocr-errors-hf](examples/out/train-ocr-errors-hf.ipynb) -- an example of LLM fine tuning using a dataset in webdataset format\n\nThe [wds-notes](examples/wds-notes.ipynb) notebook contains some additional documentation and information about the library.\n\n# The `webdataset` Pipeline API\n\nThe `wds.WebDataset` fluid interface is just a convenient shorthand for writing down pipelines. 
The underlying pipeline is an instance of the `wds.DataPipeline` class, and you can construct data pipelines explicitly, similar to the way you use `nn.Sequential` inside models.\n\n\n```python\ndataset = wds.DataPipeline(\n    wds.SimpleShardList(url),\n    # at this point we have an iterator over all the shards\n\n    # this shuffles the shards\n    wds.shuffle(100),\n\n    # add wds.split_by_node here if you are using multiple nodes\n    wds.split_by_worker,\n\n    # at this point, we have an iterator over the shards assigned to each worker\n    wds.tarfile_to_samples(),\n\n    # this shuffles the samples in memory\n    wds.shuffle(shuffle_buffer),\n\n    # this decodes the images and json\n    wds.decode(\"pil\"),\n    wds.to_tuple(\"png\", \"json\"),\n    wds.map(preprocess),\n    wds.batched(16)\n)\n\nbatch = next(iter(dataset))\nbatch[0].shape, batch[1].shape\n```\n\n\n\n\n    (torch.Size([16, 3, 224, 224]), (16,))\n\n\n\n# Secure Mode\n\nYou can run WebDataset in a mode that improves security. This disables the `pipe:` and `file:` protocols, as well as attempts to decode Python pickles. This should disable simple attacks and no successful attacks are currently known; rely on this mode at your own risk. \n\nYou enable secure mode by setting `webdataset.utils.enforce_security = True` before you start using the library. 
You can also set `WDS_SECURE=1` in your environment.\n\n# Installation and Documentation\n\n    $ pip install webdataset\n\nFor the Github version:\n\n    $ pip install git+https://github.com/tmbdev/webdataset.git\n\nHere are some videos talking about WebDataset and large scale deep learning:\n\n- [Introduction to Large Scale Deep Learning](https://www.youtube.com/watch?v=kNuA2wflygM)\n- [Loading Training Data with WebDataset](https://www.youtube.com/watch?v=mTv_ePYeBhs)\n- [Creating Datasets in WebDataset Format](https://www.youtube.com/watch?v=v_PacO-3OGQ)\n- [Tools for Working with Large Datasets](https://www.youtube.com/watch?v=kIv8zDpRUec)\n\n# Dependencies\n\nThe WebDataset library only requires PyTorch, NumPy, and a small library called `braceexpand`.\n\nWebDataset loads a few additional libraries dynamically only when they are actually needed and only in the decoder:\n\n- PIL/Pillow for image decoding\n- `torchvision`, `torchvideo`, `torchaudio` for image/video/audio decoding\n- `msgpack` for MessagePack decoding\n- the `curl` command line tool for accessing HTTP servers\n- the Google/Amazon/Azure command line tools for accessing cloud storage buckets\n\nLoading of one of these libraries is triggered by configuring a decoder that attempts to decode content in the given format and encountering a file in that format during decoding. (Eventually, the torch... dependencies will be refactored into those libraries.)\n\n\n","funding_links":[],"categories":["Pytorch \u0026 related libraries｜Pytorch \u0026 相关库","Jupyter Notebook","Python","Dataset"],"sub_categories":["Other libraries｜其他库:","Dataset structure of CLMP"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwebdataset%2Fwebdataset","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fwebdataset%2Fwebdataset","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwebdataset%2Fwebdataset/lists"}