{"id":13499114,"url":"https://github.com/chenxi116/PNASNet.pytorch","last_synced_at":"2025-03-29T04:30:30.918Z","repository":{"id":107365815,"uuid":"140200211","full_name":"chenxi116/PNASNet.pytorch","owner":"chenxi116","description":"PyTorch implementation of PNASNet-5 on ImageNet","archived":false,"fork":false,"pushed_at":"2022-08-04T20:12:17.000Z","size":151,"stargazers_count":315,"open_issues_count":3,"forks_count":44,"subscribers_count":14,"default_branch":"master","last_synced_at":"2024-08-01T22:50:07.321Z","etag":null,"topics":["automl","deep-learning","imagenet","neural-architecture-search","pytorch"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/chenxi116.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2018-07-08T20:26:13.000Z","updated_at":"2024-07-24T20:46:51.000Z","dependencies_parsed_at":"2023-05-17T06:31:19.025Z","dependency_job_id":null,"html_url":"https://github.com/chenxi116/PNASNet.pytorch","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chenxi116%2FPNASNet.pytorch","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chenxi116%2FPNASNet.pytorch/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chenxi116%2FPNASNet.pytorch/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chenxi116%2FPNASNet.pytorch/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/chenxi116","download_url":"https://code
load.github.com/chenxi116/PNASNet.pytorch/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":222455951,"owners_count":16987577,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["automl","deep-learning","imagenet","neural-architecture-search","pytorch"],"created_at":"2024-07-31T22:00:29.102Z","updated_at":"2024-10-31T17:31:31.909Z","avatar_url":"https://github.com/chenxi116.png","language":"Python","readme":"# PNASNet.pytorch\n\nPyTorch implementation of [PNASNet-5](https://arxiv.org/abs/1712.00559). Specifically, PyTorch code from [this repository](https://github.com/quark0/darts) is adapted to completely match both [my implementation](https://github.com/chenxi116/PNASNet.TF) and the [official implementation](https://github.com/tensorflow/models/blob/master/research/slim/nets/nasnet/pnasnet.py) of PNASNet-5, both written in TensorFlow. This complete match allows the pretrained TF model to be exactly converted to PyTorch: see `convert.py`.\n\nIf you use the code, please cite:\n```bibtex\n@inproceedings{liu2018progressive,\n  author    = {Chenxi Liu and\n               Barret Zoph and\n               Maxim Neumann and\n               Jonathon Shlens and\n               Wei Hua and\n               Li{-}Jia Li and\n               Li Fei{-}Fei and\n               Alan L. 
Yuille and\n               Jonathan Huang and\n               Kevin Murphy},\n  title     = {Progressive Neural Architecture Search},\n  booktitle = {European Conference on Computer Vision},\n  year      = {2018}\n}\n```\n\n## Requirements\n\n- TensorFlow 1.8.0 (for image preprocessing)\n- PyTorch 0.4.0\n- torchvision 0.2.1\n\n## Data and Model Preparation\n\n- Download the ImageNet validation set and move the images into labeled subfolders. To do the latter, you can use [this script](https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh). Make sure the folder `val` is under `data/`.\n- Download [PNASNet.TF](https://github.com/chenxi116/PNASNet.TF) and follow its README to download the `PNASNet-5_Large_331` pretrained model.\n- Convert the TensorFlow model to a PyTorch model:\n```bash\npython convert.py\n```\n\n## Notes on Model Conversion\n\n- In both TensorFlow implementations, `net[0]` means `prev` and `net[1]` means `prev_prev`. However, in the [PyTorch implementation](https://github.com/quark0/darts), `states[0]` means `prev_prev` and `states[1]` means `prev`. I followed the PyTorch implementation in this repository. This is why the 0 and 1 in the PNASCell specification are reversed.\n- The default value of `eps` in BatchNorm layers is `1e-3` in TensorFlow and `1e-5` in PyTorch. I changed all BatchNorm `eps` values to `1e-3` (see `operations.py`) to exactly match the TensorFlow pretrained model.\n- The TensorFlow pretrained model uses `tf.image.resize_bilinear` to resize the image (see `utils.py`). 
I cannot find a Python function that exactly matches this function's behavior (also see [this thread](https://github.com/tensorflow/tensorflow/issues/6720) and [this post](https://hackernoon.com/how-tensorflows-tf-image-resize-stole-60-days-of-my-life-aba5eb093f35) on this topic), so currently in `main.py` I call TensorFlow to do the image preprocessing, to guarantee that both models receive identical input.\n- When converting the model from TensorFlow to PyTorch (i.e. `convert.py`), I use an input image size of 323 instead of 331. This is because the 'SAME' padding in TensorFlow may differ from padding in PyTorch in some layers (see [this link](https://stackoverflow.com/questions/37674306/what-is-the-difference-between-same-and-valid-padding-in-tf-nn-max-pool-of-t); basically TF may pad only 1 on the right and bottom, whereas PyTorch always pads 1 on all four margins). However, they behave exactly the same when the image size is 323: `conv0` has no padding, so the feature size becomes 161, then 81, 41, etc.\n- The exactness of the conversion at image size 323 is also corroborated by the following table (entries are Prec@1, Prec@5):\n\nImage Size | Official TensorFlow Model | Converted PyTorch Model\n--- | --- | ---\n(331, 331) | (0.829, 0.962) | (0.828, 0.961)\n(323, 323) | (0.827, 0.961) | (0.827, 0.961)\n\n\n## Usage\n\n```bash\npython main.py\n```\n\nThe last printed line should read:\n```bash\nTest: [50000/50000]\tPrec@1 0.828\tPrec@5 0.961\n```\n","funding_links":[],"categories":["Papers\u0026Codes","DLA","1.) 
Neural Architecture Search","Paper implementations｜论文实现","Model Deployment library","Paper implementations"],"sub_categories":["PNasNet","**[Papers]**","Other libraries｜其他库:","PyTorch \u003ca name=\"pytorch\"/\u003e","Other libraries:"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchenxi116%2FPNASNet.pytorch","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fchenxi116%2FPNASNet.pytorch","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchenxi116%2FPNASNet.pytorch/lists"}