{"id":43874928,"url":"https://github.com/astra-vision/rain-rendering","last_synced_at":"2026-02-06T14:42:16.317Z","repository":{"id":43576977,"uuid":"293284622","full_name":"astra-vision/rain-rendering","owner":"astra-vision","description":"Rain Rendering for Evaluating and Improving Robustness to Bad Weather (Tremblay et al., 2020) (S. S. Halder et al., 2019)","archived":false,"fork":false,"pushed_at":"2025-01-10T09:46:37.000Z","size":5239,"stargazers_count":141,"open_issues_count":2,"forks_count":23,"subscribers_count":4,"default_branch":"master","last_synced_at":"2025-01-10T10:53:15.777Z","etag":null,"topics":["computer-vision","fog","gan","particles-simulation","physically-based-rendering","rain"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/astra-vision.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-09-06T13:40:19.000Z","updated_at":"2025-01-10T09:46:40.000Z","dependencies_parsed_at":"2024-06-25T10:09:54.248Z","dependency_job_id":"edfab9c5-d086-42aa-9113-13ec118a6624","html_url":"https://github.com/astra-vision/rain-rendering","commit_stats":{"total_commits":9,"total_committers":2,"mean_commits":4.5,"dds":"0.11111111111111116","last_synced_commit":"9e994b0cd7e5e210189821345f6655b21c79ca48"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/astra-vision/rain-rendering","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/astra-vision%2Frain-rendering","tags_url":"https://repos.ecosyste.ms/
api/v1/hosts/GitHub/repositories/astra-vision%2Frain-rendering/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/astra-vision%2Frain-rendering/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/astra-vision%2Frain-rendering/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/astra-vision","download_url":"https://codeload.github.com/astra-vision/rain-rendering/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/astra-vision%2Frain-rendering/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29164933,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-06T14:37:12.680Z","status":"ssl_error","status_checked_at":"2026-02-06T14:36:22.973Z","response_time":59,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","fog","gan","particles-simulation","physically-based-rendering","rain"],"created_at":"2026-02-06T14:42:15.689Z","updated_at":"2026-02-06T14:42:16.308Z","avatar_url":"https://github.com/astra-vision.png","language":"Python","readme":"# Rain Rendering for Evaluating and Improving Robustness to Bad Weather\n\nOfficial repository.  \nThis code augments clear weather images with a controllable amount of rain using our physics-based rendering. 
It allows evaluating/training algorithms, improving robustness to rain, detecting/removing rain, etc.\n\nWe provide rain-augmented datasets in the [dataset zoo](#dataset-zoo).\n\n## Paper \n\n![alt text](teaser.png \"Rain rendering\")\n\n\u003c!--\nPhysics-based rain (PBR) pipeline:\n\n![alt text](doc/pbr_pipeline.png \"PBR pipeline\")\n\nGAN+PBR pipeline: \n\n![alt text](doc/ganpbr_pipeline.png \"GAN+PBR pipeline\")\n//--\u003e\n\n[Rain Rendering for Evaluating and Improving Robustness to Bad Weather](https://arxiv.org/abs/2009.03683) \\\n[Maxime Tremblay](http://vision.gel.ulaval.ca/en/people/Id_637/index.php), [Shirsendu S. Halder](https://scholar.google.com/citations?user=A_e7SA8AAAAJ), [Raoul de Charette](https://team.inria.fr/rits/membres/raoul-de-charette/), [Jean-François Lalonde](http://vision.gel.ulaval.ca/~jflalonde/)  \nInria, Université Laval. IJCV 2020\n\nIf you find our work useful, please cite:\n```\n@article{tremblay2020rain,\n  title={Rain Rendering for Evaluating and Improving Robustness to Bad Weather},\n  author={Tremblay, Maxime and Halder, Shirsendu S. and de Charette, Raoul and Lalonde, Jean-François},\n  journal={International Journal of Computer Vision},\n  year={2020}\n}\n```\nThis work was accepted at IJCV 2020 ([preprint](https://arxiv.org/abs/2009.03683)) and is an extension of our [ICCV'19 paper](https://arxiv.org/abs/1908.10335).\n\n\n## Preparation\n\nTested on both Linux \u0026 Windows with\n* Python 3.6\n* OpenCV 3.2.0\n* PyClipper 1.0.6\n* Numpy 1.18\n\n### Setup\n\nCreate your conda virtual environment:\n```sh\nconda create --name py36_weatheraugment python=3.6 opencv numpy matplotlib tqdm imageio pillow natsort glob2 scipy scikit-learn scikit-image pexpect -y\n\nconda activate py36_weatheraugment\n\npip install pyclipper imutils\n```\n\nOur code builds on kindly shared third-party research. 
Specifically, we use the particles simulator of [de Charette et al., ICCP 2012](https://github.com/cv-rits/weather-particle-simulator), and the rainstreak illumination database of [Garg and Nayar, TOG 2006](https://cave.cs.columbia.edu/projects/categories/project?cid=Physics-Based+Vision\u0026pid=Photorealistic+Rendering+of+Rain+Streaks).\nTo install all third-party dependencies:\n* Download the Columbia Uni. [rain streak database](https://cave.cs.columbia.edu/old/databases/rain_streak_db/databases.zip) \\[[backup link](https://web.archive.org/web/20240820054544/https://www.cs.columbia.edu/CAVE/databases/rain_streak_db/databases.zip)\\] and extract the files in `3rdparty/rainstreakdb`\n* **\\[Optional, cf. below\\]** Install the CMU [weather particle simulator](https://github.com/cv-rits/weather-particle-simulator) with \n`git submodule update --init` and follow the \"setup\" instructions in `3rdparty/weather-particle-simulator/readme.md` to ensure dependencies are resolved.\n\nNote that without the weather particle simulator, you will only be able to run our rendering using our pre-computed particles simulations on a few datasets. Cf. 
[dataset zoo](#dataset-zoo).\n\n## Running the code\nThe renderer augments sequences of images with rain, using the following required data:\n- images\n- depth maps\n- calibration files (optional, KITTI format)\n- particles simulation files (optional, otherwise files are automatically generated by the \"weather particle simulator\")\n\nFile structure may vary per dataset, but a typical structure is:\n```sh\ndata/source/DATASET/SEQUENCE/rgb/file0001.png           # Source images (color, 8 bits)\ndata/source/DATASET/SEQUENCE/depth/file0001.png         # Depth images (16 bits, with depth_in_meter = depth/256.)\n```\n\nParticles simulation files are located (or automatically generated) in:\n```sh\ndata/particles/DATASET/XXXX/rain/10mm/*.xml         # Particles simulation files (here, 10mm/hr rain)\n```\n\nUpon success, the renderer will output:\n```sh\ndata/output/DATASET/SEQUENCE/rain/10mm/rainy_image/file0001.png     # Rainy images (here, 10mm/hr rain)\ndata/output/DATASET/SEQUENCE/rain/10mm/rainy_mask/file0001.png      # Rainy masks (int32 encoding rain drop opacity, useful for rain detection/removal work) \ndata/output/DATASET/SEQUENCE/envmap/file0001.png                    # Estimated environment maps (only output with --save_envmap)\n```\n\nWe provide guidance and all required files to generate rain on [KITTI](http://www.cvlibs.net/datasets/kitti), [Cityscapes](https://www.cityscapes-dataset.com), [nuScenes](https://www.nuscenes.org).\nYou may refer to the custom section below to render rain on your own images.\n\n\n### Rendering rain on KITTI, Cityscapes, nuScenes\n\n_Note: For ready-to-use rainy versions of KITTI/Cityscapes/nuScenes, refer to the [dataset zoo](#dataset-zoo). 
The following instructions are for re-generating your own rainy images._\n\n#### KITTI\nTo generate rain on the [2D object subset of KITTI](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d), download \"left color images of object data set\" from [here](http://www.cvlibs.net/download.php?file=data_object_image_2.zip), \"camera calibration matrices of object data set\" from [here](http://www.cvlibs.net/download.php?file=data_object_calib.zip), and our depth files from [here](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_kitti_data_object_training_image_2_depth.zip).\nExtract all in `data/source/kitti/data_object`.\n\nYou should consider downloading the pre-computed KITTI particles simulations from [here](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_kitti_data_object_particles.zip) and extracting the files in `data/particles/kitti`. (This is mandatory if the particles simulator is not set up.)\n\nVerify that the following files exist: `data/source/kitti/data_object/training/image_2/000000.png`, `data/source/kitti/data_object/training/image_2/depth/000000.png`, `data/source/kitti/data_object/training/calib/000000.txt` and `data/particles/kitti/data_object/rain/25mm/` (if you downloaded the particles files). Adjust your file structure if needed.\n\nTo generate rain at a 25mm/hr fall rate on the first 10 frames of each sequence of KITTI, run:  \n```sh\npython main.py --dataset kitti --intensity 25 --frame_end 10\n```\n\nOutput will be located in `data/output/kitti`. Drop the `frame_end` argument to render the full rainy dataset, or refer to the [Advanced usage](#advanced-usage) for more examples. 
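As the file-structure notes above state, the 16-bit depth PNGs encode meters as `depth / 256.`. Here is a minimal NumPy sketch of that decoding, using a synthetic array in place of a real dataset file (the helper name is ours, not part of this repository):

```python
import numpy as np

# Depth PNGs store depth_in_meter = depth / 256. (per the file-structure notes above).
# Hypothetical helper, shown on a synthetic uint16 array rather than a dataset file:
def depth_png_to_meters(depth_raw):
    """Convert raw uint16 depth values to meters (float32)."""
    return depth_raw.astype(np.float32) / 256.0

stored = np.array([[2560, 256]], dtype=np.uint16)  # e.g. values read with cv2.IMREAD_UNCHANGED
meters = depth_png_to_meters(stored)               # -> [[10.0, 1.0]] meters
```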
\n\n\\[We provide all required data for KITTI sequences: `data_object/training`, `raw_data/2011_09_26/2011_09_26_drive_0032_sync`, `raw_data/2011_09_26/2011_09_26_drive_0056_sync`\\]\n\n\n#### Cityscapes\n\nDownload the \"leftImg8bit\" dataset from [here](https://www.cityscapes-dataset.com/downloads/), and our depth files from [here](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_cityscapes_leftImg8bit_train_depth.zip).\nExtract all in `data/source/cityscapes`.\n\nYou should also consider downloading Cityscapes pre-computed particles simulations from [here](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_cityscapes_particles.zip) and extract files in `data/particles/cityscapes`. (This is mandatory if particles simulator is not set up)\n\nVerify that the following files exist: `data/source/cityscapes/leftImg8bit/train/aachen/aachen_000000_000019_leftImg8bit.png`, `data/source/cityscapes/leftImg8bit/train/depth/aachen/aachen_000000_000019_leftImg8bit.png` and `/data/particles/cityscapes/leftImg8bit/rain/25mm/` (if you downloaded particles files). Adjust your file structure if needed.\n\nTo generate rain of 25mm/hr fall rate on the first 2 frames of each sequence of Cityscapes, run:  \n`python main.py --dataset cityscapes --intensity 25 --frame_end 2`  \nAlternatively you can render only one sequence, for example with:  \n`python main.py --dataset cityscapes --sequences leftImg8bit/train/aachen --intensity 25 --frame_end 10`\n\nOutput will be located in `data/output/cityscapes`. Drop the `frame_end` argument to render the full rainy dataset or refer to the [Advanced usage](#advanced-usage) for more examples.\n\n#### nuScenes (coming up)\n\nRecent updates broke nuScenes compatibility, this will be resolved soon. Stay tuned.\nTo run our rendering on your custom dataset, you must provide a configuration file. 
\n\n\u003c!--\n**Prerequisite**  \nNote that nuScenes requires installing [nuscenes-devkit](https://github.com/nutonomy/nuscenes-devkit), which may be done with:\n```\npip install nuscenes-devkit\n```\n\nPlease download the full dataset (v1.0) from the [nuScenes](https://www.nuscenes.org/download) website and extract it.\n\nFor our experiments on nuScenes, we did not use all the images from all the camera sensors. We only used images in `samples/CAM_FRONT`, i.e. annotated key frames in the scene that were captured by the front camera.\n\nWe have made different splits from those images; we've separated rain from clear and removed all the night images. There are train and test images for both of these categories. The split files are in the [config/nuscenes/splits](config/nuscenes/splits) folder in json format.\n\nAll of the json files contain the `sample_data_tokens` key, which provides a list of tokens that each point to a specific `sample_data` in the nuScenes dataset. This is all handled by our dataset wrapper in `nusc_dataset.py`.  \n\n#### nuScenes-GAN\n\nAs mentioned in fig. 5 of the paper (see GAN+PBR pipeline), our GAN+PBR approach is straightforward and only requires images translated from clear to rain using a GAN-based approach such as [CycleGAN](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix). \n\nIf you want to test our GAN+PBR approach on nuScenes, you will need clear-rain translated images (such as in our [dataset zoo](#dataset-zoo)) as well as the metadata from nuScenes. The folder organisation of the GAN augmented data should be exactly the same as the organisation in the nuScenes folder (same subfolders and same filenames).\n\nThe same split files can be used for augmenting nuScenes-GAN images as for regular nuScenes.   
\n//--\u003e\n\n### Rendering rain on custom images\n\nRendering rain on custom images is easy but requires some preparation time.  \n\n### Preparation\nThe code requires your data to be organized as a _dataset_ (root folder) with _sequences_ (subfolders). Let's assume you have:\n```sh\ndata/source/DATASET/SEQUENCE/rgb/xxx.png           # Source images (color, 8 bits)\ndata/source/DATASET/SEQUENCE/depth/xxx.png         # Depth images (16 bits, with depth_in_meter = depth/256.)\n```\n(_optionally_ intrinsic calib files can be provided, in KITTI format.)\n\nTo run our rendering on your custom dataset, you must provide a configuration file. Configuration files are stored in `config/` and must be named after the dataset.\n\nThe dataset configuration file should expose two functions: \n* `resolve_paths(params)` which updates and returns the `params` dictionary, setting params.images\[sequence\], params.depth\[sequence\], and params.calib\[sequence\] to the images/depth/calib data paths for _all_ sequences of the dataset.\n* `settings()` which returns a dictionary with dataset settings (and optionally sequence-wise settings).\n \nHere is a sample configuration file `config/customdb.py`:\n```python\nimport os\n\ndef resolve_paths(params):\n    # List sequence paths (relative to the dataset folder)\n    # Let's just consider any subfolder to be a sequence\n    params.sequences = [x for x in os.listdir(params.images_root) if os.path.isdir(os.path.join(params.images_root, x))]\n    assert (len(params.sequences) \u003e 0), \"There are no valid sequence folders in the dataset root\"\n\n    # Set the source image directory\n    params.images = {s: os.path.join(params.dataset_root, s, 'rgb') for s in params.sequences}\n\n    # Set the calibration (KITTI format) directory IF ANY (optional)\n    params.calib = {s: None for s in params.sequences}\n\n    # Set the depth directory\n    params.depth = {s: os.path.join(params.dataset_root, s, 'depth') for s in 
params.sequences}\n\n    return params\n\ndef settings():\n    settings = {}\n\n    # Camera intrinsic parameters\n    settings[\"cam_hz\"] = 10               # Camera Hz (aka FPS)\n    settings[\"cam_CCD_WH\"] = [1242, 375]  # Camera CCD Width and Height (pixels)\n    settings[\"cam_CCD_pixsize\"] = 4.65    # Camera CCD pixel size (micrometers)\n    settings[\"cam_WH\"] = [1242, 375]      # Camera image Width and Height (pixels)\n    settings[\"cam_focal\"] = 6             # Focal length (mm)\n    settings[\"cam_gain\"] = 20             # Camera gain\n    settings[\"cam_f_number\"] = 6.0        # F-Number\n    settings[\"cam_focus_plane\"] = 6.0     # Focus plane (meter)\n    settings[\"cam_exposure\"] = 2          # Camera exposure (ms)\n\n    # Camera extrinsic parameters (right-handed coordinate system)\n    settings[\"cam_pos\"] = [1.5, 1.5, 0.3]     # Camera pos (meter)\n    settings[\"cam_lookat\"] = [1.5, 1.5, -1.]  # Camera look at vector (meter)\n    settings[\"cam_up\"] = [0., 1., 0.]         # Camera up vector (meter)\n\n    # Sequence-wise settings\n    settings[\"sequences\"] = {}\n    settings[\"sequences\"][\"seq1\"] = {}\n    settings[\"sequences\"][\"seq1\"][\"sim_mode\"] = \"normal\"\n    settings[\"sequences\"][\"seq1\"][\"sim_duration\"] = 10  # Duration of the rain simulation (sec)\n\n    return settings\n```\nThe `resolve_paths(params)` function parses the dataset root folder (here, `data/source/customdb`) to discover sequences (any subfolder) and assigns images/depth/calib paths for each sequence. For each sequence in this example, images are located in `rgb`, depth maps are in `depth`, and calib files are not provided (i.e. `None`).  \n\nImportantly, `settings[\"sequences\"]` is a sequence-wise dictionary, whose _keys_ may be any relative path to a sequence or a group of sequences. \nNote that sequences inherit dataset settings.  \nSequence-wise parameters allow defining custom settings for the simulation. 
\nIn the above example, this will simulate a rain event of 10 seconds. \nTo ensure temporal consistency, consecutive frames use consecutive simulation steps.\n\nSimulations can have fancy settings, such as camera motion speed (e.g. to mimic vehicle motion), varying rain fall rates (e.g. to mimic rain of different intensities), changing focal lengths/exposures, etc.\nYou may refer to the sample config files in `config/`.\n\n### Running\n\nOnce data and config are prepared, you may run the rendering with:  \n`python main.py --dataset customdb --intensity 25 --frame_end 10`  (replace \"**customdb**\" with your dataset name)\n\nOutput will be located in `data/output/customdb`.\n\n#### Notes\n* Particles simulations will be automatically generated on the first run and won't be re-generated afterwards. However, _some_ settings in `config/DATASET.py` may affect the physical simulator (e.g. camera focal _does_, camera gain _does not_).  You may need to use the `--force_particles` parameter to force re-running the particles simulation if you changed such settings.\n* If depth maps are smaller than images, we center-crop the images to match the depth maps (i.e. we assume depth had some padding).\n\n## Advanced usage\n\nYou can generate multiple rain fall rates at once if you provide comma-separated intensities. For example,   \n`python main.py --dataset kitti --intensity 1,5,10,20`  renders rain on all sequences of KITTI at 1, 5, 10, 20mm/hr\n\nYou can generate rain on multiple sequences using comma-separated sequences. For example,  \n`python main.py --dataset cityscapes --sequence leftImg8bit/train/aachen,leftImg8bit/train/bochum  --intensity 10,20`  renders 10mm and 20mm rain on the _aachen_ and _bochum_ sequences only\n\nYou can control which part of the sequence is rendered with the `--frame_*` parameters. 
For example,  \n`python main.py --dataset kitti --intensity 1,5 --frame_start 5 --frame_end 25`  generates rain on frames 5-25 of each sequence  \n`python main.py --dataset kitti --intensity 1,5 --frame_step 100`  generates every 100th frame of all sequences (extremely useful for a quick overview of a sequence)\n\n#### Multi-threaded rendering  \nRain rendering is quite slow. You can use multi-threaded rendering, which speeds it up significantly. For example,  \n`python main_threaded.py --dataset kitti --intensity 1,5,10,20,30 --frame_start 0 --frame_end 8`  (note all arguments are automatically passed to each main.py thread)\n\n**Known limitation:** conflicts may occur if multiple threaded renderers start the particles simulator at the same time. Hence, ensure particles simulation files are ready prior to multi-threaded rendering. \n\n#### Particles simulator\nParticles simulations can be computed in a multi-threaded manner, separately from our renderer. To do so, edit the bottom lines of `tools/particles_simulation.py` and run:  \n`python tools/particles_simulation.py` \n\n## Dataset zoo\n\n### Rainy versions of KITTI, Cityscapes, nuScenes\n\nTo download the rainy versions of the datasets, please visit our [ICCV'19 paper website](https://team.inria.fr/rits/computer-vision/weather-augment/).  
\n\n### Data to generate rain on KITTI, Cityscapes, nuScenes\n\nHere, we gather direct links to the data required to generate rain on popular datasets.\n\n| Data          | KITTI (Object detection)         | KITTI (Raw data)         | Cityscapes  | nuScenes | \n| ------------- | :-----------: | :-----------: | :---------: | :------: |\n| Images        | [link](http://www.cvlibs.net/download.php?file=data_object_image_2.zip)  | seq 0032 [link](https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0032/2011_09_26_drive_0032_sync.zip) \u003cbr\u003e seq 0056 [link](https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0056/2011_09_26_drive_0056_sync.zip)  | \"leftImg8bit\" from [link](https://www.cityscapes-dataset.com/downloads/)    | coming up |\n| Depth         | [link](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_kitti_data_object_training_image_2_depth.zip)  | seq 0032 [link](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_kitti_raw_data_2011_09_26_2011_09_26_drive_0032_sync_image_02_data_depth.zip) \u003cbr\u003e seq 0056 [link](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_kitti_raw_data_2011_09_26_2011_09_26_drive_0056_sync_image_02_data_depth.zip)  | [link](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_cityscapes_leftImg8bit_train_depth.zip)    | coming up |\n| Calibration   | [link](http://www.cvlibs.net/download.php?file=data_object_calib.zip)  | _included in images_  | - | - |\n| Particles*    | [link](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_kitti_data_object_particles.zip)| seq 0032 [link](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_kitti_raw_data_2011_09_26_2011_09_26_drive_0032_sync_particles.zip) 
\u003cbr\u003e seq 0056 [link](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_kitti_raw_data_2011_09_26_2011_09_26_drive_0056_sync_particles.zip)  | [link](https://www.rocq.inria.fr/rits_files/download.php?file=computer-vision/weather-augment/weather_cityscapes_particles.zip)    | coming up |\n\n\n\\* For each sequence, we provide 38 physical rain simulations at: 1, 2, 3, ..., 10, 15, 20, ..., 100, 110, 120, ..., 200mm/hr. This prevents re-running the physical simulations (which is slow). To use them, extract the files in `data/particles/DATASET`. When running the renderer, if the files are correctly located, \"All particles simulations ready\" should appear in the log.\n\u003c!--\n##### Additional data\nThis [~~link~~]() points to the 50 hand-made semantic segmentations for nuScenes. \n\n## Improving computer vision robustness to bad weather\n\nUsing the preceding datasets, we finetuned various networks to improve performance in bad weather. More details are available in the paper, but here is a figure with various results on real rain with a rain-aware YOLOv2 network.\n\n![alt text](doc/improved.png \"Rain-aware YOLOv2 results on real rain\")\n//--\u003e\n\n### License\n\nThe code is released under the [Apache 2.0 license](./LICENSE).  \nThe data in the dataset zoo are released under the [Creative Commons Attribution-NonCommercial-ShareAlike 3.0](https://creativecommons.org/licenses/by-nc-sa/3.0/).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fastra-vision%2Frain-rendering","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fastra-vision%2Frain-rendering","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fastra-vision%2Frain-rendering/lists"}