{"id":19529270,"url":"https://github.com/isl-org/objects-with-lighting","last_synced_at":"2025-04-26T11:34:34.423Z","repository":{"id":217653665,"uuid":"726602268","full_name":"isl-org/objects-with-lighting","owner":"isl-org","description":"Repository for the Objects With Lighting Dataset","archived":false,"fork":false,"pushed_at":"2024-08-13T09:08:07.000Z","size":2001,"stargazers_count":36,"open_issues_count":1,"forks_count":1,"subscribers_count":11,"default_branch":"main","last_synced_at":"2024-08-13T18:52:46.701Z","etag":null,"topics":["3d","benchmark","dataset","inverse-rendering"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/isl-org.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-12-02T20:58:24.000Z","updated_at":"2024-08-13T08:54:15.000Z","dependencies_parsed_at":"2024-01-17T18:58:38.249Z","dependency_job_id":"bea56396-6f1c-4284-98d0-15806bf3c01e","html_url":"https://github.com/isl-org/objects-with-lighting","commit_stats":null,"previous_names":["isl-org/objects-with-lighting"],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/isl-org%2Fobjects-with-lighting","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/isl-org%2Fobjects-with-lighting/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/isl-org%2Fobjects-with-lighting/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/isl-org%2Fobje
cts-with-lighting/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/isl-org","download_url":"https://codeload.github.com/isl-org/objects-with-lighting/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":224033531,"owners_count":17244616,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3d","benchmark","dataset","inverse-rendering"],"created_at":"2024-11-11T01:23:13.565Z","updated_at":"2025-04-26T11:34:34.417Z","avatar_url":"https://github.com/isl-org.png","language":"Python","readme":"# Objects with Lighting\n\n## [Paper](https://arxiv.org/abs/2401.09126)\n\nThis repo is the code distribution for the _Objects with Lighting_ dataset.\nIt contains the evaluation script (`scripts/evaluate.py`) and the tools used for building the dataset.\n\nIf you find the data or code useful, please cite\n```bibtex\n@inproceedings{Ummenhofer2024OWL,\n  author       = {Benjamin Ummenhofer and\n                  Sanskar Agrawal and\n                  Rene Sep{\'{u}}lveda and\n                  Yixing Lao and\n                  Kai Zhang and\n                  Tianhang Cheng and\n                  Stephan R. 
Richter and\n                  Shenlong Wang and\n                  Germ{\\'{a}}n Ros},\n  title        = {Objects With Lighting: {A} Real-World Dataset for Evaluating Reconstruction\n                  and Rendering for Object Relighting},\n  booktitle    = {3DV},\n  publisher    = {{IEEE}},\n  year         = {2024}\n}\n```\n\n## Downloads\n\nPlease download the dataset from the current release or use the links below for the latest version. Extracting the files in the repository root will create the `dataset` directory.\n\n- [objects-with-lighting-dataset-v1_1.tgz](https://github.com/isl-org/objects-with-lighting/releases/download/v1/objects-with-lighting-dataset-v1_1.tgz)\n- [objects-with-lighting-dataset-v1_2.tgz](https://github.com/isl-org/objects-with-lighting/releases/download/v1/objects-with-lighting-dataset-v1_2.tgz)\n\n```bash\nwget https://github.com/isl-org/objects-with-lighting/releases/download/v1/objects-with-lighting-dataset-v1_{1,2}.tgz\n```\n\n## Directory structure\n\n```\n├─ calibration      # Data files for calibration and generated calibration parameters\n├─ dataset          # Contains the data meant for consumption\n├─ docs             # Images and markdown for documentation\n├─ methods          # Documentation and scripts for the baseline method and other state-of-the-art methods\n├─ scripts          # This dir contains scripts for creating data and evaluation\n├─ utils            # Utility python modules used by all scripts\n```\n\n## Evaluation script\n\nThe script `scripts/evaluate.py` can be used to compute the common metrics PSNR, SSIM, LPIPS for predicted images.\nThe results will be stored in a json file.\n - Supported image file formats are `.png`, `.exr`, and `.npy`.\n - We assume `.exr` and `.npy` files store unclipped linear images, while `.png` stores values after applying the tonemapping as used for the dataset.\n - For linear images the evaluation script computes the optimal exposure value minimizing the least squares error 
before computing the error metrics.\n - The predicted images have to be stored with the same folder structure as the dataset and should be named `pr_image_xxxx.{npy,exr,png}`.\n\nThe script can be invoked as \n```bash\npython scripts/evaluate.py -p path/to/predictions results.json\n```\n\n## Dataset format\n\n```\n├─ dataset\n    ├─ object_name     # The dataset is grouped into objects \n        ├─ test        # Files in the test dir are meant for evaluation\n            ├─ inputs  # The inputs dir contains all files that are allowed to be used by the methods\n```\n\nEach of the test directories contains the following data.\n\n| File | Description |\n| --- | --- |\n| `inputs/`  | This directory contains all data for reconstructing the object. For a fair evaluation only data inside this folder may be used. |\n| `inputs/image_xxxx.png`  | Image files with 8-bit RGB images after tonemapping. |\n| `inputs/camera_xxxx.txt` | Camera parameters for the corresponding image file. |\n| `inputs/mask_xxxx.png` | An approximate mask for methods that require it. |\n| `inputs/exposure.txt`    | The exposure value that has been used in the tonemapping. |\n| `inputs/object_bounding_box.txt`  | The axis aligned bounding box of the object. This box is not a tight bounding box. |\n| `env.hdr`    | An equirectangular image of the environment where the input images have been taken. This image is provided for debugging purposes and should not be used for reconstruction or evaluation. |\n| `env_512_rotated.hdr` | This environment map is downscaled to 1024x512 and has been rotated with the 'world_to_env' transform for easier usage. This image is provided for debugging purposes and should not be used for reconstruction or evaluation. |\n| `world_to_env.txt` | The 4x4 world to camera transform that transforms a point into the coordinate system of the equirectangular image 'env.hdr'. |\n| `gt_image_xxxx.png` | A ground truth image used in evaluation. 
|\n| `gt_camera_xxxx.txt` | The corresponding camera parameters for a ground truth image. |\n| `gt_mask_xxxx.png` | The mask used for evaluation. Valid pixels are marked with the value 255. |\n| `gt_exposure_xxxx.txt` | The exposure used in the tonemapping of the corresponding ground truth image. |\n| `gt_env_xxxx.hdr` | An equirectangular image of the environment where the corresponding ground truth image was taken. |\n| `gt_world_to_env_xxxx.txt` | The 4x4 world to camera transform that transforms a point into the coordinate system of the equirectangular image 'gt_env_xxxx.hdr'. |\n| `gt_env_512_rotated_xxxx.hdr` | This environment map is downscaled to 1024x512 and has been rotated with the 'world_to_env' transform for easier usage. |\n\n### Tone mapping\n\nWe generate the tonemapped 8-bit images with the following function.\n\n$$ y = 255 (x 2^\\text{exposure})^\\gamma $$\n\nWe use $\\gamma=1/2.2$ for all images in the dataset.\nThe exposure values used may differ for input and test images. 
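As an illustration, the tonemapping function above can be sketched in numpy. The function name `tonemap` is ours, not part of the repo; `x` is an unclipped linear image and `exposure` is the scalar read from the corresponding `exposure.txt`.

```python
import numpy as np

def tonemap(x, exposure, gamma=1.0 / 2.2):
    # y = 255 * (x * 2^exposure)^gamma, then clip to [0, 255] and quantize to 8 bit.
    y = 255.0 * np.power(np.clip(x, 0.0, None) * 2.0 ** exposure, gamma)
    return np.clip(y, 0.0, 255.0).astype(np.uint8)
```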
The exposure values can be found in the corresponding `exposure.txt` files.\nThe values $y$ are clipped to the range 0 to 255.\n\n### `.txt` files\n\n#### Camera parameters\nThe camera parameters are defined by the intrinsic matrix $K$ and the extrinsics $R,t$.\nThe extrinsics are a world to camera transformation (world2cam).\nThe intrinsics describe a pinhole camera and the projection follows the OpenCV convention.\nWe can project a 3D point $X$ to image coordinates with \n\n$$ x = K(R X + t) $$\n\nNote that $x$ is in homogeneous coordinates.\nThe camera parameters are stored in the `*camera_xxxx.txt` files in the following format.\n```\nk11 k12 k13\nk21 k22 k23\nk31 k32 k33\nr11 r12 r13\nr21 r22 r23\nr31 r32 r33\ntx  ty  tz\nwidth height channels\n```\n\nThe following snippet can be used to parse the file with numpy.\n```python\nimport numpy as np\n\nparams = np.loadtxt('path/to/camera_xxxx.txt')\nK, R, t, (width, height, channels) = params[:3], params[3:6], params[6], params[7].astype(int)\n```\n\n#### Object bounding box\nThe `object_bounding_box.txt` files describe an axis aligned bounding box.\nThe format used in the text file is\n```\nxmin xmax ymin ymax zmin zmax\n```\n\n#### World to environment map transforms\nThe `*world_to_env*.txt` files describe a transformation from the world coordinate system into the coordinate system of the omnidirectional camera that captures the environment.\nThe text file stores a 4x4 transformation matrix and transforms a homogeneous 3D point to the camera coordinate system.\nUsually we make the assumption that the environment is infinitely far away from the object and we are only interested in directions.\nIn this case, only the rotational part of the 4x4 matrix in the upper left corner is used.\nWith $R$ as the rotation and $t$ as the translation, the format of the text file is\n```\nr11 r12 r13 tx\nr21 r22 r23 ty\nr31 r32 r33 tz\n0   0   0   1\n```\n\n#### Exposure\nThe exposure values are single scalars stored in the `*exposure*.txt` 
files.\n\n## Coordinate systems\n\nThe dataset uses right-handed coordinate systems.\n\n### Cameras\nCameras look in positive z-direction.\nThe intrinsic and extrinsic camera parameters can be used to directly project a 3D point $X$ to image space coordinates.\n\n$$ x = K(R X + t) $$\n\n$x$ is a homogeneous point describing a position in the image.\nThe extrinsics are a world to camera transformation (world2cam); the intrinsics and the projection follow the OpenCV conventions.\n\n### Images\nThe x-axis for images points to the right and the y-axis points down following the memory order.\nThe coordinates $(x,y)$ of the top left corner are $(0,0)$.\nThe center of the first pixel is at $(0.5, 0.5)$.\nThe bottom right corner for an image with width $w$ and height $h$ is at $(w, h)$.\n\n\n### Environment maps\nEnvironment maps are stored as equirectangular images. \nSimilar to regular images, the u-axis points to the right and the v-axis points down following the memory order.\nWe use a normalized coordinate system. 
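As a sketch, the mapping from a unit direction to normalized $(u,v)$ under the convention described below could look like the following. The helper `direction_to_uv` is illustrative and not part of the repo; it assumes a unit-length input direction.

```python
import math

def direction_to_uv(x, y, z):
    # Map a unit direction to normalized equirectangular (u, v):
    # +Z -> top border (v=0), -Z -> bottom border (v=1),
    # +X -> horizontal center (u=0.5), +Y -> u=0.25, -Y -> u=0.75.
    u = (0.5 - math.atan2(y, x) / (2.0 * math.pi)) % 1.0
    v = math.acos(max(-1.0, min(1.0, z))) / math.pi
    return u, v
```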
\nThe coordinates $(u,v)$ of the top left corner are $(0,0)$.\nThe bottom right corner is at $(1, 1)$ irrespective of the size of the environment map.\nThis corresponds to the texture coordinate convention used by DirectX.\n\n![Equirectangular uv coordinates](docs/images/equirect_coords1.png)\n\nDirections map to the equirectangular image as shown in the image below.\nThe direction +Z $(0,0,1)$ maps to the upper border of the environment map and -Z $(0,0,-1)$ to the lower border.\n+X $(1,0,0)$ maps to the center and -X $(-1,0,0)$ maps to the vertically centered point on the right and left border.\n+Y $(0,1,0)$ and -Y $(0,-1,0)$ map to the uv coordinates $(0.25,0.5)$ and $(0.75,0.5)$ respectively.\n\n![Equirectangular xyz directions](docs/images/equirect_coords2.png)\n\nThe following shows the left half of the environment map mapped to a sphere.\n![xyz directions with sphere mapped equirectangular image](docs/images/sphere_directions.png)\n\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fisl-org%2Fobjects-with-lighting","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fisl-org%2Fobjects-with-lighting","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fisl-org%2Fobjects-with-lighting/lists"}