{"id":13761945,"url":"https://github.com/l1997i/DurLAR","last_synced_at":"2025-05-10T14:30:54.891Z","repository":{"id":152041971,"uuid":"391903651","full_name":"l1997i/DurLAR","owner":"l1997i","description":"(3DV 2021) A High-fidelity 128-channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-modal Autonomous Driving Applications","archived":false,"fork":false,"pushed_at":"2025-03-16T11:45:26.000Z","size":21681,"stargazers_count":39,"open_issues_count":1,"forks_count":1,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-03-16T12:37:06.618Z","etag":null,"topics":["3dvision","autonomous-driving","autonomous-vehicles","computer-vision","dataset","depth-estimation","durham","durlar","kitti","lidar","point-cloud","robotics"],"latest_commit_sha":null,"homepage":"","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/l1997i.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-08-02T10:21:09.000Z","updated_at":"2025-03-16T11:45:29.000Z","dependencies_parsed_at":null,"dependency_job_id":"6c9f8111-1e29-4ee3-801f-28626c13c421","html_url":"https://github.com/l1997i/DurLAR","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/l1997i%2FDurLAR","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/l1997i%2FDurLAR/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/l1997i%2FDurLAR/releases","manifests_url":"https://repos.ecosy
ste.ms/api/v1/hosts/GitHub/repositories/l1997i%2FDurLAR/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/l1997i","download_url":"https://codeload.github.com/l1997i/DurLAR/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253428226,"owners_count":21906874,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3dvision","autonomous-driving","autonomous-vehicles","computer-vision","dataset","depth-estimation","durham","durlar","kitti","lidar","point-cloud","robotics"],"created_at":"2024-08-03T14:00:31.872Z","updated_at":"2025-05-10T14:30:54.884Z","avatar_url":"https://github.com/l1997i.png","language":"Shell","readme":"[![Durham](https://img.shields.io/badge/UK-Durham-blueviolet)](https://durham-repository.worktribe.com/output/1138941/durlar-a-high-fidelity-128-channel-lidar-dataset-with-panoramic-ambient-and-reflectivity-imagery-for-multi-modal-autonomous-driving-applications)\n[![arXiv](https://img.shields.io/badge/arXiv-2406.10068-b31b1b.svg)](https://arxiv.org/abs/2406.10068)\n[![GitHub license](https://img.shields.io/badge/license-Apache2.0-blue.svg)](https://github.com/l1997i/lim3d/blob/main/LICENSE)\n![Stars](https://img.shields.io/github/stars/l1997i/durlar?style=social)\n\n# DurLAR: A High-Fidelity 128-Channel LiDAR Dataset\n\n\u003c!-- ![DurLAR](https://github.com/l1997i/DurLAR/blob/main/head.png?raw=true) --\u003e\n\nhttps://github.com/l1997i/DurLAR/assets/35445094/2c6d4056-a6de-4fad-9576-693efe2860f0\n\n## News\n\n- [2025/03/16] We provide the [download 
link](https://huggingface.co/datasets/l1997i/DurLAR) for our DurLAR dataset hosted on Hugging Face, along with the corresponding [download script](durlar_hf_download_script.sh).\n- [2024/12/05] We provide the **intrinsic parameters** of our OS1-128 LiDAR [[download]](os1-128.json).\n\n## Sensor placement\n\n- **LiDAR**: [Ouster OS1-128 LiDAR sensor](https://ouster.com/products/os1-lidar-sensor/) with 128-channel vertical resolution\n\n- **Stereo Camera**: [Carnegie Robotics MultiSense S21 stereo camera](https://carnegierobotics.com/products/multisense-s21/) with grayscale, colour, and IR enhanced imagers, 2048x1088 @ 2MP resolution\n\n- **GNSS/INS**: [OxTS RT3000v3](https://www.oxts.com/products/rt3000-v3/) global navigation satellite and inertial navigation system, supporting localization from GPS, GLONASS, BeiDou, Galileo, PPP and SBAS constellations\n\n- **Lux Meter**: [Yocto Light V3](http://www.yoctopuce.com/EN/products/usb-environmental-sensors/yocto-light-v3), a USB ambient light sensor (lux meter), measuring ambient light up to 100,000 lux\n\n\n## Panoramic Imagery\n\n\u003cbr\u003e\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"https://github.com/l1997i/DurLAR/blob/main/reflect_center.gif?raw=true\" width=\"100%\"/\u003e\n    \u003ch5 id=\"title\" align=\"center\"\u003eReflectivity imagery\u003c/h5\u003e\n\u003c/p\u003e\n\n\u003cbr\u003e\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"https://github.com/l1997i/DurLAR/blob/main/ambient_center.gif?raw=true\" width=\"100%\"/\u003e\n    \u003ch5 id=\"title\" align=\"center\"\u003eAmbient imagery\u003c/h5\u003e\n\u003c/p\u003e\n\n\n## File Description\n\nEach drive folder contains 8 topics for each frame in the DurLAR dataset:\n\n- `ambient/`: panoramic ambient imagery\n- `reflec/`: panoramic reflectivity imagery\n- `image_01/`: right camera (grayscale+synced+rectified)\n- `image_02/`: left RGB camera (synced+rectified)\n- `ouster_points`: Ouster LiDAR point cloud (KITTI-compatible binary format)\n- `gps`, 
`imu`, `lux`: csv file format\n\nThe structure of the provided DurLAR full dataset zip file,  \n\n```\nDurLAR_\u003cdate\u003e/  \n├── ambient/  \n│   ├── data/  \n│   │   └── \u003cframe_number.png\u003e   [ ..... ]   \n│   └── timestamp.txt  \n├── gps/  \n│   └── data.csv  \n├── image_01/  \n│   ├── data/  \n│   │   └── \u003cframe_number.png\u003e   [ ..... ]   \n│   └── timestamp.txt  \n├── image_02/  \n│   ├── data/  \n│   │   └── \u003cframe_number.png\u003e   [ ..... ]   \n│   └── timestamp.txt  \n├── imu/  \n│   └── data.csv  \n├── lux/  \n│   └── data.csv  \n├── ouster_points/  \n│   ├── data/  \n│   │   └── \u003cframe_number.bin\u003e   [ ..... ]   \n│   └── timestamp.txt  \n├── reflec/  \n│   ├── data/  \n│   │   └── \u003cframe_number.png\u003e   [ ..... ]   \n│   └── timestamp.txt  \n└── readme.md                    [ this README file ]  \n```  \n\nThe structure of the provided calibration zip file,  \n\n```\nDurLAR_calibs/  \n├── calib_cam_to_cam.txt              [ Camera to camera calibration results ]   \n├── calib_imu_to_lidar.txt            [ IMU to LiDAR calibration results ]   \n└── calib_lidar_to_cam.txt            [ LiDAR to camera calibration results ]   \n```\n\n## Get Started\n\n- [A quick look: the **exemplar dataset** (600 frames)](https://huggingface.co/datasets/l1997i/DurLAR_S)\n- [**The full dataset** hosted on Hugging Face](https://huggingface.co/datasets/l1997i/DurLAR)\n- [Download the **calibration files**](https://github.com/l1997i/DurLAR/raw/main/DurLAR_calibs.zip)  \n- [Download the **calibration files** (v2, targetless)](https://github.com/l1997i/DurLAR/raw/main/DurLAR_calibs_v2.zip) \n- [Download the **exemplar ROS bag** (for targetless calibration)](https://durhamuniversity-my.sharepoint.com/:f:/g/personal/mznv82_durham_ac_uk/Ei28Yy-Gb_BKoavvJ6R_jLcBfTZ_xM5cZhEFgMFNK9HhyQ?e=rxPgI9)\n- [Download the **exemplar dataset** (600 frames)](https://collections.durham.ac.uk/collections/r2gq67jr192)\n- [Download the **full 
dataset**](https://github.com/l1997i/DurLAR?tab=readme-ov-file#access-for-the-full-dataset) (Fill in the form to request access to the full dataset)\n\n\u003e Note that [we did not include CSV header information](https://github.com/l1997i/DurLAR/issues/9) in the [**exemplar dataset** (600 frames)](https://collections.durham.ac.uk/collections/r2gq67jr192). You can refer to [Header of csv files](https://github.com/l1997i/DurLAR?tab=readme-ov-file#header-of-csv-files) to get the first line of the `csv` files.\n\n\u003e **calibration files** (v2, targetless): Following the publication of the DurLAR dataset and the corresponding paper, we identified a more advanced [targetless calibration method](https://github.com/koide3/direct_visual_lidar_calibration) ([#4](https://github.com/l1997i/DurLAR/issues/4)) that surpasses the LiDAR-camera calibration technique previously employed. We provide an [**exemplar ROS bag**](https://durhamuniversity-my.sharepoint.com/:f:/g/personal/mznv82_durham_ac_uk/Ei28Yy-Gb_BKoavvJ6R_jLcBfTZ_xM5cZhEFgMFNK9HhyQ?e=rxPgI9) for [targetless calibration](https://github.com/koide3/direct_visual_lidar_calibration), along with the corresponding [calibration results (v2)](https://github.com/l1997i/DurLAR/raw/main/DurLAR_calibs_v2.zip). Please refer to the [Appendix (arXiv)](https://arxiv.org/pdf/2406.10068) for more details.\n\n### Prerequisites\n\n- Before starting, ensure you have the [**Hugging Face CLI (huggingface_hub)**](https://huggingface.co/docs/huggingface_hub/en/guides/cli) installed:\n\n  Install it via: ```pip install -U \"huggingface_hub[cli]\"```\n\n- Since the **DurLAR dataset** is a [gated (restricted access) dataset](https://huggingface.co/docs/hub/datasets-gated) on Hugging Face, you need to authenticate before downloading it.\n\n  - You first need a Hugging Face account. 
If you don’t have one, please register.\n  - Authenticate via the command line **on the computer where you want to download the dataset** by entering: `huggingface-cli login`. Follow the instructions of that command; it will prompt you for your Hugging Face **Access Token**.\n  - Open [this link](https://huggingface.co/datasets/l1997i/DurLAR) and log in to your Hugging Face account. At the top of the page, in the section **“You need to agree to share your contact information to access this dataset”**, agree to the conditions and access the dataset content. If you have already agreed and been automatically granted access, the page will display: **“Gated dataset: You have been granted access to this dataset.”**\n\n\u003e **Note**: The Hugging Face service may be restricted in certain countries or regions. In such cases, you may consider using a Hugging Face mirror as an alternative.\n\n### Download the dataset using scripts (recommended)\n\nWe provide a [pre-written Bash script](durlar_hf_download_script.sh) to download the dataset from Hugging Face. You need to manually modify the **User Configuration (Modify as Needed)** section at the beginning of [durlar_hf_download_script.sh](durlar_hf_download_script.sh) to match your desired paths and features.\n\nIf you encounter any issues (e.g., network problems or unexpected interruptions), you can also modify this script to fit your needs. For example, if the extraction process is interrupted, you can manually comment out the dataset download section and resume from extraction.\n\n### Manually download the dataset (fallback)\n\nIf the above script does not work, you can download the dataset manually. This option is only a fallback in case the script fails and is **not recommended**.\n\nTo download all dataset files from Hugging Face, use:\n\n```bash\nhuggingface-cli download l1997i/DurLAR --repo-type dataset --local-dir [YOUR_DATASET_DOWNLOAD_PATH]\n```\n\nThe parameter `--local-dir` is optional. 
For example,\n\n```bash\n# Example 1: The recommended (and default) way to download files from the Hub is to use the cache system, without --local-dir specified\nhuggingface-cli download l1997i/DurLAR --repo-type dataset\n\n# Example 2: In some cases you want to download files and move them to a specific folder. You can do that using the --local-dir option.\nhuggingface-cli download l1997i/DurLAR --repo-type dataset --local-dir /home/my_username/datasets/DurLAR_tar/\n```\n\n### Merge and Extract the Dataset\n\nReassemble the `.tar` parts:\n\n```bash\ncd /home/my_username/datasets/DurLAR_tar/ # Enter the dataset local-dir folder. Note: Your path may be different from mine. Please adjust it accordingly.\ncat DurLAR_dataset.tar* \u003e DurLAR_dataset_full.tar\n```\n\nExtract the full dataset:\n\n```bash\ntar -xvf DurLAR_dataset_full.tar -C /your/desired/path\n```\n\nOptionally, if you have **pigz** installed, you can extract using **multi-threaded pigz** for faster speed:\n\n```bash\nwhich pigz # it should correctly return the path of pigz\n\ntar --use-compress-program=\"pigz -p $(nproc)\" -xvf DurLAR_dataset_full.tar -C /your/desired/path\n```\n\n### Cleanup (Optional)\n\nOnce extracted, delete the archive parts to free up space:\n\n```bash\nrm -rf DurLAR_tar # Note: Your path may be different from mine. Please adjust it accordingly.\n```\n\n---\n\n**Now you will find the fully extracted DurLAR dataset in `/your/desired/path`.**\n\n## [Outdated] Early download options for the dataset (not recommended)\n\n*The following methods were the early download options for the DurLAR dataset*. While they are still functional, they may suffer from drawbacks such as slow download speeds (the host server is in the UK), temporary connection bans, and lack of download resumption. Therefore, we no longer recommend using these methods.\n\nAccess to the complete DurLAR dataset can be requested through **one** of the following ways. 
\n\n[1. Access for the full dataset](https://forms.gle/ZjSs3PWeGjjnXmwg9)\n\n[2. Request access to the full dataset (Chinese form)](https://wj.qq.com/s2/9459309/4cdd/)\n\n\n### [Outdated] Usage of the download script (not recommended)\n\nUpon completion of the form, the download script `durlar_download` and accompanying instructions will be **automatically** provided. The DurLAR dataset can then be downloaded via the command line.\n\nFor the first use, it is likely that the `durlar_download` file will need to be made executable:\n\n```bash\nchmod +x durlar_download\n```\n\nBy default, this script downloads the small subset for simple testing. Use the following command:\n\n```bash\n./durlar_download\n```\n\nIt is also possible to select and download various test drives:\n\n```\nusage: ./durlar_download [dataset_sample_size] [drive]\ndataset_sample_size = [ small | medium | full ]\ndrive = 1 ... 5\n```\n\nGiven the substantial size of the DurLAR dataset, please download the complete dataset only when necessary:\n\n```bash\n./durlar_download full 5\n```\n\nThroughout the entire download process, it is important that your network remains stable and free from any interruptions. In the event of network issues, please delete all DurLAR dataset folders and rerun the download script. Currently, our script supports only Ubuntu (tested on Ubuntu 18.04 and Ubuntu 20.04, amd64). For downloading the DurLAR dataset on other operating systems, please refer to [Durham Collections](https://collections.durham.ac.uk/collections/r2gq67jr192) for instructions.\n\n## CSV format for `imu`, `gps`, and `lux` topics\n\n### Format description\n\nOur `imu`, `gps`, and `lux` data are all in `CSV` format. The **first row** of the `CSV` file contains headers that **describe the meaning of each column**. Taking the `imu` csv file as an example (only the first 9 columns are described):\n\n1. `%time`: Timestamps in Unix epoch format.\n2. `field.header.seq`: Sequence numbers.\n3. 
`field.header.stamp`: Header timestamps.\n4. `field.header.frame_id`: Frame of reference, labeled as \"gps\".\n5. `field.orientation.x`: X-component of the orientation quaternion.\n6. `field.orientation.y`: Y-component of the orientation quaternion.\n7. `field.orientation.z`: Z-component of the orientation quaternion.\n8. `field.orientation.w`: W-component of the orientation quaternion.\n9. `field.orientation_covariance0`: Covariance of the orientation data.\n\n![image](https://github.com/l1997i/DurLAR/assets/35445094/18c1e563-c137-44ba-9834-345120026db0)\n\n### Header of `csv` files\n\nThe first line of the `csv` files is shown as follows.\n\nFor the GPS, \n```csv\ntime,field.header.seq,field.header.stamp,field.header.frame_id,field.status.status,field.status.service,field.latitude,field.longitude,field.altitude,field.position_covariance0,field.position_covariance1,field.position_covariance2,field.position_covariance3,field.position_covariance4,field.position_covariance5,field.position_covariance6,field.position_covariance7,field.position_covariance8,field.position_covariance_type\n```\n\nFor the IMU, 
\n```csv\ntime,field.header.seq,field.header.stamp,field.header.frame_id,field.orientation.x,field.orientation.y,field.orientation.z,field.orientation.w,field.orientation_covariance0,field.orientation_covariance1,field.orientation_covariance2,field.orientation_covariance3,field.orientation_covariance4,field.orientation_covariance5,field.orientation_covariance6,field.orientation_covariance7,field.orientation_covariance8,field.angular_velocity.x,field.angular_velocity.y,field.angular_velocity.z,field.angular_velocity_covariance0,field.angular_velocity_covariance1,field.angular_velocity_covariance2,field.angular_velocity_covariance3,field.angular_velocity_covariance4,field.angular_velocity_covariance5,field.angular_velocity_covariance6,field.angular_velocity_covariance7,field.angular_velocity_covariance8,field.linear_acceleration.x,field.linear_acceleration.y,field.linear_acceleration.z,field.linear_acceleration_covariance0,field.linear_acceleration_covariance1,field.linear_acceleration_covariance2,field.linear_acceleration_covariance3,field.linear_acceleration_covariance4,field.linear_acceleration_covariance5,field.linear_acceleration_covariance6,field.linear_acceleration_covariance7,field.linear_acceleration_covariance8\n```\n\nFor the LUX,\n```csv\ntime,field.header.seq,field.header.stamp,field.header.frame_id,field.illuminance,field.variance\n```\n\n### To process the `csv` files\n\nThere are several ways to process the `csv` files. 
For example,\n\n**Python**: Use the pandas library to read the CSV file with the following code:\n```python\nimport pandas as pd\ndf = pd.read_csv('data.csv')  # the first row is parsed as the column headers\nprint(df)\n```\n\n**Text Editors**: Simple text editors like `Notepad` (Windows) or `TextEdit` (Mac) can also open `CSV` files, though they are less suited for data analysis.\n\n\n## Folder \\#Frame Verification\n\nFor easy verification of folder data and integrity, we provide the number of frames in each drive folder, as well as the [MD5 checksums](https://collections.durham.ac.uk/collections/r2gq67jr192?utf8=%E2%9C%93\u0026cq=MD5\u0026sort=) of the zip files.\n\n| Folder   | # of Frames |\n|----------|-------------|\n| 20210716 | 41993       |\n| 20210901 | 23347       |\n| 20211012 | 28642       |\n| 20211208 | 26850       |\n| 20211209 | 25079       |\n|**total** | **145911**  |\n\n## Intrinsic Parameters of Our Ouster OS1-128 LiDAR\n\nThe intrinsic JSON file of our LiDAR can be downloaded at [this link](os1-128.json). For more information, visit the [official user manual of OS1-128](https://data.ouster.io/downloads/software-user-manual/firmware-user-manual-v3.1.0.pdf).\n\nPlease note that **sensitive information, such as the serial number and unique device ID, has been redacted** (indicated as XXXXXXX).\n\n---\n\n## Reference\n\nIf you are making use of this work in any way (including our dataset and toolkits), you must reference the following paper in any report, publication, presentation, software release or any other associated materials:\n\n[DurLAR: A High-fidelity 128-channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-modal Autonomous Driving Applications](https://dro.dur.ac.uk/34293/)\n(Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon), In Int. Conf. 3D Vision, 2021. 
[[pdf](https://www.l1997i.com/assets/pdf/li21durlar_arxiv_compressed.pdf)] [[video](https://youtu.be/1IAC9RbNYjY)][[poster](https://www.l1997i.com/assets/pdf/li21durlar_poster_v2_compressed.pdf)]\n\n```\n@inproceedings{li21durlar,\n author = {Li, L. and Ismail, K.N. and Shum, H.P.H. and Breckon, T.P.},\n title = {DurLAR: A High-fidelity 128-channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-modal Autonomous Driving Applications},\n booktitle = {Proc. Int. Conf. on 3D Vision},\n year = {2021},\n month = {December},\n publisher = {IEEE},\n keywords = {autonomous driving, dataset, high-resolution LiDAR, flash LiDAR, ground truth depth, dense depth, monocular depth estimation, stereo vision, 3D},\n category = {automotive 3Dvision},\n}\n```\n---\n","funding_links":[],"categories":["Summary Table"],"sub_categories":["Update: 2023-07-12"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fl1997i%2FDurLAR","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fl1997i%2FDurLAR","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fl1997i%2FDurLAR/lists"}