{"id":18339893,"url":"https://github.com/mxagar/open3d_guide","last_synced_at":"2025-07-03T22:37:48.248Z","repository":{"id":236799488,"uuid":"793173663","full_name":"mxagar/open3d_guide","owner":"mxagar","description":"My personal guide to the great Python library Open3D.","archived":false,"fork":false,"pushed_at":"2025-02-03T17:25:22.000Z","size":19204,"stargazers_count":9,"open_issues_count":0,"forks_count":2,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-04-06T05:36:26.843Z","etag":null,"topics":["3d","computer-vision","icp","open3d","point-cloud","vision"],"latest_commit_sha":null,"homepage":"","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/mxagar.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-04-28T16:27:40.000Z","updated_at":"2025-02-15T14:27:46.000Z","dependencies_parsed_at":"2024-05-02T09:27:04.172Z","dependency_job_id":"f617ed4a-e3a5-4d69-8b45-724c0cdf318e","html_url":"https://github.com/mxagar/open3d_guide","commit_stats":null,"previous_names":["mxagar/open3d_guide"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/mxagar/open3d_guide","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mxagar%2Fopen3d_guide","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mxagar%2Fopen3d_guide/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mxagar%2Fopen3d_guide/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mxagar%2Fopen3d_guide/manifests
","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mxagar","download_url":"https://codeload.github.com/mxagar/open3d_guide/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mxagar%2Fopen3d_guide/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":263415911,"owners_count":23463108,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3d","computer-vision","icp","open3d","point-cloud","vision"],"created_at":"2024-11-05T20:19:46.618Z","updated_at":"2025-07-03T22:37:48.233Z","avatar_url":"https://github.com/mxagar.png","language":"Jupyter Notebook","readme":"# Open3D Guide\n\nMy personal guide to the great Python library [Open3D](https://www.open3d.org/).\n\nI followed these resources:\n\n- [**Open3D Basic Tutorial**](https://www.open3d.org/docs/latest/tutorial/Basic/index.html)\n- The [Open3D Official Documentation](https://www.open3d.org/docs/release/index.html)\n- [Open3D Python Tutorial, by Nicolai Nielsen](https://www.youtube.com/watch?v=zF3MreN1w6c\u0026list=PLkmvobsnE0GEZugH1Di2Cr_f32qYkv7aN)\n- [pointcloud_tutorial, by Jeff Delmerico](https://github.com/mxagar/pointcloud_tutorial)\n- [3D Data Processing with Open3D](https://towardsdatascience.com/3d-data-processing-with-open3d-c3062aadc72e)\n\nAlso, look at this [Point Cloud Library (PCL)](https://pointclouds.org/) compilation of mine, where the below listed topics are shown using PCL:\n\n[mxagar/tool_guides/pcl](https://github.com/mxagar/tool_guides/tree/master/pcl)\n\n- Point cloud creation and management.\n- Point (and 
cloud) feature exploration: normals, PFHs, moments, etc.\n- Filtering and segmentation: voxel grid filtering, projection, outlier removal, RANSAC, shape segmentation, etc.\n- Registration, i.e., Matching: ICP.\n- Surface processing: resampling, convex hulls, projections, triangulation, etc.\n- Visualization: normals, coordinate systems, etc.\n- Data structures: KD-tree, voxelmaps and octrees, etc.\n\n## Table of Contents\n\n- [Open3D Guide](#open3d-guide)\n  - [Table of Contents](#table-of-contents)\n  - [Setup and File Structure](#setup-and-file-structure)\n    - [How to Use the Repository Contents](#how-to-use-the-repository-contents)\n    - [Known Issues](#known-issues)\n  - [1. Introduction and File IO](#1-introduction-and-file-io)\n  - [2. Point Clouds](#2-point-clouds)\n  - [3. Meshes](#3-meshes)\n  - [4. Transformations](#4-transformations)\n  - [5. Rest of Modules](#5-rest-of-modules)\n    - [RGBD Images and Odometry](#rgbd-images-and-odometry)\n    - [Visualization](#visualization)\n    - [KDTree](#kdtree)\n    - [ICP Registration](#icp-registration)\n    - [Working with Numpy](#working-with-numpy)\n    - [Tensor](#tensor)\n    - [Voxelization](#voxelization)\n  - [6. 
Use Cases](#5-use-cases)\n    - [Capturing 3D Models with Your Phone](#capturing-3d-models-with-your-phone)\n    - [3D-2D-3D Projection of a Scene](#3d-2d-3d-projection-of-a-scene)\n  - [Authorship](#authorship)\n\n## Setup and File Structure\n\nIf you already have a dedicated Python environment, just install Open3D via pip:\n\n```bash\n# I created this guide using version 0.18 (Windows 11) and 0.16.1 (Apple M1)\npip install open3d\n```\n\nIf you don't have a dedicated Python environment yet, a quick recipe for getting started using [conda](https://conda.io/projects/conda/en/latest/index.html) is the following:\n\n```bash\n# Set proxy, if required\n\n# Create environment, e.g., with conda, to control Python version\nconda create -n 3d python=3.10 pip\nconda activate 3d\n\n# Install pip-tools\npython -m pip install -U pip-tools\n\n# Generate pinned requirements.txt\npip-compile requirements.in\n\n# Sync/Install (missing) pinned requirements\npip-sync requirements.txt\n\n# Alternatively: you can install pinned requirements with pip, as always\npython -m pip install -r requirements.txt\n\n# If required, add new dependencies to requirements.in and sync,\n# i.e., update the environment\npip-compile requirements.in\npip-sync requirements.txt\n\n# Optional: if you'd like to export your final conda environment config\nconda env export \u003e environment.yml\n# Optional: if required, to delete the conda environment\nconda remove --name 3d --all\n```\n\n### How to Use the Repository Contents\n\nThe repository consists of three main folders:\n\n- [`notebooks/`](./notebooks): Personal notebooks based mainly on the [**Open3D Basic Tutorial**](https://www.open3d.org/docs/latest/tutorial/Basic/index.html); the sections below contain code summaries from those notebooks.\n- [`examples/`](./examples): Official example files from [https://github.com/isl-org/Open3D/tree/main/examples/python](https://github.com/isl-org/Open3D/tree/main/examples/python).\n- [`models/`](./models): Several 
models both from Open3D repositories as well as from [mxagar/tool_guides/pcl](https://github.com/mxagar/tool_guides/tree/master/pcl), i.e., PCD files from PCL.\n\n**Sections 1-4** contain the most important and basic topics necessary to start using Open3D: File I/O, Point Clouds, Meshes and Transformations. Each of the topics has\n\n- a dedicated notebook in [`notebooks/`](./notebooks)\n- and a code summary taken from the associated notebook.\n\n**Section 5** contains the rest of the topics, which also have a dedicated notebook, but\n\n- they don't have a dedicated section\n- and their code is mostly only in the notebook.\n\n**Section 6** contains specific use cases (or complex examples/solution recipes) that I will be adding over time.\n\n### Known Issues\n\n:warning: Mac/Apple M1 wheels (latest version to date 0.16.1) cause an OpenGL error when we launch the visualization; if the code is in a script, it is not that big of an issue from the UX perspective, but if the code is in a notebook, the kernel crashes and needs to be restarted.\n\n- GitHub issue: [isl-org/Open3D/issues/1673](https://github.com/isl-org/Open3D/issues/1673).\n\n:warning: OpenGL GPU support is not provided for AMD chips (Open3D 0.18); instead of using `open3d.visualization.draw`, we should use `open3d.visualization.draw_geometries`, which is a basic rendering scheme.\n\n- GitHub issue: [isl-org/Open3D/issues/4852](https://github.com/isl-org/Open3D/issues/4852)\n\n:warning: Headless rendering is not possible on Windows (Open3D 0.18).\n\n## 1. 
Introduction and File IO\n\nNotebook: [`01_Intro_File_IO.ipynb`](./notebooks/01_Intro_File_IO.ipynb).\n\nSource: [https://www.open3d.org/docs/latest/tutorial/Basic/file_io.html](https://www.open3d.org/docs/latest/tutorial/Basic/file_io.html).\n\nSummary of contents:\n\n- Load pointclouds, meshes, images\n- Visualize in new window and in notebook\n- Save files with desired formats\n- Download models from the internet with `o3d.data`: [https://www.open3d.org/docs/release/python_api/open3d.data.html](https://www.open3d.org/docs/release/python_api/open3d.data.html)\n\n```python\nimport sys\nimport os\n\n# Add the directory containing 'examples' to the Python path\nnotebook_directory = os.getcwd()\nparent_directory = os.path.dirname(notebook_directory)  # Parent directory\nsys.path.append(parent_directory)\n\nimport open3d as o3d\nfrom examples import open3d_example as o3dex\nimport numpy as np\n\n# Here, the same file is opened locally\npcd = o3d.io.read_point_cloud(\"../models/fragment.ply\")\nprint(pcd) # 196133 points\nprint(np.asarray(pcd.points))\n\n# A new visualization window is opened\n# Keys:\n#  [/]          : Increase/decrease field of view.\n#  R            : Reset view point.\n#  Ctrl/Cmd + C : Copy current view status into the clipboard.\n#  Ctrl/Cmd + V : Paste view status from clipboard.\n#  Q, Esc       : Exit window.\n#  H            : Print help message.\n#  P, PrtScn    : Take a screen capture.\n#  D            : Take a depth capture.\n#  O            : Take a capture of current rendering settings.\n# IMPORTANT: Press Q to exit the viewer; the notebook cell waits for that!\no3d.visualization.draw_geometries([pcd],\n                                  zoom=0.3412,\n                                  front=[0.4257, -0.2125, -0.8795],\n                                  lookat=[2.6172, 2.0475, 1.532],\n                                  up=[-0.0694, -0.9768, 0.2024])\n\n# We can inspect the docstring of each function with 
help()\nhelp(o3d.visualization.draw_geometries)\n\n# In-Notebook web visualizer (but with a worse quality)\no3d.web_visualizer.draw(pcd,                                  \n                        lookat=[2.6172, 2.0475, 1.532],\n                        up=[-0.0694, -0.9768, 0.2024])\n\n# IO Pointcloud\npcd = o3d.io.read_point_cloud(\"../models/fragment.pcd\")\nprint(pcd)\n# Save file\n# The format is passed in the extension, or, optionally in the argument format='xyz'\n# Supported formats: \n# xyz: [x, y, z]\n# xyzn: [x, y, z, nx, ny, nz]\n# xyzrgb: [x, y, z, r, g, b]\n# pts: [x, y, z, i, r, g, b]\n# ply\n# pcd\no3d.io.write_point_cloud(\"copy_of_fragment.pcd\", pcd)\n\n# IO Mesh\nmesh = o3d.io.read_triangle_mesh(\"../models/monkey.ply\")\nprint(mesh)\n# Save file\n# The format is passed in the extension\n# Supported formats: \n# ply, stl, obj, off, gltf/glb\no3d.io.write_triangle_mesh(\"copy_monkey.ply\", mesh)\n\n# IO Image\nimg = o3d.io.read_image(\"../models/lenna.png\")\nprint(img)\n# Save file\n# Supported formats: JPG, PNG\no3d.io.write_image(\"copy_of_lena.jpg\", img)\n\n# We can download data using `o3d.data`; a list of all possible models is provided here:\n# https://www.open3d.org/docs/release/python_api/open3d.data.html\narmadillo = o3d.data.ArmadilloMesh()\narmadillo_mesh = o3d.io.read_triangle_mesh(armadillo.path)\nbunny = o3d.data.BunnyMesh()\nbunny_mesh = o3d.io.read_triangle_mesh(bunny.path)\n\n# Visualize the mesh\nprint(bunny_mesh)\no3d.visualization.draw_geometries([bunny_mesh], window_name='3D Mesh Visualization')\n```\n\n## 2. 
Point Clouds\n\nNotebook: [`01_Intro_File_IO.ipynb`](./notebooks/01_Intro_File_IO.ipynb).\n\nSource: [https://www.open3d.org/docs/latest/tutorial/Basic/pointcloud.html](https://www.open3d.org/docs/latest/tutorial/Basic/pointcloud.html).\n\nSummary of contents:\n\n- Visualize a point cloud: `o3d.visualization.draw_geometries()`\n- Voxel downsampling: `pc.voxel_down_sample()`\n- Vertex normal estimation: `pc.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(...))`\n- Access estimated vertex normals as Numpy arrays: `np.asarray(pc.normals)[:10, :]`\n- Crop point cloud: `cropped_pc = polygon_volume.crop_point_cloud(pc)`\n- Paint point cloud: `pc.paint_uniform_color([1, 0.5, 0])`\n- Point cloud distance and selection: `dists = pc1.compute_point_cloud_distance(pc2)`, `pc.select_by_index(ind)`\n- Bounding volumes (AABB, OBB): `pc.get_axis_aligned_bounding_box()`, `pc.get_oriented_bounding_box()`\n- Convex hull and sampling: `pc = mesh.sample_points_poisson_disk()`, `hull_mesh, _ = pc.compute_convex_hull()`\n- DBSCAN clustering: `pc.cluster_dbscan(eps=0.02, min_points=10, print_progress=True)`\n- Plane segmentation: `plane_model, inliers = pc.segment_plane()`\n- (Visually) Hidden point removal: `pc.hidden_point_removal(camera, radius)`\n\n```python\nimport sys\nimport os\n\n# Add the directory containing 'examples' to the Python path\nnotebook_directory = os.getcwd()\nparent_directory = os.path.dirname(notebook_directory)  # Parent directory\nsys.path.append(parent_directory)\n\nimport open3d as o3d\nfrom examples import open3d_example as o3dex\nimport numpy as np\n\n## -- Visualize point cloud\n\nprint(\"Load a ply point cloud, print it, and render it\")\npcd = o3d.io.read_point_cloud(\"../models/fragment.ply\")\nprint(pcd)\nprint(np.asarray(pcd.points))\n# A new visualization window is opened\n# Points rendered as surfels\n# Keys:\n#  [/]          : Increase/decrease field of view.\n#  R            : Reset view point.\n#  Ctrl/Cmd + C : Copy current view 
status into the clipboard.\n#  Ctrl/Cmd + V : Paste view status from clipboard.\n#  Q, Esc       : Exit window.\n#  H            : Print help message.\n#  P, PrtScn    : Take a screen capture.\n#  D            : Take a depth capture.\n#  O            : Take a capture of current rendering settings.\no3d.visualization.draw_geometries([pcd],\n                                  zoom=0.3412,\n                                  front=[0.4257, -0.2125, -0.8795],\n                                  lookat=[2.6172, 2.0475, 1.532],\n                                  up=[-0.0694, -0.9768, 0.2024])\n\n## -- Voxel downsampling\n\n# Voxel downsampling\n# 1. Points are bucketed into voxels.\n# 2. Each occupied voxel generates exactly one point by averaging all points inside.\nprint(\"Downsample the point cloud with a voxel of 0.05\")\ndownpcd = pcd.voxel_down_sample(voxel_size=0.05)\no3d.visualization.draw_geometries([downpcd],\n                                  zoom=0.3412,\n                                  front=[0.4257, -0.2125, -0.8795],\n                                  lookat=[2.6172, 2.0475, 1.532],\n                                  up=[-0.0694, -0.9768, 0.2024])\n\n## -- Vertex normal estimation\n\nprint(\"Recompute the normal of the downsampled point cloud\")\n# Compute normals: estimate_normals()\n# The function finds adjacent points and calculates the principal axis of the adjacent points using covariance analysis.\n# The function takes an instance of KDTreeSearchParamHybrid class as an argument. 
\n# The two key arguments radius = 0.1 and max_nn = 30 specify the search radius and the maximum number of nearest neighbors.\n# It uses a search radius of 10 cm and considers only up to 30 neighbors to save computation time.\n# NOTE: normal directions are chosen to comply with the original ones, if present; otherwise, they are arbitrary\ndownpcd.estimate_normals(\n    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)\n)\n# Visualize points and normals: toggle on/off normals with N\no3d.visualization.draw_geometries([downpcd],\n                                  zoom=0.3412,\n                                  front=[0.4257, -0.2125, -0.8795],\n                                  lookat=[2.6172, 2.0475, 1.532],\n                                  up=[-0.0694, -0.9768, 0.2024],\n                                  point_show_normal=True)\n\n## -- Access estimated vertex normals as Numpy arrays\n\nprint(\"Print a normal vector of the 0th point\")\nprint(downpcd.normals[0])\n\n# Use help() extensively to check all available variables/properties/functions!\nhelp(downpcd)\n\n# Normal vectors can be converted to a numpy array using np.asarray\nprint(\"Print the normal vectors of the first 10 points\")\nprint(np.asarray(downpcd.normals)[:10, :])\n\n## -- Crop point cloud\n\n# Download the cropping demo\n# The demo consists of the living room PLY `fragment.ply` and a JSON which contains a bounding polygon\ndemo_crop = o3d.data.DemoCropPointCloud()\n\n# Once we have the polygon which encloses our desired region, cropping is easy\nprint(\"Load a polygon volume and use it to crop the original point cloud\")\n# Read a json file that specifies the polygon selection area\nvol = o3d.visualization.read_selection_polygon_volume(\n    \"../models/cropped.json\"\n)\n# Filter out points. 
Only the chair remains.\nchair = vol.crop_point_cloud(pcd)\no3d.visualization.draw_geometries([chair],\n                                  zoom=0.7,\n                                  front=[0.5439, -0.2333, -0.8060],\n                                  lookat=[2.4615, 2.1331, 1.338],\n                                  up=[-0.1781, -0.9708, 0.1608])\n\n## -- Paint point cloud\n\nprint(\"Paint chair\")\n# Paint all the points to a uniform color.\n# The color is in RGB space, [0, 1] range.\nchair.paint_uniform_color([1, 0.706, 0])\no3d.visualization.draw_geometries([chair],\n                                  zoom=0.7,\n                                  front=[0.5439, -0.2333, -0.8060],\n                                  lookat=[2.4615, 2.1331, 1.338],\n                                  up=[-0.1781, -0.9708, 0.1608])\n\n## -- Point cloud distance and selection\n\n# Load data\npcd = o3d.io.read_point_cloud(\"../models/fragment.ply\")\nvol = o3d.visualization.read_selection_polygon_volume(\n    \"../models/cropped.json\")\nchair = vol.crop_point_cloud(pcd)\n\n# Compute the distance from a source point cloud to a target point cloud.\n# I.e., it computes for each point in the source point cloud the distance to the closest point in the target point cloud\n# pcd: 196133 points\n# chair: 31337 points\n# dists: 196133 items\n# np.where yields a tuple with a single array -\u003e [0]\n# With select_by_index, all indices from pcd are taken which have a distance larger than 0.01\n# Since chair is contained in pcd, this is equivalent to removing chair from pcd\ndists = pcd.compute_point_cloud_distance(chair)\ndists = np.asarray(dists)\nind = np.where(dists \u003e 0.01)[0]\npcd_without_chair = pcd.select_by_index(ind)\no3d.visualization.draw_geometries([pcd_without_chair],\n                                  zoom=0.3412,\n                                  front=[0.4257, -0.2125, -0.8795],\n                                  lookat=[2.6172, 2.0475, 1.532],\n                                
  up=[-0.0694, -0.9768, 0.2024])\n\n## -- Bounding volumes (AABB, OBB)\n\n# Get the AABB and the OBB of a point cloud\n# Then visualize them\naabb = chair.get_axis_aligned_bounding_box()\naabb.color = (1, 0, 0)\nobb = chair.get_oriented_bounding_box()\nobb.color = (0, 1, 0)\no3d.visualization.draw_geometries([chair, aabb, obb],\n                                  zoom=0.7,\n                                  front=[0.5439, -0.2333, -0.8060],\n                                  lookat=[2.4615, 2.1331, 1.338],\n                                  up=[-0.1781, -0.9708, 0.1608])\n\n## -- Convex hull and sampling\n\n# Download data\nbunny = o3d.data.BunnyMesh()\nbunny_mesh = o3d.io.read_triangle_mesh(bunny.path) # ../models/BunnyMesh.ply\n\n# Before computing the convex hull, the point cloud is sampled.\n# sample_points_poisson_disk(): each point has approximately the same distance\n# to the neighbouring points (blue noise).\n# Method is based on Yuksel, \"Sample Elimination for Generating Poisson Disk Sample Sets\", EUROGRAPHICS, 2015\n# number_of_points: Number of points that should be sampled.\npcl = bunny_mesh.sample_points_poisson_disk(number_of_points=2000)\n\n# Compute the convex hull of the sampled point cloud (based on Qhull)\n# A triangle mesh is returned\nhull, _ = pcl.compute_convex_hull()\n# From the convex hull triangle mesh, a line set is created for visualization purposes\n# and its lines are painted in red\nhull_ls = o3d.geometry.LineSet.create_from_triangle_mesh(hull)\nhull_ls.paint_uniform_color((1, 0, 0))\n\n# Visualize the downsampled point cloud as well as the convex hull represented with lines\no3d.visualization.draw_geometries([pcl, hull_ls])\n\n## -- DBSCAN clustering\n\nimport matplotlib.pyplot as plt\n\n# Load model\npcd = o3d.io.read_point_cloud(\"../models/fragment.ply\")\n\n# DBSCAN takes two parameters:\n# - eps defines the distance to neighbors in a cluster \n# - and min_points defines the minimum number of points required to form a cluster.\n# The function returns labels, 
where the label -1 indicates noise.\nwith o3d.utility.VerbosityContextManager(\n        o3d.utility.VerbosityLevel.Debug) as cm:\n    labels = np.array(\n        pcd.cluster_dbscan(eps=0.02, min_points=10, print_progress=True)\n    )\n\n# Plot points with colors\nmax_label = labels.max()\nprint(f\"Point cloud has {max_label + 1} clusters\")\ncolors = plt.get_cmap(\"tab20\")(labels / (max_label if max_label \u003e 0 else 1))\ncolors[labels \u003c 0] = 0\n# Vector3dVector: Convert float64 numpy array of shape (n, 3) to Open3D format\n# https://www.open3d.org/docs/release/python_api/open3d.utility.html#open3d-utility\npcd.colors = o3d.utility.Vector3dVector(colors[:, :3])\no3d.visualization.draw_geometries([pcd],\n                                  zoom=0.455,\n                                  front=[-0.4999, -0.1659, -0.8499],\n                                  lookat=[2.1813, 2.0619, 2.0999],\n                                  up=[0.1204, -0.9852, 0.1215])\n\n## -- Plane segmentation\n\n# Segmentation of geometric primitives (only planes) 
from point clouds using RANSAC\n# - distance_threshold defines the maximum distance a point can have to an estimated plane to be considered an inlier,\n# - ransac_n defines the number of points that are randomly sampled to estimate a plane,\n# - and num_iterations defines how often a random plane is sampled and verified.\n# The function then returns the plane as (a,b,c,d) such that for each point (x,y,z) on the plane we have ax+by+cz+d=0.\n# The function further returns a list of indices of the inlier points.\npcd = o3d.io.read_point_cloud(\"../models/fragment.pcd\")\nplane_model, inliers = pcd.segment_plane(distance_threshold=0.01,\n                                         ransac_n=3,\n                                         num_iterations=1000)\n# Plane model\n[a, b, c, d] = plane_model\nprint(f\"Plane equation: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0\")\n\n# Plot\ninlier_cloud = pcd.select_by_index(inliers)\ninlier_cloud.paint_uniform_color([1.0, 0, 0])\noutlier_cloud = pcd.select_by_index(inliers, invert=True)\no3d.visualization.draw_geometries([inlier_cloud, outlier_cloud],\n                                  zoom=0.8,\n                                  front=[-0.4999, -0.1659, -0.8499],\n                                  lookat=[2.1813, 2.0619, 2.0999],\n                                  up=[0.1204, -0.9852, 0.1215])\n\n## -- (Visually) Hidden point removal\n\n# Download data\narmadillo = o3d.data.ArmadilloMesh()\narmadillo_mesh = o3d.io.read_triangle_mesh(armadillo.path) # ../models/ArmadilloMesh.ply\n\n\n# First, we load a mesh and sample points on it\nprint(\"Convert mesh to a point cloud and estimate dimensions\")\npcd = armadillo_mesh.sample_points_poisson_disk(5000)\ndiameter = np.linalg.norm(\n    np.asarray(pcd.get_max_bound()) - np.asarray(pcd.get_min_bound())\n)\no3d.visualization.draw_geometries([pcd])\n\n# Imagine you want to render a point cloud from a given view point, \n# but points from the background leak into the foreground because they 
are not occluded by other points.\n# For this purpose we can apply a hidden point removal algorithm\nprint(\"Define parameters used for hidden_point_removal\")\ncamera = [0, 0, diameter]\nradius = diameter * 100\n\nprint(\"Get all points that are visible from given view point\")\n_, pt_map = pcd.hidden_point_removal(camera, radius)\n\nprint(\"Visualize result\")\npcd = pcd.select_by_index(pt_map)\no3d.visualization.draw_geometries([pcd])\n\n```\n\n## 3. Meshes\n\nNotebook: [`03_Meshes.ipynb`](./notebooks/03_Meshes.ipynb).\n\nSource: [https://www.open3d.org/docs/latest/tutorial/Basic/mesh.html](https://www.open3d.org/docs/latest/tutorial/Basic/mesh.html).\n\nSummary of contents:\n\n- Load and check properties: `read_triangle_mesh()`, `mesh.vertices`, `mesh.triangles`\n- Visualize a mesh: `o3d.visualization.draw_geometries([mesh])`\n- Surface normal estimation: `mesh.compute_vertex_normals()`, `mesh.triangle_normals`\n- Crop a mesh using Numpy slicing\n- Paint a mesh: `mesh1.paint_uniform_color([1, 0.5, 0])`\n- Check properties: `is_edge_manifold`, `is_vertex_manifold`, `is_self_intersecting`, `is_watertight`, `is_orientable`.\n- Mesh filtering:\n  - Average filter: `mesh.filter_smooth_simple(...)`\n  - Laplacian: `mesh.filter_smooth_laplacian(...)`\n  - Taubin: `mesh.filter_smooth_taubin(...)`\n- Sampling mesh surfaces with points:\n  - Uniform: `mesh.sample_points_uniformly(number_of_points=500)`\n  - Poisson: `mesh.sample_points_poisson_disk(number_of_points=500, init_factor=5)`\n- Mesh subdivision: `mesh.subdivide_midpoint(...)`, `mesh.subdivide_loop(...)`.\n- Mesh simplification:\n  - Vertex clustering: `mesh.simplify_vertex_clustering(...)`\n  - Mesh decimation: `mesh.simplify_quadric_decimation(...)`\n- Connected components: `mesh.cluster_connected_triangles()`\n\n```python\nimport sys\nimport os\nimport copy\n\n# Add the directory containing 'examples' to the Python path\nnotebook_directory = os.getcwd()\nparent_directory = os.path.dirname(notebook_directory)  
# Parent directory\nsys.path.append(parent_directory)\n\nimport open3d as o3d\nfrom examples import open3d_example as o3dex\nimport numpy as np\n\n## -- Load and Check Properties\n\n# Download data\ndataset = o3d.data.KnotMesh()\nmesh = o3d.io.read_triangle_mesh(dataset.path) # ../models/KnotMesh.ply\n\nprint(mesh)\n# Open3D provides direct memory access to these fields via numpy\nprint('Vertices:')\nprint(np.asarray(mesh.vertices))\nprint('Triangles:')\nprint(np.asarray(mesh.triangles))\n\n## --  Visualize a mesh\n\nprint(\"Try to render a mesh with normals (exist: \" +\n      str(mesh.has_vertex_normals()) + \") and colors (exist: \" +\n      str(mesh.has_vertex_colors()) + \")\")\no3d.visualization.draw_geometries([mesh])\nprint(\"A mesh with no normals and no colors does not look good.\")\n\n## -- Surface normal estimation\n\n# Rendering is much better with normals\nprint(\"Computing normal and rendering it.\")\nmesh.compute_vertex_normals()\nprint(np.asarray(mesh.triangle_normals))\no3d.visualization.draw_geometries([mesh])\n\n## -- Crop mesh using Numpy slicing\n\nprint(\"We make a partial mesh of only the first half triangles.\")\n# Make a copy\nmesh1 = copy.deepcopy(mesh)\n# Vector3iVector: Convert int32 numpy array of shape (n, 3) to Open3D format\n# https://www.open3d.org/docs/release/python_api/open3d.utility.html#open3d-utility\n# Take 1/2 of triangle normals and triangles\nmesh1.triangles = o3d.utility.Vector3iVector(\n    np.asarray(mesh1.triangles)[:len(mesh1.triangles) // 2, :])\nmesh1.triangle_normals = o3d.utility.Vector3dVector(\n    np.asarray(mesh1.triangle_normals)[:len(mesh1.triangle_normals) // 2, :])\nprint(mesh1.triangles)\no3d.visualization.draw_geometries([mesh1])\n\n## -- Paint mesh\n\nprint(\"Painting the mesh\")\nmesh1.paint_uniform_color([1, 0.706, 0])\no3d.visualization.draw_geometries([mesh1])\n\n## -- Check properties\n\ndef check_properties(name, mesh):\n    mesh.compute_vertex_normals()\n\n    edge_manifold = 
mesh.is_edge_manifold(allow_boundary_edges=True)\n    edge_manifold_boundary = mesh.is_edge_manifold(allow_boundary_edges=False)\n    vertex_manifold = mesh.is_vertex_manifold()\n    self_intersecting = mesh.is_self_intersecting()\n    watertight = mesh.is_watertight()\n    orientable = mesh.is_orientable()\n\n    print(name)\n    print(f\"  edge_manifold:          {edge_manifold}\")\n    print(f\"  edge_manifold_boundary: {edge_manifold_boundary}\")\n    print(f\"  vertex_manifold:        {vertex_manifold}\")\n    print(f\"  self_intersecting:      {self_intersecting}\")\n    print(f\"  watertight:             {watertight}\")\n    print(f\"  orientable:             {orientable}\")\n\n    geoms = [mesh]\n    if not edge_manifold:\n        edges = mesh.get_non_manifold_edges(allow_boundary_edges=True)\n        geoms.append(o3dex.edges_to_lineset(mesh, edges, (1, 0, 0)))\n    if not edge_manifold_boundary:\n        edges = mesh.get_non_manifold_edges(allow_boundary_edges=False)\n        geoms.append(o3dex.edges_to_lineset(mesh, edges, (0, 1, 0)))\n    if not vertex_manifold:\n        verts = np.asarray(mesh.get_non_manifold_vertices())\n        pcl = o3d.geometry.PointCloud(\n            points=o3d.utility.Vector3dVector(np.asarray(mesh.vertices)[verts]))\n        pcl.paint_uniform_color((0, 0, 1))\n        geoms.append(pcl)\n    if self_intersecting:\n        intersecting_triangles = np.asarray(\n            mesh.get_self_intersecting_triangles())\n        intersecting_triangles = intersecting_triangles[0:1]\n        intersecting_triangles = np.unique(intersecting_triangles)\n        print(\"  # visualize self-intersecting triangles\")\n        triangles = np.asarray(mesh.triangles)[intersecting_triangles]\n        edges = [\n            np.vstack((triangles[:, i], triangles[:, j]))\n            for i, j in [(0, 1), (1, 2), (2, 0)]\n        ]\n        edges = np.hstack(edges).T\n        edges = o3d.utility.Vector2iVector(edges)\n        
geoms.append(o3dex.edges_to_lineset(mesh, edges, (1, 0, 1)))\n    o3d.visualization.draw_geometries(geoms, mesh_show_back_face=True)\n\ncheck_properties('Knot', o3dex.get_knot_mesh())\n#check_properties('Moebius', o3d.geometry.TriangleMesh.create_moebius(twists=1))\ncheck_properties(\"non-manifold edge\", o3dex.get_non_manifold_edge_mesh())\ncheck_properties(\"non-manifold vertex\", o3dex.get_non_manifold_vertex_mesh())\ncheck_properties(\"open box\", o3dex.get_open_box_mesh())\ncheck_properties(\"intersecting_boxes\", o3dex.get_intersecting_boxes_mesh())\n\n## -- Mesh filtering\n\n# - Average Filtering\n\n# Add noise to vertices in Numpy\nprint('create noisy mesh')\nmesh_in = o3dex.get_knot_mesh()\nvertices = np.asarray(mesh_in.vertices)\nnoise = 5\nvertices += np.random.uniform(0, noise, size=vertices.shape)\n# Convert Numpy to O3D format\nmesh_in.vertices = o3d.utility.Vector3dVector(vertices)\nmesh_in.compute_vertex_normals()\no3d.visualization.draw_geometries([mesh_in])\n\n# Average filter\n# The simplest filter is the average filter.\n# A given vertex v_i is given by the average of the adjacent vertices N.\nprint('filter with average with 1 iteration')\nmesh_out = mesh_in.filter_smooth_simple(number_of_iterations=1)\nmesh_out.compute_vertex_normals()\no3d.visualization.draw_geometries([mesh_out])\n\nprint('filter with average with 5 iterations')\nmesh_out = mesh_in.filter_smooth_simple(number_of_iterations=5)\nmesh_out.compute_vertex_normals()\no3d.visualization.draw_geometries([mesh_out])\n\n# - Laplacian\n\n# Normalized weights that relate to the distance of the neighboring vertices\n# The problem with the average and Laplacian filter is that they lead to a shrinkage of the triangle mesh\nprint('filter with Laplacian with 10 iterations')\nmesh_out = mesh_in.filter_smooth_laplacian(number_of_iterations=10)\nmesh_out.compute_vertex_normals()\no3d.visualization.draw_geometries([mesh_out])\n\nprint('filter with Laplacian with 50 iterations')\nmesh_out = 
mesh_in.filter_smooth_laplacian(number_of_iterations=50)\nmesh_out.compute_vertex_normals()\no3d.visualization.draw_geometries([mesh_out])\n\n# - Taubin filter\n\n# The problem with the average and Laplacian filter is that they lead to a shrinkage of the triangle mesh\n# The application of two Laplacian filters with different strength parameters can prevent the mesh shrinkage\nprint('filter with Taubin with 10 iterations')\nmesh_out = mesh_in.filter_smooth_taubin(number_of_iterations=10)\nmesh_out.compute_vertex_normals()\no3d.visualization.draw_geometries([mesh_out])\n\nprint('filter with Taubin with 100 iterations')\nmesh_out = mesh_in.filter_smooth_taubin(number_of_iterations=100)\nmesh_out.compute_vertex_normals()\no3d.visualization.draw_geometries([mesh_out])\n\n## -- Sampling mesh surfaces with points\n\nmesh = o3d.geometry.TriangleMesh.create_sphere()\nmesh.compute_vertex_normals()\no3d.visualization.draw_geometries([mesh])\n# Uniform sampling: fast, but can lead to clusters of points\npcd = mesh.sample_points_uniformly(number_of_points=500)\no3d.visualization.draw_geometries([pcd])\n\nmesh = o3dex.get_bunny_mesh()\nmesh.compute_vertex_normals()\no3d.visualization.draw_geometries([mesh])\n# Uniform sampling: fast, but can lead to clusters of points\npcd = mesh.sample_points_uniformly(number_of_points=50000)\no3d.visualization.draw_geometries([pcd])\n\n# Uniform sampling can yield clusters of points on the surface, \n# while a method called Poisson disk sampling can evenly distribute the points on the surface\n# by eliminating redundant (high density) samples.\n# There are two options to provide the initial point cloud from which redundant points are eliminated:\n# 1) Default via the parameter init_factor: \n# The method first samples uniformly a point cloud from the mesh \n# with init_factor x number_of_points and uses this for the elimination.\nmesh = o3d.geometry.TriangleMesh.create_sphere()\npcd = mesh.sample_points_poisson_disk(number_of_points=500, 
init_factor=5)\no3d.visualization.draw_geometries([pcd])\n# 2) One can provide a point cloud and pass it to the sample_points_poisson_disk method.\n# Then, this point cloud is used for elimination.\npcd = mesh.sample_points_uniformly(number_of_points=2500)\npcd = mesh.sample_points_poisson_disk(number_of_points=500, pcl=pcd)\no3d.visualization.draw_geometries([pcd])\n\nmesh = o3dex.get_bunny_mesh()\npcd = mesh.sample_points_poisson_disk(number_of_points=10000, init_factor=5)\no3d.visualization.draw_geometries([pcd])\n\npcd = mesh.sample_points_uniformly(number_of_points=50000)\npcd = mesh.sample_points_poisson_disk(number_of_points=10000, pcl=pcd)\no3d.visualization.draw_geometries([pcd])\n\n## -- Mesh subdivision\n\n# In mesh subdivision we divide each triangle into a number of smaller triangles\n# In the simplest case, we compute the midpoint of each side per triangle\n# and divide the triangle into four smaller triangles: subdivide_midpoint.\nmesh = o3d.geometry.TriangleMesh.create_box()\nmesh.compute_vertex_normals()\nprint(\n    f'The mesh has {len(mesh.vertices)} vertices and {len(mesh.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh], mesh_show_wireframe=True)\nmesh = mesh.subdivide_midpoint(number_of_iterations=1)\nprint(\n    f'After subdivision it has {len(mesh.vertices)} vertices and {len(mesh.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh], mesh_show_wireframe=True)\n\n# Another subdivision method: [Loop1987]\nmesh = o3d.geometry.TriangleMesh.create_sphere()\nmesh.compute_vertex_normals()\nprint(\n    f'The mesh has {len(mesh.vertices)} vertices and {len(mesh.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh], mesh_show_wireframe=True)\nmesh = mesh.subdivide_loop(number_of_iterations=2)\nprint(\n    f'After subdivision it has {len(mesh.vertices)} vertices and {len(mesh.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh], mesh_show_wireframe=True)\n\nmesh = 
o3dex.get_knot_mesh()\nmesh.compute_vertex_normals()\nprint(\n    f'The mesh has {len(mesh.vertices)} vertices and {len(mesh.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh], mesh_show_wireframe=True)\nmesh = mesh.subdivide_loop(number_of_iterations=1)\nprint(\n    f'After subdivision it has {len(mesh.vertices)} vertices and {len(mesh.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh], mesh_show_wireframe=True)\n\n## -- Mesh simplification\n\n# - Vertex clustering\n\nmesh_in = o3dex.get_bunny_mesh()\nmesh_in.compute_vertex_normals()\nprint(\n    f'Input mesh has {len(mesh_in.vertices)} vertices and {len(mesh_in.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh_in])\n\n# The vertex clustering method pools all vertices that fall\n# into a voxel of a given size to a single vertex\n# Parameters \n# - contraction: how the vertices are pooled; o3d.geometry.SimplificationContraction.Average \n# computes a simple average.\n# - voxel_size\nvoxel_size = max(mesh_in.get_max_bound() - mesh_in.get_min_bound()) / 32\nprint(f'voxel_size = {voxel_size:e}')\nmesh_smp = mesh_in.simplify_vertex_clustering(\n    voxel_size=voxel_size,\n    contraction=o3d.geometry.SimplificationContraction.Average)\nprint(\n    f'Simplified mesh has {len(mesh_smp.vertices)} vertices and {len(mesh_smp.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh_smp])\n\n# Now, the voxel size is 2x larger\nvoxel_size = max(mesh_in.get_max_bound() - mesh_in.get_min_bound()) / 16\nprint(f'voxel_size = {voxel_size:e}')\nmesh_smp = mesh_in.simplify_vertex_clustering(\n    voxel_size=voxel_size,\n    contraction=o3d.geometry.SimplificationContraction.Average)\nprint(\n    f'Simplified mesh has {len(mesh_smp.vertices)} vertices and {len(mesh_smp.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh_smp])\n\n# - Mesh decimation\n\n# We select the single triangle that minimizes an error metric and remove it.\n# This is repeated until a required 
number of triangles is achieved.\n# Stopping criterion: target_number_of_triangles \nmesh_smp = mesh_in.simplify_quadric_decimation(target_number_of_triangles=6500)\nprint(\n    f'Simplified mesh has {len(mesh_smp.vertices)} vertices and {len(mesh_smp.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh_smp])\n\nmesh_smp = mesh_in.simplify_quadric_decimation(target_number_of_triangles=1700)\nprint(\n    f'Simplified mesh has {len(mesh_smp.vertices)} vertices and {len(mesh_smp.triangles)} triangles'\n)\no3d.visualization.draw_geometries([mesh_smp])\n\n## -- Connected components\n\n# Add spurious geometry: small cubes scattered randomly over the mesh\nprint(\"Generate data\")\nmesh = o3dex.get_bunny_mesh().subdivide_midpoint(number_of_iterations=2)\nvert = np.asarray(mesh.vertices)\nmin_vert, max_vert = vert.min(axis=0), vert.max(axis=0)\nfor _ in range(30):\n    cube = o3d.geometry.TriangleMesh.create_box()\n    cube.scale(0.005, center=cube.get_center())\n    cube.translate(\n        (\n            np.random.uniform(min_vert[0], max_vert[0]),\n            np.random.uniform(min_vert[1], max_vert[1]),\n            np.random.uniform(min_vert[2], max_vert[2]),\n        ),\n        relative=False,\n    )\n    mesh += cube\nmesh.compute_vertex_normals()\nprint(\"Show input mesh\")\no3d.visualization.draw_geometries([mesh])\n\n# Cluster connected components:\n# We can compute the connected components of triangles, i.e., the clusters of triangles which are connected.\n# This is useful in image/3D model reconstruction\nprint(\"Cluster connected triangles\")\nwith o3d.utility.VerbosityContextManager(\n        o3d.utility.VerbosityLevel.Debug) as cm:\n    triangle_clusters, cluster_n_triangles, cluster_area = (\n        mesh.cluster_connected_triangles())\ntriangle_clusters = np.asarray(triangle_clusters)\ncluster_n_triangles = np.asarray(cluster_n_triangles)\ncluster_area = np.asarray(cluster_area)\n\nprint(\"Show mesh with small clusters removed\")\nmesh_0 = 
copy.deepcopy(mesh)\ntriangles_to_remove = cluster_n_triangles[triangle_clusters] \u003c 100\nmesh_0.remove_triangles_by_mask(triangles_to_remove)\no3d.visualization.draw_geometries([mesh_0])\n\nprint(\"Show largest cluster\")\nmesh_1 = copy.deepcopy(mesh)\nlargest_cluster_idx = cluster_n_triangles.argmax()\ntriangles_to_remove = triangle_clusters != largest_cluster_idx\nmesh_1.remove_triangles_by_mask(triangles_to_remove)\no3d.visualization.draw_geometries([mesh_1])\n```\n\n## 4. Transformations\n\nSource: [https://www.open3d.org/docs/latest/tutorial/Basic/transformation.html](https://www.open3d.org/docs/latest/tutorial/Basic/transformation.html).\n\nSummary of contents:\n\n- Translate: `mesh.translate()`\n- Rotate: `mesh.rotate()`\n  - `get_rotation_matrix_from_xyz`\n  - `get_rotation_matrix_from_axis_angle`\n  - `get_rotation_matrix_from_quaternion`\n- Scale: `mesh.scale()`\n- General (homogeneous) transformation: `mesh.transform()`\n\n```python\nimport sys\nimport os\nimport copy\n\n# Add the directory containing 'examples' to the Python path\nnotebook_directory = os.getcwd()\nparent_directory = os.path.dirname(notebook_directory)  # Parent directory\nsys.path.append(parent_directory)\n\nimport open3d as o3d\nfrom examples import open3d_example as o3dex\nimport numpy as np\n\n## -- Translate\n\n# Factory function which creates a mesh coordinate frame\n# Check other factory functions with help(o3d.geometry.TriangleMesh)\nmesh = o3d.geometry.TriangleMesh.create_coordinate_frame()\n# Translate mesh and deepcopy\nmesh_tx = copy.deepcopy(mesh).translate((1.3, 0, 0))\nmesh_ty = copy.deepcopy(mesh).translate((0, 1.3, 0))\nprint(f'Center of mesh: {mesh.get_center()}')\n# The method get_center returns the mean of the TriangleMesh vertices.\n# That means that for a coordinate frame created at the origin [0,0,0],\n# get_center will return [0.05167549 0.05167549 0.05167549]\nprint(f'Center of mesh tx: {mesh_tx.get_center()}')\nprint(f'Center of mesh ty: 
{mesh_ty.get_center()}')\no3d.visualization.draw_geometries([mesh, mesh_tx, mesh_ty])\n\n# The method takes a second argument relative that is by default set to True.\n# If set to False, the center of the geometry is translated directly to the position specified\n# in the first argument.\nmesh = o3d.geometry.TriangleMesh.create_coordinate_frame()\nmesh_mv = copy.deepcopy(mesh).translate((2, 2, 2), relative=False)\nprint(f'Center of mesh: {mesh.get_center()}')\nprint(f'Center of translated mesh: {mesh_mv.get_center()}')\no3d.visualization.draw_geometries([mesh, mesh_mv])\n\n## -- Rotate\n\n# We pass a rotation matrix R to rotate\n# There are many conversion functions to get R\n# - Convert from Euler angles with get_rotation_matrix_from_xyz (where xyz can also be of the form yzx, zxy, xzy, zyx, and yxz)\n# - Convert from Axis-angle representation with get_rotation_matrix_from_axis_angle\n# - Convert from Quaternions with get_rotation_matrix_from_quaternion\nmesh = o3d.geometry.TriangleMesh.create_coordinate_frame()\nmesh_r = copy.deepcopy(mesh)\nR = mesh.get_rotation_matrix_from_xyz((np.pi / 2, 0, np.pi / 4))\nmesh_r.rotate(R, center=(0, 0, 0))\no3d.visualization.draw_geometries([mesh, mesh_r])\n\n# The function rotate has a second argument center: the point about which the rotation is applied.\n# If the geometry's own center is passed, the object is rotated in place\n# and its center does not move.
\n# If another point is passed (e.g., the origin, as below), the whole geometry is rotated\n# around that point, so the mesh center can change after the rotation.\nmesh = o3d.geometry.TriangleMesh.create_coordinate_frame()\nmesh_r = copy.deepcopy(mesh).translate((2, 0, 0))\nmesh_r.rotate(mesh.get_rotation_matrix_from_xyz((np.pi / 2, 0, np.pi / 4)),\n              center=(0, 0, 0))\no3d.visualization.draw_geometries([mesh, mesh_r])\n\n## -- Scale\n\nmesh = o3d.geometry.TriangleMesh.create_coordinate_frame()\nmesh_s = copy.deepcopy(mesh).translate((2, 0, 0))\nmesh_s.scale(0.5, center=mesh_s.get_center())\no3d.visualization.draw_geometries([mesh, mesh_s])\n\n# The scale method also has a second argument center: the point from which the scaling is applied.\n# Passing the geometry's own center (as above) scales the object in place;\n# passing another point (e.g., the origin, as below) makes the center\n# of the object move due to the scaling operation.\nmesh = o3d.geometry.TriangleMesh.create_coordinate_frame()\nmesh_s = copy.deepcopy(mesh).translate((2, 1, 0))\nmesh_s.scale(0.5, center=(0, 0, 0))\no3d.visualization.draw_geometries([mesh, mesh_s])\n\n## -- Transform\n\n# Open3D also supports a general transformation \n# defined by a 4×4 homogeneous transformation matrix using the method transform.\nmesh = o3d.geometry.TriangleMesh.create_coordinate_frame()\nT = np.eye(4)\nT[:3, :3] = mesh.get_rotation_matrix_from_xyz((0, np.pi / 3, np.pi / 2))\nT[0, 3] = 1\nT[1, 3] = 1.3\nprint(T)\nmesh_t = copy.deepcopy(mesh).transform(T)\no3d.visualization.draw_geometries([mesh, mesh_t])\n\n```\n\n## 5. 
Rest of Modules\n\n### RGBD Images and Odometry\n\nSources: \n\n- [https://www.open3d.org/docs/latest/tutorial/Basic/rgbd_image.html](https://www.open3d.org/docs/latest/tutorial/Basic/rgbd_image.html).\n- [https://www.open3d.org/docs/latest/tutorial/Basic/rgbd_odometry.html](https://www.open3d.org/docs/latest/tutorial/Basic/rgbd_odometry.html).\n\nNotebook: [`05_RGBD_Images.ipynb`](./notebooks/05_RGBD_Images.ipynb)\n\nSummary of contents:\n\n- Redwood dataset: RGB, Depth and Co.\n- RGBD Odometry\n  - Camera parameters: \n    - `o3d.camera.PinholeCameraIntrinsic`\n    - `o3d.io.read_pinhole_camera_intrinsic`\n  - Read RGBD images:\n    - `o3d.geometry.RGBDImage.create_from_color_and_depth`\n    - `o3d.geometry.PointCloud.create_from_rgbd_image`\n  - Compute odometry from two RGBD image pairs: `o3d.pipelines.odometry.compute_rgbd_odometry`\n    - `o3d.pipelines.odometry.RGBDOdometryJacobianFromColorTerm()`\n    - `o3d.pipelines.odometry.RGBDOdometryJacobianFromHybridTerm()`\n  - Visualize RGBD image pairs\n\n### Visualization\n\nSource: [https://www.open3d.org/docs/latest/tutorial/Basic/visualization.html](https://www.open3d.org/docs/latest/tutorial/Basic/visualization.html).\n\nNotebook: [`06_Visualization.ipynb`](./notebooks/06_Visualization.ipynb).\n\nSummary of contents:\n\n- Function `draw_geometries`\n- Store viewpoint: `Ctrl+C`\n- Geometry primitives:\n  - `o3d.geometry.TriangleMesh.create_box`\n  - `o3d.geometry.TriangleMesh.create_sphere`\n  - `o3d.geometry.TriangleMesh.create_cylinder`\n  - `o3d.geometry.TriangleMesh.create_coordinate_frame`\n- Drawing line sets: `o3d.geometry.LineSet`\n\n### KDTree\n\nSource: [https://www.open3d.org/docs/latest/tutorial/Basic/kdtree.html](https://www.open3d.org/docs/latest/tutorial/Basic/kdtree.html).\n\nNotebook: [`07_KDTree.ipynb`](./notebooks/07_KDTree.ipynb).\n\nSummary of contents:\n\n- Build KDTree from point cloud and find \u0026 visualize nearest points of a point\n  - `pcd_tree = o3d.geometry.KDTreeFlann(pcd)`: 
create a KDTree\n  - `pcd_tree.search_knn_vector_3d`: given a point, find the N nearest ones\n  - `pcd_tree.search_radius_vector_3d`: given a point, find the ones within a radius R\n\n### ICP Registration\n\nSources: \n\n- ICP: [https://www.open3d.org/docs/latest/tutorial/Basic/icp_registration.html](https://www.open3d.org/docs/latest/tutorial/Basic/icp_registration.html).\n- Global registrations: [https://www.open3d.org/docs/latest/tutorial/Advanced/global_registration.html](https://www.open3d.org/docs/latest/tutorial/Advanced/global_registration.html).\n- Colored point cloud registrations: [https://www.open3d.org/docs/latest/tutorial/Advanced/colored_pointcloud_registration.html](https://www.open3d.org/docs/latest/tutorial/Advanced/colored_pointcloud_registration.html).\n\nNotebook: [`08_ICP_Registration.ipynb`](./notebooks/08_ICP_Registration.ipynb).\n\nSummary of contents:\n\n- Prepare Input Data: Source and Target\n- Point-to-point ICP\n  - `o3d.pipelines.registration.registration_icp`\n  - `o3d.pipelines.registration.TransformationEstimationPointToPoint()`\n- Point-to-plane ICP\n  - `o3d.pipelines.registration.TransformationEstimationPointToPlane()`\n\n\u003e This tutorial demonstrates the ICP (Iterative Closest Point) registration algorithm. It has been a mainstay of geometric registration in both research and industry for many years. The input are two point clouds and an initial transformation that roughly aligns the source point cloud to the target point cloud. The output is a refined transformation that tightly aligns the two point clouds. A helper function draw_registration_result visualizes the alignment during the registration process. 
In this tutorial, we show two ICP variants, the point-to-point ICP and the point-to-plane ICP [Rusinkiewicz2001].\n\u003e\n\u003e Both [ICP registration](https://www.open3d.org/docs/latest/tutorial/Basic/icp_registration.html) and [Colored point cloud registration](https://www.open3d.org/docs/latest/tutorial/Advanced/colored_pointcloud_registration.html) are known as **local registration methods** because they rely on a rough alignment as initialization. Prior to a local registration we need some kind of [**global registration**](https://www.open3d.org/docs/latest/tutorial/Advanced/global_registration.html). This family of algorithms does not require an alignment for initialization. They usually produce less tight alignment results and are used as initialization for the local methods.\n\n**This notebook deals with the local registration approach ICP**: we provide a source and a target point cloud that are already roughly aligned and obtain a tighter alignment.\n\n**IMPORTANT: The point-to-plane ICP algorithm uses point normals; we need to estimate them if they are not available**.\n\n### Working with Numpy\n\nSources: \n\n- Tutorial: [https://www.open3d.org/docs/latest/tutorial/Basic/working_with_numpy.html](https://www.open3d.org/docs/latest/tutorial/Basic/working_with_numpy.html).\n- Conversion interfaces: [https://www.open3d.org/docs/latest/python_api/open3d.utility.html#open3d-utility](https://www.open3d.org/docs/latest/python_api/open3d.utility.html#open3d-utility).\n\nNotebook: [`09_Numpy.ipynb`](./notebooks/09_Numpy.ipynb).\n\nAll data structures in Open3D are natively compatible with NumPy buffers.\n\nCommon interfaces to use Open3D and NumPy interchangeably are:\n\n- `o3d.utility.Vector3dVector`; more options in [open3d.utility](https://www.open3d.org/docs/latest/python_api/open3d.utility.html#open3d-utility).\n- `np.asarray(pcd.points)`.\n\nThe tutorial in this section's notebook generates a variant of the sinc function using NumPy and visualizes the function using 
Open3D.\n\n### Tensor\n\nSource: [https://www.open3d.org/docs/latest/tutorial/Basic/tensor.html](https://www.open3d.org/docs/latest/tutorial/Basic/tensor.html).\n\nNotebook: [`10_Tensor.ipynb`](./notebooks/10_Tensor.ipynb).\n\n\u003e Tensor is a “view” of a data Blob with shape, stride, and a data pointer. It is a multidimensional and homogeneous matrix containing elements of a single data type. It is used in Open3D to perform numerical operations. It supports GPU operations as well.\n\nSummary of contents:\n\n- Tensor creation\n- Properties of a tensor\n- Copy \u0026 device transfer\n- Data types\n- Type casting\n- Numpy I/O with direct memory map\n- PyTorch I/O with DLPack memory map\n- Binary element-wise operations\n- Unary element-wise operations\n- Reduction\n- Slicing, indexing, getitem, and setitem\n- Advanced indexing\n- Logical operations\n- Comparison operations\n- Nonzero operations\n\n### Voxelization\n\nSources: \n\n- Tutorial: [https://www.open3d.org/docs/latest/tutorial/Advanced/voxelization.html](https://www.open3d.org/docs/latest/tutorial/Advanced/voxelization.html).\n- [`open3d.geometry.VoxelGrid`](https://www.open3d.org/docs/latest/python_api/open3d.geometry.VoxelGrid.html#open3d.geometry.VoxelGrid).\n- [`open3d.geometry.Voxel`](https://www.open3d.org/docs/latest/python_api/open3d.geometry.Voxel.html#open3d.geometry.Voxel).\n\nNotebook: [`11_Voxelization.ipynb`](./notebooks/11_Voxelization.ipynb).\n\nSummary of contents:\n\n- Voxelize from a triangle mesh: `o3d.geometry.VoxelGrid.create_from_triangle_mesh`\n- Voxels and their data: `voxel_grid.get_voxels()`\n- Voxel cubes for visualization\n- Create a voxel map from the VoxelGrid: a Cartesian occupancy map\n- Voxelize from a point cloud: `o3d.geometry.VoxelGrid.create_from_point_cloud`\n- Inclusion test: `voxel_grid.check_if_included`\n- Voxel carving\n\n\n## 6. 
Use Cases\n\n### Capturing 3D Models with Your Phone\n\nThere are many phone apps for capturing physical-world objects; they rely on different technologies:\n\n- Just photos (photogrammetry)\n- LIDAR data (e.g., from the newer iPhones)\n- etc.\n\nOne possible and free application is [RealityScan](https://www.unrealengine.com/en-US/realityscan), from [Unreal Engine](https://www.unrealengine.com).\n\nExample capture (photogrammetry) with an iPhone SE (2020, iOS 18.2.1): [`models/ikea_cup_reality_scan_iphone/`](./models/ikea_cup_reality_scan_iphone/). The capture could easily be improved and is probably not representative of the quality achievable with RealityScan; however, it is a good benchmark/example for some applications, precisely because it has reconstruction mistakes, such as holes.\n\n### 3D-2D-3D Projection of a Scene\n\nThe notebook [`notebooks/12_3D2D3D_Projections.ipynb`](./notebooks/12_3D2D3D_Projections.ipynb) contains the following topics:\n\n- **Capture RGB and Depth Map Snapshots of a 3D Object**\n- **Reconstruct Pointcloud from RGB + Depth Map + Camera Parameters**\n- **Viewpoint Optimization**\n\n![Reconstructed Pointcloud](./assets/2d3d_reconstruction.png)\n\nThe last topic, **Viewpoint Optimization**, contains an approximate heuristic to get a set of viewpoints that optimally cover the complete model.\n\nBeing able to run a 3D-2D-3D projection is not enough: we also need to know how to navigate the scene and find the optimal set of viewpoints to capture it!\n\nThe presented method is not thoroughly debugged!\n\nIdea:\n\n- Get an initial set of viewpoints by projecting the voxel centers of a coarse voxel grid outwards.\n- Create a finer voxel grid and go through the initial set of viewpoints in a loop:\n  1. In each iteration, compute the priority map: how many fine voxel centers are seen if the camera is placed at each viewpoint of the initial set\n  2. Pick the viewpoint with the highest fine voxel count\n  3. Mark all fine voxels visible from it as seen\n  4. 
Store the viewpoint\n  5. Go to the next iteration (step 1)\n- The loop finishes when all fine voxel centers have been seen.\n\n## Authorship\n\nI compiled this guide following and modifying the cited resources, so most of it is not an original creative work of mine.\n\nMikel Sagardia, 2024.  \nNo guarantees.  \n