{"id":13528093,"url":"https://github.com/electricsquare/raymarching-workshop","last_synced_at":"2025-04-01T11:31:06.790Z","repository":{"id":47443090,"uuid":"156190888","full_name":"electricsquare/raymarching-workshop","owner":"electricsquare","description":"An Introduction to Raymarching","archived":false,"fork":false,"pushed_at":"2021-08-20T15:03:26.000Z","size":10635,"stargazers_count":946,"open_issues_count":0,"forks_count":54,"subscribers_count":38,"default_branch":"master","last_synced_at":"2024-08-02T06:25:40.814Z","etag":null,"topics":["graphics","raymarching","rendering","shaders","shadertoy","signed-distance-field","workshop"],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/electricsquare.png","metadata":{"files":{"readme":"readme.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-11-05T09:19:44.000Z","updated_at":"2024-07-29T18:11:30.000Z","dependencies_parsed_at":"2022-09-23T03:50:36.464Z","dependency_job_id":null,"html_url":"https://github.com/electricsquare/raymarching-workshop","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/electricsquare%2Fraymarching-workshop","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/electricsquare%2Fraymarching-workshop/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/electricsquare%2Fraymarching-workshop/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/electricsquare%2Fraymarching-workshop/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts
/GitHub/owners/electricsquare","download_url":"https://codeload.github.com/electricsquare/raymarching-workshop/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":222721794,"owners_count":17028600,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["graphics","raymarching","rendering","shaders","shadertoy","signed-distance-field","workshop"],"created_at":"2024-08-01T06:02:12.632Z","updated_at":"2024-11-02T13:31:22.357Z","avatar_url":"https://github.com/electricsquare.png","language":null,"readme":"# Raymarching Workshop Course Outline\r\n\r\n![](assets/ES-logo-and-wordmark.jpg)\r\n\r\nBrought to you by [Electric Square](https://www.electricsquare.com/)\r\n\r\nCreated and presented by [AJ Weeks](https://twitter.com/_ajweeks_) \u0026 [Huw Bowles](https://twitter.com/hdb1)\r\n\r\n\u003cdetails\u003e\r\n\u003csummary\u003eTable of Contents\u003c/summary\u003e\r\n\u003cbr\u003e\r\n\r\n- [Overview](#overview)\r\n  - [Raymarching](#raymarching-distance-fields)\r\n- [Let's begin!](#lets-begin)\r\n  - [2D SDF demo](#2d-sdf-demo)\r\n  - [Combining shapes](#combining-shapes)\r\n- [Transition to 3D](#transition-to-3d)\r\n  - [Raymarching loop](#raymarching-loop)\r\n  - [Camera](#camera)\r\n  - [Scene definition](#scene-definition)\r\n  - [Raymarching](#raymarching)\r\n  - [Ambient term](#ambient-term)\r\n  - [Diffuse term](#diffuse-term)\r\n  - [Shadows](#shadows)\r\n  - [Ground plane](#ground-plane)\r\n  - [Soft shadows](#soft-shadows)\r\n- [Texture mapping](#texture-mapping)\r\n  - [3D Texture mapping](#3d-texture-mapping)\r\n 
 - [2D Texture mapping](#2d-texture-mapping)\r\n  - [Triplanar mapping](#triplanar-mapping)\r\n- [Materials](#materials)\r\n- [Fog](#fog)\r\n- [Anti-aliasing](#anti-aliasing)\r\n- [Step count optimization](#step-count-optimization)\r\n- [Shape and material interpolation](#shape-and-material-interpolation)\r\n- [Domain repetition](#domain-repetition)\r\n- [Post processing effects](#post-processing-effects)\r\n  - [Vignette](#vignette)\r\n  - [Contrast](#contrast)\r\n  - [Ambient occlusion](#ambient-occlusion)\r\n- [What's next?](#whats-next)\r\n- [Glossary of terms](#glossary-of-terms)\r\n    \r\n\u003c/details\u003e\r\n\r\n[//]: \u003c\u003e (TOC created with https://github.com/Lirt/markdown-toc-bash)\r\n\r\n### Overview\r\nRendering an image involves determining the colour of every pixel in the image, which requires figuring out what surface lies behind the pixel in the world, and then 'shading' it to compute a final colour.\r\n\r\nCurrent generation GPUs take triangle meshes as input, rasterise them into pixels (called _fragments_ before they're drawn to a display), and then shade them to calculate their contribution to the image. While this pipeline is currently ubiquitous, it is also complicated and not necessarily the best way to learn graphics.\r\n\r\nAn alternative approach is to cast a ray through each pixel and intersect it with the surfaces in the scene, and then compute the shading.\r\n\r\nThis course introduces one technique for raycasting through 'distance fields'. A distance field is a function that returns how close a given point is to the closest surface in the scene. This distance defines the radius of a sphere of empty space around each point. 
Signed distance fields (SDFs) are distance fields that are defined both inside and outside objects; if the queried position is 'inside' a surface, its distance will be reported as negative, otherwise it will be positive.\r\n\r\n### What's possible with raymarching?\r\nThe game 'Claybook' uses distance fields exclusively to represent its scene. This affords it a lot of interesting possibilities, like completely dynamic surface topologies and shape morphing. These effects would be very difficult to achieve with triangle meshes. Other benefits include easy-to-implement and high-quality soft shadows and ambient occlusion.\r\n\r\n![](assets/0-claybook-01.gif)\r\n\r\nhttps://www.claybookgame.com/\r\n\r\nThe following image was also rendered in real-time using the techniques we'll cover today (plus many fancy techniques which we won't have time to dive into).\r\n\r\n![](assets/0-snail.png)\r\n\r\nYou can run it live in your browser here: https://www.shadertoy.com/view/ld3Gz2\r\n\r\nBy using an SDF (signed distance field), the geometry for this scene wasn't created in a DCC tool like Maya; instead, it is represented entirely parametrically. This makes it trivial to animate the shape by simply varying the inputs to the scene mapping function. \r\n\r\nOther graphical effects are made simpler by raymarching when compared with the traditional rasterization alternatives. Subsurface scattering, for instance, requires simply sending a few extra rays into the surface to see how thick it is. Ambient occlusion, anti-aliasing, and depth of field are three other techniques which require just a few extra lines and yet greatly improve the image quality.\r\n\r\n### Raymarching distance fields\r\nWe will march along each ray and look for an intersection with a surface in the scene. One way to do this would be to start at the ray origin (on the camera plane), and take uniform steps along the ray, evaluating the distance field at each point. 
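\r\n\r\nAs a sketch, this naive fixed-step approach might look like the following (assuming an `sdf()` function like the ones defined later; the step size and iteration count are arbitrary illustrative values):\r\n\r\n```cpp\r\n// Naive raymarch sketch: step a fixed amount regardless of the distance to the scene\r\nfloat castRayFixedStep(vec3 rayOrigin, vec3 rayDir)\r\n{\r\n    const float stepSize = 0.05;\r\n    float t = 0.0;\r\n    for (int i = 0; i \u003c 1000; i++)\r\n    {\r\n        if (sdf(rayOrigin + rayDir * t) \u003c 0.001) return t; // Hit\r\n        t += stepSize;\r\n    }\r\n    return -1.0; // Miss\r\n}\r\n```\r\n\r\n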
When the distance to the scene is less than a threshold value, we know we have hit a surface and can therefore terminate the raymarch and shade that pixel.\r\n\r\nA more efficient approach is to use the distance returned by the SDF to determine the next step size. As mentioned above, the distance returned by an SDF can be regarded as the radius of a sphere of empty space around the input point. It is therefore safe to step by this amount along the ray because we know we will not pass through any surfaces.\r\n\r\nIn the following 2D representation of raymarching, each circle's center is where the scene was sampled from. The ray was then marched forward by that distance (extending to the radius of the circle), and then resampled.\r\n\r\nAs you can see, sampling the SDF doesn't give you the exact intersection point of your ray, but rather a minimum distance you can travel without passing through a surface.\r\n\r\nOnce this distance is below a certain threshold, the raymarch terminates and the pixel can be shaded based on the properties of the surface intersected with.\r\n\r\n![](assets/0-SDF.png)\r\n\r\nPlay around with this shader in your browser here (click and drag in the image to set the ray direction): https://www.shadertoy.com/view/lslXD8\r\n\r\n### Comparison to ray tracing\r\nAt this point one might ask why we don't just compute the intersection with the scene directly using analytic mathematics, a technique referred to as Ray Tracing. 
This is how offline renderers typically work - all the triangles in the scene are indexed into some kind of spatial data structure like a Bounding Volume Hierarchy (BVH) or kD-tree, which allows efficient intersection of triangles situated along a ray.\r\n\r\nWe raymarch distance fields instead because:\r\n- It's very simple to implement the ray casting routine\r\n- We avoid all of the complexity of implementing ray-triangle intersections and BVH data structures\r\n- We don't need to author the explicit scene representation - triangle meshes, tex coords, colours, etc\r\n- We benefit from a range of useful features of distance fields, some of which are mentioned above\r\n\r\nHaving said the above, there are some elegant/simple entry points into ray tracing. The [Ray Tracing in One Weekend free book](http://in1weekend.blogspot.com/2016/01/ray-tracing-in-one-weekend.html) (and [subsequent](http://in1weekend.blogspot.com/2016/01/ray-tracing-second-weekend.html) [chapters](http://in1weekend.blogspot.com/2016/03/ray-tracing-rest-of-your-life.html)) are very highly recommended and are essential reading for anyone interested in graphics.\r\n\r\n\r\n## Let's begin!\r\n### ShaderToy\r\nShaderToy is a shader creation website and a platform for browsing, sharing and discussing shaders.\r\n\r\nWhile you can jump straight in and start writing a new shader without creating an account, this is dangerous as you can easily lose work if there are connection issues or if you hang the GPU (easily done by e.g. creating an infinite loop).\r\nTherefore we strongly recommend creating an account (it's fast/easy/free) by heading here: https://www.shadertoy.com/signin, and saving regularly.\r\n\r\nFor a ShaderToy overview and getting started guide, we recommend following a tutorial such as this one from [@The_ArtOfCode](https://twitter.com/the_artofcode): https://www.youtube.com/watch?v=u5HAYVHsasc. 
The basics here are necessary to follow the rest of the workshop.\r\n\r\n\r\n### 2D SDF demo\r\n\r\nWe provide a simple framework for defining and visualizing 2D signed distance fields.\r\n\r\nhttps://www.shadertoy.com/view/Wsf3Rj\r\n\r\nPrior to defining the distance field the result will be entirely white. The goal of this section is to design an SDF that gives the desired scene shape (white outline). In code this distance is computed by the `sdf()` function, which is given a 2D position in space as input. The concepts you learn here will generalise directly to 3D space and will allow you to model a 3D scene.\r\n\r\nStart simple - first try just using the x or y component of the point `p` and observe the result:\r\n\r\n```cpp\r\nfloat sdf(vec2 p)\r\n{\r\n    return p.y;\r\n}\r\n```\r\n\r\nThe result should look as follows:\r\n\r\n![](assets/0-SDF-horizontal.png)\r\n\r\nGreen denotes 'outside' surfaces, red denotes 'inside' surfaces, the white line delineates the surface itself, and the shading in the inside/outside regions illustrates distance iso-lines - lines at fixed distances from the surface. This SDF models a horizontal line at `y=0`. What sort of geometric primitive would this represent in 3D?\r\n\r\nAnother good thing to try is to use distances, for example: `return length(p);`. This function returns the magnitude of the vector, and in this case it's giving us the current point's distance to the origin.\r\n\r\nA point is not a very interesting thing to render as a point is infinitesimal, and our rays would always miss it!\r\nWe can give the point some area by subtracting the desired radius from the distance: `return length(p) - 0.25;`.\r\nWe can also modify the input point prior to taking its magnitude: `length(p - vec2(0.0, 0.2)) - 0.25;`.\r\nWhat effect does this have on the shape?\r\nWhat values might the function be returning for points 'inside' the circle?\r\n\r\nCongratulations - you have just modelled a circle using mathematics :). 
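\r\n\r\nWrapped up as a reusable function, this might look like the sketch below (the shader already provides an `sdCircle()` helper along these lines):\r\n\r\n```cpp\r\n// Signed distance from point p to a circle at `center` with radius r\r\nfloat sdCircle(vec2 p, vec2 center, float r)\r\n{\r\n    return length(p - center) - r;\r\n}\r\n```\r\n\r\n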
This will trivially extend to 3D, in which case it models a sphere. Contrast this scene representation with other 'explicit' scene representations such as triangle meshes or NURBS surfaces. We created a sphere in minutes with a single line of code, and our code directly maps to one mathematical definition for a sphere - 'the set of all points that are equidistant from a center point'.\r\n\r\nFor other types of primitives, the distance functions are similarly elegant. iq made a great reference page with images: http://iquilezles.org/www/articles/distfunctions/distfunctions.htm\r\n\r\nOnce you understand how the distance to a primitive works, put it in a box: define a function for it so you don't need to remember and write out the code each time. There is a function already defined for the circle, `sdCircle()`, which you can find in the shader. Add any primitives you wish.\r\n\r\n### Combining shapes\r\nNow that we know how to create individual primitives, how can we combine them to define a scene with multiple shapes?\r\n\r\nOne way to do this is the 'union' operator - which is defined as the minimum of two distances. It's best to experiment with the code in order to get a strong grasp of this, but the intuition is that the SDF gives the distance to the nearest surface, and if the scene has multiple objects you want the distance to the closest object, which will be the minimum of the distances to each object.\r\n\r\nIn code this may look as follows:\r\n\r\n```cpp\r\nfloat sdf(vec2 p)\r\n{\r\n    float d = 1000.0;\r\n    \r\n    d = min(d, sdCircle(p, vec2(-0.1, 0.4), 0.15));\r\n    d = min(d, sdCircle(p, vec2( 0.5, 0.1), 0.35));\r\n    \r\n    return d;\r\n}\r\n```\r\n\r\nIn this way we can compactly combine many shapes. Once this is understood, the `opU()` function should be used, which stands for 'operation union'.\r\n\r\nThis is only scratching the surface of what is possible. We can get smooth blends using a fancy soft min function - try using the provided `opBlend()`. 
There are many other interesting techniques that can be applied; the interested reader is referred to this extended introduction to building scenes with SDFs: https://www.youtube.com/watch?v=s8nFqwOho-s\r\n\r\nExample:\r\n\r\n![](assets/0-SDF-demo.jpg)\r\n\r\n\r\n## Transition to 3D\r\nHopefully you've gained a basic understanding of how distance fields can be used to represent scene data, and how we'll use raymarching to find intersection points with the scene. We're now going to start working in three dimensions, where the real magic happens.\r\n\r\nWe recommend saving your current shader and starting a new one so that you can refer back to your 2D visualization later.\r\nMost of the helpers can be copied into your new shader and made to work in 3D by swapping the `vec2`s with `vec3`s.\r\n\r\n### Raymarching loop\r\nRather than visualise the SDF like we did in 2D, we're going to jump right into rendering the scene. Here's the basic idea of how we'll implement raymarching (in pseudo code):\r\n\r\n```\r\nMain function\r\n    Evaluate camera\r\n    Call RenderRay\r\n\r\nRenderRay function\r\n    Raymarch to find intersection of ray with scene\r\n    Shade\r\n```\r\n\r\nThese steps will now each be described in more detail.\r\n\r\n### Camera\r\n```cpp\r\nvec3 getCameraRayDir(vec2 uv, vec3 camPos, vec3 camTarget)\r\n{\r\n    // Calculate camera's \"orthonormal basis\", i.e. 
its transform matrix components\r\n    vec3 camForward = normalize(camTarget - camPos);\r\n    vec3 camRight = normalize(cross(vec3(0.0, 1.0, 0.0), camForward));\r\n    vec3 camUp = normalize(cross(camForward, camRight));\r\n     \r\n    float fPersp = 2.0;\r\n    vec3 vDir = normalize(uv.x * camRight + uv.y * camUp + camForward * fPersp);\r\n \r\n    return vDir;\r\n}\r\n```\r\n\r\nThis function first calculates the three axes of the camera's 'view' matrix; the forward, right, and up vectors.\r\nThe forward vector is the normalized vector from the camera position to the look target position.\r\nThe right vector is found by crossing the forward vector with the world up axis.\r\nThe forward and right vectors are then crossed to obtain the camera up vector.\r\n\r\nFinally the camera ray is computed using this frame by taking a point in front of the camera and offsetting it in the camera right and up directions using the pixel coordinates `uv`.\r\n`fPersp` allows us to indirectly control our camera's field of view. You can think of this multiplication as moving the near plane closer and farther from the camera. 
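\r\n\r\nIf you prefer to think in terms of a field-of-view angle, `fPersp` can be derived from it. A sketch (the `fov` name and value here are illustrative):\r\n\r\n```cpp\r\nfloat fov = 53.13; // Vertical field of view in degrees\r\nfloat fPersp = 1.0 / tan(radians(fov) * 0.5); // ~2.0, matching the value above\r\n```\r\n\r\n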
Experiment with different values to see the result.\r\n\r\n### Scene definition\r\n```cpp\r\nfloat sdSphere(vec3 p, float r)\r\n{\r\n    return length(p) - r;\r\n}\r\n \r\nfloat sdf(vec3 pos)\r\n{\r\n    float t = sdSphere(pos-vec3(0.0, 0.0, 10.0), 3.0);\r\n     \r\n    return t;\r\n}\r\n```\r\n\r\nAs you can see, we've added an `sdSphere()`, which is identical to `sdCircle` save for the number of components in our input point.\r\n\r\n### Raymarching\r\nPseudo code:\r\n\r\n```cpp\r\ncastRay\r\n    for i in step count:\r\n         sample scene\r\n             if within threshold return dist\r\n    return -1\r\n```\r\n\r\nTry to write this yourself - if you get stuck, only then take a look at the solution below.\r\n\r\nReal code:\r\n\r\n```cpp\r\nfloat castRay(vec3 rayOrigin, vec3 rayDir)\r\n{\r\n    float t = 0.0; // Stores current distance along ray\r\n     \r\n    for (int i = 0; i \u003c 64; i++)\r\n    {\r\n        float res = sdf(rayOrigin + rayDir * t);\r\n        if (res \u003c (0.0001*t))\r\n        {\r\n            return t;\r\n        }\r\n        t += res;\r\n    }\r\n     \r\n    return -1.0;\r\n}\r\n```\r\n\r\nWe'll now add a `render` function, which will eventually be responsible for shading the found intersection point. For now, however, let's display the distance to the scene to check we're on track. We'll scale and invert it to better see the differences.\r\n\r\n```cpp\r\nvec3 render(vec3 rayOrigin, vec3 rayDir)\r\n{\r\n    float t = castRay(rayOrigin, rayDir);\r\n    \r\n    // Visualize depth\r\n    vec3 col = vec3(1.0-t*0.075);\r\n    \r\n    return col;\r\n}\r\n```\r\n\r\nTo calculate each ray's direction, we'll want to transform the pixel coordinate input `fragCoord` from the range `[0, w), [0, h)` into `[-a, a], [-1, 1]`, where `w` and `h` are the width and height of the screen in pixels, and `a` is the aspect ratio of the screen. 
We can then pass the value returned from this helper into the `getCameraRayDir` function we defined above to get the ray direction.\r\n\r\n```cpp\r\nvec2 normalizeScreenCoords(vec2 screenCoord)\r\n{\r\n    vec2 result = 2.0 * (screenCoord/iResolution.xy - 0.5);\r\n    result.x *= iResolution.x/iResolution.y; // Correct for aspect ratio\r\n    return result;\r\n}\r\n```\r\n\r\nOur main image function then looks as follows:\r\n\r\n```cpp\r\nvoid mainImage(out vec4 fragColor, vec2 fragCoord)\r\n{\r\n    vec3 camPos = vec3(0, 0, -1);\r\n    vec3 camTarget = vec3(0, 0, 0);\r\n    \r\n    vec2 uv = normalizeScreenCoords(fragCoord);\r\n    vec3 rayDir = getCameraRayDir(uv, camPos, camTarget);\r\n    \r\n    vec3 col = render(camPos, rayDir);\r\n    \r\n    fragColor = vec4(col, 1); // Output to screen\r\n}\r\n```\r\n\r\n**Exercises:**\r\n- Experiment with the step count and observe how the result changes.\r\n- Experiment with the termination threshold and observe how the result changes.\r\n\r\n![](assets/1-depth.png)\r\n\r\nFor the full working program, see Shadertoy: [Part 1a](https://www.shadertoy.com/view/XltBzj)\r\n\r\n\r\n### Ambient term\r\nTo get some colour into the scene we're first going to differentiate between objects and the background.\r\n\r\nTo do this, we can return -1 in castRay to signal nothing was hit. 
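\r\n\r\nThe `castRay` above already returns -1.0 when its step budget runs out. A common refinement (which the materials section later uses) is to also bail out once `t` exceeds a maximum draw distance; a sketch, assuming the same `sdf()`:\r\n\r\n```cpp\r\nfloat castRay(vec3 rayOrigin, vec3 rayDir)\r\n{\r\n    float tmax = 250.0; // Maximum draw distance\r\n    float t = 0.0;\r\n    for (int i = 0; i \u003c 64; i++)\r\n    {\r\n        float res = sdf(rayOrigin + rayDir * t);\r\n        if (res \u003c (0.0001*t)) return t; // Hit a surface\r\n        if (t \u003e tmax) return -1.0;     // Ray escaped the scene\r\n        t += res;\r\n    }\r\n    return -1.0; // Step budget exhausted: treat as a miss\r\n}\r\n```\r\n\r\n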
We can then handle that case in render.\r\n\r\n```cpp\r\nvec3 render(vec3 rayOrigin, vec3 rayDir)\r\n{\r\n    vec3 col;\r\n    // t stores the distance the ray travelled before intersecting a surface\r\n    float t = castRay(rayOrigin, rayDir);\r\n \r\n    // -1 means the ray didn't intersect anything, so render the skybox\r\n    if (t == -1.0)\r\n    {\r\n        // Skybox colour\r\n        col = vec3(0.30, 0.36, 0.60) - (rayDir.y * 0.7);\r\n    }\r\n    else\r\n    {\r\n        vec3 objectSurfaceColour = vec3(0.4, 0.8, 0.1);\r\n        vec3 ambient = vec3(0.02, 0.021, 0.02);\r\n        col = ambient * objectSurfaceColour;\r\n    }\r\n     \r\n    return col;\r\n}\r\n```\r\n![](assets/1-ambient.png)\r\n\r\nhttps://www.shadertoy.com/view/4tdBzj\r\n\r\n\r\n\r\n### Diffuse term\r\nTo get more realistic lighting, let's compute the surface normal so we can calculate basic Lambertian lighting.\r\n\r\nTo calculate the normal, we are going to estimate the gradient of the distance field along all three axes.\r\n\r\nWhat this means in practice is sampling the SDF four times: once at the surface point, plus once per axis at a slightly offset position.\r\n\r\n```cpp\r\nvec3 calcNormal(vec3 pos)\r\n{\r\n    // Center sample\r\n    float c = sdf(pos);\r\n    // Use offset samples to compute gradient / normal\r\n    vec2 eps_zero = vec2(0.001, 0.0);\r\n    return normalize(vec3( sdf(pos + eps_zero.xyy), sdf(pos + eps_zero.yxy), sdf(pos + eps_zero.yyx) ) - c);\r\n}\r\n```\r\n\r\nOne great way to inspect normals is by displaying them as though they represented color. 
This is what a sphere should look like when displaying its scaled and biased normal (brought from `[-1, 1]` into `[0, 1]`, since your monitor can't display negative colour values):\r\n\r\n```cpp\r\ncol = N * vec3(0.5) + vec3(0.5);\r\n```\r\n![](assets/1-normals.png)\r\n\r\nNow that we have a normal, we can take the dot product between it and the light direction.\r\n\r\nThis will tell us how directly the surface is facing the light and therefore how bright it should be.\r\n\r\nWe take the max of this value with 0 to prevent negative values from giving unwanted effects on the dark side of objects.\r\n\r\n```cpp\r\n// L is vector from surface point to light, N is surface normal. N and L must be normalized!\r\nfloat NoL = max(dot(N, L), 0.0);\r\nvec3 LDirectional = vec3(0.9, 0.9, 0.8) * NoL;\r\nvec3 LAmbient = vec3(0.03, 0.04, 0.1);\r\nvec3 diffuse = col * (LDirectional + LAmbient);\r\n```\r\n\r\nOne very important part of rendering which can easily be overlooked is gamma correction. Pixel values sent to the monitor are in gamma space, which is a nonlinear space used to maximise precision by using fewer bits in intensity ranges that humans are less sensitive to.\r\n\r\nBecause monitors don't operate in \"linear\" space, we need to compensate for their gamma curve prior to outputting a colour. The difference is very noticeable and should always be corrected for. 
In reality we don't know what the gamma curve for a particular display device is, so the whole situation with display technology is an awful mess (hence the gamma tuning step in many games), but a common assumption is the following gamma curve:\r\n\r\n![](assets/1-gamma.svg)\r\n\r\nThe constant 0.4545 is simply 1.0 / 2.2.\r\n\r\n```cpp\r\ncol = pow(col, vec3(0.4545)); // Gamma correction\r\n```\r\n\r\n![](assets/1-diffuse.png)\r\n\r\nhttps://www.shadertoy.com/view/4t3fzn\r\n\r\n\r\n### Shadows\r\nTo calculate shadows, we can fire a ray starting at the point we intersected the scene and going in the direction of the light source.\r\n\r\nIf this ray march results in us hitting something, then we know the light will also be obstructed and so this pixel is in shadow.\r\n```cpp\r\nfloat shadow = 0.0;\r\n// Offset the origin along the normal to avoid immediately re-intersecting the surface\r\nvec3 shadowRayOrigin = pos + N * 0.01;\r\nvec3 shadowRayDir = L;\r\nfloat shadowRayT = castRay(shadowRayOrigin, shadowRayDir);\r\nif (shadowRayT != -1.0)\r\n{\r\n    shadow = 1.0;\r\n}\r\ncol = mix(col, col*0.2, shadow);\r\n```\r\n\r\n### Ground plane\r\nLet's add a ground plane so we can see the shadows cast by our spheres better.\r\n\r\n```cpp\r\n// p: point to query, n.xyz: plane surface normal (must be normalized), n.w: plane's distance from origin (along its normal)\r\nfloat sdPlane(vec3 p, vec4 n)\r\n{\r\n    return dot(p, n.xyz) + n.w;\r\n}\r\n```\r\n\r\n### Soft shadows\r\nShadows in real life don't stop immediately; they have some falloff, referred to as a penumbra.\r\n\r\nWe can model this by marching several rays from our surface point, each with a slightly different direction.\r\n\r\nWe can then sum the result and average over the number of iterations we did. 
This will cause some of the rays at the shadow's edges to hit the occluder and others to miss it, giving partial darkness.\r\n\r\nGenerating a somewhat pseudo-random number can be done in a number of ways; we'll use the following:\r\n\r\n```cpp\r\n// Return a pseudo-random value in the range [0, 1), seeded via coord\r\nfloat rand(vec2 coord)\r\n{\r\n  return fract(sin(dot(coord.xy, vec2(12.9898,78.233))) * 43758.5453);\r\n}\r\n```\r\n\r\nThis function will return a number in the range [0, 1). We know the output is bound to this range because the outermost operation is fract, which returns the fractional component of a floating point number.\r\n\r\nWe can then use this to calculate our shadow rays as follows:\r\n\r\n```cpp\r\nfloat shadow = 0.0;\r\nfloat shadowRayCount = 8.0; // More rays give a smoother penumbra\r\nfor (float s = 0.0; s \u003c shadowRayCount; s++)\r\n{\r\n    vec3 shadowRayOrigin = pos + N * 0.01;\r\n    // Vary the seed per iteration so each shadow ray gets a different offset\r\n    float r = rand(rayDir.xy + s) * 2.0 - 1.0;\r\n    // SHADOW_FALLOFF is a small user-defined constant controlling the penumbra size\r\n    vec3 shadowRayDir = L + vec3(SHADOW_FALLOFF) * r;\r\n    float shadowRayT = castRay(shadowRayOrigin, shadowRayDir);\r\n    if (shadowRayT != -1.0)\r\n    {\r\n        shadow += 1.0;\r\n    }\r\n}\r\ncol = mix(col, col*0.2, shadow/shadowRayCount);\r\n```\r\n\r\n## Texture mapping\r\nRather than define a single surface colour (or other characteristic) uniformly over the entire surface, one can define patterns to apply to the surface using textures.\r\nWe'll cover three ways of achieving this.\r\n\r\n### 3D Texture mapping\r\nThere are volume textures readily accessible in shadertoy that can be assigned to a channel. 
Try sampling one of these textures using the 3D position of the surface point:\r\n\r\n```cpp\r\n// assign a 3D noise texture to iChannel0 and then sample based on world position\r\nfloat textureFreq = 0.5;\r\nvec3 surfaceCol = texture(iChannel0, textureFreq * surfacePos).xyz;\r\n```\r\n\r\nOne way to sample noise is to add together multiple scales, using something like the following:\r\n\r\n```cpp\r\n// assign a 3D noise texture to iChannel0 and then sample based on world position\r\nfloat textureFreq = 0.5;\r\nvec3 surfaceCol =\r\n    0.5    * texture(iChannel0, 1.0 * textureFreq * surfacePos).xyz +\r\n    0.25   * texture(iChannel0, 2.0 * textureFreq * surfacePos).xyz +\r\n    0.125  * texture(iChannel0, 4.0 * textureFreq * surfacePos).xyz +\r\n    0.0625 * texture(iChannel0, 8.0 * textureFreq * surfacePos).xyz ;\r\n```\r\n\r\nThe constants/weights above are typical for fractal noise, but they can take any desired values. Try experimenting with weights/scales/colours and seeing what interesting effects you can achieve.\r\n\r\nTry animating your object using iTime and observing how the volume texture behaves. Can this behaviour be changed?\r\n\r\n### 2D Texture mapping\r\nApplying a 2D texture is an interesting problem - how do we project the texture onto the surface? In normal 3D graphics, each triangle in an object has one or more UVs assigned which provide the coordinates of the region of the texture which should be mapped to the triangle (texture mapping). 
In our case we don't have UVs provided, so we need to figure out how to sample the texture.\r\n\r\nOne approach is to sample the texture using a top-down world projection, sampling the texture based on the X \u0026 Z coordinates:\r\n\r\n```cpp\r\n// top down projection\r\nfloat textureFreq = 0.5;\r\nvec2 uv = textureFreq * surfacePos.xz;\r\n \r\n// sample texture\r\nvec3 surfaceCol = texture(iChannel0, uv).xyz;\r\n```\r\n\r\nWhat limitations do you see with this approach?\r\n\r\n\r\n\r\n### Triplanar mapping\r\n\r\nA more advanced way to map textures is to do 3 projections from the primary axes and then blend the results, a technique called triplanar mapping. The goal of the blending is to pick the best texture for each point on the surface. One possibility is to define the blend weights based on the alignment of the surface normal with each world axis. A surface that directly faces one of the axes will receive a large blend weight:\r\n\r\n```cpp\r\nvec3 triplanarMap(vec3 surfacePos, vec3 normal)\r\n{\r\n    // Take projections along 3 axes, sample texture values from each projection, and stack into a matrix\r\n    mat3 triMapSamples = mat3(\r\n        texture(iChannel0, surfacePos.yz).rgb,\r\n        texture(iChannel0, surfacePos.xz).rgb,\r\n        texture(iChannel0, surfacePos.xy).rgb\r\n        );\r\n \r\n    // Weight the three samples by the absolute value of the normal's components\r\n    return triMapSamples * abs(normal);\r\n}\r\n```\r\n\r\n![](assets/2-triplanar.png)\r\n\r\nWhat limitations do you see with this approach?\r\n\r\n\r\n### Materials\r\nAlong with the distance we return from the castRay function, we can also return an index which represents the material of the object hit. 
We can use this index to colour objects accordingly.\r\n\r\nOur operators will need to take vec2s rather than floats, and compare the first component of each.\r\n\r\nNow, when defining our scene we'll also specify a material for each primitive as the y component of a vec2:\r\n\r\n```cpp\r\nvec2 res =     vec2(sdSphere(pos-vec3(3,-2.5,10), 2.5),      0.1);\r\nres = opU(res, vec2(sdSphere(pos-vec3(-3, -2.5, 10), 2.5),   2.0));\r\nres = opU(res, vec2(sdSphere(pos-vec3(0, 2.5, 10), 2.5),     5.0));\r\nreturn res;\r\n```\r\n\r\nThis requires the operation functions to accept vec2s instead of floats. Here's the new version of the union operator:\r\n\r\n```cpp\r\nvec2 opU(vec2 d1, vec2 d2)\r\n{\r\n    return (d1.x \u003c d2.x) ? d1 : d2;\r\n}\r\n```\r\n\r\nThe new version of `castRay` tracks the material ID of the closest object at all times, so that when we decide we've hit a surface, we can return its material ID.\r\n\r\n```cpp\r\n// Returns a vec2. x: distance along the ray to the surface, y: material ID\r\nvec2 castRay(vec3 rayOrigin, vec3 rayDir)\r\n{\r\n    float tmax = 250.0;\r\n    // t stores the distance the ray travelled before intersecting a surface\r\n    float t = 0.0;\r\n    \r\n    vec2 result;\r\n    result.y = -1.0; // Default material ID\r\n    \r\n    for (int i = 0; i \u003c 256; i++)\r\n    {\r\n        vec2 res = sdf(rayOrigin + rayDir * t);\r\n        if (res.x \u003c (0.0001*t))\r\n        {\r\n            // When within a small distance of the surface, count it as an intersection\r\n            result.x = t;\r\n            return result;\r\n        }\r\n        else if (res.x \u003e tmax)\r\n        {\r\n            // Indicate that this ray didn't intersect anything\r\n            result.y = -1.0;\r\n            result.x = -1.0;\r\n            return result;\r\n        }\r\n        t += res.x;\r\n        result.y = res.y; // Material ID of closest object\r\n    }\r\n    \r\n    result.x = t; // Distance to intersection\r\n    return 
result;\r\n}\r\n```\r\n\r\nOur `render` function can change to extract both fields now returned from `castRay` as follows:\r\n\r\n```cpp\r\nvec2 res = castRay(rayOrigin, rayDir);\r\nfloat t = res.x; // Distance to surface\r\nfloat m = res.y; // Material ID\r\n```\r\n\r\nWe can then multiply this material index by some values in the render function to get different colours for each object. Try different values out.\r\n\r\n```cpp\r\n// m just stores some material identifier, here we're arbitrarily modifying it\r\n// just to get some different colour values per object\r\ncol = vec3(0.18*m, 0.6-0.05*m, 0.2)\r\nif (m == 2.0)\r\n{\r\n  // Apply triplanar mapping only to objects with material ID 2\r\n    col *= triplanarMap(pos, N, 0.6);\r\n}\r\n```\r\n\r\nLet's colour the ground plane using a checkerboard pattern. I've taken this fancy analytically-anti-aliased checkerbox function from Inigo Quilez' website.\r\n\r\n```cpp\r\nfloat checkers(vec2 p)\r\n{\r\n    vec2 w = fwidth(p) + 0.001;\r\n    vec2 i = 2.0*(abs(fract((p-0.5*w)*0.5)-0.5)-abs(fract((p+0.5*w)*0.5)-0.5))/w;\r\n    return 0.5 - 0.5*i.x*i.y;\r\n}\r\n```\r\n\r\nWe'll pass in the xz components of our plane position to get the pattern to repeat in those dimensions.\r\n\r\n![](assets/3-materials.png)\r\n\r\nhttps://www.shadertoy.com/view/Xl3fzn\r\n\r\n### Fog\r\nWe can add now fog to the scene based on how far each intersection occurred from the camera.\r\n\r\nSee if you can get something similar to the following:\r\n\r\n![](assets/3-fog.png)\r\n\r\nhttps://www.shadertoy.com/view/Xtcfzn\r\n\r\n\r\nShape and material blending\r\nTo avoid the harsh crease given by the min operator, we can use a more sophisticated operator which blends the shapes smoothly.\r\n\r\n```cpp\r\n// polynomial smooth min (k = 0.1);\r\nfloat sminCubic(float a, float b, float k)\r\n{\r\n    float h = max(k-abs(a-b), 0.0);\r\n    return min(a, b) - h*h*h/(6.0*k*k);\r\n}\r\n \r\nvec2 opBlend(vec2 d1, vec2 d2)\r\n{\r\n    float k = 2.0;\r\n    
    float d = sminCubic(d1.x, d2.x, k);
    float m = mix(d1.y, d2.y, clamp(d1.x-d, 0.0, 1.0));
    return vec2(d, m);
}
```

![](assets/3-blending.png)

### Anti-aliasing
By sampling the scene many times with slightly offset camera direction vectors, we can get a smoothed value which avoids aliasing.

I've pulled the scene colour calculation out into its own function to make calling it in the loop clearer.

```cpp
float AA_size = 2.0;
float count = 0.0;
for (float aaY = 0.0; aaY < AA_size; aaY++)
{
    for (float aaX = 0.0; aaX < AA_size; aaX++)
    {
        fragColor += getSceneColor(fragCoord + vec2(aaX, aaY) / AA_size);
        count += 1.0;
    }
}
fragColor /= count;
```

### Step count optimization
If we visualize how many steps we take for each pixel in red, we can clearly see that the rays which hit nothing are responsible for most of our iterations. Bailing out once a ray has travelled past a maximum draw distance avoids those wasted steps, which can give a significant performance boost for certain scenes.

```cpp
if (t > drawDist) return backgroundColor;
```
![](assets/4-step-count-vis-0.png)
![](assets/4-step-count-vis-1.png)

### Shape and material interpolation
We can interpolate between two shapes using the `mix` function, with `iTime` modulating the blend over time.

```cpp
vec2 shapeA = vec2(sdBox(pos-vec3(6.5, -3.0, 8), vec3(1.5)), 1.5);
vec2 shapeB = vec2(sdSphere(pos-vec3(6.5, -3.0, 8), 1.5),    3.0);
res = opU(res, mix(shapeA, shapeB, sin(iTime)*0.5+0.5));
```
![](assets/4-shape-interp.gif)

### Domain repetition
It's quite easy to repeat a shape using a signed distance field: essentially you just modulo the input position in one or more dimensions.

This technique can be used, for example, to repeat a column several times without increasing the size of the scene's representation.

Here I've repeated all three components of the input position, then used the subtraction
operator (`max()`) to limit the repetition to a bounding box.

![](assets/4-domain-rep.png)

One gotcha is that you need to subtract half of the value you are modulo-ing by in order to center the repetition on your shape, so as not to cut it in half.

```cpp
float repeat(float d, float domain)
{
    return mod(d, domain)-domain/2.0;
}
```


## Post processing effects
### Vignette
By darkening pixels which are farther from the center of the screen, we can get a simple vignette effect.

### Contrast
Darker and lighter values can be accentuated, causing the perceived dynamic range to increase along with the intensity of the image.

```cpp
col = smoothstep(0.0,1.0,col);
```

### "Ambient occlusion"
If we take the inverse of the image shown above (in the step count optimization section), we can get a weird AO-like effect.

```cpp
col *= (1.0-vec3(steps/maxSteps));
```

---

As you can see, many post processing effects can be implemented trivially; play around with different functions and see what other effects you can create.

![](assets/raymarching-weeks-03.gif)
[www.shadertoy.com/view/MtdBzs](https://www.shadertoy.com/view/MtdBzs)

## What's next?
We've just covered the basics here; there is much more to be explored in this field, such as:

- Subsurface scattering
- Ambient occlusion
- Animated primitives
- Primitive warping functions (twist, bend, ...)
- Transparency (refraction, caustics, ...)
- Optimizations (bounding volume hierarchies)
- ...

Browse ShaderToy to get some inspiration about what can be done and poke through various shaders to see how different effects are implemented. Many shaders have variables which you can tweak and instantly see the effects of (alt-enter is the shortcut to compile!).

Also give the references a read-through if you're interested in learning more!

---

Thanks for reading!
Be sure to send us your cool shaders! If you have any feedback on the course, we'd also love to hear it!

Contact us on twitter [@\_ajweeks\_](https://twitter.com/_ajweeks_) & [@hdb1](https://twitter.com/hdb1)

[Electric Square](https://www.electricsquare.com/careers/) is hiring!

---

### Recommended reading:

SDF functions: http://jamie-wong.com/2016/07/15/ray-marching-signed-distance-functions/

Claybook demo: https://www.youtube.com/watch?v=Xpf7Ua3UqOA

Ray Tracing in One Weekend: http://in1weekend.blogspot.com/2016/01/ray-tracing-in-one-weekend.html

Physically-based rendering bible, PBRT: https://www.pbrt.org/

Primitives reference: http://iquilezles.org/www/articles/distfunctions/distfunctions.htm

Extended introduction to building scenes with SDFs: https://www.youtube.com/watch?v=s8nFqwOho-s

Very realistic lighting & colours: http://www.iquilezles.org/www/articles/outdoorslighting/outdoorslighting.htm

---

## Glossary of terms

Much of the computer graphics & mathematics literature is littered with short, hard-to-comprehend variable names. We tried to keep our variable names clear yet concise to avoid confusion, but to provide further clarity we've defined in more detail what most variables map to.

|Term|Description|
|---|---|
|shader|A piece of code that runs on the GPU. In this workshop we exclusively write fragment shaders, which (essentially) run once per pixel|
|sdf|Signed distance field|
|sd{Shape} (e.g. `sdCircle`)|A function that will compute the shortest (signed) distance to the surface of a {Shape}. The result will be 0 when at the surface, and negative when inside the shape.|
|p/pos|Position, typically assumed to be in world space in this workshop|
|op{X} (e.g. `opU`)|Operator that combines the results from multiple distance functions|
|uv/fragCoord/screenCoord|Texture coordinate, used to describe how a texture (image) maps to geometry. In this workshop we mainly use a screen UV (which spans the whole screen, going from 0 to 1 from the bottom left to the top right of the screen) to compute the ray direction.|
|col|Stores a colour value as an RGB triplet|
|fragColor|The final colour value produced by the shader, and therefore what gets displayed on screen.|
|N|[Surface normal](https://mathworld.wolfram.com/NormalVector.html) (the unit vector perpendicular to a given surface)|
|NoL|The result of the [dot product](https://mathworld.wolfram.com/DotProduct.html) between the normal & light vectors. Pronounced "N dot L". Produces a [cosine falloff](https://ogldev.org/www/tutorial18/tutorial18.html).|
|i{Variable} (e.g. `iTime`)|An input provided by ShaderToy. View the descriptions of all inputs via the dropdown above the code window.|

Functions like `abs`, `fwidth`, and `fract` are built-in functions provided by the graphics API. View their definitions here: https://www.khronos.org/registry/OpenGL-Refpages/gl4/index.php