Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/abdallahabusedo/motion-blur-in-webgl-
- Host: GitHub
- URL: https://github.com/abdallahabusedo/motion-blur-in-webgl-
- Owner: abdallahabusedo
- License: MIT
- Created: 2019-12-20T11:12:23.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2023-01-07T13:01:22.000Z (almost 2 years ago)
- Last Synced: 2024-10-11T06:01:39.674Z (27 days ago)
- Language: TypeScript
- Size: 2.6 MB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 9
Metadata Files:
- Readme: README.md
- License: LICENSE.md
README
# WebGL Motion Blur
## Requirement: **Motion Blur**

![with-motion-blur](examples/with-motion-blur.png)
*With Motion Blur* (the house is moving, by the way)

![without-motion-blur](examples/without-motion-blur.png)
*Without Motion Blur*

## HINT: how to make the motion blur
The steps to implement motion blur are as follows:
- While rendering the scene to a color render target, we need to render the motion vectors into another render target.
- The motion vector is the vector from the pixel's world-space position in the previous frame to its world-space position in the current frame. Computing it requires the model matrix of both the current and the previous frame. You don't need to compute those matrices yourself; both are already supplied with every object in the scene.
- During post-processing we need to find where the pixel was in the previous frame. To do that, we need the view-projection matrix of both the current and the previous frame, in addition to the motion vectors.
- First, we take the pixel's depth and screen coordinates, use them to reconstruct the pixel's position in Normalized Device Coordinates (NDC), then transform it back to world space using the inverse of the current frame's view-projection matrix.
- Then, we subtract the pixel's motion vector from the pixel's world position to get its world position in the previous frame.
- Finally, we transform the pixel into the Normalized Device Coordinates of the previous frame using the previous frame's view-projection matrix. We can use these coordinates to get the pixel's screen coordinates in the previous frame.
- Using the pixel's screen coordinates in both frames, we can sample the color texture at multiple locations along the line between the two screen positions and compute the average. That average becomes the new color, and we have motion blur.

## Extra Resources
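The reconstruction chain described in the steps above can be sketched in TypeScript. To keep the sketch self-contained, it uses a toy orthographic view-projection (a uniform scale) whose inverse is trivial to write down; a real renderer would use full 4×4 perspective matrices, a perspective divide after the inverse transform, and would run this per-pixel in a GLSL fragment shader. All names below are illustrative, not taken from this repository:

```typescript
type Vec3 = [number, number, number];

// Toy orthographic "view-projection": world -> NDC, here just a uniform
// scale mapping world coordinates in [-10, 10] to NDC [-1, 1].
const viewProj = (p: Vec3): Vec3 => [p[0] / 10, p[1] / 10, p[2] / 10];
const invViewProj = (p: Vec3): Vec3 => [p[0] * 10, p[1] * 10, p[2] * 10];

// NDC <-> screen (pixel) coordinates for an 800x600 render target.
// (A real implementation would also account for the flipped y axis.)
const W = 800, H = 600;
const ndcToScreen = (n: Vec3): [number, number] =>
  [(n[0] * 0.5 + 0.5) * W, (n[1] * 0.5 + 0.5) * H];
const screenToNdc = (x: number, y: number, depth: number): Vec3 =>
  [(x / W) * 2 - 1, (y / H) * 2 - 1, depth];

// Reconstruct the pixel's world position from its screen coordinates
// and depth, via NDC and the inverse view-projection.
function reconstructWorld(x: number, y: number, depth: number): Vec3 {
  return invViewProj(screenToNdc(x, y, depth));
}

// Subtract the motion vector to get the previous-frame world position,
// then project it with the previous frame's view-projection (identical
// to the current one here, i.e. a static camera).
function previousScreenPos(
  x: number, y: number, depth: number, motion: Vec3
): [number, number] {
  const world = reconstructWorld(x, y, depth);
  const prevWorld: Vec3 = [
    world[0] - motion[0], world[1] - motion[1], world[2] - motion[2],
  ];
  return ndcToScreen(viewProj(prevWorld));
}

// Sample positions along the line between the current and previous
// screen positions; averaging the color texture at these points
// produces the blur.
function motionBlurSamples(
  cur: [number, number], prev: [number, number], n: number
): [number, number][] {
  const pts: [number, number][] = [];
  for (let i = 0; i < n; i++) {
    const t = i / (n - 1);
    pts.push([cur[0] + (prev[0] - cur[0]) * t,
              cur[1] + (prev[1] - cur[1]) * t]);
  }
  return pts;
}

// A pixel at screen (400, 300) (the world origin) on a surface that
// moved +2 world units in x since the previous frame:
const prev = previousScreenPos(400, 300, 0, [2, 0, 0]);
console.log(prev); // previous position lies left of (400, 300)
```

With a static camera, the previous-frame screen position ends up offset opposite to the motion, so sampling along that line smears the moving object into a blur streak.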
* [Mozilla WebGL Reference and Tutorial](https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API), which targets WebGL1, but much of the presented material is still valid for WebGL2.
* [WebGL2 Fundamentals](https://webgl2fundamentals.org/)
* [Khronos WebGL2 Reference Guide](https://www.khronos.org/files/webgl20-reference-guide.pdf)
* [Mozilla WebGL2 API Documentation](https://developer.mozilla.org/en-US/docs/Web/API/WebGL2RenderingContext)
* [Mouse Picking with Ray Casting](http://antongerdelan.net/opengl/raycasting.html) by Anton Gerdelan.
* [WebGL2 Samples](https://github.com/WebGLSamples/WebGL2Samples)
* [GLSL Reference](https://www.khronos.org/opengles/sdk/docs/manglsl/docbook4/)