
Workflow from Shadertoy to React Three Fiber - R3F

Rick
August 7th, 2023 · 5 min read

I have tried to find a working version of this workflow for a long time. It is something I have attempted multiple times but never really nailed.

I have a working version using flood fill techniques, but it didn't use purely R3F components. I finally have something that demonstrates texture feedback and is written entirely in R3F, without the native three.js style mixed in.

A little side note: this idea came from a codepen here. A fellow creative dev was asking in the forums why her ping pong textures weren't working, and someone kindly amended her code.

This opens up a lot of doors for shadertoy shaders in R3F.

The premise, as stated, involves using frame buffer objects, which are essentially just textures/images. Luckily @react-three/drei has a useFBO hook which quite neatly lets you create frame buffer objects.

I hear you asking: what the hell is this?

An FBO (or image, or texture) saves a frame's pixels or data. It can be passed around in useFrame: you render the scene into it, and then use it in uniforms, passing it to shaders.
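As a minimal sketch of that core idea (the hook name useRenderToTexture and its arguments are just illustrative, not part of the sandbox covered later):

import { useFrame } from "@react-three/fiber";
import { useFBO } from "@react-three/drei";

// Renders a given scene into an FBO every frame and hands back its texture.
function useRenderToTexture(offscreenScene, offscreenCamera) {
  const target = useFBO(1024, 1024);

  useFrame(({ gl }) => {
    gl.setRenderTarget(target); // draw into the FBO instead of the screen
    gl.render(offscreenScene, offscreenCamera);
    gl.setRenderTarget(null); // back to the default framebuffer
  });

  // target.texture is a THREE.Texture you can pass to any uniform or map
  return target.texture;
}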

I found this workflow helped me to understand stateless, highly parallel shaders and GPU architecture in a very practical way.

In the same way that compute shaders let you access data about other pixels in the same frame (supported in the upcoming WebGPU, not WebGL2), texture feedback allows you to save a frame's data and pass it to the next frame.

There's always a catch though 😀

The active FBO in a frame cannot be written to and read from in the same frame (this is, at least, my take on it with my limited understanding of the underlying mechanics of raw WebGL).

So what does this mean? It's a big no-no to render/write to a buffer and then read/sample from it via a uniform in the same frame. It creates a bad feedback loop and it won't work. Or at least, every time I tried this I got stuck at this stage (temporarily finding a way around it using a flood fill algorithm in an earlier article).
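Sketched as a useFrame callback, the fix is to keep two render targets and alternate between them. Here fboA, fboB, offScreenScene, offScreenCamera and material are illustrative names assumed to come from the surrounding component:

useFrame(({ gl }) => {
  // write into one target while reading from the other...
  material.uniforms.bufferTexture.value = fboA.texture;
  gl.setRenderTarget(fboB);
  gl.render(offScreenScene, offScreenCamera);
  gl.setRenderTarget(null);

  // ...then swap them, so the next frame reads what we just wrote
  [fboA, fboB] = [fboB, fboA];
});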

As stated earlier, the native three.js version of this is in this codepen. I'm going to run through the basic ideas of the codesandbox above.

Ping Pong Textures - Implementation

Here is the offscreen scene component:

const OffScreenScene = forwardRef(function OffScreenScene(props, ref) {
  const { size } = useThree();

  return (
    <group>
      <mesh ref={ref}>
        <planeGeometry args={[size.width, size.height]} />
        <offScreenMaterial
          bufferTexture={props.map}
          res={new THREE.Vector2(1024, 1024)}
          smokeSource={new THREE.Vector3(0, 0, 0)}
        />
      </mesh>
      <gridHelper />
    </group>
  );
});

and this is where it is used in the main scene's render method:

const [offScreenScene] = useState(() => new THREE.Scene());

render (
  {createPortal(
    <>
      <OffScreenScene ref={offScreen} map={offScreenFBOTexture.texture} />
      <OrthographicCamera
        makeDefault
        position={[0, 0, 2]}
        args={[-1, 1, 1, -1, 1, 1000]}
        aspect={size.width / size.height}
        ref={cameraRef}
      />
    </>,
    offScreenScene
  )}
)

A lot to digest, so let's go through it step by step.

Offscreen rendering, or render targets - this is a very common technique for more advanced rendering and shaders. A potentially more well known demonstration of this workflow is GPU particles.

GPU particles use an offscreen render target and a simulationMaterial, which stores positions or velocities/accelerations in the pixels of an FBO/texture/buffer, and then an orthographic camera facing the offscreen plane straight on to render the offscreen scene. It is also a very common practice in shadertoy shaders to have more than one FBO/texture/buffer.
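As a rough sketch of that particle workflow (positionsFBO, simulationScene, orthoCamera and particlesMaterial are illustrative names, not taken from the sandbox above):

useFrame(({ gl }) => {
  // run the simulation shader offscreen, writing one position per pixel into the FBO
  gl.setRenderTarget(positionsFBO);
  gl.render(simulationScene, orthoCamera);
  gl.setRenderTarget(null);

  // the particle material then reads those positions back as a texture
  particlesMaterial.uniforms.positionsTexture.value = positionsFBO.texture;
});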

And when you think about it, it is quite an ingenious way to store some kind of state or data to be used in another shader or R3F material.

So createPortal is a @react-three/fiber helper function which accepts a component or JSX as the first parameter and a scene as the second parameter. The way to think of this is that anything inside this little enclave (this comes from the docs!) will not be rendered in the main scene - ideal for offscreen rendering.

Why do we need to define this in the parent component, i.e. the onscreen component?

Because we can then pass refs/props down to the R3F components in the offscreen rendered scene. Very useful, eh!

We need to be able to access the materials so we can set the map prop on the onscreen one and the offscreen FBO texture on the other.

<mesh ref={ref}>
  <planeGeometry args={[size.width, size.height]} />
  <offScreenMaterial
    bufferTexture={props.map}
    res={new THREE.Vector2(1024, 1024)}
    smokeSource={new THREE.Vector3(0, 0, 0)}
  />
</mesh>

and then utilize this in the useFrame of the onscreen parent component like so:

const offScreenFBOTexture = useFBO(1024, 1024, {
  minFilter: THREE.LinearFilter,
  magFilter: THREE.NearestFilter
});
const onScreenFBOTexture = useFBO(1024, 1024, {
  minFilter: THREE.LinearFilter,
  magFilter: THREE.NearestFilter
});

const [offScreenScene] = useState(() => new THREE.Scene());
const offScreenCameraRef = useRef(null);

let textureA = offScreenFBOTexture;
let textureB = onScreenFBOTexture;

useFrame(({ gl, camera }) => {
  gl.setRenderTarget(textureB);
  gl.render(offScreenScene, offScreenCameraRef.current);

  //Swap textureA and B
  var t = textureA;
  textureA = textureB;
  textureB = t;

  onScreen.current.material.map = textureB.texture;
  offScreen.current.material.uniforms.bufferTexture.value = textureA.texture;

  gl.setRenderTarget(null);
  gl.render(scene, camera);
});

We have one FBO per scene: offScreenFBOTexture and onScreenFBOTexture.

The offscreen scene is created and handed to createPortal:

const [offScreenScene] = useState(() => new THREE.Scene());

We also need to have access to the offscreen camera when we want to render using the renderer (gl) in R3F’s useFrame hook.

const offScreenCameraRef = useRef(null);

// example
useFrame(({ gl, camera }) => {
  gl.render(scene, camera)
})

This gl isn't limited to this use case; it can be used to render any three.js scene, offscreen or onscreen.

Below shows how we render the offscreen scene into the offscreen render target and then swap, or ping pong, the textures. This avoids the read/write limitation in current WebGL2. The offscreen texture is then set as the map for the onscreen material, and the onscreen texture is used in the offscreen material/shader uniforms.

//Swap textureA and B
var t = textureA;
textureA = textureB;
textureB = t;

onScreen.current.material.map = textureB.texture;
offScreen.current.material.uniforms.bufferTexture.value = textureA.texture;

Last but by no means least, we have to set the render target back to null and render the main scene again, otherwise we would only be rendering the offscreen scene and not the onscreen one.

gl.setRenderTarget(null);
gl.render(scene, camera);

Offscreen Orthographic Camera

How do we know what camera to use for the offscreen render?

This came off Stack Overflow and I'll try to explain it from what I know.

A normal camera has perspective, which can make it difficult to deal with depth and calculate it correctly. It would not work 100% if the camera were facing the offscreen mesh at a 45 degree angle.

It needs to be front on and without perspective!

This is where the orthographic camera comes into play. It faces the offscreen plane straight on, without perspective, and we can control how much of the offscreen mesh/texture is rendered by playing with the near/far and left/right/bottom/top arguments of the camera.
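For example, to frame a 1024 x 1024 plane exactly, the frustum just needs to match the plane's extents (a sketch with illustrative numbers, not the values used in the sandbox above):

<OrthographicCamera
  makeDefault
  position={[0, 0, 2]} // front on, looking down the z axis at the plane
  // left, right, top, bottom, near, far - covers x and y from -512 to 512
  args={[-512, 512, 512, -512, 0.1, 1000]}
/>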

Interactivity?

const onPointerMove = useCallback((e) => {
  const { uv } = e;

  offScreen.current.material.uniforms.smokeSource.value.x = uv.x;
  offScreen.current.material.uniforms.smokeSource.value.y = uv.y;
}, []);

const onMouseUp = useCallback((event) => {
  offScreen.current.material.uniforms.smokeSource.value.z = 0.0;
}, []);
const onMouseDown = useCallback((event) => {
  offScreen.current.material.uniforms.smokeSource.value.z = 0.1;
}, []);

And the onscreen mesh:

<mesh
  ref={onScreen}
  onPointerMove={onPointerMove}
  onPointerDown={onMouseDown}
  onPointerUp={onMouseUp}
>
  <planeGeometry args={[20, 20]} />
  <meshBasicMaterial side={THREE.DoubleSide} map={onScreenFBOTexture.texture} />
</mesh>

Thanks to the people who maintain R3F/three.js, we have useful events on the JSX components, powered under the hood by a raycaster fired from the mouse position into the scene.

The event fired when a ray hits the mesh has loads of info, so I encourage you to look around the event object and play around with the data.
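A quick way to have a look is to log a few of the fields in a throwaway handler, for example:

const onPointerMove = (e) => {
  console.log(e.uv); // where the ray hit in the mesh's uv space (0 - 1)
  console.log(e.point); // the same hit in world space coordinates
  console.log(e.distance); // how far the hit is from the ray origin
  console.log(e.object); // the mesh that was actually hit
};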

I have used uvs because, in these circumstances, I find it easier to work with the difference between uvs multiplied by the screen resolution than to get my head around coordinate systems, which still somewhat confuse me if I'm being honest.

The onMouseUp and onMouseDown handlers just track when the mesh is clicked and released, giving us the perfect switch between the active effect and the non-active effect.

Shaders

const OffScreenMaterial = shaderMaterial(
  {
    bufferTexture: null,
    res: new THREE.Vector2(0, 0),
    smokeSource: new THREE.Vector3(0, 0, 0)
  },
  // vertex shader
  /*glsl*/ `
  varying vec2 vUv;
  void main () {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
  `,
  // fragment shader
  /*glsl*/ `
  uniform vec2 res; //The width and height of our screen
  uniform sampler2D bufferTexture; //Our input texture
  uniform vec3 smokeSource; //The x,y are the position. The z is the power/density
  varying vec2 vUv;

  void main() {
    vec2 pixel = vUv;
    gl_FragColor = texture2D(bufferTexture, gl_FragCoord.xy / res.xy);

    //Get the distance of the current pixel from the smoke source
    float dist = distance(pixel * res.xy, smokeSource.xy * res.xy);

    gl_FragColor.rgb += smokeSource.z * max(10.0 - dist, 0.0);

    //Smoke diffuse
    vec4 rightColor = texture2D(bufferTexture, vec2(pixel.x + 1.0 / res.x, pixel.y));
    vec4 leftColor = texture2D(bufferTexture, vec2(pixel.x - 1.0 / res.x, pixel.y));
    vec4 upColor = texture2D(bufferTexture, vec2(pixel.x, pixel.y + 1.0 / res.y));
    vec4 downColor = texture2D(bufferTexture, vec2(pixel.x, pixel.y - 1.0 / res.y));

    //Diffuse equation
    float factor = 8.0 * 0.016 * (leftColor.r + rightColor.r + downColor.r * 3.0 + upColor.r - 6.0 * gl_FragColor.r);

    //Account for low precision of texels
    float minimum = 0.000003;
    if (factor >= -minimum && factor < 0.0) factor = -minimum;

    gl_FragColor.rgb += factor;
  }`
);

extend({ OffScreenMaterial });

Whoever came up with this shader is pretty clever! We are finding the distance between the uv of the clicked position and the uv passed from the vertex shader (the current fragment), both multiplied by the screen width and height.

Then, as the distance gets bigger, we sort of invert it and use the z coordinate set by onMouseDown and onMouseUp as a multiplier. So as dist gets bigger, the value we multiply z by gets smaller, and once it would go negative we just zero it with the max function.

float dist = distance(
  // current uvs (0 - 1.0 range) multiplied by the screen resolution,
  // however big the screen is in pixels
  vec2(0.5, 0.5) * vec2(1024.0, 768.0),
  // clicked position / smokeSource uvs multiplied by the resolution again
  vec2(0.12, 0.89) * vec2(1024.0, 768.0)
);

When we click, the z will be 0.1 and:

gl_FragColor.rgb += smokeSource.z * max(10.0 - dist, 0.0);

The brightness of a color increases as the rgb channels of a vec3 approach 1.0, i.e. vec3(1.0, 1.0, 1.0) is white; as the rgb channels go from 0.0 to 1.0 the color goes from black to white (although values are not limited to this range).

This makes sense now: when we click, z is 0.1, which adds brightness and produces white, and when not clicked z is 0, and 0 multiplied by anything is 0 - which leaves things black.
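Plugging a couple of numbers in (illustrative values, with a click giving z = 0.1):

0.1 * Math.max(10.0 - 2.0, 0.0); // a pixel 2px from the click gets 0.8 added to its rgb
0.1 * Math.max(10.0 - 25.0, 0.0); // a pixel 25px away gets 0.0, so it is untouched
0.0 * Math.max(10.0 - 2.0, 0.0); // not clicked: z is 0, so nothing is added anywhere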

The algorithm which produces the smoke effect:

//Smoke diffuse
vec4 rightColor = texture2D(bufferTexture, vec2(pixel.x + 1.0 / res.x, pixel.y));
vec4 leftColor = texture2D(bufferTexture, vec2(pixel.x - 1.0 / res.x, pixel.y));
vec4 upColor = texture2D(bufferTexture, vec2(pixel.x, pixel.y + 1.0 / res.y));
vec4 downColor = texture2D(bufferTexture, vec2(pixel.x, pixel.y - 1.0 / res.y));

//Diffuse equation
float factor = 8.0 * 0.016 * (leftColor.r + rightColor.r + downColor.r * 3.0 + upColor.r - 6.0 * gl_FragColor.r);

//Account for low precision of texels
float minimum = 0.000003;
if (factor >= -minimum && factor < 0.0) factor = -minimum;

gl_FragColor.rgb += factor;

I don't fully understand it, and to be honest that isn't really the purpose of this article - the purpose being to show you how to utilize ping pong textures and a workflow from shadertoy to R3F.

Until next time! I'm going to use this to port a shader in the near future!
