Framebuffers are a key feature of WebGL when it comes to creating advanced graphical effects such as depth of field, bloom, film grain or various types of anti-aliasing, and have already been covered in depth here on Codrops. They allow us to "post-process" our scenes, applying different effects to them once they have been rendered. But how exactly do they work?

By default, WebGL (and also Three.js and all other libraries built on top of it) renders to the default framebuffer, which is the device screen. If you have used Three.js or any other WebGL framework before, you know that you create your mesh with the correct geometry and material, render it, and voilà, it's visible on your screen.

However, as developers, we can create new framebuffers besides the default one and explicitly instruct WebGL to render to them. By doing so, we render our scenes to image buffers in the video card's memory instead of the device screen. Afterwards, we can treat these image buffers like regular textures and apply filters and effects before they are eventually rendered to the device screen.
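
To give a sense of what happens under the hood, here is a minimal sketch in raw WebGL (not threejs; the variable names and the 512×512 size are purely illustrative) of creating such an off-screen framebuffer backed by a texture. THREE.WebGLRenderTarget, which we will use later in this article, wraps this kind of setup for us:

// Assuming "gl" is an existing WebGLRenderingContext
// Create the texture that will store the rendered image
const targetTexture = gl.createTexture()
gl.bindTexture(gl.TEXTURE_2D, targetTexture)
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 512, 512, 0, gl.RGBA, gl.UNSIGNED_BYTE, null)
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)

// Create a framebuffer and attach the texture as its color output
const framebuffer = gl.createFramebuffer()
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer)
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, targetTexture, 0)

// While "framebuffer" is bound, draw calls render into targetTexture
// Binding null switches back to the default framebuffer (the device screen)
gl.bindFramebuffer(gl.FRAMEBUFFER, null)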

Here is a video breakdown of the post-processing and effects in Metal Gear Solid 5: Phantom Pain that really brings the idea home. Notice how it starts with footage from the actual game rendered to the default framebuffer (device screen) and then breaks down how each framebuffer looks. All of these framebuffers are composited together on each frame, and the result is the final picture you see when playing the game:

So with the theory out of the way, let's create a cool typographic persistence effect by rendering to a framebuffer!

Our skeleton app

Let's render some 2D text to the default framebuffer, i.e. the device screen, using threejs. Here is our boilerplate:

const LABEL_TEXT = 'ABC'

const clock = new THREE.Clock()
const scene = new THREE.Scene()

// Create a threejs renderer:
// 1. Size it correctly
// 2. Set default background color
// 3. Append it to the page
const renderer = new THREE.WebGLRenderer()
renderer.setClearColor(0x222222)
renderer.setClearAlpha(0)
renderer.setSize(innerWidth, innerHeight)
renderer.setPixelRatio(devicePixelRatio || 1)
document.body.appendChild(renderer.domElement)

// Create an orthographic camera that covers the entire screen
// 1. Position it correctly in the positive Z dimension
// 2. Orient it towards the scene center
const orthoCamera = new THREE.OrthographicCamera(
  -innerWidth / 2,
  innerWidth / 2,
  innerHeight / 2,
  -innerHeight / 2,
  0.1,
  10,
)
orthoCamera.position.set(0, 0, 1)
orthoCamera.lookAt(new THREE.Vector3(0, 0, 0))

// Create a plane geometry that spans either the entire
// viewport width or height, whichever is smaller
const labelMeshSize = innerWidth > innerHeight ? innerHeight : innerWidth
const labelGeometry = new THREE.PlaneBufferGeometry(
  labelMeshSize,
  labelMeshSize
)

// Programmatically create a texture that will hold the text
let labelTextureCanvas
{
  // Canvas and corresponding context2d to be used for
  // drawing the text
  labelTextureCanvas = document.createElement('canvas')
  const labelTextureCtx = labelTextureCanvas.getContext('2d')

  // Dynamic texture size based on the device capabilities
  const textureSize = Math.min(renderer.capabilities.maxTextureSize, 2048)
  const relativeFontSize = 20
  // Size our text canvas
  labelTextureCanvas.width = textureSize
  labelTextureCanvas.height = textureSize
  labelTextureCtx.textAlign = 'center'
  labelTextureCtx.textBaseline = 'middle'

  // Dynamic font size based on the texture size
  // (based on the device capabilities)
  labelTextureCtx.font = `${relativeFontSize}px Helvetica`
  const textWidth = labelTextureCtx.measureText(LABEL_TEXT).width
  const widthDelta = labelTextureCanvas.width / textWidth
  const fontSize = relativeFontSize * widthDelta
  labelTextureCtx.font = `${fontSize}px Helvetica`
  labelTextureCtx.fillStyle = 'white'
  labelTextureCtx.fillText(LABEL_TEXT, labelTextureCanvas.width / 2, labelTextureCanvas.height / 2)
}
// Create a material with our programmatically created text
// texture as input
const labelMaterial = new THREE.MeshBasicMaterial({
  map: new THREE.CanvasTexture(labelTextureCanvas),
  transparent: true,
})

// Create a plane mesh, add it to the scene
const labelMesh = new THREE.Mesh(labelGeometry, labelMaterial)
scene.add(labelMesh)

// Start our animation render loop
renderer.setAnimationLoop(onAnimLoop)

function onAnimLoop() {
  // On each new frame, render the scene to the default framebuffer 
  // (device screen)
  renderer.render(scene, orthoCamera)
}

This code simply initialises a threejs scene, adds a 2D plane with a text texture to it, and renders it to the default framebuffer (device screen). If we run it with threejs included in our project, we get this:

See the Pen "Step 1: Render to the default framebuffer" by Georgi Nikoloff (@gbnikolov) on CodePen.

Again, since we do not explicitly specify otherwise, we render to the default framebuffer (device screen).
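
In other words, the device screen is simply the target threejs falls back to when we don't tell it otherwise. A minimal sketch of making that default explicit would be (as we will also see further down, a null render target corresponds to the default framebuffer):

// Explicitly target the default framebuffer (the device screen)...
renderer.setRenderTarget(null)
// ...and render the scene to it, exactly as before
renderer.render(scene, orthoCamera)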

Now that we are able to render our scene to the device screen, let's add a framebuffer (THREE.WebGLRenderTarget) and render it to a texture in the video card's memory.

Rendering to a framebuffer

Let’s start by creating a new frame buffer when we initialize our application:

const clock = new THREE.Clock()
const scene = new THREE.Scene()

// Create a new framebuffer we will use to render to
// the video card memory
const renderBufferA = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

// ... rest of application

Now that we have created it, we must explicitly instruct threejs to render to it instead of the default framebuffer, i.e. the device screen. We do this in our animation loop:

function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)
  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
}

And here is the result:

See the Pen "Step 2: Render to a framebuffer" by Georgi Nikoloff (@gbnikolov) on CodePen.

As you can see, we now get a blank screen, yet there are no errors in our program – so what happened? Well, we are no longer rendering to the device screen, but to another framebuffer! Our scene is rendered to a texture in the video card's memory, which is why we see a blank screen.
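
If you want to convince yourself that the scene really did end up in renderBufferA, one way to check (purely a debugging sketch, not part of the demo) is to read a pixel back from the render target with threejs' readRenderTargetPixels:

// RGBA values of a single pixel
const pixelBuffer = new Uint8Array(4)
// Read the pixel at the center of renderBufferA
renderer.readRenderTargetPixels(
  renderBufferA,
  Math.floor(renderBufferA.width / 2),
  Math.floor(renderBufferA.height / 2),
  1,
  1,
  pixelBuffer
)
// Logs the RGBA values at that point – white where the text glyphs
// were drawn, the clear color elsewhere
console.log(pixelBuffer)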

In order to display this generated texture holding our scene back to the default framebuffer (device screen), we need to create another 2D plane that will cover the entire screen of our app and pass the texture as material input to it.

First, we will create the fullscreen plane that covers the entire device screen:

// ... rest of initialisation step

// Create a second scene that will hold our fullscreen plane
const postFXScene = new THREE.Scene()

// Create a plane geometry that covers the entire screen
const postFXGeometry = new THREE.PlaneBufferGeometry(innerWidth, innerHeight)

// Create a plane material that expects a sampler texture input
// We will pass our generated framebuffer texture to it
const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
  },
  // vertex shader will be in charge of positioning our plane correctly
  vertexShader: `
      varying vec2 v_uv;

      void main () {
        // Set the correct position of each plane vertex
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);

        // Pass in the correct UVs to the fragment shader
        v_uv = uv;
      }
    `,
  fragmentShader: `
      // Declare our texture input as a "sampler" variable
      uniform sampler2D sampler;

      // Consume the correct UVs from the vertex shader to use
      // when displaying the generated texture
      varying vec2 v_uv;

      void main () {
        // Sample the correct color from the generated texture
        vec4 inputColor = texture2D(sampler, v_uv);
        // Set the correct color of each pixel that makes up the plane
        gl_FragColor = inputColor;
      }
    `
})
const postFXMesh = new THREE.Mesh(postFXGeometry, postFXMaterial)
postFXScene.add(postFXMesh)

// ... animation loop code here, same as before

As you can see, we are creating a new scene that will hold our fullscreen plane. After creating it, we need to augment our animation loop to draw the texture generated in the previous step onto the fullscreen plane on our screen:

function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)

  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
  
  // 👇
  // Set the device screen as the framebuffer to render to
  // In WebGL, framebuffer "null" corresponds to the default 
  // framebuffer!
  renderer.setRenderTarget(null)

  // 👇
  // Assign the generated texture to the sampler variable used
  // in the postFXMesh that covers the device screen
  postFXMesh.material.uniforms.sampler.value = renderBufferA.texture

  // 👇
  // Render the postFX mesh to the default framebuffer
  renderer.render(postFXScene, orthoCamera)
}

Once these snippets are in place, we can see our scene rendered on the screen once again:

See the Pen "Step 3: Display the generated framebuffer on the device screen" by Georgi Nikoloff (@gbnikolov) on CodePen.

Let's recap the steps needed to produce this image on the screen on each render loop:

  1. Create the renderTargetA framebuffer that will allow us to render to a separate texture in the user's device video memory
  2. Create the "ABC" plane mesh
  3. Render the "ABC" plane mesh to renderTargetA instead of the device screen
  4. Create a separate fullscreen plane mesh that expects a texture as an input to its material
  5. Render the fullscreen plane mesh back to the default framebuffer (device screen), using the texture created by rendering the "ABC" mesh to renderTargetA

Achieving a persistence effect with two frame buffers

We don't have much use for framebuffers if we simply display them on the device screen as-is, as we do right now. Now that we have our setup working, let's actually do some cool post-processing.

First, we want to create a new framebuffer – renderTargetB – and make sure both it and renderTargetA are let variables, rather than consts. That's because we will swap them at the end of each render, so that we can achieve framebuffer ping-ponging.

"Ping-ponging" in WebGL is a technique that alternates the use of a framebuffer as either input or output. It is a neat trick that allows for general-purpose GPU computations and is used in effects such as Gaussian blur, where in order to blur our scene we have to:

  1. Render it to framebuffer A using a 2D plane and apply a horizontal blur via the fragment shader
  2. Render the horizontally blurred result from step 1 to framebuffer B and apply a vertical blur via the fragment shader
  3. Swap framebuffer A and framebuffer B
  4. Keep repeating steps 1 to 3 and applying blur until the desired Gaussian blur radius is achieved (a minimal sketch of a single blur pass is shown right after this list)
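
For illustration, a single horizontal pass of such a blur could look roughly like this in a fragment shader (a simplified sketch with a hard-coded 5-tap binomial kernel; sampler and v_uv play the same role as in our postFXMaterial, while texelSize is an extra uniform you would need to pass in yourself):

uniform sampler2D sampler;
// Size of one texel, e.g. vec2(1.0 / textureWidth, 1.0 / textureHeight)
uniform vec2 texelSize;
varying vec2 v_uv;

void main () {
  // Weights of a small binomial (Gaussian-like) kernel: 1 4 6 4 1, divided by 16
  vec4 color = texture2D(sampler, v_uv) * 0.375;
  color += texture2D(sampler, v_uv + vec2(texelSize.x, 0.0)) * 0.25;
  color += texture2D(sampler, v_uv - vec2(texelSize.x, 0.0)) * 0.25;
  color += texture2D(sampler, v_uv + vec2(texelSize.x * 2.0, 0.0)) * 0.0625;
  color += texture2D(sampler, v_uv - vec2(texelSize.x * 2.0, 0.0)) * 0.0625;
  gl_FragColor = color;
}

The vertical pass is identical, except the offsets are applied along the y axis instead.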

Here is a small diagram that illustrates the steps needed to achieve ping-ponging:

So with this technique in mind, we will render the contents of renderTargetA into renderTargetB using the postFXMesh we created, and apply some special effect via the fragment shader.

Let's kick things off by creating renderTargetB:

let renderBufferA = new THREE.WebGLRenderTarget(
  // ...
)
// Create a second framebuffer
let renderBufferB = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)

Next up, let's augment our animation loop to actually do the ping-pong technique:

function onAnimLoop() {
  // 👇
  // Do not clear the contents of the canvas on each render
  // In order to achieve our ping-pong effect, we must draw
  // the new frame on top of the previous one!
  renderer.autoClearColor = false

  // 👇
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)

  // 👇
  // Render the postFXScene to renderBufferA.
  // This will contain our ping-pong accumulated texture
  renderer.render(postFXScene, orthoCamera)

  // 👇
  // Render the original scene containing ABC again on top
  renderer.render(scene, orthoCamera)
  
  // Same as before
  // ...
  // ...
  
  // 👇
  // Ping-pong our framebuffers by swapping them
  // at the end of each frame render
  const temp = renderBufferA
  renderBufferA = renderBufferB
  renderBufferB = temp
}
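
As a small aside, if you prefer a terser form, the three-line swap at the end of the loop could equally be written with ES2015 array destructuring:

// Equivalent one-line swap (the leading semicolon guards against
// automatic semicolon insertion issues in semicolon-less code)
;[renderBufferA, renderBufferB] = [renderBufferB, renderBufferA]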

If we are to render our scene with these updated snippets, we won't see any visual difference, even though we do in fact alternate between the two framebuffers. That's because, as it stands right now, we do not apply any special effects in the fragment shader of our postFXMesh.

Let's change our fragment shader like so:

// Sample the correct color from the generated texture
// 👇
// Notice how we now apply a slight 0.005 offset to our UVs when
// looking up the correct texture color

vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));
// Set the correct color of each pixel that makes up the plane
// 👇
// We fade out the color from the previous step to 97.5% of
// whatever it was before
gl_FragColor = vec4(inputColor * 0.975);

Once these changes are in effect, here’s an updated program:

See the Pen "Step 4: Create a second framebuffer and ping-pong between them" by Georgi Nikoloff (@gbnikolov) on CodePen.

Let's break down one frame render of our updated example:

  1. We render the contents of renderTargetB to renderTargetA
  2. We render our "ABC" text to renderTargetA, compositing it on top of the renderTargetB contents drawn in step 1 (we do not clear the contents of the canvas on new renders, because we have set renderer.autoClearColor = false)
  3. We pass the generated renderTargetA texture to postFXMesh, apply a small offset of vec2(0.005) to its UVs when sampling from the texture, and fade it out a bit by multiplying the result by 0.975
  4. We render postFXMesh to the device screen
  5. We swap renderTargetA with renderTargetB (ping-ponging)

For each new frame render, steps 1 to 5 are repeated. This way, the render target we rendered to on the previous frame is used as the input to the current render, and so on. You can see this effect visually in the last demo – notice how, as the ping-ponging progresses, more and more offset is applied to the UVs and more and more the opacity fades out.
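
To put a rough number on how quickly the trail dissolves: since each frame multiplies the previous contents by 0.975, after n frames a pixel retains 0.975^n of its original brightness. Assuming a 60fps display, that is roughly 0.975^60 ≈ 0.22 after one second, i.e. about 22% – which is why the trail fades away over the course of a second or two.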

Adding simplex noise and mouse interaction

Now that we have the ping-pong technique working correctly, we can get creative and expand on it.

Instead of simply applying a static offset in our fragment shader as before:

vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));

Let's actually use simplex noise to get a more interesting visual result. We will also control its direction with our mouse position.

Here is our updated fragment shader:

// Pass in elapsed time since start of our program
uniform float time;

// Pass in normalised mouse position
// (-1 to 1 horizontally and vertically)
uniform vec2 mousePos;

// <Insert snoise function definition from the link above here>

// Calculate different offsets for x and y by using the UVs
// and different time offsets to the snoise method
float a = snoise(vec3(v_uv * 1.0, time * 0.1)) * 0.0032;
float b = snoise(vec3(v_uv * 1.0, time * 0.1 + 100.0)) * 0.0032;

// Add the snoise offset multiplied by the normalised mouse position
// to the UVs
vec4 inputColor = texture2D(sampler, v_uv + vec2(a, b) + mousePos * 0.005);

We also need to specify mousePos and time as inputs to our postFXMesh material shader:

const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
    time: { value: 0 },
    mousePos: { value: new THREE.Vector2(0, 0) }
  },
  // ...
})
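
Note that we created a THREE.Clock at the very top of our app but haven't used it so far. For the noise pattern to actually animate, the time uniform presumably has to advance on every frame, along these lines (a sketch; the exact spot inside your loop may vary):

function onAnimLoop() {
  // ... framebuffer setup and renders, same as before

  // Advance the shader's time uniform with the elapsed time from our clock,
  // so the simplex noise field evolves from frame to frame
  postFXMesh.material.uniforms.time.value = clock.getElapsedTime()

  // ... rest of the loop
}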

Finally, let's make sure we attach a mousemove event listener to our page and pass the updated normalised mouse coordinates from JavaScript to our GLSL fragment shader:

// ... initialisation step

// Attach mousemove event listener
document.addEventListener('mousemove', onMouseMove)

function onMouseMove (e) {
  // Normalise horizontal mouse pos from -1 to 1
  const x = (e.pageX / innerWidth) * 2 - 1

  // Normalise vertical mouse pos from -1 to 1
  const y = (1 - e.pageY / innerHeight) * 2 - 1

  // Pass normalised mouse coordinates to fragment shader
  postFXMesh.material.uniforms.mousePos.value.set(x, y)
}

// ... animation loop
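
To make the mapping concrete with some made-up numbers: in a 1000×800 pixel window, a cursor at pageX = 250, pageY = 200 becomes x = (250 / 1000) * 2 - 1 = -0.5 and y = (1 - 200 / 800) * 2 - 1 = 0.5. The vertical axis is deliberately flipped, because page coordinates grow downwards while the UV space we offset in the shader grows upwards.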

Once these changes are in place, here is our outcome. Be sure to hover your mouse around it (you may have to wait a while for everything to load):

See the Pen "Step 5: Simplex noise and mouse interaction" by Georgi Nikoloff (@gbnikolov) on CodePen.

Conclusion

Framebuffers are a powerful tool in WebGL that allows us to greatly enhance our scenes via post-processing and achieve all kinds of cool effects. Some techniques require the use of more than one framebuffer, as we saw, and it is up to us as developers to mix and match them however we need to achieve the desired visuals.

I encourage you to experiment with the provided examples: try to render more elements, change the color of the "ABC" text between the renderTargetA and renderTargetB swaps to mix different colors, and so on.

In the first demo, you can see a concrete example of how this typography effect could be used, and the second demo is a playground for you to try different settings (just open the controls in the top right corner).

Further reading:

Scroll to a step with a smooth WebGL Shader transform
