
Filtering Algorithms 14/11

Types of filters

  • Image filtering algorithms determine how pixel values are resampled as an image is transformed and processed. This article dissects the various image filters at our disposal in Nuke and explores their effects on images.

Resize Type:

Sinc4 – Lots of sharpening, often too much.

Lanczos6 – Moderate amount of sharpening over 6 pixels. Good for scaling down.

Lanczos4 – Small amount of sharpening over 4 pixels. Good for scaling down.

Rifman – Moderate smoothing and high sharpening. Typically too hard on the sharpening in many situations.

Simon – Some smoothing and moderate sharpening. Excellent choice for many situations.

Keys – Some smoothing plus minor sharpening, decent choice for general transformations.

Anisotropic – High quality filter. Performs well with high angle surfaces. Only available in 3D nodes.

Cubic – Nuke default filter. Pixels receive some smoothing, leading to predictable results. Often too smooth.

Mitchell – Moderate smoothing, low sharpening, with a slight blur. Changes pixel values even with no movement.

Notch – High amounts of flat smoothing. Good for hiding buzzing or moiré patterns. Changes pixel values even with no movement.

Parzen – Lots of smoothing. Changes pixel values even with no movement.
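To make the smoothing/sharpening trade-offs above concrete, here is a plain-Python sketch (independent of Nuke) of two classic reconstruction kernel families: the Mitchell–Netravali cubic family, whose B and C parameters cover a smooth B-spline cubic, Mitchell, and the Keys (Catmull–Rom) filter, and the windowed-sinc Lanczos kernel, where the support width corresponds to the 4 and 6 in Lanczos4/Lanczos6. The exact coefficients Nuke uses internally are not documented here, so treat these as illustrative: a negative lobe in the kernel is what produces sharpening.

```python
import math

def mitchell_netravali(x, B=1/3, C=1/3):
    """Cubic filter family: B=1, C=0 is a smooth B-spline cubic,
    B=1/3, C=1/3 is Mitchell, B=0, C=0.5 is Keys (Catmull-Rom)."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3
                + (-18 + 12*B + 6*C) * x**2
                + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C) * x**3
                + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x
                + (8*B + 24*C)) / 6
    return 0.0

def lanczos(x, a=3):
    """Windowed sinc: a=2 gives 4-pixel support (Lanczos4-style),
    a=3 gives 6-pixel support (Lanczos6-style)."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)
```

Evaluating `mitchell_netravali(1.5, B=0, C=0.5)` gives a negative value (Keys sharpens), while the smooth cubic (`B=1, C=0`) stays non-negative there, which is why it never over-sharpens but often looks too soft.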

Filtering workflow with and without motion blur

Concatenation

Concatenation is the ability of adjacent tools within the same family to combine their math into a single calculation. Performing one filter operation instead of several is crucial: it preserves the maximum amount of detail possible.

Examples

Wrong way

Here we have a pixel that we transform by 0.5 pixels in X and Y, then use a Grade node to change its values, and then apply another Transform moving it back by -0.5 pixels in X and Y. We lose quality, because the Grade node in between breaks the link/concatenation: each Transform filters the pixel separately.

Right way

Here we have a pixel that we transform by 0.5 pixels in X and Y, followed immediately by another Transform moving it back by -0.5 pixels in X and Y, and only then use a Grade node to change its values. We do not lose quality, because the concatenation is unbroken: the Grade is applied after the combined transform calculation. Placing the Grade before both Transforms works too.
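The two graphs above can be sketched numerically. The toy code below (plain Python, with linear interpolation standing in for Nuke's filters) translates a one-pixel "image" by +0.5 and then by -0.5, comparing two separate filter passes against the single concatenated (net-zero) move. Filtering twice smears the pixel; the combined calculation leaves it intact.

```python
import math

def translate(img, dx):
    """Resample a 1D image shifted by dx pixels (linear interpolation)."""
    out = []
    for i in range(len(img)):
        pos = i - dx                    # source position for this output pixel
        i0 = math.floor(pos)
        t = pos - i0
        a = img[i0] if 0 <= i0 < len(img) else 0.0
        b = img[i0 + 1] if 0 <= i0 + 1 < len(img) else 0.0
        out.append((1 - t) * a + t * b)
    return out

impulse = [0.0, 0.0, 1.0, 0.0, 0.0]

# Broken concatenation: two separate filter passes (+0.5, then -0.5)
broken = translate(translate(impulse, 0.5), -0.5)

# Concatenated: one filter pass with the combined offset (0.5 - 0.5 = 0)
concatenated = translate(impulse, 0.0)
```

`broken` comes out as `[0.0, 0.25, 0.5, 0.25, 0.0]` — the pixel has been smeared across its neighbours — while `concatenated` returns the original impulse untouched.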

BBOX

The bounding box defines the area of the frame that Nuke sees as having valid image data. The larger the bounding box is, the longer it takes Nuke to process and render the image. To minimize processing and rendering times, you can crop the bounding box. Occasionally, the bounding box may also be too small, in which case you need to expand it.
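Why a tight bounding box saves work can be shown with a toy example (plain Python, not Nuke's actual bbox machinery): find the region of a frame that contains non-zero data and crop to it, shrinking the number of pixels every downstream node has to touch.

```python
def bbox(img):
    """Bounding box (x0, y0, x1, y1), upper bounds exclusive, of non-zero pixels."""
    rows = [y for y, row in enumerate(img) if any(row)]
    cols = [x for x in range(len(img[0])) if any(row[x] for row in img)]
    if not rows:
        return (0, 0, 0, 0)
    return (min(cols), min(rows), max(cols) + 1, max(rows) + 1)

def crop_to_bbox(img):
    """Keep only the region that contains valid image data."""
    x0, y0, x1, y1 = bbox(img)
    return [row[x0:x1] for row in img[y0:y1]]

frame = [
    [0, 0, 0, 0, 0],
    [0, 1, 2, 0, 0],
    [0, 3, 4, 0, 0],
    [0, 0, 0, 0, 0],
]
cropped = crop_to_bbox(frame)   # 20 pixels of frame reduced to 4 of payload
```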

Motion Blur

In Nuke, motion blur can be applied to enhance the realism of moving elements within a scene. It’s achieved by calculating the movement of objects between frames and then blurring them accordingly. This helps to create a more natural and visually appealing representation of motion, especially when working with animations or scenes involving fast-moving subjects.
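The "calculating the movement of objects between frames" part is commonly implemented by sampling the object's position at several sub-frame times across the shutter interval and averaging the results. A minimal 1D sketch (plain Python with linear interpolation; the sample count stands in for a transform's motion-blur samples setting):

```python
import math

def translate(img, dx):
    """Resample a 1D image shifted by dx pixels (linear interpolation)."""
    out = []
    for i in range(len(img)):
        pos = i - dx
        i0 = math.floor(pos)
        t = pos - i0
        a = img[i0] if 0 <= i0 < len(img) else 0.0
        b = img[i0 + 1] if 0 <= i0 + 1 < len(img) else 0.0
        out.append((1 - t) * a + t * b)
    return out

def motion_blur(img, distance, samples):
    """Average the image at evenly spaced positions along the move."""
    acc = [0.0] * len(img)
    for k in range(samples):
        dx = distance * k / (samples - 1)   # sub-frame position across the shutter
        shifted = translate(img, dx)
        acc = [a + s / samples for a, s in zip(acc, shifted)]
    return acc

impulse = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
blurred = motion_blur(impulse, 2.0, 5)
```

The point's energy is smeared along its 2-pixel path (the total stays at 1.0), which is exactly the streak you see on a fast-moving element.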

Proper workflow when using motion blur in Nuke

2D and 3D camera projections

In Nuke, 2D camera projection is a technique used to integrate 2D elements, such as images or graphics, into a 3D scene. This process involves taking a flat, 2D image and mapping it onto a 3D surface as if it’s being viewed from a specific camera perspective.

2D camera projection is commonly used in visual effects and motion graphics to add elements like signs, labels, or textures to scenes that were not present in the original footage but need to look as if they belong in the 3D environment.
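At the heart of camera projection is the pinhole model: a 3D point maps to the image plane by dividing by its distance from the camera. A hedged sketch of that mapping (camera at the origin looking down -Z; `focal` is an assumed focal length in image-plane units, not a Nuke API call):

```python
def project(point, focal=50.0):
    """Project a 3D camera-space point onto the 2D image plane (pinhole model)."""
    x, y, z = point
    if z >= 0:
        raise ValueError("point is behind the camera (camera looks down -Z)")
    depth = -z
    return (focal * x / depth, focal * y / depth)
```

A projection setup effectively runs this mapping for every point on the receiving 3D geometry: the point's projected 2D position decides which pixel of the flat image gets stuck onto it, which is why the result only looks correct from (or near) the projecting camera's position.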

Camera projection workflows

Defocus

The Defocus node in Nuke is used to simulate the blurring effect that occurs when a camera is out of focus. When a camera lens is not focused perfectly on a subject, objects at different distances from the focal plane appear blurred in the captured image. The Defocus node allows you to replicate this effect in post-production, giving you control over the amount and nature of the blur applied.
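What distinguishes a lens defocus from an ordinary blur is the kernel shape: a defocused point spreads into a roughly uniform disc (the bokeh) rather than a soft Gaussian falloff. A 1D sketch of that flat kernel (plain Python; `radius` stands in for a defocus size control, and this is only the idea, not the node's actual implementation):

```python
def defocus_1d(img, radius):
    """Convolve with a flat ('disc') kernel: every tap gets equal weight."""
    width = 2 * radius + 1
    out = []
    for i in range(len(img)):
        acc = 0.0
        for k in range(-radius, radius + 1):
            j = i + k
            if 0 <= j < len(img):
                acc += img[j]        # zero outside the frame
        out.append(acc / width)
    return out

point = [0.0] * 9
point[4] = 1.0                       # a single bright highlight
bokeh = defocus_1d(point, 2)         # spreads into a uniform 5-pixel plateau
```

The highlight becomes a flat plateau of equal values, which is why defocused highlights read as hard-edged discs instead of soft glows.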

Depth of field

  • The ZDefocus node blurs the image according to a depth map channel. This allows you to simulate depth-of-field (DOF) blurring.
  • In order to defocus the image, ZDefocus splits the image up into layers, each of which is assigned the same depth value everywhere and processed with a single blur size. After ZDefocus has processed all the layers, it blends them together from the back to the front of the image, with each new layer going over the top of the previous ones. This allows it to preserve the ordering of objects in the image.
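The layered approach described above can be sketched in miniature: split the image into depth layers, blur each layer's color together with a coverage mask using a size derived from its depth, then composite the layers back over each other from far to near. Plain Python, 1D, with a box blur standing in for the real defocus kernel:

```python
def box_blur(sig, radius):
    """Simple zero-padded box blur; radius 0 returns the signal unchanged."""
    if radius == 0:
        return list(sig)
    w = 2 * radius + 1
    return [sum(sig[j] for j in range(max(0, i - radius),
                                      min(len(sig), i + radius + 1))) / w
            for i in range(len(sig))]

def zdefocus_1d(color, depth, blur_size):
    """blur_size(d) maps a depth value to a blur radius for that layer."""
    out = [0.0] * len(color)
    for d in sorted(set(depth), reverse=True):           # back to front
        mask = [1.0 if dd == d else 0.0 for dd in depth]
        layer = [c * m for c, m in zip(color, mask)]
        r = blur_size(d)
        bc, ba = box_blur(layer, r), box_blur(mask, r)   # blur color and coverage
        out = [c + (1.0 - a) * o for c, a, o in zip(bc, ba, out)]  # "over"
    return out
```

With all blur radii at zero this reduces to the original image; with a blurred far layer behind a sharp near one, the in-focus foreground still fully occludes the defocused background spill, which is the ordering-preservation property the layering exists to provide.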
