Category: Nuke
CopyCat in Nuke 16/04
CopyCat is a machine-learning node introduced in Nuke 13.0. We train it on a set of example frames supplied at intervals across a sequence, and it learns to generate matching results for the in-between frames. The more training data we feed it, the better the training becomes and the more accurate the output.
Example:
We can use CopyCat for plate clean-up by training the node on a handful of intermediate cleaned-up frames and letting the machine learning create the in-between frames.




- Create the CopyCat node and set the data directory.
- CopyCat expects its input in linear colorspace.
- Adjust the epochs/training steps to suit the shot; think of epochs like sampling quality in 3D rendering or motion blur samples.
- More epochs mean longer training, and the machine may be unusable while the calculations run.
- In current versions of CopyCat the training can be paused to check intermediate results.
- In the advanced settings, a large model size can take much longer to train, so use these settings carefully.
- Previous training data can be reused as a checkpoint to aid new training (a minimal scripting sketch follows this list).
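A minimal Python sketch of setting up a CopyCat node from a script. The knob names used here (dataDirectory, epochs, checkpointFile) and the paths are assumptions and should be checked against the node's actual properties in your Nuke version.

```python
import nuke

copycat = nuke.createNode('CopyCat')
# knob names below are assumptions -- confirm them in the node's properties panel
copycat['dataDirectory'].setValue('/jobs/shot010/copycat/training')   # where training results are written
copycat['epochs'].setValue(20000)                                     # more epochs = longer training
# optionally resume from an earlier training run as a checkpoint
# copycat['checkpointFile'].setValue('/jobs/shot010/copycat/previous/Training.cat')
```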


- The same principle can be applied to many other tasks, such as roto.




Keying in Nuke – 13/2
- In Nuke, keying is the process of isolating specific elements, typically from a video or image, based on their color or luminance values. This is often done to remove backgrounds, separate objects, or create matte elements for compositing.
HSV
HSV Color Scale: The HSV (Hue, Saturation, Value) scale provides a numerical readout of an image that corresponds to its colour components. Hue is measured in degrees from 0 to 360. When an image is converted to HSV in Nuke, the channels map as follows (a small conversion sketch follows):
R = Hue: the colour itself.
G = Saturation: the amount of white light mixed with the hue.
B = Value: the perceived brightness of the colour.
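A small sketch of converting an image to HSV in Nuke so hue, saturation, and value land in the R/G/B channels. This assumes the standard Colorspace node; the file path is hypothetical.

```python
import nuke

read = nuke.nodes.Read(file='/path/to/plate.####.exr')   # hypothetical path

to_hsv = nuke.nodes.Colorspace()
to_hsv.setInput(0, read)
to_hsv['colorspace_out'].setValue('HSV')    # R = hue, G = saturation, B = value

# ...hue-based corrections or keys go here...

to_rgb = nuke.nodes.Colorspace()
to_rgb.setInput(0, to_hsv)
to_rgb['colorspace_in'].setValue('HSV')     # convert back to RGB afterwards
```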
Hue corrections











Luminance Key
A luminance key creates a mask from the luminance values of an image.
The same approach can be used to key individual channels such as R, G, or B (a small sketch follows).
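As a rough illustration, Nuke's Keyer node can pull a luminance matte. The 'operation' and 'range' knob names and value order are from memory and worth verifying in the node's properties.

```python
import nuke

key = nuke.createNode('Keyer')
key['operation'].setValue('luminance key')        # other modes key individual channels, saturation, etc.
# four range handles controlling the low/high thresholds and their softness (order worth verifying)
key['range'].setValue([0.1, 0.2, 0.9, 1.0])
```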



A luma key can also be used in various ways, for example by experimenting with different colour spaces.
Here is an example of keying the red channel in various colour spaces.



Hue / Luma separator
We can separate the hue from an image by dividing the image's colour by its luminance, using a Merge node set to divide.



Color Difference Key
We can isolate individual colours by subtracting one channel from another, for example with a Merge node set to minus (an expression sketch follows).
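A hedged sketch of the classic green-screen colour-difference matte built with an Expression node: green minus the maximum of red and blue. The exact formula varies per shot, and the fourth channel row should be set to rgba.alpha if it is not already.

```python
import nuke

diff = nuke.nodes.Expression()
# rows usually map to r/g/b/a by default -- verify the fourth row targets rgba.alpha
diff['expr3'].setValue('g - max(r, b)')   # bright where green dominates, dark elsewhere
```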










Adding Channels
We can create separate channels for specific needs. For example, we can create a roto shape, write it into a new channel, and then use that channel as the mask input of a Grade or Blur node. These channels are separate from the alpha, so they are only visible under their assigned names (a minimal scripting sketch follows).
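A minimal sketch of registering a custom layer and writing a roto shape into it. The layer and channel names are illustrative, and the downstream node's mask channel is then picked from its properties panel.

```python
import nuke

# register a new layer called 'cleanupMask' containing a single channel
nuke.Layer('cleanupMask', ['cleanupMask.a'])

roto = nuke.nodes.Roto()
roto['output'].setValue('cleanupMask.a')   # the shape renders into the new channel, not the alpha
# a downstream Grade/Blur can then select 'cleanupMask.a' as its mask channel
```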




ChromaKeyer
ChromaKeyer is a basic keying node useful for quick colour keys; it is not accurate around hair or other fine, fast-moving detail.
Image Based Keyer – IBK
IBK stands for Image Based Keyer. It operates with a subtractive (difference) methodology and is one of the best keyers in Nuke for getting detail out of fine hair and severely motion-blurred edges.
IBKColour: used to build a clean plate of the background behind the subject.
IBKGizmo: uses the clean plate produced by IBKColour to help key the subject, which is especially effective for hair.




Keylight
Keylight is a strong all-round keyer and also handles colour de-spill.



Primatte
Primatte is what is called a 3D keyer: it uses an algorithm that places the image's colours into a 3D colour space and builds a 3D geometric shape to select colours from that space.

Ultimatte
The advantage is that you can get phenomenal keys that pull really fine detail, shadows, and transparency from the same image.


Green De-spill
It’s the process of removing the green spill scattered around the subject, which occurs mainly because of the green screen.
We first key the green (for example with Keylight), then use a difference operation to extract only the spilled green from the image, desaturate it, and add it back to the image (a hedged expression sketch follows).
The same principle can be used for blue screens.
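A common green de-spill expression, shown as a hedged sketch in an Expression node: wherever green exceeds the average of red and blue, it is clamped down to that average. The threshold formula is a starting point, not a rule.

```python
import nuke

despill = nuke.nodes.Expression()
# second row usually maps to the green channel by default -- verify in the properties panel
despill['expr1'].setValue('g > (r + b) / 2 ? (r + b) / 2 : g')   # limit excess green
```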






De-spill map
We can create a de-spill map by subtracting one channel from another with a Merge node set to minus. Here we use green minus red, which gives an alpha that can be used to drive the de-spill.




Edge extend
Keyed footage sometimes shows black halos/edges. To eliminate them we use an edge extend, which works by extruding the colour values around the edge of the alpha; those extended values can then be used to reduce the black halo.







Fixing uneven green screen
An irregular green screen creates problems down the pipeline, especially during keying, so we need to even out the green screen in comp.













Merge keys then comp over
Production Scenario
Modelling Hot Air Balloon
- Modeling a hot air balloon in Maya involves creating a 3D representation of the balloon’s shape and structure. Here’s how I model a hot air balloon in Maya:
Step 1: Reference Images
Gather reference images of hot air balloons from different angles. Use these images to guide your modeling process.


Step 2: Create the Balloon Shape
- Create a Sphere:
- Go to the “Create” menu.
- Choose “Polygon Primitives” > “Sphere.”
- Click and drag on the grid to create a sphere.
- Adjust the sphere’s size using the manipulator or the attribute editor.
- Shape Adjustment:
- Enter “Vertex” mode (right-click on the sphere and choose “Vertex”).
- Adjust the shape of the sphere to match the reference images. Scale and move vertices as needed (a rough maya.cmds sketch follows).
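A rough maya.cmds sketch of the starting point for the balloon envelope. The vertex range is hypothetical and depends on the sphere's actual topology; in practice the rows are picked in the viewport.

```python
import maya.cmds as cmds

# start from a polygon sphere as the balloon envelope
balloon = cmds.polySphere(name='balloon_geo', radius=5,
                          subdivisionsX=24, subdivisionsY=24)[0]

# taper the lower rows of vertices toward the basket opening
cmds.select('{}.vtx[0:47]'.format(balloon))   # hypothetical range -- select the bottom rows
cmds.scale(0.3, 1.0, 0.3, relative=True)
```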


Step 3: Add Details
- Balloon Opening:
- Select the top vertices of the sphere.
- Scale them down to create the opening of the balloon.
- Rope Attachments:
- Model small cylinders or tubes for rope attachments.
- Position them at the bottom of the balloon and scale as necessary.







Step 4: UV Mapping
- Unwrap the UVs:
- Go to the “UV Editing” workspace.
- Select the balloon object.
- Choose “Create UVs” > “Automatic Mapping” or use “Unfold” tools to unwrap the UVs.
- Adjust UVs:
- Arrange UVs in the UV editor to ensure proper texture mapping.


Step 5: Materials and Textures
- Create Materials:
- Open the Hypershade editor.
- Create a new Lambert or Blinn material for the balloon.
- Assign Textures:
- Apply textures to the material if desired.
- Use image textures for patterns, colors, or details on the balloon.




Compositing in Nuke
- Before we begin working on the footage, we check it for things like camera movements and objects in the foreground and background. This helps us plan how we’re going to tackle the shot.
Import Footage:
- Open Nuke and create a new project.
- Import the footage you want to rotoscope by using the ‘Read’ node.


- Create a Roto Node:
- Right-click in the Node Graph and select “Draw” > “Roto.”
- Connect the Roto node to the footage node.
- Rotoscope the First Frame:
- Go to the first frame of your footage.
- Use the Roto node to draw a shape around the object you want to rotoscope.
- Make sure the shape encloses the entire object you want to isolate.
- Keyframes:
- Move a few frames forward in the timeline.
- Adjust the shape of the Roto node to match the object’s movement.
- Press ‘A’ on your keyboard to set a keyframe for the current frame.


- Refine the Rotoscope:
- Continue moving forward frame by frame, adjusting the shape of the Roto node as needed.
- Track the Rotoscope:
- In Nuke, you can automate this with a Tracker node: track the area, then link the tracking data to the roto shape's transform so the shape follows the motion.
- Adjust tracking settings, such as search area and correlation, to achieve accurate tracking.
- Fine-tune the results by manually adjusting keyframes if necessary.


- Output:
- Connect the Roto node to the desired downstream nodes for further compositing.
- You can use the roto shape as a mask for other nodes or apply color correction, effects, etc.
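A hedged sketch of the node graph described above, built with Nuke's Python API; the file path and the choice of a Grade as the downstream node are illustrative.

```python
import nuke

read = nuke.nodes.Read(file='/path/to/plate.####.exr')   # hypothetical path

roto = nuke.nodes.Roto()
roto.setInput(0, read)

# use the roto alpha as the mask of a downstream colour correction
grade = nuke.nodes.Grade()
grade.setInput(0, read)
grade.setInput(1, roto)   # input 1 of Grade is its mask input
```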



Mood of the Balloon Festival
- I have chosen to place the balloon festival within a romantic setting, where the color pink takes on a significant role in shaping the overall color palette.
First Iteration
- In the initial pass, I experimented by incorporating additional elements such as foreground fog, neon text, a background sky video, glitter, and birds in flight.









- I animated certain image planes of the balloons to preview my approach for the final renders. Additionally, I utilized previous tracking data to synchronize balloon movements with the camera motion.
- To enhance the ambiance in line with my chosen setting, I applied a pinkish color grade to the overall scene.



Second Iteration
- At this point, I’ve completed the modeling and rendering of the balloon in Maya, making it ready for use in Nuke compositing. I seamlessly replaced the still images with my balloon renders, addressing any roto mismatches and incorporating general corrections suggested by my mentor.


- Additionally, I introduced hanging lights to add more depth and a festive atmosphere to the overall scene.
- The footage obtained from the internet featured blinking hanging lights, but it seemed too simplistic to me. Therefore, I opted to elevate it by eroding the alpha channel, allowing only subtle visibility of the lights. I increased the exposure to achieve higher pixel values, creating the illusion of a hot filament inside each light bulb. Finally, I applied an exponential glow and merged it with the original lights to enhance the overall appearance.
- Lastly, I performed an overall color correction to give the lights a more orange hue.





Third / Final Iteration
- For the final version, I primarily focused on implementing corrections provided by my mentor.
- At first, I decreased the speed of the moving balloon in the background and adjusted its scale as it approached from the distant background.
- In addition, I enhanced the balloon’s depth and appeal by adding a flame effect. This effect illuminates the insides of the balloon using a noise pattern and roto mask in Nuke.







- For the next part, I modified the text, as I was dissatisfied with its appearance. I changed both the font and the colour for a more aesthetically pleasing result.
- To make the text move, I used a sine-function expression on the vertical (y-axis) translation (a hedged example follows this list).
- To make the text more exciting, I introduced a glitter effect using hearts as the bokeh shape. I achieved this by eroding the alpha of the image to reveal only certain areas, applying a noise pattern as a mask, and using a Convolve node with a roto heart shape to produce the heart bokeh.
- I used the same effect for the hanging lights as well.
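The vertical bobbing can be driven by a small expression on a Transform node's translate.y; the numbers here are illustrative.

```python
import nuke

text_move = nuke.nodes.Transform()
# oscillate 20 pixels up and down, one full cycle roughly every 63 frames
text_move['translate'].setExpression('sin(frame / 10) * 20', 1)   # index 1 = the y component
```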









- Finally, I merged all the elements together.
Nuke’s 3D System 28/11
- Nuke primarily operates as a 2D compositing software, but it does have some 3D capabilities. The 3D system in Nuke allows you to work with three-dimensional elements within a 2D compositing environment.
- 3D Space:
- Camera Nodes: Nuke supports the use of virtual cameras, allowing you to create a 3D space and move the camera within it. This is useful for matching the movement of live-action footage or creating parallax effects.


- Geometry and Objects:
- Card Nodes: You can use card nodes to represent flat or simple 3D objects within the 3D space. These cards can be textured with images or sequences, allowing you to integrate 2D and 3D elements seamlessly.
- ScanlineRender Node: This node is used to render 3D scenes within Nuke, taking into account lighting, shadows, and reflections.
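A minimal sketch of wiring a basic 3D scene in Nuke's Python API; node class names such as Camera2 and Card2 may differ slightly between Nuke versions.

```python
import nuke

cam = nuke.nodes.Camera2()
card = nuke.nodes.Card2()

scene = nuke.nodes.Scene()
scene.setInput(0, card)

render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)   # obj/scene input
render.setInput(2, cam)     # camera input
# input 0 is the background; the output is a rendered 2D image of the 3D scene
```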




- 3D Rendering:
- Nuke’s 3D system provides basic rendering capabilities for simple scenes. It supports features like ambient occlusion, shadows, and reflections.
- Shading and Lighting:
- Nuke includes nodes for basic shading and lighting, allowing you to control the appearance of 3D objects in your composition.
- Scene Integration:
- You can integrate 3D elements into live-action footage, matching the camera movement for a more realistic composite.
- Expression Linking:
- You can use expressions to link 2D and 3D properties, allowing for dynamic relationships between elements in different dimensions.


- Nuke can be customised in many ways through its preferences; for example, the 3D navigation can be changed to emulate other 3D packages.
- We can customise the Nuke UI and save the layout under a name as a workspace, so our preferred workspace can be loaded whenever Nuke opens.
- We can also create ToolSets to save time.
- Saved ToolSets, workspaces, and preferences are stored in the user's .nuke folder.





Nuke Camera
Nuke supports the use of virtual cameras, allowing you to create a 3D space and move the camera within it. This is useful for matching the movement of live-action footage or creating parallax effects.
- Create a Camera Node:
- In the Node Graph, press Tab to open the node creation panel.
- Type “Camera” and select the “Camera” node.
- Import Camera Data:
- If you have camera tracking data from external software (e.g., PFTrack, SynthEyes), import it through the Camera node's "read from file" option (.fbx/.abc) or as a .chan file.
- Adjust Camera Settings:
- Open the Camera node properties by double-clicking on it.
- Set the film back, focal length, and other parameters to match the real camera used during filming.
- Create 3D Objects: Use Card nodes or other geometry nodes to represent objects in the 3D space.
- Connect them to the ScanlineRender node for rendering.
- Animate the Camera:
- Keyframe the camera’s translation, rotation, and focal length to match the movement in the live-action footage.
- You can use keyframes or expressions to link camera properties to tracking data.
- Camera Projection:
- Use the Project3D node to project 2D images onto 3D geometry based on the camera's perspective.
Scanline Render
The ScanlineRender node in Nuke is used for rendering 3D scenes within the compositing environment. It simulates a simplified rendering process, taking into account the lighting, shading, and textures of 3D objects in a scene.
Node Properties:
- Render Settings:
- In the ScanlineRender node properties, you can find settings for rendering quality, anti-aliasing, and other parameters.
- Shading Model:
- Choose the shading model (e.g., Lambert, Phong) that best suits your scene and desired look.
- Background:
- Specify the background color or connect another image node to the “Background” input for a more complex background.
- Outputs:
- The ScanlineRender node typically has outputs for the rendered image, depth information, and other auxiliary data.
Lens Distortion
Lens distortion refers to the imperfections introduced by camera lenses that can cause straight lines to appear curved or distorted. In visual effects and compositing, correcting lens distortion is crucial for seamlessly integrating elements into live-action footage. Nuke provides tools to analyze and correct lens distortion.
- Understanding Lens Distortion:
- Radial Distortion: Causes straight lines to curve, more pronounced at the frame edges.
- Tangential Distortion: Shifts the image along the horizontal and vertical axes.
- LensDistortion Node:
- Analysis: Use the LensDistortion node to estimate distortion parameters from a grid pattern.
- Correction: Apply obtained parameters for distortion correction.
- Undistort and Distort Nodes:
- Undistort: Use the Undistort node to remove lens distortion.
- Distort: The Distort node reintroduces lens distortion, e.g., for 3D integration.
- LensDistortion Model:
- Model Options: Choose a lens distortion model (e.g., “Nuke,” “Brown,” “Houdini”).
- Parameters (K1, K2, P1, P2): Define distortion correction amount and type.
- Fine-Tuning:
- Grid Warp: Manually adjust correction with a grid warp in the LensDistortion node.
- LensDistortionCorrect Node: Use for advanced correction with extra controls.
- Animation:
- Keyframe Parameters: Adjust distortion parameters for changing distortion over time.
- Checkerboard Patterns:
- Calibration Aid: Filming a checkerboard pattern aids in accurate distortion analysis.






STmap
An STMap in Nuke is an image whose red and green channels store texture (S and T) coordinates for every pixel, describing where each output pixel should sample from. Because it can encode an arbitrary warp, it is commonly used to store lens distortion and apply it to (or remove it from) other images, including distortion that varies across the frame and changes over time.
- Understanding STMaps:
- Distortion as coordinates: spatial distortion (such as lens imperfections) is baked into the coordinate map, and distortion that evolves over time can be stored as an animated sequence of maps.
- LensDistortion Node and STmap:
- LensDistortion Node: In Nuke, the LensDistortion node analyzes footage and generates an STmap representing spatial and temporal distortions.
- STmap Output: The LensDistortion node produces an STmap encapsulating distortions in the footage.
- Usage of STmap:
- LensDistortionCorrection: The LensDistortionCorrection node uses the STmap to undistort or redistort images.
- Creation of STmap:
- Calibration Grid: Use a grid during shooting for generating an STmap, providing reference points for distortion analysis.
- Analysis: The LensDistortion node analyzes the grid to create the corresponding STmap.
- Application to Animation:
- Changing Distortion Over Time: For evolving lens distortion, animate distortion parameters or use a sequence of STmaps.
- Manual Adjustments:
- GridWarp and STMap: the GridWarp node, combined with an STMap, allows manual adjustments, which is helpful when automatic analysis falls short (a small application sketch follows).
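A small sketch of applying an STMap in Nuke. The 'uv' knob name is from memory, the paths are hypothetical, and the input wiring is easiest to confirm visually from the node's input labels.

```python
import nuke

plate = nuke.nodes.Read(file='/path/to/plate.####.exr')     # hypothetical path
coord_map = nuke.nodes.Read(file='/path/to/stmap.exr')      # hypothetical path

stmap = nuke.createNode('STMap')
stmap['uv'].setValue('rgb')   # read the S,T coordinates from the map's red/green channels
# connect the plate to the node's 'src' input and the coordinate image to its 'stmap'
# input in the Node Graph (input order is easiest to confirm from the labels)
```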




Cleanup in NUKE 21/11
Roto Paint
- In Nuke, the RotoPaint node is a versatile tool used for both rotoscoping and painting tasks within a compositing workflow. It combines the capabilities of both the Roto and Paint nodes, allowing artists to create complex shapes for rotoscoping and perform detailed paint work directly within the same node.
Painting:
- Brush-Based Painting: The RotoPaint node includes painting tools similar to those found in standalone paint applications. Artists can use brushes to clone, smudge, blur, and paint directly onto the image.
- Frame-by-Frame Painting: It supports frame-by-frame painting, making it possible to create hand-painted elements that evolve over time in a sequence.
- Integration with Rotoscoping: The ability to paint directly on top of roto shapes is valuable. This allows for precise paint work on specific regions of the image, matching the motion and contours defined by the rotoscoping shapes.






Clone and Repair:
- Clone Brush: The RotoPaint node includes a clone brush that allows you to sample pixels from one part of the image and paint them onto another. This is useful for removing unwanted elements or duplicating parts of the image.
- Repair Work: Artists can use the RotoPaint node for repairing and fixing issues in the footage, such as wire removal or blemish cleanup.



Integration with the Nuke Environment:
- Layered Approach: Like other nodes in Nuke, the RotoPaint node works in a layered manner, allowing you to apply multiple instances of the node with different settings for complex compositing tasks.
- Integration with Channels: It can work with multiple input and output channels, giving you control over how the roto and paint information integrates with other elements in the composite.

Smear Tool:
The Smear tool can simulate motion blur by dragging or smearing pixels in the direction of motion. This is handy for matching the motion blur of live-action elements or for adding a sense of movement to painted or roto shapes.


The RotoPaint node is a powerful tool widely used in VFX for tasks like rotoscoping, painting, and image repair.
Grain & Noise
- Grain refers to the visual noise or texture present in an image. Grain is often a result of film or sensor characteristics and can be an important aesthetic element, especially when compositing CG elements into live-action to achieve a more realistic look.
- In digital imaging, especially in the context of sensors and electronic devices, “noise” refers to random variations in brightness or color. This noise can result from factors such as sensor sensitivity, electronic interference, or high ISO settings in low-light conditions.
Denoise
“Denoise” refers to the process of reducing or eliminating digital noise in an image or sequence. Digital noise often appears as unwanted random variations in brightness or color, and it can result from factors like low-light conditions, high ISO settings, or the limitations of digital sensors. The Denoise node in Nuke is a tool specifically designed to address and mitigate this type of noise.


Capturing the grain from the footage
- This process allows us to analyze the existing grain pattern in a clean plate or reference frame and apply it to other elements in our composite
- Select a Clean Plate:
- Choose a frame from your footage where there is no significant action or objects of interest, and the background is relatively uniform. This will be our clean plate, and we’ll capture the grain from this frame.
- Create a Grain Sample:
- Place the clean plate on the timeline.
- Use a Copy node to duplicate the clean plate.
- Add Grain to the Duplicate:
- Apply a Grain or Noise node to the duplicated clean plate.
- Adjust the settings of the grain to match the natural grain in our footage. We may need to tweak parameters such as size, intensity, and seed to match the original grain.
- Difference Operation:
- Use a Dissolve or Difference node to compare the original clean plate with the duplicated one containing added grain. This will help us see the difference and fine-tune the settings.
- Create a Grain Map:
- Use the Difference node’s output as a guide to create a black and white map where the grain is most prominent. We can use nodes like Grain2D, Blur, or ColorCorrect to enhance this map.
- Apply Grain to Other Elements:
- Use the Copy node or a similar method to apply the grain map to other elements in our composite.
- Adjust the blending mode or opacity to control the intensity of the added grain.
- Fine-Tune as Needed:
- Continuously check and adjust the added grain to make sure it matches the original footage; iterate on the settings until the result is realistic and consistent (a hedged node-graph sketch follows this list).
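A hedged sketch of the difference-based idea: isolate the plate's grain by subtracting a denoised version, then add that grain back over the cleaned-up area. The node names are placeholders for nodes that already exist in the script.

```python
import nuke

plate = nuke.toNode('Read1')            # original plate (placeholder name)
denoised = nuke.toNode('Denoise1')      # degrained version of the plate (placeholder name)
cleanup = nuke.toNode('Merge_cleanup')  # comp containing the clean-up patch (placeholder name)

# grain = plate - denoised
grain = nuke.nodes.Merge2(operation='minus')
grain.setInput(0, denoised)   # B input
grain.setInput(1, plate)      # A input

# regrain = cleanup + grain
regrain = nuke.nodes.Merge2(operation='plus')
regrain.setInput(0, cleanup)
regrain.setInput(1, grain)
```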


DasGrain
DasGrain is a regrain tool that analyses the grain in your plate and uses it to regrain an entire degrained comp. It works by:
- Isolating the grain by finding the difference between a plate and its degrained counterpart.
- Analysing the grain response over a range of luminance and adapting the grain to fit the comp based on its luminance.
- Using a mask input to restore the plate's original grain, so you don't have to KeyMix your plate back over the top at the end of the script.
- This method is widely used in the industry for regraining clean-ups in Nuke.
- Clean-up patches need regraining because they lack the noise pattern of the plate; without it, the patched area will not blend seamlessly.
- It is crucial to blend the grain seamlessly with the footage; otherwise, problems will show up further down the pipeline.






2D Tracking
- 2D tracking in Nuke refers to the process of analyzing and following the motion of objects within a two-dimensional space in a video or image sequence. This tracking is essential for tasks such as adding elements to a scene, stabilizing footage, or applying visual effects that need to move with a specific object.
2D Tracking Process
1. Import Footage:
- Open Nuke and import your footage or image sequence.
- Analyse the footage to decide how to approach the shot.
2. Create a Tracker Node:
- In the Node Graph, right-click and select “Tracker” from the menu.
- Connect the Tracker node to your footage.
3. Select Tracking Points:
- Open the Tracker node properties.
- Click “Add Track” to define tracking points on the phone screen.
- Choose high-contrast points that will allow for accurate tracking.
4. Define Tracking Region:
- For each tracking point, draw a tracking region around it.
- These regions should cover the area of the phone screen.
5. Analyze Motion:
- Click “Analyze” in the Tracker node properties.
- Nuke will analyze the motion of the tracking points throughout the sequence.

6. Green De-Spill:
- Use green de-spill to remove the green spill or color contamination that often occurs on edges of the subject when working with green screens.
7. Create a CornerPin Node:
- Add a CornerPin node to the Node Graph.
- Connect the output of the Tracker node to the input of the CornerPin node.
8. Apply Tracking Data to CornerPin:
- Connect the tracking data to the CornerPin’s input parameters.
- This will allow the CornerPin to stabilize the footage based on the tracked motion.
9. Create a Roto Node for the Phone Screen:
- Add a Roto node and draw a shape around the phone screen.
- Animate the shape to account for any movement or rotation that the tracker might have missed.




10. Merge the Replacement Image:
- Import the replacement image for the phone screen.
- Use a Copy or Transform node to size and position it appropriately.
11. Apply CornerPin to Replacement Image:
- Connect the output of the CornerPin node to the replacement image’s input.
- This ensures that the replacement image follows the stabilized motion.
12. Adjust Placement:
- Fine-tune the placement of the replacement image to match the stabilized phone screen.
13. Review and Iterate:
- Scrub through the timeline to review the stabilized phone screen replacement.
- Make manual adjustments as needed for a seamless integration.

14. Negate Stabilization:
- Remove the stabilization using the same tracking data, but this time with a match-move (1-pt) transform.
15. Finalize and Render:
- Once satisfied, render the final composition.


Final Output
My Reflection with this project
I’m happy that everything about this project turned out to be very useful for developing my skills, especially in new software like Unreal Engine. I’m also pleased with how well the main shot of my project, the Moon impact, turned out.
The first part of the project involved coming up with an idea and a story, which proved to be challenging. In fact, I hadn’t decided until 2-3 weeks had passed. Initially, I considered a spaceship emerging and destroying a moon, but it felt too simple and lacked a compelling story. After some contemplation, I recalled a scene from the movie Top Gun where a jet maneuvers to evade a missile. This inspired the idea of a jet evading a missile, leading to the missile inadvertently hitting and destroying the moon.
I began the project by building the environment in Unreal Engine, marking my first experience with the software. It was highly beneficial, allowing me to quickly create basic terrains and populate the area with various assets. To add an element firing the projectile at the jet, I came up with the idea of using artillery. Subsequently, I worked in Maya for modeling and Substance for texturing, learning those packages as well. I wanted to craft a scene where the artillery shoots at the plane, and the plane skillfully maneuvers to evade the projectile, causing it to hit the moon instead.
For the moon destruction, I opted to use Houdini for FX due to my prior experience with the software and my career focus on FX. I drew inspiration from an online lesson at Rebelway, where they demonstrated a similar process. I decided to incorporate it into my project. Starting with a sphere and experimenting with various parameters, I achieved the desired effect. This process allowed me to gain valuable insights into VEX language and how to approach certain elements without resorting to simulation. This knowledge proved extremely useful for making quick changes. In fact, out of the five elements, I only used simulation for two, achieving the rest through SOPs.
Story / Concept
FADE IN:
EXT. DESERT – DAY
- The Space Odyssey theme plays as the camera moves from a ground-up shot of a sand dune, revealing a cannon in the distance.
EXT. REMOTE AREA – EARTH – DAY
- A cannon is set up in a remote area.
INT. CANNON SHOULDER VIEW – DAY
- The camera shows a shoulder view of the cannon, capturing a twinkle of a jet flying high in the sky.
EXT. SKY – DAY
- A fighter jet maneuvers around the sky, seemingly oblivious to the cannon below.
- The cannon takes aim and fires a shot towards the fighter jet.
- CAMERA SLOWLY FOLLOWS the projectile as it travels through the sky.
- The fighter jet executes a cobra move inspired by Top Gun.
- However, the shot misses the fighter jet and instead collides with the moon.
- The projectile slams into the moon, causing a massive explosion.
- The moon is destroyed, leaving only dust and debris behind.
FADE OUT.
The Space Odyssey music comes to an end.
3D Modelling in Maya




Texturing in Substance

















Creating Env In Unreal
- Creating a desert environment in Unreal Engine involves several steps, including terrain creation, asset placement, lighting, and fine-tuning to achieve a realistic and immersive result.
1. Create a New Project:
Start Unreal Engine and create a new project. Choose the template that best fits your project requirements, such as the Third Person or First Person template.
2. Landscape Creation:
- Create a new landscape by going to the Landscape mode.
- Sculpt the landscape to resemble the desert terrain. Use tools like “Sculpt,” “Flatten,” and “Smooth” to shape the landscape according to your vision.





3. Desert Materials:
- Apply desert materials to the landscape. You can either create your own materials or use existing ones from the Unreal Engine Marketplace.
- Consider adding features like sand dunes, rocks, and desert vegetation to enhance the realism.
4. Sky and Atmosphere:
- Adjust the sky and atmospheric settings to match a desert environment. You can use the “Sky Atmosphere” actor to control the overall look of the sky, including the sun position and atmosphere settings.
5. Lighting:
- Configure the lighting to simulate the harsh sunlight of a desert environment. Pay attention to the direction, intensity, and color of the light source.
- Consider using dynamic lighting to create realistic shadows.




6. Asset Placement:
- Populate the environment with assets such as rocks, cacti, tumbleweeds, and other desert-themed objects. You can either create your own assets or use assets from the Unreal Engine Marketplace.
7. Post-Processing:
- Apply post-processing effects to enhance the overall visual appeal. Adjust settings such as bloom, contrast, and color grading to achieve the desired look.



Moon FX in Houdini
- I have decided to use Houdini for FX since I already have some experience working with it. I love the procedural approach, and it allows me to have total control over my FX.
My approach in Houdini
Geometry prep
- I took a standard sphere, applied UVs for future use, and added a temporary texture for reference.
- I marked a point where the impact and all of my FX reference data would be centred.
- I then used that point as the origin and created a falloff where the moon destruction is going to take place.
- I generated custom velocity vectors with VEX, pointing outward from the designated point, and attenuated them with the previously created falloff mask so the velocity reduces with distance (a rough Python SOP sketch of the same idea follows).
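A rough Python SOP equivalent of the VEX idea: outward velocity scaled by a distance falloff. The impact point, falloff distance, and velocity scale are placeholder values.

```python
# runs inside a Python SOP in Houdini
node = hou.pwd()
geo = node.geometry()

impact = hou.Vector3(1.0, 0.0, 0.0)   # hypothetical impact point on the moon's surface
v_attrib = geo.addAttrib(hou.attribType.Point, 'v', (0.0, 0.0, 0.0))

for pt in geo.points():
    offset = pt.position() - impact
    dist = offset.length()
    if dist <= 0.0:
        continue
    falloff = max(0.0, 1.0 - dist / 2.0)         # simple linear falloff over 2 units
    vel = offset.normalized() * falloff * 5.0    # push points outward, fading with distance
    pt.setAttribValue(v_attrib, (vel[0], vel[1], vel[2]))
```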






Creating source for Dust FX
- I utilized the previously created mask to form a ring-like shape. This segmented ring will have custom velocity applied and serve as the source for the Dust FX.




Creating Shock wave with the generated source
- Now, with the source established, I created custom velocity by implementing various wrangles and SOP techniques.
- To achieve this, I duplicated the source using a trail node and utilized an add node along with an ID attribute to connect the points, forming lines that would serve as the source. Using a foreach loop, I generated lines with various lengths.
- To make the shockwave originate from the cracks on the moon's surface, I turned these lines into a mask. Using attribute paint, I applied the mask to the interior points of the fractures, deleted the points that were not marked, and used the resulting point set as the source for the POP sim. Once the particle simulation was done, I fed it into the Pyro solver to achieve the desired shockwave effect.









Creating Tendrils with SOP
- I wanted to give more impact to the destruction, So I added trails/tendrils to represent fast-moving debris coming out from the moon.
- For the tendrils, I used lines with varying pscale values to get different lengths, and a global multiplier to animate pscale, which gives the illusion of the trails growing across frames.
- After that, I added some animated noise to make the trail look turbulent. I also applied a gradient with values ranging from 0 to 1, going from the origin to the end. This data is used to control the width of the trail.
- I converted the result into geometry, then transformed it into a VDB for the dust effect. Additionally, I applied an overall time remap to enhance the impact.







Fracture of surface in SOP
- To fracture the surface of the moon without relying on simulations, I opted for a SOP-based approach, incorporating extensive VEX coding.
- Initially, I prepared the geometry by implementing cuts with a custom object and added interior detail to the inside geometry. Subsequently, I extracted the centroids of all the resulting pieces and applied a mask ramp using a mask attribute created at the beginning; this attribute allowed me to define the desired area of effect on the surface. Lastly, I used clusters to group the pieces into clumps.
- In the final step, I used VEX code to manipulate the intrinsic transformation data, modifying inner transformations. Utilizing the attributes created earlier, I adjusted these data to simulate the effect of an RBD simulation. Custom ramps were also implemented to provide precise control over these attributes.










Secondary Sim (Debris)
- To enhance the impact, I implemented a particle simulation to depict the scattering of debris after the moon was impacted.
- Initially, I took the animated fractured geometry and converted it into points to serve as the source for the particle simulation. Utilizing speed culling, I removed stationary points and then crafted custom velocity, resembling a crown splash effect.
- After preparing the source, I integrated it into the particle simulation (pop sim). Further refinement was achieved by introducing variations in scale and applying random animated rotations to the resulting particles.
- To streamline rendering speed, I opted for a straightforward approach by substituting the particles with a simple Octahedron shape in the final visualization.








Breakdown video
Planar Tracking
- Planar tracking is a technique used for tracking the movement and transformation of flat or planar surfaces within a video or image sequence. Unlike point tracking, which tracks individual points in an image, planar tracking focuses on tracking the entire planar surface, making it particularly useful for tracking objects with consistent textures or patterns.
Here’s how planar tracking can be done in Nuke:
Asset Gather:








- Selection of Planar Surface:
- Identify the planar surface in the footage that you want to track. This surface should have distinguishable features or patterns that the tracking algorithm can follow.
- Adding a Tracker:
- Use a planar tracker (such as the PlanarTracker in NukeX) to add a tracker to the selected planar surface. This involves defining a region on the first frame that indicates the planar area to be tracked.

- Automatic Tracking:
- The planar tracker analyzes the selected features within the defined region and automatically tracks their movement across subsequent frames. This is beneficial for surfaces with consistent textures, making planar tracking robust and reliable.
- Adjustments and Refinement:
- After the automatic tracking, you can review and make manual adjustments if needed. This allows you to refine the tracking by correcting any drift or errors that may have occurred during the automatic tracking process.

- Transformation Data:
- The planar tracker provides transformation data, including position, scale, rotation, and skew, for each frame. This data can then be applied to other elements, such as graphics or effects, to ensure they match the movement and perspective changes of the tracked planar surface.
- Integration into Compositing:
- Once the planar tracking is complete, you can integrate other elements into the scene, and they will follow the tracked planar surface’s movement. This is particularly useful for tasks like adding labels to moving objects or seamlessly integrating CGI elements into live-action footage.


- We may encounter issues with foreground elements, such as the light pole in this image. However, we can easily overcome these challenges by utilizing the same tracking data to apply rotoscoping. The resulting roto alpha can then be used as a clipping mask to bring the pole to the foreground.


Filtering Algorithms 14/11
Types of filters
- Image filtering algorithms are designed to assist in determining the changes that occur in pixel values as images undergo transformation and processing. This article will dissect the various image filters at our disposal and explore their effects on images within the context of Nuke.
Resize Type:
Sinc4 – Lots of sharpening, often too much sharpening
Lanczos6 – Moderate amount of sharpening over 6 pixels. Good for scaling down.
Lanczos4 – Small amount of sharpening over 4 pixels. Good for scaling down.
Rifman – Moderate smoothing and high sharpening. Typically too hard on the sharpening in many situations.
Simon – Some smoothing and moderate sharpening. Excellent choice for many situations.
Keys – Some smoothing plus minor sharpening, decent choice for general transformations.
Anisotropic – High quality filter. Performs well with high angle surfaces. Only available in 3D nodes.
Cubic – Nuke default filter. Pixels receive some smoothing, leading to predictable results. Often too smooth.
Mitchell – Moderate smoothing, low sharpening, with a slight blur. Changes pixel values even with no movement.
Notch – High amounts of flat smoothing. Good for hiding buzzing or moire patterns. Changes pixel values even with no movement
Parzen – Lots of smoothing. Changes pixel values even with no movement.


Concatenation
Concatenation is the ability of a family of transform nodes to combine their maths into a single calculation, so the image is filtered only once. This single calculation, or filter, lets us preserve the maximum amount of detail possible.
Examples
Wrong way
Here we have a pixel and we transform it by 0.5 pixels in X and Y; then we use a Grade node to change the values, and then another Transform to move it back by -0.5 pixels in X and Y. We lose quality, because the link/concatenation between the two Transforms is broken by the Grade sitting between them.



Right way
Here we transform the pixel by 0.5 pixels in X and Y, immediately transform it back by -0.5 pixels in X and Y, and only then use a Grade node to change the values. We do not lose quality, because the concatenation is not broken: the Grade comes after the transform calculations. Using the Grade before both Transforms works too (see the sketch below).
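A sketch of the "right way" graph in Nuke's Python API, to make the node ordering explicit; the translate values mirror the example above.

```python
import nuke

def make_transform(tx, ty):
    t = nuke.nodes.Transform()
    t['translate'].setValue([tx, ty])
    return t

# right way: the two Transforms are adjacent, so Nuke concatenates them into one filter pass
t_fwd = make_transform(0.5, 0.5)
t_back = make_transform(-0.5, -0.5)
t_back.setInput(0, t_fwd)

grade = nuke.nodes.Grade()
grade.setInput(0, t_back)

# wrong way (not built here): Transform -> Grade -> Transform breaks concatenation,
# because the Grade forces the first Transform's filtering to be baked before the second runs
```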


BBOX
The bounding box defines the area of the frame that Nuke sees as having valid image data. The larger the bounding box is, the longer it takes Nuke to process and render the image. To minimize processing and rendering times, you can crop the bounding box. Occasionally, the bounding box may also be too small, in which case you need to expand it.



Motion Blur
In Nuke, motion blur can be applied to enhance the realism of moving elements within a scene. It’s achieved by calculating the movement of objects between frames and then blurring them accordingly. This helps to create a more natural and visually appealing representation of motion, especially when working with animations or scenes involving fast-moving subjects.

2D and 3D camera projections
In Nuke, 2D camera projection is a technique used to integrate 2D elements, such as images or graphics, into a 3D scene. This process involves taking a flat, 2D image and mapping it onto a 3D surface as if it’s being viewed from a specific camera perspective.
2D camera projection is commonly used in visual effects and motion graphics to add elements like signs, labels, or textures to scenes that were not present in the original footage but need to look as if they belong in the 3D environment.

Defocus
The Defocus node in Nuke is used to simulate the blurring effect that occurs when a camera is out of focus. When a camera lens is not focused perfectly on a subject, objects at different distances from the focal plane appear blurred in the captured image. The Defocus node allows you to replicate this effect in post-production, giving you control over the amount and nature of the blur applied.


Depth of field
- The ZDefocus node blurs the image according to a depth map channel. This allows you to simulate depth-of-field (DOF) blurring.
- In order to defocus the image, ZDefocus splits the image up into layers, each of which is assigned the same depth value everywhere and processed with a single blur size. After ZDefocus has processed all the layers, it blends them together from the back to the front of the image, with each new layer going over the top of the previous ones. This allows it to preserve the ordering of objects in the image.
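A hedged setup sketch for ZDefocus. The node class name and the knob names ('channel' for the depth channel, 'math', 'focus_plane', 'size') are from memory and worth confirming in the node's properties; the values are illustrative.

```python
import nuke

zdef = nuke.createNode('ZDefocus2')
# knob names below are assumptions -- check the node's properties panel
zdef['channel'].setValue('depth')    # which channel holds the depth map
zdef['math'].setValue('depth')       # how depth values map to distance
zdef['focus_plane'].setValue(2.5)    # depth value that stays in focus
zdef['size'].setValue(15)            # maximum blur size for the most defocused layers
```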