
Lighting, Materials and Rendering in Maya 27/11

  • In Maya with the Arnold renderer, lighting, materials, and render passes are fundamental components of the rendering process, contributing to the creation of visually compelling and realistic images.
Lighting in Maya Arnold:
  • Arnold Lights:
    • Arnold supports various light types, including point lights, spotlights, area lights, and distant lights.
    • Lights contribute to the illumination of the scene, influencing the appearance of surfaces and shadows.
  • Light Parameters:
    • Lights have parameters that control intensity, color, falloff, and other characteristics that impact how they interact with the scene.
  • HDR Lighting:
    • Arnold supports High Dynamic Range (HDR) images as light sources, allowing for realistic and complex lighting scenarios (a small scripted sketch follows this list).
  • Physical Sky and Sun:
    • Maya Arnold includes a physical sky and sun system for simulating realistic outdoor lighting conditions.
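
As a rough illustration of the lights described above, here is a minimal maya.cmds sketch that creates a skydome light driven by an HDR and a simple area light. It assumes the MtoA plug-in is available; the HDR path is a placeholder.

```python
# Minimal sketch: create Arnold lights with maya.cmds (assumes the MtoA plug-in is available)
import maya.cmds as cmds

cmds.loadPlugin('mtoa', quiet=True)  # make sure Arnold is loaded

# Skydome light driven by an HDR image (file path is a placeholder)
sky = cmds.shadingNode('aiSkyDomeLight', asLight=True)
sky_shape = (cmds.listRelatives(sky, shapes=True) or [sky])[0]
hdr = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(hdr + '.fileTextureName', 'path/to/environment.hdr', type='string')
cmds.connectAttr(hdr + '.outColor', sky_shape + '.color', force=True)

# A simple area light with the usual intensity/exposure controls
area = cmds.shadingNode('aiAreaLight', asLight=True)
area_shape = (cmds.listRelatives(area, shapes=True) or [area])[0]
cmds.setAttr(area_shape + '.intensity', 1.0)
cmds.setAttr(area_shape + '.exposure', 6.0)
```
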
Materials in Arnold:
  • Arnold Standard Surface Shader:
    • The Arnold Standard Surface shader is a versatile material shader that supports a wide range of realistic surface properties.
    • It includes controls for base color, specular reflection, roughness, and other parameters; a short scripted example follows this list.
  • Texture Mapping:
    • Maya Arnold allows you to apply texture maps to materials, enhancing the realism of surfaces by incorporating details like color, bump, and specular maps.
  • Material Library:
    • Arnold provides a material library with pre-built shaders and textures, making it easier to create realistic materials.
  • Subsurface Scattering (SSS):
    • Arnold supports subsurface scattering, allowing you to simulate the way light penetrates and scatters beneath the surface of translucent materials.
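
A minimal sketch of building and assigning the shader described above with maya.cmds; the object name pSphere1 and the attribute values are placeholders, and MtoA is assumed to be loaded.

```python
# Minimal sketch: build an aiStandardSurface material and assign it (object name is hypothetical)
import maya.cmds as cmds

shader = cmds.shadingNode('aiStandardSurface', asShader=True, name='moonRock_mat')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name=shader + 'SG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader', force=True)

# Basic look controls mentioned above: base colour, specular roughness, subsurface
cmds.setAttr(shader + '.baseColor', 0.5, 0.45, 0.4, type='double3')
cmds.setAttr(shader + '.specularRoughness', 0.6)
cmds.setAttr(shader + '.subsurface', 0.1)

# Assign to a mesh (replace 'pSphere1' with your object)
cmds.sets('pSphere1', edit=True, forceElement=sg)
```
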
Render Passes in Arnold:
  • AOVs (Arbitrary Output Variables):
    • Arnold allows you to render additional passes beyond the beauty pass, known as Arbitrary Output Variables (AOVs).
    • Common AOVs include diffuse, specular, reflection, and ambient occlusion passes; a short scripted example follows this list.
  • Compositing Workflow:
    • Render passes enable a more flexible compositing workflow. They allow artists to adjust and enhance specific aspects of the image in post-production.
  • Denoising Passes:
    • Arnold provides denoising passes that can be used in compositing to reduce noise in the final image.
  • Cryptomatte:
    • Cryptomatte is a popular AOV that simplifies object selection in post-production by generating ID mattes automatically.
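
A small sketch of adding AOVs through MtoA's Python interface; it assumes the mtoa.aovs module layout of current MtoA releases, so treat it as a starting point rather than a definitive recipe.

```python
# Minimal sketch: add a few common AOVs with the MtoA Python API
# (assumes Arnold/MtoA is loaded; module layout may differ between versions)
import maya.cmds as cmds
import mtoa.aovs as aovs

cmds.loadPlugin('mtoa', quiet=True)
interface = aovs.AOVInterface()
for name in ('diffuse', 'specular', 'crypto_object'):
    interface.addAOV(name)
```
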
Camera Placement:
  • Camera Settings:
    • Set up your camera with the desired composition, focal length, and depth of field.
    • Adjust camera settings in the Attribute Editor.
Arnold Render Settings:
  • Open Render Settings:
    • Navigate to the Arnold Render Settings in the Render Settings window.
  • Common Tab:
    • Set the image size, aspect ratio, and frame range in the Common tab.
  • Arnold Renderer Tab:
    • Make sure “Arnold Renderer” is selected in the Render Using dropdown at the top of the Render Settings window.
    • Adjust settings such as Sampling, Ray Depth, and AOVs; a short scripted example follows this list.
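
A minimal scripted equivalent of the settings above, assuming MtoA is loaded; the attribute and sample values are placeholders.

```python
# Minimal sketch: set common Arnold render settings via maya.cmds
# (createOptions() builds defaultArnoldRenderOptions if it does not exist yet)
import maya.cmds as cmds
import mtoa.core as core

cmds.setAttr('defaultRenderGlobals.currentRenderer', 'arnold', type='string')
core.createOptions()

# Common tab: image size
cmds.setAttr('defaultResolution.width', 1920)
cmds.setAttr('defaultResolution.height', 1080)

# Arnold Renderer tab: sampling and ray depth
cmds.setAttr('defaultArnoldRenderOptions.AASamples', 5)
cmds.setAttr('defaultArnoldRenderOptions.GIDiffuseSamples', 2)
cmds.setAttr('defaultArnoldRenderOptions.GIDiffuseDepth', 2)
```
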
Render Preview:
  • Render View:
    • Use the Render View window to preview your scene’s rendering without saving an image.
Render the Scene:
  • Render Button:
    • Start the render with Render > Render Current Frame, or the render button in the Render View.
  • Watch Progress:
    • Monitor the rendering progress in the Rendering menu or the Script Editor.
Save Rendered Image:
  • Image Format:
    • Choose an image format (e.g., JPEG, PNG, EXR) for the final rendered image.
  • Save Image:
    • Save the rendered image to your desired location.
Post-Processing:
  • Compositing Software:
    • Import the rendered image into compositing software (e.g., Nuke, Adobe After Effects) for further adjustments if needed.
AOVs comp in Nuke
Final Image

Nuke’s 3D System 28/11

  • Nuke primarily operates as a 2D compositing software, but it does have some 3D capabilities. The 3D system in Nuke allows you to work with three-dimensional elements within a 2D compositing environment.
  • 3D Space:
    • Camera Nodes: Nuke supports the use of virtual cameras, allowing you to create a 3D space and move the camera within it. This is useful for matching the movement of live-action footage or creating parallax effects.
  • Geometry and Objects:
    • Card Nodes: You can use card nodes to represent flat or simple 3D objects within the 3D space. These cards can be textured with images or sequences, allowing you to integrate 2D and 3D elements seamlessly.
    • ScanlineRender Node: This node renders 3D scenes within Nuke, taking into account lighting, shadows, and reflections; a short scripted example of a basic setup appears after this list.
  • 3D Rendering:
    • Nuke’s 3D system provides basic rendering capabilities for simple scenes. It supports features like ambient occlusion, shadows, and reflections.
  • Shading and Lighting:
    • Nuke includes nodes for basic shading and lighting, allowing you to control the appearance of 3D objects in your composition.
  • Scene Integration:
    • You can integrate 3D elements into live-action footage, matching the camera movement for a more realistic composite.
  • Expression Linking:
    • You can use expressions to link 2D and 3D properties, allowing for dynamic relationships between elements in different dimensions.
  • Nuke can be customized in many ways through its Preferences; for example, the 3D navigation method can be changed to emulate other 3D packages.
  • The Nuke UI can also be customized and saved as a named workspace, so our preferred layout is available whenever Nuke is opened.
  • We can also create ToolSets of frequently used node groups to save time.
  • All saved ToolSets, workspaces, and preferences are stored in the user’s .nuke folder.
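
A minimal sketch of the Camera / Card / Scene / ScanlineRender setup described above, written for Nuke's Script Editor. The ScanlineRender input indices are an assumption, so verify the wiring against the arrow labels in the Node Graph.

```python
# Minimal sketch of a basic Nuke 3D setup (run inside Nuke's Script Editor).
# Input indices on ScanlineRender are an assumption; check the arrow labels
# (obj/scn, cam, bg) in the Node Graph if the wiring looks wrong.
import nuke

cam = nuke.nodes.Camera2()
cam['focal'].setValue(35)
cam['translate'].setValue([0, 0, 5])

card = nuke.nodes.Card2()          # a simple plane that can carry a texture
scene = nuke.nodes.Scene()
scene.setInput(0, card)

render = nuke.nodes.ScanlineRender()
render.setInput(1, cam)            # cam input (assumed index)
render.setInput(2, scene)          # obj/scn input (assumed index)
```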

Nuke Camera

Nuke supports the use of virtual cameras, allowing you to create a 3D space and move the camera within it. This is useful for matching the movement of live-action footage or creating parallax effects.

  • Create a Camera Node:
    • In the Node Graph, press Tab to open the node creation panel.
    • Type “Camera” and select the “Camera” node.
  • Import Camera Data:
    • If you have camera tracking data from external software (e.g., PFTrack, SynthEyes), import it via the Camera node’s “read from file” option (FBX/Alembic) or a .chan file.
  • Adjust Camera Settings:
    • Open the Camera node properties by double-clicking on it.
    • Set the film back, focal length, and other parameters to match the real camera used during filming.
  • Create 3D Objects:
    • Use Card nodes or other geometry nodes to represent objects in the 3D space.
    • Connect them to the ScanlineRender node for rendering.
  • Animate the Camera:
    • Keyframe the camera’s translation, rotation, and focal length to match the movement in the live-action footage.
    • You can use keyframes or expressions to link camera properties to tracking data.
  • Camera Projection:
    • Use the Project3D node to project 2D images onto 3D geometry based on the camera’s perspective. A short scripted example of creating and keyframing a camera follows below.
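
A small sketch of creating and keyframing a camera from Python; the frame numbers and values are placeholders rather than real tracking data.

```python
# Minimal sketch: keyframe a Camera node from Python (values are placeholders)
import nuke

cam = nuke.toNode('Camera1') or nuke.nodes.Camera2(name='Camera1')
cam['focal'].setValue(35)

# Animate translate: channels 0/1/2 are x/y/z
for channel in (0, 1, 2):
    cam['translate'].setAnimated(channel)
cam['translate'].setValueAt(0.0, 1001, 0)   # x at frame 1001
cam['translate'].setValueAt(1.5, 1050, 0)   # x at frame 1050
cam['translate'].setValueAt(2.0, 1050, 2)   # z at frame 1050
```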

Scanline Render

The ScanlineRender node in Nuke is used for rendering 3D scenes within the compositing environment. It simulates a simplified rendering process, taking into account the lighting, shading, and textures of 3D objects in a scene.

Node Properties:
  • Render Settings:
    • In the ScanlineRender node properties, you can find settings for rendering quality, anti-aliasing, and other parameters.
  • Shading:
    • The look of 3D objects comes from shader nodes (e.g., BasicMaterial, Phong) applied to the geometry, rather than from the ScanlineRender node itself.
  • Background:
    • Specify the background color or connect another image node to the “Background” input for a more complex background.
  • Outputs:
    • The ScanlineRender node typically has outputs for the rendered image, depth information, and other auxiliary data.

Lens Distortion

Lens distortion refers to the imperfections introduced by camera lenses that can cause straight lines to appear curved or distorted. In visual effects and compositing, correcting lens distortion is crucial for seamlessly integrating elements into live-action footage. Nuke provides tools to analyze and correct lens distortion.

  • Understanding Lens Distortion:
    • Radial Distortion: Causes straight lines to curve, more pronounced at the frame edges.
    • Tangential Distortion: Caused by the lens not being perfectly aligned with the sensor; it skews the image asymmetrically.
  • LensDistortion Node:
    • Analysis: Use the LensDistortion node to estimate distortion parameters from a grid pattern.
    • Correction: Apply obtained parameters for distortion correction.
  • Undistort and Distort Nodes:
    • Undistort: Use the Undistort node to remove lens distortion.
    • Distort: The Distort node reintroduces lens distortion, e.g., for 3D integration.
  • LensDistortion Model:
    • Model Options: Choose a lens distortion model appropriate to the footage (the node offers several presets, including Nuke’s classic model).
    • Parameters (K1, K2, P1, P2): Radial (K) and tangential (P) coefficients that define the amount and type of distortion correction.
  • Fine-Tuning:
    • Grid Warp: Manually adjust correction with a grid warp in the LensDistortion node.
    • Advanced Correction: Combine the automatic solve with manual adjustments when extra control is needed.
  • Animation:
    • Keyframe Parameters: Adjust distortion parameters for changing distortion over time.
  • Checkerboard Patterns:
    • Calibration Aid: Filming a checkerboard pattern aids in accurate distortion analysis.

STmap

An STmap in Nuke is an image whose red and green channels store, for every pixel, the S and T (i.e., U and V) coordinates at which to sample another image. Because it encodes a full per-pixel warp, it is commonly used to carry lens-distortion data: the same map can undistort or redistort footage, and a per-frame sequence of maps can represent distortion that changes over time.

  • Understanding STmap:
    • Coordinate Map: Each pixel stores the source coordinates to sample from, so arbitrary spatial warps, including lens distortion, can be baked into a single image.
  • LensDistortion Node and STmap:
    • LensDistortion Node: In NukeX, the LensDistortion node can analyze footage or a calibration grid and solve the distortion.
    • STmap Output: The node can output its solve as an STmap that encapsulates the distortion of the footage.
  • Usage of STmap:
    • STMap Node: The STMap node applies the map to undistort or redistort images (see the scripted sketch after this list).
  • Creation of STmap:
    • Calibration Grid: Use a grid during shooting for generating an STmap, providing reference points for distortion analysis.
    • Analysis: The LensDistortion node analyzes the grid to create the corresponding STmap.
  • Application to Animation:
    • Changing Distortion Over Time: For evolving lens distortion, animate distortion parameters or use a sequence of STmaps.
  • Manual Adjustments:
    • GridWarp and STmap: The GridWarp node, combined with an STmap, allows manual adjustments, helpful when automatic analysis falls short.
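
A minimal sketch of applying an STmap with the STMap node from Python; the file names are placeholders, and the input order and uv-channel value are assumptions to verify in your version of Nuke.

```python
# Minimal sketch: apply an STmap with the STMap node (file paths are placeholders)
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')
stmap = nuke.nodes.Read(file='lens_undistort_stmap.exr')

apply_map = nuke.nodes.STMap()
apply_map.setInput(0, plate)      # src input (assumed index)
apply_map.setInput(1, stmap)      # stmap input (assumed index)
apply_map['uv'].setValue('rgb')   # channels holding the s/t coordinates (assumed layer name)
```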

Cleanup in NUKE 21/11

Roto Paint

  • In Nuke, the RotoPaint node is a versatile tool used for both rotoscoping and painting tasks within a compositing workflow. It combines the capabilities of both the Roto and Paint nodes, allowing artists to create complex shapes for rotoscoping and perform detailed paint work directly within the same node.
Painting:
  • Brush-Based Painting: The RotoPaint node includes painting tools similar to those found in standalone paint applications. Artists can use brushes to clone, smudge, blur, and paint directly onto the image.
  • Frame-by-Frame Painting: It supports frame-by-frame painting, making it possible to create hand-painted elements that evolve over time in a sequence.
  • Integration with Rotoscoping: The ability to paint directly on top of roto shapes is valuable. This allows for precise paint work on specific regions of the image, matching the motion and contours defined by the rotoscoping shapes.
Clone and Repair:
  • Clone Brush: The RotoPaint node includes a clone brush that allows you to sample pixels from one part of the image and paint them onto another. This is useful for removing unwanted elements or duplicating parts of the image.
  • Repair Work: Artists can use the RotoPaint node for repairing and fixing issues in the footage, such as wire removal or blemish cleanup.
Integration with the Nuke Environment:
  • Layered Approach: Like other nodes in Nuke, the RotoPaint node works in a layered manner, allowing you to apply multiple instances of the node with different settings for complex compositing tasks.
  • Integration with Channels: It can work with multiple input and output channels, giving you control over how the roto and paint information integrates with other elements in the composite.
Smear Tool:

The Smear tool can simulate motion blur by dragging or smearing pixels in the direction of motion. This is handy for matching the motion blur of live-action elements or for adding a sense of movement to painted or roto shapes.

The RotoPaint node is a powerful tool widely used in VFX for tasks like rotoscoping, painting, and image repair.

Grain & Noise
  • Grain refers to the visual noise or texture present in an image. Grain is often a result of film or sensor characteristics and can be an important aesthetic element, especially when compositing CG elements into live-action to achieve a more realistic look.
  • In digital imaging, especially in the context of sensors and electronic devices, “noise” refers to random variations in brightness or color. This noise can result from factors such as sensor sensitivity, electronic interference, or high ISO settings in low-light conditions.
Denoise

Denoise refers to the process of reducing or eliminating digital noise in an image or sequence. Digital noise often appears as unwanted random variations in brightness or color, and it can result from factors like low-light conditions, high ISO settings, or the limitations of digital sensors. The Denoise node in Nuke is a tool specifically designed to address and mitigate this type of noise.

Capturing the grain from the footage

  • This process allows us to analyze the existing grain pattern in a clean plate or reference frame and apply it to other elements in our composite; a small scripted sketch of the same idea follows this list.
  • Select a Clean Plate:
    • Choose a frame from your footage where there is no significant action or objects of interest, and the background is relatively uniform. This will be our clean plate, and we’ll capture the grain from this frame.
  • Create a Grain Sample:
    • Place the clean plate on the timeline.
    • Use a Copy node to duplicate the clean plate.
  • Add Grain to the Duplicate:
    • Apply a Grain or Noise node to the duplicated clean plate.
    • Adjust the settings of the grain to match the natural grain in our footage. We may need to tweak parameters such as size, intensity, and seed to match the original grain.
  • Difference Operation:
    • Use a Dissolve or Difference node to compare the original clean plate with the duplicated one containing added grain. This will help us see the difference and fine-tune the settings.
  • Create a Grain Map:
    • Use the Difference node’s output as a guide to create a black and white map where the grain is most prominent. We can use nodes like Grain2D, Blur, or ColorCorrect to enhance this map.
  • Apply Grain to Other Elements:
    • Use the Copy node or a similar method to apply the grain map to other elements in our composite.
    • Adjust the blending mode or opacity to control the intensity of the added grain.
  • Fine-Tune as Needed:
    • Continuously check and adjust the added grain to make sure it matches the original footage. We may need to iterate on the settings to achieve a realistic and consistent result.
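
A commonly used scripted variant of the same extract-and-reapply idea (not necessarily the exact node set described above): isolate the grain as plate minus denoised, then add it back over the cleanup. Node names 'Plate', 'Denoised', and 'Cleanup' are hypothetical.

```python
# Minimal sketch: isolate grain as plate minus denoised, then add it back over a cleanup element
# (node names 'Plate', 'Denoised' and 'Cleanup' are hypothetical)
import nuke

plate = nuke.toNode('Plate')
denoised = nuke.toNode('Denoised')
cleanup = nuke.toNode('Cleanup')

# Merge operation 'from' computes B - A: grain = plate - denoised
grain = nuke.nodes.Merge2(operation='from', inputs=[plate, denoised])

# Merge operation 'plus' computes A + B: regrained = cleanup + grain
regrained = nuke.nodes.Merge2(operation='plus', inputs=[cleanup, grain])
```
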
DasGrain

DasGrain is a regrain tool that analyzes the grain in your plate and uses it to regrain an entire degrained comp. It works by:

  • Isolating the grain by finding the difference between a plate and its degrained counterpart.
  • Analyzing the grain response across a range of luminance values and adapting the grain to fit the comp based on the comp’s own luminance.
  • Using a mask input to restore the plate’s original grain, so you don’t have to Keymix the plate back over the top at the end of your script.
  • This method is widely used in the industry for regraining clean-ups in Nuke.
  • Regraining matters because cleanup patches lack the noise pattern of the surrounding plate; without matching grain, the patched object will not blend seamlessly.
  • It is crucial to blend the grain seamlessly with the footage; otherwise, improper execution can cause issues further down the pipeline.

2D Tracking

  • 2D tracking in Nuke refers to the process of analyzing and following the motion of objects within a two-dimensional space in a video or image sequence. This tracking is essential for tasks such as adding elements to a scene, stabilizing footage, or applying visual effects that need to move with a specific object.

2D Tracking Process

1. Import Footage:

  • Open Nuke and import your footage or image sequence.
  • Analyse the footage and decide how to approach the shot.

2. Create a Tracker Node:

  • In the Node Graph, right-click and select “Tracker” from the menu.
  • Connect the Tracker node to your footage.

3. Select Tracking Points:

  • Open the Tracker node properties.
  • Click “Add Track” to define tracking points on the phone screen.
  • Choose high-contrast points that will allow for accurate tracking.

4. Define Tracking Region:

  • For each tracking point, draw a tracking region around it.
  • These regions should cover the area of the phone screen.

5. Analyze Motion:

  • Use the track forward/backward buttons in the Tracker node properties.
  • Nuke will analyze the motion of the tracking points throughout the sequence.

6. Green De-Spill:

  • Use green de-spill to remove the green spill or color contamination that often occurs on edges of the subject when working with green screens.

7. Create a CornerPin Node:

  • Add a CornerPin node to the Node Graph.
  • Connect the output of the Tracker node to the input of the CornerPin node.

8. Apply Tracking Data to CornerPin:

  • Connect the tracking data to the CornerPin’s input parameters.
  • This will allow the CornerPin to stabilize the footage based on the tracked motion.

9. Create a Roto Node for the Phone Screen:

  • Add a Roto node and draw a shape around the phone screen.
  • Animate the shape to account for any movement or rotation that the tracker might have missed.

10. Merge the Replacement Image:

  • Import the replacement image for the phone screen.
  • Use a Transform (or Reformat) node to size and position it appropriately.

11. Apply CornerPin to Replacement Image:

  • Connect the replacement image into the CornerPin node that carries the tracking data.
  • This ensures that the replacement image follows the stabilized motion.

12. Adjust Placement:

  • Fine-tune the placement of the replacement image to match the stabilized phone screen.

13. Review and Iterate:

  • Scrub through the timeline to review the stabilized phone screen replacement.
  • Make manual adjustments as needed for a seamless integration.

14. Negate Stabilization

  • Remove the stabilization by applying the same tracking data again, this time with the transform set to match-move (match-move 1-pt).

15. Finalize and Render:

  • Once satisfied, render the final composition. A small scripted sketch of the basic node chain follows below.
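
A minimal sketch of the node chain from the steps above. The track points themselves are still placed interactively, and the corner-pin values are normally baked or linked via the Tracker's Export dropdown; node and file names here are placeholders.

```python
# Minimal sketch: build the tracking node chain with Python
import nuke

plate = nuke.toNode('Read1')  # hypothetical plate Read

tracker = nuke.nodes.Tracker4(inputs=[plate])

# A CornerPin that will receive the screen-corner data (values are usually
# baked or expression-linked from the Tracker's Export dropdown)
pin = nuke.nodes.CornerPin2D()

# Replacement screen image, scaled/positioned before the corner pin
screen = nuke.nodes.Read(file='phone_screen.png')
fit = nuke.nodes.Transform(inputs=[screen])
pin.setInput(0, fit)
```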

Animation 20/11

Animation principles

1. Timing and Spacing

Timing and spacing in animation create the illusion of motion within the laws of physics. Timing is the number of frames between two poses, determining the speed of movement. Spacing is the placement of individual frames, with closer spacing creating a slower appearance and wider spacing resulting in faster movement.

2. Squash & Stretch

Squash and stretch add flexibility to objects, commonly exaggerated in animation. Similar to real-life occurrences, like a falling ball stretching before impact and squashing upon hitting the ground, this principle is evident in animated elements such as facial expressions—where eyes squash during a blink or stretch when expressing surprise or fear.

3. Anticipation

Anticipation in animation preps the audience for upcoming actions, enhancing believability. Whether a baseball pitcher winding up before a throw or a parkour runner bending their knees before a jump, these preparatory movements are essential for realistic and convincing animation. Without anticipation, these actions would lack authenticity.

4. Ease-In and Ease-Out

Ease-In and Ease-Out, also known as slow-in and slow-out, involve incorporating acceleration and deceleration into movements. Just as a car gradually accelerates from a standstill or slows down before a complete stop, animation benefits from starting slower (closer frames), accelerating (wider frames), and then slowing again (closer frames). This principle prevents unnatural, robotic-looking movements, adding a more realistic and fluid quality to animation.

5. Follow Through and Overlapping

Follow Through and Overlapping, while distinct, are closely related principles. When a character stops walking, not every body part halts instantly; there’s a natural follow-through where clothing and body parts continue moving. Overlapping action involves different body parts moving at different times, creating a realistic effect. In a waving motion, for instance, the shoulder initiates the movement, followed by the arm, and the elbow and hand lag behind by a few frames. These principles capture realistic movement with elements moving at slightly varying speeds.

6. Arcs

Arcs are crucial in animation as virtually everything in real life moves in some form of arching motion. People don’t move in straight lines unless you’re animating a robot. When a person turns their head or a character moves, there’s a natural inclination for the motion to follow an arched trajectory, such as the dip of the head during a turn or the toes moving in a rounded, arching motion.

7. Exaggeration

Exaggeration is employed in animation to enhance the appeal of movements. Whether creating highly cartoony actions or adding a touch of exaggeration for realistic effects, it elevates the animation’s visual interest. In realistic animation, exaggeration can be used to make movements more readable or enjoyable while maintaining a connection to reality. For instance, when depicting a diver preparing to dive, exaggeration can be applied by pushing them down a bit further before the leap, adding a dynamic touch. Timing can also incorporate exaggeration to emphasize different movements or enhance the perception of a character’s weight.

8. Solid Drawing

Solid Drawing is vital for maintaining balance and anatomical accuracy in poses. In 3D animation, although animators may rely less on hand-drawn elements, the concept of solid drawing remains crucial. It involves creating drawings with a sense of volume and weight, ensuring accuracy in the pose. In 3D character rigging, attention to balance, weight distribution, and silhouette clarity is essential. Additionally, avoiding ‘twinning’—mirroring a pose on both sides of the character—helps create more dynamic and realistic animations.

9. Appeal

Appeal in animation extends to various aspects, such as appealing poses and character design. A key aspect is the character’s design, aiming for a connection with the audience. Complex or unclear designs may lack appeal. Enhancing character uniqueness involves pushing and exaggerating certain features, like exaggerating the jaw or emphasizing youthfulness in the eyes, contributing to a more memorable and appealing character design.

10. Straight Ahead And Pose to Pose Action

Straight Ahead Action and Pose to Pose are two distinct animation techniques.

  1. Straight Ahead Action:
    • Spontaneous and linear approach.
    • Each pose or drawing is created sequentially, one after another.
  2. Pose to Pose:
    • Methodical and planned.
    • Involves creating only the essential poses needed to convey the action.
    • Allows for a simpler and more focused workflow, ensuring correct posing and timing before adding finer details.

11. Secondary Action

Secondary Action involves creating supporting actions that emphasize the main action in an animation, adding depth and authenticity to the performance. It should be subtle, complementing rather than distracting from the primary action. For instance, in a scene where characters are talking (main action), a character tapping their fingers nervously (secondary action) enhances the overall realism. Another example could be a character walking down the street while whistling, where the whistling serves as a secondary action.

12. Staging

Staging involves setting up a scene, including character placement, background elements, and camera angles, to ensure the animation’s message is clear. It focuses on communicating character expressions or interactions effectively, using camera angles that best convey the intended message. The goal is to prevent viewer confusion by maintaining a clear focus on the shot’s purpose and the desired communication.


Celestial Misfire – Term 1 Project

Final Output

My Reflection with this project

I’m happy that everything about this project turned out to be very useful for developing my skills, especially in new software like Unreal Engine. I’m also pleased with how well the main shot of my project, the Moon impact, turned out.

The first part of the project involved coming up with an idea and a story, which proved to be challenging. In fact, I hadn’t decided until 2-3 weeks had passed. Initially, I considered a spaceship emerging and destroying a moon, but it felt too simple and lacked a compelling story. After some contemplation, I recalled a scene from the movie Top Gun where a jet maneuvers to evade a missile. This inspired the idea of a jet evading a missile, leading to the missile inadvertently hitting and destroying the moon.

I began the project by building the environment in Unreal Engine, marking my first experience with the software. It was highly beneficial, allowing me to quickly create basic terrains and populate the area with various assets. To add an element firing the projectile at the jet, I came up with the idea of using artillery. Subsequently, I worked in Maya for modeling and Substance Painter for texturing, learning those packages as well. I wanted to craft a scene where the artillery shoots at the plane, and the plane skillfully maneuvers to evade the projectile, causing it to hit the moon instead.

For the moon destruction, I opted to use Houdini for FX due to my prior experience with the software and my career focus on FX. I drew inspiration from an online lesson at Rebelway, where they demonstrated a similar process. I decided to incorporate it into my project. Starting with a sphere and experimenting with various parameters, I achieved the desired effect. This process allowed me to gain valuable insights into VEX language and how to approach certain elements without resorting to simulation. This knowledge proved extremely useful for making quick changes. In fact, out of the five elements, I only used simulation for two, achieving the rest through SOPs.

Story / Concept

FADE IN: 

EXT. DESERT – DAY 

  • The Space Odyssey theme plays as the camera moves from a ground-up shot of a sand dune, revealing a cannon in the distance. 

EXT. REMOTE AREA – EARTH – DAY 

  • A cannon is set up in a remote area. 

INT. CANNON SHOULDER VIEW – DAY 

  • The camera shows a shoulder view of the cannon, capturing a twinkle of a jet flying high in the sky. 

EXT. SKY – DAY 

  • A fighter jet maneuvers around the sky, seemingly oblivious to the cannon below. 
  • The cannon takes aim and fires a shot towards the fighter jet. 
  • CAMERA SLOWLY FOLLOWS the projectile as it travels through the sky. 
  • The fighter jet executes a cobra move inspired by Top Gun. 
  • However, the shot misses the fighter jet and instead collides with the moon. 
  • The projectile slams into the moon, causing a massive explosion. 
  • The moon is destroyed, leaving only dust and debris behind. 

FADE OUT. 

The Space Odyssey music comes to an end. 

3D Modelling in Maya

Reference
Blocking
Modelling
UVs

Texturing in Substance

Import in substance painter
Apply base material
Texturing

Creating Env In Unreal

  • Creating a desert environment in Unreal Engine involves several steps, including terrain creation, asset placement, lighting, and fine-tuning to achieve a realistic and immersive result.
1. Create a New Project:

Start Unreal Engine and create a new project. Choose the template that best fits your project requirements, such as the Third Person or First Person template.

2. Landscape Creation:
  • Create a new landscape by going to the Landscape mode.
  • Sculpt the landscape to resemble the desert terrain. Use tools like “Sculpt,” “Flatten,” and “Smooth” to shape the landscape according to your vision.
3. Desert Materials:
  • Apply desert materials to the landscape. You can either create your own materials or use existing ones from the Unreal Engine Marketplace.
  • Consider adding features like sand dunes, rocks, and desert vegetation to enhance the realism.
4. Sky and Atmosphere:
  • Adjust the sky and atmospheric settings to match a desert environment. You can use the “Sky Atmosphere” actor to control the overall look of the sky, including the sun position and atmosphere settings.
5. Lighting:
  • Configure the lighting to simulate the harsh sunlight of a desert environment. Pay attention to the direction, intensity, and color of the light source.
  • Consider using dynamic lighting to create realistic shadows.
6. Asset Placement:
  • Populate the environment with assets such as rocks, cacti, tumbleweeds, and other desert-themed objects. You can either create your own assets or use assets from the Unreal Engine Marketplace.
7. Post-Processing:
  • Apply post-processing effects to enhance the overall visual appeal. Adjust settings such as bloom, contrast, and color grading to achieve the desired look.
Imported 3D model

Moon FX in Houdini

  • I have decided to use Houdini for FX since I already have some experience working with it. I love the procedural approach, and it allows me to have total control over my FX.
My approach in Houdini
Geometry prep
  • I took a standard sphere, applied UVs for future use, and added a temporary texture for reference.
  • Marked a point on the sphere where the impact and all the FX reference data would be centred.
  • Using that point as the origin, I created a falloff defining where the moon destruction would take place.
  • I generated custom velocity vectors with VEX, pointing outward from the designated point, and attenuated them with the previously created falloff mask for distance-based reduction (a small Python SOP sketch of the same idea follows below).
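
The velocity setup above was done in VEX; purely as an illustration of the same idea, here is a rough Python SOP sketch. The attribute name "mask", the impact position, and the speed value are assumptions.

```python
# Minimal Python SOP sketch of the same idea (the original was done in VEX):
# push points outward from an impact point, scaled by a painted falloff mask.
# Attribute names ("mask", "v"), the impact point and the speed are assumptions.
node = hou.pwd()
geo = node.geometry()

impact = hou.Vector3(0.0, 0.0, 1.0)   # hypothetical impact point on the moon
speed = 5.0

if not geo.findPointAttrib("v"):
    geo.addAttrib(hou.attribType.Point, "v", (0.0, 0.0, 0.0))

for pt in geo.points():
    direction = (pt.position() - impact).normalized()
    falloff = pt.floatAttribValue("mask")        # 0..1 falloff painted earlier
    vel = direction * (speed * falloff)
    pt.setAttribValue("v", (vel[0], vel[1], vel[2]))
```
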
Creating source for Dust FX
  • I utilized the previously created mask to form a ring-like shape. This segmented ring will have custom velocity applied and serve as the source for the Dust FX.
Creating Shock wave with the generated source
  • Now, with the source established, I created custom velocity by implementing various wrangles and SOP techniques.
  • To achieve this, I duplicated the source using a trail node and utilized an add node along with an ID attribute to connect the points, forming lines that would serve as the source. Using a foreach loop, I generated lines with various lengths.
  • To make the shockwave originate from the cracks on the moon’s surface, I used these lines as a mask. Using attribute paint, I applied the mask to the interior points of the fractures, deleted the points that were not marked red, and used the resulting point set as the source for the POP sim. Once the particle simulation was done, I fed it into the pyro solver to achieve the desired shockwave effect.
Creating Tendrils with SOP
  • I wanted to give more impact to the destruction, So I added trails/tendrils to represent fast-moving debris coming out from the moon.
  • For the tendrils I used lines with various pscale values to get different lengths, and I used a global multiplier to animate pscale which will give the illusion of trail growth across frames.
  • After that, I added some animated noise to make the trail look turbulent. I also applied a gradient with values ranging from 0 to 1, going from the origin to the end. This data is used to control the width of the trail.
  • I converted the result into geometry, then transformed it into a VDB for the dust effect. Additionally, I applied an overall time remap to enhance the impact.
Fracture of surface in SOP
  • To fracture the surface of the moon without relying on simulations, I opted for a SOP-based approach, incorporating extensive VEX coding.
  • Initially, I prepared the geometry by implementing cuts with a custom object and added interior detail to the inside geometry. Subsequently, I extracted the centroids of all the resulting pieces and applied a mask ramp using a mask attribute created at the beginning. This attribute allowed me to define the desired area of effect on the surface. Lastly, I utilized clusters to group the pieces into clumps.
  • In the final step, I used VEX code to manipulate the intrinsic transformation data, modifying inner transformations. Utilizing the attributes created earlier, I adjusted these data to simulate the effect of an RBD simulation. Custom ramps were also implemented to provide precise control over these attributes.
Manipulating pieces with VEX
Secondary Sim (Debris)
  • To enhance the impact, I implemented a particle simulation to depict the scattering of debris after the moon was impacted.
  • Initially, I took the animated fractured geometry and converted it into points to serve as the source for the particle simulation. Utilizing speed culling, I removed stationary points and then crafted custom velocity, resembling a crown splash effect.
  • After preparing the source, I integrated it into the particle simulation (pop sim). Further refinement was achieved by introducing variations in scale and applying random animated rotations to the resulting particles.
  • To streamline rendering speed, I opted for a straightforward approach by substituting the particles with a simple Octahedron shape in the final visualization.

Breakdown video


Planar Tracking

  • Planar tracking is a technique used for tracking the movement and transformation of flat or planar surfaces within a video or image sequence. Unlike point tracking, which tracks individual points in an image, planar tracking focuses on tracking the entire planar surface, making it particularly useful for tracking objects with consistent textures or patterns.

Here’s how planar tracking can be done in Nuke:

Asset Gather:
  1. Selection of Planar Surface:
    • Identify the planar surface in the footage that you want to track. This surface should have distinguishable features or patterns that the tracking algorithm can follow.
  2. Adding a Planar Tracker:
    • In NukeX, planar tracking is done with a planar track layer inside a Roto or RotoPaint node. Define the region on the first frame, indicating the planar area to be tracked.
  3. Automatic Tracking:
    • The planar tracker analyzes the selected features within the defined region and automatically tracks their movement across subsequent frames. This is beneficial for surfaces with consistent textures, making planar tracking robust and reliable.
  4. Adjustments and Refinement:
    • After the automatic tracking, you can review and make manual adjustments if needed. This allows you to refine the tracking by correcting any drift or errors that may have occurred during the automatic tracking process.
  5. Transformation Data:
    • The planar tracker provides transformation data, including position, scale, rotation, and skew, for each frame. This data can then be applied to other elements, such as graphics or effects, to ensure they match the movement and perspective changes of the tracked planar surface.
  6. Integration into Compositing:
    • Once the planar tracking is complete, you can integrate other elements into the scene, and they will follow the tracked planar surface’s movement. This is particularly useful for tasks like adding labels to moving objects or seamlessly integrating CGI elements into live-action footage.

  • We may encounter issues with foreground elements, such as the light pole in this image. However, we can easily overcome these challenges by utilizing the same tracking data to apply rotoscoping. The resulting roto alpha can then be used as a clipping mask to bring the pole to the foreground.

Filtering Algorithms 14/11

Types of filters

  • Image filtering algorithms are designed to assist in determining the changes that occur in pixel values as images undergo transformation and processing. This article will dissect the various image filters at our disposal and explore their effects on images within the context of Nuke.

Resize Type:

Sinc4 – Lots of sharpening, often too much sharpening

Lanczos6 – Moderate amount of sharpening over 6 pixels. Good for scaling down.

Lanczos4 – Small amount of sharpening over 4 pixels. Good for scaling down.

Rifman – Moderate smoothing and high sharpening. Typically too hard on the sharpening in many situations.

Simon – Some smoothing and moderate sharpening. Excellent choice for many situations.

Keys – Some smoothing plus minor sharpening, decent choice for general transformations.

Anisotropic – High quality filter. Performs well with high angle surfaces. Only available in 3D nodes.

Cubic – Nuke default filter. Pixels receive some smoothing, leading to predictable results. Often too smooth.

Mitchell – Moderate smoothing, low sharpening, with a slight blur. Changes pixel values even with no movement.

Notch – High amounts of flat smoothing. Good for hiding buzzing or moire patterns. Changes pixel values even with no movement

Parzen – Lots of smoothing. Changes pixel values even with no movement.

Filtering workflow when using with/without motion blur

Concatenation

Concatenation is the ability of a chain of transform-family nodes to combine their operations into a single mathematical calculation, so the image is filtered only once. This single calculation, or filter, lets us preserve the maximum amount of detail possible.

Examples

Wrong way

Here we have a pixel that we transform by 0.5 pixels in X and Y; we then use a Grade node to change the values, and finally apply another Transform moving it back by -0.5 pixels in X and Y. We lose quality, because the link/concatenation between the two Transforms is broken by the Grade placed in between, so each Transform filters the image separately.

Right way

Here we have a pixel that we transform by 0.5 pixels in X and Y and immediately apply another Transform moving it back by -0.5 pixels in X and Y; after that we use a Grade node to change the values. We do not lose quality, because the link/concatenation is not broken: the Grade is applied after the transform calculations have been combined into a single filter. Placing the Grade before both Transforms works too. A small scripted sketch of both node orders follows below.
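
A small scripted sketch of the two node orders described above, so the difference is easy to rebuild and compare; the source node name is a placeholder.

```python
# Minimal sketch: the two node orders described above, built with Python
import nuke

read = nuke.toNode('Read1')  # hypothetical source

# Wrong way: Transform -> Grade -> Transform (the Grade breaks concatenation)
t1 = nuke.nodes.Transform(inputs=[read])
t1['translate'].setValue([0.5, 0.5])
g1 = nuke.nodes.Grade(inputs=[t1])
g1['multiply'].setValue(1.2)
t2 = nuke.nodes.Transform(inputs=[g1])
t2['translate'].setValue([-0.5, -0.5])

# Right way: Transform -> Transform -> Grade (the two Transforms concatenate)
t3 = nuke.nodes.Transform(inputs=[read])
t3['translate'].setValue([0.5, 0.5])
t4 = nuke.nodes.Transform(inputs=[t3])
t4['translate'].setValue([-0.5, -0.5])
g2 = nuke.nodes.Grade(inputs=[t4])
g2['multiply'].setValue(1.2)
```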

BBOX

The bounding box defines the area of the frame that Nuke sees as having valid image data. The larger the bounding box is, the longer it takes Nuke to process and render the image. To minimize processing and rendering times, you can crop the bounding box. Occasionally, the bounding box may also be too small, in which case you need to expand it.
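
A minimal scripted sketch of trimming an oversized bounding box back to the frame with a Crop node; the node name is a placeholder.

```python
# Minimal sketch: trim an oversized bounding box back to the format with a Crop node
# (node name 'Merge1' is hypothetical)
import nuke

node = nuke.toNode('Merge1')
crop = nuke.nodes.Crop(inputs=[node])
crop['box'].setValue([0, 0, node.width(), node.height()])  # clamp the bbox to the frame
```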

Motion Blur

In Nuke, motion blur can be applied to enhance the realism of moving elements within a scene. It’s achieved by calculating the movement of objects between frames and then blurring them accordingly. This helps to create a more natural and visually appealing representation of motion, especially when working with animations or scenes involving fast-moving subjects.

Proper workflow when using motion blur in nuke

2D and 3D camera projections

In Nuke, 2D camera projection is a technique used to integrate 2D elements, such as images or graphics, into a 3D scene. This process involves taking a flat, 2D image and mapping it onto a 3D surface as if it’s being viewed from a specific camera perspective.

2D camera projection is commonly used in visual effects and motion graphics to add elements like signs, labels, or textures to scenes that were not present in the original footage but need to look as if they belong in the 3D environment.

Camera projection workflows

Defocus

The Defocus node in Nuke is used to simulate the blurring effect that occurs when a camera is out of focus. When a camera lens is not focused perfectly on a subject, objects at different distances from the focal plane appear blurred in the captured image. The Defocus node allows you to replicate this effect in post-production, giving you control over the amount and nature of the blur applied.

Depth of field

  • The ZDefocus node blurs the image according to a depth map channel. This allows you to simulate depth-of-field (DOF) blurring.
  • In order to defocus the image, ZDefocus splits the image up into layers, each of which is assigned the same depth value everywhere and processed with a single blur size. After ZDefocus has processed all the layers, it blends them together from the back to the front of the image, with each new layer going over the top of the previous ones. This allows it to preserve the ordering of objects in the image.

Rigging | 13/11

Parenting

  • Parenting links an object to a parent object so that any change made to the parent also affects the child.
  • The child, on the other hand, can still move independently even while parented to an object.
  • While this can be used to animate an object, it becomes tricky when animating bipeds, quadrupeds, or anything with arm-like movement. In those cases we use IK/FK rigging.

Bones

  • Bones, as the name suggests, are placed underneath the mesh for rigging; they control the geometry that lies above them.
  • The geometry is made to follow by parenting (or binding) it to the bones.

IK Handle

  • Through an IK handle we can get arm-like movement in a joint chain without needing to animate each joint manually.
  • For human-like characters, Maya has an auto-rig (Quick Rig) option that is much easier to use. A small scripted sketch of parenting, joints, and an IK handle follows below.

IK Handle
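
A minimal maya.cmds sketch of the ideas above: parenting, a simple joint chain, and an IK handle. The object names and joint positions are placeholders.

```python
# Minimal sketch: parenting, a simple joint chain, and an IK handle with maya.cmds
import maya.cmds as cmds

# Parenting: the child follows whatever is done to the parent
cmds.parent('childGeo', 'parentCtrl')   # hypothetical object names

# A three-joint chain (e.g. shoulder -> elbow -> wrist)
cmds.select(clear=True)
shoulder = cmds.joint(position=(0, 10, 0))
elbow = cmds.joint(position=(3, 10, 1))
wrist = cmds.joint(position=(6, 10, 0))

# An IK handle from shoulder to wrist gives arm-like movement from a single handle
ik_handle = cmds.ikHandle(startJoint=shoulder, endEffector=wrist, solver='ikRPsolver')
```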