ALWIN FELIX's BLOG
Final Output
My Reflection on this Project
I’m happy that everything about this project turned out to be very useful for developing my skills, especially in new software like Unreal Engine. I’m also pleased with how well the main shot of my project, the Moon impact, turned out.
The first part of the project involved coming up with an idea and a story, which proved to be challenging. In fact, I hadn’t decided until 2-3 weeks had passed. Initially, I considered a spaceship emerging and destroying a moon, but it felt too simple and lacked a compelling story. After some contemplation, I recalled a scene from the movie Top Gun where a jet maneuvers to evade a missile. This inspired the idea of a jet evading a missile, leading to the missile inadvertently hitting and destroying the moon.
I began the project by building the environment in Unreal Engine, marking my first experience with the software. It was highly beneficial, allowing me to quickly create basic terrain and populate the area with various assets. To add an element firing the projectile at the jet, I came up with the idea of using artillery. Subsequently, I worked in Maya for modeling and Substance for texturing, learning those packages as well. I wanted to craft a scene where the artillery shoots at the plane, and the plane skillfully maneuvers to evade the projectile, causing it to hit the moon instead.
For the moon destruction, I opted to use Houdini for FX due to my prior experience with the software and my career focus on FX. I drew inspiration from an online lesson at Rebelway, where they demonstrated a similar process. I decided to incorporate it into my project. Starting with a sphere and experimenting with various parameters, I achieved the desired effect. This process allowed me to gain valuable insights into VEX language and how to approach certain elements without resorting to simulation. This knowledge proved extremely useful for making quick changes. In fact, out of the five elements, I only used simulation for two, achieving the rest through SOPs.
Story / Concept
FADE IN:
EXT. DESERT – DAY
- The Space Odyssey theme plays as the camera moves from a ground-up shot of a sand dune, revealing a cannon in the distance.
EXT. REMOTE AREA – EARTH – DAY
- A cannon is set up in a remote area.
EXT. CANNON SHOULDER VIEW – DAY
- The camera shows a shoulder view of the cannon, capturing a twinkle of a jet flying high in the sky.
EXT. SKY – DAY
- A fighter jet maneuvers around the sky, seemingly oblivious to the cannon below.
- The cannon takes aim and fires a shot towards the fighter jet.
- CAMERA SLOWLY FOLLOWS the projectile as it travels through the sky.
- The fighter jet executes a cobra move inspired by Top Gun.
- However, the shot misses the fighter jet and instead collides with the moon.
- The projectile slams into the moon, causing a massive explosion.
- The moon is destroyed, leaving only dust and debris behind.
FADE OUT.
The Space Odyssey music comes to an end.
3D Modelling in Maya




Texturing in Substance

















Creating Env In Unreal
- Creating a desert environment in Unreal Engine involves several steps, including terrain creation, asset placement, lighting, and fine-tuning to achieve a realistic and immersive result.
1. Create a New Project:
Start Unreal Engine and create a new project. Choose the template that best fits your project requirements, such as the Third Person or First Person template.
2. Landscape Creation:
- Create a new landscape by going to the Landscape mode.
- Sculpt the landscape to resemble the desert terrain. Use tools like “Sculpt,” “Flatten,” and “Smooth” to shape the landscape according to your vision.





3. Desert Materials:
- Apply desert materials to the landscape. You can either create your own materials or use existing ones from the Unreal Engine Marketplace.
- Consider adding features like sand dunes, rocks, and desert vegetation to enhance the realism.
4. Sky and Atmosphere:
- Adjust the sky and atmospheric settings to match a desert environment. You can use the “Sky Atmosphere” actor to control the overall look of the sky, including the sun position and atmosphere settings.
5. Lighting:
- Configure the lighting to simulate the harsh sunlight of a desert environment. Pay attention to the direction, intensity, and color of the light source.
- Consider using dynamic lighting to create realistic shadows.




6. Asset Placement:
- Populate the environment with assets such as rocks, cacti, tumbleweeds, and other desert-themed objects. You can either create your own assets or use assets from the Unreal Engine Marketplace.
7. Post-Processing:
- Apply post-processing effects to enhance the overall visual appeal. Adjust settings such as bloom, contrast, and color grading to achieve the desired look.



Moon FX in Houdini
- I have decided to use Houdini for FX since I already have some experience working with it. I love the procedural approach, and it allows me to have total control over my FX.
My approach in Houdini
Geometry prep
- I took a standard sphere, applied UVs for future use, and added a temporary texture for reference.
- Marked a point where all my FX and reference data will be anchored.
- Now, I used that point as the origin and created a falloff where the moon destruction is going to take place.
- I generated custom velocity vectors using VEX functions, pointing outward from the designated point, and attenuated the velocity with the previously created falloff mask for distance-based reduction (a minimal sketch follows).
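A minimal VEX sketch of that wrangle (the impact point arrives on the second input; "mask" is my falloff attribute, and the speed channel is illustrative):

// Point Wrangle: push points away from the impact point,
// faded out by the falloff mask
vector impact = point(1, "P", 0);    // the marked origin point
vector dir    = normalize(@P - impact);
v@v = dir * chf("speed") * f@mask;   // outward velocity, masked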






Creating source for Dust FX
- I utilized the previously created mask to form a ring-like shape. This segmented ring will have custom velocity applied and serve as the source for the Dust FX.




Creating Shock wave with the generated source
- Now, with the source established, I created custom velocity by implementing various wrangles and SOP techniques.
- To achieve this, I duplicated the source using a trail node and used an add node along with an ID attribute to connect the points, forming lines to serve as the source. Using a for-each loop, I generated lines of various lengths.
- To make the shockwave originate from the cracks on the moon’s surface, I turned these lines into a mask. Using attribute paint, I applied the mask to the interior points of the fractures, then deleted the points that were not painted (a one-line cull, shown below) and used the resulting point set as the source for the POP sim. Once the particle simulation was done, I fed it into the pyro solver to achieve the desired shockwave effect.
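The cull itself is a single line in a point wrangle (the 0.5 threshold is an assumption; painted points carry a mask value of 1):

// keep only the painted crack points as the shockwave source
if (f@mask < 0.5) removepoint(0, @ptnum);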









Creating Tendrils with SOP
- I wanted to give more impact to the destruction, so I added trails/tendrils to represent fast-moving debris flying out of the moon.
- For the tendrils I used lines with varying pscale values to get different lengths, and a global multiplier to animate pscale, which gives the illusion of the trails growing across frames.
- After that, I added some animated noise to make the trails look turbulent. I also applied a gradient with values ranging from 0 to 1, running from the origin to the tip; this data is used to control the width of the trail (see the sketch after this list).
- I converted the result into geometry, then transformed it into a VDB for the dust effect. Additionally, I applied an overall time remap to enhance the impact.
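Roughly, the per-point attributes behind this look like the following (the channel names are stand-ins, not my exact setup):

// Point Wrangle on the tendril lines
f@pscale *= chf("growth");                    // keyframed 0 -> 1 to grow the trails
// gradient runs 0 at the origin to 1 at the tip; taper the width
f@width = (1 - f@gradient) * chf("maxwidth");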







Fracture of surface in SOP
- To fracture the surface of the moon without relying on simulations, I opted for a SOP-based approach, incorporating extensive VEX coding.
- Initially, I prepared the geometry by implementing cuts with a custom object and added interior detail to the inside geometry. Subsequently, I extracted the centroids of all the resulting pieces and applied a mask ramp using a mask attribute created at the beginning. This attribute allowed me to define the desired area of effect on the surface. Lastly, I used clusters to group the pieces into clumps.
- In the final step, I used VEX to manipulate the pieces’ intrinsic transform data, modifying their inner transformations. Using the attributes created earlier, I adjusted these values to mimic the effect of an RBD simulation, with custom ramps providing precise control over the attributes (a simplified sketch follows).
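The core of the trick, as a point wrangle over the packed pieces (one point per packed primitive; the channels and the missing ramps are simplified stand-ins for my setup):

// push each piece outward along the falloff
vector dir = normalize(v@P - chv("impact"));
v@P += dir * chf("dist") * f@mask;

// spin the piece by rotating its intrinsic transform
matrix3 xform = primintrinsic(0, "transform", @ptnum);
vector  seed  = rand(@ptnum);
rotate(xform, chf("spin") * f@mask, normalize(seed - 0.5));
setprimintrinsic(0, "transform", @ptnum, xform);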










Secondary Sim (Debris)
- To enhance the impact, I implemented a particle simulation to depict the scattering of debris after the moon was impacted.
- Initially, I took the animated fractured geometry and converted it into points to serve as the source for the particle simulation. Using speed culling, I removed stationary points (see the sketch after this list) and then crafted a custom velocity resembling a crown-splash effect.
- After preparing the source, I integrated it into the particle simulation (pop sim). Further refinement was achieved by introducing variations in scale and applying random animated rotations to the resulting particles.
- To keep render times down, I took a straightforward approach and substituted the particles with a simple octahedron shape in the final visualization.
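The speed cull is a small wrangle (the threshold and lift channels here are illustrative):

// drop slow or stationary points, keeping fast ones as debris
if (length(v@v) < chf("min_speed"))
    removepoint(0, @ptnum);
// bias the surviving velocity upward for a crown-splash shape
v@v += set(0, chf("lift"), 0);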








Breakdown video
London City
Week 1
Haze and Depth Techniques
- Changing the look and feel of the shot by adding fake haze, giving the illusion of a heavily polluted or disaster-struck city, using simple masks and colour corrections.







Smoke Luma Key
- Here we have a plate and a smoke element shot against a bluescreen; we can use a luma key to extract the alpha.
- When merging, it is better to use the over operation instead of screen: since we already premultiplied, the screen operation would make the smoke more transparent and inaccurate.







- Below is the right method to merge the smoke onto the plate.
- We use edge extend on the alpha to recover the lost detail in the smoke.






Luma key Inverted
- Using the luma key the opposite way and exploring some blending techniques.
- Using a blurred plate as an average for the smoke, to blend the smoke into the environment.
- Transforming the layer into the desired position.








Smoke LumaKey Log
- Treating the plate as log to get flatter, less saturated colour.
- Through this method we can retain more detail while keying footage.






WEEK 03
The Lower Third Definition

What is a lower third?
A lower third is a combination of text and graphical elements placed in the lower area of the television screen to give the audience more information. It doesn’t necessarily have to occupy the “lower third” of the screen, but that’s where it gets its name. They might seem trivial, but their necessity is clear when they’re used poorly or missing altogether, which can confuse the audience.
When might be a good time to use lower thirds? If you’re filming a documentary, or any other interview-type program, keeping track of all of the subjects can get confusing without lower thirds.
If the show, company, or film has a certain tone or aesthetic, it’s good to keep the lower third design “on brand.” All of the elements of the lower third should work together to add to the visuals, not distract from them.
ELEMENTS OF A LOWER THIRD:
Color
Typography
Animation Style
Size and Position
Shapes and Logos
Setting things up
Before bringing in the logo, it’s better to have a few seconds with the main subject clearly visible, directing the viewer’s attention to the content of the screen; after that we can introduce the company logo and typography with a motion-graphics style animation to build appeal for the broadcasting network.
Title safe and Action Safe
Title safe and action safe are used to ensure that text and important visual elements in a video are clearly visible and not cut off when displayed on different screens.
Title safe refers to an area within the video frame where text or important subtitles should be placed to ensure they are fully visible on all screens, even if the edges of the video are cropped.
Action safe is a slightly larger area within the frame where the most essential visual elements should be placed, such as characters or important objects, to prevent them from being cut off on some screens.
By using title safe and action safe guidelines, video editors and designers can ensure that their content is displayed accurately and effectively on various devices and platforms.

Grid
The grid node can be used as a stencil to align the graphics on the screen and keep the proportions pleasing, since humans easily spot imperfections.

Creating the Graphic elements
Element 1
We can use a constant, draw a rectangular roto, and then transform and animate it across the screen.


Element 2
For the second element we create a gradient that appears on screen with a wipe effect; we can do this with a ramp node and a roto that animates over time.



We can add extra touches like transform animation, and make use of the curve editor to get ease-in or ease-out effects, which makes the whole graphic more interesting.
Element 3
Here we are going to have a small box which is going to contain the logo of the broadcasting network.
First we take a constant and apply a ramp that has alpha; we then use noise along with a distort node, in tandem with the alpha from the gradient, to get the texture, and finally merge everything together.






Element 4
We apply the same kinds of effects to the other elements as well.






Element 5, 6 and 7










Special Effects
We can create a special effect that looks like a box blur and slide the animation across time to give the graphic a revealing effect. We can use a combination of blur, grade and edge-detect nodes to achieve this.








Typographics
The text is as important as the graphics, if not more so. To make the text more appealing we can animate a wipe reveal, similar to the other elements of the scene, and add a drop shadow to make it pop.



NoOp Node
This is a useful node that does nothing by itself; we can use it as a custom master control that drives other nodes through expression links. For example, we can control multiple Dissolve nodes from values on the one node, without the hassle of navigating through the node network (see the sketch below).
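A minimal sketch of the idea (the node and knob names here are illustrative, not a fixed convention): rename a NoOp to MASTER, add a floating-point user knob called mix to it, and then enter this expression in the which knob of every Dissolve you want it to drive:

MASTER.mix

All of those Dissolves now follow the single slider on the NoOp.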
Particles in Nuke 23/4
Particles in Nuke can be used to add simulation-style effects like rain or fire embers; this is very useful in the late post-production stages for simple elements that do not require the FX department.



- There are lots of possibilities with particles, and Nuke provides some presets to start from; all of them can be modified through attributes like wind, turbulence, etc.


- We can use a card as the particle emitter and manipulate the position of the card to change the emission as well; the bigger the card, the bigger the emission surface.




- By default the particles output an alpha value.

- We can use a texture as a point colour while emitting the particles.
- The image itself can be used as a particle; when doing this it’s advisable to use extremely low-res images, since a full-quality image could use up all the GPU RAM.
- We can also use multiple images as particles, and 3D objects as well; animating the 3D object carries the animation into the instanced particles.
- When using 3D objects it is advisable to use low-poly geometry.



- There are particle-force nodes like drag, gravity and turbulence; we can use these to create interesting behaviours.


- The particles generate Z-depth data by default; we can use this to create camera defocus in Nuke.


- The particle emitter can also be of a geometry type.


Creating embers for fire with Particle system
- First we use a grid as the emitter geo and rotate it in random directions to get variation in the emission of the particles.
- Then we use various forces like wind, drag and turbulent noise to make the particles act more organic, matching the behaviour of real-life fire embers.
- Finally, to get a visual, we place a camera and a ScanlineRender node to render the image in the viewport.
- To make the particles disappear, we use a particle curve node to animate the alpha based on age.
- To make the particles more believable, we feed the embedded z-depth pass into the defocus node.
- Change the colour of the particles to orange with a grade.
- If there is any movement in the plate, we track it and apply the transform accordingly.
- Create a base glow for the embers and a mask confining it within the core of the fireplace, so that the embers in the centre region look more intense.
- Add the soft glow we created back onto the plate.
- Create a separate particle glow, mask the centre again, grade it with extra intensity, and merge the result into the BG plate.











- Now, to create a fake smoke field, we create noise using the FractalBlur node, tweak the parameters, shuffle out the green and blue channels, and use the green channel as the alpha channel as well.
- Animate the noise across the screen.
- Now blur the embers we created earlier extremely heavily, copy the alpha from the FractalBlur to the blurred image, and premult to get the result.
- And finally merge it onto the plate.







Copycat in Nuke 16/04
The CopyCat node is a machine-learning tool in Nuke (introduced with Nuke 13); it trains on a set of example frames and generates the in-between frames to match the given intervals. The training keeps improving as we feed it more data, and the output becomes more accurate.
Example:
We can use CopyCat to clean up a plate by training the node on a handful of cleaned-up frames and letting the machine learning create the in-between frames.




- Create the Copycat node and set the directory
- Copycat node should work on linear colorspace input.
- Adjust the epochs/steps of the calculation as the shot requires; think of epochs like sampling in 3D/MB.
- The more epochs, the longer the training takes, and the PC might be unusable during the calculation.
- In the current version of CopyCat we can pause the training process to check the results.
- In the advanced settings, a large model size can take a long time to train, so be careful with these settings.
- We can reuse any previous training data as a checkpoint to aid the machine learning.


- We can apply the same principle to many tasks, like roto.




Grain | Patch lighting | UV map
Grain
It’s important to check the grain, as it affects the final output of the image and can be picked up in QC.
Cleaned-up regions especially have tell-tale signs of manipulation in QC, so it’s important to take care of those specific regions and fix the grain in them.
Patch Changing Lighting
Wrong method
- Patching with a tracked clone-stamp approach will look fine if the lighting is static, but we will face lighting problems if the object changes position.



CC By hand
- This is also an improper way to work, since hand-animating the CC values between frames consumes a lot of time in production.
- Also, if the light changes drastically, like a sudden red or green tint, the workload increases tremendously.





Slice Tool
- We can use this to monitor the colour values along a selected line across the screen.



Frequency separation patching
- We can separate the image into high and low frequencies: the high frequency carries the small details like edges, wrinkles and pores, while the low frequency carries the general average lighting of the image.
- Blur the image to get the low frequency; for the high frequency, merge the blur with the original plate using the “from” operation.
- We then paint the high frequency like a clone stamp, which transfers the detail to the patch area, and copy the same strokes onto the low frequency, which transfers the lighting data, blending the patch seamlessly (see the breakdown below).
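In node terms the split and rebuild look roughly like this (remembering that a Merge set to “from” computes B minus A):

low  = Blur(plate)
high = Merge("from"): A = low, B = plate    // plate - low
rebuild = Merge("plus"): A = high, B = low  // returns the original plate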








Paint work divide / multiply






Interactive Light Patch












Light / Glow matching
- Sometimes a bright source can create haze or glow on the lens, washing out the elements around it, so our comped elements would look out of place. To tackle this we use the CurveTool to analyse the light source and apply that data to a grade to match the result.








Roto and Transform
- An easy way to remove trackers or clean up a plate is using a track, a roto and a transform; it’s basically the clone-stamp tool, but tracked.







UV map / ST map
In Nuke, an STmap is an image whose red and green channels store, for every pixel, the normalised “s” and “t” (essentially UV) coordinates to sample from. The STMap node uses such a map to warp and distort an image, which makes it a common tool in visual effects and compositing for lens-distortion workflows and other complex image transformations.
The expression (building a neutral, identity map):
Red Channel:
(x + 0.5) / width
Green Channel:
(y + 0.5) / height
The 0.5 offset samples pixel centres, so feeding this untouched map into an STMap node returns the image unchanged; any warp you then apply to the map carries over to the image.



Adv. mask removal | Clone and vectors
- Denoise the plate, then generate motion vectors with a SmartVector node (a good GPU helps) and write them to disk. The output may appear as a black screen, so switch the viewed layer to the smart vectors.
- Adjust the vector detail based on the shot’s needs: higher detail means slower processing, while lower detail can be used for shots with less detail. Write the adjusted result to disk, as this node can slow down the script.
- Pick a specific frame, roto-paint the markers, blur to avoid crisp edges, and frame-hold.
- Use a VectorDistort node (in warped-src mode) to carry this information to other frames by plugging the smart vectors into the painted frame.
- Use a VectorToMotion node to convert the smart vectors into motion vectors so motion blur can be applied.
- Regrain from the original footage and premult to finish the process.







Adv. mask removal | Clone and vectors | method 2
- An alternative to the above is to use an STMap; we use this because it is much faster.



Adv. mask removal | Clone and vectors | Invert STMap | method 3
- Through this method we can stabilise the whole image, apply corrections, and then invert the map so the fix tracks back onto the surface.
Adding or removing details from plate
- We can add or remove textures in a plate with an STmap.
Keying in Nuke – 13/2
- In Nuke, keying is the process of isolating specific elements, typically from a video or image, based on their color or luminance values. This is often done to remove backgrounds, separate objects, or create matte elements for compositing.
HSV
HSV Color Scale: HSV stands for Hue, Saturation, Value. The scale provides a numerical readout of your image that corresponds to the colour names contained therein. Hue is measured in degrees from 0 to 360. After converting to HSV, the channels map as follows:
R = Hue: hue literally means the colour.
G = Saturation: saturation is the amount of white light mixed with a hue.
B = Value: value describes the perceived brightness of a colour.
Hue corrections











Luminance Key
Can be used to create a mask from the luminance values of an image.
This can also be used to key individual channels like R, G and B.



A luma key can also be used in various ways, like experimenting with different colour spaces.
Here is an example of keying the red channel in various colour spaces.



Hue / Luma separator
We can separate the hue from an image by dividing the image’s colour by its luminance using the merge divide operation.



Color Difference Key
We can separate individual colours by subtracting one channel from another using the merge minus operation.










Adding Channels
We can create separate channels for specific needs; for example, we can create a roto and add it as a new channel, then use that new channel as the mask input of a grade or blur node. These channels are separate from the alpha, so they are only visible under their assigned name.




ChromaKeyer
ChromaKeyer is a basic keying node useful for quick colour keying; it is not accurate around hair or other small moving objects.
Image Based Keyer – IBK
IBK stands for Image Based Keyer. It operates with a subtractive, or difference, methodology, and is one of the best keyers in Nuke for getting detail out of fine hair and severely motion-blurred edges.
IBK Colour: we use this node to build a clean plate of the background behind the subject.
IBK Gizmo: this node takes the clean plate produced by IBK Colour and uses it to key the subject, which helps especially with hair.




KeyLight
Keylight is a great all-round keyer and also handles colour de-spill.



Primatte
Primatte is what is called a 3D keyer. It uses a special algorithm that plots the colours in a 3D colour space and creates a 3D geometric shape to select colours from that space.

Ultimatte
Its advantage is that you can get phenomenal keys with really fine detail, and pull shadows and transparency from the same image.


Green De-spill
It’s the process of removing the green scatter around the subject that occurs mainly due to the green screen.
We first key the green using Keylight; then a difference operation extracts only the removed green from the image, which we desaturate and add back to the image (see the note below).
We can use the same principle for a bluescreen.
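A common single-Expression-node variant of the same idea (not the exact Keylight setup above) limits green to the average of red and blue; in the green-channel expression field:

g > (r + b) / 2 ? (r + b) / 2 : g

Anywhere green spikes above the red/blue average, i.e. the spill, gets pulled down, while neutral pixels pass through untouched.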






De-spill map
We can create a de-spill map using the merge minus operation on two colour channels. Here we use green minus red, giving an alpha that can be used to drive the de-spill.
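The same map as a one-liner in an Expression node’s alpha field (assuming a green screen; clamp keeps the matte in the 0-1 range):

clamp(g - r)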




Edge extend
Sometimes we get black halos/edges in keyed footage; to eliminate these we use edge extend. It works by extending the colour values outward from the edge of the alpha, and we can use these values to reduce the black halo that would otherwise appear.







Fixing uneven green screen
Having an irregular green screen will cause problems down the pipeline, especially during keying, so to counter that we need to even out the green screen in comp.













Merge keys then comp over
Production Scenario
UNESCO – Collaborative unit
Modelling Hot Air Balloon
- Modeling a hot air balloon in Maya involves creating a 3D representation of the balloon’s shape and structure. Here’s how I model a hot air balloon in Maya:
Step 1: Reference Images
Gather reference images of hot air balloons from different angles. Use these images to guide your modeling process.


Step 2: Create the Balloon Shape
- Create a Sphere:
- Go to the “Create” menu.
- Choose “Polygon Primitives” > “Sphere.”
- Click and drag on the grid to create a sphere.
- Adjust the sphere’s size using the manipulator or the attribute editor.
- Shape Adjustment:
- Enter “Vertex” mode (right-click on the sphere and choose “Vertex”).
- Adjust the shape of the sphere to match the reference images. Scale and move vertices as needed.


Step 3: Add Details
- Balloon Opening:
- Select the top vertices of the sphere.
- Scale them down to create the opening of the balloon.
- Rope Attachments:
- Model small cylinders or tubes for rope attachments.
- Position them at the bottom of the balloon and scale as necessary.







Step 4: UV Mapping
- Unwrap the UVs:
- Go to the “UV Editing” workspace.
- Select the balloon object.
- Choose “Create UVs” > “Automatic Mapping” or use “Unfold” tools to unwrap the UVs.
- Adjust UVs:
- Arrange UVs in the UV editor to ensure proper texture mapping.


Step 5: Materials and Textures
- Create Materials:
- Open the Hypershade editor.
- Create a new Lambert or Blinn material for the balloon.
- Assign Textures:
- Apply textures to the material if desired.
- Use image textures for patterns, colors, or details on the balloon.




Compositing in Nuke
- Before we begin working on the footage, we check it for things like camera movements and objects in the foreground and background. This helps us plan how we’re going to tackle the shot.
Import Footage:
- Open Nuke and create a new project.
- Import the footage you want to rotoscope by using the ‘Read’ node.


- Create a Roto Node:
- Right-click in the Node Graph and select “Draw” > “Roto.”
- Connect the Roto node to the footage node.
- Rotoscope the First Frame:
- Go to the first frame of your footage.
- Use the Roto node to draw a shape around the object you want to rotoscope.
- Make sure the shape encloses the entire object you want to isolate.
- Keyframes:
- Move a few frames forward in the timeline.
- Adjust the shape of the Roto node to match the object’s movement.
- Press ‘A’ on your keyboard to set a keyframe for the current frame.


- Refine the Rotoscope:
- Continue moving forward frame by frame, adjusting the shape of the Roto node as needed.
- Track the Rotoscope:
- In Nuke, you can use the built-in tracker to automate the tracking process. Select the Roto node, go to the “Tracker” tab, and enable it.
- Adjust tracking settings, such as search area and correlation, to achieve accurate tracking.
- Fine-tune the results by manually adjusting keyframes if necessary.


- Output:
- Connect the Roto node to the desired downstream nodes for further compositing.
- You can use the roto shape as a mask for other nodes or apply color correction, effects, etc.



Mood of the Balloon Festival
- I have chosen to place the balloon festival within a romantic setting, where the color pink takes on a significant role in shaping the overall color palette.
First Iteration
- In the initial pass, I experimented by incorporating additional elements such as foreground fog, neon text, a background sky video, glitters, and added birds in flight.









- I animated certain image planes of the balloons to preview my approach for the final renders. Additionally, I utilized previous tracking data to synchronize balloon movements with the camera motion.
- To enhance the ambiance in line with my chosen setting, I applied a pinkish color grade to the overall scene.



Second Iteration
- At this point, I’ve completed the modeling and rendering of the balloon in Maya, making it ready for use in Nuke compositing. I seamlessly replaced the still images with my balloon renders, addressing any roto mismatches and incorporating general corrections suggested by my mentor.


- Additionally, I introduced hanging lights to add more depth and a festive atmosphere to the overall scene.
- The footage obtained from the internet featured blinking hanging lights, but it seemed too simplistic to me. Therefore, I opted to elevate it by eroding the alpha channel, allowing only subtle visibility of the lights. I increased the exposure to achieve higher pixel values, creating the illusion of a hot filament inside the light bulb. Finally, I applied an exponential glow and merged it with the original lights to enhance the overall appearance.
- Lastly, I performed an overall color correction to give the lights a more orange hue.





Third / Final Iteration
- For the final version, I primarily focused on implementing corrections provided by my mentor.
- At first, I decreased the speed of the moving balloon in the background and adjusted its scale as it approached from the distant background.
- In addition, I enhanced the balloon’s depth and appeal by adding a flame effect. This effect illuminates the insides of the balloon using a noise pattern and roto mask in Nuke.







- For the next part, I modified the text as I was dissatisfied with its appearance. I changed both the font and the color for a more aesthetically pleasing result.
- To make the text move, I used a sine-function expression on the vertical (y-axis) translation (see the expression sketch after this list).
- To make the text exciting, I introduced a glitter effect using hearts as the bokeh shape. I achieved this by eroding the alpha of the image to reveal only certain areas, applying a noise pattern as a mask, and using a convolve with a heart roto to get the heart bokeh.
- I used the same effect for the hanging lights as well.
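A typical expression of this kind on the text’s translate-y knob (the numbers are illustrative, not my exact values):

sin(frame / 8) * 20

This bobs the text up and down by 20 pixels, completing a cycle roughly every 50 frames.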









- Finally merged all the elements together.
- In Maya with the Arnold renderer, lighting, materials, and render passes are fundamental components of the rendering process, contributing to the creation of visually compelling and realistic images.
Lighting in Maya Arnold:
- Arnold Lights:
- Arnold supports various light types, including point lights, spotlights, area lights, and distant lights.
- Lights contribute to the illumination of the scene, influencing the appearance of surfaces and shadows.
- Light Parameters:
- Lights have parameters that control intensity, color, falloff, and other characteristics that impact how they interact with the scene.
- HDR Lighting:
- Arnold supports High Dynamic Range (HDR) images as light sources, allowing for realistic and complex lighting scenarios.
- Physical Sky and Sun:
- Maya Arnold includes a physical sky and sun system for simulating realistic outdoor lighting conditions.







Materials in Arnold:
- Arnold Standard Surface Shader:
- The Arnold Standard Surface shader is a versatile material shader that supports a wide range of realistic surface properties.
- It includes controls for base color, specular reflection, roughness, and other parameters.
- Texture Mapping:
- Maya Arnold allows you to apply texture maps to materials, enhancing the realism of surfaces by incorporating details like color, bump, and specular maps.
- Material Library:
- Arnold provides a material library with pre-built shaders and textures, making it easier to create realistic materials.
- Subsurface Scattering (SSS):
- Arnold supports subsurface scattering, allowing you to simulate the way light penetrates and scatters beneath the surface of translucent materials.







Render Passes in Arnold:
- AOVs (Arbitrary Output Variables):
- Arnold allows you to render additional passes beyond the beauty pass, known as Arbitrary Output Variables (AOVs).
- Common AOVs include diffuse, specular, reflection, and ambient occlusion passes.
- Compositing Workflow:
- Render passes enable a more flexible compositing workflow. They allow artists to adjust and enhance specific aspects of the image in post-production.
- Denoising Passes:
- Arnold provides denoising passes that can be used in compositing to reduce noise in the final image.
- Cryptomatte:
- Cryptomatte is a popular AOV that simplifies object selection in post-production by generating ID mattes automatically.


Camera Placement:
- Camera Settings:
- Set up your camera with the desired composition, focal length, and depth of field.
- Adjust camera settings in the Attribute Editor.
Arnold Render Settings:
- Open Render Settings:
- Navigate to the Arnold Render Settings in the Render Settings window.
- Common Tab:
- Set the image size, aspect ratio, and frame range in the Common tab.
- Arnold Renderer Tab:
- Choose “Arnold” as the Renderer.
- Adjust settings like the AOVs, Ray Depth, and Sampling.




Render Preview:
- Render View:
- Use the Render View window to preview your scene’s rendering without saving an image.
Render the Scene:
- Render Button:
- Click the Render button in the Render Settings window to start the rendering process.
- Watch Progress:
- Monitor the rendering progress in the Rendering menu or the Script Editor.
Save Rendered Image:
- Image Format:
- Choose an image format (e.g., JPEG, PNG, EXR) for the final rendered image.
- Save Image:
- Save the rendered image to your desired location.
Post-Processing:
- Compositing Software:
- Import the rendered image into compositing software (e.g., Nuke, Adobe After Effects) for further adjustments if needed.


Nuke’s 3D System 28/11
- Nuke primarily operates as a 2D compositing software, but it does have some 3D capabilities. The 3D system in Nuke allows you to work with three-dimensional elements within a 2D compositing environment.
- 3D Space:
- Camera Nodes: Nuke supports the use of virtual cameras, allowing you to create a 3D space and move the camera within it. This is useful for matching the movement of live-action footage or creating parallax effects.


- Geometry and Objects:
- Card Nodes: You can use card nodes to represent flat or simple 3D objects within the 3D space. These cards can be textured with images or sequences, allowing you to integrate 2D and 3D elements seamlessly.
- ScanlineRender Node: This node is used to render 3D scenes within Nuke, taking into account lighting, shadows, and reflections.




- 3D Rendering:
- Nuke’s 3D system provides basic rendering capabilities for simple scenes. It supports features like ambient occlusion, shadows, and reflections.
- Shading and Lighting:
- Nuke includes nodes for basic shading and lighting, allowing you to control the appearance of 3D objects in your composition.
- Scene Integration:
- You can integrate 3D elements into live-action footage, matching the camera movement for a more realistic composite.
- Expression Linking:
- You can use expressions to link 2D and 3D properties, allowing for dynamic relationships between elements in different dimensions.


- Nuke can be customised in many ways through the Preferences; for example, we can change the 3D navigation method to emulate other 3D software.
- We can customise the Nuke UI and save the changes under a name as a workspace, so when opening Nuke we can load our preferred workspace.
- We can also create toolsets to save some time.
- All the saved toolsets, workspaces and preferences live in the .nuke folder in the user’s home directory.





Nuke Camera
Nuke supports the use of virtual cameras, allowing you to create a 3D space and move the camera within it. This is useful for matching the movement of live-action footage or creating parallax effects.
- Create a Camera Node:
- In the Node Graph, press Tab to open the node creation panel.
- Type “Camera” and select the “Camera” node.
- Import Camera Data:
- If you have camera tracking data from external software (e.g., PFTrack, SynthEyes), import it into Nuke, for example via the Camera node’s file import (FBX/Alembic).
- Adjust Camera Settings:
- Open the Camera node properties by double-clicking on it.
- Set the film back, focal length, and other parameters to match the real camera used during filming.
- Create 3D Objects: Use Card nodes or other geometry nodes to represent objects in the 3D space.
- Connect them to the ScanlineRender node for rendering.
- Animate the Camera:
- Keyframe the camera’s translation, rotation, and focal length to match the movement in the live-action footage.
- You can use keyframes or expressions to link camera properties to tracking data.
- Camera Projection:
- Use the Project3D node to project 2D images onto 3D geometry based on the camera’s perspective.
Scanline Render
The ScanlineRender node in Nuke is used for rendering 3D scenes within the compositing environment. It simulates a simplified rendering process, taking into account the lighting, shading, and textures of 3D objects in a scene.
Node Properties:
- Render Settings:
- In the ScanlineRender node properties, you can find settings for rendering quality, anti-aliasing, and other parameters.
- Shading Model:
- Choose the shading model (e.g., Lambert, Phong) that best suits your scene and desired look.
- Background:
- Specify the background color or connect another image node to the “Background” input for a more complex background.
- Outputs:
- The ScanlineRender node typically has outputs for the rendered image, depth information, and other auxiliary data.
Lens Distortion
Lens distortion refers to the imperfections introduced by camera lenses that can cause straight lines to appear curved or distorted. In visual effects and compositing, correcting lens distortion is crucial for seamlessly integrating elements into live-action footage. Nuke provides tools to analyze and correct lens distortion.
- Understanding Lens Distortion:
- Radial Distortion: Causes straight lines to curve, more pronounced at the frame edges.
- Tangential Distortion: Shifts the image along the horizontal and vertical axes.
- LensDistortion Node:
- Analysis: Use the LensDistortion node to estimate distortion parameters from a grid pattern.
- Correction: Apply obtained parameters for distortion correction.
- Undistort and Distort Nodes:
- Undistort: Use the Undistort node to remove lens distortion.
- Distort: The Distort node reintroduces lens distortion, e.g., for 3D integration.
- LensDistortion Model:
- Model Options: Choose a lens distortion model (e.g., “Nuke,” “Brown,” “Houdini”).
- Parameters (K1, K2, P1, P2): Define distortion correction amount and type.
- Fine-Tuning:
- Grid Warp: Manually adjust correction with a grid warp in the LensDistortion node.
- LensDistortionCorrect Node: Use for advanced correction with extra controls.
- Animation:
- Keyframe Parameters: Adjust distortion parameters for changing distortion over time.
- Checkerboard Patterns:
- Calibration Aid: Filming a checkerboard pattern aids in accurate distortion analysis.






STmap
An STmap in Nuke stores, per pixel, the normalised coordinates the image should be sampled from, so it can represent distortion introduced by factors such as the lens, and it is used to correct that distortion. Because a fresh map can be generated for every frame, an STmap sequence can also capture distortion that varies across different areas of the image and evolves over time.
- Understanding STmap:
- Spatial and temporal distortion: the map encodes where each pixel samples from; a per-frame sequence of maps captures distortion that changes over time.
- LensDistortion Node and STmap:
- LensDistortion Node: in Nuke, the LensDistortion node analyses footage and can bake the measured distortion (or its inverse) out as an STmap.
- STmap Output: the resulting map encapsulates the distortion in the footage.
- Usage of STmap:
- Applying the map: feed the map and the image into an STMap node to undistort or redistort.
- Creation of STmap:
- Calibration Grid: shooting a grid provides reference points for the distortion analysis that generates the STmap.
- Analysis: the LensDistortion node analyses the grid to create the corresponding STmap.
- Application to Animation:
- Changing Distortion Over Time: for evolving lens distortion, animate the distortion parameters or use a sequence of STmaps.
- Manual Adjustments:
- GridWarp and STmap: the GridWarp node, combined with an STmap, allows manual adjustments, helpful when automatic analysis falls short.



