Concatenation is Nuke's ability to combine several tools in the Transform family into one single mathematical calculation. Performing one calculation (and therefore one filter pass) instead of many lets us retain as much detail as possible.

– Do not put a Grade/ColorCorrect node between Transform nodes, as it breaks concatenation and degrades the footage (makes it blurrier).
– The only filter that matters in a chain of Transform nodes is the filter at the end of the chain.
– The only motion blur that matters in a chain of Transform nodes is the motion blur at the end of the chain.
– Use Clone (Alt+K) if you want to adjust multiple nodes at the same time with the same values → saves time and avoids having to change nodes individually.
– Most of the time you'll use Cubic as the filter.
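Why one filter pass preserves detail can be sketched as plain matrix math (an illustrative Python sketch, not Nuke's API): chained affine transforms compose into a single matrix, so the image only needs to be resampled once.

```python
# Illustrative sketch: two chained 2D transforms composed into ONE matrix,
# so filtering (resampling) would only happen once instead of twice.

def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def scale(s):
    return [[s, 0, 0], [0, s, 0], [0, 0, 1]]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

# e.g. scale by 0.5, then translate by (100, 50), concatenated:
combined = mat_mul(translate(100, 50), scale(0.5))

# A point at (200, 200) lands at (200*0.5 + 100, 200*0.5 + 50):
x, y = 200, 200
px = combined[0][0] * x + combined[0][1] * y + combined[0][2]
py = combined[1][0] * x + combined[1][1] * y + combined[1][2]
print(px, py)  # 200.0 150.0
```

The same point run through two separate resampled transforms would be filtered twice, which is where the softness comes from.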

Correct Merging procedure

Bounding box (bbox) = the area from which Nuke will read and render information.
Plate > press S = project settings.

The bounding box is represented by the dotted frame in the viewer.

3 ways to match bounding box:

  1. Use a Crop node after the Transform node if your bounding box exceeds the root resolution, so Nuke doesn't read information outside the format.
  2. Use a Merge node → change "Set BBox to" from union (the default) to B.
  3. If using a Roto node → copy alpha → alpha → Premult = it will automatically match the bbox to the roto shape.
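A rough sketch of the union-vs-B behaviour from option 2 above, using an assumed (x, y, right, top) tuple representation of a bbox:

```python
# Sketch (assumed representation): a bbox as an (x, y, right, top) tuple.

def bbox_union(a, b):
    """Smallest bbox covering both inputs."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def merge_bbox(a_bbox, b_bbox, mode="union"):
    # "union" (the default) grows to cover both inputs;
    # "B" keeps only the B input's bbox, like option 2 above.
    return bbox_union(a_bbox, b_bbox) if mode == "union" else b_bbox

fg = (500, 500, 3000, 2000)   # A input: element with a bbox larger than the plate
plate = (0, 0, 1920, 1080)    # B input: the root format

print(merge_bbox(fg, plate, "union"))  # (0, 0, 3000, 2000)
print(merge_bbox(fg, plate, "B"))      # (0, 0, 1920, 1080)
```

With "B", everything outside the plate's format is simply never read, which is the point of matching the bbox.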

Working with 3D

  • Press Tab in the viewer to switch between the viewport and the camera view
  • Hold Ctrl + left-click to orbit
  • Use a Reformat node to limit the bbox to the size of the original 3D elements in the scene

Shuffle node

The Shuffle node maps input layers to output layers.
Shuffle creates new channels for output and reads different layers from many inputs — both internal layers like the ones we create, and multi-channel EXR renders, where a single image holds multiple LAYERS and multiple CHANNELS.
→ useful for manipulating the rgba channels + depth
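The layer-to-channel remapping can be illustrated with a toy model (the dict-of-named-channels representation here is an assumption for illustration, not Nuke's internal format):

```python
# Toy model of a Shuffle-style remap: one pixel stored as a dict of
# "layer.channel" names, e.g. copying depth.Z into the output alpha.

pixel = {"rgba.red": 0.6, "rgba.green": 0.4, "rgba.blue": 0.2,
         "rgba.alpha": 1.0, "depth.Z": 0.25}

def shuffle(pixel, mapping):
    """mapping: output channel -> input channel it should read from."""
    return {out: pixel[src] for out, src in mapping.items()}

out = shuffle(pixel, {"rgba.red": "rgba.red",
                      "rgba.green": "rgba.green",
                      "rgba.blue": "rgba.blue",
                      "rgba.alpha": "depth.Z"})  # depth driven alpha
print(out["rgba.alpha"])  # 0.25
```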

PLANAR TRACKER (perspective shift / 2.5 tracking)

The PlanarTracker is good for perspective shifts, but not for spatial movement.

If there's an object in front: in the top viewer dropdown → Add New Track Layer, roto the object, and remember to put its folder ON TOP of the roto folder of the object behind. This tells Nuke to exclude the foreground object from the track.
– absolute: shrinks the source image
– relative: shrinks the source image less

Zdefocus

Zdefocus node uses a focal point to generate bokeh, creating more of a film look than the Blur node.
Cons:
– Elements can look a bit fake to the eye
– NukeX 14 adds a Bokeh node that uses the camera's information, so its results are more accurate to the eye

Use the focal_point control to pick the subject/area to focus on.
Convolve > Switch > any shape (Roto) or Text can be used as the bokeh shape/blades.

Homework

For this week's assignment, I needed to use the PlanarTracker node to replace the two posters in a scene with a perspective shift. At first I didn't understand why the CornerPin and the posters' corners were not matching; then I learned that I have to click on the Premult node first before adding the CornerPin, so it is generated from the poster itself. Then I rotoed the pole and added it back on top of the poster.

As a final touch, I attempted to replace the wall with a brick wall texture using what I have learned about planar tracking. I tracked the frames/windows around the poster, then tracked the wall using the same technique as the posters' replacement. The result turned out quite nice, as the color matches the whole scene, but I think the image texture is still too sharp and not perfectly blended with the wall, so it looks just a bit off.

A major problem I had was not really understanding how to stabilize the brick wall texture so it doesn't move with the camera shift. I tried stabilizing from Tracking, and also followed my teacher's advice to track the wall using a smaller area instead of tracking the whole wall like I did at first, yet it's still not working at all. I might have to revisit this project to figure this out.

Node setup

Creating a Cine Camera in Unreal Engine

Unreal Engine offers three methods to create a Cine Camera:

  1. Right-click in the viewport > Place Actor > Cine Camera Actor
  2. Use the ‘Quickly add to the project’ button next to the Mode option, then select Cinematic > Cine Camera Actor
  3. Click on the hamburger menu, choose Create Camera Here > Cine Camera Actor

Camera Types

  • Camera Rig Rail: This mimics a real-world rail system, enabling the attachment and animation of the camera along a predefined path.
  • Camera Rig Crane: Similar to a crane in real life, this allows for the attachment and animation of the camera with crane-like movements.
  • Cine Camera Actor: This camera type provides detailed options for Filmback, Lens, and Focus, aligning with industry standards to create realistic scenes.

Features of Cine Camera Actor

  1. Piloting: Navigate the scene effortlessly by switching the view to a specific camera. Change perspectives by selecting ‘Perspective’ in the viewport, choosing the camera, or right-clicking in the viewport and opting for ‘Pilot’ followed by the camera’s name.
  2. Picture-in-Picture Display: The ‘Preview Selected Cameras’ option in Editor Preferences can be toggled on or off, allowing you to preview a camera by selecting it in the Outliner. This feature is enabled by default.
  3. Look at: Focus the camera on a specific object. Set this by adding an actor for the camera to track, then in Lookat tracking settings, select ‘Actor to track’, pick your desired actor, and turn on “Enable Look at Tracking.”

To change the viewport layout, click on the hamburger menu, go to Layouts, and choose your preferred layout division. Maximize the current view with F11.

Incorporating Cine Cameras with Post Process Volume is vital for configuring Depth of Field (DOF) and Exposure. These settings are accessible in both Cine Cameras and PPV, with PPV offering global adjustments.

To view DOF in the viewport, navigate to Show > Visualize > Depth of Field Layers.

Post Process Volume (PPV) – Exposure

Local Exposure: Useful for consistent imagery when detailed scene lighting is impractical. Always set this up with Lumen Global Illumination.

Camera Setup Workflow:

  1. Define Filmback (scene size).
  2. Adjust Depth of Field (DOF): Set aperture, focal length, focus distance. (Example: Narrow DOF used)
  3. Adjust exposure: Set shutter speed, ISO.
  4. Create and adjust the Exposure in Post Process Volume (PPV).
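Steps 2 and 3 above are tied together by the standard photographic exposure-value formula. This is a hedged sketch of the usual EV100 definition; UE's Manual metering with ‘Apply Physical Camera Exposure’ is based on this kind of model:

```python
import math

def ev100(aperture_f, shutter_seconds, iso):
    """Standard photographic exposure value referenced to ISO 100:
    EV100 = log2(N^2 / t) - log2(ISO / 100),
    where N is the f-number and t the shutter time in seconds."""
    return math.log2(aperture_f ** 2 / shutter_seconds) - math.log2(iso / 100)

# Example settings: f/2.8, 1/60 s, ISO 100:
print(round(ev100(2.8, 1 / 60, 100), 2))  # 8.88
```

Raising ISO or opening the aperture lowers the EV (brighter image), which is why the same look can be reached with several different setting combinations.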

Important project settings for Lumen

Light types: Point Light, Spot Light, Rect Light

Env. Light Mixer – Create lighting from scratch:

  • Create:
    • Sky Light
    • Atmospheric Light
    • Sky Atmosphere
    • Volumetric Cloud
    • Height Fog

Other elements to add to create realistic sky/world/lighting:

  • Volumetric Cloud: Uses a material-driven method to create lifelike clouds, offering versatility in cloud types and enhancing the sky’s realism.
  • Exponential Height Fog: Adds atmospheric fog that varies in density with altitude, providing a smooth transition and allowing for two different fog colors for environmental tuning.
  • HDRI Map: Uses an environmental texture to provide accurate background scenery, natural reflections, and contributes to the overall illumination of the scene.

Things to keep in mind when dealing with indirect lighting:

  • Base/albedo color: the material or color of your objects matters, as light bounces off them. If lighting in your scene seems too dark/bright, consider tweaking the color of your material.
  • In the real world nothing is 100% black or white. Darkest black: ~0.04; brightest white: ~0.9; middle grey: 0.18.
  • Use chrome ball to visualize lighting & reflection

To turn off Auto Exposure:

  • Add PostProcessVolume to scene
  • Infinite Extent (Unbound) ✅
  • Metering Mode: Manual
  • Apply Physical Camera Exposure ✅
  • Exposure Compensation: USE THIS to control light (without having to manipulate light in your scene)

Experimentation

After several attempts at creating beautiful skies and environment lighting in UE5, I started to get the hang of working with Lumen. There are still a lot of settings that confuse me sometimes, yet I believe lighting is an essential aspect in deciding the quality and mood of your scene.


The Transform node deals with translation, rotation, and scale, as well as tracking, warping, and motion blur. Sometimes you want to animate these values using just one Transform node, but sometimes it's better to use separate Rotate or Scale nodes to understand the process better.

Using separate Rotate and Scale nodes for individual operations

2D tracker

The Tracker node: a container of pixels in x and y.
– Allows you to extract animation data from the position, rotation, and size of an image.
– Using expressions, you can apply the data directly to transform and match-move another element.
– To stabilize the image, you can invert the values of the data and apply them to the original element.
– We can also generate several transform nodes from the main Tracker node to automatically stabilize the scene, match the movement, and either reduce or add shakiness.
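The stabilize/match-move relationship above can be sketched in a few lines (toy per-frame data; real Tracker output also carries rotation and scale):

```python
# Sketch: stabilizing = applying the INVERSE of the tracked motion.
# Assumed data: per-frame (x, y) positions from a single 2D track point.
track = [(100.0, 200.0), (103.0, 198.0), (108.0, 195.0)]

ref_x, ref_y = track[0]  # reference frame

# Match-move offsets: move an element WITH the plate.
matchmove = [(x - ref_x, y - ref_y) for x, y in track]

# Stabilize offsets: the inverted values cancel the plate's motion.
stabilize = [(-dx, -dy) for dx, dy in matchmove]

print(matchmove)  # [(0.0, 0.0), (3.0, -2.0), (8.0, -5.0)]
print(stabilize)
```

Applying the stabilize offsets and then the match-move offsets round-trips back to the original motion, which is exactly the stabilize-then-matchmove sandwich used when comping onto a moving plate.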

General process for tracking an image:

1. Connect a Tracker node to the image you want to track.
2. Use auto-tracking for simple tracks or place tracking anchors on features at keyframes in the image.
3. Calculate the tracking data.
4. Choose the tracking operation you want to perform: stabilize, match-move, etc.

2D, 2.5 & 3D tracking


– 2D track: x & y.
– 2.5D: still x & y, but with 4 points to mimic a sense of perspective. Use the PlanarTracker for this.
– 3D: x, y & z.


– Inner tracking square: tracks the main shape/point.
– Outer tracking square: searches for movement around the inner square in order to track it.

Pre-tracking treating:

Sometimes we should treat the original plate to obtain better tracks, e.g. if the scene is too noisy or grainy. In that case, we use a Denoise node to reduce the noise or grain, helping the tracker read the changes between frames better. We can also use tools like Laplacian, Median, or a contrast-boosting Grade to fix the grain issue.

  1. Denoise the plate (Denoise node – Median node).
  2. Increase contrast with a Grade node.
  3. A Laplacian node can help in certain cases to lock better tracks.
Denoise footage to improve tracking
Stabilize operation and compensation using Transform nodes
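The effect of a median-style denoise on a grain spike can be sketched in plain Python (a 1-D toy, not Nuke's 2-D Median node):

```python
import statistics

def median_filter(signal, radius=1):
    """Simple 1-D median filter: replaces each sample with the median of
    its neighborhood, suppressing single-sample noise spikes while keeping
    edges sharper than an averaging blur would."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(statistics.median(signal[lo:hi]))
    return out

# A flat signal with one grain/noise spike at index 2:
noisy = [0.5, 0.5, 0.9, 0.5, 0.5]
print(median_filter(noisy))  # [0.5, 0.5, 0.5, 0.5, 0.5]
```

The spike disappears entirely, which is why a Median/Denoise pass gives the tracker a much more stable pattern to lock onto.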

It’s always important to use a Quality Control (QC) backdrop to make sure the tracking and any added rotoscoping is done right.

Homework assignment:

I attempted this assignment twice, as the first time I was really confused by the process and messed up the nodes. Doing it a second time made me realize that I first need to track the 4 points on the phone to create a Transform_stabilize node. This comes before the first Merge operation, followed by a Transform_matchmove from the same track. Doing this ensures that the phone mockup is merged correctly with the tracked points.

I was not satisfied with the roto of the fingers at first because of the green spill. We haven't covered green-spill removal in this node setup yet, but I managed to compensate for it by using a FilterErode node with a slight blur on the edges to make the roto less obvious.

I also used an Erode node on the phone mockup, as it didn't fully cover the green screen at the top of the phone no matter how accurately I tried to adjust the CornerPin2D.

Before
After
Final node set up


Homework feedback:
– My final work is good, but personally I was not satisfied with the roto of the finger as it is still wobbly; I want to learn how to roto better in the future.
– I learned that I could have used the Curve Editor to smooth out my animation, taking curves from linear to flowing.
– x → f → press H on the in & out points (for easy in & out)
– y → f → press H on the in & out points to smooth the animation or move the curve

This week I tackled Landscapes and Materials in Unreal Engine 5. In terms of 3D I have always focused more on character design, yet creating real-time landscapes with great detail is something I have always wanted to implement in my work, as I believe it would be a great storytelling enhancement.

Resources:
Unreal Engine 5 Beginner Tutorial for Film: Landscape and Materials
How I Quickly Create 3D Environments in Unreal Engine 5 | FULL WORKFLOW
Landscape Basics Tutorial for Beginners in Unreal Engine 5.2

Creating landscape
1. Establishing scale: make sure your landscape is scaled correctly by adding a mannequin.
– Content Browser → Add → Add Feature or Content Pack → Third Person
– Mannequin → Character → Mesh → SK_Mannequin

2. Landscape mode: here you can create a landscape by manually painting/sculpting on the plane OR by using height maps.
– Settings: Sections Per Component = subdivides each square/section for a higher-resolution landscape.
– Drag a downloaded surface material into the Landscape Material slot.

** Be aware of tiling: you can compensate for it by modifying tiling X & Y in the material editor.

Before
After


After trying out different surface textures from Quixel Bridge, the landscape still didn't look quite realistic to me when using just one texture for the whole landscape. I eventually learned how to create different material layers so I could paint onto my own landscape, using these tutorials.

From 2:10:00

How to create different landscape material layers:

  • In the Content Browser, create a material folder. Right-click > New Material, and name it Landscape.
  • Open the material and add a LandscapeLayerBlend node (this tells UE5 that this is a landscape material).
  • In the left panel, click + to add new landscape layers. Name them accordingly.
  • Import your textures into UE5.
  • Add a MakeMaterialAttributes node; connect Base Color (RGB), Normal (RGB), and Roughness (G) to it.
  • Add TextureCoordinate, Multiply, and ScalarParameter nodes (for tiling) and connect them to the textures' UVs.
  • Right-click on the material > Create Material Instance, and drag this into the Landscape Material slot. If you open it now, you can modify the tiling parameter added beforehand.
  • Repeat accordingly to create more material layers.
Tip: Hold alt + click on lines to break connection
Ctrl + W = Duplicate a node

Example of how to create and organize material layers.
I tried to mock up my scene in Blender; it obviously did not go as well as in UE5, since I used a hair particle system to create the grass and it was VERY heavy.
experimenting with painting and sculpting Landscape in UE5


Premult = multiplies the input's rgb channels by its alpha.

Keymix = similar to Merge, but accepts unpremultiplied assets. Often used for merging masks.

Uses of the Premult node:
– Merging unpremultiplied images = to avoid unwanted artifacts (fringing around masked objects).
– Color correcting premultiplied images: Unpremult → color correction → Premult.

Unpremult = divides the input's rgb channels by its alpha.

Colour correcting premultiplied images:

When you colour correct a premultiplied image, you should first connect an Unpremult node to the image to turn the image into an unpremultiplied one.
Then, perform the colour correction. Finally, add a Premult node to return the image to its original premultiplied state for Merge operations. Typically, most 3D rendered images are premultiplied.
** If the background is black or even just very dark, the image may be premultiplied.
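The Unpremult → correct → Premult round trip can be checked with simple per-pixel math (an illustrative sketch; the channel values are made up):

```python
# Per-pixel premultiplication math, on a single RGB pixel with alpha.

def premult(rgb, a):
    """Multiply rgb by alpha (Premult node)."""
    return tuple(c * a for c in rgb)

def unpremult(rgb, a):
    """Divide rgb by alpha (Unpremult node); guard against alpha 0."""
    return tuple(c / a for c in rgb) if a > 0 else rgb

def gain(rgb, g):
    """Stand-in for a colour correction: a simple multiply."""
    return tuple(c * g for c in rgb)

# A premultiplied pixel: straight colour (0.8, 0.4, 0.2) at 50% alpha.
a = 0.5
pix = premult((0.8, 0.4, 0.2), a)  # (0.4, 0.2, 0.1)

# Grading the premultiplied pixel directly would bake the gain into the
# semi-transparent edge (fringing). The correct round trip:
corrected = premult(gain(unpremult(pix, a), 2.0), a)
print(corrected)  # (0.8, 0.4, 0.2)
```

Dividing by alpha first means the correction acts on the true colour, and re-multiplying restores a pixel that merges cleanly.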

Merging Operation Examples:

Merge (over)
Merge (mask)
Merge (average)
Merge (overlay)
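For reference, the math behind the default "over" operation on premultiplied pixels, as a single-pixel sketch:

```python
def merge_over(A, B):
    """The 'over' operation on premultiplied RGBA pixels:
    out = A + B * (1 - A.alpha)."""
    ar, ag, ab, aa = A
    br, bg, bb, ba = B
    k = 1 - aa
    return (ar + br * k, ag + bg * k, ab + bb * k, aa + ba * k)

fg = (0.5, 0.25, 0.0, 0.5)  # premultiplied foreground, 50% alpha
bg = (0.2, 0.2, 0.2, 1.0)   # opaque background
print(merge_over(fg, bg))   # ≈ (0.6, 0.35, 0.1, 1.0)
```

Because the formula assumes A is already premultiplied, feeding it an unpremultiplied A is exactly what causes the fringing mentioned above.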

Reformat = lets you resize and reposition your image sequences to a different format (width and height).
– Allows you to use plates of varying image resolution on a single script without running into issues when combining them.
– All scripts should include Reformat nodes after each Read node to specify the output resolution of the images in the script.

Colorspace and Linearization
– Colorspace defines how the footage was captured or exported. Most files are non-linear, and knowing the correct colorspace is critical for proper linearization.
– Linearization is the process of converting footage into linear space. All the tools inside Nuke are built around linear math, and linearizing also allows mixing of media types. We need to know the file's colorspace before starting to work on it.
– You can work in LOG/RAW or Linear.
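Linearizing an sRGB-encoded value can be sketched with the standard sRGB transfer function (this curve applies to sRGB specifically; log formats use different curves):

```python
def srgb_to_linear(v):
    """Standard sRGB electro-optical transfer function, per channel in 0-1."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Mid grey in sRGB encoding is much darker in linear light:
print(round(srgb_to_linear(0.5), 4))  # 0.214
print(round(srgb_to_linear(1.0), 4))  # 1.0
```

This is why footage read with the wrong colorspace looks washed out or crushed once Nuke's linear math starts operating on it.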

LUTs, CDLs, and Grades
· LUTs can be used creatively or technically, i.e. converting from log to lin, or adding a "look".
· CDLs are creative, i.e. adding a "look" to a clip.
· Graded footage means footage colored to its "final" look.

For color correction we always want to think in terms of:
– Highlights
– Midtones
– Shadows
Two commonly used nodes are Grade & ColorCorrect.

Both give us the chance to grade the H, M, and S of a shot.

– To grade highlights we use either GAIN or MULTIPLY.
– To grade shadows we use LIFT.
– To grade midtones we use GAMMA.
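A simplified model of how lift, gamma, and gain divide up the range (an assumed formula, close in spirit to Nuke's Grade with blackpoint 0 and whitepoint 1, not its exact implementation):

```python
def grade(v, lift=0.0, gamma=1.0, gain=1.0):
    """Assumed simplified grade: remap 0 -> lift and 1 -> gain,
    then bend the midtones with gamma (endpoints stay put)."""
    v = lift + v * (gain - lift)
    return max(v, 0.0) ** (1.0 / gamma)

print(grade(0.0, lift=0.1), grade(1.0, lift=0.1))  # 0.1 1.0  lift moves shadows only
print(grade(0.0, gain=1.5), grade(1.0, gain=1.5))  # 0.0 1.5  gain moves highlights
print(round(grade(0.5, gamma=2.0), 3))             # 0.707    gamma bends midtones
```

Each control pins the opposite end of the range, which is why they can be adjusted fairly independently.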

How to match color using a Grade node (matching constant A to constant B):
– Add a Grade node (G). Pick constant A's color as the WHITEPOINT by selecting the eyedropper → Ctrl+Shift-click on constant A's color.
– Pick constant B's color as the GAIN by selecting the eyedropper → Ctrl+Shift-click on constant B.
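The math behind that match can be sketched as follows (assuming a simplified Grade with blackpoint and lift at 0, where out = in × gain / whitepoint per channel):

```python
# Sketch of the whitepoint/gain match: with blackpoint and lift at 0,
# a pixel equal to the whitepoint is mapped exactly onto the gain colour.

def match_grade(pixel, whitepoint, gain):
    return tuple(p * g / w for p, g, w in zip(pixel, gain, whitepoint))

constant_a = (0.8, 0.6, 0.4)  # source colour, picked as whitepoint
constant_b = (0.4, 0.5, 0.6)  # target colour, picked as gain

# Applying the grade to constant A reproduces constant B:
print(match_grade(constant_a, whitepoint=constant_a, gain=constant_b))  # ≈ (0.4, 0.5, 0.6)
```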

2 examples of basic grade matches using merge node
Match color

Note: cloning a node keeps the same values/settings across the clones (signified by a "C" on top of the node).

Primary color correction

Secondary color correction

QC (quality control)