3D classic vs 3D beta

  • The classic 3D system still works with FBX or Alembic geometry. Its nodes are green by default.
  • The new (beta) 3D system is built around the USD workflow. Its nodes are red by default.
  • You can't mix nodes from the classic 3D system and the new 3D system, with a few notable exceptions such as CameraTracker and DepthGenerator.
  • The new red system is still quite unstable, so we should use the classic system for now.

3D navigation

Tab in Viewer – switch between 2D and 3D
Middle mouse – pan
Alt + right click – orbit
Scroll – zoom

Highlight your node setup > ToolSets > Create to save your particular setup as a reusable toolset

Scene Node: used to manage and assemble 3D geometry, lights, and cameras. It acts as a container/collector for all your 3D elements. When you connect multiple 3D objects to a Scene node, you can manipulate them as a single unit, which simplifies working with complex 3D composites.

Camera Node: In Nuke, the Camera node is a virtual representation of a real or imaginary camera. It’s used in 3D compositing to define the perspective and projection of a scene. You can adjust parameters like focal length, aperture, and field of view. This node is crucial for match-moving and integrating 3D elements into 2D footage, as it helps in replicating the original camera movements and settings.

ScanlineRender Node: a rendering node used to render the 3D geometry in a scene. It works by converting the 3D data into a 2D image, using the settings from the Camera node.
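To make the relationship between these three nodes concrete, here is a minimal sketch of a classic 3D setup built with Nuke's Python API, run from the Script Editor. Node class names (e.g. Camera2) and the ScanlineRender input indices vary between Nuke versions, so treat those as assumptions and check them against your node graph:

```python
import nuke

# A simple piece of classic 3D geometry: a Card textured with a checkerboard.
checker = nuke.nodes.CheckerBoard2()
card = nuke.nodes.Card2(inputs=[checker])

# The Scene node collects all 3D elements so they can be handled as one unit.
scene = nuke.nodes.Scene(inputs=[card])

# The Camera node defines the perspective the scene is rendered from.
camera = nuke.nodes.Camera2()
camera['focal'].setValue(35)              # focal length in mm
camera['translate'].setValue([0, 0, 5])   # pull the camera back from the card

# ScanlineRender converts the 3D scene into a 2D image using the camera.
render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)    # obj/scn input (index assumed from the node's arrows)
render.setInput(2, camera)   # cam input (index assumed)
```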

To view from the camera's POV, lock the 3D Viewer to the camera using the camera dropdown in the Viewer.
Example of 3D set up
Example of complex 3D setup

Lens Distortion

Resources:
Working with Lens Distortion: https://learn.foundry.com/nuke/content/comp_environment/lens_distortion/adding_removing_lens_distortion.html

Lens distortion refers to the way a camera lens can warp or distort the image it captures. This isn't necessarily a flaw; in fact, it's often a characteristic of the lens design. There are two main types of lens distortion: barrel distortion and pincushion distortion.

  1. Barrel Distortion: This occurs mostly in wide-angle lenses. It makes the image appear bulged outwards from the center. Think of how a fisheye lens makes things look curved and expansive. It’s often used for artistic effect or to create a sense of immersion.
  2. Pincushion Distortion: The opposite of barrel distortion, this happens more in telephoto lenses. The image appears pinched at the center, making the edges seem to bow inwards. It’s less common in everyday photography and filmmaking.

LensDistortion Node: This node is used for correcting or applying lens distortion. It allows you to analyze an image for any barrel or pincushion distortion and correct it, or you can intentionally add distortion to match CG elements with live-action footage.

Example of a wide-angle lens (24mm) – straight lines tend to be bent
Only available in NukeX

It's important to use LensDistortion (Undistort) before doing any match-move or tracking, so you work with correct information; then use LensDistortion (Redistort) afterward to return the plate to its original format.

LensDistortion > Analysis > Detect or Draw (Straight Lines) > Solve
Draw horizontal and vertical lines to give the solver as much information as possible, especially around the edges. Then click Solve. (See the sketch below.)
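A minimal sketch of the Undistort → work → Redistort sandwich, wired with Nuke's Python API. The Read path is a placeholder, and the LensDistortion class name and its output-mode knob differ between Nuke versions, so I only build the graph here and leave the mode switch to the properties panel:

```python
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')   # hypothetical plate path

# Undistort first, so tracking and match-move work on straightened data.
# (Class name may be LensDistortion2 depending on your Nuke version; set the
#  node's output mode to Undistort in its properties and run the solve there.)
undistort = nuke.nodes.LensDistortion(inputs=[plate])

# ...tracking, match-move, and comp work happen on the undistorted plate...
work = nuke.nodes.Dot(inputs=[undistort])        # stand-in for that work

# Redistort at the end, reusing the same solve, to return the plate to its
# original lens geometry.
redistort = nuke.nodes.LensDistortion(inputs=[work])
```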

LensDistortion (STMap) introduces "forward" and "backward" viewing options, and is lighter and faster to work with.

3D tracking

  1. Denoise your plate, then treat it (color grade, etc.).
  2. Roto out areas you want to avoid tracking (things that move / aren't static; be mindful of reflective objects).
  3. Help Nuke identify the floor by selecting a few points on the floor in the plate > right click > ground plane > set to selected.
  4. Choose one point to set as the origin to make sure the tracked scene is not tilted.

CameraTracker analyses the motion in 2D footage and extrapolates this movement into a 3D camera path. It tracks various points in the footage (usually high-contrast or distinctive features) across frames to determine how the camera was moving when the footage was shot.

Source: sequence or still

Autotrack: to improve tracking

Turn on Preview Features to show the tracked features in the Viewer

Click on error-max, then click on the graph and press F to frame it

Reduce the max error to 6, then click Delete Unresolved and Delete Rejected

A solved error of around 1 or below is usually good

The floor is now matching the ground plane

To check tracking:

  • Select some points > Create > Cube/Plane
  • Plug the Card object into the Scene > move it to match the ground plane
You should create multiple cards from points, in the foreground and background, to make sure everything matches and works perfectly (see the sketch below).
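As a sketch of that card-check idea in Python, a hand-built version of what CameraTracker's Create menu does for you. The positions below are placeholders for the solved point positions you'd take from the point cloud:

```python
import nuke

# Two Card nodes standing in for checks at solved point positions.
# The translate values are placeholders: in practice you create the cards
# from selected points, or copy the positions from the point cloud.
fg_card = nuke.nodes.Card2()
fg_card['translate'].setValue([0.2, 0.0, -1.0])   # a foreground point (placeholder)

bg_card = nuke.nodes.Card2()
bg_card['translate'].setValue([-0.5, 0.4, -6.0])  # a background point (placeholder)

# Plug both cards into the Scene so they render through the solved camera.
scene = nuke.nodes.Scene(inputs=[fg_card, bg_card])
```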
Using PointCloudGenerator to see the camera movement
First Analyze, then Track Points, then Delete Rejected Points to remove the red points
Select all vertices > Groups > Create Group > Bake Selected Groups

I had previously modelled the balloons for this project; this time I created shaders for them to make two different colour variations. I wanted a fun fantasy look that feels dynamic, so not only did I animate the balloons to rotate, but I also added some shakiness and tilt. I purposely chose glass and glossy materials with the intention of making them the main highlight of the scene. On the balloons is a shader that I also animated at the end to create a lava lamp/lightning effect that ended up looking really cool.

Balloon 1
Balloon 2
My Nuke setup

In today's class, we learned about the stages of production for film compositing:
– Temps/Postviz? – Rough version/mock up?
– Trailer – Key shots
– Finals
– Quality Control

Project Management Software:
– Google Doc/Spreadsheet/Notion
– Ftrack
– Shotgun

Production roles:

  • Line Producer: The person who manages and keeps track of the whole team in terms of production.
  • VFX Producer: ensures the studio's projects are completed on time, within budget, and with the available resources.

Tech Check before publishing a version of your work:

  • Check if you addressed all the notes on the shot / followed the brief
  • Compare new version with previous version
  • Check editorial? Any retime in the shot?
  • Does your shot have the latest CG and FX?
  • Does your shot have the latest camera match move?
  • Write in personal notes
  • Do you have different alternatives for one shot?
  • Quality Control

  1. RotoPaint (P)

The RotoPaint node gives you a broader set of tools than Roto, though many of the controls are shared across both nodes. It is used to clean up and clone out unwanted elements from the plate. Con: heavier than the original Roto node.

Brush settings
Cloning the house using Rotopaint node
  • Clone: Ctrl + click = choose clone source. Shift + drag = change brush size
  • Reveal: reveals the original background after cloning (change the paint source from 'fg' to 'bg' to take effect).
  • Paint: Paint/draw on the shot
  • Blur, sharpen, smear, dodge, burn: Adjusts the area painted on the shot (similar to Photoshop).
  • Dodge for highlight, Burn for shadow
Lifetime type has many options; we typically use 'all frames', but you can use the other options for transitions.
Make sure the paint is on the alpha by changing the output mask to rgba.alpha, then premult.
Be mindful to change the source if RotoPaint doesn't seem to be working.

Two ways to separate the RotoPaint work from the plate:

Using a Difference node
Using Merge (divide), then Merge (multiply) back (see the sketch below)
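Here is a minimal sketch of the second approach with Nuke's Python API; the Read path is a placeholder. Merge set to divide isolates the paint as a ratio against the original plate, and Merge set to multiply applies that ratio back:

```python
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')    # hypothetical plate path
painted = nuke.nodes.RotoPaint(inputs=[plate])    # the clean-up work

# Merge (divide) computes A/B: painted / plate leaves only the change the
# paint introduced (1.0 everywhere the plate is untouched).
ratio = nuke.nodes.Merge2(inputs=[plate, painted], operation='divide')

# Merge (multiply) computes A*B: multiplying the ratio back over the plate
# (or a regrained version of it) reapplies the paint.
restored = nuke.nodes.Merge2(inputs=[plate, ratio], operation='multiply')
```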

Grain

  • Film grain is the texture in photographic film, caused by small metallic silver particles developed from light-sensitive materials.
  • Unlike digital noise, which is electronic interference in digital cameras, film grain is a physical attribute of analog film.
  • Grain and noise both impart a comparable appearance, sensation, and texture to filmic images. Techniques utilized for footage from both digital and film sources, such as adding grain, adjusting perspective, and applying denoising, help in creating a scene that feels more natural and organic.

Setup 1
  • When working on grainy footage, we typically start with a denoising procedure. This step is essential for tracking the elements we want to replace in the scene later on.
  • Once the editing is completed, grain is re-added to the footage to restore its original texture and maintain visual continuity throughout the shot (see the sketch below).
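One common way to wire this up, sketched with Nuke's Python API: extract the grain by subtracting the denoised plate from the original (Merge 'from'), do the work on the clean branch, then add the extracted grain back (Merge 'plus'). The Read path is a placeholder and the Denoise class name is an assumption from the versions I've used:

```python
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')    # hypothetical plate path

# Denoise the plate so tracking and paint work on clean data.
# (The Denoise node's class name is assumed to be Denoise2 here.)
denoised = nuke.nodes.Denoise2(inputs=[plate])

# grain = plate - denoised  (Merge 'from' computes B - A)
grain = nuke.nodes.Merge2(inputs=[plate, denoised], operation='from')

# ...tracking, paint, and comp work happen on the denoised branch...
work = nuke.nodes.Dot(inputs=[denoised])          # stand-in for that work

# Re-add the extracted grain over the finished comp (Merge 'plus' = A + B).
regrained = nuke.nodes.Merge2(inputs=[work, grain], operation='plus')
```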

Setup 2
Setup 3
  • Denoise Node: link it to your source footage, then set a bounding box to analyze the existing noise in the footage. After this, you can fine-tune settings like the amount of denoise, smoothness, and other parameters.
  • Grain Node: it can be quite difficult to match the grain to the source footage, as you'll need to go through the RGB channels one by one and adjust settings like grain size, irregularity, and intensity to achieve a close match.
  • F_ReGrain Node: offers more precise grain matching than the Grain node. You connect the grain to the original footage and then link the Src (source) input to the shot you are trying to replicate. Note that this is available only in NukeX and is much heavier than the Grain node.

DasGrain node

  • normalised grain – the grain averaged/normalised from the denoised plate, clean plate, and cleaned-up plate, so it can be reapplied consistently
  • common_key – looks for the difference between the clean plate and the cleaned-up plate, so only the painted areas receive new grain

Homework

Before cloning procedure
After cloning

I tried to match the grain using the Grain node by going through each RGB channel. I wouldn't say it looks perfect, especially the blue channel, but the end result seems pretty good.

R Channel
G Channel
B Channel

Concatenation is the ability to perform one single mathematical calculation across several tools in the Transform family. This single calculation (or filter) allows us to retain as much detail as possible.

– Do not put grade/color-correct nodes between Transform nodes, as it breaks concatenation and decreases the quality of the footage (more blur) – see the sketch below.
– Only the filter at the end of a transform chain matters.
– Only the motion blur at the end of a transform chain matters.
– Use a clone (Alt+K) if you want to adjust multiple nodes at the same time with the same values → saves time and avoids having to change nodes individually.
– Most of the time you'll be using cubic filtering.
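A sketch of what breaks concatenation, using Nuke's Python API. The two chains below carry the same transforms; only the second inserts a Grade between them, forcing Nuke to filter twice and soften the image:

```python
import nuke

src = nuke.nodes.CheckerBoard2()

# Chain A: Transform -> Transform concatenates into a single filter hit.
t1 = nuke.nodes.Transform(inputs=[src], rotate=15)
t2 = nuke.nodes.Transform(inputs=[t1], scale=0.5)

# Chain B: the Grade in the middle breaks concatenation, so the image is
# filtered once per Transform (two filter hits = a softer result).
u1 = nuke.nodes.Transform(inputs=[src], rotate=15)
g = nuke.nodes.Grade(inputs=[u1], white=1.2)
u2 = nuke.nodes.Transform(inputs=[g], scale=0.5)
```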

Correct Merging procedure

Bounding box (bbox) = the area where Nuke reads and renders information from
Plate > S = project settings

bounding box is represented by the dotted frame

3 ways to match the bounding box (the first two are sketched below):

  1. Use a Crop node after a Transform node if your bounding box exceeds the root resolution, so Nuke doesn't read information outside of the format.
  2. Use a Merge node → change 'set bbox' from union (the default) to B.
  3. If using a Roto node → Copy alpha → alpha → Premult = it will automatically match the bbox to the roto.
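The first two options in Nuke's Python API; the Read paths are placeholders, and the Crop box here simply matches an assumed 1920×1080 root format:

```python
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')    # hypothetical plate path

# 1. Crop after a Transform so the bbox can't exceed the root resolution.
moved = nuke.nodes.Transform(inputs=[plate])
moved['translate'].setValue([400, 0])
cropped = nuke.nodes.Crop(inputs=[moved])
cropped['box'].setValue([0, 0, 1920, 1080])       # x, y, r, t of a 1920x1080 root

# 2. On a Merge, force the output bbox to the B input instead of the union.
bg = nuke.nodes.Read(file='bg.####.exr')          # hypothetical path
merged = nuke.nodes.Merge2(inputs=[bg, cropped])
merged['bbox'].setValue('B')
```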

Working with 3D

  • Press Tab in the Viewer to switch between the viewport and camera view
  • Hold Ctrl + left click to orbit
  • Use a Reformat node to limit the bbox to the size of the original 3D elements in the scene

Shuffle node

The Shuffle node maps input layers to output layers.
Shuffle can create new channels for the output and read different layers from many inputs, both internal ones like those we created and multi-channel EXR renders that contain multiple LAYERS and multiple CHANNELS per image.
→ useful for manipulating the RGBA channels + depth

PLANAR TRACKER (perspective shift / 2.5 tracking)

PlanarTracker is good for perspective shifts, but not for spatial movement.

If there's an object in front, go to the top Viewer dropdown → add a new track layer, roto the object, and remember to put its folder ON TOP of the roto folder of the object behind. This tells Nuke to exclude the foreground object from the track.
– absolute: shrinks the source image
– relative: shrinks the source image less

ZDefocus

The ZDefocus node uses a focal point to generate bokeh, creating more of a film look than the Blur node.
Cons:
– Elements can look a bit fake to the eye
– NukeX 14 has the Bokeh node, which uses the camera's information, so it's more accurate to the eye

Using the focal point to pick the subject/area to focus on
Convolve > Switch > any shape (roto) or text can be used as the bokeh shape/blades

Homework

For this week's assignment, I needed to use the PlanarTracker node to replace the two posters in a scene that has a perspective shift. At first I didn't understand why the CornerPin and the posters' corners were not matching; then I learned that I have to click on the Premult node first before adding the CornerPin, so it is generated from the poster itself. Then I rotoed the pole and added it back on top of the poster.

As a final touch, I attempted to replace the wall with a brick wall texture using what I have learned about planar tracking. I tracked the frames/windows around the poster, then tracked the wall using the same technique as the poster replacement. The result turned out quite nice, as the color matches the whole scene, but I think the image texture is still too sharp and not perfectly blended with the wall, so it looks a bit off.

A major problem I had was not really understanding how to stabilize the brick wall texture so it doesn't move with the camera shift. I tried stabilizing from tracking, and also followed my teacher's advice to track the wall using a smaller area instead of the whole wall like I did at first, yet it still isn't working. I might have to revisit this project to figure it out.

Node setup

The Transform node deals with translation, rotation, and scale, as well as tracking, warping, and motion blur. Sometimes you want to animate these values using just one Transform node, but sometimes it's better to use separate rotation and scale nodes to understand the process better.

Using rotation and scale nodes to separate individual operations

2D tracker

Tracker node: tracks a pattern of pixels in x and y.
– Allows you to extract animation data from the position, rotation, and size of an image.
– Using expressions, you can apply the data directly to transform and match-move another element.
– To stabilize the image, you can invert the values of the data and apply them to the original element.
– We can also generate several transform nodes from the main Tracker node to stabilize the scene, match the movement, and either reduce or add shakiness.

General process for tracking an image:

1. Connect a Tracker node to the image you want to track.
2. Use auto-tracking for simple tracks or place tracking anchors on features at keyframes in the image.
3. Calculate the tracking data.
4. Choose the tracking operation you want to perform: stabilize, match-move, etc. (see the sketch below).
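In practice you'd use the Tracker node's own export buttons for step 4, but as a sketch of what the generated match-move Transform boils down to. 'Tracker1' is an assumed node name, and the knob names come from the Tracker node's Transform tab, so verify them in your version:

```python
import nuke

# A Transform that follows the track: expression-link its translate to the
# Tracker's solved translate. Inverting these values instead of copying
# them would stabilize rather than match-move.
matchmove = nuke.nodes.Transform()
matchmove['translate'].setExpression('Tracker1.translate.x', 0)
matchmove['translate'].setExpression('Tracker1.translate.y', 1)
```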

2D, 2.5 & 3D tracking


– 2D track: x & y
– 2.5D: still x & y, but with 4 points to mimic a sense of perspective. Use PlanarTracker for this.
– 3D: x, y & z


– inner tracking square: the pattern of the main shape/point being tracked
– outer tracking square: the search area scanned to find the inner pattern's movement between frames

Pre-tracking treatment:

Sometimes we should treat the original plate to obtain better tracks if the scene is too noisy or grainy. In this case, we use a Denoise node to reduce the noise or grain, helping the tracker read the changes between frames. We can also use tools like Laplacian, Median, or a contrast boost with Grade to work around grain issues.

  1. Denoise the plate (Denoise node – Median node).
  2. Increase contrast with a Grade node.
  3. A Laplacian node can help lock better tracks in certain cases.
Denoising footage to improve tracking
Stabilize operation and compensation using Transform nodes

It’s always important to use a Quality Control (QC) backdrop to make sure the tracking and any added rotoscoping is done right.

Homework assignment:

I attempted this assignment twice, as the first time I was really confused by the process and messed up the nodes. Doing it a second time made me realize that I first need to track the 4 points on the phone to create a Transform_stabilize node. This comes before the first Merge operation, followed by a Transform_matchmove from the same tracking. Doing this ensures that the phone mockup is merged correctly with the tracked points.

I was not satisfied with the roto of the fingers at first because of the green spill. This particular node setup also hasn't touched on a despill operation, yet I managed to compensate for it by using a Filter_Erode node with a slight blur on the edges to make the roto less obvious.

I also used an Erode node for the phone mockup, as it didn't fully cover the green screen at the top of the phone no matter how accurately I tried to adjust the CornerPin2D.

Before
After
Final node set up


Homework feedback:
– My final work is good, but personally I was not satisfied with the roto of the finger, as it is still wobbly; I want to learn how to roto better in the future.
– I learned that I could have used the Curve Editor to smooth out my animation, controlling curves from linear to smooth.
– x → f → press H on the in & out points (for ease in & out)
– y → f → press H on the in & out points to smooth the animation or move the curve

Premult = multiplies the input’s rgb channels by its alpha

Keymix = similar to Merge but accepts unpremultiplied assets. Often used for merging masks.

Uses of the Premult node:
– Merging unpremultiplied images = to avoid unwanted artifacts (fringing around masked objects)
– Color correcting premultiplied images: Unpremult → color correction → Premult

Unpremult = divides the input’s rgb channels by its alpha

Colour correcting premultiplied images:

When you colour correct a premultiplied image, you should first connect an Unpremult node to the image to turn the image into an unpremultiplied one.
Then, perform the colour correction. Finally, add a Premult node to return the image to its original premultiplied state for Merge operations. Typically, most 3D rendered images are premultiplied.
** If the background is black or even just very dark, the image may be premultiplied.
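The sandwich from the last paragraph, sketched with Nuke's Python API; the Read path is a placeholder for a premultiplied CG render:

```python
import nuke

cg = nuke.nodes.Read(file='cg_render.####.exr')   # hypothetical premultiplied render

# Divide RGB by alpha so edge pixels return to their true colours...
unpremult = nuke.nodes.Unpremult(inputs=[cg])

# ...colour correct in unpremultiplied space (no dark fringing on edges)...
grade = nuke.nodes.Grade(inputs=[unpremult], white=1.2)   # example gain tweak

# ...then multiply RGB by alpha again before any Merge operations.
premult = nuke.nodes.Premult(inputs=[grade])
```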

Merging Operation Examples:

Merge (over)
Merge (mask)
Merge (average)
Merge (overlay)

Reformat = lets you resize and reposition your image sequences to a different format (width and height).
– Allows you to use plates of varying image resolution on a single script without running into issues when combining them.
– All scripts should include Reformat nodes after each Read node to specify the output resolution of the images in the script.

Colorspace and Linearization
– Colorspace defines how the footage was captured or exported. Most files are non-linear, and knowing the correct colorspace is critical for proper linearization.
– Linearization is the process of converting footage into linear space. All the tools inside Nuke are built around linear math. This also allows mixing media types. We need to know the colorspace of the file before starting to work on it.
– You can work in LOG/RAW or Linear.

LUTs, CDLs, and Grades
· LUTs can be used creatively or technically, i.e. converting from log to lin, or adding a "look"
· CDLs are creative, i.e. adding a "look" to a clip
· Graded footage means it has been colored to its "final" look

For color correction we always want to think in terms of:
– Highlights
– Midtones
– Shadows
Two commonly used nodes are: Grade & ColorCorrect

Both give us the chance to grade the H, M, and S of a shot.

– To grade highlights we use either GAIN or MULTIPLY
– To grade shadows we use LIFT
– To grade midtones we use GAMMA

How to match colors using a Grade node (matching constant A to constant B; sketched below):
– Add a Grade node (G), then pick constant A's color as the WHITEPOINT: select the eyedropper → Ctrl + Shift + click on constant A's color.
– Pick constant B's color as the GAIN: select the eyedropper → Ctrl + Shift + click on constant B.
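The same match written with Nuke's Python API. The two colours are hard-coded placeholders standing in for the eyedropper picks; with blackpoint and lift at their defaults, A maps exactly onto B:

```python
import nuke

# Colour sampled from constant A (what we're changing FROM) and constant B
# (what we're matching TO). These RGBA values are placeholders.
color_a = [0.18, 0.30, 0.45, 1.0]
color_b = [0.60, 0.42, 0.25, 1.0]

grade = nuke.nodes.Grade()
grade['whitepoint'].setValue(color_a)   # A becomes the reference white...
grade['white'].setValue(color_b)        # ...and gain remaps it to B
```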

2 examples of basic grade matches using merge node
Match color

Note: Cloning a node keeps the same values/settings across the clones (signified by a letter C on top of the node)

Primary color

Secondary color

QC (Quality Control)



The first time I did the roto of the mountain, I ran into a few failed attempts/problems that turned out to be good Nuke lessons to remember.

  1. I did not use a lot of points to roto the mountain at first: I took this from the running-man roto, where you should not put too many points because it's harder to roto with such rapid movement. Yet this mountain shot is relatively static, so the mountain doesn't move much, and it has a lot of bumps/details. So to get the best result, I went on and made the roto as accurate as I could. I also took my teacher's advice to break the roto up into parts, though I think I could have gotten away with just one big roto in this case.
  2. I initially used the Transform – Matchmove operation to match the roto with the tracking, which did not work very well, as it cut out the left and bottom parts of the mountain, even though I had rotoed outside of the frame.
  3. I finally used the transfer-data technique: basically Ctrl + dragging the transform and center data from the Tracker node to the ROOT of the Roto node. Then it worked perfectly.
Problem with using Transform Matchmove
First failed attempt at mountain roto using Transform Matchmove
Correct way to transform data (in this case)
Final node setup

Rotoscoping is a technique in animation and visual effects where artists trace over live-action footage frame-by-frame to isolate a character or object. This allows for easier compositing or layering of visual effects. It’s often used in films, commercials, and music videos to blend different visual elements seamlessly.

In real-world settings, fast-moving objects appear blurred to the human eye and camera sensors, so it's recommended that we use motion blur/blur nodes to mimic this natural phenomenon, so elements blend seamlessly into new scenes or with added visual effects.

Calculating motion blur in Nuke (see the sketch below):
24 frames per second
1/60 shutter speed
24/60 = 0.4 shutter fraction

roto → motion blur → on
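The same arithmetic as a quick sketch; the resulting fraction is what the shutter control of a motion-blur-enabled node (e.g. Transform or Roto) expects:

```python
# Motion-blur shutter fraction = shutter speed expressed in frames:
# a 1/60 s exposure at 24 fps covers 24/60 = 0.4 of a frame.
fps = 24.0
shutter_speed = 1.0 / 60.0
shutter_fraction = fps * shutter_speed
print(shutter_fraction)  # 0.4
```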

For this week's homework, I was assigned the roto of the running man and the bridge in this particular footage, using the given setups below. At first these complex setups were really confusing to me: I understood the nodes individually, but I didn't exactly know why and how they were linked to each other. So I did further research, watching tutorials and observing and playing around with the setup, and then what the teacher was presenting here started to make more sense.

Reference: Nuke Rotoscoping Tutorial – Introduction to Roto Node. Available at https://www.youtube.com/watch?v=ELg9ncl-0Wo

This is a standard roto setup, where the plate (Read) and the Roto node are connected to a Copy node, but separated into two different branches (A & B). Then the Premult node multiplies the alpha channel with the RGB.

The benefit of this setup (using the Copy node) is that it gives us a more broken-down workflow in our script, so if any further modification is added to the alpha channel, it won't affect the RGB (original footage).

However, the first setup is a simplified one where you are required to do the roto manually frame by frame. With this second setup, the only difference is that the Roto nodes are separated into different branches, then combined using a ChannelMerge node. Additionally, we were introduced to the Tracker node, which can be used to track the movement of a particular area in the shot. It helps speed up the roto process, and is especially useful for tracking very subtle shifts of static objects from camera movement that we might miss with our own eyes.

Reference: NKE 101- Nuke 101- PART 10: Intro to Tracker Node. Available at https://www.youtube.com/watch?v=ELg9ncl-0Wo

At first, I wanted to try saving time by using the Tracker node to track the movement of the man. But after a few attempts it didn't seem to work very well, perhaps because of the drastic movements.

The rotoscoping process took me a lot of time, especially at first when I hadn't grasped the technique and the movement pattern. I found the arms the trickiest parts to work with, as they don't keep the same shapes and placements, and it takes really long to modify them frame by frame.

Final takeaways:
Even though this first attempt at rotoscoping is not perfect, I have learned a lot through this assignment. First of all, the most important thing to remember is to break the subject down into small parts that you can easily track and modify; I separated the head, the torso, and then both arms and legs into three parts. Additionally, it would have saved more time if I had observed the movement pattern first, to find the main keyframes to start with. Thirdly, I initially started the assignment with a mouse; then I found that using a Wacom drastically improved the quality and saved much more time on rotoscoping. The result doesn't satisfy me 100%, yet it reflects exactly where I struggled at first, clearly shown by how the roto of the arms and head was not that good; then I improved much more on the legs after learning the right techniques.

Workshop Notes:

This is my first time learning Nuke, yet with my experience in other 3D software like Blender and Houdini, I am already familiar with a node-based workflow. Compositing is an important part of the CG and VFX industry, and I am eager to improve my skills and knowledge on this topic, as I believe it will help take my personal work to another level of quality.

Basic Elements:

About Node-Based Compositing:

In compositing, nodes serve as the fundamental components. To craft a new compositing script, you insert and link nodes, forming an operational network. This setup lets you carry out various manipulations on your images (a procedural workflow).

In node-based software, intricate compositing tasks are simplified by connecting a series of basic image manipulations, known as "nodes." These nodes come together to create a schematic layout, resembling a flowchart. A composite is represented as a tree-like graph, intuitively mapping out the journey from the original media to the end product. This is actually how all compositing software internally manages composites.

Flexibility and Limitations:

This approach offers high flexibility, allowing you to adjust the settings of an earlier image-processing step while observing the complete composite.
However, it’s worth noting that node-based systems often struggle with keyframing and time-based effects, as they don’t originate from a timeline like layer-based systems do.

Key Points to Remember:

Make sure to convert MP4 or MOV files into image sequences before proceeding, as Nuke is not well-optimized for MOV files. It’s designed to work best with image sequences, which are collections of ordered still frames that together form an animation. Typically, these frames are stored in a single folder and labeled in a sequential manner to maintain their order.

Workflow Steps:

  1. Import your video file.
  2. Tweak the frame sequencing and range using the Retime and FrameRange nodes. Be careful not to alter the frame speed in the Retime node.
  3. Create a Write node to export the video as an image sequence.
  4. Choose either DPX or EXR as the file format.
  5. Don't forget to add frame-padding hashtags (####) to the file name so Nuke can properly write the sequence (see the sketch below).
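A sketch of steps 3–5 with Nuke's Python API; the file paths and the frame range are placeholders:

```python
import nuke

mov = nuke.nodes.Read(file='source.mov')          # hypothetical source movie

# Write node exporting an EXR image sequence; the #### padding is replaced
# by the zero-padded frame number (0001, 0002, ...).
write = nuke.nodes.Write(inputs=[mov])
write['file'].setValue('renders/shot_####.exr')
write['file_type'].setValue('exr')

# Render an assumed frame range of 1-100.
nuke.execute(write, 1, 100)
```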

Typical VFX pipeline:

Further research