Project3D node

Purpose:

Project3D is used to project a 2D image onto a 3D object. It’s like shining a slide projector onto a physical model; the image “wraps” around the 3D shape according to the geometry and camera position.

Projecting onto Match-move Geometry

  • Freeze a frame using a FrameHold node (choose a frame that is closest to the camera and appears the clearest)
  • Input a 2D image into the Project3D node (this can be a texture, or a premultiplied RotoPaint patch)
  • Freeze the frame again (this minimizes recalculation from the RotoPaint node)
  • Premult the patch
  • Connect the Project3D node to the match-move camera
  • Project3D > Card > ScanlineRender
  • Merge the original plate with the ScanlineRender output (see the Python sketch below)
Simple projection procedure with a RotoPaint patch
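To make the order of operations concrete, here is a minimal Python sketch of this chain using Nuke’s Python API. The node names (Read1, Camera1) and the held frame number are assumptions for illustration; the node classes and knobs are the stock ones.

```python
import nuke

plate  = nuke.toNode('Read1')    # original plate (assumed node name)
camera = nuke.toNode('Camera1')  # match-move camera (assumed node name)

hold = nuke.nodes.FrameHold(inputs=[plate])
hold['first_frame'].setValue(1040)           # clearest frame closest to camera (example)

paint = nuke.nodes.RotoPaint(inputs=[hold])  # paint the patch on the frozen frame
hold2 = nuke.nodes.FrameHold(inputs=[paint]) # freeze again to avoid re-evaluating the paint
hold2['first_frame'].setValue(1040)
patch = nuke.nodes.Premult(inputs=[hold2])

project = nuke.nodes.Project3D(inputs=[patch, camera])
card    = nuke.nodes.Card2(inputs=[project])  # Card2 is the class behind the Card menu item

render = nuke.nodes.ScanlineRender()
render.setInput(1, card)    # obj/scn input
render.setInput(2, camera)  # cam input

comp = nuke.nodes.Merge2(inputs=[plate, render], operation='over')  # B = plate, A = render
```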

MergeMat (Shader): similar to the Merge node, but designed specifically for operations in 3D space (combining shaders/projections).

Projecting at different distances

In the setup above, we use two FrameHold nodes: one for the frame closest to the camera and one for the frame furthest away. We then merge the two Project3D nodes together using MergeMat. This approach produces a more natural result by projecting the patch at different distances.
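A rough Python sketch of this dual-distance variant, assuming the patch and camera already exist (node names and frame numbers are placeholders):

```python
import nuke

patch  = nuke.toNode('Premult1')  # the premultiplied patch (assumed node name)
camera = nuke.toNode('Camera1')   # match-move camera (assumed node name)

near = nuke.nodes.FrameHold(inputs=[patch])
near['first_frame'].setValue(1001)   # frame closest to the camera (example)
far = nuke.nodes.FrameHold(inputs=[patch])
far['first_frame'].setValue(1090)    # frame furthest from the camera (example)

proj_near = nuke.nodes.Project3D(inputs=[near, camera])
proj_far  = nuke.nodes.Project3D(inputs=[far, camera])

# combine the two projections as shaders before they hit the geometry
shader = nuke.nodes.MergeMat(inputs=[proj_far, proj_near])
card   = nuke.nodes.Card2(inputs=[shader])
```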

Projecting Roto

  • Roto the 2D element

ModelBuilder (NukeX only) – for building geometry. Right-click to choose a mode, and right-click to change the selection mode (similar to 3D software packages).

Resources:

https://learn.foundry.com/nuke/content/reference_guide/3d_nodes/project3d.html

Concatenation is the ability to perform one single mathematical calculation across several tools in the Transform family. This single calculation (or filter) allows us to retain as much detail as possible.

– do not put a Grade/ColorCorrect between Transform nodes, as this breaks concatenation and degrades the quality of the footage (it renders softer/more blurry)
– the only filter that matters for a chain of Transform nodes is the filter on the last node in the chain (see the sketch after this list)
– likewise, the only motion blur that applies is the motion blur set on the last node in the chain
– use clone (Alt+K) if you want to adjust multiple nodes at the same time with the same values → saves time and avoids having to change nodes individually
– most of the time you’ll be using cubic as the filter
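A small sketch of what concatenation means in practice, assuming a Read1 node: the good chain filters the image once, while inserting a Grade forces Nuke to filter twice.

```python
import nuke

plate = nuke.toNode('Read1')  # assumed node name

# GOOD: Transform -> Transform concatenates into a single filter pass
t1 = nuke.nodes.Transform(inputs=[plate], rotate=12)
t2 = nuke.nodes.Transform(inputs=[t1], scale=1.3)
t2['filter'].setValue('Cubic')  # only this last filter setting counts

# BAD: Transform -> Grade -> Transform breaks concatenation,
# so the image is filtered twice and ends up softer
g  = nuke.nodes.Grade(inputs=[t1], white=1.2)
t3 = nuke.nodes.Transform(inputs=[g], scale=1.3)
```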

Correct Merging procedure

Bounding box (bbox) = the area where Nuke will read and render information from
Plate > S = Project Settings

the bounding box is represented by the dotted frame in the Viewer

3 ways to match bounding box:

  1. use a Crop node after the Transform node if your bounding box exceeds the root resolution, so Nuke doesn’t read information outside of the format
  2. use a Merge node → change “set bbox” from union (the default) to B
  3. if using a Roto node → Copy the alpha → Premult = this will automatically match the bbox to the roto (see the sketch below)
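The same three fixes expressed as a hedged Python sketch (Read1 is an assumed node name; the knobs are the stock ones):

```python
import nuke

plate = nuke.toNode('Read1')  # assumed node name
moved = nuke.nodes.Transform(inputs=[plate])
moved['translate'].setValue([200, 0])  # pushes the bbox outside the format

# 1. Crop back to the root format so Nuke ignores the overflow
crop = nuke.nodes.Crop(inputs=[moved])
crop['box'].setValue([0, 0, plate.width(), plate.height()])

# 2. On a Merge, read the bbox from the B pipe instead of the union
merge = nuke.nodes.Merge2(inputs=[plate, crop])
merge['bbox'].setValue('B')

# 3. Roto -> Copy the alpha -> Premult clips the bbox to the roto shape
roto = nuke.nodes.Roto()
copy = nuke.nodes.Copy(inputs=[plate, roto], from0='alpha', to0='alpha')
prem = nuke.nodes.Premult(inputs=[copy])
```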

Working with 3D

  • Press Tab in the Viewer to switch between the 3D viewport and the camera view
  • hold Ctrl + left-click to orbit
  • use a Reformat node to limit the bbox to the size of the original 3D elements in the scene

Shuffle node

The Shuffle node maps input layers to output layers.
Shuffle can create new channels for its output and read different layers from many inputs — both internal ones like the layers we create ourselves, and multi-channel EXR renders where there are multiple LAYERS and multiple CHANNELS per image.
→ useful for manipulating the RGBA channels + depth
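For example, a (classic) Shuffle can pull the depth layer of a multi-channel EXR into the main stream. The file path here is hypothetical; newer Nuke versions expose the same mapping through the Shuffle2 grid.

```python
import nuke

exr = nuke.nodes.Read(file='render.####.exr')  # hypothetical multi-channel render

shuf = nuke.nodes.Shuffle(inputs=[exr])
shuf['in'].setValue('depth')   # read from the depth layer...
shuf['out'].setValue('rgba')   # ...and write it into rgba for viewing/manipulation
```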

PLANAR TRACKER (perspective shift / 2.5 tracking)

PlanarTracker is good for perspective shifts, but not for spatial movement.

If there’s an object in front, go to the top Viewer dropdown → add a new track layer, roto it, and remember to put its folder ON TOP of the roto folder of the object behind. This tells Nuke to exclude the object in front from the track.
– absolute: shrinks the source image
– relative: shrinks the source image less

Zdefocus

The ZDefocus node uses a focal point to generate bokeh, creating more of a film look than the Blur node.
Cons:
– the elements can look a bit fake to the eye
– NukeX 14 has a Bokeh node, which uses the camera’s information, so it’s more accurate to the eye

Use focal_point to pick the subject/area to focus on.
Convolve > Switch > any shape (roto)/text can be used as the bokeh shape/blades.
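A hedged sketch of a basic ZDefocus setup; ZDefocus2 is the class behind the menu item, the values are placeholders, and the knob names are worth verifying against your Nuke version.

```python
import nuke

render = nuke.toNode('Read1')  # assumed: a plate/render carrying a depth pass

z = nuke.nodes.ZDefocus2(inputs=[render])
z['focal_point'].setValue([960, 540])  # pick the subject/area to keep in focus
z['size'].setValue(12)                 # overall defocus amount
z['max_size'].setValue(40)             # clamp on the largest bokeh
# a custom bokeh shape (roto/text, e.g. via Convolve or the node's filter
# input) can replace the default disc, as noted above
```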

Homework

For this week’s assignment, I needed to use the PlanarTracker node to replace the two posters in a scene that has a perspective shift. At first I didn’t understand why the CornerPin and the posters’ corners were not matching; then I learned that I have to click on the Premult node first before adding the CornerPin, so it gets generated from the poster itself. Then I rotoed the pole and added it back on top of the poster.

As a final touch, I attempted to replace the wall with a brick wall texture using what I had learned about planar tracking. I tracked the frames/windows around the poster, then tracked the wall using the same technique as the posters’ replacement. The result turned out quite nice, as the color matches the whole scene, but I think the image texture is still too sharp and not perfectly blended with the wall, so it looks just a bit off.

A major problem I had was not really understanding how to stabilize the brick wall texture so it doesn’t move with the camera shift. I tried stabilizing from the Tracker node, and also took my teacher’s advice to track the wall using a smaller area instead of the whole wall like I did at first, yet it’s still not working at all. I might have to revisit this project to figure this out.

Node setup

Premult = multiplies the input’s RGB channels by its alpha

Keymix = similar to Merge, but accepts unpremultiplied assets. Often used for merging masks.

Uses of the Premult node:
– merging unpremultiplied images = to avoid unwanted artifacts (fringing around masked objects)
– color correcting premultiplied images: Unpremult → color correction → Premult

Unpremult = divides the input’s RGB channels by its alpha

Color correcting premultiplied images:

When you color correct a premultiplied image, you should first connect an Unpremult node to the image to turn it into an unpremultiplied one.
Then, perform the color correction. Finally, add a Premult node to return the image to its original premultiplied state for Merge operations. Typically, most 3D rendered images are premultiplied.
** If the background is black or even just very dark, the image may be premultiplied.
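As a sketch, the sandwich looks like this in Python (Read1/Read2 are assumed node names; the Grade value is an arbitrary example):

```python
import nuke

cg = nuke.toNode('Read1')  # assumed: a premultiplied CG render
bg = nuke.toNode('Read2')  # assumed: the background plate

unp = nuke.nodes.Unpremult(inputs=[cg])    # rgb / alpha
grade = nuke.nodes.Grade(inputs=[unp])
grade['white'].setValue(1.15)              # correct on clean, unpremultiplied RGB
prem = nuke.nodes.Premult(inputs=[grade])  # rgb * alpha, back to premultiplied

comp = nuke.nodes.Merge2(inputs=[bg, prem], operation='over')
```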

Merging Operation Examples:

Merge (over)
Merge (mask)
Merge (average)
Merge (overlay)
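For reference, the default over operation expects a premultiplied A input and computes A + B × (1 − alpha of A). A tiny per-pixel sketch in plain Python:

```python
def over(a_rgb, a_alpha, b_rgb):
    """Composite one premultiplied pixel A over background pixel B."""
    return tuple(a + b * (1.0 - a_alpha) for a, b in zip(a_rgb, b_rgb))

# a half-transparent premultiplied grey over pure red:
print(over((0.25, 0.25, 0.25), 0.5, (1.0, 0.0, 0.0)))  # -> (0.75, 0.25, 0.25)
```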

Reformat = lets you resize and reposition your image sequences to a different format (width and height).
– Allows you to use plates of varying image resolutions in a single script without running into issues when combining them.
– All scripts should include Reformat nodes after each Read node to specify the output resolution of the images in the script.

Colorspace and Linearization
– Colorspace defines how the footage was captured or exported. Most files are non-linear, and knowing the correct colorspace is critical for proper linearization.
– Linearization is the process of converting footage into linear space. All the tools inside Nuke are built around linear math. This also allows mixing media types. We need to know what colorspace the file was in before starting to work on it.
– You can work in LOG/RAW or Linear.
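In practice this means setting the colorspace knob on each Read so Nuke can linearize correctly. A sketch with hypothetical file paths; the available colorspace names depend on your project’s color management (nuke-default vs OCIO).

```python
import nuke

plate = nuke.nodes.Read(file='plate.####.dpx')  # hypothetical path
plate['colorspace'].setValue('Cineon')          # log-encoded DPX, for example

logo = nuke.nodes.Read(file='logo.png')         # hypothetical path
logo['colorspace'].setValue('sRGB')             # typical for 8-bit graphics
```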

LUTs, CDLs, and Grades
· LUTs can be used creatively or technically, i.e. converting from log to lin, or adding a “look”
· CDLs are creative, i.e. adding a “look” to a clip
· Graded footage means footage colored to its “final” look

For color correction we always want to think in terms of:
– Highlights
– Midtones
– Shadows
Two commonly used nodes are: Grade & ColorCorrect

Both give us the chance to grade the H, M, S of a shot

– To grade highlights we use either GAIN or MULTIPLY
– To grade shadows we use LIFT
– To grade midtones we use GAMMA

How to match colors using a Grade node (matching Constant A to Constant B):
– Add a Grade node (G), pick Constant A’s color as the WHITEPOINT by selecting the eyedropper → Ctrl + Shift + click on Constant A’s color.
– Pick Constant B’s color as the GAIN by selecting the eyedropper → Ctrl + Shift + click on Constant B’s color (see the sketch below).
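A sketch of the same match in Python, setting the knobs the eyedropper would fill in (node name and sampled values are placeholders):

```python
import nuke

const_a = nuke.toNode('ConstantA')  # assumed node name

g = nuke.nodes.Grade(inputs=[const_a])
g['whitepoint'].setValue([0.18, 0.41, 0.60, 1.0])  # sampled from Constant A
g['white'].setValue([0.52, 0.33, 0.21, 1.0])       # GAIN, sampled from Constant B
```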

2 examples of basic grade matches using merge node
Match color

Note: cloning a node keeps the same values/settings across the clones (signified by a “C” on top of the node)

Primary color

Secondary color

QC (quality control)



The first time I did the roto of the mountain, I ran into a few failed attempts/problems that turned out to be good Nuke lessons for me to remember.

  1. Not using enough points to roto the mountain at first: I carried this over from the running-man roto, where you should not put in too many points because it’s harder to roto with such rapid movement. Yet this mountain shot is relatively static, so the mountain doesn’t move much, and it has a lot of bumps/details. So to get the best result, I went on and made the roto as accurate as I could. I also took my teacher’s advice to break the roto up into parts, though I think I could have gotten away with just one big roto in this case.
  2. I initially used the Transform – Matchmove operation to match the roto with the tracking, which did not work very well, as it cut off the left and bottom parts of the mountain, even though I had rotoed outside of the frame.
  3. I finally used the data-transfer technique: basically Ctrl + dragging the transform and center data from the Tracker node to the ROOT of the Roto node. Then it worked perfectly.
Problem with using Transform Matchmove
First failed attempt at mountain roto using Transform Matchmove
Correct way to transform data (in this case)
Final node setup

Rotoscoping is a technique in animation and visual effects where artists trace over live-action footage frame-by-frame to isolate a character or object. This allows for easier compositing or layering of visual effects. It’s often used in films, commercials, and music videos to blend different visual elements seamlessly.

In real-world settings, fast-moving objects appear blurred to the human eye and to camera sensors, so it’s recommended that we use motion blur/blur nodes to mimic this natural phenomenon so elements blend seamlessly into new scenes or with added visual effects.

Calculating motion blur in Nuke:
24 frames per second
1/60 shutter speed
shutter value = 24 fps × (1/60 s) = 0.4

roto → motion blur → on
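The same arithmetic as a tiny sketch — the value Nuke wants is the fraction of a frame the shutter stays open:

```python
fps = 24
shutter = fps * (1.0 / 60.0)  # frame rate x shutter time = 24/60
print(shutter)                # 0.4 -> use this as the roto's motion blur shutter
```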

For this week’s homework, I was assigned to roto the running man and the bridge in this particular footage, using the given setups below. At first these complex setups were really confusing to me, as I understood the nodes individually but didn’t exactly know why and how they were linked to each other. Therefore, I did further research by watching tutorials and observing and playing around with the setup, and then it started to make more sense what the teacher was presenting.

Reference: Nuke Rotoscoping Tutorial – Introduction to Roto Node. Available at https://www.youtube.com/watch?v=ELg9ncl-0Wo

This is a standard roto setup, where the plate (Read) and the Roto node are connected to a Copy node but separated into two different branches (A & B). The Premult node then multiplies the alpha channel with the RGB.

The benefit of this setup (using the Copy node) is that it gives us a more broken-down workflow in our script, so if any further modification is added to the alpha channel, it won’t affect the RGB (original footage).
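A minimal sketch of this first setup in Python (Read1 is an assumed node name):

```python
import nuke

plate = nuke.toNode('Read1')             # branch B: the untouched RGB
roto  = nuke.nodes.Roto(inputs=[plate])  # branch A: generates the alpha

# copy only the alpha across, leaving the RGB branch untouched
copy = nuke.nodes.Copy(inputs=[plate, roto], from0='alpha', to0='alpha')
prem = nuke.nodes.Premult(inputs=[copy])  # rgb * alpha
```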

However, the first setup is a simplified one where you are required to do the roto manually frame by frame. With the second setup, the only difference is that the Roto nodes are separated into different branches and then combined using a ChannelMerge node. Additionally, we were introduced to the Tracker node, which can be used to track the movement of a particular area in the shot. It helps speed up the roto process, and is especially useful for tracking very subtle shifts of static objects caused by camera movement that we might overlook with our own eyes.

Reference: Nuke 101 – PART 10: Intro to Tracker Node. Available at https://www.youtube.com/watch?v=ELg9ncl-0Wo

At first, I wanted to try saving time by using the Tracker node to track the movement of the man. But after a few attempts it didn’t seem to work very well, perhaps because of the drastic movements.

The rotoscoping process took me a lot of time, especially at first when I hadn’t figured out the technique and the movement pattern. I found the arms the trickiest parts to work with, as they don’t keep the same shapes and placements, and it takes really long to modify them frame by frame.

Final takeaways:
Even though this first attempt at rotoscoping is not perfect, I have learned a lot through this assignment. First of all, the most important thing to remember is to break down the subject into small parts that you can easily track and modify. I separated the head, the torso, and then both arms and legs into three parts. Additionally, it would have saved more time if I had observed the movement pattern first, in order to find the main keyframes to start with. Thirdly, I initially started the assignment with a mouse, then found that using a Wacom drastically improved the quality and saved much more time on rotoscoping. The result doesn’t satisfy me 100%, yet it reflects exactly where I struggled at first, clearly shown by how the roto of the arms and head was not that good, and how I improved much more on the legs after learning the right techniques.

Workshop Notes:

This is my first time learning Nuke, yet with my experience in other 3D software like Blender and Houdini, I am already familiar with node-based workflows. Compositing is an important part of the CG and VFX industry, and I am eager to improve my skills and knowledge on this topic, as I believe it will be beneficial in taking my personal work to another level of quality.

Basic Elements:

About Node-Based Compositing:

In compositing, nodes serve as the fundamental components. To craft a new compositing script, you insert and link nodes, forming an operational network. This setup lets you carry out various manipulations on your images (procedural workflow?)

In node-based software, intricate compositing tasks are simplified by connecting a series of basic image manipulations, known as “nodes.” These nodes come together to create a schematic layout, resembling a flowchart. A composite is represented as a tree-like graph, intuitively mapping out the journey from the original media to the end product. This is actually how all compositing software internally manages composites.

Flexibility and Limitations:

This approach offers high flexibility, allowing you to adjust the settings of an earlier image-processing step while observing the complete composite.
However, it’s worth noting that node-based systems often struggle with keyframing and time-based effects, as they don’t originate from a timeline like layer-based systems do.

Key Points to Remember:

Make sure to convert MP4 or MOV files into image sequences before proceeding, as Nuke is not well-optimized for MOV files. It’s designed to work best with image sequences, which are collections of ordered still frames that together form an animation. Typically, these frames are stored in a single folder and labeled in a sequential manner to maintain their order.

Workflow Steps:

  1. Import your video file.
  2. Tweak the frame sequencing and range using the Retime and FrameRange nodes. Be careful not to alter the frame speed in the Retime node.
  3. Add a Write node to export the video as an image sequence.
  4. Choose either DPX or EXR as the file format.
  5. Don’t forget to add the hash padding (####) to the filename, enabling Nuke to properly write the sequence (see the sketch below).
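A sketch of that export step in Python — the #### is the frame-number padding that tells Nuke to write an ordered sequence (the upstream node name, path, and frame range are hypothetical):

```python
import nuke

w = nuke.nodes.Write(inputs=[nuke.toNode('FrameRange1')])  # assumed upstream node
w['file'].setValue('renders/shot010/shot010.####.exr')     # hypothetical path
w['file_type'].setValue('exr')
nuke.execute(w, 1001, 1100)  # render frames 1001-1100
```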

Typical VFX pipeline:

Further research