Keying is a compositing technique used in visual effects and post-production to separate a subject from its background. This process involves creating a matte or mask that isolates the subject, allowing compositors to replace the background with a new image or scene.

There are many different types of keying, and they can be used together to achieve the best possible matte.

HSV Color Scale

The HSV (which stands for Hue / Saturation / Value) scale provides a numerical readout of your image that corresponds to the color names contained therein.

It separates color information (hue) from the grayscale (value/lightness), allowing for more straightforward adjustments to color intensity and brightness.

R = Hue: Hue literally means colour, measured in degrees from 0 to 360.

G = Saturation: Saturation pertains to the amount of white light mixed with a hue. It measures the intensity or purity of the color, ranging from 0% (gray) to 100% (full intensity).

B = Luminance/Value (Brightness): Luminance describes the perceived brightness of a colour, from 0% (black, no light) to 100% (full brightness, maximum light).
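
As a quick numeric illustration of that mapping (plain Python, using the standard-library colorsys module rather than Nuke itself):

```python
import colorsys

# An arbitrary linear RGB sample.
r, g, b = 0.2, 0.6, 0.4

# colorsys returns hue, saturation and value in the 0-1 range.
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(f"hue: {h * 360:.0f} deg, saturation: {s * 100:.0f}%, value: {v * 100:.0f}%")
```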


Colorspace node

The Colorspace node can be used to convert the RGB channels from linear color space to HSV color space, which helps when analyzing the colors of the plate.
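
A minimal Python sketch of that setup (the knob names are taken from the standard Colorspace node; double-check them in your Nuke version):

```python
import nuke

# Convert incoming linear RGB to HSV so sampled pixel values read out as
# hue (in R), saturation (in G) and value (in B).
cs = nuke.createNode('Colorspace')
cs['colorspace_in'].setValue('linear')
cs['colorspace_out'].setValue('HSV')
```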


HueCorrect Node

The HueCorrect node can be used to mute, suppress or desaturate colors:

Mute: Shift the color toward another color and tone it down, keeping its luminance.

Suppress: Remove the color entirely, along with its luminance.

Desaturate: Reduce the color's saturation.


Keyer (Luminance Key) node

The Keyer (luminance key) node analyzes the luminance values of the footage, letting you select a range of brightness levels to create a matte or mask.

The node offers different operation options (e.g. luminance key, saturation key) to create the alpha from; see the sketch after the list below.

Key Features:

  • Flexibility: Allows keying based on luminance, which is especially useful in monochromatic scenes or when dealing with unevenly lit backgrounds.
  • Detail Preservation: Capable of preserving fine details in the keyed element, such as hair or edges, by carefully adjusting the luminance range and softness of the key.
  • Spill Suppression: While primarily focused on luminance, additional nodes may be used in conjunction with the Keyer to manage spill or color cast from the background, ensuring a clean and natural integration with the new background.
Color grading using a saturation key as a mask gives a more natural result than grading without a mask.
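
A hedged Python sketch of a luminance key driving a grade mask (knob names come from the standard Keyer node; the range values and wiring are illustrative assumptions):

```python
import nuke

plate = nuke.selectedNode()

# Build a luminance key; 'operation' also accepts e.g. 'saturation key'.
keyer = nuke.nodes.Keyer(inputs=[plate])
keyer['operation'].setValue('luminance key')

# The range knob holds (low softness, low, high, high softness) points.
keyer['range'].setValue([0.0, 0.1, 0.9, 1.0])

# Use the resulting alpha as the mask input of a Grade node.
grade = nuke.nodes.Grade(inputs=[plate, keyer])
```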

IBK Gizmo/Colour

In Nuke, IBKGizmo and IBKColour are keying nodes designed to work together for extracting high-quality mattes from footage, especially useful in complex keying scenarios where traditional chroma key methods may struggle.

IBK stands for Image Based Keyer. It operates with a subtractive or difference methodology

IBKGizmo

  • Is the core node used for generating mattes, handling difficult keying challenges.
  • Examples: fine hair details, uneven background tones, severely motion-blurred edges, etc.

IBKColour

  • Works in tandem with IBKGizmo to address color spill issues.
  • After a matte is generated using IBKGizmo, IBKColour helps to neutralize or remove color spill from the background, ensuring that the foreground elements integrate seamlessly with a new background.

ChromaKeyer Node

  • Uses an eyedropper to select the background color you wish to key out
  • Works well with evenly lit screens of saturated color.
  • Takes advantage of GPU devices for efficient processing.

Keylight Node

  • Provides high-quality keys with detailed edge control and effective spill suppression.
  • For challenging keying scenarios, consider using EdgeBlur or Roto to address specific issues or enhance the key.

Primatte Node

  • A 3D keyer that isolates colors using polyhedral regions in 3D RGB color space
  • Offers an Auto-Compute feature for step-by-step alpha data extraction.

Green Despill

Blue Despill
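
As a reference, a common despill recipe is the classic average-limit formula, shown here as a plain-Python sketch (in Nuke it is often written in an Expression node instead):

```python
# Green despill: limit green to the average of red and blue.
# Equivalent Expression-node formula: g > (r + b) / 2 ? (r + b) / 2 : g
def despill_green(r, g, b):
    limit = (r + b) / 2.0
    return r, min(g, limit), b

# Blue despill: the same idea with the channels swapped.
def despill_blue(r, g, b):
    limit = (r + g) / 2.0
    return r, g, min(b, limit)
```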

Clamp node: used to clamp/control the maximum and minimum values of a color

DespillMadness gizmo: a community gizmo that bundles several despill algorithms with spill-color correction controls

EdgeExtend node: premultiplied by default; automatically detects the edges within an image and extends them outward, filling in empty or problematic areas.

with EdgeExtend
without EdgeExtend

AddMix vs Merge (over): a Merge set to over composites A + B × (1 − a), while AddMix blends A and B through adjustable falloff curves on the alpha, giving more control over soft or motion-blurred edges.
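
To make the baseline concrete, here is the over operation as a minimal per-pixel sketch (plain Python; assumes A is premultiplied):

```python
# Merge (over): result = A + B * (1 - A.alpha), with A premultiplied.
def merge_over(a_rgb, a_alpha, b_rgb):
    return tuple(a + b * (1.0 - a_alpha) for a, b in zip(a_rgb, b_rgb))

# Example: a 50%-transparent grey over black vs. over white.
print(merge_over((0.25, 0.25, 0.25), 0.5, (0.0, 0.0, 0.0)))
print(merge_over((0.25, 0.25, 0.25), 0.5, (1.0, 1.0, 1.0)))
```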

Tips for Effective Keying in Nuke:

Clean Plates: Whenever possible, use clean plates to help with the keying process, especially for difference keying.

Preprocessing: Adjusting the input footage for contrast or color balance can significantly improve keying results.

Combination of Tools: Often, the best results come from combining several keying tools, leveraging the strengths of each to address different aspects of the keying challenge.

Reflection:

I was quite confused about the concept of HSV color space and working with luminance at first, but after going through the example nodes and reading about them, I can see how useful they are in ensuring high-quality, detail-rich mattes for complex visual effects sequences.

Luminance keying is particularly useful for isolating elements from either a very bright (high luminance) or very dark (low luminance) background when traditional chroma keying (based on color) is not feasible.

ReadGeo node

  • Used to Read/Import Alembic (.abc) / .obj / .usd files
  • Can be rendered through ScanlineRender

StMap node

Functionality:

  • An STMap is used for image distortion based on a UV map (STMap).
  • It works by using the red and green channels of an image to define the new X and Y coordinates for pixel remapping
  • The STMap node takes an input image and an ST map and warps the input image according to the coordinates defined by the ST map.
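
A simplified sketch of that remapping logic (NumPy, nearest-neighbour lookup, top-left origin for brevity; Nuke itself uses a bottom-left origin and filtered sampling):

```python
import numpy as np

def stmap_remap(image, stmap):
    """Warp `image` using an ST map whose red/green channels hold
    normalised (0-1) X/Y lookup coordinates. Arrays are H x W x C float."""
    h, w = image.shape[:2]
    xs = np.clip((stmap[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    ys = np.clip((stmap[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
    return image[ys, xs]
```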


PositionToPoints node

  • Converts a position pass (an image where RGB values represent 3D coordinates) into a 3D point cloud.
  • Useful for visualizing the spatial layout of a scene rendered from 3D applications within Nuke.

Import 3D geometry and texture:

  • Use the ReadGeo node to import a 3D model into Nuke. Connect the ReadGeo node to a Scene node to include the model in the 3D scene (see the sketch below).
  • Apply the STMap node to warp or adjust a 2D texture based on the UV mapping specified in the STMap image, using a UV expression. Connect your texture to the STMap node as the source.
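
A minimal sketch of the import step (the file path is hypothetical; the geometry-reader class is ReadGeo2 in recent Nuke versions):

```python
import nuke

# Read an Alembic/OBJ/USD file and add it to a 3D scene.
geo = nuke.createNode('ReadGeo2')
geo['file'].setValue('/path/to/model.abc')   # hypothetical path

scene = nuke.nodes.Scene(inputs=[geo])
```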

Relative Path: reconnect plates or keep their directory links intact by copying the project directory (found in Project Settings)

Example: [python {nuke.script_directory()}]

Paste this in place of the part of the path up to the project root

Example: C:/Users/23037923/OneDrive – University of the Arts London/Nuke/Week_13_CG Nuke/Images/LegoCar_No_PPlane_V3.exr

to [python {nuke.script_directory()}]/Images/LegoCar_No_PPlane_V3.exr

-> This will keep the directory links intact even when you move the project folder around
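
A throwaway helper along those lines (a sketch; the prefix string is the example path above and would need adjusting for your own project):

```python
import nuke

# Rewrite the selected Read node's absolute path to be relative to the
# script directory via the [python {nuke.script_directory()}] expression.
prefix = 'C:/Users/23037923/OneDrive – University of the Arts London/Nuke/Week_13_CG Nuke'
read = nuke.selectedNode()
path = read['file'].value()
read['file'].setValue(path.replace(prefix, '[python {nuke.script_directory()}]'))
```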

Tip: you can copy/paste a node or a setup as Python code to share it

v01

On my first roto attempt, I decided to divide the roto of the wall into four sections with bezier curves. This worked out pretty well, yet I think it can be improved after being reviewed in class.

v02

So I came back and tried again, this time with smaller sections, using B-splines and open splines to really get into the details.

Full Final Nodes

AOV (Arbitrary Output Variable)

  • Is a concept in 3D rendering that represents a custom data channel produced during the rendering process.
  • These channels contain specific types of information about the rendered scene, such as lighting, shadows, color, reflections, and more.
  • AOVs are significant for VFX because they provide more flexibility, letting you control every pass and grade it to match the background image

Key types of AOVs:

  • Direct and Indirect AOVs: Capture light directly from the source and light that has bounced off surfaces.
  • Standard Surface AOVs: Isolate material components such as diffuse, specular, and subsurface scattering for fine-tuning in compositing.
  • Utility AOVs: Used in combination with tools to achieve various effects like defocus, motion blur, re-lighting, etc.

Passes

Passes, often part of the AOVs in a broader sense, are specifically categorized render outputs that represent different elements or effects within a rendered scene. While AOVs provide the technical variables, passes focus on the compositional elements that make up the beauty shot or contribute to visual effects, such as:

  • Beauty Passes: The comprehensive render that includes all visual elements.
  • Lighting Passes: Separate the lighting into its specific types (e.g., key light, fill light) for detailed lighting control.
  • Reflection, Refraction Passes: Isolate reflective and refractive elements, allowing for adjustments to how surfaces interact with light.
  1. Beauty Passes: Used to recreate beauty renders
    • Material AOVs: Used to adjust the Material Attributes (shader) of objects in the scene
  2. Data Passes / Helper passes
    • Provide technical information used to adjust or apply effects in post-production
    • Examples of data passes:
      • Normals Pass
      • Motion Vector Pass: Contains the direction and magnitude of motion for each pixel, enabling post-production motion blur.
      • UV Pass: Stores the UV mapping information, allowing for post-production texturing or adjustments to textures.
      • Position Pass: Gives the exact position of each pixel in 3D space, useful for integrating 3D elements or effects based on location.
      • Material ID/ Object ID Pass: Assigns a unique color to each material or object, simplifying selection and isolation for adjustments.
      • Z-Depth Pass: Offers depth information for each part of the image

Working with render passes:

  • You can break down render passes by using Shuffle nodes to separate out individual AOVs or passes from multi-layer EXR files
  • When we build a CG beauty, we simply combine the information of highlights, midtones and shadows.
Pass naming differs depending on the render engine.

Rules for rebuilding CG assets:

Merge (Plus Lights): Diffuse / Indirect / Specular / Reflections

Merge (Multiply Shadows): AO / Shadows

  • Each pass should be graded separately
  • A final grade can be applied to the entire asset if needed
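
Following the rules above, a rough Python sketch of the plus rebuild (the AOV layer names are renderer-specific assumptions, and the classic Shuffle 'in' knob is assumed; newer Nuke versions ship a reworked Shuffle node):

```python
import nuke

read = nuke.selectedNode()                                   # multi-layer EXR Read node
layers = ['diffuse', 'indirect', 'specular', 'reflection']   # assumed layer names

# Shuffle each light AOV into RGB, then plus them together.
shuffles = []
for layer in layers:
    s = nuke.nodes.Shuffle(inputs=[read])
    s['in'].setValue(layer)
    shuffles.append(s)

rebuild = shuffles[0]
for s in shuffles[1:]:
    m = nuke.nodes.Merge2(inputs=[rebuild, s])
    m['operation'].setValue('plus')
    rebuild = m
```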

LayerContactSheet is used to view all the passes contained in the EXR

  • Enable ‘Show layer names’ to display the name of each channel.

Tip: Ctrl+Shift-drag a node onto another to swap/replace it

Example of different passes

Project3D node

Purpose:

Project3D is used to project a 2D image onto a 3D object. It’s like shining a slide projector onto a physical model; the image “wraps” around the 3D shape according to the geometry and camera position.

Project on a Match-move Geometry

  • Freeze a frame using a FrameHold (choose a frame that is closest to the camera and appears the clearest)
  • Input a 2D image into the Project3D node (this can be a texture, or a premultiplied RotoPaint patch)
  • Freeze the frame again (this minimizes the calculation coming from the RotoPaint node)
  • Premult the patch
  • Use a Project3D node connected to a match-move Camera
  • Project3D > Card > ScanlineRender
  • Merge the original plate with the ScanlineRender’s output
Simple projecting procedure with a RotoPaint patch (sketched in Python below)
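
The same chain as a hedged Python sketch (node class names such as Project3D and Card2 vary between Nuke versions, and the camera name and held frame are placeholders):

```python
import nuke

plate = nuke.selectedNode()
cam = nuke.toNode('Camera1')                 # the match-move camera

hold = nuke.nodes.FrameHold(inputs=[plate])
hold['first_frame'].setValue(1042)           # placeholder hero frame

paint = nuke.nodes.RotoPaint(inputs=[hold])
patch = nuke.nodes.Premult(inputs=[paint])

proj = nuke.nodes.Project3D(inputs=[patch, cam])
card = nuke.nodes.Card2(inputs=[proj])
render = nuke.nodes.ScanlineRender(inputs=[None, card, cam])  # bg, obj/scn, cam

# Composite the projected patch back over the original plate.
comp = nuke.nodes.Merge2(inputs=[plate, render])
```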

MergeMat (Shader): Similar to the Merge node, this is specifically designed for 3D space operations.

Projecting at different distances

In the setup above, we use two FrameHold nodes: one holding the frame closest to the camera and one holding the furthest. We then merge the two Project3D nodes together using a MergeMat. This approach gives a more natural result by projecting the patch at different distances.

Projecting Roto

  • Roto the 2D element

ModelBuilder (NukeX only) – for building geometry. Right-click to choose a mode; right-click to change the selection mode (like in 3D software).

Resources:

https://learn.foundry.com/nuke/content/reference_guide/3d_nodes/project3d.html

3D tracking

1. Preparation of the Footage

  • Import: Bring your footage into Nuke.
  • Pre-Processing: Ensure the footage is ready for tracking. This includes deinterlacing, stabilizing if necessary, and removing any lens distortion. You can also treat it by brightening or sharpening the shot.

2. CameraTracker Node

The CameraTracker node analyses the motion in 2D footage and extrapolates this movement into a 3D camera path. It tracks various points in the footage (usually high-contrast or distinctive features) across frames to determine how the camera was moving when the footage was shot.

  • 3D tracking only works on stationary objects
  • Roto out the areas that you want to avoid tracking (things that move or are not static; be mindful of reflective objects). Then connect the Roto to the CameraTracker node via the ‘mask’ input.
  • When using a Roto this way, set the mask type to ‘Mask Alpha’
  • In the CameraTracker settings, choose the type of source and mask. If you’re unsure about the Lens Distortion and Focal Length, leave the settings at their defaults

In Settings, turn on Preview Features to show the trackers.
After configuring all the settings, click ‘Track’.

Several properties in this tab can help achieve a better track:

  • Number of Features: The number of automatic tracking points created by the tracker. If you increase this, reduce Feature Separation.
  • Detection Threshold: The higher the number, the more precise the tracker has to be in finding trackable points.
  • Feature Separation: The higher the number, the farther apart the tracking features have to be. If you increase Number of Features, reduce this value.
  • Camera Motion: This setting tells Nuke whether the camera is Free Camera, meaning a handheld or free-moving camera, or Rotating Camera, meaning a camera on a tripod.

3. Solving the Camera

After the tracking process is done, click ‘Solve’.

  • Check the error figure in the AutoTracks tab to evaluate your track
  • Click on error-max, then click on the graph and press F to frame it
  • Reduce the max error to 6, then click ‘delete unsolved’ and ‘delete rejected’
  • A solve error of around 1–2 or below is usually good

4. Export the scene

Export by choosing ‘Scene’ or ‘Scene+’.
Make sure ‘link output’ is enabled.

Choose one point to set as the origin, to make sure the tracked scene is not tilted.
Help Nuke understand where the floor is by selecting a few points on the floor in the plate > right-click > ground plane > set to selected.

To check tracking:

  • Select some points > Create > Cube/Plane
  • Plug the object/Card into the Scene > move it to match the ground plane
The floor is now matching the ground plane.
You should create multiple cards from points, in the foreground and background, to make sure everything matches and works perfectly.
Use PointCloudGenerator to see the camera movement: first Analyze, then Track Points, then Delete Rejected Points to remove the red (rejected) points.

Reflection:

Overall, I am happy with how this project turned out, and I appreciate the knowledge that I have gained through the process of making it. One of the most challenging aspects for me at the beginning, one that affected my ability to start, was being unsure of what I wanted to make, which made me really anxious and definitely did not help my creativity. I figured I was being too ambitious and got overwhelmed by all the ideas I wanted to realise and all the software I wanted to learn. I eventually had some reflection and reminded myself that the main aim of this project is for me to learn new software and practice VFX fundamentals, so I needed to lower my expectations for myself and just experiment. I did not have a plan or storyboard at first, and the idea only started coming together as I was creating my Unreal scene. I have had a vision in mind of the theme that my art usually follows, so every idea after that was by all means a trust in my own process and experimentation.

Unreal Engine:
In terms of my scene in Unreal Engine, this was the first time that I have used the software, and I was definitely amazed by how powerful it is in handling such a heavy scene full of foliage (in real time as well). That being said, it took me experimenting on a dozen projects before I was finally satisfied with what I created, which eventually helped me get more fluent with the software itself. If I had had more time or a clearer vision in mind, I would have modelled more of my own objects and learned how to texture them realistically to put into the scene. However, I optimized by playing around with every single material in the scene, from static meshes to foliage to the landscape material. This has definitely made me feel more confident working with models and materials in Unreal. Moreover, I have learned the workflow of importing Alembic files into Unreal for animation. The most time-consuming part of this was setting up and applying the material for every single part of the model. In the future, I want to learn how to animate properly and to use the FBX or USD format to understand the workflow more intensively, and also to give myself more freedom in posing and animating the characters.

Zbrush/Substance Painter/Daz
Throughout this project, I have strengthened my skill in and love for 3D art as I got to learn my two favourite programs, ZBrush and Substance Painter. The leaf boat in particular was something so simple, yet I put much effort into making it my own creation. I definitely spent way more time than I needed to on the sculpting of the boat. If I had to do it again, I would have painted the veins of the leaf with Substance Painter. Yet the workflow that I took luckily taught me valuable lessons in how to work with high and low poly in ZBrush, as well as the baking process before moving to texturing. Considering what I have learned from this, I consider it a win 🙂

Other used softwares:
Blender: Modelling, Particle system, Animation
Nuke: Compositing the bubbles
Photoshop: Texture creation and editing
Premiere Pro: Final Video editing and rendering

What I want to improve in the future projects:
Apart from strengthening my skills in animation and modelling/sculpting/texturing, I definitely want to be more mindful about the filmmaking/cinematic aspect in future projects. I think the final video turned out beautifully, yet it lacks storytelling. I believe if I had solidified a vision earlier on, I would have spent more time on planning, writing a script and making a storyboard. This is the typical workflow in the industry as well, so even though I have always worked purely based on intuition and experimentation, I need to improve on this so I can create more impactful visuals, and for anyone who wants to understand my creative process better.
Furthermore, I want to use Nuke more in my future projects since I think it’s a very powerful piece of software. However, I considered its uses in this project and thought it would be better if I knew how to do 3D tracking and compositing, due to the light and shadow of the scene (which we haven’t fully learned yet). Luckily the bubbles worked out perfectly, as they use a complex shader that I set up, which would be hard to transfer from Blender to Unreal; therefore the use of Nuke in this case is justified and helped blend the bubbles into the scene nicely.

After all, thank you so much to my tutors for all the help in making this project happen!