To be honest, it has been quite a challenge for me to pinpoint exactly what I want to achieve in my personal project. I’m keen to enhance my 3D skills and am less interested in the compositing side of VFX, even though the brief requires merging CG elements with real-life footage.

Fortunately, I’ve recently been commissioned by Gliese Nguyen, a student from LCF, to help create a 3D character and fashion collection for her app prototype. While feeling somewhat lost in my own project, I agreed to work on her ideas, hoping to sharpen my skills and find inspiration through this collaboration.

After Gliese shared her vision and expectations, I was excited to start on the project. This is my first time modeling a character from scratch, as I usually work with pre-made Daz characters. It’s a fantastic opportunity for me to dive deeper into an area of 3D design that I truly enjoy.

Character Sketch – 3 Viewpoints
Character Sketch

MODELLING PROCESS

I must say it was a fun process making this cute character. I referred to tutorials from Crossmind Studio’s course and applied them to this character, and I have learned a lot about shape building and modelling details while maintaining clean topology. I am sure some parts are not perfect, but I tried to keep the mesh all quads as much as I could. For example, the picture below shows the wireframe of the character; I noticed a face with five vertices, which is not ideal (an N-gon), but since it sits on a part of the character’s face that won’t be animated, I hope it won’t be an issue. I will be more mindful of this in the future.

The tutorial I followed taught me a great starting point: begin with a very simple, low-poly model and add details like loop cuts as needed. This approach is easier because you can always add more complexity, but simplifying an overly detailed model can be tricky.

When modeling, I worked on different body parts separately. For instance, I modeled the head and the torso on their own first. To connect them smoothly, I used a technique to ensure the points matched perfectly at the neck, which you can see in the picture below.

I kept the limbs simple, which meant I didn’t need to create detailed hands or feet. However, based on some feedback, I added extra loop cuts in areas like the armpits and at the elbows and knees. This extra step will help make any future animation of the character move more naturally.

Final character model

Clothes Sketches and Modelling

The E-skin collection – Capsule Closet:

  • T-shirt
  • Sweater
  • Shorts
  • Trousers
  • Skirt
  • Hoodie

UV unwrapping and texturing

This was my first time trying UV unwrapping, an experience I found both scary and exciting. UV unwrapping had previously been an area of 3D design where I felt less confident. Through this process, I’ve learned crucial techniques for marking seams effectively so the UVs are distributed evenly. I’ve come to understand that UV unwrapping is a vital yet complex aspect of 3D modeling, essential for a smooth texturing process later on. To enhance realism, especially for the character’s clothing, I used material resources from a variety of sources, aiming for a more lifelike appearance.

Tshirt UV + Logo

With the T-shirt, I was going to make the Annakiki logo as a texture at first. Then I figured it would look better if I made the letters solid geometry and shrinkwrapped them onto the T-shirt.

Skirt UV
Hoodie UV
Short UV
Sweater UV

Especially with this sweater, UV unwrapping helped me map the knit stitches in the exact direction I want them to go.

Model + Outfit Outcome

Critical Reflection: At this stage, I am very pleased with the outcome, as it fulfills my client’s requirements and has turned out wonderfully. The project has significantly enhanced my skills in modeling, UV unwrapping, and texturing. However, I see room for improvement in my approach to topology. As mentioned earlier, I accidentally created some N-gons in the character’s mesh, which could pose challenges for future animation; fortunately, these are located in areas of the character’s face that require minimal movement.

The entire project was completed within a week to meet the client’s deadline, and both the client and I are satisfied with the results. Looking forward, I am now brainstorming the next steps for further developing the character in visual effects.

When thinking about how to effectively blend the character with real-life footage, I reconsidered the character’s appearance to enhance its realism. Originally designed with a metallic sheen, the character felt too synthetic and would look out of place against the live-action background. To ensure a more seamless integration into the real-world scenes, I decided to revise the character’s skin to mimic human-like textures and tones.

Texture Paint

Keying is a compositing technique used in visual effects and post-production to separate a subject from its background. This process involves creating a matte or mask that isolates the subject, allowing compositors to replace the background with a new image or scene.

There are many different types of keying, and they can be used together to achieve the cleanest possible matte.

HSV Color Scale

The HSV (which stands for Hue / Saturation / Value) scale provides a numerical readout of your image that corresponds to the color names contained therein.

It separates color information (hue) from the grayscale (value/lightness), allowing for more straightforward adjustments to color intensity and brightness.

  • R = Hue: Hue literally means colour, measured in degrees from 0 to 360
  • G = Saturation: Saturation pertains to the amount of white light mixed with a hue. It measures the intensity or purity of the color, ranging from 0% (gray) to 100% (full intensity)
  • B = Luminance/Value (Brightness): Luminance describes the perceived brightness of a colour, from 0% (black, no light) to 100% (full brightness, maximum light)
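
As a quick illustration of those three components, here is a minimal sketch using Python’s built-in colorsys module (just for intuition; inside Nuke the conversion is done with the Colorspace node described below):

    import colorsys

    # A saturated orange, in normalized RGB (0-1 range)
    r, g, b = 1.0, 0.5, 0.0

    # colorsys returns hue, saturation and value, each in the 0-1 range
    h, s, v = colorsys.rgb_to_hsv(r, g, b)

    print(f"Hue:        {h * 360:.0f} degrees")  # 'which' colour, 0-360
    print(f"Saturation: {s * 100:.0f}%")         # 0% = gray, 100% = pure colour
    print(f"Value:      {v * 100:.0f}%")         # 0% = black, 100% = full brightness

For this orange, the readout is a hue of 30 degrees at full saturation and full value.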


Colorspace node

The Colorspace node can be used to convert the RGB channels from linear color space to HSV color space, to help analyze the colors of the plate.
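
The same node can also be created through Nuke’s Python API; a minimal sketch, assuming the stock Colorspace knob names (colorspace_in / colorspace_out):

    import nuke

    # Convert the incoming linear RGB into HSV for analysis
    cs = nuke.nodes.Colorspace()
    cs['colorspace_in'].setValue('linear')
    cs['colorspace_out'].setValue('HSV')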


HueCorrect Node

The HueCorrect node can be used to mute, suppress, or desaturate colors:

  • Mute: Shift a color toward another color, toning it down while keeping its luminance
  • Suppress: Remove the color entirely, luminance included
  • Desaturate: Reduce the color’s saturation


Keyer (Luminance Key) node

The Keyer (luminance key) node analyzes the luminance values of the footage, allowing you to select a range of brightness to create a matte or mask based on the brightness levels within an image.

Different operation options to choose from when creating the alpha

Key Features:

  • Flexibility: Allows for keying based on luminance, which is especially useful in monochromatic scenes or when dealing with unevenly lit backgrounds.
  • Detail Preservation: Capable of preserving fine details in the keyed element, such as hair or edges, by carefully adjusting the luminance range and softness of the key.
  • Spill Suppression: While primarily focused on luminance, additional nodes may be used in conjunction with the Keyer to manage spill or color cast from the background, ensuring a clean and natural integration with the new background.
Color grading using a saturation key gives a more natural result than grading without a mask
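
Setting up a luminance key in Python is also straightforward; a hedged sketch below, assuming the stock Keyer knob and menu names (operation, range), with placeholder range values to tune against your plate:

    import nuke

    # Build the alpha from luminance
    k = nuke.nodes.Keyer()
    k['operation'].setValue('luminance key')

    # range = (low softness, low tolerance, high tolerance, high softness);
    # these are placeholder values to adjust per shot
    k['range'].setValue([0.0, 0.1, 0.9, 1.0])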

IBK Gizmo/Colour

In Nuke, IBKGizmo and IBKColour are keying nodes designed to work together for extracting high-quality mattes from footage, especially useful in complex keying scenarios where traditional chroma key methods may struggle.

IBK stands for Image Based Keyer. It operates with a subtractive or difference methodology

IBKGizmo

  • Is the core node used for generating mattes, handling difficult keying challenges.
  • Example: fine hair details, uneven background tones, severely motion blurred edges etc…

IBKColour

  • Works in tandem with IBKGizmo to address color spill issues.
  • After a matte is generated using IBKGizmo, IBKColour helps to neutralize or remove color spill from the background, ensuring that the foreground elements integrate seamlessly with a new background.

ChromaKeyer Node

  • Uses an eyedropper to select the background color you wish to key out
  • Works well with evenly lit screens of saturated color.
  • Takes advantage of GPU devices for efficient processing.

Keylight Node

  • Provides high-quality keys with detailed edge control and effective spill suppression.
  • For challenging keying scenarios, consider using EdgeBlur or Roto to address specific issues or enhance the key.

Primatte Node

  • 3D keyer that uses a special algorithm in 3D color space
  • Offers an Auto-Compute feature for step-by-step alpha data extraction.

Green Despill

Blue Despill
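
A classic average-based despill can be done with a single Expression node: wherever green overshoots the average of red and blue, it is pulled back down to that average. A minimal sketch (by default the Expression node’s expr1 field drives the green channel):

    import nuke

    # Average-based green despill: clamp green to the red/blue average
    despill = nuke.nodes.Expression()
    despill['expr1'].setValue('g > (r+b)/2 ? (r+b)/2 : g')

    # The blue-despill equivalent applies the same idea to the blue channel:
    # despill['expr2'].setValue('b > (r+g)/2 ? (r+g)/2 : b')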

Clamp node: used to clamp/control the maximum/minimum values of a color

Despill madness gizmo

EdgeExtend node: premultiplied by default; automatically detects the edges within an image and extends them outward, filling in empty or problematic areas.

with EdgeExtend
without EdgeExtend

AddMix vs Merge (over):

Tips for Effective Keying in Nuke:

Clean Plates: Whenever possible, use clean plates to help with the keying process, especially for difference keying.

Preprocessing: Adjusting the input footage for contrast or color balance can significantly improve keying results.

Combination of Tools: Often, the best results come from combining several keying tools, leveraging the strengths of each to address different aspects of the keying challenge.

Reflection:

I was quite confused about the concept of HSV color space and working with luminance at first, but after going through example nodes and reading about it, it makes sense how useful it is in ensuring high-quality, detail-rich mattes for complex visual effects sequences.

Luminance keying is particularly useful for isolating elements from either a very bright (high luminance) or very dark (low luminance) background when traditional chroma keying (based on color) is not feasible.

ReadGeo node

  • Used to read/import Alembic (.abc) / .obj / .usd files
  • Can be rendered through ScanlineRender
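
A two-line sketch of importing geometry via Python (assuming the ReadGeo2 node class and a placeholder file path):

    import nuke

    # Import an Alembic file; the path here is a placeholder
    geo = nuke.createNode('ReadGeo2')
    geo['file'].setValue('/path/to/model.abc')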

STMap node

Functionality:

  • An STMap is used for image distortion based on a UV map (STMap).
  • It works by using the red and green channels of an image to define the new X and Y coordinates for pixel remapping
  • The STMap node takes an input image and an ST map and warps the input image according to the coordinates defined by the ST map.
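
Conceptually, the warp is just a per-pixel lookup driven by the map’s red and green channels. A rough numpy sketch of the idea (nearest-neighbour only, ignoring filtering, edge handling and coordinate-origin conventions):

    import numpy as np

    def apply_stmap(src: np.ndarray, stmap: np.ndarray) -> np.ndarray:
        """src: (H, W, 3) image; stmap: (H, W, 2) with normalized X/Y coords."""
        h, w = src.shape[:2]
        out = np.zeros_like(src)
        for y in range(h):
            for x in range(w):
                u, v = stmap[y, x]                 # red = X, green = Y, in 0-1
                sx = min(int(u * (w - 1)), w - 1)  # normalized -> pixel coords
                sy = min(int(v * (h - 1)), h - 1)
                out[y, x] = src[sy, sx]
        return out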


PositionToPoints node

  • Converts a position pass (an image where RGB values represent 3D coordinates) into a 3D point cloud.
  • Useful for visualizing the spatial layout of a scene rendered from 3D applications within Nuke.

Import 3D geometry and texture:

  • Use the ReadGeo node to import a 3D model into Nuke. Connect the ReadGeo node to a Scene node to include the model in the 3D scene.
  • Apply the STMap node to warp or adjust a 2D texture based on the UV mapping specified in the STMap image, using a UV expression. Connect your texture to the STMap node as the source.

Relative Path: reconnect or keep the plates’ directory by copying the project directory (found in Project Settings)

Example: [python {nuke.script_directory()}]

Paste this in place of the project root in the file path

Example: C:/Users/23037923/OneDrive – University of the Arts London/Nuke/Week_13_CG Nuke/Images/LegoCar_No_PPlane_V3.exr

to [python {nuke.script_directory()}]/Images/LegoCar_No_PPlane_V3.exr

-> This will keep the directory links working even when you move the folder around

Tip: you can copy/paste a node or a setup as Python code to share it

For this garage project, I have come up with the idea of integrating a plant communication machine into the scene. I used AI to visualize some interesting shapes for how this machine might look. I want the look to be sci-fi but without heavy machinery; rather, something more organic. If I have time at the end, I also want to make some complex wire connections, with a human integrated into the scene as well.

Through this project, I want to gain a better understanding of how to work with CG in Nuke, combining 2D and 3D elements seamlessly, as well as improving my modelling and texturing skills. I am very happy with the idea and looking forward to bringing it to life 🙂

Variation 1:

Block out shapes
Visualizing with emission and simple materials

At the initial blocking-out and texture-testing stage, I realized I didn’t like the shape of this variation very much. Imagining it placed in the garage plate, I opted for a design that is wider horizontally, and looked for some hard-surface machinery details to make the model more complex and realistic.

Machinery Research

After looking at a few references on Pinterest, I proceeded to try sketching some ideas on top of the plate.

Final Design Visualization:

Eventually, I decided to use AI to visualize some more designs that are tailored to my vision.

AI design visualizations, numbered 1–7

I ended up choosing designs 6 and 7; I think they look pretty without being too complex, yet can still convey the overall look I have been going for.

Most challenging parts:

  • Pipes: I thought they would be easy to make, but it turns out there are many different techniques to choose from depending on your needs. I eventually found one that works best for me and used it throughout the whole design.
  • Importing Quixel Bridge Assets into Blender: Textures error/no Texture (Resolved)

Test render:

v01

On my first roto attempt, I decided to divide the roto of the wall into four sections with Bezier curves. This worked out pretty well, yet I think it can be improved after being reviewed in class.

v02

So I came back and tried again, this time with smaller sections, using B-splines and open splines to really get into the details.

Full Final Nodes

AOV (Arbitrary Output Variable)

  • Is a concept in 3D rendering that represents a custom data channel produced during the rendering process.
  • These channels contain specific types of information about the rendered scene, such as lighting, shadows, color, reflections, and more.
  • AOVs are significant for VFX because they provide the flexibility to control every pass and grade it according to the background image

Key types of AOVs:

  • Direct and Indirect AOVs: Capture light directly from the source and light that has bounced off surfaces.
  • Standard Surface AOVs: Isolate material components such as diffuse, specular, and subsurface scattering for fine-tuning in compositing.
  • Utility AOVs: Used in combination with tools to achieve various effects like defocus, motion blur, re-lighting, etc.

Passes

Passes, often part of the AOVs in a broader sense, are specifically categorized render outputs that represent different elements or effects within a rendered scene. While AOVs provide the technical variables, passes focus on the compositional elements that make up the beauty shot or contribute to visual effects, such as:

  • Beauty Passes: The comprehensive render that includes all visual elements.
  • Lighting Passes: Separate the lighting into its specific types (e.g., key light, fill light) for detailed lighting control.
  • Reflection, Refraction Passes: Isolate reflective and refractive elements, allowing for adjustments to how surfaces interact with light.
  1. Beauty Passes: Used to recreate beauty renders
    • Material AOVs: Used to adjust the Material Attributes (shader) of objects in the scene
  2. Data Passes / Helper passes
    • Provide technical information used to adjust or apply effects in post-production
    • Examples of data passes:
      • Normals Pass
      • Motion Vector Pass: Contains the direction and magnitude of motion for each pixel, enabling post-production motion blur.
      • UV Pass: Stores the UV mapping information, allowing for post-production texturing or adjustments to textures.
      • Position Pass: Gives the exact position of each pixel in 3D space, useful for integrating 3D elements or effects based on location.
      • Material ID/ Object ID Pass: Assigns a unique color to each material or object, simplifying selection and isolation for adjustments.
      • Z-Depth Pass: Offers depth information for each part of the image

Working with render passes:

  • You can break down render passes by using Shuffle nodes to separate out individual AOVs or passes from multi-layer EXR files
  • When we build a CG beauty, we simply combine the information of highlights, midtones and shadows.
Pass naming differs depending on the render engine.

Rules for rebuilding CG assets:

Merge (Plus Lights): Diffuse / Indirect / Specular / Reflections

Merge (Multiply Shadows): AO / Shadows

  • Each pass should be graded separately
  • A final grade can be applied to the entire asset if needed
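
As a hedged sketch of those rules in Nuke Python: the layer names ('diffuse', 'specular', 'AO') and the file path below are stand-ins for whatever your EXR actually contains, and the classic Shuffle node is assumed:

    import nuke

    read = nuke.nodes.Read(file='render.exr')  # placeholder path

    def shuffle_layer(src, layer):
        """Pull one AOV layer out of the multi-layer EXR into rgba."""
        s = nuke.nodes.Shuffle(inputs=[src])
        s['in'].setValue(layer)  # layer names depend on the render engine
        return s

    diffuse  = shuffle_layer(read, 'diffuse')
    specular = shuffle_layer(read, 'specular')
    ao       = shuffle_layer(read, 'AO')

    # Light contributions are summed together...
    lights = nuke.nodes.Merge2(inputs=[diffuse, specular], operation='plus')

    # ...then occlusion/shadow passes are multiplied on top
    beauty = nuke.nodes.Merge2(inputs=[lights, ao], operation='multiply')

Per the rules above, each Shuffle branch would normally get its own Grade node before the merges.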

LayerContactSheet is used to view all the passes contained in the EXR

  • Enable ‘Show layer names’ to display the name of each channel.

Tip: Ctrl+Shift drag a node onto another to swap/replace it

Example of different passes

Project3D node

Purpose:

Project3D is used to project a 2D image onto a 3D object. It’s like shining a slide projector onto a physical model; the image “wraps” around the 3D shape according to the geometry and camera position.

Project on a Match-move Geometry

  • Freeze a frame using FrameHold (choose the frame that is closest to the camera and appears the clearest)
  • Input a 2D image into the Project3D node (this can be a texture, or from a premulted rotopaint patch)
  • Freeze the frame again (this is to minimize calculation from rotopaint node)
  • Premult the patch
  • Use a Project3D node that connects to a Match-move Camera
  • Project3D > Card > ScanlineRender
  • Merge Original Plate with ScanlineRender’s output
Simple projecting procedure with rotopaint patch
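
A minimal Python sketch of that chain; node names such as Read1, Premult1 and Camera1, plus the held frame number, are placeholders, and node class names can differ slightly between Nuke versions:

    import nuke

    plate  = nuke.toNode('Read1')     # original plate (placeholder name)
    patch  = nuke.toNode('Premult1')  # premulted rotopaint patch (placeholder)
    camera = nuke.toNode('Camera1')   # match-move camera (placeholder)

    # Freeze the patch on the clearest frame before projecting
    hold = nuke.nodes.FrameHold(inputs=[patch], first_frame=1001)

    # Project the held patch through the match-move camera onto a card
    proj = nuke.nodes.Project3D(inputs=[hold, camera])
    card = nuke.nodes.Card(inputs=[proj])

    scene = nuke.nodes.Scene(inputs=[card])
    render = nuke.nodes.ScanlineRender()
    render.setInput(1, scene)   # obj/scn input
    render.setInput(2, camera)  # cam input

    # Merge the projected patch back over the original plate
    comp = nuke.nodes.Merge2(inputs=[plate, render], operation='over')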

MergeMat (Shader): Similar to the Merge node, this is specifically designed for 3D space operations.

Project at different distances

In the setup above, we use two FrameHold nodes: one held at the frame closest to the camera and one at the frame furthest away. Then we merge the two Project3D nodes together using MergeMat. This approach ensures a more natural result by projecting the patch at different distances.

Projecting Roto

  • Roto the 2D element

ModelBuilder (NukeX only) – for building geometry. Right-click to choose a mode, and right-click to change the selection mode (similar to 3D software).

Resources:

https://learn.foundry.com/nuke/content/reference_guide/3d_nodes/project3d.html

3D tracking

1. Preparation of the Footage

  • Import: Bring your footage into Nuke.
  • Pre-Processing: Ensure the footage is ready for tracking. This includes deinterlacing, stabilizing if necessary, and removing any lens distortion. You can also treat it by brightening or sharpening the shot.

2. CameraTracker Node

The CameraTracker node analyses the motion in 2D footage and extrapolates this movement into a 3D camera path. It tracks various points in the footage (usually high-contrast or distinctive features) across frames to determine how the camera was moving when the footage was shot.

  • 3D tracking only works on stationary objects
  • Roto out areas that you want to avoid tracking (things that move or are not static; be mindful of reflective objects). Then connect the Roto to the CameraTracker node via the ‘mask’ input.
  • When using a Roto mask, change the mask type to ‘Mask Alpha’
  • In CameraTracker settings, choose the type of source and mask. If you’re unsure about the Lens Distortion and Focal Length, leave settings as default

In Settings, turn on Preview Features to show the trackers
After configuring all the settings, click ‘Track’

Several properties in this tab can help achieve a better track:

  • Number of Features: The amount of automatic tracking points created by the tracker. If you increase this, reduce Feature Separation.
  • Detection Threshold: The higher the number, the more precise the tracker has to be in finding trackable points.
  • Feature Separation: The higher the number, the farther apart the tracking features have to be. If you increase Number of Features, reduce this value.
  • Camera Motion: This setting tells Nuke whether the camera is Free Camera, meaning a free-moving (e.g. handheld) camera, or Rotating Camera, meaning a camera on a tripod.

3. Solving the Camera

After the tracking process is done, click ‘Solve’

  • Check the error figure in the AutoTracks tab to evaluate your track
  • Click on error-max, then click on the graph and press F to frame it
  • Reduce the max error to around 6, then click ‘delete unsolved’ and ‘delete rejected’
  • A solve error of around 1–2 or below is usually good

4. Export the scene

Export by choosing ‘Scene’ or ‘Scene+’
Make sure link output is enabled

Choose one point to set as the origin, to make sure the tracking scene is not tilted, etc.
Help Nuke recognize the floor by selecting a few points on the floor in the plate > right click > ground plane > set to selected

To check tracking:

  • Select some points > Create > Cube/Plane
  • Plug the Card object into the Scene > move it to match the ground plane
The floor is now matching the ground plane
You should create multiple cards from points, in both the foreground and background, to make sure everything matches and works perfectly
Using the PointCloudGenerator to see the camera movement
First Analyze, then Track Points, then Delete Rejected Points to remove the red (rejected) points