v01

On my first roto attempt, I decided to divide the roto of the wall into four sections using Bezier curves. This worked out pretty well, yet after it was reviewed in class I think it can still be improved.

v02

So I came back and tried again, this time with smaller sections, using B-splines and open splines to really get into the details.

Full Final Nodes

AOV (Arbitrary Output Variable)

  • Is a concept in 3D rendering that represents a custom data channel produced during the rendering process.
  • These channels contain specific types of information about the rendered scene, such as lighting, shadows, color, reflections, and more.
  • AOVs are significant for VFX because they provide more flexibility, letting you control and grade every pass individually to match the background image.

Key types of AOVs:

  • Direct and Indirect AOVs: Capture light directly from the source and light that has bounced off surfaces.
  • Standard Surface AOVs: Isolate material components such as diffuse, specular, and subsurface scattering for fine-tuning in compositing.
  • Utility AOVs: Used in combination with tools to achieve various effects like defocus, motion blur, re-lighting, etc.

Passes

Passes, often part of the AOVs in a broader sense, are specifically categorized render outputs that represent different elements or effects within a rendered scene. While AOVs provide the technical variables, passes focus on the compositional elements that make up the beauty shot or contribute to visual effects, such as:

  • Beauty Passes: The comprehensive render that includes all visual elements.
  • Lighting Passes: Separate the lighting into its specific types (e.g., key light, fill light) for detailed lighting control.
  • Reflection, Refraction Passes: Isolate reflective and refractive elements, allowing for adjustments to how surfaces interact with light.
  1. Beauty Passes: Used to recreate beauty renders
    • Material AOVs: Used to adjust the Material Attributes (shader) of objects in the scene
  2. Data Passes / Helper passes
    • Provide technical information used to adjust or apply effects in post-production
    • Examples of data passes:
      • Normals Pass
      • Motion Vector Pass: Contains the direction and magnitude of motion for each pixel, enabling post-production motion blur.
      • UV Pass: Stores the UV mapping information, allowing for post-production texturing or adjustments to textures.
      • Position Pass: Gives the exact position of each pixel in 3D space, useful for integrating 3D elements or effects based on location.
      • Material ID / Object ID Pass: Assigns a unique color to each material or object, simplifying selection and isolation for adjustments.
      • Z-Depth Pass: Offers depth information for each part of the image.

Working with render passes:

  • You can break down render passes by using Shuffle nodes to separate individual AOVs or passes out of multi-layer EXR files.
  • When we build a CG beauty, we simply combine the information of highlights, midtones and shadows (see the sketch after the rebuild rules below).
Pass naming differs depending on the render engine.

Rules for rebuilding CG assets:

Merge (Plus Lights): Diffuse / Indirect / Specular / Reflections

Merge (Multiply Shadows): AO / Shadows

  • Each pass should be graded separately
  • A final grade can be applied to the entire asset if needed
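To make these rules concrete, here is a minimal Nuke Python sketch that shuffles AOV layers out of a multi-layer EXR, pluses the light passes together, then multiplies the shadow passes on top. The file path and layer names are made up for illustration; as noted above, pass naming varies per render engine.

```python
import nuke

def shuffle_layer(src, layer):
    """Route one AOV layer of a multi-layer EXR into rgba."""
    s = nuke.nodes.Shuffle(inputs=[src], label=layer)
    s['in'].setValue(layer)  # classic Shuffle: 'in' picks the source layer
    return s

read = nuke.nodes.Read(file='render.####.exr')  # hypothetical path

# Merge (Plus Lights): Diffuse / Indirect / Specular / Reflections
beauty = shuffle_layer(read, 'diffuse')
for layer in ['indirect', 'specular', 'reflections']:  # assumed layer names
    beauty = nuke.nodes.Merge2(inputs=[beauty, shuffle_layer(read, layer)],
                               operation='plus')

# Merge (Multiply Shadows): AO / Shadows
for layer in ['AO', 'shadows']:  # assumed layer names
    beauty = nuke.nodes.Merge2(inputs=[beauty, shuffle_layer(read, layer)],
                               operation='multiply')
```

Each shuffled branch can take its own Grade before the merges, matching the rule that every pass is graded separately.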

LayerContactSheet is used to view all the passes contained in the EXR

  • Enable ‘Show layer names’ to display the name of each channel.

Tip: Ctrl+Shift-drag a node onto another to swap/replace it

Example of different passes

Project3D node

Purpose:

Project3D is used to project a 2D image onto a 3D object. It’s like shining a slide projector onto a physical model; the image “wraps” around the 3D shape according to the geometry and camera position.

Projecting onto Match-move Geometry

  • Freeze a frame using FrameHold (choose a frame that is closest to the camera and appears the clearest)
  • Input a 2D image into the Project3D node (this can be a texture, or a premultiplied RotoPaint patch)
  • Freeze the frame again (this minimizes calculation from the RotoPaint node)
  • Premult the patch
  • Use a Project3D node that connects to a Match-move Camera
  • Project3D > Card > ScanlineRender
  • Merge Original Plate with ScanlineRender’s output
Simple projecting procedure with rotopaint patch
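Here is a minimal Nuke Python sketch of that tree, following the steps above. The file path, frame number and camera are hypothetical, and input order/class names can vary slightly between Nuke versions, so treat the wiring as a guide rather than a definitive setup.

```python
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')  # hypothetical plate
camera = nuke.nodes.Camera2()                   # your match-move camera

# Freeze the clearest frame, paint the patch, freeze again, premult
hold = nuke.nodes.FrameHold(inputs=[plate], first_frame=1042)  # frame assumed
paint = nuke.nodes.RotoPaint(inputs=[hold])
hold2 = nuke.nodes.FrameHold(inputs=[paint], first_frame=1042)
premult = nuke.nodes.Premult(inputs=[hold2])

# Project the patch through the match-move camera onto a card
project = nuke.nodes.Project3D(inputs=[premult, camera])  # img, cam (order assumed)
card = nuke.nodes.Card(inputs=[project])
render = nuke.nodes.ScanlineRender()
render.setInput(1, card)    # obj/scn input
render.setInput(2, camera)  # cam input

# Merge the rendered patch back over the original plate
comp = nuke.nodes.Merge2(inputs=[plate, render], operation='over')
```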

MergeMat (Shader): Similar to the Merge node, this is specifically designed for 3D space operations.

Projecting at different distances

In the setup above, we use two FrameHold nodes: one on the frame closest to the camera and one on the frame furthest from it. We then merge the two Project3D nodes together using MergeMat. This approach gives a more natural result by projecting the patch at different distances.
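A sketch of that two-distance variant (frame numbers are again assumptions); the MergeMat combines the two projections on the shader side before they reach the card:

```python
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')  # hypothetical
camera = nuke.nodes.Camera2()

def projection_at(frame):
    """Freeze the plate at one frame and project it through the camera."""
    hold = nuke.nodes.FrameHold(inputs=[plate], first_frame=frame)
    return nuke.nodes.Project3D(inputs=[hold, camera])

near = projection_at(1001)  # frame closest to the camera (assumed)
far = projection_at(1100)   # frame furthest from the camera (assumed)

# MergeMat works like Merge, but for shader/3D-space streams
shader = nuke.nodes.MergeMat(inputs=[far, near])
card = nuke.nodes.Card(inputs=[shader])
```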

Projecting Roto

  • Roto the 2D element

ModelBuilder (NukeX only) – for building geometry. Right-click to choose a mode; right-click again to change the selection mode (as in other 3D software).

Resources:

https://learn.foundry.com/nuke/content/reference_guide/3d_nodes/project3d.html

3D tracking

1. Preparation of the Footage

  • Import: Bring your footage into Nuke.
  • Pre-Processing: Ensure the footage is ready for tracking. This includes denoising, deinterlacing, stabilizing if necessary, and removing any lens distortion. You can also treat it by brightening or sharpening the shot.

2. CameraTracker Node

CameraTracker analyses the motion in a 2D footage and extrapolates this movement into a 3D camera path. It tracks various points in the footage (usually high-contrast or distinctive features) across frames to determine how the camera was moving when the footage was shot.

  • 3D tracking only works on stationary objects
  • Roto out areas that you want to avoid tracking (things that move or are not static; be mindful of reflective objects). Then connect the Roto to the CameraTracker node via ‘mask’.
  • In the Roto, change the mask type to ‘Mask Alpha’.
  • In the CameraTracker settings, choose the type of source (sequence or still) and mask. If you’re unsure about the Lens Distortion and Focal Length, leave the settings at their defaults.

In Settings, turn on Preview Features to show the trackers.
After configuring all the settings, click ‘Track’.

Several properties in this tab can help achieve a better track:

  • Number of Features: The amount of automatic tracking points created by the tracker. If you increase this, reduce Feature Separation.
  • Detection Threshold: The higher the number, the more precise the tracker has to be in finding trackable points.
  • Feature Separation: The higher the number, the farther apart the tracking features have to be. If you increase Number of Features, reduce this value.
  • Camera Motion: This setting tells Nuke how the camera moves: Free Camera for a handheld or free-moving camera, or Rotating Camera for a camera on a tripod.
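As a rough sketch, this is how the source and holdout mask might be wired up in Nuke Python. The node classes and setInput() wiring are standard, but the input order and the commented knob names are assumptions; verify them with tracker.knobs() in your version (CameraTracker is NukeX-only).

```python
import nuke

plate = nuke.nodes.Read(file='plate.####.exr', first=1001, last=1100)  # hypothetical

# Roto covering the moving/reflective areas the tracker should ignore
roto = nuke.nodes.Roto(inputs=[plate])

# Source into input 0, mask into input 1 (input order assumed)
tracker = nuke.nodes.CameraTracker(inputs=[plate, roto])

# Assumed knob names -- check tracker.knobs() before relying on these:
# tracker['numFeatures'].setValue(300)   # more features -> lower separation
# tracker['detectionThreshold'].setValue(0.1)
```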

3. Solving the Camera

After the tracking process is done, click ‘Solve’.

  • Check the error figure in the AutoTracks tab to evaluate your track
  • Click on error-max, then click on the graph and press F to frame it
  • Reduce Max error to 6, then click delete unsolved and delete rejected
  • Usually a solve error anywhere around 1-2 or below is good

4. Export the scene

Export by choosing ‘Scene’ or ‘Scene+’.
Make sure link output is enabled.

Choose one point to set as the origin, to make sure the tracking scene is not tilted.
Help Nuke identify the floor by selecting a few points on the floor in the plate > right-click > ground plane > set to selected.

To check tracking:

  • Select some points > Create > Cube/Plane
  • Plug the object (Card) into the Scene > move it to match the ground plane
The floor is now matching the ground plane
You should create multiple cards from points, in the foreground and background, to make sure everything matches and works perfectly.
Use a PointCloudGenerator to see the camera movement.
First Analyze, then Track Points, then Delete Rejected Points to remove the red points. Finally, select all vertices > Groups > Create Group > Bake Selected Groups.

Reflection:

Overall, I am happy with how this project turned out, and I appreciate the knowledge I have gained through the process of making it. One of the most challenging aspects at the beginning, which affected my ability to start, was being unsure of what I wanted to make; this made me really anxious and definitely did not help my creativity. I figured I was being too ambitious and got overwhelmed by all the ideas I wanted to realise and all the software to learn. I eventually reflected and reminded myself that the main aim of this project is for me to learn new software and practice VFX fundamentals, so I needed to lower my expectations of myself and just experiment. I did not have a plan or storyboard at first, and the idea only started coming together as I was creating my Unreal scene. I had a vision in mind of the theme my art usually follows, so every idea after that was, by all means, a trust in my own process and experimentation.

Unreal Engine:
In terms of my scene in Unreal Engine, this was the first time I had used the software, and I was definitely amazed by how powerful it is in handling such a heavy scene full of foliage (in real time as well). That being said, it took me experimenting on a dozen projects before I was finally satisfied with what I created, which eventually helped me get more fluent with the software itself. If I had had more time, or a clearer vision in mind, I would have modelled more of my own objects and learned how to texture them realistically to put into the scene. However, I optimized by playing around with every single material in the scene, from static meshes to foliage to the landscape material. This has definitely made me feel more confident working with models and materials in Unreal. Moreover, I have learned the workflow of importing Alembic into Unreal for animation. The most time-consuming part of this was setting up and applying the material for every single part of the model. In the future, I want to learn how to animate properly and to use the FBX or USD format, both to understand the workflow more deeply and to give myself more freedom in posing and animating the characters.

Zbrush/Substance Painter/Daz
Throughout this project, I have strengthened my skill in, and love for, 3D art by learning my two favourite programs, ZBrush and Substance Painter. The leaf boat in particular was something so simple, yet I put a lot of effort into making it my own creation. I definitely spent way more time than I needed to on the sculpting of the boat. If I had to do it again, I would paint the veins of the leaf in Substance Painter. Yet the workflow I took luckily taught me valuable lessons about working with high and low poly in ZBrush, as well as the baking process before moving on to texturing. Considering what I learned from this, I count it as a win 🙂

Other used softwares:
Blender: Modelling, Particle system, Animation
Nuke: Compositing the bubbles
Photoshop: Texture creation and editing
Premiere Pro: Final Video editing and rendering

What I want to improve in the future projects:
Apart from strengthening my skills in animation and modelling/sculpting/texturing, I definitely want to be more mindful about the filmmaking/cinematic aspect in the future. I think the final video turned out beautifully, yet it lacks storytelling. I believe that if I had solidified a vision earlier on, I would have spent more time on planning, writing a script and making a storyboard. This is the typical workflow in the industry as well, so even though I have always worked purely from intuition and experimentation, I need to improve on this so I can create more impactful visuals, and so that anyone who wants to can understand my creative process better.
Furthermore, I want to use Nuke more in my future projects, since I think it’s a very powerful piece of software. However, I considered its uses in this project and thought most of them would be better left until I know how to do 3D tracking and compositing, due to the light and shadow of the scene (which we haven’t fully learned yet). Luckily, the bubbles worked out perfectly, as they have a complex shader that I set up which would be hard to transfer from Blender to Unreal; the use of Nuke in this case is therefore justified, and it helped blend the bubbles into the scene nicely.

Finally, thank you so much to my tutors for all the help in making this project happen!

Working with sequences and the rendering process was pretty easy once I got the hang of how to do it. I decided to work with a 35mm film format, as I was inspired by Wes Anderson’s movie style. I believe it added to the fantasy film look, like a fairytale coming to life. I also chose a prime lens and relied heavily on camera movement to film my final edits.

Keyframing the camera animation

How to work with sequences:

  • Add level sequence > Save it into a Sequence folder
  • Drag camera into timeline
  • Animate camera using keyframes
Remember to check your fps.
Select all keyframes > Linear for smoother animation.
Cinematics > Movie Render Queue
My render settings. I also added an Anti-Aliasing option to improve quality.
I was mindful of the focal point and composition in every shot, to make sure I could guide the audience’s view throughout.

I was actually testing my render expecting some problems to come up that would need troubleshooting. To my surprise, not only did my render come out beautifully, but the character’s hair appeared much nicer compared to the viewport, which was a problem I had really been trying to fix through shaders. After confirming that the rendering process would be good, I came back and spent some time fixing the character’s materials and makeup, particularly the eyes, which kept rendering black at first. I resolved this by using a different texture.

I am very happy with how the render turned out.

I decided to render my scene as EXR sequences, as this ensures a high-quality render and because I will later import the sequences into Nuke for further compositing.

Since I wanted to add some bubbles to the scene, I thought it would be better to try using Nuke for this. So I created some bubble animation using Blender’s particle system, then rendered it with a transparent background.

Nuke setup to add the bubbles in


Eye Blink Animation

As time was restricted, I decided to use my favourite software, Daz, for human model creation. I had morphed my own characters, so it was very convenient to pose them exactly how I wanted. The model also comes with its own textures, which I could use later in Unreal Engine. My idea behind the pose was to later situate the model inside the leaf boat, then add some butterflies into the scene. I also wanted some movement so the model wouldn’t look like a static mesh, so I animated the eye blink, trying my best to make it dynamic and as lively as possible. If I had had more time I would have animated the whole body as well, but I feel good about the result nevertheless.

I was not particularly happy with the fact that the Alembic import didn’t name or sort out the materials, so it took me quite a bit of time to assemble and set up all the materials one by one. Luckily, I have worked with Daz models before in Blender, and there is a particular order to each body part, which did help me in guessing which one was which. Initially I got frustrated with this, so I tried out different techniques for importing the animation, from the Daz to Unreal plugin (which didn’t work) to FBX, etc… I decided that nothing looked better than the Alembic route, so I went back and tried to finish it.

After setting up all the materials manually, two problems arose.

  • Firstly, the hair was really patchy and not looking great at all, especially at the roots and in areas with heavier shadow. As I had imported the whole model as a single Alembic, there was no way I could separate the hair from the body for modification. I eventually took the time and imported a separate model without the hair first. Then I imported the hair into Blender and converted it to a particle system, exported it as Alembic and imported it again into Unreal Engine. The hair is now read by UE5’s Groom system, so I have more options to adjust the hair roots and tips until it looks better.
  • Secondly, the eye textures turned black after importing into UE5. I resolved this by finding a few different textures to test until it worked out. I also had to set the Cornea material to transparent in order for the irises to show.
Creating Butterfly animation in Blender

I created two simple butterfly animations in Blender, using pictures I found online. I then took the pictures into Photoshop to create their normal maps and opacity maps. I would say this is one of my favourite techniques I’ve learned, and it can be used in so many cases in the future.

Fixing roughness on the butterfly using Material Instance


3D classic vs 3D beta

  • Classic still incorporates FBX and Alembic. Nodes are green by default.
  • Beta adapts to the USD workflow. Nodes are red by default.
  • You can’t mix nodes from the classic 3D system and the new 3D system, with a few notable exceptions such as CameraTracker and DepthGenerator.
  • The new red system is still quite unstable, so we should use the classic version for now.

3D navigation

Tab in viewer – switch between 2D and 3D
Middle mouse – Pan
Alt + right click – Orbit
Scroll – Zoom

Highlight your node setup > Create to save your particular setup as a tool set

Scene Node: is used to manage and assemble 3D geometry, lights, and cameras. It acts as a container/collector for all your 3D elements. When you connect multiple 3D objects to a Scene node, you can manipulate them as a single unit, which simplifies the process of working with complex 3D composites.

Camera Node: In Nuke, the Camera node is a virtual representation of a real or imaginary camera. It’s used in 3D compositing to define the perspective and projection of a scene. You can adjust parameters like focal length, aperture, and field of view. This node is crucial for match-moving and integrating 3D elements into 2D footage, as it helps in replicating the original camera movements and settings.

ScanlineRender Node: This is a rendering node used to render 3D geometry in a scene. It works by converting the 3D data into a 2D image, using the settings from the Camera node.
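A minimal sketch of how these three nodes wire together with Nuke’s Python API (the standard input indices are shown, but they are worth double-checking in your version):

```python
import nuke

# Some 3D geometry and a light, collected by a Scene node
card = nuke.nodes.Card()
sphere = nuke.nodes.Sphere()
light = nuke.nodes.Light()
scene = nuke.nodes.Scene(inputs=[card, sphere, light])

camera = nuke.nodes.Camera2()  # class name varies slightly by Nuke version

# ScanlineRender converts the 3D scene to a 2D image through the camera
render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)   # obj/scn input
render.setInput(2, camera)  # cam input
```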

To view from camera’s POV:
Example of 3D set up
Example of complex 3D setup

Lens Distortion

Resources:
Working with Lens Distortion: https://learn.foundry.com/nuke/content/comp_environment/lens_distortion/adding_removing_lens_distortion.html

Lens distortion refers to the way a camera lens can warp or distort the image it captures. This isn’t necessarily a flaw; in fact, it’s often a characteristic of the lens design. There are mainly two types of lens distortions: barrel distortion and pincushion distortion.

  1. Barrel Distortion: This occurs mostly in wide-angle lenses. It makes the image appear bulged outwards from the center. Think of how a fisheye lens makes things look curved and expansive. It’s often used for artistic effect or to create a sense of immersion.
  2. Pincushion Distortion: The opposite of barrel distortion, this happens more in telephoto lenses. The image appears pinched at the center, making the edges seem to bow inwards. It’s less common in everyday photography and filmmaking.
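To make the two types concrete, here is a small Python sketch of the common simplified radial-distortion model (my own illustration, not Nuke’s exact model; in the usual convention a negative coefficient gives barrel and a positive one gives pincushion):

```python
def distort(x, y, k1, k2=0.0):
    """Apply simple radial distortion to normalized coordinates
    (origin at the optical centre, frame edges near +/-1)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Barrel (k1 < 0): points far from the centre get pulled inward,
# so straight lines near the edges appear to bow outward.
print(distort(0.9, 0.9, k1=-0.1))    # corner point: noticeably moved
print(distort(0.05, 0.05, k1=-0.1))  # near the centre: almost unchanged
```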

LensDistortion Node: This node is used for correcting or applying lens distortion. It allows you to analyze an image for any barrel or pincushion distortion and correct it, or you can intentionally add distortion to match CG elements with live-action footage.

Example of a wide-angle 24mm lens – straight lines tend to be bent
Only available in NukeX

It’s important to use LensDistortion (Undistort) before doing any match-move or tracking work, in order to work with correct information, then use LensDistortion (Redistort) afterwards to return the plate to its original format.

LensDistortion > Analysis > Detect or Draw (Straight Lines) > Solve
Draw horizontal and vertical lines to give the Undistort as much information as possible, especially around the edges. Then click Solve.

LensDistortion (STMap output) introduces “forward” and “backward” as viewing options, and an STMap is lighter to use and faster to view.
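As a sketch, once a distortion map has been baked out, it can be applied with Nuke’s STMap node. The file names here are hypothetical, and the ‘uv’ knob choice assumes the map stores U and V in the red and green channels:

```python
import nuke

plate = nuke.nodes.Read(file='plate.####.exr')       # hypothetical
stmap = nuke.nodes.Read(file='undistort_stmap.exr')  # baked distortion map

# STMap warps the src input (0) using the coordinates in the stmap input (1)
warp = nuke.nodes.STMap(inputs=[plate, stmap])
warp['channels'].setValue('rgba')
warp['uv'].setValue('rgb')  # U from red, V from green (assumed layout)
```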
