Premult = multiplies the input’s RGB channels by its alpha

Keymix = similar to Merge, but accepts unpremultiplied assets. Often used for merging masks

Uses of Premult node:
– Merging unpremultiplied images = to avoid unwanted artifacts (fringing around masked objects)
– Colour correcting premultiplied images: Unpremult → colour correction → Premult

Unpremult = divides the input’s RGB channels by its alpha

Colour correcting premultiplied images:

When you colour correct a premultiplied image, you should first connect an Unpremult node to turn the image into an unpremultiplied one.
Then, perform the colour correction. Finally, add a Premult node to return the image to its original premultiplied state for Merge operations. Typically, most 3D rendered images are premultiplied.
** If the background is black, or even just very dark, the image may be premultiplied.
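A minimal sketch of this unpremult → correct → premult chain using Nuke’s Python API (the Read path is just a placeholder):

    # Colour correct a premultiplied render the safe way.
    import nuke

    read = nuke.nodes.Read(file='render/shot.####.exr')  # placeholder path
    unpremult = nuke.nodes.Unpremult(inputs=[read])       # divide RGB by alpha
    grade = nuke.nodes.Grade(inputs=[unpremult])          # correct in unpremultiplied space
    premult = nuke.nodes.Premult(inputs=[grade])          # multiply RGB by alpha again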

Merge operation examples:

Merge (over)
Merge (mask)
Merge (average)
Merge (overlay)
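For reference, “over” (the default operation) lays A on top of B using A’s alpha. With premultiplied inputs, the standard compositing math is: result = A + B × (1 − alpha_A).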

Reformat = lets you resize and reposition your image sequences to a different format (width and height).
– Allows you to use plates of varying image resolutions in a single script without running into issues when combining them.
– All scripts should include Reformat nodes after each Read node to specify the output resolution of the images in the script.

Colorspace and Linearization
– Colorspace defines how the footage was captured or exported. Most files are non-linear, and knowing the correct colorspace is critical for proper linearization.
– Linearization is the process of converting footage into linear space. All the tools inside Nuke are built around linear math, and working linear also allows the mixing of media types. We need to know what the file’s colorspace was before starting to work on it.
– You can work in LOG/RAW or Linear.
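To make linearization concrete, here is a small Python sketch of the standard sRGB-to-linear conversion (this is the published sRGB formula; Nuke applies this kind of transform for you based on the Read node’s colorspace):

    # Convert one sRGB channel value in [0, 1] to linear light.
    def srgb_to_linear(c: float) -> float:
        if c <= 0.04045:
            return c / 12.92                 # linear segment near black
        return ((c + 0.055) / 1.055) ** 2.4  # power segment

    print(srgb_to_linear(0.5))  # mid-grey in sRGB is ~0.214 in linear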

LUTs, CDLs, and Grades
· LUTs can be used creatively or technically, i.e. converting from log to lin, or adding a “look”
· CDLs are creative, i.e. adding a “look” to a clip
· Graded footage means footage coloured to its “final” look

For color correction we always want to think in terms of:
– Highlights
– Midtones
– Shadows
Two commonly used nodes are: Grade & ColorCorrect

Both give us the chance to grade the highlights, midtones, and shadows (H, M, S) of a shot.

– To grade Highlights we use either GAIN or MULTIPLY.
– To grade Shadows we use LIFT.
– To grade Midtones we use GAMMA.
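For the curious, the Foundry’s documentation describes the Grade node’s maths roughly as below (lift and gain live on the “black” and “white” knobs); treat this as a reference sketch:

    A = multiply * (gain - lift) / (whitepoint - blackpoint)
    B = offset + lift - A * blackpoint
    output = pow(A * input + B, 1 / gamma)

This makes the rules of thumb above visible: gain and multiply scale the bright end, lift moves the black end, and gamma bends the midtones while leaving black and white in place.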

How to match colour using a Grade node (matching constant A to constant B):
– Add a Grade node (G), and pick constant A’s colour as the WHITEPOINT: select the eyedropper, then Ctrl+Shift+click on constant A’s colour.
– Pick constant B’s colour as the GAIN: select the eyedropper, then Ctrl+Shift+click on constant B.
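The same match can be scripted, which makes the mechanics clearer. A sketch, assuming node names ConstantA/ConstantB/Grade1 and using nuke.sample to read a pixel (any position works on a constant):

    import nuke

    src = nuke.toNode('ConstantA')   # the colour we have (assumed name)
    dst = nuke.toNode('ConstantB')   # the colour we want (assumed name)
    grade = nuke.toNode('Grade1')

    for i, ch in enumerate(('rgba.red', 'rgba.green', 'rgba.blue')):
        grade['whitepoint'].setValue(nuke.sample(src, ch, 10, 10), i)
        grade['white'].setValue(nuke.sample(dst, ch, 10, 10), i)  # 'white' is the gain knob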

Two examples of basic grade matches using a Merge node
Match color

Note: Cloning a node keeps the same values/settings across the clones (signified by a C on top of the node)

Primary color

Secondary color

QC = quality control



The first time I did the roto of the mountain, I ran into a few failed attempts/problems that turned out to be good Nuke lessons for me to remember.

  1. Did not use a lot of points for the mountain roto at first: I carried this over from the running-man roto, where you should not put too many points because such rapid movement makes the roto harder. Yet this mountain shot is relatively static, so the mountain doesn’t move much, and it has a lot of bumps/details. So, to get the best result, I went on to make the roto as accurate as I could. I also took my teacher’s advice to break the roto up into parts, though I think I could have gotten away with just one big roto in this case.
  2. I initially used the Transform – Matchmove operation to match the roto with the tracking, which did not work very well, as it cut out the left and bottom parts of the mountain even though I had rotoed outside of the frame.
  3. I finally used the transfer-data technique: basically Ctrl + dragging the transform and center data from the Tracker node to the root of the Roto node. Then it worked perfectly.
Problem with using Transform Matchmove
First failed attempt at mountain roto using Transform Matchmove
Correct way to transform data (in this case)
Final node setup

Rotoscoping is a technique in animation and visual effects where artists trace over live-action footage frame-by-frame to isolate a character or object. This allows for easier compositing or layering of visual effects. It’s often used in films, commercials, and music videos to blend different visual elements seamlessly.

In real-world settings, fast-moving objects appear blurred to the human eye and to camera sensors, so it’s recommended that we use motion blur/blur nodes to mimic this natural phenomenon, helping elements blend seamlessly into new scenes or with added visual effects.

Calculating motion blur in Nuke:
24 frames per second
1/60 s shutter speed
shutter value = 24 fps × 1/60 s = 24/60 = 0.4

roto → motion blur → on
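In script form this is just two knobs on the Roto node; a sketch, assuming the Motion Blur tab’s ‘motionblur’ and ‘shutter’ knobs and a node named Roto1 (verify the knob names in your Nuke version):

    import nuke

    roto = nuke.toNode('Roto1')     # assumed node name
    roto['motionblur'].setValue(1)  # motion blur samples (assumed on-switch)
    roto['shutter'].setValue(0.4)   # 24 fps at 1/60 s shutter = 24/60 = 0.4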

For this week’s homework, I was assigned to do the roto of the running man and the bridge in this particular footage, using the given setups below. At first these complex setups were really confusing to me: I understood the nodes individually, but I didn’t know exactly why and how they are linked to each other. Therefore I did further research, watching tutorials and observing and playing around with the setup, and then what the teacher was presenting here started to make more sense.

Reference: Nuke Rotoscoping Tutorial – Introduction to Roto Node. Available at https://www.youtube.com/watch?v=ELg9ncl-0Wo

This is a standard roto setup, where the plate (Read) and the Roto node are connected to a Copy node, but separated into two different branches (A & B). The Premult node then multiplies the alpha channel with the RGB.

The benefit of this setup (using the Copy node) is that it gives us a more broken-down workflow in our script, so if any further modification is added to the alpha channel, it won’t affect the RGB (the original footage).
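A sketch of the same tree built with Nuke’s Python API (the plate path is a placeholder, and which arrow is A vs B is worth double-checking in the DAG):

    import nuke

    read = nuke.nodes.Read(file='plate/shot.####.exr')  # placeholder path
    roto = nuke.nodes.Roto()

    # Copy the roto's alpha onto the plate branch.
    copy = nuke.nodes.Copy(inputs=[read, roto])  # assumed order: B = plate, A = roto
    copy['from0'].setValue('rgba.alpha')
    copy['to0'].setValue('rgba.alpha')

    premult = nuke.nodes.Premult(inputs=[copy])  # multiply RGB by the copied alpha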

The first setup is a simplified one, where you are required to do the roto manually, frame by frame. In this second setup, the only difference is that the Roto nodes are separated into different branches, then combined using a ChannelMerge node. Additionally, we were introduced to the Tracker node, which can be used to track the movement of a particular area in the shot. It helps speed up the roto process, and is especially useful for tracking the very subtle shifts of static objects caused by camera movement, which we might otherwise overlook.

Reference: Nuke 101 – Part 10: Intro to Tracker Node. Available at https://www.youtube.com/watch?v=ELg9ncl-0Wo

At first, I wanted to try saving time by using the Tracker node to track the movement of the man. But after a few attempts it didn’t seem to work very well, perhaps because of the drastic movements.

The rotoscoping process took me a lot of time, especially at first, when I hadn’t yet figured out the technique and the movement pattern. I found the arms the trickiest parts to work with, as they don’t stay in the same shapes and placements, and it takes a really long time to modify them frame by frame.

Final takeaways:
Even though this first attempt at rotoscoping is not perfect, I have learned a lot through this assignment. First of all, the most important thing to remember is to break the subject down into small parts that you can easily track and modify; I separated the head, the torso, and then both arms and legs into three parts. Additionally, it would have saved more time if I had observed the movement pattern first, in order to find the main keyframes to start with. Thirdly, I initially started the assignment with a mouse, then found that using a Wacom drastically improved the quality and saved much more time on rotoscoping. The result doesn’t satisfy me 100%, yet it reflects exactly where I struggled at first, clearly shown by how the roto of the arms and head was not that good, while I improved much more on the legs after learning the right techniques.

Workshop Notes:

This is my first time learning Nuke, yet with my experience in other 3D software like Blender and Houdini, I am already familiar with node-based workflows. Compositing is an important part of the CG and VFX industry, and I am eager to improve my skill and knowledge on this topic, as I believe it will be beneficial in taking my personal work to another level of quality.

Basic Elements:

About Node-Based Compositing:

In compositing, nodes serve as the fundamental components. To craft a new compositing script, you insert and link nodes, forming an operational network. This setup lets you carry out various manipulations on your images (a procedural workflow).

In node-based software, intricate compositing tasks are simplified by connecting a series of basic image manipulations, known as “nodes.” These nodes come together to create a schematic layout, resembling a flowchart. A composite is represented as a tree-like graph, intuitively mapping out the journey from the original media to the end product. This is actually how all compositing software internally manages composites.

Flexibility and Limitations:

This approach offers high flexibility, allowing you to adjust the settings of an earlier image-processing step while observing the complete composite.
However, it’s worth noting that node-based systems often struggle with keyframing and time-based effects, as they don’t originate from a timeline like layer-based systems do.

Key Points to Remember:

Make sure to convert MP4 or MOV files into image sequences before proceeding, as Nuke is not well-optimized for MOV files. It’s designed to work best with image sequences, which are collections of ordered still frames that together form an animation. Typically, these frames are stored in a single folder and labeled in a sequential manner to maintain their order.

Workflow Steps:

  1. Import your video file.
  2. Tweak the frame sequencing and range using the Retime and FrameRange nodes. Exercise caution not to alter the frame speed in the Retime node.
  3. Add a Write node to export the video as an image sequence.
  4. Choose either DPX or EXR as the file format.
  5. Don’t forget to add hash marks (e.g. ####) to the file name, enabling Nuke to properly write the sequence numbering (see the sketch below).
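Steps 3–5 can also be scripted; a minimal sketch (the output path is a placeholder, and the #### padding is what the hash marks in step 5 refer to):

    import nuke

    read = nuke.toNode('Read1')  # assumed node name
    write = nuke.nodes.Write(inputs=[read],
                             file='renders/shot_v001.####.exr',  # #### = frame padding
                             file_type='exr')

    # Render the Read node's full frame range as an image sequence.
    nuke.execute(write, int(read['first'].value()), int(read['last'].value()))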

Typical VFX pipeline:

Further research

I already had a 3D background in Blender and Daz3D before starting the course, so I found Maya pretty similar and easy to get on with. That said, 3D modelling is something I’m still not very good at, so this is a great opportunity to practise. I personally prefer organic modelling/sculpting to hard-surface modelling, but knowing my weakness is a good place to start improving.

As we were assigned to model a hot air balloon, I went looking for a better reference, as I wanted more complex shapes to practise with. This one caught my eye, since the base reminds me of Greek temples/architecture.

First attempt – before SubDiv
Second attempt – before SubDiv
First attempt – after SubDiv
Second attempt – after SubDiv

I started modelling the hot air balloon in Maya, then switched to Blender when it came to the base, because it has more complex shapes. I also kept finding myself pressing Blender hotkeys unconsciously, so it took me a lot of time to model even simple objects in Maya.

I began with a cube with a SubDiv modifier, then used the extrude/inset tools continuously to modify the mesh. In this particular example of the base, I learned to be mindful about Individual Median Points and normals, as I wanted to model four individual faces/sides to look the same. After the first base, everything seemed much easier, as I got more used to the technique. I finished off by using curves to create the strings dangling around the hot air balloon.

What I have realized throughout my modelling process up to this point is that I tend to go quite high-poly, as I care a lot about the details. Yet sometimes I don’t need so many polygons to create the same shape. Optimizing my models is something I want to improve on in the future.

Final result

Overall I am very happy with the result, and this was a great opportunity for me to warm up my modelling skills again. I have decided to use Blender to finish this project, as well as for future modelling, for time efficiency.

For this week’s task, with its given concept of time, it was my opportunity to relive my passion for photography. Time has always existed as a concept to me, as it is abstract and subjective. Humans create metrics to measure and keep track of time, yet there are moments that pass by merely through us experiencing and engaging in them, giving us different sensations and hence challenging each of our perceptions of time. Reality only exists in this present moment.

Given my understanding of time, I gave myself more “time” to really look at, observe, and appreciate the small moments within my days. The pictures might generate different meanings for other people, but I am the one and only who knows and deeply understands that I was really trying to keep those “present” moments in my memory through the act of taking pictures.

Unreal Engine 5 for VFX/Cinematography

Pre-visualization (Previs)
• Prototyping: Quick assembly of scenes to visualize what the final output might look like.
• Real-time rendering allows us to make creative decisions early in the production process.

Real-time rendering & animation
• Real-time rendering capabilities allow for immediate feedback on character animations, lighting, and textures.
• Real-time motion capture

Virtual Production
• LED Walls and Virtual Sets: The engine can be used to drive large LED walls to create photorealistic backdrops, significantly reducing the need for location shoots.
• Integration with real-time tracking systems allows virtual cameras to move with physical cameras, making the virtual environment more interactive and realistic.

Post-production
• Compositing
• Particle Systems and Simulations: For scenes requiring complex particle effects (smoke, fire, etc.), these can be created and rendered within the engine.

Final Output
• High-quality Rendering
• Multi-platform Delivery

Benefits for VFX artists
• Cost-Efficiency: Real-time rendering can save both time and money compared to traditional methods.
• Collaboration: Multi-user editing capabilities make it easier for different departments to collaborate in real-time.

Workshop notes:

Important setting in Editor preferences:
• Invert Middle Mouse Pan
• Orbit camera around selection
• Enable Arcball Rotate
• Enable Screen Rotate
• Left Mouse Drag Does Marquee in Level Sequence Editor

Real-time Project Setup using Lumen and Nanite:
• Occlusion Culling = turn off
• Enable Virtual Texture Support = on
• Global Illumination = Lumen
• Reflection Method = Lumen
• Software Raytracing Mode = Global Tracing
• Use Hardware Raytracing when available = on
• Support Hardware Raytracing = on
• Allow Static Lighting = off (if you don’t have any baked lighting)
• Separate Translucency = off (better Depth of Field)
• SM6 = on
• Default RHI = DirectX12
• Shadow Map Method = Virtual Shadow Maps (Beta)
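As a rough cross-reference, most of these project settings land in Config/DefaultEngine.ini as console variables. The mapping below is my best reading of the list above for UE 5 — verify the variable names against your engine version:

    [/Script/Engine.RendererSettings]
    ; Occlusion Culling off, Virtual Texture Support on
    r.AllowOcclusionQueries=0
    r.VirtualTextures=1
    ; Lumen GI and reflections, global software tracing, hardware RT when available
    r.DynamicGlobalIlluminationMethod=1
    r.ReflectionMethod=1
    r.Lumen.TraceMeshSDFs=0
    r.Lumen.HardwareRayTracing=1
    r.RayTracing=1
    ; No baked lighting, better DoF, virtual shadow maps
    r.AllowStaticLighting=False
    r.SeparateTranslucency=0
    r.Shadow.Virtual.Enable=1

    [/Script/WindowsTargetPlatform.WindowsTargetSettings]
    DefaultGraphicsRHI=DefaultGraphicsRHI_DX12
    +D3D12TargetedShaderFormats=PCD3D_SM6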