Year: 2024
Exploratory Practice – Personal Project
To be honest, it has been quite a challenge for me to pinpoint exactly what I want to achieve in my personal project. I'm keen to enhance my 3D skills and am less interested in the compositing side of VFX, even though the brief requires merging CG elements with real-life footage.
Fortunately, I’ve recently been commissioned by Gliese Nguyen, a student from LCF, to help create a 3D character and fashion collection for her app prototype. While feeling somewhat lost in my own project, I agreed to work on her ideas, hoping to sharpen my skills and find inspiration through this collaboration.
After Gliese shared her vision and expectations, I was excited to start on the project. This is my first time modeling a character from scratch, as I usually work with pre-made Daz characters. It’s a fantastic opportunity for me to dive deeper into an area of 3D design that I truly enjoy.


MODELLING PROCESS
I must say it was a fun process making this cute character. I referred to tutorials from Crossmind Studio's course and applied them to this character, and I learned a lot about shape building and modelling details while maintaining clean topology. I'm sure some parts are not perfect, but I tried to keep the mesh all quads as much as I could. For example, the picture below shows the wireframe of the character: I noticed there is a face with five vertices, which is not ideal (an N-gon), but since it sits on the character's face, which won't be animated, I hope it won't be an issue. I will be more mindful about this in the future.
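Blender can also flag these automatically. As a rough bpy sketch (assuming the character mesh is the active object), the following selects every face with more than four sides so stray N-gons are easy to spot:

```python
# Rough Blender (bpy) sketch: select faces with more than 4 vertices (N-gons)
import bpy

obj = bpy.context.active_object          # assumes the character mesh is active
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type='FACE')
bpy.ops.mesh.select_all(action='DESELECT')
# Select faces with MORE than 4 sides so they show up highlighted in the viewport
bpy.ops.mesh.select_face_by_sides(number=4, type='GREATER')
```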

The tutorial I followed taught me a great starting point: begin with a very simple, low-poly model and add details like loop cuts as needed. This approach is easier because you can always add more complexity, but simplifying an overly detailed model can be tricky.
When modeling, I worked on different body parts separately. For instance, I modeled the head and the torso on their own first. To connect them smoothly, I used a technique to ensure the points matched perfectly at the neck, which you can see in the picture below.
I kept the limbs simple, which meant I didn’t need to create detailed hands or feet. However, based on some feedback, I added extra loop cuts in areas like the armpits and at the elbows and knees. This extra step will help make any future animation of the character move more naturally.





Clothes Sketches and Modelling





The E-skin collection – Capsule Closet:
- T-shirt
- Sweater
- Shorts
- Trousers
- Skirt
- Hoodie
UV unwrapping and texturing
This was my first time trying UV unwrapping, an experience I found both scary and exciting. UV unwrapping had previously been an area of 3D design where I felt less confident. Through this process, I learned crucial techniques for marking seams effectively so that the UVs are nicely distributed. I've come to understand that UV unwrapping is a vital yet complex part of 3D modelling, essential for a smooth texturing process later on. To enhance realism, especially for the character's clothing, I used materials gathered from a variety of sources, aiming for a more lifelike appearance.
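For reference, the basic seam-and-unwrap step can also be scripted. This is only a minimal bpy sketch, assuming the garment mesh is active and the seam edges are already selected in Edit Mode:

```python
# Minimal bpy sketch: mark the selected edges as seams, then unwrap the mesh
import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.mark_seam(clear=False)                   # turn selected edges into seams
bpy.ops.mesh.select_all(action='SELECT')              # unwrap the whole mesh
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)  # unwrap using the marked seams
```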

With the T-shirt, I was originally going to make the Annakiki logo as a texture. Then I figured it would look better if I made the letters solid and shrinkwrapped them onto the T-shirt.
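The shrinkwrap step itself is just a modifier on the logo object. Here is a small bpy sketch of the idea, where the object names ("AnnakikiLogo", "Tshirt") are placeholders rather than my actual scene names:

```python
# Hypothetical bpy sketch: wrap solid logo letters onto the T-shirt surface
import bpy

logo = bpy.data.objects["AnnakikiLogo"]   # placeholder name for the solid letters
tshirt = bpy.data.objects["Tshirt"]       # placeholder name for the garment mesh

mod = logo.modifiers.new(name="Shrinkwrap", type='SHRINKWRAP')
mod.target = tshirt
mod.wrap_method = 'NEAREST_SURFACEPOINT'  # snap the letters onto the nearest surface
mod.offset = 0.002                        # small offset so they sit just above the cloth
```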




With this sweater in particular, UV unwrapping let me map the knit stitches in exactly the direction I wanted.

Model + Outfit Outcome
Critical Reflection: At this stage, I am very pleased with the outcome, as it fulfills my client's requirements and has turned out wonderfully. The project has significantly enhanced my skills in modelling, UV unwrapping, and texturing. However, I see room for improvement in my approach to topology. As mentioned earlier, I accidentally created some N-gons in the character's mesh, which could pose challenges for future animation; fortunately, these are located in areas of the character's face that require minimal movement.
The entire project was completed within a week to meet the client's deadline, and both the client and I are satisfied with the results. Looking forward, I am now brainstorming the next steps for developing the character further in visual effects.








When thinking about how to blend the character effectively with real-life footage, I reconsidered the character's appearance to enhance its realism. Originally designed with a metallic sheen, the character felt too synthetic and would look out of place against the live-action background. To ensure a more seamless integration into the real-world scenes, I decided to revise the character's skin to mimic human-like textures and tones.


Exploratory Practice – Group Project
1. Concept // Moodboard //Storyboard
It has been a while since my last blog post. Since my mental and physical health has been really rough this year, I have been trying really hard to maintain a healthier lifestyle and get back on track. I am really grateful to kick start again on this project with my classmates – Moonju, Alwin and Wayne.
Initially, I was particularly interested in the sustainability theme, as it aligns with my aesthetic and vision, and I believe everyone in my team has different skill sets that can support each other to create something good together. After a few brainstorming sessions, we agreed to work on a campaign for green vehicles, inspired by brands like Lime, unu, VOI and Felyx.
Link to our Miro board:

Moonju and I started working on the storyboard together. I found it hard to express my ideas at first due to my not-so-good drawing skills, but everyone seemed to understand the idea and was happy to work on it in the end 🙂
I have always been intrigued by the biome / plant simulations made in Houdini or Blender's geometry nodes and truly wanted to learn and experiment with them, hence incorporating these elements into this project.

After my first rough storyboard sketch, Moonju tried to turn it into an animatic video so we can better visualise how long each shot is going to take. We ended up removing the phone shots to keep the whole video within 1-minute time frame.


As we decided on the roles we're taking for the project, I affirmed my desire to contribute to and strengthen my skills in 3D design and directing.
Here are our individual roles, along with our planned equipment, filming dates and location.
2. Filming – Shooting Location #1 – Green wall
The first location I picked for the project was a building opposite my house, close to the uni. As we were searching for Lime's parking spots to scout for a scooter, we found this building with a beautiful wall full of plants and leaves. This wasn't what we had originally planned, but we were happy to include the wall as it fits our theme, hence the revision of the storyboard.
We actually attempted to film twice. On the first day we planned to shoot, we rented a lot of equipment: a GoPro, a Blackmagic camera, a gimbal, etc. I tried to set up the gimbal since I had used one before, but it was really hard to balance, and with the Blackmagic camera mounted it was very heavy to operate. On top of that, the weather on the first shooting day was so bad, super cold and windy, that we decided to postpone to another day. Everyone ended up coming back to my house to warm up, and I made them some hot chocolate 🙂

In the fourth week, we managed to confirm another day of shooting and were blessed with nice sunny weather 🙂. We started with the building scenes, and luckily there were few people around, allowing us to shoot without interruptions. However, setting up the camera proved difficult due to our inexperience. Despite positioning it far from the actor, the angle was too narrow. We discovered this was caused by sensor cropping in the camera settings between 4K and 6K resolutions. We tried to adjust it, but the SD card couldn't handle the high bitrate required for 6K, causing the camera to stop recording. As a result, we opted to stick with 4K.
Overall, I thoroughly enjoyed this filming experience as it allowed me to hone my videography skills. I was very meticulous about the outcome of each shot, taking the lead in most of the filming and directing. I ensured our actor, Minghan, felt as comfortable as possible and communicated clearly with him about how I wanted him to perform.


3. Filming – Shooting Location #2 – Waterloo Tunnel
We chose Waterloo Tunnel because its entrance, covered with plants, perfectly suited our vision. We decided to conduct a test shoot first, as renting Lime scooters costs money by the minute, and we wanted to avoid unnecessary expenses.
On the test shoot day, we used Wayne’s bike to experiment with different angles and equipment. Initially, I attached my octopus tripod to the front of his bike as he rode into the tunnel, but the footage turned out shaky due to the uneven brick ground. We then tried using a chest mount, which I initially disliked because of the angle. However, we realized the perspective would differ if the rider were on a scooter instead of a bike. I instructed Wayne to ride in a straight line and keep his torso steady to ensure smoother footage.
On the official filming day, we rented two Lime scooters to ensure smoother footage compared to filming on a bike. Patience was key, as the tunnel was frequently crowded with passersby. While observing, we noticed a sign at the tunnel entrance and decided to incorporate it into our project by adding a slogan.
Overall, I really enjoyed these shooting days. I captured exactly what we needed for the project, enhancing my filming and directing skills. Although I was uncertain about transitioning from the real-life tunnel to the CG tunnel, I knew it would require some experimentation.
Afterward, Moonju used the footage I had filmed to create a rough cut and see how the entire video would look. Fortunately, everything fit within the 1-minute time frame.
Green Screen
We booked the green screen room and a Blackmagic camera for filming, intending to bring the scooter into the green screen room. However, the staff at reception didn't allow it due to health and safety concerns, citing the scooter's battery as a potential fire hazard. They informed us that we needed permission from the health and safety team at least two days in advance.
With our original plan disrupted, we quickly found an alternative. We rented a green screen kit from the kit room, which included a large green cloth and poles. We relocated to the parking lot of my building and successfully completed the filming there.






CG Tunnel


For the tunnel scene, I envisioned transforming the tunnel into a visually captivating environment resembling an aquarium. However, instead of marine life, this unique space would be inhabited by a lush, vibrant world of plants.

I imagined the walls and ceiling of the tunnel teeming with a variety of greenery, creating a mesmerizing, garden-like atmosphere that envelops the viewer. The idea was to evoke the feeling of walking through an underwater realm, but with foliage thriving in place of fish, providing a surreal and enchanting visual experience.



This concept aimed to blend elements of nature and fantasy, creating a stunning, immersive setting for our project.
At the start of this project, I was trying to decide whether to use Unreal Engine or Blender. Unreal is great for creating realistic plants, trees, and foliage, so I thought it might be the best choice. I started with Unreal, hoping it would help make the environment in my project look amazing. However, I ran into issues with the glass materials, which didn’t look right for what I needed.
Since I’m more comfortable with Blender, I decided to switch back to it. I used some helpful add-ons in Blender to place plant models around my scene and make everything work better.
This experience taught me a lot about choosing the right tools for a project. It showed me that it’s okay to try new things, but sometimes, sticking with what you know best can help you get the best results. I learned that being flexible and knowing when to switch methods is important in making sure a project turns out well.


I also added some imperfections to the glass to make it more realistic.
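For anyone curious, the basic idea is to break up the glass roughness with a noise texture. This is only an illustrative bpy sketch with guessed values, not my exact node setup:

```python
# Illustrative bpy sketch: drive a Glass BSDF's roughness with noise for subtle imperfections
import bpy

mat = bpy.data.materials.new("Imperfect_Glass")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

glass = nodes.new('ShaderNodeBsdfGlass')
noise = nodes.new('ShaderNodeTexNoise')
noise.inputs['Scale'].default_value = 40.0        # fine variation across the surface
noise.inputs['Detail'].default_value = 6.0

# Remap the 0-1 noise into a low roughness range so reflections only smear slightly
map_range = nodes.new('ShaderNodeMapRange')
map_range.inputs['To Min'].default_value = 0.0
map_range.inputs['To Max'].default_value = 0.15
links.new(noise.outputs['Fac'], map_range.inputs['Value'])
links.new(map_range.outputs['Result'], glass.inputs['Roughness'])

output = nodes.get('Material Output')
links.new(glass.outputs['BSDF'], output.inputs['Surface'])
```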

Final Outcome
Collaborative Unit – Character Animation
With this idea, there would be two animations I had to work on. It was going to be my first time working intensively on animating a realistic character, so despite my lack of experience, this was something I was excited to learn and try out. I had already made a character of my own in Daz, and I opted to use the software for the animation as well, since animating in Daz is a bit more intuitive than traditional animation, which can save me some time.
I decided to use 30 fps (25 fps felt too realistic, and 60 fps, while perfect for slow motion, would be too heavy and take too long to render later on), aiming to make the final video around one minute long. At 30 fps that works out to roughly 2,000 frames: about 1,000 for the running animation and 1,000 for the giant girl.
Running Girl Animation
I thought this animation would be an easy one, as I found a few running presets in Daz, ready to use. However, those animations are pretty fast, and I wanted the sequence to be slow so that we could showcase more of the environment and the movement of the character.
I spent a lot of time trying to slow the running animation down, opting to manually move each and every keyframe so I would have more control over the flow of the sequence. I wanted the first few seconds to be significantly slower, with the camera zooming into her face as she slowly opens her eyes and then starts running. Overall, I slowed the animation down by roughly 4× so it fits into the ~1,000-frame range, adding some eye-blink animation and expressions as well.
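The retiming itself was done by hand in Daz, but the same idea can be scripted. As an illustration only, here is a Blender bpy sketch that scales every keyframe's time by 4 (the action name "RunCycle" is hypothetical):

```python
# Illustrative bpy sketch of a 4x retime: push every keyframe 4x later in time
import bpy

action = bpy.data.actions["RunCycle"]      # hypothetical action name
for fcurve in action.fcurves:
    for key in fcurve.keyframe_points:
        key.co.x *= 4.0                    # keyframe time
        key.handle_left.x *= 4.0           # keep Bezier handles in proportion
        key.handle_right.x *= 4.0
    fcurve.update()
```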

I intentionally left 200 frames at the beginning, in case we need to do any cloth/hair simulation later on. The screenshot shows the overall keyframes of how this animation plays out.

Garage Project (P2): Compositing




References
Nuke: Green screen
Keying is a compositing technique used in visual effects and post-production to separate a subject from its background. This process involves creating a matte or mask that isolates the subject, allowing compositors to replace the background with a new image or scene.
There are many different types of keying, and they can be used together to achieve a cleaner final matte.
HSV Color Scale
The HSV (which stands for Hue / Saturation / Value) scale provides a numerical readout of your image that corresponds to the color names contained therein.
It separates color information (hue) from the grayscale (value/lightness), allowing for more straightforward adjustments to color intensity and brightness.

R = Hue: hue literally means colour, measured in degrees from 0 to 360.
G = Saturation: saturation pertains to the amount of white light mixed with a hue. It measures the intensity or purity of the color, ranging from 0% (gray) to 100% (full intensity).
B = Luminance/Value (Brightness): luminance describes the perceived brightness of a colour, from 0% (black, no light) to 100% (full brightness, maximum light).
Colorspace node
The Colorspace node can be used to convert the RGB channels from linear color space to HSV color space, which helps when analysing the colors of the plate.
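A quick Nuke Python sketch of that conversion (the knob names and the "HSV" option are how I understand the Colorspace node, so worth double-checking):

```python
# Nuke Python sketch: convert a plate from linear to HSV for inspection
import nuke

read = nuke.nodes.Read(file="plate.####.exr")     # placeholder path
to_hsv = nuke.nodes.Colorspace()
to_hsv["colorspace_out"].setValue("HSV")          # input stays at the default (linear)
to_hsv.setInput(0, read)
# Downstream of this node: red = hue, green = saturation, blue = value
```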

HueCorrect Node

The HueCorrect node can be used to mute, suppress or desaturate colors:
Mute: shifts the color towards another color and tones it down, keeping luminance
Suppress: removes the color entirely, along with its luminance
Desaturate: reduces the color's saturation

Keyer (Luminance Key) node
The Keyer (luminance key) node analyzes the luminance values of the footage, allowing you to select a range of brightness to create a matte or mask based on the brightness levels within an image.


Key Features:
- Flexibility: Allows for keying based on luminance, which is especially useful in monochromatic scenes or when dealing with unevenly lit backgrounds.
- Detail Preservation: Capable of preserving fine details in the keyed element, such as hair or edges, by carefully adjusting the luminance range and softness of the key.
- Spill Suppression: While primarily focused on luminance, additional nodes may be used in conjunction with the Keyer to manage spill or color cast from the background, ensuring a clean and natural integration with the new background.
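A minimal Nuke Python sketch of a luminance key setup; the "operation" and "range" knob names and values are my assumptions, so check them against the actual Keyer node:

```python
# Nuke Python sketch: basic luminance key with the Keyer node
import nuke

plate = nuke.nodes.Read(file="plate.####.exr")     # placeholder path
key = nuke.nodes.Keyer()
key["operation"].setValue("luminance key")         # key on brightness rather than colour
key.setInput(0, plate)
# The range knob has four handles (A, B, C, D) defining the soft in/out points of the matte
key["range"].setValue([0.0, 0.1, 0.8, 1.0])
```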



IBK Gizmo/Colour
In Nuke, IBKGizmo and IBKColour are keying nodes designed to work together for extracting high-quality mattes from footage, especially useful in complex keying scenarios where traditional chroma key methods may struggle.
IBK stands for Image Based Keyer. It operates with a subtractive or difference methodology.
IBKGizmo
- Is the core node used for generating mattes, handling difficult keying challenges.
- Examples: fine hair details, uneven background tones, severely motion-blurred edges, etc.
IBKColour
- Works in tandem with IBKGizmo to address color spill issues.
- After a matte is generated using IBKGizmo, IBKColour helps to neutralize or remove color spill from the background, ensuring that the foreground elements integrate seamlessly with a new background.
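A rough sketch of the usual pairing in Nuke Python; the class names (IBKColourV3 / IBKGizmoV3) and the input order are assumptions, so verify them in your build:

```python
# Rough Nuke Python sketch of the IBKColour -> IBKGizmo pairing
import nuke

plate = nuke.nodes.Read(file="greenscreen.####.exr")   # placeholder path

ibk_colour = nuke.createNode("IBKColourV3")            # builds the clean screen colour
ibk_colour.setInput(0, plate)

ibk_gizmo = nuke.createNode("IBKGizmoV3")              # pulls the matte
ibk_gizmo.setInput(0, plate)                           # fg: the original plate (assumed input order)
ibk_gizmo.setInput(1, ibk_colour)                      # c: screen colour from IBKColour (assumed)
```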

ChromaKeyer Node
- Uses an eyedropper to select the background color you wish to key out
- Works well with evenly lit screens of saturated color.
- Takes advantage of GPU devices for efficient processing.

Keylight Node
- Provides high-quality keys with detailed edge control and effective spill suppression.
- For challenging keying scenarios, consider using EdgeBlur or Roto to address specific issues or enhance the key.

Primatte Node

- 3D keyer that uses a special algorithm in 3D color space
- Offers an Auto-Compute feature for step-by-step alpha data extraction.
Green Despill / Blue Despill: remove leftover green or blue screen spill from the foreground (see the sketch after these notes)
Clamp node: used to clamp/control the maximum/minimum values of a color
Despill madness gizmo
EdgeExtend node: Premult by default, automatically detects the edges within an image and extends them outward, filling in empty or problematic areas.
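As an example of what a despill can look like under the hood, here is a sketch of a classic green despill using an Expression node (this is the generic technique, not necessarily the exact maths of the despill madness gizmo):

```python
# Nuke Python sketch: classic green despill in an Expression node.
# If green exceeds the average of red and blue, clamp it down to that average.
import nuke

despill = nuke.nodes.Expression()
despill["expr1"].setValue("g > (r+b)/2 ? (r+b)/2 : g")   # expr1 drives the green channel
```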


addmix vs merge over:
Tips for Effective Keying in Nuke:
– Clean Plates: Whenever possible, use clean plates to help with the keying process, especially for difference keying.
– Preprocessing: Adjusting the input footage for contrast or color balance can significantly improve keying results.
– Combination of Tools: Often, the best results come from combining several keying tools, leveraging the strengths of each to address different aspects of the keying challenge.
Reflection:
I was quite confused about the concept of HSV color space and working with luminance at first, but after going through example nodes and reading about it, it makes sense how useful it is in ensuring high-quality, detail-rich mattes for complex visual effects sequences.
Luminance keying is particularly useful for isolating elements from either a very bright (high luminance) or very dark (low luminance) background when traditional chroma keying (based on color) is not feasible.
Nuke: CG (Part 2)
ReadGeo node
- Used to Read/Import Alembic (.abc) / .obj / .usd files
- Can be rendered through ScanlineRender


StMap node
Functionality:
- An STMap is used for image distortion based on a UV map (STMap).
- It works by using the red and green channels of an image to define the new X and Y coordinates for pixel remapping
- The STMap node takes an input image and an ST map and warps the input image according to the coordinates defined by the ST map.
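A small Nuke Python sketch of that setup; the input order (0 = src, 1 = stmap) is how I understand the node, so it is worth double-checking in the Node Graph:

```python
# Nuke Python sketch: warp a texture through a UV (ST) map
import nuke

texture = nuke.nodes.Read(file="texture.exr")       # placeholder paths
uv_pass = nuke.nodes.Read(file="uv_pass.exr")

stmap = nuke.nodes.STMap(uv="rgb")                  # red/green channels drive the new X/Y
stmap.setInput(0, texture)                          # src: the image to be warped (assumed order)
stmap.setInput(1, uv_pass)                          # stmap: the UV coordinates (assumed order)
```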

PositionToPoints node
- Converts a position pass (an image where RGB values represent 3D coordinates) into a 3D point cloud.
- Useful for visualizing the spatial layout of a scene rendered from 3D applications within Nuke.

Import 3D geometry and texture:
- Use the ReadGeo node to import a 3D model into Nuke. Connect the ReadGeo node to a Scene node to include the model in the 3D scene.
- Apply the STMap node to warp or adjust a 2D texture based on the UV mapping specified in the STMap image, using a UV expression. Connect your texture to the STMap node as the source.
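Putting that together, a minimal Nuke Python sketch of the 3D setup; the class names (ReadGeo2, Camera2) are how the nodes are registered in the builds I have used and may differ in newer versions, and the file path is a placeholder:

```python
# Minimal Nuke Python sketch: ReadGeo -> Scene -> ScanlineRender with a camera
import nuke

geo = nuke.nodes.ReadGeo2(file="model.abc")    # ReadGeo is registered as class ReadGeo2
cam = nuke.nodes.Camera2()                     # may be Camera3/Camera4 in newer Nuke builds
scene = nuke.nodes.Scene()
scene.setInput(0, geo)

render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)                      # obj/scn input
render.setInput(2, cam)                        # camera input
```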

Relative Path: reconnect or keep the plates' directory by copying the project directory (found in the Project Settings).
Example: [python {nuke.script_directory()}]
Paste this in place of the part of the path before the root of the folder.
Example: C:/Users/23037923/OneDrive – University of the Arts London/Nuke/Week_13_CG Nuke/Images/LegoCar_No_PPlane_V3.exr
becomes [python {nuke.script_directory()}]/Images/LegoCar_No_PPlane_V3.exr
-> This keeps the directory link working even when you move the folder around
Tip: you can copy/paste a node or a whole setup as Python code to share it
Garage Project: Initial Idea & Modelling

For this garage project, I came up with the idea of integrating a plant communication machine into the scene. I used AI to visualise some interesting shapes for how this machine might look. I want the look to be sci-fi, but without heavy machinery; rather, more organic modelling. If I have more time at the end, I want to add some complex wire connections, with a human integrated into the scene as well.
Through this project, I want to gain a better understanding of how to work with CG in Nuke, combining 2D and 3D elements seamlessly, as well as improving my modelling and texturing skills. I am very happy with the idea and looking forward to bringing it to life 🙂
Variation 1:



At the first blocking-out and texture-testing stage, I realised I didn't like the shape of this variation very much. Imagining it placed in the garage plate, I opted for a design that is wider horizontally, and looked for some hard-surface machinery details to make the model more complex and realistic.
Machinery Research







After looking at a few references on Pinterest, I proceeded to try sketching some ideas on top of the plate.


Final Design Visualization:
Eventually, I decided to use AI to visualize some more designs that are tailored to my vision.







I ended up choosing designs 6 and 7; I think they look pretty and not too complex, yet they still convey the overall look I have been after.
Most challenging parts:
- Pipes: I thought they would be easy to make, but it turns out there are many different techniques that can be used depending on your needs. I eventually found one that works best for me and used it throughout the whole design.
- Importing Quixel Bridge Assets into Blender: Textures error/no Texture (Resolved)



Test render:

Nuke Garage (Part 2) – Clean up & Mask

On my first roto attempt, I decided to divide the roto of the wall into four sections with Bezier curves. This worked out pretty well, but I think it can be improved after being reviewed in class.

So I came back and tried again, this time with smaller sections, using B-splines and open splines to really get into the details.


Nuke – CG (Part 1)
AOV (Arbitrary Output Variable)
- Is a concept in 3D rendering that represents a custom data channel produced during the rendering process.
- These channels contain specific types of information about the rendered scene, such as lighting, shadows, color, reflections, and more.
- AOVs are significant for VFX because they provide more flexibility, allowing each pass to be controlled and graded individually to match the background image
Key types of AOVs:
- Direct and Indirect AOVs: Capture light directly from the source and light that has bounced off surfaces.

- Standard Surface AOVs: Isolate material components such as diffuse, specular, and subsurface scattering for fine-tuning in compositing.
- Utility AOVs: Used in combination with tools to achieve various effects like defocus, motion blur, re-lighting, etc.

Passes
Passes, often part of the AOVs in a broader sense, are specifically categorized render outputs that represent different elements or effects within a rendered scene. While AOVs provide the technical variables, passes focus on the compositional elements that make up the beauty shot or contribute to visual effects, such as:
- Beauty Passes: The comprehensive render that includes all visual elements.
- Lighting Passes: Separate the lighting into its specific types (e.g., key light, fill light) for detailed lighting control.
- Reflection, Refraction Passes: Isolate reflective and refractive elements, allowing for adjustments to how surfaces interact with light.
- Beauty Passes: Used to recreate beauty renders
- Material AOVs: Used to adjust the material attributes (shaders) of objects in the scene
- Data Passes / Helper Passes: Provide technical information used to adjust or apply effects in post-production. Examples of data passes:
  - Normals Pass
  - Motion Vector Pass: Contains the direction and magnitude of motion for each pixel, enabling post-production motion blur.
  - UV Pass: Stores the UV mapping information, allowing for post-production texturing or adjustments to textures.
  - Position Pass: Gives the exact position of each pixel in 3D space, useful for integrating 3D elements or effects based on location.
  - Material ID / Object ID Pass: Assigns a unique color to each material or object, simplifying selection and isolation for adjustments.
  - Z-Depth Pass: Offers depth information for each part of the image


Working with render passes:
- You can break down render passes by using Shuffle nodes to separate individual AOVs or passes from multi-layer EXR files
- When we build a CG beauty, we simply combine the information of the highlights, midtones and shadows

Rules for rebuilding CG assets:
Merge (Plus Lights): Diffuse / Indirect / Specular / Reflections
Merge (Multiply Shadows): AO / Shadows
- Each pass should be graded separately
- A final grade can be applied to the entire asset if needed
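As a sketch of those rules in Nuke Python, using the classic Shuffle node; the layer names (diffuse, indirect, specular, AO) are assumptions and should match whatever is actually in your EXR:

```python
# Nuke Python sketch: rebuild a CG beauty by shuffling out AOVs,
# plus-ing the light passes and multiplying the shadow/AO passes.
import nuke

exr = nuke.nodes.Read(file="render.####.exr")        # placeholder path

def shuffle_layer(layer):
    node = nuke.nodes.Shuffle()
    node["in"].setValue(layer)                       # pull one AOV into rgba
    node.setInput(0, exr)
    return node

diffuse  = shuffle_layer("diffuse")                  # assumed layer names
indirect = shuffle_layer("indirect")
specular = shuffle_layer("specular")
ao       = shuffle_layer("AO")

plus1 = nuke.nodes.Merge2(operation="plus")          # diffuse + indirect
plus1.setInput(0, diffuse)
plus1.setInput(1, indirect)

plus2 = nuke.nodes.Merge2(operation="plus")          # + specular
plus2.setInput(0, plus1)
plus2.setInput(1, specular)

beauty = nuke.nodes.Merge2(operation="multiply")     # multiply the AO/shadows over the lights
beauty.setInput(0, plus2)
beauty.setInput(1, ao)
```

Each Shuffle can be graded individually before the Merges, which is exactly why rebuilding the beauty this way is so flexible.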
LayerContactSheet is used to view all the passes contained in the EXR
- Enable ‘Show layer names’ to display the name of each channel.

Tip: Ctrl+Shift-drag a node onto another to swap/replace it
