This article explores the fascinating world of motion capture and provides a detailed guide on mastering 3D motion tracking and camera tracking in Blender, a powerful open-source tool for creating stunning visual effects (VFX). From understanding the basics of motion capture technology to applying Blender motion capture techniques, this tutorial covers every step to enhance your VFX projects.
Introduction: Motion Capture and Its Role in VFX
Motion capture, AKA mocap, is an industry-standard method for capturing different aspects of an actor’s performance, such as general movements or facial expressions, and converting them into digital data. This data can be used in many different ways, such as applying it to an animated character for more realistic movement or, taking it up a notch, creating full-on digital humans.
But where does that motion…go? By focusing less on the actors and more on how the technology works, you can appreciate the real artistry behind motion capture. In the context of VFX, motion tracking in Blender allows artists to reconstruct the movements of a camera in digital space, giving full control over the footage to add or remove anything desired, making it a crucial skill for seamless CGI integration.
Part 1: Understanding Motion Capture Technology
History of Motion Capture
In movies, motion capture traces its roots all the way back to the 1910s, when an animator named Max Fleischer came up with a technique called rotoscoping. He used rotoscoping in his animated series Out of the Inkwell: Max’s brother, David, would dress up as a clown and perform the role of Koko the Clown in front of the camera. Max would then use the footage as a frame-by-frame reference, tracing over the human movements and translating them to the animated character.
Modern Techniques
Motion capture today works on the same principle but with different techniques. The most common approach is to cover the actor’s body, mostly around the joints, with reflective markers. The actor then performs the role normally in front of an array of cameras designed to track the reflective markers and record their movement data in a 3D environment.
These markers come in many different varieties with different applications, but they all have the same purpose: tracking body movements or facial expressions in enough detail that VFX artists can use the data later in the film project. There are a few main ways to capture motion. One example is Rokoko’s Smartsuit Pro, which uses inertial sensors; these broadcast the position and orientation of devices embedded inside the suit, much like how your phone knows which way it’s turned.
More common in high-end video games and movies is optical tracking of markers, in which cameras work out where parts of a person’s body are by looking for high-contrast areas. These suits don’t have any sensors at all; they are essentially costumes designed to be seen really clearly. The footage shot with the suits on is fed through software that interprets what the cameras see before artists review it.
Applications
The use of motion capture in films is ever-growing, especially in VFX-heavy movies, from the Planet of the Apes and The Lord of the Rings to the Marvel and Star Wars films. The most common use of motion capture in films is to animate fantastical characters.
Examples of using motion capture for character creation are everywhere, but the biggest name in this field is Andy Serkis. He’s one of the legends when it comes to motion capture, with a list of characters he has brought to life that includes Caesar from the Planet of the Apes series, Gollum from The Lord of the Rings, Supreme Leader Snoke from the recent Star Wars trilogy, and Captain Haddock from The Adventures of Tintin.
Serkis even played two roles in Peter Jackson’s 2005 adaptation of King Kong: Kong himself through motion capture, and Lumpy the Cook through traditional acting. The other way motion capture is used in film is specifically for capturing facial expressions, which is called performance capture. In performance capture, the actor’s face is covered with trackers and a camera is mounted on their head, pointed at their face.
This way, every movement of the facial muscles is tracked and recorded. One of the most notable performances using this method is Benedict Cumberbatch’s portrayal of Smaug in The Hobbit series.
Another use of motion capture and performance capture that has become more common in the past decade is the design and creation of digital humans. The Star Wars franchise is one of the pioneers of this technology. In Rogue One: A Star Wars Story, the VFX team at Industrial Light & Magic (ILM) used its award-winning technology to recreate Grand Moff Tarkin, who was played by the late Peter Cushing in 1977.
Part 2: Motion Tracking in Blender for VFX
Why Motion Tracking is Essential
From stabilizing shaky footage to Marvel-style CGI, harnessing the power of motion tracking lets you reconstruct the movements of your camera in digital space, which gives you full control over your footage to add or remove anything you want.
If you want to add anything on top of your footage, you need to replicate the motion of your camera exactly; otherwise, that element will just slide around. If it is rendered with the same camera movement, though, it looks like it’s actually there in the scene.
Step 1: Preparing Your Footage
Step one: when you film your video, write down the focal length, frame rate, f-stop, focus point, sensor size, and resolution. If you don’t know these values, it’s not a problem; it just helps if you do. Knowing them makes it easier to match your real-life camera settings with your CGI and make the result more believable.
You can usually find these settings by looking up your phone or camera model online. Before we begin the motion tracking process, we also have to convert our video file into an image sequence. Blender can perform motion tracking on video files just fine, but if your video was encoded with inter-frame (long-GOP) compression, you might run into critical issues with the track. These codecs store only occasional complete frames and reconstruct the frames in between by shifting existing pixels around.
This reduces file size, but it’s not ideal for motion tracking purposes. To convert the video, open Blender and change the 3D Viewport into a Video Sequencer, click Add and add a movie, select your recorded video, and then go into the Output Properties panel. Make sure the resolution and frame rate match the recorded video. If you don’t know these settings, you can always right-click the video file, select Properties, expand Details, and they will be displayed there.
Set the start and end frames to cover the video, and make sure you’re exporting a PNG sequence. The Sequencer option in the Post Processing drop-down is usually on by default, but just to be sure, double-check that it is ticked. Set an output location and render the animation by pressing Ctrl+F12.
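If you prefer to script this conversion instead of clicking through the Sequencer, here is a minimal sketch of the same steps using Blender’s Python API. The file path, frame rate, and resolution below are placeholder assumptions; substitute the values of your own footage.

```python
import bpy

scene = bpy.context.scene

# Assumed values -- replace with your footage's actual path, rate, and size
video_path = "//my_footage.mp4"   # hypothetical file, relative to the .blend
scene.render.fps = 30
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080

# Add the movie as a strip in the Video Sequencer
scene.sequence_editor_create()
strip = scene.sequence_editor.sequences.new_movie(
    name="footage", filepath=video_path, channel=1, frame_start=1)

# Export exactly the length of the clip
scene.frame_start = 1
scene.frame_end = strip.frame_final_duration

# Write a PNG image sequence; the Sequencer post-processing option must stay on
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "//frames/"   # output folder (hypothetical)
scene.render.use_sequencer = True

# Same as pressing Ctrl+F12
bpy.ops.render.render(animation=True)
```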
Step 2: Setting Up Blender
Open up Blender and change your viewport to the Movie Clip Editor, then change the timeline to a Movie Clip Editor as well, but set its mode to Graph. These are the two windows we’ll be working with. In the Clip Editor, click Open and browse to your image sequence; you can press A on your keyboard to select all the frames. Now in the Output Properties, set your original frame rate, and in the Clip drop-down at the top left, click Set Scene Frames.
This makes the project as long as your video. Also click Prefetch, which loads the frames from your disk into memory so playback and tracking are faster. Then go to the Render Properties, open the Color Management tab, and set the View Transform to Standard to keep the original video colors. The project is now set up, so let’s enter the camera information in the Track sidebar, under Camera and then Lens.
This is where you enter your sensor width, which you can look up for almost any device, or you can use one of the presets provided by Blender. The pixel aspect ratio is for devices that work with anamorphic optics; most of you can leave it at one. If you know your focal length, enter it; if not, Blender can estimate most of these values automatically.
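For reference, the same setup can be done through Blender’s Python API. The sketch below assumes the exported sequence starts at //frames/0001.png and uses example sensor and lens values for a hypothetical phone camera, so don’t copy them verbatim.

```python
import bpy

scene = bpy.context.scene

# Load the footage as a movie clip -- path is a placeholder
clip = bpy.data.movieclips.load("//frames/0001.png")
scene.active_clip = clip

# Equivalent of "Set Scene Frames": make the project as long as the clip
scene.frame_start = 1
scene.frame_end = clip.frame_duration

# Keep the original video colors (Color Management > View Transform)
scene.view_settings.view_transform = 'Standard'

# Camera data used by the solver (Track sidebar > Camera / Lens)
cam = clip.tracking.camera
cam.sensor_width = 6.17    # mm -- example value, look up your own device
cam.focal_length = 4.25    # mm -- example value; the solver can refine this
cam.pixel_aspect = 1.0     # leave at 1 unless you shot with anamorphic optics
```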
Step 3: The Tracking Process
To solve your footage, you need at least eight active trackers on your video. So let’s talk about what those are. Trackers are small patterns from your frames that are followed across the footage. A tracker consists of the pattern itself and a search area around it.
As an example of how this works, imagine a video of walking down a block. You put a tracker on one of the windows; when you place it, you define the pattern, and in the next frame, Blender searches for that same pattern within the search area and snaps directly onto it. That way, it always tracks the location of the window in your 2D footage and stores its position over time so it can later be used to calculate the camera solve.
It does that with an algorithm that is beautifully explained in the First Principles of Computer Vision video series on optical flow, structure from motion, and object tracking. With enough trackers (at least eight) placed at various depths, Blender can calculate the exact motion of the camera using some good old trigonometry.
Besides the pattern size and search size, which can be quickly set with Blender’s tracking presets, trackers also have different motion models, different match types, options for prepass and normalization, and individual RGB channels. You can add these markers from the Marker drop-down on the left after you’ve selected all your settings, or you can Ctrl-click the image. You can then reposition these trackers or even change their settings in the tracker settings panel on the right.
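To make those options concrete, here is a rough sketch that creates a single tracker through the Python API and sets the per-track options discussed above; the marker position and settings are arbitrary examples. The tracking itself (Track Markers) is normally run from the Clip Editor, since the operator needs that context.

```python
import bpy

clip = bpy.context.scene.active_clip
tracking = clip.tracking

# Defaults applied when you Ctrl-click new markers in the Clip Editor
tracking.settings.default_pattern_size = 21
tracking.settings.default_search_size = 71
tracking.settings.default_motion_model = 'Loc'

# Add one track with a marker on frame 1 at an arbitrary example position
# (coordinates are normalized: (0, 0) is bottom-left, (1, 1) is top-right)
track = tracking.tracks.new(name="window_corner", frame=1)
track.markers.insert_frame(1, co=(0.42, 0.63))

# Per-track options described above
track.motion_model = 'LocRotScale'  # let the pattern rotate and scale
track.pattern_match = 'KEYFRAME'    # match against the original keyframe pattern
track.use_brute = True              # prepass: brute-force search before refining
track.use_normalization = True      # normalize brightness against lighting changes
track.use_red_channel = True
track.use_green_channel = True
track.use_blue_channel = False      # e.g. ignore a noisy blue channel

# With a Clip Editor open, tracking forwards would then be:
# bpy.ops.clip.track_markers(backwards=False, sequence=True)
```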
Step 4: Solving Camera Motion
This step involves telling Blender how to organize all the data you got from your markers and use it to calculate your camera motion. Open the Solve panel on the left and expand the Solve drop-down.
The Tripod option should only be used when the camera hasn’t physically moved during the recording and has only rotated in place, as if it were on a tripod. It tells Blender not to solve for camera location.
The Keyframe setting takes a segment of your clip between the start point, keyframe A, and the end point, keyframe B, and uses it as the reference for reconstructing the camera motion. To solve the camera motion, just click Solve Camera Motion; it will take a second, and you’ll be met with the solve error value. Generally speaking, anything above one pixel is utter garbage, anything below one pixel is usable, and anything below 0.5 pixels is optimal.
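If you are scripting this step, the sketch below mirrors those decisions in Python: the tripod flag, the keyframe range (example values, stored on the active tracking object), and a check of the average solve error against the thresholds above. The solve operator itself needs a Clip Editor context, so it is shown commented out.

```python
import bpy

clip = bpy.context.scene.active_clip
tracking = clip.tracking

# Only enable this if the camera merely rotated in place (tripod shot)
tracking.settings.use_tripod_solver = False

# Keyframes A and B define the reference segment for the reconstruction
tracking.objects.active.keyframe_a = 1    # example values
tracking.objects.active.keyframe_b = 45

# With a Clip Editor open, "Solve Camera Motion" is:
# bpy.ops.clip.solve_camera()

# After solving, inspect the result
recon = tracking.reconstruction
if recon.is_valid:
    err = recon.average_error
    if err > 1.0:
        print(f"Solve error {err:.3f} px: poor -- clean up or add trackers")
    elif err > 0.5:
        print(f"Solve error {err:.3f} px: usable")
    else:
        print(f"Solve error {err:.3f} px: optimal")
```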
Step 5: Setting Up Your Scene
The final step is setting up your scene. Drop down the Scene Setup panel and click Set as Background, which sets your undistorted footage as the background clip of your camera to help you work on the scene, and click Setup Tracking Scene, which adds two collections, two render layers, a shadow catcher, and some compositor nodes. If your scene is not oriented properly, you can always select three trackers on the floor and click Set Floor to level your scene properly.
You can also do that with a wall. Select one of the trackers and click Set Origin to make that tracker the center of your scene. Then select a tracker off to the side of the scene and click Set X Axis or Set Y Axis; this rotates the scene so the tracker is aligned on that axis relative to your world origin.
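Under the hood, most of what Set as Background and Setup Tracking Scene do can be reproduced by hand. The sketch below is a simplified, manual version under that assumption: it constrains a camera to the solved clip and shows the footage behind the camera view. It does not recreate the collections, shadow catcher, or compositor nodes that the one-click setup adds.

```python
import bpy

scene = bpy.context.scene
clip = scene.active_clip

# Use (or create) a camera to receive the reconstructed motion
cam_obj = scene.camera
if cam_obj is None:
    cam_data = bpy.data.cameras.new("TrackingCamera")
    cam_obj = bpy.data.objects.new("TrackingCamera", cam_data)
    scene.collection.objects.link(cam_obj)
    scene.camera = cam_obj

# Drive the camera with the solved motion from the active clip
solver = cam_obj.constraints.new(type='CAMERA_SOLVER')
solver.use_active_clip = True

# Roughly the "Set as Background" step: show the clip behind the camera view
cam_data = cam_obj.data
cam_data.show_background_images = True
bg = cam_data.background_images.new()
bg.source = 'MOVIE_CLIP'
bg.clip = clip
```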
Advanced Techniques
Motion tracking can also be used to stabilize your footage. You will find this in the Stabilization panel on the right; turn on 2D Stabilization. By default it will stabilize location, but you can also include rotation and scale (see the Python sketch below). Plane track: you can track and replace planar features in your clip by selecting four trackers, preferably on the same plane, opening the Solve panel on the left, and clicking Create Plane Track in the Plane Track drop-down.
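For the stabilization feature above, the same switches are exposed on the clip’s tracking data, so a minimal scripted equivalent looks like this; which tracks drive the location and rotation stabilization is still assigned in the Stabilization panel.

```python
import bpy

clip = bpy.context.scene.active_clip
stab = clip.tracking.stabilization

# Equivalent of ticking "2D Stabilization" in the Stabilization panel
stab.use_2d_stabilization = True

# Location is stabilized by default; also compensate rotation and scale
stab.use_stabilize_rotation = True
stab.use_stabilize_scale = True
```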
Finally, annotations: you can add annotations to the project to make notes for yourself. You’ve got a few different tools here, and you can attach them to your view or the clip.
Conclusion
Blender motion tracking and camera tracking open up a world of possibilities for VFX artists. This tutorial has covered the essentials, from the historical roots of motion capture to the practical steps of setting up a scene in Blender. Practice these techniques and share your results to refine your skills further.
FAQs
What is the difference between motion capture and motion tracking?
Motion capture, AKA mocap, is an industry-standard method for capturing an actor’s performance, such as movements or facial expressions, to animate characters, while motion tracking, particularly in Blender, focuses on reconstructing camera movements in digital space for VFX integration.
Is it necessary to know camera settings for Blender motion tracking?
It’s not strictly necessary, but it helps. When you film your video, write down the focal length, frame rate, f-stop, focus point, sensor size, and resolution; matching these real-life camera settings makes the CGI more believable, and Blender can estimate some settings automatically if you don’t have them.
How many trackers are needed for a reliable camera solve in Blender?
You need at least eight trackers active on the footage so that Blender can accurately calculate the motion of the camera.
What is considered a good solve error value in Blender 3D camera tracking?
Generally speaking, anything above one pixel is essentially unusable, anything below one pixel is usable, and anything below 0.5 pixels is optimal for a clean track.
Can motion tracking in Blender be used for object tracking?
Yes, in the track panel on the right, in the objects drop-down, click the plus icon to add a new object. Now you can track features on a moving object and solve object motion as well.
How can footage be stabilized using Blender’s motion tracking tools?
Motion tracking can also be used to stabilize your footage. You will find it in the stabilization panel on the right; turn on 2D stabilization to stabilize location, rotation, and scale using assigned trackers.