User Guide for Multiple PS Eye Cameras Configuration
Contents
- 1 System Requirements
- 2 Software Installation
- 3 Recording Video with Multiple PS Eye Cameras
- 4 Scene Set-up
- 5 Calibration
- 5.1 Flashlight Calibration
- 5.1.1 Importance of high frame rate
- 5.1.2 Glowing Marker
- 5.1.3 Step 1: Running iPi Recorder in Calibration Mode
- 5.1.4 Step 2: Record Calibration Video
- 5.1.5 Step 3: Stop Recording and Check Recorded Video
- 5.1.6 Step 4: Take Note of Height of Your First Camera Over the Ground.
- 5.1.7 Step 5: Process Calibration Video in iPi Mocap Studio
- 5.1.8 Step 6: Check Calibration Quality
- 5.1.9 Step 7: Check Ground Plane
- 5.1.10 Step 8: Set Scene Scale Using Camera Height as Reference
- 5.2 Calibration Based on AI Joints
- 6 Recording Actor's Performance
- 7 Processing Video from Multiple PS Eye Cameras
- 8 Tracking Tips and Tricks for Multiple PS Eye Cameras
- 9 Manual Clean-up
- 10 Automatic Refinement and Filtering
- 11 Export and Motion Transfer
- 11.1 Animation Export
- 11.2 Motion Transfer
- 11.3 Export Pipelines for Popular 3D Packages
- 11.3.1 MotionBuilder
- 11.3.2 3D MAX Biped
- 11.3.3 Maya
- 11.3.4 Unreal Engine
- 11.3.5 FBX
- 11.3.6 COLLADA
- 11.3.7 LightWave
- 11.3.8 SoftImage|XSI
- 11.3.9 Poser
- 11.3.10 DAZ 3D Genesis 8
- 11.3.11 iClone 8
- 11.3.12 iClone 3
- 11.3.13 Valve Source Engine SMD
- 11.3.14 Valve Source Filmmaker
- 11.3.15 Blender
- 11.3.16 Cinema4D
- 11.3.17 Evolver
- 11.3.18 Second Life
- 11.3.19 Massive
- 11.3.20 IKinema WebAnimate
- 11.3.21 Jimmy|Rig Pro
- 12 Video Materials
System Requirements
iPi Recorder
- Computer (desktop or laptop):
- CPU: x86 compatible (Intel Pentium 4 or higher, AMD Athlon or higher, 2 GHz), dual- or quad-core preferable
- Operating system: Windows 11 / 10 / 8.1 / 8 / 7 (32-bit or 64-bit)
- USB: at least two USB 2.0 or USB 3.0 controllers
- For more info see USB controllers
- ExpressCard or eSATA slot (for laptops)
- Optional, but highly recommended. It allows installing an external USB controller in case of compatibility issues between cameras and built-in USB controllers, or if all USB ports are in fact connected to a single USB controller
- Storage system: HDD, SSD, or RAID with write speed (see the estimation sketch after this list):
- For 4 cameras at 60 fps, 640 x 480 resolution: not less than 17 MByte/sec
- For 6 cameras at 60 fps, 640 x 480 resolution: not less than 25 MByte/sec
- For 8 cameras at 60 fps, 640 x 480 resolution: not less than 35 MByte/sec
- For 12 cameras at 60 fps, 640 x 480 resolution: not less than 50 MByte/sec
- For 16 cameras at 60 fps, 640 x 480 resolution: not less than 70 MByte/sec
- 3 to 8 Sony PlayStation Eye for PS3 cameras.
- for more info see Cameras and Accessories#Cameras
- 4 to 12 active USB 2.0 extension cables (depending on number of cameras and scene set-up)
- Optional: tripods to place cameras
- for more info see Cameras and Accessories#Tripods
- Sony Move controller or Mini Maglite or other flashlight with candle mode for calibration
- Minimum required space: 4m by 4m (13 by 13 feet)
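The write-speed requirements above scale roughly linearly with camera count. The sketch below shows how such an estimate can be derived; the per-pixel size (raw Bayer, about 1 byte per pixel) and the ~4:1 effective compression ratio are assumptions chosen to approximately reproduce the table, not official iPi Soft figures.

```python
# Rough write-speed estimate for multi-camera recording with iPi Recorder.
# Assumptions (not official figures): raw Bayer video at ~1 byte/pixel,
# and an effective ~4:1 compression ratio during recording.

def required_write_speed_mb_s(cameras, width=640, height=480, fps=60,
                              bytes_per_pixel=1.0, compression_ratio=4.0):
    raw_bytes_per_sec = cameras * width * height * fps * bytes_per_pixel
    return raw_bytes_per_sec / compression_ratio / 1e6

for n in (4, 6, 8, 12, 16):
    print(f"{n} cameras: ~{required_write_speed_mb_s(n):.0f} MByte/sec")
```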
iPi Mocap Studio
- Computer (desktop or laptop):
- CPU: x86/x64 compatible (Intel Pentium 4 or higher, AMD Athlon or higher), dual- or quad-core preferable.
- Operating system: Windows 11 / 10 / 8.1 / 8 / 7 (32-bit or 64-bit).
- Video card: DirectX 11 capable gaming-class graphics card.
- For more info see Cameras and Accessories#Video_Card.
- You can use GPU-Z and CPU-Z to detect the exact version and features of your video card.
Software Installation
iPi Recorder
Download and run the setup package of the latest version of iPi Recorder. You will be presented with the following dialog.
- Select needed components
- Read and accept the license agreement by checking appropriate checkbox
- Press the Install button to begin installation
- You can plug only one depth sensor into one USB controller. A single USB controller's bandwidth is not enough to record from 2 sensors.
- You can plug no more than 2 Sony PS Eye cameras into one USB controller; otherwise you will not be able to capture at 60 fps with 640 x 480 resolution.
- For more info see USB controllers.
Once installation is complete, iPi Recorder will launch automatically. Continue with the user's guide to learn how to use the software.
Components
If a component is already installed, it has no checkbox and is marked with an ALREADY INSTALLED label. You do not need to install all optional components in advance; all of them can be installed separately at a later time. The component descriptions below contain corresponding download links.
- Microsoft .NET Framework 4.5.1 - Client. This is a required component and cannot be unchecked.
This is the basic infrastructure for running .NET programs. iPi Recorder is a .NET program.
- Web installer: http://www.microsoft.com/en-us/download/details.aspx?id=40773
- Standalone installer: http://www.microsoft.com/en-us/download/details.aspx?id=40779
- Playstation3 Eye Webcam :: WinUSB Drivers Registration. Check if you plan to work with Sony PS Eye cameras.
Device drivers for PS Eye camera.
- (Windows 8, 8.1, 10, 11) Microsoft Kinect 2 :: MS Kinect SDK 2.0. Check if you plan to work with Kinect 2 for Windows or Kinect for Xbox One depth sensors, but do not plan to connect multiple Kinects to a single PC.
Device drivers and software libraries for Microsoft Kinect 2. Requires 64-bit Windows 8+ and USB 3.0.
- (Windows 7, 8, 8.1, 10, 11) Microsoft Kinect :: MS Kinect SDK 1.8. Check if you plan to work with Microsoft Kinect depth sensors.
Device drivers and software libraries for Microsoft Kinect. Requires Windows 7 or later.
- iPi Recorder 4.x.x.x. This is a required component and cannot be unchecked.
iPi Recorder itself.
iPi Mocap Studio
Download and run the latest setup package of iPi Mocap Studio. You will be presented with the following dialog:
All components are required for installation; they are included with the iPi Mocap Studio setup.
- Press the Install button to begin installation.
- You will be prompted to read and accept the license agreement(s) by checking the corresponding checkbox.
- Once installation is complete, you will be prompted to launch iPi Mocap Studio.
- As soon as iPi Mocap Studio launches, you will be prompted to enter your license key or start a 30-day free trial period.
For more info about license protection see Licensing Policy.
- Ensure that your graphics hardware is set to maximum performance with iPi Mocap Studio.
Recording Video with Multiple PS Eye Cameras
Environment
Space
For a multiple PlayStation Eye configuration, you need a minimum of 13 feet by 13 feet of space (4 meters by 4 meters). In a smaller space, the actor simply won't fit into the cameras' field of view.
For 640 by 480 camera resolution, the capture area can be as big as 20 feet by 20 feet (7 meters by 7 meters). That should be enough for capturing motions like running or dancing.
Background
A light-colored background (light walls and light floor) is recommended for markerless motion capture. iPi Desktop Motion Capture is designed to work with real-life backgrounds, and a multi-camera configuration (3 cameras and up) can handle a certain amount of background clutter. Please keep in mind that the system can be confused if your background has large objects of the same color as the actor's clothes.
Using a green or blue backdrop may improve results, but a backdrop is not required if you have a reasonable office or home environment with light-colored walls and bright lighting.
Lighting
For best results, your environment should have multiple light sources for uniform, ambient lighting. Typical office lighting with multiple light sources located on the ceiling should be quite suitable for markerless motion capture. In a home environment, you may need to use additional light sources to achieve more uniform lighting.
Please note that the system cannot work in direct sunlight. If you plan a motion capture session outdoors you should choose a cloudy, overcast day.
Actor Clothing
The actor should be dressed in a solid-color long-sleeve shirt, solid-color trousers (or jeans) and solid-color shoes. Deep, saturated colors are preferable. Casual clothes like jeans should be OK for use with the markerless mocap system. iPi Desktop Motion Capture uses clothing color to separate the actor from the background, and therefore cannot work with completely arbitrary clothing.
Recommended shirt (torso) colors are black, blue or green. Red is not recommended because it can blend with human skin color, making it difficult for the system to see hands placed over the torso. Black is useful for reducing self-shadows on the torso. If you have bright uniform lighting, you can get better results with a primary-color (blue or green) shirt.
Recommended jeans/trousers color is blue.
Recommended shoe color is black.
iPi Desktop Motion Capture has an option of using a T-shirt over a long-sleeve shirt for actor clothing. Tracking quality should benefit from such clothing because the arms are better distinguished from the torso.
Recording Process
Please record a video using iPi Recorder application. It supports recording with Sony PS Eye cameras, depth sensors (Kinect) and DirectShow-compatible webcams (USB and FireWire).
iPi Recorder is a stand-alone application and does not require a powerful video card. You may choose to install it on a notebook PC for portability. Since it is free, you can install it on as many computers as you need.
Please run iPi Recorder and follow recording workflow as described in user's guide for the program.
Framerate
It is recommended that you record all videos at maximum available framerate. High framerate helps reduce motion blur and capture fine details of the motion.
The maximum possible framerate for the Sony PlayStation Eye camera is 60 frames per second. Sony advertises the PlayStation Eye as capable of capturing at 120 frames per second, but framerates over 60 fps result in too much noise in the camera sensor and are not usable for motion capture.
Framerate lower than 30 frames per second is not recommended for motion capture.
4 cameras at 320 by 240 resolution
A dual-core CPU should be fast enough for recording a 4-camera video at 320 by 240 resolution at 60 frames per second.
4 cameras at 640 by 480 resolution at 60 frames per second
A quad-core CPU is recommended for recording at 640 by 480 resolution at 60 frames per second. If you have a dual-core CPU you may need to configure a lower framerate and/or lower compression quality to be able to record video at 640 by 480.
6 cameras at 640 by 480 resolution at 60 frames per second
A quad-core CPU clocked at 2.0 GHz (or better) is recommended for recording at 640 by 480 resolution at 60 frames per second. You will also need an additional USB controller.
USB controllers
All modern computers (e.g. dual-core and better) based on Intel, AMD and Nvidia chipsets have two high-speed USB (USB 2.0) controllers on board. That gives you enough bandwidth to record 4 cameras at 640x480 (raw Bayer format) at 60 FPS, or 6 cameras at 640x480 (raw Bayer format) at 40 FPS.
Under certain circumstances you may need to get additional USB controllers.
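The two-cameras-per-controller limit mentioned in the installation notes follows from simple bandwidth arithmetic. Here is a back-of-the-envelope check, assuming raw Bayer at about 1 byte per pixel and roughly 40 MByte/sec of practically usable USB 2.0 bandwidth per controller (both values are assumptions; the nominal 480 Mbit/s bus rate is never fully achievable):

```python
# Back-of-the-envelope USB 2.0 bandwidth check for PS Eye cameras.
# Assumptions: raw Bayer ~1 byte/pixel; ~40 MB/s usable per controller.

USABLE_USB2_MB_S = 40.0

def camera_mb_s(width=640, height=480, fps=60):
    return width * height * fps / 1e6  # ~18.4 MB/s at 640x480 @ 60 fps

print(int(USABLE_USB2_MB_S // camera_mb_s()))        # -> 2 cameras at 60 FPS
print(int(USABLE_USB2_MB_S // camera_mb_s(fps=40)))  # -> 3 cameras at 40 FPS
```

Three cameras per controller at 40 FPS, times two controllers, matches the 6-camera/40 FPS figure above.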
Performance Tips
Recording with multiple cameras is a very resource-consuming operation. If the system does not provide sufficient resources, you may experience problems like frame drops, unstable frame rate, or premature ending of recording. All of these result in poor-quality recordings which cannot be successfully processed in iPi Mocap Studio. Below are some recommendations to help you avoid hardware performance problems during recording.
- Use a dedicated physical drive (HDD or SSD, or even a RAM drive if you have enough memory) for recording. When recording onto the OS drive, performance may suffer badly from other processes reading and writing to that drive in parallel.
- Ensure the PC is not set to any kind of power-saving mode, like the Power Saver power plan or Battery Saver mode in Windows 10.
- Connect a laptop to a power outlet if possible. If not, charge the battery to full before recording.
- Stop programs which perform many computations or disk operations. Examples of such programs are torrent clients and anti-virus software.
- Unplug unnecessary high-bandwidth USB devices like external drives, Wi-Fi adapters (if not needed for distributed recording).
Scene Set-up
The general rule of thumb is to place most of the cameras at 1 - 1.5 m height, and one out of every four at a greater height of 2 - 2.5 m. However, for specific motions other setups may be more beneficial. For instance, when an actor is lying on the floor or crawling, you can improve tracking quality by placing more cameras (up to half) higher, up to ceiling level.
In any case, the system is flexible and you can get decent results with any reasonable setup, even when you cannot follow all recommendations due to limitations in space or room geometry.
5 and More Cameras Configuration
You can set up 5 or more cameras in a full-circle or a half-circle configuration, depending on available space. You can improve accuracy by placing one or two cameras high over the ground (like 3 meters high).
Recommended configuration for 6-camera full-circle setup:
Four Camera Configuration
You can set up 4 cameras in a half-circle or a full-circle configuration, depending on available space. You can improve accuracy by placing one of the cameras high over the ground (like 3 meters high).
Recommended configuration for 4-camera setup in half circle:
Example:
Three Camera Configuration
Recommended configuration for 3-camera setup is a half-circle:
Example:
Virtual view of the same scene:
Camera Setup
Install the cameras on tripods and connect cables.
Sony PlayStation Eye cameras do not have a standard tripod mounting screw, so you will have to use some kind of ad hoc solution. The simplest approach is to fix the cameras to tripods with sticky tape.
When mixing active and passive USB cables, make sure cable connection order is correct (computer->active cable->passive cable->camera).
If you're using the PlayStation Eye camera, make sure you have the lens set to the wide setting.
Calibration
Flashlight Calibration
Calibration is the process of computing accurate camera positions and orientations from a video of a user waving a small glowing object called a marker (for color/color+depth cameras). This step is essential and required for a multi-camera setup.
Importance of high frame rate
You should record calibration video at the same resolution as your action video and at the same (or higher) frame rate.
Calibration at a different resolution may lead to reduced accuracy because cameras usually have different minor distortions at different resolutions (caused by internal scaling algorithm).
Calibration at low frame rate may lead to reduced accuracy because of increased synchronization errors.
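For intuition about these synchronization errors, here is a rough worked example. It assumes the cameras can be out of sync by up to about half a frame interval, and that the marker moves at about 1 m/s; both numbers are illustrative assumptions, not measured values.

```python
# Illustration: worst-case marker position error caused by a camera
# synchronization offset (assumed to be half a frame interval).

MARKER_SPEED_M_S = 1.0  # assumed slow hand movement during calibration

for fps in (60, 30):
    sync_error_s = 0.5 / fps
    error_mm = MARKER_SPEED_M_S * sync_error_s * 1000
    print(f"{fps} fps: up to ~{error_mm:.1f} mm of marker position error")
```

Halving the frame rate doubles the worst-case position error, which is why recording calibration at the full 60 fps pays off.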
Glowing Marker
The Mini Maglite flashlight is recommended for calibration. It is a very common flashlight in the US and many other countries. Removing the flashlight reflector converts it into an ideal glowing marker, easily detectable by the motion capture software.
If you cannot get a Mini Maglite, you can use some other similar flashlight.
Alternatively, you can use a Sony Move motion controller with the white light turned on.
Step 1: Running iPi Recorder in Calibration Mode
Run iPi Recorder and choose one of the darkening modes in the "darkening for calibration" list (for Sony PS Eye cameras),
or set Exposure to a reasonably small value (for DirectShow-compatible web cameras).
This is important because it helps reduce motion blur during calibration.
Video will look dim in calibration mode.
Step 2: Record Calibration Video
Start video recording.
Move the marker slowly through your entire capture volume (front-top-right-bottom-left-back-top-right-bottom-left). Start from top and move the marker in a descending spiral motion.
Put the marker on the ground at each corner and at the center of the capture volume. At least 4-5 ground points are needed for correct detection of the ground plane.
Step 3: Stop Recording and Check Recorded Video
Check the video and make sure that:
- There is no significant motion blur (image of marker looks like a round spot rather than an ellipse or a luminescent line)
- The marker is visible in all cameras most of the time (80%-90% of the total recording time)
Step 4: Take Note of Height of Your First Camera Over the Ground.
Take note of the height of your first camera over the ground. You will need this parameter later.
Step 5: Process Calibration Video in iPi Mocap Studio
To process calibration video please do the following:
- Create new calibration project in iPi Mocap Studio:
- Press New button or select File > New Project menu item or use Ctrl+N (2)
- Choose Calibration project type in New Project Wizard.
- Set the diagonal Field of View (FOV) for your cameras. If this screen is not shown, this means FOV is already known and you do not need to manually specify it.
- Note: If you use Sony PS Eye or Logitech QuickCam 9000 cameras, leave the FOV value at the default 75 degrees.
- Adjust the Region-of-Interest to cover the part of video that contains the glowing marker (3).
- Change the size of light marker spot on video if needed.
- Medium usually works well, but you may need Large if you use a Sony PS Move for calibration, or Small in some cases.
- Normally, you should use the default calibration settings:
- Auto-detect initial camera positions turned on
- If calibration fails to detect initial camera positions correctly, you can turn this checkbox off and set camera positions manually. But in most cases it is easier to re-record the calibration sequence.
- Auto-adjust camera FOV turned off
- Use this mode only if you did not know the camera field of view requested during project creation (if it was requested), or if it was left at its default value (this may be recommended for particular camera models).
- Click Calibrate Based on Flashlight Marker button and wait while the system finishes calibration (4).
Step 6: Check Calibration Quality
The resulting scene should look like this:
Make sure you have Good or Perfect calibration result.
Step 7: Check Ground Plane
The ground plane should be detected automatically. Ground points are marked in yellow.
- Make sure that ground points are actually near the ground plane.
- If the ground plane is detected incorrectly, select ground points manually:
- Expand the Manual Calibration Adjustment expander.
- First, unmark all ground points by pressing the Clear all points button.
- For each ground point, click it in the 3D view and press the Mark as ground button.
- You can cancel marking a point as a ground point by pressing the Unmark ground button.
Step 8: Set Scene Scale Using Camera Height as Reference
Now the cameras in your scene are properly oriented relative to each other and to the ground plane. But you still need to find one more parameter: the scene scale.
Use Camera #1 height over the ground to set the correct scene scale (a numeric illustration follows below).
- Save the results to the calibration project file, or use the Save scene... button on the Scene tab (6).
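The scene scale is simply the ratio between the real, measured camera height and the camera height in the as-yet-unscaled scene. A tiny numeric illustration with assumed values:

```python
# Illustration with assumed numbers: deriving scene scale from the
# measured height of camera #1 over the ground.

measured_height_m = 2.0    # height noted in Step 4 (assumed value)
scene_height_units = 1.6   # camera #1 height in the unscaled scene (assumed)

scale = measured_height_m / scene_height_units
print(f"scale factor: {scale:.2f}")  # every scene distance multiplies by this
```

In iPi Mocap Studio you simply enter the measured height; the snippet only shows the underlying ratio.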
Calibration Based on AI Joints
Novel AI-based pose estimation algorithms allow using AI-detected joint positions instead of a glowing object (flashlight), which makes the calibration workflow simpler.
Importance of high frame rate
You should record calibration video at the same resolution as your action video and at the same (or higher) frame rate.
Calibration at a different resolution may lead to reduced accuracy because cameras usually have different minor distortions at different resolutions (caused by internal scaling algorithm).
Calibration at low frame rate may lead to reduced accuracy because of increased synchronization errors.
Step 1: Running iPi Recorder
Run iPi Recorder in the regular (default) mode used for recording action videos. Do not use the darkening modes in the "darkening for calibration" list (if available). Darkening is good for flashlight calibration, but bad for joints detection.
Step 2: Record Calibration Video
Start video recording.
Just walk around the whole capture area, and put your hands up from time to time.
Step 3: Take Note of Height of Your First Camera Over the Ground.
Take note of the height of your first camera over the ground. You will need this parameter later.
Step 4: Process Calibration Video in iPi Mocap Studio
To process calibration video please do the following:
- Create new calibration project in iPi Mocap Studio:
- Press New button or select File > New Project menu item or use Ctrl+N (2)
- Choose Calibration project type in New Project Wizard.
- Set the diagonal Field of View (FOV) for your cameras. If this screen is not shown, this means FOV is already known and you do not need to manually specify it.
- Note: If you use Sony PS Eye or Logitech QuickCam 9000 cameras, leave the FOV value at the default 75 degrees.
- Adjust the Region-of-Interest to cover the part of video that contains movement around capture area.
- Normally, you should use the default calibration settings:
- Auto-detect initial camera positions turned on
- If calibration fails to detect initial camera positions correctly, you can turn this checkbox off and set camera positions manually. But in most cases it is easier to re-record the calibration sequence.
- Auto-adjust camera FOV turned off
- Use this mode only if you did not know the camera field of view requested during project creation (if it was requested), or if it was left at its default value (this may be recommended for particular camera models).
- Click Calibrate Based on AI Joints button and wait while the system finishes calibration (4).
Complete Calibration
Follow the same Steps 6-8 described for flashlight calibration.
Recording Actor's Performance
Recommended Layout of an Action Video
- Enter the actor.
- Strike a T-pose.
- Action
T-pose
It is preferable to have the actor strike a “T-pose” before the actual action. The software needs the T-pose to build the actor appearance model used during tracking.
Takes
A take is a concept originating from cinematography. In a nutshell, a take is a single continuous recorded performance.
Usually it is a good idea to record multiple takes of the same motion, because a lot of things can go wrong for purely artistic reasons.
Iterations
A common problem with motion capture is “clipping” in the resulting 3D character animation, for example arms entering the body of the animated computer-generated character. Many CG characters have various items and attachments like a bullet-proof vest, fantasy armor or a helmet. It can be easy for an actor to forget about the shape of the CG model.
For this reason, you may need to schedule more than one motion capture session for the same motions. Recommended approach is:
- Record the videos
- Process the videos in iPi Mocap Studio
- Import your target character into iPi Mocap Studio and review the resulting animation
- Give feedback to the actor
- Schedule another motion capture session if needed
Ian Chisholm's hints on motion capture
Ian Chisholm is a machinima director and actor and the creator of critically acclaimed Clear Skies machinima series. Below are some hints from his motion capture guide based on his experience with motion capture for Clear Skies III.
Three handy hints for acting out mocap:
- Don’t weave and bob around like you’re in a normal conversation – it looks terrible when finally onscreen. You need to be fairly (but not completely) static when acting.
- If you are recording several lines in one go, make sure you have lead in and lead out between each one, i.e. stand still! Otherwise, the motions blend into each other and it’s hard to pick a start and end point for each take.
- Stand a bit like a gorilla – have your arms out from your sides:
Well, obviously not quite that much. But anyway, if you don’t, you’ll find the arms clip slightly into the models and they look daft.
If you have a lot of capture to do, you need to strike a balance between short and long recordings. Aim for 30 seconds to 2 minutes. Too long is a pain to work on later due to the fiddlyness of setting up takes, and too short means you are forever setting up T-poses.
Takes
Because motion capture is not a perfect art, and neither is acting, it’s best to perform multiple takes. I found that three was the best amount for most motion capture. Take less if it’s a basic move, take more if it’s complex and needs to be more accurate. It will make life easier for you in the processing stage if you signal the break between takes – I did this by reaching out one arm and holding up fingers to show which take it was.
Naming conventions
As it’s the same actor looking exactly the same each and every time, and there is no sound, and the capture is in lowres 320*200, you really need to name the files very clearly so that you later know which act, scene, character, and line(s) the capture is for.
My naming convention was based on act, scene, character, page number of the scene, line number, and take number. You end up with something unpleasant to read like A3S1_JR_P2_L41_t3 but it’s essential when you’ve got 1500 actions to record.
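If you script any part of your pipeline, a small helper can keep such names consistent. Here is a sketch based on the field layout of the example above (act, scene, character, page, line, take):

```python
# Sketch of the take-naming scheme from the example above.

def take_name(act, scene, character, page, line, take):
    return f"A{act}S{scene}_{character}_P{page}_L{line}_t{take}"

print(take_name(3, 1, "JR", 2, 41, 3))  # -> A3S1_JR_P2_L41_t3
```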
Processing Video from Multiple PS Eye Cameras
- Run iPi Mocap Studio
- Press Ctrl+N or push New button on toolbar to create new project (1)
- Choose recorded *.iPiVideo file
- Select "Action" project type
- To load camera calibration data, select corresponding calibration project file (.iPiCalib) or scene file (.iPiScene)
- Adjust the Region-of-Interest (ROI) (2). (The ROI should cover the part of the video that contains the motion.)
- Go to the T-pose frame. Using the character Move/Rotate controls, roughly align the model with the image of the actor in one view (3).
- Adjust the character height and proportions (on the Actor tab) to correspond to the actor (4).
- Push the Auto Detect Actor Colors button on the Actor tab to automatically adjust the model's colors.
- Go to the first frame of the Region of Interest (ROI) (5).
- Push the Refit Pose button on the Tracking tab (6).
- If the initial pose was recognized incorrectly, you can roughly adjust it manually and use auto-fit again.
- Push the Track Forward button on the Tracking tab (7).
Tracking Tips and Tricks for Multiple PS Eye Cameras
Light Settings
Ideally, lighting should be ambient. In this case the colors of clothes and body will look the same in all cameras, and you can leave the default light settings.
But in real life you may have non-uniform lighting, when most light comes from a window or a bright light source. In this case colors will be darker in cameras directed towards this light source, and lighter in the rest of the cameras. In such a situation, adjusting the light settings may substantially improve tracking.
You can find the light settings on the Scene tab.
- The light source position is marked by a yellow ball (1). Click and drag the light control to orbit the light source (2). Approximate positions are fine; you do not need to be very accurate.
- You can also change the light intensity (3). You can check how it influences tracking using the View > Pose Mismatch window (4).
Ground Height Fine-Tuning
Sometimes the ground height detected during calibration may differ from the actual one. This depends on the particular flashlight used for calibration, lighting conditions and other circumstances. Incorrect ground height may cause problems with feet tracking. The Ground Height Fine-Tuning setting allows you to correct the ground height manually.
- To make sure that the ground height is correct, you need to check if feet stand on the ground surface.
- Run Refit Pose for the T-pose frame and zoom/rotate the viewport to see the feet position clearly.
- If the feet are above the ground, the detected ground is lower than the actual one.
- If the legs are bent a little, the detected ground may be higher than the actual one.
- Use the Ground Height Fine-Tuning slider on the Scene tab to change the ground height, and re-run Refit Pose to check the updated ground position.
Using Pose Mismatch View
The Pose Mismatch window is a very useful tool that helps you understand how scene and actor settings affect tracking.
- The Pose Mismatch window is shown using the View > Pose Mismatch menu item
- The Mismatch number at the top evaluates how well the actor model matches the video in the current frame
- You need to run Refit Pose to match the actor model to the video before comparing Mismatch numbers
- A lower Mismatch number means a better match, so you need to minimize the Mismatch number when choosing settings
Checking Camera Calibration
A frequent cause of tracking errors is incorrect camera calibration, which can result from moving camera(s) after the calibration recording. You can use the T-pose to detect this problem:
- Run Refit Pose for T-pose after adjusting actor and scene settings
- Make sure that actor model is well aligned with video in all cameras
- If any camera was shifted, you should see the shift of the actor model relative to the actor's image on video (see the screenshot)
Manual Clean-up
Once initial tracking is performed on all (or part) of your video, you can begin cleaning up tracking errors (if any). Automatic Refinement and Filtering should be applied after clean-up.
Cleaning up tracking gaps
Tracking errors usually happen in a few specific video frames and propagate to multiple subsequent frames, resulting in tracking gaps. Examples of problematic frames:
- Occlusion (like one hand not visible in any of the cameras)
- Indistinctive pose (like hands folded on chest).
- Very fast motion with motion blur.
To clean up a sequence of incorrect frames (a tracking gap), you should use backward tracking:
- Go to the last frame of the tracking gap (or near it), to a frame where the actor's pose is distinctive (no occlusion, no motion blur, etc.).
- If necessary, use Rotate, Move and IK (Inverse Kinematics) tools to edit character pose to match actor pose on video.
- Turn off Trajectory Filtering (set it to zero) so that it does not interfere with your editing.
- Click Refit Pose button to get a better fit of character pose.
- Click Track Backward button.
- Stop backward tracking as soon as it comes close to the nearest good frame.
- If necessary, go back to remaining parts of tracking gap and use forward and backward tracking to clean them up.
Individual body parts tracking
In most cases tracking errors affect some of the limbs. The Individual Body Parts Tracking settings on the Tracking tab allow you to redo tracking for specified body parts.
- Tracking will be done for selected body parts only.
- Unselected body parts will keep the same rotations.
Cleaning up individual frames
To clean up individual frames you should use a combination of editing tools (Rotate, Move and Inverse Kinematics) and Refit Pose button.
Tracking errors that cannot be cleaned up using iPi Mocap Studio
Not all tracking errors can be cleaned up in iPi Mocap Studio using automatic tracking and Refit Pose button.
- Frames immediately affected by occlusion sometimes cannot be corrected. Recommended workarounds:
- Manually edit problematic poses (not using Refit Pose button).
- Record a new video of the motion and try to minimize occlusion.
- Record a new video of the motion using more cameras.
- Frames immediately affected by motion blur sometimes cannot be corrected. Recommended workarounds:
- Manually edit problematic poses (not using Refit Pose button).
- Edit problematic poses in some external animation editor.
- Record a new video of the motion using higher framerate.
- Frames affected by strong shadows on the floor sometimes cannot be corrected. Typical example is push-ups. This is a limitation of current version of markerless mocap technology. iPi Soft is working to improve tracking in future versions of iPi Mocap Studio.
Automatic Refinement and Filtering
Automatic Refinement and Filtering should be applied after Manual Clean-up, if there were tracking errors.
This final step is also called Post-Processing and includes:
- Tracking Refinement
- Jitter Removal
- Trajectory Filtering
Tracking refinement
After the primary tracking and clean-up are complete, you can optionally run the Refine pass (see the Refine Forward and Refine Backward buttons). It slightly improves the accuracy of pose matching and can automatically correct minor tracking errors. However, it takes somewhat more time than the primary tracking, so it is not recommended for quick-and-dirty tests.
It makes sense to run Refine:
- Using the same tracking parameters as the primary tracking (e.g. feet tracking, head tracking), in order not to lose previously tracked data.
- Before mixing in motion controller data.
- Before manually editing the animation (not related to automatic clean-up with Refit Pose).
In contrast to the primary tracking, Refine does no pose prediction; it is based only on the current pose in a frame. Essentially, running Refine is equivalent to automatically applying Refit Pose to a range of frames which were previously tracked.
Post-processing: Jitter Removal
- Jitter Removal filter is a powerful post-processing filter. It should be applied after cleaning up tracking gaps and errors.
- It is recommended that you always apply Jitter Removal filter before exporting animation.
- Jitter Removal filter suppresses unwanted noise and at the same time preserves sharp, dynamic motions. By design, this filter should be applied to relatively large segments of animation (no less than 50 frames).
- Range of frames affected by Jitter Removal is controlled by current Region of Interest (ROI).
- You can configure Jitter Removal options for specific body parts. Default setting for Jitter Removal “aggressiveness” is 1 (one tick of corresponding slider). Oftentimes, you can get better results by applying a slightly more aggressive Jitter Removal for torso and legs. Alternatively, you may want to use less aggressive Jitter Removal settings for sharp motions like martial arts moves.
- Jitter Removal filter makes an internal backup of all data produced by tracking and clean up stages. Therefore, you can re-apply Jitter Removal multiple times. Each subsequent run works off original tracking/clean-up results and overrides previous runs.
Post-processing: Trajectory Filtering
- Trajectory Filter is a traditional digital signal filter. Its purpose is to filter out minor noise that remains after the Jitter Removal filter (see the concept sketch after this list).
- Trajectory Filter is very fast. It is applied on-the-fly to current Region of Interest (ROI).
- The default setting for Trajectory Filter is 1. Higher settings result in multiple passes of the Trajectory Filter. It is recommended that you leave it at the default setting.
- Trajectory Filter can be useful for “gluing” together multiple segments of animation processed with different Jitter Removal options: change the Region of Interest (ROI) to cover all of your motion (e.g. multiple segments processed with different Jitter Removal settings), change the Trajectory Filtering setting to 0 (zero), then change it back to 1 (or another suitable value).
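For intuition about what a multi-pass trajectory filter does, below is a minimal moving-average sketch over a one-dimensional joint trajectory. It only illustrates the idea that higher settings mean more smoothing passes; it is not iPi Mocap Studio's actual filter.

```python
# Concept illustration: each "pass" is one smoothing sweep over the
# trajectory; a higher Trajectory Filter setting means more passes.

def trajectory_filter(samples, passes=1):
    for _ in range(passes):
        samples = [samples[0]] + [
            (a + b + c) / 3.0
            for a, b, c in zip(samples, samples[1:], samples[2:])
        ] + [samples[-1]]
    return samples

noisy = [0.00, 0.10, -0.05, 0.12, 0.00, 0.95, 2.05, 3.00]
print(trajectory_filter(noisy, passes=1))  # setting 1: one pass
print(trajectory_filter(noisy, passes=2))  # setting 2: smoother result
```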
Export and Motion Transfer
Animation Export
To export the tracked motion, follow the simple steps below.
- Select Export tab
- Select rig from the list of available rigs or import your custom model
- Note: The motion will be automatically transferred to the selected rig (except the Default iPi Rig, which does not require motion transfer). See details on motion transfer below.
- Press Export button or use File > Export Animation menu item to export all animation frames from within Region of Interest (ROI).
- Note: To export animation for specific take, right-click on take and select Export Animation item from pop-up menu.
- Select output file format
Motion Transfer
Default iPi Character Rig
The default skeleton in iPi Mocap Studio is optimized for markerless motion capture. It may or may not be suitable as a skeleton for your character. Default iPi skeleton in T-pose has non-zero rotations for all joints. Please note that default iPi skeleton with zero rotations does not represent a meaningful pose and looks like a random pile of bones.
By default, iPi Mocap Studio exports a T-pose (or a reasonable default pose for a custom rig after motion transfer) in the first frame of animation. If this is not desired, switch off the Export T-pose in first frame checkbox.
Other rigs
iPi Mocap Studio has integrated motion transfer technology that allows you to automatically transfer motion to a custom rig.
- Select Export tab
- Select rig from the list of available rigs or import your custom model
- Note: The motion will be automatically transferred to the selected rig (except the Default iPi Rig, which does not require motion transfer). You will be able to see the transferred motion in the viewport.
- You may need to assign bone mappings on the Export tab for motion transfer to work correctly.
- You can save your motion transfer profile to XML file for future use.
Starting with version 3.5, iPi Mocap Studio supports rotating an imported character into the proper orientation. This is useful for many popular characters, including the Unreal Engine standard character.
Starting with version 3.5, iPi Mocap Studio can map hips motion either to Root/Ground or to Hips/Pelvis. This is useful for game engine characters, including standard Unity 3D Engine and Unreal Engine characters.
Multiple Target Bones
Bone mapping allows you to specify multiple target bones. This can be used if:
- The target character is more detailed, so one bone in the default iPi character rig corresponds to multiple bones in the target character (e.g. your character has more spine bones)
- Your character has separate bones for swing and twist rotation channels
To map a source character bone to multiple target bones, use the Add a target bone item in the Manage target bones context menu. You then set weights for splitting the source rotation, as in the hypothetical illustration below.
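As a hypothetical illustration of weight splitting (the exact math iPi Mocap Studio uses internally is not documented here), consider one source spine bone's single-axis rotation distributed over two target spine bones:

```python
# Hypothetical example: splitting one source bone's rotation across
# multiple target bones by weight (single rotation axis for simplicity).

source_bend_deg = 40.0                    # rotation of one iPi spine bone
targets = {"Spine1": 0.6, "Spine2": 0.4}  # assumed weights, summing to 1.0

for bone, weight in targets.items():
    print(f"{bone}: {source_bend_deg * weight:.1f} degrees")
```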
Export Pipelines for Popular 3D Packages
MotionBuilder
Select Motion Builder target character on Export tab and export animation to BVH or FBX.
3D MAX Biped
- Select 3ds Max Biped target character on Export tab and export animation to BVH or FBX.
- Create a Biped character in 3D MAX (Create > Systems > Biped).
- Put your Biped character to your 3d scene.
- Go to Motion tab. Click Motion Capture button and import your BVH or FBX file.
Our user Cra0kalo created an example Valve Biped rig for use with 3D MAX. It may be useful if you work with Valve Source Engine characters.
Maya
Latest versions of Maya (starting with Maya 2011) have a powerful biped animation subsystem called "HumanIK". Animations exported from iPi Mocap Studio in MotionBuilder-friendly format should work fine with Maya 2011 and HumanIK. The following video tutorials can be helpful:
- Maya HumanIK Mocap retarget with iPi Mocap Studio, by Wes McDermott
- Non-Destructive Live Retargeting — Maya 2011 New Features
- Motion Capture Workflow With Maya 2011
For older versions of Maya please see the #Other_rigs section. Recommended format for import/export with older versions of Maya is FBX.
Unreal Engine
iPi Mocap Studio has built-in motion transfer profiles for the UE4 Unreal Mannequin (the default Unreal character) and the MetaHuman character. So select the corresponding target character, export the animation to FBX, and then import it into Unreal.
FBX
iPi Mocap Studio supports FBX format for import/export of animations and characters. When exporting animation, you are presented with several options:
- Which version of FBX format to use, ranging from 6.1 (2010 product line) to 7.4 (2015 product line)
- Produce text or binary file
The default values are defined by an imported character (if any); otherwise they are set to the most recently used values.
Some applications do not use the latest FBX SDK and may have problems importing FBX files of newer versions. In case of such problems, you can use Autodesk's free FBX Converter to convert your animation file to an appropriate FBX version.
COLLADA
iPi Mocap Studio supports the COLLADA format for import/export of animations and characters. The current version of iPi Mocap Studio exports COLLADA animations as matrices. If you encounter incompatibilities with other applications' implementations of the COLLADA format, we recommend using Autodesk's free FBX Converter to convert your data between FBX and COLLADA formats. FBX is known to be more universally supported in many 3D graphics packages.
LightWave
The recommended format for importing target characters from LightWave into iPi Mocap Studio is FBX. The recommended format for bringing animations from iPi Mocap Studio to LightWave is BVH or FBX.
SoftImage|XSI
Our user Eric Cosky published a tutorial on using iPi Mocap Studio with SoftImage|XSI:
https://www.ipisoft.com/forum/viewtopic.php?f=13&p=9660#p9660
Poser
- Export your poser character in T-pose in BVH format (File > Export).
- Import your Poser character skeleton into iPi Mocap Studio. Your animation will be transferred to your Poser character.
- Export your animation to BVH format.
- Import exported BVH to Poser
A workaround for the Poser 8 wrists bug is to chop off the wrists from your Poser 8 skeleton (for instance, using BVHacker) before importing the Poser 8 target character into iPi Mocap Studio. Missing wrists should not cause any problems during motion transfer in iPi Mocap Studio if your BVH file is edited correctly. Poser will ignore the missing wrists when importing the resulting motion, so the motion will look right in Poser (wrists in the default pose, as expected).
DAZ 3D Genesis 8
- Just select built-in Genesis 8 Male or Genesis 8 Female rig as target character.
- Export your animation to BVH format.
- Import the exported BVH into DAZ Studio. Before importing, turn off limits and locks for all joints:
- Right-click the root node of the figure > Select > Select Children.
- Edit > Figure > Limits > Limits Off.
- If you need to unlock nodes, use Edit > Figure > Locks > Unlock Selected Node(s).
- Then import the BVH file you exported from iPi Mocap Studio.
iClone 8
If you use default iClone character, our users recommend this simple workflow:
- Select Motion Builder target character on Export tab and export animation to BVH.
- Select Mixamo Default Character profile when importing BVH to iClone.
iClone 3
Workflow for iClone is straightforward.
- Select iClone target character on Export tab and export animation to BVH.
- Go to Animation tab in iClone and launch BVH Converter.
- Import your BVH file with the Default profile and click Convert.
- Save the resulting animation in iMotion format. Now your animation can be applied to iClone characters.
Valve Source Engine SMD
Transfer motions to your Valve Source Engine character (stored in .smd file) and export your animation in Valve Source Engine SMD format.
Our user Cra0kalo created an example Valve Biped rig for use with 3D MAX. It may be useful if you wish to apply more than one capture through MotionBuilder or edit custom keyframes in MAX.
Valve Source Filmmaker
DMX
First, you need to import your character (or its skeleton) into iPi Mocap Studio, for motion transfer.
There are currently 3 ways of doing this:
- You can import an animation DMX (in the default pose) into iPi Mocap Studio. Since it has a skeleton, it should be enough for motion transfer. To create an animation DMX with the default pose, you can add your character to your scene in Source Filmmaker and export a DMX for the corresponding animation node:
- open Animation Set Editor Tab;
- click + > Create Animation Set for New Model;
- choose a model and click Open;
- export animation for your model, in ASCII DMX format;
- There is a checkbox named Ascii in the top area of the export dialog.
- Alternatively, you can just import an SMD file with your character into iPi Mocap Studio. For example, SMD files for all Team Fortress 2 characters can be found in your SDK in a location similar to the following (you need to have Source SDK installed): C:\Program Files (x86)\Steam\steamapps\<your steam name>\sourcesdk_content\tf\modelsrc\player\pyro\parts\smd\pyro_model.smd).
- If you created a custom character in Maya, you should be able to export it in DMX model format. (Please see Valve documentation on how to do this.)
Then you can import your model DMX into iPi Mocap Studio. The current version of iPi Mocap Studio cannot display the character skin, but it does display the skeleton, which is enough for motion transfer.
To export animation in DMX, press Export Animation button on the Export tab in iPi Mocap Studio and choose DMX from the list of supported formats. You may also want to uncheck Export T-pose in first frame option on the Export tab in iPi Mocap Studio.
Now you can import your animation into Source Filmmaker. There will be some warnings about missing channels for face bones but you can safely ignore them.
Old way involving Maya
This method was used until iPi Mocap Studio got DMX support, and it may still be useful in case of any troubles with DMX. Please see the following video tutorial series:
http://www.youtube.com/playlist?list=PLD4409518E1F04270
Blender
You can export default iPi rig or transfer motion to your custom rig following general instructions.
Cinema4D
If you have experience with Cinema4D please help to expand this Wiki by posting Cinema4D import/export tips to Community Tutorials section of our user forum.
Evolver
Transfer motions to your Evolver character (stored in COLLADA or FBX file) and export your animation.
Evolver offers several different skeletons for Evolver characters. Here is an example motion transfer profile for Evolver "Gaming" skeleton: evolver_game.profile.xml
Second Life
Transfer motions to your Second Life character (stored in BVH file) and export your animation in BVH format.
The SecondLife documentation contains a link to useful SL avatar files. The ZIP file includes a BVH of the "default pose"; be sure to have it.
See the discussion on our Forum for additional details: https://www.ipisoft.com/forum/viewtopic.php?f=2&p=7845
Massive
Please see our user forum for a discussion of animation import/export for Massive:
https://ipisoft.com/forum/viewtopic.php?f=12&t=3233
IKinema WebAnimate
Please see the following video tutorial on how to use iPi Mocap Studio with IKinema WebAnimate:
http://www.youtube.com/watch?v=a-yJ-O02SLU
Jimmy|Rig Pro
Please see the following video tutorial on how to use iPi Mocap Studio with Jimmy|Rig Pro:
http://www.youtube.com/watch?v=wD1keDh3fCk
Video Materials
For video materials, please refer to our Gallery.