LIDAR Magazine

Application of Structured Light Scanning in Generating a 3D Human Head Model

A 1.347Mb PDF of this article as it appeared in the magazine complete with images is available by clicking HERE

Using a Laser Scanner in Animation and Games
In many computer graphic applications such as animation, computer games, human computer interaction and virtual reality, the creation of 3D models of human bodies is required. To describe a person, the appearance of his/ her head is critical to conveying his/her characteristics and personality to others.

There are several techniques for obtaining 3D information about objects (i.e., 3D reconstruction). The most common approach is an image-based method that uses 2D cameras, reconstructing from one, two, or multiple images. However, this technique relies entirely on the skills of artists to design realistic human geometry models from the images.

Recently, 3D laser scanning (Light Detection and Ranging, LIDAR) has emerged as a novel method to obtain 3D information in the film and gaming industry, whether for pre-visualization of scenes or, in post-production, to create stunning CGI and visual effects (VFX). The makers of modern animation movies and video games generate 3D models with a 3D laser scanner to come ever closer to reality. Capturing human faces in detail and achieving a convincing level of realism in a reasonable time is not currently possible with other methods; the speed and accuracy of laser scanning allow 3D models to be generated at a high level of detail and precision in a matter of minutes.

For this study, we used the NextEngine desktop structured light scanner, which is able to obtain accuracies of 0.005 inches (RMS) and resolutions on the order of 4×10⁻⁵ in. Structured light is very similar to laser scanning in its end results; however, in structured light a pattern is projected onto the surface of the object and a separate camera captures how that pattern bends over the surface. Such scanners typically have a relatively small field of view and work at close range (within 1-2 ft). A laser scanner, in contrast, actively emits laser pulses that reflect off the surface and return to the scanner; laser scanners have a much larger field of view and range. In this article, we demonstrate the application of this micron-level scanner in face scanning.

3D Scanning
The NextEngine 3D scanner has basic scan settings for distance modes, light settings, and sampling densities.

Distance: There are three distance modes available in the scanner: Extended, Wide, and Macro. Table 1 shows the optimum range and approximate size of the field of view for each mode. Since we want to scan the whole face, the Extended mode was selected as the best option.

Light: The light level controls how much light the scanner shines on the object. Prior scans showed that the "Neutral" level gives the best coverage; the "Dark" and "Light" settings seem to either reflect too much light or not return enough. It is worth mentioning that external light sources influenced our scans and created large occlusions. To alleviate this, scanning was done in a dark room.

Sampling Density: The most challenging parameter in face scanning was resolution, or laser point density. The scanner can collect data at densities from 1,600 points/in² to 40,000 points/in². In the scanner this range is divided into three settings: "Quick", "Standard", and "High" density.
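To build intuition for these areal densities, one can convert them into an average spacing between samples. A minimal sketch, assuming roughly uniform grid sampling (the scanner's actual sampling pattern may differ):

```python
import math

def point_spacing(density_pts_per_in2: float) -> float:
    """Average spacing (inches) between samples on a uniform grid
    at the given areal density (points per square inch)."""
    return 1.0 / math.sqrt(density_pts_per_in2)

# The density range quoted for the scanner:
for d in (1_600, 40_000):
    print(f"{d:>6} points/in^2 -> ~{point_spacing(d):.4f} in between points")
```

At 1,600 points/in² the samples sit roughly 0.025 in apart; at 40,000 points/in² the spacing shrinks to about 0.005 in, on the order of the scanner's stated accuracy.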

At first sight, one may be tempted to use the highest possible resolution to obtain all details; however, for game rendering we must consider efficiency. As such, the distribution of polygons has to be chosen carefully to minimize redundancy and to provide something animatable. On the other hand, sampling that is too sparse will not provide sufficient detail.

For this investigation, our goal was to model the face from ear to ear. A single scan cannot cover the full geometry of the face, so several scans must be acquired; three scans from different angles can cover the desired area. All three scans must then be registered, i.e., merged into a common coordinate system.
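Registration software typically runs an iterative closest point (ICP) loop, and at the core of each iteration is a closed-form least-squares rigid fit. A minimal sketch of that fitting step (the Kabsch algorithm) in Python/NumPy, assuming the corresponding points between two scans have already been paired (which is the hard part ICP solves):

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (Kabsch): returns rotation R and
    translation t such that R @ src_i + t ~ dst_i, for corresponding
    (N x 3) point sets src and dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known 30-degree rotation about z plus a shift.
rng = np.random.default_rng(0)
src = rng.random((50, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_align(src, dst)
```

In practice the scanner's software alternates this step with re-pairing each point to its nearest neighbor in the other scan until the alignment converges, which is why having enough well-distributed points matters.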

Despite the steady increase in accuracy, most available scanning techniques produce scanning artifacts: noise and outliers at high density, and holes or ghost geometry at low density. The registration step in particular suffers when points are lacking, which makes alignment procedures more difficult; with only a few points available, registration can produce a poor alignment. At high density, on the other hand, errors can accumulate. This is why it is important to find the optimum scanning resolution.

Results
All scans were taken in the Extended mode with Neutral light setting. Each density mode (Quick, Standard and High) contains three scans with different angles to cover all face geometry.

Figure 1 shows the 3D model generated after alignment of the three scans at the Quick density setting of 24 points per square inch. As can be seen, in this mode the scanner is not able to capture sufficient detail in the face and body (shoulders).

You may ask, how about scanning at a higher density? Figure 2 shows the 3D model at the Standard density setting of 125 points per square inch. Here the face point cloud appears smooth, while there are still some problems with body parts. The improved detail in this mode led us to expect that collecting at the highest density would provide the best 3D model with the most detail. Figure 3 shows the result of using the "High" density mode.

Unfortunately, the 3D model at the highest density does not capture surface characteristics properly: the excess polygons appear as noise, creating many small dimples across the surface. On the other hand, this model shows the chest and shoulders in good detail.

Based on the 3D models generated at the 125 points/in² and 2,000 points/in² density modes, it seems reasonable to merge these models into an optimal model with good quality in both face and body (Figures 4 and 5). Figure 4 shows the smoothness of the final model, while Figure 5 shows the wireframe triangulation. In Figure 4, the merged model is smooth with the fewest holes in all parts. Examining the distribution of regular (near-equilateral) triangles, shown in Figure 5, is another way to evaluate the quality of the generated model. A wide variety of mesh simplification techniques are also available that preserve small triangles in areas of high curvature and use larger triangles in flat areas to reduce file size. Table 2 shows the properties of the generated 3D models.
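The near-equilateral criterion can be quantified per triangle. One common shape-quality measure (sketched here for illustration; not necessarily the metric used by the scanner's software) is 4√3 times the area divided by the sum of squared edge lengths, which equals 1 for an equilateral triangle and approaches 0 as the triangle degenerates into a sliver:

```python
import math

def triangle_quality(p0, p1, p2) -> float:
    """Shape quality in (0, 1]: 4*sqrt(3)*area / (sum of squared edge
    lengths). 1.0 for an equilateral triangle, near 0 for a sliver."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    e0, e1, e2 = sub(p1, p0), sub(p2, p1), sub(p0, p2)
    area = 0.5 * math.sqrt(sum(c * c for c in cross(e0, sub(p2, p0))))
    perim2 = sum(sum(c * c for c in e) for e in (e0, e1, e2))
    return 4.0 * math.sqrt(3.0) * area / perim2

# Equilateral triangle of side 1 scores 1.0; a sliver scores near 0.
q_eq = triangle_quality((0, 0, 0), (1, 0, 0), (0.5, math.sqrt(3) / 2, 0))
q_thin = triangle_quality((0, 0, 0), (1, 0, 0), (2, 0.001, 0))
print(round(q_eq, 3), round(q_thin, 3))  # 1.0 0.001
```

Averaging this score over all faces of a mesh gives a single number for comparing the triangulation quality of the merged models.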

Conclusions
Laser scanning is very useful for creating 3D models for animations and video games, because one can quickly scan a real, existing object and bring it into the virtual world. 3D laser models can be imported into graphics platforms such as OpenGL or Maya in various formats for further work (e.g., skeletal animation); the OBJ (Object) format, which natively supports texture mapping (e.g., imagery), is commonly used to transfer the models.
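Because OBJ is a plain-text format, exporting a scanned mesh is straightforward. A minimal illustrative writer (a hypothetical helper, not part of the scanner's software) that emits vertex positions, optional texture coordinates for the mapped imagery, and triangular faces with the 1-based indices the OBJ format requires:

```python
def write_obj(path, vertices, faces, uvs=None):
    """Write a minimal Wavefront OBJ file: 'v' vertex positions, optional
    'vt' texture coordinates, and 'f' triangular faces (1-based indices)."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        if uvs:
            for u, v in uvs:
                f.write(f"vt {u} {v}\n")
        for a, b, c in faces:
            if uvs:
                # v/vt pairs; here each vertex reuses its own uv index
                f.write(f"f {a}/{a} {b}/{b} {c}/{c}\n")
            else:
                f.write(f"f {a} {b} {c}\n")

# One textured triangle:
write_obj("tri.obj",
          vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          faces=[(1, 2, 3)],
          uvs=[(0, 0), (1, 0), (0, 1)])
```

A file written this way opens directly in Maya, Blender, or any OBJ-aware viewer, which is much of the reason OBJ remains the common interchange format for scanned models.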

Through laser scanning, more realistic-looking animations become possible. However, one should be aware that the optimal level of realism requires a balance between the additional processing needed to fill holes in low-density models and the smoothing of noise visible in high-resolution models.

Hamid Mahmoudabadi is a Ph.D. graduate research assistant of Geomatics in the School of Civil and Construction Engineering at Oregon State University. His research interests include algorithms for processing terrestrial laser scanning data, including registration, matching, segmentation, object recognition, and 3D scene modeling using digital images and computational methods.

Michael Olsen is an Assistant Professor of Geomatics in the School of Civil and Construction Engineering at Oregon State University. He currently chairs the ASCE Geomatics Spatial Data Applications Committee and is an Associate Editor for the ASCE Journal of Surveying Engineering.

