Recently, research teams from the Advanced Innovation Center for Future Visual Entertainment at Beijing Film Academy and from Shandong University (SDU), both led by Chen Baoquan, Dean of the School of Computer Science and Technology and the Software College, SDU, together with collaborators from Israel and Canada, developed an innovative technique that reconstructs challenging 3D objects more completely.
The new approach acquires 3D shape with water, much like a CT scan, turning surface reconstruction into a volumetric problem. Most notably, it can reconstruct even the hidden parts of an object more accurately than common laser scanners, opening up more applications at a better price-to-performance ratio. The work was presented at SIGGRAPH 2017 in Los Angeles, the top-tier conference in the field of computer graphics. The related paper, "Dip Transform for 3D Shape Reconstruction", covering both the algorithm design and the device prototype, was completed by a team from the Advanced Innovation Center for Future Visual Entertainment of Beijing Film Academy and SDU led by Chen Baoquan, together with researchers from Tel Aviv University, Ben-Gurion University, and the University of British Columbia; SDU was responsible for the entire hardware.
The project was supported by the Major State Basic Research Development Program of China ("973" Program), Key Projects of the National Natural Science Foundation of China, and the Chinese-Israeli International Special Projects Cooperation.
The new approach for data acquisition: turning shape modeling of 3D objects into a volumetric problem
The most notable aspect of this research is that the new method can measure the hidden parts of 3D objects. Traditional 3D shape acquisition and reconstruction methods are based on optical devices, most commonly laser scanners and cameras, which sample the visible surface of a shape. But this common approach has limitations: most devices cannot sample regions hidden from light, capture only incomplete data inside narrow slots or around bulges, and fail on certain transparent materials.
To solve these problems, the researchers used liquid to acquire the shape of an object. By dipping an object into the liquid, they could measure the displacement of the liquid volume and reconstruct the object's surface from that information. Liquid has great advantages: it penetrates complex surfaces and cavities, and its displacement can be measured without regard to the refraction or polarization of light, thus bypassing various limitations of optical and laser-based scanning devices.
In the research, the team built a compact "3D dipping apparatus": a robotic arm dips the object into a water tank, and measuring the water displacement yields the object's cross-sections at the current angle. By dipping the object many times at different angles, multiple cross-sections can be acquired to accurately compute the geometry of the object, including parts that are difficult to capture with laser scanning devices.
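The measurement principle described above can be illustrated with a toy numpy sketch (not the paper's actual code): a 2D binary occupancy grid stands in for the object, the displaced volume is read after each dip step, and differencing successive readings recovers the area of each slice along the dip direction. The grid values and shapes here are invented for illustration.

```python
import numpy as np

# Hypothetical 2D "object": a binary occupancy grid (1 = solid).
shape = np.zeros((8, 8), dtype=int)
shape[2:6, 3:7] = 1          # a solid block ...
shape[3:5, 4:6] = 0          # ... with a recess a laser might miss

def dip_transform(grid):
    """Displaced volume after dipping to each successive depth (row)."""
    per_row = grid.sum(axis=1)   # cross-sectional area of each slice
    # Cumulative displacement, as read from the liquid level between steps.
    return np.cumsum(per_row)

displaced = dip_transform(shape)
# Differencing the level readings recovers each slice's area, mirroring
# the step-by-step dip-and-read procedure for one dip angle.
slice_areas = np.diff(displaced, prepend=0)
print(slice_areas.tolist())   # -> [0, 0, 4, 2, 2, 4, 0, 0]
```

Repeating this for many dip angles yields the family of slice measurements that the reconstruction stage combines.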
Dipping the 3D elephant into the water tank at different angles and recording the displacement yields different volume slices of the shape
(From left to right) Results after 100, 500, and 1,000 dips respectively. More dips at varied angles yield a more accurate shape
CT with water: more accurate results and wider range of application
This research adopts the principle of CT scanning. CT devices are bulky and expensive, and can only be operated in specialized environments. By comparison, the team's dip transform reconstructs complex 3D shapes at low computational cost, offers better cost performance and a wider range of applications, and the dipping device is easy to build.
The main challenges of the research lie in two areas: first, how to reconstruct complete shape information from many partial measurements; second, how to measure the displacement of the liquid volume repeatedly and accurately.
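The first challenge, combining partial measurements into a complete shape, can be hinted at with a drastically simplified toy example (not the paper's optimization): each dip direction contributes only per-slice areas, i.e. marginal sums of the unknown occupancy. With just two orthogonal "dip" directions, iterative proportional fitting recovers a density consistent with both marginals. All values here are invented for illustration.

```python
import numpy as np

# Unknown shape (for generating measurements only).
true = np.array([[0, 1, 1],
                 [1, 1, 0],
                 [0, 1, 0]], dtype=float)
row_sums = true.sum(axis=1)   # slice areas from one dip direction
col_sums = true.sum(axis=0)   # slice areas from the orthogonal direction

# Start from a uniform guess and alternately rescale to match
# each direction's measurements.
est = np.ones_like(true)
for _ in range(200):
    est *= (row_sums / est.sum(axis=1))[:, None]   # match row marginals
    est *= (col_sums / est.sum(axis=0))[None, :]   # match column marginals

print(np.round(est, 2))
```

Two directions leave the shape ambiguous (the estimate matches both marginals but is not binary), which is precisely why the apparatus dips the object at many angles.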
The paper also presents other examples of reconstructing challenging 3D objects, showing that the dip-transform result is nearly identical to the original 3D model; for hidden and complex parts, the reconstruction is better than structured-light scanning. To address the slow data acquisition (the robotic arm dips the object vertically step by step, and the liquid level must be read between steps), the team is developing new methods such as a continuous dip-and-read process and sparse-recovery techniques based on compressed sensing.
Comparison of reconstructions: (a) objects in the water tank; (b) 3D-printed objects; (c) results of structured-light scanning; (d) results of 3D reconstruction through dipping
In theory, the technology is mature enough for widespread application of the dip transform; the key is to develop devices tailored to the target object's volume and accuracy requirements so as to achieve the best cost performance.
Chen Baoquan, Dean of the School of Computer Science and Technology and the Software College, Shandong University, is a member of the national "Ten Thousand Talents" Program, a Chair Professor of the "Yangtze River Scholar" program, and a recipient of the NSFC "Outstanding Young Researchers" award. He also serves as a researcher and doctoral supervisor at the Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, and as chief scientist of the 973 Program project "Computational Theory and Method of Big Urban Data" and of the Advanced Innovation Center for Future Visual Entertainment, Beijing Film Academy. His research interests lie in computer graphics and visualization, focusing on large-scale city 3D modeling with mobile laser scanning and on massive data visualization. He has published more than 100 papers in international journals and conferences, including ACM SIGGRAPH, IEEE VIS, and ACM TOG.
Translated by: Lang Cuicui, Xing Chenyang
Edited by: Xie Tingting