Triangulation method of projection scanning as a basis of the combined system for input of three-dimensional images
Mordvinov A. A. Triangulation method of projection scanning as a basis of the combined system for input of three-dimensional images // Molodoy Uchenyi (Young Scientist). 2011. No. 12. Vol. 1. pp. 92–95. URL: https://moluch.ru/archive/35/4072/ (accessed: 25.02.2018).
There are many different ways to obtain 3D measurements, and many types of scanners, each based on one of them. Among the numerous methods for obtaining a three-dimensional model of an object, several key ones can be identified: the triangulation method, methods based on measuring the time of flight of a signal, the phase-shift method, and others.
Currently, the development of information technology enables the measurement of the geometrical parameters of three-dimensional objects. However, to date there are no three-dimensional scanners suitable for everyday use that can obtain a three-dimensional model of an object using only an ordinary webcam and a projector. Non-contact measurement of geometrical parameters allows nondestructive testing of product parameters.
The most promising are optical techniques, which improve measurement accuracy, raise productivity, and extend the measurement range.
Three-dimensional contouring is an important research topic in industrial inspection, computer vision, navigation, rapid prototyping, reverse engineering, and object modeling. Nowadays, contouring is achieved by noncontact systems based on lighting methods [1, p. 270]. These kinds of sensors use methods such as fringe projection, line projection, spot projection, time of flight, and interferometry. Much research has been concentrated on these sensors, and new techniques are still being developed [3, p. 286]. Today, commercial solutions are available and used by the scientific community. Unfortunately, there is no scanning device capable of producing a three-dimensional model of an object immediately, without special conditions being provided, so every such system is narrowly applicable rather than universal. Moreover, these sensors are still very expensive, a long time is required to obtain the object reconstruction, and manual operations are required for data collection [2, p. 1660]. Therefore, the task arises of developing a new universal system that combines several methods for determining the topography and constructing a three-dimensional model of an object and that does not depend on the scanning conditions; research is now focused on low cost, good accuracy, and fast processing.
In our work on the design of the combined system for the input of three-dimensional images, we use two methods: the triangulation method and the phase-shift method. In this article we briefly consider only the first, since it is fundamental and serves as the basis of the system.
Triangulation method and mobile setup for object contouring
In this research, active triangulation with lighting methods has been used to perform the contouring via image processing. In active triangulation, the distance between the image sensor and the laser projector provides the depth resolution. But in a static setup, holes in the recovered surface occur due to the limited field of view of the image sensor and to depth variation. Therefore, occlusions appear, and there are problems in detecting small details. In this case, the object reconstruction is not complete [4, p. 70].
To overcome these limitations, the object is profiled from different views to obtain the complete object. This is done by using multiple cameras or a mobile setup. Also, fringe projection, line projection, and spot projection have been applied to acquire different views of the object. In fringe projection, the object surface is retrieved by applying a phase detection algorithm. Then, the phase is converted to actual dimensions based on the setup geometry. In line projection and spot projection, the object depth is computed by triangulation using the position of the light pattern and the setup geometry. These kinds of optical sensors have been successfully applied to detect complete objects.
A mobile setup avoids occlusions and improves the resolution. However, a new equation must be deduced to compute the object depth in each modification of the geometry. This step includes a new measurement of the modified geometry and the determination of the parameters of the vision system. According to these considerations, modeling of the mobile setup is required to retrieve the object depth automatically at any camera position. Also, modeling of the mobile setup is necessary to improve its performance.
Modeling of a mobile setup is performed to achieve contouring of a complete object. The proposed model provides an equation that computes the object depth at any camera position. The mobile setup is implemented by an electromechanical device, which moves the camera and the object on an axis. To perform the contouring, the object is moved and scanned by a laser line. (To simulate the laser beam, an ordinary projector is used that displays the required pattern on the object.) Based on the deformation of the laser line, the triangulation algorithm generates a model to compute the object dimension by means of the camera position. To detect the small details, the setup begins with a long distance between the laser line and the camera. When an occlusion of the laser line appears, the camera is moved toward the laser line to detect the occluded region.
For this mobile setup, the object dimension is proportional to the deformation of the laser line. Also, the deformation depends on the camera position. Thus, the algorithm computes the object dimension by means of the laser line deformation and the camera position. Also, this algorithm provides the intrinsic and extrinsic parameters of the vision system. In this manner, parameters such as the focal length, camera orientation, and distances in the setup geometry are deduced by computer algorithms. Thus, the mobile setup performs the contouring automatically.
In the reconstruction system, the produced information is stored in an array memory to obtain the complete object shape. This computational process improves the performance, the resolution, and the accuracy of the reconstruction system. This procedure represents a new contribution to laser-line projection methods. The experimental results are evaluated based on the root mean square error. The evaluation of these results includes measurement error, resolution, processing time, range of measurement, and limitations of the CCD array. In this evaluation, good repeatability is achieved.
Shape detection by means of multiple views is an important task in optical metrology and computer vision. In the mentioned methods, the vision parameters are computed to achieve the measurement of the object shape. Typically, these parameters are obtained by a procedure external to the reconstruction system. In the proposed mobile setup, the object contouring is performed by an automatic vision system. This means that the extrinsic and intrinsic parameters of the vision system are deduced by computational algorithms.
The mobile setup is shown in Fig. 1. This setup includes an electromechanical device, a CCD camera, a laser line projector, and a computer. In the electromechanical device, the object is moved along the x axis by means of a platform and control software. On the object, a laser line is projected to perform the scanning. In each step of the movement, the CCD camera captures the laser line. The camera is aligned at an angle to the object surface. This camera can be moved, independently of the laser projector, along the x axis. Every laser line is deformed at the image plane according to the object surface. The relationship between the laser line deformation and the object dimension is evaluated. Thus, the contouring of the object shape is performed.
Fig. 1 Experimental mobile setup.
The relationship between the position of the laser line and the object depth is described by the geometry shown in Fig. 2. For this geometry, the reference plane is the platform of the electromechanical device. In this reference plane, the three-dimensional Cartesian coordinates are defined. The coordinates (x, y) are on the reference plane, and the coordinate z is perpendicular to the coordinates (x, y). The plane (x, y) is the reference from which the object depth is measured. The reference z = 0 is obtained based on the projection of the laser line on the reference plane. In this case, the coordinate of the laser line on the x axis is the same as on the y axis. In the geometry of Fig. 2, the x axis and y axis are located on the reference plane, and the object depth is indicated by h(x, y). The points A and B correspond to the projections of the laser line on the reference plane and on the object surface, respectively. The laser line is deformed in the image plane due to the surface variation and the camera position. Thus, the coordinate of the laser line is changed from xA to xB in a step of the scanning. This displacement of the laser line is described by
s(x,y) = xA − xB (1)
The object dimension is proportional to the displacement s(x, y).
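This proportionality can be sketched as a small calculation. The function below is a minimal illustration of active-triangulation depth recovery under the simple similar-triangles geometry of Fig. 2; the numeric values (pixel resolution, baseline, standoff) are assumptions for the example, not the paper's calibrated parameters.

```python
def depth_from_displacement(s_px, resolution_px_per_mm, baseline_mm, standoff_mm):
    """Convert a laser-line displacement s (in pixels) to object depth h (in mm).

    Assumes a simple similar-triangles geometry: the depth is proportional
    to the displacement s(x, y), scaled by the setup geometry.
    """
    s_mm = s_px / resolution_px_per_mm       # pixel displacement -> millimetres
    return s_mm * standoff_mm / baseline_mm  # similar triangles: h is proportional to s

# Example with assumed geometry: 2.585 px/mm image resolution, 300 mm
# camera-to-laser baseline, 500 mm standoff from the reference plane.
h = depth_from_displacement(s_px=10.0, resolution_px_per_mm=2.585,
                            baseline_mm=300.0, standoff_mm=500.0)
```

In the actual mobile setup the scale factors change with the camera position, which is exactly why the model of Sec. "Triangulation method" recomputes the geometry for each camera placement.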
Fig. 2 Geometry of the experimental setup.
To detect the displacement, the maximum of the laser line is measured in the image. To do so, the pixels of each row are approximated by a continuous function.
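One common choice of continuous function for this sub-pixel peak detection is a parabola fitted through the brightest pixel and its two neighbours; the paper does not name the specific function, so the quadratic fit below is an illustrative assumption.

```python
import numpy as np

def subpixel_peak(row):
    """Locate the laser-line maximum in one image row with sub-pixel accuracy.

    The brightest pixel and its two neighbours are fitted with a parabola;
    the vertex of the parabola gives the peak position as a fraction of a pixel.
    """
    row = np.asarray(row, dtype=float)
    i = int(np.argmax(row))
    if i == 0 or i == len(row) - 1:
        return float(i)                     # peak at the border: nothing to fit
    y0, y1, y2 = row[i - 1], row[i], row[i + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:
        return float(i)                     # flat neighbourhood: keep pixel position
    return i + 0.5 * (y0 - y2) / denom      # vertex of the fitted parabola

# A symmetric intensity profile peaks exactly at its central pixel:
print(subpixel_peak([0, 10, 50, 100, 50, 10, 0]))  # -> 3.0
```

For an asymmetric profile the estimate shifts toward the brighter neighbour, which is what makes the displacement in Eq. (1) measurable to a fraction of a pixel.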
To simulate the laser beam, an ordinary projector is used that displays the required pattern on the object. To increase the accuracy, in the process of obtaining the three-dimensional model the object is scanned in two directions: horizontally and vertically. Figs. 3(a) and 3(b) show the images with the horizontal and vertical positions of the laser line, while Fig. 3(c) shows the total lattice of scan-line positions, which contains the information about the relief of the object. Typically, occlusions of the laser line appear in the initial configuration due to the surface variation. This lack of data is observed as a broken contour of the line. To avoid the occlusion, the CCD camera is moved toward the laser projector, or the object is rotated about its axis. In this manner, the occlusion is avoided and the object contour is completed. However, the scale factor of these contours is not the same, because the contours are computed at different camera positions. In the model of the mobile setup, the scale factor is corrected according to the camera position.
Fig. 3 (a) Horizontal laser line projected on the object. (b) Vertical laser line projected on the object. (c) The total lattice positions of the scanning line.
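The depths recovered from the two scan directions have to be combined into the single lattice of Fig. 3(c). The merge rule below (take whichever scan sampled a point, average where both did) is an assumption for illustration; the paper only states that the two directions are combined.

```python
import numpy as np

def merge_scans(h_scan, v_scan):
    """Merge depth grids from the horizontal and vertical scans.

    h_scan, v_scan: 2-D arrays of depths, with NaN where a scan did not
    sample the point. Where both scans sampled a point, their depths are
    averaged; where only one did, its value is used.
    """
    merged = np.where(np.isnan(h_scan), v_scan, h_scan)
    both = ~np.isnan(h_scan) & ~np.isnan(v_scan)
    merged[both] = 0.5 * (h_scan[both] + v_scan[both])  # average the overlap
    return merged

h = np.array([[1.0, np.nan], [np.nan, 3.0]])
v = np.array([[2.0, 2.0], [np.nan, 5.0]])
grid = merge_scans(h, v)  # overlapping cells averaged, gaps filled from v
```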
The model of the mobile setup is available to perform the contouring from different views of the object. Thus, occlusions are avoided, and small details are detected. Also, the vision parameters are obtained, and physical measurements on the setup are avoided. Thus, the contouring is performed automatically by the model of the mobile setup.
In the arrangement of Fig. 1, the object is moved along the x axis in steps of 1.27 mm. This device can be moved in steps as small as 0.0127 mm along the x axis, y axis, and z axis. A laser line is projected on the target by a 15-mW laser diode to perform the scanning. The laser line is captured by a CCD camera and digitized by a frame grabber with 256 gray levels. The displacement of the laser line is computed based on the maximum intensity. The resolution in the x direction is deduced by detecting the laser line in two different positions. To do so, the laser line is moved 127.00 mm away from the initial position by means of the electromechanical device. The number of pixels between these two positions is then 328.324. Thus, the resolution on the x axis is computed by the relationship resolution = (pixel number)/distance. The resolution in the y direction is obtained by detecting the object on the laser line at two different positions on the y axis. To do so, the object is moved 95.00 mm away from the initial position on the y axis. Then the maximum displacement of the laser line is detected in each movement. The resolution in the z direction is provided by the displacement of the laser line along the x axis. The position of the laser line is measured with a resolution of a fraction of a pixel. Also, the displacement in Eq. (1) is achieved with a resolution of a fraction of a pixel.
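From the figures quoted above, the x-axis resolution works out as a one-line calculation:

```python
# Resolution on the x axis: 328.324 pixels correspond to a 127.00 mm
# displacement of the laser line (values quoted in the text).
pixels = 328.324
distance_mm = 127.00
resolution = pixels / distance_mm   # pixels per millimetre
print(round(resolution, 3))         # -> 2.585
```

That is, each millimetre of lateral displacement moves the laser line by roughly 2.6 pixels in the image, and the sub-pixel peak detection refines this further.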
The experiment was performed with one object. The object to be profiled was the bust of Pushkin shown in Fig. 3(a). An occlusion appeared due to the surface variation. This occlusion was detected and recovered by the mobile setup; thus, the contouring was performed completely. To do so, the object was scanned along the x axis in steps of 1.27 mm. The data produced by the algorithm generate the complete object shape. The result of the reconstruction of the object is shown in Fig. 4.
Fig. 4 Three-dimensional shape of the object.
A technique of contouring performed by a model of a mobile setup has been presented. The technique described here provides a valuable tool for industrial inspection and reverse engineering. The automatic process avoids physical measurements on the setup, which are common in methods of laser line projection. This procedure improves the accuracy of the measurement, because measurement errors are not passed to the contouring system. This step is achieved with few operations. By using this computational-optical setup, good repeatability has been achieved in each experiment.
1. F. Remondino and S. El-Hakim, Image-based 3D modelling: A review, Photogramm. Rec. 21(115), 269–291, 2006.
2. H. Y. Lin and M. Subbarao, Vision system for fast 3-D model reconstruction, Opt. Eng. 43(7), 1651–1664, 2004.
3. L. M. Song and D. N. Wang, A novel grating matching method for 3D reconstruction, NDT & E Int. 39, 282–288, 2006.
4. L. Zagorchev and A. Goshtasby, A paintbrush laser range scanner, Comput. Vis. Image Underst. 10, 65–86, 2006.