WO2023272902A1 - Binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection - Google Patents

Binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection

Info

Publication number
WO2023272902A1
WO2023272902A1 PCT/CN2021/113251
Authority
WO
WIPO (PCT)
Prior art keywords
frequency
phase
camera
dimensional
fringe
Prior art date
Application number
PCT/CN2021/113251
Other languages
English (en)
French (fr)
Inventor
左超
钱佳铭
陈钱
冯世杰
尹维
李艺璇
苗鈈元
杨苏皖
Original Assignee
Nanjing University of Science and Technology (南京理工大学)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology (南京理工大学)
Publication of WO2023272902A1 publication Critical patent/WO2023272902A1/zh

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254Projection of a pattern, viewing through a pattern, e.g. moiré

Definitions

  • The invention belongs to the technical field of optical measurement, and in particular relates to a binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection.
  • Fringe projection profilometry is one of the most widely used 3D imaging techniques [J. Qian, S. Feng, T. Tao, Y. Hu, K. Liu, S. Wu, Q. Chen, and C. Zuo, "High-resolution real-time 360° 3D model reconstruction of a handheld object with fringe projection profilometry," Opt. Lett. 44, 5751–5754 (2019).].
  • As applications such as fast reverse modeling, online quality inspection, and intelligent manufacturing place ever higher speed demands on 3D sensing technology, reconstructing the 3D information of objects with high speed and high precision has become a current research and engineering hotspot.
  • Phase unwrapping is a key step in fringe projection profilometry [J. Qian, T. Tao, S. Feng, Q. Chen, and C. Zuo, "Motion-artifact-free dynamic 3D shape measurement with hybrid Fourier-transform phase-shifting profilometry," Opt. Express 27, 2713–2731 (2019).].
  • The traditional approach is temporal phase unwrapping [C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, "Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review," Opt. Lasers Eng. 85, 84–103 (2016).],
  • which determines the fringe order pixel by pixel from the unique intensity distributions of fringe images captured at different times.
  • However, this method must project additional auxiliary fringe images of different frequencies (usually at least nine three-step phase-shifted fringe images at three different frequencies), which reduces the efficiency of phase unwrapping and increases the sensitivity of the 3D reconstruction algorithm to object motion.
  • The purpose of the present invention is to provide a binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection.
  • The technical solution realizing this purpose is a binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection, with the following specific steps:
  • Step 1 Build a binocular fringe projection profilometry system and complete the calibration of the system;
  • Step 2 Use the binocular fringe projection profilometry system to project 6 dual-frequency three-step phase-shifted fringe images, and use the stereo phase unwrapping method to unambiguously unwrap the low-frequency phase and recover the three-dimensional shape of the object;
  • Step 3 According to the obtained depth information, use the adaptive depth constraint to achieve unambiguous phase unwrapping through fringe images of a single frequency, and recover the three-dimensional shape of the object.
  • The binocular fringe projection profilometry system comprises one projector and two cameras placed symmetrically about the projector, connected to the projector by two trigger lines; the Zhang calibration algorithm calibrates the whole system into a unified world coordinate system, yielding the intrinsic and extrinsic parameters of the 2 cameras and 1 projector, which are converted into 3D-to-2D mapping parameters.
  • The binocular fringe projection profilometry system projects 6 dual-frequency three-step phase-shifted fringe images,
  • and the stereo phase unwrapping method unambiguously unwraps the low-frequency phase.
  • The specific method of recovering the three-dimensional shape of the object is as follows:
  • Step 2.1 Use the projector to project three high-frequency and three low-frequency three-step phase-shifted fringe images onto the measured object; the two cameras synchronously capture the projected fringe images, and the wrapped phases are obtained from the captured images;
  • Step 2.2 Determine the 3D candidate points corresponding to any pixel in camera 1 from the mapping parameters;
  • Step 2.3 Find the 2D candidate points in camera 2, and obtain the matching point in camera 2 through the phase similarity measure;
  • Step 2.4 Obtain the absolute phase of the object from the matched candidate point;
  • Step 2.5 Perform steps 2.1 to 2.4 for every pixel of camera 1 in parallel on the GPU to obtain the low-frequency absolute phase of the measured object from camera 1's viewpoint, and reconstruct the three-dimensional shape of the object through the calibration parameters obtained in step 1.
  • The wrapped phases are obtained by applying the three-step phase-shift relation φ = arctan[√3 (I_2 − I_3) / (2 I_1 − I_2 − I_3)] to the high-frequency and low-frequency image sets respectively.
  • The specific method of finding the 2D candidate points in camera 2 and obtaining the matching point in camera 2 through the phase similarity measure is as follows:
  • the mapping parameters obtained in step 1 project the 3D candidate points determined in step 2.2 into camera 2, yielding the corresponding 2D candidate points;
  • a phase consistency check is performed on the low-frequency and high-frequency wrapped phases, and the 2D candidate point whose high- and low-frequency wrapped-phase similarities both exceed the set threshold is selected as the matching point.
  • The absolute phase of the object is then calculated as Φ = φ + 2πk_i, where k_i is the index (fringe order) of the matched candidate point and φ is its low-frequency wrapped phase.
  • The adaptive depth constraint achieves unambiguous phase unwrapping through fringe images of a single frequency; the specific method of recovering the three-dimensional shape of the object is as follows:
  • Step 3.1 Use the projector to project 3 high-frequency or low-frequency three-step phase-shifted fringe images onto the measured object; the two cameras synchronously capture the projected fringe images, and the high-frequency wrapped phase is obtained;
  • Step 3.2 Find the 3D candidate points corresponding to any pixel in camera 1, and use the adaptive depth constraint to eliminate wrong candidate points;
  • Step 3.3 Find the 2D candidate points in camera 2, and obtain the high-frequency matching point in camera 2 through the phase similarity measure;
  • Step 3.4 Obtain the absolute phase of the object from the matched candidate point;
  • Step 3.5 Perform steps 3.1 to 3.4 for every pixel of camera 1 in parallel on the GPU to obtain the high-frequency absolute phase of the measured object from camera 1's viewpoint, and reconstruct the three-dimensional shape of the object through the calibration parameters obtained in step 1.
  • Using the three-dimensional shape information obtained in step 2, the maximum and minimum depths within a rectangular window centred on each pixel are computed, forming a pixel-wise depth constraint range; for each pixel, 3D candidate points outside its depth constraint range are excluded.
  • Compared with the prior art, the present invention has notable advantages: it needs no additional auxiliary fringe images and achieves stable phase unwrapping through three images, raising imaging efficiency by 2/3; exploiting the complementarity of the high and low frequencies, and aided by the adaptive depth constraint, it achieves stable removal of phase ambiguity with fewer camera views, further reducing the imaging hardware required by stereo phase unwrapping methods.
  • Fig. 1 is a schematic flowchart of the steps of the binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection of the present invention.
  • Fig. 2 is a schematic diagram of three-dimensional measurement results of a dynamic scene obtained with the present invention.
  • In the binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection, the binocular stereo phase unwrapping method, guided by the phase similarity measure of the two frequencies, first unwraps the low-frequency phase unambiguously and recovers the absolute depth of the measured object; then, using fringe images of only a single (high or low) frequency and taking the depth measured at the previous moment as reference, outlier points are quickly corrected, ensuring fast, accurate, high-quality 3D topography measurement.
  • The present invention can achieve high-precision three-dimensional measurement of dynamic scenes with fewer camera views and fewer projected images, and comprises the following steps:
  • Step 1 Build the binocular fringe projection profilometry system and complete the calibration of the system, as follows:
  • The binocular fringe projection profilometry system comprises one projector and two cameras; the two cameras (camera 1 and camera 2) are placed symmetrically about the projector and connected to it by 2 trigger lines; the Zhang calibration algorithm [Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence. 22(11), 1330–1334 (2000).] then calibrates the whole system into a unified world coordinate system.
  • Step 2 Use the projector to project 6 dual-frequency three-step phase-shifted fringe images, unambiguously unwrap the low-frequency phase with the stereo phase unwrapping method, and recover the three-dimensional shape of the object, which includes depth information. The details are as follows:
  • Step 2.1 Use the projector to project six three-step phase-shifted fringe images at 2 frequencies onto the measured object: first the three high-frequency images, then the three low-frequency images.
  • The two cameras (camera 1 and camera 2) synchronously capture the projected fringe images, and the wrapped phases are obtained from the captured images:
  • the fringe images captured by camera 1 are I_n = A + B cos(Φ − 2π(n−1)/3), n ∈ [1,3], at each of the two frequencies.
  • Step 2.2 Determine the 3D candidate points corresponding to any pixel in camera 1 from the mapping parameters;
  • Step 2.3 Find the 2D candidate points in camera 2, and obtain the matching point in camera 2 through the phase similarity measure;
  • the number of remaining 3D candidate points for one pixel of camera 1 is L1 (0 < L1 < l), and the L1 3D candidates are projected into camera 2 through the 3D-to-2D mapping parameters obtained in step 1,
  • yielding the corresponding L1 2D candidate points.
  • The correct matching point can be found by performing a phase consistency check on the low-frequency phase.
  • A phase consistency check is also performed on the high-frequency wrapped phase, and the 2D candidate point whose high- and low-frequency wrapped phases both agree within 0.6 rad is selected as the matching point.
  • Step 2.4 Obtain the absolute phase and 3D information of the object;
  • the unique correct candidate point can now be confirmed, and its index k_i is the fringe order of the pixel; finally, the low-frequency absolute phase of the pixel in camera 1 is obtained as Φ = φ + 2πk_i.
  • Performing the above operations on every pixel of camera 1 in parallel on the GPU yields the low-frequency absolute phase of the measured object from camera 1's viewpoint. Finally, the three-dimensional shape of the object can be reconstructed through the calibration parameters obtained in step 1 [K. Liu, Y. Wang, D. L. Lau, et al., "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Optics Express. 18(5): 5229–5244 (2010).].
  • Phase unwrapping is a key step in fringe projection profilometry.
  • The traditional method is temporal phase unwrapping, which must project nine three-step phase-shifted fringe images at no fewer than three frequencies, greatly limiting the efficiency of phase unwrapping.
  • The present invention adopts the stereo phase unwrapping method based on geometric constraints: the first reconstruction needs fringe images at only two frequencies, and the adaptive depth constraint of step 3 then reduces the images required for reconstruction to a single fringe frequency, raising the efficiency of phase unwrapping by 2/3.
  • Step 3 According to the obtained depth information, use the adaptive depth constraint to achieve unambiguous phase unwrapping through fringe images of a single frequency (high or low; high frequency is used in the description below), and recover the three-dimensional shape of the object. The details are as follows:
  • Step 3.1 Use the projector to project 3 high-frequency three-step phase-shifted fringe images onto the measured object; the two cameras (camera 1 and camera 2) synchronously capture the projected fringe images.
  • Step 3.2 Find the h 3D candidate points corresponding to any pixel in camera 1, and use the adaptive depth constraint to exclude wrong candidate points:
  • using the three-dimensional shape information obtained in step 2, compute the maximum and minimum depths within a 5×5 rectangular window centred on each pixel, forming a pixel-wise depth constraint range; for each pixel, exclude the 3D candidate points outside its depth constraint range.
  • Step 3.3 Find the 2D candidate points in camera 2, and obtain the high-frequency matching point in camera 2 through the phase similarity measure;
  • the number of remaining 3D candidate points is H1 (0 < H1 < h), and the H1 3D candidates are projected into camera 2 through the 3D-to-2D mapping parameters obtained in step 1, yielding the corresponding H1 2D candidate points.
  • The correct matching point can be found by performing a phase consistency check on the high-frequency phase.
  • The adaptive depth constraint of step 3.2 eliminates most incorrect candidate points, so a single round of the phase similarity check suffices to obtain the correct matching point in camera 2.
  • Step 3.4 Obtain the absolute phase of the object;
  • the unique correct candidate point can now be confirmed, and its index k_{i_3} is the fringe order of the pixel, where the subscript 3 distinguishes it from the series of parameters obtained from the 6 images in step 2; finally, the high-frequency absolute phase of the pixel in camera 1 is obtained as Φ = φ + 2πk_{i_3}.
  • Step 3.5 Perform the above operations on every pixel of camera 1 in parallel on the GPU to obtain the high-frequency absolute phase of the object from camera 1's viewpoint. Finally, the three-dimensional shape of the object can be reconstructed through the calibration parameters obtained in step 1 [K. Liu, Y. Wang, D. L. Lau, et al., "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Optics Express. 18(5): 5229–5244 (2010).].
  • In practical use, the projector can cyclically play the 3 high-frequency and 3 low-frequency three-step phase-shifted fringe images to drive the computations of steps 2 and 3 and recover the three-dimensional shape of the object. That is, while step 3 is performed, the projector projects the 3 high-frequency and 3 low-frequency images in turn, but the invention can compute on only the 3 high-frequency images or only the 3 low-frequency images and recover the 3D image; in essence this is a single-frequency 3D reconstruction method based on fringe projection.
  • By projecting two sets of fringe images of high and low frequency, the present invention first uses the binocular stereo phase unwrapping method, guided by the phase similarity measure of the two frequencies, to unwrap the high-frequency phase unambiguously and recover the absolute depth of the measured object; then, using fringe images of only a single (high or low) frequency and taking the depth measured at the previous moment as reference, outlier points are quickly corrected, ensuring fast, accurate, high-quality 3D topography measurement.
  • This method achieves high-precision three-dimensional measurement of dynamic scenes with fewer camera views and fewer projected images.
  • Two grayscale cameras (model acA640-750um, Basler) and one projector (model LightCrafter 4500, TI) were used to build the fringe projection profilometry system.
  • The two cameras (camera 1 and camera 2) are placed symmetrically about the projector and connected to it by 2 trigger lines.
  • The invention was used to measure a continuous dynamic process; during the measurement, the projection speed of the projector was 100 Hz, and the high and low fringe frequencies were 48 and 19 respectively.
  • The measured 3D results are shown in Fig. 2; the final 3D imaging speed is 45 Hz, enabling real-time 3D imaging. Fig. 2 shows that the present invention can recover the three-dimensional shape of the dynamic scene with high quality at high speed.


Abstract

A binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection. By projecting two sets of fringe images of high and low frequency, the binocular stereo phase unwrapping method, guided by the phase similarity measure of the two frequencies, unambiguously unwraps the low-frequency phase and recovers the absolute depth of the measured object; then, using fringe images of only a single frequency and taking the depth measured at the previous moment as reference, outlier points are quickly corrected, ensuring fast, accurate, high-quality three-dimensional topography measurement. No additional auxiliary fringe images need to be projected; stable phase unwrapping is achieved through three images, raising imaging efficiency by 2/3; exploiting the complementarity of the high and low frequencies, high-precision three-dimensional measurement of dynamic scenes is achieved with fewer camera views.

Description

Binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection

Technical Field
The invention belongs to the technical field of optical measurement, and specifically relates to a binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection.
Background
Fringe projection profilometry (FPP) is one of the most widely used 3D imaging techniques [J. Qian, S. Feng, T. Tao, Y. Hu, K. Liu, S. Wu, Q. Chen, and C. Zuo, "High-resolution real-time 360° 3D model reconstruction of a handheld object with fringe projection profilometry," Opt. Lett. 44, 5751–5754 (2019).]. As applications such as fast reverse modeling, online quality inspection, and intelligent manufacturing place ever higher speed demands on 3D sensing technology, reconstructing the 3D information of objects with high speed and high precision has become a current research and engineering hotspot.
Phase unwrapping is a key step in fringe projection profilometry [J. Qian, T. Tao, S. Feng, Q. Chen, and C. Zuo, "Motion-artifact-free dynamic 3D shape measurement with hybrid Fourier-transform phase-shifting profilometry," Opt. Express 27, 2713–2731 (2019).]. The traditional phase unwrapping method is temporal phase unwrapping [C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, "Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review," Opt. Lasers Eng. 85, 84–103 (2016).], which determines the fringe order pixel by pixel from the unique intensity distributions of fringe images captured at different times. However, because this method must project additional auxiliary fringe images of different frequencies (usually at least nine three-step phase-shifted fringe images at three different frequencies), it reduces the efficiency of phase unwrapping and increases the sensitivity of the 3D reconstruction algorithm to object motion, making it unsuitable for measuring fast-moving scenes. The stereo phase unwrapping method based on geometric constraints [T. Weise, B. Leibe, and L. Van Gool, "Fast 3D scanning with automatic motion compensation," in 2007 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.] can resolve the phase ambiguity through the spatial relationship between multiple cameras and one projector, without projecting any auxiliary patterns. Although it requires more cameras than the traditional method (at least two), stereo phase unwrapping does maximize the efficiency of FPP. However, the traditional stereo phase unwrapping method struggles to unwrap the wrapped phase stably, which usually requires the assistance of more camera views (four) [T. Tao, Q. Chen, S. Feng, Y. Hu, M. Zhang, and C. Zuo, "High-precision real-time 3D shape measurement based on a quad-camera system," J. Opt. 20, 014009 (2017).], but such an arrangement further increases hardware cost.
Summary of the Invention
The purpose of the present invention is to provide a binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection.
The technical solution realizing the purpose of the present invention is a binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection, with the following specific steps:
Step 1: build the binocular fringe projection profilometry system and complete the calibration of the system;
Step 2: use the binocular fringe projection profilometry system to project 6 dual-frequency three-step phase-shifted fringe images, unambiguously unwrap the low-frequency phase with the stereo phase unwrapping method, and recover the three-dimensional shape of the object;
Step 3: according to the obtained depth information, use the adaptive depth constraint to achieve unambiguous phase unwrapping through fringe images of a single frequency, and recover the three-dimensional shape of the object.
Preferably, the binocular fringe projection profilometry system comprises one projector and two cameras, the two cameras placed symmetrically about the projector, with the projector and cameras connected by 2 trigger lines; the Zhang calibration algorithm is used to calibrate the whole system into a unified world coordinate system, yielding the intrinsic and extrinsic parameters of the 2 cameras and 1 projector, which are converted into 3D-to-2D mapping parameters.
Preferably, the specific method of projecting 6 dual-frequency three-step phase-shifted fringe images with the binocular fringe projection profilometry system, unambiguously unwrapping the low-frequency phase with the stereo phase unwrapping method, and recovering the three-dimensional shape of the object is:
Step 2.1: use the projector to project three high-frequency and three low-frequency three-step phase-shifted fringe images onto the measured object, the two cameras synchronously capturing the projected fringe images, and obtain the wrapped phases from the captured images;
Step 2.2: determine the 3D candidate points corresponding to any pixel in camera 1 from the mapping parameters;
Step 2.3: find the 2D candidate points in camera 2, and obtain the matching point in camera 2 through the phase similarity measure;
Step 2.4: obtain the absolute phase of the object from the matched candidate point;
Step 2.5: perform steps 2.1 to 2.4 for every pixel of camera 1 in parallel on the GPU to obtain the low-frequency absolute phase of the measured object from camera 1's viewpoint, and reconstruct the three-dimensional shape of the object through the calibration parameters obtained in step 1.
Preferably, the wrapped phases are obtained by the specific formulas:

φ^{c1,h} = arctan[ √3 (I_2^{c1,h} − I_3^{c1,h}) / (2 I_1^{c1,h} − I_2^{c1,h} − I_3^{c1,h}) ]

φ^{c1,l} = arctan[ √3 (I_2^{c1,l} − I_3^{c1,l}) / (2 I_1^{c1,l} − I_2^{c1,l} − I_3^{c1,l}) ]

where φ^{c1,h} and φ^{c1,l} denote the wrapped phases of the high-frequency and low-frequency fringe images respectively, I_n^{c1,h} denotes the n-th of the high-frequency three-step phase-shifted fringe images captured by camera 1, n ∈ [1,3], the superscript c1 denoting camera 1 and h the high frequency, and I_n^{c1,l} denotes the n-th of the low-frequency three-step phase-shifted fringe images captured by camera 1, l denoting the low frequency.
Preferably, the specific method of finding the 2D candidate points in camera 2 and obtaining the matching point in camera 2 through the phase similarity measure is:
projecting the 3D candidate points determined in step 2.2 into camera 2 through the mapping parameters obtained in step 1 to obtain the corresponding 2D candidate points;
performing a phase consistency check on the low-frequency and high-frequency wrapped phases, and selecting as the matching point the 2D candidate point whose high- and low-frequency wrapped-phase similarities both exceed the set threshold.
Preferably, the absolute phase of the object is calculated by the formula:

Φ^{c1,l} = φ^{c1,l} + 2π k_i

where Φ^{c1,l} is the low-frequency absolute phase of the candidate point in camera 1, k_i is the index of the candidate point (its fringe order), and φ^{c1,l} is the low-frequency wrapped phase of the point.
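As a brief illustration of the relation above (the function name is ours, not from the patent text): once the fringe order k_i of a pixel is known, unwrapping is a single elementwise operation.

```python
import numpy as np

def absolute_phase(wrapped, fringe_order):
    """Unwrap a wrapped phase given its fringe order k:
    Phi = phi + 2*pi*k (works elementwise on arrays)."""
    return wrapped + 2.0 * np.pi * np.asarray(fringe_order)
```

For a wrapped phase of 0.5 rad with fringe order 3, this returns 0.5 + 6π rad.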
Preferably, according to the obtained depth information, the specific method of using the adaptive depth constraint to achieve unambiguous phase unwrapping through single-frequency fringe images and recover the three-dimensional shape of the object is:
Step 3.1: use the projector to project 3 high-frequency or low-frequency three-step phase-shifted fringe images onto the measured object, the two cameras synchronously capturing the projected fringe images, and obtain the high-frequency wrapped phase;
Step 3.2: find the 3D candidate points corresponding to any pixel in camera 1, and use the adaptive depth constraint to exclude wrong candidate points;
Step 3.3: find the 2D candidate points in camera 2, and obtain the high-frequency matching point in camera 2 through the phase similarity measure;
Step 3.4: obtain the absolute phase of the object from the matched candidate point;
Step 3.5: perform steps 3.1 to 3.4 for every pixel of camera 1 in parallel on the GPU to obtain the high-frequency absolute phase of the measured object from camera 1's viewpoint, and reconstruct the three-dimensional shape of the object through the calibration parameters obtained in step 1.
Preferably, the specific method of finding the 3D candidate points corresponding to any pixel in camera 1 and excluding wrong candidate points with the adaptive depth constraint is:
a. using the 3D-to-2D mapping parameters obtained in step 1, reconstructing the h possible absolute phases of the high-frequency wrapped phase of any pixel (u^{c1}, v^{c1}) in camera 1 into h 3D candidate points;
b. using the three-dimensional shape information obtained in step 2, computing the maximum and minimum depths within a rectangular window centred on each pixel to form a pixel-wise depth constraint range, and, for each pixel, excluding the 3D candidate points outside its depth constraint range.
Compared with the prior art, the present invention has notable advantages: it needs no additional auxiliary fringe images and achieves stable phase unwrapping through three images, raising imaging efficiency by 2/3; exploiting the complementarity of the high and low frequencies, and aided by the adaptive depth constraint, it achieves stable removal of phase ambiguity with fewer camera views, further reducing the imaging hardware required by stereo phase unwrapping methods.
The invention is described in further detail below with reference to the drawings.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of the steps of the binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection of the present invention.
Fig. 2 is a schematic diagram of three-dimensional measurement results of a dynamic scene obtained with the present invention.
Detailed Description
In the binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection, the binocular stereo phase unwrapping method, guided by the phase similarity measure of the two frequencies, first unwraps the low-frequency phase unambiguously and recovers the absolute depth of the measured object; then, using fringe images of only a single (high or low) frequency and taking the depth measured at the previous moment as reference, outlier points are quickly corrected, ensuring fast, accurate, high-quality 3D topography measurement. The invention can achieve high-precision three-dimensional measurement of dynamic scenes with fewer camera views and fewer projected images, and comprises the following steps:
Step 1: build the binocular fringe projection profilometry system and complete the calibration of the system, as follows:
The binocular fringe projection profilometry system comprises one projector and two cameras; the two cameras (camera 1 and camera 2) are placed symmetrically about the projector and connected to it by 2 trigger lines. The Zhang calibration algorithm [Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence. 22(11), 1330–1334 (2000).] then calibrates the whole system into a unified world coordinate system, yielding the intrinsic and extrinsic parameters of the 2 cameras and 1 projector, which are converted into 3D-to-2D mapping parameters [K. Liu, Y. Wang, D. L. Lau, et al., "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Optics Express. 18(5): 5229–5244 (2010).].
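For illustration only (the function names are ours, and lens distortion is ignored), the 3D-to-2D mapping of a calibrated device can be sketched as an ideal pinhole projection P = K[R|t] built from the intrinsic and extrinsic parameters:

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 projection matrix P = K [R | t] from intrinsics K,
    rotation R, and translation t of a calibrated camera/projector."""
    return K @ np.hstack([R, np.reshape(t, (3, 1))])

def project_point(P, X):
    """Map a 3D world point X to 2D pixel coordinates (u, v)."""
    x = P @ np.append(X, 1.0)   # homogeneous projection
    return x[:2] / x[2]         # perspective division
```

With identity intrinsics and extrinsics, the point (1, 2, 4) projects to (0.25, 0.5), i.e. plain perspective division by depth.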
Step 2: use the projector to project 6 dual-frequency three-step phase-shifted fringe images, unambiguously unwrap the low-frequency phase with the stereo phase unwrapping method, and recover the three-dimensional shape of the object, which includes depth information. The details are as follows:
Step 2.1: use the projector to project six three-step phase-shifted fringe images at 2 frequencies onto the measured object, first the three high-frequency images and then the three low-frequency images; the two cameras (camera 1 and camera 2) synchronously capture the projected fringe images, and the wrapped phases are obtained from the captured images:
The fringe images captured by camera 1 are:
I_n^{c1,h}(u^{c1,h}, v^{c1,h}) = A^{c1,h}(u^{c1,h}, v^{c1,h}) + B^{c1,h}(u^{c1,h}, v^{c1,h}) cos(Φ^{c1,h}(u^{c1,h}, v^{c1,h}) − 2π(n−1)/3)   (1)

I_n^{c1,l}(u^{c1,l}, v^{c1,l}) = A^{c1,l}(u^{c1,l}, v^{c1,l}) + B^{c1,l}(u^{c1,l}, v^{c1,l}) cos(Φ^{c1,l}(u^{c1,l}, v^{c1,l}) − 2π(n−1)/3)   (2)

where I_n^{c1,h} denotes the n-th of the high-frequency three-step phase-shifted fringe images captured by camera 1, n ∈ [1,3], the superscript c1 denoting camera 1 and h the high frequency; (u^{c1,h}, v^{c1,h}) are the pixel coordinates of the high-frequency fringe image; A^{c1,h} and B^{c1,h} denote its average intensity and modulation; and Φ^{c1,h} denotes the high-frequency phase. I_n^{c1,l} denotes the n-th of the low-frequency three-step phase-shifted fringe images captured by camera 1, l denoting the low frequency; (u^{c1,l}, v^{c1,l}) are the pixel coordinates of the low-frequency fringe image; A^{c1,l} and B^{c1,l} denote its average intensity and modulation; and Φ^{c1,l} denotes the low-frequency phase.
The high- and low-frequency wrapped phases are obtained by the specific formulas:

φ^{c1,h} = arctan[ √3 (I_2^{c1,h} − I_3^{c1,h}) / (2 I_1^{c1,h} − I_2^{c1,h} − I_3^{c1,h}) ]   (3)

φ^{c1,l} = arctan[ √3 (I_2^{c1,l} − I_3^{c1,l}) / (2 I_1^{c1,l} − I_2^{c1,l} − I_3^{c1,l}) ]   (4)

where φ^{c1,h} and φ^{c1,l} denote the high- and low-frequency wrapped phases respectively.
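The three-step wrapped-phase computation described above can be sketched as follows (illustrative only; `arctan2` is used so the phase is recovered over the full (−π, π] range):

```python
import numpy as np

def wrapped_phase_3step(i1, i2, i3):
    """Wrapped phase from three fringe images shifted by 2*pi/3,
    assuming the intensity model I_n = A + B*cos(Phi - 2*pi*(n-1)/3)."""
    return np.arctan2(np.sqrt(3.0) * (i2 - i3), 2.0 * i1 - i2 - i3)
```

Synthesizing the three images for a known phase recovers that phase exactly, independent of the average intensity A and modulation B.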
Step 2.2: determine the 3D candidate points corresponding to any pixel in camera 1 from the mapping parameters;
For any pixel (u^{c1}, v^{c1}) in camera 1, the low-frequency wrapped phase has l possible absolute phases. Using the 3D-to-2D mapping parameters obtained in step 1, these possible absolute phases are reconstructed into l 3D candidate points; suppose these l candidates are indexed k_i, i = 0, 1, 2, ..., l−1. A preset depth constraint range of [−200, 200] then excludes some erroneous 3D candidate points lying outside it.
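The per-pixel candidate generation can be sketched as follows (illustrative; the triangulation from absolute phase to a 3D point depends on the calibrated mapping parameters, so only the phase enumeration and the depth test are shown, with names of our choosing):

```python
import numpy as np

def candidate_absolute_phases(phi_wrapped, num_periods):
    """All possible absolute phases of one pixel, one per fringe
    order k_i: Phi_k = phi + 2*pi*k, k = 0 .. num_periods - 1."""
    k = np.arange(num_periods)
    return phi_wrapped + 2.0 * np.pi * k

def prune_by_depth(points_z, z_min=-200.0, z_max=200.0):
    """Boolean mask keeping candidate 3D points whose depth lies
    inside the preset constraint range [z_min, z_max]."""
    z = np.asarray(points_z)
    return (z >= z_min) & (z <= z_max)
```

With the low frequency of 19 used in the embodiment, a pixel starts with 19 candidate absolute phases, spaced 2π apart, before any pruning.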
Step 2.3: find the 2D candidate points in camera 2, and obtain the matching point in camera 2 through the phase similarity measure;
The number of remaining 3D candidate points for the pixel in camera 1 is L_1 (0 < L_1 < l); through the 3D-to-2D mapping parameters obtained in step 1, the L_1 3D candidates are projected into camera 2 to obtain the corresponding L_1 2D candidate points. Among these 2D candidates there must be one correct matching point, and its low-frequency wrapped phase should be similar to that of (u^{c1}, v^{c1}) in camera 1; exploiting this property, the correct match can be found by a phase consistency check on the low-frequency phase. However, owing to environmental noise, system errors, and other factors, this assumption may fail, and the low-frequency wrapped phase of some wrong candidates may be closer to that of (u^{c1}, v^{c1}). Therefore a phase consistency check is also performed on the high-frequency wrapped phase, and the 2D candidate point whose high- and low-frequency wrapped phases both agree to within 0.6 rad is selected as the matching point.
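A sketch of the two-round phase consistency check (our reading: "similarity within 0.6 rad" is taken as a wrap-aware phase difference below 0.6 rad; the function names are ours):

```python
import numpy as np

def phase_distance(a, b):
    """Wrap-aware distance between two wrapped phases, in radians."""
    d = np.abs(a - b) % (2.0 * np.pi)
    return np.minimum(d, 2.0 * np.pi - d)

def select_match(ref_low, ref_high, cand_low, cand_high, tol=0.6):
    """Return the index of the candidate whose low- AND high-frequency
    wrapped phases both agree with the reference pixel within tol,
    or None if no candidate passes both checks."""
    for i, (pl, ph) in enumerate(zip(cand_low, cand_high)):
        if phase_distance(ref_low, pl) < tol and phase_distance(ref_high, ph) < tol:
            return i
    return None
```

Requiring agreement at both frequencies is what rejects candidates that pass the low-frequency check only by coincidence.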
Step 2.4: obtain the absolute phase and 3D information of the object;
After the two rounds of phase consistency checks of step 2.3, the unique correct candidate point can be confirmed; its index k_i is the fringe order of the pixel (u^{c1}, v^{c1}). Finally, the low-frequency absolute phase Φ^{c1,l} of the point in camera 1 is obtained by:

Φ^{c1,l} = φ^{c1,l} + 2π k_i   (5)

Performing the above operations on every pixel of camera 1 in parallel on the GPU yields the low-frequency absolute phase of the measured object from camera 1's viewpoint. Finally, the three-dimensional shape of the object can be reconstructed through the calibration parameters obtained in step 1 [K. Liu, Y. Wang, D. L. Lau, et al., "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Optics Express. 18(5): 5229–5244 (2010).].
Phase unwrapping is a key step in fringe projection profilometry. The traditional method is temporal phase unwrapping, which must project nine three-step phase-shifted fringe images at no fewer than 3 frequencies, greatly limiting the efficiency of phase unwrapping. The present invention adopts the stereo phase unwrapping method based on geometric constraints: the first reconstruction needs fringe images at only 2 frequencies, and the adaptive depth constraint of step 3 then reduces the images required for reconstruction to a single fringe frequency, raising the efficiency of phase unwrapping by 2/3.
Step 3: according to the obtained depth information, use the adaptive depth constraint to achieve unambiguous phase unwrapping through fringe images of a single frequency (high or low; high frequency is used in the description below), and recover the three-dimensional shape of the object. The details are as follows:
Step 3.1: use the projector to project 3 high-frequency three-step phase-shifted fringe images onto the measured object; the two cameras (camera 1 and camera 2) synchronously capture the projected fringe images, and the high-frequency wrapped phase is obtained from the captured images by formula (3);
Step 3.2: find the h 3D candidate points corresponding to any pixel in camera 1, and use the adaptive depth constraint to exclude wrong candidate points:
a. for any pixel (u^{c1}, v^{c1}) in camera 1, the high-frequency wrapped phase has h possible absolute phases; using the 3D-to-2D mapping parameters obtained in step 1, these possible absolute phases are reconstructed into h 3D candidate points;
b. using the three-dimensional shape information obtained in step 2, compute the maximum and minimum depths within a 5×5 rectangular window centred on each pixel, forming a pixel-wise depth constraint range; for each pixel, exclude the 3D candidate points outside its depth constraint range.
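The per-pixel adaptive depth constraint of step 3.2(b) can be sketched with a 5×5 min/max window over the previous frame's depth map (illustrative; edge pixels use replicated padding, a choice the patent does not specify):

```python
import numpy as np

def adaptive_depth_range(depth, win=5):
    """Per-pixel depth bounds: the minimum and maximum depth inside a
    win x win window centred on each pixel of a float depth map."""
    r = win // 2
    padded = np.pad(depth, r, mode="edge")   # replicate borders
    h, w = depth.shape
    z_min = np.full((h, w), np.inf)
    z_max = np.full((h, w), -np.inf)
    for dy in range(win):                    # slide the window by offsets
        for dx in range(win):
            block = padded[dy:dy + h, dx:dx + w]
            z_min = np.minimum(z_min, block)
            z_max = np.maximum(z_max, block)
    return z_min, z_max
```

In a real-time pipeline the same computation would run on the GPU; a sliding min/max filter (e.g. `scipy.ndimage.minimum_filter`) is an equivalent vectorized form.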
Step 3.3: find the 2D candidate points in camera 2, and obtain the high-frequency matching point in camera 2 through the phase similarity measure;
The number of remaining 3D candidate points is H_1 (0 < H_1 < h); through the 3D-to-2D mapping parameters obtained in step 1, the H_1 3D candidates are projected into camera 2 to obtain the corresponding H_1 2D candidate points. Among these 2D candidates there must be one correct matching point, and its high-frequency wrapped phase should be similar to that of (u^{c1}, v^{c1}) in camera 1; exploiting this property, the correct match can be found by a phase consistency check on the high-frequency phase. Although environmental noise and system errors are still present, the adaptive depth constraint of step 3.2 has already excluded most incorrect candidates, so a single round of the phase similarity check suffices to obtain the correct matching point in camera 2.
Step 3.4: obtain the absolute phase of the object;
After the phase consistency check of step 3.3, the unique correct candidate point can be confirmed; its index k_{i_3} is the fringe order of the pixel (u^{c1}, v^{c1}), where the subscript 3 distinguishes it from the series of parameters obtained from the 6 images in step 2. Finally, the high-frequency absolute phase Φ^{c1,h} of the point in camera 1 is obtained by:

Φ^{c1,h} = φ^{c1,h} + 2π k_{i_3}   (6)

Step 3.5: performing the above operations on every pixel of camera 1 in parallel on the GPU yields the high-frequency absolute phase of the measured object from camera 1's viewpoint. Finally, the three-dimensional shape of the object can be reconstructed through the calibration parameters obtained in step 1 [K. Liu, Y. Wang, D. L. Lau, et al., "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Optics Express. 18(5): 5229–5244 (2010).].
In practical use, the projector can cyclically play the 3 high-frequency and 3 low-frequency three-step phase-shifted fringe images to drive the computations of steps 2 and 3 and recover the three-dimensional shape of the object. That is, while step 3 is performed, the projector projects the 3 high-frequency and 3 low-frequency images in turn, but the invention can compute on only the 3 high-frequency images or only the 3 low-frequency images and recover the 3D image; in essence this is a single-frequency 3D reconstruction method based on fringe projection.
By projecting two sets of fringe images of high and low frequency, the present invention first uses the binocular stereo phase unwrapping method, guided by the phase similarity measure of the two frequencies, to unwrap the high-frequency phase unambiguously and recover the absolute depth of the measured object; then, using fringe images of only a single (high or low) frequency and taking the depth measured at the previous moment as reference, outlier points are quickly corrected, ensuring fast, accurate, high-quality 3D topography measurement. Compared with traditional multi-view 3D surface-shape measurement methods based on fringe projection, this method achieves high-precision 3D measurement of dynamic scenes with fewer camera views and fewer projected images.
Embodiment:
To verify the effectiveness of the method of the present invention, a fringe projection profilometry system was built from 2 grayscale cameras (model acA640-750um, Basler) and one projector (model LightCrafter 4500, TI). The two cameras (camera 1 and camera 2) are placed symmetrically about the projector and connected to it by 2 trigger lines. The invention was used to measure a continuous dynamic process; during the measurement, the projection speed of the projector was 100 Hz, and the high and low fringe frequencies were 48 and 19 respectively. The measured 3D results are shown in Fig. 2; the final 3D imaging speed is 45 Hz, enabling real-time 3D imaging. Fig. 2 shows that the present invention can recover the three-dimensional shape of the dynamic scene with high quality at high speed.

Claims (8)

  1. A binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection, characterized in that the specific steps are:
    Step 1: build a binocular fringe projection profilometry system and complete the calibration of the system;
    Step 2: use the binocular fringe projection profilometry system to project 6 dual-frequency three-step phase-shifted fringe images, use the stereo phase unwrapping method to unambiguously unwrap the low-frequency phase, and recover the three-dimensional shape of the object, the three-dimensional shape including depth information;
    Step 3: according to the obtained depth information, use the adaptive depth constraint to achieve unambiguous phase unwrapping through fringe images of a single frequency, and recover the three-dimensional shape of the object.
  2. The binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection according to claim 1, characterized in that the binocular fringe projection profilometry system comprises one projector and two cameras, the two cameras being placed symmetrically about the projector and the projector and cameras being connected by 2 trigger lines; the Zhang calibration algorithm is used to calibrate the whole system into a unified world coordinate system, obtaining the intrinsic and extrinsic parameters of the 2 cameras and 1 projector, and converting them into 3D-to-2D mapping parameters.
  3. The binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection according to claim 1, characterized in that the specific method of projecting 6 dual-frequency three-step phase-shifted fringe images with the binocular fringe projection profilometry system, unambiguously unwrapping the low-frequency phase with the stereo phase unwrapping method, and recovering the three-dimensional shape of the object is:
    Step 2.1: use the projector to project three high-frequency and three low-frequency three-step phase-shifted fringe images onto the measured object, the two cameras synchronously capturing the projected fringe images, and obtain the wrapped phases from the captured images;
    Step 2.2: determine the 3D candidate points corresponding to any pixel in camera 1 from the mapping parameters;
    Step 2.3: find the 2D candidate points in camera 2, and obtain the matching point in camera 2 through the phase similarity measure;
    Step 2.4: obtain the absolute phase of the object from the matched candidate point;
    Step 2.5: perform steps 2.1 to 2.4 for every pixel of camera 1 in parallel on the GPU to obtain the low-frequency absolute phase of the measured object from camera 1's viewpoint, and reconstruct the three-dimensional shape of the object through the calibration parameters obtained in step 1.
  4. The binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection according to claim 3, characterized in that the wrapped phases are obtained by the specific formulas:
    φ^{c1,h} = arctan[ √3 (I_2^{c1,h} − I_3^{c1,h}) / (2 I_1^{c1,h} − I_2^{c1,h} − I_3^{c1,h}) ]
    φ^{c1,l} = arctan[ √3 (I_2^{c1,l} − I_3^{c1,l}) / (2 I_1^{c1,l} − I_2^{c1,l} − I_3^{c1,l}) ]
    where φ^{c1,h} and φ^{c1,l} denote the wrapped phases of the high-frequency and low-frequency fringe images respectively, I_n^{c1,h} denotes the n-th of the high-frequency three-step phase-shifted fringe images captured by camera 1, n ∈ [1,3], the superscript c1 denoting camera 1 and h the high frequency, and I_n^{c1,l} denotes the n-th of the low-frequency three-step phase-shifted fringe images captured by camera 1, l denoting the low frequency.
  5. The binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection according to claim 3, characterized in that the specific method of finding the 2D candidate points in camera 2 and obtaining the matching point in camera 2 through the phase similarity measure is:
    projecting the 3D candidate points determined in step 2.2 into camera 2 through the mapping parameters obtained in step 1 to obtain the corresponding 2D candidate points;
    performing a phase consistency check on the low-frequency and high-frequency wrapped phases, and selecting as the matching point the 2D candidate point whose high- and low-frequency wrapped-phase similarities both exceed the set threshold.
  6. The binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection according to claim 3, characterized in that the absolute phase of the object is calculated by the formula:
    Φ^{c1,l} = φ^{c1,l} + 2π k_i
    where Φ^{c1,l} is the low-frequency absolute phase of the candidate point in camera 1, k_i is the index of the candidate point, and φ^{c1,l} is the low-frequency wrapped phase of the point.
  7. The binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection according to claim 1, characterized in that the specific method of using the adaptive depth constraint, according to the obtained depth information, to achieve unambiguous phase unwrapping through single-frequency fringe images and recover the three-dimensional shape of the object is:
    Step 3.1: use the projector to project 3 high-frequency or low-frequency three-step phase-shifted fringe images onto the measured object, the two cameras synchronously capturing the projected fringe images, and obtain the high-frequency wrapped phase;
    Step 3.2: find the 3D candidate points corresponding to any pixel in camera 1, and use the adaptive depth constraint to exclude wrong candidate points;
    Step 3.3: find the 2D candidate points in camera 2, and obtain the high-frequency matching point in camera 2 through the phase similarity measure;
    Step 3.4: obtain the absolute phase of the object from the matched candidate point;
    Step 3.5: perform steps 3.1 to 3.4 for every pixel of camera 1 in parallel on the GPU to obtain the high-frequency absolute phase of the measured object from camera 1's viewpoint, and reconstruct the three-dimensional shape of the object through the calibration parameters obtained in step 1.
  8. The binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection according to claim 7, characterized in that the specific method of finding the 3D candidate points corresponding to any pixel in camera 1 and excluding wrong candidate points with the adaptive depth constraint is:
    a. using the 3D-to-2D mapping parameters obtained in step 1, reconstructing the h possible absolute phases of the high-frequency wrapped phase of any pixel (u^{c1}, v^{c1}) in camera 1 into h 3D candidate points;
    b. using the three-dimensional shape information obtained in step 2, computing the maximum and minimum depths within a rectangular window centred on each pixel to form a pixel-wise depth constraint range, and, for each pixel, excluding the 3D candidate points outside its depth constraint range.
PCT/CN2021/113251 2021-06-30 2021-08-18 Binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection WO2023272902A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110743612.XA CN113551617B (zh) 2021-06-30 2021-06-30 Binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection
CN202110743612.X 2021-06-30

Publications (1)

Publication Number Publication Date
WO2023272902A1 (zh)

Family

ID=78102631

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113251 WO2023272902A1 (zh) 2021-06-30 2021-08-18 Binocular dual-frequency complementary three-dimensional surface-shape measurement method based on fringe projection

Country Status (2)

Country Link
CN (1) CN113551617B (zh)
WO (1) WO2023272902A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117333649A * 2023-10-25 2024-01-02 Tianjin University (天津大学) Optimization method for high-frequency line-scanning dense point clouds under dynamic disturbance

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102322822A (zh) * 2011-08-08 2012-01-18 Xi'an Jiaotong University (西安交通大学) Three-frequency color fringe projection three-dimensional measurement method
US20120092463A1 (en) * 2010-08-06 2012-04-19 University Of Kentucky Research Foundation (Ukrf) Dual-frequency Phase Multiplexing (DFPM) and Period Coded Phase Measuring (PCPM) Pattern Strategies in 3-D Structured Light Systems, and Lookup Table (LUT) Based Data Processing
CN103759673A (zh) * 2014-01-21 2014-04-30 Nanjing University of Science and Technology (南京理工大学) Temporal phase unwrapping method based on dual-frequency three-gray-level sinusoidal grating fringe projection
CN104034285A (zh) * 2014-06-25 2014-09-10 Northwestern Polytechnical University (西北工业大学) Absolute phase unwrapping method for dual-frequency sinusoidal gratings using an integer linear programming search
CN105043301A (zh) * 2015-07-24 2015-11-11 Nanjing University of Science and Technology (南京理工大学) Grating fringe phase retrieval method for three-dimensional measurement
CN107063128A (zh) * 2016-04-29 2017-08-18 South China Normal University (华南师范大学) Dual-frequency phase-shift three-dimensional measurement method and system
CN109242895A (zh) * 2018-07-20 2019-01-18 Nanjing University of Science and Technology (南京理工大学) Adaptive depth constraint method for real-time three-dimensional measurement with a multi-camera system
CN109489585A (zh) * 2018-12-06 2019-03-19 Guangxi Normal University (广西师范大学) Three-dimensional measurement method based on improved multi-frequency fringe structured light
CN109798845A (zh) * 2019-03-25 2019-05-24 青岛小优智能科技有限公司 Method and device for improving three-dimensional reconstruction accuracy based on laser grating scanning

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2979059B1 (en) * 2013-03-27 2021-05-05 Seikowave Health Solutions Inc. Portable structured light measurement module with pattern shifting device incorporating a fixed-pattern optic for illuminating a subject-under-test
US20160094830A1 (en) * 2014-09-26 2016-03-31 Brown University System and Methods for Shape Measurement Using Dual Frequency Fringe Patterns
CN107044833B (zh) * 2017-03-24 2019-03-05 Nanjing University of Science and Technology Ultrafast three-dimensional topography measurement method and system based on improved Fourier transform profilometry
CN109540038A (zh) * 2018-09-12 2019-03-29 Tianjin University Machine-vision adaptive fill-light measurement method based on color multi-channel dual-frequency phase shifting
CN110595388B (zh) * 2019-08-28 2021-04-16 Nanjing University of Science and Technology High-dynamic real-time three-dimensional measurement method based on binocular vision
CN110672038A (zh) * 2019-09-03 2020-01-10 Anhui Agricultural University Fast three-dimensional measurement method based on dual-frequency phase-shifted fringe projection
CN112097687B (zh) * 2020-08-19 2021-11-02 Tianjin University Derivation-based separation method for superimposed phase-shifted gratings
CN112001959B (zh) * 2020-08-20 2023-06-13 Sichuan University Real-time three-dimensional surface measurement method and system with cyclic phase shifting
CN112880589B (zh) * 2021-01-18 2022-04-01 Nanchang Hangkong University Optical three-dimensional measurement method based on dual-frequency phase coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG PENGCHENG: "3D Measurement Based on Binocular Structured Light System", MASTER THESIS, TIANJIN POLYTECHNIC UNIVERSITY, CN, no. 9, 15 September 2014 (2014-09-15), CN , XP093017932, ISSN: 1674-0246 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117333649A (zh) * 2023-10-25 2024-01-02 Tianjin University Optimization method for high-frequency line-scan dense point clouds under dynamic disturbance
CN117333649B (zh) * 2023-10-25 2024-06-04 Tianjin University Optimization method for high-frequency line-scan dense point clouds under dynamic disturbance

Also Published As

Publication number Publication date
CN113551617B (zh) 2023-03-31
CN113551617A (zh) 2021-10-26

Similar Documents

Publication Publication Date Title
CN110487216B (zh) Fringe projection three-dimensional scanning method based on a convolutional neural network
CN110288642B (zh) Fast three-dimensional object reconstruction method based on a camera array
WO2021088481A1 (zh) High-precision dynamic real-time 360-degree omnidirectional point cloud acquisition method based on fringe projection
CN112053432B (zh) Binocular vision three-dimensional reconstruction method based on structured light and polarization
CN111473744B (zh) Three-dimensional topography vision measurement method and system based on speckle-embedded phase-shifted fringes
CN109754459B (zh) Method and system for constructing a three-dimensional human body model
CN108362228B (zh) Light-knife and grating hybrid three-dimensional measurement device and measurement method based on dual projectors
CN107990846B (zh) Active-passive combined depth information acquisition method based on single-frame structured light
CN112833818B (zh) Single-frame fringe projection three-dimensional surface measurement method
WO2018107427A1 (zh) Method and device for fast correspondence point matching in phase-mapping-assisted three-dimensional imaging systems
CN111947599B (zh) Learning-based three-dimensional measurement method using fringe phase retrieval and speckle correlation
CN108036740A (zh) Multi-view high-precision real-time three-dimensional color measurement system and method
CN113763540A (zh) Three-dimensional reconstruction method and device based on speckle-fringe hybrid modulation
WO2023272902A1 (zh) Binocular dual-frequency complementary three-dimensional surface measurement method based on fringe projection
CN116592792A (zh) Measurement method and system using speckle-assisted relative-phase stereo matching
CN117115336A (zh) Point cloud reconstruction method based on remote-sensing stereo images
CN114877826B (zh) Binocular stereo matching three-dimensional measurement method, system, and storage medium
Budd et al. Hierarchical shape matching for temporally consistent 3D video
CN114943755B (zh) Processing method for phase images in binocular structured-light three-dimensional reconstruction
CN114252020B (zh) Multi-station full-field fringe-pattern phase-shift-assisted speckle method for measuring large aspect-ratio gaps
Sui et al. 3D surface reconstruction using a two-step stereo matching method assisted with five projected patterns
Jiang et al. Stereo matching based on random speckle projection for dynamic 3D sensing
CN111815697B (zh) Dynamic three-dimensional measurement method for thermal deformation
CN107941147B (zh) Non-contact online measurement method for the three-dimensional coordinates of large ***
WO2024113127A1 (zh) Three-dimensional reconstruction method based on structured light and binocular vision

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21947839

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21947839

Country of ref document: EP

Kind code of ref document: A1