WO2019153569A1 - Phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system - Google Patents

Phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system

Info

Publication number
WO2019153569A1
Authority
WO
WIPO (PCT)
Prior art keywords
phase
pixel
image
camera
background image
Prior art date
Application number
PCT/CN2018/087387
Other languages
English (en)
French (fr)
Inventor
达飞鹏
饶立
Original Assignee
东南大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东南大学 filed Critical 东南大学
Publication of WO2019153569A1

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2504 Calibration devices
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254 Projection of a pattern, viewing through a pattern, e.g. moiré

Definitions

  • The invention belongs to the field of three-dimensional reconstruction in computer vision, and in particular relates to a phase error correction method for the camera-defocus phenomenon in a fringe projection three-dimensional measurement system.
  • Fringe projection profilometry (FPP) has been widely studied and applied in recent years owing to its high accuracy, high speed and low sensitivity to ambient light.
  • As an active-light projection technique, however, FPP commonly assumes that an object point on the surface of the measured object receives illumination only directly from the projection device. This assumption does not hold in many practical situations: an object point may also receive indirect illumination caused by inter-reflection, subsurface scattering and defocus. If this indirect illumination is not accounted for, the FPP system may exhibit noticeable systematic errors.
  • The invention aims to provide a phase error correction method for the camera-defocus phenomenon of a fringe projection three-dimensional measurement system: it first derives an analytical expression of the phase error caused by camera defocus, and then directly solves this phase error and corrects the phase.
  • The method places no additional hardware requirements on the measurement system and needs no extra projected fringe patterns; the correction is performed directly on the fringe patterns already affected by camera defocus.
  • Combined with the calibration parameters, the corrected phase yields a high-precision three-dimensional reconstruction.
  • To this end, the present invention adopts the following technical solution:
  • A phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system, comprising the following steps:
  • Step S1: Use the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object, and capture the N fringe patterns;
  • Step S2: From the fringe patterns captured in step S1, solve the background image I', then solve the phase φ' carrying the phase error Δφ(x_c) with the traditional phase-shift method;
  • Step S3: Perform edge extraction on the background image I' obtained in step S2;
  • Step S4: Using the edge image obtained in step S3, restore the sharp background image I_s', i.e. the background image I' as it was before defocus;
  • Step S5: From the sharp background image I_s' obtained in step S4, compute for every edge pixel the point spread function G caused by camera blur, described by the single-parameter standard deviation σ, by minimizing the image distance d = ‖I' − I_s' ∗ G‖² (∗ denotes convolution);
  • Step S6: For each pixel to be processed determined in step S5, compute the phase gradient direction by the neighborhood-average method: n̂(x_o) = Σ_{x∈W(x_o)} (φ_u(x), φ_v(x)) / ‖Σ_{x∈W(x_o)} (φ_u(x), φ_v(x))‖, where u and v are the horizontal and vertical indices of the image pixel coordinates, W(x_o) is a preset square neighborhood of width w, and φ_u and φ_v are the phase partial derivatives along the u and v directions, respectively;
  • Step S7: From the pre-defocus background image I_s', the point spread function G and the phase gradient direction n̂ obtained above, compute for each pixel to be processed the phase error caused by camera defocus: Δφ(x_o) = arctan[ Σ_{x_i} T(x_i, x_o)·sin Δ(x_i, x_o) / Σ_{x_i} T(x_i, x_o)·cos Δ(x_i, x_o) ], where Δ(x_i, x_o) is the phase difference between pixels x_i and x_o; under the near-neighborhood planarity assumption, Δ(x_i, x_o) = ρ·(v⃗ · n̂), where the vector v⃗ points from x_o to x_i and n̂ is the gradient direction of pixel x_o obtained in step S6; ρ is the phase density of the neighborhood of x_o, i.e. the phase difference of adjacent pixels along the phase gradient direction, read directly from the phase map; the summation runs over a square region of width 6σ+1, with σ computed in step S5;
  • Step S8: Obtain the corrected phase information from φ = φ' − Δφ; finally, combined with the calibration information, the three-dimensional information of the measured object is obtained.
  • In the described phase error correction method, the specific operation of using the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object in step S1 is: fix the projector and the camera according to the hardware triangulation relationship of the active-light-projection three-dimensional measurement system, and place the object to be measured, which has a complex surface texture, at a suitable position; use the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object, with the fringe gray value set to I_n^p = A + B·cos(φ + δ_n), where I_n^p is the gray value of the n-th fringe image, A and B are the fringe background intensity and the fringe modulation coefficient, respectively, φ is the designed phase value, and δ_n is the phase shift of the n-th fringe, n = 1, 2, …, N, with N the total number of phase-shift steps.
  • In the described phase error correction method, the specific method of capturing the N fringe patterns in step S1 is: first adjust the aperture, shutter speed and sensitivity of the camera so that the captured images are not saturated, i.e. the gray value of the brightest region of the image stays below 255, then capture the N fringe patterns under these camera settings. When the camera is defocused, the fringe gray value captured by the camera is I'_n(x_c) = Σ_{x_i} T(x_i, x_o)·I_n(x_i), where x_c denotes any pixel of the captured image, x_o is the pixel in the projector frame corresponding to x_c, x_i is a neighborhood pixel of x_o in the projector frame, and T(x_i, x_o) is the influence coefficient of pixel x_i on x_o, with T(x_i, x_o) = β·G(x_i, x_o)·r_i, where β is the camera gain, G(x_i, x_o) is the point spread function (PSF) caused by camera blur, and r_i is the reflectance coefficient of the object surface point corresponding to x_i.
  • In the described phase error correction method, the specific method of restoring the sharp, pre-defocus background image I_s' from the background image I' in step S4 is: for each edge pixel, search for the local maximum and minimum along the gray-gradient direction; since the maximum and the minimum lie on the two sides of the pixel, set the gray values of all pixels from each extremum position to the edge pixel position to that maximum or minimum value; performing this processing for every edge pixel yields the sharp background image I_s'.
  • Beneficial effects: the present invention addresses the problem that a conventional fringe projection three-dimensional measurement system easily suffers image blur caused by the camera's limited depth of field, which in turn causes significant phase errors, and proposes a phase error correction algorithm based on an analytical expression.
  • Compared with the existing art, the method proposed in this patent relies on no hardware beyond the measurement system itself, nor on the projection of high-frequency fringes.
  • By analyzing the influence of camera defocus on the phase quality, the analytical expression of the phase error is established. Then, combining the point spread function (PSF) of each pixel, the pre-blur background image I_s', the phase direction n̂ and the phase density ρ, the magnitude of the phase error is solved accurately, so that the phase obtained with the traditional phase-shift method is corrected directly.
  • The corrected phase, combined with the calibration information, yields the corrected three-dimensional reconstruction result.
  • The whole phase correction rests on a rigorous mathematical derivation, and the algorithm is simple to implement. It suits the common situation in conventional fringe projection three-dimensional measurement systems where the camera depth of field is small and images are often blurred, and it also suits the case where the measured object is translucent and subsurface scattering occurs.
  • Figure 1 is a flow chart of the whole process of the invention.
  • Figure 2 is a frame diagram of the fringe projection three-dimensional measurement system.
  • Figure 3 is a schematic diagram of a test object.
  • Figure 4 is a schematic diagram of the pixels that need to be processed by this patent.
  • Figure 5 is the computed pre-blur background image.
  • Figure 6 is a schematic diagram of the computation result of the blur function (PSF).
  • Figure 7 is a schematic diagram of the phase difference Δ(x_i, x_o).
  • Figure 8 is a schematic diagram of the phase error computed for a test object with this patent.
  • Figure 9 is a schematic diagram of an experimental object.
  • Figure 10 shows the three-dimensional reconstruction obtained by directly applying the conventional method.
  • Figure 11 shows the three-dimensional reconstruction obtained after phase correction with the algorithm of this patent.
  • Under the Windows operating system, MATLAB is used as the programming tool to process the computer-generated sinusoidal fringes and the fringe images captured by the CCD camera.
  • This example uses a white plane with black texture as the measured object to verify the validity of the error correction method proposed in this patent. It should be understood that the examples are intended only to illustrate the invention and not to limit its scope; after reading this disclosure, modifications of various equivalent forms by those skilled in the art fall within the scope defined by the appended claims.
  • A phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system: the algorithm flow is shown in Figure 1.
  • The block diagram of the measurement system is shown in Figure 2.
  • Step 1: Fix the projector and the camera according to the hardware triangulation relationship of the active-light-projection three-dimensional measurement system, and place the object with complex surface texture at a suitable position. Use the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object, with the fringe gray value set to I_n^p = A + B·cos(φ + δ_n), where I_n^p is the gray value of the n-th fringe image, A and B are the fringe background intensity and the fringe modulation coefficient, respectively, φ is the designed phase value, and δ_n is the phase shift of the n-th fringe, n = 1, 2, …, N, with N the total number of phase-shift steps.
  • Step 2: Set the relevant camera parameters (aperture, shutter speed and sensitivity) so that the captured images are not saturated, i.e. the gray value of the brightest region of the image stays below 255. Capture the N fringe patterns under these settings. When the camera is defocused, the fringe gray value captured by the camera is I'_n(x_c) = Σ_{x_i} T(x_i, x_o)·I_n(x_i), where x_c denotes any pixel of the captured image, x_o is the pixel in the projector frame corresponding to x_c, x_i is a neighborhood pixel of x_o in the projector frame, and T(x_i, x_o) is the influence coefficient of pixel x_i on x_o, with T(x_i, x_o) = β·G(x_i, x_o)·r_i, where β is the camera gain, G(x_i, x_o) is the point spread function (PSF) caused by camera blur, and r_i is the reflectance coefficient of the object surface point corresponding to x_i.
  • Step 3: From the fringe patterns captured in step 2, solve the background image I', as shown in Figure 3; then solve the phase φ' carrying the phase error Δφ(x_c) with the traditional phase-shift method.
  • Step 4: Perform edge extraction on the background image I' obtained in step 3. For each pixel of the image, check whether an edge point lies within 10 pixels of its neighborhood. If not, the pixel is left unprocessed; if so, the pixel is an object of the processing of this patent. Figure 4 shows the classification result for the background image I' of Figure 3: the all-black region (image gray value 0) contains the pixels not processed by this patent, while every pixel of the non-black region (image gray value greater than 0) has edge pixels in its neighborhood and is therefore processed.
  • Step 5: Using the edge image obtained in step 4, restore the sharp background image I_s' of the background image I' before defocus. The specific method: for each edge pixel, search for the local maximum and minimum along the gray-gradient direction; since the maximum and the minimum lie on the two sides of the pixel, set the gray values of all pixels from each extremum position to the edge pixel position to that maximum or minimum value. Performing this processing for each edge pixel yields the sharp background image I_s', as shown in Figure 5. As the figure shows, I_s' reflects well the background image unblurred by camera defocus.
  • Step 6: From the sharp background image I_s' obtained in step 5, compute for each edge pixel the point spread function G caused by camera blur, described by the single-parameter standard deviation σ, by minimizing the image distance d = ‖I' − I_s' ∗ G‖².
  • Figure 6 shows the computed PSF result. Note that, to reduce the complexity of the algorithm, the PSF is computed only at the edge pixels; every other pixel of the region to be processed has its PSF set equal to that of the nearest edge pixel.
  • Step 7: For each pixel to be processed determined in step 4, compute the phase gradient direction by the neighborhood-average method: n̂(x_o) = Σ_{x∈W(x_o)} (φ_u(x), φ_v(x)) / ‖Σ_{x∈W(x_o)} (φ_u(x), φ_v(x))‖, where u and v are the horizontal and vertical indices of the image pixel coordinates, W(x_o) is a preset square neighborhood of width w, and φ_u and φ_v are the phase partial derivatives along the u and v directions, respectively.
  • This method obtains the phase gradient direction of each pixel with high accuracy under the influence of camera defocus and random noise.
  • Step 8: From the pre-defocus background image I_s', the point spread function G and the phase gradient direction n̂ obtained in the preceding steps, compute for each pixel to be processed the phase error caused by camera defocus: Δφ(x_o) = arctan[ Σ_{x_i} T(x_i, x_o)·sin Δ(x_i, x_o) / Σ_{x_i} T(x_i, x_o)·cos Δ(x_i, x_o) ], where Δ(x_i, x_o) is the phase difference between pixels x_i and x_o. Under the near-neighborhood planarity assumption, Δ(x_i, x_o) = ρ·(v⃗ · n̂), where the vector v⃗ points from x_o to x_i and n̂ is the gradient direction of pixel x_o obtained in step 7; ρ is the phase density of the neighborhood of x_o, i.e. the phase difference of adjacent pixels along the phase gradient direction, read directly from the phase map; the phase difference is illustrated in Figure 7. The summation runs over a square region of width 6σ+1, with σ computed in step 6.
  • The finally solved phase error is shown in Figure 8: the systematic error caused by camera blur concentrates at the image edges, i.e. where the reflectance of the object surface changes strongly.
  • Step 9: Obtain the corrected phase information from φ = φ' − Δφ. Finally, combined with the calibration information, the three-dimensional information of the measured object is obtained.
  • Figures 9 to 11 show a second set of real measurements; Figure 9 is the measured target, whose surface has textured regions with large jumps.
  • Figures 10 and 11 show, respectively, the three-dimensional reconstruction measured with the conventional method and the result after phase error correction with the present method. The reconstruction error caused by camera defocus is significantly reduced by the correction of the patented method. Notably, the proposed method needs no extra projected fringe patterns: it performs the phase error analysis and correction directly on the images already required by the traditional phase-shift algorithm. The three-dimensional reconstruction obtained from the corrected phase information effectively reduces the systematic error caused by camera defocus.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system. In this method, phase-shifted fringe images are first generated by a computer and captured with a camera. Then, from the captured images, the background image I' and the error-bearing phase φ' are computed, and edge extraction is performed on the background image I'. After the edge map is obtained, the point spread function (PSF) of each edge pixel is computed. Next, the phase gradient direction and the phase density of each pixel to be processed are computed in the phase map φ' by gradient filtering and the neighborhood-average method. Finally, for the pixels to be processed, the phase error Δφ caused by camera defocus is computed pixel by pixel, yielding the final corrected phase φ = φ' − Δφ. The corrected phase information can be converted into the three-dimensional information of the measured object through the phase-height mapping relationship.

Description

Phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system
Technical field:
The invention belongs to the field of three-dimensional reconstruction in computer vision, and in particular relates to a phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system.
Background art:
Fringe projection profilometry (FPP), a three-dimensional measurement technique based on fringe projection, has received extensive research and application in recent years owing to its high accuracy, high speed and low sensitivity to ambient light. As a three-dimensional measurement method based on active light projection, FPP also has corresponding limitations. In active-light-projection techniques, it is often assumed that an object point on the surface of the measured object receives illumination only directly from the sensor of the projection device. This assumption does not hold in many practical situations. Besides directly receiving the illumination of one projector pixel, an object point may also receive indirect illumination caused by phenomena such as inter-reflection, subsurface scattering and defocus. If this indirect illumination is not considered in an FPP system, noticeable systematic errors may result.
In actual measurement, camera defocus is very common, because the depth of field of the camera lens is quite limited and the topography of the object varies in complex ways. Especially when the measurement field of view of the FPP system is small, camera defocus is almost inevitable owing to the depth-of-field limit. As one kind of the above indirect illumination, camera defocus produces local blur in the image and thereby degrades the accuracy of the phase finally solved by the phase-shift algorithm. Besides camera defocus, local blur can also be caused by two other factors, projector defocus and subsurface scattering. Although this patent proposes a phase correction algorithm only for camera defocus, the mechanisms by which subsurface scattering and camera defocus produce phase errors in an FPP system are similar, so the method of this patent can also be used, to a certain extent, to correct the phase errors caused by subsurface scattering. In addition, a moderate degree of projector defocus does not affect the phase error and is therefore outside the scope of this patent.
Regarding the influence on the phase of indirect illumination, including camera defocus, the great majority of current solutions are based on high-frequency fringe projection. Their principle is that when the frequency of the projected fringes is very high, the errors caused by indirect illumination cancel out. Such methods can, to a certain extent, remove the phase errors caused by indirect illumination such as inter-reflection and subsurface scattering, but they are of little use against camera defocus. The reason is that the blur produced by camera defocus is usually very local: a pixel in the image receives reflected light from only a very small region of the object surface. In this case, methods based on high-frequency fringe projection must project fringe patterns of extremely high frequency to suppress the influence of camera defocus effectively. Industrial projectors, however, cannot accurately project fringes of very small width; for a common projector, fringes narrower than 8 pixels often cannot be projected accurately. Such methods therefore cannot be used to remove the phase error caused by camera defocus in an FPP system.
Summary of the invention:
The invention aims to provide a phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system: the analytical expression of the phase error caused by camera defocus is derived first, and this phase error is then solved directly and used to correct the phase. As a purely mathematical algorithm, the method places no additional hardware requirements on the measurement system and needs no extra projected fringe patterns; the correction is completed directly with the original fringe patterns affected by camera defocus. The corrected phase, combined with the calibration parameters, yields high-precision three-dimensional reconstruction results.
To solve the above problems, the present invention adopts the following technical solution:
A phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system, comprising the following steps:
S1. Use the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object, and capture the N fringe patterns;
S2. From the fringe patterns captured in step S1, solve the background image I', then solve the phase φ' carrying the phase error Δφ(x_c) with the traditional phase-shift method;
S3. Perform edge extraction on the background image I' obtained in S2;
S4. Using the edge image obtained in step S3, restore the sharp background image I_s', i.e. the background image I' as it was before defocus;
S5. From the sharp background image I_s' obtained in step S4, compute for every edge pixel the point spread function G caused by camera blur, described by the single-parameter standard deviation σ, by minimizing the following image distance (∗ denotes convolution):
d = ‖I' − I_s' ∗ G‖²;
S6. For each pixel to be processed determined in step S5, compute the phase gradient direction by the neighborhood-average method:
n̂(x_o) = Σ_{x∈W(x_o)} (φ_u(x), φ_v(x)) / ‖Σ_{x∈W(x_o)} (φ_u(x), φ_v(x))‖,
where u and v are the horizontal and vertical indices of the image pixel coordinates; W(x_o) is a preset square neighborhood of width w centered on x_o; φ_u and φ_v are the phase partial derivatives along the u and v directions, respectively;
S7. From the pre-defocus background image I_s' obtained in step S4, the point spread function G obtained in step S5 and the phase gradient direction n̂ obtained in step S6, compute for each pixel to be processed the phase error caused by camera defocus:
Δφ(x_o) = arctan[ Σ_{x_i} T(x_i, x_o)·sin Δ(x_i, x_o) / Σ_{x_i} T(x_i, x_o)·cos Δ(x_i, x_o) ],
where Δ(x_i, x_o) is the phase difference between pixels x_i and x_o; under the near-neighborhood planarity assumption, Δ(x_i, x_o) = ρ·(v⃗ · n̂), where the vector v⃗ points from x_o to x_i and n̂ is the gradient direction of pixel x_o, i.e. the direction obtained in step S6; ρ is the phase density of the neighborhood of x_o, i.e. the phase difference between adjacent pixels along the phase gradient direction, which can be read directly from the phase map; when computing the phase error, the summation runs over a square region of width 6σ+1, with σ computed in step S5.
S8. Obtain the corrected phase information from
φ = φ' − Δφ,
and finally, combined with the calibration information, obtain the three-dimensional information of the measured object.
In the described phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system, the specific operation of using the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object in step S1 is: fix the projector and the camera according to the hardware triangulation relationship of the active-light-projection three-dimensional measurement system, and place the object to be measured, which has a complex surface texture, at a suitable position; use the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object, with the fringe gray value set to:
I_n^p = A + B·cos(φ + δ_n),
where I_n^p is the gray value of the n-th fringe image; A and B are the fringe background intensity and the fringe modulation coefficient, respectively; φ is the designed phase value; δ_n is the phase shift of the n-th fringe, n = 1, 2, …, N, and N is the total number of phase-shift steps.
In the described phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system, the specific method of capturing the N fringe patterns in step S1 is: first adjust the aperture, shutter speed and sensitivity of the camera so that the captured images are not saturated, i.e. the gray value of the brightest region of the image stays below 255; capture the N fringe patterns under these camera settings. When the camera is defocused, the fringe gray value captured by the camera is:
I'_n(x_c) = Σ_{x_i} T(x_i, x_o)·I_n(x_i),
where I'_n is the captured fringe pattern; x_c denotes any pixel of the captured image; x_o is the pixel in the projector frame corresponding to x_c; x_i is a neighborhood pixel of x_o in the projector frame; T(x_i, x_o) is the influence coefficient of pixel x_i on x_o, and T(x_i, x_o) = β·G(x_i, x_o)·r_i, where β is the camera gain, G(x_i, x_o) is the point spread function (PSF) caused by camera blur, and r_i is the reflectance coefficient of the object surface point corresponding to x_i.
In the described phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system, the background image I' and the phase are solved in step S2 as follows:
S21. For the N captured phase-shifted fringe patterns I_i, i = 1, 2, …, N, solve the background image by:
I' = (1/N)·Σ_{i=1..N} I_i;
S22. For the N captured phase-shifted fringe patterns I_i, i = 1, 2, …, N, solve the phase by:
φ' = −arctan[ Σ_{i=1..N} I_i·sin δ_i / Σ_{i=1..N} I_i·cos δ_i ].
In the described phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system, the specific method of restoring the sharp, pre-defocus background image I_s' from the background image I' in step S4 is: for each edge pixel, search for the local maximum and minimum along the gray-gradient direction; since the maximum and the minimum lie on the two sides of the pixel, set the gray values of all pixels from each extremum position to the edge pixel position to that maximum or minimum value; performing this processing for each edge pixel yields the sharp background image I_s'.
Beneficial effects: aiming at the problem that a conventional fringe projection three-dimensional measurement system easily suffers image blur in actual measurement because of the camera's limited depth of field, which further causes significant phase errors, the present invention proposes a phase error correction algorithm based on an analytical expression. Compared with the existing art, the method proposed in this patent relies on no hardware beyond the measurement system itself, nor on the projection of high-frequency fringes. By analyzing the influence of camera defocus on the phase quality, the analytical expression of the phase error is established; then, combining the point spread function (PSF) of each pixel, the pre-blur background image I_s', the phase direction n̂ and the phase density ρ, the magnitude of the phase error is solved accurately, so that the phase obtained with the traditional phase-shift method is corrected directly. The corrected phase, combined with the calibration information, yields the corrected three-dimensional reconstruction result. The whole phase correction rests on a rigorous mathematical derivation and the algorithm is simple to implement; it suits the common situation in conventional fringe projection three-dimensional measurement systems where the camera depth of field is small and images are often blurred, and it also suits the case where the measured object is translucent and subsurface scattering occurs.
Brief description of the drawings:
Figure 1 is a flow chart of the whole process of the invention.
Figure 2 is a frame diagram of the fringe projection three-dimensional measurement system.
Figure 3 is a schematic diagram of a test object.
Figure 4 is a schematic diagram of the pixels to be processed by this patent.
Figure 5 is the computed pre-blur background image.
Figure 6 is a schematic diagram of the computation result of the blur function (PSF).
Figure 7 is a schematic diagram of the phase difference Δ(x_i, x_o).
Figure 8 is a schematic diagram of the phase error computed for a test object with this patent.
Figure 9 is a schematic diagram of an experimental object.
Figure 10 shows the three-dimensional reconstruction obtained by directly applying the conventional method.
Figure 11 shows the three-dimensional reconstruction obtained after phase correction with the algorithm of this patent.
Detailed description of the embodiments:
The invention is further explained below in combination with specific embodiments; it should be understood that the following embodiments serve only to illustrate the invention and not to limit its scope.
Embodiment 1:
The invention is further explained below with reference to the drawings and a specific embodiment. Under the Windows operating system, MATLAB is chosen as the programming tool to process the computer-generated sinusoidal fringes and the fringe images captured by the CCD camera. This embodiment uses a white plane with black texture as the measured object to verify the validity of the error correction method proposed in this patent. It should be understood that these examples serve only to illustrate the invention and not to limit its scope; after reading this disclosure, modifications of various equivalent forms by those skilled in the art fall within the scope defined by the claims appended to this application.
A phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system: the algorithm flow is shown in Figure 1, and the block diagram of the measurement system is shown in Figure 2.
The method specifically comprises the following steps:
Step 1: Fix the projector and the camera according to the hardware triangulation relationship of the active-light-projection three-dimensional measurement system, and place the object to be measured, which has a complex surface texture, at a suitable position. Use the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object, with the fringe gray value set to:
I_n^p = A + B·cos(φ + δ_n),
where I_n^p is the gray value of the n-th fringe image; A and B are the fringe background intensity and the fringe modulation coefficient, respectively; φ is the designed phase value; δ_n is the phase shift of the n-th fringe, n = 1, 2, …, N, and N is the total number of phase-shift steps.
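For illustration, a minimal MATLAB sketch of this pattern generation follows (MATLAB is the programming tool used in this embodiment). The projector resolution, fringe period, and the equally spaced shifts δ_n = 2π(n−1)/N are assumptions of the sketch, not prescriptions of the patent:

    % Generate and save N standard phase-shifted sinusoidal fringe patterns.
    % A, B, the fringe period and the projector resolution are example values.
    N = 4;                                 % total number of phase-shift steps
    A = 127.5;  B = 100;                   % background intensity and modulation
    H = 768;  W = 1024;                    % projector resolution (assumption)
    P = 32;                                % fringe period in pixels (assumption)
    phi = 2*pi*(0:W-1)/P;                  % designed phase along the u axis
    for n = 1:N
        delta_n = 2*pi*(n-1)/N;            % equally spaced phase shifts
        row = A + B*cos(phi + delta_n);    % one row of the n-th pattern
        In  = repmat(row, H, 1);           % vertical fringes, constant along v
        imwrite(uint8(In), sprintf('fringe_%02d.bmp', n));
    end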
Step 2: Set the relevant camera parameters, namely aperture, shutter speed and sensitivity, so that the captured images are not saturated (i.e. the gray value of the brightest region of the image stays below 255). Capture the N fringe patterns under these camera settings. When the camera is defocused, the fringe gray value captured by the camera is:
I'_n(x_c) = Σ_{x_i} T(x_i, x_o)·I_n(x_i),
where I'_n is the captured fringe pattern; x_c denotes any pixel of the captured image; x_o is the pixel in the projector frame corresponding to x_c; x_i is a neighborhood pixel of x_o in the projector frame; T(x_i, x_o) is the influence coefficient of pixel x_i on x_o, and T(x_i, x_o) = β·G(x_i, x_o)·r_i, where β is the camera gain, G(x_i, x_o) is the point spread function (PSF) caused by camera blur, and r_i is the reflectance coefficient of the object surface point corresponding to x_i.
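This image-formation model can be exercised numerically. The sketch below simulates a defocused capture under the simplifying assumption of a spatially uniform Gaussian PSF, in which case the weighted neighborhood sum collapses to a 2-D convolution; the reflectance map, gain and σ are invented for illustration, and fspecial here (like the image functions in later sketches) assumes MATLAB's Image Processing Toolbox:

    % Simulate I'_n = sum_i T(x_i, x_o) I_n(x_i) for a spatially uniform
    % Gaussian PSF: the weighted sum reduces to a 2-D convolution.
    In    = double(imread('fringe_01.bmp'));   % one projected pattern
    r     = ones(size(In));                    % toy reflectance map
    r(:, round(end/2):end) = 0.3;              % a sharp reflectance step
    beta  = 1.0;                               % camera gain
    sigma = 2.5;                               % PSF standard deviation (example)
    G     = fspecial('gaussian', 2*ceil(3*sigma)+1, sigma);
    Icap  = beta * conv2(r .* In, G, 'same');  % defocused fringe image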
Step 3: From the fringe patterns captured in step 2, solve the background image I', as shown in Figure 3; then solve the phase φ' carrying the phase error Δφ(x_c) with the traditional phase-shift method.
Step 3.1: For the N captured phase-shifted fringe patterns I_i, i = 1, 2, …, N, solve the background image by:
I' = (1/N)·Σ_{i=1..N} I_i.
Step 3.2: For the N captured phase-shifted fringe patterns I_i, i = 1, 2, …, N, solve the phase by:
φ' = −arctan[ Σ_{i=1..N} I_i·sin δ_i / Σ_{i=1..N} I_i·cos δ_i ].
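In MATLAB, steps 3.1 and 3.2 reduce to a few lines (a sketch; the file names and the equally spaced shifts δ_i = 2π(i−1)/N are assumptions, and atan2 keeps the wrapped phase in (−π, π]):

    % Solve the background image I' and the error-bearing phase phi' from
    % the N captured phase-shifted fringe images (standard N-step formulas).
    N = 4;  H = 768;  W = 1024;                % must match the capture
    imgs = zeros(H, W, N);
    for i = 1:N
        imgs(:,:,i) = double(imread(sprintf('capture_%02d.bmp', i)));
    end
    Ibg = mean(imgs, 3);                       % background image I'
    num = zeros(H, W);  den = zeros(H, W);
    for i = 1:N
        delta_i = 2*pi*(i-1)/N;
        num = num + imgs(:,:,i) * sin(delta_i);
        den = den + imgs(:,:,i) * cos(delta_i);
    end
    phiP = -atan2(num, den);                   % wrapped phase phi'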
Step 4: Perform edge extraction on the background image I' obtained in step 3. For each pixel of the image, check whether an edge point lies within 10 pixels of its neighborhood. If not, the pixel is left unprocessed; if so, the pixel is an object to be processed by this patent. Figure 4 shows the classification result for the background image I' of Figure 3: the all-black region (image gray value 0) consists of the pixels not processed by this patent; every pixel of the non-black region (image gray value greater than 0) has edge pixels within its neighborhood and therefore belongs to the part processed by this patent.
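One possible MATLAB realization of this classification is sketched below, continuing the variables of the previous sketch; the patent does not name the edge detector, so the Canny detector and the disk-shaped 10-pixel dilation are assumptions:

    % Classify pixels: process only those with an edge point within 10 px.
    E    = edge(mat2gray(Ibg), 'canny');      % edge map of the background
    near = imdilate(E, strel('disk', 10));    % true within 10 px of an edge
    toProcess = near;                         % mask of pixels to correct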
Step 5: Using the edge image obtained in step 4, restore the sharp background image I_s' of the background image I' before defocus. The specific method: for each edge pixel, search for the local maximum and minimum along the gray-gradient direction; since the maximum and the minimum lie on the two sides of the pixel, set the gray values of all pixels from each extremum position to the edge pixel position to that maximum or minimum value. Performing this processing for each edge pixel yields the sharp background image I_s', as shown in Figure 5. As can be seen from the figure, I_s' reflects well the background image unblurred by camera defocus.
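A literal, unoptimized MATLAB sketch of this restoration follows; the maximum march length L is an assumption, since the patent only says to march along the gray-gradient direction until the local extremum is reached:

    % Restore the sharp background Is: from each edge pixel, march along
    % +/- the gray-gradient direction to the local extremum, then snap the
    % traversed pixels to that extreme value.
    [Gx, Gy] = imgradientxy(Ibg);             % gradients along u and v
    Is = Ibg;
    [ev, eu] = find(E);                       % edge pixels (row, col)
    L = 15;                                   % max march length (assumption)
    for k = 1:numel(eu)
        g = [Gx(ev(k), eu(k)), Gy(ev(k), eu(k))];
        if ~any(g), continue; end
        g = g / norm(g);
        for s = [1, -1]                       % towards the max, then the min
            prev = Ibg(ev(k), eu(k));  path = zeros(0, 2);
            for t = 1:L
                u = round(eu(k) + s*t*g(1));  v = round(ev(k) + s*t*g(2));
                if u < 1 || u > size(Ibg,2) || v < 1 || v > size(Ibg,1), break; end
                if s*(Ibg(v,u) - prev) < 0, break; end   % passed the extremum
                prev = Ibg(v,u);  path(end+1, :) = [v, u]; %#ok<AGROW>
            end
            for p = 1:size(path, 1)
                Is(path(p,1), path(p,2)) = prev;         % snap to extremum
            end
        end
    end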
Step 6: From the sharp background image I_s' obtained in step 5, compute for each edge pixel the point spread function G caused by camera blur, described by the single-parameter standard deviation σ, by minimizing the following image distance (∗ denotes convolution):
d = ‖I' − I_s' ∗ G‖².
Figure 6 shows the computed PSF result. Note that, to reduce the complexity of the algorithm, the PSF is computed only at the edge pixels; every other pixel of the region to be processed has its PSF set equal to that of the nearest edge pixel.
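The minimization over the single parameter σ can be done with a simple grid search; in the sketch below the candidate grid, the local patch size and the Gaussian kernel support are assumptions, and the last two lines propagate σ from each edge pixel to the nearest processed pixel, as the text prescribes:

    % Estimate sigma per edge pixel by minimizing d = ||I' - Is'*G||^2 on a
    % local patch; blur Is once per candidate sigma, outside the pixel loop.
    sigmas = 0.2:0.2:5;                       % candidate standard deviations
    hp = 10;                                  % half-width of the local patch
    blurred = zeros([size(Is), numel(sigmas)]);
    for si = 1:numel(sigmas)
        G = fspecial('gaussian', 2*ceil(3*sigmas(si))+1, sigmas(si));
        blurred(:,:,si) = conv2(Is, G, 'same');
    end
    sigMap = zeros(size(Ibg));
    for k = 1:numel(eu)
        v = ev(k);  u = eu(k);
        if v-hp < 1 || u-hp < 1 || v+hp > size(Ibg,1) || u+hp > size(Ibg,2)
            continue;                         % skip image borders for brevity
        end
        patch = Ibg(v-hp:v+hp, u-hp:u+hp);
        best = inf;
        for si = 1:numel(sigmas)
            d = sum(sum((patch - blurred(v-hp:v+hp, u-hp:u+hp, si)).^2));
            if d < best, best = d;  sigMap(v,u) = sigmas(si); end
        end
    end
    [~, nearestEdge] = bwdist(E);             % nearest edge pixel, per pixel
    sigMap(toProcess) = sigMap(nearestEdge(toProcess));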
Step 7: For each pixel to be processed determined in step 4, compute the phase gradient direction by the neighborhood-average method:
n̂(x_o) = Σ_{x∈W(x_o)} (φ_u(x), φ_v(x)) / ‖Σ_{x∈W(x_o)} (φ_u(x), φ_v(x))‖,
where u and v are the horizontal and vertical indices of the image pixel coordinates; W(x_o) is a preset square neighborhood of width w; φ_u and φ_v are the phase partial derivatives along the u and v directions, respectively. This method obtains the phase gradient direction of each pixel with high accuracy under the influence of camera defocus and random noise.
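A sketch of this neighborhood averaging, with an assumed window width w (MATLAB's gradient differences the wrapped phase, so in practice pixels adjacent to 2π wraps would be masked or locally unwrapped first); the phase density ρ is taken here as the magnitude of the averaged gradient, i.e. the phase change per pixel along the gradient direction:

    % Phase gradient direction by neighborhood averaging: average the raw
    % gradients over a w-by-w window, then normalize to a unit direction.
    [phiU, phiV] = gradient(phiP);            % partials along u and v
    w  = 7;                                   % preset window width (example)
    k  = ones(w) / w^2;                       % box filter for the average
    su = conv2(phiU, k, 'same');
    sv = conv2(phiV, k, 'same');
    mag = max(hypot(su, sv), eps);            % avoid division by zero
    nU  = su ./ mag;   nV  = sv ./ mag;       % unit phase-gradient direction
    rho = hypot(su, sv);                      % phase density (rad per pixel)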
Step 8: From the pre-defocus background image I_s', the point spread function G and the phase gradient direction n̂ obtained in the above steps, compute for each pixel to be processed the phase error caused by camera defocus:
Δφ(x_o) = arctan[ Σ_{x_i} T(x_i, x_o)·sin Δ(x_i, x_o) / Σ_{x_i} T(x_i, x_o)·cos Δ(x_i, x_o) ],
where Δ(x_i, x_o) is the phase difference between pixels x_i and x_o. Under the near-neighborhood planarity assumption, Δ(x_i, x_o) = ρ·(v⃗ · n̂), where the vector v⃗ points from x_o to x_i, and n̂ is the gradient direction of pixel x_o, i.e. the direction obtained in step 7; ρ is the phase density of the neighborhood of x_o, i.e. the phase difference of adjacent pixels along the phase gradient direction, which can be read directly from the phase map; the phase difference is illustrated in Figure 7. When computing the phase error, the summation runs over a square region of width 6σ+1, with σ computed in step 6. The finally solved phase error is shown in Figure 8: the systematic error caused by camera blur concentrates at the image edges, i.e. where the surface reflectance of the object changes strongly.
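Putting the pieces together, the per-pixel error evaluation might look as follows; this sketch continues the variables of the previous sketches and takes the weight T as the Gaussian PSF times the sharp background I_s' (a stand-in for β·G·r_i: I_s' is proportional to the local reflectance under a uniform projector background, and the constants cancel in the arctangent ratio):

    % Phase error per processed pixel under the planarity assumption
    % Delta(x_i, x_o) = rho * (v . n), over a (6*sigma+1) square window.
    dPhi = zeros(size(phiP));
    [pv, pu] = find(toProcess);
    for k = 1:numel(pu)
        u = pu(k);  v = pv(k);
        s = sigMap(v, u);
        if s == 0, continue; end
        h = ceil(3*s);                        % half-width of the window
        if u-h < 1 || v-h < 1 || u+h > size(phiP,2) || v+h > size(phiP,1)
            continue;
        end
        [dU, dV] = meshgrid(-h:h);            % offsets from x_o to x_i
        G = fspecial('gaussian', 2*h+1, s);   % PSF weights
        T = G .* Is(v-h:v+h, u-h:u+h);        % ~ beta*G*r_i (constants cancel)
        D = rho(v,u) * (dU*nU(v,u) + dV*nV(v,u));  % planarity assumption
        dPhi(v,u) = atan2(sum(sum(T .* sin(D))), sum(sum(T .* cos(D))));
    end
    phiCorr = phiP - dPhi;                    % corrected phase phi = phi' - dPhi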
Step 9: Obtain the corrected phase information from
φ = φ' − Δφ.
Finally, combined with the calibration information, the three-dimensional information of the measured object is obtained. Figures 9 to 11 show a second set of real measurements: Figure 9 is the measured object target, whose surface has textured regions with large jumps. Figures 10 and 11 show, respectively, the three-dimensional reconstruction obtained with the conventional method and the result after phase error correction with the present method. After correction by the method of this patent, the reconstruction error caused by camera defocus is significantly reduced. It is worth mentioning that the method proposed in this patent needs no extra projected fringe patterns; it performs the phase error analysis and correction directly with the images required by the traditional phase-shift algorithm. The three-dimensional reconstruction obtained from the corrected phase information effectively reduces the systematic error caused by camera defocus.
It should be noted that the above embodiment is merely an example given for clarity of explanation and is not a limitation on the implementation; it is neither necessary nor possible to enumerate all implementations here. Components not specified in this embodiment can all be realized with the existing art. For those of ordinary skill in the art, several improvements and refinements can be made without departing from the principle of the invention, and these improvements and refinements should also be regarded as falling within the protection scope of the invention.

Claims (5)

  1. A phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system, characterized in that the method comprises the following steps:
    S1. Use the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object, and capture the N fringe patterns;
    S2. From the fringe patterns captured in step S1, solve the background image I', then solve the phase φ' carrying the phase error Δφ(x_c) with the traditional phase-shift method;
    S3. Perform edge extraction on the background image I' obtained in S2;
    S4. Using the edge image obtained in step S3, restore the sharp background image I_s' of the background image I' before defocus;
    S5. From the sharp background image I_s' obtained in step S4, compute for every edge pixel the point spread function G caused by camera blur, described by the single-parameter standard deviation σ, by minimizing the following image distance (∗ denotes convolution):
    d = ‖I' − I_s' ∗ G‖²;
    S6. For each pixel to be processed determined in step S5, compute the phase gradient direction by the neighborhood-average method:
    n̂(x_o) = Σ_{x∈W(x_o)} (φ_u(x), φ_v(x)) / ‖Σ_{x∈W(x_o)} (φ_u(x), φ_v(x))‖,
    where u and v are the horizontal and vertical indices of the image pixel coordinates; W(x_o) is a preset square neighborhood of width w; φ_u and φ_v are the phase partial derivatives along the u and v directions, respectively;
    S7. From the pre-defocus background image I_s' obtained in step S4, the point spread function G obtained in step S5 and the phase gradient direction n̂ obtained in step S6, compute for each pixel to be processed the phase error caused by camera defocus:
    Δφ(x_o) = arctan[ Σ_{x_i} T(x_i, x_o)·sin Δ(x_i, x_o) / Σ_{x_i} T(x_i, x_o)·cos Δ(x_i, x_o) ],
    where Δ(x_i, x_o) is the phase difference between pixels x_i and x_o; under the near-neighborhood planarity assumption, Δ(x_i, x_o) = ρ·(v⃗ · n̂), where the vector v⃗ points from x_o to x_i and n̂ is the gradient direction of pixel x_o, i.e. the direction obtained in step S6; ρ is the phase density of the neighborhood of x_o, i.e. the phase difference of adjacent pixels along the phase gradient direction, which can be read directly from the phase map; when computing the phase error, the summation runs over a square region of width 6σ+1, with σ computed in step S5;
    S8. Obtain the corrected phase information from
    φ = φ' − Δφ,
    and finally, combined with the calibration information, obtain the three-dimensional information of the measured object.
  2. The phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system according to claim 1, characterized in that the specific operation of using the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object in step S1 is: fix the projector and the camera according to the hardware triangulation relationship of the active-light-projection three-dimensional measurement system, and place the object to be measured, which has a complex surface texture, at a suitable position; use the projector to project the required N standard phase-shifted sinusoidal fringe images onto the object, with the fringe gray value set to:
    I_n^p = A + B·cos(φ + δ_n),
    where I_n^p is the gray value of the n-th fringe image; A and B are the fringe background intensity and the fringe modulation coefficient, respectively; φ is the designed phase value; δ_n is the phase shift of the n-th fringe, n = 1, 2, …, N, and N is the total number of phase-shift steps.
  3. The phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system according to claim 1, characterized in that the specific method of capturing the N fringe patterns in step S1 is: first adjust the aperture, shutter speed and sensitivity of the camera so that the captured images are not saturated, i.e. the gray value of the brightest region of the image stays below 255; capture the N fringe patterns under these camera settings; when the camera is defocused, the fringe gray value captured by the camera is:
    I'_n(x_c) = Σ_{x_i} T(x_i, x_o)·I_n(x_i),
    where I'_n is the captured fringe pattern; x_c denotes any pixel of the captured image; x_o is the pixel in the projector frame corresponding to x_c; x_i is a neighborhood pixel of x_o in the projector frame; T(x_i, x_o) is the influence coefficient of pixel x_i on x_o, and T(x_i, x_o) = β·G(x_i, x_o)·r_i, where β is the camera gain, G(x_i, x_o) is the point spread function (PSF) caused by camera blur, and r_i is the reflectance coefficient of the object surface point corresponding to x_i.
  4. The phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system according to claim 1, characterized in that the background image I' and the phase are solved in step S2 as follows:
    S21. For the N captured phase-shifted fringe patterns I_i, i = 1, 2, …, N, solve the background image by:
    I' = (1/N)·Σ_{i=1..N} I_i;
    S22. For the N captured phase-shifted fringe patterns I_i, i = 1, 2, …, N, solve the phase by:
    φ' = −arctan[ Σ_{i=1..N} I_i·sin δ_i / Σ_{i=1..N} I_i·cos δ_i ].
  5. The phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system according to claim 1, characterized in that the specific method of restoring the sharp, pre-defocus background image I_s' from the background image I' in step S4 is: for each edge pixel, search for the local maximum and minimum along the gray-gradient direction; since the maximum and the minimum lie on the two sides of the pixel, set the gray values of all pixels from each extremum position to the edge pixel position to that maximum or minimum value; performing this processing for each edge pixel yields the sharp background image I_s'.
PCT/CN2018/087387 2018-02-09 2018-05-17 Phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system WO2019153569A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711442917.7 2018-02-09
CN201711442917.7A CN108168464B (zh) 2018-02-09 2018-02-09 Phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system

Publications (1)

Publication Number Publication Date
WO2019153569A1 true WO2019153569A1 (zh) 2019-08-15

Family

ID=62521935

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/087387 WO2019153569A1 (zh) 2018-02-09 2018-05-17 Phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system

Country Status (2)

Country Link
CN (1) CN108168464B (zh)
WO (1) WO2019153569A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064474B (zh) * 2018-07-30 2022-01-04 安徽慧视金瞳科技有限公司 Method for automatically acquiring mask images in an interactive classroom teaching system
CN109631797B (zh) * 2018-12-28 2020-08-11 广东奥普特科技股份有限公司 Fast localization method for invalid regions in three-dimensional reconstruction based on the phase-shift technique
CN109781030B (zh) * 2019-01-23 2020-03-03 四川大学 Phase correction method and device based on point spread function estimation
CN110068287B (zh) * 2019-04-24 2020-12-29 杭州光粒科技有限公司 Phase correction method and apparatus, computer device, and computer-readable storage medium
CN110223337B (zh) * 2019-06-11 2021-08-27 张羽 Descrambling method for multipath interference in structured-light imaging
CN110793463B (zh) * 2019-09-25 2020-11-10 西安交通大学 Unwrapped-phase error detection and correction method based on phase distribution
CN111311686B (zh) * 2020-01-15 2023-05-02 浙江大学 Edge-aware projector defocus correction method
CN112184788B (zh) * 2020-09-16 2023-11-07 西安邮电大学 Principal-value phase extraction method for four-step phase shifting
CN112762858B (zh) * 2020-12-06 2021-11-19 复旦大学 Compensation method for phase errors in a deflectometry system
CN115479556A (zh) * 2021-07-15 2022-12-16 四川大学 Binary defocused three-dimensional measurement method and device reducing the mean phase error
CN113959360B (zh) * 2021-11-25 2023-11-24 成都信息工程大学 Method, device, and medium for vertical measurement of three-dimensional surface shape based on phase shift and focus shift
CN114688995A (zh) * 2022-04-27 2022-07-01 河北工程大学 Phase error compensation method for fringe projection three-dimensional measurement
CN115546285B (zh) * 2022-11-25 2023-06-02 南京理工大学 Large-depth-of-field fringe projection three-dimensional measurement method based on point spread function calculation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009141838A1 (en) * 2008-05-19 2009-11-26 Zhermack S.P.A. Method for contactless measurement of surface shape objects, particularly for dental arch portions or teeth portions
CN105115446A (zh) * 2015-05-11 2015-12-02 南昌航空大学 Fringe reflection three-dimensional measurement method based on triangular-wave fringe defocusing
CN106595522A (zh) * 2016-12-15 2017-04-26 东南大学 Error correction method for a grating projection three-dimensional measurement system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8089635B2 (en) * 2007-01-22 2012-01-03 California Institute Of Technology Method and system for fast three-dimensional imaging using defocusing and feature recognition
WO2010103527A2 (en) * 2009-03-13 2010-09-16 Ramot At Tel-Aviv University Ltd. Imaging system and method for imaging objects with reduced image blur
JP5891554B2 (ja) * 2011-08-29 2016-03-23 国立大学法人山梨大学 Stereoscopic-effect presentation device and method, and blurred-image generation and processing device, method, and program
CN104025255B (zh) * 2011-12-30 2016-09-07 英特尔公司 Techniques for phase tuning for process optimization
JP2014163812A (ja) * 2013-02-26 2014-09-08 Institute Of National Colleges Of Technology Japan Pattern projection method, pattern projection apparatus, and three-dimensional measurement apparatus using the same
CN104006765B (zh) * 2014-03-14 2016-07-13 中国科学院上海光学精密机械研究所 Phase extraction method and detection device for single carrier-frequency interference fringes
CN104457614B (zh) * 2014-11-11 2017-09-01 南昌航空大学 Fringe reflection three-dimensional measurement method based on binary fringe defocusing
JP2016170122A (ja) * 2015-03-13 2016-09-23 キヤノン株式会社 Measurement apparatus
CN105806259B (zh) * 2016-04-29 2018-08-10 东南大学 Three-dimensional measurement method based on defocused projection of a binary grating

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009141838A1 (en) * 2008-05-19 2009-11-26 Zhermack S.P.A. Method for contactless measurement of surface shape objects, particularly for dental arch portions or teeth portions
CN105115446A (zh) * 2015-05-11 2015-12-02 南昌航空大学 Fringe reflection three-dimensional measurement method based on triangular-wave fringe defocusing
CN106595522A (zh) * 2016-12-15 2017-04-26 东南大学 Error correction method for a grating projection three-dimensional measurement system

Also Published As

Publication number Publication date
CN108168464B (zh) 2019-12-13
CN108168464A (zh) 2018-06-15

Similar Documents

Publication Publication Date Title
WO2019153569A1 (zh) Phase error correction method for the defocus phenomenon of a fringe projection three-dimensional measurement system
US10415957B1 (en) Error correction method for fringe projection profilometry system
EP3594617B1 (en) Three-dimensional-shape measurement device, three-dimensional-shape measurement method, and program
US9122946B2 (en) Systems, methods, and media for capturing scene images and depth geometry and generating a compensation image
CN113358063B (zh) 一种基于相位加权融合的面结构光三维测量方法及***
KR20140027468A (ko) 깊이 측정치 품질 향상
Tang et al. High-precision camera distortion measurements with a “calibration harp”
JP2024507089A (ja) 画像のコレスポンデンス分析装置およびその分析方法
US20190087943A1 (en) Method for determining a point spread function of an imaging system
Guan et al. Pixel-level mapping method in high dynamic range imaging system based on DMD modulation
CN116608794B (zh) 一种抗纹理3d结构光成像方法、***、装置及存储介质
Ghita et al. A video-rate range sensor based on depth from defocus
JP2014066538A (ja) 写真計測用ターゲット及び写真計測方法
Xiaohui et al. The image adaptive method for solder paste 3D measurement system
CN112378348B (zh) 一种针对低质量条纹图像的迭代相位校正方法
Gottfried et al. Time of flight motion compensation revisited
JP7088232B2 (ja) 検査装置、検査方法、およびプログラム
JP2002162215A (ja) 3次元形状計測方法およびそのシステム
JP2008170282A (ja) 形状測定装置
Urban et al. On the Issues of TrueDepth Sensor Data for Computer Vision Tasks Across Different iPad Generations
An et al. A modified multi-exposure fusion method for laser measurement of specular surfaces
US20210349218A1 (en) System and method for processing measured 3d values of a scene
US20240062401A1 (en) Measurement system, inspection system, measurement device, measurement method, inspection method, and program
WO2024134935A1 (ja) Three-dimensional information correction device and three-dimensional information correction method
WO2023104443A1 (en) Time-of-flight detection circuitry and time-of-flight detection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905538

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18905538

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/03/2021)
