CN117589087A - Phase correction method and system for structured light stripe contour projection

Info

Publication number
CN117589087A
CN117589087A (application CN202311616673.5A)
Authority
CN
China
Prior art keywords: phase, region, pixel point, pixel, boundary
Prior art date
Legal status
Pending
Application number
CN202311616673.5A
Other languages
Chinese (zh)
Inventor
曹红燕
乔大勇
彭安杰
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202311616673.5A
Publication of CN117589087A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2504 Calibration devices


Abstract

The invention discloses a phase correction method and system for structured light stripe contour projection. During boundary identification, a first boundary identification extracts edges from a gray-level image, and a second boundary identification extracts boundary information from the computed phase; identifying boundaries from both the gray-level image and the phase map improves the accuracy of boundary identification. The two boundary identification results are then fused, so that the phase of the stable region is not disturbed by jump pixels and the accuracy of region division is guaranteed. Based on the divided regions, the stable region is filtered, which reduces the loss of feature information, provides reference information for correcting the unstable region, and reduces error propagation. When the stable region is used as the reference for error compensation of the unstable region, phase noise is reduced, thereby reducing point cloud noise and improving accuracy.

Description

Phase correction method and system for structured light stripe contour projection
Technical Field
The invention belongs to the technical field of optical three-dimensional measurement, and relates to a phase correction method and system for structured light stripe contour projection.
Background
Structured light stripe profile projection, also known as fringe projection profilometry (FPP), is a non-contact, high-precision optical three-dimensional measurement technique. FPP has been widely used in medical imaging, manufacturing, archaeology, package sorting, robotics, computer vision, and other fields. A conventional FPP system consists of one or two cameras and a projector. A set of coded fringe patterns is projected onto the object by the projector, the deformed fringe images are captured by the camera, and the three-dimensional information of the object can be recovered by decoding and triangulation. Phase unwrapping is a key problem in three-dimensional shape measurement based on FPP: because of the inverse trigonometric function operation, the retrieved phase is wrapped between −π and π, which blurs the subsequent phase-to-depth mapping. To eliminate this phase ambiguity, spatial and temporal phase unwrapping methods have been proposed. Spatial phase unwrapping detects and eliminates 2π phase jumps by examining the phase difference between adjacent pixels, while temporal phase unwrapping (TPU) eliminates phase ambiguity by projecting additional patterns to uniquely mark each period of the wrapped phase. Because it unwraps the phase pixel by pixel, the TPU method is better suited to complex and isolated scenes: it eliminates phase ambiguity by matching the wrapped phase with the corresponding fringe order and adapts well to complex or isolated test scenes. In actual measurement, however, TPU suffers from phase unwrapping errors that seriously affect the final three-dimensional reconstruction result. The error sources fall mainly into two classes: first, random phase unwrapping errors caused by factors such as optical system noise and ambient light interference, whose amplitude distribution is relatively uniform; and second, errors caused by the arctangent calculation and by mismatches between the wrapped phase and the fringe order, which are mostly concentrated at discontinuities of the wrapped phase and are called jump errors.
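In a typical N-step phase-shifting implementation, the wrapped phase comes directly out of an arctangent over the captured fringe images; the following minimal NumPy sketch illustrates that standard step for orientation only (the patent does not prescribe a particular phase-shifting scheme):

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from N phase-shifted fringe images.

    images: array of shape (N, H, W), captured with equal phase shifts of
    2*pi/N. The arctangent confines the result to (-pi, pi], which is
    exactly the wrapping that phase unwrapping must later resolve.
    """
    n = images.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(shifts), images, axes=(0, 0))
    den = np.tensordot(np.cos(shifts), images, axes=(0, 0))
    return -np.arctan2(num, den)
```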
In actual measurement, phase unwrapping is prone to errors from multiple sources. First, when the camera captures the fringe images, image quality is affected by noise generated by hardware such as the camera or the sensor; sensor noise includes dark current noise, readout noise and gain noise, while camera noise includes thermal noise and fixed pattern noise. Second, beyond hardware factors, environmental factors during image acquisition, such as insufficient or uneven illumination, degrade image quality. Third, noise may be introduced during image transmission, for example by signal attenuation or electromagnetic interference. Fourth, the optical system may introduce nonlinear errors. Fifth, when calculating the phase of the fringe image, the limitation of the arctangent function blurs the periodic edges of the phase, so phase unwrapping errors readily occur in pixel regions adjacent to these boundaries. In addition to the above, imperfect algorithms and limited computing power also contribute. According to the error amplitude between adjacent values after phase unwrapping, errors with larger variation are called jump errors and errors with smaller variation are called random errors.
In practice, random errors arise because the optical imaging system causes the phase, which should increase steadily, to rise in a wavy manner, so adjacent phase changes are inconsistent; when the point cloud is computed from the phase, this phase fluctuation makes the point cloud jitter, increases the RMS error and degrades measurement accuracy. The other type, jump errors, are mainly periodic errors caused by the mismatch between the periods of the Gray code and the phase-shift code. Both kinds of error affect the final three-dimensional reconstruction result.
Disclosure of Invention
The invention aims to solve the problem of phase errors in existing optical three-dimensional measurement techniques, and provides a phase correction method and system for structured light stripe contour projection.
To achieve this purpose, the invention adopts the following technical solution:
A phase correction method for structured light stripe contour projection, comprising the following steps:
obtaining a gray-level image of the target object from the fringe images, and performing a first boundary identification on the gray-level image to obtain the first-extracted edge information;
calculating the phase by unwrapping the fringe images to obtain the second-extracted edge information;
fusing the first-extracted edge information and the second-extracted edge information to obtain a final boundary result, and dividing a stable region and an unstable region according to the final boundary result;
filtering the stable region to obtain the error distribution of the region's phase data, calculating the priority of pixels in the unstable region, and performing error compensation correction on the phase of the unstable region pixel by pixel, starting from the highest-priority pixel, according to the error distribution of the stable region's phase data.
The invention further improves that:
the obtaining of the first extracted edge information comprises the following steps:
performing the first boundary identification with a Sobel operator:
respectively calculating the average gradient of adjacent pixels and the average gradient of outer-layer pixels;
acquiring average gray scale estimation of a central pixel point based on the average gradient of adjacent pixels and the average gradient of outer-layer pixels, and sequentially calculating all pixel points to obtain gray scale values and edge directions of all pixel points of an image;
setting a central pixel point threshold value, dividing the image according to the set threshold value, gray values of all pixel points and edge directions, and determining preliminary division boundaries of a stable region and an unstable region.
The average gray-scale estimate of the center pixel point is calculated by equation (1):
g(x,y) = |g_x(x,y)|/σ + |g_y(x,y)|/σ    (1)
where g_x(x,y) and g_y(x,y) denote the gradient values in the X direction and the Y direction, respectively, and σ denotes an attenuation factor;
binarization is performed with an adaptively selected threshold:
a window size is set, the maximum and minimum values within the window are removed, and the average of the remaining values is taken as the final threshold of the center pixel point:
T(x,y) = ( Σ_(i,j)∈W g(i,j) − g_max − g_min ) / (N − 2)
where Σ_(i,j)∈W g(i,j) denotes the sum of the gray values of the pixel points within the window, g_max and g_min are the maximum and minimum gray values in the window, and N is the number of pixels in the window.
The obtaining of the second-extracted edge information comprises the following steps:
calculating the phase by unwrapping the fringe images to obtain the phase gradient and the correlation-coefficient convolution;
performing the secondary boundary identification based on the phase gradient.
The stable region and the unstable region are divided according to the final boundary result:
a phase mask is generated after fusing the first-extracted edge information and the second-extracted edge information, and serves as the basis for dividing the stable region and the unstable region.
The filtering of the stable region comprises the following steps:
mean filtering is performed along the row direction of the image:
phase(x,y) = (1/n) Σ_(p_i∈W) p_i
where W denotes a one-dimensional window of radius r centered on pixel point (x,y), the input data is the stable-region phase, n is the number of consecutive valid pixel points within the template window W, and p_i denotes the phase of a valid pixel point; phase(x,y) denotes the corrected phase of pixel point (x,y) obtained by averaging.
The error compensation correction of the phase of the unstable region comprises the following steps:
screening the pixel point with the highest priority in the unstable region, and searching for the finally matched pixel point in the horizontal direction of that pixel point;
performing error correction with reference to the phase error distribution of the stable region and the positional relationship between the finally matched pixel point and the point to be repaired;
updating the phase value of the pixel point in the unstable region and the structural mask after the error correction is completed;
and correcting pixels in the unstable region one by one until all phases in the unstable region are corrected.
The pixel point with the highest priority in the unstable region is determined as follows:
I = Ω + ψ    (6)
P(x,y) = C(x,y) · phase(x,y) / d(x,y)    (7)
where I denotes the image region, Ω the stable region and ψ the unstable region; P(x,y) denotes the priority of a pixel point; C(x,y) denotes the mask value of the pixel point; phase(x,y) denotes its phase value; and d(x,y) denotes the distance between the pixel point and the nearest stable-region pixel point in the horizontal direction;
moving along the horizontal direction near the pixel point with the highest priority, the window position where more than 80% of the pixels in the moving window belong to the stable region and the center pixel point belongs to the stable region is taken as the finally matched pixel point.
A phase correction system for structured light stripe contour projection comprises a primary edge identification module, a secondary edge identification module, a region division module and a correction module;
the primary edge recognition module is used for obtaining a gray image of the target object according to the stripe image, and carrying out primary boundary recognition on the obtained gray image to obtain edge information extracted for the first time;
the secondary edge recognition module is used for unwrapping the stripe image to calculate a phase to obtain edge information extracted for the second time;
the region dividing module is used for fusing the edge information extracted for the first time and the edge information extracted for the second time to obtain a final boundary result, and dividing a stable region and an unstable region according to the final boundary result;
the correction module is used for filtering the stable region to obtain the error distribution of the region's phase data, calculating the priority of pixels in the unstable region, and performing error compensation correction on the phase of the unstable region pixel by pixel, starting from the highest-priority pixel, according to the error distribution of the stable region's phase data.
A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of any of the above methods of the invention.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above methods of the invention.
Compared with the prior art, the invention has the following beneficial effects:
the invention discloses a phase correction method for structural light stripe contour projection, which comprises the steps of carrying out boundary extraction through a gray level diagram in primary boundary identification and carrying out boundary information extraction through phase calculation in secondary boundary identification, and carrying out boundary identification through two directions of the gray level diagram and the phase diagram, so that the accuracy of boundary identification is improved, the results of the two boundary identifications are further fused, the phase of a stable region is not interfered by jump pixels, the accuracy of boundary division is ensured, filtering treatment is carried out on the stable region based on the divided region, the loss of characteristic information is reduced, reference information is provided for correction of the unstable region, error transfer is reduced, and when the stable region is used as a reference pair for carrying out error compensation on the unstable region, the phase noise is reduced, and the accuracy is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a schematic diagram of a boundary recognition process according to the present invention;
FIG. 3 is a fringe pattern composite image of the invention;
FIG. 4 is a schematic diagram of a 5×5 Cartesian grid and city distances according to the present invention (where a denotes the neighborhood pixel positions of the target pixel point and b denotes the distance weighting values of the neighborhood pixels);
FIG. 5 is a schematic diagram of a filtered phase correlation coefficient template according to the present invention (wherein a represents 6 directions of a target pixel correlation scale template, b represents a horizontal direction scale template, c represents a 45 ° direction scale template, d represents a 135 ° direction scale template, e represents a vertical direction scale template, f represents a 225 ° direction scale template, g represents a 315 ° direction scale template, and h represents an integrated template);
FIG. 6 is a schematic diagram of an edge detection result obtained by integrating two boundary recognition results according to the present invention; (wherein a represents the result of performing only the boundary recognition of the gray-scale image, b represents the result of performing only the boundary recognition of the phase image, c represents the result of fusing the boundary recognition of the gray-scale image and the phase image);
FIG. 7 is a schematic diagram of a phase correction flow chart according to the present invention;
FIG. 8 is a schematic diagram of a priority calculation according to the present invention;
FIG. 9 shows the measurement results of the standard parts during testing in an embodiment of the invention (where a denotes a partial fringe pattern of the standard sphere, b the composite image of the standard sphere, c the reconstruction result of the standard sphere, d a partial fringe pattern of the standard plane, e the composite image of the standard plane, and f the reconstruction result of the standard plane).
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the embodiments of the present invention, it should be noted that, if the terms "upper," "lower," "horizontal," "inner," and the like indicate an azimuth or a positional relationship based on the azimuth or the positional relationship shown in the drawings, or the azimuth or the positional relationship in which the inventive product is conventionally put in use, it is merely for convenience of describing the present invention and simplifying the description, and does not indicate or imply that the apparatus or element to be referred to must have a specific azimuth, be configured and operated in a specific azimuth, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Furthermore, the term "horizontal", if present, does not mean that the component must be absolutely horizontal; it merely means that its direction is closer to horizontal than to vertical, and the structure may be slightly inclined.
In the description of the embodiments of the present invention, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "mounted," "connected," and "connected" should be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
The invention is described in further detail below with reference to the attached drawing figures:
referring to fig. 1, the embodiment of the invention discloses a phase correction method for structural light stripe contour projection, which specifically comprises the following steps:
step 1: boundary recognition, as shown in FIG. 2
Step 1.1: boundary primary identification, splicing fringe patterns to obtain a complete gray image, and adopting an improved Sobel operator to primarily identify the boundary
A complete image of the target object is obtained by stitching the fringe images, as shown in FIG. 3. Threshold filtering and morphological processing are applied to the gray-level image of the target object, and image recognition is performed with an improved Sobel operator to obtain the first-extracted edge information.
First, a complete gray-level image of the target object is obtained by stitching the fringe images. To obtain rich features, the image is enhanced by morphological processing, namely erosion and dilation.
Among edge detection methods, the Sobel operator suppresses Gaussian noise noticeably and handles images with pronounced gray-level gradients and larger noise well, so it is selected for edge detection on the enhanced image. Sobel detection essentially computes an average gradient estimate of the center pixel by summing the image's gradient vectors (the ratio of the forward difference of pixel gray values to the city distance) over 4 directions (vertical, horizontal, 45° diagonal, 135° diagonal) in a Cartesian grid; to enlarge the boundary range, the receptive-field scale of the convolution template is extended to 5×5.
Sobel uses the city distance as the pixel distance, so the distance value between diagonally adjacent pixels is 2. As shown in FIG. 4, Z_13 denotes the center pixel point; Z_7, Z_8, Z_9, Z_12, Z_14, Z_17, Z_18 and Z_19 denote the adjacent pixels; Z_1, Z_3, Z_5, Z_11, Z_15, Z_21, Z_23 and Z_25 denote the outer-layer pixels lying along the same directions as the adjacent pixels. The average gradient estimate of the center pixel point is computed mainly from the adjacent pixels and the outer-layer pixels; the specific calculation process is as follows.
The average gradient of the adjacent pixels is calculated first. The direction vectors are (Z_7, Z_19), (Z_8, Z_18), (Z_9, Z_17) and (Z_14, Z_12); the unit components of the corresponding difference directions are (-1,1), (0,1), (1,1) and (1,0) in sequence; the corresponding inverse-distance weights are 1/4, 1/2, 1/4 and 1/2 in sequence. The average gradient computed from the 4 directions is:
G_1 = (Z_7 - Z_19)/4 · [-1,1] + (Z_8 - Z_18)/2 · [0,1] + (Z_9 - Z_17)/4 · [1,1] + (Z_14 - Z_12)/2 · [1,0]    (8)
calculating the average gradient of the outer layer pixels:
the direction vector is (Z) 1 ,Z 25 )、(Z 3 ,Z 23 )、(Z 5 ,Z 21 )、(Z 15 ,Z 11 ) The unit components of the corresponding control difference directions are (-1, 1), (0, 1), (1, 0) in sequence, the corresponding inverse distance weights are 1/8, 1/4, 1/8, 1/4 in sequence, and the average gradient calculated from the 4 directions is:
G 2 =(Z 1 -Z 25 )/8*[-1,1]+(Z 3 -Z 23 )/4*[0,1]+(Z 5 -Z 21 )/8*[1,1]+(Z 15 -Z 11 )/4*[1,0] (9)
calculating an average gradient estimate for the center pixel point:
the components obtained in the two steps are summed to obtain the average gray level estimation of the central pixel, the average gray level estimation is denominator solution formula is removed, and the Sobel operator template with the scale of 5*5 can be obtained by decomposing the components in the X direction and the Y direction, wherein directionX represents a convolution template in the horizontal X direction, and directionY represents a convolution template in the vertical Y direction:
further, the gray level image and the operator templates in two directions obtained by the third step are subjected to convolution operation to obtain gradient values g in two directions x (x,y)、g y (x, y) and dividing the data by the attenuation factor sigma to obtain an average gray scale estimation of the central pixel point, wherein sigma is 10, and the direction is determined by the direction of the obtained central pixel point. And (3) calculating all the pixel points in the way, so that the gray values and the edge directions of all the pixel points of the image can be determined.
g(x,y)=|g x (x,y)|/σ+|g y (x,y)|/σ (1)
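A NumPy/SciPy sketch of this 5×5 gradient estimate follows. Since the printed template matrices are not reproduced above, the two kernels here are assembled from the position weights of equations (8) and (9); the kernel layout is therefore our reconstruction, not the patent's verbatim template:

```python
import numpy as np
from scipy.ndimage import correlate

# 5x5 templates reconstructed from the inverse-distance weights in
# equations (8) and (9); rows/columns are offsets around the center Z_13.
DIRECTION_X = np.array([
    [-1/8,  0,    0, 0,    1/8],
    [ 0,   -1/4,  0, 1/4,  0  ],
    [-1/4, -1/2,  0, 1/2,  1/4],
    [ 0,   -1/4,  0, 1/4,  0  ],
    [-1/8,  0,    0, 0,    1/8]])
DIRECTION_Y = np.array([
    [ 1/8,  0,    1/4,  0,    1/8],
    [ 0,    1/4,  1/2,  1/4,  0  ],
    [ 0,    0,    0,    0,    0  ],
    [ 0,   -1/4, -1/2, -1/4,  0  ],
    [-1/8,  0,   -1/4,  0,   -1/8]])

def average_gray_estimate(gray, sigma=10.0):
    """Equation (1): g(x,y) = |g_x|/sigma + |g_y|/sigma on a 5x5 receptive field."""
    gray = gray.astype(np.float64)
    gx = correlate(gray, DIRECTION_X, mode='nearest')
    gy = correlate(gray, DIRECTION_Y, mode='nearest')
    return np.abs(gx) / sigma + np.abs(gy) / sigma
```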
Further, an adaptive dynamic threshold selects the optimal threshold for binarization.
Within a 3×3 window, the average after removing the maximum and minimum values is taken as the final threshold of the center pixel point:
T(x,y) = ( Σ_(i,j)∈N(x,y) g(i,j) − g_max − g_min ) / 7
where Σ_(i,j)∈N(x,y) g(i,j) denotes the sum of the gray values of the pixel points in the 3×3 neighborhood of pixel point (x,y); subtracting the maximum and minimum gray values from this sum and averaging the remaining 7 values yields the threshold T, determined adaptively within the neighborhood.
Pixel points whose gray value exceeds the threshold T in each template are set to 255, i.e., marked as the edge (unstable) region; pixel points below the threshold are set to 0. According to the distribution of the phase data, pixel points below the threshold that have valid phases are assigned to the stable region, and traversing all pixel points of the image determines the global stable and unstable regions.
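A short sketch of this adaptive binarization, assuming the 3×3 neighborhood threshold above (the subsequent stable/unstable split by valid phase is omitted here):

```python
import numpy as np
from scipy.ndimage import generic_filter

def adaptive_edge_mask(g):
    """T(x,y) = mean of the 3x3 neighborhood of the gradient image g after
    dropping its maximum and minimum; pixels above T become 255 (edge /
    unstable region), the rest 0."""
    T = generic_filter(
        g.astype(np.float64),
        lambda w: (w.sum() - w.max() - w.min()) / (w.size - 2),
        size=3, mode='nearest')
    return np.where(g > T, 255, 0).astype(np.uint8)
```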
Step 1.2: boundary secondary identification
And then calculating phase data and phase gradient according to the fringe image, extracting edge information for the second time according to the distribution of the phase gradient, and obtaining a final image segmentation result after edge detection from two directions of the gray level image and the phase, and dividing the final image segmentation result into a stable region and an unstable region.
The method specifically comprises the following steps:
and unwrapping calculation is carried out according to the fringe pattern to obtain corresponding phase data, the phase can be understood as 2-dimensional data with depth information, the error of the adjacent phase of each pixel point is calculated, error analysis is carried out from 6 directions of the pixel point by 3*3 scale, and the value is used as a judging condition of whether the current pixel point belongs to an edge. The error weighting scale is accumulated to obtain the weighting matrix coefficient of the point.
As shown in FIG. 5, the five-pointed star region denotes the target pixel point. The arrows in FIG. 5(a) denote the 6 directions; FIG. 5(b) shows the horizontal-direction scale template, FIG. 5(c) the 45° direction scale template, FIG. 5(d) the 135° direction scale template, FIG. 5(e) the vertical-direction scale template, FIG. 5(f) the 225° direction scale template, and FIG. 5(g) the 315° direction scale template. Integrating the correlation coefficients of the pixel point gives a weighting coefficient at the 5×5 scale, shown in FIG. 5(h). The filtered phase gradient in the stable region, i.e., the absolute value of the adjacent-phase error, is multiplied by the correlation coefficient, and the resulting value serves as the error evaluation index of the point.
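A sketch of this phase-based error index under our reading of the scheme (the six direction offsets and the percentile cut-off are illustrative assumptions; the patent fixes neither the exact template weights nor a numeric decision threshold):

```python
import numpy as np

# Offsets of the 6 analysis directions at the 3x3 scale, an assumption
# consistent with FIG. 5(a)-(g).
DIRECTIONS = [(0, 1), (0, -1), (-1, 0), (-1, 1), (1, 1), (1, -1)]

def phase_edge_index(phase, weight=None):
    """Error evaluation index: |adjacent-phase error| accumulated over the
    6 directions, optionally scaled by the integrated 5x5 weighting
    coefficient. Border wrap-around from np.roll is ignored in this sketch."""
    phase = np.asarray(phase, dtype=np.float64)
    err = np.zeros_like(phase)
    for dr, dc in DIRECTIONS:
        shifted = np.roll(np.roll(phase, dr, axis=0), dc, axis=1)
        err += np.abs(phase - shifted)
    if weight is not None:
        err *= weight
    return err

def phase_edge_mask(phase, quantile=0.8):
    idx = phase_edge_index(phase)
    return idx > np.quantile(idx, quantile)  # True = edge candidate
```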
The two boundary identification results are integrated: the initial edge detection on the gray-level image and the secondary edge detection on the phase yield the edge detection result shown in FIG. 6. Some details are detected twice, so the resulting edges become thicker; this keeps edge information out of the stable-region data and, in the subsequent data processing, prevents abnormal neighborhood pixels from interfering with the phase correction of the stable region. A mask of the same size as the original phase is then generated, with boundary blocks assigned 1 and the stable region assigned 0, serving as the structural criterion for the subsequent phase correction.
The boundary is thus identified from two directions, the gray-level image and the phase map: for the gray-level image, a primary segmentation result is obtained with the improved Sobel operator and the adaptive threshold; for the phase map, a secondary segmentation result is obtained from the phase error distribution gradient and the weighting coefficients of adjacent pixels. The two segmentation results are fused to obtain the structural mask of the phase-stable region and the unstable region, the purpose being to prevent the phase of the stable region from being disturbed by jump pixels.
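A minimal sketch of the fusion step (the single dilation pass is an illustrative assumption; the patent only states that doubly detected edges become thicker):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def structural_mask(gray_edges, phase_edges):
    """Fuse the two boundary results into the structural mask:
    1 = boundary/unstable block, 0 = stable region. Dilation thickens the
    edges so stable-region data are not contaminated by jump pixels."""
    fused = (gray_edges > 0) | (phase_edges > 0)
    fused = binary_dilation(fused, iterations=1)
    return fused.astype(np.uint8)
```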
Step 2: phase correction, see fig. 7.
Firstly, stabilizing the phase of a region, screening and filtering the conforming data, then analyzing the distribution condition of phase residual errors in the region, and taking the obtained result as reference data for phase correction of an unstable region. The phase correction of the boundary unstable region mainly refers to the data and residual error distribution condition of the stable region, a proper reference point is found, and error compensation is carried out on a target point, wherein the specific process is as follows:
step 2.1 phase correction in the stability region
According to the segmentation result, adaptive window filtering is applied to the stable region, the error distribution of the region's phase data is calculated, and the processed data serve as reference points for subsequent calculation.
Ordinary filtering processes a target point with reference to its surrounding neighborhood pixels. So that the phase data keep a good linearly increasing trend and little of the original data is lost, mean filtering along the row direction of the image is selected; compared with other smoothing filters, mean filtering suppresses Gaussian noise well. The choice of filter data and window must refer to the texture information around the pixel; since the phase errors of adjacent pixels were already taken into account in the edge detection, the size of the filter window can be determined from the number of consecutive pixel points in the horizontal direction.
phase(x,y) = (1/n) Σ_(p_i∈W) p_i
where W denotes a one-dimensional window of radius r centered on pixel point (x,y), the input data is the stable-region phase, n is the number of consecutive valid pixel points within the template window W, and p_i denotes the phase of a valid pixel point; the corrected phase phase(x,y) of pixel point (x,y) is obtained by averaging. This procedure is applied only where the number of consecutive valid pixel points is at least 3; where there are fewer than 3 valid points, the pixel is treated as an isolated point, its mask value is updated to 1, and it is later corrected together with the unstable region. After filtering of the stable region along the horizontal line direction is completed, the phase gradient between adjacent pixel points of the filtered data is calculated and its distribution analyzed, and the gradient with the highest probability is taken as the amplitude reference for the subsequent boundary correction.
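A sketch of this run-based row filtering (the run detection is written out explicitly; names and the default radius are illustrative):

```python
import numpy as np

def filter_stable_rows(phase, mask, r=2):
    """Mean-filter the stable region (mask == 0) row by row.
    Runs shorter than 3 valid pixels are declared isolated and deferred
    to the unstable-region pass by setting their mask value to 1."""
    phase, mask = phase.copy(), mask.copy()
    for y in range(phase.shape[0]):
        valid = np.flatnonzero(mask[y] == 0)
        if valid.size == 0:
            continue
        # split the valid columns of this row into consecutive runs
        runs = np.split(valid, np.where(np.diff(valid) > 1)[0] + 1)
        for run in runs:
            if run.size < 3:
                mask[y, run] = 1  # isolated point(s)
                continue
            vals = phase[y, run]
            for k, x in enumerate(run):  # 1-D window of radius r inside the run
                lo, hi = max(0, k - r), min(run.size, k + r + 1)
                phase[y, x] = vals[lo:hi].mean()
    return phase, mask
```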
Adaptive window filtering correction of the stable-region phase reduces the loss of feature information, provides effective reference information for the subsequent correction, and reduces error propagation.
Step 2.2: unstable region phase correction
Third, a boundary phase correction strategy is proposed: combining the Manhattan distance formula with pixel priority, the pixel point closest to the unstable phase is selected in the stable region, the phase of the unstable region is fitted and corrected according to the error distribution calculated in step 2.1, the corrected phase data are finally obtained, and the corrected phase is converted according to the coordinate system into a three-dimensional point cloud.
The phase is easily disordered by phase ambiguity, hardware, algorithm and environmental interference, producing noise points or calculation errors in the computed point cloud. To reduce the influence of noise, after the phase filtering of the stable region is completed, error compensation is applied to the unstable phases, improving reconstruction accuracy while minimizing phase loss.
The phase correction of the unstable region mainly refers to the data and gradient distribution of the stable region, matching suitable pixel points to apply error compensation to the phase of the target point.
First, the pixel information of the unstable region is taken as input and the pixel point with the highest priority in the region is screened out; then a window of size 5×5 is determined in the stable region in the horizontal direction of that point, and the best-matched pixel point is searched for as the window moves; the phase of the point to be corrected is obtained from the phase of the best-matched pixel point with reference to the phase error distribution; finally the phase data and the structural mask are updated.
First, the priority of the phase to be corrected is calculated. As shown in FIG. 8, Ω denotes the stable region, ψ the unstable region, and ξ the pixel region within the unstable region closest to the stable region; pixel points in this region have higher priority and are repaired first during error correction. The priority P(x,y) of a pixel point is determined by the mask C(x,y) generated by boundary identification, the phase value phase(x,y), and the distance d between the point and the nearest stable-region pixel point in the horizontal direction.
I = Ω + ψ    (6)
P(x,y) = C(x,y) · phase(x,y) / d(x,y)    (7)
A 9×9 window is moved in the horizontal direction near the high-priority pixel point to find the best-matched stable-region pixel point; the matching criterion is that more than 80% of the pixels in the window belong to the stable region and the center pixel point belongs to the stable region. Error correction is then performed with reference to the phase error distribution and the positional relationship between the matched point and the point to be repaired. After the error correction is completed, the phase value of the pixel point in the ξ region is updated and its mask value is changed from 1 to 0, and so on until the phase of the unstable region is fully corrected.
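A sketch of the priority-driven repair loop under stated assumptions (equation (7) gives the priority with C = 1 on the mask; the compensation step below extrapolates the matched phase by the reference gradient, which is our reading of "error correction with reference to the phase error distribution"):

```python
import numpy as np

def _h_dist(mask, y, x):
    """Horizontal distance from (y, x) to the nearest stable pixel in its row."""
    stable = np.flatnonzero(mask[y] == 0)
    return float(np.abs(stable - x).min()) if stable.size else 1.0

def correct_unstable(phase, mask, ref_gradient, win=9):
    """Repair unstable pixels (mask == 1) one at a time, highest priority
    first; each repaired pixel flips to mask 0 and joins the reference set."""
    phase, mask, half = phase.copy(), mask.copy(), win // 2
    while mask.any():
        ys, xs = np.nonzero(mask)
        pri = np.abs(phase[ys, xs]) / np.array(
            [_h_dist(mask, y, x) for y, x in zip(ys, xs)])
        y, x = ys[np.argmax(pri)], xs[np.argmax(pri)]
        matched = False
        for dx in range(1, phase.shape[1]):      # slide the window horizontally
            for cx in (x - dx, x + dx):
                if not (0 <= cx < phase.shape[1]) or mask[y, cx]:
                    continue                     # center pixel must be stable
                w = mask[max(0, y - half):y + half + 1,
                         max(0, cx - half):cx + half + 1]
                if (w == 0).mean() > 0.8:        # >80% of the window is stable
                    # compensate: extrapolate along the row by the reference gradient
                    phase[y, x] = phase[y, cx] + ref_gradient * (x - cx)
                    mask[y, x] = 0
                    matched = True
                    break
            if matched:
                break
        if not matched:                          # no qualifying window found
            mask[y, x] = 0                       # leave the phase as-is
    return phase
```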
Error compensation is applied to the phase of the unstable region according to the phase result of the stable region, the final matching point is searched for in the horizontal direction, and the structural mask is updated until all phase corrections are completed. The purpose of the correction is to reduce phase noise on the basis of reliable data, thereby reducing point cloud noise and improving accuracy.
In this embodiment, the phase is corrected before the phase-height mapping is calculated, so part of the phase errors are avoided in advance, point cloud noise is reduced, and a more stable point cloud reconstruction result is obtained. The two boundary segmentations do not affect each other and have consistent dimensions, so they can be computed sequentially or in parallel, reducing time consumption. Compared with the traditional method, the phase correction method disclosed in this embodiment improves measurement accuracy. On this basis, the invention discloses a verification embodiment:
two standard workpieces (plain flat panels with flatness of 0.05mm or more, standard dumbbell spheres with diameters of 38.1.+ -. 0.01mm, and distances between the centers of the spheres of 201.09.+ -. 0.01 mm) were tested in the working ranges of 300 to 900mm, and the results are shown in tables 1 and 2. Compared with the traditional method, the fitting error after phase correction is obviously reduced. The measurement errors of the two methods at the far positions are obvious, the result is reasonable for a structured light three-dimensional measurement system, and the efficiency of the measurement result subjected to noise interference is higher when the positions of the camera and the projection module are unchanged and are far away from a target object.
See fig. 9 for measurement results of standard parts.
TABLE 1 three-dimensional measurements of standard dumbbell spheres at different distances
TABLE 2 three-dimensional measurement of standard planes at different distances
The embodiment of the invention also discloses a phase correction system for structured light stripe contour projection, comprising a primary edge identification module, a secondary edge identification module, a region division module and a correction module;
the primary edge recognition module is used for obtaining a gray image of the target object according to the stripe image, and carrying out primary boundary recognition on the obtained gray image to obtain edge information extracted for the first time;
the secondary edge recognition module is used for unwrapping the stripe image to calculate a phase to obtain edge information extracted for the second time;
the region dividing module is used for fusing the edge information extracted for the first time and the edge information extracted for the second time to obtain a final boundary result, and dividing the stable region and the unstable region according to the final boundary result.
The correction module is used for filtering the stable region to obtain the error distribution of the region's phase data, calculating the priority of pixels in the unstable region, and performing error compensation correction on the phase of the unstable region pixel by pixel, starting from the highest-priority pixel, according to the error distribution of the stable region's phase data.
The embodiment of the invention provides a schematic diagram of terminal equipment. The terminal device of this embodiment includes: a processor, a memory, and a computer program stored in the memory and executable on the processor. The steps of the various method embodiments described above are implemented when the processor executes the computer program. Alternatively, the processor may implement the functions of the modules/units in the above-described device embodiments when executing the computer program.
The computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to accomplish the present invention.
The terminal equipment can be computing equipment such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The terminal device may include, but is not limited to, a processor, a memory.
The processor may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
The memory may be used to store the computer program and/or module, and the processor may implement various functions of the terminal device by running or executing the computer program and/or module stored in the memory and invoking data stored in the memory.
The modules/units integrated in the terminal device may be stored in a computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing the related hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium can be appropriately adjusted according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A phase correction method for structured light stripe contour projection, comprising the following steps:
obtaining a gray-level image of the target object from the fringe images, and performing a first boundary identification on the gray-level image to obtain the first-extracted edge information;
calculating the phase by unwrapping the fringe images to obtain the second-extracted edge information;
fusing the first-extracted edge information and the second-extracted edge information to obtain a final boundary result, and dividing a stable region and an unstable region according to the final boundary result;
filtering the stable region to obtain the error distribution of the region's phase data, calculating the priority of pixels in the unstable region, and performing error compensation correction on the phase of the unstable region pixel by pixel, starting from the highest-priority pixel, according to the error distribution of the stable region's phase data.
2. The phase correction method for structured light stripe contour projection according to claim 1, wherein obtaining the first-extracted edge information comprises the following steps:
performing the first boundary identification with a Sobel operator:
respectively calculating the average gradient of adjacent pixels and the average gradient of outer-layer pixels;
acquiring average gray scale estimation of a central pixel point based on the average gradient of adjacent pixels and the average gradient of outer-layer pixels, and sequentially calculating all pixel points to obtain gray scale values and edge directions of all pixel points of an image;
setting a central pixel point threshold value, dividing the image according to the set threshold value, gray values of all pixel points and edge directions, and determining preliminary division boundaries of a stable region and an unstable region.
3. The phase correction method for structured light stripe contour projection according to claim 1, wherein the average gray-scale estimate of the center pixel point is calculated by equation (1):
g(x,y) = |g_x(x,y)|/σ + |g_y(x,y)|/σ    (1)
where g_x(x,y) and g_y(x,y) denote the gradient values in the X direction and the Y direction, respectively, and σ denotes an attenuation factor;
binarization is performed with an adaptively selected threshold:
a window size is set, the maximum and minimum values within the window are removed, and the average of the remaining values is taken as the final threshold of the center pixel point:
T(x,y) = ( Σ_(i,j)∈W g(i,j) − g_max − g_min ) / (N − 2)
where Σ_(i,j)∈W g(i,j) denotes the sum of the gray values of the pixel points within the window, g_max and g_min are the maximum and minimum gray values in the window, and N is the number of pixels in the window.
4. The phase correction method for structured light stripe contour projection according to claim 1, wherein obtaining the second-extracted edge information comprises the following steps:
calculating the phase by unwrapping the fringe images to obtain the phase gradient and the correlation-coefficient convolution;
performing the secondary boundary identification based on the phase gradient;
the stable region and the unstable region are divided according to the final boundary result:
a phase mask is generated after fusing the first-extracted edge information and the second-extracted edge information, and serves as the basis for dividing the stable region and the unstable region.
5. The phase correction method for structured light stripe contour projection according to claim 1, wherein the filtering of the stable region comprises the following steps:
mean filtering is performed along the row direction of the image:
phase(x,y) = (1/n) Σ_(p_i∈W) p_i
where W denotes a one-dimensional window of radius r centered on pixel point (x,y), the input data is the stable-region phase, n is the number of consecutive valid pixel points within the template window W, and p_i denotes the phase of a valid pixel point; phase(x,y) denotes the corrected phase of pixel point (x,y) obtained by averaging.
6. The phase correction method for structured light stripe contour projection according to claim 1, wherein the error compensation correction of the phase of the unstable region comprises the following steps:
screening the pixel point with the highest priority in the unstable region, and searching for the finally matched pixel point in the horizontal direction of that pixel point;
performing error correction with reference to the phase error distribution of the stable region and the positional relationship between the finally matched pixel point and the point to be repaired;
updating the phase value of the pixel point in the unstable region and the structural mask after the error correction is completed;
and correcting pixels in the unstable region one by one until all phases in the unstable region are corrected.
7. The phase correction method for structured light stripe contour projection according to claim 6, wherein determining the pixel point with the highest priority in the unstable region comprises the following steps:
I = Ω + ψ    (6)
P(x,y) = C(x,y) · phase(x,y) / d(x,y)    (7)
where I denotes the image region, Ω the stable region and ψ the unstable region; P(x,y) denotes the priority of a pixel point; C(x,y) denotes the mask value of the pixel point; phase(x,y) denotes its phase value; and d(x,y) denotes the distance between the pixel point and the nearest stable-region pixel point in the horizontal direction;
moving along the horizontal direction near the pixel point with the highest priority, the window position where more than 80% of the pixels in the moving window belong to the stable region and the center pixel point belongs to the stable region is taken as the finally matched pixel point.
8. A phase correction system for structured light stripe contour projection, comprising a primary edge identification module, a secondary edge identification module, a region division module and a correction module;
the primary edge recognition module is used for obtaining a gray image of the target object according to the stripe image, and carrying out primary boundary recognition on the obtained gray image to obtain edge information extracted for the first time;
the secondary edge recognition module is used for unwrapping the stripe image to calculate a phase to obtain edge information extracted for the second time;
the region dividing module is used for fusing the edge information extracted for the first time and the edge information extracted for the second time to obtain a final boundary result, and dividing a stable region and an unstable region according to the final boundary result;
the correction module is used for filtering the stable region to obtain the error distribution of the region's phase data, calculating the priority of pixels in the unstable region, and performing error compensation correction on the phase of the unstable region pixel by pixel, starting from the highest-priority pixel, according to the error distribution of the stable region's phase data.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-7.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1-7.
CN202311616673.5A 2023-11-29 2023-11-29 Phase correction method and system for structured light stripe contour projection Pending CN117589087A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311616673.5A CN117589087A (en) 2023-11-29 2023-11-29 Phase correction method and system for structural light stripe contour projection


Publications (1)

Publication Number Publication Date
CN117589087A 2024-02-23

Family

ID=89911322



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination