CN114549307B - High-precision point cloud color reconstruction method based on low-resolution image - Google Patents
High-precision point cloud color reconstruction method based on low-resolution image
- Publication number: CN114549307B (application CN202210106255.0A)
- Authority: CN (China)
- Prior art keywords: image, resolution, color, gray, low
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/4053; G06T3/4076 — scaling based on super-resolution (iterative correction of the high-resolution image from the low-resolution images)
- G06T3/4007 — scaling based on interpolation, e.g. bilinear interpolation
- G06T5/20; G06T5/50 — image enhancement or restoration (local operators; two or more images)
- G06T7/44 — analysis of texture using image operators
- G06T7/90 — determination of colour characteristics
- G06T2207/10024 — color image
- G06T2207/20028 — bilateral filtering
- G06T2207/20221 — image fusion; image merging
- Y02A90/10 — information and communication technologies supporting adaptation to climate change
Abstract
The invention discloses a high-precision point cloud color reconstruction method based on a low-resolution image. First, a high-precision achromatic point cloud P is reconstructed; a high-resolution gray-scale image and a low-resolution color image are then captured, feature points are detected, matching point pairs are determined, and a coordinate transformation matrix H is obtained from the optimized set of matching point pairs. The low-resolution color image I_color is then coordinate-transformed and registered to a high-resolution color image I_reg, whose fill-in region is completed by bilinear interpolation. Next, the high-resolution color image and the high-resolution gray-scale image are fused on the basis of bilateral filtering and weighted least-squares filtering to obtain a fused high-resolution color image. Finally, the RGB channels of the three-dimensional points in the high-precision achromatic point cloud P are assigned to obtain a true-color point cloud P*. The invention preserves the rich texture detail of high-precision key parts and improves both measurement accuracy and measurement speed.
Description
Technical Field
The invention belongs to the technical field of three-dimensional reconstruction, and particularly relates to a high-precision point cloud color reconstruction method based on a low-resolution image.
Background
In technical fields such as aviation, aerospace and deep-sea exploration, the structures of key parts are becoming increasingly precise and complex. Precise key parts require high-precision measurement during machining and manufacturing: the measuring instrument must reconstruct the surface texture of the object so that conformance to the design requirements can be verified.
The traditional binocular structured-light technique is relatively mature, achieves high-precision three-dimensional point cloud reconstruction, and is widely applied to the three-dimensional measurement of key parts of equipment such as military satellites and aerospace planes, with good results. In some application scenarios, however, it is necessary to recover surface material characteristics such as texture and color and to generate high-precision point cloud data with real colors. The conventional structured-light binocular reconstruction method can reconstruct only the geometry of the object surface, producing a three-dimensional point cloud without color information.
To achieve true-color three-dimensional reconstruction, current technical schemes mainly rely on color-coded fringe light and a binocular color camera. On the one hand, high-resolution color cameras are expensive to manufacture, so introducing color cameras and a color grating projector greatly increases the construction cost of the system. On the other hand, processing the multi-channel information of a color camera greatly increases the computational cost and reduces system efficiency. The most critical limitation is that, compared with a gray-scale camera, a color camera receives photons only in the red, green and blue bands, and the neighborhood-averaging operation used for demosaicing during imaging loses a great amount of image detail. The original color of the object surface can also interfere with the color-coded information of the color-coded structured light, which strongly limits measurement accuracy. In summary, existing true-color three-dimensional reconstruction systems have high detection cost and large computational load, and struggle to achieve high-precision true-color three-dimensional point cloud reconstruction.
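The detail loss caused by neighborhood-averaging demosaicing can be illustrated with a toy simulation (not part of the patent): one color channel of a fine checkerboard pattern is sampled at one pixel in four, Bayer-style, and the missing pixels are reconstructed by averaging the sampled neighbors; the fine pattern is flattened almost entirely.

```python
import numpy as np

# Toy illustration only: sample a 1-pixel checkerboard the way a Bayer
# mosaic samples a single color channel, then reconstruct the missing
# pixels by neighborhood averaging (the demosaicing step referred to above).
full = np.indices((8, 8)).sum(axis=0) % 2 * 255.0   # checkerboard, values 0/255

mask = np.zeros(full.shape, dtype=bool)
mask[::2, ::2] = True                               # keep 1 of every 4 pixels

recon = full.copy()
for y in range(8):
    for x in range(8):
        if not mask[y, x]:
            # Average the available (sampled) 4-neighbors.
            neigh = [full[ny, nx]
                     for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= ny < 8 and 0 <= nx < 8 and mask[ny, nx]]
            recon[y, x] = np.mean(neigh) if neigh else 0.0

contrast_full = full.std()
contrast_recon = recon.std()
print(contrast_full, contrast_recon)  # averaging flattens the fine pattern
```

The checkerboard's 1-pixel alternation lies above the sampling rate of the mosaic, so the averaged reconstruction loses it entirely; real demosaicing is more sophisticated, but the same band-limiting effect underlies the detail loss described above.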
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a high-precision point cloud color reconstruction method based on a low-resolution image, which achieves non-contact high-precision measurement of high-precision key parts, retains rich texture detail, and reduces detection cost and computational load.
In order to achieve the above object, the present invention provides a high-precision point cloud color reconstruction method based on a low-resolution image, comprising:
(1) For the high-precision key part, reconstructing a high-precision achromatic point cloud P based on a binocular structured-light system;
(2) For the high-precision key part, capturing with a left high-resolution gray-scale camera a high-resolution gray-scale image I_gray (resolution a×b) without coded light, and with a low-resolution color camera in the same pose a low-resolution color image I_color (resolution c×d) without coded light;
(3) Calculating the transformation matrix H from the low-resolution color image I_color to the high-resolution gray-scale image I_gray:
3.1) Extracting the feature points of the low-resolution color image I_color and of the high-resolution gray-scale image I_gray;
3.2) For each feature point of the low-resolution color image I_color, searching the high-resolution gray-scale image I_gray for its two nearest-neighbor feature points; if the ratio of the distance to the nearest neighbor over the distance to the second-nearest neighbor is smaller than a set threshold θ, the feature point of I_color and its nearest neighbor in I_gray form a matching point pair, giving the matching point pair set:
Ω_rough = {(p_color_1, p_gray_1), (p_color_2, p_gray_2), ...}
where p_color_1, p_color_2 are feature points of the low-resolution color image I_color and p_gray_1, p_gray_2 are the corresponding feature points of the high-resolution gray-scale image I_gray;
3.3) Setting the number of iterations K_ransac and an error threshold, and optimizing the matching point pair set Ω_rough with the random sample consensus (RANSAC) algorithm to obtain the optimized matching point pair set:
Ω_fine = {(p′_color_1, p′_gray_1), (p′_color_2, p′_gray_2), ...}
3.4) Obtaining the coordinate transformation matrix H from the optimized matching point pair set Ω_fine;
(4) Registering the low-resolution color image I_color to the high-resolution gray-scale image I_gray to obtain a high-resolution color image I_reg:
First, the pixels of the low-resolution color image I_color are coordinate-transformed with the matrix H to obtain the transformed low-resolution color image I′_color.
Then, a blank high-resolution color image I_reg with resolution a×b is created, and I′_color is overlaid onto I_reg according to the transformed coordinates. Within the rectangular region bounded by the four transformed corners of I′_color, I_reg is completed by bilinear interpolation: the i-th pixel to be interpolated, p_i(x_i, y_i), is computed from the pixel values f(p_i_1), f(p_i_2), f(p_i_3), f(p_i_4) of its four neighboring pixels p_i_1(x_i1, y_i1), p_i_2(x_i2, y_i2), p_i_3(x_i3, y_i3), p_i_4(x_i4, y_i4) in I′_color.
First, interpolation is performed pairwise in the x direction to obtain the pixel values of p_i_5(x_i, y_i5) and p_i_6(x_i, y_i6):
f(p_i_5) = ((x_i2 − x_i) f(p_i_1) + (x_i − x_i1) f(p_i_2)) / (x_i2 − x_i1)
f(p_i_6) = ((x_i4 − x_i) f(p_i_3) + (x_i − x_i3) f(p_i_4)) / (x_i4 − x_i3)
Then interpolation is performed in the y direction to obtain the pixel value of p_i(x_i, y_i):
f(p_i) = ((y_i6 − y_i) f(p_i_5) + (y_i − y_i5) f(p_i_6)) / (y_i6 − y_i5)
Finally, the pixel values of the remaining region of I_reg are completed with set RGB values to obtain the registered high-resolution color image I_reg;
(5) Fusing the registered high-resolution color image I_reg with the high-resolution gray-scale image I_gray to obtain a high-resolution fused image I_fused:
First, the registered high-resolution color image I_reg is converted from RGB space to YCbCr space to obtain the color image I_reg_ycbcr; its Y channel is extracted and denoted Y_color, and its CbCr channels are extracted and denoted CbCr_color; weighted least-squares filtering is applied to Y_color to obtain a base layer Y_base_color and a detail layer Y_detail_color.
Then, weighted least-squares filtering is applied to the high-resolution gray-scale image I_gray to obtain a base layer Y_base_wls and a detail layer Y_detail_wls, and bilateral filtering is applied to I_gray to obtain a base layer Y_base_bf and a detail layer Y_detail_bf; the two detail layers Y_detail_wls and Y_detail_bf are averaged to obtain the detail layer Y_detail, and the fused Y channel Y_fused is formed from the base layer Y_base_color and the detail layer Y_detail.
Finally, the Y channel Y_fused and the CbCr channels CbCr_color are combined and converted back to RGB space to obtain the high-resolution fused image I_fused;
(6) According to the imaging model of the binocular camera, the coordinates of the three-dimensional points of the high-precision achromatic point cloud P are mapped to pixel coordinates in the high-resolution gray-scale image I_gray and hence in the fused image I_fused; the RGB values of the corresponding points of I_fused are assigned to the three-dimensional points of the high-precision achromatic point cloud P to obtain the true-color point cloud P*.
The aim of the invention is achieved as follows:
For the measurement of high-precision key parts, the high-precision point cloud color reconstruction method based on a low-resolution image first reconstructs a high-precision achromatic point cloud P with a traditional binocular structured-light system, then captures a high-resolution gray-scale image I_gray and a low-resolution color image I_color, detects the feature points of the two images, determines matching point pairs by Euclidean distance, optimizes the matching point pair set with the RANSAC algorithm, and obtains the coordinate transformation matrix H from the optimized set Ω_fine. The low-resolution color image I_color is then coordinate-transformed into I′_color; bilinear interpolation fills the RGB values of the pixel coordinates of I_reg covered by I′_color, RGB values are artificially set in the remaining region to keep the data form uniform, and the registered high-resolution color image is obtained. Next, the high-resolution color image and the high-resolution gray-scale image are fused on the basis of bilateral filtering and weighted least-squares filtering to obtain a fused high-resolution color image. Finally, according to the relation between the left high-resolution gray-scale camera and the reconstructed point cloud in the binocular structured-light system, the RGB channels of the three-dimensional points of the high-precision achromatic point cloud P are assigned, completing the color fusion of the three-dimensional point cloud and yielding the true-color point cloud P*. By replacing the binocular color camera with a binocular high-precision gray-scale camera, the invention greatly widens the range of captured wavelengths, retains richer texture detail of the high-precision key parts, and improves measurement accuracy.
Using a gray-scale camera instead of a color camera to reconstruct the geometry also avoids, while guaranteeing high precision, the drawback of the large computational load incurred by operating on the three R, G, B channels in current color reconstruction systems. A gray-scale camera of the same resolution requires only one third of the computation of a color camera, which greatly improves the measurement speed of the system.
The advantages and innovations of the invention include:
(1) The invention combines a high-resolution gray-scale binocular camera with a color camera, realizing high-precision point cloud reconstruction while avoiding the interference of the color texture of the object surface with color fringe light;
(2) Color reconstruction is performed from the low-resolution color image, avoiding three-channel computation during high-precision reconstruction, reducing the computational load and improving system speed;
(3) Fusion of the color image and the gray-scale image is realized based on bilateral filtering and weighted least-squares filtering, retaining rich detail texture.
Drawings
FIG. 1 is a flow chart of one embodiment of a high-precision point cloud color reconstruction method based on a low-resolution image of the present invention;
FIG. 2 is a schematic illustration of the registration of a low resolution color image with a high resolution gray scale image;
FIG. 3 is a schematic diagram of bilinear interpolation;
FIG. 4 is a schematic illustration of image fusion;
FIG. 5 is a low resolution color image of a dumbbell standard in a specific example;
FIG. 6 is a high resolution gray scale image of a dumbbell standard in a specific example;
FIG. 7 is a fused image in an embodiment;
FIG. 8 is a reconstructed high-precision true color point cloud in an embodiment.
Detailed Description
The following description of embodiments of the invention, taken in conjunction with the accompanying drawings, is provided so that those skilled in the art can better understand the invention. It should be noted that, in the description below, detailed descriptions of known functions and designs are omitted where they might obscure the invention.
FIG. 1 is a flow chart of an embodiment of a high-precision point cloud color reconstruction method based on a low-resolution image according to the present invention.
In this embodiment, as shown in fig. 1, the high-precision true color three-dimensional reconstruction method of the present invention includes the following steps:
step S1: rebuilding high-precision achromatic point cloud P
For the high-precision key part, a high-precision achromatic point cloud P is reconstructed based on the binocular structured-light system: under coded light, a group of high-resolution gray-scale image pairs captured by the left and right high-resolution gray-scale cameras is used to generate a three-dimensional point cloud without color information by the structured-light binocular three-dimensional reconstruction method. This technique is prior art and is not described in detail here.
Step S2: capturing high resolution grayscale image I gray Low resolution color image I color
Aiming at high-precision key parts, a left high-resolution gray-scale camera is used for shooting a high-resolution gray-scale image I without coded light gray A×b is resolution, a low-resolution color image I without coded light is shot by a low-resolution color camera with the same pose color C×d is the resolution.
Step S3: calculating a transformation matrix H
The transformation matrix H from the low-resolution color image I_color to the high-resolution gray-scale image I_gray is calculated.
Step S3.1: extracting KAZE feature points
The feature points of the low-resolution color image I_color and of the high-resolution gray-scale image I_gray are extracted.
In this embodiment, the extracted feature points are KAZE feature points, specifically:
First, the image is smoothed with a Gaussian function and the parameter k is determined from the gradient histogram of the smoothed image. An image pyramid of O groups and S layers is generated, where o indexes the group, s the layer, and σ_0 is the initial scale; the scale parameter of each group-layer image is calculated as:
σ_i(o, s) = σ_0 · 2^(o + s/S), where o ∈ [0, 1, ..., O−1], s ∈ [0, 1, ..., S−1], i ∈ [0, 1, ..., O×S−1]
The scale parameters are then converted into the evolution time of each group-layer image:
t_i = σ_i² / 2
The nonlinear diffusion equation is solved with the AOS (additive operator splitting) scheme:
L_{i+1} = (I − (t_{i+1} − t_i) · Σ_{l=1}^{m} A_l(L_i))^{−1} L_i
where L_i is the luminance of each group-layer image, A_l(L_i) is the conduction matrix of the image L_i in the l-th dimension, and m is the number of dimensions of L_i.
Then, the Hessian response is calculated at each scale:
L_Hessian = σ_i² (L_xx L_yy − L_xy²)
where L_xx and L_yy are the second-order horizontal and vertical derivatives and L_xy is the second-order cross derivative. The Hessian value at each pixel is compared with its 26 adjacent points (in space and scale) to find the extreme points of L_i, after which the sub-pixel position of each extreme point is solved.
Finally, a circular area centered on the feature point with radius 6σ is determined; Gaussian weighting is applied to the first-order derivatives L_x and L_y of the points in the feature-point neighborhood, a 60° sector window is rotated within the neighborhood, the vectors inside it are summed, and the direction of the longest vector sum is taken as the main direction of the feature point. The description vector is then built with the M-SURF descriptor: for a feature point of scale σ_i, a 24σ_i × 24σ_i neighborhood is taken and divided into 4×4 subregions of size 9σ_i × 9σ_i with an overlap of 2σ_i; the first-order derivatives L_x and L_y are computed for all points, Gaussian weighting with σ_gi = 2.5σ_i is applied in each subregion, and the floating-point description vector of each subregion is obtained as the value of the feature point:
d_v = (ΣL_x, ΣL_y, Σ|L_x|, Σ|L_y|)
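As a minimal sketch of the nonlinear scale-space schedule above (the parameter values σ_0, O, S are illustrative defaults, not taken from the patent; the AOS solver and M-SURF descriptor are omitted):

```python
import numpy as np

# Sketch of the KAZE scale/time schedule of step S3.1: sigma_i(o, s) and
# its conversion to evolution time t_i. Names and defaults are illustrative.
def kaze_scale_schedule(sigma0=1.6, O=4, S=4):
    sigmas, times = [], []
    for o in range(O):
        for s in range(S):
            sigma_i = sigma0 * 2.0 ** (o + s / S)   # sigma_i(o, s) = sigma0 * 2^(o + s/S)
            sigmas.append(sigma_i)
            times.append(0.5 * sigma_i ** 2)        # t_i = sigma_i^2 / 2
    return np.array(sigmas), np.array(times)

sigmas, times = kaze_scale_schedule()
print(sigmas[:5])   # scales increase monotonically through the pyramid
```

Each (o, s) pair contributes one level, so the schedule yields O×S scale/time pairs, matching the index range i ∈ [0, O×S−1] in the text.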
Step S3.2: searching to obtain a matching point pair set
For each feature point of the low-resolution color image I_color, the high-resolution gray-scale image I_gray is searched for its two nearest-neighbor feature points; if the ratio of the distance to the nearest neighbor over the distance to the second-nearest neighbor is smaller than the set threshold θ, the feature point of I_color and its nearest neighbor in I_gray form a matching point pair, giving the matching point pair set:
Ω_rough = {(p_color_1, p_gray_1), (p_color_2, p_gray_2), ...}
where p_color_1, p_color_2 are feature points of the low-resolution color image I_color and p_gray_1, p_gray_2 are the corresponding feature points of the high-resolution gray-scale image I_gray.
Step S3.3: optimizing matching point pair sets
The number of iterations K_ransac and an error threshold are set, and the matching point pair set Ω_rough is optimized with the random sample consensus (RANSAC) algorithm to obtain the optimized matching point pair set:
Ω_fine = {(p′_color_1, p′_gray_1), (p′_color_2, p′_gray_2), ...}.
step S3.4: obtaining a coordinate transformation matrix H
The coordinate transformation matrix H is obtained from the optimized matching point pair set Ω_fine.
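Steps S3.3 and S3.4 can be sketched as a minimal RANSAC loop around a direct-linear-transform homography fit. Function names, the default iteration count and the error threshold are illustrative, not from the patent:

```python
import numpy as np

# Minimal sketch of RANSAC + homography estimation (steps S3.3-S3.4).
def dlt_homography(src, dst):
    """Direct linear transform from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)          # null vector of A, reshaped to 3x3
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply homography H to an (N, 2) array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def ransac_homography(src, dst, k_ransac=500, err_thresh=2.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(k_ransac):
        idx = rng.choice(len(src), 4, replace=False)   # minimal sample
        H = dlt_homography(src[idx], dst[idx])
        err = np.linalg.norm(apply_h(H, src) - dst, axis=1)
        inliers = err < err_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers: these play the role of the optimized set Omega_fine.
    return dlt_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

The final refit over the inlier set corresponds to computing H from the optimized matching point pair set; a production implementation would also normalize coordinates before the DLT for numerical conditioning.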
step S4: registering low resolution color image I color To high resolution gray scale image I gray Obtaining a high-resolution color image I reg
In the present embodiment, as shown in fig. 2, first, for a low resolution color image I color The pixel points of (2) are subjected to coordinate transformation by using a coordinate transformation matrix H to obtain a low-resolution color image
Then, a piece of gray-scale image I with resolution of a multiplied by b, namely with high resolution, is created gray Blank high resolution color image I of equal size reg To be a low resolution color imageAnd high resolution color image I reg Overlapping according to coordinates, and then adopting bilinear interpolation method to make low-resolution colour image +.>High-resolution color image I in rectangular region composed of four corner lines reg Interpolation is carried out on the pixel point p of the ith pixel point p to be interpolated i (x i ,y i ) Low resolution colour image according to its neighbors +.>Four pixel points p in (a) i_1 (x i1 ,y i1 )、p i_2 (x i2 ,y i2 )、p i_3 (x i3 ,y i3 )、p i_4 (x i4 ,y i4 ) Pixel value +.> Interpolation calculation is carried out:
in this embodiment, as shown in fig. 3, first, the pixel point p is obtained by linear interpolation in the x direction i_5 (x i ,y i5 )、p i_6 (x i ,y i6 ) Pixel values of (2):
then interpolation is carried out in the y direction to obtain a pixel point p i (x i ,y i ) Pixel values of (2):
finally, for high resolution color image I reg The rest area of the image is used for completing pixel values by set RGB values to obtain a registered high-resolution color image I reg ;
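The two-stage interpolation of step S4 can be sketched as follows for an axis-aligned grid cell (names are illustrative; in the method above the four neighbors come from the transformed image I′_color):

```python
import numpy as np

# Sketch of bilinear interpolation: interpolate the value at (x, y) from
# four known corners, first along x at the two rows, then along y.
def bilinear(x, y, x1, x2, y1, y2, f11, f21, f12, f22):
    # Interpolate along x at row y1 (corners f11, f21) and row y2 (f12, f22).
    f_a = ((x2 - x) * f11 + (x - x1) * f21) / (x2 - x1)
    f_b = ((x2 - x) * f12 + (x - x1) * f22) / (x2 - x1)
    # Then interpolate along y between the two intermediate values.
    return ((y2 - y) * f_a + (y - y1) * f_b) / (y2 - y1)

val = bilinear(0.5, 0.5, 0.0, 1.0, 0.0, 1.0, 0.0, 10.0, 10.0, 20.0)
print(val)  # 10.0: the average of the four corners at the cell center
```

For an RGB image the same computation is applied per channel; the order (x first, then y) matches the formulas above and yields the same result as interpolating y first.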
Step S5: fusing registered high resolution color images and high resolution gray scale images
The registered high-resolution color image I_reg and the high-resolution gray-scale image I_gray are fused to obtain the high-resolution fused image I_fused:
First, the registered high-resolution color image I_reg is converted from RGB space to YCbCr space to obtain the color image I_reg_ycbcr; its Y channel is extracted and denoted Y_color, and its CbCr channels are extracted and denoted CbCr_color. Weighted least-squares filtering is applied to Y_color to obtain a base layer Y_base_color and a detail layer Y_detail_color.
Then, weighted least-squares filtering is applied to the high-resolution gray-scale image I_gray to obtain a base layer Y_base_wls and a detail layer Y_detail_wls, and bilateral filtering is applied to I_gray to obtain a base layer Y_base_bf and a detail layer Y_detail_bf. The two detail layers Y_detail_wls and Y_detail_bf are averaged to obtain the detail layer Y_detail, and the fused Y channel Y_fused is formed from the base layer Y_base_color and the detail layer Y_detail.
Finally, the Y channel Y_fused and the CbCr channels CbCr_color are combined and converted back to RGB space to obtain the high-resolution fused image I_fused.
The detail layer Y_detail_color and the base layers Y_base_wls and Y_base_bf are also produced by the filtering, but the invention does not use these three layers.
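The base/detail decomposition of step S5 can be sketched as follows. As an assumption for illustration only, a simple box blur stands in for both the weighted-least-squares and the bilateral filter (the patent uses those edge-preserving filters); the decomposition detail = image − base is the same regardless of the smoother:

```python
import numpy as np

# Sketch of base/detail fusion (step S5). box_blur is a stand-in smoother;
# all names are illustrative.
def box_blur(img, r=2):
    """(2r+1) x (2r+1) mean filter with edge padding."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_y(y_color, i_gray):
    base_color = box_blur(y_color)             # stand-in for WLS base of Y_color
    detail_wls = i_gray - box_blur(i_gray)     # stand-in for WLS detail of I_gray
    detail_bf = i_gray - box_blur(i_gray, 3)   # stand-in for bilateral detail
    y_detail = 0.5 * (detail_wls + detail_bf)  # average the two detail layers
    return base_color + y_detail               # Y_fused = base + fused detail

y_color = np.full((16, 16), 100.0)             # flat registered color luminance
i_gray = np.full((16, 16), 80.0)
i_gray[8:, :] = 120.0                          # an edge only the gray image sees
y_fused = fuse_y(y_color, i_gray)
```

Away from the edge the fused luminance equals the color base layer, while near the edge the gray image's detail is transferred in, which is the intent of the fusion: color from I_reg, fine texture from I_gray.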
Step S6: assigning values to the three-dimensional points of the high-precision achromatic point cloud P to obtain the true-color point cloud
According to the imaging model of the binocular camera, the coordinates of the three-dimensional points of the high-precision achromatic point cloud P are mapped to pixel coordinates in the high-resolution gray-scale image I_gray and hence in the fused image I_fused; the RGB values of the corresponding points of I_fused are assigned to the three-dimensional points of the high-precision achromatic point cloud P to obtain the true-color point cloud P*.
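Step S6 can be sketched as a pinhole projection followed by a color lookup. The intrinsic matrix K and all names are illustrative assumptions; lens distortion and the full binocular model are omitted:

```python
import numpy as np

# Sketch of point cloud colorization (step S6): project each 3-D point of
# the achromatic cloud through a pinhole model of the left gray camera and
# sample RGB from the fused image at the projected pixel.
def colorize_cloud(points, K, fused_rgb):
    """points: (N, 3) in the camera frame; fused_rgb: (H, W, 3) image."""
    h, w, _ = fused_rgb.shape
    uvw = points @ K.T                          # homogeneous projection K @ X
    uv = np.rint(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    colored = np.zeros((len(points), 6))        # columns: x, y, z, R, G, B
    colored[:, :3] = points
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colored[valid, 3:] = fused_rgb[uv[valid, 1], uv[valid, 0]]
    return colored, valid

K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0, 0.0, 1.0]])                 # illustrative intrinsics
img = np.zeros((64, 64, 3))
img[:, :, 0] = 255.0                            # a uniformly red fused image
pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [5.0, 5.0, 1.0]])
cloud, valid = colorize_cloud(pts, K, img)
```

Points projecting outside the image keep a default color; a fuller implementation would transform points from the world frame with the extrinsics of the left camera before applying K.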
Examples
In this example, high-precision true-color three-dimensional point cloud reconstruction is performed on a dumbbell-shaped standard piece using the method of fig. 1, based on a high-precision gray-scale binocular system and a low-resolution color camera. The dumbbell standard is fixed in the common field of view of the binocular camera and the low-resolution camera. Without coded fringe light, one image is captured by the low-resolution color camera and one by the high-resolution gray-scale camera and transmitted to a computer, as shown in fig. 5 and fig. 6. The binocular structured-light system then projects a series of coded light patterns, the corresponding coded-light image pairs are captured and transmitted to the computer, and the computer reconstructs the high-precision achromatic three-dimensional point cloud according to the principle of the binocular structured-light system. Corner points are extracted and feature values are obtained for the uncoded high-resolution gray-scale image and low-resolution color image with the KAZE algorithm; matching points are obtained on the principle of minimal Euclidean distance between the feature vectors of the two images, and the matching point pair set is optimized with the RANSAC algorithm to obtain the coordinate transformation matrix H. The transformation H is applied to the low-resolution color image to obtain the transformed image I′_color; a blank high-resolution color image I_reg is created at the size of the high-resolution gray-scale image, and the valid pixel values of I′_color falling within the coordinate range of I_reg are filled into the blank image.
The quadrilateral region bounded by the four transformed corners of I′_color is then determined; the blank pixels of I_reg inside this region are the pixels to be interpolated, the four nearest known pixels are found for each, and the pixel values are computed with the bilinear interpolation algorithm. The RGB of the remaining blank pixels of I_reg is assigned the value (0, 0, 0). The registered high-resolution color image I_reg and the high-resolution gray-scale image I_gray are fused to obtain the image of fig. 7. Finally, according to the reconstruction rule of the binocular structured-light system, the RGB of each three-dimensional point is assigned the RGB value at the corresponding pixel of the fused image, yielding the reconstructed high-precision true-color three-dimensional point cloud shown in fig. 8.
While the foregoing describes illustrative embodiments of the present invention to facilitate understanding by those skilled in the art, it should be understood that the invention is not limited in scope to these embodiments; various changes that remain within the spirit and scope of the invention as defined by the appended claims are to be regarded as protected.
Claims (1)
1. A high-precision point cloud color reconstruction method based on a low-resolution image, characterized by comprising the following steps:
(1) For a high-precision key part, reconstructing a high-precision achromatic point cloud P based on a binocular structured-light system;
(2) For the high-precision key part, capturing a high-resolution gray-scale image I_gray without coded light, of resolution a×b, with the left high-resolution gray-scale camera, and a low-resolution color image I_color without coded light, of resolution c×d, with a low-resolution color camera in the same pose;
(3) Calculating the transformation matrix H from the low-resolution color image I_color to the high-resolution gray-scale image I_gray:
3.1) Extracting the feature points of the low-resolution color image I_color and the high-resolution gray-scale image I_gray.

The extracted feature points are KAZE feature points, obtained as follows:
First, the image is smoothed with a Gaussian function and the contrast parameter k is determined from the gradient histogram of the smoothed image. An image pyramid of O octaves and S sublevels is generated, where o indexes the octave, s the sublevel, and σ_0 is the initial scale; the scale parameter of each layer image is

σ_i(o, s) = σ_0 · 2^(o + s/S), where o ∈ [0, 1, ..., O−1], s ∈ [0, 1, ..., S−1], i ∈ [0, 1, ..., O×S−1]
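As a quick illustration, the scale schedule above (together with the evolution times t_i = σ_i²/2 that KAZE derives from it) can be tabulated in a few lines. The function name and the example parameter values are illustrative, not from the patent:

```python
def kaze_scales(sigma0: float, num_octaves: int, num_sublevels: int):
    """Scale sigma_i(o, s) = sigma0 * 2**(o + s/S) for each pyramid image,
    paired with the corresponding evolution time t_i = sigma_i**2 / 2."""
    scales = []
    for o in range(num_octaves):        # octave index o in [0, O-1]
        for s in range(num_sublevels):  # sublevel index s in [0, S-1]
            sigma = sigma0 * 2 ** (o + s / num_sublevels)
            scales.append((sigma, 0.5 * sigma * sigma))
    return scales
```

For example, with σ_0 = 1.6, O = 2 and S = 3 the first layer has σ = 1.6 (t = 1.28) and the first layer of the second octave has σ = 3.2.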
The scale parameters are then converted to the evolution-time parameters of each layer, t_i = σ_i²/2, and the nonlinear diffusion equation is solved with the AOS (additive operator splitting) scheme:

L^(i+1) = (I − (t_{i+1} − t_i) · Σ_{l=1}^{m} A_l(L^i))^(−1) · L^i

where L^i is the luminance of the i-th layer image, A_l(L^i) is the conduction matrix of image L^i, and the matrix dimension equals the number of pixels of L^i;
Then the Hessian response is computed:

L_Hessian = σ_i² (L_xx · L_yy − L_xy²)

where L_xx and L_yy are the second-order horizontal and vertical derivatives and L_xy is the second-order cross derivative. The Hessian value at each pixel is compared with those of its 26 neighbours in scale space to find the extreme points of L^i, whose sub-pixel-accurate positions are then solved.
Finally, a circular region of radius 6σ_i centred on the feature point is determined, and the first-order derivatives L_x and L_y of the points in the neighbourhood are Gaussian-weighted. A 60° sector window is rotated around the neighbourhood, the vectors within it are summed, and the direction of the longest vector sum is taken as the main orientation of the feature point. A description vector is then built with the M-SURF descriptor: for a feature point of scale σ_i, a 24σ_i × 24σ_i neighbourhood is taken and divided into 4×4 subregions of 9σ_i × 9σ_i with an overlap of 2σ_i; for all points of each subregion, the first-order derivatives L_x and L_y are computed and Gaussian-weighted with σ_g = 2.5σ_i, giving the floating-point description vector of each subregion as the feature-point values:

d_v = (∑L_x, ∑L_y, ∑|L_x|, ∑|L_y|);
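The per-subregion descriptor value d_v reduces to four sums over the gradient samples of that subregion. A minimal sketch (Gaussian weighting omitted for brevity; the function name is illustrative):

```python
def subregion_descriptor(grads):
    """grads: list of (Lx, Ly) first-derivative samples within one subregion.
    Returns d_v = (sum Lx, sum Ly, sum |Lx|, sum |Ly|)."""
    sx = sum(lx for lx, _ in grads)
    sy = sum(ly for _, ly in grads)
    ax = sum(abs(lx) for lx, _ in grads)
    ay = sum(abs(ly) for _, ly in grads)
    return (sx, sy, ax, ay)
```

Concatenating the 16 subregion vectors yields the 64-dimensional M-SURF descriptor of the feature point.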
3.2) For each feature point of the low-resolution color image I_color, the two nearest-neighbour feature points in the high-resolution gray-scale image I_gray are searched for; if the ratio of the nearest-neighbour distance to the second-nearest-neighbour distance is smaller than a set threshold θ, the feature point of I_color and its nearest neighbour in I_gray form a matching point pair, giving the matching point pair set

Ω_rough = {(p_color_1, p_gray_1), (p_color_2, p_gray_2), ...}

where p_color_1, p_color_2, ... are feature points of the low-resolution color image I_color and p_gray_1, p_gray_2, ... are the correspondingly matched feature points of the high-resolution gray-scale image I_gray;
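Step 3.2) is a nearest/second-nearest ratio test over descriptor distances. A minimal sketch assuming Euclidean descriptor distance and at least two candidate feature points in I_gray (function name and the default θ are illustrative):

```python
import math

def ratio_test_matches(desc_color, desc_gray, theta=0.7):
    """Match each low-res color descriptor to the high-res gray descriptors:
    accept the nearest neighbour only if d1/d2 < theta."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    matches = []
    for i, dc in enumerate(desc_color):
        ranked = sorted(range(len(desc_gray)), key=lambda j: dist(dc, desc_gray[j]))
        d1 = dist(dc, desc_gray[ranked[0]])   # nearest-neighbour distance
        d2 = dist(dc, desc_gray[ranked[1]])   # second-nearest distance
        if d2 > 0 and d1 / d2 < theta:
            matches.append((i, ranked[0]))    # index pair into the two sets
    return matches
```

An ambiguous match (two candidates at nearly equal distance) is rejected, which is the point of the ratio test.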
3.3) An iteration count K_ransac and an error threshold are set, and the matching point pair set Ω_rough is optimized with the random sample consensus (RANSAC) algorithm, giving the optimized matching point pair set

Ω_fine = {(p′_color_1, p′_gray_1), (p′_color_2, p′_gray_2), ...};
3.4) The coordinate transformation matrix H is obtained from the optimized matching point pair set Ω_fine;
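A homography such as H is determined by four point correspondences. The sketch below solves the minimal 4-point case by direct linear transform with h33 fixed to 1; a RANSAC loop as in step 3.3) would call such a minimal solver repeatedly. Names are illustrative and this is not the patent's implementation:

```python
def homography_from_4pts(src, dst):
    """Solve for the 3x3 H (h33 = 1) mapping 4 src points to 4 dst points
    exactly, via an 8x8 linear system and Gaussian elimination."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    n = 8
    M = [row + [bi] for row, bi in zip(A, b)]       # augmented matrix
    for col in range(n):                            # forward elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]             # partial pivoting
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):                  # back substitution
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    h.append(1.0)                                   # h33 = 1
    return [h[0:3], h[3:6], h[6:9]]
```

In practice one would use a library routine such as OpenCV's `findHomography` with its RANSAC flag rather than hand-rolled elimination.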
(4) Registering the low-resolution color image I_color to the high-resolution gray-scale image I_gray to obtain a high-resolution color image I_reg:

First, the coordinates of the pixels of the low-resolution color image I_color are transformed with the coordinate transformation matrix H, giving the transformed low-resolution color image Ĩ_color.
Then a blank high-resolution color image I_reg of resolution a×b is created and overlaid on Ĩ_color by coordinates. Bilinear interpolation is applied inside the region bounded by the lines connecting the four corner points of Ĩ_color: the i-th pixel to be interpolated, p_i(x_i, y_i) of I_reg, is computed from the pixel values of its four neighbouring pixels p_i1(x_i1, y_i1), p_i2(x_i2, y_i2), p_i3(x_i3, y_i3), p_i4(x_i4, y_i4) in Ĩ_color.

First, linear interpolation in the x direction gives the intermediate points p_i5(x_i, y_i5) and p_i6(x_i, y_i6):

f(p_i5) = ((x_i2 − x_i)/(x_i2 − x_i1)) · f(p_i1) + ((x_i − x_i1)/(x_i2 − x_i1)) · f(p_i2)
f(p_i6) = ((x_i4 − x_i)/(x_i4 − x_i3)) · f(p_i3) + ((x_i − x_i3)/(x_i4 − x_i3)) · f(p_i4)

Then interpolation in the y direction gives the pixel value of p_i(x_i, y_i):

f(p_i) = ((y_i6 − y_i)/(y_i6 − y_i5)) · f(p_i5) + ((y_i − y_i5)/(y_i6 − y_i5)) · f(p_i6)
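The two-pass bilinear interpolation of step (4) can be written directly from the formulas. A minimal sketch assuming the four known pixels form an axis-aligned rectangle (names illustrative):

```python
def bilinear(p, q11, q21, q12, q22):
    """q11=(x1, y1, val), q21=(x2, y1, val), q12=(x1, y2, val), q22=(x2, y2, val):
    interpolate the value at p=(x, y) inside the rectangle, first along x
    (the intermediate points of the claim), then along y."""
    x, y = p
    x1, y1, v11 = q11
    x2, _,  v21 = q21
    _,  y2, v12 = q12
    _,  _,  v22 = q22
    # linear interpolation in x at the two known rows y1 and y2
    f_y1 = v11 + (v21 - v11) * (x - x1) / (x2 - x1)
    f_y2 = v12 + (v22 - v12) * (x - x1) / (x2 - x1)
    # linear interpolation in y between the two intermediate values
    return f_y1 + (f_y2 - f_y1) * (y - y1) / (y2 - y1)
```

At the rectangle centre the result is simply the average of the four corner values.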
Finally, the remaining blank area of the high-resolution color image I_reg is filled with a set RGB value, giving the registered high-resolution color image I_reg;
(5) Fusing the registered high-resolution color image I_reg with the high-resolution gray-scale image I_gray to obtain a high-resolution fused image I_fused:
First, the registered high-resolution color image I_reg is converted from RGB space to YCbCr space, giving the color image I_reg_ycbcr. Its Y channel is extracted and denoted Y_color, and its CbCr channels are extracted and denoted CbCr_color. Weighted least squares (WLS) filtering is applied to the Y channel Y_color, giving a base layer and a detail layer.
Then weighted least squares filtering is applied to the high-resolution gray-scale image I_gray, giving a base layer and a detail layer, and bilateral filtering is applied to I_gray, giving a further base layer and detail layer; the two detail layers of I_gray are averaged to obtain the detail layer Y_detail;
Finally, the fused Y channel Y_fused, obtained by combining the base layer of Y_color with the detail layer Y_detail, and the CbCr channels CbCr_color are merged and converted back to RGB space, giving the high-resolution fused image I_fused;
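The base/detail fusion of step (5) can be sketched on 1-D luminance signals. Here a simple moving average stands in for the WLS and bilateral filters, and the combination rule (color base layer plus averaged gray detail) is an illustrative reading of the claim, not the patent's exact filters:

```python
def box_blur(signal, radius=1):
    """Moving average as a stand-in base-layer extractor (the patent uses
    weighted-least-squares and bilateral filtering)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def fuse_luminance(y_color, y_gray):
    base_c = box_blur(y_color)                                      # base of Y_color
    det_g1 = [v - b for v, b in zip(y_gray, box_blur(y_gray))]      # "WLS" detail
    det_g2 = [v - b for v, b in zip(y_gray, box_blur(y_gray, 2))]   # "bilateral" detail
    det = [(a + b) / 2 for a, b in zip(det_g1, det_g2)]             # averaged Y_detail
    return [b + d for b, d in zip(base_c, det)]                     # fused Y channel
```

With a featureless gray image the detail layers vanish and the fused luminance equals the smoothed color luminance.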
(6) According to the imaging model of the binocular camera, the coordinates of the three-dimensional points of the high-precision achromatic point cloud P are mapped to pixel coordinates in the high-resolution gray-scale image I_gray, and hence in the fused image I_fused; the RGB values of the corresponding points of I_fused are transferred to the three-dimensional points of P, giving the true-color point cloud P*.
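Step (6) amounts to projecting each 3-D point through the camera's pinhole model and sampling the fused image. A minimal sketch; the intrinsics fx, fy, cx, cy and the fallback color are illustrative values, not from the patent:

```python
def colorize_points(points, image, fx, fy, cx, cy):
    """Pinhole projection of camera-frame 3-D points (X, Y, Z) into the fused
    image; each point inherits the RGB at its rounded pixel, or (0, 0, 0)
    when it projects outside the image."""
    h, w = len(image), len(image[0])
    colored = []
    for X, Y, Z in points:
        u = int(round(fx * X / Z + cx))          # column (x) pixel coordinate
        v = int(round(fy * Y / Z + cy))          # row (y) pixel coordinate
        rgb = image[v][u] if 0 <= u < w and 0 <= v < h else (0, 0, 0)
        colored.append((X, Y, Z, rgb))
    return colored
```

A real pipeline would use the calibrated intrinsics of the gray-scale camera from the binocular calibration.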
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210106255.0A CN114549307B (en) | 2022-01-28 | 2022-01-28 | High-precision point cloud color reconstruction method based on low-resolution image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114549307A CN114549307A (en) | 2022-05-27 |
CN114549307B true CN114549307B (en) | 2023-05-30 |
Family
ID=81672654
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115063436B (en) * | 2022-06-01 | 2024-05-10 | 电子科技大学 | Large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection |
CN117557733B (en) * | 2024-01-11 | 2024-05-24 | 江西啄木蜂科技有限公司 | Natural protection area three-dimensional reconstruction method based on super resolution |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103034982A (en) * | 2012-12-19 | 2013-04-10 | 南京大学 | Image super-resolution rebuilding method based on variable focal length video sequence |
CN110827200A (en) * | 2019-11-04 | 2020-02-21 | Oppo广东移动通信有限公司 | Image super-resolution reconstruction method, image super-resolution reconstruction device and mobile terminal |
CN111784578A (en) * | 2020-06-28 | 2020-10-16 | Oppo广东移动通信有限公司 | Image processing method, image processing device, model training method, model training device, image processing equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101614337B1 (en) * | 2008-12-19 | 2016-04-21 | 가부시키가이샤 한도오따이 에네루기 켄큐쇼 | Method for driving electronic device |
CN102354397B (en) * | 2011-09-19 | 2013-05-15 | 大连理工大学 | Method for reconstructing human facial image super-resolution based on similarity of facial characteristic organs |
CN106651938B (en) * | 2017-01-17 | 2019-09-17 | 湖南优象科技有限公司 | A kind of depth map Enhancement Method merging high-resolution colour picture |
CN107065159B (en) * | 2017-03-24 | 2019-10-18 | 南京理工大学 | A kind of large visual field high resolution microscopic imaging device and iterative reconstruction method based on big illumination numerical aperture |
CN112037129B (en) * | 2020-08-26 | 2024-04-19 | 广州视源电子科技股份有限公司 | Image super-resolution reconstruction method, device, equipment and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||