CN113096188A - Visual odometer pose optimization method based on highlight pixel detection - Google Patents
- Publication number
- CN113096188A CN113096188A CN202110642779.7A CN202110642779A CN113096188A CN 113096188 A CN113096188 A CN 113096188A CN 202110642779 A CN202110642779 A CN 202110642779A CN 113096188 A CN113096188 A CN 113096188A
- Authority
- CN
- China
- Prior art keywords
- highlight
- pixel
- calculating
- psf
- image
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Abstract
The invention discloses a visual odometer pose optimization method based on highlight pixel detection, which mainly comprises the following steps: highlight pixel detection, weight matrix calculation, feature descriptor calculation and least squares optimization. Specular reflection often occurs on the surfaces of metallic and partially smooth objects, producing highlight pixels in the image captured by the camera. Because the position of a highlight pixel changes as the camera viewpoint moves, the matching of adjacent images in visual positioning goes wrong and the positioning accuracy is reduced. The core of the method is to introduce highlight pixel detection, convert the detection result into a weight matrix added to the pose optimization process, and thereby eliminate the mismatches caused by highlight pixels, effectively improving the accuracy of visual positioning.
Description
Technical Field
The invention relates to the field of robot visual positioning, and in particular to a visual odometer pose optimization method based on highlight pixel detection.
Background
Specular reflection often occurs on the surfaces of smooth objects such as metal, producing highlight pixels in the image captured by the camera. Because the position of a highlight pixel changes as the camera viewpoint and the light source position move, the matching of adjacent images in visual positioning goes wrong and the positioning accuracy is reduced.
Disclosure of Invention
In order to remedy the defects of the prior art, the invention introduces a highlight weight matrix into the pose solution of the traditional visual odometer so as to improve the positioning accuracy of the visual odometer, and adopts the following technical scheme:
a visual odometer pose optimization method based on highlight pixel detection comprises the following steps:
S1, acquiring environment color image information in real time at a fixed frame rate;
S2, calculating the highlight pixel picture in the color image;
S3, calculating the highlight weight matrix W_highlight of each pixel from the highlight pixel picture;
S4, calculating the gradient weight matrix W_grad of each pixel;
S5, calculating the final weight matrix W = W_highlight · W_grad;
S6, calculating the descriptors of all pixel points;
S7, establishing a least squares optimization equation, substituting the weight matrix and the pixel point descriptors into it, and performing nonlinear optimization to solve the pose information.
Further, in S2, the chroma values corresponding to all color image pixels are projected into the minimum chroma-maximum chroma two-dimensional space for clustering, and the I_ratio(i) obtained by clustering is used to compute the highlight pixel component I_highlight.
Further, I_ratio(i) is obtained by computing, from the minimum intensity value I_min and the maximum intensity value I_max of the color image, the chroma values Λ^psf(i), and projecting them into the minimum chroma-maximum chroma two-dimensional space for clustering; the expression is as follows:

Λ_c^psf(i) = I_c^psf(i) / (I_r^psf(i) + I_g^psf(i) + I_b^psf(i)), c ∈ {r, g, b}

wherein I_r(i), I_g(i), I_b(i) respectively denote the red, green and blue intensity values of the i-th pixel of the color image; I^psf(i) = I(i) − I_min(i) defines the pseudo specular-free image, whose channels are I_r^psf(i), I_g^psf(i), I_b^psf(i); from Λ_r^psf(i), Λ_g^psf(i), Λ_b^psf(i) the minimum chroma value λ_min(i) and the maximum chroma value λ_max(i) are obtained. The pair (λ_min(i), λ_max(i)) is projected to the coordinate point (x', y') of the minimum chroma-maximum chroma two-dimensional space; the resulting image coordinate points are clustered with the k-means algorithm into three classes (red, green and blue), and within each class the median of I_max(i)/I_range(i), taken with med(), is used as I_ratio(i).
Further, the highlight pixel component I_highlight is obtained from I_ratio(i) by first computing the diffuse reflection component I_diffuse of the image; the expressions are as follows:

I_range(i) = I_max(i) − I_min(i)

I_diffuse(i) = I_ratio(i) · I_range(i)

I_highlight(i) = I_max(i) − I_diffuse(i)
Further, in S3, a highlight threshold I_th1 is set and the highlight weight matrix of each pixel point i is calculated:
Further, S4 comprises the following steps:

S41, graying the color image, applying Gaussian blur to it, and calculating the gradient amplitudes of each pixel point in the x and y directions:

G_x(i) = I_{x+1,y} + I_{x−1,y} − 2·I_{x,y}

G_y(i) = I_{x,y+1} + I_{x,y−1} − 2·I_{x,y}

wherein G_x(i) and G_y(i) are the gradient amplitudes of pixel point i in the image x and y directions respectively; I_{x,y} is the intensity of pixel point i; I_{x−1,y} and I_{x+1,y} are the intensities of its left and right neighboring pixel points; I_{x,y−1} and I_{x,y+1} are the intensities of its two vertically neighboring pixel points;

S42, setting a gradient threshold I_th2 and calculating the gradient weight matrix W_grad of each pixel point i:
Further, in S6, the descriptor of each pixel of the current frame image is obtained through the feature descriptor computation function I(w(p, θ+Δθ)), wherein w() denotes the transformation function in the camera imaging model, which projects the pixel coordinates in the current frame into the reference frame; p is the pixel coordinate position in the image, and θ is the camera pose transformation information.
Further, in S7, the least squares optimization equation is:

Δθ* = argmin_Δθ Σ_p W(p) · ‖I'(p) − I(w(p, θ+Δθ))‖²

wherein I'() denotes the feature descriptor computation function of the pixels in the reference frame image.
Further, w() is a warp function.
Further, θ = [R T], wherein R is the camera rotation matrix and T is the camera displacement.
The invention has the following advantages and beneficial effects:

By introducing the highlight weight matrix into the pose solution of the traditional visual odometer, the invention avoids the mismatches of adjacent images in visual positioning that arise because highlight pixels appear in the image and their positions change as the camera viewpoint and light source position move, thereby effectively improving the positioning accuracy of the visual odometer.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2a is a schematic diagram of a two-dimensional space of minimum chroma and maximum chroma after clustering in the present invention.
FIG. 2b is a schematic diagram of a two-dimensional space of minimum chroma and maximum chroma after clustering and median calculation according to the present invention.
Fig. 3a is an original diagram in the present invention.
FIG. 3b is a diagram of highlight pixel detection in the present invention.
Fig. 4 is a comparison diagram of the positioning accuracy of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
As shown in fig. 1, the present invention is embodied as follows:
S1: the camera is mounted on a mobile platform with its position and viewing angle fixed, and environment color image information is acquired in real time at a fixed frame rate;
S2: calculating the highlight pixel picture in the color image: the minimum intensity I_min, the maximum intensity I_max and the chroma values Λ^psf(i) of the color image are computed, the chroma values corresponding to all pixels are projected into the minimum chroma-maximum chroma two-dimensional space for clustering, and I_ratio is obtained after clustering; the specific calculation expression is as follows:

Λ_c^psf(i) = I_c^psf(i) / (I_r^psf(i) + I_g^psf(i) + I_b^psf(i)), c ∈ {r, g, b}
I_ratio is then used to calculate the diffuse reflection component I_diffuse and the highlight pixel component I_highlight of the image; the expressions are as follows:

I_range(i) = I_max(i) − I_min(i)

I_diffuse(i) = I_ratio(i) · I_range(i)

I_highlight(i) = I_max(i) − I_diffuse(i);
wherein I_r(i), I_g(i), I_b(i) respectively denote the red, green and blue intensity values of the i-th pixel of the color image; I^psf(i) = I(i) − I_min(i) defines the pseudo specular-free image, whose channels are I_r^psf(i), I_g^psf(i), I_b^psf(i); from Λ_r^psf(i), Λ_g^psf(i), Λ_b^psf(i) the minimum chroma value λ_min(i) and the maximum chroma value λ_max(i) are obtained;
The minimum chroma value λ_min(i) and the maximum chroma value λ_max(i) are projected to the coordinate point (x', y') of the minimum chroma-maximum chroma two-dimensional space; in the corresponding plane coordinate system there are many image coordinate points, as shown in fig. 2a. The coordinate points are clustered with the k-means algorithm into three classes (red, green and blue), as shown in fig. 2b, and within each class the median of I_max(i)/I_range(i), taken with med(), is used as I_ratio(i).
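The clustering step above can be sketched in Python. The sketch is illustrative only: a tiny hand-rolled k-means stands in for whatever clustering implementation the method actually uses, the evenly-spaced initialization is an assumption, and `cluster_ratio`, `lmin`, `lmax` are hypothetical names not taken from the patent.

```python
import numpy as np

def cluster_ratio(lmin, lmax, ratio, k=3, iters=20):
    """Project pixels to (min-chroma, max-chroma) points, cluster them
    with a tiny k-means, and assign every pixel in a cluster the median
    of I_max/I_range over that cluster as its I_ratio(i)."""
    pts = np.stack([lmin.ravel(), lmax.ravel()], axis=1)
    # simple evenly-spaced initialization (assumption, not from the patent)
    centers = pts[np.linspace(0, len(pts) - 1, k).astype(int)]
    for _ in range(iters):
        # distance of every point to every center, then nearest-center labels
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    i_ratio = np.empty(len(pts))
    r = ratio.ravel()
    for j in range(k):
        if np.any(labels == j):
            i_ratio[labels == j] = np.median(r[labels == j])  # med() per class
    return i_ratio.reshape(ratio.shape)
```

The per-cluster median makes the ratio estimate robust to the specular pixels inside each chromaticity class, which is the point of clustering before taking med().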
Fig. 3a shows the original image and fig. 3b the detected highlight pixels; the white pixels in the figure are the highlight regions of the image.
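The diffuse/highlight separation in the expressions above can be sketched as a few NumPy lines; `highlight_component` is a hypothetical name, and clamping negative values to zero is an added assumption not stated in the text:

```python
import numpy as np

def highlight_component(img, i_ratio):
    """Compute the highlight pixel picture from an H x W x 3 color image
    and a per-pixel intensity ratio (e.g. the clustered median of S2)."""
    img = img.astype(np.float64)
    i_max = img.max(axis=2)                    # I_max(i): max over R,G,B
    i_min = img.min(axis=2)                    # I_min(i): min over R,G,B
    i_range = i_max - i_min                    # I_range(i) = I_max(i) - I_min(i)
    i_diffuse = i_ratio * i_range              # I_diffuse(i) = I_ratio(i) * I_range(i)
    # I_highlight(i) = I_max(i) - I_diffuse(i), clamped at 0 (assumption)
    return np.maximum(i_max - i_diffuse, 0.0)
```

For a purely diffuse pixel, I_ratio equals I_max/I_range, so I_diffuse reproduces I_max and the highlight component vanishes; a specular pixel keeps its diffuse chromaticity but has an inflated I_max, which survives as the highlight component.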
S3: calculating highlight weight matrix of each pixelW highlight The method comprises the following specific steps: setting a highlight thresholdI th1 Calculating a highlight weight matrix of a pixel point, wherein the expression is as follows:
s4: calculating a pixel gradient weight matrixW grad The method comprises the following specific steps: firstly, graying the color image, carrying out Gaussian blur processing on the image, and calculating gradient amplitudes of pixel points in x and y directions, wherein the expression is as follows:
G_x(i) = I_{x+1,y} + I_{x−1,y} − 2·I_{x,y}

G_y(i) = I_{x,y+1} + I_{x,y−1} − 2·I_{x,y}

In the above formulas, G_x(i) and G_y(i) are the gradient amplitudes of the pixel point in the image x and y directions respectively; I_{x,y} is the intensity of pixel point i; I_{x−1,y} and I_{x+1,y} are the intensities of its left and right neighboring pixel points; I_{x,y−1} and I_{x,y+1} are the intensities of its two vertically neighboring pixel points;
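The second-difference formulas above can be sketched directly with array slicing. Treating the first array axis as x and leaving border pixels at zero are added assumptions for illustration:

```python
import numpy as np

def gradient_amplitudes(gray):
    """G_x(i) = I(x+1,y) + I(x-1,y) - 2*I(x,y), and likewise for y,
    computed for all interior pixels of a grayscale image at once."""
    g = gray.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    # interior pixels only; the one-pixel border stays zero (assumption)
    gx[1:-1, :] = g[2:, :] + g[:-2, :] - 2.0 * g[1:-1, :]
    gy[:, 1:-1] = g[:, 2:] + g[:, :-2] - 2.0 * g[:, 1:-1]
    return gx, gy
```

Note these are second differences (curvature-like responses), not first-difference Sobel gradients; the code follows the formulas as written in the text.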
A gradient threshold I_th2 is then set and the weight matrix W_grad is calculated; the expression is as follows:
S5: calculating the final weight matrix W; the expression is as follows:

W = W_highlight · W_grad
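Steps S3 to S5 can be sketched as thresholded weights combined elementwise. The exact weight expressions are not reproduced in this text (the formula images are omitted), so the hard 0/1 forms below — suppressing pixels whose highlight component exceeds I_th1 and keeping pixels whose gradient amplitude exceeds I_th2 — are assumptions for illustration only:

```python
import numpy as np

def combined_weights(i_highlight, gx, gy, th1, th2):
    """W = W_highlight * W_grad (elementwise).
    Assumed forms (the patent's formula images are unavailable):
    W_highlight zeroes pixels whose highlight component exceeds I_th1;
    W_grad keeps pixels whose gradient amplitude exceeds I_th2."""
    w_highlight = np.where(i_highlight > th1, 0.0, 1.0)
    grad_mag = np.hypot(gx, gy)              # combined x/y gradient amplitude
    w_grad = np.where(grad_mag > th2, 1.0, 0.0)
    return w_highlight * w_grad
```

The elementwise product means a pixel contributes to the pose optimization only if it is both highlight-free and well-textured, which matches the stated purpose of W.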
s6: calculating descriptors of all pixel points;
S7: establishing the least squares optimization equation, substituting the weight matrix W into it, and performing nonlinear optimization to solve the pose information Δθ; the equation expression is as follows:

Δθ* = argmin_Δθ Σ_p W(p) · ‖I'(p) − I(w(p, θ+Δθ))‖²

In the above formula, p is the pixel coordinate position in the image; w() is the warp function, i.e. the transformation function in the camera imaging model, whose main role is to project the pixel coordinates in the current frame into the reference frame; I() denotes the feature descriptor computation function of the pixels in the current frame image, and I'() that of the pixels in the reference frame image; θ is the camera pose transformation information, specifically θ = [R T], wherein R is the camera rotation matrix and T is the camera displacement.
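The weighted nonlinear optimization of S7 can be illustrated with a deliberately simplified sketch: a one-dimensional integer shift stands in for the warp w(), raw intensities stand in for the feature descriptors, and brute-force search replaces the nonlinear solver. All of these substitutions are assumptions for illustration; none of them is the patented implementation.

```python
import numpy as np

def weighted_cost(ref, cur, weights, shift):
    """Evaluate sum_p W(p) * (I'(p) - I(w(p, theta)))^2 for a toy
    1-D warp w(p) = p + shift (a stand-in for the camera warp)."""
    n = ref.size
    cost = 0.0
    for p in range(n):
        q = p + shift
        if 0 <= q < n:                 # pixel must project inside the image
            r = ref[p] - cur[q]
            cost += weights[p] * r * r
    return cost

def best_shift(ref, cur, weights, max_shift=3):
    """Brute-force the shift minimizing the weighted residual,
    mimicking the nonlinear optimization over delta-theta."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: weighted_cost(ref, cur, weights, s))
```

Setting a pixel's weight to zero removes its residual entirely, which is exactly how the highlight weight matrix prevents a specular pixel from corrupting the pose estimate.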
Fig. 4 compares the positioning accuracy of the visual odometer with and without highlight pixel detection. In the figure, the dotted line is the ground-truth track, the dark line is the positioning track of the visual odometer with the highlight pixel weight matrix introduced, and the light line is the positioning track of the visual odometer without the highlight pixel weight matrix. The experimental results show that the visual odometer with highlight pixel detection provides higher positioning accuracy.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A visual odometer pose optimization method based on highlight pixel detection, characterized by comprising the following steps:

S1, acquiring color image information;

S2, calculating the highlight pixel picture in the color image;

S3, calculating the highlight weight matrix W_highlight of each pixel from the highlight pixel picture;

S4, calculating the gradient weight matrix W_grad of each pixel;

S5, calculating the final weight matrix W = W_highlight · W_grad;

S6, calculating the descriptors of all pixel points;

S7, establishing a least squares optimization equation, substituting the weight matrix and the pixel point descriptors into it, and performing nonlinear optimization to solve the pose information Δθ.
2. The visual odometer pose optimization method based on highlight pixel detection according to claim 1, characterized in that in S2, the chroma values corresponding to the color image pixels are projected into the minimum chroma-maximum chroma two-dimensional space for clustering, and the I_ratio(i) obtained by clustering is used to compute the highlight pixel component I_highlight.
3. The visual odometer pose optimization method based on highlight pixel detection according to claim 2, characterized in that I_ratio(i) is obtained by computing, from the minimum intensity value I_min and the maximum intensity value I_max of the color image, the chroma values Λ^psf(i), and projecting them into the minimum chroma-maximum chroma two-dimensional space for clustering; the expression is as follows:

Λ_c^psf(i) = I_c^psf(i) / (I_r^psf(i) + I_g^psf(i) + I_b^psf(i)), c ∈ {r, g, b}

wherein I_r(i), I_g(i), I_b(i) respectively denote the red, green and blue intensity values of the i-th pixel of the color image; I^psf(i) = I(i) − I_min(i) defines the pseudo specular-free image, whose channels are I_r^psf(i), I_g^psf(i), I_b^psf(i); from Λ_r^psf(i), Λ_g^psf(i), Λ_b^psf(i) the minimum chroma value λ_min(i) and the maximum chroma value λ_max(i) are obtained; the pair (λ_min(i), λ_max(i)) is projected to a coordinate point of the minimum chroma-maximum chroma two-dimensional space, the coordinate points are clustered into three classes (red, green and blue), and within each class the median of I_max(i)/I_range(i), taken with med(), is used as I_ratio(i).
4. The visual odometer pose optimization method based on highlight pixel detection according to claim 2, characterized in that the highlight pixel component I_highlight is obtained from I_ratio(i) by first computing the diffuse reflection component I_diffuse of the image; the expressions are as follows:

I_range(i) = I_max(i) − I_min(i)

I_diffuse(i) = I_ratio(i) · I_range(i)

I_highlight(i) = I_max(i) − I_diffuse(i).
6. The visual odometer pose optimization method based on highlight pixel detection according to claim 1, characterized in that S4 comprises the following steps:

S41, graying the color image, blurring it, and calculating the gradient amplitudes of each pixel point in the x and y directions:

G_x(i) = I_{x+1,y} + I_{x−1,y} − 2·I_{x,y}

G_y(i) = I_{x,y+1} + I_{x,y−1} − 2·I_{x,y}

wherein G_x(i) and G_y(i) are the gradient amplitudes of pixel point i in the image x and y directions respectively; I_{x,y} is the intensity of pixel point i; I_{x−1,y} and I_{x+1,y} are the intensities of its left and right neighboring pixel points; I_{x,y−1} and I_{x,y+1} are the intensities of its two vertically neighboring pixel points;

S42, setting a gradient threshold I_th2 and calculating the gradient weight matrix W_grad of each pixel point i:
7. The visual odometer pose optimization method based on highlight pixel detection according to claim 1, characterized in that in S6, the descriptor of each pixel of the current frame image is obtained through the feature descriptor computation function I(w(p, θ+Δθ)), wherein w() denotes a transformation function which projects the pixel coordinates in the current frame into the reference frame; p is the pixel coordinate position in the image, and θ is the camera pose transformation information.
9. The visual odometer pose optimization method based on highlight pixel detection according to claim 7, characterized in that w() is a warp function.
10. The visual odometer pose optimization method based on highlight pixel detection according to claim 7, characterized in that θ = [R T], wherein R is the camera rotation matrix and T is the camera displacement.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110642779.7A CN113096188B (en) | 2021-06-09 | 2021-06-09 | Visual odometer pose optimization method based on highlight pixel detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113096188A true CN113096188A (en) | 2021-07-09 |
CN113096188B CN113096188B (en) | 2021-09-21 |
Family
ID=76665915
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110642779.7A Active CN113096188B (en) | 2021-06-09 | 2021-06-09 | Visual odometer pose optimization method based on highlight pixel detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113096188B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20170053007A (en) * | 2015-11-05 | 2017-05-15 | 삼성전자주식회사 | Method and apparatus for estimating pose |
CN108010081A (en) * | 2017-12-01 | 2018-05-08 | 中山大学 | A kind of RGB-D visual odometry methods based on Census conversion and Local map optimization |
CN110346116A (en) * | 2019-06-14 | 2019-10-18 | 东南大学 | A kind of scene illumination calculation method based on Image Acquisition |
CN110390648A (en) * | 2019-06-24 | 2019-10-29 | 浙江大学 | A kind of image high-intensity region method distinguished based on unsaturation and saturation bloom |
CN112734845A (en) * | 2021-01-08 | 2021-04-30 | 浙江大学 | Outdoor monocular synchronous mapping and positioning method fusing scene semantics |
Also Published As
Publication number | Publication date |
---|---|
CN113096188B (en) | 2021-09-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||