CN113096188B - Visual odometer pose optimization method based on highlight pixel detection - Google Patents

Visual odometer pose optimization method based on highlight pixel detection

Info

Publication number
CN113096188B
CN113096188B
Authority
CN
China
Prior art keywords
highlight
pixel
calculating
psf
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110642779.7A
Other languages
Chinese (zh)
Other versions
CN113096188A (en
Inventor
宋伟
王程
朱世强
廖建峰
郑涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202110642779.7A priority Critical patent/CN113096188B/en
Publication of CN113096188A publication Critical patent/CN113096188A/en
Application granted granted Critical
Publication of CN113096188B publication Critical patent/CN113096188B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual odometer pose optimization method based on highlight pixel detection, which mainly comprises the following steps: highlight pixel detection, weight matrix calculation, feature descriptor calculation, and least squares optimization. Specular reflection often occurs on the surfaces of metal and other smooth objects, producing highlight pixels in the image captured by the camera. The positions of these highlight pixels shift as the camera viewpoint moves, causing mismatches between adjacent images in visual localization and reducing positioning accuracy. The core of the method is to introduce highlight pixel detection, convert the detection result into a weight matrix that is added to the pose optimization process, eliminate the mismatches caused by highlight pixels, and effectively improve the accuracy of visual localization.

Description

Visual odometer pose optimization method based on highlight pixel detection
Technical Field
The invention relates to the field of robot visual localization, and in particular to a visual odometer pose optimization method based on highlight pixel detection.
Background
Specular reflection often occurs on the surfaces of smooth objects such as metal, producing highlight pixels in the image captured by the camera. The positions of these highlight pixels shift as the camera viewpoint and the light source position move, causing mismatches between adjacent images in visual localization and reducing positioning accuracy.
Disclosure of Invention
To remedy the deficiencies of the prior art, the invention introduces a highlight weight matrix into the pose solution of the conventional visual odometer so as to improve the positioning accuracy of the visual odometer, and adopts the following technical scheme:
a visual odometer pose optimization method based on highlight pixel detection comprises the following steps:
S1, acquiring environmental color image information in real time at a fixed frame rate;
S2, calculating a highlight pixel picture in the color image;
S3, calculating the highlight weight matrix W_highlight of each pixel from the highlight pixel picture;
S4, calculating the gradient weight matrix W_grad of each pixel;
S5, calculating the final weight matrix W = W_highlight · W_grad;
S6, calculating the descriptors of all pixel points;
and S7, establishing a least squares optimization equation, substituting the weight matrix and the pixel point descriptors, and performing nonlinear optimization to solve the pose information.
Further, in S2, the chroma values corresponding to all color image pixels are projected into a minimum chroma-maximum chroma two-dimensional space for clustering, and the value I_ratio(i) obtained from the clustering is used to compute the highlight pixel component I_highlight.
Further, I_ratio(i) is obtained by computing the minimum intensity value I_min and the maximum intensity value I_max of the color image, deriving the chroma values Λ_psf(i), and projecting them into the minimum chroma-maximum chroma two-dimensional space for clustering. The expressions are as follows:
I_min(i) = min( I_r(i), I_g(i), I_b(i) )
I_max(i) = max( I_r(i), I_g(i), I_b(i) )
I_psf_c(i) = I_c(i) − I_min(i), c ∈ {r, g, b}
Λ_psf_c(i) = I_psf_c(i) / ( I_psf_r(i) + I_psf_g(i) + I_psf_b(i) )
Λ̃_min(i) = min( Λ_psf_r(i), Λ_psf_g(i), Λ_psf_b(i) )
Λ̃_max(i) = max( Λ_psf_r(i), Λ_psf_g(i), Λ_psf_b(i) )
where I_r(i), I_g(i), I_b(i) are the red, green, and blue intensity values of the i-th pixel of the color image; I_psf_r(i), I_psf_g(i), I_psf_b(i) are obtained from I_psf(i) = I(i) − I_min(i); and Λ_psf_r(i), Λ_psf_g(i), Λ_psf_b(i) yield the minimum chroma value Λ̃_min(i) and the maximum chroma value Λ̃_max(i). Each pair (Λ̃_min(i), Λ̃_max(i)) is projected as a coordinate point (x', y') in the minimum chroma-maximum chroma plane, so there are as many coordinate points as image pixels. The coordinate points are clustered into three classes (red, green, and blue) with the k-means algorithm, and for the points in each class, I_ratio(i) is taken as the median, via med(), of I_max(i)/I_range(i).
Further, the highlight pixel component I_highlight is obtained by first computing the diffuse reflection component I_diffuse of the image from I_ratio(i), and then subtracting it from I_max(i). The expressions are as follows:
I_range(i) = I_max(i) − I_min(i)
I_diffuse(i) = I_ratio(i) · I_range(i)
I_highlight(i) = I_max(i) − I_diffuse(i)
Further, in S3, a highlight threshold I_th1 is set, and the highlight weight matrix of pixel i is calculated as:
W_highlight(i) = 1 if I_highlight(i) ≤ I_th1, and 0 otherwise
Further, S4 includes the following steps:
S41, graying the color image, applying Gaussian blur to the image, and calculating the gradient amplitudes of each pixel in the x and y directions:
G_x(i) = I_(x+1,y) + I_(x−1,y) − 2·I_(x,y)
G_y(i) = I_(x,y+1) + I_(x,y−1) − 2·I_(x,y)
where G_x(i) and G_y(i) are the gradient amplitudes of pixel i in the x and y directions of the image, I_(x,y) is the intensity of pixel i, I_(x−1,y) and I_(x+1,y) are the intensities of its left and right neighbors, and I_(x,y−1) and I_(x,y+1) are the intensities of its upper and lower neighbors;
S42, setting a gradient threshold I_th2 and calculating the gradient weight matrix W_grad of pixel i:
W_grad(i) = 1 if |G_x(i)| + |G_y(i)| ≥ I_th2, and 0 otherwise
Further, in S6, the descriptors are obtained from the feature descriptor calculation function I(w(p, θ+Δθ)) of the pixels in the current frame image, where w() is the transformation function in the camera imaging model that projects the pixel coordinates of the current frame into the reference frame, p is the pixel coordinate position in the image, and θ is the camera transformation pose information.
Further, in S7, the least squares optimization equation is:
Δθ* = argmin_Δθ Σ_p W(p) · ‖ I(w(p, θ+Δθ)) − I'(p) ‖²
where I'() is the feature descriptor calculation function of the pixels in the reference frame image.
Further, w() is a warp function.
Further, θ = [R T], where R is the camera rotation matrix and T is the camera displacement.
The invention has the advantages and beneficial effects that:
By introducing the highlight weight matrix into the pose solution of the conventional visual odometer, the invention avoids the mismatches between adjacent images that arise in visual localization when highlight pixels appear in the image and shift with the camera viewpoint and the light source position, and thereby effectively improves the positioning accuracy of the visual odometer.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2a is a schematic diagram of a two-dimensional space of minimum chroma and maximum chroma after clustering in the present invention.
FIG. 2b is a schematic diagram of a two-dimensional space of minimum chroma and maximum chroma after clustering and median calculation according to the present invention.
Fig. 3a is the original image in the present invention.
FIG. 3b is a diagram of highlight pixel detection in the present invention.
Fig. 4 is a comparison diagram of the positioning accuracy of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
As shown in fig. 1, the present invention is embodied as follows:
S1: the camera is mounted on a mobile platform with a fixed position and viewing angle, and environmental color image information is collected in real time at a fixed frame rate;
S2: calculating the highlight pixel picture in the color image. The minimum intensity I_min, the maximum intensity I_max, and the chroma values Λ_psf(i) of the color image are computed, the chroma values corresponding to all pixels are projected into the minimum chroma-maximum chroma two-dimensional space for clustering, and I_ratio is obtained after clustering. The specific calculation expressions are as follows:
I_min(i) = min( I_r(i), I_g(i), I_b(i) )
I_max(i) = max( I_r(i), I_g(i), I_b(i) )
I_psf_c(i) = I_c(i) − I_min(i), c ∈ {r, g, b}
Λ_psf_c(i) = I_psf_c(i) / ( I_psf_r(i) + I_psf_g(i) + I_psf_b(i) )
Λ̃_min(i) = min( Λ_psf_r(i), Λ_psf_g(i), Λ_psf_b(i) )
Λ̃_max(i) = max( Λ_psf_r(i), Λ_psf_g(i), Λ_psf_b(i) )
The diffuse reflection component I_diffuse and the highlight pixel component I_highlight of the image are then calculated from I_ratio, with the following expressions:
I_range(i) = I_max(i) − I_min(i)
I_diffuse(i) = I_ratio(i) · I_range(i)
I_highlight(i) = I_max(i) − I_diffuse(i)
where I_r(i), I_g(i), I_b(i) are the red, green, and blue intensity values of the i-th pixel of the color image; I_psf_r(i), I_psf_g(i), I_psf_b(i) are obtained from I_psf(i) = I(i) − I_min(i); and Λ_psf_r(i), Λ_psf_g(i), Λ_psf_b(i) yield the minimum chroma value Λ̃_min(i) and the maximum chroma value Λ̃_max(i). Each pair (Λ̃_min(i), Λ̃_max(i)) is projected as a coordinate point (x', y') in the minimum chroma-maximum chroma plane, producing many image coordinate points, as shown in fig. 2a. The coordinate points are clustered into three classes (red, green, and blue) with the k-means algorithm, as shown in fig. 2b, and for the points in each class, I_ratio(i) is taken as the median, via med(), of I_max(i)/I_range(i).
Fig. 3a shows the original image and fig. 3b the detected highlight pixels; the white pixels mark the highlight regions of the image.
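The S2 computation above can be sketched in NumPy. This is a minimal illustration, not the patented implementation: the k-means clustering of (minimum chroma, maximum chroma) coordinates is simplified here to grouping pixels by their dominant chroma channel, and the function name and array conventions are assumptions made for the example.

```python
import numpy as np

def highlight_component(img):
    """Estimate the highlight (specular) component of an RGB image.

    Follows S2 of the method: per-pixel I_min / I_max / I_range, a
    pseudo specular-free chroma, a grouping of pixels (simplified from
    k-means over the min/max-chroma plane to grouping by dominant
    chroma channel), and a per-group median ratio I_ratio.
    `img` is a float H x W x 3 array with values in [0, 1].
    """
    I_min = img.min(axis=2)
    I_max = img.max(axis=2)
    I_range = I_max - I_min
    eps = 1e-6

    # Pseudo specular-free image I_psf = I - I_min and its chroma.
    I_psf = img - I_min[..., None]
    chroma = I_psf / (I_psf.sum(axis=2, keepdims=True) + eps)

    # Simplified stand-in for k-means: group by dominant chroma channel.
    labels = chroma.argmax(axis=2)

    # Per group, I_ratio is the median of I_max / I_range.
    I_ratio = np.zeros_like(I_max)
    for c in range(3):
        mask = (labels == c) & (I_range > eps)
        if mask.any():
            I_ratio[labels == c] = np.median(I_max[mask] / I_range[mask])

    # I_diffuse = I_ratio * I_range; I_highlight = I_max - I_diffuse.
    I_diffuse = I_ratio * I_range
    return np.clip(I_max - I_diffuse, 0.0, None)
```

On a saturated white (specular-like) pixel the estimated highlight component is large, while on a pure-color diffuse pixel it is near zero, which is the behavior fig. 3b illustrates.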
S3: calculating highlight weight matrix of each pixelW highlight The method comprises the following specific steps: setting a highlight thresholdI th1 Calculating a highlight weight matrix of a pixel point, wherein the expression is as follows:
Figure 385491DEST_PATH_IMAGE005
s4: calculating a pixel gradient weight matrixW grad The method comprises the following specific steps: firstly, graying the color image, carrying out Gaussian blur processing on the image, and calculating gradient amplitudes of pixel points in x and y directions, wherein the expression is as follows:
G x (i)=I x 1 y+,+I x-1 y,-2I x y,
G y (i)=I x y 1,++I x y 1,--2I x y,
in the above formulaG x (i)、G y (i) The gradient amplitudes of the pixel points in the x direction and the y direction of the image are respectively,I x y,is a pixel pointiIn response to the intensity of the light beam,I x-1 y,is a pixel pointiThe left-hand neighboring pixel point corresponds to intensity,I x 1 y+,is a pixel pointiThe right adjacent pixel point corresponds to an intensity,I x y 1,-corresponding intensities of adjacent pixel points on the upper side of the pixel points,I x y 1,+corresponding intensities of adjacent pixel points on the upper sides of the pixel points;
then setting a gradient thresholdI th2 Calculating a weight matrixW grad The expression is as follows:
Figure 560121DEST_PATH_IMAGE006
s5: computing a final weight matrixWThe expression is as follows:
W=W highlight W grad
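Steps S3-S5 reduce to elementwise thresholding and multiplication. Below is a minimal NumPy sketch, with the S4 gradients computed by the second-difference formulas above; the function name, the `gray[y, x]` indexing convention, the wrap-around border handling via `np.roll`, and the threshold values are illustrative assumptions.

```python
import numpy as np

def weight_matrix(I_highlight, gray, th1, th2):
    """Combine highlight and gradient weights (S3-S5).

    I_highlight : per-pixel highlight component from S2.
    gray        : grayscale (blurred) image, indexed gray[y, x].
    th1, th2    : highlight and gradient thresholds (assumed values).
    A pixel gets weight 1 only if it is not a highlight pixel and
    carries enough gradient information.
    """
    # S4: second-difference gradient amplitudes, as in the text:
    # G_x = I(x+1,y) + I(x-1,y) - 2 I(x,y), and likewise for G_y.
    # np.roll wraps at the borders, a simplification for brevity.
    G_x = np.roll(gray, -1, axis=1) + np.roll(gray, 1, axis=1) - 2 * gray
    G_y = np.roll(gray, -1, axis=0) + np.roll(gray, 1, axis=0) - 2 * gray

    W_highlight = (I_highlight <= th1).astype(float)             # S3
    W_grad = ((np.abs(G_x) + np.abs(G_y)) >= th2).astype(float)  # S4
    return W_highlight * W_grad                                  # S5
```

Because the combination is a product, a pixel is kept only when both tests pass; a strong-gradient pixel inside a highlight region is still suppressed.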
s6: calculating descriptors of all pixel points;
s7: establishing least square optimization equation, substituting weight matrixWPerforming nonlinear optimization to solve pose information DeltaθThe equation expression is as follows:
Figure 36233DEST_PATH_IMAGE007
in the above formulapIs the pixel coordinate position in the image,w() The main role of the warp function, i.e. the transformation function in the camera imaging model, is to project the pixel coordinates in the current frame into the reference frame.I() The feature descriptor computation function corresponding to the pixel in the current frame image is shown,I’() The feature descriptor computation function of the pixel in the reference frame image is shown,θtransforming pose information for cameras, in particularθ=[R T]WhereinRIn order to be a matrix of camera rotations,Tis the camera displacement.
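The weighted least squares of S7 can be illustrated in a deliberately reduced setting. The sketch below is not the patented solver: the warp w() is restricted to an integer 2D image translation and the nonlinear optimization is replaced by exhaustive search, purely to show how the weight matrix W keeps highlight-corrupted pixels out of the photometric cost; all names are illustrative.

```python
import numpy as np

def estimate_shift(cur, ref, W, max_shift=3):
    """Weighted photometric alignment (S7), reduced to a toy setting.

    The patent optimizes the camera pose theta = [R T] by nonlinear
    least squares; here the warp w() is an integer 2D translation and
    the minimum of the weighted cost
        sum_p W(p) * (I(w(p, theta)) - I'(p))^2
    is found by exhaustive search over small shifts (dy, dx).
    Borders wrap around via np.roll, a simplification for brevity.
    """
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Warp the current frame back into the reference frame.
            warped = np.roll(cur, (-dy, -dx), axis=(0, 1))
            cost = np.sum(W * (warped - ref) ** 2)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

Pixels where W is zero (detected highlights) contribute nothing to the cost, so a bright specular spot that moves between frames cannot drag the estimate away from the true motion, which is the effect fig. 4 demonstrates.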
Fig. 4 compares the positioning accuracy of the visual odometer with and without highlight pixel detection. In the figure, the dotted line is the ground-truth trajectory, the dark line is the positioning trajectory of the visual odometer with the highlight pixel weight matrix introduced, and the light line is the positioning trajectory of the visual odometer without the highlight pixel weight matrix. The experimental results show that the visual odometer with highlight pixel detection provides higher positioning accuracy.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (4)

1. A visual odometer pose optimization method based on highlight pixel detection, characterized by comprising the following steps:
S1, acquiring color image information;
S2, calculating a highlight pixel picture in the color image;
S3, calculating the highlight weight matrix W_highlight of each pixel from the highlight pixel picture: a highlight threshold I_th1 is set, and the highlight weight matrix of pixel i is calculated as:
W_highlight(i) = 1 if I_highlight(i) ≤ I_th1, and 0 otherwise
where I_highlight represents the highlight pixel component;
S4, calculating the gradient weight matrix W_grad of each pixel, comprising the following steps:
S41, graying the color image, blurring the image, and calculating the gradient amplitudes of each pixel in the x and y directions:
G_x(i) = I_(x+1,y) + I_(x−1,y) − 2·I_(x,y)
G_y(i) = I_(x,y+1) + I_(x,y−1) − 2·I_(x,y)
where G_x(i) and G_y(i) are the gradient amplitudes of pixel i in the x and y directions of the image, I_(x,y) is the intensity of pixel i, I_(x−1,y) and I_(x+1,y) are the intensities of its left and right neighbors, and I_(x,y−1) and I_(x,y+1) are the intensities of its upper and lower neighbors;
S42, setting a gradient threshold I_th2 and calculating the gradient weight matrix W_grad of pixel i:
W_grad(i) = 1 if |G_x(i)| + |G_y(i)| ≥ I_th2, and 0 otherwise
S5, calculating the final weight matrix W = W_highlight · W_grad;
S6, calculating the descriptor of each pixel point, the descriptors being obtained from the feature descriptor calculation function I(w(p, θ+Δθ)) of the pixels in the current frame image, where w() is a transformation function that projects the pixel coordinates of the current frame into the reference frame, p is the pixel coordinate position in the image, and θ is the camera transformation pose information;
S7, establishing a least squares optimization equation, substituting the weight matrix and the pixel point descriptors, and performing nonlinear optimization to solve the pose information Δθ, the least squares optimization equation being:
Δθ* = argmin_Δθ Σ_p W(p) · ‖ I(w(p, θ+Δθ)) − I'(p) ‖²
where I'() is the feature descriptor calculation function of the pixels in the reference frame image.
2. The visual odometer pose optimization method based on highlight pixel detection according to claim 1, wherein in S2, the chroma values corresponding to the color image pixels are projected into the minimum chroma-maximum chroma two-dimensional space for clustering, and the value I_ratio(i) obtained from the clustering is used to compute the highlight pixel component I_highlight; I_ratio(i) is obtained by computing the minimum intensity value I_min and the maximum intensity value I_max of the color image, deriving the chroma values Λ_psf(i), and projecting them into the minimum chroma-maximum chroma two-dimensional space for clustering, with the following expressions:
I_min(i) = min( I_r(i), I_g(i), I_b(i) )
I_max(i) = max( I_r(i), I_g(i), I_b(i) )
I_psf_c(i) = I_c(i) − I_min(i), c ∈ {r, g, b}
Λ_psf_c(i) = I_psf_c(i) / ( I_psf_r(i) + I_psf_g(i) + I_psf_b(i) )
Λ̃_min(i) = min( Λ_psf_r(i), Λ_psf_g(i), Λ_psf_b(i) )
Λ̃_max(i) = max( Λ_psf_r(i), Λ_psf_g(i), Λ_psf_b(i) )
where I_r(i), I_g(i), I_b(i) are the red, green, and blue intensity values of the i-th pixel of the color image; I_psf_r(i), I_psf_g(i), I_psf_b(i) are obtained from I_psf(i) = I(i) − I_min(i); and Λ_psf_r(i), Λ_psf_g(i), Λ_psf_b(i) yield the minimum chroma value Λ̃_min(i) and the maximum chroma value Λ̃_max(i); each pair (Λ̃_min(i), Λ̃_max(i)) is projected as a coordinate point of the minimum chroma-maximum chroma two-dimensional space, the coordinate points are clustered into three classes (red, green, and blue), and for the points in each class, I_ratio(i) is taken as the median, via med(), of I_max(i)/I_range(i); the highlight pixel component I_highlight is obtained by computing the diffuse reflection component I_diffuse of the image from I_ratio(i) and then subtracting it from I_max(i), with the following expressions:
I_range(i) = I_max(i) − I_min(i)
I_diffuse(i) = I_ratio(i) · I_range(i)
I_highlight(i) = I_max(i) − I_diffuse(i)
3. The visual odometer pose optimization method based on highlight pixel detection according to claim 1, wherein w() is a warp function.
4. The visual odometer pose optimization method based on highlight pixel detection according to claim 1, wherein θ = [R T], where R is the camera rotation matrix and T is the camera displacement.
CN202110642779.7A 2021-06-09 2021-06-09 Visual odometer pose optimization method based on highlight pixel detection Active CN113096188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110642779.7A CN113096188B (en) 2021-06-09 2021-06-09 Visual odometer pose optimization method based on highlight pixel detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110642779.7A CN113096188B (en) 2021-06-09 2021-06-09 Visual odometer pose optimization method based on highlight pixel detection

Publications (2)

Publication Number Publication Date
CN113096188A CN113096188A (en) 2021-07-09
CN113096188B true CN113096188B (en) 2021-09-21

Family

ID=76665915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110642779.7A Active CN113096188B (en) 2021-06-09 2021-06-09 Visual odometer pose optimization method based on highlight pixel detection

Country Status (1)

Country Link
CN (1) CN113096188B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110390648A * 2019-06-24 2019-10-29 浙江大学 An image highlight removal method based on distinguishing unsaturated and saturated highlights
CN112734845A (en) * 2021-01-08 2021-04-30 浙江大学 Outdoor monocular synchronous mapping and positioning method fusing scene semantics

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102462799B1 (en) * 2015-11-05 2022-11-03 삼성전자주식회사 Method and apparatus for estimating pose
CN108010081B (en) * 2017-12-01 2021-12-17 中山大学 RGB-D visual odometer method based on Census transformation and local graph optimization
CN110346116B (en) * 2019-06-14 2021-06-15 东南大学 Scene illumination calculation method based on image acquisition

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110390648A * 2019-06-24 2019-10-29 浙江大学 An image highlight removal method based on distinguishing unsaturated and saturated highlights
CN112734845A (en) * 2021-01-08 2021-04-30 浙江大学 Outdoor monocular synchronous mapping and positioning method fusing scene semantics

Also Published As

Publication number Publication date
CN113096188A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN108760767B (en) Large-size liquid crystal display defect detection method based on machine vision
CN111563889B (en) Liquid crystal screen Mura defect detection method based on computer vision
US6768509B1 (en) Method and apparatus for determining points of interest on an image of a camera calibration object
CN107169475B A face three-dimensional point cloud optimization method based on a Kinect camera
US8724885B2 (en) Integrated image processor
CN112304954B (en) Part surface defect detection method based on line laser scanning and machine vision
CN108629756B (en) Kinectv2 depth image invalid point repairing method
CN114820817A (en) Calibration method and three-dimensional reconstruction method based on high-precision line laser 3D camera
CN107367515B An ink foreign matter detection method for ultrathin flexible IC substrates
CN114241438B (en) Traffic signal lamp rapid and accurate identification method based on priori information
CN113096188B (en) Visual odometer pose optimization method based on highlight pixel detection
Han et al. Target positioning method in binocular vision manipulator control based on improved canny operator
JP2005345290A (en) Streak-like flaw detecting method and streak-like flaw detector
CN110501339B (en) Cloth cover positioning method in complex environment
CN113554672B (en) Camera pose detection method and system in air tightness detection based on machine vision
CN111667429A (en) Target positioning and correcting method for inspection robot
CN114998571B (en) Image processing and color detection method based on fixed-size markers
CN108428250B (en) X-corner detection method applied to visual positioning and calibration
CN113255455B (en) Monocular camera object identification and positioning method based on vector illumination influence removing algorithm
CN115235335A (en) Intelligent detection method for size of running gear of high-speed rail motor train unit
CN112381896B (en) Brightness correction method and system for microscopic image and computer equipment
CN113610091A (en) Intelligent identification method and device for air switch state and storage medium
CN114066993A (en) Power distribution cabinet control panel segmentation method based on machine vision
CN112200824A (en) Method for accurately calculating actual width of single pixel in crack image
CN113834488B (en) Robot space attitude calculation method based on remote identification of structured light array

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant