CN103925919A - Fisheye camera based planetary rover detection point positioning method - Google Patents


Info

Publication number
CN103925919A
Authority
CN
China
Prior art keywords
point
image
coordinate
fisheye camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410015845.8A
Other languages
Chinese (zh)
Inventor
王镓
王保丰
刘传凯
周建亮
唐歌实
周立
张强
袁建平
卜彦龙
苗萍
刘飞
高薇
李弈霏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aerospace Control Center
Original Assignee
Beijing Aerospace Control Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aerospace Control Center
Priority to CN201410015845.8A
Publication of CN103925919A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/06Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C11/08Interpretation of pictures by comparison of two or more pictures of the same area the pictures not being supported in the same relative position as when they were taken

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fisheye camera based planetary rover detection point positioning method. The method comprises the following steps: converting the fisheye projection to a central projection; generating epipolar images; extracting and matching image feature points; and calculating the coordinates of the detection point. The method achieves high-precision positioning of planetary rover detection points using a fisheye camera.

Description

A fisheye camera based planetary rover detection point positioning method
Technical field
The present invention relates to the technical fields of computer vision and image matching, and is a method for positioning planetary rover detection points using fisheye camera images.
Background art
At present, planetary exploration missions abroad realize planetary surface navigation and detection target positioning through stereo camera measurement systems mounted on the rover. In NASA's deep space exploration missions, the Curiosity Mars rover collects samples and drills sampling holes with its robotic arm, mainly using stereo images from the onboard hazard avoidance cameras to obtain the three-dimensional coordinates of the detection points. Domestically, a series of studies on manipulator detection point positioning have also been carried out, but they mainly concentrate on stereo camera vision systems with a small field angle. The advantage of a fisheye camera is its large field angle: it can see a wide scene at once. However, fisheye cameras exhibit large distortion, and achieving high-precision positioning with them is the difficulty that needs to be solved.
Summary of the invention
The technical problem to be solved by the present invention is as follows: the present invention proposes a fisheye camera based planetary rover detection point positioning method, so that high-precision positioning of planetary rover detection points can be realized with fisheye cameras.
To address the above problem, the present invention adopts the following technical scheme:
A fisheye camera based planetary rover detection point positioning method comprises the following steps:
S1, converting the left and right fisheye camera original images respectively to central-projection images under the central projection model;
S2, epipolar-rectifying the central-projection images and converting them to left and right epipolar images;
S3, extracting feature points from the left epipolar image; on the right epipolar image, for each feature point, searching out the point with the maximum correlation coefficient by the correlation coefficient method as the match point; then using the least squares method to refine the match points extracted by the correlation coefficient method, so that the matching precision of the left and right epipolar images reaches sub-pixel level;
S4, calculating the detection point coordinates by the forward intersection algorithm from the high-precision matching feature points obtained by the least squares adjustment.
The present invention adopts a perspective projection transformation to convert the fisheye images to central-projection images and generates epipolar images, which restricts matching to the epipolar direction and reduces mismatches. It adopts a coarse-to-fine matching strategy: whole-pixel matching is performed first, followed by least squares matching to reach sub-pixel precision, which guarantees high matching accuracy and finally realizes accurate positioning of the detection point. With the technical scheme of the present invention, high-precision positioning of planetary rover detection points can be realized with fisheye cameras.
Brief description of the drawings
Fig. 1 is the flow chart of the fisheye camera based planetary rover detection point positioning method;
Fig. 2 is the schematic diagram of the fisheye projection to central projection transformation;
Fig. 3 is the schematic diagram of epipolar image generation.
Detailed description of the embodiments
The present invention provides a fisheye camera based planetary rover detection point positioning method, comprising:
S1, fisheye projection to central projection transformation: the left and right fisheye camera original images are each converted to central-projection images under the central projection model.
An arbitrary spatial point P with picture coordinates (x, y) in the fisheye image is converted to image coordinates (x', y') under the central projection model. As shown in Fig. 2, O' is the optical center, the plane O'X'Y' is the fisheye image plane, f is the focal length of the fisheye camera, O'Z is the principal optical axis, and the image-space coordinate system O'-X'Y'Z is established with O' as the coordinate origin. The line O'P intersects the projection sphere at a point p; the orthogonal projection of p onto the image plane O'X'Y' gives the picture point p' with coordinates (x, y). Thus p has image-space coordinates (x, y, z) on the projection sphere, whose equation is x^2 + y^2 + z^2 = f^2. Suppose there is a virtual image plane OXY in the plane Z = f, with coordinate axes parallel to the corresponding axes of the image plane O'X'Y', and let p'' be the intersection of the line O'P with it. Clearly p'' is the virtual central-projection picture point of P, and its image coordinates under the central projection are (x', y'). Since P, p'' and O' satisfy the collinearity condition of the central projection, the coordinates are related by formula (1):
$$x' = \frac{f \cdot x}{\sqrt{f^2 - x^2 - y^2}}, \qquad y' = \frac{f \cdot y}{\sqrt{f^2 - x^2 - y^2}} \tag{1}$$
Let the picture point coordinates on the original left fisheye camera image be $(x_L, y_L)$ and on the original right fisheye camera image be $(x_R, y_R)$. By formula (1), their corresponding picture point coordinates on the central-projection images are $(x'_L, y'_L)$ and $(x'_R, y'_R)$ respectively, related by formulas (2) and (3):

$$x'_L = \frac{f \cdot x_L}{\sqrt{f^2 - x_L^2 - y_L^2}}, \qquad y'_L = \frac{f \cdot y_L}{\sqrt{f^2 - x_L^2 - y_L^2}} \tag{2}$$

$$x'_R = \frac{f \cdot x_R}{\sqrt{f^2 - x_R^2 - y_R^2}}, \qquad y'_R = \frac{f \cdot y_R}{\sqrt{f^2 - x_R^2 - y_R^2}} \tag{3}$$
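As an illustration of formulas (1)-(3), a minimal Python sketch of the conversion is given below; the function name and the sample focal length and picture point are illustrative assumptions, not values from the patent.

```python
import numpy as np

def fisheye_to_central(x, y, f):
    # Formula (1): the fisheye picture point (x, y) is the orthogonal
    # projection of a point on the projection sphere of radius f; re-project
    # that sphere point through the optical center onto the virtual plane Z = f.
    z = np.sqrt(f * f - x * x - y * y)  # sphere point height, from x^2 + y^2 + z^2 = f^2
    return f * x / z, f * y / z

# Formulas (2) and (3): the same mapping applied to left and right picture points.
f = 480.0                    # focal length in pixels (illustrative)
xL, yL = 120.0, -80.0        # a left fisheye picture point (illustrative)
xL_c, yL_c = fisheye_to_central(xL, yL, f)
```

In a full implementation the mapping would be applied over all pixels of both fisheye images, with gray-value resampling, to produce the two central-projection images.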
S2, epipolar image generation: the central-projection images are epipolar-rectified and converted into left and right epipolar images.
Let the picture point coordinates on the left and right epipolar images be $(x''_L, y''_L)$ and $(x''_R, y''_R)$. Their conversion relation to the picture point coordinates $(x'_L, y'_L)$, $(x'_R, y'_R)$ on the central-projection images can be expressed as formulas (4) and (5):

$$\begin{bmatrix} x''_L \\ y''_L \end{bmatrix} = N_{turn} \begin{bmatrix} x'_L \\ y'_L \end{bmatrix} + \begin{bmatrix} X'_0 \\ Y'_0 \end{bmatrix} \tag{4}$$

$$\begin{bmatrix} x''_R \\ y''_R \end{bmatrix} = N'_{turn} \begin{bmatrix} x'_R \\ y'_R \end{bmatrix} + \begin{bmatrix} X''_0 \\ Y''_0 \end{bmatrix} \tag{5}$$

where $N_{turn}$ is the transition matrix between the left fisheye camera epipolar image and its original image, $N'_{turn}$ is the transition matrix between the right fisheye camera epipolar image and its original image, $[X'_0, Y'_0]^T$ are the translation parameters between the left fisheye camera epipolar image and its original image, and $[X''_0, Y''_0]^T$ are the translation parameters between the right fisheye camera epipolar image and its original image.
$N_{turn}$, $[X'_0, Y'_0]^T$, $N'_{turn}$ and $[X''_0, Y''_0]^T$ are solved as follows:
As shown in Fig. 3, P is an arbitrary spatial point, $O_L$ and $O_R$ are the optical centers of the left and right cameras, and $O_L$-$X_L Y_L Z_L$ and $O_R$-$X_R Y_R Z_R$ are the image-space coordinate systems of the left and right cameras; the coordinates of P in these two systems are $(x'_L, y'_L, z'_L)$ and $(x'_R, y'_R, z'_R)$. $S_L$ and $S_R$ are the left and right image planes, $o_L$ and $o_R$ are the principal points of the left and right photos, and the focal lengths of the two image planes are $f_L$ and $f_R$. $o_L$-$x_L y_L$ and $o_R$-$x_R y_R$ are the picture coordinate systems of the two image planes; the $o_L x_L$ and $o_L y_L$ axes are parallel to the $O_L X_L$ and $O_L Y_L$ axes respectively, and the $o_R x_R$ and $o_R y_R$ axes are parallel to the $O_R X_R$ and $O_R Y_R$ axes. $p'_L$ and $p'_R$ are the picture points of P on the left and right image planes, with coordinates $(x'_L, y'_L)$ and $(x'_R, y'_R)$. In the plane determined by the three points $O_L$, $O_R$ and $o_L$, the left-camera virtual image-space coordinate system $O_L$-$XYZ$, i.e. the left-camera epipolar image-space coordinate system, is established: $O_L$ is the coordinate origin, the direction from $O_L$ to $O_R$ is the positive X axis, the normal of the plane is the positive Z axis, and the Y axis is determined by the X and Z axes. The right-camera virtual image-space coordinate system $O_R$-$X'Y'Z'$, i.e. the right-camera epipolar image-space coordinate system, is established in the same manner; it is produced by translating the coordinate system $O_L$-$XYZ$ to $O_R$. Let the coordinates of P in these two systems be $(x''_L, y''_L, z''_L)$ and $(x''_R, y''_R, z''_R)$. $S'_L$ and $S'_R$ are the virtual image planes corresponding to the two virtual image-space coordinate systems; their focal lengths are both f, and $S'_L$ and $S'_R$ are coplanar. $o'_L$ and $o'_R$ are the principal points of $S'_L$ and $S'_R$, and $o'_L$-$xy$ and $o'_R$-$xy$ are the corresponding picture coordinate systems; the $o'_L x$ and $o'_L y$ axes are parallel to the $O_L X$ and $O_L Y$ axes, and the $o'_R x$ and $o'_R y$ axes are parallel to the $O_R X'$ and $O_R Y'$ axes. The essence of epipolar rectification is to project the images formed by the left and right cameras on $S_L$ and $S_R$ onto the virtual image planes $S'_L$ and $S'_R$ corresponding to the virtual image-space coordinate systems $O_L$-$XYZ$ and $O_R$-$X'Y'Z'$.
Detailed process is as follows:
(1) The coordinate transformation between the image-space coordinate system $O_R$-$X_R Y_R Z_R$ of the right camera and the image-space coordinate system $O_L$-$X_L Y_L Z_L$ of the left camera can be expressed by formula (6), where N is the rotation matrix between the coordinate systems $O_R$-$X_R Y_R Z_R$ and $O_L$-$X_L Y_L Z_L$, and $(X_0, Y_0, Z_0)$ are the coordinates of the origin of $O_L$-$X_L Y_L Z_L$ in $O_R$-$X_R Y_R Z_R$:

$$\begin{bmatrix} x'_L \\ y'_L \\ z'_L \end{bmatrix} = N \begin{bmatrix} x'_R - X_0 \\ y'_R - Y_0 \\ z'_R - Z_0 \end{bmatrix} \tag{6}$$
(2) Using formula (6), the coordinates $(x'_{L,O_R}, y'_{L,O_R}, z'_{L,O_R})$ of the point $O_R$ under the image-space coordinate system of the left camera can be calculated as:

$$\begin{bmatrix} x'_{L,O_R} \\ y'_{L,O_R} \\ z'_{L,O_R} \end{bmatrix} = N \begin{bmatrix} -X_0 \\ -Y_0 \\ -Z_0 \end{bmatrix} \tag{7}$$
(3) On the Z axis of the left-camera virtual image-space coordinate system, take the point $P_1$ at unit length 1 from $O_L$; its coordinates $(x'_{L,P_1}, y'_{L,P_1}, z'_{L,P_1})$ under the image-space coordinate system of the left camera can then be obtained from:

$$\overrightarrow{O_L P_1} = \frac{\overrightarrow{O_L O_R} \times \overrightarrow{O_L o_L}}{\left|\overrightarrow{O_L O_R} \times \overrightarrow{O_L o_L}\right|} = x'_{L,P_1}\,\mathbf{i} + y'_{L,P_1}\,\mathbf{j} + z'_{L,P_1}\,\mathbf{k} \tag{8}$$
(4) The three points $O_L$, $O_R$, $P_1$ then have coordinates $(0, 0, 0)$, $(|\overrightarrow{O_L O_R}|, 0, 0)$ and $(0, 0, 1)$ under the left-camera virtual image-space coordinate system, and coordinates $(0, 0, 0)$, $(x'_{L,O_R}, y'_{L,O_R}, z'_{L,O_R})$ and $(x'_{L,P_1}, y'_{L,P_1}, z'_{L,P_1})$ under the image-space coordinate system of the left camera. According to the common-point transfer principle, the conversion parameters between the coordinate systems $O_L$-$XYZ$ and $O_L$-$X_L Y_L Z_L$ can be obtained: the translation parameters $X'_{0L}, Y'_{0L}, Z'_{0L}$ and the rotation parameters $\omega', \kappa', \varphi'$, from which the rotation matrix $N_{core}$ is calculated. The coordinate systems $O_L$-$XYZ$ and $O_L$-$X_L Y_L Z_L$ then have the following conversion relation, formula (9):

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = N_{core} \begin{bmatrix} X_L - X'_{0L} \\ Y_L - Y'_{0L} \\ Z_L - Z'_{0L} \end{bmatrix} \tag{9}$$

where $(X_L, Y_L, Z_L)$ are coordinates under $O_L$-$X_L Y_L Z_L$, $(X, Y, Z)$ are coordinates under $O_L$-$XYZ$, and $N_{core}$ is the rotation matrix determined by $\omega', \kappa', \varphi'$.
(5) Using formula (9), the coordinates $(x'_L, y'_L, -f_L)$ of a picture point $p'_L$ of the image plane $S_L$ under the left-camera image-space coordinate system are converted to the coordinates $(x''_L, y''_L, z''_L)$ under the left-camera virtual image-space coordinate system, i.e. formula (10):

$$\begin{bmatrix} x''_L \\ y''_L \\ z''_L \end{bmatrix} = N_{core} \begin{bmatrix} x'_L - X'_{0L} \\ y'_L - Y'_{0L} \\ -f_L - Z'_{0L} \end{bmatrix} \tag{10}$$

From formula (10), the values of $N_{turn}$ and $[X'_0, Y'_0]^T$ in formula (4) can be solved.
In the same manner, $N'_{turn}$ and $[X''_0, Y''_0]^T$ can be solved by the identical method.
(6) Because the calculated coordinates do not necessarily fall exactly on whole-pixel positions, bilinear gray-value interpolation resampling is carried out at $(x'_L, y'_L)$ to obtain the gray value of $(x''_L, y''_L)$, thereby generating the left epipolar image; the right epipolar image is obtained in the same manner.
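As an illustrative sketch of step (6), the following Python code resamples an epipolar image by inverse mapping with bilinear interpolation. It assumes the affine form of formula (4) reconstructed above, with a 2x2 transition matrix N_turn and a translation vector t; the function names are illustrative, not from the patent.

```python
import numpy as np

def bilinear_sample(img, x, y):
    # Step (6): bilinear gray-value interpolation at a non-integer
    # position (x, y); returns 0 outside the source image.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    h, w = img.shape
    if not (0 <= x0 < w - 1 and 0 <= y0 < h - 1):
        return 0.0
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])

def make_epipolar_image(central_img, N_turn, t):
    # Inverse mapping: for each pixel (x'', y'') of the epipolar image,
    # invert formula (4) to find its position on the central-projection
    # image and resample the gray value there.
    N_inv = np.linalg.inv(N_turn)
    out = np.zeros_like(central_img, dtype=float)
    h, w = central_img.shape
    for yy in range(h):
        for xx in range(w):
            xs, ys = N_inv @ (np.array([xx, yy], dtype=float) - t)
            out[yy, xx] = bilinear_sample(central_img, xs, ys)
    return out
```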
S3, image feature point extraction and matching: feature points are extracted from the left epipolar image; on the right epipolar image, for each such feature point, the point with the maximum correlation coefficient is searched out by the correlation coefficient method as the match point; the least squares method is then used to refine the match points extracted by the correlation coefficient method, so that the matching precision of the left and right epipolar images reaches sub-pixel level.
Matching the corresponding points on the right epipolar image for a set of specified points on the left epipolar image to form the matching point set specifically comprises the following steps:
Step 1: detection point specification
According to the needs of the exploration task, one or more detection targets are manually specified on the generated left epipolar image, and their picture coordinates under the epipolar image coordinate system form the point set $P\{(x''_{Ti}, y''_{Ti}),\ i = 1, 2, \ldots, n\}$.
Step 2: detection point matching
After the feature points of the left image have been specified using step 1, the further work is to match the specified points on the other image. The epipolar image on which the detection points have been specified is called the reference image, and the other epipolar image to be matched is called the registered image. Since the baseline between the fisheye cameras on the planetary rover is short (about 100 mm) and their relative position is fixed (they can be regarded as a rigid body), the similarity and overlap of the left and right stereo images are both large, so the correlation coefficient matching method can be adopted. The concrete steps are as follows:
(1) Choose each detection point of the reference image point set in turn and, centered on that point, take an image region of m rows by n columns as the target area (m and n are odd; usually m = n = 11).
(2) To search for the match point on the registered image, first estimate the approximate range in which the match point may exist and establish a search area of k rows by l columns (k > m, l > n).
(3) In the search area of the registered image, take out the m × n pixel gray arrays in turn (the search window usually takes m = n) and calculate their similarity measure with the target area, $\rho_{ij}$, with $i = i_0 - \frac{l}{2} + \frac{n}{2}, \ldots, i_0 + \frac{l}{2} - \frac{n}{2}$ and $j = j_0 - \frac{k}{2} + \frac{m}{2}, \ldots, j_0 + \frac{k}{2} - \frac{m}{2}$, where $(i_0, j_0)$ is the center pixel of the search area. When $\rho_{ij}$ attains its maximum $\rho_{max}$, the center pixel of the corresponding search window is taken as the corresponding point. Following this principle, the whole-pixel matching result point set $P\{(x'''_{Ji}, y'''_{Ji}),\ i = 1, 2, \ldots, n\}$ on the registered image is obtained for the specified point set.

$$\rho_{max} = \max_{\substack{i = i_0 - \frac{l}{2} + \frac{n}{2}, \ldots, i_0 + \frac{l}{2} - \frac{n}{2} \\ j = j_0 - \frac{k}{2} + \frac{m}{2}, \ldots, j_0 + \frac{k}{2} - \frac{m}{2}}} \rho_{ij} \tag{11}$$
where the similarity measure $\rho_{ij}$ is the correlation coefficient of the target area image and the m × n pixel array of the search area centered on the candidate match point:

$$\rho = \frac{\sigma_{gg'}}{\sqrt{\sigma_{gg}}\ \sqrt{\sigma_{g'g'}}} \tag{12}$$
Let the coordinate of the target point in the left image be (x, y), the coordinate of a point in the search area be (x', y'), and the offset of (x', y') with respect to (x, y) be $(k_1, k_2)$; g(x, y) is the target area image centered on (x, y), and g'(x', y') is the image region of m × n pixels in the search area centered on (x', y'); $g_{ij}$ is the image gray value at target area coordinate (i, j); $\bar{g}$ is the mean gray value of the target area image centered on (x, y), and $\bar{g}'$ is the mean gray value of the image region centered on (x', y'); $\sigma_{gg'}$ is the covariance of the target area image centered on (x, y) and the image centered on (x', y'), $\sigma_{gg}$ is the variance of the target area image centered on (x, y), and $\sigma_{g'g'}$ is the variance of the image centered on (x', y') in the search area. The relations between them are:

$$\bar{g} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} g_{ij}, \qquad \bar{g}' = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} g'_{i+k_1,\,j+k_2},$$
$$\sigma_{gg} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} g_{ij}^2 - \bar{g}^2, \qquad \sigma_{g'g'} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} g'^{\,2}_{i+k_1,\,j+k_2} - \bar{g}'^{\,2},$$
$$\sigma_{gg'} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n} g_{ij}\, g'_{i+k_1,\,j+k_2} - \bar{g}\,\bar{g}' \tag{13}$$
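A minimal Python sketch of the whole-pixel correlation matching of formulas (11)-(13) follows. The function names, the centering of the search area on the specified point, and the default window sizes are illustrative assumptions, and the windows are assumed to lie inside both images and to contain some texture.

```python
import numpy as np

def corr_coeff(g, gp):
    # Formulas (12)-(13): correlation coefficient of two equal-size gray windows.
    gm, gpm = g.mean(), gp.mean()
    s_gg = (g * g).mean() - gm * gm        # variance of the target window
    s_pp = (gp * gp).mean() - gpm * gpm    # variance of the candidate window
    s_gp = (g * gp).mean() - gm * gpm      # covariance of the two windows
    return s_gp / (np.sqrt(s_gg * s_pp) + 1e-12)

def match_whole_pixel(ref, reg, x0, y0, m=11, k=31, l=31):
    # Formula (11): slide an m x m target window over a k x l search area of
    # the registered image and keep the center with the maximum correlation.
    r = m // 2
    tmpl = ref[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1]
    best_rho, best_xy = -1.0, (x0, y0)
    for dy in range(-(k // 2 - r), k // 2 - r + 1):
        for dx in range(-(l // 2 - r), l // 2 - r + 1):
            win = reg[y0 + dy - r:y0 + dy + r + 1, x0 + dx - r:x0 + dx + r + 1]
            rho = corr_coeff(tmpl, win)
            if rho > best_rho:
                best_rho, best_xy = rho, (x0 + dx, y0 + dy)
    return best_xy, best_rho
```

On epipolar images the search area can in practice be narrowed to a band around the epipolar line, which is exactly the benefit of the rectification in S2.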
Step 3: least squares matching
The above method completes the whole-pixel matching between the point sets of the left and right images. To further improve the matching precision, least squares matching is applied to each match point of the point set, carrying out an overall adjustment computation over the information in the imaging window, which finally yields the sub-pixel matching point set $P\{(x''_{Ji}, y''_{Ji}),\ i = 1, 2, \ldots, n\}$ on the registered image for the specified point set.
The steps of the least squares matching of the left and right images, which raises the image matching precision to sub-pixel level, are as follows:
1) Geometric distortion correction: according to the geometric deformation parameters $a_0, a_1, a_2, b_0, b_1, b_2$, the picture coordinates of the reference image window are transformed to the registered image array:

$$x'''_J = a_0 + a_1 x''_T + a_2 y''_T, \qquad y'''_J = b_0 + b_1 x''_T + b_2 y''_T \tag{14}$$
Taking into account the linear radiometric distortion of the reference image with respect to the registered image gives:

$$g_1(x''_T, y''_T) + n_1(x''_T, y''_T) = h_0 + h_1\, g_2(a_0 + a_1 x''_T + a_2 y''_T,\ b_0 + b_1 x''_T + b_2 y''_T) + n_2(x''_T, y''_T) \tag{15}$$
2) Resampling: according to formula (15), bilinear interpolation is adopted for gray resampling to calculate $g_2(x''_T, y''_T)$; after linearization, the error equation of least squares image matching is obtained:

$$v = c_1\,dh_0 + c_2\,dh_1 + c_3\,da_0 + c_4\,da_1 + c_5\,da_2 + c_6\,db_0 + c_7\,db_1 + c_8\,db_2 - \Delta g \tag{16}$$

where $dh_0, dh_1, da_0, \ldots, db_2$ are the corrections of the distortion parameters and $\Delta g$ is the gray difference of the corresponding pixels;
3) Radiometric distortion correction: using the radiometric distortion parameters $h_0, h_1$ obtained from the least squares image matching error equation, a radiometric correction is applied to the above resampling result, i.e. $h_0 + h_1 g_2(x''_T, y''_T)$;
4) Calculate the correlation coefficient ρ between the left image region and the gray array of the right image region after geometric distortion correction and radiometric distortion correction. If ρ is smaller than the correlation coefficient obtained in the previous iteration, calculate the optimal match point and end the iteration; otherwise carry out step 5);
5) Apply least squares image matching and solve for the corrections $dh_0, dh_1, da_0, \ldots$ of the deformation parameters;
6) Calculate the deformation parameters: let $h_0^{i-1}, h_1^{i-1}, a_0^{i-1}, a_1^{i-1}, \ldots$ be the deformation parameters of the previous iteration and $dh_0^i, dh_1^i, da_0^i, da_1^i, \ldots$ the corrections obtained in this iteration. The geometric deformation parameters are corrected by the following relation:

$$\begin{bmatrix} 1 \\ x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ a_0^i & a_1^i & a_2^i \\ b_0^i & b_1^i & b_2^i \end{bmatrix} \begin{bmatrix} 1 \\ x \\ y \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ da_0^i & 1 + da_1^i & da_2^i \\ db_0^i & db_1^i & 1 + db_2^i \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ a_0^{i-1} & a_1^{i-1} & a_2^{i-1} \\ b_0^{i-1} & b_1^{i-1} & b_2^{i-1} \end{bmatrix} \begin{bmatrix} 1 \\ x \\ y \end{bmatrix}$$

$$\Rightarrow \begin{cases} a_0^i = a_0^{i-1} + da_0^i + a_0^{i-1}\,da_1^i + b_0^{i-1}\,da_2^i \\ a_1^i = a_1^{i-1} + a_1^{i-1}\,da_1^i + b_1^{i-1}\,da_2^i \\ a_2^i = a_2^{i-1} + a_2^{i-1}\,da_1^i + b_2^{i-1}\,da_2^i \\ b_0^i = b_0^{i-1} + db_0^i + a_0^{i-1}\,db_1^i + b_0^{i-1}\,db_2^i \\ b_1^i = b_1^{i-1} + a_1^{i-1}\,db_1^i + b_1^{i-1}\,db_2^i \\ b_2^i = b_2^{i-1} + a_2^{i-1}\,db_1^i + b_2^{i-1}\,db_2^i \end{cases} \tag{17}$$
The radiometric distortion parameters are corrected by the following relation:

$$\begin{bmatrix} 1 \\ g_1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ dh_0^i & 1 + dh_1^i \end{bmatrix} \begin{bmatrix} 1 & 0 \\ h_0^{i-1} & h_1^{i-1} \end{bmatrix} \begin{bmatrix} 1 \\ g_2 \end{bmatrix} \Rightarrow \begin{cases} h_0^i = h_0^{i-1} + dh_0^i + h_0^{i-1}\,dh_1^i \\ h_1^i = h_1^{i-1} + h_1^{i-1}\,dh_1^i \end{cases} \tag{18}$$
7) According to the precision theory of least squares matching, the coordinate precision depends on the gradient of the image gray values. Taking the squared gradient as weight, a weighted mean of the coordinates is formed within the reference image window:

$$x_t = \frac{\sum x''_T \cdot \dot{g}_{x''_T}^2}{\sum \dot{g}_{x''_T}^2}, \qquad y_t = \frac{\sum y''_T \cdot \dot{g}_{y''_T}^2}{\sum \dot{g}_{y''_T}^2} \tag{19}$$

where $\dot{g}_{x''_T}$ and $\dot{g}_{y''_T}$ are the gray gradients in the x and y directions.
$(x_t, y_t)$ is taken as the point coordinate on the reference image, and the coordinate of the corresponding point is obtained through the geometric transformation parameters solved by least squares image matching:

$$x''_J = a_0 + a_1 x_t + a_2 y_t, \qquad y''_J = b_0 + b_1 x_t + b_2 y_t \tag{20}$$
By formula (20), the sub-pixel match point of every point of the specified point set can be calculated in turn, finally yielding the point set $P\{(x''_{Ji}, y''_{Ji}),\ i = 1, 2, \ldots, n\}$.
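As a small sketch of the parameter update in step 6), the following Python functions compose the previous deformation parameters with the corrections of the current iteration exactly as in formulas (17) and (18); the function names are illustrative.

```python
def update_geometric(a, b, da, db):
    # Formula (17): compose the previous affine parameters (a, b) with this
    # iteration's corrections (da, db); a = (a0, a1, a2), b = (b0, b1, b2).
    a0, a1, a2 = a
    b0, b1, b2 = b
    da0, da1, da2 = da
    db0, db1, db2 = db
    a_i = (a0 + da0 + a0 * da1 + b0 * da2,
           a1 + a1 * da1 + b1 * da2,
           a2 + a2 * da1 + b2 * da2)
    b_i = (b0 + db0 + a0 * db1 + b0 * db2,
           b1 + a1 * db1 + b1 * db2,
           b2 + a2 * db1 + b2 * db2)
    return a_i, b_i

def update_radiometric(h0, h1, dh0, dh1):
    # Formula (18): compose the radiometric parameters with their corrections.
    return h0 + dh0 + h0 * dh1, h1 + h1 * dh1
```

These closed-form updates follow directly from multiplying the correction matrix into the previous parameter matrix, as written in formulas (17) and (18).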
S4, detection point coordinate computation: according to the high-precision matching feature points obtained by the least squares adjustment, the forward intersection algorithm is used to calculate the detection point position.
The detection point position is calculated by the forward intersection algorithm from the specified point set of the left epipolar image and the point set matched with it; the concrete steps are:
(1) As shown in Fig. 3, an arbitrary point of the specified point set on the left epipolar image has coordinates $(x''_{LP}, y''_{LP}, z''_{LP})$ under the left-camera epipolar image-space coordinate system $O_L$-$XYZ$, which is exactly the detection point position. Since a point of the specified point set is matched from the left epipolar image to the right epipolar image, let its coordinates in the specified point set be $(x''_L, y''_L)$ (i.e. T = L) and the coordinates of the point matched with it be $(x''_R, y''_R)$ (i.e. J = R). The left camera intrinsic parameters $f_L, x_{L0}, y_{L0}$, the right camera intrinsic parameters $f_R, x_{R0}, y_{R0}$, and the extrinsic parameters $(\omega_{RL}, \kappa_{RL}, \varphi_{RL}, X_{RL}, Y_{RL}, Z_{RL})$ of the right camera with respect to the left-camera epipolar image-space coordinate system have all been accurately calibrated in advance and are known quantities. For the unknown point P the two cameras yield the following collinearity condition equations:

$$\begin{cases} x''_L - x_{L0} = -f_L\, \dfrac{x''_{LP}}{z''_{LP}} \\[6pt] y''_L - y_{L0} = -f_L\, \dfrac{y''_{LP}}{z''_{LP}} \\[6pt] x''_R - x_{R0} = -f_R\, \dfrac{r_{11}(x''_{LP} - X_{RL}) + r_{12}(y''_{LP} - Y_{RL}) + r_{13}(z''_{LP} - Z_{RL})}{r_{31}(x''_{LP} - X_{RL}) + r_{32}(y''_{LP} - Y_{RL}) + r_{33}(z''_{LP} - Z_{RL})} \\[6pt] y''_R - y_{R0} = -f_R\, \dfrac{r_{21}(x''_{LP} - X_{RL}) + r_{22}(y''_{LP} - Y_{RL}) + r_{23}(z''_{LP} - Z_{RL})}{r_{31}(x''_{LP} - X_{RL}) + r_{32}(y''_{LP} - Y_{RL}) + r_{33}(z''_{LP} - Z_{RL})} \end{cases} \tag{22}$$
where:

$$R(r_{ij})_{3\times 3} = \begin{bmatrix} \cos\kappa_{RL}\cos\varphi_{RL} - \sin\omega_{RL}\sin\kappa_{RL}\sin\varphi_{RL} & -\sin\kappa_{RL}\cos\varphi_{RL} - \sin\omega_{RL}\cos\kappa_{RL}\sin\varphi_{RL} & -\cos\omega_{RL}\sin\varphi_{RL} \\ \cos\omega_{RL}\sin\kappa_{RL} & \cos\omega_{RL}\cos\kappa_{RL} & -\sin\omega_{RL} \\ \cos\kappa_{RL}\sin\varphi_{RL} + \sin\omega_{RL}\sin\kappa_{RL}\cos\varphi_{RL} & -\sin\kappa_{RL}\sin\varphi_{RL} + \sin\omega_{RL}\cos\kappa_{RL}\cos\varphi_{RL} & \cos\omega_{RL}\cos\varphi_{RL} \end{bmatrix}$$
In formula (22) the intrinsic and extrinsic parameters of the two cameras are known, so only the unknown point coordinates $(x''_{LP}, y''_{LP}, z''_{LP})$ are unknowns. The two cameras set up 4 equations for solving 3 unknowns, so error equations can be established from formula (22) and a least squares solution gives the three-dimensional coordinates $(x''_{LP}, y''_{LP}, z''_{LP})$ of the unknown point.
(2) Linearizing formula (22) gives the error equations of the picture point coordinates of the left and right epipolar images, formula (23):

$$\begin{cases} v_{x''_L} = -\dfrac{\partial x''_L}{\partial x''_{LP}}\,dx''_{LP} - \dfrac{\partial x''_L}{\partial y''_{LP}}\,dy''_{LP} - \dfrac{\partial x''_L}{\partial z''_{LP}}\,dz''_{LP} - l_{x''_L} \\[6pt] v_{y''_L} = -\dfrac{\partial y''_L}{\partial x''_{LP}}\,dx''_{LP} - \dfrac{\partial y''_L}{\partial y''_{LP}}\,dy''_{LP} - \dfrac{\partial y''_L}{\partial z''_{LP}}\,dz''_{LP} - l_{y''_L} \\[6pt] v_{x''_R} = -\dfrac{\partial x''_R}{\partial x''_{LP}}\,dx''_{LP} - \dfrac{\partial x''_R}{\partial y''_{LP}}\,dy''_{LP} - \dfrac{\partial x''_R}{\partial z''_{LP}}\,dz''_{LP} - l_{x''_R} \\[6pt] v_{y''_R} = -\dfrac{\partial y''_R}{\partial x''_{LP}}\,dx''_{LP} - \dfrac{\partial y''_R}{\partial y''_{LP}}\,dy''_{LP} - \dfrac{\partial y''_R}{\partial z''_{LP}}\,dz''_{LP} - l_{y''_R} \end{cases} \tag{23}$$

where $v_{x''_L}, v_{y''_L}, v_{x''_R}, v_{y''_R}$ are the residuals corresponding to $x''_L, y''_L, x''_R, y''_R$, and $l_{x''_L}, l_{y''_L}, l_{x''_R}, l_{y''_R}$ are the constant terms computed from the approximate values of $x''_L, y''_L, x''_R, y''_R$.
Two error equations can be listed for each picture point; since the point appears in the two images, 2 × 2 = 4 equations are listed, which can be expressed in matrix form, formula (24):

$$V = AX - L \tag{24}$$

where

$$A = \begin{bmatrix} -\dfrac{\partial x''_L}{\partial x''_{LP}} & -\dfrac{\partial x''_L}{\partial y''_{LP}} & -\dfrac{\partial x''_L}{\partial z''_{LP}} \\[4pt] -\dfrac{\partial y''_L}{\partial x''_{LP}} & -\dfrac{\partial y''_L}{\partial y''_{LP}} & -\dfrac{\partial y''_L}{\partial z''_{LP}} \\[4pt] -\dfrac{\partial x''_R}{\partial x''_{LP}} & -\dfrac{\partial x''_R}{\partial y''_{LP}} & -\dfrac{\partial x''_R}{\partial z''_{LP}} \\[4pt] -\dfrac{\partial y''_R}{\partial x''_{LP}} & -\dfrac{\partial y''_R}{\partial y''_{LP}} & -\dfrac{\partial y''_R}{\partial z''_{LP}} \end{bmatrix}, \quad X = \begin{bmatrix} dx''_{LP} \\ dy''_{LP} \\ dz''_{LP} \end{bmatrix}, \quad V = \begin{bmatrix} v_{x''_L} \\ v_{y''_L} \\ v_{x''_R} \\ v_{y''_R} \end{bmatrix},$$

and $L = [l_{x''_L}, l_{y''_L}, l_{x''_R}, l_{y''_R}]^T$ is the constant term of the error equations.
(3) Solving the error equation (formula (24)) by least squares gives formula (25):

$$X = (A^T A)^{-1} A^T L \tag{25}$$

Applying the corrections X to the approximate values and iterating until they converge yields $(x''_{LP}, y''_{LP}, z''_{LP})$; this point coordinate is the detection point position sought.
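A minimal Python sketch of the forward intersection of formulas (22)-(25) is given below, with numerical partial derivatives standing in for the analytic ones of formula (23); the function names, the observation ordering (x''_L, y''_L, x''_R, y''_R), and the initial estimate P0 are illustrative assumptions.

```python
import numpy as np

def project(P, fL, fR, R, T, xL0=0.0, yL0=0.0, xR0=0.0, yR0=0.0):
    # Formula (22): predicted picture coordinates of the unknown point
    # P = (x''_LP, y''_LP, z''_LP) in both epipolar images; R is the 3x3
    # rotation matrix and T = (X_RL, Y_RL, Z_RL) as a numpy array.
    d = R @ (P - T)  # P expressed relative to the right camera
    return np.array([xL0 - fL * P[0] / P[2],
                     yL0 - fL * P[1] / P[2],
                     xR0 - fR * d[0] / d[2],
                     yR0 - fR * d[1] / d[2]])

def forward_intersect(obs, fL, fR, R, T, P0, iters=10, eps=1e-6):
    # Formulas (23)-(25): linearize (22) about the current estimate, solve
    # the normal equations X = (A^T A)^-1 A^T L for the coordinate
    # corrections, and iterate until the corrections vanish.
    P = np.array(P0, dtype=float)
    for _ in range(iters):
        L = np.asarray(obs, dtype=float) - project(P, fL, fR, R, T)
        A = np.zeros((4, 3))
        for j in range(3):          # numerical partial derivatives
            dP = np.zeros(3)
            dP[j] = eps
            A[:, j] = (project(P + dP, fL, fR, R, T)
                       - project(P - dP, fL, fR, R, T)) / (2 * eps)
        X = np.linalg.solve(A.T @ A, A.T @ L)
        P += X
        if np.linalg.norm(X) < 1e-10:
            break
    return P  # (x''_LP, y''_LP, z''_LP): the detection point position
```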
The above is only an embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any conversion or replacement that a person familiar with the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (3)

1. A fisheye camera based planetary rover detection point positioning method, characterized in that it comprises the following steps:
S1, converting the left and right fisheye camera original images respectively to central-projection images under the central projection model;
S2, epipolar-rectifying the central-projection images and converting them to left and right epipolar images;
S3, extracting feature points from the left epipolar image; on the right epipolar image, for each feature point, searching out the point with the maximum correlation coefficient by the correlation coefficient method as the match point; then using the least squares method to refine the match points extracted by the correlation coefficient method, so that the matching precision of the left and right epipolar images reaches sub-pixel level;
S4, calculating the detection point position by the forward intersection algorithm from the high-precision matching feature points obtained by the least squares adjustment.
2. The fisheye camera based planetary rover detection point positioning method as claimed in claim 1, characterized in that
S1 is specifically: an arbitrary spatial point P with picture coordinates (x, y) in the fisheye image is converted to image coordinates (x', y') under the central projection model; let O' be the optical center, the plane O'X'Y' be the fisheye image plane, f be the focal length of the fisheye camera and O'Z be the principal optical axis, and establish the image-space coordinate system O'-X'Y'Z with O' as the coordinate origin; the line O'P intersects the projection sphere at a point p, and the orthogonal projection of p onto the image plane O'X'Y' gives the picture point p' with coordinates (x, y), so that p has image-space coordinates (x, y, z) on the projection sphere, whose equation is x^2 + y^2 + z^2 = f^2; suppose there is a virtual image plane OXY in the plane Z = f, with coordinate axes parallel to the corresponding axes of the image plane O'X'Y', and let p'' be the intersection of the line O'P with it; clearly p'' is the virtual central-projection picture point of P, and its image coordinates under the central projection are (x', y'); since P, p'' and O' satisfy the collinearity condition of the central projection, the coordinates are related as follows:

$$x' = \frac{f \cdot x}{\sqrt{f^2 - x^2 - y^2}}, \qquad y' = \frac{f \cdot y}{\sqrt{f^2 - x^2 - y^2}} \tag{1}$$

Let the picture point coordinates of the original left fisheye camera image be $(x_L, y_L)$ and of the original right fisheye camera image be $(x_R, y_R)$; by formula (1), their corresponding picture point coordinates on the central-projection images are $(x'_L, y'_L)$ and $(x'_R, y'_R)$ respectively, that is,

$$x'_L = \frac{f \cdot x_L}{\sqrt{f^2 - x_L^2 - y_L^2}}, \qquad y'_L = \frac{f \cdot y_L}{\sqrt{f^2 - x_L^2 - y_L^2}}$$

$$x'_R = \frac{f \cdot x_R}{\sqrt{f^2 - x_R^2 - y_R^2}}, \qquad y'_R = \frac{f \cdot y_R}{\sqrt{f^2 - x_R^2 - y_R^2}}$$
3. The fisheye camera based planetary rover detection point positioning method as claimed in claim 1 or 2, characterized in that S2 is specifically: let the picture point coordinates on the left and right epipolar images be $(x''_L, y''_L)$ and $(x''_R, y''_R)$; their conversion relation to the picture point coordinates $(x'_L, y'_L)$, $(x'_R, y'_R)$ on the central-projection images is:

$$\begin{bmatrix} x''_L \\ y''_L \end{bmatrix} = N_{turn}\begin{bmatrix} x'_L \\ y'_L \end{bmatrix} + \begin{bmatrix} X'_0 \\ Y'_0 \end{bmatrix}, \qquad \begin{bmatrix} x''_R \\ y''_R \end{bmatrix} = N'_{turn}\begin{bmatrix} x'_R \\ y'_R \end{bmatrix} + \begin{bmatrix} X''_0 \\ Y''_0 \end{bmatrix}$$

where $N_{turn}$ is the transition matrix between the left fisheye camera epipolar image and its original image, $N'_{turn}$ is the transition matrix between the right fisheye camera epipolar image and its original image, $[X'_0, Y'_0]^T$ are the translation parameters between the left fisheye camera epipolar image and its original image, and $[X''_0, Y''_0]^T$ are the translation parameters between the right fisheye camera epipolar image and its original image.
CN201410015845.8A 2014-01-10 2014-01-10 Fisheye camera based planetary rover detection point positioning method Pending CN103925919A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410015845.8A CN103925919A (en) 2014-01-10 2014-01-10 Fisheye camera based planetary rover detection point positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410015845.8A CN103925919A (en) 2014-01-10 2014-01-10 Fisheye camera based planetary rover detection point positioning method

Publications (1)

Publication Number Publication Date
CN103925919A true CN103925919A (en) 2014-07-16

Family

ID=51144220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410015845.8A Pending CN103925919A (en) 2014-01-10 2014-01-10 Fisheye camera based planetary rover detection point positioning method

Country Status (1)

Country Link
CN (1) CN103925919A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104713638A (en) * 2015-01-21 2015-06-17 北京科技大学 Cylinder face photometric measurement device
CN108759788A (en) * 2018-03-19 2018-11-06 深圳飞马机器人科技有限公司 Unmanned plane image positioning and orientation method and unmanned plane
CN111174765A (en) * 2020-02-24 2020-05-19 北京航天飞行控制中心 Planet vehicle target detection control method and device based on visual guidance


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4825971B2 (en) * 2005-07-14 2011-11-30 国立大学法人岩手大学 Distance calculation device, distance calculation method, structure analysis device, and structure analysis method.

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Zuxun, Zhang Jianqing: "Digital Photogrammetry" (《数字摄影测量学》), 30 December 1996 *
Wang Baofeng: "Research and Application of Key Vision Measurement Technologies in Spacecraft Rendezvous and Docking and Lunar Rover Navigation", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, 15 June 2008 (2008-06-15) *


Similar Documents

Publication Publication Date Title
Häne et al. 3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection
CN108717712B (en) Visual inertial navigation SLAM method based on ground plane hypothesis
CN103971353B (en) Splicing method for measuring image data with large forgings assisted by lasers
CN108801274B (en) Landmark map generation method integrating binocular vision and differential satellite positioning
CN111862673B (en) Parking lot vehicle self-positioning and map construction method based on top view
CN102735216B (en) CCD stereoscopic camera three-line imagery data adjustment processing method
Gerke Using horizontal and vertical building structure to constrain indirect sensor orientation
CN103278138A (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN103226840B (en) Full-view image splicing and measurement system and method
CN104268876A (en) Camera calibration method based on partitioning
US6175648B1 (en) Process for producing cartographic data by stereo vision
CN111415375B (en) SLAM method based on multi-fisheye camera and double-pinhole projection model
CN102005039A (en) Fish-eye camera stereo vision depth measuring method based on Taylor series model
CN102410831A (en) Design and positioning method of multi-stripe scan imaging model
CN106340045A (en) Calibration optimization method based on binocular stereoscopic vision in three-dimensional face reconstruction
CN105809706A (en) Global calibration method of distributed multi-camera system
CN103927738A (en) Planet vehicle positioning method based on binocular vision images in large-distance mode
Gong et al. DSM generation from high resolution multi-view stereo satellite imagery
CN101354796A (en) Omnidirectional stereo vision three-dimensional rebuilding method based on Taylor series model
Re et al. Evaluation of area-based image matching applied to DTM generation with Hirise images
CN104864852A (en) High resolution satellite attitude fluttering detection method based on intensive control points
CN113947638A (en) Image orthorectification method for fisheye camera
CN103925919A (en) Fisheye camera based planetary rover detection point positioning method
Li et al. Photogrammetric processing of Tianwen-1 HiRIC imagery for precision topographic mapping on Mars
Das et al. Extrinsic calibration and verification of multiple non-overlapping field of view lidar sensors

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140716

WD01 Invention patent application deemed withdrawn after publication