CN107300382A - A kind of monocular visual positioning method for underwater robot - Google Patents
A kind of monocular visual positioning method for underwater robot

- Publication number: CN107300382A (application CN201710499542.1A)
- Authority: CN (China)
- Legal status: Granted
Classifications

- G01C21/005—Navigation; navigational instruments with correlation of navigation data from several sources, e.g. map or contour matching
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
- G01C21/20—Instruments for performing navigational calculations
- G01S13/881—Radar or analogous systems specially adapted for robotics
Abstract
The present invention proposes a monocular visual positioning method for underwater robots. Two markers are first arranged; then, from the collected images, a magnification factor mag is computed by optimization, and multiple samples are collected to establish a functional relation between mag, the heading angle ψ, and the ratio dd. In actual application, the magnification factor is computed from the heading angle and from the ratio dd obtained from the acquired image, and the robot's coordinates are then computed geometrically. While maintaining high positioning accuracy, this method greatly reduces the complexity of monocular visual positioning and improves timeliness, so it can be applied in systems without an accurate mathematical model, in systems with high real-time requirements, or on cheap hardware platforms with limited processing capability.
Description
Technical field
The present invention relates to the field of visual positioning technology, specifically a monocular visual positioning method for underwater robots. It is a simple and practical method for monocular visual positioning in a two-dimensional plane, particularly suited to the docking and recovery operations of underwater robots such as autonomous underwater vehicles (Autonomous Underwater Vehicle, AUV).
Background technology
Determining its own position relative to a specific target is a necessary condition for an underwater robot to successfully execute a task. However, the complex underwater environment poses many problems for accurate robot positioning. Since GPS signals cannot be used for positioning underwater, the method generally used at present for underwater operations is a high-accuracy inertial navigation system fused with an acoustic positioning system. The cost of such a system is very high; moreover, the acoustic positioning cycle is long and the relative error at close range is large, which does not satisfy the requirements of precise underwater operation. As image processing technology matures, vision-based positioning methods are applied to more and more underwater robot tasks. Compared with acoustic positioning systems, vision systems have the advantages of a high update rate, high measurement accuracy, low cost, and high reliability, and they can be applied in unstructured natural environments.

Classified by the number of cameras, visual positioning can be divided into monocular and binocular positioning. Binocular positioning can directly obtain the depth of an object, but the algorithm is relatively complex, its execution cycle is long, and the complexity of the underwater environment affects the precision of image feature extraction and matching. Therefore, binocular visual positioning has not been widely applied to underwater visual positioning tasks.

Many researchers have attempted to improve monocular visual positioning methods to obtain depth information. In general, there are two common approaches: one is the geometric method, the other is the extended Kalman filter (Extended Kalman Filter, EKF) algorithm. Many scholars have studied and improved EKF positioning algorithms in theory. However, this approach requires that the model of the robot be accurately known, which in most cases is difficult to achieve. Moreover, its computation is cumbersome, making it unsuitable for systems with high real-time requirements.
The content of the invention
To solve the problems of the prior art, the present invention proposes a monocular visual positioning method for underwater robots. Using a geometric algorithm, it can both break away from the dependence on a system model and greatly simplify the calculation process; the method is easy to implement and can be used in actual positioning tasks of underwater robots.
In the implementation of the present invention, two markers are needed. The markers are the features that the robot recognizes by vision, and the distance between the two markers should be accurately known before the experiment starts. By recognizing the two markers and performing certain geometric operations, the robot can obtain the coordinates of its body relative to the midpoint of the line connecting the two markers. For ease of recognition and to reduce positioning error, the markers should have sufficient contrast with the background, and the ratio of a marker's physical size to the spacing of the two markers should be small (for example, less than 1/20). The markers used in the inventors' experiments were rod-shaped, with a rod outer diameter of 5 cm and a spacing of 2.0 m between the two rods. The two rods were red and green respectively, giving sufficient contrast with the water background.

The underwater robot collects images with an onboard camera, performs image feature extraction and position calculation with an onboard computer, and uses the position information to execute its task. The present invention uses a positioning algorithm based on the geometric method; to obtain a good positioning effect, the camera should be mounted on the axis of the robot. The underwater robot used in the inventors' experiments is fully actuated, with a maximum speed of 1 m/s. The camera resolution is 780 × 580, the focal length is 5 mm, and the maximum frame rate is 67 fps (5 fps was used in the experiments).
Two coordinate systems, shown in Fig. 1, need to be defined: the global coordinate system E and the carrier coordinate system B. The present invention uses the geometric method to solve the monocular visual positioning problem. Point 4 in Fig. 1 represents the two markers to be recognized. The origin Oe of the global coordinate system is chosen at the midpoint of the line connecting the two markers; the origin Ob of the carrier coordinate system is chosen at the robot's center of gravity. The positive directions of the coordinate axes are shown in Fig. 1. ψ denotes the heading angle of the robot in the global coordinate system, defined as the angle between the Xe axis and the Xb axis, with positive direction as shown in Fig. 1.
Fig. 2 shows the geometric principle of the method. The position of the robot is represented by the position of the monocular camera, denoted by point A. φ denotes the half angle of view of the camera (that is, ∠DAC); the camera's angle of view can be obtained by consulting the data sheet or by camera calibration. ψ denotes the heading angle of the robot in the global coordinate system, which can be measured by the navigation devices carried by the robot. Point M lies on the Ye axis and satisfies AM ⊥ Ye. Point D is the intersection of the angle bisector AD of the camera's field of view with the Ye axis, and point C is where the edge ray of the camera's half field of view meets the Ye axis. Points P and Q represent the two markers, and the coordinates of P and Q in the global coordinate system are known. The line through P′Q′ represents the imaging plane, with P′Q′ ⊥ AD. Points D′, C′, P′, Q′, O′ denote the projections of D, C, P, Q, O on the imaging plane, respectively. Note that, under the coordinate system of Fig. 2, when ψ > 0, point C lies on the negative Ye semi-axis and Q coincides with Q′; when ψ < 0, point C lies on the positive Ye semi-axis and P coincides with P′. In summary, the monocular visual positioning problem based on the geometric method can be stated as: given the coordinates of P and Q in the global coordinate system, the coordinates of P′ and Q′ in the image, and the current heading angle ψ of the robot, find the coordinates of A in the global coordinate system.
The basic calculation idea is to use the correspondence between the known physical length PQ and the length P′Q′ in the image to find the physical lengths corresponding to other line segments in the image. Specifically, from the conversion between the physical length PQ and the image length P′Q′, a conversion factor λ linking real distances and image distances can be calculated. Further, by measuring the lengths of other line segments in the image, the coordinates of A in the global coordinate system can be derived.
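As an illustration only (the patent itself gives no code), the conversion-factor idea above can be sketched in Python; the function name and the sample pixel values are ours, not the patent's:

```python
# Sketch of the conversion-factor idea: the known marker spacing PQ (metres)
# and its measured pixel length P'Q' give a factor lambda that converts any
# other pixel length in the image to an approximate real length.
# The sample values below are illustrative, not the patent's measurements.

def scale_factor(pq_metres: float, pq_pixels: float) -> float:
    """Conversion factor lambda from image pixels to metres."""
    return pq_metres / pq_pixels

lam = scale_factor(2.0, 400.0)   # 2.0 m marker spacing imaged as 400 px
oc_pixels = 150.0                # another segment measured in the image
oc_metres = lam * oc_pixels      # its approximate real length
print(lam, oc_metres)            # -> 0.005 0.75
```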
However, experiments show that the positioning method based on the geometric calculation alone has a relatively large positioning error (≥ 10%). Part of the error comes from rounding in the algorithm, but the main source is the nonlinearity of monocular camera imaging: two line segments of equal length in real space may not have equal images in the imaging plane. To simplify the calculation, the monocular visual positioning method described above uses a single conversion factor λ to link real lengths and image lengths, and the calculation of λ can rely only on the known correspondence between the image length P′Q′ and the actual length PQ. Because of the nonlinear characteristics of monocular imaging, the conversion factor from PQ to P′Q′ cannot represent the average conversion factor from actual length to image length, and it is often biased high or low by other factors. This significantly affects the precision of the solution. At the same time, it suggests that the positioning error can be reduced by compensating the conversion factor λ. Specifically, consider λ* = mag × λ, where mag is a magnification factor for the conversion factor λ. Analysis shows that the magnification factor mag can be treated as a function of ψ and dd. Here ψ denotes the heading angle, and dd denotes the ratio, in the horizontal direction, of the pixel offset l1 between the image midpoint and the midpoint of the marker line to half the image pixel width l2, as shown in Fig. 3: dd = l1/l2.
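The ratio dd described above can be sketched as follows; the function name, the sign convention, and the sample pixel coordinates are our assumptions for illustration:

```python
# Sketch of the ratio dd = l1 / l2 from Fig. 3: l1 is the horizontal pixel
# offset between the image centre and the midpoint of the two marker
# projections, l2 is half the image width. Names and the sign convention
# (positive when the marker midpoint is right of centre) are our assumptions.

def compute_dd(marker_u_left: float, marker_u_right: float,
               image_width: int) -> float:
    """Signed ratio dd in [-1, 1]."""
    centre_u = image_width / 2.0
    midpoint_u = (marker_u_left + marker_u_right) / 2.0
    l1 = midpoint_u - centre_u   # horizontal offset of marker midpoint, px
    l2 = image_width / 2.0       # half the image width, px
    return l1 / l2

# 780-px-wide frame, the resolution quoted in the description
print(compute_dd(300.0, 420.0, 780))   # midpoint at 360 px -> about -0.0769
```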
The functional expression of the magnification factor mag can be obtained as follows. First, arbitrarily select a point in the global coordinate system as the robot's current position and specify a value of the heading angle ψ (ensuring that the robot can see both markers from that point); several such data points are selected as samples for the subsequent optimization. Second, calculate the corresponding λ* at each point: compute the robot's position by the geometric method above, and keep adjusting the value of mag until the difference between the computed position and the true position is minimal; that value of mag is the optimal mag for this group of data. In the experiments, a MATLAB program was written to perform these operations, using the sum of squared errors of the robot's x and y coordinates in the global coordinate system as the criterion for the optimal mag. Third, use MATLAB's built-in cftool toolbox to analyze the relation between ψ, dd, and the optimal mag at each point, choose a fitting method, and obtain the functional expression of the magnification factor mag. Polynomial fitting can be used, giving mag as a quadratic function of dd and a linear function of |ψ|; the fitting result is shown in Fig. 4.
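The fitting step can be illustrated with an ordinary least-squares fit of the stated model class (quadratic in dd, linear in |ψ|). The patent used MATLAB's cftool; the NumPy version below, with synthetic sample data and made-up coefficients, is only an equivalent sketch:

```python
# Sketch of the fitting step: the text fits mag as a quadratic function of dd
# and a linear function of |psi|. An ordinary least-squares fit of
#     mag ~ c0 + c1*dd + c2*dd**2 + c3*|psi|
# matches that model class. The patent used MATLAB's cftool; this NumPy
# version with synthetic samples is only an equivalent illustration.
import numpy as np

def fit_mag(psi, dd, mag):
    """Least-squares coefficients (c0, c1, c2, c3) of the assumed model."""
    A = np.column_stack([np.ones_like(dd), dd, dd ** 2, np.abs(psi)])
    coeffs, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return coeffs

def eval_mag(coeffs, psi, dd):
    c0, c1, c2, c3 = coeffs
    return c0 + c1 * dd + c2 * dd ** 2 + c3 * abs(psi)

# Noise-free synthetic calibration samples from a known model, to show that
# the fit recovers the coefficients (the coefficient values are made up).
rng = np.random.default_rng(0)
psi = rng.uniform(-0.5, 0.5, 50)
dd = rng.uniform(-1.0, 1.0, 50)
mag = 1.02 + 0.05 * dd - 0.1 * dd ** 2 + 0.2 * np.abs(psi)
c = fit_mag(psi, dd, mag)
print(np.round(c, 3))                  # coefficients ~ (1.02, 0.05, -0.1, 0.2)
print(eval_mag(c, 0.3, 0.2))           # mag at psi = 0.3, dd = 0.2
```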
When solving the positioning problem, the compensated conversion factor λ* is used in the calculation instead of λ. In this way, without increasing the complexity of the original solution process, the method uses the idea of compensation to effectively improve the precision of the positioning result.
Based on the above principle, the technical scheme of the present invention is:

A monocular visual positioning method for underwater robots, characterised by comprising the following steps:
Step 1: Arrange two markers in the underwater environment; the markers can be captured by the camera carried by the underwater robot and clearly recognized. Take the midpoint of the line connecting the two markers as the origin Oe of the global coordinate system, the line connecting the two markers as the Ye axis of the global coordinate system, and the direction in the horizontal plane perpendicular to the Ye axis as the Xe axis of the global coordinate system.
Step 2: Calculate the magnification factor mag by the following procedure:

Step 2.1: Arbitrarily select a point with known global coordinates in the global coordinate system as the robot's current position A, and specify a value of the heading angle ψ that ensures the camera carried by the robot can capture both markers from that point.

Step 2.2: Collect the image captured by the camera, and set an initial value of the magnification factor mag.

Step 2.3: Calculate the conversion factor λ from PQ to P′Q′ and correct it with the magnification factor mag to obtain λ*:

λ = PQ / P′Q′, λ* = mag × λ

where points P and Q represent the two markers, whose coordinates in the global coordinate system are known, and P′ and Q′ respectively denote the projections of P and Q on the imaging plane, whose coordinates in the image are known.

Step 2.4: Calculate the lengths of line segments DC and AC according to the formulas

DC = λ* × D′C′, AC = DC·cos ψ / sin φ

where C is the point where the edge of the camera's half field of view meets the Ye axis of the global coordinate system, D is the intersection of the angle bisector AD of the camera's field of view with the Ye axis, φ denotes the half angle of view of the camera, and D′ and C′ respectively denote the projections of D and C on the imaging plane.

Step 2.5: Calculate the lengths of line segments AM and CM according to the formulas

AM = AC·cos(φ + |ψ|), CM = AC·sin(φ + |ψ|)

where M lies on the Ye axis and satisfies AM ⊥ Ye.

Step 2.6: Calculate the lengths of line segments OC and OM according to the formulas

OC = λ* × O′C′, OM = |CM − OC|

where O′ denotes the projection of O on the imaging plane.

Step 2.7: Calculate the polar coordinates of position A according to the formulas

OA = √(AM² + OM²), θ = arctan(OM / AM)

If ψ > 0 and OC < CM, or ψ < 0 and OC > CM, correct the value of θ as θ = −θ. Then according to the formulas

x = −OA·cos θ
y = −OA·sin θ

calculate the coordinates of position A in the global coordinate system.

Step 2.8: Judge whether the error between the A coordinates calculated in step 2.7 and the A coordinates set in step 2.1 meets the set requirement. If it does, one data point consisting of mag, dd, and ψ is obtained; proceed to step 3. If not, modify the magnification factor mag and return to step 2.3. Here dd denotes the ratio, in the horizontal direction, of the pixel offset l1 between the image midpoint and the midpoint of the marker line to half the image pixel width l2.
Step 3: Again arbitrarily select a point with known global coordinates in the global coordinate system as the robot's current position A, specify a value of the heading angle ψ that ensures the camera carried by the robot can capture both markers from that point, and repeat step 2 to obtain a new data point. After the set number of data points has been obtained, proceed to step 4.

Step 4: From the groups of data points obtained, fit a functional relation with mag as the dependent variable and dd and ψ as the independent variables.

Step 5: During actual positioning, collect the camera image, obtain the robot's heading angle ψ in the global coordinate system and the current dd, and obtain the current magnification factor mag from the fitted function of step 4.
Step 6: Using the camera image collected in step 5 and the magnification factor mag obtained in step 5, calculate the robot's actual coordinates:

Step 6.1: Calculate the conversion factor λ from PQ to P′Q′ and correct it with the magnification factor mag to obtain λ*:

λ = PQ / P′Q′, λ* = mag × λ

where points P and Q represent the two markers, whose coordinates in the global coordinate system are known, and P′ and Q′ respectively denote the projections of P and Q on the imaging plane, whose coordinates in the image are known.

Step 6.2: Calculate the lengths of line segments DC and AC according to the formulas

DC = λ* × D′C′, AC = DC·cos ψ / sin φ

where C is the point where the edge of the camera's half field of view meets the Ye axis of the global coordinate system, D is the intersection of the angle bisector AD of the camera's field of view with the Ye axis, φ denotes the half angle of view of the camera, and D′ and C′ respectively denote the projections of D and C on the imaging plane.

Step 6.3: Calculate the lengths of line segments AM and CM according to the formulas

AM = AC·cos(φ + |ψ|), CM = AC·sin(φ + |ψ|)

where M lies on the Ye axis and satisfies AM ⊥ Ye.

Step 6.4: Calculate the lengths of line segments OC and OM according to the formulas

OC = λ* × O′C′, OM = |CM − OC|

where O′ denotes the projection of O on the imaging plane.

Step 6.5: Calculate the polar coordinates of position A according to the formulas

OA = √(AM² + OM²), θ = arctan(OM / AM)

If ψ > 0 and OC < CM, or ψ < 0 and OC > CM, correct the value of θ as θ = −θ. Then according to the formulas

x = −OA·cos θ
y = −OA·sin θ

calculate the coordinates of robot position A in the global coordinate system.
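For illustration, the pipeline of steps 6.1-6.5 can be sketched in Python. Since the patent's formula images are not available in this text, the trigonometric expressions below are a reconstruction from the geometric definitions of Fig. 2 and should be treated as an interpretation, not the patent's verified formulas:

```python
# The formula images of steps 6.1-6.5 are not available in this text, so the
# trigonometry below is reconstructed from the Fig. 2 definitions (A camera
# position, phi half angle of view, D foot of the view axis on Ye, C edge of
# the half field of view on Ye, O global origin, AM perpendicular to Ye).
# Treat it as an interpretation of the pipeline, not verified patent formulas.
import math

def localize(pq_m, pq_px, dc_px, oc_px, psi, phi, mag):
    """Return (x, y) of camera position A in the global frame."""
    lam_star = mag * pq_m / pq_px              # step 6.1: corrected factor
    dc = lam_star * dc_px                      # step 6.2: DC in metres
    ac = dc * math.cos(psi) / math.sin(phi)    # law of sines in triangle ACD
    am = ac * math.cos(phi + abs(psi))         # step 6.3: right triangle ACM
    cm = ac * math.sin(phi + abs(psi))
    oc = lam_star * oc_px                      # step 6.4
    om = abs(cm - oc)
    oa = math.hypot(am, om)                    # step 6.5: polar coordinates
    theta = math.atan2(om, am)
    if (psi > 0 and oc < cm) or (psi < 0 and oc > cm):
        theta = -theta                         # sign correction from the text
    return -oa * math.cos(theta), -oa * math.sin(theta)

# Consistency check at psi = 0: camera 2 m straight in front of the marker
# midpoint, half angle of view 0.5 rad, 2 m spacing imaged as 400 px.
lam = 2.0 / 400.0
dc_px = 2.0 * math.tan(0.5) / lam      # pixel length of DC for this pose
x, y = localize(2.0, 400.0, dc_px, dc_px, 0.0, 0.5, 1.0)
print(x, y)                            # x close to -2.0, y close to 0.0
```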
Beneficial effect
The beneficial effect of the invention is a simple, practical, and relatively accurate monocular visual positioning method. The method is widely applicable: by arranging the positions of the two markers appropriately, it can be applied to underwater robot tasks such as fixed-point recovery, target tracking, and dynamic positioning. While maintaining high positioning accuracy, the method greatly reduces the complexity of monocular visual positioning and improves timeliness, so it can be applied in systems without an accurate mathematical model, in systems with high real-time requirements, or on cheap hardware platforms with limited processing capability.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from the description, or will be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments in combination with the accompanying drawings, in which:

Fig. 1 is a schematic diagram of the global coordinate system E and the carrier coordinate system B.

Fig. 2 is a schematic diagram of the geometric method.

Fig. 3 is a schematic diagram of the horizontal offset between the image midpoint and the midpoint of the marker line.

Fig. 4 is the fitting result for the magnification factor mag.

In the figures: 1. robot; 2. carrier coordinate system origin; 3. monocular camera; 4. feature points; 5. global coordinate system origin.
Embodiment
Embodiments of the invention are described in detail below. The embodiments are exemplary and intended to explain the present invention; they should not be construed as limiting the invention.

To solve the problems of the prior art, the present invention proposes a monocular visual positioning method for underwater robots. Using a geometric algorithm, it can both break away from the dependence on a system model and greatly simplify the calculation process; the method is easy to implement and can be used in actual positioning tasks of underwater robots.
1. System composition

The monocular visual positioning system for underwater robots of the present invention mainly consists of two parts: two markers and the underwater robot. The markers are the features that the robot recognizes by vision, and the distance between the two markers should be accurately known before the experiment starts. By recognizing the two markers and performing certain geometric operations, the robot can obtain the coordinates of its body relative to the midpoint of the line connecting the two markers. For ease of recognition and to reduce positioning error, the markers should have sufficient contrast with the background, and the ratio of a marker's physical size to the spacing of the two markers should be small (for example, less than 1/20). The markers used in the inventors' experiments were rod-shaped, with a rod outer diameter of 5 cm and a spacing of 2.0 m between the two rods. The two rods were red and green respectively, giving sufficient contrast with the water background.

The underwater robot collects images with an onboard camera, performs image feature extraction and position calculation with an onboard computer, and uses the position information to execute its task. The present invention uses a positioning algorithm based on the geometric method; to obtain a good positioning effect, the camera should be mounted on the axis of the robot. The underwater robot used in the inventors' experiments is fully actuated, with a maximum speed of 1 m/s. The camera resolution is 780 × 580, the focal length is 5 mm, and the maximum frame rate is 67 fps (5 fps was used in the experiments).
2. The monocular visual positioning model based on the geometric method

Two coordinate systems, shown in Fig. 1, need to be defined: the global coordinate system E and the carrier coordinate system B. The present invention uses the geometric method to solve the monocular visual positioning problem. Point 4 in Fig. 1 represents the two markers to be recognized. The origin Oe of the global coordinate system is chosen at the midpoint of the line connecting the two markers; the origin Ob of the carrier coordinate system is chosen at the robot's center of gravity. The positive directions of the coordinate axes are shown in Fig. 1. ψ denotes the heading angle of the robot in the global coordinate system, defined as the angle between the Xe axis and the Xb axis, with positive direction as shown in Fig. 1.

Fig. 2 shows the geometric principle of the method. The position of the robot is represented by the position of the monocular camera, denoted by point A. φ denotes the half angle of view of the camera (that is, ∠DAC); the camera's angle of view can be obtained by consulting the data sheet or by camera calibration. ψ denotes the heading angle of the robot in the global coordinate system, which can be measured by the navigation devices carried by the robot. Point M lies on the Ye axis and satisfies AM ⊥ Ye. Point D is the intersection of the angle bisector AD of the camera's field of view with the Ye axis, and point C is where the edge ray of the camera's half field of view meets the Ye axis. Points P and Q represent the two markers, and the coordinates of P and Q in the global coordinate system are known. The line through P′Q′ represents the imaging plane, with P′Q′ ⊥ AD. Points D′, C′, P′, Q′, O′ denote the projections of D, C, P, Q, O on the imaging plane, respectively. Note that, under the coordinate system of Fig. 2, when ψ > 0, point C lies on the negative Ye semi-axis and Q coincides with Q′; when ψ < 0, point C lies on the positive Ye semi-axis and P coincides with P′. In summary, the monocular visual positioning problem based on the geometric method can be stated as: given the coordinates of P and Q in the global coordinate system, the coordinates of P′ and Q′ in the image, and the current heading angle ψ of the robot, find the coordinates of A in the global coordinate system.

The basic calculation idea is to use the correspondence between the known physical length PQ and the length P′Q′ in the image to find the physical lengths corresponding to other line segments in the image. Specifically, from the conversion between the physical length PQ and the image length P′Q′, a conversion factor λ linking real distances and image distances can be calculated. Further, by measuring the lengths of other line segments in the image, the coordinates of A in the global coordinate system can be derived.
3. Error compensation idea and method

Experiments show that the positioning method based on the geometric calculation alone has a relatively large positioning error (≥ 10%). Part of the error comes from rounding in the algorithm, but the main source is the nonlinearity of monocular camera imaging: two line segments of equal length in real space may not have equal images in the imaging plane. To simplify the calculation, the monocular visual positioning method described above uses a single conversion factor λ to link real lengths and image lengths, and the calculation of λ can rely only on the known correspondence between the image length P′Q′ and the actual length PQ. Because of the nonlinear characteristics of monocular imaging, the conversion factor from PQ to P′Q′ cannot represent the average conversion factor from actual length to image length, and it is often biased high or low by other factors. This significantly affects the precision of the solution. At the same time, it suggests that the positioning error can be reduced by compensating the conversion factor λ. Specifically, consider λ* = mag × λ, where mag is a magnification factor for the conversion factor λ. Analysis shows that the magnification factor mag can be treated as a function of ψ and dd. Here ψ denotes the heading angle, and dd denotes the ratio, in the horizontal direction, of the pixel offset l1 between the image midpoint and the midpoint of the marker line to half the image pixel width l2, as shown in Fig. 3: dd = l1/l2.

The functional expression of the magnification factor mag can be obtained as follows. First, arbitrarily select a point in the global coordinate system as the robot's current position and specify a value of the heading angle ψ (ensuring that the robot can see both markers from that point); several such data points are selected as samples for the subsequent optimization. Second, calculate the corresponding λ* at each point: compute the robot's position by the geometric method above, and keep adjusting the value of mag until the difference between the computed position and the true position is minimal; that value of mag is the optimal mag for this group of data. In the experiments, a MATLAB program was written to perform these operations, using the sum of squared errors of the robot's x and y coordinates in the global coordinate system as the criterion for the optimal mag. Third, use MATLAB's built-in cftool toolbox to analyze the relation between ψ, dd, and the optimal mag at each point, choose a fitting method, and obtain the functional expression of the magnification factor mag. Polynomial fitting can be used, giving mag as a quadratic function of dd and a linear function of |ψ|; the fitting result is shown in Fig. 4.

When solving the positioning problem, the compensated conversion factor λ* is used in the calculation instead of λ. In this way, without increasing the complexity of the original solution process, the method uses the idea of compensation to effectively improve the precision of the positioning result.
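The per-sample optimisation of mag described above (adjust mag until the computed position best matches the true one) can be illustrated by a simple grid search; the error function below is synthetic and stands in for the real geometric position error:

```python
# Sketch of the per-sample optimisation: mag is varied until the squared
# error between computed and true coordinates is minimal. The patent used a
# MATLAB script; this grid search is our minimal stand-in, and the error
# function below is synthetic (its optimum at 1.07 is made up).

def best_mag(position_error, candidates):
    """Return the candidate mag minimising position_error(mag)."""
    return min(candidates, key=position_error)

err = lambda mag: (mag - 1.07) ** 2                       # synthetic error
grid = [round(0.90 + 0.01 * k, 2) for k in range(41)]     # 0.90 .. 1.30
print(best_mag(err, grid))                                # -> 1.07
```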
The monocular visual positioning method for underwater robots of the present invention comprises the following steps:

Step 1: Arrange two markers in the underwater environment; the markers can be captured by the camera carried by the underwater robot and clearly recognized. Take the midpoint of the line connecting the two markers as the origin Oe of the global coordinate system, the line connecting the two markers as the Ye axis of the global coordinate system, and the direction in the horizontal plane perpendicular to the Ye axis as the Xe axis of the global coordinate system.
Step 2:Multiplication factor mag is calculated by procedure below:
Step 2.1:Arbitrarily select to put known to a world coordinates in global coordinate system to be presently in as robot
Position A, and specify course angle ψ value to ensure that the camera that robot is carried can shoot two marks in the point;
Step 2.2:Image shot by camera is gathered, and multiplication factor mag initial values are set;
Step 2.3:' Q ' conversion factor λ is calculated from PQ to P, λ is obtained after being corrected with multiplication factor mag*
Wherein P points and Q points represent two marks, and coordinate under global coordinate system of P, Q, it is known that P ' points, Q ' minutes
Not Biao Shi the projection of P points, Q points on imaging plane, known to P ' points, Q ' coordinates in the picture;
Step 2.4:According to formula
Line segment DC and AC length are calculated, wherein C points are camera half angle of view edge in global coordinate system YePoint on axle, D
Point represents the angular bisector AD and Y of camera perspectiveeThe intersection point of axle,Represent the half angle of view size of camera, D ' points, C ' difference tables
Show the projection of D points, C points on imaging plane;
Step 2.5: According to the corresponding formula, calculate the lengths of segments AM and CM, where M lies on the Ye axis and AM ⊥ Ye;
Step 2.6: According to the formulas
OC = O'C' / λ*
OM = |CM − OC|
calculate the lengths of segments OC and OM, where O' denotes the projection of O on the imaging plane;
Step 2.7: According to the formulas
OA = √(AM² + OM²)
θ = tan⁻¹(OM / AM)
calculate the polar coordinates of position A; if ψ > 0 and OC < CM, or ψ < 0 and OC > CM, correct the value of θ as θ = −θ. Then, according to the formulas
x = −OA cos θ
y = −OA sin θ
calculate the coordinates of position A in the global coordinate system;
Step 2.8: Judge whether the error between the position coordinates of A calculated in step 2.7 and the position coordinates of A set in step 2.1 meets the set requirement. If it does, one data point consisting of mag, dd and ψ is obtained; proceed to step 3. If it does not, modify the magnification factor mag and return to step 2.3. Here dd denotes the ratio of l1, the pixel length by which the image center deviates in the horizontal direction from the midpoint of the marker line, to l2, half the pixel width of the image;
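The calibration loop of steps 2.2 to 2.8 can be sketched as follows; `locate` stands for the geometric solution of steps 2.3 to 2.7 and is assumed to be given, and the monotone increment of mag is an illustrative update rule not prescribed by the text.

```python
# Steps 2.2-2.8 sketch: adjust mag until the computed position of the known
# point A matches its true coordinates within the set requirement `tol`.
def calibrate_mag(true_xy, locate, mag0=1.0, tol=0.01, step=0.001, max_iter=10000):
    mag = mag0                              # step 2.2: initial value of mag
    for _ in range(max_iter):
        x, y = locate(mag)                  # steps 2.3-2.7: position from the image
        err = ((x - true_xy[0])**2 + (y - true_xy[1])**2) ** 0.5
        if err <= tol:                      # step 2.8: requirement met
            return mag                      # record the data point (mag, dd, psi)
        mag += step                         # otherwise modify mag, return to 2.3
    raise RuntimeError("calibration did not converge")
```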
Step 3: Again arbitrarily select a point with known world coordinates in the global coordinate system as the current position A of the robot, specify a heading angle ψ such that the camera carried by the robot can photograph both markers from that point, and repeat step 2 to obtain a new data point. After the set number of data points has been obtained, proceed to step 4;
Step 4: From the groups of data points obtained, fit a functional relation with mag as the dependent variable and dd and ψ as the independent variables;
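Step 4 can be realized, for example, by an ordinary least-squares fit over the quadratic basis used in the embodiments below; the choice of basis and the use of NumPy are assumptions of this sketch, not prescribed by the text.

```python
import numpy as np

# Step 4 sketch: fit mag = c0 + c1*dd + c2*|psi| + c3*dd^2 + c4*dd*|psi|
# from the recorded calibration data points (mag, dd, psi).
def fit_mag(dd, psi, mag):
    dd, psi, mag = map(np.asarray, (dd, psi, mag))
    A = np.column_stack([np.ones_like(dd), dd, np.abs(psi), dd**2, dd * np.abs(psi)])
    coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return coef  # coefficients [c0, c1, c2, c3, c4]
```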
Step 5: During actual positioning, capture the camera image, and obtain the heading angle ψ of the robot in the global coordinate system and the value of dd in the current state; from the fitting function obtained in step 4, obtain the magnification factor mag in the current state;
Step 6: From the camera image captured in step 5 and the magnification factor mag in the current state obtained in step 5, calculate the actual coordinates of the robot:
Step 6.1: Calculate the conversion factor λ from PQ to P'Q', and correct it with the magnification factor mag to obtain λ*, where points P and Q denote the two markers, whose coordinates in the global coordinate system are known, and P' and Q' denote the projections of P and Q on the imaging plane, whose coordinates in the image are known;
Step 6.2: According to the formula
DC = D'C' / λ*
calculate the lengths of segments DC and AC, where C is the point on the Ye axis of the global coordinate system at the edge of the camera's half field of view, D is the intersection of the angular bisector AD of the camera's field of view with the Ye axis, the half field-of-view angle of the camera is known, and D' and C' denote the projections of D and C on the imaging plane;
Step 6.3: According to the corresponding formula, calculate the lengths of segments AM and CM, where M lies on the Ye axis and AM ⊥ Ye;
Step 6.4: According to the formulas
OC = O'C' / λ*
OM = |CM − OC|
calculate the lengths of segments OC and OM, where O' denotes the projection of O on the imaging plane;
Step 6.5: According to the formulas
OA = √(AM² + OM²)
θ = tan⁻¹(OM / AM)
calculate the polar coordinates of position A; if ψ > 0 and OC < CM, or ψ < 0 and OC > CM, correct the value of θ as θ = −θ. Then, according to the formulas
x = −OA cos θ
y = −OA sin θ
calculate the coordinates of the robot at position A in the global coordinate system.
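Steps 6.4 and 6.5 above can be sketched as follows, taking the segment lengths AM, CM and OC from the earlier geometric steps as given; the function interface is an assumption of this sketch.

```python
from math import atan, cos, sin, sqrt, degrees, radians

# Steps 6.4-6.5 sketch: from the segment lengths AM, CM, OC and the heading
# angle psi (degrees), compute the robot coordinates in the global frame.
def solve_position(AM, CM, OC, psi_deg):
    OM = abs(CM - OC)                       # step 6.4: OM = |CM - OC|
    OA = sqrt(AM**2 + OM**2)                # step 6.5: polar radius OA
    theta = degrees(atan(OM / AM))          # polar angle theta = atan(OM/AM)
    if (psi_deg > 0 and OC < CM) or (psi_deg < 0 and OC > CM):
        theta = -theta                      # sign correction of theta
    x = -OA * cos(radians(theta))           # x = -OA cos(theta)
    y = -OA * sin(radians(theta))           # y = -OA sin(theta)
    return x, y
```

Fed with the rounded intermediate values of Embodiment 1 below (AM = 6.97, CM = 4.36, OC = 2.92, ψ = 5°), it returns approximately (−6.97, 1.44), matching the embodiment's (−6.97, 1.43) up to the rounding of OM.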
Based on the above technical solution, two embodiments are given below. The coordinate system definitions, parameter definitions and positive-direction definitions all refer to Fig. 1 and Fig. 2. It is known that PQ = 2.0 m and that the pixel width of the camera is 800 pixels; these parameters may differ depending on the specific experimental conditions.
The fitting function of the magnification factor mag is:
mag = 1.013 + 1.105×10⁻² dd + 1.175×10⁻² |ψ| − 7.832×10⁻² dd² − 2.426×10⁻² dd|ψ|
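Evaluated directly, the fitting function above reproduces the mag values used in both embodiments; the code is a plain transcription of the formula, and the function name is this sketch's own.

```python
# Direct evaluation of the fitted magnification function given above.
def mag_fit(dd, psi_deg):
    a = abs(psi_deg)
    return (1.013 + 1.105e-2 * dd + 1.175e-2 * a
            - 7.832e-2 * dd**2 - 2.426e-2 * dd * a)
```

For example, mag_fit(0.411, 5) ≈ 1.013 (Embodiment 1) and mag_fit(0.109, 10) ≈ 1.104 (Embodiment 2).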
【Embodiment 1】
Suppose that in the image currently captured by the camera, P' = 156 pixel and Q' = 315 pixel, and the inertial device carried on the robot measures a current heading angle ψ = 5°. The coordinates of the robot in the global coordinate system are calculated below.
First step: dd = 0.411; from the fitting function, mag = 1.013;
Second step: λ* = 80.55;
Third step: DC = 4.97, AC = 8.22;
Fourth step: AM = 6.97, CM = 4.36;
Fifth step: OC = 2.92, OM = 1.43;
Sixth step: OA = 7.12, θ = 11.61°;
Seventh step: since ψ > 0 and OC < CM, θ is corrected to θ = −11.61°;
Eighth step: the coordinates of the robot in the global coordinate system are x = −6.97, y = 1.43.
Error analysis: In this example, the true coordinates of the robot in the global coordinate system are (−7.0, 1.5); the relative positioning error is therefore (0.42%, 4.7%).
【Embodiment 2】
Suppose that in the image currently captured by the camera, P' = 251 pixel and Q' = 462 pixel, and the inertial device carried on the robot measures a current heading angle ψ = 10°. The coordinates of the robot in the global coordinate system are calculated below.
First step: dd = 0.109; from the fitting function, mag = 1.104;
Second step: λ* = 116.51;
Third step: DC = 3.43, AC = 5.62;
Fourth step: AM = 5.01, CM = 2.55;
Fifth step: OC = 3.06, OM = 0.51;
Sixth step: OA = 5.03, θ = 5.81°;
Seventh step: since ψ > 0 and OC > CM, no correction of θ is required;
Eighth step: the coordinates of the robot in the global coordinate system are x = −5.01, y = −0.51.
Error analysis: In this example, the true coordinates of the robot in the global coordinate system are (−5.0, −0.5); the relative positioning error is therefore (0.2%, 2%).
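The sixth to eighth steps of this embodiment can be checked numerically from the printed values; small discrepancies in the last digit stem from the rounding of intermediate results in the text.

```python
from math import atan, cos, sin, sqrt, degrees, radians

# Recompute steps 6-8 of Embodiment 2 from the printed values AM = 5.01, OM = 0.51.
AM, OM = 5.01, 0.51
OA = sqrt(AM**2 + OM**2)          # sixth step: OA is approximately 5.03
theta = degrees(atan(OM / AM))    # sixth step: theta is approximately 5.81 degrees
                                  # seventh step: psi > 0 and OC > CM, no correction
x = -OA * cos(radians(theta))     # eighth step: x is approximately -5.01
y = -OA * sin(radians(theta))     # eighth step: y is approximately -0.51
```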
Although embodiments of the invention have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; one of ordinary skill in the art may, without departing from the principle and purpose of the present invention, make changes, modifications, substitutions and variations to the above embodiments within the scope of the invention.
Claims (1)
1. A monocular visual positioning method for an underwater robot, characterized by comprising the following steps:
Step 1: Arrange two markers in the underwater environment such that they can be photographed by the camera carried by the underwater robot and clearly recognized. Take the midpoint of the line connecting the two markers as the origin Oe of the global coordinate system, the marker line itself as the Ye axis of the global coordinate system, and the direction in the horizontal plane perpendicular to the Ye axis as the Xe axis of the global coordinate system;
Step 2: Calculate the magnification factor mag by the following procedure:
Step 2.1: Arbitrarily select a point with known world coordinates in the global coordinate system as the current position A of the robot, and specify a heading angle ψ such that the camera carried by the robot can photograph both markers from that point;
Step 2.2: Capture the image taken by the camera, and set an initial value for the magnification factor mag;
Step 2.3: Calculate the conversion factor λ from PQ to P'Q', and correct it with the magnification factor mag to obtain λ*:
λ = P'Q' / PQ
λ* = mag × P'Q' / PQ
where points P and Q denote the two markers, whose coordinates in the global coordinate system are known, and P' and Q' denote the projections of P and Q on the imaging plane, whose coordinates in the image are known;
Step 2.4: According to the formula
DC = D'C' / λ*
calculate the lengths of segments DC and AC, where C is the point on the Ye axis of the global coordinate system at the edge of the camera's half field of view, D is the intersection of the angular bisector AD of the camera's field of view with the Ye axis, the half field-of-view angle of the camera is known, and D' and C' denote the projections of D and C on the imaging plane;
Step 2.5: According to the corresponding formula, calculate the lengths of segments AM and CM, where M lies on the Ye axis and AM ⊥ Ye;
Step 2.6: According to the formulas
OC = O'C' / λ*
OM = |CM − OC|
calculate the lengths of segments OC and OM, where O' denotes the projection of O on the imaging plane;
Step 2.7: According to the formulas
OA = √(AM² + OM²)
θ = tan⁻¹(OM / AM)
calculate the polar coordinates of position A; if ψ > 0 and OC < CM, or ψ < 0 and OC > CM, correct the value of θ as θ = −θ. Then, according to the formulas
x = −OA cos θ
y = −OA sin θ
calculate the coordinates of position A in the global coordinate system;
Step 2.8: Judge whether the error between the position coordinates of A calculated in step 2.7 and the position coordinates of A set in step 2.1 meets the set requirement. If it does, one data point consisting of mag, dd and ψ is obtained; proceed to step 3. If it does not, modify the magnification factor mag and return to step 2.3. Here dd denotes the ratio of l1, the pixel length by which the image center deviates in the horizontal direction from the midpoint of the marker line, to l2, half the pixel width of the image;
Step 3: Again arbitrarily select a point with known world coordinates in the global coordinate system as the current position A of the robot, specify a heading angle ψ such that the camera carried by the robot can photograph both markers from that point, and repeat step 2 to obtain a new data point. After the set number of data points has been obtained, proceed to step 4;
Step 4: From the groups of data points obtained, fit a functional relation with mag as the dependent variable and dd and ψ as the independent variables;
Step 5: During actual positioning, capture the camera image, and obtain the heading angle ψ of the robot in the global coordinate system and the value of dd in the current state; from the fitting function obtained in step 4, obtain the magnification factor mag in the current state;
Step 6: From the camera image captured in step 5 and the magnification factor mag in the current state obtained in step 5, calculate the actual coordinates of the robot:
Step 6.1: Calculate the conversion factor λ from PQ to P'Q', and correct it with the magnification factor mag to obtain λ*:
λ = P'Q' / PQ
λ* = mag × P'Q' / PQ
where points P and Q denote the two markers, whose coordinates in the global coordinate system are known, and P' and Q' denote the projections of P and Q on the imaging plane, whose coordinates in the image are known;
Step 6.2: According to the formula
DC = D'C' / λ*
calculate the lengths of segments DC and AC, where C is the point on the Ye axis of the global coordinate system at the edge of the camera's half field of view, D is the intersection of the angular bisector AD of the camera's field of view with the Ye axis, the half field-of-view angle of the camera is known, and D' and C' denote the projections of D and C on the imaging plane;
Step 6.3: According to the corresponding formula, calculate the lengths of segments AM and CM, where M lies on the Ye axis and AM ⊥ Ye;
Step 6.4: According to the formulas
OC = O'C' / λ*
OM = |CM − OC|
calculate the lengths of segments OC and OM, where O' denotes the projection of O on the imaging plane;
Step 6.5: According to the formulas
OA = √(AM² + OM²)
θ = tan⁻¹(OM / AM)
calculate the polar coordinates of position A; if ψ > 0 and OC < CM, or ψ < 0 and OC > CM, correct the value of θ as θ = −θ. Then, according to the formulas
x = −OA cos θ
y = −OA sin θ
calculate the coordinates of the robot in the global coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710499542.1A CN107300382B (en) | 2017-06-27 | 2017-06-27 | Monocular vision positioning method for underwater robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107300382A true CN107300382A (en) | 2017-10-27 |
CN107300382B CN107300382B (en) | 2020-06-16 |
Family
ID=60135040
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109035326A (en) * | 2018-06-19 | 2018-12-18 | 北京理工大学 | High-precision location technique based on sub-pix image recognition |
CN109211240A (en) * | 2018-09-01 | 2019-01-15 | 哈尔滨工程大学 | A kind of monocular vision submarine navigation device navigator fix bearing calibration |
CN109460058A (en) * | 2018-11-22 | 2019-03-12 | 中国船舶重工集团公司第七0五研究所 | A kind of tail portion propulsion traversing control method of low speed submarine navigation device underwater mating |
CN112417948A (en) * | 2020-09-21 | 2021-02-26 | 西北工业大学 | Method for accurately guiding lead-in ring of underwater vehicle based on monocular vision |
CN112836889A (en) * | 2021-02-19 | 2021-05-25 | 鹏城实验室 | Path optimization method, underwater vehicle and computer readable storage medium |
CN112946687A (en) * | 2021-01-22 | 2021-06-11 | 西北工业大学 | Image depth correction method for underwater imaging of TOF camera |
CN114998422A (en) * | 2022-05-26 | 2022-09-02 | 燕山大学 | High-precision rapid three-dimensional positioning system based on error compensation model |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441769A (en) * | 2008-12-11 | 2009-05-27 | 上海交通大学 | Real time vision positioning method of monocular camera |
CN103170980A (en) * | 2013-03-11 | 2013-06-26 | 常州铭赛机器人科技有限公司 | Positioning system and positioning method for household service robot |
CN105890589A (en) * | 2016-04-05 | 2016-08-24 | 西北工业大学 | Underwater robot monocular vision positioning method |
Non-Patent Citations (3)

Title |
---|
MURALI SUBBARAO ET AL.: "Accurate Recovery of Three-Dimensional Shape from Image Focus", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
ZHAO Tianyun et al.: "Spatial localization algorithm based on monocular vision", Journal of Northwestern Polytechnical University * |
HUANG Guiping et al.: "Research on monocular vision measurement technology", Acta Metrologica Sinica * |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |