CN110147095A - Robot relocalization method based on landmark information and multi-sensor data fusion - Google Patents
Robot relocalization method based on landmark information and multi-sensor data fusion
- Publication number
- CN110147095A CN110147095A CN201910200079.5A CN201910200079A CN110147095A CN 110147095 A CN110147095 A CN 110147095A CN 201910200079 A CN201910200079 A CN 201910200079A CN 110147095 A CN110147095 A CN 110147095A
- Authority
- CN
- China
- Prior art keywords
- indicate
- robot
- data point
- data
- angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—with means for defining a desired trajectory
- G05D1/0221—involving a learning process
- G05D1/0223—involving speed control of the vehicle
- G05D1/0231—using optical position detecting means
- G05D1/0246—using a video camera in combination with image processing means
- G05D1/0257—using a radar
- G05D1/0276—using signals provided by a source external to the vehicle
Abstract
The present invention is directed at the positioning-error problem that arises in simultaneous localization and mapping (SLAM), and proposes a robot relocalization method based on landmark information and multi-sensor data fusion, so as to overcome the shortcomings and deficiencies of existing robot localization technology in complex environments. The present invention fuses landmark information obtained by visual recognition into the data perceived by the laser, uses landmarks stamped with semantic labels to look up their accurate position information on the map, and from this back-calculates the robot's actual position on the map, thereby correcting the robot's positioning error, improving positioning accuracy and relocalization capability during autonomous navigation, and enhancing the robot's ability to self-correct its pose.
Description
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a robot relocalization method based on landmark information and multi-sensor data fusion.
Background art
Traditional localization algorithms, such as the particle-filter-based method proposed by Hanten R [1] et al., update the localization by scan-matching a predicted value against an observation. Although this allows the robot to localize fairly accurately to some extent, and can adaptively apply small pose corrections when the odometry information is accurate and the surrounding objects have distinctive features, it depends too heavily on odometry information and reacts too slowly to changes in the surrounding environment. Its fault tolerance to unexpected events, such as excessive accumulated error, position drift, or the robot being moved by hand, is low, so severe localization distortion occurs when any of these happen.
Currently, many teams have proposed algorithms that use landmarks as references to back-correct the robot's pose. For example, the vSLAM-based pose-graph optimization algorithm proposed by Frintrop S [2] et al. uses vision to detect landmark information automatically, repeatedly detects and continuously tracks it under closed-loop conditions, directly computes the pose data of the landmarks, and corrects the pose by means of graph optimization. Although this can correct the robot's position to some extent, the depth and angle information obtained by vision is affected by many factors such as lighting and viewing angle, so accidental errors arise easily; the correction effect is often unsatisfactory, and good results are achieved only in very specific, simple environments.
To allow autonomous vehicles to adapt to more complex and changeable environments, Schuster F [3] et al. proposed a more efficient and accurate localization method that establishes salient landmarks and performs graph optimization using laser data. However, the method in that paper uses laser data alone, and the features of laser data are not distinctive, so large errors occur during recognition and matching; accurate identification and accurate correction cannot be achieved.
Summary of the invention
The present invention overcomes the shortcomings and deficiencies of existing robot localization technology in known environments, primarily the positioning-error problem that arises in simultaneous localization and mapping (SLAM). It proposes a method that fuses laser perception data with visually recognized information and uses the landmark information to correct the localization and eliminate positioning error. Based on visual recognition, each landmark object is given a corresponding semantic label; this is fused with the corresponding clustered laser data to obtain semantically labeled laser data, and a relationship mapping table of landmark names and corresponding coordinate information is drawn up. Then, during navigation, if the robot's pose drifts, vision is used to recognize a nearby landmark, its position coordinates are looked up by the recognized landmark name, the robot's current actual pose is back-calculated in combination with the real-time laser data and the IMU heading information, and the robot's positioning deviation is finally corrected, thereby improving relocalization when the robot incurs pose error.
To realize the above aims of the invention, the following technical solution is adopted:
A robot relocalization method based on landmark information and multi-sensor data fusion, comprising the following steps:
Step S1: build the map: while constructing the known map, select and place several objects with distinctive features as the landmarks of the system; the selection of the landmark objects and the determination of their positions in the current environment must satisfy the basic principles of this method;
Step S2: use a clustering algorithm to aggregate the laser points falling on the same landmark object into one laser point cluster, forming the laser data;
Step S3: generate visual semantic information for the visually recognized objects by means of deep learning and image matching;
Step S4: using the calibration parameters and a geometric model, fuse the laser data with the visual semantic information to produce semantic laser data, assigning a visual semantic label to the clustered laser data of each landmark object in the environment;
Step S5: based on the semantic laser data, build a semantic map containing the name of each landmark, and establish a relationship mapping table from the one-to-one correspondence between landmark names and position coordinates;
Step S6: when a localization deviation occurs during navigation, use visual recognition to obtain the class and semantic name of a nearby landmark object;
Step S7: look up the corresponding coordinates in the mapping table according to the semantic information of the landmark;
Step S8: from the coordinate information of the landmark, the real-time laser perception data, and the heading angle recorded by the IMU, back-calculate the robot's current actual position using the conversion formulas.
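Steps S5 and S7 amount to maintaining a lookup table from semantic landmark names to map coordinates. A minimal Python sketch (the landmark names, coordinates, and helper function are illustrative assumptions, not values or identifiers from the patent):

```python
# Relationship mapping table (step S5): semantic landmark name -> world
# coordinates on the map, in metres. Entries here are made-up examples.
landmark_table = {
    "fire_extinguisher": (2.0, 5.5),
    "exit_sign": (10.0, 1.0),
}

def lookup_landmark(name):
    """Step S7: return the mapped world coordinates of a visually
    recognized landmark, or None if the name is not in the table."""
    return landmark_table.get(name)
```

The coordinates returned here feed the back-calculation of step S8 together with the live laser and IMU data.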
Preferably, the specific steps of step S1 are as follows:
Step S101: select and set the landmark objects
Establish an environment map containing the landmark objects and complete their setup; the setup must satisfy the following two basic principles:
(1) select objects with distinctive features, or objects carrying their own identifying information, as landmarks;
(2) a landmark must be a fixed, static object; objects whose position or shape changes dynamically are rejected;
Step S102: determine the positions of the landmark objects in the current environment
After the landmark objects have been selected, place them in the current environment; the placement must satisfy the following two constraints:
(1) since a landmark stays motionless for a long time, it should be placed against a wall or in a corner so as not to obstruct the movement of the robot and other objects;
(2) taken together, the placements of the landmarks must cover the entire map environment;
Step S103: build the map
After the selected landmark objects have been placed in the map environment according to the placement principles, the map can be constructed to obtain accurate map information.
Preferably, the method for visual identity Landmarks described in the step S6 uses Tiny-Yolo method.?
In the step, current scene is the robot progress independent navigation in the semantic map with terrestrial reference built, if
There is positional shift in robot, finds terrestrial reference object nearby first, obtains corresponding mark information.So to carry out object inspection
Survey, due to be used to the modified data source of pose first is that laser data, the requirement of real-time of data acquisition is higher, input laser
The speed of data could obtain preferable effect in 10Hz or more in practical applications;And depth convolutional neural networks be by
The neural network model of multiple hidden layer compositions, each hidden layer extract the feature of image different levels, and the network number of plies is deeper, mentions
The feature taken is more abstract, and characterization ability is also stronger, but detecting speed simultaneously can seriously be dragged slowly, and real-time is influenced.Therefore,
It needs to be weighed in speed and effect in preference pattern.So the object detecting method being used in the present invention is to be based on
Tiny-Yolo model realization.Tiny-Yolo is to detect fastest one of object detection method at present.Tiny-Yolo is
Based on the reduced model of YOLO v2 model, and YOLO v2 is one end-to-end (end-to-end) depth convolutional Neural net
Network model, and solved target detection problems as regression problem.It is different from RCNN series of network, the training of YOLO v2
It is carried out in a network model with detection, the independent candidate frame for solving object is not needed, so being not required in the training process
Separation module is used to solve the candidate frame of object as RCNN series of network.And YOLO v2 makees target detection problems
For regression problem solution, detection only needs to carry out primary forward process (inference) every time can obtain object simultaneously
Position and object classification information, without being asked as RCNN series of network using object space and object classification as two parts
Solution.So the present invention directly obtains output result using this kind of target detection model.
Preferably, the specific steps of step S8 are as follows:
Assume the sensor used by the robot is a laser radar and that the coordinate system established by the radar is expressed in polar coordinates. The radar scans laser data points at an interval of 0.5 degrees over the 180 degrees in front of it, so it scans 360 data points. Since the radar's measurement points are ordered by angle, the angle θi of the i-th data point in the polar coordinate system centered on the radar is known:
θi = i·π/360 (1)
where θi is the angle at which the i-th data point lies and π is the radian value of 180°. The polar coordinates can be converted to Cartesian coordinates by formulas (2) and (3):
x′i = ρi·cos θi (2)
where x′i is the abscissa of the i-th data point, ρi is the distance from the robot to the i-th data point, and cos θi is the cosine of the angle of the i-th data point;
y′i = ρi·sin θi (3)
where y′i is the ordinate of the i-th data point, ρi is the distance from the robot to the i-th data point, and sin θi is the sine of the angle of the i-th data point;
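The polar-to-Cartesian conversion of formulas (2) and (3), with the angle of the i-th point derived from the 0.5-degree spacing described above, can be sketched as follows (the function name and units are illustrative assumptions):

```python
import math

# 0.5 degrees in radians; 360 such steps span the 180-degree field of view.
ANGLE_STEP = math.pi / 360.0

def scan_point_to_cartesian(i, rho_i):
    """Convert the i-th laser data point (range rho_i, metres) from the
    radar's polar coordinate system to local Cartesian coordinates:
    theta_i = i * pi / 360,  x' = rho * cos(theta),  y' = rho * sin(theta)."""
    theta_i = i * ANGLE_STEP
    x_i = rho_i * math.cos(theta_i)
    y_i = rho_i * math.sin(theta_i)
    return x_i, y_i
```

For example, point i = 180 lies at 90 degrees, directly along the local y axis.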
In the algorithm, the point M at which the i-th data point lies in the Cartesian coordinate system is regarded as the landmark object. Its world coordinates (xM, yM) are known from the mapping table, its local coordinates (x′i, y′i) are obtained from formulas (1), (2) and (3), and the robot's heading angle, i.e. the deflection angle α of the local coordinate system relative to the world coordinate system, is available from the IMU measurements, so the robot's current actual pose can be calculated:
Let O be the origin of the world coordinate system, and let O′, the sensor data source, i.e. the real-time position of the robot, be the origin of a local coordinate system. The robot's current position coordinates (x0, y0) relative to the world coordinate system can then be expressed as:
x0 = xM − lx1 + lx2 (4)
where x0 is the abscissa of the robot's position, xM is the abscissa of the landmark object, lx1 is the full length of the data point's projection, and lx2 is the partial length of the data point's projection;
y0 = yM − ly1 − ly2 (5)
where y0 is the ordinate of the robot's position, yM is the ordinate of the landmark object, and ly1 and ly2 are the partial lengths of the data point's projection onto the vertical axis. These lengths can be obtained from the local coordinates of landmark M and the robot heading angle α:
lx1 = x′i·cos α (6)
where x′i is the abscissa of the i-th data point, cos α is the cosine of the robot's deflection angle, and lx1 is the full length of the data point's projection;
lx2 = y′i·sin α (7)
where y′i is the ordinate of the i-th data point, sin α is the sine of the robot's deflection angle, and lx2 is the partial length of the data point's projection;
ly1 = y′i·cos α (8)
where y′i is the ordinate of the i-th data point, cos α is the cosine of the robot's deflection angle, and ly1 is a partial length of the data point's projection onto the vertical axis;
ly2 = x′i·sin α (9)
where x′i is the abscissa of the i-th data point, sin α is the sine of the robot's deflection angle, and ly2 is a partial length of the data point's projection onto the vertical axis;
Combining formulas (4), (5), (6), (7), (8) and (9), the robot's coordinates relative to the world coordinate system follow as:
x0 = xM − x′i·cos α + y′i·sin α (10)
where x0 is the abscissa of the robot's position, xM is the abscissa of the landmark object, x′i is the abscissa of the i-th data point, cos α is the cosine of the robot's deflection angle, y′i is the ordinate of the i-th data point, and sin α is the sine of the robot's deflection angle;
y0 = yM − x′i·sin α − y′i·cos α (11)
where y0 is the ordinate of the robot's position, yM is the ordinate of the landmark object, and the remaining symbols are as in formula (10).
That is, when the robot incurs a pose deviation because of the defects of dead reckoning, uneven ground, or the robot being moved by hand, its actual position can be calculated by the algorithm above.
Compared with the prior art, the beneficial effects of the present invention are:
The present invention uses visual perception information to assign semantic labels to landmarks, making the landmark objects more concrete and more salient in the robot's reasoning. Using the landmark position information derived from the laser data while the map is being built, together with the real-time laser scan information, makes the relative position relationships more accurate, effectively improving positioning accuracy and relocalization efficiency and making full use of the recognition power of vision and the accuracy of laser measurement. At the same time, the pose correction of the present invention is more accurate and has strong adaptability to all kinds of complex environments.
Description of the drawings
Fig. 1 is the flow chart of the invention.
Fig. 2 shows the coordinate information of the local coordinate system established with the laser data source (the robot) as its origin, and the internal conversion relationships.
Fig. 3 shows the relative relationship between the world coordinate system and the robot's local coordinate system, and the transformation of the coordinate information.
Specific embodiment
The attached figures are for illustrative purposes only and are not to be construed as limiting the patent;
below, the present invention is further described in conjunction with the drawings and embodiments.
Embodiment 1
As shown in Fig. 1, Fig. 2 and Fig. 3, the robot relocalization method based on landmark information and multi-sensor data fusion comprises the following steps:
Step S1: build the map: while constructing the known map, select and place several objects with distinctive features as the landmarks of the system; the selection of the landmark objects and the determination of their positions in the current environment must satisfy the basic principles of this method;
Step S2: use a clustering algorithm to aggregate the laser points falling on the same landmark object into one laser point cluster, forming the laser data;
Step S3: generate visual semantic information for the visually recognized objects by means of deep learning and image matching;
Step S4: using the calibration parameters and a geometric model, fuse the laser data with the visual semantic information to produce semantic laser data, assigning a visual semantic label to the clustered laser data of each landmark object in the environment;
Step S5: based on the semantic laser data, build a semantic map containing the name of each landmark, and establish a relationship mapping table from the one-to-one correspondence between landmark names and position coordinates;
Step S6: when a localization deviation occurs during navigation, use visual recognition to obtain the class and semantic name of a nearby landmark object;
Step S7: look up the corresponding coordinates in the mapping table according to the semantic information of the landmark;
Step S8: from the coordinate information of the landmark, the real-time laser perception data, and the heading angle recorded by the IMU, back-calculate the robot's current actual position using the conversion formulas.
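Step S2 above aggregates the laser points that fall on one landmark into a single cluster. The patent does not name a specific clustering algorithm; a minimal distance-threshold clustering over consecutive scan points conveys the idea (the 0.1 m break threshold and the function name are illustrative assumptions):

```python
def cluster_scan_points(points, break_dist=0.1):
    """Aggregate consecutive Cartesian scan points into clusters: a new
    cluster is started whenever the gap between neighbouring points
    exceeds break_dist (metres). Returns a list of point lists."""
    clusters = []
    current = []
    prev = None
    for p in points:
        if prev is not None:
            dx, dy = p[0] - prev[0], p[1] - prev[1]
            if (dx * dx + dy * dy) ** 0.5 > break_dist:
                # Large jump in range: the next points hit a different object.
                clusters.append(current)
                current = []
        current.append(p)
        prev = p
    if current:
        clusters.append(current)
    return clusters
```

Each resulting cluster is then a candidate laser footprint of one landmark object, ready to receive its visual semantic label in step S4.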
Preferably, the specific steps of step S1 are as follows:
Step S101: select and set the landmark objects
Establish an environment map containing the landmark objects and complete their setup; the setup must satisfy the following two basic principles:
(1) select objects with distinctive features, or objects carrying their own identifying information, as landmarks;
(2) a landmark must be a fixed, static object; objects whose position or shape changes dynamically are rejected;
Step S102: determine the positions of the landmark objects in the current environment
After the landmark objects have been selected, place them in the current environment; the placement must satisfy the following two constraints:
(1) since a landmark stays motionless for a long time, it should be placed against a wall or in a corner so as not to obstruct the movement of the robot and other objects;
(2) taken together, the placements of the landmarks must cover the entire map environment;
Step S103: build the map
After the selected landmark objects have been placed in the map environment according to the placement principles, the map can be constructed to obtain accurate map information.
Preferably, the method of visually recognizing the landmark objects in step S6 uses the Tiny-YOLO method. In this step, the scenario is a robot navigating autonomously in the constructed semantic map with landmarks; if the robot's position drifts, it first finds a nearby landmark object and obtains the corresponding landmark information, so object detection must be carried out. Since one of the data sources used for pose correction is the laser data, the real-time requirement on data acquisition is high; in practice, the laser data must arrive at 10 Hz or more to obtain good results. A deep convolutional neural network is a neural network model composed of multiple hidden layers, each of which extracts image features at a different level; the deeper the network, the more abstract the extracted features and the stronger their representational power, but at the same time detection speed is severely slowed and real-time performance suffers. Model selection must therefore balance speed against accuracy. The object detection method used in the present invention is accordingly based on the Tiny-YOLO model. Tiny-YOLO is currently one of the fastest object detection methods. It is a reduced version of the YOLO v2 model; YOLO v2 is an end-to-end deep convolutional neural network that solves object detection as a regression problem. Unlike the RCNN family of networks, YOLO v2 performs training and detection within a single network model and does not need to solve for object candidate boxes independently, so during training no separate module is required to solve for candidate boxes as in the RCNN family. Because YOLO v2 treats detection as regression, each detection requires only a single forward pass (inference) to obtain both object positions and object classes, without solving for position and class as two separate parts as the RCNN family does. The present invention therefore uses this kind of detection model to obtain the output directly.
Preferably, the specific steps of step S8 are as follows:
As illustrated in Fig. 2, assume the sensor used by the robot is a laser radar and that the coordinate system established by the radar is expressed in polar coordinates. The radar scans laser data points at an interval of 0.5 degrees over the 180 degrees in front of it, so it scans 360 data points. Since the radar's measurement points are ordered by angle, the angle θi of the i-th data point in the polar coordinate system centered on the radar is known:
θi = i·π/360 (1)
where θi is the angle at which the i-th data point lies and π is the radian value of 180°. The polar coordinates can be converted to Cartesian coordinates by formulas (2) and (3):
x′i = ρi·cos θi (2)
where x′i is the abscissa of the i-th data point, ρi is the distance from the robot to the i-th data point, and cos θi is the cosine of the angle of the i-th data point;
y′i = ρi·sin θi (3)
where y′i is the ordinate of the i-th data point, ρi is the distance from the robot to the i-th data point, and sin θi is the sine of the angle of the i-th data point;
In the algorithm, the point M at which the i-th data point lies in the Cartesian coordinate system is regarded as the landmark object. Its world coordinates (xM, yM) are known from the mapping table, its local coordinates (x′i, y′i) are obtained from formulas (1), (2) and (3), and the robot's heading angle, i.e. the deflection angle α of the local coordinate system relative to the world coordinate system, is available from the IMU measurements, so the robot's current actual pose can be calculated:
As shown in Fig. 3, let O be the origin of the world coordinate system, and let O′, the sensor data source, i.e. the real-time position of the robot, be the origin of a local coordinate system. The robot's current position coordinates (x0, y0) relative to the world coordinate system can then be expressed as:
x0 = xM − lx1 + lx2 (4)
where x0 is the abscissa of the robot's position, xM is the abscissa of the landmark object, lx1 is the full length of the data point's projection, and lx2 is the partial length of the data point's projection;
y0 = yM − ly1 − ly2 (5)
where y0 is the ordinate of the robot's position, yM is the ordinate of the landmark object, and ly1 and ly2 are the partial lengths of the data point's projection onto the vertical axis. These lengths can be obtained from the local coordinates of landmark M and the robot heading angle α:
lx1 = x′i·cos α (6)
where x′i is the abscissa of the i-th data point, cos α is the cosine of the robot's deflection angle, and lx1 is the full length of the data point's projection;
lx2 = y′i·sin α (7)
where y′i is the ordinate of the i-th data point, sin α is the sine of the robot's deflection angle, and lx2 is the partial length of the data point's projection;
ly1 = y′i·cos α (8)
where y′i is the ordinate of the i-th data point, cos α is the cosine of the robot's deflection angle, and ly1 is a partial length of the data point's projection onto the vertical axis;
ly2 = x′i·sin α (9)
where x′i is the abscissa of the i-th data point, sin α is the sine of the robot's deflection angle, and ly2 is a partial length of the data point's projection onto the vertical axis;
Combining formulas (4), (5), (6), (7), (8) and (9), the robot's coordinates relative to the world coordinate system follow as:
x0 = xM − x′i·cos α + y′i·sin α (10)
where x0 is the abscissa of the robot's position, xM is the abscissa of the landmark object, x′i is the abscissa of the i-th data point, cos α is the cosine of the robot's deflection angle, y′i is the ordinate of the i-th data point, and sin α is the sine of the robot's deflection angle;
y0 = yM − x′i·sin α − y′i·cos α (11)
where y0 is the ordinate of the robot's position, yM is the ordinate of the landmark object, and the remaining symbols are as in formula (10).
That is, when the robot incurs a pose deviation because of the defects of dead reckoning, uneven ground, or the robot being moved by hand, its actual position can be calculated by the algorithm above.
Obviously, the above embodiment is merely an example given to clearly illustrate the present invention, and is not a limitation on the embodiments of the present invention. For those of ordinary skill in the art, other variations or changes in different forms may also be made on the basis of the above description. It is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.
Claims (4)
1. A robot relocation method based on landmark information and fusion, characterized in that it comprises the following steps:
Step S1: constructing a map, and, while the known map is being constructed, selecting and setting several objects with distinct features as the landmarks of the system, such that the selection of landmark objects and the determination of their positions in the current environment satisfy the basic principles of the method;
Step S2: using a clustering algorithm to aggregate the laser points falling on the same landmark object into one laser point cluster, forming laser data;
Step S3: generating visual semantic information for visually recognized objects by means of deep learning and image matching;
Step S4: using calibration parameters and a geometric model to fuse the laser data with the visual semantic information, generating semantic laser data, i.e. assigning a visual semantic label to the laser cluster data corresponding to each landmark object in the environment;
Step S5: based on the semantic laser data, building a semantic map carrying the name of each landmark, and establishing a relationship mapping table between names and position coordinates according to the one-to-one correspondence of landmark names;
Step S6: when a deviation occurs in positioning during navigation, using visual recognition to obtain the category and semantic name of a nearby landmark object;
Step S7: looking up the corresponding coordinates in the mapping table according to the semantic information of the landmark;
Step S8: according to the coordinate information of the landmark, the real-time laser perception data and the angle information recorded by the IMU, reversely reckoning the current actual position of the robot using the conversion formulas.
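Steps S5 and S7 amount to a name-to-coordinates lookup; a minimal sketch, in which the table entries and landmark names are invented placeholders rather than values from the patent:

```python
# Relationship mapping table from step S5: semantic landmark name -> world coordinates.
# The entries below are illustrative placeholders only.
landmark_table = {
    "fire_extinguisher": (2.5, 0.8),
    "charging_station": (10.0, 3.2),
}

def lookup_landmark(name, table):
    """Step S7: look up the world coordinates of a landmark by its semantic name."""
    if name not in table:
        raise KeyError(f"landmark '{name}' not found in mapping table")
    return table[name]
```

The coordinates returned here feed, together with the laser measurement and the IMU course angle, into the reverse reckoning of step S8.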
2. The robot relocation method based on landmark information and fusion according to claim 1, characterized in that the specific steps of step S1 are as follows:
Step S101: selecting and setting landmark objects
An environment map containing the landmark objects is established and the setting of the landmark objects is completed; the setting shall satisfy the following two basic principles:
(1) objects with distinct features, or objects carrying their own identifying information, are selected as landmarks;
(2) a landmark must be a fixed, stationary object; any object whose position or shape changes dynamically is rejected;
Step S102: determining the positions of the landmark objects in the current environment;
After the landmark objects have been selected, they are placed in the current environment; the placement positions satisfy the following two constraints:
(1) since a landmark remains motionless for a long time, its placement position should be against a wall or in a corner so as not to obstruct the movement of the robot and other things;
(2) the placement positions of all the landmarks together shall cover the entire map environment;
Step S103: constructing the map
After the selected landmark objects have been placed in the map environment according to the determined placement principles, the map can be constructed and accurate map information obtained.
3. The robot relocation method based on landmark information and fusion according to claim 2, characterized in that the visual recognition of landmark objects described in step S6 uses the Tiny-YOLO method.
4. The robot relocation method based on landmark information and fusion according to claim 3, characterized in that the specific steps of step S8 are as follows:
Assume the sensor used by the robot is a lidar, and the coordinate system established by the radar is expressed in polar coordinates; the scanned laser data points are spaced 0.5 degrees apart and the scanning range is the 180 degrees in front, so 360 data points can be scanned; since the measured data points are ordered by angle, the angle θ_i of the i-th data point in the polar coordinate system centered on the radar is known:
θ_i = i·π/360  (1)
Wherein, θ_i denotes the angle at which the i-th data point lies and π denotes the radian value of 180°; the polar coordinates can be converted to Cartesian coordinates by formulas (2)(3):
x'_i = ρ_i·cos θ_i  (2)
Wherein, x'_i denotes the abscissa of the i-th data point, ρ_i denotes the distance from the robot to the i-th data point, and cos θ_i denotes the cosine of the angle of the i-th data point;
y'_i = ρ_i·sin θ_i  (3)
Wherein, y'_i denotes the ordinate of the i-th data point and sin θ_i denotes the sine of the angle of the i-th data point;
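Under the stated assumptions (0.5° spacing, 180° front sweep, 360 points per sweep), formulas (1)(2)(3) can be sketched as follows; the helper name is mine, not the patent's:

```python
import math

def scan_to_cartesian(ranges):
    """Convert one lidar sweep to local Cartesian points.

    ranges: 360 distances rho_i, one per 0.5-degree step over the front 180 degrees.
    Formula (1): theta_i = i*pi/360; formulas (2)(3): x = rho*cos(theta), y = rho*sin(theta).
    """
    points = []
    for i, rho in enumerate(ranges, start=1):
        theta = i * math.pi / 360.0  # 0.5 degrees per step, in radians
        points.append((rho * math.cos(theta), rho * math.sin(theta)))
    return points
```

For a sweep of constant range 1.0, the beam at i = 180 (90° ahead) lands at approximately (0, 1) in the local frame.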
In the algorithm, the i-th data point is regarded as the point M of the landmark object in the Cartesian coordinate system; its world coordinates (x_M, y_M) are known from the mapping table, its local coordinates (x'_i, y'_i) are obtained by formulas (1)(2)(3), and the course angle of the robot, i.e. the deflection angle α of the local coordinate system relative to the world coordinate system, is obtained from the IMU measurements, from which the current actual pose of the robot is calculated:
Let O be the origin of the world coordinate system and O' the source of the sensor data, i.e. the real-time position of the robot; with O' as origin a local coordinate system is constructed, and the current position coordinates (x0, y0) of the robot relative to the world coordinate system can be expressed as:
x0 = x_M − (l_x1 − l_x2)  (4)
Wherein, x0 denotes the abscissa of the robot position, x_M denotes the abscissa of the landmark object, l_x1 denotes the full length of the data point's projection onto the horizontal axis, and l_x2 denotes the partial length of that projection;
y0 = y_M − (l_y1 + l_y2)  (5)
Wherein, y0 denotes the ordinate of the robot position, y_M denotes the ordinate of the landmark object, and l_y1 and l_y2 denote the two partial lengths of the data point's projection onto the longitudinal axis; the lengths l_x1, l_x2, l_y1 and l_y2 can be obtained from the local coordinates of the landmark M and the robot course angle α:
l_x1 = x'_i·cos α  (6)
Wherein, x'_i denotes the abscissa of the i-th data point, cos α denotes the cosine of the robot deflection angle, and l_x1 denotes the full length of the data point's projection onto the horizontal axis;
l_x2 = y'_i·sin α  (7)
Wherein, y'_i denotes the ordinate of the i-th data point, sin α denotes the sine of the robot deflection angle, and l_x2 denotes the partial length of the data point's projection onto the horizontal axis;
l_y1 = y'_i·cos α  (8)
Wherein, l_y1 denotes the partial length of the data point's projection onto the longitudinal axis;
l_y2 = x'_i·sin α  (9)
Wherein, l_y2 denotes the other partial length of the data point's projection onto the longitudinal axis;
Combining formulas (4)(5)(6)(7)(8)(9), the coordinates of the robot relative to the world coordinate system can be derived as:
x0 = x_M − x'_i·cos α + y'_i·sin α  (10)
Wherein, x0 denotes the abscissa of the robot position, x_M denotes the abscissa of the landmark object, x'_i and y'_i denote the abscissa and ordinate of the i-th data point, and cos α and sin α denote the cosine and sine of the robot deflection angle;
y0 = y_M − y'_i·cos α − x'_i·sin α  (11)
Wherein, y0 denotes the ordinate of the robot position and y_M denotes the ordinate of the landmark object;
That is, when the robot develops a pose deviation because of the inherent drift of dead reckoning, uneven ground, or being moved manually, the actual position of the robot can be calculated by the algorithm above.
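Step S8 as a whole amounts to robot = landmark_world − R(α)·landmark_local; a sketch of the full chain (1)–(11) under that reading, with illustrative names of my own:

```python
import math

def relocalize(rho, i, landmark_world, alpha):
    """Step S8 end to end, for a single beam that hits the landmark.

    rho:            measured distance to the i-th data point (the landmark).
    i:              beam index; formula (1) gives theta_i = i*pi/360.
    landmark_world: (x_M, y_M) from the relationship mapping table.
    alpha:          robot course angle from the IMU, in radians.
    """
    theta = i * math.pi / 360.0
    # Formulas (2)(3): local coordinates of the landmark.
    x_i, y_i = rho * math.cos(theta), rho * math.sin(theta)
    # Formulas (4)-(11): subtract the rotated local vector from the world coordinates.
    x0 = landmark_world[0] - (x_i * math.cos(alpha) - y_i * math.sin(alpha))
    y0 = landmark_world[1] - (y_i * math.cos(alpha) + x_i * math.sin(alpha))
    return x0, y0
```

With α = 0 and the landmark 2 m straight ahead (i = 180), the robot position is the landmark's world position shifted back by 2 m along the y axis.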
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910200079.5A CN110147095A (en) | 2019-03-15 | 2019-03-15 | Robot method for relocating based on mark information and Fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110147095A (en) | 2019-08-20 |
Family
ID=67588204
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910200079.5A Pending CN110147095A (en) | 2019-03-15 | 2019-03-15 | Robot method for relocating based on mark information and Fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110147095A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120121161A1 (en) * | 2010-09-24 | 2012-05-17 | Evolution Robotics, Inc. | Systems and methods for vslam optimization |
CN105783936A (en) * | 2016-03-08 | 2016-07-20 | 武汉光庭信息技术股份有限公司 | Road sign drawing and vehicle positioning method and system for automatic drive |
CN107144285A (en) * | 2017-05-08 | 2017-09-08 | 深圳地平线机器人科技有限公司 | Posture information determines method, device and movable equipment |
CN107422730A (en) * | 2017-06-09 | 2017-12-01 | 武汉市众向科技有限公司 | The AGV transportation systems of view-based access control model guiding and its driving control method |
CN109099901A (en) * | 2018-06-26 | 2018-12-28 | 苏州路特工智能科技有限公司 | Full-automatic road roller localization method based on multisource data fusion |
Non-Patent Citations (8)
Title |
---|
A. L. MAJDIK: "Laser and vision based map building techniques for mobile robot navigation", 《2010 IEEE INTERNATIONAL CONFERENCE ON AUTOMATION》 * |
RAYMOND G. GOSINE: "Integrated Laser-Camera Sensor for the Detection and Localization of Landmarks for Robotic Applications", 《2008 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION PASADENA》 * |
周全程: "Research on the Positioning System of a Multi-Sensor Apple-Picking Robot", 《Robot Technology》 * |
宋宇: 《Cluster Computing for Robotics and Computer Vision》, 31 January 2015 * |
曾碧: "A Trilaminar Data Fusion Localization Algorithm Supported by Sensor Network", 《Sensors & Transducers》 * |
曾碧: "Improved EKF-SLAM Algorithm Based on Landmark Observation", 《Applied Technology》 * |
熊光明: 《Introduction to Driverless Vehicles》, 31 July 2014 * |
贺利乐: "Research on SLAM of a Tracked Mobile Robot in Unknown Environments", 《Transducer and Microsystem Technologies》 * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110450167A (en) * | 2019-08-27 | 2019-11-15 | 南京涵曦月自动化科技有限公司 | A kind of robot infrared laser positioning motion trail planning method |
CN112540382A (en) * | 2019-09-07 | 2021-03-23 | 山东大学 | Laser navigation AGV auxiliary positioning method based on visual identification detection |
CN112540382B (en) * | 2019-09-07 | 2024-02-13 | 山东大学 | Laser navigation AGV auxiliary positioning method based on visual identification detection |
WO2021129347A1 (en) * | 2019-12-24 | 2021-07-01 | 炬星科技(深圳)有限公司 | Auxiliary positioning column and navigation assistance system of self-traveling robot |
CN113343739B (en) * | 2020-03-02 | 2022-07-22 | 杭州萤石软件有限公司 | Relocating method of movable equipment and movable equipment |
CN113343739A (en) * | 2020-03-02 | 2021-09-03 | 杭州萤石软件有限公司 | Relocating method of movable equipment and movable equipment |
CN111546348A (en) * | 2020-06-10 | 2020-08-18 | 上海有个机器人有限公司 | Robot position calibration method and position calibration system |
CN112161618A (en) * | 2020-09-14 | 2021-01-01 | 灵动科技(北京)有限公司 | Storage robot positioning and map construction method, robot and storage medium |
CN112307978A (en) * | 2020-10-30 | 2021-02-02 | 腾讯科技(深圳)有限公司 | Target detection method and device, electronic equipment and readable storage medium |
CN112925322A (en) * | 2021-01-26 | 2021-06-08 | 哈尔滨工业大学(深圳) | Autonomous positioning method of unmanned vehicle in long-term scene |
CN112902966A (en) * | 2021-01-28 | 2021-06-04 | 开放智能机器(上海)有限公司 | Fusion positioning system and method |
CN113269831A (en) * | 2021-05-19 | 2021-08-17 | 北京能创科技有限公司 | Visual repositioning method, system and device based on scene coordinate regression network |
CN113269831B (en) * | 2021-05-19 | 2021-11-16 | 北京能创科技有限公司 | Visual repositioning method, system and device based on scene coordinate regression network |
CN113359769B (en) * | 2021-07-06 | 2022-08-09 | 广东省科学院智能制造研究所 | Indoor autonomous mobile robot composite navigation method and device |
CN113359769A (en) * | 2021-07-06 | 2021-09-07 | 广东省科学院智能制造研究所 | Indoor autonomous mobile robot composite navigation method and device |
CN113534845A (en) * | 2021-08-18 | 2021-10-22 | 国网湖南省电力有限公司 | Unmanned aerial vehicle autonomous inspection method and system for overhead distribution line based on GNSS positioning |
CN115752476A (en) * | 2022-11-29 | 2023-03-07 | 重庆长安汽车股份有限公司 | Vehicle ground library repositioning method, device, equipment and medium based on semantic information |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110147095A (en) | Robot method for relocating based on mark information and Fusion | |
Nieto et al. | Recursive scan-matching SLAM | |
CN103674015B (en) | Trackless positioning navigation method and device | |
Kümmerle et al. | Large scale graph-based SLAM using aerial images as prior information | |
CN106405605B (en) | A kind of indoor and outdoor seamless positioning method and positioning system of the robot based on ROS and GPS | |
WO2017028653A1 (en) | Method and system for automatically establishing map indoors by mobile robot | |
Yu et al. | An autonomous restaurant service robot with high positioning accuracy | |
CN108571971A (en) | A kind of AGV vision positioning systems and method | |
CN100461058C (en) | Automatic positioning method for intelligent robot under complex environment | |
CN111149072A (en) | Magnetometer for robot navigation | |
CN106384353A (en) | Target positioning method based on RGBD | |
Xiao et al. | 3D point cloud registration based on planar surfaces | |
CN106055091A (en) | Hand posture estimation method based on depth information and calibration method | |
CN106525044B (en) | The personnel positioning navigation system and its method of large-scale naval vessels based on Ship Structure Graphing | |
George et al. | Humanoid robot indoor navigation based on 2D bar codes: Application to the NAO robot | |
CN109901590A (en) | Desktop machine people's recharges control method | |
CN107063229A (en) | Mobile robot positioning system and method based on artificial landmark | |
CN103268729A (en) | Mobile robot cascading type map creating method based on mixed characteristics | |
CN115388902B (en) | Indoor positioning method and system, AR indoor positioning navigation method and system | |
CN108549376A (en) | A kind of navigation locating method and system based on beacon | |
CN108680177B (en) | Synchronous positioning and map construction method and device based on rodent model | |
CN110108269A (en) | AGV localization method based on Fusion | |
CN110069058A (en) | Navigation control method in a kind of robot chamber | |
Choi et al. | Efficient simultaneous localization and mapping based on ceiling-view: ceiling boundary feature map approach | |
Qian et al. | Wearable-assisted localization and inspection guidance system using egocentric stereo cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190820 |