CN108072370A - Robot navigation method based on global map and the robot with this method navigation - Google Patents
- Publication number
- CN108072370A (application CN201611027397.9A)
- Authority
- CN
- China
- Prior art keywords
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention discloses a navigation method based on a global map and a robot navigating by this method. The method comprises the steps: S1, having the robot shoot video of the indoor ceiling and wall areas, and extracting the image distortion features of each frame of the video; S2, matching each frame of the shot video with a content-based image matching method built on the distortion features, extracting a key-frame sequence according to the matching results, and overlapping and connecting key frames at adjacent positions to build the global map; S3, matching the robot's real-time visual image against the key frames in the global map by the same content-based method, finding the key frame most similar to the robot's current visual image, and solving the robot's global position in real time, thereby realizing robot navigation. The invention achieves autonomous localization of the robot and effectively eliminates problems that easily arise in robot navigation, such as the "kidnapped robot problem" and interference from similar objects.
Description
Technical field
The present invention relates to the fields of robot navigation and localization, and more particularly to a global-map-based navigation method and a robot navigating by this method.
Background technology
An ideal indoor service robot should be able to plan its own routes throughout a building, localize itself autonomously, and shuttle accurately between multiple rooms and corridors to provide various services for people. The premise for achieving this goal is that the robot must store a complete indoor map, preferably one built autonomously by the robot itself.
In this field there are mainly two approaches: localization on a local map and localization on a global map. Local-map localization builds the map from indoor feature information, as in Simultaneous Localization And Mapping (SLAM). Visual SLAM (V-SLAM) is the most widely used variant; recent research includes ORB-SLAM, dense SLAM, semi-dense SLAM, LSD-SLAM and CV-SLAM, which substantially improve algorithm performance using techniques such as new ORB features and 3-D modeling. The contour features of indoor ceilings are also fully exploited: CV-SLAM extracts features with a camera facing the ceiling, which is more efficient and convenient than general SLAM, and it has been widely applied in well-known autonomous-navigation sweeping-robot products from Dyson, Samsung, LG and others.
The main goal of building a global map is to extract, from indoor-environment video shot autonomously by the robot, a key-frame sequence containing as few frames as possible while still establishing the frames' relative positions. By matching, the key frame most similar to the current visual image can be picked out of this sequence, which localizes the robot's global position indoors. Common methods fall into three kinds: key-frame detection based on the time domain, on the spatial domain, and on frame-content change. Key-frame detection based on frame-content change is the most effective and can be realized in many ways, such as the pixel method, the color-histogram method, the local-histogram method, feature matching and bag-of-words (BoW).
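As an illustration of the color-histogram variant of content-change key-frame detection mentioned above, the following is a minimal sketch; it is not taken from the patent, and the 8-bin histogram, the intersection measure, and the 0.5 threshold are assumptions chosen for clarity.

```python
# Content-based key-frame detection via intensity-histogram similarity.
# A "frame" here is a flat list of pixel intensities in [0, 255].

def histogram(frame, bins=8, levels=256):
    """Normalized intensity histogram (sums to 1)."""
    counts = [0] * bins
    for px in frame:
        counts[px * bins // levels] += 1
    total = float(len(frame))
    return [c / total for c in counts]

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def detect_key_frames(frames, threshold=0.5):
    """Emit a new key frame whenever similarity to the last key frame drops."""
    keys = [0]
    ref = histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        h = histogram(frame)
        if similarity(ref, h) < threshold:
            keys.append(i)
            ref = h
    return keys
```

A frame whose content stays close to the last key frame is skipped; a sharp histogram change (e.g. moving from one room to another) triggers a new key frame.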
Local-map localization easily suffers from problems such as the "kidnapped robot problem" and interference from similar objects. When the robot's position changes suddenly, for example through slipping or a collision, all previous localization information becomes invalid and autonomous localization cannot continue; the robot is said to have been "kidnapped". When the robot shuttles between multiple rooms and corridors, feature points on similar objects in different rooms are easily confused, and mismatches readily cause localization errors; indoors, similar objects are usually both conspicuous and numerous.
When localizing with a global map, if key-frame detection based on the time domain or the spatial domain is used, then whenever the robot stops or moves non-uniformly, key frames extracted by time are very likely to be redundant or discontinuous. When the robot spins, sideslips or skids, it cannot measure its own spatial motion accurately either, so key frames extracted by space also carry large errors. Current key-frame detection methods based on frame-content change mostly use only color statistics and object feature points; they do not fully exploit the layout and shape information of objects, and the image distortion caused by shooting angle and position also disturbs the content information in the robot's vision.
Summary of the invention
(1) Technical problems to be solved
The present invention proposes a robot navigation method based on a global map and a robot navigating by this method. It makes full use of the layout and shape information of various indoor objects to automatically extract, from video shot by the robot, a key-frame sequence whose frames are widely spaced yet keep a certain overlap; it builds the global map of the building from the extracted key-frame sequence; the robot can then solve its own position coordinates in real time from the map, thereby achieving robot localization and navigation.
(2) Technical solution
The technical solution of the present invention is as follows:
The present invention provides a robot navigation method based on a global map, comprising:
S1, having the robot shoot video of the indoor ceiling and wall areas, and extracting the image distortion features of each frame of the video;
S2, matching each frame of the shot video with a content-based image matching method according to the image distortion features, extracting a key-frame sequence according to the matching results, then overlapping and connecting key frames at adjacent positions to build the global map;
S3, matching the robot's real-time visual image against the key frames in the global map with the content-based image matching method, finding the key frame most similar to the robot's current visual image, and solving the robot's global position in real time to realize robot navigation.
The content-based image matching method comprises extracting, according to the image distortion features, the overlap region of the two frames to be matched, and judging the similarity of the two frames. The overlap region is extracted after the two frames are adjusted, by translation and rotation, to the same shooting position and shooting angle. The content-based image matching method further comprises: reconstructing the overlap region of the first of the two frames by sub-block decomposition matching, forming a reconstructed overlap region; and comparing the reconstructed overlap region with the overlap region of the second frame to analyze similarity. Sub-block decomposition matching decomposes the overlap region of the first frame into several sub-blocks and translates each sub-block sharing features with the second frame to its position in the second frame, forming the reconstructed overlap region. The content-based image matching method further comprises detecting and rejecting mismatched feature points before extracting the overlap region; the detection and rejection rely on the fact that the lengths of lines joining feature points in the robot's front-view region are constant across different images, and feature points in other regions are weeded out.
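The rejection step above exploits the rigidity of the front-view (ceiling) region: distances between genuine ceiling feature points are preserved between frames, while wall points and mismatches are not. A minimal sketch of that idea follows; the majority-vote rule and the tolerance are assumptions, not the patent's exact criterion.

```python
# Reject mismatched feature points by pairwise-distance consistency:
# a correctly matched ceiling point keeps its distances to the other
# ceiling points between frame A and frame B (rigid planar motion).
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def reject_mismatches(pts_a, pts_b, tol=1.0):
    """pts_a[i] and pts_b[i] are the i-th matched pair. Keep index i if
    a majority of its distances to the other points are preserved."""
    keep = []
    n = len(pts_a)
    for i in range(n):
        consistent = sum(
            1 for j in range(n) if j != i and
            abs(dist(pts_a[i], pts_a[j]) - dist(pts_b[i], pts_b[j])) <= tol
        )
        if consistent >= (n - 1) / 2:  # majority of pairings agree
            keep.append(i)
    return keep
```

Points whose distance pattern collapses (for example, a wall point whose apparent position shifts with parallax, or an outright mismatch) fail the majority test and are dropped before overlap-region extraction.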
In step S2, a key frame is extracted automatically once the similarity between a frame and the previous key frame is judged to reach a preset value; alternatively, if the similarities of all subsequent frames to the previous key frame remain above the preset value, and the number of frames between the last of those frames and the previous key frame reaches a preset value, the last frame is extracted automatically as a key frame. Preferably, when extracting key frames in step S2: the first video frame serves directly as the first key frame; the n-th key frame is matched frame by frame, with the content-based image matching method, against the video of the following 20 seconds, and the content similarity of each frame to the key frame is computed; the frame of maximum similarity is found, and the subsequent frame at which similarity falls to 50% is taken as the (n+1)-th key frame; alternatively, if the similarities of all frames within the following 20 seconds to the n-th key frame exceed 50%, the last frame of those 20 seconds is taken as the (n+1)-th key frame.
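The selection rule just described can be sketched compactly. This is an illustrative simplification: `similarities` stands in for the patent's content-based matching scores over the 20-second window, which is assumed here to be already sampled into a list at a fixed frame rate.

```python
# Pick the (n+1)-th key frame from similarity scores of the frames in
# the window following key frame n: the first frame whose similarity to
# key frame n falls to the 50% threshold, or the last frame of the
# window if similarity never falls that far.

def next_key_frame(similarities, drop=0.5):
    for i, s in enumerate(similarities):
        if s <= drop:
            return i
    return len(similarities) - 1
```

This gives widely spaced key frames (similarity has decayed to the threshold) while the threshold itself guarantees the "certain overlap" between successive key frames that the map construction relies on.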
When building the global map in step S2, the first video frame serves directly as the first key frame. For the n-th and (n+1)-th key frames, several pairs of matching feature points in the two key frames are sought, and the coordinates (x'_n, y'_n) and (x'_{n+1}, y'_{n+1}) of each feature point in the n-th and (n+1)-th frames are determined; from these, the position difference (x_{n+1,n}, y_{n+1,n}) and heading difference H_{n+1,n} of the (n+1)-th key frame relative to the n-th key frame are calculated. Substituting the global position coordinates (x_n, y_n) of the n-th key frame then yields the global position (x_{n+1}, y_{n+1}) of the (n+1)-th key frame. Iterating frame by frame gives the indoor global position of every frame in the key-frame sequence.
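The frame-by-frame iteration can be illustrated as a chain of 2-D rigid transforms. Since the patent's formulas were not recoverable from this text, the sketch below assumes the standard composition: each relative offset, expressed in the previous key frame's axes, is rotated by the accumulated heading before being added.

```python
# Compose per-pair relative poses (dx, dy, dH) into global key-frame
# positions, starting from an assumed origin pose.
import math

def chain_global_positions(rel_poses, start=(0.0, 0.0, 0.0)):
    """rel_poses[k] = (dx, dy, dH): pose of key frame k+1 relative to
    key frame k, in frame k's axes. Returns (x, y, H) for every frame."""
    poses = [start]
    for dx, dy, dh in rel_poses:
        x, y, h = poses[-1]
        poses.append((
            x + dx * math.cos(h) - dy * math.sin(h),  # rotate offset into
            y + dx * math.sin(h) + dy * math.cos(h),  # global axes, then add
            h + dh,
        ))
    return poses
```

For example, moving forward one unit, turning 90 degrees, then moving forward one unit again places the third key frame at (1, 1) in the global map.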
When the robot's real-time visual image is matched against the key frames in the global map in step S3, the content-based image matching method finds the key frame most similar to the captured video frame; the SURF algorithm then finds the feature points between the video frame and the most similar key frame; the coordinates (x'_n, y'_n) and (x'_{n+1}, y'_{n+1}) of each feature point in the most similar key frame and in the video frame are determined; from these, the position difference (x_{n+1,n}, y_{n+1,n}) and heading difference H_{n+1,n} of the video frame relative to the most similar key frame are calculated. Substituting the global position coordinates (x_n, y_n) of the most similar key frame yields the global position (x_{n+1}, y_{n+1}) of the video frame, solving the robot's position in the global map in real time.
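The real-time localization step reduces to "find the best key frame, then add the relative offset to its global position". The sketch below illustrates only that arithmetic: the similarity scores and the relative offset are placeholders for the content-based matching and SURF steps described above, not implementations of them.

```python
# Localize a live frame: select the most similar key frame, then offset
# that key frame's global position by the frame's relative displacement.

def localize(frame_sims, key_positions, rel_offset):
    """frame_sims[i]: similarity of the live frame to key frame i.
    key_positions[i]: (x, y) of key frame i in the global map.
    rel_offset: (dx, dy) of the live frame relative to the best key frame.
    Returns (best key frame index, global position of the live frame)."""
    best = max(range(len(frame_sims)), key=frame_sims.__getitem__)
    kx, ky = key_positions[best]
    dx, dy = rel_offset
    return best, (kx + dx, ky + dy)
```

Because the position is always re-anchored to a globally placed key frame, an error in one localization does not accumulate into the next, which is what makes the method robust to kidnapping.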
The present invention also provides a robot navigating on a global map, comprising an image acquisition module, a main control module and a movement module. The image acquisition module obtains video images of the surrounding environment and transfers the images to the main control module; the main control module performs the localization and navigation of the robot; the movement module realizes the robot's motion. The main control module comprises an image distortion feature extraction unit, a key-frame extraction and global map construction unit, and a navigation unit. The image distortion feature extraction unit extracts the distortion features in the video images obtained by the image acquisition module. The key-frame extraction and global map construction unit matches each frame of the video shot by the robot according to the content-based image matching method and the image distortion features, extracts a key-frame sequence according to the matching results, then overlaps and connects key frames at adjacent positions to build the global map. The navigation unit matches the robot's real-time visual image against the key frames in the global map according to the content-based image matching method, finds the key frame most similar to the robot's current visual image, solves the robot's global position in real time, localizes and navigates the robot, and outputs movement commands to the movement module.
The content-based image matching method comprises: extracting, according to the image distortion features, the overlap region of the two frames to be matched, and judging the similarity of the two frames. The overlap region is extracted after the two frames are adjusted, by translation and rotation, to the same shooting position and shooting angle. The content-based image matching comprises detecting and rejecting mismatched feature points before extracting the overlap region; the detection and rejection rely on the fact that the lengths of lines joining feature points in the robot's front-view region are constant across different images, and feature points in other regions are weeded out. The content-based image matching method further comprises: reconstructing the overlap region of the first of the two frames by sub-block decomposition matching, forming a reconstructed overlap region; and comparing the reconstructed overlap region with the overlap region of the second frame to analyze similarity. Sub-block decomposition matching decomposes the overlap region of the first frame into several sub-blocks and translates each sub-block sharing features with the second frame to its position in the second frame, forming the reconstructed overlap region.
When extracting key frames, the key-frame extraction and global map construction unit extracts a frame automatically once its similarity to the previous key frame is judged to reach a preset value; alternatively, if the similarities of all subsequent frames to the previous key frame remain above the preset value and the number of frames between the last of those frames and the previous key frame reaches a preset value, the last frame is extracted automatically as a key frame. Preferably, when extracting key frames, the unit takes the first video frame directly as the first key frame; matches the n-th key frame frame by frame, with the content-based matching method, against the video of the following 20 seconds, computing the content similarity of each frame to the key frame; finds the frame of maximum similarity, and takes the subsequent frame at which similarity falls to 50% as the (n+1)-th key frame; alternatively, if the similarities of all frames within the following 20 seconds to the n-th key frame exceed 50%, it takes the last frame of those 20 seconds as the (n+1)-th key frame.
When the key-frame extraction and global map construction unit builds the global map, the first video frame serves directly as the first key frame. For the n-th and (n+1)-th key frames, several pairs of matching feature points in the two key frames are sought, and the coordinates (x'_n, y'_n) and (x'_{n+1}, y'_{n+1}) of each feature point in the n-th and (n+1)-th frames are determined; from these, the position difference (x_{n+1,n}, y_{n+1,n}) and heading difference H_{n+1,n} of the (n+1)-th key frame relative to the n-th key frame are calculated. Substituting the global position coordinates (x_n, y_n) of the n-th key frame then yields the global position (x_{n+1}, y_{n+1}) of the (n+1)-th key frame. Iterating frame by frame gives the indoor global position of every frame in the key-frame sequence.
When the navigation unit matches the robot's real-time visual image against the key frames in the global map, the content-based image matching method finds the key frame most similar to the captured video frame; the SURF algorithm then finds the feature points between the video frame and the most similar key frame; the coordinates (x'_n, y'_n) and (x'_{n+1}, y'_{n+1}) of each feature point in the most similar key frame and in the video frame are determined; from these, the position difference (x_{n+1,n}, y_{n+1,n}) and heading difference H_{n+1,n} of the video frame relative to the most similar key frame are calculated. Substituting the global position coordinates (x_n, y_n) of the most similar key frame yields the global position (x_{n+1}, y_{n+1}) of the video frame, solving the robot's position in the global map in real time.
(3) Advantageous effects
(1) The navigation method provided by the invention, and the robot navigating by this method, target the complicated but very common interior environment of multiple rooms plus corridors. By matching images on content and analyzing similarity, the robot can extract key frames from the video shot while learning the indoor environment, build a complete indoor global map of the building from those key frames, and thereby localize itself autonomously.
(2) Using the content-based image matching method both when building the global map and when navigating with it improves the accuracy of robot navigation. The content-based method starts from the camera model, analyzes how image content is distorted during shooting, and establishes an image-content analysis and matching method around the features of these distortions, comprising three links: extracting the overlap region between images, reconstructing the overlap region by sub-block decomposition matching, and analyzing the similarity of the reconstructed overlap region. It can accurately compute the content similarity of any two frames shot by the robot.
(3) The invention provides a key-frame global-map construction algorithm based on images and a content-based image-matching autonomous localization algorithm. Content-based image matching ensures that the extracted key frames are spaced as widely as possible while successive key frames keep a certain overlap, so that their topological relations can be established by feature analysis to form the global map. Matching the image content of the robot's real-time vision against the map's key-frame sequence for localization effectively eliminates interference from similar objects and the kidnapped-robot problem common in SLAM algorithms.
Description of the drawings
Fig. 1 is a flow chart of the global-map-based robot navigation method of the present invention;
Fig. 2 is a schematic diagram of the robot navigating on a global map in one embodiment of the invention;
Fig. 3 is a schematic diagram of the relation between a wall and the robot camera in one embodiment of the invention;
Fig. 4 is an image-processing flow chart of the content-based image matching method in one embodiment of the invention;
Fig. 5 is a schematic diagram of the rotation, translation and overlap-region extraction steps of image processing in one embodiment of the invention;
Fig. 6 is a schematic diagram of the equal lengths of ceiling line segments in one embodiment of the invention;
Fig. 7 is a schematic diagram of detecting and rejecting wall feature points according to the contour features of the ceiling in one embodiment of the invention;
Fig. 8 is a schematic diagram of the overlap-region reconstruction process based on sub-block decomposition matching in one embodiment of the invention;
Fig. 9 is a schematic diagram of an overlap-region reconstruction that eliminates mismatches in one embodiment of the invention;
Fig. 10 is a schematic diagram of the image reconstruction result when there is no overlap region in one embodiment of the invention;
Fig. 11 is a schematic diagram of the image-processing flow of content-based key-frame extraction and global-map construction in one embodiment of the invention;
Fig. 12 is a schematic diagram of the process of matching the robot's real-time vision against the key-frame global map for localization in one embodiment of the invention;
Fig. 13 shows experimental results for the global position relations of the key-frame sequence processed by the robot in one embodiment of the invention;
Fig. 14 shows experimental results for the robot's motion trajectory, solved by content-based matching of the robot's vision against the key-frame global map, in one embodiment of the invention.
Specific embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with specific embodiments and with reference to the attached drawings.
The present invention provides a robot navigation method based on a global map that makes full use of the layout and shape information of various indoor objects, automatically extracting from video shot by the robot a key-frame sequence whose frames are widely spaced yet keep a certain overlap. It can build a global map for an indoor environment as complex as "multiple rooms plus corridors", and the robot can achieve accurate autonomous localization. Based on robot vision with a camera shooting vertically upward, the method establishes distortion models of the ceiling and walls through the camera model, extracts their features, and establishes a content-based image matching method. This method realizes the comparison of the content similarity of any two frames through three links: overlap-region extraction, overlap-region reconstruction by sub-block decomposition matching, and similarity analysis of the reconstructed overlap region. Ceiling feature-point detection and reconstruction sub-block detection methods are also devised, effectively improving the accuracy of the similarity computation. Processing the indoor-environment video collected while the robot learns the environment with this method effectively extracts a key-frame sequence whose frames differ greatly in content yet keep a certain mutual overlap; building the indoor global map from these key frames enables accurate autonomous robot localization.
The present invention simultaneously provides a robot navigating on a global map, comprising an image acquisition module, a main control module and a movement module. The image acquisition module obtains video data of the surrounding environment and transmits the image data to the main control module. Based on the global-map navigation method of the invention, the main control module extracts the distortion features in the image data, extracts the key-frame sequence from the video images and builds the global map, then matches images shot by the robot in real time against the built global map to realize robot localization and navigation, and sends specific robot movement commands to the movement module. The movement module realizes the robot's motion.
(1) Structure of the global-map-based robot navigation method of the invention:
As shown in Fig. 1, the global-map-based robot navigation method comprises the following three links:
(1) Image distortion modeling and feature extraction: mathematical models are established for image distortion, and the image distortion features are extracted for the front-view region of the robot's visual angle (e.g. the ceiling, when the robot's visual angle is vertically upward) and the surrounding side-view regions to the front, rear, left and right of the robot's visual angle (e.g. the wall areas when the robot's visual angle is vertically upward, including walls, doors, windows and furniture).
(2) Content-based key-frame sequence extraction and global-map construction: using the content-based image matching method, the indoor-environment video collected by the robot during learning is processed, the similarity of any two frames in the video shot by the robot is analyzed, and the key-frame sequence is extracted from the video according to the matching results; the indoor global position of each key frame is determined, and key frames at adjacent positions are overlapped and connected to build the indoor global map. The content-based image matching method is established on the extracted image distortion features and comprises: extracting the overlap region between images, reconstructing the overlap region by sub-block decomposition matching, and comparing the similarity of the reconstructed region and the overlap region.
(3) Robot navigation using the global map: using the content-based image matching method, the robot's subsequent real-time visual images are matched against the key-frame sequence in the map, and the key frame most similar to the robot's current real-time visual image is found. The feature points between the video frame and the most similar key frame are then extracted and the relative position of the two is solved. Finally, adding the relative position of the two to the key frame's global position yields the robot's global position at the moment the current visual frame was shot.
(2) The robot navigating on a global map of the invention:
The robot navigating on a global map comprises the following three modules:
(1) Image acquisition module: a single camera or dual cameras may be employed to obtain video image data of the surrounding environment; the image data is transferred to the main control module. According to the facing angle of the robot camera, the surrounding environment can be divided into two parts: the robot's front-view region, which the camera faces directly, and the robot's side-view regions, which require the camera to pitch and rotate.
(2) Main control module based on the global-map navigation method: the main control module comprises three units in total, an image distortion feature extraction unit, a key-frame extraction and global map construction unit, and a navigation unit;
Image distortion feature extraction unit: establishes mathematical models for image distortion and extracts the image distortion features of the robot's front-view region and side-view regions;
Key-frame extraction and global map construction unit: using the content-based image matching method, processes the indoor-environment video collected by the robot during learning, analyzes the similarity of any two frames in the video shot by the robot, and extracts the key-frame sequence from the video according to the matching results; determines the indoor global position of each key frame, overlaps and connects key frames at adjacent positions, and builds the indoor global map.
Navigation unit: matches the robot's real-time visual image, according to the content-based image matching method, against the key-frame sequence in the global map, finds the key frame most similar to the robot's current real-time visual image, extracts the feature points between the video frame and the most similar key frame, and solves the relative position of the two. Finally, adding the relative position of the two to the key frame's global position solves the robot's global position at the moment the current visual frame was shot. It then issues navigation action commands to the robot according to the current position, which are transmitted to the robot's movement module.
(3) Movement module: responsible for realizing the robot's motion; its implementation includes, but is not limited to, wheeled, legged or tracked moving parts.
The realization of the global-map-based robot navigation method of the present invention, and a robot navigating by the method, are described in detail below in conjunction with concrete implementation schemes. It should be understood that the following embodiments are based on robot vision shooting vertically upward, in order to state the front-view and side-view regions of the robot's visual angle clearly; the applicable situations of the invention include but are not limited to this case. Likewise, the concrete realization of each step of the invention includes but is not limited to the following embodiments. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the invention shall be included in the protection scope of the invention.
Fig. 2 is a schematic diagram of the robot navigating on a global map in one embodiment of the invention. As shown in Fig. 2, the robot image acquisition module of this embodiment is a single camera shooting vertically upward. The robot's front-view region, which the camera faces directly, is mainly the ceiling; the robot's side-view regions, parallel to the camera axis and located around the robot, are mainly walls, furniture, doors and windows. The robot's main control module runs on a MINI computer, operating according to the global-map-based robot navigation method; it realizes the image processing of the camera footage, global-map construction, and robot localization and navigation, and sends movement control commands. The robot's movement module is the robot's motion actuator; it receives the motion control commands sent by the MINI computer and performs the corresponding movements.
(1) The working process of the main control module of the robot of one embodiment of the invention on the MINI computer, i.e. the realization process of the global-map-based robot navigation method, comprises:
(1) Image distortion modeling and feature extraction
For the images to be matched (frame A, frame B), the scenery in them shows different image distortions because of differences in the robot's visual angle and displacement. The present invention establishes mathematical models for these distortions, extracts their features, and on this basis adjusts the distortions of the two frames to be consistent, realizing accurate matching of image similarity.
1. Viewing angle and displacement characteristics when the robot shoots indoor objects
The process by which a three-dimensional real-world scene becomes a two-dimensional photograph can be described by the intrinsic and extrinsic parameter model of the camera:

$$Z_c \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} f/dx & 0 & 0 \\ 0 & f/dy & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \mathbf{R} & \mathbf{T} \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \qquad (1)$$

In the formula, the extrinsic parameters $(\mathbf{R} \;\; \mathbf{T})$ are composed of the camera attitude (roll R, pitch P, heading H) and the displacement of the robot; (X, Y, Z) is the position of a point on an object in space; (x, y) is the coordinate of its corresponding point in the visual image; f is the focal length of the camera; and dx and dy are the pixel scales of the camera. f, dx and dy are all constants of the camera.
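As an illustrative sketch, not part of the patent, the pinhole projection of formula (1) can be written in a few lines of Python. The rotation composition order and the sample values of the focal length f and the scales dx, dy below are assumptions for demonstration only.

```python
import numpy as np

def rotation_rph(roll, pitch, heading):
    """Rotation matrix composed from roll R, pitch P and heading H (radians).
    The Z-Y-X composition order is an assumption; the patent only names the angles."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ch, sh = np.cos(heading), np.sin(heading)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(point_xyz, roll, pitch, heading, t, f=4e-3, dx=2e-6, dy=2e-6):
    """Project a world point (X, Y, Z) to image coordinates (x, y):
    rotate/translate into the camera frame, then apply the pinhole model."""
    Xc, Yc, Zc = rotation_rph(roll, pitch, heading) @ np.asarray(point_xyz) + np.asarray(t)
    return (f / dx) * Xc / Zc, (f / dy) * Yc / Zc
```

With zero attitude and displacement the projection reduces to pure perspective scaling by f/dx and f/dy.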
Fig. 3 is a schematic diagram of the relationship between a wall and the robot camera in one embodiment of the present invention. Taking the robot's viewing angle in Fig. 3 as reference, indoor objects can be divided into two classes: the ceiling, viewed head-on by the robot, and the wall-body areas viewed from the side (including walls, furniture, doors, windows, etc.).
For the ceiling, which is parallel to the ground on which the robot moves, only the camera translations T_x, T_y and the heading angle H vary; pitch and roll are both 0.
For a wall-body area, because it is perpendicular to the robot's plane of motion (the floor) and fixed in position, the degrees of freedom with which the robot can shoot the wall are heavily constrained. Taking the wall as reference, let the direction perpendicular to the wall be the X axis, the direction parallel to the ground along the wall intersection be the Y axis, and the direction perpendicular to the ground be the Z axis. Only free rotation about the Z axis is possible between the wall and the robot, i.e., only the heading changes; denote it H degrees. Because the ground is perpendicular to and fixed relative to the wall, for a camera shooting the ceiling vertically, a wall in the side-view region is equivalent to the ceiling pitched 90 degrees about the Y axis (or rolled 90 degrees), unchanged about the X axis, i.e., roll 0 degrees (or pitch 0 degrees). The precise definition depends on whether the wall lies in front of or behind the robot, or to its sides: front and rear walls have pitch 90 degrees and roll 0 degrees, while side walls have roll 90 degrees and pitch 0 degrees.
These different shooting angles, combined with the displacement of the robot, cause different distortions of the ceiling and wall content in the robot's vision; as can be seen from formula (1), the robot's displacement also distorts the shooting result. The present invention therefore models the image distortions of the ceiling and the walls through the camera model and analyzes their features.
2. Ceiling image distortion modeling and feature analysis
For an indoor environment, the ceiling is the robot's front-view region: its pitch and roll angles are both 0, all points on the ceiling have the same height, and as the robot moves on the floor the camera height does not change. The influence of these conditions on photography can be expressed through the camera model: formula (1) can be transformed and then reduced to an affine form.
This is the form of an affine transformation, in which only rotation and translation distortions exist:

$$\begin{pmatrix} x_B \\ y_B \end{pmatrix} = \begin{pmatrix} \cos\Delta H & \sin\Delta H \\ -\sin\Delta H & \cos\Delta H \end{pmatrix} \begin{pmatrix} x_A \\ y_A \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix} \qquad (5)$$

For the images to be matched, frame A and frame B, their feature points are obtained with the SURF algorithm; from formula (5) the heading angle difference and the translation difference of the two frames can be computed, and rotating and translating the two frames then adjusts them to the same shooting position and shooting angle. Through this process, if the two frames contain the same scenery, it is brought to the same image position, enabling similarity analysis.
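The alignment step above, solving the heading difference and translation from matched SURF feature points, can be sketched as a 2D rigid fit. The closed-form angle estimate below is a standard least-squares solution standing in for the patent's formula (5); the point sets are assumed to be already matched pairs.

```python
import numpy as np

def estimate_heading_translation(pts_a, pts_b):
    """Least-squares 2D rigid fit: find heading difference dH and translation
    (tx, ty) such that R(dH) @ a + t ~ b for matched feature points."""
    a = np.asarray(pts_a, float)
    b = np.asarray(pts_b, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    a0, b0 = a - ca, b - cb
    # Closed-form best rotation angle for a 2D point-set alignment:
    # arctan2 of summed cross products over summed dot products.
    num = np.sum(a0[:, 0] * b0[:, 1] - a0[:, 1] * b0[:, 0])
    den = np.sum(a0[:, 0] * b0[:, 0] + a0[:, 1] * b0[:, 1])
    dh = np.arctan2(num, den)
    R = np.array([[np.cos(dh), -np.sin(dh)], [np.sin(dh), np.cos(dh)]])
    t = cb - R @ ca
    return dh, t
```

Applying the recovered rotation and translation to frame A warps it into frame B's shooting pose.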
3. Wall image distortion modeling and feature analysis
A wall in front of the robot with 90-degree pitch and a wall to the side with 90-degree roll differ only as if the robot, after shooting the front wall, had rotated 90 degrees and shot the wall again from the side; i.e., the two cases differ by a 90-degree (or -90-degree) heading angle, and the shape distortion of the scenery in the photographs is the same.
For the wall in front of the robot, when the heading becomes H, the mathematical model of its mapping into the two-dimensional photograph is given by formula (7), which simplifies to formula (8). For a wall at the robot's side, when the heading becomes H, the corresponding model is formula (10), which simplifies to formula (11).
From formulas (7), (8), (10) and (11), if the X and Y axes are swapped, the distortion of a side wall is identical to that of the front wall, so only one kind of wall distortion needs to be analyzed. The present invention analyzes the distortion features of wall images using the front wall.
Let the shooting heading of frame A be H and the robot displacement be t_x, t_y, and let the parameters of frame B be H+ΔH, t_x+Δt_x, t_y+Δt_y, giving the expression for frame B. After frame A is processed with formula (5), identical ceiling content in frames A and B is adjusted to the same position, but the wall image in frame A changes as described by formula (14), in which (x_AB, y_AB) are the coordinates of a point in frame A after the transformation. Formula (14) can be rearranged; if a second translation is then applied with the translation amounts (S_x, S_y) given by formulas (17) and (18), the translated result follows as formulas (19) and (20).
Compared with the original frame A, the frame A after this second translation is very close to frame B. To reduce the influence of the denominator differences, and considering that the second translation contains many unknown parameters (such as (X, Y, Z)), the present invention divides the image into several small regions using a sub-block matching reconstruction method and solves each region's own (S_x, S_y), and designs a detection method to eliminate the mismatches caused by similar objects. If the images to be matched contain identical content, the number of correctly matched sub-blocks will be large.
For images to be matched that have captured the same indoor area, the scenery of that area and its layout are identical in both frames; therefore, after the rotation and translation of formula (5) and the second translation of formulas (19) and (20), the images to be matched become very similar, which greatly benefits image matching. For the ceiling region, analyzing the similarity with the correlation coefficient after translation and rotation is highly effective, while the count of correctly matched sub-blocks output by the sub-block matching reconstruction method is highly effective for the similarity analysis of the wall areas. The present invention therefore applies the correlation coefficient and the matched sub-block count together to realize image matching and similarity analysis.
(2) Content-based key frame sequence extraction and global map construction
Using the content-based image matching method, the video shot by the robot's vision is matched frame by frame to judge the similarity of image pairs, and the key frame sequence is extracted according to the matching results. The global position of each frame of the key frame sequence indoors is then solved, and adjacent key frames are overlapped and connected to build the indoor global map.
The content-based image matching method comprises three steps: extracting the overlap region between the images, reconstructing the overlap region based on sub-block matching, and comparing the similarity of the reconstructed region with the overlap region.
The image processing flow of the content-based image matching method is shown in Fig. 4. For frames A and B, content-based matching comprises: a first rotation and translation, which extracts the overlap region between the images; and a second translation, in which the overlap region of frame A is divided into sub-blocks that are matched against frame B for reconstruction. Comparing the similarity of frame A's reconstructed overlap region with frame B's overlap region then assesses the two frames.
1. Extracting the overlap region between the images
This step adjusts frames A and B to the same shooting position and heading angle; if the two frames photographed some identical image content, those same objects will lie in the overlap region of the two frames.
The image processing flow of rotation, translation and overlap-region extraction is shown in Fig. 5, where (a) is frame A, (b) is frame B, (c) is the translation, (d) is the rotation, (e) is the overlap of the two frames and (f) is the overlap-region mask. The feature points of frames A and B are extracted with the SURF method and substituted into formula (5) to obtain the heading and translation amounts; frames A and B are then adjusted accordingly, as shown in Fig. 5 (c) and (d). The overlap of the two frames is clearly visible in Fig. 5 (e). Extracting the region containing points of both images and making it a mask (Fig. 5 (f)) yields the overlap region of the two images.
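A minimal sketch of the overlap-region extraction: after frame A is warped toward frame B, the overlap mask marks the pixels covered by both frames. The rotation and sub-pixel interpolation steps are omitted here for brevity, and the fill value marking invalid pixels is an assumption.

```python
import numpy as np

def translate(img, dx, dy, fill=-1):
    """Integer-pixel translation; cells with `fill` mark pixels with no source data."""
    h, w = img.shape
    out = np.full((h, w), fill, dtype=img.dtype)
    ys0, ys1 = max(dy, 0), min(h, h + dy)
    xs0, xs1 = max(dx, 0), min(w, w + dx)
    out[ys0:ys1, xs0:xs1] = img[ys0 - dy:ys1 - dy, xs0 - dx:xs1 - dx]
    return out

def overlap_mask(frame_a, dx, dy, fill=-1):
    """Warp frame A by (dx, dy) toward frame B and mask the pixels both cover,
    assuming frame B fills its whole image (rotation omitted for brevity)."""
    warped = translate(frame_a, dx, dy, fill)
    return warped, (warped != fill)
```

The boolean mask plays the role of Fig. 5 (f): only pixels inside it enter the later similarity analysis.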
If there are feature points on the wall rather than only on the ceiling, the computed heading and displacement, and in turn the overlap-region extraction, will be affected. A method is therefore designed that detects and rejects wall feature points according to the contour features of the ceiling, as shown in Fig. 6 and Fig. 7. Fig. 6 illustrates that a ceiling line segment keeps the same length in images taken from different viewing angles, and Fig. 7 shows the effect of detecting and rejecting wall feature points according to the ceiling contour features. For feature points a and b on the ceiling, their distances d_{A,ab}, d_{B,ab} in frames A and B satisfy the following relation. In the formulas, (X_a, Y_a, Z_a) and (X_b, Y_b, Z_b) are the coordinates of points a and b in the interior space; (x_{A,a}, y_{A,a}), (x_{A,b}, y_{A,b}) and (x_{B,a}, y_{B,a}), (x_{B,b}, y_{B,b}) are the coordinates of points a and b in frames A and B respectively; (T_{X,A}, T_{Y,A}) and (T_{X,B}, T_{Y,B}) are the robot displacements when frames A and B were shot. The formulas show that the length of a line segment between ceiling feature points is unchanged between frames A and B, i.e., d_{A,ab} = d_{B,ab}, as with the ceiling line segment in Fig. 6.
If a feature point b_W lies on the wall (at height T_w, lower than the ceiling), the above relation no longer holds. Let the lengths of the line between points a and b_W in frames A and B be d_{A,abw} and d_{B,abw}; in the corresponding formulas, (x_{A,bw}, y_{A,bw}) and (x_{B,bw}, y_{B,bw}) are the coordinates of b_W in frames A and B. The ΔH in the denominator of formula (24) causes d_{A,abw} ≠ d_{B,abw}: the two line segments differ in length. Therefore, if the line between two feature points differs in length between frames A and B, the points are judged to be wall points and can be rejected, while the feature points on the ceiling are retained; the processing effect is shown in Fig. 7.
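The segment-length test above can be sketched as follows. Because a segment between a ceiling point and a wall point also changes length, the sketch uses a majority vote over all segments incident to a point; this voting rule is an assumption added for the sketch, as the patent only states that points on length-changing segments are rejected.

```python
import numpy as np

def reject_wall_points(pts_a, pts_b, tol=2.0):
    """Keep matched feature points whose pairwise segment lengths are (nearly)
    equal in frames A and B, the ceiling-invariance property; a point most of
    whose incident segments changed length is judged a wall point."""
    a, b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    n = len(a)
    votes = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            da = np.linalg.norm(a[i] - a[j])
            db = np.linalg.norm(b[i] - b[j])
            if abs(da - db) > tol:   # segment length changed between frames
                votes[i] += 1
                votes[j] += 1
    return votes <= (n - 1) / 2.0    # True = keep (ceiling point)
```

The kept points are then used to solve heading and translation with formula (5).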
For the ceiling region of the images, if they contain the same scenery, it lies at the same position in the overlap regions of frames A and B, and the image-content similarity can be assessed by the correlation coefficient of the two overlap regions. For wall images, however, the shape distortion must first be corrected by the second translation.
2. Overlap-region reconstruction based on sub-block matching
Fig. 8 is a schematic diagram of the sub-block-decomposition-based overlap-region reconstruction process in one embodiment of the present invention. The sub-block matching method replaces formulas (17) and (18), which contain unknown quantities, for obtaining the translation amounts S_x, S_y. As shown in Fig. 8, the overlap region of frame A is decomposed into multiple sub-blocks, which are matched against the overlap region of frame B. If the overlap regions of frames A and B contain identical content, each frame-A sub-block can find its best-fitting position in frame B, yielding each sub-block's translation amounts S_x, S_y, and the reconstructed frame-A overlap region is very similar to the frame-B overlap region. If frames A and B have no identical content, very few sub-blocks can be matched.
The matched position of each frame-A sub-block in frame B is solved with the SAD (sum of absolute differences) method of formula (25), in which A_T is a sub-block of the frame-A overlap region (size M × N) and B_L is the overlap region of frame B (size L × D). B_L is traversed, and the (i, j) minimizing formula (25) gives the sub-block A_T's translation amounts S_x, S_y.
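The SAD search of formula (25) can be sketched directly; this exhaustive version is for illustration and makes no claim about the patent's actual search strategy.

```python
import numpy as np

def sad_match(block, search_img):
    """Exhaustive SAD search: slide the M x N block (the patent's A_T) over the
    search image (the overlap region B_L) and return the offset (i, j)
    minimizing the sum of absolute differences."""
    m, n = block.shape
    rows, cols = search_img.shape
    best_sad, best_ij = None, (0, 0)
    for i in range(rows - m + 1):
        for j in range(cols - n + 1):
            sad = np.abs(search_img[i:i + m, j:j + n] - block).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_ij = sad, (i, j)
    return best_ij, best_sad
```

The returned offset corresponds to the sub-block's translation amounts S_x, S_y.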
Fig. 9 is a schematic diagram of the overlap-region reconstruction effect after eliminating mismatches in one embodiment of the present invention. To eliminate the mismatches caused by similar objects, a method for identifying mismatched sub-blocks is designed. According to formulas (17) and (18), the sub-block translation amounts can be written in the matrix form of formula (26). For two images shot by the robot at different positions and angles, the same scenery lies at different image positions, which appears as relative movement, in two situations: 1) when the robot translates, objects move relatively in the image, the amount also depending on the height of the photographed object above the ground (the lower the object, the larger the movement); 2) when the robot rotates, the shooting angle changes, and indoor objects move to different image positions depending on their distance from the center of rotation. In formula (26), the first two terms are the movement caused by the robot's rotation: the core of the first term characterizes the displacement produced by the rotation ΔH through its projection sin ΔH, while the second term is consistent with the rotation matrix of formula (5) and is the rotation amount applied to the wall image during overlap-region extraction. The two are subtracted because, after the overlap region has been extracted, the second translation (S_x, S_y) corrects the residual rotation-induced wall movement between frames A and B caused by ΔH. The third and fourth terms are composed of the robot's translation: the third characterizes the image distortion produced by the robot's translation for walls of different heights, and the fourth is the translation amount applied during overlap-region extraction; they are likewise subtracted because, after the overlap region has been extracted, the second translation (S_x, S_y) corrects the image distortion between frames A and B caused by wall height.
Therefore, for this feature of (S_x, S_y) (composed of a translation part and a rotation part), the affine transformation model can judge, from the displacement difference and heading difference of frames A and B and from each sub-block's position, whether the displacement (S_x, S_y) solved by formula (25) for that sub-block is accurate. And because the bracketed factor in formula (26) is the scale factor of the projection of objects onto the photo, (X, Y, Z) in formula (25) can be replaced directly by the sub-block position (x, y) in the photo. For a sub-block located at (x, y) in frame A, the discrimination thresholds ΔP_x, ΔP_y for (S_x, S_y) can be solved from the affine transformation, in which Δx_T, Δy_T, ΔH_T are the set translation and rotation thresholds; to ensure that all mismatched sub-blocks are deleted, they are set slightly below the maxima of the frame A, B displacement difference and heading angle difference remaining after overlap-region extraction. If the absolute difference between the translation amounts S_x, S_y of the sub-block at (x, y) and its position exceeds the thresholds ΔP_x, ΔP_y, the sub-block is judged a mismatch and deleted. After the mismatched sub-blocks are deleted, the reconstruction of the frame-A overlap region is quite similar to the frame-B overlap region, as shown in Fig. 9.
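A hedged sketch of the mismatch test: a sub-block's second-translation amount (S_x, S_y) is compared against thresholds that grow with the block's distance from the image origin, reflecting the rotation term of formula (26). The exact threshold form below is a simplified reading for illustration, not the patent's formula.

```python
import numpy as np

def filter_mismatches(positions, shifts, dx_t, dy_t, dh_t):
    """Delete sub-blocks whose translation (sx, sy) exceeds position-dependent
    thresholds: a base translation bound plus a rotation term scaled by the
    block position (a simplified stand-in for the patent's dP_x, dP_y)."""
    keep = []
    for (x, y), (sx, sy) in zip(positions, shifts):
        px = dx_t + abs(y) * abs(np.sin(dh_t))   # threshold for |sx|
        py = dy_t + abs(x) * abs(np.sin(dh_t))   # threshold for |sy|
        keep.append(abs(sx) <= px and abs(sy) <= py)
    return keep
```

Blocks failing the test are treated as similar-object mismatches and excluded from the reconstruction.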
If the two images have essentially no identical content, frame A's reconstructed overlap region differs greatly from frame B, as shown in Fig. 10. The sub-block-matching overlap-region reconstruction method therefore makes it easy to assess whether two images contain the same content.
3. Comparing the similarity of the reconstructed region and the overlap region
The similarity of frames A and B can be assessed through the similarity S_AB between frame A's reconstructed overlap region and frame B's overlap region. S_AB is composed of the correlation coefficient, which is sensitive to ceiling images, and the count of correctly matched sub-blocks N_k, which is sensitive to wall images; it is the product of the two. In the formula, C_A(x, y) is a pixel value of frame A's reconstructed overlap region, C_B(x, y) is a pixel value of frame B's overlap region, and $\bar{C}_A$, $\bar{C}_B$ are the pixel averages of the two frames.
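The combined score, the correlation coefficient of the two overlap regions multiplied by the correctly matched sub-block count N_k, can be sketched as:

```python
import numpy as np

def similarity(rebuilt_a, overlap_b, n_matched):
    """S_AB sketch: Pearson correlation of the rebuilt frame-A overlap region
    and the frame-B overlap region, weighted by the matched sub-block count."""
    ca = rebuilt_a - rebuilt_a.mean()
    cb = overlap_b - overlap_b.mean()
    denom = np.sqrt((ca ** 2).sum() * (cb ** 2).sum())
    corr = (ca * cb).sum() / denom if denom > 0 else 0.0
    return corr * n_matched
```

For identical regions the correlation is 1, so the score reduces to the matched sub-block count.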
The image processing flow of key-frame extraction and global-map construction according to the above content-based image matching method is shown in Fig. 11. The first video frame serves directly as the first key frame; subsequent key frames are extracted as follows:
Step 1: The n-th key frame is matched frame by frame, with the above content-based matching method, against the following 20 seconds of video (60 frames), and their content similarities with the key frame are solved.
Step 2: The maximum similarity is found; the video frame corresponding to the maximum is the closest to the n-th key frame. The video frame at which the similarity has dropped to 50% is taken as the (n+1)-th key frame; if the similarities of all 60 frames are greater than 50%, the last frame is taken as the (n+1)-th key frame. The 50% similarity ensures that the n-th and (n+1)-th key frames overlap and connect.
Steps 1 and 2 are repeated to extract the key frame sequence.
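The Step 1/Step 2 loop can be sketched as follows, with `match` standing in for the content-based similarity (returning a value in [0, 1]); the 60-frame window and 50% threshold are taken from the text.

```python
def extract_keyframes(frames, match, window=60, drop=0.5):
    """Key-frame selection sketch: starting from the first frame, scan the next
    `window` frames; the next key frame is the first whose similarity to the
    current key frame has fallen to `drop`, or the last frame of the window
    if the similarity never falls that far."""
    keys = [0]          # the first video frame is the first key frame
    k = 0
    while k + 1 < len(frames):
        end = min(k + window, len(frames) - 1)
        nxt = end
        for i in range(k + 1, end + 1):
            if match(frames[k], frames[i]) <= drop:
                nxt = i
                break
        keys.append(nxt)
        k = nxt
    return keys
```

With a similarity that decays with frame distance, key frames appear exactly where overlap shrinks to the threshold.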
After the key frame sequence is extracted, the global map is built. The key-frame global map comprises two parts: the key frame sequence, and the global position of each key frame indoors.
The global positions are solved as follows. The (n+1)-th key frame overlaps and connects with the n-th key frame; the SURF algorithm extracts the pairs of feature points in the two frames, which are substituted into formula (5) to solve the relative position of the (n+1)-th key frame with respect to the n-th (position difference x_{n+1,n}, y_{n+1,n}, heading difference H_{n+1,n}); accumulating according to formula (29) then gives the global position of the (n+1)-th key frame:

$$\begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} \cos H_{n+1,n} & \sin H_{n+1,n} \\ -\sin H_{n+1,n} & \cos H_{n+1,n} \end{pmatrix} \begin{pmatrix} x_{n+1,n} \\ y_{n+1,n} \end{pmatrix} + \begin{pmatrix} x_n \\ y_n \end{pmatrix} \qquad (29)$$

In the formula, (x_n, y_n) and (x_{n+1}, y_{n+1}) are the global positions of the n-th and (n+1)-th frames. Iterating formula (29) frame by frame yields the global position of every frame of the key frame sequence indoors.
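The iteration of formula (29) can be sketched directly; each relative pose (x_{n+1,n}, y_{n+1,n}, H_{n+1,n}) is rotated and accumulated onto the previous global position.

```python
import numpy as np

def accumulate_positions(rel_poses, start=(0.0, 0.0)):
    """Iterate formula (29): each key frame's global position is the previous
    position plus the rotated relative offset.
    rel_poses: list of (x_rel, y_rel, heading_diff) between frames n and n+1."""
    xs = [np.asarray(start, float)]
    for x_r, y_r, h in rel_poses:
        R = np.array([[np.cos(h), np.sin(h)],
                      [-np.sin(h), np.cos(h)]])
        xs.append(R @ np.array([x_r, y_r]) + xs[-1])
    return xs
```

The starting position of the first key frame is an assumption (taken as the origin here).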
The number of key frames extracted by this method is small, their spacing is large and their content independence is high, laying the foundation for the robot's rapid matching and real-time localization. In the experiment, the indoor-environment video for robot learning lasted 11 minutes (1752 frames in total), from which 72 key frames were extracted to build the key-frame global map.
(3) Robot navigation using the global map
Fig. 12 is a schematic diagram of the process of matching the robot's real-time vision against the key-frame global map for localization in one embodiment. Using the content-based image matching method, the robot's subsequent real-time vision images are matched against the key frame sequence in the map to find the key frame most similar to the robot's current vision image. The same solution method as for the key-frame global positions is used: first, the feature points between the video frame and the most similar key frame are extracted and substituted into formula (5) to obtain their relative position; then, substituting into formula (29) according to the method for the (n+1)-th key frame's global position, the global position of the robot is obtained in real time.
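The real-time localization step can be sketched as follows; `match` and `rel_pose` are placeholders for the content-based matching and the formula (5) solution described above, not names from the patent.

```python
import numpy as np

def localize(current, keyframes, positions, match, rel_pose):
    """Find the most similar key frame by the content-based score, then add the
    rotated relative pose (as in formula (29)) to its global position."""
    best = max(range(len(keyframes)),
               key=lambda i: match(current, keyframes[i]))
    x_r, y_r, h = rel_pose(current, keyframes[best])
    R = np.array([[np.cos(h), np.sin(h)],
                  [-np.sin(h), np.cos(h)]])
    return best, R @ np.array([x_r, y_r]) + np.asarray(positions[best], float)
```

Because every key frame carries a global position, a single best match plus one relative pose fixes the robot's position without odometry accumulation.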
(2) Experimental results
The robot navigation method based on the global map in one embodiment of the present invention, and the robot navigating with this method, were tested in a large, complex indoor environment. The experimental area consists of one corridor and two rooms; the corridor is 26 meters long and 3 meters wide, and each room averages 20 square meters.
Fig. 13 shows the experimental result of the global position relationship of the key frame sequence processed by the robot in one embodiment. During indoor-environment learning, the robot acquired 11 minutes of video, 1752 frames in total; after processing with the method of the present invention, 72 key frames were extracted to build the global map. These key frames, spliced according to their global positions as shown in Fig. 13, describe the indoor environment (ceiling, doors, windows, walls, furniture) fairly clearly, with a map accuracy of ≤ 0.7 meters.
Fig. 14 shows the experimental result of the robot's movement trajectory obtained by content-based matching of the robot's vision against the key-frame global map. When the map is in use, the key frame sequence is matched with the robot's vision for frame-by-frame localization, and the robot's driving trajectory is computed in real time.
The specific embodiments described above further explain the purpose, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (20)
1. A robot navigation method based on a global map, characterized by comprising:
S1, making the robot shoot a video of the indoor ceiling and wall areas, and extracting the image distortion features of each frame of the video;
S2, using a content-based image matching method, matching each frame of the shot video according to the image distortion features, extracting a key frame sequence according to the matching results, and then overlapping and connecting key frames at adjacent positions to build the global map;
S3, using the content-based image matching method, matching the robot's real-time vision image with the key frames in the global map, finding the key frame most similar to the robot's current vision image, and solving the global position of the robot in real time, realizing robot navigation.
2. The robot navigation method based on a global map according to claim 1, characterized in that the content-based image matching method comprises: extracting an overlap region from the two images to be matched according to the image distortion features, and judging the similarity of the two images.
3. The robot navigation method based on a global map according to claim 2, characterized in that the overlap region is extracted from the two images after they have been adjusted, by translation and rotation, to the same shooting position and shooting angle.
4. The robot navigation method based on a global map according to claim 2, characterized in that the content-based image matching comprises detecting and rejecting mismatched feature points before extracting the overlap region, the detection and rejection exploiting the fact that the length of a line between feature points of the robot's front-view region is constant across different images, thereby weeding out the feature points of other regions.
5. The robot navigation method based on a global map according to claim 2, characterized in that the content-based image matching method further comprises:
performing overlap-region reconstruction, based on sub-block matching, on the overlap region of the first of the two images to be matched, forming a reconstructed overlap region;
comparing the reconstructed overlap region with the overlap region of the second image, and analyzing the similarity.
6. The robot navigation method based on a global map according to claim 5, characterized in that the sub-block matching decomposes the overlap region of the first image into several sub-blocks, and translates the sub-blocks having identical features with the second image according to their positions in the second image, forming the reconstructed overlap region.
7. The robot navigation method based on a global map according to any one of claims 1 to 6, characterized in that in step S2, when extracting key frames:
a frame is automatically extracted once its similarity with the previous key frame is judged to have reached a preset value;
or, when the similarities of the subsequent frames with the previous key frame are all judged greater than the preset value, and the number of frames between the last of the subsequent frames and the previous key frame reaches a preset value, the last of the subsequent frames is automatically extracted as the key frame.
8. The robot navigation method based on a global map according to claim 7, characterized in that in step S2, when extracting key frames:
the first video frame serves directly as the first key frame;
the n-th key frame is matched frame by frame, with the content-based matching method, against the following 20 seconds of video, and their content similarities with the key frame are solved; the video frame with the maximum similarity is found, and, searching backward, the video frame at which the similarity drops to 50% is taken as the (n+1)-th key frame;
or, if the similarities of all frames of the video in the 20 seconds following the n-th key frame are greater than 50%, the last frame of the 20 seconds of video is taken as the (n+1)-th key frame.
9. The robot navigation method based on a global map according to claim 7, characterized in that in step S2, when building the global map:
the first video frame serves directly as the first key frame;
for the n-th and (n+1)-th key frames, the pairs of feature points in the two key frames are sought, and the coordinates (x'_n, y'_n), (x'_{n+1}, y'_{n+1}) of each feature point in the n-th and the (n+1)-th frame are determined respectively; then according to the formula:
$$\begin{pmatrix} x'_{n+1} \\ y'_{n+1} \end{pmatrix} = \begin{pmatrix} \cos H_{n+1,n} & \sin H_{n+1,n} \\ -\sin H_{n+1,n} & \cos H_{n+1,n} \end{pmatrix} \begin{pmatrix} x_{n+1,n} \\ y_{n+1,n} \end{pmatrix} + \begin{pmatrix} x'_n \\ y'_n \end{pmatrix}$$
the position difference x_{n+1,n}, y_{n+1,n} and heading difference H_{n+1,n} of the (n+1)-th key frame relative to the n-th key frame are calculated;
the global position coordinates (x_n, y_n) of the n-th key frame are substituted into the formula:
$$\begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} \cos H_{n+1,n} & \sin H_{n+1,n} \\ -\sin H_{n+1,n} & \cos H_{n+1,n} \end{pmatrix} \begin{pmatrix} x_{n+1,n} \\ y_{n+1,n} \end{pmatrix} + \begin{pmatrix} x_n \\ y_n \end{pmatrix}$$
and the global position (x_{n+1}, y_{n+1}) of the (n+1)-th key frame is calculated;
iterating frame by frame yields the global position of each frame of the key frame sequence indoors.
10. The robot navigation method based on a global map according to any one of claims 1 to 6, characterized in that in step S3, when the robot's real-time vision image is matched with the key frames in the global map, the content-based image matching method finds the key frame most similar to the video frame; the SURF algorithm then solves the feature points between the video frame and the most similar key frame;
the coordinates (x'_n, y'_n), (x'_{n+1}, y'_{n+1}) of each feature point in the most similar key frame and in the video frame are determined; then according to the formula:
$$\begin{pmatrix} x'_{n+1} \\ y'_{n+1} \end{pmatrix} = \begin{pmatrix} \cos H_{n+1,n} & \sin H_{n+1,n} \\ -\sin H_{n+1,n} & \cos H_{n+1,n} \end{pmatrix} \begin{pmatrix} x_{n+1,n} \\ y_{n+1,n} \end{pmatrix} + \begin{pmatrix} x'_n \\ y'_n \end{pmatrix}$$
the position difference x_{n+1,n}, y_{n+1,n} and heading difference H_{n+1,n} of the video frame relative to the most similar key frame are calculated;
the global position coordinates (x_n, y_n) of the most similar key frame are substituted into the formula:
$$\begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} \cos H_{n+1,n} & \sin H_{n+1,n} \\ -\sin H_{n+1,n} & \cos H_{n+1,n} \end{pmatrix} \begin{pmatrix} x_{n+1,n} \\ y_{n+1,n} \end{pmatrix} + \begin{pmatrix} x_n \\ y_n \end{pmatrix}$$
to calculate the global position $(x_{n+1}, y_{n+1})$ of the video frame, so that the position of the robot in the global map is obtained in real time.
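Read as ordinary math, the localization step above reduces to one rotation and one addition. The following is a minimal illustrative sketch of that formula (not the patent's implementation); the heading difference is assumed to be in radians:

```python
import math

def to_global(xn, yn, dx, dy, heading):
    """Apply the formula above: rotate the position difference (dx, dy)
    by the heading difference and add the key frame's global position
    (xn, yn) to obtain the frame's global position."""
    x_next = math.cos(heading) * dx + math.sin(heading) * dy + xn
    y_next = -math.sin(heading) * dx + math.cos(heading) * dy + yn
    return x_next, y_next

# With a zero heading difference the offset is simply added:
print(to_global(1.0, 2.0, 0.5, -0.5, 0.0))  # -> (1.5, 1.5)
```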
11. A robot navigating based on a global map, comprising an image acquisition module, a main control module and a movement module; the image acquisition module acquires video images of the surrounding environment and transmits the images to the main control module; the main control module performs the positioning and navigation of the robot; the movement module effects the movement of the robot; characterized in that:
the main control module comprises an image distortion feature extraction unit, a key frame extraction and global map construction unit, and a navigation unit;
the image distortion feature extraction unit extracts the distortion features in the video images acquired by the image acquisition module;
the key frame extraction and global map construction unit matches each frame of the video captured by the robot using the image distortion features according to a content-based image matching method, then extracts a key frame sequence according to the matching results, and overlaps and connects the key frames of adjacent positions to build the global map;
the navigation unit matches the robot's real-time visual image against the key frames in the global map according to the content-based image matching method, finds the key frame most similar to the robot's current visual image, obtains the global position of the robot in real time to position and navigate the robot, and outputs movement commands to the movement module.
12. The robot navigating based on a global map according to claim 11, characterized in that the content-based image matching method comprises:
extracting the overlap region of the two frames of images to be matched according to the image distortion features, and judging the similarity of the two frames.
13. The robot navigating based on a global map according to claim 12, characterized in that the overlap region is obtained by translating and rotating the two frames of images so that they are adjusted to the same shooting position and shooting angle, and then extracting the overlapping area of the two frames.
14. The robot navigating based on a global map according to claim 12, characterized in that the content-based image matching comprises detecting and rejecting mismatched feature points before extracting the overlap region; the detection and rejection of mismatched feature points exploits the fact that the length of the line between feature points of the region directly viewed by the robot is constant across different images, and weeds out the feature points of other regions.
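Claim 14's rejection step can be sketched as follows. The majority-vote criterion and the `tol` threshold are assumptions for illustration; the patent only states that line lengths between directly viewed feature points stay constant across images:

```python
def reject_mismatches(pts_a, pts_b, tol=2.0):
    """Keep only matched point pairs whose pairwise distances are (near)
    identical in both images.  pts_a / pts_b: lists of (x, y) coordinates
    of the same features in two images.  A pair survives if its distances
    to a majority of the other pairs are preserved within `tol` pixels."""
    n = len(pts_a)
    keep = []
    for i in range(n):
        votes = 0
        for j in range(n):
            if i == j:
                continue
            da = ((pts_a[i][0] - pts_a[j][0]) ** 2 +
                  (pts_a[i][1] - pts_a[j][1]) ** 2) ** 0.5
            db = ((pts_b[i][0] - pts_b[j][0]) ** 2 +
                  (pts_b[i][1] - pts_b[j][1]) ** 2) ** 0.5
            if abs(da - db) <= tol:
                votes += 1
        if votes >= (n - 1) / 2:  # distances preserved w.r.t. a majority
            keep.append(i)
    return keep
```

Given three consistently translated points and one gross outlier, only the consistent indices survive.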
15. The robot navigating based on a global map according to claim 12, characterized in that the content-based image matching method further comprises:
reconstructing the overlap region of the first frame image of the two frames to be matched based on sub-block decomposition and matching, forming a reconstructed overlap region;
comparing the reconstructed overlap region with the overlap region of the second frame image to analyze the similarity.
16. The robot navigating based on a global map according to claim 15, characterized in that the sub-block decomposition and matching decomposes the overlap region of the first frame image into several sub-blocks, and translates the sub-blocks sharing the same features with the second frame image according to their positions in the second frame image, forming the reconstructed overlap region.
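An illustrative toy version of claim 16's sub-block reconstruction, assuming plain nested-list grayscale images and an exhaustive sum-of-absolute-differences search as a stand-in for the feature correspondence (the patent does not specify the matching criterion):

```python
def rebuild_overlap(first, second, block=2):
    """Split `first` into block x block sub-blocks, find where each
    sub-block best matches in `second` (exhaustive SAD search), and
    paste it at that position to form the reconstructed overlap region.
    Both images are lists of lists of grayscale values, equal shape."""
    h, w = len(first), len(first[0])
    rebuilt = [[0] * w for _ in range(h)]
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            best, pos = None, (by, bx)
            for y in range(h - block + 1):
                for x in range(w - block + 1):
                    sad = sum(abs(first[by + i][bx + j] - second[y + i][x + j])
                              for i in range(block) for j in range(block))
                    if best is None or sad < best:
                        best, pos = sad, (y, x)
            for i in range(block):
                for j in range(block):
                    rebuilt[pos[0] + i][pos[1] + j] = first[by + i][bx + j]
    return rebuilt
```

When the two frames already coincide, every sub-block matches at its own position and the reconstruction reproduces the first frame.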
17. The robot navigating based on a global map according to any one of claims 11 to 16, characterized in that, when extracting key frames, the key frame extraction and global map construction unit:
automatically extracts a frame once its similarity with the previous key frame is judged to have reached a preset value;
or, when the similarity between every subsequent frame and the previous key frame is judged to be greater than the preset value, and the number of frames between the last of those subsequent frames and the previous key frame reaches a preset value, automatically extracts that last frame as a key frame.
18. The robot navigating based on a global map according to claim 17, characterized in that, when extracting key frames, the global map construction unit:
takes the first video frame directly as the first key frame;
matches the n-th key frame frame by frame against the following 20 seconds of video using the content-based image matching method and calculates the content similarity of each frame with that key frame; it finds the video frame of maximum similarity, then takes the later video frame whose similarity drops to 50% as the (n+1)-th key frame;
or, if the similarity of every frame within the following 20 seconds to the n-th key frame is greater than 50%, takes the last frame of the video in those 20 seconds as the (n+1)-th key frame.
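Assuming the content similarities of the current key frame against each frame of the 20-second window have already been computed, the selection rule of claim 18 might be sketched as:

```python
def next_key_frame(similarities):
    """Pick the index of the next key frame from a list of content
    similarities (0..1) between the current key frame and each frame
    of the following window: after the most similar frame, take the
    first frame whose similarity has dropped to 50%; if every frame
    stays above 50%, take the last frame of the window."""
    peak = max(range(len(similarities)), key=similarities.__getitem__)
    for idx in range(peak, len(similarities)):
        if similarities[idx] <= 0.5:
            return idx
    return len(similarities) - 1
```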
19. The robot navigating based on a global map according to claim 17, characterized in that, when the key frame extraction and global map construction unit builds the global map:
the first video frame is taken directly as the first key frame;
for the n-th and (n+1)-th key frames, several pairs of feature points shared by the two key frames are found, and the coordinates $(x'_n, y'_n)$ and $(x'_{n+1}, y'_{n+1})$ of each feature point in the n-th and (n+1)-th frames are determined respectively; then, according to the formula:
$$\begin{pmatrix} x'_{n+1} \\ y'_{n+1} \end{pmatrix} = \begin{pmatrix} \cos H_{n+1,n} & \sin H_{n+1,n} \\ -\sin H_{n+1,n} & \cos H_{n+1,n} \end{pmatrix} \begin{pmatrix} x_{n+1,n} \\ y_{n+1,n} \end{pmatrix} + \begin{pmatrix} x'_n \\ y'_n \end{pmatrix}$$
the position difference $(x_{n+1,n}, y_{n+1,n})$ and heading difference $H_{n+1,n}$ of the (n+1)-th key frame relative to the n-th key frame are calculated;
the global position coordinates $(x_n, y_n)$ of the n-th key frame are then substituted into the formula:
$$\begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} \cos H_{n+1,n} & \sin H_{n+1,n} \\ -\sin H_{n+1,n} & \cos H_{n+1,n} \end{pmatrix} \begin{pmatrix} x_{n+1,n} \\ y_{n+1,n} \end{pmatrix} + \begin{pmatrix} x_n \\ y_n \end{pmatrix}$$
to calculate the global position $(x_{n+1}, y_{n+1})$ of the (n+1)-th key frame;
iterating frame by frame in this way yields the indoor global position of every frame in the key frame sequence.
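The iteration of claim 19 is a fold of the formula over consecutive key frame pairs; the following is a minimal illustrative sketch (names are hypothetical):

```python
import math

def chain_key_frames(rel_moves, start=(0.0, 0.0)):
    """Starting from the first key frame's global position, fold each
    relative (dx, dy, heading) of consecutive key frame pairs through
    the rotation-and-translation formula to accumulate global
    positions for the whole key frame sequence."""
    positions = [start]
    for dx, dy, h in rel_moves:
        xn, yn = positions[-1]
        positions.append((math.cos(h) * dx + math.sin(h) * dy + xn,
                          -math.sin(h) * dx + math.cos(h) * dy + yn))
    return positions
```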
20. The robot navigating based on a global map according to any one of claims 11 to 16, characterized in that, when the navigation unit matches the robot's real-time visual image against the key frames in the global map, it finds the key frame most similar to the captured video frame according to the content-based image matching method, and then obtains the feature points between the video frame and the most similar key frame with the SURF algorithm;
it determines the coordinates $(x'_n, y'_n)$ and $(x'_{n+1}, y'_{n+1})$ of the feature points in the most similar key frame and in the video frame, and then, according to the formula:
$$\begin{pmatrix} x'_{n+1} \\ y'_{n+1} \end{pmatrix} = \begin{pmatrix} \cos H_{n+1,n} & \sin H_{n+1,n} \\ -\sin H_{n+1,n} & \cos H_{n+1,n} \end{pmatrix} \begin{pmatrix} x_{n+1,n} \\ y_{n+1,n} \end{pmatrix} + \begin{pmatrix} x'_n \\ y'_n \end{pmatrix}$$
to calculate the position difference $(x_{n+1,n}, y_{n+1,n})$ and heading difference $H_{n+1,n}$ of the video frame relative to the most similar key frame;
the global position coordinates $(x_n, y_n)$ of the most similar key frame are then substituted into the formula:
$$\begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} \cos H_{n+1,n} & \sin H_{n+1,n} \\ -\sin H_{n+1,n} & \cos H_{n+1,n} \end{pmatrix} \begin{pmatrix} x_{n+1,n} \\ y_{n+1,n} \end{pmatrix} + \begin{pmatrix} x_n \\ y_n \end{pmatrix}$$
to calculate the global position $(x_{n+1}, y_{n+1})$ of the video frame, so that the position of the robot in the global map is obtained in real time.
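Claim 20 presupposes the inverse step: recovering the heading difference and position difference from the matched feature points. The following is a hypothetical two-point estimate, assuming the rigid-motion model p_cur = R(H)·p_key + t with the same rotation matrix as the formula above (in practice SURF would supply the correspondences, and more than two points would be averaged):

```python
import math

def relative_pose(pts_key, pts_cur):
    """Estimate the heading difference h and position difference (tx, ty)
    between the current frame and the most similar key frame from two
    matched feature points, under the assumed model
    p_cur = R(h) @ p_key + t, R(h) = [[cos h, sin h], [-sin h, cos h]]."""
    (px, py), (qx, qy) = pts_key
    (rx, ry), (sx, sy) = pts_cur
    # R(h) rotates a vector clockwise by h, so h is the angle of the
    # key-frame feature vector minus the angle of the current one.
    h = math.atan2(qy - py, qx - px) - math.atan2(sy - ry, sx - rx)
    c, s = math.cos(h), math.sin(h)
    # Translation from the first correspondence: t = p_cur - R(h) @ p_key.
    tx = rx - (c * px + s * py)
    ty = ry - (-s * px + c * py)
    return h, tx, ty
```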
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611027397.9A CN108072370A (en) | 2016-11-18 | 2016-11-18 | Robot navigation method based on global map and the robot with this method navigation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108072370A true CN108072370A (en) | 2018-05-25 |
Family
ID=62161083
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611027397.9A Pending CN108072370A (en) | 2016-11-18 | 2016-11-18 | Robot navigation method based on global map and the robot with this method navigation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108072370A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101281644A (en) * | 2007-03-08 | 2008-10-08 | 霍尼韦尔国际公司 | Vision based navigation and guidance system |
CN101650178A (en) * | 2009-09-09 | 2010-02-17 | 中国人民解放军国防科学技术大学 | Method for image matching guided by control feature point and optimal partial homography in three-dimensional reconstruction of sequence images |
KR20100037376A (en) * | 2008-10-01 | 2010-04-09 | (주)엠앤소프트 | Method and apparatus for displaying equipment data to stereoscopic image |
CN105204505A (en) * | 2015-09-22 | 2015-12-30 | 深圳先进技术研究院 | Positioning video acquiring and drawing system and method based on sweeping robot |
Non-Patent Citations (5)
Title |
---|
PENGJIN CHEN et al.: "Ceiling Vision Localization with Feature Pairs for Home Service Robots", Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics * |
RUI LIN et al.: "Image Features-Based Mobile Robot Visual SLAM", Proceeding of the IEEE International Conference on Robotics and Biomimetics (ROBIO) * |
WU JUNJUN et al.: "Fast visual localization algorithm for humanoid robots in indoor environments", Journal of Sun Yat-sen University (Natural Science Edition) * |
FANG HUI et al.: "Global localization of driverless vehicles based on ground feature point matching", Robot * |
JIA SONGMIN et al.: "3D SLAM for mobile robots based on an RGB-D camera", Journal of Huazhong University of Science and Technology (Natural Science Edition) * |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210041886A1 (en) * | 2018-01-24 | 2021-02-11 | Zhuineng Robotics (Shanghai) Co., Ltd. | Multi-device visual navigation method and system in variable scene |
US11965744B2 (en) | 2018-06-01 | 2024-04-23 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for indoor positioning |
CN110553648A (en) * | 2018-06-01 | 2019-12-10 | 北京嘀嘀无限科技发展有限公司 | method and system for indoor navigation |
CN108748184A (en) * | 2018-06-13 | 2018-11-06 | 四川长虹电器股份有限公司 | A kind of robot patrol method and robot device based on area map mark |
US11850753B2 (en) | 2018-07-19 | 2023-12-26 | Ecovacs Robotics Co., Ltd. | Robot control method, robot and storage medium |
US11534916B2 (en) | 2018-07-19 | 2022-12-27 | Ecovacs Robotics Co., Ltd. | Robot control method, robot and storage medium |
WO2020015548A1 (en) * | 2018-07-19 | 2020-01-23 | 科沃斯机器人股份有限公司 | Robot control method, robot and storage medium |
CN109035291B (en) * | 2018-08-03 | 2020-11-20 | 重庆电子工程职业学院 | Robot positioning method and device |
CN109035291A (en) * | 2018-08-03 | 2018-12-18 | 重庆电子工程职业学院 | Robot localization method and device |
CN110855601B (en) * | 2018-08-21 | 2021-11-19 | 华为技术有限公司 | AR/VR scene map acquisition method |
CN110855601A (en) * | 2018-08-21 | 2020-02-28 | 华为技术有限公司 | AR/VR scene map acquisition method |
CN109269493A (en) * | 2018-08-31 | 2019-01-25 | 北京三快在线科技有限公司 | A kind of localization method and device, mobile device and computer readable storage medium |
CN109540122A (en) * | 2018-11-14 | 2019-03-29 | ***股份有限公司 | A kind of method and device constructing cartographic model |
CN111267079A (en) * | 2018-12-05 | 2020-06-12 | ***通信集团山东有限公司 | Intelligent inspection robot inspection method and device |
CN109506658B (en) * | 2018-12-26 | 2021-06-08 | 广州市申迪计算机***有限公司 | Robot autonomous positioning method and system |
CN109506658A (en) * | 2018-12-26 | 2019-03-22 | 广州市申迪计算机***有限公司 | Robot autonomous localization method and system |
CN110561416A (en) * | 2019-08-01 | 2019-12-13 | 深圳市银星智能科技股份有限公司 | Laser radar repositioning method and robot |
CN110561423B (en) * | 2019-08-16 | 2021-05-07 | 深圳优地科技有限公司 | Pose transformation method, robot and storage medium |
CN110561423A (en) * | 2019-08-16 | 2019-12-13 | 深圳优地科技有限公司 | pose transformation method, robot and storage medium |
CN110647609A (en) * | 2019-09-17 | 2020-01-03 | 上海图趣信息科技有限公司 | Visual map positioning method and system |
CN110647609B (en) * | 2019-09-17 | 2023-07-18 | 上海图趣信息科技有限公司 | Visual map positioning method and system |
CN110595480A (en) * | 2019-10-08 | 2019-12-20 | 瓴道(上海)机器人科技有限公司 | Navigation method, device, equipment and storage medium |
WO2022134057A1 (en) * | 2020-12-25 | 2022-06-30 | Intel Corporation | Re-localization of robot |
CN112598743A (en) * | 2021-02-08 | 2021-04-02 | 智道网联科技(北京)有限公司 | Pose estimation method of monocular visual image and related device |
CN112598743B (en) * | 2021-02-08 | 2023-10-13 | 智道网联科技(北京)有限公司 | Pose estimation method and related device for monocular vision image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108072370A (en) | Robot navigation method based on global map and the robot with this method navigation | |
CN111414798B (en) | Head posture detection method and system based on RGB-D image | |
Rogez et al. | Mocap-guided data augmentation for 3d pose estimation in the wild | |
CN109684925B (en) | Depth image-based human face living body detection method and device | |
CN107392964B (en) | The indoor SLAM method combined based on indoor characteristic point and structure lines | |
CN111126304A (en) | Augmented reality navigation method based on indoor natural scene image deep learning | |
CN102834845B (en) | The method and apparatus calibrated for many camera heads | |
CN103988226B (en) | Method for estimating camera motion and for determining real border threedimensional model | |
CN106997605B (en) | A method of foot type video is acquired by smart phone and sensing data obtains three-dimensional foot type | |
CN107705328A (en) | Balance probe location for 3D alignment algorithms selects | |
CN104794737B (en) | A kind of depth information Auxiliary Particle Filter tracking | |
JP6483832B2 (en) | Method and system for scanning an object using an RGB-D sensor | |
WO2018019272A1 (en) | Method and apparatus for realizing augmented reality on the basis of plane detection | |
Wang et al. | Outdoor markerless motion capture with sparse handheld video cameras | |
CN110334701A (en) | Collecting method based on deep learning and multi-vision visual under the twin environment of number | |
Rodríguez et al. | Obstacle avoidance system for assisting visually impaired people | |
CN105488541A (en) | Natural feature point identification method based on machine learning in augmented reality system | |
CN109613974A (en) | A kind of AR household experiential method under large scene | |
CN117036612A (en) | Three-dimensional reconstruction method based on nerve radiation field | |
JP2009121824A (en) | Equipment and program for estimating camera parameter | |
Ugrinovic et al. | Body size and depth disambiguation in multi-person reconstruction from single images | |
Krispel et al. | Automatic texture and orthophoto generation from registered panoramic views | |
US20200184656A1 (en) | Camera motion estimation | |
CN108765384A (en) | A kind of conspicuousness detection method of joint manifold ranking and improvement convex closure | |
Moliner et al. | Better prior knowledge improves human-pose-based extrinsic camera calibration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20180525 |