CN106500714A - Video-based robot navigation method and system - Google Patents
- Publication number
- CN106500714A CN106500714A CN201610839909.5A CN201610839909A CN106500714A CN 106500714 A CN106500714 A CN 106500714A CN 201610839909 A CN201610839909 A CN 201610839909A CN 106500714 A CN106500714 A CN 106500714A
- Authority
- CN
- China
- Prior art keywords
- robot
- ground location
- camera
- pixel
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3647—Guidance involving output of stored or live camera images or video streams
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Manipulator (AREA)
Abstract
The present invention relates to the field of navigation, and in particular to a video-based robot navigation method and system. The invention establishes a correspondence between pixel positions in a video image and ground positions, obtains a pixel in the video image, and obtains a ground position according to the position of that pixel. A camera mounted on the robot captures the actual scene the robot can detect in real time as a video image; a pixel can be chosen in the video image and the robot driven to the ground position that pixel represents, so that the robot can be navigated against the actual scene around it.
Description
Technical field
The present invention relates to the field of navigation, and in particular to a video-based robot navigation method and system.
Background technology
Current industrial robot navigation generally works on a navigation map: a pixel or position is clicked on the map, the proportion of the selected location relative to the whole map is used to compute the corresponding target location in the real scene, and the robot is then driven to that location. The drawback of controlling robot movement with map navigation is that the map is a static image, with routes, obstacles and other marks drawn on it to scale in advance, so it cannot reflect changes in the actual scene in real time.
Summary of the invention
The technical problem to be solved by the present invention is to provide a video-based robot navigation method and system that can navigate a robot against the actual scene around it.
To solve the above technical problem, the present invention adopts the following technical solution:
The present invention provides a video-based robot navigation method, including:
establishing a correspondence between pixel positions in a video image and ground positions;
obtaining a pixel in the video image;
obtaining a ground position according to the position of the pixel.
The advantage of the above video-based robot navigation method is as follows. Unlike the prior art, which controls robot movement with map navigation and cannot navigate according to changes in the actual scene, the present invention mounts a camera on the robot and obtains the actual scene the robot can detect in real time as a video image. After the correspondence between pixel positions in the video image and ground positions is established, a pixel can be chosen in the video image and the robot driven to the ground position that pixel represents, so the robot can be navigated against the actual scene around it.
The present invention also provides a video-based robot navigation system, including:
an establishing module, for establishing a correspondence between pixel positions in a video image and ground positions;
a first acquisition module, for obtaining a pixel in the video image;
a second acquisition module, for obtaining a ground position according to the position of the pixel.
The advantage of the above video-based robot navigation system is that the establishing module establishes the correspondence between pixel positions in the video image and ground positions, the first acquisition module chooses a pixel in the video image, and the second acquisition module obtains the ground position corresponding to the chosen pixel, so the robot can be driven to the ground position that pixel represents and navigated against the actual scene around it.
Description of the drawings
Fig. 1 is a flow block diagram of a video-based robot navigation method of the present invention;
Fig. 2 is a structural block diagram of a video-based robot navigation system of the present invention;
Label declaration:
1. establishing module; 2. first acquisition module; 3. second acquisition module; 31. first computing unit; 32. second computing unit; 4. third acquisition module; 5. computing module; 6. fourth acquisition module; 7. drive module.
Specific embodiment
To describe the technical content, objects and effects of the present invention in detail, the following explanation is given in conjunction with the embodiments and the accompanying drawings.
The most critical design of the present invention is: obtaining the video image of the scene around the robot in real time, selecting a pixel in that video image, converting the position of the pixel in the video image to a ground position, and driving the robot to that ground position, so that the robot can be navigated against the actual scene around it.
As shown in Fig. 1, the present invention provides a video-based robot navigation method, including:
establishing a correspondence between pixel positions in a video image and ground positions;
obtaining a pixel in the video image;
obtaining a ground position according to the position of the pixel.
Further, obtaining the ground position according to the position of the pixel is specifically:
according to the position of the pixel, calculating the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position;
according to the vertical deflection angle and the horizontal deflection angle, calculating the vertical distance and horizontal distance of the ground position relative to the robot's camera.
From the above description it can be seen that the offset of the ground position corresponding to a pixel from the robot's camera can be calculated from the position of the pixel.
Preferably, the method for calculating the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position is specifically:
With the top-left corner of the monitoring-client video image as the origin, the rightward direction as the positive X axis and the downward direction as the positive Y axis, a rectangular coordinate system is established, giving the first coordinate system. A pixel is chosen in the monitoring-client video image as the target pixel; its coordinates in the first coordinate system are (x1, y1), and the resolution of the monitoring client is width1*height1. The vertical and horizontal proportions of the pixel within the monitoring-client video image are calculated as rateX1 = x1/width1 and rateY1 = y1/height1.
With the top-left corner of the robot-end video image as the origin, the rightward direction as the positive X axis and the downward direction as the positive Y axis, a rectangular coordinate system is established, giving the second coordinate system. The resolution width2*height2 of the robot-end video image is obtained, and the coordinates (x2, y2) of the target pixel in the robot-end video image are calculated as x2 = width2*rateX1 and y2 = height2*rateY1.
The horizontal view angle angW and vertical view angle angH of the camera are obtained, and the horizontal deflection angle angX and vertical deflection angle angY between the ground position corresponding to the target pixel and the camera are calculated as angX = (x2/width2 - 0.5)*angW and angY = (y2/height2 - 0.5)*angH.
The vertical height z of the camera above the ground is obtained, and the horizontal offset x3 and vertical offset y3 of the ground position corresponding to the target pixel relative to the ground position corresponding to the camera are calculated as y3 = z/tan(angY) and x3 = y3*tan(angX).
Further, the method also includes:
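As an illustrative sketch only (not part of the patent), the pixel-to-ground computation described above can be written in Python. All function and variable names are chosen for illustration; angles are in degrees as in the patent's formulas, and the scaling step is kept explicit even though x2/width2 equals rateX1, to mirror the patent's description:

```python
import math

def pixel_to_ground_offset(x1, y1, mon_w, mon_h, rob_w, rob_h,
                           ang_w, ang_h, z):
    """Map a pixel chosen on the monitoring-client image to a ground
    offset relative to the camera, following the patent's formulas.
    ang_w / ang_h are the camera's horizontal/vertical view angles in
    degrees; z is the camera height above the ground."""
    # Proportional position of the pixel in the monitoring-client image
    rate_x = x1 / mon_w
    rate_y = y1 / mon_h
    # Corresponding pixel in the robot-end video image
    x2 = rob_w * rate_x
    y2 = rob_h * rate_y
    # Deflection angles from the image centre (degrees)
    ang_x = (x2 / rob_w - 0.5) * ang_w
    ang_y = (y2 / rob_h - 0.5) * ang_h
    # Ground offsets relative to the camera: y3 = z/tan(angY), x3 = y3*tan(angX)
    y3 = z / math.tan(math.radians(ang_y))
    x3 = y3 * math.tan(math.radians(ang_x))
    return x3, y3
```

With the numbers of embodiment three below (pixel (800, 710), monitoring resolution 1440*810, robot resolution 1280*720, angW = 61.18, angH = 28.64, z = 1), this returns approximately (0.31, 5.25).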
obtaining the angle of the robot's camera relative to the robot's front;
according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, calculating the vertical distance and horizontal distance of the ground position relative to the robot.
From the above description it can be seen that the offset of the ground position relative to the robot can be calculated from its offset relative to the robot's camera. The camera may be mounted at any orientation on the robot; when it is not mounted facing the robot's front, the offset of the ground position relative to the camera must be converted to an offset relative to the robot's front before the robot can be correctly driven to the ground position according to the offset.
Preferably, calculating the vertical distance and horizontal distance of the ground position relative to the robot is specifically:
With the robot as the origin, the forward direction as the positive Y axis, the rightward direction as the positive X axis and the upward direction as the positive Z axis, a coordinate system is established, giving the third coordinate system. The coordinates (x4, y4, z4) of the robot's camera in the third coordinate system, the angle angC of the camera relative to the robot's front, and the horizontal offset x3 and vertical offset y3 of the ground position corresponding to the target pixel relative to the ground position corresponding to the camera are obtained, and the horizontal offset x5 and vertical offset y5 of the ground position corresponding to the target pixel relative to the robot are calculated as x5 = x3*cos(-angC) - y3*sin(-angC) + x4 and y5 = x3*sin(-angC) + y3*cos(-angC) + y4.
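A minimal sketch of this offset conversion (names are illustrative; it applies the standard planar rotation by -angC followed by the translation to the camera's mounting position):

```python
import math

def camera_offset_to_robot_offset(x3, y3, x4, y4, ang_c_deg):
    """Rotate a ground offset expressed relative to the camera into the
    robot's frame, for a camera mounted at (x4, y4) in robot coordinates
    and yawed ang_c_deg away from the robot's front.
    Implements: x5 = x3*cos(-angC) - y3*sin(-angC) + x4
                y5 = x3*sin(-angC) + y3*cos(-angC) + y4"""
    a = math.radians(-ang_c_deg)
    x5 = x3 * math.cos(a) - y3 * math.sin(a) + x4
    y5 = x3 * math.sin(a) + y3 * math.cos(a) + y4
    return x5, y5
```

When the camera faces the robot's front (angC = 0) and sits at the origin, the offsets pass through unchanged, as expected.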
Further, the method also includes:
obtaining the picture shot by the robot's camera in real time to obtain the video image.
From the above description it can be seen that the actual scene around the robot is obtained in real time.
Further, the method also includes:
driving the robot to the ground position.
From the above description it can be seen that the robot is driven to the ground position in the actual scene corresponding to the pixel chosen in the video image.
As shown in Fig. 2, the present invention also provides a video-based robot navigation system, including:
an establishing module 1, for establishing a correspondence between pixel positions in a video image and ground positions;
a first acquisition module 2, for obtaining a pixel in the video image;
a second acquisition module 3, for obtaining a ground position according to the position of the pixel.
Further, the second acquisition module 3 includes:
a first computing unit 31, for calculating, according to the position of the pixel, the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position;
a second computing unit 32, for calculating, according to the vertical deflection angle and the horizontal deflection angle, the vertical distance and horizontal distance of the ground position relative to the robot's camera.
Further, the system also includes:
a third acquisition module 4, for obtaining the angle of the robot's camera relative to the robot's front;
a computing module 5, for calculating, according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, the vertical distance and horizontal distance of the ground position relative to the robot.
Further, the system also includes:
a fourth acquisition module 6, for obtaining the picture shot by the robot's camera in real time to obtain the video image.
Further, the system also includes:
a drive module 7, for driving the robot to the ground position.
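As an illustrative sketch only, the module structure above can be mirrored by a small Python class; the names are invented for illustration, the pixel-to-ground mapping is passed in as a function (standing in for the establishing module's correspondence), and the drive step is a stub:

```python
class VideoNavigationSystem:
    """Minimal sketch of the patent's pipeline: establish the mapping,
    acquire a pixel, convert it to a ground position, drive the robot."""

    def __init__(self, pixel_to_ground):
        # "establishing module": pixel position -> ground position mapping
        self.pixel_to_ground = pixel_to_ground
        self.pixel = None
        self.target = None

    def acquire_pixel(self, x, y):
        # "first acquisition module": the pixel chosen on the video image
        self.pixel = (x, y)

    def acquire_ground_location(self):
        # "second acquisition module": ground position from the pixel position
        self.target = self.pixel_to_ground(*self.pixel)
        return self.target

    def drive(self):
        # "drive module": move the robot to the ground position (stub)
        return f"driving robot to {self.target}"
```

A trivial linear mapping suffices to exercise the pipeline; in the patent the mapping is the camera-geometry computation described earlier.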
Embodiment one of the present invention is:
obtaining the picture shot by the robot's camera in real time to obtain a video image;
establishing a correspondence between pixel positions in the video image and ground positions;
obtaining a pixel in the video image;
according to the position of the pixel, calculating the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position;
according to the vertical deflection angle and the horizontal deflection angle, calculating the vertical distance and horizontal distance of the ground position relative to the robot's camera;
driving the robot to the ground position.
From the above description it can be seen that this embodiment obtains the video image of the actual scene around the robot in real time and navigates the robot through a pixel selected in the video image.
Embodiment two of the present invention is, on the basis of embodiment one, to further include:
obtaining the angle of the robot's camera relative to the robot's front;
according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, calculating the vertical distance and horizontal distance of the ground position relative to the robot.
From the above description it can be seen that in this embodiment the camera capturing the video image of the actual scene around the robot can be mounted at any orientation on the robot; by converting the offset of the ground position relative to the robot's camera into an offset relative to the robot, the robot can be driven to the ground position.
Embodiment three of the present invention is:
The pixel resolution of the robot camera's video image is 1280*720, and the resolution of the video image on the monitoring client is 1440*810. The monitoring client obtains the video image from the robot end in real time and performs distortion correction on it. With the top-left corner of the video image as the origin, the rightward direction as the positive X axis and the downward direction as the positive Y axis, a rectangular coordinate system is established. A pixel (800, 710) is chosen in the monitoring-client video image as the target pixel for navigating the robot.
The vertical proportion of the target pixel in the monitoring-client video image is 710/810 = 0.8765, and the horizontal proportion is 800/1440 = 0.5556. The pixel coordinates of the target pixel in the robot camera's video image are (711, 631), calculated as 1280*0.5556 = 711 and 720*0.8765 = 631. From the camera's vertical view angle of 28.64 degrees, horizontal view angle of 61.18 degrees, camera height of 1 meter above the ground, and the pixel coordinates of the target pixel in the camera's video image, the ground position corresponding to the target pixel, i.e. the target ground position, is calculated to have a vertical deflection angle of 10.78 degrees and a horizontal deflection angle of 3.39 degrees relative to the camera. With the camera as the origin, the camera's front as the positive X axis and the camera's front-right as the positive Y axis, a rectangular coordinate system is established, and from these deflection angles the target ground position is calculated to have a horizontal offset of 0.31 and a vertical offset of 5.25 relative to the camera. With the robot as the origin, the robot's front as the positive Y axis and the robot's front-right as the positive X axis, a rectangular coordinate system is established, and the horizontal and vertical offsets of the target ground position relative to the camera are converted to horizontal and vertical offsets relative to the robot. According to the correspondence between the video image and ground positions, the actual horizontal distance and actual vertical distance of the target ground position relative to the camera can be calculated from the offsets, and the robot is driven to the target position according to the actual horizontal and vertical distances of the target ground position from the robot.
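The numbers of this embodiment can be reproduced with a short script following the patent's formulas (the intermediate pixel coordinates are rounded to whole pixels, as in the text; variable names are illustrative):

```python
import math

# Monitoring-client pixel and the two resolutions from the embodiment
x1, y1 = 800, 710
mon_w, mon_h = 1440, 810
rob_w, rob_h = 1280, 720

rate_x, rate_y = x1 / mon_w, y1 / mon_h            # 0.5556, 0.8765
x2, y2 = round(rob_w * rate_x), round(rob_h * rate_y)  # (711, 631)

ang_w, ang_h, z = 61.18, 28.64, 1.0                # view angles (deg), camera height (m)
ang_x = (x2 / rob_w - 0.5) * ang_w                 # horizontal deflection, ~3.39 deg
ang_y = (y2 / rob_h - 0.5) * ang_h                 # vertical deflection, ~10.78 deg

y3 = z / math.tan(math.radians(ang_y))             # forward ground offset, ~5.25
x3 = y3 * math.tan(math.radians(ang_x))            # lateral ground offset, ~0.31
print(round(ang_x, 2), round(ang_y, 2), round(x3, 2), round(y3, 2))
```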
Embodiment four of the present invention is:
the fourth acquisition module obtains the picture shot by the robot's camera in real time to obtain the video image;
the establishing module establishes a correspondence between pixel positions in the video image and ground positions;
the first acquisition module obtains a pixel in the video image;
in the second acquisition module, the first computing unit calculates, according to the position of the pixel, the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position, and the second computing unit calculates, according to the vertical deflection angle and the horizontal deflection angle, the vertical distance and horizontal distance of the ground position relative to the robot's camera;
the third acquisition module obtains the angle of the robot's camera relative to the robot's front;
the computing module calculates, according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, the vertical distance and horizontal distance of the ground position relative to the robot;
the drive module drives the robot to the ground position.
From the above description it can be seen that this embodiment provides a video-based robot navigation system including the fourth acquisition module, the establishing module, the first acquisition module, the second acquisition module, the third acquisition module, the computing module and the drive module, where the second acquisition module includes the first computing unit and the second computing unit. Through this system, the video image of the actual scene around the robot can be obtained in real time, and the robot can be navigated through a pixel selected in the video image.
In summary, the video-based robot navigation method provided by the present invention mounts a camera on the robot, obtains the actual scene the robot can detect in real time as a video image, and establishes the correspondence between pixel positions in the video image and ground positions, so that a pixel can be chosen in the video image and the robot driven to the ground position that pixel represents; the robot is thus navigated against the actual scene around it. Further, the offset of the ground position corresponding to the pixel from the robot's camera can be calculated from the position of the pixel; further, the offset of the ground position relative to the robot can be calculated from its offset relative to the robot's camera; further, the actual scene around the robot is obtained in real time; further, the robot is driven to the ground position in the actual scene corresponding to the pixel chosen in the video image. The present invention also provides a video-based robot navigation system including a fourth acquisition module, an establishing module, a first acquisition module, a second acquisition module, a third acquisition module, a computing module and a drive module, where the second acquisition module includes a first computing unit and a second computing unit; through this system, the video image of the actual scene around the robot can be obtained in real time, and the robot can be navigated through a pixel selected in the video image.
The foregoing is only embodiments of the present invention and does not thereby limit the scope of the claims of the present invention. Any equivalent transformation made using the contents of the present specification and drawings, whether applied directly or indirectly in related technical fields, is likewise included in the scope of patent protection of the present invention.
Claims (10)
1. A video-based robot navigation method, characterized by including:
establishing a correspondence between pixel positions in a video image and ground positions;
obtaining a pixel in the video image;
obtaining a ground position according to the position of the pixel.
2. The video-based robot navigation method according to claim 1, characterized in that obtaining the ground position according to the position of the pixel is specifically:
according to the position of the pixel, calculating the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position;
according to the vertical deflection angle and the horizontal deflection angle, calculating the vertical distance and horizontal distance of the ground position relative to the robot's camera.
3. The video-based robot navigation method according to claim 2, characterized by further including:
obtaining the angle of the robot's camera relative to the robot's front;
according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, calculating the vertical distance and horizontal distance of the ground position relative to the robot.
4. The video-based robot navigation method according to claim 1, characterized by further including:
obtaining the picture shot by the robot's camera in real time to obtain the video image.
5. The video-based robot navigation method according to claim 1, characterized by further including:
driving the robot to the ground position.
6. A video-based robot navigation system, characterized by including:
an establishing module, for establishing a correspondence between pixel positions in a video image and ground positions;
a first acquisition module, for obtaining a pixel in the video image;
a second acquisition module, for obtaining a ground position according to the position of the pixel.
7. The video-based robot navigation system according to claim 6, characterized in that the second acquisition module includes:
a first computing unit, for calculating, according to the position of the pixel, the vertical deflection angle and horizontal deflection angle of the robot's camera relative to the ground position;
a second computing unit, for calculating, according to the vertical deflection angle and the horizontal deflection angle, the vertical distance and horizontal distance of the ground position relative to the robot's camera.
8. The video-based robot navigation system according to claim 6, characterized by further including:
a third acquisition module, for obtaining the angle of the robot's camera relative to the robot's front;
a computing module, for calculating, according to the vertical distance and horizontal distance of the ground position relative to the robot's camera and the angle, the vertical distance and horizontal distance of the ground position relative to the robot.
9. The video-based robot navigation system according to claim 6, characterized by further including:
a fourth acquisition module, for obtaining the picture shot by the robot's camera in real time to obtain the video image.
10. The video-based robot navigation system according to claim 6, characterized by further including:
a drive module, for driving the robot to the ground position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610839909.5A CN106500714B (en) | 2016-09-22 | 2016-09-22 | Video-based robot navigation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106500714A (en) | 2017-03-15 |
CN106500714B CN106500714B (en) | 2019-11-29 |
Family
ID=58290950
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610839909.5A Active CN106500714B (en) | 2016-09-22 | 2016-09-22 | Video-based robot navigation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106500714B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004252817A (en) * | 2003-02-21 | 2004-09-09 | Victor Co Of Japan Ltd | Method and device for detecting image position |
CN102156481A (en) * | 2011-01-24 | 2011-08-17 | 广州嘉崎智能科技有限公司 | Intelligent tracking control method and system for unmanned aircraft |
CN102682292A (en) * | 2012-05-10 | 2012-09-19 | 清华大学 | Method based on monocular vision for detecting and roughly positioning edge of road |
CN102999051A (en) * | 2011-09-19 | 2013-03-27 | 广州盈可视电子科技有限公司 | Method and device for controlling tripod head |
CN103459099A (en) * | 2011-01-28 | 2013-12-18 | 英塔茨科技公司 | Interfacing with mobile telepresence robot |
CN104034305A (en) * | 2014-06-10 | 2014-09-10 | 杭州电子科技大学 | Real-time positioning method based on monocular vision |
CN104034316A (en) * | 2013-03-06 | 2014-09-10 | 深圳先进技术研究院 | Video analysis-based space positioning method |
CN104200469A (en) * | 2014-08-29 | 2014-12-10 | 暨南大学韶关研究院 | Data fusion method for vision intelligent numerical-control system |
CN104835173A (en) * | 2015-05-21 | 2015-08-12 | 东南大学 | Positioning method based on machine vision |
CN104954747A (en) * | 2015-06-17 | 2015-09-30 | 浙江大华技术股份有限公司 | Video monitoring method and device |
CN105171756A (en) * | 2015-07-20 | 2015-12-23 | 缪学良 | Method for controlling remote robot through combination of videos and two-dimensional coordinate system |
- 2016-09-22: application CN201610839909.5A filed; granted as CN106500714B (status: Active)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110733424A (en) * | 2019-10-18 | 2020-01-31 | 深圳市麦道微电子技术有限公司 | Calculation method for horizontal distance between ground position and vehicle body in driving video systems |
CN110733424B (en) * | 2019-10-18 | 2022-03-15 | 深圳市麦道微电子技术有限公司 | Method for calculating horizontal distance between ground position and vehicle body in driving video system |
Also Published As
Publication number | Publication date |
---|---|
CN106500714B (en) | 2019-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104848858B (en) | Quick Response Code based vision-inertial combined navigation system and method for robots | |
JP4537557B2 (en) | Information presentation system | |
JP6211157B1 (en) | Calibration apparatus and calibration method | |
JP5725708B2 (en) | Sensor position and orientation measurement method | |
US20160117824A1 (en) | Posture estimation method and robot | |
US11015930B2 (en) | Method for 2D picture based conglomeration in 3D surveying | |
WO2014204548A1 (en) | Systems and methods for tracking location of movable target object | |
JP2006148745A (en) | Camera calibration method and apparatus | |
EP1262909A3 (en) | Calculating camera offsets to facilitate object position determination using triangulation | |
JP2017106959A (en) | Projection device, projection method, and computer program for projection | |
CN106629399A (en) | Container aligning guide system for containers | |
CN104976950B (en) | Object space information measuring device and method and image capturing path calculating method | |
KR101379787B1 (en) | An apparatus and a method for calibration of camera and laser range finder using a structure with a triangular hole | |
KR102174729B1 (en) | Method and system for recognizing lane using landmark | |
JP2017120551A (en) | Autonomous traveling device | |
CN101802738A (en) | Arrangement for detecting an environment | |
CN107300382A (en) | Monocular visual positioning method for underwater robots | |
US9990004B2 (en) | Optical detection of bending motions of a flexible display | |
JP2006349607A (en) | Distance measuring device | |
CN107055331A (en) | Container boxing guidance system | |
JP6180158B2 (en) | Position / orientation measuring apparatus, control method and program for position / orientation measuring apparatus | |
JP2010112731A (en) | Joining method of coordinate of robot | |
KR101153221B1 (en) | Computation of Robot Location and Orientation using Repeating Feature of Ceiling Textures and Optical Flow Vectors | |
JP2017151026A (en) | Three-dimensional information acquiring device, three-dimensional information acquiring method, and program | |
CN106500714A (en) | Video-based robot navigation method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||