CN109556616A - Visual-marker-based map correction method for an automatic mapping robot - Google Patents

Visual-marker-based map correction method for an automatic mapping robot Download PDF

Info

Publication number
CN109556616A
CN109556616A (application number CN201811329204.4A)
Authority
CN
China
Prior art keywords
visual marker
robot
mapping
automatic
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811329204.4A
Other languages
Chinese (zh)
Inventor
陈广
王法
陈凯
余卓平
瞿三清
葛艺忻
卢凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN201811329204.4A
Publication of CN109556616A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Navigation (AREA)

Abstract

The invention proposes a visual-marker-based map correction method for an automatic mapping robot, belonging to the field of navigation technology. The method comprises: (1) detecting a visual marker in the environment to be mapped; reading the position information of the corresponding visual marker point encoded in the marker; and, based on the corner-point information of the marker, computing the current relative position and heading of the automatic mapping robot with respect to that marker point; (2) obtaining the coordinates and heading of the automatic mapping robot from the position information of the marker point and the robot's relative position to it, and correcting and optimizing the pose drift and accumulated map error between marker points. The method is low-cost and easy to implement; it enables real-time, accurate correction of positioning errors during high-precision map acquisition as well as real-time trimming and offline optimization of the acquired map information, eliminating the need for loop-closure detection in the conventional simultaneous localization and mapping process.

Description

Visual-marker-based map correction method for an automatic mapping robot
Technical field
The invention belongs to the field of navigation technology and relates to a map correction method, in particular a map correction method for an automatic mapping robot.
Background art
In recent years, with the gradual adoption of autonomous vehicles in certain fields, the demand for high-precision map acquisition has been growing. In many indoor and outdoor scenes the GPS signal is weak, while WiFi or UWB infrastructure is expensive to install and its positioning accuracy is limited, so the resulting maps drift and distort. Current localization schemes suffer from drift, accumulated error and high cost, and the conventional error-elimination techniques such as loop-closure detection and map matching place demands on computing power and on environmental features. The need for accurate positioning and error correction during high-precision map acquisition therefore makes the acquisition process expensive and computationally complex, and it is difficult to cope with special scenes (such as plain white walls).
Summary of the invention
The purpose of the present invention is to provide a map correction method that is low in cost, relatively simple to operate and high in precision.
In order to achieve the above object, the solution of the invention is as follows:
A visual-marker-based map correction method for an automatic mapping robot, comprising: (1) detecting a visual marker in the environment to be mapped; reading the position information of the corresponding visual marker point encoded in the marker; and, based on the corner-point information of the marker, computing the current relative position and heading of the automatic mapping robot with respect to that marker point; (2) obtaining the coordinates and heading of the automatic mapping robot from the position information of the marker point and the robot's relative position to it, and correcting and optimizing the pose drift and accumulated map error between marker points.
The mapping process of the automatic mapping robot comprises: obtaining initial positioning information of the current automatic mapping robot by GPS or dead reckoning, and performing simultaneous localization and mapping (i.e. the SLAM process) in the environment to be mapped with a laser radar.
In step (1), the position information includes the coordinates and orientation of the visual marker.
The visual marker is a two-dimensional code.
Detecting the visual marker in the environment to be mapped comprises: obtaining image information of the environment with a forward-looking monocular camera, and identifying and extracting the two-dimensional code information from it.
The visual markers are arranged evenly at a set interval.
Step (2) comprises: moving the mapping robot, repeatedly observing the visual markers arranged at the set interval in the environment to be mapped, and reading the position information encoded in each marker; performing closed-loop optimization over the repeated observations of a marker to eliminate accumulated noise, reading and recording the position of the visual marker point where the marker is located, and fusing, with an extended Kalman filter, the robot-to-marker relative position solved from the visual marker with the relative position observed by inertial navigation, so as to trim the map in real time; and repeatedly optimizing the mapping result with a graph optimization method to improve the precision of the trimmed map.
The automatic mapping robot comprises a mobile platform and a visual sensor and a laser radar mounted on the mobile platform; the visual sensor is used to detect the visual markers in the environment to be mapped, and the laser radar is used for mapping.
The visual sensor is a forward-looking monocular sensor, and the laser radar is a multi-line laser radar.
The mobile platform is vehicle.
By adopting the above scheme, the beneficial effects of the present invention are as follows. The invention proposes a visual-marker-based map correction method for an automatic mapping robot. The method deliberately uses artificial visual markers, such as two-dimensional codes, which are identified, read and localized with a visual sensor, so that the map error caused by positioning error is eliminated point by point. Visual sensors are practical and inexpensive, the algorithms involved are mature, and the artificial visual marker points can be laid out flexibly. With these elements at its core, the method is easy to implement and low in cost; it achieves real-time, accurate correction of positioning errors during high-precision map acquisition as well as real-time trimming and offline optimization of the acquired map information, eliminating the need in conventional simultaneous localization and mapping for loop-closure detection (i.e. reducing mapping and positioning error by passing through the same location more than once).
Brief description of the drawings
Fig. 1 is a flowchart of the visual-marker-based map correction method for an automatic mapping robot in one embodiment of the invention;
Fig. 2 is a schematic diagram of the result of corner recognition and positioning for a two-dimensional code in this embodiment of the invention.
Specific embodiment
The present invention will be further described with reference to the accompanying drawings.
The invention proposes a visual-marker-based map correction method for an automatic mapping robot, in which the automatic mapping robot repeatedly obtains, during mapping, the precise position information of artificial visual markers arranged at certain intervals; by reading the position information encoded in a marker and computing the relative position between the marker and the robot, a correction of the errors of the robot's GPS positioning and dead reckoning is obtained. The method generally comprises three parts:
Reading the artificial visual marker: the forward-looking visual sensor captures and identifies the artificial visual marker (e.g. a two-dimensional code) arranged at an anchor point (i.e. a visual marker point); by reading the information encoded in the marker, the precise position information of the marker currently being read, such as its orientation and coordinates, is obtained (a minimal sketch of this step is given after the three parts below);
Computing the relative position: after the automatic mapping robot has read the specific information of the visual marker point, the visual sensor provides the position information of the marker point (i.e. the aforementioned orientation and coordinates), and the current relative position and heading of the automatic mapping robot with respect to the marker point are computed by visual positioning;
Accurate map trimming: from the position information of the visual marker point and the relative position information of the automatic mapping robot, the precise position information of the robot (including the precise coordinates and heading) is obtained, and this precise positioning information is used to correct and optimize the pose drift and accumulated error of the map data between visual marker points, without resorting to loop-closure detection, environment matching or the like for error elimination.
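By way of illustration (not part of the claimed method), a minimal Python sketch of the marker-reading step is given below. It assumes OpenCV's QR-code detector and a comma-separated payload of the form "id,x,y,heading"; the payload layout is an assumed convention, since the description only states that the code carries the marker serial number, the map coordinates and the heading angle.

```python
import cv2

def read_visual_marker(frame):
    """Detect a QR-code visual marker in one camera frame and return its decoded
    map pose plus the four image corners (used later for the PnP step).
    The "id,x,y,heading" payload layout is an assumed convention."""
    detector = cv2.QRCodeDetector()
    payload, corners, _ = detector.detectAndDecode(frame)
    if not payload:                        # no marker detected, or decoding failed
        return None
    marker_id, x, y, heading = payload.split(",")
    return {
        "id": int(marker_id),
        "map_xy": (float(x), float(y)),    # marker-point coordinates in the map frame
        "heading": float(heading),         # marker heading angle, degrees
        "corners": corners.reshape(-1, 2), # four corner points in pixel coordinates
    }
```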
The method can be carried out according to the detailed technical flow below:
The image information in front of the automatic mapping robot is obtained with a low-cost forward-looking monocular camera (i.e. the visual sensor), GPS or dead reckoning provides the initial positioning information of the robot's current position, and simultaneous localization and mapping is performed with the laser radar;
From the forward-view image (i.e. the image obtained by the aforementioned forward-looking monocular camera), the type of the two-dimensional code (i.e. the artificial visual marker) and its four corner points are identified and extracted, and the planar-structure assumption of the two-dimensional code is used to compute the relative positional relationship between the automatic mapping robot and the code;
Using the encoded information of the two-dimensional code marker, closed-loop optimization is performed over the repeated observations of the code and the accumulated noise is eliminated; the exact position and other information of the corresponding visual marker point are read and recorded; and the vehicle-to-marker relative position solved from the two-dimensional code and the relative displacement of the automatic mapping robot observed by inertial navigation are fused and optimized with an extended Kalman filter, realizing real-time map trimming;
The trimmed map is optimized repeatedly with a graph optimization method, improving the precision of the visual-marker-based map trimming;
Using the positioning information generated during mapping and the graph optimization result, the current position of the automatic mapping robot is solved and optimized in real time from the identification of the two-dimensional codes, achieving high-accuracy positioning and mapping.
The artificial visual markers are arranged evenly at the set interval. Rough positioning is performed by means of GPS positioning and dead reckoning, and the SLAM (simultaneous localization and mapping) process runs in real time. The mapping process and the trimming process are interleaved: each time a visual marker is detected and resolved during mapping, the map built up to that moment can be corrected, and mapping then continues, accumulating error and mixing in noise such as jumps, until the next visual marker is observed and these errors are corrected again. In other words, the mapping process runs continuously, whereas map trimming and error correction occur only when a visual marker has been detected and resolved.
Fig. 1 is a flowchart of the visual-marker-based map correction method for the automatic mapping robot in the present embodiment.
In the present embodiment, the specific implementation process of the method is as follows:
A converted electric vehicle is used as the acquisition platform; in this embodiment it is a short-wheelbase self-driving vehicle on which a GPS positioning module, a laser radar, a forward-looking monocular camera, a trajectory planning module, a control module and other components are mounted, so that the vehicle can drive itself and constitutes the automatic mapping robot. In the present invention, these components can also be installed, by conversion, on other vehicles or other movable platforms to constitute the automatic mapping robot.
The forward-looking monocular camera is mounted at the front of the vehicle, pointing obliquely downward, and the intrinsic parameters of the camera are calibrated with the chessboard intrinsic-calibration method. The GPS antenna is mounted on the front-rear axis directly above the vehicle roof. A laser radar is used; it is a multi-line laser radar and is mounted on top of the automatic mapping robot.
Each two-dimensional code has a unique serial number. The code encodes the following information: the serial number of the marker point, the position coordinates of the corresponding map marker point, and its heading angle.
The two-dimensional codes are printed on matte waterproof paper of A2 size or larger and posted at each anchor point of the scene to be mapped, for example on the centre line of the road or at corner positions. The posting density is about one code per 10 m along straight lane lines, increased to one code per 5 m at corners.
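For illustration only, a short sketch of composing the payload printed into each code; the field set (serial number, map coordinates, heading angle) follows the description, while the comma-separated layout and the metre/degree units are assumptions matching the reading sketch above.

```python
def marker_payload(serial, x, y, heading_deg):
    """Compose the string encoded into one printed QR marker.
    Fields follow the description (serial number, map coordinates, heading angle);
    the comma-separated layout and the units are assumed conventions."""
    return f"{serial},{x:.3f},{y:.3f},{heading_deg:.1f}"

# Example: one marker every 10 m along a straight 100 m lane heading east (0 deg),
# matching the posting density described above for straight lane lines.
payloads = [marker_payload(i, 10.0 * i, 0.0, 0.0) for i in range(11)]
```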
Using the GPS positioning information and dead reckoning as the initial positioning, the automatic mapping robot follows the pre-acquired path information, and the point-cloud information measured by the on-board laser radar is used as the mapping material for the local mapping process.
The two-dimensional code is detected with the forward-looking monocular camera; since the four corner points in the image lie on the same plane in the world coordinate system and the side length of the code is known, a PnP (Perspective-n-Point) model can be used for the solution.
With the PnP model, given n (n > 2) groups of three-dimensional point coordinates and the corresponding projected-point coordinates on the two-dimensional image, the camera extrinsic parameters can be determined. The specific solution procedure is as follows:
The correspondence between a three-dimensional point and its two-dimensional image point can be expressed as:
s·p_c = K [R T] p_w
where s is the projective scale factor; p_w = [x y z 1]^T is the homogeneous form of the three-dimensional point in the world coordinate system (x, y and z are the coordinates along the three axes; the definition of the x, y and z axes is given below); p_c = [u v 1]^T is the homogeneous form of the two-dimensional point in the image coordinate system (u and v are the two image-plane coordinates); and K is the camera intrinsic matrix, obtained in advance in the calibration step. Since the extrinsic parameters [R T] (R and T are the rotation matrix and the translation vector of the camera extrinsics) have 6 degrees of freedom (3 translations and 3 rotations), [R T] can be solved by SVD-decomposition iteration when n > 2.
Define the two-dimensional code space coordinate system with its origin at the centre of the code, the x-axis pointing horizontally to the right along the code, the y-axis pointing vertically upward along the code, and the z-axis pointing inward, perpendicular to the code plane. Since the relative positions of the four corner points of the code are known, their coordinate values can be computed (see Fig. 2, in which the length s denotes half of the code side length). Therefore p_w is known, p_c is determined by two-dimensional code detection, K is obtained from camera calibration, and n = 4 > 2, so the PnP model can be used to solve [R T] iteratively.
Applying the Rodrigues transform to R_3×3 (the R matrix above, a 3×3 matrix in three-dimensional space) gives the angle a between the line from the forward-looking monocular camera to the centre of the two-dimensional code and the vehicle heading; taking the 2-norm of t_3×1 (the T vector above, of size 3×1) gives the distance d from the camera to the centre of the code. The coordinates of the visual marker in the mapping-robot coordinate system are then (x_tag = sin(a)·d, y_tag = cos(a)·d), where x_tag and y_tag denote the abscissa and ordinate in the mapping-robot coordinate system. This yields the relative position of the visual marker point and the automatic mapping robot.
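A minimal sketch of this PnP step using OpenCV, following the formulas above; the corner ordering required by SOLVEPNP_IPPE_SQUARE, the neglect of lens distortion and the extraction of the angle a as the yaw of the rotation matrix are simplifying assumptions rather than details fixed by the description.

```python
import numpy as np
import cv2

def marker_in_robot_frame(corners_px, s, K, dist_coeffs=None):
    """Solve PnP for one two-dimensional code and return its position in the
    robot frame, following the relations in the text:
        d = ||t||_2,  x_tag = sin(a) * d,  y_tag = cos(a) * d.
    corners_px : (4, 2) pixel coordinates of the four marker corners,
                 assumed ordered as the object points below
    s          : half of the marker side length (metres)
    K          : 3x3 camera intrinsic matrix from the chessboard calibration"""
    # Marker-frame corners: origin at the centre, x right, y up, z out of the
    # code plane; all four corners lie in the z = 0 plane.
    object_pts = np.array([[-s,  s, 0.0],
                           [ s,  s, 0.0],
                           [ s, -s, 0.0],
                           [-s, -s, 0.0]], dtype=np.float64)
    image_pts = np.asarray(corners_px, dtype=np.float64).reshape(4, 1, 2)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist_coeffs,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)           # 3x3 rotation matrix from the Rodrigues vector
    a = np.arctan2(R[1, 0], R[0, 0])     # yaw-like angle taken from R (assumption)
    d = float(np.linalg.norm(tvec))      # camera-to-marker distance, ||t||_2
    x_tag = np.sin(a) * d                # marker position in the robot frame,
    y_tag = np.cos(a) * d                # per the formulas in the description
    return x_tag, y_tag, a, d
```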
The array of two-dimensional code markers arranged at the set intervals is observed repeatedly and continuously. While the laser radar is building the point-cloud map, each time an artificial visual marker such as a two-dimensional code is detected it is identified according to the predefined coding rule; the position coordinates, heading angle and other information encoded in the marker are read, and the position of the automatic mapping robot itself is computed from the located marker corner points. Closed-loop optimization is performed over the repeated observations of the code to eliminate accumulated noise; the robot-to-marker relative position solved from the two-dimensional code and the relative displacement of the robot observed by inertial navigation are fused and optimized with an extended Kalman filter; and, according to the information recorded in the codes, one or more neighbouring marker points are connected into positioning and optimization references such as lines, grids or regions.
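A minimal sketch of the extended-Kalman-filter fusion described above, for illustration only: the state is the robot pose [x, y, θ] in the map frame, the prediction step uses dead-reckoning increments, and the update step uses the absolute pose implied by a marker observation (the marker's encoded map pose combined with the PnP relative pose). The direct-pose measurement model and all noise matrices are assumptions.

```python
import numpy as np

class PoseEKF:
    """Minimal 2-D extended Kalman filter sketch: dead-reckoning prediction
    corrected by the absolute pose implied by a visual-marker observation."""

    def __init__(self, x0, P0):
        self.x = np.asarray(x0, dtype=float)   # state [x, y, theta] in the map frame
        self.P = np.asarray(P0, dtype=float)   # 3x3 state covariance

    def predict(self, dd, dtheta, Q):
        """Propagate the pose with odometry increments (distance dd, heading change dtheta)."""
        x, y, th = self.x
        self.x = np.array([x + dd * np.cos(th),
                           y + dd * np.sin(th),
                           th + dtheta])
        F = np.array([[1.0, 0.0, -dd * np.sin(th)],
                      [0.0, 1.0,  dd * np.cos(th)],
                      [0.0, 0.0,  1.0]])        # Jacobian of the motion model
        self.P = F @ self.P @ F.T + Q

    def update(self, z, R):
        """Correct with an absolute pose z = [x, y, theta] derived from the marker's
        encoded map pose and the PnP relative pose (measurement model H = identity)."""
        H = np.eye(3)
        r = np.asarray(z, dtype=float) - self.x          # innovation
        r[2] = (r[2] + np.pi) % (2.0 * np.pi) - np.pi    # wrap the heading difference
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ r
        self.P = (np.eye(3) - K @ H) @ self.P
```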
A visual-marker-point positioning reference graph is assembled: each vertex is the position of a visual marker point, i.e. the position of a two-dimensional code, and the vertices are connected based on the record information read from the codes. Each time a new observation or a vehicle displacement is obtained, a new vertex and a new edge are added to the graph, the established positioning graph is corrected by the extended Kalman filter algorithm, the current vehicle position is obtained incrementally, and the positions of the visual markers in the map are updated immediately.
After all data have been added, the edge error equations are listed according to the relationships between the edges and vertices of the graph, and the Gauss-Newton method is used to optimize iteratively.
The graph model can be described by the following error equation:

F(x_coordinate) = Σ_{k∈C} e_k(x_k, z_k)^T · Ω_k · e_k(x_k, z_k),    x* = argmin F(x_coordinate)

where:
x_coordinate denotes the matrix of node position information and is the independent variable of the error function;
k denotes any single node (a visual marker or the vehicle) in the graph model;
C denotes the set of all nodes in the graph model;
x_k denotes the position information matrix of the k-th node: if the node is a visual marker it stores the position coordinates of the marker, and if the node is the vehicle it stores the position coordinates and heading of the vehicle;
z_k denotes the position information matrix of the k-th node obtained from the observations related to that node;
Ω_k denotes the covariance matrix obtained from the observations related to the k-th node;
e_k(x_k, z_k) denotes the error function between x_k and z_k;
F(x_coordinate) denotes the error function of the graph model;
x* denotes the globally optimal solution of the graph model.

The above expression is solved iteratively with the Gauss-Newton method: F(x_coordinate) is expanded in a first-order Taylor series, the resulting linear equation is solved to obtain a new globally optimal solution, and this solution is substituted into F(x_coordinate) as the initial value for a new round of iteration.
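A minimal Gauss-Newton sketch over such a graph, for illustration only: nodes are reduced to two-dimensional positions, each edge constrains the offset between two nodes, and F(x) = Σ_k e_k^T Ω_k e_k is minimised by repeated linearisation as in the description; fixing the first node and the pure-translation error model are simplifying assumptions.

```python
import numpy as np

def gauss_newton_graph(x0, edges, n_iters=10):
    """Minimal Gauss-Newton graph optimization sketch (2-D positions only).
    x0    : (N, 2) initial node positions
    edges : list of (i, j, z_ij, omega) constraining x_j - x_i to z_ij,
            with omega the 2x2 weight matrix of the edge"""
    x = np.asarray(x0, dtype=float).copy()
    n = x.shape[0]
    for _ in range(n_iters):
        H = np.zeros((2 * n, 2 * n))
        b = np.zeros(2 * n)
        for i, j, z, omega in edges:
            e = (x[j] - x[i]) - z            # residual of this edge
            Ai, Aj = -np.eye(2), np.eye(2)   # Jacobians of e w.r.t. x_i and x_j
            H[2*i:2*i+2, 2*i:2*i+2] += Ai.T @ omega @ Ai
            H[2*i:2*i+2, 2*j:2*j+2] += Ai.T @ omega @ Aj
            H[2*j:2*j+2, 2*i:2*i+2] += Aj.T @ omega @ Ai
            H[2*j:2*j+2, 2*j:2*j+2] += Aj.T @ omega @ Aj
            b[2*i:2*i+2] += Ai.T @ omega @ e
            b[2*j:2*j+2] += Aj.T @ omega @ e
        H[0:2, 0:2] += np.eye(2) * 1e6       # gauge constraint: pin the first node
        dx = np.linalg.solve(H, -b)          # Gauss-Newton step (normal equations)
        x += dx.reshape(n, 2)
    return x
```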
To improve the efficiency of the optimization, global graph optimization is combined with local graph optimization: the nodes and edges are first sampled uniformly and a global graph optimization is performed, with the sampling frequency adjusted according to the observation frequency of the two-dimensional codes (typically 0.02 Hz in practical applications); then, based on the points in the global graph, local graph optimization is carried out. The corrected and optimized high-precision navigation map is stored.
In summary, the present invention uses a low-cost forward-looking monocular camera to identify the artificial visual markers arranged in advance in the environment, reads their information to obtain the corresponding accurately positioned marker points, places these accurately positioned points on the map being acquired according to the read markers, corrects and locally optimizes the acquired map information according to the newly added accurate positioning points, and finally obtains the required high-precision mapping result.
The above description of the embodiments is intended to help those skilled in the art to understand and apply the present invention. A person skilled in the art can obviously make various modifications to these embodiments easily and apply the general principles described herein to other embodiments without creative labour. Therefore, the present invention is not limited to the embodiments given here; improvements and modifications made by those skilled in the art according to the disclosure of the present invention, without departing from the scope of the present invention, shall all fall within the protection scope of the present invention.

Claims (10)

1. A visual-marker-based map correction method for an automatic mapping robot, characterised in that:
(1) a visual marker in the environment to be mapped is detected; the position information of the corresponding visual marker point encoded in the visual marker is read; and, based on the corner-point information of the visual marker, the current relative position and heading of the automatic mapping robot with respect to the visual marker point are computed;
(2) the coordinates and heading of the automatic mapping robot are obtained from the position information of the visual marker point and the relative position of the automatic mapping robot with respect to the visual marker point, and the pose drift and accumulated map error between visual marker points are corrected and optimized.
2. The visual-marker-based map correction method for an automatic mapping robot according to claim 1, characterised in that the mapping process of the automatic mapping robot comprises:
obtaining initial positioning information of the current automatic mapping robot by GPS or dead reckoning, and performing simultaneous localization and mapping in the environment to be mapped by means of a laser radar.
3. The visual-marker-based map correction method for an automatic mapping robot according to claim 1, characterised in that in step (1) the position information includes the coordinates and orientation of the visual marker.
4. The visual-marker-based map correction method for an automatic mapping robot according to claim 1, characterised in that the visual marker is a two-dimensional code.
5. The visual-marker-based map correction method for an automatic mapping robot according to claim 4, characterised in that detecting the visual marker in the environment to be mapped comprises: obtaining image information of the environment to be mapped with a forward-looking monocular camera, and identifying and extracting the two-dimensional code information therefrom.
6. The visual-marker-based map correction method for an automatic mapping robot according to claim 1, characterised in that the visual markers are arranged evenly at a set interval.
7. The visual-marker-based map correction method for an automatic mapping robot according to claim 1, characterised in that step (2) comprises:
moving the mapping robot, repeatedly observing the visual markers arranged at the set interval in the environment to be mapped, and reading the position information encoded in the visual markers;
performing closed-loop optimization over the repeated observations of a visual marker to eliminate accumulated noise, reading and recording the position of the visual marker point where the visual marker is located, fusing with an extended Kalman filter the relative position of the automatic mapping robot and the visual marker point solved from the visual marker with the relative position observed by inertial navigation, and trimming the map in real time;
repeatedly optimizing the mapping result with a graph optimization method, so as to improve the precision of the trimmed map.
8. The visual-marker-based map correction method for an automatic mapping robot according to claim 1, characterised in that the automatic mapping robot comprises a mobile platform and a visual sensor and a laser radar mounted on the mobile platform; the visual sensor is used to detect the visual markers in the environment to be mapped, and the laser radar is used for mapping.
9. The visual-marker-based map correction method for an automatic mapping robot according to claim 1, characterised in that the visual sensor is a forward-looking monocular sensor and the laser radar is a multi-line laser radar.
10. The visual-marker-based map correction method for an automatic mapping robot according to claim 1, characterised in that the mobile platform is a vehicle.
CN201811329204.4A 2018-11-09 2018-11-09 Visual-marker-based map correction method for an automatic mapping robot Pending CN109556616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811329204.4A CN109556616A (en) 2018-11-09 2018-11-09 A kind of automatic Jian Tu robot of view-based access control model label builds figure dressing method

Publications (1)

Publication Number Publication Date
CN109556616A true CN109556616A (en) 2019-04-02

Family

ID=65866152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811329204.4A Pending CN109556616A (en) 2018-11-09 2018-11-09 A kind of automatic Jian Tu robot of view-based access control model label builds figure dressing method

Country Status (1)

Country Link
CN (1) CN109556616A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104596502A (en) * 2015-01-23 2015-05-06 浙江大学 Object posture measuring method based on CAD model and monocular vision
CN108290294A (en) * 2015-11-26 2018-07-17 三星电子株式会社 Mobile robot and its control method
CN108352071A (en) * 2015-12-29 2018-07-31 德州仪器公司 Method for the motion structure processing in computer vision system
CN206038015U (en) * 2016-08-31 2017-03-22 湖南瑞森可机器人科技有限公司 Intelligent mobile device
CN106989746A (en) * 2017-03-27 2017-07-28 远形时空科技(北京)有限公司 Air navigation aid and guider
CN107180215A (en) * 2017-05-31 2017-09-19 同济大学 Automatic parking-lot mapping and high-precision localization method based on parking spaces and two-dimensional codes
CN107422735A (en) * 2017-07-29 2017-12-01 深圳力子机器人有限公司 A kind of trackless navigation AGV laser and visual signature hybrid navigation method
CN108363386A (en) * 2017-12-30 2018-08-03 杭州南江机器人股份有限公司 Position Method for Indoor Robot, apparatus and system based on Quick Response Code and laser
CN108648270A (en) * 2018-05-12 2018-10-12 西北工业大学 Unmanned plane real-time three-dimensional scene reconstruction method based on EG-SLAM
CN108734737A (en) * 2018-06-14 2018-11-02 哈尔滨工业大学 Method for estimating the rotation axis of a spinning non-cooperative space target based on visual SLAM

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580375A (en) * 2019-09-27 2021-03-30 苹果公司 Location-aware visual marker
CN112650207A (en) * 2019-10-11 2021-04-13 杭州萤石软件有限公司 Robot positioning correction method, apparatus, and storage medium
CN110861082A (en) * 2019-10-14 2020-03-06 北京云迹科技有限公司 Auxiliary mapping method and device, mapping robot and storage medium
WO2021129347A1 (en) * 2019-12-24 2021-07-01 炬星科技(深圳)有限公司 Auxiliary positioning column and navigation assistance system of self-traveling robot
CN111060118A (en) * 2019-12-27 2020-04-24 炬星科技(深圳)有限公司 Scene map establishing method, device and storage medium
CN111060118B (en) * 2019-12-27 2022-01-07 炬星科技(深圳)有限公司 Scene map establishing method, device and storage medium
CN113124850A (en) * 2019-12-30 2021-07-16 北京极智嘉科技股份有限公司 Robot, map generation method, electronic device, and storage medium
CN113124850B (en) * 2019-12-30 2023-07-28 北京极智嘉科技股份有限公司 Robot, map generation method, electronic device, and storage medium
CN111256689A (en) * 2020-01-15 2020-06-09 北京智华机器人科技有限公司 Robot positioning method, robot and storage medium
CN111256689B (en) * 2020-01-15 2022-01-21 北京智华机器人科技有限公司 Robot positioning method, robot and storage medium
CN111240330B (en) * 2020-01-17 2021-03-23 电子科技大学 Method and system for synchronous navigation and accurate positioning of grain leveling robot
CN111240330A (en) * 2020-01-17 2020-06-05 电子科技大学 Method and system for synchronous navigation and accurate positioning of grain leveling robot
CN111958636A (en) * 2020-08-07 2020-11-20 湖南神通智能股份有限公司 Marking method and system for robot position
CN112146662B (en) * 2020-09-29 2022-06-10 炬星科技(深圳)有限公司 Method and device for guiding map building and computer readable storage medium
WO2022068781A1 (en) * 2020-09-29 2022-04-07 炬星科技(深圳)有限公司 Guided mapping method and device, and computer-readable storage medium
CN112146662A (en) * 2020-09-29 2020-12-29 炬星科技(深圳)有限公司 Method and device for guiding map building and computer readable storage medium
CN112596070A (en) * 2020-12-29 2021-04-02 四叶草(苏州)智能科技有限公司 Robot positioning method based on laser and vision fusion
CN112596070B (en) * 2020-12-29 2024-04-19 四叶草(苏州)智能科技有限公司 Robot positioning method based on laser and vision fusion
CN113776523A (en) * 2021-08-24 2021-12-10 武汉第二船舶设计研究所 Low-cost navigation positioning method and system for robot and application
CN113776523B (en) * 2021-08-24 2024-03-19 武汉第二船舶设计研究所 Robot low-cost navigation positioning method, system and application
CN114237262A (en) * 2021-12-24 2022-03-25 陕西欧卡电子智能科技有限公司 Automatic mooring method and system for unmanned ship on water
CN114237262B (en) * 2021-12-24 2024-01-19 陕西欧卡电子智能科技有限公司 Automatic berthing method and system for unmanned ship on water surface
CN114322933A (en) * 2021-12-28 2022-04-12 珠海市运泰利自动化设备有限公司 Visual feedback compensation method based on tray inclination angle
CN114322933B (en) * 2021-12-28 2024-06-11 珠海市运泰利自动化设备有限公司 Visual feedback compensation method based on tray inclination angle
CN116242366A (en) * 2023-03-23 2023-06-09 广东省特种设备检测研究院东莞检测院 Spherical tank inner wall climbing robot walking space tracking and navigation method
CN116242366B (en) * 2023-03-23 2023-09-12 广东省特种设备检测研究院东莞检测院 Spherical tank inner wall climbing robot walking space tracking and navigation method

Similar Documents

Publication Publication Date Title
CN109556616A (en) A kind of automatic Jian Tu robot of view-based access control model label builds figure dressing method
CN109556617A (en) A kind of map elements extracting method of automatic Jian Tu robot
CN109270953B (en) Multi-rotor unmanned aerial vehicle autonomous landing method based on concentric circle visual identification
CN108571971B (en) AGV visual positioning system and method
CN103064417B (en) A kind of Global localization based on many sensors guiding system and method
CN111427360B (en) Map construction method based on landmark positioning, robot and robot navigation system
CN102997910B (en) A kind of based on road of ground surface target location guidance system and method
CN109446973B (en) Vehicle positioning method based on deep neural network image recognition
EP3062066A1 (en) Determination of object data by template-based UAV control
CN109991984A (en) For generating the method, apparatus and computer storage medium of fine map
CN104200086A (en) Wide-baseline visible light camera pose estimation method
CN110308729A (en) The AGV combined navigation locating method of view-based access control model and IMU or odometer
CN108363386A (en) Position Method for Indoor Robot, apparatus and system based on Quick Response Code and laser
CN102419178A (en) Mobile robot positioning system and method based on infrared road sign
CN111192318B (en) Method and device for determining position and flight direction of unmanned aerial vehicle and unmanned aerial vehicle
CN108805930A (en) The localization method and system of automatic driving vehicle
CN108932477A (en) A kind of crusing robot charging house vision positioning method
CN109146958B (en) Traffic sign space position measuring method based on two-dimensional image
US20230236280A1 (en) Method and system for positioning indoor autonomous mobile robot
CN110488838B (en) Accurate repeated positioning method for indoor autonomous navigation robot
CN107063229A (en) Mobile robot positioning system and method based on artificial landmark
CN111426320A (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
CN110389369A (en) Canopy point cloud acquisition methods based on RTK-GPS and mobile two dimensional laser scanning
KR20230003803A (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
JP2011112556A (en) Search target position locating device, method, and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190402