CN109556617A - Map element extraction method for an automatic map-building robot - Google Patents
Map element extraction method for an automatic map-building robot
- Publication number
- CN109556617A (application CN201811329208.2A)
- Authority
- CN
- China
- Prior art keywords
- map
- information
- map elements
- map-building
- automatic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
Abstract
The invention proposes a map element extraction method for an automatic map-building robot, in the field of navigation technology. The method acquires the first map element information of an environment to be surveyed and constructs a point-cloud outline map of that environment; detects visual markers placed at set positions in the environment and reads the second map element information contained in each acquired marker; and labels the second map element information into the point-cloud outline map to obtain a map containing specific semantic information. The first map element information comprises point-cloud information of positions in the environment; the second map element information comprises the specific semantic information of the map element at the corresponding marker position. The method completes point-cloud mapping and semantic annotation fully automatically, yielding a high-precision semantic map; it is low in cost, easy to implement, and scales well.
Description
Technical field
The invention belongs to the field of navigation technology and relates to a map element extraction method, in particular a map element extraction method for an automatic map-building robot.
Background art
In recent years, as autonomous vehicles have gradually entered service in certain domains, a sharp tension has emerged between their great demand for high-precision maps and the high acquisition cost and cumbersome collection workflow of today's high-precision semantic maps, creating considerable difficulty for large-scale practical deployment. In current map collection, a local map can be constructed in the form of a lidar point-cloud map, and automatic algorithms can stitch the local point-cloud maps together and extract outlines, ultimately producing a global outline map.
However, current automatic algorithms remain limited in segmenting and recognizing the semantic information of map elements themselves (such as type, size, and position): their accuracy and computation speed still fall well short of manual annotation. In the post-processing stage of high-precision mapping, large amounts of manpower and time are therefore still required to manually annotate map element semantics on the automatically acquired global outline map. This labor and time cost poses no small obstacle to the wider application of high-precision maps in autonomous vehicles.
Summary of the invention
The purpose of the present invention is to provide a method that automatically completes point-cloud mapping and semantic annotation and obtains a high-precision semantic map.
In order to achieve the above object, the solution of the invention is as follows:
A map element extraction method for an automatic map-building robot: acquire the first map element information of the environment to be surveyed and construct the point-cloud outline map of that environment; detect the visual markers at set positions in the environment and read the second map element information contained in each acquired marker; label the second map element information into the point-cloud outline map to obtain a map containing specific semantic information. The first map element information comprises point-cloud information of positions in the environment; the second map element information comprises the specific semantic information of the map element at the corresponding marker position.
The visual marker is a two-dimensional code.
The first map element information comprises outline, distance, and height-difference information of map elements, without the semantic information of the map elements.
A lidar is used to acquire the first map element information of the environment to be surveyed; preferably, the lidar is a multi-line lidar.
The second map element information comprises the encoded position, type, and size information of the map element at the corresponding visual-marker position.
A visual sensor is used to detect the visual markers at set positions in the environment; preferably, the visual sensor is a forward-looking monocular camera.
The first map element information of the environment is acquired by lidar and preprocessed with a SLAM (simultaneous localization and mapping) algorithm to generate a scene map based on the lidar point cloud, from which a point-cloud outline map without map element semantics is established.
The visual marker is a two-dimensional code. A forward-looking monocular camera acquires image information of the ground ahead of the automatic map-building robot; the code type and the four corner points of the code are identified and extracted from the image information, and the planar geometry of the code is used to compute the relative positional relationship between the robot and the code. The second map element information corresponding to each code is read and recorded; an extended Kalman filter fuses, by optimization, the robot relative-position information solved from the code with the robot relative-displacement information observed by inertial navigation, mapping the semantic marker points of the map elements. According to the information recorded in the codes, one or more related code marker points are formed into lines, grids, or region outlines, obtaining the semantic outline map of the environment to be surveyed.
Labeling the second map element information into the point-cloud outline map to obtain the map containing specific semantic information comprises: fusing the semantic outline map with the point-cloud outline map, identifying the point-cloud outlines, and labeling the second map element information from the semantic outline map into the point-cloud outline map to obtain a map containing complete semantics.
The different coding information of the visual markers is predefined, one set per row, in a table; by increasing the number of rows of the table, the method is extended to accommodate incremental map element types. After the predefined table is extended, the maps already acquired, the existing visual markers, and the unextended table all remain mutually compatible.
By adopting the above scheme, the beneficial effects of the present invention are as follows. The rapid high-precision map element extraction method of the automatic map-building robot is a fully automatic approach that completes lidar point-cloud mapping, generates the global outline map, and performs real-time semantic annotation on the outline map by reading artificial visual markers. The visual sensor used by this method is cheap and easy to use; the artificial visual markers are simple to deploy and the algorithms are mature. AprilTag code recognition is fast, accurate, simple to implement, and reliable. Directly reading code contents by vision (recognition rate close to 100%) has a marked accuracy advantage over vision-based deep-learning recognition (recognition rate often 80-90%). The method requires no manual annotation of each map element in the conventional way, eliminating substantial labor costs and annotation-software licensing fees: once the codes are posted, the semantic information points of the map elements are collected and annotated automatically while the lidar builds the map, at low cost. The invention thus realizes, in a cheap, convenient, and fast manner, high-precision maps with integrated semantics; it is particularly suited to automatic acquisition of high-precision semantic maps for autonomous vehicles, can effectively promote the large-scale application of high-precision maps on autonomous vehicles, and can be applied to usage scenarios such as automated-valet parking lots.
Brief description of the drawings
Fig. 1 is the flow chart of the map element extraction method of the automatic map-building robot of one embodiment of the invention;
Fig. 2 is the two-dimensional-code placement illustration for a certain single road in the embodiment;
Fig. 3 is the two-dimensional-code placement illustration for a certain branching road in the embodiment;
Fig. 4 is the two-dimensional-code placement illustration for a certain open area in the embodiment;
Fig. 5 is a schematic of the corner recognition and localization result for a certain code in the embodiment.
In the drawings: 1, straight-road start-point code; 2, straight-road end-point code; 3, intersection turning-point code; 4, open-area boundary-point code; 5, open-area exit-point code; 6, open-area entrance code.
Specific embodiment
The present invention will be further described with reference to the accompanying drawings.
The invention proposes a map element extraction method for an automatic map-building robot. Based on a visual sensor and a lidar, the method extracts the map elements of a high-precision map for the automatic map-building robot. It acquires the first map element information of the environment to be surveyed and constructs the point-cloud outline map of that environment; detects the visual markers at set positions in the environment and reads the second map element information contained in each acquired marker; and labels the second map element information into the point-cloud outline map to obtain a map containing specific semantic information. The first map element information comprises point-cloud information of positions in the environment; the second map element information comprises the specific semantic information of the map element at the corresponding marker position.
That is, in this method the mapping process of the map-building robot mainly comprises two parts, the lidar mapping of the robot and the automatic annotation of map element information, in which:
The lidar mapping of the automatic map-building robot comprises: using a multi-line lidar to obtain the outline, distance, height-difference, and similar information of the map elements in the surrounding local environment and, combined with the robot's own positioning-sensor information, completing the construction of a high-precision point-cloud map of the robot's surrounding local environment without map element semantics.
The automatic annotation of map element information comprises: on the basis of the lidar mapping, using the visual sensor to read the defined artificial visual markers, such as two-dimensional codes, placed on the target map elements or at specific surrounding positions, thereby directly obtaining the specific position, type, size, and similar information of each map element. By binding these markers to the previously established semantics-free high-precision point-cloud map of the surrounding local environment, the point-cloud outlines of the map elements are precisely identified and annotated and their positions in the map recorded, completing the automatic annotation of map element information containing specific semantics.
Fig. 1 is the flow chart of the map element extraction method of the automatic map-building robot in the present embodiment.
In the present embodiment, the specific implementation process of the method is as follows:
A converted electric vehicle serves as the acquisition platform; in the present embodiment it is a short-wheelbase self-driving vehicle on which a GPS positioning module, lidar, forward-looking monocular camera, trajectory-planning module, control module, and similar components are mounted so that it can drive itself, constituting the automatic map-building robot. In the invention, these components may also be installed on other converted vehicles or other movable platforms to constitute the automatic map-building robot.
The forward-looking monocular camera is mounted at the front center of the vehicle, angled downward, with a resolution of 1080p or higher, a frame rate of 25 Hz, and a latency under 5 ms; its intrinsic parameters are calibrated with a checkerboard intrinsic-calibration procedure. The lidar, with 32 or 64 lines and a 360-degree horizontal field of view, is mounted directly above the vehicle roof; its extrinsic and intrinsic parameters are likewise calibrated before use.
Map elements such as lane lines, walls, pillars, traffic-signal lines, and obstacle boundaries are important environmental landmarks, yet they are prone to misdetection in traditional visual SLAM (simultaneous localization and mapping). Although an RGB-D camera can detect them effectively, it costs more and is ineffective on strongly reflective materials such as glass. Artificial visual markers (two-dimensional codes in this embodiment) are therefore introduced to assist detection.
The AprilTag two-dimensional-code scheme is adopted; each code has a unique and unambiguous serial number. The code contains the following information coding: the marker-point serial number, the position, size, and type of the corresponding map element, and necessary information such as the serial numbers and ordering of the other marker points to be combined into a semantic outline. One row of a definition table is formulated for each distinct set of information; the code itself contains only the row number of the definition table.
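As a concrete illustration, the row-number lookup described above can be sketched as follows. The table layout, field names, and example values here are hypothetical, not the patent's actual encoding:

```python
# Hypothetical definition table: each QR code carries only a row number; the row
# holds the semantic payload (element type, role, size, and the serial numbers of
# the other markers that combine into one semantic outline).
DEFINITION_TABLE = [
    # (element_type, role, size_m, linked_marker_serials)
    ("lane_line", "straight_start", 0.15, (2,)),       # row 0
    ("lane_line", "straight_end",   0.15, (1,)),       # row 1
    ("open_area", "boundary_point", 0.0,  (4, 5, 6)),  # row 2
]

def decode_marker(row_number: int) -> dict:
    """Resolve the row number read from a QR code into its semantic record."""
    etype, role, size, links = DEFINITION_TABLE[row_number]
    return {"type": etype, "role": role, "size_m": size, "linked": links}
```

Because the code stores only an index, reprinting markers is never needed when the table's row contents are refined.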
The codes are printed on matte waterproof paper of at least A2 size and posted at positions such as the start points, end points, and corner points of each map element in the scene to be surveyed. The posting density is about one code per 10 m along straight lane lines, increased to one per 5 m at turns. On walls and pillars the posting height is about 1.7-2.5 m, ensuring that the vehicle's forward-looking monocular camera can observe the codes while avoiding occlusion by other vehicles. Fig. 2 shows the code placement on a certain single road; Fig. 3 on a certain branching road; Fig. 4 in a certain open area.
The visual sensor (i.e., the forward-looking monocular camera) reads the content of each code, looks up the corresponding semantic information and its coding in the definition table by the row number read, and simultaneously computes the relative position between the map-collection robot and the code itself (calculated from the code corner points and a planar PnP model). Combined with the robot's self-localization information (RTK-GPS), the position of the map element represented by the code is obtained. The detailed process is as follows:
The code is detected with the forward-looking monocular camera. Since its four corner points lie on the same plane in the world coordinate system and the code's side length is known, a PnP (Perspective-n-Point) model can be used: given n (n > 2) pairs of 3-D point coordinates and the corresponding projected-point coordinates on the 2-D image, the camera extrinsic parameters can be determined. The specific solution procedure is as follows:
The correspondence between a 3-D point and its 2-D image point may be expressed as

s · p_c = K · [R T] · p_w

wherein p_w = [x y z 1]^T is the homogeneous form of the 3-D point in the world coordinate system (x, y, z are the space coordinates along the three axes; the definition of the x, y, and z axes is given below); p_c = [u v 1]^T is the homogeneous form of the 2-D point in the image coordinate system (u, v are the two coordinates on the image plane); K is the camera intrinsic matrix, acquired in advance in the calibration step. Since the extrinsic parameters [R T] (R the rotation matrix, T the translation matrix) have six degrees of freedom (specifically 3 translation directions and 3 rotation directions), [R T] can be solved by iterative SVD decomposition when n > 2.
In this method, a tag coordinate frame is defined with the code center as origin: the x-axis runs horizontally to the right along the code, the y-axis vertically upward along the code, and the z-axis perpendicular to the code plane, pointing inward. Because the relative positions of the four corner points of the code are known, their coordinate values follow directly (see Fig. 5, where the length s is half of the code's side length). Therefore p_w is known, p_c is determined by code detection, K is obtained from camera calibration, and n = 4 > 2, so the PnP model can be solved iteratively for [R T].
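The projection model above can be checked numerically. The following sketch applies s · p_c = K · [R T] · p_w to one tag corner; the intrinsics, pose, and tag size are illustrative values, and in practice an off-the-shelf PnP solver would invert this relation from the four corner correspondences:

```python
import numpy as np

# Illustrative camera intrinsics and a camera looking straight at a tag 2 m away.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])   # intrinsic matrix from calibration
R = np.eye(3)                           # rotation: camera axis normal to the tag
T = np.array([[0.0], [0.0], [2.0]])     # translation: tag 2 m ahead

s = 0.1                                 # half side length of the tag (m)
corner_w = np.array([[s], [s], [0.0], [1.0]])  # top-right corner in tag frame

def project(K, R, T, p_w):
    """Apply p_c ~ K [R|T] p_w and dehomogenize to pixel coordinates (u, v)."""
    RT = np.hstack([R, T])              # 3x4 extrinsic matrix [R|T]
    p = K @ RT @ p_w                    # 3x1 homogeneous image point
    return p[:2, 0] / p[2, 0]           # divide by scale s to get (u, v)

u, v = project(K, R, T, corner_w)       # -> pixel (680, 400)
```

Given four such (p_w, p_c) pairs, a PnP solver recovers [R T] by minimizing reprojection error, as the text describes.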
Applying the Rodrigues transform to R (the 3x3 rotation matrix in 3-D space described above) yields the yaw angle a between the line from the forward-looking monocular camera to the code center and the vehicle heading; taking the 2-norm of t (the 3x1 translation matrix above) gives the distance d from the camera to the code center. The coordinates of the visual marker in the map-building robot's coordinate frame are then (x_tag = sin(a) · d, y_tag = cos(a) · d), where x_tag and y_tag denote respectively the abscissa and ordinate in the robot frame.
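A minimal sketch of this final step, assuming the yaw angle a has already been recovered from R: the distance is the 2-norm of t, and the tag position follows from the two trigonometric formulas above. The numeric values are illustrative:

```python
import numpy as np

# Assumed inputs: yaw angle a (from the Rodrigues transform of R) and the
# translation vector t (from PnP). Values chosen so d = 2 m at a 30-degree bearing.
a = np.deg2rad(30.0)                    # camera-to-tag bearing angle (rad)
t = np.array([1.0, 0.0, np.sqrt(3.0)])  # translation from PnP, ||t||_2 = 2

d = np.linalg.norm(t)                   # distance camera -> tag center
x_tag = np.sin(a) * d                   # lateral offset in robot frame
y_tag = np.cos(a) * d                   # forward offset in robot frame
```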
A CSV file is generated for each map element type, with rows appended in the order of observation: each row carries a running serial number, the semantic information read from the definition table, and the code's position coordinates.
In actual use, to make the connection relations between the code marker points explicit, an additional "edge" marker is defined for each map element. It is defined in the same way as above and posted on the straight line between the codes of two vertices; it labels the connection relation, is acquired by the same method, and is stored in a separate CSV file.
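The per-element CSV described above might be produced as in the following sketch; the column names and example rows are hypothetical, not the patent's actual file format:

```python
import csv
import io

def append_observations(observations):
    """Write (semantic, x, y) tuples as CSV rows in observation order,
    prefixing each with a running serial number; returns the CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["serial", "semantic", "x", "y"])
    for serial, (semantic, x, y) in enumerate(observations, start=1):
        writer.writerow([serial, semantic, x, y])
    return buf.getvalue()

# Two markers of one element type, in the order they were observed.
csv_text = append_observations([("lane_start", 3.2, 7.5),
                                ("lane_end",   3.2, 57.5)])
```

A separate file of the same shape would hold the "edge" markers that record connection relations.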
The files obtained and saved together constitute the required vector high-precision map. They contain, for every map element type (such as traffic lights, lanes, stop lines, and open areas), the positions of the vertices or start points (or other feature points), together with the positions of the "edge" markers expressing the connection relations between those points.
In use, the system automatically reads all CSV files and generates the marker vertices on the map from the coordinates of the corresponding element points; it then generates the marker edges from the coordinates of the corresponding edge points: if an edge point exists on the straight line between two vertices, a straight-line connection is generated, and the connected vertices and edges are combined into a map element object. If the line between vertices is a curve, piecewise straight-line fitting can be used when the curvature is small; when the curvature is large, subsequent manual annotation is still required.
Since dead reckoning based on inertial navigation accumulates error, the accumulated error must be corrected by continually re-measuring the code markers or slot positions. During lidar mapping, whenever an artificial visual marker is recognized, the accumulated dead-reckoning error can be largely eliminated, dispensing with loop-closure detection.
In this method an extended Kalman filter is used: given the (straight-line) relative positions among the marker points already observed, each newly observed code marker point reduces the error of the location estimates. A graph-optimization method then iterates the computation to minimize the error over the obtained marker points, yielding the final marker-point positions. Specifically:
While the lidar builds the semantics-free point-cloud map, whenever an artificial visual marker such as a two-dimensional code is detected, it is identified according to the predefined rules and the map element position, type, and other information contained in the marker are read; the vehicle's own position is computed from the marker corner points, achieving a precise fix in the map; and at the same time the semantic information read is labeled at the corresponding position on the map under construction, completing the annotation of semantic information and marker points at specific positions on the map.
From the collected marker-point map, the semantic marker points are joined and combined according to the related information recorded in the codes, forming semantic outlines, including the corresponding curves or shapes. Each vertex is a semantic marker-point position, i.e., a code position, while the edges follow the connection information encoded in the codes read. Each new observation or vehicle displacement adds new vertices and edges to the graph, and the extended Kalman filter algorithm corrects the semantic outline established so far. After all data are added, the edge error equations are listed according to the edges and vertices of the graph and optimized iteratively with the Gauss-Newton method.
The graph model can be described with the following error equation:

F(x_coordinate) = Σ_{k ∈ C} e_k(x_k, z_k)^T · Ω_k · e_k(x_k, z_k),    x* = argmin F(x_coordinate)

wherein:
x_coordinate denotes the matrix of node positions and is the independent variable of the error function;
k denotes any node in the graph model (a visual marker or the vehicle);
C denotes the set of all nodes in the graph model;
x_k denotes the position matrix of the k-th node: for a visual-marker node it stores the marker's position coordinates, for a vehicle node the vehicle's position coordinates and heading;
z_k denotes the position of the k-th node obtained from the observations relevant to that node;
Ω_k denotes the covariance matrix obtained from the observations relevant to the k-th node;
e_k(x_k, z_k) denotes the error function between x_k and z_k;
F(x_coordinate) is then the error function of the entire graph model, and x* denotes its globally optimal solution.
To find the globally optimal solution x*, the above equation is iterated with the Gauss-Newton method: F(x_coordinate) is expanded in a first-order Taylor series, converting the problem into solving a linear system; the new optimum obtained serves as the initial value for the next round of iteration. Optimized with the extended-Kalman-filter formulas, the current vehicle position is obtained incrementally and the positions of the visual markers in the map are updated immediately.
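The linearize-solve-iterate loop can be shown on a toy scalar residual. The real optimizer stacks one weighted residual per graph edge, but the step structure is the same; the residual e(x) = x² − 2 is purely illustrative:

```python
def gauss_newton(x0, iters=6):
    """Minimize e(x)^2 for e(x) = x^2 - 2 by Gauss-Newton iteration:
    linearize the residual, solve the normal equation, update the estimate."""
    x = x0
    for _ in range(iters):
        e = x * x - 2.0           # residual at the current estimate
        J = 2.0 * x               # Jacobian de/dx
        dx = -(J * e) / (J * J)   # normal-equation step: (J^T J) dx = -J^T e
        x += dx
    return x

root = gauss_newton(1.0)          # converges to sqrt(2)
```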
The multi-line lidar acquires point-cloud information of the obstacles in the vehicle's surroundings; through extended Kalman filtering and a graph-matching algorithm, real-time point-cloud outline mapping and real-time stitching of the local maps are completed, finally yielding a global scene map composed entirely of point clouds, which is stored.
According to the positions, sizes, types, and other information of the offline-optimized semantic outline map, the semantically annotated map element outlines are overlaid onto the semantics-free point-cloud outline map completed by the lidar, generating the high-precision map with map element information.
In the present invention, a code number can designate the content of a particular row of the predefined table, so the semantic information points of many different map element types can be collected automatically: for example, predefined code markers for lanes, open areas, stop lines, and traffic lights, not merely the recognition and acquisition of parking slots. The acquired map element types are therefore not limited to the single parking-slot class, but cover the multiple (or single) map element types defined in the predefined table. The different coding information of the visual markers is predefined, one set per row of this table, and the number of rows can be increased to accommodate incremental extension of the map element types: new code types are defined in order by appending rows downward, and the extended map elements can then be acquired. After the predefined table is extended, the maps already acquired, the existing visual markers, and the unextended table all remain mutually compatible (that is, after the extension, old maps and existing markers can still be parsed with the new predefined table during map use, without re-collection against the extended table or other extra work). Thus the set of acquired map element types is not fixed: by adding definition content to the predefined table, new and different map element semantics can be collected on demand. In use, it is only necessary to guarantee that the predefined table relied on at collection time is identical to the one used when the map is read, or is a proper subset of it.
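The backward-compatibility property can be sketched directly: because a printed code stores only a row index, appending rows never invalidates existing markers or maps. The row contents here are hypothetical:

```python
# Increment-only extension of the definition table: old row indices keep their
# meaning, so QR codes printed against v1 remain valid under v2.
table_v1 = ["parking_slot", "lane_line", "stop_line"]
table_v2 = table_v1 + ["traffic_light", "open_area"]  # rows appended only

old_marker_row = 1                  # a code printed when table v1 was current
assert table_v2[old_marker_row] == table_v1[old_marker_row]
```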
The above description of the embodiments is intended to help those skilled in the art understand and apply the present invention. They can evidently make various modifications to these embodiments with ease and apply the general principles described herein to other embodiments without inventive effort. Therefore, the present invention is not limited to the embodiments here; improvements and modifications made by those skilled in the art according to this disclosure, without departing from the scope of the present invention, shall all fall within the protection scope of the present invention.
Claims (10)
1. A map element extraction method for an automatic map-building robot, characterized in that:
the first map element information of an environment to be surveyed is acquired, and the point-cloud outline map of the environment is constructed;
the visual markers at set positions in the environment are detected, and the second map element information contained in each acquired visual marker is read;
the second map element information is labeled into the point-cloud outline map to obtain a map containing specific semantic information;
wherein the first map element information comprises point-cloud information of positions in the environment, and the second map element information comprises the specific semantic information of the map element at the corresponding visual-marker position.
2. The map element extraction method of the automatic map-building robot according to claim 1, characterized in that: the visual marker is a two-dimensional code.
3. The map element extraction method of the automatic map-building robot according to claim 1, characterized in that: the first map element information comprises outline, distance, and height-difference information of map elements, without the semantic information of the map elements.
4. The map element extraction method of the automatic map-building robot according to claim 1, characterized in that: a lidar is used to acquire the first map element information of the environment to be surveyed;
preferably, the lidar is a multi-line lidar.
5. The map element extraction method of the automatic map-building robot according to claim 1, characterized in that: the second map element information comprises the encoded position, type, and size information of the map element at the corresponding visual-marker position.
6. The map element extraction method for an automatic map-building robot according to claim 1, characterized in that the visual markers placed at the preset positions in the environment to be surveyed are detected by a vision sensor;
preferably, the vision sensor is a forward-looking monocular camera.
7. The map element extraction method for an automatic map-building robot according to claim 1, characterized in that the first map element information in the environment to be surveyed is acquired by a laser radar and preprocessed by a SLAM algorithm to generate a scene map based on the laser radar point cloud, thereby establishing a point-cloud contour map that contains no map element semantic information.
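The preprocessing step of claim 7 can be illustrated with a deliberately minimal sketch: projecting a single 2D laser scan into an occupancy grid of obstacle contour cells. A real SLAM pipeline registers and accumulates many scans; here only one scan is rasterized, and the function name and grid parameters are assumptions:

```python
import math

def scan_to_grid(ranges, angle_min, angle_step, resolution=0.1, size=40):
    """Rasterize one 2D LiDAR scan: each valid return marks the grid
    cell it hits as an occupied contour cell (geometry only, no
    semantics), with the sensor at the grid center."""
    grid = [[0] * size for _ in range(size)]
    origin = size // 2
    for i, r in enumerate(ranges):
        if not (0.0 < r < float("inf")):
            continue                                  # skip invalid returns
        a = angle_min + i * angle_step
        gx = origin + round(r * math.cos(a) / resolution)
        gy = origin + round(r * math.sin(a) / resolution)
        if 0 <= gx < size and 0 <= gy < size:
            grid[gy][gx] = 1                          # occupied contour cell
    return grid

# a single return 1 m straight ahead lands 10 cells to the "east"
demo_grid = scan_to_grid([1.0], angle_min=0.0, angle_step=0.0)
```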
8. The map element extraction method for an automatic map-building robot according to claim 6, characterized in that the visual marker is a two-dimensional code;
image information of the ground in front of the automatic map-building robot is acquired by the forward-looking monocular camera; the type of the two-dimensional-code marker and the four corner points of the two-dimensional code are identified and extracted from the image information, and the relative position between the automatic map-building robot and the two-dimensional code is computed under the assumption that the code lies in a plane;
the second map element information corresponding to the two-dimensional code is read and recorded; an extended Kalman filter is used to optimally fuse the relative position of the automatic map-building robot solved from the two-dimensional code with the relative displacement of the robot observed by inertial navigation, thereby realizing point-wise mapping of the semantic markers; and, according to the information recorded in the two-dimensional codes, one or more marker points of the same kind are connected into lines, grids, or regional semantic contours, yielding a semantic contour map of the environment to be surveyed.
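The fusion step of claim 8 can be illustrated with a deliberately simplified one-dimensional Kalman filter: a linear stand-in for the claimed extended Kalman filter, where dead-reckoned displacement grows the uncertainty and a code-derived position fix shrinks it. The noise values and the `kf_predict`/`kf_update` names are assumptions:

```python
def kf_predict(x, P, u, Q):
    """Prediction: integrate the relative displacement u reported by
    inertial navigation; process noise Q grows the uncertainty P."""
    return x + u, P + Q

def kf_update(x, P, z, R):
    """Correction: fuse the absolute position z solved from the
    two-dimensional code's corner points (measurement noise R)."""
    K = P / (P + R)                     # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 0.04                        # initial pose estimate
for u in (1.0, 1.0, 1.0):               # three 1 m dead-reckoning steps
    x, P = kf_predict(x, P, u, Q=0.01)  # drift accumulates: P grows
x, P = kf_update(x, P, z=2.9, R=0.01)   # marker fix pulls the estimate back
```

After the three predictions the estimate is 3 m with inflated variance; the single marker observation moves it most of the way toward the measured 2.9 m and sharply reduces the variance, which is the role the claim assigns to the two-dimensional-code observation.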
9. The map element extraction method for an automatic map-building robot according to claim 8, characterized in that marking the second map element information into the point-cloud contour map to obtain the map containing specific semantic information comprises:
fusing the semantic contour map with the point-cloud contour map, identifying the point-cloud contour map, and marking the second map element information of the semantic contour map into the point-cloud contour map to obtain a map containing complete semantic information.
10. The map element extraction method for an automatic map-building robot according to claim 1, characterized in that different encoded information of the visual markers is predefined in the rows of a table, and incremental expansion of the map element types is accommodated by adding rows to the table; after the predefined table is expanded, maps already acquired, existing visual markers, and the unexpanded table remain mutually compatible.
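The row-indexed marker table of claim 10 can be sketched as a plain lookup structure. The table contents and the `decode` helper are hypothetical; the point is that markers store only a row index, so appending rows never invalidates existing markers or maps:

```python
# Hypothetical predefined table: row number -> encoded meaning of one
# marker type. Deployed markers carry only their row index.
MARKER_TABLE = {
    0: ("wall", "static obstacle contour"),
    1: ("door", "traversable opening"),
    2: ("dock", "charging station"),
}

def decode(row_index):
    """Resolve a scanned marker's row index; an unknown row (e.g. a new
    marker read against an old table) degrades gracefully rather than
    failing, which is what keeps the three artifacts compatible."""
    return MARKER_TABLE.get(row_index, ("unknown", "unrecognized marker"))

# Incremental expansion: append a row; existing indices are untouched.
MARKER_TABLE[3] = ("lane", "traffic lane boundary")
```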
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811329208.2A CN109556617A (en) | 2018-11-09 | 2018-11-09 | A kind of map elements extracting method of automatic Jian Tu robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811329208.2A CN109556617A (en) | 2018-11-09 | 2018-11-09 | A kind of map elements extracting method of automatic Jian Tu robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109556617A true CN109556617A (en) | 2019-04-02 |
Family
ID=65865871
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811329208.2A Pending CN109556617A (en) | 2018-11-09 | 2018-11-09 | A kind of map elements extracting method of automatic Jian Tu robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109556617A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104330090A (en) * | 2014-10-23 | 2015-02-04 | 北京化工大学 | Robot distributed type representation intelligent semantic map establishment method |
CN105678476A (en) * | 2016-03-01 | 2016-06-15 | 浙江大学 | Video-based intelligent guidance system and guidance method for self-study room |
CN107180215A (en) * | 2017-05-31 | 2017-09-19 | 同济大学 | Automatic parking-lot mapping and high-precision localization method based on parking spaces and two-dimensional codes |
CN107449427A (en) * | 2017-07-27 | 2017-12-08 | 京东方科技集团股份有限公司 | A kind of method and apparatus for generating navigation map |
CN108303101A (en) * | 2018-03-05 | 2018-07-20 | 弗徕威智能机器人科技(上海)有限公司 | A kind of construction method of navigation map |
- 2018-11-09 CN CN201811329208.2A patent/CN109556617A/en active Pending
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110057373B (en) * | 2019-04-22 | 2023-11-03 | 上海蔚来汽车有限公司 | Method, apparatus and computer storage medium for generating high-definition semantic map |
CN110057373A (en) * | 2019-04-22 | 2019-07-26 | 上海蔚来汽车有限公司 | Method, apparatus and computer storage medium for generating a fine semantic map |
CN112149471B (en) * | 2019-06-28 | 2024-04-16 | 北京初速度科技有限公司 | Loop detection method and device based on semantic point cloud |
CN112149471A (en) * | 2019-06-28 | 2020-12-29 | 北京初速度科技有限公司 | Loopback detection method and device based on semantic point cloud |
CN110220517A (en) * | 2019-07-08 | 2019-09-10 | 紫光云技术有限公司 | A robust SLAM method for indoor robots combining environmental semantics |
CN110555801A (en) * | 2019-07-26 | 2019-12-10 | 纵目科技(上海)股份有限公司 | Correction method, terminal and storage medium for track deduction |
CN110440811A (en) * | 2019-08-29 | 2019-11-12 | 湖北三江航天红峰控制有限公司 | A kind of universal automatic navigation control method, device and equipment terminal |
CN110440811B (en) * | 2019-08-29 | 2021-05-14 | 湖北三江航天红峰控制有限公司 | Universal autonomous navigation control method, device and equipment terminal |
CN110861082A (en) * | 2019-10-14 | 2020-03-06 | 北京云迹科技有限公司 | Auxiliary mapping method and device, mapping robot and storage medium |
CN110736465A (en) * | 2019-11-15 | 2020-01-31 | 北京云迹科技有限公司 | Navigation method, navigation device, robot and computer readable storage medium |
WO2021129345A1 (en) * | 2019-12-27 | 2021-07-01 | 炬星科技(深圳)有限公司 | Scene map building method, device, and storage medium |
CN111256689A (en) * | 2020-01-15 | 2020-06-09 | 北京智华机器人科技有限公司 | Robot positioning method, robot and storage medium |
CN111256689B (en) * | 2020-01-15 | 2022-01-21 | 北京智华机器人科技有限公司 | Robot positioning method, robot and storage medium |
CN111765892A (en) * | 2020-05-12 | 2020-10-13 | 驭势科技(北京)有限公司 | Positioning method, positioning device, electronic equipment and computer readable storage medium |
CN111765892B (en) * | 2020-05-12 | 2022-04-29 | 驭势科技(北京)有限公司 | Positioning method, positioning device, electronic equipment and computer readable storage medium |
CN111652248A (en) * | 2020-06-02 | 2020-09-11 | 上海岭先机器人科技股份有限公司 | Positioning method and device for flexible cloth |
CN111652248B (en) * | 2020-06-02 | 2023-08-08 | 上海岭先机器人科技股份有限公司 | Positioning method and device for flexible cloth |
CN111551185A (en) * | 2020-06-12 | 2020-08-18 | 弗徕威智能机器人科技(上海)有限公司 | Method for adding traffic lane |
CN111723173A (en) * | 2020-06-15 | 2020-09-29 | 中国第一汽车股份有限公司 | Vehicle-mounted map making method and device, electronic equipment and storage medium |
WO2022068781A1 (en) * | 2020-09-29 | 2022-04-07 | 炬星科技(深圳)有限公司 | Guided mapping method and device, and computer-readable storage medium |
CN112365606B (en) * | 2020-11-05 | 2023-08-01 | 日立楼宇技术(广州)有限公司 | Labeling method and device for equipment positions, computer equipment and storage medium |
CN112365606A (en) * | 2020-11-05 | 2021-02-12 | 日立楼宇技术(广州)有限公司 | Method and device for marking device position, computer device and storage medium |
CN112581533A (en) * | 2020-12-16 | 2021-03-30 | 百度在线网络技术(北京)有限公司 | Positioning method, positioning device, electronic equipment and storage medium |
CN112581533B (en) * | 2020-12-16 | 2023-10-03 | 百度在线网络技术(北京)有限公司 | Positioning method, positioning device, electronic equipment and storage medium |
CN113375682A (en) * | 2021-06-09 | 2021-09-10 | 深圳朗道智通科技有限公司 | System and method for automatically marking real-time high-precision map through data fusion |
CN113535868A (en) * | 2021-06-11 | 2021-10-22 | 上海追势科技有限公司 | Autonomous parking high-precision map generation method based on public navigation map |
TWI836366B (en) * | 2022-03-04 | 2024-03-21 | 歐特明電子股份有限公司 | Automatic parking mapping system mounted on vehicle |
CN115381354A (en) * | 2022-07-28 | 2022-11-25 | 广州宝乐软件科技有限公司 | Obstacle avoidance method and obstacle avoidance device for cleaning robot, storage medium and equipment |
CN115655262A (en) * | 2022-12-26 | 2023-01-31 | 广东省科学院智能制造研究所 | Deep learning perception-based multi-level semantic map construction method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109556617A (en) | A kind of map elements extracting method of automatic Jian Tu robot | |
CN111273305B (en) | Multi-sensor fusion road extraction and indexing method based on global and local grid maps | |
CN109556616A (en) | A map-building and map-decoration method for an automatic map-building robot based on visual labels | |
CN106092104B (en) | A kind of method for relocating and device of Indoor Robot | |
CN109752701B (en) | Road edge detection method based on laser point cloud | |
CN107967473B (en) | Robot autonomous positioning and navigation based on image-text recognition and semantics | |
CN111220993B (en) | Target scene positioning method and device, computer equipment and storage medium | |
CN105930819B (en) | Real-time city traffic lamp identifying system based on monocular vision and GPS integrated navigation system | |
EP3650814B1 (en) | Vision augmented navigation | |
CN109446973B (en) | Vehicle positioning method based on deep neural network image recognition | |
CN111928862A (en) | Method for constructing semantic map on line by fusing laser radar and visual sensor | |
CN110598743A (en) | Target object labeling method and device | |
CN108303103A (en) | The determination method and apparatus in target track | |
CN114076956A (en) | Lane line calibration method based on laser radar point cloud assistance | |
CN115032651A (en) | Target detection method based on fusion of laser radar and machine vision | |
CN112346463B (en) | Unmanned vehicle path planning method based on speed sampling | |
CN111325136B (en) | Method and device for labeling object in intelligent vehicle and unmanned vehicle | |
CN110197173B (en) | Road edge detection method based on binocular vision | |
CN108805930A (en) | The localization method and system of automatic driving vehicle | |
CN114509065B (en) | Map construction method, system, vehicle terminal, server and storage medium | |
CN101620672B (en) | Method for positioning and identifying three-dimensional buildings on the ground by using three-dimensional landmarks | |
CN113358125A (en) | Navigation method and system based on environmental target detection and environmental target map | |
Vu et al. | Traffic sign detection, state estimation, and identification using onboard sensors | |
CN109407115A (en) | A kind of road surface extraction system and its extracting method based on laser radar | |
CN115205382A (en) | Target positioning method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190402 ||