CN112258391A - Fragmented map splicing method based on road traffic marking - Google Patents
- Publication number
- CN112258391A (application CN202011086390.0A)
- Authority
- CN
- China
- Prior art keywords
- road traffic
- feature
- traffic marking
- matching
- fragmented
- Prior art date
- Legal status: Granted (an assumption, not a legal conclusion)
Classifications
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06F18/22 — Pattern recognition: matching criteria, e.g. proximity measures
- G06V20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
- G06T2200/32 — Indexing scheme for image data processing or generation involving image mosaicing
- Y02T10/40 — Engine management systems (climate change mitigation technologies related to transportation)
Abstract
The invention relates to a fragmented map splicing method based on road traffic markings, comprising the following steps: detecting road traffic markings with a computer-vision perception technique and storing each marking in sequence format according to its detection order; performing sliding matching between the marking sequences of the two fragmented maps to be spliced; and calculating the relative pose relationship between the two fragmented maps from the position coordinates of the markings in the successfully matched sequences, then splicing the maps according to this relative pose. Because matching relies on robust road traffic markings rather than manually designed sparse key-point extraction and matching, the probability of successfully splicing fragmented maps is effectively improved. The method copes with environments containing dynamic objects and large illumination changes, improves the effectiveness and robustness of splicing multiple fragmented maps of the same scene, and improves the mapping precision of a low-cost mapping scheme.
Description
Technical Field
The invention relates to the field of automatic production of high-precision maps, and in particular to a fragmented map splicing method based on road traffic markings.
Background
High-precision maps play an important role in autonomous driving systems and are an indispensable part of them. Traditional high-precision map production relies on professional acquisition equipment and personnel: the equipment typically uses a high-frequency surveying lidar together with a high-precision POS system for positioning and attitude determination, where the POS system comprises a GNSS (Global Navigation Satellite System) receiver and a high-precision IMU (Inertial Measurement Unit), so data acquisition is costly. Meanwhile, the complex acquisition process and operating mode are time-consuming, so high-precision maps cannot be updated frequently enough to keep the map fresh. China currently has millions of kilometers of roads, and most enterprises cannot bear the cost of frequently acquiring data with this method. An automatic mapping scheme that updates in real time through a lower-cost implementation therefore has significant advantages.
The data acquisition equipment of a low-cost mapping scheme generally comprises a camera, consumer-grade GNSS positioning equipment and other sensors. Compared with lidar, a camera is inexpensive while providing rich environmental detail, and consumer-grade GNSS equipment supplies absolute position information that can assist mapping. Because its acquisition cost is low, such a scheme is better suited to wide deployment for collecting fresh road-traffic scene data and raising the update frequency of high-precision maps. However, low-cost scheme data are direct observations with low absolute position accuracy, and the collected scene data contain large and varied errors, such as perception false detections, recognition errors caused by motion blur, and the influence on stereo-matching results of intrinsic and extrinsic parameter changes caused by camera vibration. It is therefore desirable to fuse and optimize the results of multiple redundant acquisitions to improve data accuracy and consistency. Fusing the fragmented map data continuously collected by low-cost equipment and splicing fragmented maps of the same scene in the cloud improves the mapping precision of the low-cost scheme.
Splicing multiple fragmented maps of the same scene is essentially a scene matching and recognition problem. The main current approach computes the similarity between images acquired by a camera using computer-vision methods, judges whether they depict the same scene, and then performs inter-frame association, matching and splicing. The core problem of image-based methods is how to compute inter-image similarity. A common approach extracts manually designed key feature points from the images, computes descriptors for those points, and measures similarity between the descriptors of feature-point pairs, e.g. SIFT (Scale-Invariant Feature Transform) and ORB (Oriented FAST and Rotated BRIEF). These methods work well in static environments with little illumination change. However, such feature detection and description has severe limitations in real outdoor traffic scenes: illumination changes degrade feature detection, and when large vehicles appear in the scene most matches are established on dynamic targets while feature points on the few static targets are ignored, producing wrong image-matching relationships. How to design sparse key points that optimally represent image information remains an unsolved problem in computer vision.
In addition, manually designed key points depend on the experience of the algorithm designer; under illumination, weather or seasonal changes during mapping, scene-recognition accuracy drops and stability deteriorates, limiting the use of such methods.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides a fragmented map splicing method based on road traffic markings, which solves the poor splicing performance of the prior art.
The technical solution is as follows: a fragmented map splicing method based on road traffic markings comprises the following steps:
step 1, detecting road traffic marking lines based on a computer vision perception technology, and storing each road traffic marking line in a sequence format according to a detection sequence;
step 2, performing sliding matching on the sequence of the road traffic marking lines of the two fragmented maps to be spliced;
and 3, calculating the relative pose relationship between the two fragmented maps according to the position coordinate information of the road traffic marking in the sequence after successful matching, and splicing the fragmented maps according to the relative pose relationship.
The invention has the following beneficial effects. Road traffic markings were designed to be easily noticed by drivers; they are painted according to national standards, with fixed shapes and distinctive features. The method therefore detects them with computer-vision perception and confirms relative pose relationships from their order and positions, so that drawing and updating a high-precision map requires only low-cost equipment such as a camera and a consumer-grade GNSS receiver, without lidar or high-precision GNSS positioning equipment, reducing data acquisition cost. Because matching relies on robust road traffic markings rather than manually designed sparse key-point extraction and matching, the probability of successfully splicing fragmented maps is effectively improved. The method copes with dynamic objects and large illumination changes, improves the effectiveness and robustness of splicing multiple fragmented maps of the same scene, and improves the mapping precision of the low-cost scheme.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, the types of the road traffic markings include arrows, stop lines, and pedestrian crossings.
Further, the step 1 of detecting the road traffic marking further comprises:
transferring the pixel coordinates p(u, v) of the road traffic marking detected by computer-vision perception to the vehicle-body local coordinate system p_v(X, Y, Z), the conversion formula being

p_v = T · d · K⁻¹ · (u, v, 1)ᵀ

where K is the camera intrinsic matrix, T is the transformation matrix between the camera coordinate system and the vehicle-body local coordinate system, and d is the depth along the camera ray;

transferring the vehicle-body local coordinates p_v(X, Y, Z) to the global coordinate system p(x, y, z):

p = R_wv · p_v + t_wv

where R_wv and t_wv are the rotation and translation of the vehicle body in the global coordinate system;
Further, the process of performing storage in step 1 further includes:
storing the sequence of the road traffic marking of each fragmented map into a road traffic marking matching library corresponding to the fragmented map; the capacity of the sequence stored in the road traffic marking matching library is adjustable as required.
Further, the process of performing sliding matching in step 2 includes:
step 201, comparing whether the road traffic marking types in the two sequences are completely the same or not, if so, executing step 202, otherwise, judging that the two sequences are not matched;
step 202, calculating whether the distance between the corresponding road traffic marking lines in the two identical sequences is smaller than a set threshold value, and if so, judging that the two sequences are matched.
Further, the step 3 comprises:
step 301, defining the road traffic marking as a surface feature, a line feature or a point feature;
step 302, the surface feature, the line feature and the point feature are expressed by an expression formula, wherein the expression formula is as follows:
(x − p)ᵀ R Ω Rᵀ (x − p) = 0; where p is the 3-dimensional center point of the feature, R is a 3×3 attitude matrix, and Ω is a 3×3 morphology matrix;
step 303, for any point, line or surface feature object, uniformly parameterizing the feature expressed by the formula into the form m: {p_m, R_m, Ω_m}, where p_m represents the element center, R_m the element attitude matrix, and Ω_m the element morphology matrix;
and step 304, uniformly optimizing the successfully matched fragmented map and road marking according to the form m.
Further, in step 301, an arrow and a pedestrian crossing are defined as a surface feature, a stop line is defined as a line feature, and a center point of the road traffic marking is defined as a point feature;
the centers of the point feature, the line feature and the surface feature are respectively defined as a point coordinate, a line segment center coordinate and a surface center coordinate;
the directions of the point feature, the line feature and the surface feature are respectively defined as a non-direction, a straight line direction of a line segment and a plane normal vector;
the attitude matrix R of the point feature in the step 302 is diag (1,1, 1);
the attitude matrix R of the line feature and the surface feature is constructed from the element's direction vector d̂, where d̂ is the direction vector of the element and |d| is the distance from the element to the origin of the global coordinate system;
the form matrices Ω of the point feature, the line feature and the surface feature are diag (1,1,1), diag (0,1,1) and diag (1,0,0), respectively.
Further, the step 304 further includes:
if N fragmented maps and M road traffic markings are successfully matched, the optimization variables are X = {T_wc1, …, T_wcN, m_1, …, m_M}, where m_1, …, m_M are the poses of the road traffic markings and T_wc1, …, T_wcN are the poses of the fragmented maps in the global coordinate system;
the function to be optimized is

min_X Σ_{i,j} e_{i,j}ᵀ Ω_{i,j} e_{i,j}

where e_{i,j} represents the residual between the prediction and the measurement of the jth element observed in the ith coordinate system C_i, the sum runs over all observed elements, and Ω_{i,j} represents the residual covariance matrix.
Furthermore, the residual e_{i,j} between the prediction and the measurement of the jth element observed in the ith coordinate system C_i is computed using:

T_wci = {R_wci | t_wci} ∈ SE(3), the transformation matrix from the ith coordinate system C_i to be spliced to the reference coordinate system W;

m_ij: {p_ij, R_ij, Ω_ij}, the parameterized object of the jth element observed in the ith frame together with its predicted value, where p̂_ij is the predicted center of the road traffic marking and n̂_ij is its predicted direction.
Further, the residual covariance matrix Ω_{i,j} is composed of position, distance and orientation weights Ω_p, Ω_d and Ω_o, defined by matching type:
point-to-point: Ω_p = I, Ω_d = 0, Ω_o = 0; line-to-point: Ω_p = Ω_{m_ij}, Ω_d = 0, Ω_o = 0; line-to-line: Ω_p = Ω_{m_ij}, Ω_d = I, Ω_o = 0; surface-to-point: Ω_p = Ω_{m_ij}, Ω_d = 0, Ω_o = 0; surface-to-line: Ω_p = Ω_{m_ij}, Ω_d = 0, Ω_o = I; surface-to-surface: Ω_p = Ω_{m_ij}, Ω_d = I, Ω_o = 0; where Ω_{m_ij} represents the morphology matrix of element m_ij.
The beneficial effects of the further schemes are as follows. After sliding matching of the sequences, the method checks whether the distance between corresponding road traffic markings in two identical sequences is smaller than a set threshold and, if so, judges the two sequences matched; the threshold can be set according to the error range of the map-data acquisition device, further ensuring matching accuracy. Defining road-surface arrows and pedestrian crossings as surface features, stop lines as line features and road-marking center points as point features, and giving the three a unified parameterized representation, makes the relative-pose computation simple, fast and accurate.
Drawings
FIG. 1 is a flow chart of a method for splicing a fragmented map based on road traffic markings according to the present invention;
FIG. 2 is a flowchart of an embodiment of a method for stitching a fragmented map based on road traffic markings according to the present invention;
FIG. 3 is a schematic view of an embodiment of a road traffic marking provided by the present invention;
FIG. 4 is a schematic diagram of a road-marking matching library for fragmented maps according to an embodiment of the present invention;
fig. 5 is a schematic diagram of matching by using a sliding window according to the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a flowchart of a method for splicing a fragmented map based on a road traffic marking according to the present invention, and as shown in fig. 1, the method includes:
step 1, detecting the road traffic marking based on a computer vision perception technology, and storing each road traffic marking in a sequence format according to a detection sequence.
The computer-vision perception technique uses a neural network to detect and localize road traffic markings in the image sequence continuously acquired by the camera while the vehicle drives.
And 2, performing sliding matching on the sequence of the road traffic marking lines of the two fragmented maps to be spliced.
And 3, calculating the relative pose relationship between the two fragmented maps according to the position coordinate information of the road traffic marking in the matching success sequence, and splicing the fragmented maps according to the relative pose relationship.
In the fragmented map splicing method based on road traffic markings, road traffic markings were designed to be easily noticed by drivers; they are painted according to national standards, with fixed shapes and distinctive features. The method therefore detects them with computer-vision perception and confirms relative pose relationships from their order and positions, so that drawing and updating a high-precision map requires only low-cost equipment such as a camera and a consumer-grade GNSS receiver, without lidar or high-precision GNSS positioning equipment, reducing data acquisition cost. Because matching relies on robust road traffic markings rather than manually designed sparse key-point extraction and matching, the probability of successfully splicing fragmented maps is effectively improved. The method copes with dynamic objects and large illumination changes, improves the effectiveness and robustness of splicing multiple fragmented maps of the same scene, and improves the mapping precision of the low-cost scheme.
Example 1
Embodiment 1 provided by the present invention is an embodiment of a method for splicing a fragmented map based on a road traffic marking provided by the present invention, and as shown in fig. 2, is a flowchart of an embodiment of a method for splicing a fragmented map based on a road traffic marking provided by the present invention, and as can be seen from fig. 2, the embodiment includes:
step 1, detecting the road traffic marking based on a computer vision perception technology, and storing each road traffic marking in a sequence format according to a detection sequence.
The computer-vision perception technique uses a neural network to detect and localize road traffic markings in the image sequence continuously acquired by the camera while the vehicle drives.
Preferably, as shown in fig. 3, which is a schematic view of an embodiment of the road traffic marking, as can be seen from fig. 3, the types of the road traffic marking include an arrow, a stop line, and a crosswalk.
Specifically, the process of detecting the road traffic marking further comprises:
transferring the pixel coordinates p(u, v) of the road traffic marking detected by computer-vision perception to the vehicle-body local coordinate system p_v(X, Y, Z), the conversion formula being

p_v = T · d · K⁻¹ · (u, v, 1)ᵀ

where K is the camera intrinsic matrix and T is the transformation matrix between the camera coordinate system and the vehicle-body local coordinate system; K and T can be obtained through camera calibration, and d is the depth along the camera ray, obtainable by triangulation from multiple observations of the road traffic marking.

The vehicle-body local coordinates p_v(X, Y, Z) are then transferred to the global coordinate system p(x, y, z):

p = R_wv · p_v + t_wv

where R_wv and t_wv are the rotation and translation of the vehicle body in the global coordinate system.
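The two coordinate conversions above can be sketched in a few lines. This is an illustrative back-projection under the standard pinhole model; the intrinsic values are made up, and the exact form of the patent's formulas (reproduced as images in the original) is assumed:

```python
import numpy as np

def pixel_to_body(u, v, d, K, T_cam_to_body):
    """Back-project a detected marking pixel into the vehicle-body frame:
    p_v = T * (d * K^-1 * [u, v, 1]^T)."""
    p_cam = d * np.linalg.inv(K) @ np.array([u, v, 1.0])   # camera-frame point
    p_cam_h = np.append(p_cam, 1.0)                        # homogeneous coordinates
    return (T_cam_to_body @ p_cam_h)[:3]                   # body-frame point

def body_to_global(p_v, R_wv, t_wv):
    """Move a body-frame point into the global frame using the vehicle
    pose (R_wv, t_wv), e.g. taken from GNSS/odometry."""
    return R_wv @ p_v + t_wv

# Example with hypothetical intrinsics: the principal-point pixel at depth 2 m
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
p_v = pixel_to_body(320, 240, 2.0, K, np.eye(4))
p_w = body_to_global(p_v, np.eye(3), np.array([1.0, 0.0, 0.0]))
```

With identity extrinsics the principal-point pixel lands on the camera axis, so p_v is (0, 0, 2) and the global point is shifted by the vehicle translation.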
The process of storing further comprises:
storing the sequence of the road traffic marking of each fragmented map into a road traffic marking matching library corresponding to the fragmented map; the capacity of the sequences stored in the road traffic marking matching library is adjustable as required.
Specifically, the road traffic markings of each fragmented map are stored in the corresponding matching library in sequence format according to detection order. Fig. 4 is a schematic diagram of the road-marking matching library of a fragmented map according to an embodiment of the invention; in the embodiment shown in fig. 4, the sequence capacity is k = 4, i.e. one sequence contains four consecutive road traffic markings. The capacity k can be set according to actual conditions, such as the amount of map data and the probability that several consecutive identical road traffic markings correspond to different scenes.
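A minimal sketch of such a matching library (the class and field names are hypothetical, not from the patent): it keeps a map's markings in detection order and enumerates the length-k candidate sequences.

```python
class MarkingMatchLibrary:
    """Holds the road traffic markings of one fragmented map in detection
    order; each marking is a (type, position) pair. The sequence capacity
    k is adjustable (k = 4 in the embodiment of Fig. 4)."""

    def __init__(self, k=4):
        self.k = k
        self.markings = []

    def add(self, marking_type, position):
        self.markings.append((marking_type, position))

    def sequences(self):
        """All consecutive length-k subsequences, i.e. the match candidates."""
        n = len(self.markings) - self.k + 1
        return [self.markings[i:i + self.k] for i in range(max(n, 0))]

lib = MarkingMatchLibrary(k=4)
for i, t in enumerate(["arrow", "stop_line", "crosswalk", "arrow", "stop_line"]):
    lib.add(t, (10.0 * i, 0.0, 0.0))
# 5 markings with k = 4 yield 2 overlapping candidate sequences
```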
And 2, performing sliding matching on the sequence of the road traffic marking lines of the two fragmented maps to be spliced.
The process of performing sliding matching includes:
step 201, comparing whether the road traffic marking types in the two sequences are completely the same, if yes, executing step 202, otherwise, judging that the two sequences are not matched.
Step 202, calculating whether the distance between the corresponding road traffic marking lines in the two identical sequences is smaller than a set threshold value, and if so, judging that the two sequences are matched.
Fig. 5 is a schematic diagram of matching by using a sliding window according to the present invention, and the set threshold may be set according to an error range of the map data acquisition device.
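Steps 201 and 202 can be sketched as a sliding-window comparison over two marking lists. This is an illustrative implementation with hypothetical function names and a made-up threshold; the patent sets the threshold from the acquisition device's error range:

```python
import numpy as np

def windows(markings, k):
    """All consecutive length-k subsequences of a marking list."""
    return [markings[i:i + k] for i in range(len(markings) - k + 1)]

def sequences_match(seq_a, seq_b, dist_threshold):
    # Step 201: the marking types must be completely identical, in order.
    if [t for t, _ in seq_a] != [t for t, _ in seq_b]:
        return False
    # Step 202: every corresponding pair of markings must lie closer
    # than the set threshold.
    return all(np.linalg.norm(np.asarray(pa) - np.asarray(pb)) < dist_threshold
               for (_, pa), (_, pb) in zip(seq_a, seq_b))

def sliding_match(marks_a, marks_b, k=4, dist_threshold=2.0):
    """Slide every length-k window of map A over those of map B and
    return the first matching pair of window indices, or None."""
    for i, wa in enumerate(windows(marks_a, k)):
        for j, wb in enumerate(windows(marks_b, k)):
            if sequences_match(wa, wb, dist_threshold):
                return i, j
    return None

marks_a = [("arrow", (0.0, 0.0, 0.0)), ("stop_line", (10.0, 0.0, 0.0)),
           ("crosswalk", (20.0, 0.0, 0.0)), ("arrow", (30.0, 0.0, 0.0)),
           ("stop_line", (40.0, 0.0, 0.0))]
marks_b = [("stop_line", (10.5, 0.0, 0.0)), ("crosswalk", (20.2, 0.0, 0.0)),
           ("arrow", (29.8, 0.0, 0.0)), ("stop_line", (40.1, 0.0, 0.0))]
match = sliding_match(marks_a, marks_b)   # window 1 of A matches window 0 of B
```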
And 3, calculating the relative pose relationship between the two fragmented maps according to the position coordinate information of the road traffic marking in the matching success sequence, and splicing the fragmented maps according to the relative pose relationship.
Preferably, in the embodiment of the invention, road-surface arrows and pedestrian crossings are defined as surface features, stop lines as line features, and the center point of a road traffic marking as a point feature, and the surface, line and point features are given a unified parameterized expression. Specifically, step 3 may include:
step 301, defining the road traffic marking as a surface feature, a line feature or a point feature.
Specifically, in step 301, the arrow and the crosswalk are defined as a face feature, the stop line is defined as a line feature, and the center point of the road traffic marking is defined as a point feature.
TABLE 1 definition table of center points and direction vectors of point, line and surface features
As shown in Table 1, the centers of the point, line and surface features are defined as the point coordinate, the line-segment center coordinate and the surface center coordinate, respectively.
The directions of the point feature, the line feature and the surface feature are respectively defined as a non-direction, a straight line direction of the line segment and a plane normal vector.
Step 302, the surface feature, the line feature and the point feature are expressed by an expression formula, wherein the expression formula is as follows:
(x − p)ᵀ R Ω Rᵀ (x − p) = 0; where p is the 3-dimensional center point of the feature, R is the 3×3 attitude matrix, and Ω is the 3×3 morphology matrix.
Specifically, the attitude matrix R of an element is calculated from its direction vector, and the morphology matrix Ω is defined according to the three element types: point, line and surface. Tables 2 and 3 give the attitude matrices and morphology matrices of the point, line and surface features, respectively.
TABLE 2 attitude matrix table of point, line, surface characteristics
TABLE 3 form matrix table of point, line and surface characteristics
As can be seen from Tables 2 and 3, the attitude matrix R of the point feature in step 302 is diag(1, 1, 1).

The attitude matrix R of the line feature and the surface feature is constructed from the element's direction vector d̂, where d̂ is the direction vector of the element and |d| is the distance from the element to the origin of the global coordinate system.

The morphology matrices Ω of the point, line and surface features are diag(1, 1, 1), diag(0, 1, 1) and diag(1, 0, 0), respectively.

Step 303, for any point, line or surface feature object, uniformly parameterize the feature expressed by the formula into the form m: {p_m, R_m, Ω_m}, where p_m represents the element center, R_m the element attitude matrix, and Ω_m the element morphology matrix.
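The unified form m: {p_m, R_m, Ω_m} and the implicit equation (x − p)ᵀ R Ω Rᵀ (x − p) = 0 can be sketched as follows. The orthonormal completion used to build R here is one common construction and an assumption — the patent's exact R formula is an image not reproduced in the text:

```python
import numpy as np

def make_feature(kind, center, direction=None):
    """Parameterize a marking as m = {p_m, R_m, Omega_m} per Tables 1-3.
    kind: 'point' | 'line' | 'plane'; direction is the line direction or
    the plane normal (unused for points)."""
    p = np.asarray(center, dtype=float)
    if kind == 'point':
        return p, np.eye(3), np.diag([1.0, 1.0, 1.0])
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    # Complete d to an orthonormal basis so that R's first column is d.
    a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    b1 = np.cross(d, a); b1 /= np.linalg.norm(b1)
    b2 = np.cross(d, b1)
    R = np.column_stack([d, b1, b2])
    Omega = np.diag([0.0, 1.0, 1.0]) if kind == 'line' else np.diag([1.0, 0.0, 0.0])
    return p, R, Omega

def on_feature(x, p, R, Omega, tol=1e-9):
    """Check the implicit equation (x - p)^T R Omega R^T (x - p) = 0."""
    v = np.asarray(x, dtype=float) - p
    return abs(v @ R @ Omega @ R.T @ v) < tol

line = make_feature('line', [0, 0, 0], [0, 0, 1])     # the z-axis as a line
plane = make_feature('plane', [0, 0, 0], [0, 0, 1])   # the z = 0 plane
```

With these morphology matrices, the quadratic vanishes exactly on the line (line), on the plane (plane), and only at the center itself (point).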
And step 304, uniformly optimizing the successfully matched fragmented maps and road marking lines according to the form m, and further obtaining the relative pose relationship among the fragmented maps.
Preferably, step 304 further comprises:
if N fragmented maps and M road traffic markings are successfully matched, the optimization variables are X = {T_wc1, …, T_wcN, m_1, …, m_M}, where m_1, …, m_M are the poses of the road traffic markings and T_wc1, …, T_wcN are the poses of the fragmented maps in the global coordinate system.

The function to be optimized is

min_X Σ_{i,j} e_{i,j}ᵀ Ω_{i,j} e_{i,j}

where e_{i,j} represents the residual between the prediction and the measurement of the jth element observed in the ith coordinate system C_i, and the sum runs over all observed elements.
Specifically, for the matching relationships among the three element types (point, line and surface), the residual e_{i,j} between the prediction and the measurement of the jth element observed in the ith coordinate system C_i is computed using:

T_wci = {R_wci | t_wci} ∈ SE(3), the transformation matrix from the ith coordinate system C_i to be spliced to the reference coordinate system W; the pose of the reference coordinate system W is the identity, and SE(3) (Special Euclidean Group) is the three-dimensional special Euclidean group.

m_ij: {p_ij, R_ij, Ω_ij}, the parameterized object of the jth element observed in the ith frame together with its predicted value, where p̂_ij is the predicted center of the road traffic marking and n̂_ij is its predicted direction.
The residual covariance matrix Ω_{i,j} is composed of position, distance and orientation weights Ω_p, Ω_d and Ω_o, defined by matching type as shown in Table 4 (the information matrix):

Table 4. Residual covariance matrices of point, line and surface matching

As can be seen from Table 4, for matching residuals between different feature types: point-to-point: Ω_p = I, Ω_d = 0, Ω_o = 0; line-to-point: Ω_p = Ω_{m_ij}, Ω_d = 0, Ω_o = 0; line-to-line: Ω_p = Ω_{m_ij}, Ω_d = I, Ω_o = 0; surface-to-point: Ω_p = Ω_{m_ij}, Ω_d = 0, Ω_o = 0; surface-to-line: Ω_p = Ω_{m_ij}, Ω_d = 0, Ω_o = I; surface-to-surface: Ω_p = Ω_{m_ij}, Ω_d = I, Ω_o = 0; where Ω_{m_ij} represents the morphology matrix of element m_ij, defined as in Table 3.
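To make the joint optimization concrete, here is a deliberately tiny, hypothetical instance: only point-type markings (so, per Table 4, Ω_p = I and Ω_d = Ω_o = 0) and only the relative translation between two fragmented maps is estimated, whereas the patent's full problem jointly optimizes the SE(3) poses of N maps and M markings:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical point-type markings of map 1, expressed in map 1's frame
obs_map1 = np.array([[0.0, 0.0, 0.0],
                     [10.0, 0.0, 0.0],
                     [20.0, 1.0, 0.0]])
true_t = np.array([5.0, -2.0, 0.0])   # ground-truth relative translation
obs_map2 = obs_map1 - true_t          # the same markings seen from map 2

def residuals(t):
    # e_j = prediction (map-2 observation moved by t into map 1's frame)
    # minus measurement; with Omega = I this is the point-to-point case.
    return ((obs_map2 + t) - obs_map1).ravel()

sol = least_squares(residuals, x0=np.zeros(3))
# sol.x recovers the relative pose used to splice the two fragmented maps
```

Because the residual is linear in t, the solver recovers the true translation exactly; in the full problem the rotation and the marking poses make the residual nonlinear, but the weighted least-squares structure is the same.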
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent replacements, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A fragmented map splicing method based on road traffic markings, characterized by comprising the following steps:
step 1, detecting road traffic marking lines based on a computer vision perception technology, and storing each road traffic marking line in a sequence format according to a detection sequence;
step 2, performing sliding matching on the sequence of the road traffic marking lines of the two fragmented maps to be spliced;
step 3, calculating the relative pose relationship between the two fragmented maps according to the position coordinate information of the road traffic markings in the successfully matched sequences, and splicing the fragmented maps according to the relative pose relationship.
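The relative-pose computation of step 3 can be sketched with a standard SVD-based (Kabsch) rigid alignment of the matched marking centers. The patent does not specify this algorithm; `relative_pose_2d` and the 2D point format are assumptions for illustration.

```python
import numpy as np

def relative_pose_2d(pts_a, pts_b):
    """Estimate the rigid transform (R, t) mapping the matched marking
    centers of map B onto map A via SVD (Kabsch) alignment.
    pts_a, pts_b: (N, 2) arrays of matched road-marking coordinates in
    each fragmented map's local frame."""
    A = np.asarray(pts_a, float)
    B = np.asarray(pts_b, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (B - cb).T @ (A - ca)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force a proper rotation (det = +1)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ca - R @ cb
    return R, t                               # p_a ≈ R @ p_b + t
```

With (R, t) in hand, every point of the second fragmented map can be transformed into the first map's frame, which is the splicing operation the claim describes.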
2. The method of claim 1, wherein the types of road traffic markings include arrows, stop lines, and pedestrian crossings.
3. The method of claim 1, wherein the step 1 detecting the road traffic marking further comprises:
transferring the pixel coordinates p(u, v) of the road traffic marking detected based on computer vision perception to the vehicle body local coordinate system p_v(X, Y, Z), the conversion formula being p_v = T · d · K⁻¹ · (u, v, 1)^T;
wherein K represents the intrinsic matrix of the camera, T represents the transformation matrix between the camera coordinate system and the vehicle body local coordinate system, and d is the depth in the camera frame;
and transferring the vehicle body local coordinate system p_v(X, Y, Z) to the global coordinate system p(X, Y, Z):
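The claimed pixel-to-vehicle-to-global chain can be sketched with the standard pinhole back-projection. The vehicle-to-global pose `T_wv` is an assumption: the claim leaves the parameters of the body-to-global transform implicit.

```python
import numpy as np

def pixel_to_global(u, v, d, K, T_cv, T_wv):
    """Back-project a detected marking pixel into the global frame.
    K: 3x3 camera intrinsic matrix; d: depth of the point along the ray;
    T_cv: 4x4 camera-to-vehicle transform (the claim's T);
    T_wv: 4x4 vehicle-to-world pose (assumed, not named in the claim)."""
    p_cam = d * np.linalg.inv(K) @ np.array([u, v, 1.0])  # camera frame
    p_cam_h = np.append(p_cam, 1.0)                       # homogeneous coordinates
    p_veh = T_cv @ p_cam_h                                # vehicle body frame
    p_world = T_wv @ p_veh                                # global frame
    return p_world[:3]
```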
4. The method of claim 1, wherein the storing in step 1 further comprises:
storing the sequence of the road traffic marking of each fragmented map into a road traffic marking matching library corresponding to the fragmented map; the capacity of the sequence stored in the road traffic marking matching library is adjustable as required.
5. The method of claim 1, wherein the step 2 of performing sliding matching comprises:
step 201, comparing whether the road traffic marking types in the two sequences are completely identical; if so, executing step 202; otherwise, judging that the two sequences do not match;
step 202, calculating whether the distance between corresponding road traffic markings in the two type-identical sequences is smaller than a set threshold value; if so, judging that the two sequences match.
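The two-step check of claim 5, applied over sliding windows, can be sketched as follows. The `(type, x, y)` element format, the window size, and the distance threshold are illustrative assumptions, not values from the patent.

```python
import math

def sequences_match(seq_a, seq_b, dist_thresh=1.0):
    """One alignment of claim 5: marking types must be identical (step 201)
    and each pair of corresponding markings must lie within `dist_thresh`
    of each other (step 202). Elements are assumed (type, x, y) tuples."""
    if len(seq_a) != len(seq_b):
        return False
    if any(a[0] != b[0] for a, b in zip(seq_a, seq_b)):
        return False  # step 201 failed: type mismatch
    return all(math.hypot(a[1] - b[1], a[2] - b[2]) < dist_thresh
               for a, b in zip(seq_a, seq_b))  # step 202

def sliding_match(seq_a, seq_b, window, dist_thresh=1.0):
    """Slide a fixed-size window over both marking sequences; return the
    first pair of offsets whose sub-sequences match, or None."""
    for i in range(len(seq_a) - window + 1):
        for j in range(len(seq_b) - window + 1):
            if sequences_match(seq_a[i:i + window], seq_b[j:j + window],
                               dist_thresh):
                return i, j
    return None
```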
6. The method of claim 2, wherein step 3 comprises:
step 301, defining the road traffic marking as a surface feature, a line feature or a point feature;
step 302, the surface feature, the line feature and the point feature are expressed by an expression formula, wherein the expression formula is as follows:
(x − p)^T R Ω R^T (x − p) = 0; wherein x is a point on the feature, p is the 3-dimensional feature center point, R is a 3 × 3 attitude matrix, and Ω is a 3 × 3 form matrix;
step 303, for any point feature, line feature or surface feature object, uniformly parameterizing the features expressed by the formula into the form m = {p_m, R_m, Ω_m}, wherein p_m represents the element center, R_m represents the element attitude matrix, and Ω_m represents the element form matrix;
and step 304, uniformly optimizing the successfully matched fragmented map and road marking according to the form m.
7. The method of claim 6, wherein in step 301, arrows and crosswalks are defined as surface features, stop lines are defined as line features, and center points of the road traffic markings are defined as point features;
the centers of the point feature, the line feature and the surface feature are respectively defined as a point coordinate, a line segment center coordinate and a surface center coordinate;
the directions of the point feature, the line feature and the surface feature are respectively defined as a non-direction, a straight line direction of a line segment and a plane normal vector;
the attitude matrix R of the point feature in the step 302 is diag (1,1, 1);
the attitude matrix R of the line feature and the surface feature is constructed from the element direction, wherein d̂ is the direction vector of the element and |d| is the distance from the element to the origin of the global coordinate system;
the form matrices Ω of the point feature, the line feature and the surface feature are diag (1,1,1), diag (0,1,1) and diag (1,0,0), respectively.
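The unified parameterization m = {p_m, R_m, Ω_m} of claims 6–7 can be sketched as below. The exact construction of the attitude matrix from the direction vector is not reproduced in the text, so completing d̂ to an orthonormal basis with its first column along the direction is an assumption.

```python
import numpy as np

def parameterize(kind, center, direction=None):
    """Unified feature parameterization from claims 6-7.
    Points: R = diag(1,1,1), Omega = diag(1,1,1).
    Lines:  Omega = diag(0,1,1);  faces: Omega = diag(1,0,0).
    For lines/faces, R is sketched as a rotation whose first column is the
    unit element direction (assumed; the claim's construction is elided)."""
    center = np.asarray(center, dtype=float)
    if kind == "point":
        return {"p": center, "R": np.eye(3), "Omega": np.diag([1., 1., 1.])}
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    # complete d to an orthonormal basis for the attitude matrix
    a = np.array([1., 0., 0.]) if abs(d[0]) < 0.9 else np.array([0., 1., 0.])
    n1 = np.cross(d, a); n1 /= np.linalg.norm(n1)
    n2 = np.cross(d, n1)
    R = np.column_stack([d, n1, n2])
    omega = np.diag([0., 1., 1.]) if kind == "line" else np.diag([1., 0., 0.])
    return {"p": center, "R": R, "Omega": omega}
```

The form matrix Ω encodes which directions the feature constrains: a stop line (line feature) leaves its along-line direction free, while an arrow or crosswalk (surface feature) constrains only its normal.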
8. The method of claim 6, wherein the step 304 further comprises:
if the N fragmented maps and M road traffic markings are successfully matched, the optimization variables are the poses m_1, …, m_M of the road traffic markings and the poses T_{wc1}, …, T_{wcN} of the fragmented maps in the global coordinate system;
9. The method according to claim 8, wherein the residual e_{i,j} between the predicted and measured values of the j-th element observed in the i-th coordinate system C_i in space is:
wherein T_{wci} = {R_{wci} | t_{wci}} ∈ SE(3) is the transformation matrix from the i-th coordinate system C_i to be spliced to the reference coordinate system W;
10. The method of claim 8, wherein the residual covariance matrix Ω_{i,j} is defined as:
wherein, in point-to-point matching, Ω_p = I, Ω_d = 0, Ω_o = 0; in line-to-point matching, Ω_p = M̂_{ij}, Ω_d = 0, Ω_o = 0; in line-to-line matching, Ω_p = M̂_{ij}, Ω_d = I, Ω_o = 0; in face-to-point matching, Ω_p = M̂_{ij}, Ω_d = 0, Ω_o = 0; in face-to-line matching, Ω_p = M̂_{ij}, Ω_d = 0, Ω_o = I; in face-to-face matching, Ω_p = M̂_{ij}, Ω_d = I, Ω_o = 0; and M̂_{ij} represents the form matrix of element m_{ij}.
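The residual of claims 9–10 can be sketched for its center (point) block: transform the measured center from fragment frame C_i into the reference frame W with T_wci = {R_wci | t_wci} and compare with the prediction. Only this block is shown; the direction and orientation blocks of the full residual did not survive extraction, so this is an assumption about the point part alone.

```python
import numpy as np

def center_residual(R_wci, t_wci, p_meas, p_pred):
    """Point block of e_ij: map the measured center p_meas (in frame C_i)
    into the reference frame W via T_wci, then subtract the predicted
    center p_pred. A zero residual means prediction and measurement agree."""
    return R_wci @ np.asarray(p_meas, float) + np.asarray(t_wci, float) \
        - np.asarray(p_pred, float)
```

In a full optimization, residuals of this shape would be weighted by the Ω_{i,j} blocks of claim 10 and minimized jointly over all map and marking poses.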
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011086390.0A CN112258391B (en) | 2020-10-12 | 2020-10-12 | Fragmented map splicing method based on road traffic marking |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112258391A true CN112258391A (en) | 2021-01-22 |
CN112258391B CN112258391B (en) | 2022-05-17 |
Family
ID=74242666
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011086390.0A Active CN112258391B (en) | 2020-10-12 | 2020-10-12 | Fragmented map splicing method based on road traffic marking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112258391B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7375149B2 (en) | 2021-11-30 | 2023-11-07 | ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド | Positioning method, positioning device, visual map generation method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106251399A (en) * | 2016-08-30 | 2016-12-21 | 广州市绯影信息科技有限公司 | A kind of outdoor scene three-dimensional rebuilding method based on lsd slam |
CN107918927A (en) * | 2017-11-30 | 2018-04-17 | 武汉理工大学 | A kind of matching strategy fusion and the fast image splicing method of low error |
CN108648141A (en) * | 2018-05-15 | 2018-10-12 | 浙江大华技术股份有限公司 | A kind of image split-joint method and device |
US20180341022A1 (en) * | 2017-05-24 | 2018-11-29 | Beijing Green Valley Technology Co., Ltd. | Lidar-based mapping method, device and system |
CN109961078A (en) * | 2017-12-22 | 2019-07-02 | 展讯通信(上海)有限公司 | Images match and joining method, device, system, readable medium |
CN110060277A (en) * | 2019-04-30 | 2019-07-26 | 哈尔滨理工大学 | A kind of vision SLAM method of multiple features fusion |
CN111080529A (en) * | 2019-12-23 | 2020-04-28 | 大连理工大学 | Unmanned aerial vehicle aerial image splicing method for enhancing robustness |
CN111754388A (en) * | 2019-03-28 | 2020-10-09 | 北京初速度科技有限公司 | Picture construction method and vehicle-mounted terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||