CN114882182A - Semantic map construction method based on vehicle-road cooperative sensing system

Semantic map construction method based on vehicle-road cooperative sensing system

Info

Publication number
CN114882182A
CN114882182A (application CN202210428195.4A)
Authority
CN
China
Prior art keywords
vehicle
point cloud
data
road
traffic participant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210428195.4A
Other languages
Chinese (zh)
Inventor
耿可可
成小龙
殷国栋
庄伟超
王金湘
张宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202210428195.4A
Publication of CN114882182A
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a semantic map construction method based on a vehicle-road cooperative sensing system, which relates to the technical field of intelligent driving and addresses the technical problems of the small sensing range of a single vehicle and the large semantic map errors in unmanned driving. The traffic participant data detected by the vehicle-end and roadside sensing equipment are matched and fused; semantic segmentation is performed on the point cloud with a RangeNet++ network, and a point cloud map is established with the LeGO-LOAM algorithm. The method can detect and identify the full set of traffic participants on the road; it can not only construct a point cloud semantic map for a single vehicle in real time, but also fuse the point cloud semantic maps of all vehicles into a wide-area semantic map that can be used by all vehicles running in the area, and therefore has a good application prospect.

Description

Semantic map construction method based on vehicle-road cooperative sensing system
Technical Field
The application relates to the technical field of intelligent driving, in particular to vehicle-road cooperative sensing technology for unmanned driving, and specifically to a semantic map construction method based on a vehicle-road cooperative sensing system.
Background
The acquisition and processing of vehicle data is one of the key technologies for realizing autonomous driving of unmanned automobiles. Single-vehicle intelligent driving is easily affected by environmental conditions such as occlusion and severe weather, and faces difficulties in full-coverage target detection, trajectory prediction, and driving-intention gaming. Vehicle-road cooperative automatic driving, by contrast, can greatly expand the sensing range of a single vehicle and improve its sensing capability through information interaction and cooperation, cooperative perception, and cooperative decision and control, and introduces new intelligent elements represented by high-dimensional data to realize group intelligence. It can fundamentally resolve the technical bottlenecks encountered by single-vehicle intelligent driving and improve automatic driving capability.
At present, the development of vehicle-road cooperative systems is a research hotspot in many countries. The United States Vehicle-Infrastructure Integration (VII) program is a joint structure composed of the Federal Highway Administration, AASHTO, the state departments of transportation, the automobile industry alliance, ITS America, and others, and realizes the integration of automobiles and road facilities through information and communication technology. Japan's Smartway project focuses on integrating the functions of Japan's ITS and establishing a common platform for on-board units, so that roads and vehicles become Smartway and Smartcar through bidirectional transmission of ITS information, thereby reducing traffic accidents and alleviating traffic congestion. The main content of the European Union's eSafety program is to make full use of advanced information and communication technology, accelerate the research, development, and integrated application of safety systems, and provide a comprehensive safety solution for road traffic. In China, in May 2021 the Institute for AI Industry Research of Tsinghua University formally proposed the Apollo Air plan and its V2X vehicle-road-cloud cooperation technology, which can provide a global view, ubiquitous connection technology, and end-to-end application services for automatic driving and intelligent traffic, giving urban traffic global perception capability. It is foreseeable that the application and popularization of vehicle-road cooperative intelligent technology will become an inevitable choice for the development of modern road traffic.
How to construct a semantic map through a vehicle-road cooperative sensing system, so that the automobile no longer depends on GPS and can plan, control, and make decisions accurately and in a timely manner, is a problem that urgently needs to be solved.
Disclosure of Invention
The application provides a semantic map construction method based on a vehicle-road cooperative sensing system, which aims to improve the sensing range of a single vehicle and construct a semantic map with a large range for all vehicles running in an area.
The technical purpose of the application is realized by the following technical scheme:
a semantic map construction method based on a vehicle-road cooperative perception system comprises the following steps:
s1: the road side sensing equipment detects the traffic participants in real time based on an image and point cloud instance matching algorithm to obtain road-end traffic participant data and road line boundary information;
s2: converting the road end traffic participant data into a structured data set T1(m) and then sending the structured data set T1(m) to a vehicle end; wherein m represents the total number of road-end traffic participants;
s3: collecting point cloud data by a laser radar at the vehicle end, and performing real-time semantic segmentation on the point cloud through a RangeNet++ network;
s4: post-processing the point cloud after semantic segmentation, converting the point cloud after post-processing into a depth map to obtain vehicle end traffic participant data and converting the vehicle end traffic participant data into a structured data set T2 (n); wherein n represents the total number of vehicle-end traffic participants;
s5: the vehicle-side carries out data integration and data matching fusion on a structured data set T1(m) and a structured data set T2(n) to obtain data of all traffic participants;
s6: building a map of the semantically segmented point cloud through the LeGO-LOAM algorithm to obtain a point cloud map, and then visualizing the data of the full set of traffic participants in the point cloud map to obtain a global point cloud semantic map.
The beneficial effects of this application lie in that a semantic map construction method based on a vehicle-road cooperative sensing system is provided for unmanned vehicles: using the idea of vehicle-road cooperative sensing, the traffic participant data detected by the vehicle-end and roadside sensing equipment are matched and fused; semantic segmentation is performed on the point cloud with a RangeNet++ network, and a point cloud map is established with the LeGO-LOAM algorithm.
The vehicle-road cooperative system detects and identifies the full set of traffic participants on the road; it can not only construct a point cloud semantic map for a single vehicle in real time, but also fuse the point cloud semantic maps of all vehicles into a wide-area semantic map for all vehicles running in the area, and therefore has a good application prospect. Experiments prove that the map constructed by the method has a simple algorithm structure, rich semantic information, a large mapping range, small error, and strong robustness.
Drawings
FIG. 1 is a flow chart of a method described herein;
FIG. 2 is a flow chart of a traffic participant matching fusion algorithm;
FIG. 3 is a schematic diagram of the vehicle-road cooperative sensing system.
Detailed Description
The technical solution of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a flow chart of the method of the present application. As shown in fig. 1, the method comprises:
s1: the roadside sensing equipment detects the traffic participants in real time based on an image and point cloud instance matching algorithm to obtain road-end traffic participant data and road line boundary information.
Specifically, the image and point cloud instance matching algorithm comprises the following steps:
the method comprises the steps of carrying out instance segmentation on an image based on a YOLACT algorithm, carrying out point cloud perspective projection, carrying out instance matching, target point cloud clustering and three-dimensional model fitting on the image subjected to instance segmentation and the point cloud subjected to perspective projection, fusing detection structures, generating an enclosing frame, and obtaining road end traffic participant data.
Because the roadside sensing equipment is installed at a fixed position, the road line boundary information can additionally be obtained by combining means such as GPS and physical measurement.
S2: converting the road end traffic participant data into a structured data set T1(m) and then sending the structured data set T1(m) to a vehicle end; where m represents the total number of road-end traffic participants.
Specifically, the roadside sensing device may communicate with the vehicle end through a TCP communication protocol.
The structured data set T1(m) = (t1_1, t1_2, ..., t1_m), where t1_m represents the m-th road-end traffic participant datum and is expressed as
t1_m = [x1_m, y1_m, z1_m, l1_m, ω1_m, h1_m, θ1_m, c1_m]
where (x1_m, y1_m, z1_m) are the coordinates of the center point of the m-th road-end traffic participant; l1_m, ω1_m, and h1_m are the length, width, and height of the m-th road-end traffic participant's bounding box; θ1_m is the heading angle of the m-th road-end traffic participant; and c1_m is the class of the m-th road-end detection target.
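For concreteness, here is a minimal sketch of one such eight-field record and of a plausible encoding for the TCP link mentioned above (the Python representation, the JSON wire format, and the field names are illustrative assumptions; the patent fixes only the eight quantities themselves and the use of TCP):

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class Participant:
        x: float; y: float; z: float    # center-point coordinates
        l: float; w: float; h: float    # bounding-box length, width, height
        theta: float                    # heading angle
        cls: int                        # detected target class id

    # example road-end data set T1(m) with m = 1, serialized for the TCP socket
    t1 = [Participant(12.3, -4.1, 0.6, 4.5, 1.8, 1.5, 0.12, 0)]
    payload = json.dumps([asdict(p) for p in t1]).encode()
    print(payload)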
S3: and (3) collecting point cloud data by a laser radar at the vehicle end, and performing real-time semantic segmentation on the point cloud through a RangeNet + +.
S4: post-processing the point cloud after semantic segmentation, converting the point cloud after post-processing into a depth map to obtain vehicle end traffic participant data and converting the vehicle end traffic participant data into a structured data set T2 (n); where n represents the total number of vehicle end traffic participants.
The post-processing operation on the semantically segmented point cloud comprises: compensating for missing point cloud data, removing ground points, converting the point cloud into a depth map, and clustering the point cloud and fitting three-dimensional models, so as to obtain the vehicle-end traffic participant data and convert them into the structured data set T2(n); where T2(n) = (t2_1, t2_2, ..., t2_n), and t2_n represents the n-th vehicle-end traffic participant datum, expressed as
t2_n = [x2_n, y2_n, z2_n, l2_n, ω2_n, h2_n, θ2_n, c2_n]
where (x2_n, y2_n, z2_n) are the coordinates of the center point of the n-th vehicle-end traffic participant; l2_n, ω2_n, and h2_n are the length, width, and height of the n-th vehicle-end traffic participant's bounding box; θ2_n is the heading angle of the n-th vehicle-end traffic participant; and c2_n is the class of the n-th vehicle-end detection target.
Point cloud data are collected by the laser radar and converted, at an angular resolution of 0.2°, into a range image of 1800 × 64 dimensions, i.e., μ_max = 1800 and k_max = 64. Each point p_{μ,k} in the point cloud is determined by its rotation index μ and beam index k, and r_{μ,k} denotes the measured distance of point p_{μ,k}. For points on adjacent beams, if the r_{μ,k} of the upper beam point is smaller than the r_{μ,k} of the lower beam point, the point p_{μ,k} is preliminarily taken as a key point, because when the ground is scanned normally the distance scanned by the upper beam is necessarily farther than that of the lower beam.
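A vectorized sketch of this preliminary key-point test on the 1800 × 64 range image (the convention that beam k + 1 points higher than beam k, and the random placeholder scan, are assumptions of this sketch):

    import numpy as np

    def preliminary_keypoints(r: np.ndarray) -> np.ndarray:
        """r[mu, k]: measured distance at rotation index mu, beam index k.
        A point is preliminarily a key point if its range is smaller than that
        of the point one beam below it, inverting the normal ground pattern."""
        key = np.zeros(r.shape, dtype=bool)
        key[:, 1:] = r[:, 1:] < r[:, :-1]
        return key

    r = np.random.uniform(1.0, 80.0, size=(1800, 64))   # placeholder scan
    mask = preliminary_keypoints(r)
    print(mask.sum(), "preliminary key points")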
The distribution density of the point cloud is estimated from the distance H_r, the height H_z, and the beam index H_i, and a preliminary clustering of the key points is performed so as to expand them: adjacent key points in each row are clustered into a container, and the height and beam index range of each container are calculated. If a container holds fewer than 2 key points, it is discarded; otherwise, each non-key point within the container's beam index range is evaluated by a judgment function (given as an equation image in the original publication) to determine whether it should be included in the container and converted into a key point. In the judgment function, H_z denotes the height, i denotes point i, H_i denotes all beam indices of the point cloud, z_i denotes the coordinate of point i on the z-axis, and p_height denotes the height of the point cloud.
Invalid points surrounded by key points are then located. Invalid points arise when the laser radar beam strikes an object with low reflectivity (e.g., car glass or a black object), so that the beam cannot return to the sensor and the measurement at that moment is invalid. If the number of invalid points in a container is fewer than 30 and the Euclidean distance between the key points around the invalid points is not more than 1.3 m, the invalid points are simulated by linear fitting so as to compensate for the missing point cloud data.
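A minimal sketch of this compensation step for one container, treated as a 1-D run of range values with 0.0 marking an invalid return (the 1-D range difference stands in for the Euclidean distance between the flanking key points; the patent's container bookkeeping is more involved):

    import numpy as np

    def compensate(r: np.ndarray, max_run: int = 30, max_gap_m: float = 1.3):
        """Fill runs of invalid returns (== 0) by linear fitting when the run
        is short and the flanking valid points are close enough together."""
        r = r.astype(float).copy()
        i, n = 0, len(r)
        while i < n:
            if r[i] == 0.0:
                j = i
                while j < n and r[j] == 0.0:
                    j += 1                            # end of the invalid run
                if 0 < i and j < n and (j - i) < max_run \
                        and abs(r[j] - r[i - 1]) <= max_gap_m:
                    # linear fit between the two flanking valid points
                    r[i:j] = np.linspace(r[i - 1], r[j], j - i + 2)[1:-1]
                i = j
            else:
                i += 1
        return r

    print(compensate(np.array([5.0, 0.0, 0.0, 5.6])))   # -> [5.0 5.2 5.4 5.6]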
In addition, the ground points are removed through the semantic segmentation result of the RangeNet++ network.
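Given per-point semantic labels from RangeNet++, the removal reduces to a mask (the label id for ground and the placeholder data are assumptions of this sketch):

    import numpy as np

    labels = np.random.randint(0, 20, size=100000)   # per-point semantic labels
    points = np.random.randn(100000, 3) * 20.0       # corresponding xyz coordinates
    GROUND = 17                                      # assumed 'ground' label id
    non_ground = points[labels != GROUND]            # drop ground points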
The conversion of the point cloud into a depth map, the clustering and three-dimensional model fitting of the point cloud, the final acquisition of the vehicle-end traffic participant data, and their conversion into the structured data set T2(n) comprise:
mapping the 3D point cloud (x, y, z) into the 2D depth image (μ, ν) according to the following mapping formula:
μ = (1/2) · [1 − arctan(y, x)/π] · w
ν = [1 − (arcsin(z/γ) + f_down)/f] · h
where w represents the width of the 2D depth map and h the height of the 2D depth map; f_up represents the upward viewing angle range in the vertical direction of the radar and f_down the downward viewing angle range; f = f_up + f_down represents the total viewing angle range in the vertical direction; and γ = √(x² + y² + z²) represents the distance of the point to the radar.
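A direct implementation of this spherical projection (the 64-beam field-of-view values are example numbers for a typical lidar, not values fixed by the patent):

    import numpy as np

    def project(points, w=1800, h=64,
                f_up=np.radians(2.0), f_down=np.radians(24.8)):
        """Map (N, 3) lidar points to depth-image pixel coordinates (mu, nu)."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        gamma = np.linalg.norm(points, axis=1)    # distance of each point to the radar
        f = f_up + f_down                         # total vertical viewing range
        mu = 0.5 * (1.0 - np.arctan2(y, x) / np.pi) * w
        nu = (1.0 - (np.arcsin(z / gamma) + f_down) / f) * h
        return mu.astype(int), nu.astype(int), gamma

    print(project(np.array([[10.0, 0.0, -1.0]])))   # pixel indices and range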
The point cloud is clustered by a BFS clustering algorithm combined with the breakpoints of the point cloud depth map; according to the clustering result, a 2D bounding box is then fitted inside each target point cloud by a convex-hull contour approximation algorithm, a 3D bounding box is obtained by combining the height information of the point cloud, and the three-dimensional bounding-box structured data set T2(n) of the vehicle-end traffic participants is obtained.
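A minimal BFS clustering sketch over the depth map; the absolute range-difference criterion used as the breakpoint test is an assumption for illustration, and the horizontal index wraps because the scan covers 360°:

    from collections import deque
    import numpy as np

    def bfs_clusters(depth: np.ndarray, thresh: float = 0.5) -> np.ndarray:
        """Label connected depth-map pixels whose ranges differ by < thresh."""
        h, w = depth.shape
        labels = -np.ones((h, w), dtype=int)
        next_label = 0
        for sy in range(h):
            for sx in range(w):
                if depth[sy, sx] <= 0 or labels[sy, sx] != -1:
                    continue                      # invalid pixel or already labeled
                labels[sy, sx] = next_label
                q = deque([(sy, sx)])
                while q:                          # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, (x + dx) % w    # wrap horizontally
                        if 0 <= ny < h and depth[ny, nx] > 0 \
                                and labels[ny, nx] == -1 \
                                and abs(depth[ny, nx] - depth[y, x]) < thresh:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                next_label += 1
        return labels

    depth = np.zeros((4, 8)); depth[1:3, 2:5] = 5.0
    print(bfs_clusters(depth))                    # one cluster of label 0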
S5: and the vehicle end carries out data integration and data matching fusion on the structured data set T1(m) and the structured data set T2(n) to obtain the data of the total traffic participants.
Specifically, the data integration comprises the unification of the coordinate systems of the roadside sensing equipment and the vehicle end, the unification of the data models, and the unification of classification and grading.
The data matching and fusion comprises: matching the repeatedly detected road-end and vehicle-end traffic participant data, and fusing the traffic participant data missing from the vehicle end. The specific process is shown in fig. 2 and includes:
s511: firstly, converting coordinate systems of a road end structured data set T1(m) and a vehicle end structured data set T2(n) into a world coordinate system, wherein the converted coordinate systems T1(m) and T2(n) are respectively represented as follows:
T1(m) = [x_ωm, y_ωm, z_ωm, l_ωm, ω_ωm, h_ωm, θ_ωm, c_ωm];
T2(n) = [x_ωn, y_ωn, z_ωn, l_ωn, ω_ωn, h_ωn, θ_ωn, c_ωn].
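A minimal sketch of this coordinate-system conversion in S511 (a yaw-plus-translation rigid transform per sensor, assumed known from calibration or GPS; the full 3-D rotation is simplified to yaw only for illustration):

    import numpy as np

    def to_world(records: np.ndarray, yaw: float, t: np.ndarray) -> np.ndarray:
        """records: (N, 8) rows [x, y, z, l, w, h, theta, c] in the sensor frame."""
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])
        out = records.copy()
        out[:, :2] = records[:, :2] @ R.T + t[:2]        # rotate and translate x, y
        out[:, 2] += t[2]                                # translate z
        out[:, 6] = (records[:, 6] + yaw) % (2 * np.pi)  # heading rotates too
        return out

    recs = np.array([[1.0, 0.0, 0.0, 4.5, 1.8, 1.5, 0.0, 2.0]])
    print(to_world(recs, np.pi / 2, np.array([10.0, 0.0, 0.0])))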
The problem of data matching and fusion is thus converted into the problem of the similarity of two matrices; the information dimensions included in the structured data comprise spatial position, shape, direction, and semantic label.
S512: spatial position coordinates (x) from vehicle-end structured dataset T2(n) ωn ,y ωn ,z ωn ) Constructing a KD data structure to end-of-road structure the spatial location coordinates (x) of the data set T1(m) ωm ,y ωm ,z ωm ) For reference, the 1 st data T1 in the T1(m) of the structured data set T2(n) is searched for based on the KD Tree search algorithm 1 Then comparing the similarity of the road side sensing device and the data of the vehicle end traffic participants.
S513: compared with the euclidean distance, the mahalanobis distance can solve the variance and correlation between different dimensions, specifically:
calculating the mean value of each dimension vector:
μ_j = (1/N) · Σ_{i=1}^{N} v_{j,i}, j = 1, ..., 8
where v_{j,i} denotes the j-th dimension of the i-th traffic participant record; then the covariance between every two dimensions is calculated, where, for example, the covariance between x_ω and y_ω is
cov(x_ω, y_ω) = (1/N) · Σ_{i=1}^{N} (x_{ω,i} − μ_x)(y_{ω,i} − μ_y)
Finally, the 8 × 8 covariance matrix Σ is obtained and its inverse matrix Σ^(−1) is calculated. The Mahalanobis distance between t1_1 and each of the two nearest vehicle-end traffic participants t2_j is then expressed as:
MD(t1_1, t2_j) = √[(t1_1 − t2_j)^T · Σ^(−1) · (t1_1 − t2_j)]
If MD(t1_1, t2_j) is less than the preset threshold D, a repeatedly detected traffic participant is considered to exist: the nearest vehicle-end traffic participant is matched against the road-end traffic participant; if the match is consistent, the repeated detection is confirmed, and if the match is inconsistent, the road-end traffic participant datum t1_1 is added to the vehicle-end structured data set T2(n). If MD is greater than the preset threshold D, no repeated detection is considered to exist, and the road-end traffic participant datum t1_1 is added to the vehicle-end structured data set T2(n).
Steps S511 to S513 are repeated until all the data in the road-end structured data set T1(m) have been matched, so as to obtain the final vehicle-end structured data set T2(n′), where n′ ≥ n.
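Putting S511 to S513 together, the following minimal sketch fuses the two structured data sets with the KD-tree search and the Mahalanobis test (the threshold D = 3.0 and the random placeholder data are arbitrary examples, and the consistency check is reduced to comparing the class field):

    import numpy as np
    from scipy.spatial import cKDTree

    def fuse(t1, t2, D=3.0):
        """t1: (m, 8) road-end records, t2: (n, 8) vehicle-end records,
        rows [x, y, z, l, w, h, theta, c] already in the world frame."""
        cov = np.cov(np.vstack([t1, t2]), rowvar=False)  # 8 x 8 matrix Sigma
        cov_inv = np.linalg.pinv(cov)                    # pseudo-inverse for safety
        tree = cKDTree(t2[:, :3])                        # KD tree over center points
        fused = list(t2)
        for rec in t1:
            _, idx = tree.query(rec[:3], k=2)            # two nearest candidates
            md = [np.sqrt(max(0.0, (rec - t2[j]) @ cov_inv @ (rec - t2[j])))
                  for j in idx]
            j = idx[int(np.argmin(md))]
            # duplicate only if close enough AND the detected class matches
            if not (min(md) < D and rec[7] == t2[j, 7]):
                fused.append(rec)                        # add missing participant
        return np.array(fused)                           # final T2(n'), n' >= n

    t1 = np.random.rand(5, 8) * 10                       # placeholder road-end data
    t2 = np.random.rand(8, 8) * 10                       # placeholder vehicle-end data
    print(fuse(t1, t2).shape)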
S6: and (3) building a map of the semantically segmented point cloud through a LeGO-LOAM algorithm to obtain a point cloud map, and then visualizing the data of all traffic participants in the point cloud map to obtain a global point cloud semantic map. The global point cloud semantic map comprises a point cloud map constructed by a full amount of traffic participants with semantic labels, road line boundaries and a LeGO-LOAM algorithm.
Fig. 3 is a schematic diagram of the vehicle-road cooperative sensing system. As shown in fig. 3, in a large-scale vehicle-road cooperative sensing system with multiple vehicles and multiple roads, the roadside sharing server fuses the traffic participants from each roadside sensing device in the road network and sends them to the automobiles running in the road network, where they are matched and fused with the vehicle-end data; each vehicle end then feeds its fusion result back to the roadside sharing server for further data matching and fusion, thereby realizing real-time detection of all traffic participants in the road network.
The automobiles running in the road network construct local point cloud semantic maps, and these point cloud maps are matched and fused online to form the global point cloud semantic map. Combining the global point cloud semantic map with the fusion result of the roadside sharing server yields the road-network real-time semantic map for all vehicles to use.
The foregoing is illustrative of the embodiments of the present application and the scope of protection is defined by the claims and their equivalents.

Claims (8)

1. A semantic map construction method based on a vehicle-road cooperative perception system is characterized by comprising the following steps:
s1: the road side sensing equipment detects the traffic participants in real time based on an image and point cloud instance matching algorithm to obtain road-end traffic participant data and road line boundary information;
s2: converting the road end traffic participant data into a structured data set T1(m) and then sending the structured data set T1(m) to a vehicle end; wherein m represents the total number of road-end traffic participants;
s3: collecting point cloud data by a laser radar at the vehicle end, and performing real-time semantic segmentation on the point cloud through a RangeNet++ network;
s4: post-processing the point cloud after semantic segmentation, converting the point cloud after post-processing into a depth map to obtain vehicle end traffic participant data and converting the vehicle end traffic participant data into a structured data set T2 (n); wherein n represents the total number of vehicle-end traffic participants;
s5: the vehicle-side carries out data integration and data matching fusion on a structured data set T1(m) and a structured data set T2(n) to obtain data of all traffic participants;
s6: building a map of the semantically segmented point cloud through the LeGO-LOAM algorithm to obtain a point cloud map, and then visualizing the data of the full set of traffic participants in the point cloud map to obtain a global point cloud semantic map.
2. The semantic mapping method according to claim 1, wherein in the step S1, the image and point cloud instance matching algorithm comprises:
the method comprises the steps of carrying out instance segmentation on an image based on a YOLACT algorithm, carrying out point cloud perspective projection, carrying out instance matching, target point cloud clustering and three-dimensional model fitting on the image subjected to instance segmentation and the point cloud subjected to perspective projection, fusing detection structures, generating an enclosing frame, and obtaining road end traffic participant data.
3. The semantic mapping method according to claim 1, wherein in the step S2, the structured data set T1(m) = (t1_1, t1_2, ..., t1_m), where t1_m represents the m-th road-end traffic participant datum and is expressed as
t1_m = [x1_m, y1_m, z1_m, l1_m, ω1_m, h1_m, θ1_m, c1_m]
where (x1_m, y1_m, z1_m) are the coordinates of the center point of the m-th road-end traffic participant; l1_m, ω1_m, and h1_m are the length, width, and height of the m-th road-end traffic participant's bounding box; θ1_m is the heading angle of the m-th road-end traffic participant; and c1_m is the class of the m-th road-end detection target.
4. The semantic map construction method according to claim 1, wherein in the step S4, the post-processing operation on the semantically segmented point cloud comprises: compensating for missing point cloud data, removing ground points, converting the point cloud into a depth map, and clustering the point cloud and fitting three-dimensional models, so as to obtain the vehicle-end traffic participant data and convert them into the structured data set T2(n); where T2(n) = (t2_1, t2_2, ..., t2_n), and t2_n represents the n-th vehicle-end traffic participant datum, expressed as
t2_n = [x2_n, y2_n, z2_n, l2_n, ω2_n, h2_n, θ2_n, c2_n]
where (x2_n, y2_n, z2_n) are the coordinates of the center point of the n-th vehicle-end traffic participant; l2_n, ω2_n, and h2_n are the length, width, and height of the n-th vehicle-end traffic participant's bounding box; θ2_n is the heading angle of the n-th vehicle-end traffic participant; and c2_n is the class of the n-th vehicle-end detection target.
5. The semantic mapping method of claim 4, wherein the compensating for missing point cloud data comprises: acquiring key points and expanding them; clustering adjacent key points in each row into a container; discarding the container if it holds fewer than 2 key points, and otherwise evaluating each non-key point within the container's beam index range by a judgment function (given as an equation image in the original publication) to determine whether it should be included in the container and converted into a key point;
obtaining the invalid points surrounded by key points, and, if the number of invalid points in the container is fewer than 30 and the Euclidean distance between the key points around the invalid points is not more than 1.3 m, simulating the invalid points by linear fitting so as to compensate for the missing point cloud data;
wherein the removing of ground points comprises: removing the ground points through the semantic segmentation result of the RangeNet++ network;
and wherein H_z denotes the height, i denotes point i, H_i denotes the beam index of point i, z_i denotes the coordinate of point i on the z-axis, and p_height denotes the height of the point cloud.
6. The semantic map construction method according to claim 4, wherein the converting of the point cloud into a depth map, the clustering and three-dimensional model fitting of the point cloud, the final acquisition of the vehicle-end traffic participant data, and their conversion into the structured data set T2(n) comprise:
mapping the 3D point cloud (x, y, z) into the 2D depth image (μ, ν) according to the following formula:
μ = (1/2) · [1 − arctan(y, x)/π] · w
ν = [1 − (arcsin(z/γ) + f_down)/f] · h
where w represents the width of the 2D depth map and h the height of the 2D depth map; f_up represents the upward viewing angle range in the vertical direction of the radar and f_down the downward viewing angle range; f = f_up + f_down represents the total viewing angle range in the vertical direction; and γ represents the distance of the point to the radar;
and clustering the point cloud by a BFS clustering algorithm, fitting the clustering result with a three-dimensional model to obtain a 3D bounding box, and then obtaining the vehicle-end traffic participant data and their structured data set T2(n).
7. The semantic map construction method according to claim 4, wherein in the step S5, the data integration comprises coordinate system unification, data model unification and classification and grading unification of road side sensing equipment and vehicle side;
the data matching fusion comprises: and matching the repeatedly detected road end traffic participant data and vehicle end traffic participant data, and fusing the traffic participant data missing from the vehicle end.
8. The semantic mapping method of claim 7, wherein the data matching fusion comprises:
s511: the coordinate systems of the structured data set T1(m) and the structured data set T2(n) are firstly converted into a world coordinate system, and the converted T1(m) and T2(n) are respectively expressed as:
T1(m) = [x_ωm, y_ωm, z_ωm, l_ωm, ω_ωm, h_ωm, θ_ωm, c_ωm];
T2(n) = [x_ωn, y_ωn, z_ωn, l_ωn, ω_ωn, h_ωn, θ_ωn, c_ωn];
s512: a KD data structure is constructed from the spatial position coordinates (x_ωn, y_ωn, z_ωn) of the vehicle-end structured data set T2(n); taking the spatial position coordinates (x_ωm, y_ωm, z_ωm) of the road-end structured data set T1(m) as reference, the two vehicle-end traffic participants nearest to the first datum t1_1 of T1(m) are searched in T2(n) by the KD-tree search algorithm;
s513: calculate t1 1 Mahalanobis distance from the two nearest vehicle end traffic participants is expressed as:
MD(t1_1, t2_j) = √[(t1_1 − t2_j)^T · Σ^(−1) · (t1_1 − t2_j)]
where Σ is the 8 × 8 covariance matrix of the record dimensions and Σ^(−1) its inverse;
if MD(t1_1, t2_j) is less than the preset threshold D, a repeatedly detected traffic participant is considered to exist: the nearest vehicle-end traffic participant is matched against the road-end traffic participant; if the match is consistent, the repeatedly detected traffic participant is confirmed, and if the match is inconsistent, the road-end traffic participant datum t1_1 is added to the vehicle-end structured data set T2(n); if MD is greater than the preset threshold D, no repeated detection is considered to exist, and the road-end traffic participant datum t1_1 is added to the vehicle-end structured data set T2(n);
s514: and repeating the steps S511 to S513 until all the data in the road-end structured data set T1(m) are completely matched, so as to obtain a final vehicle-end structured data set T2(n '), wherein n' is more than or equal to n.
CN202210428195.4A 2022-04-22 2022-04-22 Semantic map construction method based on vehicle-road cooperative sensing system Pending CN114882182A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210428195.4A CN114882182A (en) 2022-04-22 2022-04-22 Semantic map construction method based on vehicle-road cooperative sensing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210428195.4A CN114882182A (en) 2022-04-22 2022-04-22 Semantic map construction method based on vehicle-road cooperative sensing system

Publications (1)

Publication Number Publication Date
CN114882182A true CN114882182A (en) 2022-08-09

Family

ID=82670903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210428195.4A Pending CN114882182A (en) 2022-04-22 2022-04-22 Semantic map construction method based on vehicle-road cooperative sensing system

Country Status (1)

Country Link
CN (1) CN114882182A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116467323A (en) * 2023-04-11 2023-07-21 北京中科东信科技有限公司 High-precision map updating method and system based on road side facilities
CN116467323B (en) * 2023-04-11 2023-12-19 北京中科东信科技有限公司 High-precision map updating method and system based on road side facilities
CN116804560A (en) * 2023-08-23 2023-09-26 四川交通职业技术学院 Unmanned automobile safety navigation method and device under controlled road section
CN116804560B (en) * 2023-08-23 2023-11-03 四川交通职业技术学院 Unmanned automobile safety navigation method and device under controlled road section

Similar Documents

Publication Publication Date Title
WO2022206942A1 (en) Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN109829386B (en) Intelligent vehicle passable area detection method based on multi-source information fusion
CN107063275B (en) Intelligent vehicle map fusion system and method based on road side equipment
Han et al. Research on road environmental sense method of intelligent vehicle based on tracking check
US20210078562A1 (en) Planning for unknown objects by an autonomous vehicle
US10281920B2 (en) Planning for unknown objects by an autonomous vehicle
Holgado‐Barco et al. Automatic inventory of road cross‐sections from mobile laser scanning system
Zhao et al. On-road vehicle trajectory collection and scene-based lane change analysis: Part i
WO2018068653A1 (en) Point cloud data processing method and apparatus, and storage medium
WO2021238306A1 (en) Method for processing laser point cloud and related device
US20180259967A1 (en) Planning for unknown objects by an autonomous vehicle
CN107862287A (en) A kind of front zonule object identification and vehicle early warning method
Ma et al. Generation of horizontally curved driving lines in HD maps using mobile laser scanning point clouds
CN106199558A (en) Barrier method for quick
CN114882182A (en) Semantic map construction method based on vehicle-road cooperative sensing system
GB2621048A (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
Ye et al. Robust lane extraction from MLS point clouds towards HD maps especially in curve road
CN116685874A (en) Camera-laser radar fusion object detection system and method
CN111880191A (en) Map generation method based on multi-agent laser radar and visual information fusion
CN109241855A (en) Intelligent vehicle based on stereoscopic vision can travel area detection method
CN115775378A (en) Vehicle-road cooperative target detection method based on multi-sensor fusion
Jung et al. Intelligent Hybrid Fusion Algorithm with Vision Patterns for Generation of Precise Digital Road Maps in Self-driving Vehicles.
CN109993081A (en) A kind of statistical method of traffic flow based on road video and car plate detection
Yuan et al. Estimation of vehicle pose and position with monocular camera at urban road intersections
CN115451987A (en) Path planning learning method for automatic driving automobile

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination