CN117441197A - Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field - Google Patents
- Publication number
- CN117441197A CN117441197A CN202280026657.8A CN202280026657A CN117441197A CN 117441197 A CN117441197 A CN 117441197A CN 202280026657 A CN202280026657 A CN 202280026657A CN 117441197 A CN117441197 A CN 117441197A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- data
- field
- risk
- target vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/40—Means for monitoring or calibrating
- G01S7/003—Transmission of data between radar, sonar or lidar systems and remote stations
- G01S7/4808—Evaluating distance, position or velocity data
- G01S7/4972—Alignment of sensor
- G01S17/87—Combinations of systems using electromagnetic waves other than radio waves
- G01S17/89—Lidar systems specially adapted for mapping or imaging
- G01S17/93—Lidar systems specially adapted for anti-collision purposes
- G01S17/931—Lidar anti-collision systems for land vehicles
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F18/20—Pattern recognition; Analysing
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F9/451—Execution arrangements for user interfaces
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G06V20/56—Context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/0116—Measuring and analyzing traffic conditions based on data from roadside infrastructure, e.g. beacons
- G08G1/0133—Traffic data processing for classifying traffic situation
- G08G1/0141—Measuring and analyzing traffic conditions for traffic information dissemination
- G08G1/0145—Measuring and analyzing traffic conditions for active traffic flow control
- G08G1/04—Detecting movement of traffic using optical or ultrasonic detectors
- G08G1/048—Detecting movement of traffic with compensation for environmental or other conditions, e.g. snow, vehicle stopped at detector
- G08G1/164—Anti-collision systems, centralised, e.g. external to vehicles
Abstract
The invention provides a vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field, comprising the following steps: 1. A calculation mechanism for the driving safety risk field is proposed, quantitatively analyzing the risk that various stationary objects (roadside parked vehicles, roadblocks, traffic signs, etc.) or moving objects (driven vehicles, non-motorized vehicles, pedestrians, etc.) in the traffic environment pose to a specific position. 2. Using this calculation method, with the lidar point cloud data of the roadside sensing unit as the data source, the risk posed to a target vehicle (an autonomous vehicle) by the various other objects within scanning range is calculated, establishing a unified driving-safety-risk-field distribution centered on the target vehicle. 3. A threshold is used to screen out the regions posing greater risk to the target vehicle, and the corresponding point cloud data are segmented from the raw data to serve as supplementary perception information for the autonomous vehicle. 4. The point-cloud-level information collected by the roadside sensing unit's lidar, processed by the above steps, is fused with the point-cloud-level information collected by the vehicle-end lidar, and a reference evaluation system for the fusion method is provided.
Description
The invention relates to a perception-assistance technology for autonomous driving, in particular to a vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field. It is mainly oriented toward enhancing the hazard-perception capability of autonomous vehicles in a vehicle-road cooperative environment, through infrastructure data acquisition, driving-safety-risk-field calculation, data segmentation, data release, and data fusion.
Autonomous driving is an emerging technology in the transportation field that is being put into practice worldwide, with each country trying to develop and establish a safe and efficient technical route; one scheme supported in China is the vehicle-road cooperative mode. It broadens single-vehicle intelligence to the whole traffic environment, compensating for the limitations of single-vehicle perception and data-processing capacity through the multi-dimensional data channels of vehicle-to-vehicle and vehicle-to-road communication, thereby improving the overall operating safety and efficiency of the traffic system.
In the field of autonomous-driving safety, driving-safety-assistance systems have been studied for a long time. Since the 1990s, automobile companies have proposed many driving-safety-assistance algorithms. For longitudinal safety, safety-distance models are mainly adopted: when the following distance falls below the safe distance, the assistance system sounds an alarm and brakes automatically. Many safety-distance models determine the safety status of a vehicle by analyzing in real time the safe distance for the relative motion of the vehicles ahead and behind. For lateral safety, driver-safety-assistance algorithms are mainly based on the current car position (CCP), the time to lane crossing (TLC), and the variable elevation band (VRBS). Existing safety models are mostly based on vehicle kinematics and dynamics; their description of driving safety typically relies on vehicle-state information such as position, speed, acceleration, and yaw rate, and on relative-motion information such as relative speed and relative distance. However, these models have the following problems: it is difficult to reflect the influence of all types of traffic factors on driving safety; it is difficult to describe the interactions among driver behavior characteristics, vehicle states, and the road environment; and it is difficult to provide an accurate basis for vehicle-control judgments.
Against this background, potential field theory has become an emerging direction in the field of autonomous-driving safety; it was originally used for vehicle and robot navigation. One significant advantage is that it allows a vehicle to navigate autonomously using only its own position and local sensor measurements. Obstacles are modeled as repulsive potential fields (risk fields), and the vehicle can use the field-strength gradient at its location to generate control actions that steer it around obstacles. However, the theory has mainly been applied to motion planning of autonomous vehicles and to driver-behavior modeling in specific traffic scenarios, such as car following. The main problem with existing models is that risk factors such as the driver's personality and psychological and physiological characteristics, as well as complex road conditions, are not fully considered, and the description of driver-vehicle-road interactions is inadequate. The practical applications of these models are therefore limited.
Environmental perception is a precondition for autonomous-driving decision-making and planning. Various means of environmental perception are available, including high-definition cameras, infrared sensing, lidar, and millimeter-wave radar, each with its own advantages and disadvantages. In 3D perception, lidar (laser radar) is a widely used and effective technique: it offers a wide scanning range, intuitive results, and immunity to ambient natural light, making it well suited to autonomous-driving perception. Lidar outputs results in a point cloud format, a relatively low-level data representation in which the scan is recorded as points: each point contains three-dimensional coordinates, and some may also contain color (RGB) or reflection intensity information. Much as image-processing technology has flourished, processing methods for lidar point cloud data are steadily increasing, covering target detection, target tracking, semantic segmentation, and other directions.
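As a concrete illustration of the point format described above (3-D coordinates, plus optional color or intensity), a minimal record type might look like the following hypothetical sketch; the field names are illustrative, not a standard lidar driver's API.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class LidarPoint:
    """One return in a lidar scan: 3-D coordinates plus optional attributes."""
    x: float
    y: float
    z: float
    intensity: Optional[float] = None           # reflection intensity, if recorded
    rgb: Optional[Tuple[int, int, int]] = None  # color, if fused with a camera

# A frame is simply a collection of such points.
frame = [
    LidarPoint(12.4, -3.1, 0.8, intensity=0.62),
    LidarPoint(12.5, -3.0, 0.8),  # coordinates only
]
```

In practice a frame would be stored as a dense array for efficiency, but the per-point schema is the same.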
Semantic segmentation is a fundamental task in computer vision that provides a finer-grained understanding of images and is very important in autonomous driving. Point cloud semantic segmentation extends image semantic segmentation: given raw point cloud data, it perceives characteristics such as the types and number of targets in the scene and renders the points of the same class of target in the same color. Three-dimensional point cloud segmentation requires understanding both the global geometry and the fine-grained detail of each point. According to segmentation granularity, three-dimensional point cloud segmentation methods can be divided into three types: semantic segmentation, instance segmentation, and part segmentation.
The effect of point cloud segmentation depends heavily on the quality of the point cloud data. Clearly, in a vehicle-road collaborative fusion-perception scenario, roadside equipment could maximize detection performance by providing unprocessed point cloud data, but the data-transmission volume would be too large. Recent V2V studies have shown that transmitting information on all road objects detected by onboard sensors can still place a high load on the transmission network. A dynamic screening mechanism for road objects is therefore needed, to extract "skeleton" point clouds that are more representative and lose little or no feature information. The specific implementation is: formulate a point cloud value-judgment standard from theoretical derivation; assign a value to each point in the point cloud data; and decide, according to that value, whether to feed the point into the transmission network. Simulation research based on real road traffic shows that value-predicting V2V communication can significantly improve collaborative-perception performance under network load. Generalizing this to the fused perception of roadside and vehicle-end sensing equipment, a dynamic point cloud screening mechanism is formulated so that road objects detected at the roadside can be selectively transmitted to the vehicle end according to certain standards, which is significant for improving the vehicle end's perception of full scene semantics and for improving autonomous-driving safety.
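The three screening steps above (define a value standard, score each point, threshold before transmission) can be sketched as follows. This is a minimal illustration: the value scores here are placeholders, since in the invention the actual value standard is derived from the driving safety risk field.

```python
import numpy as np

def screen_points(points: np.ndarray, values: np.ndarray, threshold: float) -> np.ndarray:
    """Keep only points whose assessed value exceeds the transmission threshold.

    points    : (N, 3) array of x, y, z coordinates
    values    : (N,) per-point value scores from some judgment standard
    threshold : minimum value worth feeding into the transmission network
    """
    mask = values > threshold
    return points[mask]

pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 1.0, 0.0],
                [2.0, 0.5, 0.0]])
vals = np.array([0.9, 0.2, 0.7])   # placeholder value scores
kept = screen_points(pts, vals, threshold=0.5)
# kept contains the first and third points; the low-value point is dropped
```

The point of the design is that the expensive decision (what to transmit) is made per point, so the roadside unit can trade network load against perceptual completeness by moving a single threshold.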
Prior Art
CN105892471A;
CN108932462A;
CN110850431A;
CN111985322A;
CN108639059A;
US10281920 B2
WO2015008380A1
Disclosure of Invention
To solve the above problems, the invention provides a dynamic vehicle-road lidar point cloud segmentation and fusion method based on the driving safety risk field. Drawing on the risk-field calculation theory derived in the paper "The Driving Safety Field Based on Driver-Vehicle-Road Interactions" by Jianqiang Wang et al., and on specific point cloud data, it provides a driving-safety-risk calculation mechanism that is practical to compute and covers most objects affecting road safety. Based on the calculation result, the point clouds of objects posing greater risk to the autonomous vehicle are segmented out as the final transmission result, then fused with the point cloud collected by the target vehicle's lidar, with (optional) method evaluation. The method comprises the following steps:
A. Data acquisition
In an autonomous-driving traffic scene, lidar scanning yields the point cloud of the traffic scene; these data are the source for all subsequent links. A flow chart of the data acquisition module is shown in fig. 2.
In the data acquisition module, there are two alternatives:
A1: Only the roadside lidar in the roadside sensing unit performs lidar scanning to construct the scene point cloud; the subsequent links then use only the roadside lidar's point cloud for constructing the safety risk field and calculating its values.
A2: Both the roadside lidar in the roadside sensing unit and a lidar mounted on a calibration vehicle pre-placed in the scene perform lidar scanning to construct the scene point cloud; the subsequent construction and numerical calculation of the safety risk field then use both the roadside lidar's point cloud and the calibration vehicle's point cloud, which serve to verify and calibrate each other.
B. Driving safety risk field calculation
This comprises a target detection sub-module B1, a scene acquisition sub-module B2, and a safety field calculation sub-module B3; the flow chart is shown in fig. 3.
B1, the target detection sub-module. In this sub-module, deep-learning 3D target detection is performed on the scene point cloud obtained in the data acquisition step, using the PV-RCNN algorithm. Scene point cloud data are input and a target detection result is obtained. Since the data source is lidar point cloud data, the lidar's placement determines the size, characteristics, etc. of the scene point cloud data. The bounding box is the enclosing box of each target in the scene, with attributes such as position, length, height, width, and deflection (yaw) angle, as shown in fig. 4.
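The bounding-box attributes listed above (position, length, width, height, deflection angle) can be held in a small structure such as the following sketch. This is illustrative only, assuming a ground-plane yaw convention; it is not PV-RCNN's actual output format.

```python
from dataclasses import dataclass
import math

@dataclass
class BoundingBox3D:
    """A 3-D detection box: center position, extents, heading, and class label."""
    x: float
    y: float
    z: float
    length: float
    width: float
    height: float
    yaw: float   # rotation about the vertical axis, radians (assumed convention)
    label: str   # e.g. "vehicle", "pedestrian"

    def footprint_corners(self):
        """Ground-plane corners of the box, useful for overlap/distance checks."""
        c, s = math.cos(self.yaw), math.sin(self.yaw)
        hl, hw = self.length / 2, self.width / 2
        return [(self.x + c * dx - s * dy, self.y + s * dx + c * dy)
                for dx, dy in ((hl, hw), (hl, -hw), (-hl, -hw), (-hl, hw))]

box = BoundingBox3D(10.0, 2.0, 0.9, 4.6, 1.8, 1.5, 0.0, "vehicle")
```

A detector's per-frame output is then simply a list of such boxes, which is also the natural input to the safety field calculation sub-module below.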
B2, the scene acquisition sub-module. This sub-module acquires certain characteristics and information of the scene in advance, before the target detection sub-module runs, so that target detection and the subsequent safety field calculation can be performed better. Several alternative schemes exist:
B 21 : by adding a camera sensor in the road side sensing unitThe RGB information of the scene acquired by the camera sensor and the corresponding horizontal and vertical boundaries are used for judging the type of the object so as to assist in distinguishing the type of the static object.
B 22 : before the automatic processing of the target detection sub-module, a manual judging process is added, and a worker trained by related professions performs calibration work on the static object in the traffic scene, so that the purpose of distinguishing the static object is achieved.
B 23 : the type of the static object is distinguished by the lane-level information in the high-precision map according to the existing high-precision map and positioning to the position of the scene according to the coordinate system.
B3, the safety field calculation sub-module. The inputs to this sub-module are the types of the static objects and the target detection bounding boxes. By analogy with field theories in physics such as the gravitational and magnetic fields, things that may cause risk in a traffic environment are regarded as hazard sources diffusing outward from their centers; the field strength of the hazard field can be understood as the magnitude of the risk coefficient at a given distance from the hazard source. The closer to the hazard center, the greater the possibility of an accident; the farther away, the lower the probability; and when the distance approaches 0, the target vehicle can be considered to be in contact or collision with the hazard source, i.e., a traffic accident has occurred.
The driving safety risk field model comprises a static risk field and a dynamic risk field, i.e., driving safety risk field = static risk field + dynamic risk field:

E_S = E_R + E_V

where E_S is the field strength vector of the driving safety risk field, E_R is the field strength vector of the static risk field, and E_V is the field strength vector of the dynamic risk field. The driving safety risk field model expresses the potential driving risk caused by traffic factors in an actual scene; the risk is measured by the probability of an accident and its severity.
According to their generating sources, driving safety risk field sources are divided into two types, static and dynamic:
1) Static risk field: the field source is an object that is relatively static in the traffic environment, mainly road surface markings such as lane dividing lines, and also hard separation facilities such as central dividing strips. Such objects have two characteristics: (1) setting aside road construction, the object is static relative to the target vehicle; (2) except for some hard separation facilities, the object keeps the driver away from its position by legal effect, but even if the driver actually crosses a lane line, a traffic accident does not necessarily occur immediately. For this type of object, based on the above analysis, suppose the static risk field source a at (x_a, y_a) forms a potential field whose field strength vector at (x_j, y_j) is E_R, where LT_a is the risk coefficient of the static risk field source a; R_a is a constant greater than 0 representing the road-condition influence factor at (x_a, y_a); f_d is the distance influence factor of the static risk field source a; r_aj is the distance vector between the static risk field source a and the target vehicle j, where (x_j, y_j) is the centroid of the target vehicle j and (x_a, y_a) is the foot of the perpendicular dropped from (x_j, y_j) onto the static risk field source a; r_aj/|r_aj| indicates the direction of the field strength; and d is the width of the target vehicle j. The larger the value of E_R, the higher the risk that the static risk field source a poses to the target vehicle j. Static risk field sources include but are not limited to lane markings.
2) Dynamic risk field: the field source is an object which is in relative movement in a traffic environment, and mainly comprises vehicles, pedestrians, roadblock facilities and the like. This type of object also has two features: (1) the relative speed is set by taking a moving target vehicle as a reference system. (2) Such objects are strictly prohibited from collision, otherwise serious traffic accidents will necessarily occur. For this class of objects, based on the above analysis, it is assumed that the dynamic risk field source b is at (x b ,y b ) The potential field formed is in (x) j ,y j ) Is the field strength vector E of (2) V The method comprises the following steps:
r_bj = (x_j - x_b, y_j - y_b)
wherein the x-axis lies along the road line and the y-axis is perpendicular to it. r_bj is the distance vector between the dynamic risk field source b and the target vehicle j; k_2, k_3, and G are constants greater than 0; R_b has the same meaning as R_a; T_bj is the type correction coefficient between the dynamic risk field source b and the target vehicle j; v_bj is the relative velocity between the dynamic risk field source b and the target vehicle j; θ is the angle between the directions of v_bj and r_bj, positive clockwise. The greater the value of E_V, the higher the risk the dynamic risk field source b poses to the target vehicle j.
Based on the above driving safety risk calculation method, the risk posed by various objects in the road to a given target vehicle can be analyzed. In view of the comprehensiveness of its data acquisition range and its advantages in target positioning, the method selects point cloud data acquired by the roadside lidar as the data source and takes the unoccluded roadside point cloud scanning result as the calculation carrier.
For a given target vehicle in the field, the risk calculation flow for the various objects is as follows:
1) Construct static scene data from the point cloud scanning results through early-stage data acquisition. Manually separate out the static risk field sources in the static scene, including lane dividing lines, central dividing strips, road edge areas, and the like, and randomly sample and fit a planar linear equation for each static risk field source.
2) Select a certain frame of data as the calculation time and extract the previous frame as the reference for object moving speed. Using a 3D target detection and tracking algorithm on the point cloud data, identify the various objects (vehicles, pedestrians, etc.) participating in the safety field calculation in both the calculation frame and the previous frame, and establish the correspondence between the same object in the two frames. Calculate each object's moving speed from the displacement of its marking box and the lidar scanning frame rate. A newly appearing object with no previous-frame data for speed calculation is assigned the standard speed.
3) Substitute the relative positions, types, and other attributes of the target vehicle and the other objects, the distances between the static risk field sources of step 1) and the target vehicle, and traffic environment factors such as road conditions into the driving safety risk field calculation mechanism. The speed of the target vehicle within the relative speed is set as an unknown parameter, so the calculation is passed backward as an expression containing that unknown. The safety risk of each object within the scanning range is thus obtained for the calculation target, forming the driving safety risk distribution centered on the target vehicle.
C. Data segmentation
The flow diagram of the data segmentation module is shown in fig. 8. First, the scene point cloud is divided into the point cloud inside the bounding boxes and the point cloud outside them: according to the input scene point cloud and bounding box data, each point is tested for membership in a bounding box, splitting the point cloud into the two classes.
Before the safety risk field was introduced, data segmentation used the two methods of sampling and segmentation. After introducing the safety risk field, a safety risk field threshold is set based on the calculation result, and threshold screening selects the objects posing higher risk to the target vehicle. There are four alternative data segmentation schemes:
C_1: sampling scheme. The scene point cloud data P_1 from the data acquisition submodule, and the target detection bounding boxes X_1 and safety risk field data S_1 from the driving safety risk field calculation module, are the inputs of this submodule. First, a conditional test determines whether each point of the scene point cloud P_1 lies inside a bounding box X_1, yielding the in-box point cloud P_11 and the out-of-box point cloud P_12. Then hyperparameters f_1 and f_2 are set, and P_11 and P_12 are randomly sampled at rates f_1 and f_2 respectively, giving the segmented point cloud data P_2.
C_2: segmentation scheme. With the same inputs (P_1, X_1, S_1) and the same in-box test yielding P_11 and P_12, the in-box point cloud P_11 is kept and the out-of-box point cloud P_12 is discarded, giving the segmented point cloud data P_2.
C_3: sampling scheme based on the safety risk field. With the same inputs and the same in-box test yielding P_11 and P_12, a safety risk field numerical threshold f_3 is set, and P_11 and P_12 are sampled according to that threshold, giving the segmented point cloud data P_2.
C_4: segmentation scheme based on the safety risk field. With the same inputs and the same in-box test yielding P_11 and P_12, a safety risk field numerical threshold f_3 is set, and P_11 and P_12 are partitioned according to that threshold, giving the segmented point cloud data P_2.
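The four schemes share the in-box/out-of-box test and differ only in how the two sub-clouds are kept. A minimal sketch in Python (NumPy), with axis-aligned boxes and the function names as assumptions standing in for the module's actual data structures:

```python
import numpy as np

def in_bounding_boxes(points, boxes):
    """Boolean mask: True where a point lies inside any axis-aligned box.
    points: (N, 3) array; boxes: iterable of (xmin, ymin, zmin, xmax, ymax, zmax)."""
    mask = np.zeros(len(points), dtype=bool)
    for xmin, ymin, zmin, xmax, ymax, zmax in boxes:
        inside = np.all((points >= [xmin, ymin, zmin]) &
                        (points <= [xmax, ymax, zmax]), axis=1)
        mask |= inside
    return mask

def segment(points, boxes, scheme="C2", f1=0.8, f2=0.2, risk=None, f3=None, rng=None):
    """P_1 -> P_2 under schemes C1 (sampling), C2 (segmentation) and, when a
    per-point risk array is supplied, C3/C4 (risk-field threshold, simplified)."""
    if rng is None:
        rng = np.random.default_rng(0)
    inside = in_bounding_boxes(points, boxes)
    p_in, p_out = points[inside], points[~inside]   # P_11, P_12
    if scheme == "C2":                  # keep in-box points, drop the rest
        return p_in
    if scheme == "C1":                  # random sampling at rates f1 / f2
        return np.vstack([p_in[rng.random(len(p_in)) < f1],
                          p_out[rng.random(len(p_out)) < f2]])
    # C3 / C4 (collapsed here): keep points whose risk value reaches f3
    return points[risk >= f3]
```

The C3/C4 branch is deliberately simplified to a single per-point threshold; the patent distinguishes sampling from partitioning on the two sub-clouds.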
The specific method for extracting the dangerous range is as follows: if the dangerous target is a static risk field source, taking the plane in which the risk field source lies as the center, a region of intercepted width d/2 is taken as the dangerous range, where d is the width of the dangerous target; if the dangerous target is a dynamic risk field source, a rectangular area of width 1.5d and length (0.5l + 0.5l×k), centered on the centroid of the dangerous target, is taken as the dangerous range, where d is the width and l the length of the dangerous target and k is a speed correction coefficient not smaller than 1. The dangerous ranges are extracted in order of the risk coefficient of each source; a region where dangerous ranges overlap is extracted only once. The final extracted total dangerous range can be provided to the target vehicle as perception assistance data.
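The dynamic-source rule above can be sketched as follows; the function name and the (x, y) centroid convention are illustrative:

```python
def dynamic_hazard_rect(centroid, d, l, k):
    """Axis-aligned rectangle of width 1.5*d and length 0.5*l + 0.5*l*k,
    centered on the dangerous target's centroid, per the extraction rule.
    k >= 1 is the speed correction coefficient. Returns (xmin, ymin, xmax, ymax)."""
    cx, cy = centroid
    width = 1.5 * d
    length = 0.5 * l + 0.5 * l * k
    return (cx - length / 2, cy - width / 2,
            cx + length / 2, cy + width / 2)
```

For a stationary target (k = 1) the length reduces to l, i.e. the target's own footprint; higher speeds stretch the range along the length axis.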
D. Data release (optional)
The flow chart of the data release module is shown in fig. 9. Based on the data segmentation result, the roadside perception unit compresses the data and then establishes a data transmission channel with the target vehicle, where the target vehicle satisfies the following: at some point in the time stamp, a numbered vehicle is at a certain position in the scene. Whether the target vehicle is moving is judged, with two alternatives:
D_1: if the target vehicle is stationary, directly publish the segmented point cloud and the vector sum of the static and dynamic risk fields, i.e. the safety risk field vector sum, whose modulus is a numerical value;
D_2: if the target vehicle is moving, publish the segmented point cloud, the static risk field, and the semi-finished risk field data; the target vehicle's speed is then substituted in to obtain its safety risk field vector sum, whose modulus is a numerical value.
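The D_2 branch can be sketched as a "semi-finished" field: an expression closed over everything except the target vehicle's speed, which the vehicle end substitutes on receipt. The field-strength form inside is an assumed power/exponential model, not the patent's verbatim (image-only) formula:

```python
import math

def semi_finished_Ev(G, Rb, Tbj, k2, k3, r_bj, v_b):
    """Build E_V's magnitude with the target-vehicle velocity v_j left as an
    unknown parameter (assumed model, illustrative names)."""
    rx, ry = r_bj
    dist = math.hypot(rx, ry)

    def evaluate(v_j):
        # relative velocity v_bj = v_b - v_j, substituted at the vehicle end
        vx, vy = v_b[0] - v_j[0], v_b[1] - v_j[1]
        speed = math.hypot(vx, vy)
        cos_theta = 0.0
        if dist > 0 and speed > 0:
            cos_theta = (vx * rx + vy * ry) / (dist * speed)
        return (G * Rb * Tbj / dist ** k2) * math.exp(k3 * speed * cos_theta)

    return evaluate  # the "semi-finished product": awaits v_j

Ev = semi_finished_Ev(G=0.001, Rb=1.0, Tbj=2.5, k2=1.0, k3=0.1,
                      r_bj=(10.0, 0.0), v_b=(5.0, 0.0))
stationary = Ev((0.0, 0.0))  # D_1 case: v_j = 0, so v_bj = v_b
moving = Ev((3.0, 0.0))      # D_2 case: vehicle substitutes its own speed
```

Publishing a closure rather than a number is the Python analogue of transmitting the expression with the unknown parameter backward.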
E. Data fusion (optional)
The point cloud of a certain area around objects with high safety field risk values, obtained after data segmentation, is fused with the point cloud scanned by the target vehicle's lidar: a point cloud coordinate transformation matrix is designed to register the vehicle-end point cloud with the roadside high-risk point cloud, and the fused point cloud is compressed to obtain the compressed point cloud data.
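A minimal sketch of the registration-and-fusion step, assuming the transformation (rotation matrix R, translation T) is already known from point cloud registration; names are illustrative:

```python
import numpy as np

def fuse_point_clouds(roadside_pts, vehicle_pts, R, T):
    """Transform roadside high-risk points (N, 3) into the vehicle coordinate
    system with rotation R (3x3) and translation T (3,), then concatenate
    with the vehicle-end scan (M, 3)."""
    transformed = roadside_pts @ R.T + T
    return np.vstack([transformed, vehicle_pts])
```

Compression of the fused cloud (the last step in the module) is omitted here.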
F. Method evaluation (optional)
Different data segmentation methods are tested, where V represents the unprocessed vehicle-end original point cloud;
I represents the unprocessed original point cloud obtained by the roadside perception unit;
I_1 represents the point cloud obtained when the roadside perception unit segments the original point cloud using the segmentation variant of the data segmentation method;
I_2 represents the point cloud obtained when the roadside perception unit segments the original point cloud using the sampling variant of the data segmentation method;
I_1S represents the point cloud obtained when the roadside perception unit segments the original point cloud using segmentation based on the safety field;
I_2S represents the point cloud obtained when the roadside perception unit segments the original point cloud using sampling based on the safety field;
finally, the detection results of the different data segmentation methods are obtained and evaluated.
All the symbols and their associated meanings are summarized in the following table:
Symbol | Meaning
E_R | Field strength vector at (x_j, y_j), for target vehicle j, of the static risk field source a at (x_a, y_a)
LT_a | Risk coefficients of the different types of static risk field sources
R_a | Constant in the static risk field calculation; road condition influence factor at (x_a, y_a)
D | Lane width
d | Width of target vehicle j
r_aj | Distance vector between static risk field source a and target vehicle j
x_j | Abscissa of the centroid of target vehicle j
y_j | Ordinate of the centroid of target vehicle j
x_a | Abscissa of the intersection of the perpendicular from the centroid of target vehicle j with static risk field source a
y_a | Ordinate of the intersection of the perpendicular from the centroid of target vehicle j with static risk field source a
k_1 | Static risk field calculation constant; amplification factor of the distance
E_V | Field strength vector at (x_j, y_j), for target vehicle j, of the dynamic risk field source b at (x_b, y_b)
r_bj | Distance vector between dynamic risk field source b and target vehicle j
R_b | Constant in the dynamic risk field calculation; road condition influence factor at (x_b, y_b)
T_bj | Type correction coefficient between dynamic risk field source b and target vehicle j
v_bj | Relative velocity between dynamic risk field source b and target vehicle j
θ | Angle between the directions of v_bj and r_bj
k_2 | Dynamic risk field calculation constant
k_3 | Dynamic risk field calculation constant
G | Dynamic risk field calculation constant
k | Speed correction coefficient in dangerous target extraction
Interpretation of the terms
Driving safety risk field: the distribution of the driving safety risk posed by static and dynamic objects in a scene to a running vehicle; unless otherwise specified in the invention, synonymous with safety risk field.
Lidar: active remote sensing equipment that uses a laser as the emission light source and photoelectric detection as the technical means.
Point cloud: a dataset of points in a coordinate system.
Point cloud data: includes three-dimensional coordinates X, Y, Z, color, intensity values, time, etc., i.e. a structured matrix.
Target vehicle-end lidar L_1: lidar mounted on the target vehicle.
Roadside lidar L_2: lidar installed on the road side.
Vehicle-end laser radar point cloud: representing the point cloud acquired by the vehicle-end laser radar.
Road side laser radar point cloud: representing the point cloud acquired by the roadside lidar.
Scene point cloud: a point cloud of a traffic scene.
V2V: the moving vehicles provide end-to-end wireless communication, i.e., through V2V communication technology, the vehicle terminals exchange wireless information with each other without forwarding through the base station.
Convolution: the two variables are multiplied within a certain range and then summed.
Convolutional Neural Network (CNN): a type of feedforward neural network that includes convolution calculations and has a deep structure; a representative algorithm of deep learning.
Voxel: short for volume element; the minimum unit of digital data in three-dimensional space division. A volume containing voxels can be represented by volume rendering or by extracting a polygonal isosurface of a given threshold contour.
MLP: multilayer perceptron, also called artificial neural network; besides the input and output layers, it may have multiple hidden layers in between. The simplest MLP contains only one hidden layer, i.e. a three-layer structure.
V2X: vehicle to everything, i.e. exchange of information from the vehicle to the outside world.
RSU: the Road Side Unit is installed on the Road Side and communicates with the vehicle-mounted Unit.
OBU: on board Unit, i.e. On board Unit.
Skeleton points: key nodes of the three-dimensional point cloud model.
Safety risk threshold: a manually set numerical value chosen according to the actual application scenario.
Dangerous target: in the safety risk calculation, an object whose safety risk value for the target vehicle is greater than the set threshold.
Field source: any of the various objects participating in the calculation process of the driving safety risk calculation.
Point cloud registration: for point cloud data in different coordinate systems, a transformation matrix (a rotation matrix R and a translation matrix T) is obtained through registration, and the error is calculated to compare matching results.
Data acquisition module A: its function is data acquisition; input is the traffic scene, output is the scene point cloud data P_1.
Driving safety risk field calculation module B: its function is driving safety risk field calculation, comprising a target detection submodule B_1, a scene acquisition submodule B_2, and a safety field calculation submodule B_3; input is the scene point cloud data P_1, output is the target detection bounding box X_1, the safety risk field value S_1, and the semi-finished product S_2.
Bounding box: the detection result of a point cloud target, including position, length, height, width, deflection angle, etc., such as the target detection bounding box X_1.
Safety risk field value S_1: the modulus of the vector sum of the safety risk fields of all risk sources in the scene for a certain object.
Semi-finished risk field S_2: in the dynamic risk field calculation, the speed of the target vehicle is set as an unknown parameter; the expression containing the unknown parameter that is passed backward is called the semi-finished product.
Data segmentation module C: its function is data segmentation; input is the scene point cloud data P_1, the target detection bounding box X_1, the safety risk field value S_1, and the semi-finished risk field S_2; output is the segmented point cloud data P_2.
Data release module D: its function is data release, i.e. the roadside perception unit releases data to the target vehicle in the scene; the released data are the segmented point cloud data P_2 and the safety risk field / semi-finished risk field data S_1/S_2.
Data fusion module E: its function is to fuse the segmented point cloud data P_2 with the target vehicle's point cloud data P_3 to obtain the fused point cloud data P_4, which is compressed into the compressed point cloud data P_5.
Method evaluation module F: the compressed point cloud data P_5 is passed through the PV-RCNN deep learning target detection algorithm to obtain the target detection result R_1; the result R_1 is evaluated and the optimal data segmentation scheme is selected.
Road side perception unit: including but not limited to roadside lidar, cameras, and the like.
Target vehicle: the object on which each risk source in the driving safety risk field acts, and also the object to which the roadside perception unit transmits data, i.e. the ego car.
Target vehicle: i.e. the target vehicle in the safety risk field calculation process.
Target vehicle lidar L_3: lidar mounted on the target vehicle.
Segmentation: a method of the data segmentation module that divides the point clouds of detection targets and non-detection targets; its meaning differs from that of data segmentation as a whole.
Sampling: a method of the data segmentation module that randomly samples the point clouds of detection targets and non-detection targets according to weights; the parameters default to 0.8 and 0.2.
Data flow symbol description:
FIG. 1 is a flow chart of a method
FIG. 2 is a flow chart of a data acquisition module
FIG. 3 is a flow chart of a driving safety risk field calculation module
FIG. 4 is a schematic diagram of a bounding box of a target detection result
FIG. 5 is a schematic diagram of field intensity distribution of two security risk fields
(a) Static risk field distribution
(b) Mobile risk field distribution
(c) Static risk field r aj Description of the calculation
FIG. 6 is a schematic diagram of a security risk field distribution
FIG. 7 schematic view of a security risk field xoy plan projection
FIG. 8 is a flow chart of a data splitting module
FIG. 9 is a flow chart of a data publishing module
FIG. 10 is a flow chart of a data fusion module
FIG. 11 is a flow chart of a method evaluation module
FIG. 12 method evaluation reference System schematic diagram
FIG. 13 variant flow chart (A, B, C)
FIG. 14 variant flow chart (A, B, C, D)
FIG. 15 variant flow chart (A, B, C, D, E)
The invention is described in further detail below with reference to the drawings and the detailed description.
A roadside lidar point cloud segmentation method based on a safety risk field mechanism, whose flow chart is shown in fig. 1, consists of six modules: data acquisition module A, driving safety risk field calculation module B, data segmentation module C, data release module D, data fusion module E, and method evaluation module F.
Description of the preferred embodiments: two examples, A_1, B, C_1 and A_2, B, C_3, D_2, E, F.
The first example is as follows:
1. A_1, B, C_1
the flow chart is shown in FIG. 13
A. Data acquisition
A_1: in the automatic driving traffic scene, only the roadside lidar in the roadside perception unit performs lidar scanning, obtaining the point cloud data P_1 of the traffic scene; this data is the data source for all subsequent links.
B. Driving safety risk field calculation
The driving safety risk field calculation module comprises a target detection sub-module and a safety field calculation sub-module, and a flow chart is shown in fig. 3.
B_1, the target detection submodule. In the target detection submodule, deep-learning 3D target detection is performed on the scene point cloud obtained in step A; the algorithm is PV-RCNN. Scene point cloud data is input and the target detection result is obtained. Since the data source is lidar point cloud data, the layout position of the lidar determines the size, characteristics, etc. of the scene point cloud data. The bounding box is the bounding box X_1 of each target in the scene, with the attributes position, length, height, width, deflection angle, etc., as shown in fig. 4.
B_2, the scene acquisition submodule. The scene acquisition submodule acquires some characteristics and information of the scene in advance, before the target detection submodule, so that target detection and the subsequent safety field calculation can be performed better. There are several alternative schemes for this submodule:
B_21: a camera sensor is added to the roadside perception unit, and the object type is judged using the RGB information of the scene acquired by the camera and the corresponding horizontal and vertical boundaries, assisting in distinguishing the types of static objects.
B_22: before the automatic processing of the target detection submodule, a manual judgment step is added: professionally trained staff calibrate the static objects in the traffic scene, achieving the purpose of distinguishing static objects.
B_23: using an existing high-precision map and positioning the scene according to its coordinate system, the types of static objects are distinguished by the lane-level information in the high-precision map.
B_3, the safety field calculation submodule. The inputs of this submodule are the types of the static objects and the target detection bounding boxes. Borrowing field-theory methods from physics, such as the gravitational and magnetic fields, things that may cause risk in the traffic environment are regarded as danger sources diffusing outward from their own centers; the field strength of the danger field can be understood as the magnitude of the risk coefficient at a certain distance from the danger source. The closer to the danger center, the greater the possibility of an accident; the farther away, the lower the accident probability; and when the distance approaches 0, the target vehicle can be considered to be in contact collision with the danger source, i.e. a traffic accident has occurred.
The driving safety risk field model comprises a static risk field and a dynamic risk field. I.e. driving safety risk field = static risk field + dynamic risk field.
E_S = E_R + E_V
wherein E_S is the field strength vector of the driving safety risk field, E_R is the field strength vector of the static risk field, and E_V is the field strength vector of the dynamic risk field. The driving safety risk field model expresses the potential driving risk caused by traffic factors in an actual scene. The risk is measured by the probability of an accident and its severity.
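Numerically, the superposition is a plain vector sum; a small illustrative example (the values are invented for illustration, not computed from the patent's formulas):

```python
import numpy as np

# E_R and E_V are 2-D field strength vectors at the target vehicle's centroid
E_R = np.array([0.30, 0.10])   # static risk field contribution
E_V = np.array([0.20, -0.05])  # dynamic risk field contribution
E_S = E_R + E_V                # driving safety risk field: the vector sum
risk_magnitude = float(np.linalg.norm(E_S))  # modulus used for threshold screening
```

The scalar `risk_magnitude` is what a safety risk field threshold would be compared against in the data segmentation module.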
According to the difference in generating sources, driving safety risk field sources are divided into two types: static risk field sources and dynamic risk field sources.
1) Static risk field: the field source is an object that is relatively static in the traffic environment, mainly pavement markings such as lane dividing lines, and also hard separation facilities such as central dividing strips. For example, traffic regulations dictate that vehicles must not travel on or cross solid lane lines. However, if a driver unintentionally leaves the current lane, the risk of violating the lane marking constraint is perceived and the driver will steer the vehicle back toward the lane center. Meanwhile, the closer the vehicle is to the lane marking, the greater the risk. Driving risk is also related to road conditions, which may lead to high risk. Furthermore, the driving risk with respect to stationary objects is mainly affected by visibility: the lower the visibility, the higher the driving risk.
Such objects have two characteristics: (1) without considering road construction, the object is static relative to the target vehicle and has no speed attribute, since its actual meaning is that of a danger boundary; (2) except for some hard separation facilities, the object keeps the driver away from its position by legal effect, but even if the driver actually crosses a lane line, a traffic accident does not necessarily occur immediately.
For this type of object, based on the above analysis, it is assumed that the potential field formed by the static risk field source a at (x_a, y_a) has, at (x_j, y_j), the field strength vector E_R:
wherein:
LT_a is the risk coefficient of the different types of static risk field source a, determined by traffic regulations, ordered as hard separation facility > non-crossable lane dividing line > crossable lane dividing line. Common parameter values for facilities and lane lines are as follows. Guardrail-type or green-belt-type central dividing strip: 20-25; sidewalk kerb: 18-20; yellow solid or dashed line: 15-18; white solid line: 10-15; white dashed line: 0-5.
R_a is a constant greater than 0, representing the road condition influence factor at (x_a, y_a); it is determined by traffic environment factors such as the road adhesion coefficient, road gradient, road curvature, and visibility in the area near object a. In practice, a fixed value is generally chosen for a section of road. The data interval is generally [0.5, 1.5].
f_d is the distance influence factor of the different types of static risk field source a, determined by the object type, the lane width, and the like. There are currently two types: lane dividing lines and hard dividing strips.
D is the lane width, and d is the width of the target vehicle j, generally taken as the width of the bounding box of target vehicle j.
r_aj is the distance vector between the static risk field source a and the target vehicle j, where (x_j, y_j) is the centroid of target vehicle j and (x_a, y_a) is the point where the perpendicular from (x_j, y_j) intersects the static risk field source a.
k_1 is a constant greater than 0 and represents the amplification factor of the distance, since in general the collision risk does not vary linearly with the distance between two objects. The value of k_1 generally ranges from 0.5 to 1.5.
r_aj/|r_aj| indicates the direction of the field strength. In practical applications, even if the field strength directions of two safety risk field sources at a certain point oppose each other, the risk at that point cannot be considered reduced; therefore, risks are still superimposed as scalars.
The larger the value of E_R, the higher the risk the static risk field source a poses to the target vehicle j; static risk field sources include, but are not limited to, lane markings. The field strength distribution result is shown in fig. 5 (a).
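The formula for E_R itself appears only as an image in the source document. Assembling the factors defined above (LT_a, R_a, f_d, k_1, and the direction vector), one hedged reconstruction consistent with those definitions, offered purely as an assumption and not as the patent's verbatim equation, is:

```latex
\mathbf{E}_R \;=\; LT_a \cdot R_a \cdot f_d \cdot \frac{\mathbf{r}_{aj}}{|\mathbf{r}_{aj}|},
\qquad
f_d \;=\; \left( \frac{(D-d)/2}{|\mathbf{r}_{aj}|} \right)^{k_1}
```

Here (D-d)/2 is the lateral clearance of a centered vehicle, so f_d grows as the vehicle approaches the marking; the exact form of f_d in the patent may differ.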
2) Dynamic risk field: the field source is an object in a relatively dynamic state in the traffic environment; the magnitude and direction of the field strength vector are determined by the attributes and state of the moving object and by the road conditions. A dynamic object here means one that may actually collide with a vehicle and cause significant loss, mainly vehicles, pedestrians, roadblock facilities, and the like.
This type of object also has two characteristics: (1) although the object may be stationary relative to the road surface, such as a roadside-parked vehicle or a roadblock facility, it still has a relative speed with the moving target vehicle as the reference frame; (2) collision with such objects is strictly prohibited, as a serious traffic accident would necessarily occur.
Obviously, the risk of relatively dynamic objects does not increase linearly with decreasing distance; the rate of risk increase becomes faster as the distance decreases. The invention therefore assumes a power-function form of the driving risk as a function of the vehicle-to-target distance.
For this class of objects, based on the above analysis, it is assumed that the potential field formed by the dynamic risk field source b at (x_b, y_b) has, at (x_j, y_j), the field strength vector E_V:
r_bj = (x_j - x_b, y_j - y_b)
wherein the x-axis is located on a road line and the y-axis is perpendicular to the road line.
r_bj is the distance vector between the dynamic risk field source b and the target vehicle j.
k_2, k_3, and G are constants greater than 0. The meaning of k_2 is the same as that of k_1 above; k_3 is the risk correction for different speeds; G is analogous to the electrostatic force constant, describing the magnitude of the risk coefficient between two unit-mass objects at unit distance. The value of k_2 generally ranges from 0.5 to 1.5, k_3 from 0.05 to 0.2, and G is usually taken as 0.001.
R_b has the same meaning as R_a; the data interval used is likewise [0.5, 1.5].
T_bj is the type correction coefficient between the dynamic risk field source b and the target vehicle j: the risk coefficients differ for, e.g., car-to-car and car-to-person collisions. Common type correction parameters are as follows. Vehicle-to-vehicle frontal collision: 2.5-3; vehicle-to-vehicle rear-end collision: 1-1.5; human-vehicle collision: 2-2.5; vehicle-roadblock collision: 1.5-2.
v_bj is the relative velocity between the dynamic risk field source b and the target vehicle j, i.e. the vector difference between the velocity v_b of source b and the velocity v_j of target vehicle j. θ is the angle between the directions of v_bj and r_bj, positive clockwise.
The semi-finished product of the relative velocity is:
v_bj = v_b - v_j
If the target vehicle j is stationary, i.e. v_j = 0, then v_bj = v_b; if the target vehicle j is moving, v_bj = v_b - v_j.
The greater the value of E_V, the higher the risk the dynamic risk field source b poses to the target vehicle j. The field strength distribution result is shown in fig. 5 (b).
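As with the static field, the formula for E_V itself is an image in the source document. A hedged reconstruction assembling the quantities defined above (G, R_b, T_bj, k_2, k_3, |v_bj|, θ), consistent with the power-function assumption stated earlier but offered only as an assumption, is:

```latex
\mathbf{E}_V \;=\; \frac{G \cdot R_b \cdot T_{bj}}{|\mathbf{r}_{bj}|^{k_2}}
\,\exp\!\bigl(k_3\,|\mathbf{v}_{bj}|\cos\theta\bigr)\,
\frac{\mathbf{r}_{bj}}{|\mathbf{r}_{bj}|}
```

Under this form, a positive cos θ (source and vehicle closing on each other) amplifies the field strength; the patent's exact expression may differ.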
Based on the above driving safety risk calculation method, the risk posed by various objects in the road to a given target vehicle can be analyzed. In view of the comprehensiveness of its data acquisition range and its advantages in target positioning, the method selects the lidar point cloud data as the data source and takes the unoccluded roadside point cloud scanning result as the calculation carrier.
For a given target vehicle in the field, the risk calculation flow for each object is as follows:
1) Through early-stage data acquisition, static scene data of a point cloud scanning result is constructed, and the specific method comprises the following steps:
a. collecting multi-frame point cloud data, dividing each frame of point cloud data into n statistical spaces, wherein the value range of n can be 50-100 according to different scanning ranges of the laser radar;
b. sequentially superpose each subsequent frame of point cloud data starting from the initial frame, manually removing dynamic objects during superposition and ensuring as far as possible that the point cloud contains no occluded blind regions;
c. after each superposition, detect the point cloud density of every statistical space; if it exceeds a threshold α (related to the point cloud density, generally 1000), randomly sample the points in that space to cap the density, finally obtaining an ideal global static point cloud background.
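Steps a to c above can be sketched as follows. The binning of each frame into n statistical spaces is done here on a simple x-y grid; the function name, grid layout, and the assumption that dynamic objects were already removed from the input frames are all illustrative:

```python
import numpy as np

def build_static_background(frames, n_cells=64, alpha=1000, seed=0):
    """Accumulate pre-cleaned point-cloud frames (N_i x 3 arrays, dynamic
    objects already removed) into a global static background.  Points are
    binned into n_cells x n_cells horizontal statistical spaces; any cell
    exceeding alpha points is randomly downsampled back to alpha."""
    rng = np.random.default_rng(seed)
    pts = np.vstack(frames)                        # superpose all frames
    lo, hi = pts[:, :2].min(0), pts[:, :2].max(0)
    # integer cell index per point over the x-y extent of the scan
    idx = np.floor((pts[:, :2] - lo) / (hi - lo + 1e-9) * n_cells).astype(int)
    keys = idx[:, 0] * n_cells + idx[:, 1]
    kept = []
    for k in np.unique(keys):
        cell = pts[keys == k]
        if len(cell) > alpha:                      # density cap per statistical space
            cell = cell[rng.choice(len(cell), alpha, replace=False)]
        kept.append(cell)
    return np.vstack(kept)
```

The density cap keeps the accumulated background from growing without bound as more frames are superposed.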
Static risk field sources in the static scene, including lane separation lines, central dividing strips, road edge areas and the like, are separated out manually, and a plane linear equation of each static risk field source is fitted by random sampling. Generally, more than 100 points should be collected uniformly along the line direction, and the collected points should not deviate too far from the target.
2) A certain frame of data is selected as the calculation time, as shown in fig. 7, and the previous frame is extracted as the object moving-speed reference. A 3D target detection and tracking algorithm based on point cloud data identifies the various objects (vehicles, pedestrians, etc.) participating in the safety field calculation in the calculation frame and the previous frame, and establishes the correspondence between the same object in the two frames. The 3D target detection and tracking method adopted in this step is not inventive and is not described further. In theory the algorithm is unrestricted, but considering timeliness in practical use, the computational efficiency of target detection and tracking should generally be not less than 1.25f, where f is the lidar point cloud scanning frequency.
Calculating the approximate centroid of the target vehicle by using the boundary frame of the same object, calculating the displacement of the centroid of the same object in the front and back frames, and finally calculating the moving speed of the object by using the scanning frame rate of the laser radar. For a newly added object without previous frame data for calculating the speed, the speed is regarded as the standard speed. The standard speed is the average speed in the point cloud scanning road section under the historical statistical condition, and the direction is consistent with the traffic lane where the target is located.
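The centroid-displacement speed estimate can be sketched in a few lines. The function name and the 2-D centroid representation are assumptions for illustration:

```python
def object_speed(centroid_prev, centroid_curr, scan_freq_hz=10.0):
    """Estimate an object's velocity from the displacement of its bounding-box
    centroid between two consecutive lidar frames.  Centroids are (cx, cy)
    pairs; the frame interval is 1 / scanning frequency."""
    dt = 1.0 / scan_freq_hz
    vx = (centroid_curr[0] - centroid_prev[0]) / dt
    vy = (centroid_curr[1] - centroid_prev[1]) / dt
    return vx, vy
```

An object with no previous-frame match would skip this estimate and take the road section's standard speed instead, as described above.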
3) And (2) substituting the relative position, relative speed, type and other attributes of the target vehicle and other objects except the target vehicle and parameters such as the distance between the static risk field source and the target vehicle in step 1) and traffic environment factors such as road conditions into the running safety risk field calculation mechanism in sequence to obtain the risk of each object in the scanning range for calculating the target, thereby forming running safety risk distribution taking the target vehicle as a core, as shown in fig. 5 and 6.
C. Data segmentation
The data segmentation module flow diagram is shown in fig. 8. According to the input scene point cloud and bounding-box data, the module detects whether each point lies inside a bounding box, dividing the scene point cloud into two classes: point cloud inside the bounding boxes and point cloud outside them.
C1: sampling scheme. The scene point cloud data P1 from the data acquisition sub-module, and the target detection bounding boxes X1 and security risk field data S1 from the driving safety risk field calculation module, are the sub-module inputs. First, a condition judgment determines whether each point of P1 lies inside a bounding box X1, yielding in-box point cloud data P11 and out-of-box point cloud data P12. Hyperparameters f1 and f2 are then set, and P11 and P12 are randomly sampled according to f1 and f2 to obtain the segmented point cloud data P2. Taking each screened object as a centre, sampling or division is combined to extract the point cloud of a certain surrounding region as the danger range; the specific extraction method is:
if the dangerous target is a static risk field source, taking a plane in which the risk field source is positioned as a center, taking a region with the intercepting width of d/2 as a dangerous range, wherein d is the width of the target vehicle.
If the dangerous object is a dynamic risk field source, taking the centroid of the dangerous object as the center, and intercepting a rectangular area with the width of 1.5d and the length of (0.5l+0.5l×k) as a dangerous range, wherein 0.5l part is the half side length far away from the target vehicle, and 0.5l×k is the half side length close to the target vehicle. d is the width of the dangerous object and l is the length of the dangerous object. k is a speed correction coefficient not smaller than 1, and the value interval is as follows depending on the speed of the dangerous target: v e (0, 30) km/h, k=2; v e (30, 70) km/h, k=3; v >70km/h, k=5.
And sequentially extracting the dangerous ranges according to the dangerous coefficient of the dangerous source, wherein the region where the dangerous ranges overlap is extracted only once. The final extracted total hazard range result may be provided to the target vehicle as perception assistance data for the target vehicle.
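The dynamic danger-range rule above (width 1.5d, length 0.5l + 0.5l·k with the speed-dependent coefficient k) can be sketched as a point mask. The heading-vector parameterisation, the `target_dir` flag telling which end of the rectangle faces the target vehicle, and the function names are illustrative assumptions:

```python
import numpy as np

def k_speed(v_kmh):
    """Speed correction coefficient k for a dynamic hazard (values from the text)."""
    if v_kmh <= 30:
        return 2
    if v_kmh <= 70:
        return 3
    return 5

def dynamic_danger_mask(points, centroid, heading, l, d, v_kmh, target_dir=+1):
    """Boolean mask of scene points inside the rectangular danger range of a
    dynamic risk source: width 1.5*d, half-length 0.5*l away from the target
    vehicle and 0.5*l*k towards it.  `heading` is a unit vector along the
    object's length axis; `target_dir` (+1/-1) gives the end facing the target."""
    k = k_speed(v_kmh)
    rel = np.asarray(points, dtype=float)[:, :2] - np.asarray(centroid, dtype=float)[:2]
    h = np.asarray(heading, dtype=float)
    h = h / np.linalg.norm(h)
    lon = rel @ h                                  # signed distance along length axis
    lat = rel @ np.array([-h[1], h[0]])            # distance across width axis
    near, far = 0.5 * l * k, 0.5 * l               # half-lengths towards / away from target
    lo, hi = (-far, near) if target_dir > 0 else (-near, far)
    return (lon >= lo) & (lon <= hi) & (np.abs(lat) <= 0.75 * d)
```

Overlapping danger ranges would be handled as the text says: by OR-ing the masks so each region is extracted only once.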
2. A2, B, C3, D2, E, F
The flow chart is shown in FIG. 1
A. Data acquisition
A2: in the automatic driving traffic scene, laser radar scanning is performed by the road-side laser radar in the road-side sensing unit together with a laser radar mounted on a calibration vehicle preset in the scene, obtaining point cloud data P1 of the traffic scene; this data is the data source for all subsequent links.
B. Driving safety risk field calculation
The driving safety risk field calculation module comprises a target detection sub-module and a safety field calculation sub-module, and a flow chart is shown in fig. 3.
B1, target detection sub-module. In the target detection sub-module, the scene point cloud obtained in step A undergoes deep-learning 3D target detection using the PV-RCNN algorithm. Scene point cloud data is input and the target detection result is obtained. Since the data source is lidar point cloud data, the lidar layout position determines the size, characteristics, etc. of the scene point cloud; the result is a bounding box X1 for each target in the scene, whose attributes are position, length, height, width, deflection angle, etc., as shown in fig. 4.
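The bounding box X1 with the attributes listed above might be represented as a simple record. This structure is an illustrative assumption, not the actual PV-RCNN output format:

```python
from dataclasses import dataclass

@dataclass
class DetectionBox:
    """One 3-D detection box X1: centre position, extents, deflection
    (yaw) angle about the vertical axis, and a class label."""
    x: float
    y: float
    z: float
    length: float
    width: float
    height: float
    yaw: float              # deflection angle, radians
    label: str = "vehicle"
```

Downstream modules (safety field calculation, data segmentation) would consume lists of such boxes together with the raw point cloud.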
B 2 And a scene acquisition sub-module. The scene acquisition sub-module is used for acquiring some characteristics and information in the scene in advance before the target detection sub-module, so that target detection and subsequent safe field calculation can be better performed, and various alternative schemes exist in the sub-module as follows:
B 21 : the camera sensor is added in the road side sensing unit, and the type of the object is judged by utilizing RGB information of a scene acquired by the camera and corresponding horizontal and vertical boundaries, so that the type of the static object is assisted to be distinguished.
B 22 : before the automatic processing of the target detection sub-module, a manual judging process is added, and a worker trained by related professions performs calibration work on the static object in the traffic scene, so that the purpose of distinguishing the static object is achieved.
B 23 : the type of the static object is distinguished by the lane-level information in the high-precision map according to the existing high-precision map and positioning to the position of the scene according to the coordinate system.
B 3 A security field calculation sub-module. Inputs to the sub-module are the type of static object and the target detection bounding box. By means of field theory methods such as gravitational field and magnetic field in physics, things possibly causing risks in a traffic environment are regarded as dangerous occurrence sources, the things are taken as centers to be diffused to the surroundings, and the field intensity of a dangerous field can be understood to be the magnitude of a risk coefficient at a certain distance from the dangerous occurrence sources. The closer to the dangerous center, the greater the possibility of accident, the farther the distance, the lower the accident probability, and when the distance approaches 0, the contact collision between the target vehicle and the dangerous occurrence source can be considered, namely, the traffic accident has occurred.
The driving safety risk field model comprises a static risk field and a dynamic risk field. I.e. driving safety risk field = static risk field + dynamic risk field.
E S =E R +E V
wherein ES is the field intensity vector of the driving safety risk field, ER that of the static risk field, and EV that of the dynamic risk field. The driving safety risk field model expresses the potential driving risk caused by traffic factors in an actual scene; the risk is measured by the probability of an accident and its severity.
The driving safety risk fields are divided into two types according to the difference of generating sources, namely a static risk field source and a dynamic risk field source:
1) Static risk field: the field source is an object which is relatively static in a traffic environment, is mainly a pavement marking line such as a lane dividing line and the like, and also comprises a hard dividing facility such as a central dividing belt and the like. For example, traffic regulations dictate that vehicles must not travel or traverse solid lanes. However, if the driver unintentionally leaves the current lane, the risk of violating the lane marking constraints is perceived and the driver will drive the vehicle back into the lane center. Meanwhile, the closer the vehicle is to the lane markings, the greater the risk. Driving risks are also related to road conditions, which may lead to high risks. Furthermore, the driving risk with respect to stationary objects is mainly affected by the visibility, the lower the visibility, the higher the driving risk.
Such objects have two characteristics: (1) leaving road construction aside, the object is static relative to the target vehicle and has no speed attribute, since its practical meaning is that of a dangerous boundary; (2) apart from some hard separation facilities, the object keeps the driver away from its position through legal effect, yet even if the driver actually crosses a lane line, a traffic accident does not necessarily occur immediately.
For this type of object, based on the above analysis, assume the static risk field source a is at (xa, ya); the field strength vector ER of the potential field it forms at (xj, yj) is:
wherein:
LTa is the risk coefficient of the static risk field source type a, determined by traffic regulations; generally LTa(hard separation facility) > LTa(non-crossable lane dividing line) > LTa(crossable lane dividing line). Common parameters for facilities and lane lines are: guardrail or green-belt central dividing strip: 20 to 25; sidewalk kerb: 18 to 20; yellow solid or dashed line: 15 to 18; white solid line: 10 to 15; white dashed line: 0 to 5.
Ra is a constant greater than 0, representing the road condition influence factor at (xa, ya), determined by traffic environment factors in the area near object a such as the road adhesion coefficient, road gradient, road curvature and visibility; in practical use, a fixed value is generally chosen for a section of road. The data interval is generally [0.5, 1.5].
f d The distance influence factor is a static risk field source a of different types and is determined by the type of the object, the width of the lane and the like. There are two types of lane dividing lines and hard dividing strips at present.
D is the lane width and d is the width of the target vehicle j; d is generally taken as the width of the bounding box of target vehicle j.
raj is the distance vector between the static risk field source a and the target vehicle j, where (xj, yj) is the centroid of target vehicle j and (xa, ya) is the foot of the perpendicular dropped from (xj, yj) onto the static risk field source a.
k 1 Is a constant greater than 0 and represents the magnification factor of the distance, since in general the risk of collision does not vary linearly with the distance between the two objects. General k 1 The range of the value of (2) is 0.5-1.5.
raj/|raj| indicates the direction of the field strength. In practical applications, even if the field intensity directions of two safety risk field sources at a certain point oppose each other, the risk at that point cannot be considered reduced; risks are therefore still superimposed as scalars.
The larger the value of ER, the higher the risk that the static risk field source a, which includes but is not limited to lane markings, poses to the target vehicle j. The field intensity distribution result is shown in fig. 5(a).
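The closed-form expression for ER is not legible in this text, so the sketch below combines the named parameters (LTa, Ra, fd, raj, k1) in an assumed inverse-power form purely for illustration; it is not the patent's actual formula:

```python
import numpy as np

def static_field_strength(lt_a, r_a, f_d, r_aj, k1=1.0):
    """Hedged sketch of the static risk field strength E_R.  ASSUMED form:
        |E_R| = LT_a * R_a * f_d / |r_aj|**k1,
    pointing along r_aj / |r_aj| (the direction of the field strength).
    The patent's real expression may differ; only the parameter roles
    (risk coefficient, road factor, distance factor, distance magnification
    exponent k1) are taken from the text."""
    r_aj = np.asarray(r_aj, dtype=float)
    dist = np.linalg.norm(r_aj)
    mag = lt_a * r_a * f_d / dist**k1
    return mag * r_aj / dist
```

Consistent with the text, larger LTa (stricter marking type) or smaller |raj| (closer to the marking) yields a larger field strength, and k1 > 1 makes the growth faster than linear as distance shrinks.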
2) Dynamic risk field: the field source is an object in a relatively dynamic state in a traffic environment, and the magnitude and the direction of the field intensity vector are determined by the attribute and the state of a moving object and road conditions. The dynamic object herein refers to a dynamic object that may actually collide with a vehicle and cause a significant loss, mainly vehicles, pedestrians, road-block facilities, and the like.
This type of object also has two features: (1) although the object may be stationary relative to the road surface, such as a roadside parking, a road barrier facility, etc., it still has a relative velocity with the dynamic target vehicle as a reference frame. (2) Such objects are strictly prohibited from collision, otherwise serious traffic accidents will necessarily occur.
Obviously, the risk of a relatively dynamic object does not increase linearly as distance decreases; the rate of risk increase grows faster as the distance shrinks. The invention therefore assumes that driving risk follows a power function of the vehicle-target distance.
For this class of objects, based on the above analysis, assume the dynamic risk field source b is at (xb, yb); the field strength vector EV of the potential field it forms at (xj, yj) is:
r bj =(x j -x b ,y j -y b )
wherein the x-axis is located on a road line and the y-axis is perpendicular to the road line.
r bj Is the distance vector of the dynamic risk field source b and the target vehicle j.
k2, k3 and G are constants greater than 0. k2 has the same meaning as k1 above; k3 is a risk correction for different speeds; G is analogous to the electrostatic force constant, describing the magnitude of the risk coefficient between two objects of unit mass at unit distance. k2 generally takes values in the range 0.5 to 1.5, k3 in the range 0.05 to 0.2, and G is usually 0.001.
Rb has the same meaning as Ra; the data interval used is likewise [0.5, 1.5].
Tbj is the type correction coefficient between the dynamic risk source b and the target vehicle j; the risk coefficient differs between, for example, vehicle-to-vehicle and vehicle-to-person collisions. Common type correction parameters are: vehicle-to-vehicle frontal collision: 2.5 to 3; vehicle-to-vehicle rear-end collision: 1 to 1.5; human-vehicle collision: 2 to 2.5; vehicle-barrier collision: 1.5 to 2.
vbj is the relative velocity between the dynamic risk source b and the target vehicle j, i.e. the vector difference between the speed vb of source b and the speed vj of object j. θ is the angle between the directions of vbj and rbj, positive in the clockwise direction.
The relative speed is calculated as:
vbj = vb - vj
If the target vehicle j is stationary, i.e. vj = 0, then vbj = vb; if the target vehicle j moves, vbj = vb - vj.
The greater the value of EV, the higher the risk that the dynamic risk field source b poses to the target vehicle j. The field intensity distribution result is shown in fig. 5(b).
Based on the driving safety risk calculation method, the risk of various objects in the road to a specific object can be analyzed. In view of the comprehensiveness of the data acquisition range and the advantages in the aspect of target positioning, the method selects the laser radar point cloud data as a data source and takes the point cloud scanning result without shielding at the road side as a calculation carrier.
For a certain target vehicle in the field, the risk size calculation flow of each object is as follows:
1) Through early-stage data acquisition, static scene data of a point cloud scanning result is constructed, and the specific method comprises the following steps:
a. collecting multi-frame point cloud data, dividing each frame of point cloud data into n statistical spaces, wherein the value range of n can be 50-100 according to different scanning ranges of the laser radar;
b. sequentially superpose each subsequent frame of point cloud data starting from the initial frame, manually removing dynamic objects during superposition and ensuring as far as possible that the point cloud contains no occluded blind regions;
c. after each superposition, detect the point cloud density of every statistical space; if it exceeds a threshold α (related to the point cloud density, generally 1000), randomly sample the points in that space to cap the density, finally obtaining an ideal global static point cloud background.
Static risk field sources in the static scene, including lane separation lines, central dividing strips, road edge areas and the like, are separated out manually, and a plane linear equation of each static risk field source is fitted by random sampling. Generally, more than 100 points should be collected uniformly along the line direction, and the collected points should not deviate too far from the target.
2) A certain frame of data is selected as the calculation time, as shown in fig. 7, and the previous frame is extracted as the object moving-speed reference. A 3D target detection and tracking algorithm based on point cloud data identifies the various objects (vehicles, pedestrians, etc.) participating in the safety field calculation in the calculation frame and the previous frame, and establishes the correspondence between the same object in the two frames. The 3D target detection and tracking method adopted in this step is not inventive and is not described further. In theory the algorithm is unrestricted, but considering timeliness in practical use, the computational efficiency of target detection and tracking should generally be not less than 1.25f, where f is the lidar point cloud scanning frequency.
Calculating the approximate centroid of the target vehicle by using the boundary frame of the same object, calculating the displacement of the centroid of the same object in the front and back frames, and finally calculating the moving speed of the object by using the scanning frame rate of the laser radar. For a newly added object without previous frame data for calculating the speed, the speed is regarded as the standard speed. The standard speed is the average speed in the point cloud scanning road section under the historical statistical condition, and the direction is consistent with the traffic lane where the target is located.
3) And (2) substituting the relative position, relative speed, type and other attributes of the target vehicle and other objects except the target vehicle and parameters such as the distance between the static risk field source and the target vehicle in step 1) and traffic environment factors such as road conditions into the running safety risk field calculation mechanism in sequence to obtain the risk of each object in the scanning range for calculating the target, thereby forming running safety risk distribution taking the target vehicle as a core, as shown in fig. 5 and 6.
C. Data segmentation
The data segmentation module flow diagram is shown in fig. 8. According to the input scene point cloud and bounding-box data, the module detects whether each point lies inside a bounding box, dividing the scene point cloud into two classes: point cloud inside the bounding boxes and point cloud outside them.
C3: security-risk-field-based sampling scheme. The scene point cloud data P1 from the data acquisition sub-module, and the target detection bounding boxes X1 and security risk field data S1 from the driving safety risk field calculation module, are the sub-module inputs. First, a condition judgment determines whether each point of P1 lies inside a bounding box X1, yielding in-box point cloud data P11 and out-of-box point cloud data P12. A security risk field value threshold f3 is then set, and P11 and P12 are sampled according to this threshold to obtain the segmented point cloud data P2.
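The C3 threshold screening step might be sketched as follows; the names are illustrative, and the per-point safety risk values S1 are assumed to be precomputed by the risk field module:

```python
import numpy as np

def screen_by_risk(points, risk_values, f3):
    """C3 screening step: keep only scene points whose safety-risk field
    value S1 exceeds the threshold f3, yielding the segmented cloud P2."""
    points = np.asarray(points)
    risk_values = np.asarray(risk_values)
    return points[risk_values > f3]
```

Choosing f3 trades transmission volume against perception coverage: a higher threshold keeps only the highest-risk points for release to the target vehicle.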
If the dangerous target is a static risk field source, taking a plane in which the risk field source is positioned as a center, taking a region with the intercepting width of d/2 as a dangerous range, wherein d is the width of the target vehicle.
If the dangerous object is a dynamic risk field source, taking the centroid of the dangerous object as the center, and intercepting a rectangular area with the width of 1.5d and the length of (0.5l+0.5l×k) as a dangerous range, wherein 0.5l part is the half side length far away from the target vehicle, and 0.5l×k is the half side length close to the target vehicle. d is the width of the dangerous object and l is the length of the dangerous object. k is a speed correction coefficient not smaller than 1, and the value interval is as follows depending on the speed of the dangerous target: v e (0, 30) km/h, k=2; v e (30, 70) km/h, k=3; v >70km/h, k=5.
And sequentially extracting the dangerous ranges according to the dangerous coefficient of the dangerous source, wherein the region where the dangerous ranges overlap is extracted only once. The final extracted total hazard range result may be provided to the target vehicle as perception assistance data for the target vehicle.
D. Data distribution
The flow chart of the data release module is shown in fig. 9. Based on the data segmentation result, the road-side sensing unit compresses the data and then establishes a data transmission channel with the target vehicle, where the target vehicle satisfies the following characteristics: at a certain moment on the time stamp, a vehicle with a certain number is located at a certain position in the scene. Then judge whether the target vehicle moves:
D 2 : post-release data-segmentation point cloud data P 2 Semi-finished product risk field data S 2 From semi-finished product risk field data S 2, Substituting the speed of the target vehicle to obtain safety risk field data S 1 。
E. Data fusion
The flow chart of the data fusion module is shown in fig. 10. The segmented point cloud data P2 is fused with the target-vehicle point cloud data P3 scanned by the target vehicle's lidar: a point cloud coordinate transformation matrix is designed to register the high-risk vehicle-end and road-side point clouds, yielding fused point cloud data P4; the fused point cloud data P4 is then compressed to obtain compressed point cloud data P5.
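The registration-and-concatenation step can be sketched with a homogeneous transform; the 4x4 matrix T_rv (road-side frame to vehicle frame) and the function name are illustrative assumptions, and compression is omitted:

```python
import numpy as np

def fuse_clouds(p2_roadside, p3_vehicle, T_rv):
    """Register the road-side high-risk cloud P2 into the vehicle frame with
    a 4x4 homogeneous transform T_rv, then concatenate it with the on-board
    cloud P3 to obtain the fused cloud P4."""
    p2 = np.asarray(p2_roadside, dtype=float)
    homo = np.hstack([p2, np.ones((len(p2), 1))])       # N x 4 homogeneous coords
    p2_in_vehicle = (homo @ np.asarray(T_rv, dtype=float).T)[:, :3]
    return np.vstack([np.asarray(p3_vehicle, dtype=float), p2_in_vehicle])
```

In practice T_rv would come from extrinsic calibration or a registration algorithm (e.g. ICP) between the two clouds; here it is simply taken as given.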
F. Method evaluation
The flow chart of the method evaluation module is shown in fig. 11; it is an optional sub-module, described below. The compressed point cloud data P5 is passed through the PV-RCNN deep-learning target detection algorithm to obtain a target detection result R1.
For the target detection result R 1 Evaluation was performed. The reference system is shown in fig. 12.
V represents an unprocessed vehicle end original point cloud;
i represents an original point cloud obtained by a road side sensing unit which is not processed;
I1 represents the point cloud obtained when the road-side sensing unit segments the original point cloud using division in the data segmentation method;
I2 represents the point cloud obtained when the road-side sensing unit segments the original point cloud using sampling in the data segmentation method;
I1S represents the point cloud obtained when the road-side sensing unit segments the original point cloud using safety-field-based division in the data segmentation method;
I2S represents the point cloud obtained when the road-side sensing unit segments the original point cloud using safety-field-based sampling in the data segmentation method;
and finally, obtaining detection results of different data segmentation methods and giving out evaluation.
Claims (11)
- A laser radar point cloud dynamic segmentation and fusion method based on a driving safety risk field, comprising the following steps:
(1) Data acquisition
Laying a road-side laser radar and scanning with it to obtain point cloud data (P1) of a traffic scene; the point cloud data (P1) is the data source of the subsequent steps;
(2) Driving safety risk field calculation
The driving safety risk field comprises a static risk field and a dynamic risk field;
Field intensity vector of the driving safety risk field: ES = ER + EV,
wherein ES is the field intensity vector of the driving safety risk field, ER is the field intensity vector of the static risk field, and EV is the field intensity vector of the dynamic risk field; the field intensity vector of the safety risk field represents the risk posed by various objects in the road to the target vehicle;
From the point cloud data (P1), the target detection bounding box (X1), the safety risk field value (S1) and the semi-finished safety risk field (S2) are obtained by calculation;
(3) Data segmentation
According to the point cloud data (P1) acquired in step (1) and the target detection bounding box (X1), a designed algorithm detects whether the point cloud data (P1) lies within a target detection bounding box (X1), dividing the point cloud data (P1) into two classes: in-box point cloud data (P11) and out-of-box point cloud data (P12);
Based on the safety risk field value (S1) of step (2), a safety risk field threshold (f3) is set, and objects whose safety risk field value exceeds the threshold (f3) are screened out as dangerous targets by threshold screening; taking each screened dangerous target as the centre, sampling or division is combined to extract point cloud data (P2) of a certain surrounding region as the danger range.
- The method of claim 1, further comprising at least one of the following steps:
(4) Data distribution
The segmented point cloud data (P2) is compressed by the road-side laser radar;
A data transmission channel between the road-side laser radar and the target vehicle is established; the target vehicle satisfies the following characteristics: at a certain moment on the time stamp, a vehicle with a certain number is located at a certain position in the automatic driving traffic scene;
Whether the target vehicle is moving is determined: if the target vehicle is stationary, the released data are the segmented point cloud data (P2) and the vector sum of the static and dynamic risk fields, i.e. the safety risk field strength vector (ES); if the target vehicle moves, the released data are the segmented point cloud data (P2), the static risk field (ER) and the semi-finished risk field data (S2), and substituting the target vehicle's speed gives the vector sum of the target vehicle's safety risk fields, whose modulus is a numerical value;
(5) Data fusion
The segmented point cloud data (P2) and the point cloud data (P3) scanned by the target vehicle's laser radar are fused: a point cloud coordinate transformation matrix is designed to register the high-risk vehicle-end and road-side point clouds, obtaining fused point cloud data (P4); the fused point cloud data (P4) is compressed to obtain compressed point cloud data (P5);
(6) Method evaluation
Experiments are performed on different data segmentation methods:
V represents the unprocessed vehicle-end original point cloud;
I represents the unprocessed original point cloud obtained by the road-side laser radar;
I1 represents the point cloud obtained when the road-side laser radar segments the original point cloud using division in the data segmentation method;
I2 represents the point cloud obtained when the road-side laser radar segments the original point cloud using sampling in the data segmentation method;
I1S represents the point cloud obtained when the road-side laser radar segments the original point cloud using safety-field-based division in the data segmentation method;
I2S represents the point cloud obtained when the road-side laser radar segments the original point cloud using safety-field-based sampling in the data segmentation method;
finally, detection results of the different data segmentation methods are obtained and evaluated.
- The method of claim 1 or 2, wherein the field source of the static risk field is an object in a relatively stationary state in a traffic environment, including lane dividing lines and other pavement markings, and also rigid dividing facilities such as central dividing strips;
the field intensity vector of the static risk field is calculated as follows:
the field strength vector ER represents the potential field formed by a static risk field source a at (xa, ya), evaluated at (xj, yj);
wherein:
LTa is the risk coefficient of the static risk field source type a;
Ra is a constant greater than 0, representing the road condition influence factor at (xa, ya);
fd is the distance influence factor of static risk field source a;
raj is the distance vector between static risk field source a and target vehicle j, where (xj, yj) is the centroid of target vehicle j and (xa, ya) is the intersection of the perpendicular from (xj, yj) with lane marking a1;
raj/|raj| indicates the direction of the field strength;
the greater the value of ER, the higher the risk that the static risk field source a, including but not limited to lane markings, poses to the target vehicle j.
- The method of claim 3, wherein when the static risk field source a is a lane dividing line,
- The method of claim 3, wherein when the static risk field source a is a rigid separation strip,
- The method of claim 1 or 2, wherein the field source of the dynamic risk field is an object in relative motion in the traffic environment, including vehicles, pedestrians, and road-blocking facilities; the field strength vector of the dynamic risk field is calculated as follows: the field strength vector E_V represents the field strength, at (x_j, y_j), of the potential field formed by a dynamic risk field source b located at (x_b, y_b); r_bj = (x_j − x_b, y_j − y_b), where the x-axis lies along the road line and the y-axis is perpendicular to the road line; r_bj is the distance vector between the dynamic risk field source b and the target vehicle j; k_2, k_3 and G are constants greater than 0; R_b has the same meaning as R_a and represents the road condition influence factor at (x_b, y_b); T_bj is the type correction coefficient between the dynamic risk field source b and the target vehicle j; v_bj is the relative speed between the dynamic risk field source b and the target vehicle j; θ is the angle between the directions of v_bj and r_bj, positive in the clockwise direction; the greater the value of E_V, the higher the risk that the dynamic risk field source b poses to the target vehicle j.
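As with E_R, the E_V formula itself is not reproduced in this text. The sketch below assumes the shape common in published driving-safety-field models, magnitude (G·R_b·T_bj/|r_bj|^k2)·exp(k3·v_bj·cosθ), which uses exactly the constants defined in the claim but is not necessarily the patent's exact expression:

```python
import numpy as np

def dynamic_field_strength(src_xy, target_xy, v_bj, theta,
                           G=1.0, R_b=1.0, T_bj=1.0, k2=1.0, k3=0.05):
    """Field strength E_V of dynamic source b at target vehicle j.
    The scalar form (G*R_b*T_bj / |r_bj|**k2) * exp(k3*v_bj*cos(theta))
    is an assumed shape, not the patent's (unreproduced) formula.
    Approaching sources (cos(theta) > 0) raise the risk exponentially."""
    r_bj = np.asarray(target_xy, float) - np.asarray(src_xy, float)
    dist = np.linalg.norm(r_bj)
    magnitude = (G * R_b * T_bj / dist**k2) * np.exp(k3 * v_bj * np.cos(theta))
    return magnitude * (r_bj / dist)

# Source 10 m behind the target, closing head-on at 20 m/s
E_V = dynamic_field_strength((0., 0.), (10., 0.), v_bj=20.0, theta=0.0)
```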
- The method of claim 1 or 2, wherein for a given target vehicle in the driving safety risk field, the risk of the various objects is calculated as follows:
  1) constructing static scene data from a point cloud scanning result through early-stage data acquisition; separating the static risk field sources in the static scene manually, the static risk field sources including lane dividing lines, central separation strips and road edge areas, and obtaining the plane linear equations of the static risk field sources by random sampling and fitting;
  2) selecting the data of a given frame as the calculation time and extracting the data of the previous frame as the reference for object moving speed; using a 3D target detection and tracking algorithm based on point cloud data to identify, in both the calculation frame and the previous frame, the various objects participating in the safety field calculation, generally vehicles and pedestrians, and establishing the correspondence between the same object in the two frames; calculating the moving speed of each object from the bounding boxes of the same object and the scanning frame rate of the lidar; for a newly appearing object with no previous-frame data from which to calculate a speed, setting a default speed according to the object type;
  3) substituting the relative positions and types of the target vehicle and the other objects, the distances between the static risk field sources of step 1) and the target vehicle, and the other parameters, including road condition and traffic environment factors, into the field strength vector calculation method of the driving safety risk field; in the relative speed v_bj, setting the speed of the target vehicle as an unknown parameter and deferring the calculation, so that the relative speed is an expression containing an unknown parameter; obtaining the safety risk posed by each object in the whole traffic scene to the target vehicle, thereby forming the traffic safety risk distribution centred on the calculation target.
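The speed estimate in step 2) reduces to the displacement of a tracked object's bounding-box centroid between two consecutive frames, multiplied by the lidar frame rate. A small sketch; the default-speed table is hypothetical:

```python
def object_speed(centroid_prev, centroid_curr, frame_rate_hz):
    """Speed of a tracked object from the displacement of its bounding-box
    centroid between two consecutive lidar frames. frame_rate_hz is the
    lidar scanning frame rate in frames per second."""
    dx = centroid_curr[0] - centroid_prev[0]
    dy = centroid_curr[1] - centroid_prev[1]
    displacement = (dx * dx + dy * dy) ** 0.5   # metres per frame interval
    return displacement * frame_rate_hz         # metres per second

# Hypothetical per-type defaults for objects with no previous-frame match
DEFAULT_SPEEDS = {"vehicle": 8.0, "pedestrian": 1.5}

# A 10 Hz lidar sees a centroid move 0.3 m between frames
v = object_speed((0.0, 0.0), (0.0, 0.3), frame_rate_hz=10.0)
```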
- The method of claim 1 or 2, wherein the screening method for the hazard range is:
  5.1) if the dangerous target is a static risk field source, taking the plane in which the risk field source lies as the centre and intercepting a region of width d/2 as the hazard range, where d is the width of the dangerous target;
  5.2) if the dangerous target is a dynamic risk field source, taking the centroid of the dangerous target as the centre and intercepting a rectangular region of width 1.5d and length 0.5l + 0.5l×k as the hazard range, where d is the width of the dangerous target, l is its length, and k is a speed correction coefficient not smaller than 1;
  5.3) extracting the hazard ranges in order of the risk coefficient of the dangerous targets, regions where hazard ranges overlap being extracted only once;
  5.4) for a given target vehicle in the driving safety risk field, providing the finally extracted total hazard range to the target vehicle as its perception auxiliary data.
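Steps 5.2) and 5.3) can be sketched as below. Treating "overlapping regions extracted once" as skipping any rectangle fully contained in an already-extracted one is a simplification of the claim, which would more generally require clipping partial overlaps:

```python
def dynamic_hazard_rect(centroid, d, l, k):
    """Step 5.2): axis-aligned hazard rectangle around a dynamic source,
    width 1.5*d and length 0.5*l + 0.5*l*k, centred on the centroid
    (k >= 1 is the speed correction coefficient)."""
    assert k >= 1.0
    w, h = 1.5 * d, 0.5 * l + 0.5 * l * k
    cx, cy = centroid
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)  # xmin,ymin,xmax,ymax

def extract_ranges(targets):
    """Step 5.3): extract hazard rectangles in descending risk-coefficient
    order; a rectangle fully inside an earlier one is not extracted again."""
    rects = []
    for t in sorted(targets, key=lambda t: t["risk"], reverse=True):
        r = dynamic_hazard_rect(t["centroid"], t["d"], t["l"], t["k"])
        contained = any(r[0] >= q[0] and r[1] >= q[1] and
                        r[2] <= q[2] and r[3] <= q[3] for q in rects)
        if not contained:
            rects.append(r)
    return rects

targets = [{"centroid": (0, 0), "d": 2.0, "l": 4.0, "k": 1.0, "risk": 0.9},
           {"centroid": (0, 0), "d": 1.0, "l": 2.0, "k": 1.0, "risk": 0.5}]
ranges = extract_ranges(targets)
```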
- The method of claim 1 or 2, wherein the calculation of the driving safety risk field comprises the following modules:
  6.1) a target detection sub-module;
  6.2) a scene acquisition sub-module;
  6.3) a safety field calculation sub-module.
- The method of claim 9, wherein the scene acquisition sub-module employs one of:
  6.2.1) adding a camera sensor to the roadside lidar, and using the RGB information of the scene acquired by the camera sensor, together with the corresponding horizontal and vertical boundaries, to assist in distinguishing the types of static objects;
  6.2.2) before the target detection sub-module performs target detection, adding a manual judgment step in which professionally trained staff calibrate the static objects in the traffic scene so as to distinguish them;
  6.2.3) using an existing high-definition map, distinguishing static object types by means of the lane-level information in the map according to the scene position located in its coordinate system.
- The method of claim 1 or 2, wherein the hazard range is extracted as follows: setting an in-bounding-box sampling weight (f1) and an out-of-bounding-box sampling weight (f2); randomly sampling the in-bounding-box point cloud data (P11) and the out-of-bounding-box point cloud data (P12) accordingly; and then merging the sampled in-bounding-box point cloud data (P11) and out-of-bounding-box point cloud data (P12) to obtain the point cloud data (P2).
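A sketch of the weighted sampling, interpreting f1 and f2 as keep-fractions for the points inside and outside the detection bounding boxes; the claim does not state how the weights are applied, so this interpretation is an assumption:

```python
import random

def weighted_box_sampling(p11_in_box, p12_out_box, f1, f2, seed=0):
    """Randomly keep a fraction f1 of the in-bounding-box points (P11) and a
    fraction f2 of the out-of-box points (P12), then merge them into P2.
    Typically f1 > f2 so the hazard range retains more detail."""
    rng = random.Random(seed)
    keep_in = rng.sample(p11_in_box, int(len(p11_in_box) * f1))
    keep_out = rng.sample(p12_out_box, int(len(p12_out_box) * f2))
    return keep_in + keep_out  # P2

# 100 points in each region; keep 90% inside boxes, 20% outside
p11 = [(float(i), 0.0, 0.0) for i in range(100)]
p12 = [(float(i), 5.0, 0.0) for i in range(100)]
p2 = weighted_box_sampling(p11, p12, f1=0.9, f2=0.2)
```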
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110000327 | 2021-01-01 | ||
CN202110228419 | 2021-03-01 | ||
PCT/CN2021/085146 WO2022141910A1 (en) | 2021-01-01 | 2021-04-01 | Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field |
CNPCT/CN2021/085146 | 2021-04-01 | ||
PCT/CN2022/084738 WO2022206942A1 (en) | 2021-01-01 | 2022-04-01 | Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117441197A true CN117441197A (en) | 2024-01-23 |
Family
ID=82260124
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202180011148.3A Pending CN116685873A (en) | 2021-01-01 | 2021-04-01 | Vehicle-road cooperation-oriented perception information fusion representation and target detection method |
CN202280026659.7A Pending CN117836653A (en) | 2021-01-01 | 2022-04-01 | Road side millimeter wave radar calibration method based on vehicle-mounted positioning device |
CN202280026656.3A Pending CN117836667A (en) | 2021-01-01 | 2022-04-01 | Static and non-static object point cloud identification method based on road side sensing unit |
CN202280026658.2A Pending CN117441113A (en) | 2021-01-01 | 2022-04-01 | Vehicle-road cooperation-oriented perception information fusion representation and target detection method |
CN202280026657.8A Pending CN117441197A (en) | 2021-01-01 | 2022-04-01 | Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202180011148.3A Pending CN116685873A (en) | 2021-01-01 | 2021-04-01 | Vehicle-road cooperation-oriented perception information fusion representation and target detection method |
CN202280026659.7A Pending CN117836653A (en) | 2021-01-01 | 2022-04-01 | Road side millimeter wave radar calibration method based on vehicle-mounted positioning device |
CN202280026656.3A Pending CN117836667A (en) | 2021-01-01 | 2022-04-01 | Static and non-static object point cloud identification method based on road side sensing unit |
CN202280026658.2A Pending CN117441113A (en) | 2021-01-01 | 2022-04-01 | Vehicle-road cooperation-oriented perception information fusion representation and target detection method |
Country Status (3)
Country | Link |
---|---|
CN (5) | CN116685873A (en) |
GB (2) | GB2618936A (en) |
WO (9) | WO2022141912A1 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114724362B (en) * | 2022-03-23 | 2022-12-27 | 中交信息技术国家工程实验室有限公司 | Vehicle track data processing method |
CN115236628B (en) * | 2022-07-26 | 2024-05-31 | 中国矿业大学 | Method for detecting residual cargoes in carriage based on laser radar |
CN115358530A (en) * | 2022-07-26 | 2022-11-18 | 上海交通大学 | Vehicle-road cooperative sensing roadside test data quality evaluation method |
CN115113157B (en) * | 2022-08-29 | 2022-11-22 | 成都瑞达物联科技有限公司 | Beam pointing calibration method based on vehicle-road cooperative radar |
CN115480243B (en) * | 2022-09-05 | 2024-02-09 | 江苏中科西北星信息科技有限公司 | Multi-millimeter wave radar end-edge cloud fusion calculation integration and application method thereof |
CN115166721B (en) * | 2022-09-05 | 2023-04-07 | 湖南众天云科技有限公司 | Radar and GNSS information calibration fusion method and device in roadside sensing equipment |
CN115272493B (en) * | 2022-09-20 | 2022-12-27 | 之江实验室 | Abnormal target detection method and device based on continuous time sequence point cloud superposition |
CN115235478B (en) * | 2022-09-23 | 2023-04-07 | 武汉理工大学 | Intelligent automobile positioning method and system based on visual label and laser SLAM |
CN115830860B (en) * | 2022-11-17 | 2023-12-15 | 西部科学城智能网联汽车创新中心(重庆)有限公司 | Traffic accident prediction method and device |
CN115966084B (en) * | 2023-03-17 | 2023-06-09 | 江西昂然信息技术有限公司 | Holographic intersection millimeter wave radar data processing method and device and computer equipment |
CN116189116B (en) * | 2023-04-24 | 2024-02-23 | 江西方兴科技股份有限公司 | Traffic state sensing method and system |
CN117471461B (en) * | 2023-12-26 | 2024-03-08 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Road side radar service device and method for vehicle-mounted auxiliary driving system |
CN117452392B (en) * | 2023-12-26 | 2024-03-08 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Radar data processing system and method for vehicle-mounted auxiliary driving system |
CN117961915B (en) * | 2024-03-28 | 2024-06-04 | 太原理工大学 | Intelligent auxiliary decision-making method of coal mine tunneling robot |
Family Cites Families (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6661370B2 (en) * | 2001-12-11 | 2003-12-09 | Fujitsu Ten Limited | Radar data processing apparatus and data processing method |
US9562971B2 (en) * | 2012-11-22 | 2017-02-07 | Geosim Systems Ltd. | Point-cloud fusion |
KR101655606B1 (en) * | 2014-12-11 | 2016-09-07 | 현대자동차주식회사 | Apparatus for tracking multi object using lidar and method thereof |
TWI597513B (en) * | 2016-06-02 | 2017-09-01 | 財團法人工業技術研究院 | Positioning system, onboard positioning device and positioning method thereof |
CN105892471B (en) * | 2016-07-01 | 2019-01-29 | 北京智行者科技有限公司 | Automatic driving method and apparatus |
WO2018126248A1 (en) * | 2017-01-02 | 2018-07-05 | Okeeffe James | Micromirror array for feedback-based image resolution enhancement |
KR102056147B1 (en) * | 2016-12-09 | 2019-12-17 | (주)엠아이테크 | Registration method of distance data and 3D scan data for autonomous vehicle and method thereof |
CN106846494A (en) * | 2017-01-16 | 2017-06-13 | 青岛海大新星软件咨询有限公司 | Oblique photograph three-dimensional building thing model automatic single-body algorithm |
US10281920B2 (en) * | 2017-03-07 | 2019-05-07 | nuTonomy Inc. | Planning for unknown objects by an autonomous vehicle |
CN108629231B (en) * | 2017-03-16 | 2021-01-22 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, apparatus, device and storage medium |
CN107133966B (en) * | 2017-03-30 | 2020-04-14 | 浙江大学 | Three-dimensional sonar image background segmentation method based on sampling consistency algorithm |
CN108932462B (en) * | 2017-05-27 | 2021-07-16 | 华为技术有限公司 | Driving intention determining method and device |
FR3067495B1 (en) * | 2017-06-08 | 2019-07-05 | Renault S.A.S | METHOD AND SYSTEM FOR IDENTIFYING AT LEAST ONE MOVING OBJECT |
CN109509260B (en) * | 2017-09-14 | 2023-05-26 | 阿波罗智能技术(北京)有限公司 | Labeling method, equipment and readable medium of dynamic obstacle point cloud |
CN107609522B (en) * | 2017-09-19 | 2021-04-13 | 东华大学 | Information fusion vehicle detection system based on laser radar and machine vision |
CN108152831B (en) * | 2017-12-06 | 2020-02-07 | 中国农业大学 | Laser radar obstacle identification method and system |
CN108010360A (en) * | 2017-12-27 | 2018-05-08 | 中电海康集团有限公司 | A kind of automatic Pilot context aware systems based on bus or train route collaboration |
CN108639059B (en) * | 2018-05-08 | 2019-02-19 | 清华大学 | Driver based on least action principle manipulates behavior quantization method and device |
CN109188379B (en) * | 2018-06-11 | 2023-10-13 | 深圳市保途者科技有限公司 | Automatic calibration method for driving auxiliary radar working angle |
EP3819668A4 (en) * | 2018-07-02 | 2021-09-08 | Sony Semiconductor Solutions Corporation | Information processing device, information processing method, computer program, and moving body device |
US10839530B1 (en) * | 2018-09-04 | 2020-11-17 | Apple Inc. | Moving point detection |
CN109297510B (en) * | 2018-09-27 | 2021-01-01 | 百度在线网络技术(北京)有限公司 | Relative pose calibration method, device, equipment and medium |
CN111429739A (en) * | 2018-12-20 | 2020-07-17 | 阿里巴巴集团控股有限公司 | Driving assisting method and system |
JP7217577B2 (en) * | 2019-03-20 | 2023-02-03 | フォルシアクラリオン・エレクトロニクス株式会社 | CALIBRATION DEVICE, CALIBRATION METHOD |
CN110220529B (en) * | 2019-06-17 | 2023-05-23 | 深圳数翔科技有限公司 | Positioning method for automatic driving vehicle at road side |
CN110296713B (en) * | 2019-06-17 | 2024-06-04 | 广州卡尔动力科技有限公司 | Roadside automatic driving vehicle positioning navigation system and single/multiple vehicle positioning navigation method |
CN110532896B (en) * | 2019-08-06 | 2022-04-08 | 北京航空航天大学 | Road vehicle detection method based on fusion of road side millimeter wave radar and machine vision |
CN110443978B (en) * | 2019-08-08 | 2021-06-18 | 南京联舜科技有限公司 | Tumble alarm device and method |
CN110458112B (en) * | 2019-08-14 | 2020-11-20 | 上海眼控科技股份有限公司 | Vehicle detection method and device, computer equipment and readable storage medium |
CN110850378B (en) * | 2019-11-22 | 2021-11-19 | 深圳成谷科技有限公司 | Automatic calibration method and device for roadside radar equipment |
CN110850431A (en) * | 2019-11-25 | 2020-02-28 | 盟识(上海)科技有限公司 | System and method for measuring trailer deflection angle |
CN110906939A (en) * | 2019-11-28 | 2020-03-24 | 安徽江淮汽车集团股份有限公司 | Automatic driving positioning method and device, electronic equipment, storage medium and automobile |
CN111121849B (en) * | 2020-01-02 | 2021-08-20 | 大陆投资(中国)有限公司 | Automatic calibration method for orientation parameters of sensor, edge calculation unit and roadside sensing system |
CN111999741B (en) * | 2020-01-17 | 2023-03-14 | 青岛慧拓智能机器有限公司 | Method and device for detecting roadside laser radar target |
CN111157965B (en) * | 2020-02-18 | 2021-11-23 | 北京理工大学重庆创新中心 | Vehicle-mounted millimeter wave radar installation angle self-calibration method and device and storage medium |
CN111476822B (en) * | 2020-04-08 | 2023-04-18 | 浙江大学 | Laser radar target detection and motion tracking method based on scene flow |
CN111554088B (en) * | 2020-04-13 | 2022-03-22 | 重庆邮电大学 | Multifunctional V2X intelligent roadside base station system |
CN111192295B (en) * | 2020-04-14 | 2020-07-03 | 中智行科技有限公司 | Target detection and tracking method, apparatus, and computer-readable storage medium |
CN111537966B (en) * | 2020-04-28 | 2022-06-10 | 东南大学 | Array antenna error correction method suitable for millimeter wave vehicle-mounted radar field |
CN111766608A (en) * | 2020-06-12 | 2020-10-13 | 苏州泛像汽车技术有限公司 | Environmental perception system based on laser radar |
CN111880191B (en) * | 2020-06-16 | 2023-03-28 | 北京大学 | Map generation method based on multi-agent laser radar and visual information fusion |
CN111880174A (en) * | 2020-07-03 | 2020-11-03 | 芜湖雄狮汽车科技有限公司 | Roadside service system for supporting automatic driving control decision and control method thereof |
CN111914664A (en) * | 2020-07-06 | 2020-11-10 | 同济大学 | Vehicle multi-target detection and track tracking method based on re-identification |
CN111985322B (en) * | 2020-07-14 | 2024-02-06 | 西安理工大学 | Road environment element sensing method based on laser radar |
CN111862157B (en) * | 2020-07-20 | 2023-10-10 | 重庆大学 | Multi-vehicle target tracking method integrating machine vision and millimeter wave radar |
CN112019997A (en) * | 2020-08-05 | 2020-12-01 | 锐捷网络股份有限公司 | Vehicle positioning method and device |
CN112509333A (en) * | 2020-10-20 | 2021-03-16 | 智慧互通科技股份有限公司 | Roadside parking vehicle track identification method and system based on multi-sensor sensing |
- 2021
- 2021-04-01 GB GB2313215.2A patent/GB2618936A/en active Pending
- 2021-04-01 WO PCT/CN2021/085148 patent/WO2022141912A1/en active Application Filing
- 2021-04-01 WO PCT/CN2021/085149 patent/WO2022141913A1/en active Application Filing
- 2021-04-01 WO PCT/CN2021/085146 patent/WO2022141910A1/en unknown
- 2021-04-01 WO PCT/CN2021/085150 patent/WO2022141914A1/en unknown
- 2021-04-01 WO PCT/CN2021/085147 patent/WO2022141911A1/en unknown
- 2021-04-01 CN CN202180011148.3A patent/CN116685873A/en active Pending
- 2021-04-01 GB GB2316625.9A patent/GB2620877A/en active Pending
- 2022
- 2022-04-01 CN CN202280026659.7A patent/CN117836653A/en active Pending
- 2022-04-01 CN CN202280026656.3A patent/CN117836667A/en active Pending
- 2022-04-01 WO PCT/CN2022/084929 patent/WO2022206978A1/en active Application Filing
- 2022-04-01 CN CN202280026658.2A patent/CN117441113A/en active Pending
- 2022-04-01 WO PCT/CN2022/084925 patent/WO2022206977A1/en active Application Filing
- 2022-04-01 CN CN202280026657.8A patent/CN117441197A/en active Pending
- 2022-04-01 WO PCT/CN2022/084912 patent/WO2022206974A1/en active Application Filing
- 2022-04-01 WO PCT/CN2022/084738 patent/WO2022206942A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN117836667A (en) | 2024-04-05 |
GB202316625D0 (en) | 2023-12-13 |
WO2022206977A1 (en) | 2022-10-06 |
WO2022206974A1 (en) | 2022-10-06 |
CN116685873A (en) | 2023-09-01 |
WO2022141911A1 (en) | 2022-07-07 |
WO2022141914A1 (en) | 2022-07-07 |
WO2022141913A1 (en) | 2022-07-07 |
WO2022206978A1 (en) | 2022-10-06 |
WO2022141910A1 (en) | 2022-07-07 |
CN117836653A (en) | 2024-04-05 |
GB2620877A (en) | 2024-01-24 |
WO2022141912A1 (en) | 2022-07-07 |
WO2022206942A1 (en) | 2022-10-06 |
GB2618936A (en) | 2023-11-22 |
CN117441113A (en) | 2024-01-23 |
GB202313215D0 (en) | 2023-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117441197A (en) | Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field | |
Han et al. | Research on road environmental sense method of intelligent vehicle based on tracking check | |
CN108345822B (en) | Point cloud data processing method and device | |
CN111874006B (en) | Route planning processing method and device | |
GB2621048A (en) | Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field | |
WO2018020954A1 (en) | Database construction system for machine-learning | |
CN110414418B (en) | Road detection method for multi-scale fusion of image-laser radar image data | |
CN114970321A (en) | Scene flow digital twinning method and system based on dynamic trajectory flow | |
CN113705636B (en) | Method and device for predicting track of automatic driving vehicle and electronic equipment | |
CN113359709B (en) | Unmanned motion planning method based on digital twins | |
US11628850B2 (en) | System for generating generalized simulation scenarios | |
CN107808524B (en) | Road intersection vehicle detection method based on unmanned aerial vehicle | |
CN114821507A (en) | Multi-sensor fusion vehicle-road cooperative sensing method for automatic driving | |
US20220146277A1 (en) | Architecture for map change detection in autonomous vehicles | |
DE112021006101T5 (en) | Systems and methods for object detection with LiDAR decorrelation | |
CN114419874A (en) | Target driving safety risk early warning method based on data fusion of roadside sensing equipment | |
CN115019043A (en) | Image point cloud fusion three-dimensional target detection method based on cross attention mechanism | |
CN117015792A (en) | System and method for generating object detection tags for automated driving with concave image magnification | |
DE112021005607T5 (en) | Systems and methods for camera-LiDAR-fused object detection | |
CN116052124A (en) | Multi-camera generation local map template understanding enhanced target detection method and system | |
Tarko et al. | Tscan: Stationary lidar for traffic and safety studies—object detection and tracking | |
CN114882182A (en) | Semantic map construction method based on vehicle-road cooperative sensing system | |
CN113298781B (en) | Mars surface three-dimensional terrain detection method based on image and point cloud fusion | |
CN114648549A (en) | Traffic scene target detection and positioning method fusing vision and laser radar | |
Wang et al. | Lane detection algorithm based on temporal–spatial information matching and fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||