WO2022206942A1 - A lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field

A lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field

Info

Publication number
WO2022206942A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
data
field
risk
risk field
Prior art date
Application number
PCT/CN2022/084738
Other languages
English (en)
French (fr)
Inventor
许军
赵聪
倪澜涛
陆日琪
Original Assignee
许军
马儒争
Priority date
Filing date
Publication date
Application filed by 许军, 马儒争
Priority to CN202280026657.8A (published as CN117441197A)
Publication of WO2022206942A1


Classifications

    • G01S7/40 Means for monitoring or calibrating
    • G01S7/003 Transmission of data between radar, sonar or lidar systems and remote stations
    • G01S7/4808 Evaluating distance, position or velocity data
    • G01S7/4972 Alignment of sensor
    • G01S17/87 Combinations of systems using electromagnetic waves other than radio waves
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G06F18/20 Pattern recognition; Analysing
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G08G1/0116 Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
    • G08G1/0133 Traffic data processing for classifying traffic situation
    • G08G1/0141 Measuring and analyzing of parameters relative to traffic conditions for traffic information dissemination
    • G08G1/0145 Measuring and analyzing of parameters relative to traffic conditions for active traffic flow control
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G08G1/048 Detecting movement of traffic with provision for compensation of environmental or other conditions, e.g. snow, vehicle stopped at detector
    • G08G1/164 Anti-collision systems: centralised systems, e.g. external to vehicles
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F9/451 Execution arrangements for user interfaces
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation

Definitions

  • The invention relates to vehicle automatic driving perception assistance technology, in particular to a vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field. It is mainly oriented to infrastructure data collection, driving safety risk field calculation, data segmentation, data publishing and data fusion in a vehicle-road collaborative environment, to enhance the hazard perception capabilities of autonomous vehicles.
  • Autonomous driving technology is an emerging technology in the field of transportation that is attracting increasing investment in China and worldwide. All countries are trying to develop and establish safe and efficient autonomous driving technology routes. Among them, a solution strongly supported in China is the vehicle-road coordination model.
  • The vehicle-road coordination model does not rely only on single-vehicle intelligence, but extends the intelligent system to the entire traffic environment, thereby improving the overall operational safety and efficiency of the whole transportation system.
  • In existing driver assistance, the safety distance model is mainly used: when the following distance is less than the safe distance, the assistance system issues an alarm and brakes automatically.
  • Many safety distance models determine the safety state of the vehicle by analyzing in real time the safe distance for the relative movement of the leading and following vehicles.
  • Driver safety assistance algorithms are mainly based on the car's current position (CCP), time to lane crossing (TLC) and variable rumble strips (VRBS).
  • Field theory has become an emerging direction in autonomous driving safety; it was originally used for vehicle and robot navigation.
  • A distinct advantage is that it allows the vehicle to navigate autonomously using only its own position and local sensor measurements.
  • Obstacles around the vehicle are modeled as repulsive potential fields (risk fields).
  • the vehicle can use the field strength gradient at its location to generate control actions to navigate while avoiding obstacles.
  • At present, field theory is mainly applied to the motion planning of autonomous vehicles and to modeling driver behavior in specific traffic scenarios, such as car following.
  • However, risk factors such as the driver's personality and psycho-physiological characteristics, as well as complex road conditions, are not fully considered, and the driver-vehicle-road interaction is insufficiently described; the practical application of these models is therefore limited.
  • Lidar is a widely used and effective technical means. It has the advantages of a wide scanning range and intuitive results, and it is unaffected by ambient natural light, which makes it very suitable for perception in autonomous driving.
  • The output of lidar is in point cloud format, a relatively low-level data representation.
  • the scanned data is recorded in the form of points. Each point contains three-dimensional coordinates, and some may contain color information (RGB) or reflection intensity information (Intensity). Similar to the vigorous development of image processing technology, the processing methods of lidar point cloud data are gradually increasing, involving target detection, target tracking, semantic segmentation and other directions.
  • Semantic segmentation is a basic task in computer vision. Semantic segmentation can have a more fine-grained understanding of images, which is very important in the field of autonomous driving.
  • Point cloud semantic segmentation is an extension of image semantic segmentation in computer vision: facing the raw point cloud data, it perceives the target types, quantities and other characteristics in the scene, and renders the points of the same target in the same color.
  • 3D point cloud segmentation requires knowledge of both the global geometry and fine-grained details of each point. According to the different segmentation granularity, 3D point cloud segmentation methods can be divided into three categories: semantic segmentation, instance segmentation and partial segmentation.
  • the effect of point cloud segmentation has a lot to do with the quality of point cloud data.
  • the roadside equipment can provide unprocessed point cloud data to maximize the detection effect, but this will lead to the problem of excessive data transmission.
  • The latest V2V research shows that sending information on all road objects detected by onboard sensors can still lead to a high load on the transmission network. Therefore, a dynamic screening mechanism is needed for road objects, to screen out more representative "skeleton" point clouds that lose little or no feature information.
  • The specific implementation process is as follows: according to the theoretical derivation, formulate a point cloud value judgment standard; assign a value to each point in the point cloud data; and judge whether to input it into the transmission network according to that value.
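  • A minimal sketch of this per-point screening step, assuming the per-point values have already been computed from the risk field (the function name and the numpy representation are illustrative, not from the patent):

```python
import numpy as np

def screen_points(points: np.ndarray, values: np.ndarray, threshold: float) -> np.ndarray:
    """Keep only the points whose assigned value exceeds the threshold.

    points : (N, 3) array of x, y, z coordinates
    values : (N,) per-point value scores, e.g. the local risk-field strength
    """
    return points[values > threshold]

# Usage: transmit = screen_points(cloud, risk_values, threshold=0.5)
```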
  • To this end, the present invention provides a vehicle-road lidar point cloud dynamic segmentation and fusion method based on the driving safety risk field. Based on the theoretical idea of risk field calculation applied to concrete point cloud data, a driving safety risk calculation mechanism is proposed that can be used for actual calculation and covers most of the objects affecting road safety. The point cloud of the objects at risk is used as the final transmission result; data fusion is then performed with the point cloud collected by the lidar of the target vehicle, and (optionally) the method is evaluated. It includes the following steps:
  • The point cloud of the traffic scene is obtained through lidar scanning. This data is the data source for all subsequent stages.
  • the flow chart of the data acquisition module is shown in Figure 2.
  • A1: the first option is to use only the roadside lidar in the roadside perception unit to scan and construct the scene point cloud; the construction of the safety risk field and the numerical calculation in subsequent stages then use only the roadside lidar point cloud;
  • A2: the second option is to use both the roadside lidar in the roadside perception unit and the lidar mounted on a preset calibration vehicle in the scene to construct the scene point cloud; in subsequent stages, the construction of the safety risk field and the numerical calculation then use both the roadside lidar point cloud and the calibration-vehicle lidar point cloud, which verify and proofread each other.
  • It includes a target detection sub-module B1, a scene acquisition sub-module B2 and a safety field calculation sub-module B3; the flowchart is shown in FIG. 3.
  • B1: target detection sub-module.
  • The scene point cloud obtained in (1) is fed to deep-learning 3D target detection using the PV-RCNN algorithm; that is, the scene point cloud data is input and the target detection result is obtained.
  • the data source is lidar point cloud data
  • the layout position of lidar determines the size and characteristics of the scene point cloud data, etc.
  • The bounding box is the bounding box of each target in the scene, with attributes of position, length, height, width, deflection angle, etc., as shown in FIG. 4.
  • the scene acquisition sub-module is to obtain some features and information in the scene in advance before the target detection sub-module, so as to facilitate better target detection and subsequent safety field calculation.
  • There are various options for this sub-module, as follows:
  • the RGB information of the scene collected by the camera sensor and the corresponding horizontal and vertical boundaries are used to determine the type of the object, so as to assist in identifying the type of the static object.
  • B3: safety field calculation sub-module.
  • The inputs to this sub-module are the types of static objects and the target detection bounding boxes. Drawing on field-theory methods from physics, such as the gravitational field and the magnetic field, everything that may cause risk in the traffic environment is regarded as a source of danger whose influence spreads into the surrounding area.
  • The field strength of the risk field can be understood as the magnitude of the risk factor at a given distance from the source of danger: the closer to the danger center, the greater the possibility of an accident; the farther away, the lower the accident probability. When the distance approaches 0, the target vehicle can be considered to be in contact collision with the source of danger, i.e., a traffic accident has occurred.
  • E_S is the field strength vector of the driving safety risk field;
  • E_R is the field strength vector of the static risk field;
  • E_V is the field strength vector of the dynamic risk field.
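  • The formula relating these quantities is reproduced only as an image in the original; from the definitions above it is presumably the superposition of the two components:

$$\mathbf{E}_S = \mathbf{E}_R + \mathbf{E}_V$$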
  • The driving safety risk field model expresses the potential driving risk posed by traffic factors in the actual scene. Risk is measured by the likelihood of an accident and its severity.
  • the driving safety risk field is divided into two categories according to the different sources, namely the static risk field source and the dynamic risk field source:
  • Static risk field: the field source is a relatively static object in the traffic environment, mainly road markings such as lane dividing lines, and rigid separation facilities such as the central divider. This type of object has two characteristics: (1) without considering road construction, it is stationary relative to the target vehicle; (2) except for some rigid separation facilities, it makes the driver intentionally keep away from its location through legal effect, but even if the driver actually crosses the lane line, a traffic accident does not necessarily follow immediately.
  • LT_a is the risk coefficient of the static risk field source a;
  • R_a is a constant greater than 0, representing the road-condition influencing factor at (x_a, y_a);
  • f_d is the distance influencing factor of the static risk field source a;
  • r_aj is the distance vector between the static risk field source a and the target vehicle j, where (x_j, y_j) is the centroid of the target vehicle j and (x_a, y_a) is the foot of the perpendicular dropped from (x_j, y_j) onto the static risk field source a; r_aj/|r_aj| gives the direction of the field strength.
  • Dynamic risk field: the field source is a relatively moving object in the traffic environment, mainly vehicles, pedestrians, roadblocks, etc. This type of object also has two characteristics: (1) taking the moving target vehicle as the reference frame, it has a relative speed; (2) collision with such objects is strictly forbidden, as it would inevitably cause a serious traffic accident.
  • The field strength vector E_V is defined with:
  • r_bj = (x_j - x_b, y_j - y_b)
  • where r_bj is the distance vector between the dynamic risk field source b and the target vehicle j; k_2, k_3 and G are all constants greater than 0; R_b has the same meaning as R_a; T_bj is the type correction coefficient between the dynamic risk field source b and the target vehicle j; v_bj is the relative velocity between the dynamic risk field source b and the target vehicle j; and θ is the angle between the v_bj and r_bj directions, positive in the clockwise direction. The larger the E_V value, the higher the risk posed by the dynamic risk field source b to the target vehicle j.
  • The present invention selects the point cloud data obtained by the roadside lidar as the data source, and uses the unobstructed roadside point cloud scanning result as the calculation carrier.
  • Step 3): substitute the attributes, such as relative position and type, of the target vehicle and of the other objects around it, the parameters such as the distance of each static risk field source relative to the target vehicle from step 1), and traffic environment factors such as road conditions into the driving safety risk field model.
  • The speed of the target vehicle within the relative speed is set as an unknown parameter and its evaluation is deferred, so the relative speed is an expression containing an unknown parameter.
  • The safety risk that each object within the scanning range poses to the calculation target is thus obtained, forming the driving safety risk distribution centered on the target vehicle.
  • the flow chart of the data segmentation module is shown in Figure 8.
  • the scene point cloud is divided into the point cloud inside the bounding box and the point cloud outside the bounding box.
  • an algorithm is designed to detect whether the point cloud is within the bounding box, so that the point cloud can be divided into two types: point cloud inside the bounding box and point cloud outside the bounding box.
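  • The patent does not spell out the containment test itself; the following is a minimal sketch of one standard approach (rotate each point into the box frame and compare against the half-extents), assuming a (center, size, yaw) box parameterization; the function name and signature are illustrative, not from the patent:

```python
import numpy as np

def points_in_box(points: np.ndarray, box) -> np.ndarray:
    """Boolean mask of points inside one oriented bounding box.

    points : (N, 3) x, y, z coordinates
    box    : (cx, cy, cz, length, width, height, yaw) -- the attributes listed
             for the detection bounding box (position, size, deflection angle)
    """
    cx, cy, cz, l, w, h, yaw = box
    # Translate into the box frame, then rotate by -yaw around the z axis.
    dx, dy, dz = points[:, 0] - cx, points[:, 1] - cy, points[:, 2] - cz
    cos_y, sin_y = np.cos(-yaw), np.sin(-yaw)
    local_x = dx * cos_y - dy * sin_y
    local_y = dx * sin_y + dy * cos_y
    return (np.abs(local_x) <= l / 2) & (np.abs(local_y) <= w / 2) & (np.abs(dz) <= h / 2)
```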
  • C1 sampling scheme: the scene point cloud data P1 from the data acquisition sub-module, and the target detection bounding box X1 and safety risk field data S1 from the driving safety risk field calculation module, are the inputs of this sub-module. A conditional judgment first determines whether each point of P1 lies within the bounding box X1, yielding point cloud data P11 inside the bounding box and P12 outside it. The hyperparameters f1 and f2 are then set, and P11 and P12 are randomly sampled according to f1 and f2 to obtain the segmented point cloud data P2.
  • C2 segmentation scheme: with the same inputs and the same in/out split into P11 and P12, the point cloud data P11 inside the bounding box is kept and the point cloud data P12 outside it is eliminated, giving the segmented point cloud data P2.
  • C3 sampling scheme based on the safety risk field: with the same inputs and the same in/out split, a numerical threshold f3 on the safety risk field is set, and P11 and P12 are sampled according to this threshold to obtain the segmented point cloud data P2.
  • C4 segmentation scheme based on the safety risk field: with the same inputs and the same in/out split, the numerical threshold f3 on the safety risk field is used to divide P11 and P12, obtaining the segmented point cloud data P2.
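  • As an illustration of the C1 scheme, a minimal numpy sketch is given below, using the 0.8/0.2 default weights mentioned in the definitions for the detection-target and non-detection-target point clouds; the function name and other defaults are assumptions:

```python
import numpy as np

def sample_scheme(points, in_box_mask, f_inside=0.8, f_outside=0.2, rng=None):
    """C1 sampling scheme sketch: keep a random fraction f_inside of the points
    inside detection boxes (P11) and f_outside of the points outside them (P12).
    The 0.8 / 0.2 defaults follow the weights stated in the definitions."""
    rng = np.random.default_rng() if rng is None else rng
    p11, p12 = points[in_box_mask], points[~in_box_mask]
    keep_in = rng.random(len(p11)) < f_inside
    keep_out = rng.random(len(p12)) < f_outside
    return np.vstack([p11[keep_in], p12[keep_out]])  # segmented point cloud P2
```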
  • The specific method for extracting the dangerous range is: if the dangerous target is a static risk field source, take the plane where the risk field source is located as the center, and the areas of width d/2 intercepted on its left and right form the dangerous range, where d is the width of the dangerous target; if the dangerous target is a dynamic risk field source, take the centroid of the dangerous target as the center and intercept a rectangular area of width 1.5d and length (0.5l + 0.5l·k) as the dangerous range, where d is the width of the dangerous target, l is the length of the dangerous target, and k is a speed correction coefficient not less than 1. Dangerous ranges are extracted in sequence according to the risk coefficient of each danger source, and overlapping areas are extracted only once.
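  • A small sketch of these dimension rules (the function and its return convention are illustrative, not from the patent):

```python
def danger_range(target_type: str, d: float, l: float = 0.0, k: float = 1.0) -> dict:
    """Footprint of the danger range around one risk source, following the
    dimensions given in the text above.

    Static source : a band of width d/2 on each side of the source plane.
    Dynamic source: a rectangle of width 1.5*d and length 0.5*l + 0.5*l*k,
                    centered on the target centroid, where k >= 1 is the
                    speed correction coefficient of the dangerous target.
    """
    if target_type == "static":
        return {"band_width": d / 2 + d / 2}  # total width d along the source plane
    if k < 1.0:
        raise ValueError("speed correction coefficient k must be >= 1")
    return {"width": 1.5 * d, "length": 0.5 * l + 0.5 * l * k}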
  • the final extracted total danger range result can be provided to the target vehicle as the perception assistance data of the target vehicle.
  • The flow chart of the data publishing module is shown in FIG. 9. Based on the data segmentation results, the data is compressed by the roadside sensing unit, and a data transmission channel is established between the roadside sensing unit and the target vehicle. Then, depending on whether the target vehicle is moving, there are two alternatives:
  • D1: if the target vehicle is stationary, the segmented point cloud and the vector sum of the static and dynamic risk fields, i.e. the safety risk field vector sum, can be published directly; its modulus is a numerical value;
  • D2: if the target vehicle is moving, the segmented point cloud, the static risk field and the semi-finished risk field data are published; the speed of the target vehicle is then substituted to obtain the safety risk field vector sum, whose modulus is a numerical value.
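  • A minimal sketch of what publishing a "semi-finished" field could look like in code: the roadside unit returns a function of the unknown target speed v_j rather than a number. The magnitude expression inside is an assumed placeholder, not the patent's exact formula:

```python
import numpy as np

def semi_finished_ev(v_b: np.ndarray, r_bj: np.ndarray, field_const: float):
    """The target vehicle's velocity v_j is left unknown, so a function of v_j
    is published instead of a value. field_const bundles the constants
    (G, R_b, T_bj, k_2, k_3) whose exact combination the patent gives by formula.
    """
    def ev(v_j: np.ndarray) -> float:
        v_rel = v_b - v_j                      # relative velocity v_bj
        # Illustrative monotone form only: risk grows with relative speed
        # and decays with distance -- not the patent's exact expression.
        return field_const * np.linalg.norm(v_rel) / np.linalg.norm(r_bj)
    return ev
```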
  • The point cloud of a certain area around objects with a high safety-field risk value, obtained after data segmentation, is fused with the point cloud scanned by the target vehicle's lidar; that is, a point cloud coordinate transformation matrix is designed to match the high-risk point cloud between the vehicle and the roadside. The fused point cloud is then compressed to obtain compressed point cloud data.
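  • A minimal sketch of the fusion step, assuming the rotation matrix R and translation T have already been obtained by point cloud registration (as in the definitions below); names are illustrative:

```python
import numpy as np

def fuse_point_clouds(roadside_pts: np.ndarray, vehicle_pts: np.ndarray,
                      R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Transform the roadside point cloud into the vehicle lidar frame using
    rotation R (3x3) and translation T (3,) from registration, then
    concatenate the two clouds (fused point cloud P4 before compression)."""
    transformed = roadside_pts @ R.T + T
    return np.vstack([transformed, vehicle_pts])
```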
  • V represents the unprocessed original point cloud of the vehicle;
  • I represents the unprocessed original point cloud obtained by the roadside perception unit;
  • I1 represents the point cloud obtained by the roadside perception unit segmenting the original point cloud using the segmentation method of data segmentation;
  • I2 represents the point cloud obtained by the roadside perception unit segmenting the original point cloud using the sampling method of data segmentation;
  • I1S represents the point cloud obtained by the roadside perception unit segmenting the original point cloud using the safety-field-based segmentation method;
  • I2S represents the point cloud obtained by the roadside perception unit segmenting the original point cloud using the safety-field-based sampling method.
  • Driving safety risk field: the distribution of the driving safety risk posed by the static and dynamic objects in the scene to the driving vehicle. In the present invention, unless otherwise specified, it is synonymous with the safety risk field.
  • Lidar: an active remote sensing device that uses a laser as the emission light source and adopts photoelectric detection technology.
  • Point cloud: a dataset of points in a coordinate system.
  • Point cloud data: includes the three-dimensional coordinates X, Y, Z, color, intensity value, time, etc.; that is, a structured matrix.
  • Target-vehicle-side lidar L1: lidar mounted on the target vehicle.
  • Roadside lidar L2: lidar installed on the roadside.
  • Vehicle-side lidar point cloud: the point cloud acquired by the vehicle-side lidar.
  • Roadside lidar point cloud: the point cloud acquired by the roadside lidar.
  • Scene point cloud: the point cloud of the traffic scene.
  • V2V: end-to-end wireless communication between vehicles in motion; that is, through V2V communication technology, vehicle terminals exchange wireless information with each other without forwarding through base stations.
  • Convolution: the result of summing the products of two variables over a certain range.
  • CNN: convolutional neural network.
  • Voxel: short for volume element, the smallest unit into which digital data is divided in three-dimensional space. A volume containing voxels can be represented by volume rendering or by extracting polygonal isosurfaces at a given threshold contour.
  • MLP: multi-layer perceptron, also known as an artificial neural network; in addition to the input and output layers, there can be multiple hidden layers in between. The simplest MLP contains only one hidden layer, i.e., a three-layer structure.
  • V2X: vehicle to everything, i.e., the exchange of information between vehicles and the outside world.
  • RSU: Road Side Unit, installed on the roadside, which communicates with the on-board unit.
  • OBU: On-Board Unit, the on-board unit.
  • Skeleton point: a key node of the 3D point cloud model.
  • Safety risk threshold: a value set manually according to the actual application scenario.
  • Dangerous target: an object whose safety risk value for the target vehicle is greater than the set threshold.
  • Field source: any object involved in the calculation process of the driving safety risk calculation.
  • Point cloud registration: for point cloud data in different coordinate systems, the transformation matrix, i.e., the rotation matrix R and the translation matrix T, is obtained through registration, and the error is calculated to compare the matching results.
  • Data acquisition module A: its function is data acquisition; the input is the traffic scene and the output is the scene point cloud data P1.
  • Driving safety risk field calculation module B: its function is driving safety risk field calculation, comprising the target detection sub-module B1, the scene acquisition sub-module B2 and the safety field calculation sub-module B3; the input is the scene point cloud data P1, and the outputs are the target detection bounding box X1 and the safety risk field value S1 / semi-finished risk field S2.
  • Bounding box: the point cloud target detection result, with attributes of position, length, height, width, deflection angle, etc., such as the target detection bounding box X1.
  • Safety risk field value S1: the modulus of the safety risk field vector sum of all risk sources in the scene for a certain object.
  • Semi-finished risk field S2: when calculating the dynamic risk field, the speed of the target vehicle is set as an unknown parameter; the resulting expression containing the unknown parameter, passed on to later stages, is called semi-finished.
  • Data segmentation module C: its function is data segmentation; the inputs are the scene point cloud data P1, the target detection bounding box X1 and the safety risk field value S1 / semi-finished risk field S2, and the output is the segmented point cloud data P2.
  • Data release module D: its function is data release, i.e., the roadside perception unit releases data to the target vehicle in the scene; the released data is the segmented point cloud data P2 and the safety risk field / semi-finished risk field data S1/S2.
  • Data fusion module E: its function is to fuse the segmented point cloud data P2 with the point cloud data P3 of the target vehicle to obtain the fused point cloud data P4, and to obtain the compressed point cloud data P5 through data compression.
  • Method evaluation module F: for the compressed point cloud data P5, the target detection result R1 is obtained through the PV-RCNN deep learning target detection algorithm; the result R1 is evaluated, and the best data segmentation scheme is selected.
  • Roadside perception unit: including but not limited to roadside lidar, cameras and other sensors.
  • Target vehicle: the object at risk from each risk source in the driving safety risk field, and also the object to which the roadside perception unit transmits data, i.e., the ego car.
  • Calculation target: the target vehicle in the process of calculating the safety risk field.
  • Target vehicle lidar L3: lidar mounted on the target vehicle.
  • Segmentation: the data segmentation method that separates the point clouds of detection targets and non-detection targets; this is distinct from the general meaning of data segmentation.
  • Sampling: the data segmentation method that randomly samples the point clouds of detection targets and non-detection targets according to weights; the default parameters are 0.8 and 0.2.
  • FIG. 4: schematic diagram of the bounding box of the target detection result
  • FIG. 6: schematic diagram of the safety risk field distribution
  • FIG. 7: schematic diagram of the xoy-plane projection of the safety risk field
  • FIG. 12: schematic diagram of the method evaluation reference system
  • A roadside lidar point cloud segmentation method based on the safety risk field mechanism; its flowchart is shown in FIG. 1, comprising six modules: data acquisition module A, driving safety risk field calculation module B, data segmentation module C, data publishing module D, data fusion module E, and method evaluation module F.
  • A1: in an autonomous driving traffic scene, only the roadside lidar in the roadside perception unit performs scanning, obtaining the point cloud data P1 of the traffic scene, which is the data source for all subsequent stages.
  • the driving safety risk field calculation module includes a target detection sub-module and a safety field calculation sub-module.
  • the flow chart is shown in Figure 3.
  • B1: target detection sub-module.
  • The scene point cloud obtained in A is fed to deep-learning 3D target detection using the PV-RCNN algorithm; that is, the scene point cloud data is input and the target detection result is obtained.
  • the data source is lidar point cloud data
  • the layout position of lidar determines the size and characteristics of the scene point cloud data, etc.
  • The bounding box is the bounding box X1 of each target in the scene, with attributes of position, length, height, width, deflection angle, etc., as shown in FIG. 4.
  • the scene acquisition sub-module is to obtain some features and information in the scene in advance before the target detection sub-module, so as to facilitate better target detection and subsequent safety field calculation.
  • There are various options for this sub-module, as follows:
  • the RGB information of the scene collected by the camera and the corresponding horizontal and vertical boundaries are used to determine the type of the object, so as to assist in identifying the type of the static object.
  • B3: safety field calculation sub-module.
  • The inputs to this sub-module are the types of static objects and the target detection bounding boxes. Drawing on field-theory methods from physics, such as the gravitational field and the magnetic field, everything that may cause risk in the traffic environment is regarded as a source of danger whose influence spreads into the surrounding area.
  • The field strength of the risk field can be understood as the magnitude of the risk factor at a given distance from the source of danger: the closer to the danger center, the greater the possibility of an accident; the farther away, the lower the accident probability. When the distance approaches 0, the target vehicle can be considered to be in contact collision with the source of danger, i.e., a traffic accident has occurred.
  • E_S is the field strength vector of the driving safety risk field;
  • E_R is the field strength vector of the static risk field;
  • E_V is the field strength vector of the dynamic risk field.
  • The driving safety risk field model expresses the potential driving risk posed by traffic factors in the actual scene. Risk is measured by the likelihood of an accident and its severity.
  • the driving safety risk field is divided into two categories according to the different sources, namely the static risk field source and the dynamic risk field source:
  • Static risk field: the field source is a relatively stationary object in the traffic environment, mainly road markings such as lane dividing lines, and also rigid separation facilities such as central dividers.
  • Traffic regulations stipulate that vehicles are not allowed to drive on or cross solid lane lines. However, if the driver unintentionally leaves the current lane, then on perceiving the risk of violating the lane-marking constraints, the driver will steer the vehicle back to the center of the lane. At the same time, the closer the vehicle is to the lane markings, the greater the risk.
  • Driving risk is also related to road conditions, and poor road conditions can lead to high risks.
  • the driving risk of relatively stationary objects is mainly affected by the visibility, and the lower the visibility, the higher the driving risk.
  • This type of object has two characteristics: (1) Without considering road construction, this type of object is in a stationary state relative to the target vehicle, because its actual meaning is a dangerous boundary and does not have speed attributes; (2) Except for some rigid separation facilities, This type of object makes the driver intentionally stay away from its location based on legal effects, but even if the driver actually crosses the lane line, it may not necessarily cause a traffic accident immediately.
  • LT_a is the risk coefficient for the different types of static risk field source a, determined by traffic regulations.
  • In terms of risk coefficient: rigid separation facilities > lane dividing lines that cannot be crossed > lane dividing lines that can be crossed.
  • Typical parameters for common facilities and lane lines are as follows: guardrail-type or green-belt-type median: 20 to 25; sidewalk curbstones: 18 to 20; yellow solid or dotted line: 15 to 18; white solid line: 10 to 15; white dotted line: 0 to 5.
  • R_a is a constant greater than 0 indicating the road-condition influencing factor at (x_a, y_a), determined by traffic environment factors such as the road adhesion coefficient, road slope, road curvature and visibility in the vicinity of object a. A fixed value is chosen for a given road section.
  • the data interval generally used is [0.5, 1.5].
  • f_d is the distance influencing factor for the different types of static risk field source a, determined by object type, object width, lane width, etc.
  • There are currently two types of static risk field source: lane dividing lines and rigid dividing strips.
  • D is the lane width
  • d is the width of the target vehicle j
  • d generally takes the width of the bounding box of the target vehicle j.
  • r_aj is the distance vector between the static risk field source a and the target vehicle j, where (x_j, y_j) is the centroid of the target vehicle j and (x_a, y_a) is the foot of the perpendicular dropped from (x_j, y_j) onto the static risk field source a.
  • k 1 is a constant greater than 0, representing the amplification factor of the distance, because the collision risk and the distance between the two objects do not change linearly in general. Generally, the value of k 1 ranges from 0.5 to 1.5.
  • r_aj/|r_aj| represents the direction of the field strength; however, in general practical applications, even if the field strength directions of two safety risk field sources are opposite at a certain point, the risk at that point cannot be considered correspondingly reduced, so the field strengths are usually still superimposed as scalars.
  • the field strength distribution results are shown in Fig. 5(a).
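  • Pulling the static-field parameters above together, a minimal numeric sketch follows. The patent's E_R formula is reproduced as an image, so the combination below (magnitude LT_a·R_a·f_d/|r_aj|^k_1 along r_aj/|r_aj|) is an assumed form consistent with the parameter definitions; the LT_a values are midpoints of the ranges just listed:

```python
import numpy as np

# Representative LT_a risk coefficients (midpoints of the ranges above).
LT_A = {
    "median_guardrail": 22.5,   # guardrail / green-belt median: 20-25
    "curbstone": 19.0,          # sidewalk curbstones: 18-20
    "yellow_line": 16.5,        # yellow solid/dotted line: 15-18
    "white_solid": 12.5,        # white solid line: 10-15
    "white_dotted": 2.5,        # white dotted line: 0-5
}

def static_field_strength(source_type: str, r_aj, R_a=1.0, f_d=1.0, k_1=1.0):
    """Sketch of the static risk field strength E_R at the target vehicle:
    magnitude proportional to LT_a * R_a * f_d, decaying with |r_aj|**k_1,
    directed along r_aj / |r_aj| (an assumed form, not the patent's image)."""
    r_aj = np.asarray(r_aj, dtype=float)
    dist = np.linalg.norm(r_aj)
    magnitude = LT_A[source_type] * R_a * f_d / dist**k_1
    return magnitude * (r_aj / dist)  # vector E_R
```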
  • Dynamic risk field: the field source is a relatively dynamic object in the traffic environment, and the magnitude and direction of its field strength vector are determined by the properties and states of the moving object and by road conditions.
  • the dynamic objects here refer to dynamic objects that can actually collide with vehicles and cause heavy losses, mainly vehicles, pedestrians, and roadblocks.
  • This type of object also has two characteristics: (1) although such objects may be stationary relative to the road surface, such as roadside parking and roadblock facilities, they still have a relative speed with the moving target vehicle as the reference frame; (2) collision with such objects is strictly forbidden, as it would inevitably cause a serious traffic accident.
  • The present invention assumes that driving risk takes a power-function form in the vehicle-target distance.
  • r_bj = (x_j - x_b, y_j - y_b)
  • r bj is the distance vector between the dynamic risk field source b and the target vehicle j.
  • k 2 , k 3 and G are all constants greater than 0.
  • the meaning of k 2 is the same as that of k 1 above, and k 3 is the hazard correction for different speeds.
  • G is analogous to the electrostatic force constant and describes the magnitude of the risk factor between two objects of unit mass at unit distance. Generally, k_2 ranges from 0.5 to 1.5, k_3 ranges from 0.05 to 0.2, and G usually takes the value 0.001.
  • The meaning of R_b is the same as that of R_a, and the data interval used is also [0.5, 1.5].
  • T bj is the type correction coefficient between the dynamic risk field source b and the target vehicle j.
  • the risk coefficients of vehicle-vehicle collision and vehicle-person collision are different.
  • Common types of correction parameters are as follows: vehicle-vehicle frontal collision: 2.5 to 3; vehicle-vehicle rear-end collision: 1 to 1.5; person-vehicle collision: 2 to 2.5; vehicle-to-barrier collision: 1.5 to 2.
  • v bj is the relative velocity between the dynamic risk field source b and the target vehicle j, that is, the vector difference between the velocity v b of the field source b and the velocity v j of the target vehicle j.
  • θ is the angle between the v_bj and r_bj directions, positive in the clockwise direction.
  • The semi-finished form of the relative velocity is v_bj(v_j) = v_b - v_j, with the target vehicle speed v_j left as an unknown parameter.
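  • Combining the dynamic-field parameters above, a minimal numeric sketch. The exact E_V formula is an image in the original; the form below (magnitude G·R_b·T_bj/|r_bj|^k_2, amplified exponentially by k_3·|v_bj|·cosθ) is an assumption consistent with the parameter definitions, with defaults drawn from the stated ranges:

```python
import numpy as np

def dynamic_field_strength(r_bj, v_b, v_j, R_b=1.0, T_bj=2.0,
                           k_2=1.0, k_3=0.1, G=0.001):
    """Sketch of the dynamic risk field strength E_V of source b at vehicle j.

    Assumed form: G * R_b * T_bj / |r_bj|**k_2 * exp(k_3 * |v_bj| * cos(theta)),
    where theta is the angle between v_bj and r_bj. Defaults use the stated
    ranges (k_2: 0.5-1.5, k_3: 0.05-0.2, G: 0.001, T_bj e.g. ~2 for a
    person-vehicle collision).
    """
    r_bj = np.asarray(r_bj, dtype=float)
    v_bj = np.asarray(v_b, dtype=float) - np.asarray(v_j, dtype=float)  # relative velocity
    dist = np.linalg.norm(r_bj)
    speed = np.linalg.norm(v_bj)
    cos_theta = np.dot(v_bj, r_bj) / (speed * dist) if speed > 0 else 0.0
    magnitude = G * R_b * T_bj / dist**k_2 * np.exp(k_3 * speed * cos_theta)
    return magnitude * (r_bj / dist)  # vector E_V along r_bj
```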
  • the present invention selects the lidar point cloud data as the data source, and uses the roadside unobstructed point cloud scanning result as the calculation carrier.
  • n can range from 50 to 100;
  • The point cloud density of each statistical space is checked; if it is greater than the threshold (related to the point cloud density; a general value is 1000), the point cloud in that space is randomly downsampled to cap its density, finally yielding a cleaner global static point cloud background.
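  • A minimal sketch of this density-capping step, assuming a 2D grid of statistical spaces; the cell size and function name are illustrative:

```python
import numpy as np

def cap_density(points: np.ndarray, cell: float = 1.0, max_pts: int = 1000,
                rng: np.random.Generator | None = None) -> np.ndarray:
    """Randomly downsample every statistical grid cell holding more than
    max_pts points, so the global static background keeps a roughly uniform
    density (max_pts plays the role of the ~1000 threshold in the text)."""
    rng = np.random.default_rng() if rng is None else rng
    cells = np.floor(points[:, :2] / cell).astype(np.int64)
    groups: dict[tuple[int, int], list[int]] = {}
    for i, key in enumerate(map(tuple, cells)):
        groups.setdefault(key, []).append(i)
    kept: list[int] = []
    for idx in groups.values():
        if len(idx) > max_pts:
            idx = rng.choice(idx, size=max_pts, replace=False).tolist()
        kept.extend(idx)
    return points[np.sort(kept)]
```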
  • The static risk field sources in the static scene are then separated, including the lane dividing lines, the median strip, the roadside areas, etc., and the plane line equation of each static risk field source is fitted from randomly sampled points. Generally, more than 100 points should be collected evenly along the visually linear direction, and the collected points should not deviate too far from the target.
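  • The patent does not specify the fitting procedure; a minimal sketch using a total-least-squares (PCA) line fit over the sampled points:

```python
import numpy as np

def fit_line_2d(samples: np.ndarray):
    """Fit the plane line equation a*x + b*y + c = 0 for one static risk field
    source (e.g. a lane dividing line) from >=100 (N, 2) points sampled evenly
    along its visually linear direction. Returns the unit normal (a, b) and c."""
    centroid = samples.mean(axis=0)
    # The direction of largest variance is the line itself; its normal is the
    # right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(samples - centroid)
    a, b = vt[-1]
    c = -(a * centroid[0] + b * centroid[1])
    return a, b, c
```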
  • the speed is regarded as the standard speed.
  • the standard speed is the average vehicle speed in the point cloud scanning section under historical statistical conditions, and the direction is consistent with the lane where the target is located.
  • the flow chart of the data segmentation module is shown in Figure 8.
  • the scene point cloud is divided into the point cloud inside the bounding box and the point cloud outside the bounding box.
  • an algorithm is designed to detect whether the point cloud is within the bounding box, so that the point cloud can be divided into two types: point cloud inside the bounding box and point cloud outside the bounding box.
  • C1 sampling scheme: the scene point cloud data P1 from the data acquisition sub-module, and the target detection bounding box X1 and safety risk field data S1 from the driving safety risk field calculation module, are the inputs of this sub-module. A conditional judgment first determines whether each point of P1 lies within the bounding box X1, yielding point cloud data P11 inside the bounding box and P12 outside it. The hyperparameters f1 and f2 are then set, and P11 and P12 are randomly sampled according to f1 and f2 to obtain the segmented point cloud data P2. Taking each screened object as the center, sampling or segmentation is combined to extract the point cloud of a certain area around it as the dangerous area.
  • the specific method of extracting the dangerous area is as follows:
  • If the dangerous target is a static risk field source, take the plane where the risk field source is located as the center; the areas of width d/2 intercepted on its left and right form the dangerous range, where d is the width of the target vehicle.
  • If the dangerous target is a dynamic risk field source, take the centroid of the dangerous target as the center and intercept a rectangular area of width 1.5d and length (0.5l + 0.5l·k) as the dangerous range, of which 0.5l is the half-length on the side away from the target vehicle and 0.5l·k is the half-length on the side close to the target vehicle.
  • d is the width of the dangerous target
  • l is the length of the dangerous target.
  • k is a speed correction coefficient not less than 1, which depends on the speed of the dangerous target.
  • Dangerous ranges are extracted in sequence according to the hazard coefficient of the hazard source, and the overlapping areas of the hazard ranges are only extracted once.
  • the final extracted total danger range result can be provided to the target vehicle as the perception assistance data of the target vehicle.
  • A2: in an autonomous driving traffic scene, both the roadside lidar in the roadside perception unit and the lidar mounted on a preset calibration vehicle in the scene perform scanning, obtaining the point cloud data P1 of the traffic scene, which is the data source for all subsequent stages.
  • the driving safety risk field calculation module includes a target detection sub-module and a safety field calculation sub-module.
  • the flow chart is shown in Figure 3.
  • B1: target detection sub-module.
  • The scene point cloud obtained in A is fed to deep-learning 3D target detection using the PV-RCNN algorithm; that is, the scene point cloud data is input and the target detection result is obtained.
  • the data source is lidar point cloud data
  • the layout position of lidar determines the size and characteristics of the scene point cloud data, etc.
  • The bounding box is the bounding box X1 of each target in the scene, with attributes of position, length, height, width, deflection angle, etc., as shown in FIG. 4.
  • the scene acquisition sub-module is to obtain some features and information in the scene in advance before the target detection sub-module, so as to facilitate better target detection and subsequent safety field calculation.
  • There are various options for this sub-module, as follows:
  • the RGB information of the scene collected by the camera and the corresponding horizontal and vertical boundaries are used to determine the type of the object, so as to assist in identifying the type of the static object.
  • B3: safety field calculation sub-module.
  • The inputs to this sub-module are the types of static objects and the target detection bounding boxes. Drawing on field-theory methods from physics, such as the gravitational field and the magnetic field, everything that may cause risk in the traffic environment is regarded as a source of danger whose influence spreads into the surrounding area.
  • The field strength of the risk field can be understood as the magnitude of the risk factor at a given distance from the source of danger: the closer to the danger center, the greater the possibility of an accident; the farther away, the lower the accident probability. When the distance approaches 0, the target vehicle can be considered to be in contact collision with the source of danger, i.e., a traffic accident has occurred.
  • E_S is the field strength vector of the driving safety risk field;
  • E_R is the field strength vector of the static risk field;
  • E_V is the field strength vector of the dynamic risk field.
  • The driving safety risk field model expresses the potential driving risk posed by traffic factors in the actual scene. Risk is measured by the likelihood of an accident and its severity.
  • the driving safety risk field is divided into two categories according to the different sources, namely the static risk field source and the dynamic risk field source:
  • Static risk field: the field source is a relatively static object in the traffic environment, mainly road markings such as lane dividing lines, and rigid separation facilities such as the central divider.
  • Traffic regulations stipulate that vehicles are not allowed to drive on or cross solid lane lines. However, if the driver unintentionally leaves the current lane, then on perceiving the risk of violating the lane-marking constraints, the driver will steer the vehicle back to the center of the lane. At the same time, the closer the vehicle is to the lane markings, the greater the risk.
  • Driving risk is also related to road conditions, and poor road conditions can lead to high risks.
  • the driving risk of relatively stationary objects is mainly affected by visibility, and the lower the visibility, the higher the driving risk.
  • This type of object has two characteristics: (1) Without considering road construction, this type of object is in a stationary state relative to the target vehicle, because its actual meaning is a dangerous boundary and does not have speed attributes; (2) Except for some rigid separation facilities, This type of object makes the driver intentionally stay away from its location based on legal effects, but even if the driver actually crosses the lane line, it may not necessarily cause a traffic accident immediately.
  • LT_a is the risk coefficient for the different types of static risk field source a, determined by traffic regulations.
  • In terms of risk coefficient: rigid separation facilities > lane dividing lines that cannot be crossed > lane dividing lines that can be crossed.
  • Typical parameters for common facilities and lane lines are as follows: guardrail-type or green-belt-type median: 20 to 25; sidewalk curbstones: 18 to 20; yellow solid or dotted line: 15 to 18; white solid line: 10 to 15; white dotted line: 0 to 5.
  • R_a is a constant greater than 0 indicating the road-condition influencing factor at (x_a, y_a), determined by traffic environment factors such as the road adhesion coefficient, road slope, road curvature and visibility in the vicinity of object a. A fixed value is chosen for a given road section.
  • the data interval generally used is [0.5, 1.5].
  • f_d is the distance influencing factor for the different types of static risk field source a, determined by object type, object width, lane width, etc.
  • There are currently two types of static risk field source: lane dividing lines and rigid dividing strips.
  • D is the lane width
  • d is the width of the target vehicle j
  • d generally takes the width of the bounding box of the target vehicle j.
  • r_aj is the distance vector between the static risk field source a and the target vehicle j, where (x_j, y_j) is the centroid of the target vehicle j and (x_a, y_a) is the foot of the perpendicular dropped from (x_j, y_j) onto the static risk field source a.
  • k 1 is a constant greater than 0, representing the amplification factor of the distance, because the collision risk and the distance between the two objects do not change linearly in general. Generally, the value of k 1 ranges from 0.5 to 1.5.
  • r_aj/|r_aj| represents the direction of the field strength; however, in general practical applications, even if the field strength directions of two safety risk field sources are opposite at a certain point, the risk at that point cannot be considered correspondingly reduced, so the field strengths are usually still superimposed as scalars.
  • the field strength distribution results are shown in Fig. 5(a).
  • Dynamic risk field: the field source is a relatively dynamic object in the traffic environment, and the magnitude and direction of its field strength vector are determined by the properties and states of the moving object and by road conditions.
  • the dynamic objects here refer to dynamic objects that can actually collide with vehicles and cause heavy losses, mainly vehicles, pedestrians, and roadblocks.
  • This type of object also has two characteristics: (1) although such objects may be stationary relative to the road, such as roadside parking and roadblock facilities, they still have a relative speed with the moving target vehicle as the reference frame; (2) collision with such objects is strictly forbidden, as it would inevitably cause a serious traffic accident.
  • The present invention assumes that driving risk takes a power-function form in the vehicle-target distance.
  • r_bj = (x_j - x_b, y_j - y_b)
  • r bj is the distance vector between the dynamic risk field source b and the target vehicle j.
  • k 2 , k 3 and G are all constants greater than 0.
  • the meaning of k 2 is the same as that of k 1 above, and k 3 is the hazard correction for different speeds.
  • G is analogous to the electrostatic force constant and describes the magnitude of the risk factor between two objects of unit mass at unit distance. Generally, k_2 ranges from 0.5 to 1.5, k_3 ranges from 0.05 to 0.2, and G usually takes the value 0.001.
  • The meaning of R_b is the same as that of R_a, and the data interval used is also [0.5, 1.5].
  • T bj is the type correction coefficient between the dynamic risk field source b and the target vehicle j.
  • the risk coefficients of vehicle-vehicle collision and vehicle-person collision are different.
  • Common types of correction parameters are as follows: vehicle-vehicle frontal collision: 2.5 to 3; vehicle-vehicle rear-end collision: 1 to 1.5; person-vehicle collision: 2 to 2.5; vehicle-to-barrier collision: 1.5 to 2.
  • v bj is the relative velocity between the dynamic risk field source b and the target vehicle j, that is, the vector difference between the velocity v b of the field source b and the velocity v j of the object j.
  • is the angle between the v bj and r bj directions, positive in the clockwise direction.
  • the semi-finished product of relative velocity is:
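For the dynamic field a similar hedged sketch can be given. The published E_V formula is likewise only an image; the version below assumes a power-law distance decay combined with the exponential speed term exp(k_3·|v_bj|·cosθ) that is common in driving-safety-field models, so it should be read as an illustration of how the parameters G, R_b, T_bj, k_2, and k_3 interact, not as the exact patented expression.

```python
import numpy as np

def dynamic_field_strength(g, r_b, t_bj, p_b, p_j, v_b, v_j, k2=1.0, k3=0.1):
    """Scalar magnitude of E_V for one dynamic risk field source b (a sketch)."""
    r_bj = np.asarray(p_j, dtype=float) - np.asarray(p_b, dtype=float)
    dist = float(np.linalg.norm(r_bj))
    if dist == 0.0:
        return float("inf")          # contact: the accident has already occurred
    v_bj = np.asarray(v_b, dtype=float) - np.asarray(v_j, dtype=float)
    speed = float(np.linalg.norm(v_bj))
    # theta is the angle between v_bj and r_bj; cos(theta) via the dot product.
    cos_theta = float(np.dot(v_bj, r_bj)) / (speed * dist) if speed > 0 else 0.0
    # Assumed form: power-law decay in distance, exponential correction in speed.
    return g * r_b * t_bj / dist ** k2 * np.exp(k3 * speed * cos_theta)
```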
  • the present invention selects the lidar point cloud data as the data source, and uses the roadside unobstructed point cloud scanning result as the calculation carrier.
  • each frame of point cloud data is divided into n statistical spaces; depending on the lidar's scanning range, n can range from 50 to 100;
  • the point cloud density of each statistical space is checked; if it exceeds the threshold α (related to point cloud density, generally 1000), the point cloud in that space is randomly downsampled to hold its density, finally yielding a fairly ideal global static point cloud background.
  • the static risk field sources in the static scene are separated out, including lane dividing lines, median strips, and curb areas, and the plane line equation of each static risk field source is fitted from randomly sampled points. Generally, more than 100 points should be collected evenly along the visually estimated line direction, and the collected points should not deviate too far from the target. Both steps are sketched in code below.
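Both functions below are illustrative only: the text does not specify the grid geometry of the statistical spaces or the line estimator, so an 8×8 planar grid (about 64 statistical spaces, within the 50-100 range above) and a least-squares (PCA) line fit are assumed here.

```python
import numpy as np

def accumulate_background(frames, n_cells=8, alpha=1000, seed=0):
    """Superimpose frames (dynamic objects assumed already removed), randomly
    downsampling any statistical space that exceeds alpha points."""
    rng = np.random.default_rng(seed)
    bg = np.asarray(frames[0], dtype=float)
    for frame in frames[1:]:
        bg = np.vstack([bg, np.asarray(frame, dtype=float)])
        lo = bg[:, :2].min(axis=0)
        span = bg[:, :2].max(axis=0) - lo + 1e-9
        cell = np.floor((bg[:, :2] - lo) / span * n_cells).astype(int)
        keys = cell[:, 0] * (n_cells + 1) + cell[:, 1]
        keep = []
        for k in np.unique(keys):
            idx = np.flatnonzero(keys == k)
            if idx.size > alpha:                       # space too dense:
                idx = rng.choice(idx, alpha, replace=False)
            keep.append(idx)
        bg = bg[np.concatenate(keep)]
    return bg

def fit_lane_line(points_xy):
    """Fit the plane line a*x + b*y + c = 0 of one static risk field source
    from >100 manually sampled points, via the principal direction."""
    pts = np.asarray(points_xy, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                                  # dominant line direction
    a, b = -direction[1], direction[0]                 # unit normal to the line
    c = -(a * centroid[0] + b * centroid[1])
    return a, b, c
```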
  • its speed is taken to be the standard speed.
  • the standard speed is the historical average vehicle speed on the point-cloud-scanned road section, with direction consistent with the lane in which the target is located.
  • the flow chart of the data segmentation module is shown in Figure 8.
  • the scene point cloud is first divided into points inside and points outside the bounding boxes.
  • given the input scene point cloud and bounding box data, an algorithm tests whether each point lies within a bounding box, dividing the point cloud into in-box and out-of-box point clouds.
  • C_3, sampling scheme based on the safety risk field: the scene point cloud data P_1 from the data acquisition submodule, and the target detection bounding boxes X_1 and safety risk field data S_1 from the driving safety risk field calculation module, are the submodule inputs. First, a conditional test judges whether each point of the scene point cloud data P_1 lies within a bounding box X_1, yielding the in-box point cloud data P_11 and the out-of-box point cloud data P_12. A numerical threshold f_3 on the safety risk field is then set, and P_11 and P_12 are sampled according to this threshold, giving the segmented point cloud data P_2. A minimal sketch follows.
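In this sketch, boxes are simplified to axis-aligned extents (the detector actually outputs a yaw angle as well, which would add a rotation before the comparison), and the threshold rule is reduced to keeping every point whose field value exceeds f_3.

```python
import numpy as np

def segment_by_risk_field(points, boxes, risk_values, f3):
    """Sketch of scheme C3: split P1 into P11/P12 by the bounding boxes,
    then keep the points whose safety risk field value exceeds f3.

    points      : (N, 3) scene point cloud P1
    boxes       : iterable of (xmin, ymin, zmin, xmax, ymax, zmax)
    risk_values : (N,) safety risk field value S1 sampled at each point
    f3          : numeric threshold on the safety risk field
    """
    inside = np.zeros(len(points), dtype=bool)
    for box in boxes:
        lo, hi = np.asarray(box[:3]), np.asarray(box[3:])
        inside |= np.all((points >= lo) & (points <= hi), axis=1)
    high_risk = np.asarray(risk_values) > f3
    p11 = points[inside & high_risk]      # in-box points kept
    p12 = points[~inside & high_risk]     # out-of-box points kept
    return np.vstack([p11, p12])          # segmented point cloud P2
```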
  • if the dangerous target is a static risk field source, the dangerous range is the region of width d/2 intercepted on each side of the plane in which the risk field source lies, where d is the width of the target vehicle.
  • if the dangerous target is a dynamic risk field source, the dangerous range is a rectangle of width 1.5d and length (0.5l + 0.5l × k) centred on the centroid of the dangerous target, where 0.5l is the half-length on the side away from the target vehicle and 0.5l × k is the half-length on the side facing the target vehicle.
  • d is the width of the dangerous target and l is its length.
  • k is a speed correction coefficient not less than 1, which depends on the speed of the dangerous target.
  • Dangerous ranges are extracted in order of the hazard coefficients of the hazard sources, and regions where dangerous ranges overlap are extracted only once.
  • the final extracted total dangerous range can be provided to the target vehicle as its perception assistance data; a sketch of the extraction geometry follows.
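The k thresholds below follow the values given in the embodiments later in this document; the rectangle is returned axis-aligned with the longer half toward +x, whereas the real implementation would orient it toward the target vehicle.

```python
def speed_correction_k(v_kmh):
    """Speed correction coefficient k (threshold values from the embodiments)."""
    if v_kmh <= 30:
        return 2
    if v_kmh <= 70:
        return 3
    return 5

def dynamic_danger_range(cx, cy, d, l, v_kmh):
    """Dangerous range of a dynamic source: width 1.5*d, length 0.5*l + 0.5*l*k,
    centred on the centroid (cx, cy), with the 0.5*l*k half on the side facing
    the target vehicle (simplified here to the +x direction)."""
    k = speed_correction_k(v_kmh)
    return (cx - 0.5 * l,            # far half-length: 0.5*l
            cy - 0.75 * d,
            cx + 0.5 * l * k,        # near half-length: 0.5*l*k
            cy + 0.75 * d)
```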
  • the flow chart of the data publishing module is shown in Figure 9. Based on the data segmentation results, the roadside sensing unit compresses the data and then establishes a data transmission channel between the roadside sensing unit and the target vehicle; the target vehicle is characterised as a vehicle with a given number, at a given position in the scene, at a given moment on the timestamp. Whether the target vehicle is moving is then determined:
  • D_2: publish the segmented point cloud data P_2 and the semi-finished risk field data S_2; substituting the target vehicle's speed into the semi-finished risk field data S_2 yields the safety risk field data S_1.
  • the flow chart of the data fusion module is shown in Figure 10.
  • the segmented point cloud data P_2 and the target vehicle point cloud data P_3 scanned by the target vehicle's lidar are fused: a point cloud coordinate transformation matrix is designed to register the high-risk point clouds from the vehicle side and the roadside, yielding the fusion result.
  • the fused point cloud data P_4 is then compressed to obtain the compressed point cloud data P_5, as sketched below.
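The fusion step reduces to applying a rigid transformation and concatenating the clouds. Obtaining the 4×4 matrix itself (e.g., from a registration algorithm such as ICP) is left open by the text and is assumed given here.

```python
import numpy as np

def fuse_point_clouds(p2_roadside, p3_vehicle, t_road_to_vehicle):
    """Register the roadside high-risk cloud P2 into the vehicle lidar frame
    with a homogeneous transform, then merge it with the vehicle cloud P3."""
    ones = np.ones((len(p2_roadside), 1))
    homogeneous = np.hstack([np.asarray(p2_roadside, dtype=float), ones])
    p2_in_vehicle = (np.asarray(t_road_to_vehicle) @ homogeneous.T).T[:, :3]
    return np.vstack([p2_in_vehicle, np.asarray(p3_vehicle, dtype=float)])  # P4
```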
  • the flowchart of the method evaluation module is shown in Figure 11, which belongs to an optional sub-module.
  • the following describes the method evaluation sub-module for reference.
  • the target detection results R_1 are obtained by applying the PV-RCNN deep learning target detection algorithm.
  • the target detection results R_1 are then evaluated.
  • the reference system is shown in Figure 12.
  • V denotes the unprocessed original vehicle-side point cloud
  • I denotes the unprocessed original point cloud acquired by the roadside perception unit
  • I_1 denotes the point cloud obtained by the roadside perception unit applying the segmentation variant of the data segmentation method to the original point cloud
  • I_2 denotes the point cloud obtained by the roadside perception unit applying the sampling variant of the data segmentation method to the original point cloud
  • I_1S denotes the point cloud obtained by the roadside perception unit applying the safety-field-based segmentation variant of the data segmentation method to the original point cloud
  • I_2S denotes the point cloud obtained by the roadside perception unit applying the safety-field-based sampling variant of the data segmentation method to the original point cloud

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Data Mining & Analysis (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present invention provides a vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field, comprising the following steps: 1. A driving safety risk field computation mechanism is proposed to quantitatively analyse the risk that each kind of stationary object (roadside parking, roadblocks, traffic signs, etc.) or moving object (driving vehicles, non-motorised vehicles, pedestrians, etc.) in the traffic environment poses to a given position; 2. Using the above computation method, with the lidar point cloud data of the roadside perception unit as the data source, the risk posed by all other objects to a given target vehicle (an autonomous vehicle) within the scanning range is computed, establishing a unified driving safety risk field distribution centred on the target vehicle; 3. A threshold is used to screen out the regions of higher risk to the target vehicle, and their point cloud data are segmented out of the raw data as supplementary perception information provided to the autonomous vehicle; 4. The point-cloud-level information acquired by the roadside perception unit's lidar, processed through the above steps, is fused with the point-cloud-level information acquired by the vehicle-side lidar, and a reference evaluation system for the fusion method is given.

Description

Lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field
Technical Field
The present invention relates to a perception assistance technology for autonomous vehicles, and in particular to a vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field. It is mainly oriented to vehicle-road cooperative environments, enhancing the hazard perception capability of autonomous vehicles through infrastructure data acquisition, driving safety risk field computation, data segmentation, data publishing, and data fusion.
Background Art
Autonomous driving is an emerging technology in the transportation field in which China, and indeed the whole world, is increasing investment. Countries are attempting to open up and establish safe, efficient technical routes for autonomous driving; one scheme strongly supported in China is the vehicle-road cooperation mode. Vehicle-road cooperation does not rely solely on single-vehicle intelligence but extends the intelligent system to the entire traffic environment, intending to use the multi-dimensional data channels of vehicle-vehicle and vehicle-road communication to compensate for the limitations of single-vehicle perception and data processing capability, thereby improving the operational safety and efficiency of the traffic system as a whole.
In the field of autonomous driving safety, driving safety assistance systems have long been studied. Since the 1990s, automobile companies have proposed many driving safety assistance algorithms. For longitudinal safety, safety distance models are mainly used: when the following distance falls below the safe distance, the assistance system issues an alarm and brakes automatically. Many safety distance models determine the safety state of a vehicle by analysing in real time the safe distance implied by the relative motion of the leading and following vehicles. For lateral safety, driver safety assistance algorithms are mainly based on the car's current position (CCP), time to lane crossing (TLC), and variable rumble strips (VRBS). Most existing safety models are based on vehicle kinematics and dynamics, and their description of driving safety usually rests on information about the vehicle state, such as position, speed, acceleration, and yaw rate, and about relative motion, including relative speed and relative distance. These models, however, have the following problems: it is difficult for them to reflect the influence of all types of traffic factors on driving safety; difficult to describe the interaction between driver behaviour characteristics, vehicle state, and the road environment; and difficult to provide an accurate basis for vehicle control decisions.
On this basis, field theory has become an emerging direction in autonomous driving safety; it was originally used for vehicle and robot navigation. An obvious advantage is that it allows a vehicle to navigate autonomously using only its position and local sensor measurements. Obstacles are modelled as repulsive potential fields (risk fields), and the vehicle can use the field strength gradient at its position to generate control actions that avoid obstacles while navigating. Field theory has, however, mainly been applied to motion planning of autonomous vehicles and to driver behaviour modelling in specific traffic scenarios such as car following. The main problems of existing models are that risk factors such as the driver's personality, psychological and physiological characteristics, and complex road conditions are not fully considered, and that the description of driver-vehicle-road interaction is insufficient. The practical application of these models is therefore limited.
Perception of the environment is the prerequisite for autonomous driving decision-making and planning. There are many means of environment perception, including high-definition cameras, infrared sensing, lidar, and millimetre-wave radar, each with its own strengths and weaknesses. In 3D perception, lidar (LiDAR) is a widely used and effective technology, with the advantages of a wide scanning range, intuitive results, and insensitivity to ambient natural light, making it well suited to perception for autonomous driving. Lidar output is in point cloud format, a relatively low-level data representation: the scan is recorded as points, each containing three-dimensional coordinates and possibly colour (RGB) or reflection intensity information. Mirroring the vigorous development of image processing, methods for processing lidar point cloud data are steadily multiplying, covering object detection, object tracking, semantic segmentation, and other directions.
Semantic segmentation is a basic task in computer vision; it gives a finer-grained understanding of an image and is very important in autonomous driving. Point cloud semantic segmentation is an extension of image semantic segmentation: from raw point cloud data, the types, numbers, and other characteristics of targets in the scene are perceived, and the points of targets of the same class are rendered in the same colour. 3D point cloud segmentation requires understanding both the global geometric structure and the fine-grained details of each point. By segmentation granularity, 3D point cloud segmentation methods fall into three classes: semantic segmentation, instance segmentation, and part segmentation.
The effect of point cloud segmentation depends strongly on point cloud data quality. Clearly, in a vehicle-road cooperative fusion perception scenario, roadside devices providing unprocessed point cloud data would maximise detection performance, but this would cause excessive data transmission volume. Recent V2V research shows that sending information on all road objects detected by on-board sensors may still place a high load on the transmission network. A dynamic screening mechanism is therefore needed for road objects, selecting a more representative "skeleton" point cloud that loses little or no feature information. The concrete procedure is: formulate a point cloud value criterion from theoretical derivation; assign a value to every point in the point cloud data; and decide from this value whether the point enters the transmission network. Simulation studies based on real road traffic show that value-predicting V2V communication can significantly improve cooperative perception performance under network load. Extended to the fusion of roadside and vehicle-side perception devices, a dynamic point cloud screening mechanism can selectively transmit the road objects detected on a road section to the vehicle according to some criterion, improving the vehicle's perception of the full semantics of the scene, which is also of great significance for improving the safety of autonomous driving.
Prior Art
CN105892471A;
CN108932462A;
CN110850431A;
CN111985322A;
CN108639059A;
US10281920 B2
WO2015008380A1
Summary of the Invention
To solve the above problems, the present invention provides a vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field. Drawing on the risk field computation theory derived in the paper "The Driving Safety Field Based on Driver–Vehicle–Road Interactions" by Jianqiang Wang et al. and on concrete point cloud data, a driving safety risk computation mechanism is proposed that is usable in practical computation and covers the vast majority of objects affecting road safety. Based on the computed results, the point clouds of objects posing a larger risk to the autonomous vehicle are segmented out as the final transmission result; they are then fused with the point cloud collected by the target vehicle's lidar, and the method is (optionally) evaluated. The method comprises the following steps:
A. Data acquisition
In an autonomous driving traffic scene, the point cloud of the traffic scene is acquired by lidar scanning; this data is the data source for all subsequent steps. The flow chart of the data acquisition module is shown in Fig. 2.
There are two alternatives in the data acquisition module:
A_1: In the first, the lidar scan is performed only by the roadside lidar in the roadside perception unit to construct the scene point cloud; the construction and numerical computation of the safety risk field in subsequent steps then use only the roadside lidar point cloud;
A_2: In the second, the lidar scan is performed by the roadside lidar in the roadside perception unit together with a lidar mounted on a calibration vehicle preset in the scene; the construction and numerical computation of the safety risk field in subsequent steps then use both the roadside lidar point cloud and the calibration vehicle's lidar point cloud, which verify and cross-check each other.
B. Driving safety risk field computation
This comprises the object detection submodule B_1, the scene acquisition submodule B_2, and the safety field computation submodule B_3; the flow chart is shown in Fig. 3.
B_1: Object detection submodule. The scene point cloud obtained in (1) undergoes deep learning 3D object detection with the PV-RCNN algorithm: the scene point cloud data is input and the object detection results are obtained. Since the data source is lidar point cloud data, the lidar placement determines the size, characteristics, etc. of the scene point cloud data. A bounding box is produced for every target in the scene, with attributes such as position, length, height, width, and yaw angle, as shown in Fig. 4.
B_2: Scene acquisition submodule. The scene acquisition submodule obtains some features and information of the scene in advance of the object detection submodule, to support better object detection and the subsequent safety field computation. Several alternatives exist:
B_21: A camera sensor is added to the roadside perception unit; the RGB information of the scene collected by the camera sensor, together with the corresponding horizontal and vertical boundaries, is used to judge object types and thus assist in identifying static objects.
B_22: Before the automated processing of the object detection submodule, a manual judgment step is added: staff with relevant professional training calibrate the static objects in the traffic scene, thereby identifying them.
B_23: An existing high-definition (HD) map is used: the scene is located via the coordinate system, and the lane-level information in the HD map is used to identify the types of static objects.
B_3: Safety field computation submodule. The inputs of this submodule are the types of static objects and the object detection bounding boxes. Borrowing field-theoretic methods from physics, such as gravitational and magnetic fields, everything in the traffic environment that may create risk is regarded as a hazard source from which risk spreads outward; the field strength of the risk field can be understood as the risk coefficient at a given distance from the hazard source. The closer to the hazard centre, the higher the probability of an accident, and the farther away, the lower; when the distance approaches 0, contact collision between the target vehicle and the hazard source can be assumed, i.e., a traffic accident has occurred.
The driving safety risk field model comprises a static risk field and a dynamic risk field, i.e., driving safety risk field = static risk field + dynamic risk field.
E_S = E_R + E_V
where E_S is the field strength vector of the driving safety risk field, E_R is that of the static risk field, and E_V is that of the dynamic risk field. The driving safety risk field model represents the potential driving risk caused by traffic factors in the actual scene; risk is measured by the probability of an accident and its severity.
Depending on its generating source, i.e., static or dynamic risk field sources, the driving safety risk field falls into two classes:
1) Static risk field: the field sources are objects that are relatively stationary in the traffic environment, mainly road markings such as lane dividing lines, and also rigid separation facilities such as median strips. Such objects have two characteristics: ① ignoring road construction, they are stationary relative to the target vehicle; ② except for some rigid separation facilities, such objects make drivers deliberately keep away from them by legal effect, but even if a driver actually crosses a lane line, a traffic accident does not necessarily occur immediately. For such objects, according to the above analysis, the field strength vector E_R of the potential field formed by static risk field source a at (x_a, y_a), acting on target vehicle j at (x_j, y_j), is assumed to be:
Figure PCTCN2022084738-appb-000001
Figure PCTCN2022084738-appb-000002
where LT_a is the risk coefficient of static risk field source a; R_a is a constant greater than 0 representing the road-condition factor at (x_a, y_a); f_d is the distance influence factor of static risk field source a; r_aj is the distance vector between static risk field source a and target vehicle j, in which case (x_j, y_j) is the centroid of target vehicle j and (x_a, y_a) is the foot of the perpendicular dropped from (x_j, y_j) onto static risk field source a; r_aj/|r_aj| gives the direction of the field strength; and d is the width of target vehicle j. The larger the value of E_R, the higher the risk that static risk field source a poses to target vehicle j. Static risk field sources include but are not limited to lane markings.
2) Dynamic risk field: the field sources are objects in relative motion in the traffic environment, mainly vehicles, pedestrians, and roadblock facilities. Such objects likewise have two characteristics: ① taking the moving target vehicle as the reference frame, they have a relative velocity; ② collision with such objects is strictly forbidden, as it would inevitably cause a serious traffic accident. For such objects, according to the above analysis, the field strength vector E_V of the potential field formed by dynamic risk field source b at (x_b, y_b), acting on target vehicle j at (x_j, y_j), is assumed to be:
Figure PCTCN2022084738-appb-000003
r_bj = (x_j - x_b, y_j - y_b)
where the x-axis lies along the road line and the y-axis is perpendicular to it. r_bj is the distance vector between dynamic risk field source b and target vehicle j; k_2, k_3 and G are constants greater than 0; R_b has the same meaning as R_a; T_bj is the type correction coefficient between dynamic risk field source b and target vehicle j; v_bj is the relative velocity between dynamic risk field source b and target vehicle j; θ is the angle between the directions of v_bj and r_bj, positive clockwise. The larger the value of E_V, the higher the risk that dynamic risk field source b poses to target vehicle j.
Based on the above driving safety risk computation method, the risk that the various objects on the road pose to a given target vehicle can be analysed. Given the comprehensiveness of its data acquisition range and its advantages in target localisation, the present invention selects the point cloud data acquired by the roadside lidar as the data source, and uses the unobstructed roadside point cloud scanning results as the computation carrier.
For a given target vehicle in the field, the risk of each kind of object is computed as follows:
1) Through preliminary data acquisition, static scene data of the point cloud scanning results is constructed. The static risk field sources in the static scene, including lane dividing lines, median strips, and curb areas, are separated manually, and the plane line equation of each static risk field source is fitted from randomly sampled points.
2) A frame is selected as the computation moment, and the previous frame is extracted as the reference for object movement speed. Using a 3D object detection and tracking algorithm on the point cloud data, the objects participating in the safety field computation (generally vehicles, pedestrians, etc.) are identified in both the computation frame and the previous frame, and correspondences between the same object in the two frames are established. The movement speed of an object is computed from its annotation boxes and the lidar scanning frame rate. For newly appearing objects without a previous frame from which to compute speed, the speed is taken to be the standard speed.
3) The relative positions, types, and other attributes of the target vehicle and of every other object, the distances of the static risk field sources relative to the target vehicle from step 1), and traffic environment factors such as road conditions are substituted into the driving safety risk field computation mechanism. The speed of the target vehicle in the relative velocity is set as an unknown parameter and the computation is deferred, so that the relative velocity is an expression carrying an unknown parameter. The safety risk of every object in the scanning range with respect to the computation target is obtained, forming a unified driving safety risk distribution centred on the target vehicle.
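The deferred computation can be expressed as a closure over the unknown target speed, as in the hedged sketch below. Each field source is assumed to have been reduced to a callable f(p_j, v_j) returning a scalar field strength; this is not how the patent formalises it, but it captures the "expression with an unknown parameter" idea.

```python
def semi_finished_risk(field_sources, p_j):
    """'Semi-finished' risk field: the target speed v_j stays unknown until
    substitution. field_sources: callables f(p_j, v_j) -> scalar strength."""
    def substitute(v_j):
        # Scalar superposition over all sources once v_j is known.
        return sum(f(p_j, v_j) for f in field_sources)
    return substitute

# Usage: s2 = semi_finished_risk(sources, (x_j, y_j)); s1_value = s2(v_j)
```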
C. Data segmentation
The flow chart of the data segmentation module is shown in Fig. 8. The scene point cloud is first divided into points inside and points outside the bounding boxes: given the input scene point cloud and bounding box data, an algorithm is designed to test whether each point lies within a bounding box, dividing the point cloud into in-box and out-of-box point clouds.
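One concrete form of this in-box test, for detection boxes carrying a yaw angle as produced by 3D detectors such as PV-RCNN, is sketched below; the attribute order of the box tuple is an assumption.

```python
import numpy as np

def points_in_oriented_box(points, box):
    """Boolean mask of the points inside one oriented detection bounding box.

    box: (cx, cy, cz, length, width, height, yaw), yaw about the z-axis.
    """
    cx, cy, cz, length, width, height, yaw = box
    local = np.asarray(points, dtype=float) - np.array([cx, cy, cz])
    c, s = np.cos(yaw), np.sin(yaw)
    x = c * local[:, 0] + s * local[:, 1]      # rotate world -> box frame
    y = -s * local[:, 0] + c * local[:, 1]
    return ((np.abs(x) <= length / 2) &
            (np.abs(y) <= width / 2) &
            (np.abs(local[:, 2]) <= height / 2))
```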
Before the safety risk field work, two data segmentation methods existed: sampling and segmentation. With the safety risk field introduced, a safety risk field threshold is set on the basis of the computed field, and threshold screening selects the objects posing a higher risk to the target vehicle. There are thus four alternative data segmentation schemes:
C_1, sampling scheme: the scene point cloud data P_1 from the data acquisition submodule, and the object detection bounding boxes X_1 and safety risk field data S_1 from the driving safety risk field computation module, are the submodule inputs. First, a conditional test judges whether each point of P_1 lies within a bounding box X_1, yielding the in-box point cloud data P_11 and the out-of-box point cloud data P_12. Hyperparameters f_1 and f_2 are then set, and P_11 and P_12 are randomly sampled at rates f_1 and f_2 respectively, yielding the segmented point cloud data P_2.
C_2, segmentation scheme: with the same inputs and the same in-box test, the in-box point cloud data P_11 is retained and the out-of-box point cloud data P_12 is discarded, yielding the segmented point cloud data P_2.
C_3, sampling scheme based on the safety risk field: with the same inputs and the same in-box test, a numerical threshold f_3 on the safety risk field is set, and P_11 and P_12 are sampled according to the threshold, yielding the segmented point cloud data P_2.
C_4, segmentation scheme based on the safety risk field: with the same inputs and the same in-box test, a numerical threshold f_3 on the safety risk field is set, and P_11 and P_12 are segmented according to the threshold, yielding the segmented point cloud data P_2.
The dangerous range is extracted as follows: if the dangerous target is a static risk field source, the region of width d/2 on each side of the plane in which the risk field source lies is taken as the dangerous range, where d is the width of the dangerous target; if the dangerous target is a dynamic risk field source, a rectangle of width 1.5d and length (0.5l + 0.5l × k) centred on the centroid of the dangerous target is taken as the dangerous range, where d is the width of the dangerous target, l its length, and k a speed correction coefficient not less than 1. The dangerous ranges are extracted in order of the hazard coefficients of the hazard sources, and overlapping regions are extracted only once. The final extracted total dangerous range can be provided to the target vehicle as its perception assistance data.
D. Data publishing (optional)
The flow chart of the data publishing module is shown in Fig. 9. Based on the data segmentation results, the roadside perception unit compresses the data and then establishes a data transmission channel between the roadside perception unit and the target vehicle. The target vehicle satisfies the following characterisation: a vehicle with a given number, at a given position in the scene, at a given moment on the timestamp. Whether the target vehicle is moving is then judged, giving two alternatives:
D_1: if the target vehicle is stationary, the segmented point cloud and the vector sum of the static and dynamic risk fields, i.e., the safety risk field vector sum whose modulus is the numeric value, can be published directly;
D_2: if the target vehicle is moving, the segmented point cloud, the static risk field, and the semi-finished risk field data are published; substituting the target vehicle's speed then yields its safety risk field vector sum, whose modulus is the numeric value.
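In code, the D_1/D_2 branch only decides when the target speed is substituted into the semi-finished field. Reusing the closure sketched in module B above (an assumption about the interface, not the patented data format), a minimal version is:

```python
import numpy as np

def publish_risk_field(p2, semi_finished, v_j=None):
    """Publishing decision D1/D2 (a sketch): a stationary target (D1) is the
    v_j = 0 special case, so v_bj = v_b; a moving target (D2) substitutes its
    actual speed on the vehicle side."""
    v = np.zeros(2) if v_j is None else np.asarray(v_j, dtype=float)
    return p2, semi_finished(v)   # point cloud P2 plus numeric field value S1
```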
E. Data fusion (optional)
The point cloud of a certain region around the objects with high safety field risk values obtained by data segmentation is fused with the point cloud scanned by the target vehicle's lidar: a point cloud coordinate transformation matrix is designed to register the high-risk point clouds from the vehicle side and the roadside, and the fused point cloud is compressed to obtain the compressed point cloud data.
F. Method evaluation (optional)
Experiments are run for the different data segmentation methods. V denotes the unprocessed original vehicle-side point cloud;
I denotes the unprocessed original point cloud acquired by the roadside perception unit;
I_1 denotes the point cloud obtained by the roadside perception unit applying the segmentation variant of the data segmentation method to the original point cloud;
I_2 denotes the point cloud obtained by the roadside perception unit applying the sampling variant of the data segmentation method to the original point cloud;
I_1S denotes the point cloud obtained by the roadside perception unit applying the safety-field-based segmentation variant of the data segmentation method to the original point cloud;
I_2S denotes the point cloud obtained by the roadside perception unit applying the safety-field-based sampling variant of the data segmentation method to the original point cloud;
finally, the detection results of the different data segmentation methods are obtained and evaluated.
All symbols above and their meanings are summarised in the following table:
Symbol: Meaning
E_R: field strength vector of static risk field source a at (x_a, y_a) with respect to target vehicle j at (x_j, y_j)
LT_a: risk coefficients of the different types of static risk field source
R_a: static risk field computation constant; road-condition factor at (x_a, y_a)
D: lane width
d: width of target vehicle j
r_aj: distance vector between static risk field source a and target vehicle j
x_j: x-coordinate of the centroid of target vehicle j
y_j: y-coordinate of the centroid of target vehicle j
x_a: x-coordinate of the point where the perpendicular from the centroid of target vehicle j intersects static risk field source a
y_a: y-coordinate of the point where the perpendicular from the centroid of target vehicle j intersects static risk field source a
k_1: static risk field computation constant, representing the amplification factor of distance
E_V: field strength vector of dynamic risk field source b at (x_b, y_b) with respect to target vehicle j at (x_j, y_j)
r_bj: distance vector between dynamic risk field source b and target vehicle j
R_b: dynamic risk field computation constant; road-condition factor at (x_b, y_b)
T_bj: type correction coefficient between dynamic risk field source b and target vehicle j
v_bj: relative velocity between dynamic risk field source b and target vehicle j
θ: angle between the directions of v_bj and r_bj
k_2: dynamic risk field computation constant
k_3: dynamic risk field computation constant
G: dynamic risk field computation constant
k: speed correction coefficient for dangerous target extraction
Terminology
Driving safety risk field: the distribution of driving safety risk that the static and dynamic objects in a scene pose to a travelling vehicle; in the present invention, unless otherwise stated, synonymous with safety risk field.
Lidar: an active remote sensing device using a laser as the emitting light source and photoelectric detection techniques.
Point cloud: a data set of points in a coordinate system.
Point cloud data: includes three-dimensional coordinates X, Y, Z, colour, intensity values, time, etc., i.e., a structured matrix.
Target vehicle on-board lidar L_1: the lidar mounted on the target vehicle.
Roadside lidar L_2: the lidar installed at the roadside.
Vehicle-side lidar point cloud: the point cloud acquired by the vehicle-side lidar.
Roadside lidar point cloud: the point cloud acquired by the roadside lidar.
Scene point cloud: the point cloud of the traffic scene.
V2V: end-to-end wireless communication between moving vehicles, i.e., via V2V communication technology, vehicle terminals exchange wireless information with each other without forwarding through a base station.
Convolution: the result of multiplying two variables over a certain range and summing.
Convolutional neural network (CNN): a class of feedforward neural networks with convolutional computation and a deep structure, a representative deep learning algorithm.
Voxel: short for volume element; a solid containing voxels can be rendered by volume rendering or by extracting polygonal isosurfaces at a given threshold. It is the smallest unit of digital data in the partition of three-dimensional space.
MLP: multilayer perceptron, also called artificial neural network; besides the input and output layers there can be multiple hidden layers in between. The simplest MLP has a single hidden layer, i.e., a three-layer structure.
V2X: vehicle to everything, i.e., the exchange of information between a vehicle and the outside world.
RSU: Road Side Unit, installed at the roadside and communicating with on-board units.
OBU: On Board Unit.
Skeleton points: the key nodes of a 3D point cloud model.
Safety risk threshold: a numeric value whose magnitude is set manually according to the actual application scenario.
Dangerous target: in the safety risk computation, a target whose safety risk value with respect to the target vehicle exceeds the set threshold.
Field source: any of the objects participating in the computation process of the driving safety risk computation.
Point cloud registration: for point cloud data in different coordinate systems, registration finds the transformation matrices, i.e., rotation matrix R and translation matrix T, and computes the error to compare the matching results.
Data acquisition module A: performs data acquisition; input is the traffic scene, output is the scene point cloud data P_1.
Driving safety risk field computation module B: performs the driving safety risk field computation, comprising object detection submodule B_1, scene acquisition submodule B_2, and safety field computation submodule B_3; input is the scene point cloud data P_1, outputs are the object detection bounding boxes X_1 and the safety risk field values S_1 / semi-finished field S_2.
Bounding box: a point cloud object detection result with attributes such as position, length, height, width, and yaw angle, e.g., the object detection bounding box X_1.
Safety risk field value S_1: the modulus of the vector sum of the safety risk fields of all risk sources in the scene with respect to a given object.
Semi-finished risk field S_2: in the dynamic risk field computation, the target vehicle's speed is set as an unknown parameter; the expression carrying this unknown parameter and passed onward is called semi-finished.
Data segmentation module C: performs data segmentation; inputs are the scene point cloud data P_1, object detection bounding boxes X_1, and safety risk field values S_1 / semi-finished risk field S_2; output is the segmented point cloud data P_2.
Data publishing module D: performs data publishing, i.e., the roadside perception unit publishes data to the target vehicle in the scene; the published data are the segmented point cloud data P_2 and the safety risk field / semi-finished risk field data S_1 / S_2.
Data fusion module E: fuses the segmented point cloud data P_2 with the target vehicle's point cloud data P_3 to obtain the fused point cloud data P_4, which is compressed into the compressed point cloud data P_5.
Method evaluation module F: applies the PV-RCNN deep learning object detection algorithm to the compressed point cloud data P_5 to obtain the detection results R_1, evaluates R_1, and selects the best data segmentation scheme.
Roadside perception unit: includes but is not limited to roadside lidar, cameras, and other sensors.
Target vehicle: the object for which the risk sources in the driving safety risk field generate risk, and the object to which the roadside perception unit transmits data, i.e., the ego car.
Target vehicle: i.e., the target vehicle in the safety risk field computation process.
Target vehicle lidar L_3: the lidar mounted on the target vehicle.
Segmentation: a data segmentation method that separates the point clouds of detected targets from those of non-detected targets; its meaning here differs from "data segmentation".
Sampling: a data segmentation method that randomly samples the point clouds of detected and non-detected targets by weight, with default parameters 0.8 and 0.2.
Data flow symbol description:
Figure PCTCN2022084738-appb-000004
Figure PCTCN2022084738-appb-000005
Brief Description of the Drawings
Fig. 1 Flow chart of the method
Fig. 2 Flow chart of the data acquisition module
Fig. 3 Flow chart of the driving safety risk field computation module
Fig. 4 Schematic of object detection bounding boxes
Fig. 5 Schematic of the field strength distributions of the two safety risk fields
(a) Static risk field distribution
(b) Dynamic risk field distribution
(c) Illustration of the r_aj computation for the static risk field
Fig. 6 Schematic of the safety risk field distribution
Fig. 7 Schematic of the xoy-plane projection of the safety risk field
Fig. 8 Flow chart of the data segmentation module
Fig. 9 Flow chart of the data publishing module
Fig. 10 Flow chart of the data fusion module
Fig. 11 Flow chart of the method evaluation module
Fig. 12 Schematic of the method evaluation reference system
Fig. 13 Flow chart of the scheme variant (A, B, C)
Fig. 14 Flow chart of the scheme variant (A, B, C, D)
Fig. 15 Flow chart of the scheme variant (A, B, C, D, E)
Detailed Description
The present invention is described in further detail below with reference to the drawings and specific embodiments.
A roadside lidar point cloud segmentation method based on a safety risk field mechanism, whose flow chart is shown in Fig. 1, comprises six modules: data acquisition module A, driving safety risk field computation module B, data segmentation module C, data publishing module D, data fusion module E, and method evaluation module F.
The detailed description presents two examples: A_1, B, C_1; and A_2, B, C_3, D_2, E, F.
The steps are as follows:
1. A_1, B, C_1
The flow chart is shown in Fig. 13.
A. Data acquisition
A_1: In an autonomous driving traffic scene, the lidar scan is performed only by the roadside lidar in the roadside perception unit, acquiring the point cloud data P_1 of the traffic scene; this data is the data source for all subsequent steps.
B. Driving safety risk field computation
The driving safety risk field computation module comprises the object detection submodule and the safety field computation submodule; the flow chart is shown in Fig. 3.
B_1: Object detection submodule. The scene point cloud obtained in A undergoes deep learning 3D object detection with the PV-RCNN algorithm: the scene point cloud data is input and the object detection results are obtained. Since the data source is lidar point cloud data, the lidar placement determines the size, characteristics, etc. of the scene point cloud data. A bounding box X_1 is produced for every target in the scene, with attributes such as position, length, height, width, and yaw angle, as shown in Fig. 4.
B_2: Scene acquisition submodule. The scene acquisition submodule obtains some features and information of the scene in advance of the object detection submodule, to support better object detection and the subsequent safety field computation. Several alternatives exist:
B_21: A camera sensor is added to the roadside perception unit; the RGB information of the scene collected by the camera, together with the corresponding horizontal and vertical boundaries, is used to judge object types and thus assist in identifying static objects.
B_22: Before the automated processing of the object detection submodule, a manual judgment step is added: staff with relevant professional training calibrate the static objects in the traffic scene, thereby identifying them.
B_23: An existing HD map is used: the scene is located via the coordinate system, and the lane-level information in the HD map is used to identify the types of static objects.
B_3: Safety field computation submodule. The inputs of this submodule are the types of static objects and the object detection bounding boxes. Borrowing field-theoretic methods from physics, such as gravitational and magnetic fields, everything in the traffic environment that may create risk is regarded as a hazard source from which risk spreads outward; the field strength of the risk field can be understood as the risk coefficient at a given distance from the hazard source. The closer to the hazard centre, the higher the probability of an accident, and the farther away, the lower; when the distance approaches 0, contact collision between the target vehicle and the hazard source can be assumed, i.e., a traffic accident has occurred.
The driving safety risk field model comprises a static risk field and a dynamic risk field, i.e., driving safety risk field = static risk field + dynamic risk field.
E_S = E_R + E_V
where E_S is the field strength vector of the driving safety risk field, E_R is that of the static risk field, and E_V is that of the dynamic risk field. The driving safety risk field model represents the potential driving risk caused by traffic factors in the actual scene; risk is measured by the probability of an accident and its severity.
Depending on its generating source, i.e., static or dynamic risk field sources, the driving safety risk field falls into two classes:
1) Static risk field: the field sources are objects that are relatively stationary in the traffic environment, mainly road markings such as lane dividing lines, and also rigid separation facilities such as median strips. For example, traffic regulations stipulate that vehicles must not drive on or cross solid-line lanes. If a driver unconsciously leaves the current lane, however, the risk of violating the lane-marking constraint is perceived, and the driver will steer the vehicle back to the lane centre. Meanwhile, the closer the vehicle is to the lane marking, the greater the risk. Driving risk is also related to road conditions: poor conditions may lead to high risk. In addition, the driving risk of relatively stationary objects is mainly affected by visibility: the lower the visibility, the higher the driving risk.
Such objects have two characteristics: ① ignoring road construction, they are stationary relative to the target vehicle; their actual meaning is that of a hazard boundary, and they have no velocity attribute; ② except for some rigid separation facilities, such objects make drivers deliberately keep away from them by legal effect, but even if a driver actually crosses a lane line, a traffic accident does not necessarily occur immediately.
For such objects, according to the above analysis, the field strength vector E_R of the potential field formed by static risk field source a at (x_a, y_a), acting on target vehicle j at (x_j, y_j), is assumed to be:
Figure PCTCN2022084738-appb-000006
Figure PCTCN2022084738-appb-000007
Figure PCTCN2022084738-appb-000008
where:
LT_a is the risk coefficient of the different types of static risk field source a, determined by traffic regulations; generally, rigid separation facilities > non-crossable lane dividing lines > crossable lane dividing lines. Typical parameter values for common facilities and lane lines are: guardrail-type or green-belt-type median strip: 20-25; sidewalk curbstones: 18-20; yellow solid or dashed line: 15-18; white solid line: 10-15; white dashed line: 0-5.
R_a is a constant greater than 0 representing the road-condition factor at (x_a, y_a), determined by traffic environment factors in the vicinity of object a such as the road adhesion coefficient, road slope, road curvature, and visibility; in practice, a fixed value is generally chosen for a given road section. The interval generally used is [0.5, 1.5].
f_d is the distance influence factor of the different types of static risk field source a, determined by the object type, object width, lane width, etc. There are currently two types: lane dividing lines and rigid dividing strips.
D is the lane width, and d is the width of target vehicle j; d can generally be taken as the width of the bounding box of target vehicle j.
r_aj is the distance vector between static risk field source a and target vehicle j; in this case (x_j, y_j) is the centroid of target vehicle j, and (x_a, y_a) is the foot of the perpendicular dropped from (x_j, y_j) onto static risk field source a.
k_1 is a constant greater than 0 representing the amplification factor of distance, because in general the collision risk does not vary linearly with the distance between two objects. k_1 generally takes values in 0.5-1.5.
r_aj/|r_aj| gives the direction of the field strength; in practical applications, however, even if two safety risk field sources have opposite field strength directions at a given point, the risk at that point cannot be considered reduced, so the field strengths are usually still superimposed as scalars.
The larger the value of E_R, the higher the risk that static risk field source a poses to target vehicle j. Static risk field sources include but are not limited to lane markings. The field strength distribution is shown in Fig. 5(a).
2) Dynamic risk field: the field sources are objects in relative motion in the traffic environment; the magnitude and direction of the field strength vector are determined by the attributes and states of the moving object and by road conditions. Dynamic objects here are those that can actually collide with vehicles and cause heavy losses, mainly vehicles, pedestrians, and roadblock facilities.
Such objects likewise have two characteristics: ① although they may be stationary relative to the road surface, e.g., roadside parking or roadblock facilities, they still have a relative velocity in the reference frame of the moving target vehicle; ② collision with such objects is strictly forbidden, as it would inevitably cause a serious traffic accident.
Clearly, the risk of a relatively moving object does not increase linearly as the distance decreases: as the distance shrinks, the risk grows ever faster. The present invention therefore assumes that driving risk takes the form of a power function of the vehicle-target distance.
For such objects, according to the above analysis, the field strength vector E_V of the potential field formed by dynamic risk field source b at (x_b, y_b), acting on target vehicle j at (x_j, y_j), is assumed to be:
Figure PCTCN2022084738-appb-000009
r_bj = (x_j - x_b, y_j - y_b)
where the x-axis lies along the road line and the y-axis is perpendicular to it.
r_bj is the distance vector between dynamic risk field source b and target vehicle j.
k_2, k_3 and G are constants greater than 0; k_2 has the same meaning as k_1 above, k_3 is the hazard correction for different speeds, and G, analogous to the electrostatic force constant, describes the magnitude of the risk coefficient between two objects of unit mass at unit distance. Generally, k_2 takes values in 0.5-1.5, k_3 in 0.05-0.2, and G is usually taken as 0.001.
R_b has the same meaning as R_a, likewise with the interval [0.5, 1.5].
T_bj is the type correction coefficient between dynamic risk field source b and target vehicle j; the hazard coefficients of, e.g., vehicle-vehicle and vehicle-pedestrian collisions differ. Typical type correction parameters are: vehicle-vehicle frontal collision: 2.5-3; vehicle-vehicle rear-end collision: 1-1.5; pedestrian-vehicle collision: 2-2.5; vehicle-roadblock collision: 1.5-2.
v_bj is the relative velocity between dynamic risk field source b and target vehicle j, i.e., the vector difference between the velocity v_b of field source b and the velocity v_j of target vehicle j. θ is the angle between the directions of v_bj and r_bj, positive in the clockwise direction.
The semi-finished form of the relative velocity is:
v_bj = v_b - v_j
If target vehicle j is stationary, i.e., v_j = 0, then v_bj = v_b; if target vehicle j is moving, v_bj = v_b - v_j.
The larger the value of E_V, the higher the risk that dynamic risk field source b poses to target vehicle j. The field strength distribution is shown in Fig. 5(b).
Based on the above driving safety risk computation method, the risk that the various objects on the road pose to a given target vehicle can be analysed. Given the comprehensiveness of its data acquisition range and its advantages in target localisation, the present invention selects lidar point cloud data as the data source, and uses the unobstructed roadside point cloud scanning results as the computation carrier.
For a given target vehicle in the field, the risk of each object is computed as follows:
1) Through preliminary data acquisition, static scene data of the point cloud scanning results is constructed, specifically:
a. Multiple frames of point cloud data are acquired, and each frame is divided into n statistical spaces; depending on the lidar's scanning range, n can take values in 50-100;
b. Starting from the initial frame, the next frame of point cloud data is superimposed in turn; dynamic objects must be removed manually during superposition, and it should be ensured as far as possible that no occluded black regions remain in the point cloud data;
c. At each superposition, the point cloud density of every statistical space is checked; if it exceeds the threshold α (related to point cloud density, generally 1000), the point cloud in that space is randomly sampled to hold its density, finally yielding a fairly ideal global static point cloud background.
The static risk field sources in the static scene, including lane dividing lines, median strips, and curb areas, are separated manually, and the plane line equation of each static risk field source is fitted from randomly sampled points. Generally, more than 100 points should be collected evenly along the visually estimated line direction, and the collected points should not deviate too far from the target.
2) A frame is selected as the computation moment, as shown in Fig. 7, and the previous frame is extracted as the reference for object movement speed. Using a 3D object detection and tracking algorithm on the point cloud data, the objects participating in the safety field computation (generally vehicles, pedestrians, etc.) are identified in both the computation frame and the previous frame, and correspondences between the same object in the two frames are established. The 3D detection and tracking methods used in this step are not part of the invention and are not described; in principle the choice of algorithm is unrestricted, but considering timeliness in practical use, the computational efficiency of detection and tracking should generally be no less than 1.25f, where f is the lidar point cloud scanning frequency.
Using the bounding boxes of the same object, its approximate centroid is computed; the displacement of the centroid between the two frames is then computed, and finally the object's movement speed is obtained from the lidar scanning frame rate (a sketch of this estimate follows step 3 below). For newly appearing objects without a previous frame from which to compute speed, the speed is taken to be the standard speed: the historical average vehicle speed on the point-cloud-scanned road section, with direction consistent with the lane in which the target is located.
3) The relative positions, relative velocities, types, and other attributes of the target vehicle and of every other object, the distances of the static risk field sources relative to the target vehicle from step 1), and traffic environment factors such as road conditions are substituted in turn into the above driving safety risk field computation mechanism, giving the risk of every object in the scanning range with respect to the computation target, and thus forming a driving safety risk distribution centred on the target vehicle, as shown in Figs. 5 and 6.
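The speed estimate referenced in step 2) is a two-frame finite difference. A sketch, under the assumption that each detection provides its box centroid, is:

```python
import numpy as np

def object_speed(centroid_prev, centroid_curr, scan_rate_hz):
    """Velocity of a tracked object from two consecutive frames: centroid
    displacement multiplied by the lidar scanning frame rate."""
    displacement = (np.asarray(centroid_curr, dtype=float)
                    - np.asarray(centroid_prev, dtype=float))
    velocity = displacement * scan_rate_hz       # metres per second per axis
    return velocity, float(np.linalg.norm(velocity))
```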
C. Data segmentation
The flow chart of the data segmentation module is shown in Fig. 8. The scene point cloud is first divided into points inside and points outside the bounding boxes: given the input scene point cloud and bounding box data, an algorithm is designed to test whether each point lies within a bounding box, dividing the point cloud into in-box and out-of-box point clouds.
C_1, sampling scheme: the scene point cloud data P_1 from the data acquisition submodule, and the object detection bounding boxes X_1 and safety risk field data S_1 from the driving safety risk field computation module, are the submodule inputs. First, a conditional test judges whether each point of P_1 lies within a bounding box X_1, yielding the in-box point cloud data P_11 and the out-of-box point cloud data P_12. Hyperparameters f_1 and f_2 are then set, and P_11 and P_12 are randomly sampled at rates f_1 and f_2, yielding the segmented point cloud data P_2. With each screened object as the centre, the point cloud of a certain surrounding region is extracted, by sampling or segmentation, as the dangerous range, which is determined as follows:
If the dangerous target is a static risk field source, the region of width d/2 on each side of the plane in which the risk field source lies is the dangerous range, where d is the width of the target vehicle.
If the dangerous target is a dynamic risk field source, a rectangle of width 1.5d and length (0.5l + 0.5l × k) centred on the centroid of the dangerous target is the dangerous range, where the 0.5l part is the half-length on the side away from the target vehicle and 0.5l × k is the half-length on the side facing the target vehicle; d is the width of the dangerous target, l its length, and k a speed correction coefficient not less than 1, depending on the speed of the dangerous target: v ∈ (0, 30) km/h, k = 2; v ∈ (30, 70) km/h, k = 3; v > 70 km/h, k = 5.
The dangerous ranges are extracted in order of the hazard coefficients of the hazard sources; overlapping regions are extracted only once. The final extracted total dangerous range can be provided to the target vehicle as its perception assistance data.
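The random sampling in scheme C_1 can be sketched as below, with the default weights 0.8 / 0.2 named in the terminology section; the in-box and out-of-box clouds P_11 and P_12 are assumed to be numpy arrays.

```python
import numpy as np

def sample_scheme_c1(p11, p12, f1=0.8, f2=0.2, seed=0):
    """Scheme C1: keep a random fraction f1 of in-box points P11 and f2 of
    out-of-box points P12, yielding the segmented point cloud P2."""
    rng = np.random.default_rng(seed)
    keep_in = rng.random(len(p11)) < f1
    keep_out = rng.random(len(p12)) < f2
    return np.vstack([p11[keep_in], p12[keep_out]])
```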
2. A_2, B, C_3, D_2, E, F
The flow chart is shown in Fig. 1.
A. Data acquisition
A_2: In an autonomous driving traffic scene, the lidar scan is performed by the roadside lidar in the roadside perception unit and by the lidar mounted on a calibration vehicle preset in the scene, acquiring the point cloud data P_1 of the traffic scene; this data is the data source for all subsequent steps.
B. Driving safety risk field computation
The driving safety risk field computation module comprises the object detection submodule and the safety field computation submodule; the flow chart is shown in Fig. 3.
B_1: Object detection submodule. The scene point cloud obtained in A undergoes deep learning 3D object detection with the PV-RCNN algorithm: the scene point cloud data is input and the object detection results are obtained. Since the data source is lidar point cloud data, the lidar placement determines the size, characteristics, etc. of the scene point cloud data. A bounding box X_1 is produced for every target in the scene, with attributes such as position, length, height, width, and yaw angle, as shown in Fig. 4.
B_2: Scene acquisition submodule. The scene acquisition submodule obtains some features and information of the scene in advance of the object detection submodule, to support better object detection and the subsequent safety field computation. Several alternatives exist:
B_21: A camera sensor is added to the roadside perception unit; the RGB information of the scene collected by the camera, together with the corresponding horizontal and vertical boundaries, is used to judge object types and thus assist in identifying static objects.
B_22: Before the automated processing of the object detection submodule, a manual judgment step is added: staff with relevant professional training calibrate the static objects in the traffic scene, thereby identifying them.
B_23: An existing HD map is used: the scene is located via the coordinate system, and the lane-level information in the HD map is used to identify the types of static objects.
B_3: Safety field computation submodule. The inputs of this submodule are the types of static objects and the object detection bounding boxes. Borrowing field-theoretic methods from physics, such as gravitational and magnetic fields, everything in the traffic environment that may create risk is regarded as a hazard source from which risk spreads outward; the field strength of the risk field can be understood as the risk coefficient at a given distance from the hazard source. The closer to the hazard centre, the higher the probability of an accident, and the farther away, the lower; when the distance approaches 0, contact collision between the target vehicle and the hazard source can be assumed, i.e., a traffic accident has occurred.
The driving safety risk field model comprises a static risk field and a dynamic risk field, i.e., driving safety risk field = static risk field + dynamic risk field.
E_S = E_R + E_V
where E_S is the field strength vector of the driving safety risk field, E_R is that of the static risk field, and E_V is that of the dynamic risk field. The driving safety risk field model represents the potential driving risk caused by traffic factors in the actual scene; risk is measured by the probability of an accident and its severity.
Depending on its generating source, i.e., static or dynamic risk field sources, the driving safety risk field falls into two classes:
1) Static risk field: the field sources are objects that are relatively stationary in the traffic environment, mainly road markings such as lane dividing lines, and also rigid separation facilities such as median strips. For example, traffic regulations stipulate that vehicles must not drive on or cross solid-line lanes. If a driver unconsciously leaves the current lane, however, the risk of violating the lane-marking constraint is perceived, and the driver will steer the vehicle back to the lane centre. Meanwhile, the closer the vehicle is to the lane marking, the greater the risk. Driving risk is also related to road conditions: poor conditions may lead to high risk. In addition, the driving risk of relatively stationary objects is mainly affected by visibility: the lower the visibility, the higher the driving risk.
Such objects have two characteristics: ① ignoring road construction, they are stationary relative to the target vehicle; their actual meaning is that of a hazard boundary, and they have no velocity attribute; ② except for some rigid separation facilities, such objects make drivers deliberately keep away from them by legal effect, but even if a driver actually crosses a lane line, a traffic accident does not necessarily occur immediately.
For such objects, according to the above analysis, the field strength vector E_R of the potential field formed by static risk field source a at (x_a, y_a), acting on target vehicle j at (x_j, y_j), is assumed to be:
Figure PCTCN2022084738-appb-000010
Figure PCTCN2022084738-appb-000011
Figure PCTCN2022084738-appb-000012
where:
LT_a is the risk coefficient of the different types of static risk field source a, determined by traffic regulations; generally, rigid separation facilities > non-crossable lane dividing lines > crossable lane dividing lines. Typical parameter values for common facilities and lane lines are: guardrail-type or green-belt-type median strip: 20-25; sidewalk curbstones: 18-20; yellow solid or dashed line: 15-18; white solid line: 10-15; white dashed line: 0-5.
R_a is a constant greater than 0 representing the road-condition factor at (x_a, y_a), determined by traffic environment factors in the vicinity of object a such as the road adhesion coefficient, road slope, road curvature, and visibility; in practice, a fixed value is generally chosen for a given road section. The interval generally used is [0.5, 1.5].
f_d is the distance influence factor of the different types of static risk field source a, determined by the object type, object width, lane width, etc. There are currently two types: lane dividing lines and rigid dividing strips.
D is the lane width, and d is the width of target vehicle j; d can generally be taken as the width of the bounding box of target vehicle j.
r_aj is the distance vector between static risk field source a and target vehicle j; in this case (x_j, y_j) is the centroid of target vehicle j, and (x_a, y_a) is the foot of the perpendicular dropped from (x_j, y_j) onto static risk field source a.
k_1 is a constant greater than 0 representing the amplification factor of distance, because in general the collision risk does not vary linearly with the distance between two objects. k_1 generally takes values in 0.5-1.5.
r_aj/|r_aj| gives the direction of the field strength; in practical applications, however, even if two safety risk field sources have opposite field strength directions at a given point, the risk at that point cannot be considered reduced, so the field strengths are usually still superimposed as scalars.
The larger the value of E_R, the higher the risk that static risk field source a poses to target vehicle j. Static risk field sources include but are not limited to lane markings. The field strength distribution is shown in Fig. 5(a).
2) Dynamic risk field: the field sources are objects in relative motion in the traffic environment; the magnitude and direction of the field strength vector are determined by the attributes and states of the moving object and by road conditions. Dynamic objects here are those that can actually collide with vehicles and cause heavy losses, mainly vehicles, pedestrians, and roadblock facilities.
Such objects likewise have two characteristics: ① although they may be stationary relative to the road surface, e.g., roadside parking or roadblock facilities, they still have a relative velocity in the reference frame of the moving target vehicle; ② collision with such objects is strictly forbidden, as it would inevitably cause a serious traffic accident.
Clearly, the risk of a relatively moving object does not increase linearly as the distance decreases: as the distance shrinks, the risk grows ever faster. The present invention therefore assumes that driving risk takes the form of a power function of the vehicle-target distance.
For such objects, according to the above analysis, the field strength vector E_V of the potential field formed by dynamic risk field source b at (x_b, y_b), acting on target vehicle j at (x_j, y_j), is assumed to be:
Figure PCTCN2022084738-appb-000013
r_bj = (x_j - x_b, y_j - y_b)
where the x-axis lies along the road line and the y-axis is perpendicular to it.
r_bj is the distance vector between dynamic risk field source b and target vehicle j.
k_2, k_3 and G are constants greater than 0; k_2 has the same meaning as k_1 above, k_3 is the hazard correction for different speeds, and G, analogous to the electrostatic force constant, describes the magnitude of the risk coefficient between two objects of unit mass at unit distance. Generally, k_2 takes values in 0.5-1.5, k_3 in 0.05-0.2, and G is usually taken as 0.001.
R_b has the same meaning as R_a, likewise with the interval [0.5, 1.5].
T_bj is the type correction coefficient between dynamic risk field source b and target vehicle j; the hazard coefficients of, e.g., vehicle-vehicle and vehicle-pedestrian collisions differ. Typical type correction parameters are: vehicle-vehicle frontal collision: 2.5-3; vehicle-vehicle rear-end collision: 1-1.5; pedestrian-vehicle collision: 2-2.5; vehicle-roadblock collision: 1.5-2.
v_bj is the relative velocity between dynamic risk field source b and target vehicle j, i.e., the vector difference between the velocity v_b of field source b and the velocity v_j of object j. θ is the angle between the directions of v_bj and r_bj, positive in the clockwise direction.
The semi-finished form of the relative velocity is:
v_bj = v_b - v_j
If target vehicle j is stationary, i.e., v_j = 0, then v_bj = v_b; if target vehicle j is moving, v_bj = v_b - v_j.
The larger the value of E_V, the higher the risk that dynamic risk field source b poses to target vehicle j. The field strength distribution is shown in Fig. 5(b).
Based on the above driving safety risk computation method, the risk that the various objects on the road pose to a given specific object can be analysed. Given the comprehensiveness of its data acquisition range and its advantages in target localisation, the present invention selects lidar point cloud data as the data source, and uses the unobstructed roadside point cloud scanning results as the computation carrier.
For a given target vehicle in the field, the risk of each object is computed as follows:
1) Through preliminary data acquisition, static scene data of the point cloud scanning results is constructed, specifically:
a. Multiple frames of point cloud data are acquired, and each frame is divided into n statistical spaces; depending on the lidar's scanning range, n can take values in 50-100;
b. Starting from the initial frame, the next frame of point cloud data is superimposed in turn; dynamic objects must be removed manually during superposition, and it should be ensured as far as possible that no occluded black regions remain in the point cloud data;
c. At each superposition, the point cloud density of every statistical space is checked; if it exceeds the threshold α (related to point cloud density, generally 1000), the point cloud in that space is randomly sampled to hold its density, finally yielding a fairly ideal global static point cloud background.
The static risk field sources in the static scene, including lane dividing lines, median strips, and curb areas, are separated manually, and the plane line equation of each static risk field source is fitted from randomly sampled points. Generally, more than 100 points should be collected evenly along the visually estimated line direction, and the collected points should not deviate too far from the target.
2) A frame is selected as the computation moment, as shown in Fig. 7, and the previous frame is extracted as the reference for object movement speed. Using a 3D object detection and tracking algorithm on the point cloud data, the objects participating in the safety field computation (generally vehicles, pedestrians, etc.) are identified in both the computation frame and the previous frame, and correspondences between the same object in the two frames are established. The 3D detection and tracking methods used in this step are not part of the invention and are not described; in principle the choice of algorithm is unrestricted, but considering timeliness in practical use, the computational efficiency of detection and tracking should generally be no less than 1.25f, where f is the lidar point cloud scanning frequency.
Using the bounding boxes of the same object, its approximate centroid is computed; the displacement of the centroid between the two frames is then computed, and finally the object's movement speed is obtained from the lidar scanning frame rate. For newly appearing objects without a previous frame from which to compute speed, the speed is taken to be the standard speed: the historical average vehicle speed on the point-cloud-scanned road section, with direction consistent with the lane in which the target is located.
3) The relative positions, relative velocities, types, and other attributes of the target vehicle and of every other object, the distances of the static risk field sources relative to the target vehicle from step 1), and traffic environment factors such as road conditions are substituted in turn into the above driving safety risk field computation mechanism, giving the risk of every object in the scanning range with respect to the computation target, and thus forming a driving safety risk distribution centred on the target vehicle, as shown in Figs. 5 and 6.
C. Data segmentation
The flow chart of the data segmentation module is shown in Fig. 8. The scene point cloud is first divided into points inside and points outside the bounding boxes: given the input scene point cloud and bounding box data, an algorithm is designed to test whether each point lies within a bounding box, dividing the point cloud into in-box and out-of-box point clouds.
C_3, sampling scheme based on the safety risk field: the scene point cloud data P_1 from the data acquisition submodule, and the object detection bounding boxes X_1 and safety risk field data S_1 from the driving safety risk field computation module, are the submodule inputs. First, a conditional test judges whether each point of P_1 lies within a bounding box X_1, yielding the in-box point cloud data P_11 and the out-of-box point cloud data P_12. A numerical threshold f_3 on the safety risk field is then set, and P_11 and P_12 are sampled according to the threshold, yielding the segmented point cloud data P_2.
If the dangerous target is a static risk field source, the region of width d/2 on each side of the plane in which the risk field source lies is the dangerous range, where d is the width of the target vehicle.
If the dangerous target is a dynamic risk field source, a rectangle of width 1.5d and length (0.5l + 0.5l × k) centred on the centroid of the dangerous target is the dangerous range, where the 0.5l part is the half-length on the side away from the target vehicle and 0.5l × k is the half-length on the side facing the target vehicle; d is the width of the dangerous target, l its length, and k a speed correction coefficient not less than 1, depending on the speed of the dangerous target: v ∈ (0, 30) km/h, k = 2; v ∈ (30, 70) km/h, k = 3; v > 70 km/h, k = 5.
The dangerous ranges are extracted in order of the hazard coefficients of the hazard sources; overlapping regions are extracted only once. The final extracted total dangerous range can be provided to the target vehicle as its perception assistance data.
D. Data publishing
The flow chart of the data publishing module is shown in Fig. 9. Based on the data segmentation results, the roadside perception unit compresses the data and then establishes a data transmission channel between the roadside perception unit and the target vehicle; the target vehicle satisfies the following characterisation: a vehicle with a given number, at a given position in the scene, at a given moment on the timestamp. Whether the target vehicle is moving is then judged:
D_2: the segmented point cloud data P_2 and the semi-finished risk field data S_2 are published; substituting the target vehicle's speed into the semi-finished risk field data S_2 yields the safety risk field data S_1.
E. Data fusion
The flow chart of the data fusion module is shown in Fig. 10. The segmented point cloud data P_2 and the target vehicle point cloud data P_3 scanned by the target vehicle's lidar are fused: a point cloud coordinate transformation matrix is designed to register the high-risk point clouds from the vehicle side and the roadside, yielding the fused point cloud data P_4, which is compressed to obtain the compressed point cloud data P_5.
F. Method evaluation
The flow chart of the method evaluation module is shown in Fig. 11; it is an optional submodule, described here for reference. The PV-RCNN deep learning object detection algorithm is applied to the compressed point cloud data P_5 to obtain the object detection results R_1.
The object detection results R_1 are evaluated. The reference system is shown in Fig. 12.
V denotes the unprocessed original vehicle-side point cloud;
I denotes the unprocessed original point cloud acquired by the roadside perception unit;
I_1 denotes the point cloud obtained by the roadside perception unit applying the segmentation variant of the data segmentation method to the original point cloud;
I_2 denotes the point cloud obtained by the roadside perception unit applying the sampling variant of the data segmentation method to the original point cloud;
I_1S denotes the point cloud obtained by the roadside perception unit applying the safety-field-based segmentation variant of the data segmentation method to the original point cloud;
I_2S denotes the point cloud obtained by the roadside perception unit applying the safety-field-based sampling variant of the data segmentation method to the original point cloud;
finally, the detection results of the different data segmentation methods are obtained and evaluated.

Claims (11)

  1. A lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field, comprising the following steps:
    (1) Data acquisition
    A roadside lidar is deployed, and the point cloud data (P_1) of the traffic scene is acquired by roadside lidar scanning; the point cloud data (P_1) is the data source of the subsequent steps;
    (2) Driving safety risk field computation
    The driving safety risk field comprises a static risk field and a dynamic risk field;
    the field strength vector of the driving safety risk field is E_S = E_R + E_V,
    where E_S is the field strength vector of the driving safety risk field, E_R is that of the static risk field, and E_V is that of the dynamic risk field; the field strength vector of the safety risk field represents the risk that the various objects on the road pose to the target vehicle;
    the object detection bounding boxes (X_1), the safety risk field values (S_1), and the semi-finished safety risk field (S_2) are computed from the point cloud data (P_1);
    (3) Data segmentation
    According to the point cloud data (P_1) acquired in step (1) and the object detection bounding boxes (X_1), an algorithm is designed to test whether the point cloud data (P_1) lies within the object detection bounding boxes (X_1), dividing the point cloud data (P_1) into in-box point cloud data (P_11) and out-of-box point cloud data (P_12);
    based on the safety risk field values (S_1) of step (2), a safety risk field threshold (f_3) is set, and threshold screening selects the objects whose safety risk field value is greater than the safety risk field threshold (f_3) as dangerous targets; with each screened dangerous target as the centre, the point cloud data (P_2) of a certain region around the dangerous target is extracted, by sampling or segmentation, as the dangerous range.
  2. The method of claim 1, further comprising at least one of the following steps:
    (4) Data publishing
    The point cloud data (P_2) obtained by data segmentation is compressed by the roadside lidar;
    a data transmission channel between the roadside lidar and the target vehicle is established; the target vehicle satisfies the following characterisation: a vehicle with a given number, at a given position in the autonomous driving traffic scene, at a given moment on the timestamp;
    whether the target vehicle is moving is judged: if the target vehicle is stationary, the segmented point cloud data (P_2) and the vector sum of the static and dynamic risk fields, i.e., the safety risk field strength vector (E_S), can be published directly; if the target vehicle is moving, the segmented point cloud data (P_2), the static risk field (E_R), and the semi-finished risk field data (S_2) are published, and substituting the target vehicle's speed yields its safety risk field vector sum, whose modulus is the numeric value;
    (5) Data fusion
    The segmented point cloud data (P_2) and the point cloud data (P_3) scanned by the target vehicle's lidar are fused: a point cloud coordinate transformation matrix is designed to register the high-risk point clouds from the vehicle side and the roadside, yielding the fused point cloud data (P_4); the fused point cloud data (P_4) is compressed to obtain the compressed point cloud data (P_5);
    (6) Method evaluation
    Experiments are run for the different data segmentation methods, where
    V denotes the unprocessed original vehicle-side point cloud;
    I denotes the unprocessed original point cloud acquired by the roadside lidar;
    I_1 denotes the point cloud obtained by the roadside lidar applying the segmentation variant of the data segmentation method to the original point cloud;
    I_2 denotes the point cloud obtained by the roadside lidar applying the sampling variant of the data segmentation method to the original point cloud;
    I_1S denotes the point cloud obtained by the roadside lidar applying the safety-field-based segmentation variant of the data segmentation method to the original point cloud;
    I_2S denotes the point cloud obtained by the roadside lidar applying the safety-field-based sampling variant of the data segmentation method to the original point cloud;
    finally, the detection results of the different data segmentation methods are obtained and evaluated.
  3. The method of claim 1 or 2, wherein the field sources of the static risk field are relatively stationary objects in the traffic environment, including lane dividing lines and other road markings, and rigid separation facilities such as median strips;
    the field strength vector of the static risk field is computed as follows:
    the field strength vector E_R represents the field strength of the potential field formed by static risk field source a at (x_a, y_a), acting on target object j at (x_j, y_j);
    Figure PCTCN2022084738-appb-100001
    Figure PCTCN2022084738-appb-100002
    where
    LT_a is the risk coefficient of the type of static risk field source a;
    R_a is a constant greater than 0 representing the road-condition factor at (x_a, y_a);
    f_d is the distance influence factor of static risk field source a;
    r_aj is the distance vector between static risk field source a and target vehicle j, in which case (x_j, y_j) is the centroid of target vehicle j and (x_a, y_a) is the point at which the perpendicular from (x_j, y_j) intersects the lane marking a_1;
    r_aj/|r_aj| gives the direction of the field strength;
    the larger the value of E_R, the higher the risk that static risk field source a poses to target vehicle j; static risk field sources include but are not limited to lane markings.
  4. The method of claim 3, wherein, when static risk field source a is a lane dividing line,
    Figure PCTCN2022084738-appb-100003
  5. The method of claim 3, wherein, when static risk field source a is a rigid dividing strip,
    Figure PCTCN2022084738-appb-100004
  6. The method of claim 1 or 2, wherein the field sources of the dynamic risk field are objects in relative motion in the traffic environment, including vehicles, pedestrians, and roadblock facilities;
    the field strength vector of the dynamic risk field is computed as follows:
    the field strength vector E_V represents the field strength of the potential field formed by dynamic risk field source b at (x_b, y_b), acting on target object j at (x_j, y_j);
    Figure PCTCN2022084738-appb-100005
    r_bj = (x_j - x_b, y_j - y_b)
    where the x-axis lies along the road line and the y-axis is perpendicular to it;
    r_bj is the distance vector between dynamic risk field source b and target vehicle j;
    k_2, k_3 and G are constants greater than 0;
    R_b has the same meaning as R_a and represents the road-condition factor at (x_b, y_b);
    T_bj is the type correction coefficient between dynamic risk field source b and target vehicle j;
    v_bj is the relative velocity between dynamic risk field source b and target vehicle j;
    θ is the angle between the directions of v_bj and r_bj, positive clockwise;
    the larger the value of E_V, the higher the risk that dynamic risk field source b poses to target vehicle j.
  7. The method of claim 1 or 2, wherein, for a given target vehicle in the driving safety risk field, the risk of each kind of object is computed as follows:
    1) through preliminary data acquisition, static scene data of the point cloud scanning results is constructed; the static risk field sources in the static scene, including lane dividing lines, median strips, and curb areas, are separated manually, and the plane line equation of each static risk field source is fitted from randomly sampled points;
    2) a frame is selected as the computation moment and the previous frame is extracted as the reference for object movement speed; using a 3D object detection and tracking algorithm on the point cloud data, the objects participating in the safety field computation, generally vehicles and pedestrians, are identified in the computation frame and the previous frame, and correspondences between the same object in the two frames are established; the movement speed of the object is computed from its annotation boxes and the lidar scanning frame rate; for newly appearing objects without a previous frame from which to compute speed, a default speed is set as the object's speed according to the object's type;
    3) the relative positions and types of the target vehicle and of every other object, the distances of the static risk field sources relative to the target vehicle from step 1), and road-condition traffic environment factors are substituted into the field strength vector computation of the driving safety risk field; the target vehicle's speed in the relative velocity v_bj is set as an unknown parameter and the computation is deferred, so that the relative velocity is an expression carrying an unknown parameter; the safety risk of every object in the whole traffic scene with respect to the target vehicle is obtained, forming a driving safety risk distribution centred on the computation target.
  8. The method of claim 1 or 2, wherein the dangerous range is determined as follows:
    5.1) if the dangerous target is a static risk field source, the region of width d/2 on each side of the plane in which the risk field source lies is the dangerous range, where d is the width of the dangerous target;
    5.2) if the dangerous target is a dynamic risk field source, a rectangle of width 1.5d and length 0.5l + 0.5l × k centred on the centroid of the dangerous target is the dangerous range, where d is the width of the dangerous target, l is its length, and k is a speed correction coefficient not less than 1;
    5.3) the dangerous ranges are extracted in order of the hazard coefficients of the dangerous targets, and regions where dangerous ranges overlap are extracted only once;
    5.4) for a given target vehicle in the driving safety risk field, the final extracted total dangerous range is provided to the target vehicle as its perception assistance data.
  9. The method of claim 1 or 2, wherein the computation of the driving safety risk field comprises the following modules:
    6.1) object detection submodule;
    6.2) scene acquisition submodule;
    6.3) safety field computation submodule.
  10. The method of claim 9, wherein the scene acquisition submodule adopts one of the following schemes:
    6.2.1) a camera sensor is added to the roadside lidar, and the scene RGB information collected by the camera sensor, together with the corresponding horizontal and vertical boundaries, is used to judge object types and thus assist in identifying static objects;
    6.2.2) before the object detection submodule performs detection, a manual judgment step is added, in which staff with relevant professional training calibrate the static objects in the traffic scene, thereby identifying them;
    6.2.3) an existing HD map is used: the scene is located via the coordinate system, and the lane-level information in the HD map is used to identify the types of static objects.
  11. The method of claim 1 or 2, wherein the dangerous range is extracted as follows:
    an in-box sampling weight (f_1) and an out-of-box sampling weight (f_2) are set, the in-box point cloud data (P_11) and the out-of-box point cloud data (P_12) are randomly sampled, and then the in-box point cloud data (P_11) is selected and the out-of-box point cloud data (P_12) is discarded, yielding the segmented point cloud data (P_2).
PCT/CN2022/084738 2021-01-01 2022-04-01 Lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field WO2022206942A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280026657.8A 2021-01-01 2022-04-01 Lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110000327 2021-01-01
CN202110228419 2021-03-01
CNPCT/CN2021/085146 2021-04-01
PCT/CN2021/085146 WO2022141910A1 (zh) 2021-01-01 2021-04-01 Vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field

Publications (1)

Publication Number Publication Date
WO2022206942A1 true WO2022206942A1 (zh) 2022-10-06

Family

ID=82260124

Family Applications (9)

Application Number Title Priority Date Filing Date
PCT/CN2021/085149 WO2022141913A1 (zh) 2021-01-01 2021-04-01 Roadside millimetre-wave radar calibration method based on an on-board positioning device
PCT/CN2021/085148 WO2022141912A1 (zh) 2021-01-01 2021-04-01 Perception information fusion representation and object detection method for vehicle-road cooperation
PCT/CN2021/085146 WO2022141910A1 (zh) 2021-01-01 2021-04-01 Vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field
PCT/CN2021/085150 WO2022141914A1 (zh) 2021-01-01 2021-04-01 Multi-target vehicle detection and re-identification method based on radar-vision fusion
PCT/CN2021/085147 WO2022141911A1 (zh) 2021-01-01 2021-04-01 Fast dynamic target point cloud recognition and point cloud segmentation method based on a roadside perception unit
PCT/CN2022/084738 WO2022206942A1 (zh) 2021-01-01 2022-04-01 Lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field
PCT/CN2022/084925 WO2022206977A1 (zh) 2021-01-01 2022-04-01 Perception information fusion representation and object detection method for vehicle-road cooperation
PCT/CN2022/084929 WO2022206978A1 (zh) 2021-01-01 2022-04-01 Roadside millimetre-wave radar calibration method based on an on-board positioning device
PCT/CN2022/084912 WO2022206974A1 (zh) 2021-01-01 2022-04-01 Static and non-static object point cloud recognition method based on a roadside perception unit

Family Applications Before (5)

Application Number Title Priority Date Filing Date
PCT/CN2021/085149 WO2022141913A1 (zh) 2021-01-01 2021-04-01 Roadside millimetre-wave radar calibration method based on an on-board positioning device
PCT/CN2021/085148 WO2022141912A1 (zh) 2021-01-01 2021-04-01 Perception information fusion representation and object detection method for vehicle-road cooperation
PCT/CN2021/085146 WO2022141910A1 (zh) 2021-01-01 2021-04-01 Vehicle-road lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field
PCT/CN2021/085150 WO2022141914A1 (zh) 2021-01-01 2021-04-01 Multi-target vehicle detection and re-identification method based on radar-vision fusion
PCT/CN2021/085147 WO2022141911A1 (zh) 2021-01-01 2021-04-01 Fast dynamic target point cloud recognition and point cloud segmentation method based on a roadside perception unit

Family Applications After (3)

Application Number Title Priority Date Filing Date
PCT/CN2022/084925 WO2022206977A1 (zh) 2021-01-01 2022-04-01 Perception information fusion representation and object detection method for vehicle-road cooperation
PCT/CN2022/084929 WO2022206978A1 (zh) 2021-01-01 2022-04-01 Roadside millimetre-wave radar calibration method based on an on-board positioning device
PCT/CN2022/084912 WO2022206974A1 (zh) 2021-01-01 2022-04-01 Static and non-static object point cloud recognition method based on a roadside perception unit

Country Status (3)

Country Link
CN (5) CN116685873A (zh)
GB (2) GB2618936A (zh)
WO (9) WO2022141913A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117961915A (zh) * 2024-03-28 2024-05-03 太原理工大学 Intelligent auxiliary decision-making method for a coal mine roadheader robot

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114724362B (zh) * 2022-03-23 2022-12-27 中交信息技术国家工程实验室有限公司 A vehicle trajectory data processing method
CN115236628B (zh) * 2022-07-26 2024-05-31 中国矿业大学 A method for detecting residual cargo in a carriage based on lidar
CN115358530A (zh) * 2022-07-26 2022-11-18 上海交通大学 A data quality evaluation method for roadside testing of vehicle-road cooperative perception
CN115113157B (zh) * 2022-08-29 2022-11-22 成都瑞达物联科技有限公司 A beam pointing calibration method based on vehicle-road cooperative radar
CN115166721B (zh) * 2022-09-05 2023-04-07 湖南众天云科技有限公司 Method and device for calibration and fusion of radar and GNSS information in roadside perception devices
CN115480243B (zh) * 2022-09-05 2024-02-09 江苏中科西北星信息科技有限公司 Multi-millimetre-wave-radar device-edge-cloud fusion computing integration and method of use
CN115272493B (zh) * 2022-09-20 2022-12-27 之江实验室 Anomalous target detection method and device based on superposition of consecutive time-series point clouds
CN115235478B (zh) * 2022-09-23 2023-04-07 武汉理工大学 Intelligent vehicle positioning method and system based on visual tags and laser SLAM
CN115830860B (zh) * 2022-11-17 2023-12-15 西部科学城智能网联汽车创新中心(重庆)有限公司 Traffic accident prediction method and device
CN115966084B (zh) * 2023-03-17 2023-06-09 江西昂然信息技术有限公司 Holographic intersection millimetre-wave radar data processing method, device, and computer equipment
CN116189116B (zh) * 2023-04-24 2024-02-23 江西方兴科技股份有限公司 A traffic state perception method and system
CN117452392B (zh) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) A radar data processing system and method for an on-board driver assistance system
CN117471461B (zh) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) A roadside radar service device and method for an on-board driver assistance system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105892471A (zh) * 2016-07-01 2016-08-24 北京智行者科技有限公司 Automobile automatic driving method and device
US20180259968A1 (en) * 2017-03-07 2018-09-13 nuTonomy, Inc. Planning for unknown objects by an autonomous vehicle
CN108932462A (zh) * 2017-05-27 2018-12-04 华为技术有限公司 Driving intention determination method and device

Family Cites Families (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6661370B2 (en) * 2001-12-11 2003-12-09 Fujitsu Ten Limited Radar data processing apparatus and data processing method
US9562971B2 (en) * 2012-11-22 2017-02-07 Geosim Systems Ltd. Point-cloud fusion
KR101655606B1 (ko) * 2014-12-11 2016-09-07 현대자동차주식회사 Multi-object tracking apparatus using lidar and method thereof
TWI597513B (zh) * 2016-06-02 2017-09-01 財團法人工業技術研究院 Positioning system, on-board positioning device and positioning method thereof
WO2018126248A1 (en) * 2017-01-02 2018-07-05 Okeeffe James Micromirror array for feedback-based image resolution enhancement
KR102056147B1 (ko) * 2016-12-09 2019-12-17 (주)엠아이테크 Registration method and apparatus for distance data and 3D scan data for autonomous vehicles
CN106846494A (zh) * 2017-01-16 2017-06-13 青岛海大新星软件咨询有限公司 Automatic monomer segmentation algorithm for oblique-photography 3D building models
CN108629231B (zh) * 2017-03-16 2021-01-22 百度在线网络技术(北京)有限公司 Obstacle detection method, apparatus, device, and storage medium
CN107133966B (zh) * 2017-03-30 2020-04-14 浙江大学 A 3D sonar image background segmentation method based on a sample consensus algorithm
FR3067495B1 (fr) * 2017-06-08 2019-07-05 Renault S.A.S Method and system for identifying at least one moving object
CN109509260B (zh) * 2017-09-14 2023-05-26 阿波罗智能技术(北京)有限公司 Annotation method, device, and readable medium for dynamic obstacle point clouds
CN107609522B (zh) * 2017-09-19 2021-04-13 东华大学 An information fusion vehicle detection system based on lidar and machine vision
CN108152831B (zh) * 2017-12-06 2020-02-07 中国农业大学 A lidar obstacle recognition method and system
CN108010360A (zh) * 2017-12-27 2018-05-08 中电海康集团有限公司 An autonomous driving environment perception system based on vehicle-road cooperation
CN108639059B (zh) * 2018-05-08 2019-02-19 清华大学 Method and device for quantifying driver control behaviour based on the principle of least action
CN109188379B (zh) * 2018-06-11 2023-10-13 深圳市保途者科技有限公司 Automatic calibration method for the working angle of a driver assistance radar
CN112368598A (zh) * 2018-07-02 2021-02-12 索尼半导体解决方案公司 Information processing device, information processing method, computer program, and mobile device
US10839530B1 (en) * 2018-09-04 2020-11-17 Apple Inc. Moving point detection
CN109297510B (zh) * 2018-09-27 2021-01-01 百度在线网络技术(北京)有限公司 Relative pose calibration method, apparatus, device, and medium
CN111429739A (zh) * 2018-12-20 2020-07-17 阿里巴巴集团控股有限公司 A driver assistance method and system
JP7217577B2 (ja) * 2019-03-20 2023-02-03 フォルシアクラリオン・エレクトロニクス株式会社 Calibration apparatus and calibration method
CN110220529B (zh) * 2019-06-17 2023-05-23 深圳数翔科技有限公司 A positioning method for roadside autonomous vehicles
CN110296713B (zh) * 2019-06-17 2024-06-04 广州卡尔动力科技有限公司 Roadside autonomous vehicle positioning and navigation system and single- and multi-vehicle positioning and navigation methods
CN110532896B (zh) * 2019-08-06 2022-04-08 北京航空航天大学 A road vehicle detection method based on fusion of roadside millimetre-wave radar and machine vision
CN110443978B (zh) * 2019-08-08 2021-06-18 南京联舜科技有限公司 A fall alarm device and method
CN110458112B (zh) * 2019-08-14 2020-11-20 上海眼控科技股份有限公司 Vehicle detection method, apparatus, computer device, and readable storage medium
CN110850378B (zh) * 2019-11-22 2021-11-19 深圳成谷科技有限公司 Automatic calibration method and device for roadside radar equipment
CN110850431A (zh) * 2019-11-25 2020-02-28 盟识(上海)科技有限公司 A system and method for measuring the deflection angle of a trailer
CN110906939A (zh) * 2019-11-28 2020-03-24 安徽江淮汽车集团股份有限公司 Autonomous driving positioning method, device, electronic equipment, storage medium, and automobile
CN111121849B (zh) * 2020-01-02 2021-08-20 大陆投资(中国)有限公司 Automatic calibration method for sensor orientation parameters, edge computing unit, and roadside sensing system
CN111999741B (zh) * 2020-01-17 2023-03-14 青岛慧拓智能机器有限公司 Roadside lidar target detection method and device
CN111157965B (zh) * 2020-02-18 2021-11-23 北京理工大学重庆创新中心 Self-calibration method, device, and storage medium for the installation angle of an on-board millimetre-wave radar
CN111476822B (zh) * 2020-04-08 2023-04-18 浙江大学 A lidar target detection and motion tracking method based on scene flow
CN111554088B (zh) * 2020-04-13 2022-03-22 重庆邮电大学 A multifunctional V2X intelligent roadside base station system
CN111192295B (zh) * 2020-04-14 2020-07-03 中智行科技有限公司 Target detection and tracking method, device, and computer-readable storage medium
CN111537966B (zh) * 2020-04-28 2022-06-10 东南大学 An array antenna error correction method suitable for millimetre-wave vehicle radar
CN111766608A (zh) * 2020-06-12 2020-10-13 苏州泛像汽车技术有限公司 A lidar-based environment perception system
CN111880191B (zh) * 2020-06-16 2023-03-28 北京大学 Map generation method based on fusion of multi-agent lidar and visual information
CN111880174A (zh) * 2020-07-03 2020-11-03 芜湖雄狮汽车科技有限公司 A roadside service system supporting autonomous driving control decisions and its control method
CN111914664A (zh) * 2020-07-06 2020-11-10 同济大学 Vehicle multi-target detection and trajectory tracking method based on re-identification
CN111985322B (zh) * 2020-07-14 2024-02-06 西安理工大学 A road environment element perception method based on lidar
CN111862157B (zh) * 2020-07-20 2023-10-10 重庆大学 A multi-vehicle target tracking method fusing machine vision and millimetre-wave radar
CN112019997A (zh) * 2020-08-05 2020-12-01 锐捷网络股份有限公司 A vehicle positioning method and device
CN112509333A (zh) * 2020-10-20 2021-03-16 智慧互通科技股份有限公司 A roadside parking vehicle trajectory recognition method and system based on multi-sensor perception


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117961915A (zh) * 2024-03-28 2024-05-03 太原理工大学 Intelligent auxiliary decision-making method for a coal mine roadheader robot
CN117961915B (zh) * 2024-03-28 2024-06-04 太原理工大学 Intelligent auxiliary decision-making method for a coal mine roadheader robot

Also Published As

Publication number Publication date
GB2618936A (en) 2023-11-22
WO2022206974A1 (zh) 2022-10-06
GB202316625D0 (en) 2023-12-13
CN117441113A (zh) 2024-01-23
WO2022141912A1 (zh) 2022-07-07
WO2022206977A1 (zh) 2022-10-06
CN117441197A (zh) 2024-01-23
WO2022141913A1 (zh) 2022-07-07
CN117836667A (zh) 2024-04-05
WO2022141914A1 (zh) 2022-07-07
CN117836653A (zh) 2024-04-05
GB202313215D0 (en) 2023-10-11
WO2022141910A1 (zh) 2022-07-07
WO2022141911A1 (zh) 2022-07-07
WO2022206978A1 (zh) 2022-10-06
CN116685873A (zh) 2023-09-01
GB2620877A (en) 2024-01-24

Similar Documents

Publication Publication Date Title
WO2022206942A1 (zh) Lidar point cloud dynamic segmentation and fusion method based on a driving safety risk field
Han et al. Research on road environmental sense method of intelligent vehicle based on tracking check
CN114282597B (zh) Vehicle drivable area detection method, system, and autonomous vehicle using the system
CN112700470B (zh) Target detection and trajectory extraction method based on traffic video streams
EP4152204A1 (en) Lane line detection method, and related apparatus
CN111874006B (zh) 路线规划处理方法和装置
Zhao et al. On-road vehicle trajectory collection and scene-based lane change analysis: Part i
CN113313154A (zh) Integrated multi-sensor fusion intelligent perception device for autonomous driving
CN112633176B (zh) A rail transit obstacle detection method based on deep learning
CN108073170A (zh) Automated cooperative driving control for autonomous vehicles
GB2621048A (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN113359709B (zh) A digital-twin-based motion planning method for unmanned driving
CN113705636A (zh) An autonomous vehicle trajectory prediction method and device, and electronic equipment
DE102021124913A1 (de) Metric backpropagation for assessing the performance of subsystems
CN114821507A (zh) A multi-sensor fusion vehicle-road cooperative perception method for autonomous driving
Wang et al. Advanced driver‐assistance system (ADAS) for intelligent transportation based on the recognition of traffic cones
Beck et al. Automated vehicle data pipeline for accident reconstruction: New insights from LiDAR, camera, and radar data
CN114882182A (zh) A semantic map construction method based on a vehicle-road cooperative perception system
Cao et al. Data generation using simulation technology to improve perception mechanism of autonomous vehicles
CN117115690A (zh) A UAV traffic target detection method and system based on deep learning and shallow feature enhancement
Liu et al. Research on security of key algorithms in intelligent driving system
Jung et al. Intelligent Hybrid Fusion Algorithm with Vision Patterns for Generation of Precise Digital Road Maps in Self-driving Vehicles.
Shan et al. Vehicle collision risk estimation based on RGB-D camera for urban road
CN116311113A (zh) A driving environment perception method based on a vehicle-mounted monocular camera
Sanberg et al. Asteroids: A stixel tracking extrapolation-based relevant obstacle impact detection system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22779122

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202280026657.8

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22779122

Country of ref document: EP

Kind code of ref document: A1