GB2618936A - Vehicle-road collaboration-oriented sensing information fusion representation and target detection method - Google Patents

Vehicle-road collaboration-oriented sensing information fusion representation and target detection method

Info

Publication number
GB2618936A
GB2618936A (Application GB2313215.2A)
Authority
GB
United Kingdom
Prior art keywords
lidar
roadside
voxel
point cloud
level features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2313215.2A
Other versions
GB202313215D0 (en)
Inventor
Du Yuchuan
Zhao Cong
Zhu Yifan
Ji Yuxiong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of GB202313215D0
Publication of GB2618936A
Legal status: Pending


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/87Combinations of systems using electromagnetic waves other than radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/003Transmission of data between radar, sonar or lidar systems and remote stations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808Evaluating distance, position or velocity data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • G01S7/4972Alignment of sensor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0116Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • G08G1/0133Traffic data processing for classifying traffic situation
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/0141Measuring and analyzing of parameters relative to traffic conditions for specific applications for traffic information dissemination
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/0145Measuring and analyzing of parameters relative to traffic conditions for specific applications for active traffic flow control
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/048Detecting movement of traffic to be counted or controlled with provision for compensation of environmental or other condition, e.g. snow, vehicle stopped at detector
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/164Centralised systems, e.g. external to vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Data Mining & Analysis (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A vehicle-road collaboration-oriented sensing information fusion representation and target detection method, comprising the following steps: providing a roadside lidar, and configuring a corresponding roadside computing device for the roadside lidar; calibrating extrinsic parameters of the roadside lidar; the roadside computing device calculating, according to positioning data of a self-driving vehicle and the extrinsic parameters of the roadside lidar, a relative position of the self-driving vehicle with respect to the roadside lidar; the roadside computing device deflecting, according to the relative position, roadside lidar point cloud detected by the roadside lidar into the coordinate system of the self-driving vehicle, so as to obtain a deflected point cloud; and the roadside computing device performing voxelization processing on the deflected point cloud to obtain a voxelized deflected point cloud. The self-driving vehicle performs voxelization processing on a vehicle-mounted lidar point cloud detected by a vehicle-mounted lidar to obtain a voxelized vehicle-mounted lidar point cloud; and the roadside computing device calculates voxel-level features of the voxelized deflected point cloud, to obtain voxel-level features of the deflected point cloud. The self-driving vehicle calculates voxel-level features of the voxelized vehicle-mounted lidar point cloud, to obtain voxel-level features of the vehicle-mounted lidar point cloud; and the point cloud voxel-level features are compressed and transmitted to the computing device, and a transmission device can be a self-driving vehicle, a roadside computing device or a cloud. The computing device performs data splicing and data aggregation on the voxel-level features of the vehicle-mounted lidar point cloud and the voxel-level features of the deflected point cloud to obtain aggregated voxel-level features; the computing device inputs the aggregated voxel-level features into a voxel-level feature based three-dimensional target detection network model to obtain a target detection result; and when the computing device is a roadside computing device or a cloud, finally the target detection result is sent to the self-driving vehicle.

Description

A Vehicle-road Cooperative Perception Method for 3D Object Detection Based on Deep Neural Networks with Feature Sharing
Technical Field
The present invention belongs to the field of autonomous driving vehicle-road coordination technology and relates to a vehicle-road collaborative 3D object detection method utilizing deep neural network feature sharing.
Background Technology
In the 21st century, with the continuous development of urban roads and the automotive industry, cars have become an essential mode of transportation, greatly facilitating daily life and productivity. However, excessive use of cars has also brought environmental pollution, traffic congestion, traffic accidents, and other problems. To alleviate the problem of excessive car use and free people from driving tasks while improving vehicle driving capability, autonomous driving vehicles have gradually become an important direction for future automobile development. With the rise of deep learning technology and the increasing attention to artificial intelligence, autonomous driving, as a prominent focus of AI, has gained tremendous popularity.
Autonomous driving is a comprehensive system of software and hardware interaction. The core technologies of autonomous driving include hardware (automobile manufacturing technology, autonomous driving chips), autonomous driving software, high-precision maps, sensor communication networks, etc. From the perspective of software, it can be divided into three modules: environmental perception, behavioral decision-making, and motion control.
Perception is the first step in autonomous driving and serves as the link between the vehicle and its environment. The overall performance of an autonomous driving system depends largely on the quality of its perception system. Perception in autonomous driving vehicles is achieved through sensors, with LiDAR using lasers for detection and ranging. Its principle is to emit pulsed laser beams around the vehicle, which are reflected when they encounter objects; distance is calculated from the time difference, establishing a 3D model of the surrounding environment. Thanks to its short wavelength, LiDAR offers high precision and long range, allowing it to detect even small targets at long distances. Point cloud data obtained by LiDAR contains a large amount of information with high accuracy, making it ideal for target detection and classification within autonomous driving perception systems. On one hand, LiDAR breaks away from the traditional 2D projection imaging mode by collecting depth information about target surfaces, obtaining more complete spatial information about targets: the reconstructed 3D models better reflect geometric shapes and also provide rich feature information such as surface reflection characteristics and motion speed that supports target detection, recognition, and tracking, reducing algorithmic complexity. On the other hand, active laser technology provides high measurement resolution, strong anti-interference capability, the ability to detect low-observable targets, and operation under all weather conditions.
Currently, based on the presence or absence of mechanical components, LiDAR can be divided into mechanical LiDAR and solid-state LiDAR. Although solid-state LiDAR is considered the future trend, mechanical LiDAR still dominates the current LiDAR market. Mechanical LiDARs have rotating parts that control the angle of laser emission, while solid-state LiDARs do not require mechanical rotating parts and mainly rely on electronic components to control the angle of laser emission.
In existing autonomous driving solutions, LiDAR is the most important sensor in the environmental perception module, responsible for real-time mapping and positioning, target detection, and other perception tasks. For example, Google Waymo has added five LiDARs to its sensor configuration: four side-facing LiDARs are distributed around the front and rear of the vehicle as medium-to-short-range multi-line units to cover blind spots, and a high-line-count LiDAR is installed on top for large-scale perception, with its blind spots supplemented by the four side-facing LiDARs.
The scanning data of the LiDAR sensor is recorded in the form of point clouds. Point cloud data refers to a collection of vectors in a three-dimensional coordinate system. These vectors are usually represented in the form of X, Y, and Z coordinates. In addition to containing three-dimensional coordinates, each point may also contain color information (RGB) or reflection intensity information.
Among them, the X, Y, and Z columns represent the three-dimensional position of point data in either the sensor coordinate system or the world coordinate system, generally measured in meters. The Intensity column represents the laser reflection intensity at each point, with values normalized between 0 and 255 and without a specific unit.
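For a concrete (non-patent) illustration of this layout, a single LiDAR frame is commonly held as an N x 4 array; the values and variable names below are hypothetical.

```python
import numpy as np

# A minimal sketch (values are hypothetical): one LiDAR frame as an N x 4
# array whose columns are X, Y, Z in metres and reflection intensity (0-255).
points = np.array([
    [12.4, -3.1, 0.8, 187.0],   # e.g. a highly reflective traffic sign
    [ 5.2,  1.7, 0.2,  42.0],   # e.g. a road-surface return
    [30.9,  0.4, 1.5,  96.0],
])

xyz = points[:, :3]        # three-dimensional coordinates of each point
intensity = points[:, 3]   # per-point laser reflection intensity
print(xyz.shape, intensity.min(), intensity.max())
```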
Due to the limited installation height of the onboard LiDAR, which is mainly determined by the size of the vehicle and is usually only about two meters, its detection capability is easily affected by obstacles around the vehicle. For example, a cargo truck driving in front of a small car can almost completely block the forward view of the LiDAR on the small car, severely reducing its environmental perception capability. In addition, the performance of the LiDAR itself may be limited by overall vehicle cost considerations, so expensive high-line-count LiDARs are often not installed on vehicles. As a result, point cloud data obtained from onboard LiDARs often have blind spots or sparse areas, and it is difficult for autonomous driving perception tasks to rely solely on sensors installed on the vehicle. Compared with onboard LiDARs, roadside LiDARs have less obstructed views because they can be deployed on higher gantries or lamp posts. In addition, roadside installations have a higher cost tolerance and can use higher line-count LiDARs while being paired with higher computing power units for faster detection speed and better detection performance.
At present, the vehicle-road coordination system is in a wave of research and testing. The intelligent vehicle-road coordination solution based on V2X technology can enhance the current achievable advanced driving assistance functions, improve vehicle driving safety and road operation efficiency, and provide data services and technical support for autonomous driving in the future.
The existing LiDAR vehicle-road collaborative solution involves each vehicle and roadside facility detecting targets based on its own LiDAR point cloud data, after which the facility sends the detection results to the vehicle. Most scholars focus on analyzing the reliability of data transmission, calculating the relative pose between vehicles and roadside units, or handling data transmission delays between them, and all assume that target detection results are sent directly during the collaborative process. Although this approach has a low data transmission volume, it cannot fully utilize the detection data from both ends. For example, when neither end detects complete target point clouds, missed detections or false alarms can easily occur, leading to errors in the collaborative target detection results. To address this issue, some scholars propose sending raw point cloud data to prevent information loss. For instance, the Cooper framework proposed in 2019 first introduced a cooperative perception scheme at the raw point cloud level, fusing point cloud data from different sources and significantly improving perception performance.
However, at the same time, the size of a single frame of LiDAR point cloud data is often over ten or even dozens of megabytes. The existing vehicle-road cooperative communication conditions are difficult to support such a large amount of real-time point cloud data transmission. Therefore, autonomous driving technology urgently needs a better collaborative detection method that utilizes LiDAR data on both ends, which not only meets the requirements for target detection accuracy but also minimizes the amount of data transmission.
Existing target recognition and classification algorithms based on LiDAR point cloud data are all based on deep neural network technology.
Existing Technology
Patent document US9562971B2
Patent document US20150187216A1
Patent document CN110989620A
Patent document CN110781927A
Patent document CN111222441A
Patent document CN108010360A
Invention Content
To solve the above problems, the present invention provides a vehicle-road cooperative perception method for 3D object detection based on deep neural networks with feature sharing, and provides a LiDAR point cloud data-based vehicle-road collaboration scheme that balances transmitted data size against the degree of information loss. It is used to solve the problem of insufficient single-vehicle perception ability of current autonomous driving vehicles under insufficient vehicle-road collaborative communication bandwidth.
The specific technical problems to be solved include determining the layout plan of roadside LiDAR, selecting an extrinsic parameter calibration method for roadside LiDAR, calculating extrinsic parameters based on the relative pose between autonomous driving vehicles and roadside LiDAR, and determining a suitable information representation form for vehicle-road collaboration.
The goal of this invention is to reduce the volume of transmitted information while preserving the collaborative perception capability between vehicles and roads.
The solution to the technical problems in this invention is divided into a preparation stage and an application stage. The steps in the preparation stage are as follows: A. Install a roadside LiDAR and configure the corresponding roadside computing device; B. Calibrate the extrinsic parameters of said roadside LiDAR. The steps in the application stage are as follows: C. Said roadside computing device calculates the relative pose between an autonomous driving vehicle and said roadside LiDAR based on positioning data from said autonomous driving vehicle and said extrinsic parameters of said roadside LiDAR; D. Said roadside computing device transforms roadside point clouds from the roadside LiDAR coordinate system into the coordinate system of said autonomous driving vehicle according to the relative pose obtained in step C, obtaining transformed point clouds; E. Said roadside computing device voxelizes said transformed point clouds, obtaining voxelized transformed point clouds; said autonomous driving vehicle voxelizes the onboard LiDAR point cloud, obtaining a voxelized onboard point cloud; F. Said roadside computing device calculates voxel-level features of the voxelized transformed point clouds through a feature extraction network, obtaining voxel-level features of the transformed point clouds; said autonomous driving vehicle calculates voxel-level features of the voxelized onboard LiDAR point cloud through said feature extraction network, obtaining voxel-level features of the onboard LiDAR point cloud. The follow-up steps are divided into three sub-plans: I, II, and III. Sub-plan I completes steps G1, H1, and I1 on the roadside computing device; sub-plan II completes steps G2, H2, and I2 on the autonomous driving vehicle; sub-plan III completes steps G3, H3, and I3 in the cloud.
In sub-plan I: G1. Said autonomous driving vehicle compresses said voxel-level features of the onboard LiDAR point cloud to obtain compressed voxel-level features of the onboard LiDAR point cloud and transmits them to said roadside computing device. Said roadside computing device receives said compressed voxel-level features of the onboard LiDAR point cloud and restores them to voxel-level features of the onboard LiDAR point cloud; H1. Said roadside computing device performs data stitching and data aggregation on the voxel-level features of both the onboard LiDAR point cloud and the transformed point clouds to obtain aggregated voxel-level features; I1. Said roadside computing device inputs the aggregated voxel-level features into an object detection network model based on voxel-level features to obtain object detection results, which are then transmitted back to said autonomous driving vehicle.
In sub-plan II: G2. Said roadside computing device compresses the voxel-level features of the transformed point clouds, obtains compressed voxel-level features of the transformed point clouds, and transmits them to said autonomous driving vehicle; said autonomous driving vehicle receives the compressed voxel-level features of the transformed point clouds and restores them to voxel-level features of the transformed point clouds; H2. Said autonomous driving vehicle performs data stitching and data aggregation on the onboard LiDAR point cloud's voxel-level features and the transformed point cloud's voxel-level features; I2. Said autonomous driving vehicle inputs the aggregated voxel-level features into a 3D object detection network model based on voxel-level features to obtain object detection results. In sub-plan III: G3. Said autonomous driving vehicle compresses the voxel-level features of the onboard LiDAR point cloud, obtains compressed voxel-level features of the onboard LiDAR point cloud, and transmits them to a cloud server; said roadside computing device compresses the voxel-level features of the transformed point clouds, obtains compressed voxel-level features of the transformed point clouds, and transmits them to said cloud server; said cloud server receives both the compressed voxel-level features of the transformed point clouds and the compressed voxel-level features of the onboard LiDAR point cloud and restores them respectively; H3. Said cloud server performs data stitching and data aggregation on the onboard LiDAR point cloud's voxel-level features and the transformed point cloud's voxel-level features; I3. Said cloud server inputs the aggregated voxel-level features into a 3D object detection network model based on voxel-level features to obtain object detection results, which are then transmitted back to the autonomous driving vehicle.
The specific technical solutions for the above steps in this invention patent are as follows: A. Deployment of LiDAR The deployment of the roadside LiDAR is determined based on the existing roadside pillar facilities and the type of installed LiDAR in the vehicle-road cooperative scene. The existing roadside LiDARs are installed in a pole or crossbar manner, with specific installation locations being infrastructure pillars such as roadside gantries, streetlights, and signal light poles that have electrical support.
According to whether there are internal rotating components, LiDARs can be divided into mechanical rotary-type LiDARs, hybrid-type LiDARs, and solid-state LiDARs. Among them, mechanical rotary-type and solid-state LiDARs are commonly used types of roadside LiDARs.
For intersection scenes and other scenarios, deploying a roadside LiDAR with a detection range greater than or equal to the scene range or containing key areas within the scene is sufficient. For long-distance large-scale complex scenes such as expressways, highways, and parks, it is recommended to follow the guidelines for deploying roadside LiDARs so that their coverage area meets full coverage requirements for scenes; i.e., a single roadside LiDAR supplements detection blind spots under other roadside LiDAR detections within its coverage area to achieve better vehicle-road cooperative target detection results.
Guidelines for deploying side-mounted mechanical rotary-type LiDARs differ from those for solid-state side-mounted LiDARs.
A1) Roadside Mechanical Rotating LiDAR and Roadside Hybrid Solid-State LiDAR Deployment Scheme The mechanical rotating LiDAR achieves laser scanning through mechanical rotation. The laser emitting components are arranged vertically as a line array of laser sources, which produce beams pointing at different angles within the vertical plane through lenses. Driven by an electric motor, continuous rotation changes the beam from a "line" to a "plane", forming multiple laser "planes" through rotational scanning and achieving detection within the detection area. The hybrid solid-state LiDAR uses semiconductor "micro-movement" devices (such as MEMS scanning mirrors) instead of macroscopic mechanical scanners to achieve laser scanning at a microscopic scale on the radar emission end.
The deployment guidelines for roadside mechanical rotating LiDAR and roadside hybrid solid-state LiDAR require that they be installed horizontally on the roadside, ensuring full utilization of beam information in all directions. As shown in Figure 2, the deployment of roadside mechanical rotating LiDAR and roadside hybrid solid-state LiDAR should meet at least the following requirement:

$$\frac{H_a}{\tan(\theta_a^2)} \geq L_a \qquad (1)$$

Where: $H_a$ represents the installation height of the roadside mechanical rotating LiDAR or roadside hybrid solid-state LiDAR; $\theta_a^2$ represents the angle between the highest elevation beam of the roadside mechanical rotating LiDAR or roadside hybrid solid-state LiDAR and the horizontal direction; $L_a$ represents the distance between adjacent mounting poles of roadside mechanical rotating LiDARs or roadside hybrid solid-state LiDARs.
A2) Roadside Solid-State LiDAR Deployment Scheme The solid-state LiDAR eliminates the mechanical scanning structure, and laser scanning in both the horizontal and vertical directions is achieved electronically. The phased-array laser transmitter consists of a rectangular array of multiple transmitting and receiving units. By changing the phase difference of the light emitted from different units in the array, the angle and direction of the emitted laser can be adjusted. After passing through an optical beam splitter, the laser source enters an optical waveguide array, where external control changes the phase of the light waves in each waveguide to achieve beam scanning using the phase differences between waveguides.
As shown in Figure 3, guidelines for the roadside deployment of solid-state LiDARs require that they meet at least the following requirement:

$$\frac{H_b}{\tan(\theta_b^2)} - \frac{H_b}{\tan(\theta_b^1 + \theta_b^2)} \geq L_b \qquad (2)$$

Where: $H_b$ represents the installation height of the roadside solid-state LiDAR; $\theta_b^1$ represents the vertical field of view angle of the roadside solid-state LiDAR; $\theta_b^2$ represents the angle between the highest elevation beam of the roadside solid-state LiDAR and the horizontal direction; $L_b$ represents the distance between adjacent installation poles for roadside solid-state LiDARs.
For scenes where solid-state LiDARs are installed, a method of installing two reverse solid-state LiDARs on one pole can also be used to compensate for blind spots in roadside perception and reduce the demand for roadside poles. In this case, requirements as shown in Figure 4 should be met.
$$\frac{H_c}{\tan(\theta_c^2)} \geq L_c \qquad (3)$$

Where: $H_c$ represents the installation height of the roadside solid-state LiDAR; $\theta_c^2$ represents the angle between the highest elevation beam of the roadside solid-state LiDAR and the horizontal direction; $L_c$ represents the distance between adjacent installation poles of roadside solid-state LiDARs. For LiDAR vehicle-road coordination scenarios that meet these conditions, mechanical rotating or fully solid-state LiDARs should be deployed on the roadside according to the above requirements, and their scanning areas should be enlarged when conditions permit. For LiDAR vehicle-road coordination scenarios that cannot meet these conditions, new poles can be installed and the number of roadside LiDARs increased to satisfy the deployment guidelines for roadside LiDARs.
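To make the spacing requirement concrete, the sketch below evaluates it under the assumption that it takes the form H / tan(θ) ≥ L as in formulas (1) and (3); the function name and the example numbers are illustrative only.

```python
import math

def max_pole_spacing(height_m: float, highest_beam_angle_deg: float) -> float:
    # Ground reach of the highest-elevation beam: H / tan(theta).
    # Sketch only: assumes the spacing requirement has the form
    # H / tan(theta) >= L used in formulas (1) and (3) above.
    return height_m / math.tan(math.radians(highest_beam_angle_deg))

# Hypothetical numbers: a roadside LiDAR mounted at 6 m whose highest beam sits
# 2 degrees from horizontal could serve poles spaced up to about 172 m apart.
print(round(max_pole_spacing(6.0, 2.0), 1))
```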
B. Extrinsic Parameter Calibration To calculate the relative position and orientation between the roadside LiDAR and the onboard LiDAR, it is necessary to calibrate the installation position and angle of the roadside LiDAR, which is called extrinsic parameter calibration. This process obtains the coordinate position parameters and angular pose parameters of the LiDAR relative to a certain reference coordinate system. The extrinsic parameters of the LiDAR can be represented by the following vector:

$$V_0 = [x_0 \ \ y_0 \ \ z_0 \ \ \alpha_0 \ \ \beta_0 \ \ \gamma_0] \qquad (4)$$

Where: $x_0$, $y_0$ and $z_0$ represent the coordinates of the roadside LiDAR in the reference coordinate system; $\alpha_0$ represents the rotation angle around the X axis of the roadside LiDAR in the reference coordinate system; $\beta_0$ represents the rotation angle around the Y axis; $\gamma_0$ represents the rotation angle around the Z axis. The above reference coordinate system can be a longitude-latitude coordinate system such as GCJ02 or WGS84, or it can be based on a specific geographic point in a geodetic coordinate system, such as the Beijing 54 coordinate system or the Xi'an 80 coordinate system. Correspondingly, the actual coordinates of a point in the reference coordinate system are related to its coordinates in the roadside LiDAR coordinate system, obtained after detection by the aforementioned LiDAR, as follows:
$$\begin{bmatrix} x_{lidar} \\ y_{lidar} \\ z_{lidar} \end{bmatrix} = R_x(\alpha_0) R_y(\beta_0) R_z(\gamma_0) \left( \begin{bmatrix} x_{real} \\ y_{real} \\ z_{real} \end{bmatrix} - \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix} \right) \qquad (5)$$

$$R_x(\alpha_0) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha_0 & -\sin\alpha_0 \\ 0 & \sin\alpha_0 & \cos\alpha_0 \end{bmatrix} \qquad (6)$$

$$R_y(\beta_0) = \begin{bmatrix} \cos\beta_0 & 0 & \sin\beta_0 \\ 0 & 1 & 0 \\ -\sin\beta_0 & 0 & \cos\beta_0 \end{bmatrix} \qquad (7)$$

$$R_z(\gamma_0) = \begin{bmatrix} \cos\gamma_0 & -\sin\gamma_0 & 0 \\ \sin\gamma_0 & \cos\gamma_0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (8)$$

Where: $x_{lidar}$, $y_{lidar}$ and $z_{lidar}$ are the X, Y and Z coordinates of the point in the roadside LiDAR coordinate system; $x_{real}$, $y_{real}$ and $z_{real}$ are the X, Y and Z coordinates of the point in the reference coordinate system; $R_x(\alpha_0)$, $R_y(\beta_0)$ and $R_z(\gamma_0)$ are the sub-rotation matrices calculated from the three extrinsic angle parameters $\alpha_0$, $\beta_0$ and $\gamma_0$ about the three axes.
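For illustration, the sketch below applies the reference-to-LiDAR mapping of equations (5)-(8) in Python; the helper names and the sample extrinsic values are assumptions, and standard right-handed rotation matrices are used for R_x, R_y and R_z.

```python
import numpy as np

def rot_x(a):
    # Standard right-handed rotation about the X axis, R_x(alpha).
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    # R_y(beta)
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    # R_z(gamma)
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def reference_to_lidar(p_real, extrinsics):
    # Map a point from the reference frame to the roadside LiDAR frame,
    # following the form of equation (5):
    #   p_lidar = R_x(a0) R_y(b0) R_z(g0) (p_real - [x0, y0, z0]).
    x0, y0, z0, a0, b0, g0 = extrinsics
    R = rot_x(a0) @ rot_y(b0) @ rot_z(g0)
    return R @ (np.asarray(p_real, float) - np.array([x0, y0, z0]))

# Hypothetical extrinsics V0: LiDAR at (10, 20, 6) m, yawed 30 degrees about Z.
V0 = (10.0, 20.0, 6.0, 0.0, 0.0, np.radians(30.0))
print(reference_to_lidar([15.0, 22.0, 1.0], V0))
```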
The specific values of the extrinsic parameters of the roadside LiDAR are calculated by measuring the coordinates of control points in both the coordinate system of the roadside LiDAR and a reference coordinate system. The steps are as follows: (1) Select at least 4 reflectivity feature points within the detection range of the roadside LiDAR as control points. Reflectivity feature points are points with significant differences in reflectivity compared to surrounding objects, such as traffic signs and license plates. The purpose of selecting these points is to quickly find, based on their position and reflection intensity difference from other points, the correspondence between a point in the point cloud data and a coordinate in the reference coordinate system, thus establishing multiple pairs of correspondences between point cloud points and reference coordinates. Control points should be distributed discretely, with no three control points collinear. Where conditions allow, more control points lead to better calibration results.
(2) Use high-precision measurement instruments such as handheld RTK devices to measure the precise coordinates of the control points, then find the corresponding point coordinates in the point cloud data from the roadside LiDAR; if an accurate map file of the scene created with high-precision surveying equipment or by other means is already available, direct matching can be performed without using handheld RTK devices.
(3) Use 3D registration algorithms (such as the ICP algorithm or the NDT algorithm) to calculate the optimal values of the extrinsic parameter vector for LiDAR calibration, taking the result as the calibration result. The ICP algorithm is commonly used for calibrating LiDAR extrinsic parameters because it calculates the optimal match between the target point set (the positions of the selected control points in the roadside LiDAR's local frame) and the source point set (the positions of the selected control points in the reference frame). The error function minimized during the optimization process is defined as follows:

$$E(R,T) = \frac{1}{n}\sum_{i=1}^{n} \left\| q_i - (R p_i + T) \right\|^2 \qquad (9)$$

$$R = R_x(\alpha_0) R_y(\beta_0) R_z(\gamma_0) \qquad (10)$$

$$T = [x_0 \ \ y_0 \ \ z_0]^{\mathrm{T}} \qquad (11)$$

Where: $E(R,T)$ is the target error function; $R$ is the rotation transformation matrix; $T$ is the translation transformation matrix; $n$ is the number of nearest point pairs in the point sets; $p_i$ represents the coordinates of point $i$ in the target point set $P$; $q_i$ represents the point in the source point set $Q$ that forms the nearest-neighbor pair with point $p_i$.
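As a sketch of this calibration step (not the patent's exact procedure): once the control points have been matched, the rotation and translation minimising the error of equation (9) can be obtained in closed form by the Kabsch/SVD solution that each ICP iteration applies; the function name and test values below are assumptions.

```python
import numpy as np

def fit_rigid_transform(p, q):
    # Least-squares R, T minimising sum ||q_i - (R p_i + T)||^2 (cf. eq. (9)).
    # Sketch only: with control points already matched (step (2) above),
    # the minimiser is the closed-form Kabsch/SVD solution.
    # p, q: (N, 3) arrays of corresponding points (N >= 4 recommended).
    p, q = np.asarray(p, float), np.asarray(q, float)
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    T = qc - R @ pc
    return R, T

# Hypothetical check: recover a known 30-degree yaw and offset from 4 points.
rng = np.random.default_rng(0)
p = rng.uniform(-20, 20, size=(4, 3))
ang = np.radians(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
q = p @ R_true.T + np.array([5.0, -2.0, 6.0])
R_est, T_est = fit_rigid_transform(p, q)
print(np.allclose(R_est, R_true), np.round(T_est, 3))
```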
C. Relative Pose Calculation Based on the positioning data of the autonomous driving vehicle and the extrinsic calibration results of the roadside LiDAR obtained in the preparation stage, determine the relative pose between the autonomous driving vehicle and the roadside LiDAR. The relative pose is calculated according to the following formulas:

$$V_1' = [\,V_{1,t}' \ \ V_{1,a}'\,] \qquad (12)$$

$$V_{1,t}' = R_x(\alpha_0) R_y(\beta_0) R_z(\gamma_0) \left( \begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix} - \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix} \right) \qquad (13)$$

$$V_{1,a}' = [\,\alpha_1 - \alpha_0 \ \ \beta_1 - \beta_0 \ \ \gamma_1 - \gamma_0\,] \qquad (14)$$

$$V_1 = [x_1 \ \ y_1 \ \ z_1 \ \ \alpha_1 \ \ \beta_1 \ \ \gamma_1] \qquad (15)$$

Where: $V_1'$ is the position and angle vector of the autonomous driving vehicle relative to the roadside LiDAR; $V_{1,t}'$ is the position vector of the autonomous driving vehicle relative to the roadside LiDAR; $V_{1,a}'$ is the angle vector of the autonomous driving vehicle relative to the roadside LiDAR; $V_1$ is the position and angle vector of the autonomous driving vehicle in the reference coordinate system.
D. Transformation Transform the roadside LiDAR point cloud $D_r$ into the coordinate system of the autonomous driving vehicle according to the following formulas:

$$\begin{bmatrix} x_{ego} \\ y_{ego} \\ z_{ego} \\ 1 \end{bmatrix} = H \begin{bmatrix} x_{lidar} \\ y_{lidar} \\ z_{lidar} \\ 1 \end{bmatrix} \qquad (16)$$

$$H = \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ O_{1\times 3} & 1 \end{bmatrix} \qquad (17)$$

$$R = R_x(\alpha_1') R_y(\beta_1') R_z(\gamma_1') \qquad (18)$$

$$T = [\,x_1' \ \ y_1' \ \ z_1'\,]^{\mathrm{T}} \qquad (19)$$

Where: $H$ is the transformation matrix from the roadside LiDAR coordinate system to the autonomous driving vehicle coordinate system; $x_{ego}$, $y_{ego}$ and $z_{ego}$ are the coordinates of a point in the roadside LiDAR point cloud after being transformed into the autonomous driving vehicle coordinate system, and the corresponding coordinates in the roadside LiDAR coordinate system are $[x_{lidar} \ \ y_{lidar} \ \ z_{lidar}]$; $O$ is the perspective transformation vector, and since there is no perspective transformation in this scene, $O$ is set to $[0 \ \ 0 \ \ 0]$.
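A minimal sketch of step D, assuming R and T have already been derived from the relative pose of step C; the function name, the row-vector convention and the sample values are assumptions.

```python
import numpy as np

def transform_to_ego(points_lidar, R, T):
    # Apply the homogeneous transform of equations (16)-(19) to an (N, 3+)
    # roadside point cloud; extra columns (e.g. intensity) pass through.
    # Sketch only: R is 3x3, T is length 3, perspective row is [0 0 0 1].
    pts = np.asarray(points_lidar, float)
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, T
    ones = np.ones((pts.shape[0], 1))
    xyz_h = np.hstack([pts[:, :3], ones]) @ H.T      # row-vector convention
    return np.hstack([xyz_h[:, :3], pts[:, 3:]])

# Hypothetical use: pure translation of two points by (1, 2, 0) metres.
cloud = np.array([[0.0, 0.0, 0.0, 50.0], [1.0, 1.0, 1.0, 80.0]])
print(transform_to_ego(cloud, np.eye(3), np.array([1.0, 2.0, 0.0])))
```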
E. Voxelization Voxel is the abbreviation of Volume Pixel, the smallest unit of digital data segmentation in three-dimensional space, conceptually similar to the pixel, which is the smallest unit in two-dimensional space. By segmenting point cloud data using voxels, data features can be calculated separately for each voxel, and the feature computed from the collection of point cloud data within each voxel is called a voxel-level feature. A large class of existing 3D object detection algorithms processes LiDAR point cloud data based on voxel-level features: after voxelizing the point cloud data and extracting voxel-level features, these algorithms input them into subsequent 3D object detection network models based on such features to obtain object detection results.
The steps for voxelizing point cloud data are as follows: E1) Based on the spatial dimensions $[D \ \ W \ \ H]$ of the onboard LiDAR point cloud $D_e$, design a voxel size $[D_v \ \ W_v \ \ H_v]$, and divide the onboard LiDAR point cloud into voxels according to the designed voxel size.
E2) For the transformed point cloud $D_r'$, use the same voxel division method as that used for the onboard LiDAR point cloud $D_e$, so that the spatial grid of the transformed point cloud completely overlaps with that of the onboard LiDAR point cloud. For example, if the distribution space of the onboard LiDAR point cloud $D_e$ is $[-31\,\mathrm{m}, 33\,\mathrm{m}]$ in the X-axis direction and its voxel size is 4 m, and the distribution space of the transformed point cloud $D_r'$ is $[-32\,\mathrm{m}, 34\,\mathrm{m}]$, then the latter should be expanded to $[-35\,\mathrm{m}, 37\,\mathrm{m}]$ to obtain an expanded transformed point cloud $D_r^e$, so that the voxel division grids of the two are consistent. The specific calculation formulas are as follows:

$$S_{ego} = [D_{ego\_start}, D_{ego\_end}] \times [W_{ego\_start}, W_{ego\_end}] \times [H_{ego\_start}, H_{ego\_end}] \qquad (20)$$

$$K'_{lidar\_start} = K_{ego\_start} - \left\lceil \frac{K_{ego\_start} - K_{lidar\_start}}{V_K} \right\rceil V_K, \quad K \in \{D, W, H\} \qquad (21)$$

$$K'_{lidar\_end} = K_{ego\_end} + \left\lceil \frac{K_{lidar\_end} - K_{ego\_end}}{V_K} \right\rceil V_K, \quad K \in \{D, W, H\} \qquad (22)$$

Where: $S_{ego}$ is the spatial range of the onboard LiDAR point cloud $D_e$; $S'_{lidar}$ is the spatial range of the expanded transformed point cloud $D_r^e$, formed from the expanded intervals $[K'_{lidar\_start}, K'_{lidar\_end}]$; $K'_{lidar\_start}$ and $K'_{lidar\_end}$ are the starting and ending values of the expanded transformed point cloud $D_r^e$ in dimension $K$; $K_{lidar\_start}$ and $K_{lidar\_end}$ are the starting and ending values of the transformed point cloud $D_r'$ in dimension $K$; $K_{ego\_start}$ and $K_{ego\_end}$ are the starting and ending values of the onboard LiDAR point cloud $D_e$ in dimension $K$; $V_K$ is the size of the voxel in dimension $K$.
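The worked example above can be reproduced with a small helper implementing the range-alignment rule of equations (21)-(22); the function and argument names are assumptions.

```python
import math

def align_range(ego_start, ego_end, lidar_start, lidar_end, voxel_size):
    # Expand the transformed-cloud range so its voxel grid lines up with the
    # onboard grid, following the form of equations (21)-(22).
    new_start = ego_start - math.ceil((ego_start - lidar_start) / voxel_size) * voxel_size
    new_end = ego_end + math.ceil((lidar_end - ego_end) / voxel_size) * voxel_size
    return new_start, new_end

# The worked example above: ego X range [-31, 33], transformed range [-32, 34],
# 4 m voxels -> expanded range (-35, 37).
print(align_range(-31, 33, -32, 34, 4))
```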
E3) Grouping is done based on the voxels in which the scattered data of the onboard LiDAR point cloud $D_e$ and the expanded transformed point cloud $D_r^e$ are located. Scattered data in the same voxel belong to the same group. Due to the unevenness and sparsity of the points, the number of scattered data in each voxel may differ, and some voxels may contain no scattered data.
E4) To reduce the computational burden and eliminate discrimination problems caused by inconsistent density, random sampling is performed for voxels whose scattered data count exceeds a certain threshold. It is recommended that this threshold be set to 35; when the point cloud data contains fewer scattered points, it can be appropriately reduced. This strategy saves computing resources and reduces the imbalance between voxels.
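A compact sketch of steps E3) and E4), grouping points by voxel index and randomly downsampling crowded voxels; the dictionary-based layout, function name and sample cloud are assumptions rather than the patent's exact implementation.

```python
import numpy as np

def group_and_sample(points, grid_origin, voxel_size, max_points=35, seed=0):
    # Group points by voxel index and randomly sample crowded voxels.
    # points: (N, 4) array [x, y, z, intensity]; voxel_size: (vx, vy, vz).
    # Voxels holding more than max_points points are downsampled to max_points.
    rng = np.random.default_rng(seed)
    idx = np.floor((points[:, :3] - np.asarray(grid_origin)) /
                   np.asarray(voxel_size)).astype(int)
    voxels = {}
    for key, pt in zip(map(tuple, idx), points):
        voxels.setdefault(key, []).append(pt)
    for key, pts in voxels.items():
        if len(pts) > max_points:
            chosen = rng.choice(len(pts), size=max_points, replace=False)
            voxels[key] = [pts[i] for i in chosen]
    return {k: np.stack(v) for k, v in voxels.items()}

# Hypothetical use with a tiny random cloud and 4 m voxels.
cloud = np.random.default_rng(1).uniform(-8, 8, size=(200, 4))
grid = group_and_sample(cloud, grid_origin=(-8, -8, -8), voxel_size=(4, 4, 4))
print(len(grid), max(len(v) for v in grid.values()))
```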
Through steps E1) to E4), the onboard LiDAR point cloud $D_e$ becomes the voxelized onboard LiDAR point cloud $D_e^v$, while the expanded transformed point cloud $D_r^e$ becomes the voxelized transformed point cloud $D_r^v$. F. Voxel-Level Feature Calculation Depending on the target detection model used by the autonomous driving vehicle, the method for calculating point cloud voxel-level features may vary. Taking the VoxelNet model for object detection in autonomous driving vehicles as an example, the steps are as follows: (1) First, organize the voxelized point cloud. For the $i$-th point in a voxel, its collected raw data is:

$$a_i = [x_i \ \ y_i \ \ z_i \ \ r_i] \qquad (23)$$

Where: $x_i$, $y_i$ and $z_i$ are the X, Y and Z coordinates of the $i$-th point; $r_i$ is the reflection intensity of the $i$-th point.
(2) Then calculate the average of all point coordinates within that voxel, and denote it as $[v_x \ \ v_y \ \ v_z]$.
(3) Afterwards, supplement the information of the $i$-th point with its offset relative to the voxel center:

$$\hat{a}_i = [x_i \ \ y_i \ \ z_i \ \ r_i \ \ x_i - v_x \ \ y_i - v_y \ \ z_i - v_z] \qquad (24)$$

Where: $\hat{a}_i$ is the information of the $i$-th point after supplementation. (4) The processed voxelized point cloud is input into cascaded VFE layers. The schematic diagram of a VFE layer processing voxelized point cloud data is shown in Figure 5. The VFE layer first passes each point through a fully connected network to obtain point-level features, then performs max-pooling on the point-level features to obtain voxel-level features. Finally, the voxel-level features are concatenated with the previously obtained point-level features to obtain point concatenation feature results.
(5) After processing by the cascaded VFE layers, the final voxel-level feature is obtained through a fully connected layer followed by max-pooling. Each voxel-level feature is a $1 \times C$ dimensional vector.
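For orientation, the sketch below shows a simplified VFE-style layer in PyTorch following the description above (per-point fully connected layer, voxel-wise max pooling, concatenation back onto the point features, and a final max pooling to a 1 x C voxel feature); the layer sizes and class name are assumptions, and this is not the patent's exact network.

```python
import torch
import torch.nn as nn

class VFELayer(nn.Module):
    """Simplified Voxel Feature Encoding layer (illustrative sketch only):
    per-point fully connected layer, voxel-wise max pooling, then
    concatenation of the pooled voxel feature back onto each point feature."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, out_dim // 2),
                                nn.BatchNorm1d(out_dim // 2),
                                nn.ReLU())

    def forward(self, x):          # x: (num_voxels, points_per_voxel, in_dim)
        v, p, _ = x.shape
        point_feat = self.fc(x.reshape(v * p, -1)).reshape(v, p, -1)
        voxel_feat = point_feat.max(dim=1, keepdim=True).values
        return torch.cat([point_feat, voxel_feat.expand(-1, p, -1)], dim=-1)

# Hypothetical use: 10 voxels, 35 points per voxel, 7 inputs per point
# (x, y, z, r plus the three offsets of equation (24)).
x = torch.randn(10, 35, 7)
vfe1, vfe2 = VFELayer(7, 32), VFELayer(32, 64)
feats = vfe2(vfe1(x))                      # (10, 35, 64) point-wise features
voxel_features = feats.max(dim=1).values   # (10, 64): one 1 x C vector per voxel
print(voxel_features.shape)
```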
The above method can be used to obtain the voxel-level features $D_e^f$ of the voxelized onboard LiDAR point cloud $D_e^v$ and the voxel-level features $D_r^f$ of the voxelized transformed point cloud $D_r^v$. G. Voxel-Level Point Cloud Feature Transmission Because point clouds are sparse in space, many voxels contain no scattered data and therefore have no corresponding voxel-level features. Storing point cloud voxel-level features in a special structure can greatly compress the data size and reduce the difficulty of transmitting them to the processing device; this is the compression of point cloud voxel-level features. One such structure is a hash table, a data structure that is accessed directly based on key values, speeding up lookup by mapping keys to positions in the table. The hash key is the spatial coordinate of a voxel, and its corresponding value is the voxel-level feature.
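As a sketch of this compression idea, a plain Python dictionary can play the role of the hash table, keyed by voxel coordinates and storing only non-empty voxels; the function names and feature dimension are assumptions.

```python
import numpy as np

def compress_voxel_features(voxel_coords, voxel_features):
    # Keep only occupied voxels, keyed by their integer voxel coordinates.
    # voxel_coords: (M, 3) integer indices; voxel_features: (M, C) features.
    return {tuple(int(v) for v in c): f
            for c, f in zip(voxel_coords, voxel_features)}

def decompress(table, coord, feat_dim):
    # Look up one voxel; a missing key means an empty voxel (zero feature).
    return table.get(tuple(coord), np.zeros(feat_dim))

# Hypothetical use: 3 occupied voxels out of a much larger grid.
coords = np.array([[0, 1, 2], [5, 5, 0], [7, 2, 1]])
feats = np.random.default_rng(2).standard_normal((3, 128)).astype(np.float32)
table = compress_voxel_features(coords, feats)
print(len(table), decompress(table, (9, 9, 9), 128).sum())
```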
When using sub-scheme I, the subsequent processing is performed on the roadside computing device.
G1) The autonomous driving vehicle compresses the voxel-level features $D_e^f$ of the onboard LiDAR point cloud, obtains the compressed voxel-level features $D_e^{fc}$ of the onboard LiDAR point cloud, and transmits them to the roadside computing device. The roadside computing device receives the compressed voxel-level features $D_e^{fc}$ of the onboard LiDAR point cloud and restores them to the voxel-level features $D_e^f$ of the onboard LiDAR point cloud.
When using sub-scheme II, subsequent processing is performed on the autonomous driving vehicle.
G2) The roadside computing device compresses the voxel-level features $D_r^f$ of the transformed point cloud, obtains the compressed voxel-level features $D_r^{fc}$ of the transformed point cloud, and transmits them to the autonomous driving vehicle; the autonomous driving vehicle receives the compressed voxel-level features $D_r^{fc}$ of the transformed point cloud and restores them to the voxel-level features $D_r^f$ of the transformed point cloud.
When using sub-scheme III, subsequent processing is performed in the cloud.
G3) The autonomous driving vehicle compresses the voxel-level features $D_e^f$ of its onboard LiDAR point cloud into the compressed voxel-level features $D_e^{fc}$ and transmits them to the cloud; the roadside computing device compresses the voxel-level features $D_r^f$ of the transformed point cloud into the compressed voxel-level features $D_r^{fc}$ and transmits them to the cloud as well; the cloud receives both compressed voxel-level features $D_e^{fc}$ and $D_r^{fc}$ and restores them respectively to $D_e^f$ and $D_r^f$. H. Data Concatenation and Aggregation The data concatenation operation aligns the voxel-level features $D_e^f$ of the onboard LiDAR point cloud and the voxel-level features $D_r^f$ of the transformed point cloud according to their positions in the coordinate system of the autonomous driving vehicle.
The data aggregation operation takes one side's voxel-level feature as the aggregated feature at any position where either the onboard LiDAR point cloud or the transformed point cloud has an empty voxel. For voxels that are non-empty on both sides, the aggregated voxel-level feature is calculated according to the following formulas:

$$D_a^f = [\,f_1 \ \ f_2 \ \ \cdots \ \ f_K\,]^{\mathrm{T}} \qquad (25)$$

$$f_k = \max(f_{lidar\_k}, \ f_{ego\_k}) \qquad (26)$$

Where: $D_a^f$ is the aggregated voxel-level feature; $f_k$ is the value of the aggregated voxel-level feature $D_a^f$ at position $k$; $f_{ego\_k}$ is the value of the onboard LiDAR point cloud voxel-level feature $D_e^f$ at position $k$; $f_{lidar\_k}$ is the value of the transformed point cloud voxel-level feature $D_r^f$ at position $k$. Features of voxels at the same coordinates are thus aggregated by the max-pooling method.
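A minimal sketch of step H under the sparse (hash-table) representation described in step G: features present on only one side are kept as-is, and overlapping voxels are merged by the element-wise maximum of equation (26); function and variable names are assumptions.

```python
import numpy as np

def aggregate(ego_table, road_table):
    # Concatenate and aggregate two sparse voxel-feature tables (step H).
    # Where only one side has a feature it is kept; where both sides occupy
    # the same voxel, the element-wise maximum is taken (equation (26)).
    merged = {}
    for key in set(ego_table) | set(road_table):
        a, b = ego_table.get(key), road_table.get(key)
        if a is None:
            merged[key] = b
        elif b is None:
            merged[key] = a
        else:
            merged[key] = np.maximum(a, b)
    return merged

# Hypothetical use with two tiny feature tables sharing one voxel.
ego = {(0, 0, 0): np.array([1.0, 5.0]), (1, 0, 0): np.array([2.0, 2.0])}
road = {(0, 0, 0): np.array([3.0, 4.0]), (0, 1, 0): np.array([0.5, 0.5])}
print(aggregate(ego, road))
```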
When using sub-scheme I, post-processing is performed on the roadside computing device. H1) The roadside computing device uses the above method to concatenate and aggregate the voxel-level features $D_e^f$ of the onboard LiDAR point cloud and $D_r^f$ of the transformed point cloud, obtaining the aggregated voxel-level feature $D_a^f$.
When using sub-scheme II, post-processing is performed on the autonomous driving vehicle.
H2) The autonomous driving vehicle uses the above method to concatenate and aggregate the voxel-level features $D_e^f$ of the onboard LiDAR point cloud and $D_r^f$ of the transformed point cloud, obtaining the aggregated voxel-level feature $D_a^f$. When using sub-scheme III, post-processing is performed in the cloud.
H3) The cloud uses the above method to concatenate and aggregate the voxel-level features $D_e^f$ of the onboard LiDAR point cloud and $D_r^f$ of the transformed point cloud, obtaining the aggregated voxel-level feature $D_a^f$. I. Object Detection By inputting the aggregated voxel-level features into a subsequent 3D object detection network model, the detection targets can be obtained. Taking VoxelNet as an example, after the aggregated voxel-level features are obtained, they are input into a 3D object detection network model based on voxel-level features to obtain the object detection results.
The object detection results can be represented as $U$, specifically:

$$U = [\,u_1 \ \ u_2 \ \ \cdots \ \ u_n\,] \qquad (27)$$

$$u_i = [\,x_i \ \ y_i \ \ z_i \ \ C_i \ \ W_i \ \ D_i \ \ H_i \ \ \varphi_i \ \ v_{xi} \ \ v_{yi} \ \ v_{zi}\,] \qquad (28)$$

Where: $u_i$ is the information of the $i$-th target in the object detection result; $x_i$, $y_i$ and $z_i$ are the x-, y- and z-axis coordinates of the $i$-th detected target in the autonomous driving vehicle coordinate system; $C_i$ is the confidence of the $i$-th detected target; $W_i$, $D_i$ and $H_i$ are the width, length and height of the detection box corresponding to the $i$-th detected target; $\varphi_i$ is the orientation angle of the detection box corresponding to the $i$-th detected target; $v_{xi}$, $v_{yi}$ and $v_{zi}$ are the projections of the motion speed of the $i$-th detected target on the x-, y- and z-axis directions in the autonomous driving vehicle coordinate system.
For any 3D object detection network model based on voxel-level features, the detection results should at least include the position of the target, i.e., $x_i$, $y_i$, $z_i$. For high-performance 3D object detection network models based on voxel-level features, the detection results should include some or all of the attributes $C_i$, $W_i$, $D_i$, $H_i$, $\varphi_i$, $v_{xi}$, $v_{yi}$ and $v_{zi}$ of the detected targets. Among them, the attributes $W_i$, $D_i$ and $H_i$ either all appear in the detection results or all do not; likewise, the attributes $v_{xi}$, $v_{yi}$ and $v_{zi}$ either all appear or all do not.
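For illustration, one detection u_i of equation (28) could be carried in a structure like the following; the class and field names are assumptions, with the optional fields mirroring the attribute groups described above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    """One entry u_i of the detection result U (equation (28)).

    Sketch only: position is always present; confidence, box size, heading
    and velocity are optional, mirroring the attribute groups above
    (W, D, H appear together; v_x, v_y, v_z appear together)."""
    x: float
    y: float
    z: float
    confidence: Optional[float] = None
    width: Optional[float] = None
    length: Optional[float] = None
    height: Optional[float] = None
    heading: Optional[float] = None
    vx: Optional[float] = None
    vy: Optional[float] = None
    vz: Optional[float] = None

# Hypothetical detection: a car about 18 m ahead with a 0.92 confidence score.
car = Detection(x=18.2, y=-1.4, z=0.6, confidence=0.92,
                width=1.9, length=4.6, height=1.5, heading=0.03)
print(car)
```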
When using sub-scheme I, target detection is performed on the roadside computing device.
I1) The roadside computing device inputs the aggregated voxel-level feature $D_a^f$ into a 3D object detection network model based on voxel-level features to obtain the target detection result $U$, and transmits the target detection result to the autonomous driving vehicle.
When using sub-scheme II, target detection is performed on the autonomous driving vehicle.
I2) The autonomous driving vehicle inputs the aggregated voxel-level feature $D_a^f$ into a 3D object detection network model based on voxel-level features to obtain the target detection result $U$. When using sub-scheme III, target detection is performed in the cloud.
I3) The cloud inputs the aggregated voxel-level feature $D_a^f$ into a 3D object detection network model based on voxel-level features to obtain the target detection result, and transmits the target detection result $U$ to the autonomous driving vehicle.
The technical key points and advantages of the present invention include: using roadside LiDAR as a supplement to the perception of autonomous driving vehicles improves the range and accuracy of object recognition; at the same time, using voxel features as the data transmitted between vehicle and road ensures that almost no original data information is lost while reducing bandwidth requirements.
The symbols and their meanings are summarized in the table below:
Symbol: Meaning
D_r: Roadside LiDAR point cloud
D_r': Transformed point cloud
D_r^e: Expanded transformed point cloud
D_e: Onboard LiDAR point cloud
D_r^v: Voxelized transformed point cloud
D_e^v: Voxelized onboard LiDAR point cloud
D_e^f: Onboard LiDAR voxel-level features
D_r^f: Transformed point cloud voxel-level features
D_e^fc: Compressed onboard LiDAR voxel-level features
D_r^fc: Compressed transformed point cloud voxel-level features
D_a^f: Aggregated voxel-level features
U: Target detection results
H_a, H_b, H_c: Installation heights of roadside LiDARs
θ_b^1: Vertical field of view angle of roadside LiDARs
θ_a^2, θ_b^2, θ_c^2: Angle between the highest elevation beam and the horizontal direction for roadside LiDARs
L_a, L_b, L_c: Distance between adjacent installation poles for roadside LiDARs
V_0: Extrinsic parameter vector of the roadside LiDAR
x_0: X-coordinate of the roadside LiDAR in the reference coordinate system
y_0: Y-coordinate of the roadside LiDAR in the reference coordinate system
z_0: Z-coordinate of the roadside LiDAR in the reference coordinate system
α_0: Rotation angle around the X-axis in the reference coordinate system for the roadside LiDAR
β_0: Rotation angle around the Y-axis in the reference coordinate system for the roadside LiDAR
γ_0: Rotation angle around the Z-axis in the reference coordinate system for the roadside LiDAR
x_lidar: X coordinate of a point in roadside LiDAR coordinates
y_lidar: Y coordinate of a point in roadside LiDAR coordinates
z_lidar: Z coordinate of a point in roadside LiDAR coordinates
x_real: X coordinate of a point in reference coordinates
y_real: Y coordinate of a point in reference coordinates
z_real: Z coordinate of a point in reference coordinates
R_x(α_0): Sub-rotation matrix calculated from extrinsic parameter α_0
R_y(β_0): Sub-rotation matrix calculated from extrinsic parameter β_0
R_z(γ_0): Sub-rotation matrix calculated from extrinsic parameter γ_0
E(R,T): Objective error function of the ICP algorithm
R: Rotation transformation matrix
T: Translation transformation matrix
p_i: Coordinates of the i-th point in the target point set P
q_i: Point in the source point set Q that forms the closest pair with point p_i
V_1': Position and angle vector of the autonomous driving vehicle relative to the roadside LiDAR
V_1: Position and angle vector of the autonomous driving vehicle in the reference coordinate system
H: Transformation matrix from the coordinate system of the roadside LiDAR to that of the autonomous driving vehicle
x_ego: X-coordinate of a point of the roadside LiDAR point cloud after transformation into the autonomous driving vehicle coordinate system
y_ego: Y-coordinate of a point of the roadside LiDAR point cloud after transformation into the autonomous driving vehicle coordinate system
z_ego: Z-coordinate of a point of the roadside LiDAR point cloud after transformation into the autonomous driving vehicle coordinate system
D, W, H: Spatial dimensions of the onboard LiDAR point cloud
D_v, W_v, H_v: Designed voxel sizes in the D, W and H dimensions
S_ego: Spatial range of the onboard LiDAR point cloud
S_lidar': Spatial range of the expanded transformed point cloud
K_lidar_start': Starting value of the expanded transformed point cloud range in the K dimension
K_lidar_end': Ending value of the expanded transformed point cloud range in the K dimension
K_lidar_start: Starting value of the transformed point cloud range in the K dimension
K_lidar_end: Ending value of the transformed point cloud range in the K dimension
V_K: Size of the voxel in the K dimension
a_i: Raw data of the i-th point in a voxel
x_i, y_i, z_i: X, Y and Z coordinates of the i-th point in a voxel
r_i: Reflection intensity of the i-th point in a voxel
v_x, v_y, v_z: Mean values of the coordinates of all points within a voxel
â_i: Data of the i-th point in a voxel after supplementary information is added
f_k: Value of the aggregated voxel-level feature D_a^f at position k
f_ego_k: Value of the onboard LiDAR voxel-level feature D_e^f at position k
f_lidar_k: Value of the transformed point cloud voxel-level feature D_r^f at position k
The above nouns and their corresponding meanings are summarized in the following table:
Noun: Meaning
Point cloud data: The data detected by the LiDAR; it can be processed through transformation or voxelization and remains point cloud data.
LiDAR: Light Detection And Ranging devices installed at the roadside or onboard.
Roadside LiDAR: LiDAR devices installed at the roadside.
Roadside computing device: The computing device corresponding to the roadside LiDAR.
Extrinsic parameters of roadside LiDAR: The position and angle of the roadside LiDAR in the reference coordinate system.
Autonomous driving vehicle: An autonomous driving vehicle using this patent's vehicle-road cooperative solution.
Roadside LiDAR point cloud: The data detected by the roadside LiDAR; it belongs to point cloud data.
Coordinate system of autonomous driving vehicle: A coordinate system established based on an autonomous driving vehicle.
Transformed point cloud: Point cloud data after transformation from the roadside LiDAR coordinate system into the autonomous driving vehicle coordinate system.
Onboard LiDAR: LiDAR installed on an autonomous driving vehicle.
Onboard LiDAR point cloud: The data detected by the onboard LiDAR; it belongs to point cloud data.
Voxelized point cloud: Point cloud data after three-dimensional segmentation of point cloud data using voxels.
Voxelized transformed point cloud: Point cloud data after three-dimensional segmentation of transformed point clouds using voxels.
Voxelized onboard LiDAR point cloud: Point cloud data after three-dimensional segmentation of onboard LiDAR point clouds using voxels.
Deep neural network: Multiple layers of interconnected neurons, including the feature extraction network and the object detection network.
Feature extraction network: A deep neural network designed to automatically learn and extract relevant features from point cloud data.
Object detection network: A deep neural network designed to locate and identify objects within a point cloud.
Voxel-level features of onboard LiDAR point clouds: Voxel-level features calculated based on voxelized onboard LiDAR point clouds.
Voxel-level features of transformed point clouds: Voxel-level features calculated based on voxelized transformed point clouds.
Voxel-level features of compressed point clouds: Compressed voxel-level features.
Scattered data: One scattered data item corresponds to one single point in the point cloud data set.
Data stitching: Concatenation of voxel-level features from different sources according to their respective coordinates.
Data aggregation: Aggregation calculation of voxel features at the same coordinates after concatenating them together, so that each coordinate corresponds to only one feature.
Aggregated voxel-level features: The voxel-level features obtained after concatenation and aggregation of all relevant datasets.
Object detection results: The output results of the 3D object detection network model, including but not limited to the position, size, angle, speed, and other characteristics of detected objects.
Localization data for autonomous driving vehicles: Positioning data that autonomous driving vehicles obtain from sensors such as GPS and RTK.
Brief Description of the Attached Figures
Figure 1 shows the flowchart of the proposed vehicle-road cooperative target detection method based on neural network feature sharing;
Figure 2 shows a schematic diagram of a roadside mechanical rotating LiDAR deployment;
Figure 3 shows a schematic diagram of a roadside solid-state LiDAR deployment;
Figure 4 shows a schematic diagram of a roadside solid-state LiDAR deployment (two reverse solid-state LiDARs installed on the same pole);
Figure 5 illustrates the processing of point cloud data by the VFE layer;
Figure 6 illustrates voxelization feature extraction and aggregation;
Figure 7 illustrates voxel point cloud object detection after merging;
Figure 8 demonstrates coordinate transformation for roadside LiDAR point clouds;
Figure 9 compares target detection results (the left image is from this patent's proposed vehicle-road cooperative detection method, and the right image is from directly taking the respective high-confidence target detection results).
Specific Implementation Method
The following detailed description of the present invention is provided in conjunction with the accompanying drawings and specific implementation methods. The present invention relates to a collaborative target detection method based on neural network feature sharing. It can be roughly divided into three main steps.
The first step is the installation and pre-calibration of roadside LiDAR sensors. The layout of roadside LiDARs depends on the existing roadside pillar facilities and the type of installed LiDARs in vehicle-road cooperative scenes. Existing roadside LiDARs are installed in pole or crossbar form, specifically on infrastructure pillars with power support, such as roadside gantries, streetlights, and signal light poles.
For intersection scenarios, it is sufficient to deploy a roadside LiDAR whose detection range is greater than or equal to the scene range or covers the key areas within the scene. For long-distance, large-scale complex scenes such as expressways, highways, and parks, it is recommended to follow the deployment guidelines for roadside LiDARs in this invention so that the coverage area meets the full-coverage requirement for the scene; that is, each roadside LiDAR supplements the blind spots of the other roadside LiDARs within its coverage area to achieve better vehicle-road cooperative target detection results. In vehicle-road cooperation schemes, roadside LiDARs enhance the perception capability of autonomous driving vehicles by providing information about surrounding targets, such as their relative position, category, size, and traveling direction. Therefore, the perception capability of the roadside LiDARs themselves should also be as strong as possible: parameters such as the number of beams and the sampling frequency should not be lower than the corresponding parameters of the onboard LiDARs. In addition, to compensate for shortcomings of onboard LiDARs such as susceptibility to occlusion while providing redundant sensing data, the sensing range of roadside LiDARs should cover areas where occlusion commonly occurs while maintaining unobstructed lines of sight.
After the roadside LiDAR sensor is installed, the installation position and angle of the roadside LiDAR must be calibrated (extrinsic parameter calibration) so that the relative pose between the roadside LiDAR and the onboard LiDAR can later be calculated. This yields the coordinate position parameters and angular attitude parameters of the LiDAR relative to a chosen reference coordinate system. First, select at least 4 reflectivity feature points as control points within the detection area of the roadside LiDAR. Reflectivity feature points are points whose reflectivity differs significantly from that of surrounding objects, such as traffic signs and license plates. Selecting reflectivity feature points as control points makes it easy to quickly match points in the point cloud data to coordinates in the reference coordinate system based on their positions and reflection intensity differences from other points. Control points should be distributed as discretely as possible. Where the scene environment allows, and provided the control points are discretely distributed with no three control points collinear, more control points are preferable to fewer for better calibration results. Control points should also be located as far from the roadside LiDAR as possible within its detection range, usually at more than 50% of the LiDAR's maximum detection distance. If scene limitations force control points to lie at 50% or less of the maximum detection distance, increase the number of control points instead. Subsequently, high-precision measurement instruments such as handheld RTK are used to measure the precise coordinates of the control points, and the corresponding point coordinates are found in the roadside LiDAR point cloud. When a high-precision map file of the LiDAR deployment scene is available, there is no need to use handheld RTK or other high-precision measuring instruments.
Instead, the corresponding feature point coordinates can be found directly in the high-precision map. Finally, a 3D registration algorithm is used to calculate the optimal value of the LiDAR extrinsic parameter vector, and the result is used as the calibration result. Commonly used 3D registration algorithms include the ICP and NDT algorithms; the ICP algorithm is mainly used for LiDAR extrinsic parameter calibration. The basic principle of the ICP algorithm is to calculate the optimal matching extrinsic parameters between the target point set P (the coordinates of the control points in the roadside LiDAR coordinate system) and the source point set Q (the coordinates of the control points in the reference coordinate system) so that the error function E(R, T) is minimized.
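As a minimal illustration of this calibration step, the sketch below (Python/NumPy) computes the least-squares rigid transform between two matched control-point sets using the closed-form SVD (Kabsch) solution, which is the alignment step that ICP iterates when correspondences are not known in advance. The function names and the coordinate values are our own illustrative assumptions, not values from the patent.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid alignment Q ~ R @ P + T (Kabsch/SVD).

    P: (N, 3) control-point coordinates in the roadside LiDAR frame (target set).
    Q: (N, 3) matching coordinates in the reference frame (source set).
    Returns R (3x3) and T (3,) minimizing E(R, T) = sum_i ||Q_i - (R @ P_i + T)||^2.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    T = q_mean - R @ p_mean
    return R, T

# At least 4 non-collinear control points measured in both frames (values illustrative only).
P_lidar = np.array([[12.1, 3.4, -5.9], [80.2, -6.5, -6.1], [101.7, 7.9, -6.0], [119.5, -8.8, -6.2]])
Q_ref   = np.array([[305.6, 118.2, 12.4], [370.9, 80.1, 12.2], [400.3, 86.5, 12.3], [421.8, 64.7, 12.1]])
R, T = rigid_align(P_lidar, Q_ref)
residual = np.linalg.norm((P_lidar @ R.T + T) - Q_ref, axis=1)  # per-point calibration error
```

The per-point residuals give a quick check of calibration quality; large residuals usually indicate a mismatched or poorly measured control point.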
The method for calibrating the roadside LiDAR extrinsic parameters is not limited here, but the calibration results must contain the three-dimensional world coordinates of the sensor as well as its pitch, yaw, and roll angles, which are required by the subsequent point cloud transformation steps.
The second step is the processing and feature extraction of LiDAR point cloud data at the vehicle and road ends. During actual vehicle-road cooperative autonomous driving, first obtain the real-time world coordinates, pitch angle, yaw angle, and roll angle of the vehicle from the autonomous driving vehicle's positioning module. Based on the vehicle's RTK positioning and the calibration results of the roadside LiDAR, calculate the relative pose of the autonomous driving vehicle with respect to the roadside LiDAR, and transform the roadside LiDAR point cloud data into the vehicle coordinate system.
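A minimal sketch of this transformation is given below, assuming the calibrated roadside extrinsics and the vehicle's RTK pose are both expressed in the same reference frame and that rotations follow a Z-Y-X (yaw-pitch-roll) convention; the numeric poses are illustrative placeholders, not measured values.

```python
import numpy as np

def pose_matrix(x, y, z, roll, pitch, yaw):
    """4x4 homogeneous transform from a position and roll/pitch/yaw in the reference frame.
    Rotation convention assumed here: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx
    M[:3, 3] = [x, y, z]
    return M

# Calibrated roadside LiDAR extrinsics and real-time vehicle pose (illustrative values).
T_ref_from_lidar = pose_matrix(400.0, 80.0, 6.5, 0.0, np.radians(7.0), np.radians(90.0))
T_ref_from_ego   = pose_matrix(380.0, 75.0, 0.3, 0.0, 0.0, np.radians(88.0))

# Relative pose: roadside LiDAR frame -> autonomous driving vehicle (ego) frame.
T_ego_from_lidar = np.linalg.inv(T_ref_from_ego) @ T_ref_from_lidar

def transform_points(points_xyzr, T):
    """Apply T to the xyz columns of an (N, 4) array [x, y, z, reflectivity]."""
    xyz1 = np.c_[points_xyzr[:, :3], np.ones(len(points_xyzr))]
    out = points_xyzr.copy()
    out[:, :3] = (xyz1 @ T.T)[:, :3]
    return out
```

Applying `transform_points` to the roadside LiDAR point cloud yields the transformed point cloud in the vehicle coordinate system used by the following steps.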
According to the size of the spatial dimensions in which the onboard LiDAR point cloud lies, design the voxel sizes used to partition the onboard LiDAR point cloud into voxels. For the transformed point cloud, use a voxel partition consistent with that used for the onboard LiDAR, so that the spatial grid partitioning the transformed point cloud completely overlaps the grid partitioning the onboard LiDAR point cloud. Group the scattered data of both the onboard LiDAR point cloud and the expanded transformed point cloud into voxels according to their locations; scattered data falling within the same voxel belong to the same group. Because point clouds are uneven and sparse, voxels generally do not contain equal numbers of scattered data, and some voxels contain none at all. To reduce the computational burden and to eliminate the imbalance caused by inconsistent point densities across voxels, randomly downsample the scattered data in any voxel whose point count exceeds a certain threshold (recommended value: 35). Referring to Figure 6, the two sets of point cloud data are divided into several discrete voxels using a fixed-size lattice and then expanded. The voxelization method described above is used to calculate the feature vector of each voxel separately. Taking the VoxelNet network model, a classic three-dimensional object detection algorithm, as an example, multiple consecutive VFE (voxel feature encoding) layers are used to extract the feature vector of each voxel: the offset of each scattered data point relative to the centroid of its voxel is used to supplement its spatial information, and the processed voxelized point cloud data is input into a cascade of consecutive VFE layers. A schematic diagram of how a VFE layer processes voxelized point cloud data is shown in Figure 5. A VFE layer first passes each expanded scattered data point through a fully connected network layer to obtain point-level features; it then max-pools these features to obtain voxel-level features; finally, it concatenates the voxel-level features with the previously obtained point-level features to produce concatenated point-level results. After processing by the cascaded VFE layers, the final voxel-level features are obtained through integration by fully connected layers and max-pooling.
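The sketch below illustrates the voxel grouping (with the recommended 35-point sampling cap) and a simplified VFE-style feature computation in NumPy. It is a didactic approximation of the VoxelNet VFE, not the trained network: the weight matrices are placeholders that would normally come from training, and the layer widths are chosen only so that the final feature is 128-dimensional as in the examples below.

```python
import numpy as np

VOXEL_SIZE = np.array([0.4, 0.4, 0.5])   # grid used in the implementation examples
MAX_POINTS_PER_VOXEL = 35                # recommended sampling threshold

def group_into_voxels(points, origin, rng=np.random.default_rng(0)):
    """Group an (N, 4) point array [x, y, z, r] into voxels and randomly downsample
    voxels that contain more than MAX_POINTS_PER_VOXEL points."""
    idx = np.floor((points[:, :3] - origin) / VOXEL_SIZE).astype(int)
    voxels = {}
    for key, pt in zip(map(tuple, idx), points):
        voxels.setdefault(key, []).append(pt)
    out = {}
    for key, pts in voxels.items():
        pts = np.array(pts)
        if len(pts) > MAX_POINTS_PER_VOXEL:
            pts = pts[rng.choice(len(pts), MAX_POINTS_PER_VOXEL, replace=False)]
        out[key] = pts
    return out

def vfe_layer(aug_points, W):
    """One simplified VFE stage: shared fully connected layer (ReLU), max-pool to a
    voxel feature, then concatenate the pooled feature back onto every point."""
    point_feat = np.maximum(aug_points @ W, 0.0)          # point-wise features
    voxel_feat = point_feat.max(axis=0)                   # element-wise max-pooling
    return np.hstack([point_feat, np.tile(voxel_feat, (len(point_feat), 1))])

def voxel_feature(pts, weights):
    """Augment points with their offset from the voxel centroid, run the cascaded
    VFE stages, and max-pool to the final voxel-level feature vector."""
    centroid = pts[:, :3].mean(axis=0)
    aug = np.hstack([pts, pts[:, :3] - centroid])         # [x, y, z, r, dx, dy, dz]
    feat = aug
    for W in weights:                                     # cascaded VFE layers
        feat = vfe_layer(feat, W)
    return feat.max(axis=0)                               # final voxel-level feature

# Placeholder (untrained) weights: 7 -> 32 (concat -> 64), 64 -> 64 (concat -> 128 dims).
weights = [0.1 * np.random.default_rng(1).standard_normal((7, 32)),
           0.1 * np.random.default_rng(2).standard_normal((64, 64))]
```

Voxels containing no scattered data are simply skipped, which is what makes the sparse storage described next worthwhile.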
Due to the sparse existence of point clouds in space, many voxels have no scattered data and therefore no corresponding voxel-level features. Storing voxel-level features of point clouds using a special structure can greatly compress the data size and reduce the difficulty of transmission when sending it to processing devices. One such special structure is a hash table, which is a data structure that directly accesses data based on key values. It speeds up searching by mapping key values to positions in the table for accessing records. In this case, the hash key of the hash table is the spatial coordinates of voxels, while its corresponding value represents voxel-level features.
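A dictionary keyed by the integer voxel coordinates is one straightforward way to realize such a hash table; the sketch below is a minimal version of this idea (the class name and the byte-count heuristic are our own, and the 128-dimensional feature size follows the examples later in this description).

```python
import numpy as np

class SparseVoxelFeatures:
    """Hash-table storage of voxel-level features: the key is the voxel's integer grid
    coordinate, the value is its feature vector; empty voxels are simply absent."""
    def __init__(self):
        self.table = {}

    def insert(self, coords, feature):
        self.table[tuple(coords)] = np.asarray(feature, dtype=np.float32)

    def get(self, coords):
        return self.table.get(tuple(coords))   # None means the voxel holds no scattered data

    def nbytes(self):
        # Rough payload size: 3 int32 coordinates plus the feature values per occupied voxel.
        return sum(3 * 4 + f.nbytes for f in self.table.values())
```

Because only a small fraction of the grid is occupied in typical road scenes, storing occupied voxels with their coordinates is far smaller than a dense D x W x H x 128 tensor, which is what makes the subsequent transmission practical.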
The third step is to aggregate the voxel-level features and transformed voxel-level features of the onboard LiDAR point cloud data to obtain aggregated voxel-level features and perform target detection.
Before performing data aggregation and data stitching, it is necessary to compress the voxel-level features of the point cloud and transmit them to a computing device. The computing device can be the roadside computing device, the autonomous driving vehicle, or the cloud. In sub-scheme I, data aggregation, data stitching, and subsequent processing are performed on the roadside computing device; in sub-scheme II, they are performed on the autonomous driving vehicle; in sub-scheme III, they are performed in the cloud.
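One plausible way to package the sparse features for transmission, regardless of which sub-scheme receives them, is sketched below. The choice of half-precision features and zlib compression is our assumption for illustration, not something the patent prescribes.

```python
import io
import zlib
import numpy as np

def pack_features(sparse_table):
    """Serialize a sparse voxel-feature table (dict: voxel coords -> feature vector) for
    transmission to the device that performs stitching and aggregation."""
    coords = np.array(list(sparse_table.keys()), dtype=np.int32)        # (M, 3)
    feats = np.stack(list(sparse_table.values())).astype(np.float16)    # (M, 128), half precision
    buf = io.BytesIO()
    np.savez(buf, coords=coords, feats=feats)
    return zlib.compress(buf.getvalue())

def unpack_features(payload):
    """Restore the sparse voxel-feature table on the receiving side."""
    data = np.load(io.BytesIO(zlib.decompress(payload)))
    return {tuple(c): f.astype(np.float32) for c, f in zip(data["coords"], data["feats"])}
```

The receiver (roadside unit, vehicle, or cloud) calls `unpack_features` and then proceeds with stitching and aggregation exactly as described next.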
During the process of data stitching and data aggregation, voxelization does not change the spatial relative position of point clouds. Therefore, it is still possible to supplement the voxel-level features of onboard LiDAR point clouds based on the transformed point cloud voxel-level features in the previous step. This involves aligning the voxel-level features of onboard LiDAR point clouds and transformed point clouds according to their positions in the coordinate system of an autonomous driving vehicle. During data aggregation, if one side's voxels are empty at a certain position while the other side's voxels are not empty, then we take that non-empty side's voxel-level feature as our aggregated feature for that position. For two sets of vector characteristics with identical spatial coordinates in both datasets, we use the maximum value pooling method to aggregate them into a single vector characteristic. For non-overlapping vectors, we keep only those from non-empty voxels.
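A compact sketch of this stitching-and-aggregation rule, assuming both feature sets are stored as dictionaries keyed by voxel coordinates in the vehicle grid, is shown below; the function name is our own.

```python
import numpy as np

def aggregate(ego_feats, lidar_feats):
    """Stitch and aggregate two sparse voxel-feature tables defined on the same vehicle
    coordinate grid: element-wise max where both voxels are occupied, otherwise keep the
    feature from whichever side is non-empty."""
    merged = dict(ego_feats)
    for coords, feat in lidar_feats.items():
        if coords in merged:
            merged[coords] = np.maximum(merged[coords], feat)   # max-pooling aggregation
        else:
            merged[coords] = feat                               # only one side is non-empty
    return merged
```

For example, features [15, 45, 90, ..., 17] and [8, 17, 110, ..., 43] at the same voxel aggregate to [15, 45, 110, ..., 43], matching the worked example given later in Implementation example 1.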
After the aggregated voxel-level features are input into the subsequent 3D object detection network model, the detection targets are obtained. As shown in Figure 7, taking the VoxelNet network model as an example, the concatenated data is input into the consecutive convolutional layers of the VoxelNet network model to obtain spatial feature maps. Finally, these feature maps are fed into the RPN (Region Proposal Network) of the VoxelNet network model to obtain the final object detection results.
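Before the convolutional layers can run, the aggregated sparse features have to be scattered back into a dense tensor over the vehicle's voxel grid. The sketch below shows this step; the grid shape is illustrative, and `detection_network` stands in for the trained convolutional middle layers and RPN, which are not reproduced here.

```python
import numpy as np

def to_dense(merged, grid_shape, feat_dim=128):
    """Scatter aggregated voxel-level features (dict: (i, j, k) -> feature) into the dense
    (C, D, W, H) tensor expected by VoxelNet-style convolutional layers and the RPN."""
    dense = np.zeros((feat_dim,) + tuple(grid_shape), dtype=np.float32)
    for (i, j, k), feat in merged.items():
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1] and 0 <= k < grid_shape[2]:
            dense[:, i, j, k] = feat                       # voxels outside the grid are dropped
    return dense

# dense = to_dense(merged, grid_shape=(400, 352, 10))      # grid size is illustrative only
# detections = detection_network(dense)                    # trained 3D conv + RPN head (not shown)
```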
The present invention has the following technical key points and advantages: Using roadside LiDAR as a supplement to autonomous driving vehicle perception improves the range and accuracy of object recognition. At the same time, using point cloud voxelization features as data transmitted between vehicles and roads ensures that almost no original data information is lost while reducing bandwidth requirements for data transmission.
An experimental scene is set up at an intersection at the School of Transportation Engineering on Tongji University's Jiading campus. In this scene, poles 6.4 m high stand every 20 meters along the road section. An Innovusion Jaguar array-type 300-line LiDAR and an Ouster 128-line 360° LiDAR are used as roadside LiDARs in this experiment. The vertical field of view of the Innovusion Jaguar array-type 300-line LiDAR is 40°, and its maximum detection distance is 200 m. The vertical field of view of the Ouster 128-line 360° LiDAR is 45°, and its maximum detection distance is 140 m. The autonomous driving vehicle uses an Ouster 64-line 360° LiDAR as its onboard LiDAR, installed horizontally at a height of 2 m. The onboard LiDAR and the vehicle body are rigidly connected, so their relative attitude and displacement remain unchanged. The onboard LiDAR was calibrated at the factory, and its position and angle can be corrected in real time based on RTK measurements obtained while the vehicle moves.
Implementation example 1 is as follows: (1) Deployment and calibration of roadside LiDAR sensors Only Ouster 128-line 360° LiDAR is used, considering the size of the LiDAR itself. The installation height of Ouster 128-line 360° LiDAR is set to be 6.5m, with one installed every five poles, which meets the guidelines for deployment of roadside mechanical rotating and mixed solid-state LiDARs.
Six reflectivity feature points are selected as control points within the area covered by the LiDAR. These six control points are located at the base of two poles on each side at distances of 80m, 100m, and 120m from the pole where the LiDAR is installed. Since there is a certain curvature in this section of the road, any three control points satisfy non-collinear conditions. The precise coordinates of these control points are measured using handheld RTK devices and matched with corresponding coordinates in point clouds obtained by the LiDAR sensor using the ICP algorithm for calibration.
(2) Processing and feature extraction of point cloud data.
After the calibration work in (1), the position of the roadside LiDAR point cloud in the coordinate system of the autonomous driving vehicle can be obtained, as shown in Figure 8. The roadside LiDAR point cloud is aligned with the coordinate system of the autonomous driving vehicle. The transformed point cloud is divided into voxels using a fixed-size grid of [0.4 m, 0.4 m, 0.5 m] and expanded, resulting in the voxelized transformed point cloud. After each scattered data point in the voxelized transformed point cloud is supplemented with the voxel mean information, voxel-level features are calculated by feeding the data into multi-layer VFEs. Voxels containing no scattered data require no calculation, and each non-empty voxel is finally represented by a 128-dimensional feature vector. The computed voxel-level features are stored by the roadside computing device in a hash table in which the spatial positions of the voxels serve as hash keys and the corresponding contents are the respective voxel-level features, resulting in the compressed transformed point cloud voxel-level features. The onboard LiDAR point cloud is processed in the same way up to obtaining the onboard LiDAR voxel-level features, except that, unlike the roadside processing above, no hash table is established for the onboard data. At this point, compared with the original raw point cloud data, the data size has been reduced to approximately one-tenth.
(3) Data concatenation, data aggregation, and target detection of voxel-level features. The autonomous driving vehicle receives the compressed transformed point cloud voxel-level features sent by the roadside computing device and restores them to transformed point cloud voxel-level features. Since the coordinate system of the received transformed point cloud voxel-level features has already been rotated into the coordinate system of the autonomous driving vehicle, they can be directly concatenated with the onboard LiDAR point cloud voxel-level feature data in the same coordinate system. Maximum value pooling is used for the data aggregation operation on voxels with identical coordinates. For example, if two voxel-level features are [15, 45, 90, ..., 17] and [8, 17, 110, ..., 43], their aggregated result is [15, 45, 110, ..., 43]. After all data concatenation and data aggregation operations on the voxel-level features are completed, the features are input into the subsequent RPN to obtain the target detection results. The target detection results and confidence levels of the proposed vehicle-road collaborative detection method and of direct fusion of the onboard LiDAR point cloud with the roadside LiDAR point cloud are plotted in a bird's-eye view, as shown in Figure 9. It can be seen that sharing neural network features for vehicle-road collaborative target detection can significantly improve accuracy while reducing the bandwidth required for data transmission.
Implementation example 2 is as follows: (1) Deployment and calibration of roadside LiDAR sensors. When only the Innovusion Jaguar array-type 300-line LiDAR is used and only one LiDAR is installed per pole, the installation height of the LiDAR is 6.5 m with a tilt angle of 7°, and one is installed every eight poles. This meets the guidelines for deploying solid-state LiDARs on roadsides.
Six reflectivity feature points are selected within the LiDAR area as control points. The six control points are located at the base of two adjacent poles on both sides of the road at distances of 100m, 120m, and 140m from the pole where the LiDAR is installed. Since there is some curvature in this section of the road, any three control points satisfy non-collinear conditions. The precise coordinates of these control points are measured using a handheld RTK device and matched to their corresponding coordinates in the point cloud data obtained by the LiDAR sensor. Finally, an ICP algorithm is used to calibrate the LiDAR.
(2) Processing and feature extraction of point cloud data.
Following the steps in Example 1, obtain voxel-level features for the deviated point cloud and onboard LiDAR point cloud. The calculated voxel-level features of the onboard LiDAR point cloud are stored in a hash table by using the spatial position of each voxel as a hash key, with the corresponding content being the respective voxel-level feature. This results in compressed voxel-level features for the onboard LiDAR point cloud that can be used by autonomous driving vehicles.
(3) Data concatenation, data aggregation, and target detection of voxel-level features. The roadside computing device receives the compressed voxel-level features of the onboard LiDAR point cloud sent by the autonomous driving vehicle and decompresses them to restore the voxel-level features of the onboard LiDAR point cloud. The subsequent steps of data concatenation, data aggregation, and target detection are the same as those in Example 1 (3). After obtaining the target detection results, the roadside computing device sends them to the autonomous driving vehicle.
Implementation example 3 is as follows: (1) Deployment and calibration of roadside LiDAR sensors. When only the Innovusion Jaguar array-type 300-line LiDAR is used and two reverse LiDARs are installed on the same pole, the installation height of the LiDARs is 6.5 m with a tilt angle of 7°, and a pair is installed every nine poles, which complies with the guidelines for deployment of roadside solid-state LiDARs.
Six reflectivity feature points are selected within the area covered by the LiDAR as control points. The six control points are located at both sides of poles at distances of 100m, 120m, and 140m from the pole where the LiDAR is installed. Since there is some curvature in this section of the road, any three control points satisfy non-collinear conditions. The precise coordinates of these control points are measured using handheld RTK devices and matched to corresponding coordinates in point clouds obtained by LiDAR. The ICP algorithm is used to calibrate the LiDAR.
(2) Processing and feature extraction of point cloud data.
Obtain the compressed transformed point cloud voxel-level features as in step (2) of Example 1, and obtain the compressed onboard LiDAR point cloud voxel-level features as in step (2) of Example 2.
(3) Data concatenation, aggregation, and object detection of voxel-level features The cloud receives the compressed onboard LiDAR point cloud voxel-level features sent by the autonomous driving vehicle and restores them to the onboard LiDAR point cloud voxel-level features. The cloud also receives the compressed transformed voxel-level features sent by roadside computing devices and restores them to transformed voxel-level features. Subsequent steps for data concatenation, aggregation, and object detection are the same as those in step (3) of Example 1 until obtaining object detection results. The cloud then sends these results to the autonomous driving vehicle.
The above is only the preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any skilled person in this technical field can easily conceive changes or substitutions within the technical scope disclosed by the present invention, which should be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by what is claimed.

Claims (9)

  1. Claims 1. A vehicle-road cooperative perception method for 3D object detection based on deep neural networks with feature sharing, comprising the following steps: Preparation stage: A. Deploy a roadside LiDAR and configure the corresponding roadside computing device for said roadside LiDAR; B. Calibrate the extrinsic parameters of said roadside LiDAR; Application stage: C. Said roadside computing device calculates the relative pose of an autonomous driving vehicle concerning said roadside LiDAR based on localization data of said autonomous driving vehicle and extrinsic parameters of said roadside LiDAR; D. Said roadside computing device transforms point clouds detected by said roadside LiDAR into a coordinate system of said autonomous driving vehicle according to their relative pose, obtaining transformed point clouds; E. Said roadside computing device voxelizes and processes these transformed point clouds, obtaining voxelized transformed point clouds; meanwhile, an onboard LiDAR detects point clouds from its perspective and also voxelizes detected point clouds into voxelized onboard point clouds; F. Said roadside computing device computes voxel-level features for said voxelized transformed point clouds through a feature extraction network, obtaining features for said transformed point clouds; similarly, said autonomous driving vehicle computes point cloud voxel-level features for its onboard voxels through said feature extraction networks to obtain features for its onboard point cloud as well; G. Said autonomous driving vehicle compresses the point cloud voxel-level features of the onboard LiDAR and obtains compressed onboard LiDAR point cloud voxel-level features, which are then transmitted to said roadside computing device; said roadside computing device receives the compressed onboard LiDAR point cloud voxel-level features and restores them to onboard LiDAR point cloud voxel-level features; H. Said roadside computing device performs data stitching and data aggregation on said onboard LiDAR point cloud voxel-level features and said transformed point cloud voxel-level features to obtain aggregated voxel-level features; I. Said roadside computing device inputs said aggregated voxel-level features into a 3D object detection network model based on voxel-level features to obtain object detection results, which are then transmitted to said autonomous driving vehicle.
  2. 2. A vehicle-road cooperative perception method for 3D object detection based on deep neural networks with feature sharing, comprising the following steps: Preparation stage: A. Install a roadside LiDAR and configure the corresponding roadside computing device for said roadside LiDAR; B. Calibrate the extrinsic parameters of said roadside LiDAR; Application stage: C. Said roadside computing device calculates the relative pose of an autonomous driving vehicle concerning said roadside LiDAR based on its positioning data and extrinsic parameters; D. Said roadside computing device converts the point cloud detected by said roadside LiDAR into a coordinate system aligned with that of said autonomous driving vehicle using its relative pose, obtaining transformed point clouds; E. Said transformed point cloud is voxelized by said roadside computing device to obtain a voxelized transformed point cloud, while a voxelized point cloud is also obtained from onboard LiDAR data processed similarly; F. Said roadside computing device calculates the voxel-level features of said voxelized transformed point cloud through a feature extraction network and obtains the voxel-level features of the transformed point cloud; said autonomous driving vehicle calculates the voxel-level features of the onboard LiDAR point cloud through said feature extraction networks, and obtains the voxel-level features of the onboard LiDAR point cloud; G. Said roadside computing device compresses and processes the voxel-level features of transformed point clouds to obtain compressed voxel-level features, which are then transmitted to autonomous driving vehicles; autonomous driving vehicles receive compressed voxel-level features and restore them to the transformed point cloud's original voxel-level features; H. Autonomous driving vehicles perform data stitching and aggregation on both the onboard LiDAR point cloud's voxel-level features and the transformed point cloud's voxel-level features to obtain aggregated voxel-level features; I. Autonomous driving vehicles input aggregated voxel-level features into a 3D object detection network model based on voxel-level characteristics to obtain object detection results.
  3. 3. A vehicle-road cooperative perception method for 3D object detection based on deep neural networks with feature sharing, comprising the following steps: Preparation stage: A. Install a roadside LiDAR and configure corresponding roadside computing devices for said roadside LiDAR; B. Calibrate the extrinsic parameters of said roadside LiDAR; Application stage: C. Said roadside computing device calculates the relative pose of said autonomous driving vehicle concerning said roadside LiDAR based on its positioning data and extrinsic parameters of said roadside LiDAR; D. Said roadside computing device converts the point cloud detected by the roadside LiDAR into a coordinate system aligned with that of said autonomous driving vehicle using its calculated relative pose, obtaining a transformed point cloud; E. Said transformed point cloud is voxelized by said roadside computing device to obtain a voxelized transformed point cloud; similarly, a voxelized point cloud is obtained from the onboard LiDAR's detection results by processing it through voxelization; F. Voxel-level features are extracted from both types of voxelized point clouds respectively through a feature extraction network: those from the transformed one are computed by said roadside computing devices while those from the onboard ones are computed by autonomous driving vehicles; G. Said autonomous driving vehicle compresses said voxel-level features of said onboard LiDAR point cloud and transmits them to a cloud server; said roadside computing device compresses said voxel-level features of said transformed point cloud and transmits them to said cloud. Said cloud server receives both compressed voxel-level features, restores them into their original forms, and aggregates them; H. Said cloud server performs data fusion and aggregation on both sets of voxel-level features to obtain aggregated voxel-level features; I. Said aggregated voxel-level features are input into a 3D object detection network model based on voxel-level features to obtain object detection results, which are then transmitted back to the autonomous driving vehicle.
  4. 4. A method as claimed in any one of claims 1 to 3, characterized in that the configuration criteria for roadside LiDARs are: (1) For the case of installing a mechanical rotating LiDAR, or two reverse solid-state LiDARs on the same pole, at the roadside, at least the following should be met: H·tan(θ₂) ≥ L, where H represents the installation height of the LiDAR; θ₂ represents the angle between the highest elevation beam of the LiDAR and the horizontal direction; and L represents the distance between adjacent mounting poles for LiDARs; (2) For roadside installation of a single roadside solid-state LiDAR, at least the following should be met: H_b·tan(θ_b1 − θ_b2) + H_b·tan(θ_b2) ≥ L_b, where H_b represents the installation height of the roadside solid-state LiDAR; θ_b1 represents the vertical field of view angle of the roadside solid-state LiDAR; θ_b2 represents the angle between the highest elevation beam of the roadside solid-state LiDAR and the horizontal direction; and L_b represents the distance between adjacent mounting poles for roadside solid-state LiDARs.
  5. 5. A method according to any one of claims 1 to 3, characterized in that when calibrating the extrinsic parameters of the roadside LiDAR, the number, spatial distribution, and collinearity of control points are considered when selecting feature points within the scanning area of the roadside LiDAR as control points.
  6. 6. A method according to any one of claims 1 to 3, characterized in that the extrinsic parameters of the roadside LiDAR are calibrated using the following method: the ICP algorithm is used to calculate the extrinsic parameters by taking the coordinates of the control points in the roadside LiDAR coordinate system and in the RTK-measured reference coordinate system as the target point set P and the source point set Q, respectively.
  7. 7. A method as claimed in any one of claims 1 to 3, characterized in that the transformed point cloud is expanded during the point cloud voxelization process to ensure that the voxel partition grids of the onboard LiDAR point cloud D_e and the expanded transformed point cloud D_t' are consistent, with the calculation formula: K_lidar_start' = K_ego_start − n₁·v_K ≤ K_lidar_start; K_lidar_end' = K_ego_end + n₂·v_K ≥ K_lidar_end; n₁, n₂ ∈ N. Where: K_lidar_start' and K_lidar_end' are the starting and ending values of the expanded transformed point cloud D_t' in dimension K; K_lidar_start and K_lidar_end are the starting and ending values of the transformed point cloud D_t in dimension K; K_ego_start and K_ego_end are the starting and ending values of the onboard LiDAR point cloud range in dimension K; v_K is the size of the voxels in dimension K.
  8. 8. A method as claimed in any one of claims 1 to 3, characterized in that when extracting voxel-level features of point clouds, the information of each point is supplemented with its offset from the voxel centroid: â_i = [x_i, y_i, z_i, r_i, x_i − v_x, y_i − v_y, z_i − v_z]. Where: â_i is the information of the i-th point in voxel A after supplementation; x_i, y_i, z_i are the coordinates of the i-th point in voxel A; r_i is the reflection intensity of the i-th point in voxel A; v_x, v_y, v_z are the mean values of all points' coordinates within voxel A.
  9. 9. A method as claimed in any one of claims 1 to 3, characterized in that the voxel-level feature data aggregation method uses maximum value pooling to aggregate voxel-level features with the same coordinates, which is expressed by the following formula: f_k = max(f_ego_k, f_lidar_k). Where: f_k is the value of the aggregated voxel-level feature D^f at position k; f_ego_k is the value of the vehicular (onboard) LiDAR point cloud voxel-level feature D_e^f at position k; f_lidar_k is the value of the transformed point cloud voxel-level feature D_t^f at position k.
GB2313215.2A 2021-01-01 2021-04-01 Vehicle-road collaboration-oriented sensing information fusion representation and target detection method Pending GB2618936A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN202110000327 2021-01-01
CN202110228419 2021-03-01
PCT/CN2021/085148 WO2022141912A1 (en) 2021-01-01 2021-04-01 Vehicle-road collaboration-oriented sensing information fusion representation and target detection method

Publications (2)

Publication Number Publication Date
GB202313215D0 GB202313215D0 (en) 2023-10-11
GB2618936A true GB2618936A (en) 2023-11-22

Family

ID=82260124

Family Applications (2)

Application Number Title Priority Date Filing Date
GB2316625.9A Pending GB2620877A (en) 2021-01-01 2021-04-01 On-board positioning device-based roadside millimeter-wave radar calibration method
GB2313215.2A Pending GB2618936A (en) 2021-01-01 2021-04-01 Vehicle-road collaboration-oriented sensing information fusion representation and target detection method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB2316625.9A Pending GB2620877A (en) 2021-01-01 2021-04-01 On-board positioning device-based roadside millimeter-wave radar calibration method

Country Status (3)

Country Link
CN (5) CN116685873A (en)
GB (2) GB2620877A (en)
WO (9) WO2022141912A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114724362B (en) * 2022-03-23 2022-12-27 中交信息技术国家工程实验室有限公司 Vehicle track data processing method
CN115358530A (en) * 2022-07-26 2022-11-18 上海交通大学 Vehicle-road cooperative sensing roadside test data quality evaluation method
CN115236628B (en) * 2022-07-26 2024-05-31 中国矿业大学 Method for detecting residual cargoes in carriage based on laser radar
CN115113157B (en) * 2022-08-29 2022-11-22 成都瑞达物联科技有限公司 Beam pointing calibration method based on vehicle-road cooperative radar
CN115166721B (en) * 2022-09-05 2023-04-07 湖南众天云科技有限公司 Radar and GNSS information calibration fusion method and device in roadside sensing equipment
CN115480243B (en) * 2022-09-05 2024-02-09 江苏中科西北星信息科技有限公司 Multi-millimeter wave radar end-edge cloud fusion calculation integration and application method thereof
CN115272493B (en) * 2022-09-20 2022-12-27 之江实验室 Abnormal target detection method and device based on continuous time sequence point cloud superposition
CN115235478B (en) * 2022-09-23 2023-04-07 武汉理工大学 Intelligent automobile positioning method and system based on visual label and laser SLAM
CN115830860B (en) * 2022-11-17 2023-12-15 西部科学城智能网联汽车创新中心(重庆)有限公司 Traffic accident prediction method and device
CN115966084B (en) * 2023-03-17 2023-06-09 江西昂然信息技术有限公司 Holographic intersection millimeter wave radar data processing method and device and computer equipment
CN116189116B (en) * 2023-04-24 2024-02-23 江西方兴科技股份有限公司 Traffic state sensing method and system
CN117452392B (en) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Radar data processing system and method for vehicle-mounted auxiliary driving system
CN117471461B (en) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Road side radar service device and method for vehicle-mounted auxiliary driving system
CN117961915B (en) * 2024-03-28 2024-06-04 太原理工大学 Intelligent auxiliary decision-making method of coal mine tunneling robot

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9562971B2 (en) * 2012-11-22 2017-02-07 Geosim Systems Ltd. Point-cloud fusion
CN109297510A (en) * 2018-09-27 2019-02-01 百度在线网络技术(北京)有限公司 Relative pose scaling method, device, equipment and medium
US20190324124A1 (en) * 2017-01-02 2019-10-24 James Thomas O'Keeffe Micromirror array for feedback-based image resolution enhancement
CN111192295A (en) * 2020-04-14 2020-05-22 中智行科技有限公司 Target detection and tracking method, related device and computer readable storage medium
CN111476822A (en) * 2020-04-08 2020-07-31 浙江大学 Laser radar target detection and motion tracking method based on scene flow
CN111880174A (en) * 2020-07-03 2020-11-03 芜湖雄狮汽车科技有限公司 Roadside service system for supporting automatic driving control decision and control method thereof
CN111985322A (en) * 2020-07-14 2020-11-24 西安理工大学 Road environment element sensing method based on laser radar
CN111999741A (en) * 2020-01-17 2020-11-27 青岛慧拓智能机器有限公司 Method and device for detecting roadside laser radar target

Family Cites Families (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6661370B2 (en) * 2001-12-11 2003-12-09 Fujitsu Ten Limited Radar data processing apparatus and data processing method
KR101655606B1 (en) * 2014-12-11 2016-09-07 현대자동차주식회사 Apparatus for tracking multi object using lidar and method thereof
TWI597513B (en) * 2016-06-02 2017-09-01 財團法人工業技術研究院 Positioning system, onboard positioning device and positioning method thereof
CN105892471B (en) * 2016-07-01 2019-01-29 北京智行者科技有限公司 Automatic driving method and apparatus
KR102056147B1 (en) * 2016-12-09 2019-12-17 (주)엠아이테크 Registration method of distance data and 3D scan data for autonomous vehicle and method thereof
CN106846494A (en) * 2017-01-16 2017-06-13 青岛海大新星软件咨询有限公司 Oblique photograph three-dimensional building thing model automatic single-body algorithm
US10281920B2 (en) * 2017-03-07 2019-05-07 nuTonomy Inc. Planning for unknown objects by an autonomous vehicle
CN108629231B (en) * 2017-03-16 2021-01-22 百度在线网络技术(北京)有限公司 Obstacle detection method, apparatus, device and storage medium
CN107133966B (en) * 2017-03-30 2020-04-14 浙江大学 Three-dimensional sonar image background segmentation method based on sampling consistency algorithm
CN108932462B (en) * 2017-05-27 2021-07-16 华为技术有限公司 Driving intention determining method and device
FR3067495B1 (en) * 2017-06-08 2019-07-05 Renault S.A.S METHOD AND SYSTEM FOR IDENTIFYING AT LEAST ONE MOVING OBJECT
CN109509260B (en) * 2017-09-14 2023-05-26 阿波罗智能技术(北京)有限公司 Labeling method, equipment and readable medium of dynamic obstacle point cloud
CN107609522B (en) * 2017-09-19 2021-04-13 东华大学 Information fusion vehicle detection system based on laser radar and machine vision
CN108152831B (en) * 2017-12-06 2020-02-07 中国农业大学 Laser radar obstacle identification method and system
CN108010360A (en) * 2017-12-27 2018-05-08 中电海康集团有限公司 A kind of automatic Pilot context aware systems based on bus or train route collaboration
CN108639059B (en) * 2018-05-08 2019-02-19 清华大学 Driver based on least action principle manipulates behavior quantization method and device
CN109188379B (en) * 2018-06-11 2023-10-13 深圳市保途者科技有限公司 Automatic calibration method for driving auxiliary radar working angle
CN112368598A (en) * 2018-07-02 2021-02-12 索尼半导体解决方案公司 Information processing apparatus, information processing method, computer program, and mobile apparatus
US10839530B1 (en) * 2018-09-04 2020-11-17 Apple Inc. Moving point detection
CN111429739A (en) * 2018-12-20 2020-07-17 阿里巴巴集团控股有限公司 Driving assisting method and system
JP7217577B2 (en) * 2019-03-20 2023-02-03 フォルシアクラリオン・エレクトロニクス株式会社 CALIBRATION DEVICE, CALIBRATION METHOD
CN110296713B (en) * 2019-06-17 2024-06-04 广州卡尔动力科技有限公司 Roadside automatic driving vehicle positioning navigation system and single/multiple vehicle positioning navigation method
CN110220529B (en) * 2019-06-17 2023-05-23 深圳数翔科技有限公司 Positioning method for automatic driving vehicle at road side
CN110532896B (en) * 2019-08-06 2022-04-08 北京航空航天大学 Road vehicle detection method based on fusion of road side millimeter wave radar and machine vision
CN110443978B (en) * 2019-08-08 2021-06-18 南京联舜科技有限公司 Tumble alarm device and method
CN110458112B (en) * 2019-08-14 2020-11-20 上海眼控科技股份有限公司 Vehicle detection method and device, computer equipment and readable storage medium
CN110850378B (en) * 2019-11-22 2021-11-19 深圳成谷科技有限公司 Automatic calibration method and device for roadside radar equipment
CN110850431A (en) * 2019-11-25 2020-02-28 盟识(上海)科技有限公司 System and method for measuring trailer deflection angle
CN110906939A (en) * 2019-11-28 2020-03-24 安徽江淮汽车集团股份有限公司 Automatic driving positioning method and device, electronic equipment, storage medium and automobile
CN111121849B (en) * 2020-01-02 2021-08-20 大陆投资(中国)有限公司 Automatic calibration method for orientation parameters of sensor, edge calculation unit and roadside sensing system
CN111157965B (en) * 2020-02-18 2021-11-23 北京理工大学重庆创新中心 Vehicle-mounted millimeter wave radar installation angle self-calibration method and device and storage medium
CN111554088B (en) * 2020-04-13 2022-03-22 重庆邮电大学 Multifunctional V2X intelligent roadside base station system
CN111537966B (en) * 2020-04-28 2022-06-10 东南大学 Array antenna error correction method suitable for millimeter wave vehicle-mounted radar field
CN111766608A (en) * 2020-06-12 2020-10-13 苏州泛像汽车技术有限公司 Environmental perception system based on laser radar
CN111880191B (en) * 2020-06-16 2023-03-28 北京大学 Map generation method based on multi-agent laser radar and visual information fusion
CN111914664A (en) * 2020-07-06 2020-11-10 同济大学 Vehicle multi-target detection and track tracking method based on re-identification
CN111862157B (en) * 2020-07-20 2023-10-10 重庆大学 Multi-vehicle target tracking method integrating machine vision and millimeter wave radar
CN112019997A (en) * 2020-08-05 2020-12-01 锐捷网络股份有限公司 Vehicle positioning method and device
CN112509333A (en) * 2020-10-20 2021-03-16 智慧互通科技股份有限公司 Roadside parking vehicle track identification method and system based on multi-sensor sensing

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9562971B2 (en) * 2012-11-22 2017-02-07 Geosim Systems Ltd. Point-cloud fusion
US20190324124A1 (en) * 2017-01-02 2019-10-24 James Thomas O'Keeffe Micromirror array for feedback-based image resolution enhancement
CN109297510A (en) * 2018-09-27 2019-02-01 百度在线网络技术(北京)有限公司 Relative pose scaling method, device, equipment and medium
CN111999741A (en) * 2020-01-17 2020-11-27 青岛慧拓智能机器有限公司 Method and device for detecting roadside laser radar target
CN111476822A (en) * 2020-04-08 2020-07-31 浙江大学 Laser radar target detection and motion tracking method based on scene flow
CN111192295A (en) * 2020-04-14 2020-05-22 中智行科技有限公司 Target detection and tracking method, related device and computer readable storage medium
CN111880174A (en) * 2020-07-03 2020-11-03 芜湖雄狮汽车科技有限公司 Roadside service system for supporting automatic driving control decision and control method thereof
CN111985322A (en) * 2020-07-14 2020-11-24 西安理工大学 Road environment element sensing method based on laser radar

Also Published As

Publication number Publication date
WO2022141912A1 (en) 2022-07-07
CN117836667A (en) 2024-04-05
WO2022141911A1 (en) 2022-07-07
WO2022206974A1 (en) 2022-10-06
GB202313215D0 (en) 2023-10-11
CN117836653A (en) 2024-04-05
CN117441197A (en) 2024-01-23
WO2022206978A1 (en) 2022-10-06
CN117441113A (en) 2024-01-23
WO2022206942A1 (en) 2022-10-06
WO2022141910A1 (en) 2022-07-07
CN116685873A (en) 2023-09-01
WO2022141913A1 (en) 2022-07-07
GB202316625D0 (en) 2023-12-13
WO2022141914A1 (en) 2022-07-07
WO2022206977A1 (en) 2022-10-06
GB2620877A (en) 2024-01-24

Similar Documents

Publication Publication Date Title
GB2618936A (en) Vehicle-road collaboration-oriented sensing information fusion representation and target detection method
CN114282597B (en) Method and system for detecting vehicle travelable area and automatic driving vehicle adopting system
CN109920246B (en) Collaborative local path planning method based on V2X communication and binocular vision
CN111537980B (en) Laser radar parameter adjusting method and device and laser radar
CN110537109B (en) Sensing assembly for autonomous driving
CN110531376B (en) Obstacle detection and tracking method for port unmanned vehicle
CN113002396B (en) A environmental perception system and mining vehicle for automatic driving mining vehicle
CN102944224B (en) Work method for automatic environmental perception systemfor remotely piloted vehicle
EP3742200B1 (en) Detection apparatus and parameter adjustment method thereof
Hu et al. A complete uv-disparity study for stereovision based 3d driving environment analysis
CN108983248A (en) It is a kind of that vehicle localization method is joined based on the net of 3D laser radar and V2X
Duan et al. V2I based environment perception for autonomous vehicles at intersections
US11861784B2 (en) Determination of an optimal spatiotemporal sensor configuration for navigation of a vehicle using simulation of virtual sensors
WO2021046829A1 (en) Positioning method, device and system
Pantilie et al. Real-time obstacle detection using dense stereo vision and dense optical flow
CN112379674B (en) Automatic driving equipment and system
CN113743171A (en) Target detection method and device
Chetan et al. An overview of recent progress of lane detection for autonomous driving
CN112513876B (en) Road surface extraction method and device for map
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
CN114925769A (en) Multi-sensor data fusion processing system
Narksri et al. Visibility estimation in complex, real-world driving environments using high definition maps
Gu et al. A Review on Different Methods of Dynamic Obstacles Detection
Zhang et al. Traffic Sign Based Point Cloud Data Registration with Roadside LiDARs in Complex Traffic Environments. Electronics 2022, 11, 1559
Kumar et al. Internet of vehicles (IoV): A 5G connected car

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20240516 AND 20240522

732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20240523 AND 20240529