WO2023240805A1 - Connected-vehicle overspeed warning method and system based on filter correction - Google Patents

Connected-vehicle overspeed warning method and system based on filter correction - Download PDF

Info

Publication number
WO2023240805A1
WO2023240805A1 (PCT/CN2022/116972, CN2022116972W)
Authority
WO
WIPO (PCT)
Prior art keywords
connected vehicle
image
point cloud
point
coordinates
Prior art date
Application number
PCT/CN2022/116972
Other languages
English (en)
French (fr)
Inventor
黄倩
刘云涛
李道勋
朱永东
赵志峰
Original Assignee
之江实验室
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 之江实验室 (Zhejiang Lab)
Publication of WO2023240805A1

Links

Images

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 - Measuring and analysing of parameters relative to traffic conditions
    • G08G 1/0125 - Traffic data processing
    • G08G 1/0137 - Measuring and analysing of parameters relative to traffic conditions for specific applications
    • G08G 1/017 - Identifying vehicles
    • G08G 1/0175 - Identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G08G 1/04 - Detecting movement of traffic using optical or ultrasonic detectors
    • G08G 1/052 - With provision for determining speed or overspeed
    • G08G 1/054 - Photographing overspeeding vehicles

Definitions

  • The present invention relates to the field of intelligent transportation technology, and in particular to a connected-vehicle overspeed warning method and system based on filter correction.
  • Connected vehicles are an important part of smart-park construction and the main deployed application of C-V2X vehicle-road cooperation technology.
  • Safe driving of intelligent connected vehicles is an important topic involving perception, coordination, decision-making, control and other aspects.
  • Accurately sensing the surrounding environment and controlling the vehicle's speed are basic principles of safe driving.
  • Vehicle-road cooperation technology senses vehicle speed through roadside sensing equipment and thereby further controls the safe driving of connected vehicles.
  • In the past, monitoring vehicle speed with millimeter-wave radar was gradually abandoned because individual vehicles could not be distinguished accurately; it was replaced by vehicle speed monitoring based on the fused perception of lidar and cameras.
  • The present invention proposes a method that improves point-cloud/image fusion alignment accuracy by estimating the generation-time deviation of the same target in the lidar and camera sensing data, and by using the estimated time-deviation distribution to filter-correct point-cloud target positions; this realizes vehicle overspeed recognition and warning based on high-precision fusion of point clouds and images, and provides reliable technical support for connected-vehicle safety monitoring based on multi-sensor fusion.
  • The purpose of the invention is to address the shortcomings of the existing technology by providing a connected-vehicle overspeed warning method and system based on filter correction, solving the problem that the time deviation between the existing lidar and camera prevents the same target from being matched and aligned, which lowers the detection accuracy of overspeed warning systems based on lidar-camera multi-sensor fusion.
  • The invention marks the center-point coordinates of a reference connected vehicle in the point-cloud and image data during continuous driving, and uses an affine transformation matrix to map the reference vehicle's point-cloud center point into the image.
  • A connected-vehicle overspeed warning method based on filter correction includes the following steps:
  • Step 1: Select a reference connected vehicle; through a lidar and camera whose data frames are time-synchronized, collect several frames of the reference vehicle's point-cloud and image data during continuous driving; mark the reference vehicle's center-point coordinates in the point-cloud and image data; use the affine transformation matrix to map the vehicle's point-cloud center point into the image; measure the position deviation between the mapping point on the image and the vehicle's center-point coordinates on the image; estimate the generation-time deviation of the reference vehicle target between the point cloud and the image; and compute the time-deviation distribution parameters.
  • Step 2: Acquire point-cloud and image data in real time while connected vehicles drive continuously on the road, and filter-correct the point-cloud center position of each connected-vehicle target detected in any point-cloud frame with a confidence filtering method. Specifically, compute the confidence gain from the confidence score of the detected point-cloud target and the time-deviation distribution parameters, and re-filter to estimate the target's optimal position based on the confidence gain.
  • Step 3: Map the filter-corrected point-cloud targets one by one into the corresponding image frames, and compute the distance difference between each target's mapping-point coordinates in the image and the center-point coordinates of every connected-vehicle target in the image; the image target with the smallest distance difference, provided it is below the threshold, is the corresponding matching target. In this way the mapping, matching and alignment of all point-cloud and image connected-vehicle targets is completed.
  • Step 4: Fuse the point-cloud and image perception information of each matched and aligned target to obtain the connected vehicle's license-plate number and instantaneous speed; report the plate numbers of vehicles whose instantaneous speed exceeds the maximum speed limit to the connected-vehicle cloud control platform, issue an overspeed warning, and remotely control the vehicle to decelerate to a safe speed.
  • In step 1, hardware-wired control is used to time-synchronize the lidar and camera data frames.
  • In step 1, suppose the generation-time deviation of the reference vehicle target between the point cloud and the image to be estimated is t; the marked center point of the reference vehicle in the point cloud is (x, y, z), its center point in the image is (u′, v′), its heading angle is γ, and its computed instantaneous speed is v_t; the mapping point of the point-cloud center on the image is (u, v), and the measured position deviation between the mapping point (u, v) and the reference vehicle's image center point (u′, v′) is d.
  • H is the affine transformation matrix from point cloud to image. Its dimension is 3×4; it is obtained from the joint lidar-camera extrinsic calibration and the camera intrinsic calibration and can be written as

$$H=\begin{bmatrix}h_{11}&h_{12}&h_{13}&h_{14}\\h_{21}&h_{22}&h_{23}&h_{24}\\h_{31}&h_{32}&h_{33}&h_{34}\end{bmatrix}$$

where the elements h_rs are all real numbers, 1 ≤ r ≤ 3, 1 ≤ s ≤ 4.
  • From the instantaneous driving speed, the reference vehicle's point-cloud coordinates x′, y′, z′ after moving within the time deviation t are obtained as x′ = x + v_t·t·cos γ, y′ = y + v_t·t·sin γ, z′ = z, where v_t is the instantaneous speed of the reference vehicle and (x′, y′, z′) is its position after the movement.
  • The time deviation t is then a value determined by the known affine transformation matrix, point-cloud center coordinates, heading angle, instantaneous speed and position deviation.
  • The instantaneous driving speed of the reference vehicle is computed as the ratio of the reference target's center-point displacement between the two nearest frames (the frames immediately before and after) to the frame interval time.
  • Step 2 includes the following steps:
  • For the k-th point-cloud connected-vehicle target, the center position detected by the deep-learning detection algorithm is (x_k, y_k, z_k), the heading angle is γ_k, the confidence score is c, and the computed instantaneous speed is v_k.
  • If the time deviations follow a normal distribution with mean u_t, the horizontal coordinates x_k′, y_k′ of the target's point-cloud center are estimated with the confidence gain ε as

x_k′ = x_k·ε + (x_k + v_k·u_t·cos γ_k)(1 - ε)
y_k′ = y_k·ε + (y_k + v_k·u_t·sin γ_k)(1 - ε)

  • If the time deviations do not follow a normal distribution, x_k′ and y_k′ are re-estimated in the same form, with the confidence gain ε′ and the median of the inter-quartile deviations in place of ε and u_t.
  • Step 3 includes the following steps:
  • The connected-vehicle target in the corresponding image whose distance difference is smallest and below the threshold is the matching target.
  • Steps (1)-(3) complete the mapping, matching and alignment of all point-cloud and image connected-vehicle targets.
  • In step 4, an OCR license-plate recognition method is applied to the image data to identify the license-plate number of the connected-vehicle target in the image.
  • The present invention also provides a connected-vehicle overspeed warning system based on filter correction.
  • The system includes a time-deviation distribution parameter determination module, a filter correction module, a matching alignment module and a perception information fusion module.
  • The time-deviation distribution parameter determination module is used to select a reference connected vehicle, collect several frames of its point-cloud and image data during continuous driving through a time-synchronized lidar and camera, mark the vehicle's center-point coordinates in both data sources, map its point-cloud center into the image with the affine transformation matrix, measure the position deviation between the mapping point on the image and the vehicle's image center-point coordinates, estimate the generation-time deviation of the reference target between the point cloud and the image, and compute the time-deviation distribution parameters.
  • The filter correction module is used to acquire point-cloud and image data in real time while connected vehicles drive on the road, and to filter-correct the point-cloud center position of each target detected in any point-cloud frame with the confidence filtering method: the confidence score of the detected target and the distribution parameters obtained by the time-deviation distribution parameter determination module are used to compute the confidence gain, and the target's optimal position is re-estimated by re-filtering based on that gain.
  • The matching alignment module is used to map the targets corrected by the filter correction module one by one into the corresponding image frames and to compute the distance difference between each target's mapping point in the image and the center point of every connected-vehicle target in the image; the image target with the smallest distance difference, provided it is below the threshold, is the corresponding matching target, and in this way all point-cloud and image targets are mapped, matched and aligned.
  • The perception information fusion module is used to fuse the point-cloud and image perception information of the targets matched and aligned by the matching alignment module, so as to obtain each connected vehicle's license-plate number and instantaneous speed; the plate-number information of vehicles whose instantaneous speed exceeds the maximum speed limit is reported to the connected-vehicle cloud control platform, an overspeed warning is issued, and the vehicle is remotely decelerated to a safe speed.
  • The beneficial effects of the present invention are as follows: the invention proposes a connected-vehicle overspeed warning method and system based on filter correction, and uses a deep-learning target detection method to detect connected-vehicle targets in the moving point-cloud and image data.
  • The positions of moving vehicles in consecutive video frames are filter-corrected to achieve fusion matching and alignment of the same moving target, which significantly alleviates the low accuracy of lidar-camera-fusion overspeed detection caused by time deviation.
  • The implementation is simple and efficient, can be effectively applied to the safety monitoring of connected-vehicle overspeed driving based on multi-sensor fusion, and provides reliable technical support for accurate decision-making in the safe-driving management of intelligent connected vehicles.
  • Figure 1 is a flow chart of the connected-vehicle overspeed warning method based on filter correction according to the present invention.
  • Figure 2 shows, under different time deviations, the position deviation between the mapping box (the reference vehicle's point-cloud detection box mapped onto the image) and the image detection box.
  • Figure 3 shows the pseudo-3D box obtained by mapping the position-corrected point-cloud reference vehicle into the image.
  • Figure 4 is a structural diagram of the connected-vehicle overspeed warning system based on filter correction according to the present invention.
  • Figure 5 is a structural diagram of the connected-vehicle overspeed warning device based on filter correction according to the present invention.
  • The present invention proposes a connected-vehicle overspeed warning method based on filter correction, used to solve the low detection accuracy that arises in lidar-camera-fusion overspeed warning because a moving target is not fully aligned between the point-cloud and image data during fusion.
  • The fusion method is decision-level fusion: the position and speed of each connected-vehicle target are detected in the point cloud and the license-plate number is detected in the image; the targets from the two data sources are then matched and aligned and their information is fused, so that the complementary sensing characteristics of multi-source heterogeneous data are exploited.
  • The method includes the following steps:
  • Step 1: Install a solid-state lidar and a camera at the park's connected-vehicle speed monitoring point, and use hardware-wired control to time-synchronize the lidar and camera data frames.
  • The solid-state lidar is a DJI Livox AVIA, which uses a non-repetitive scanning pattern with a horizontal FOV of 70.4° and a vertical FOV of 77.2°.
  • One frame of point-cloud data contains 48,000 reflection points.
  • The camera is a network camera. The two sensors are mounted facing the same direction on the vertical pole of the monitoring point, and hardware-wired control triggers the camera and lidar to expose simultaneously.
  • The data collection frequency of both is 10 Hz.
  • Uncertain factors such as Ethernet transmission delay and data encoding/decoding mean that the contents of the frames captured by the two sensor devices are not exactly synchronized, so the generation time of a moving target deviates within a certain range and the same target is not exactly aligned when the two data sources are fused; the frame time deviation therefore needs to be estimated first.
  • Let the target generation-time deviation to be estimated be t; the marked center point of the reference connected vehicle in the point-cloud data is (x, y, z) and its center point in the image data is (u′, v′); the heading angle is γ and the computed instantaneous speed is v_t; the mapping point of the point-cloud center on the image is (u, v); and the measured position deviation between the mapping point (u, v) and the reference vehicle's image center point (u′, v′) is d.
  • H is the affine transformation matrix from point cloud to image. Its dimension is 3×4; it can be obtained from the joint lidar-camera extrinsic calibration and the camera intrinsic calibration and written as

$$H=\begin{bmatrix}h_{11}&h_{12}&h_{13}&h_{14}\\h_{21}&h_{22}&h_{23}&h_{24}\\h_{31}&h_{32}&h_{33}&h_{34}\end{bmatrix}$$

with real elements h_rs, 1 ≤ r ≤ 3, 1 ≤ s ≤ 4.
  • v_t is the instantaneous speed of the reference vehicle, computed as the ratio of the target's center-point displacement between the two nearest frames to the frame interval time.
  • (x′, y′, z′) is the reference vehicle's position after moving; since the vehicle is a rigid object, its vertical coordinate, i.e. the z-axis value, does not change as the vehicle moves.
  • The moved point-cloud center (x′, y′, z′) and the image center point (u′, v′) satisfy the same projection relation through H.
  • The time deviation t is then a value determined by the known affine transformation parameters, point-cloud center coordinates, heading angle, instantaneous speed and position deviation.
  • In Figure 2 the reference vehicle's point-cloud target has not been position-corrected; the figure shows the mapping box (the point-cloud detection box mapped onto the image) and the reference vehicle's detection box in the image. When the time deviation is small, the two boxes overlap strongly; when it is large, their position deviation is large and they barely overlap.
  • Suppose N groups of time deviations have been estimated.
  • The Kolmogorov-Smirnov test is commonly used to check whether a data distribution conforms to a given distribution, here the normal distribution: the test's P value is estimated to decide whether the normality hypothesis holds; if the P value exceeds the significance level the hypothesis is accepted, otherwise it is rejected.
  • The specific process is as follows:
  • Step 2: Acquire point-cloud and image data in real time while connected vehicles drive continuously on the park road, and detect the center point, heading angle, confidence score and instantaneous speed of each point-cloud connected-vehicle target, and the license-plate number of each image target.
  • The CenterPoint-based 3D target detection algorithm is applied to the point-cloud data to detect each target's center point, heading angle and confidence score, and the vehicle's instantaneous speed is computed from the ratio of the target's center-point displacement between the two nearest frames to the frame interval time; an OCR recognition method detects the connected-vehicle targets in the image data and reads the vehicles' license-plate numbers.
  • 3D target detection algorithms can be image-based, point-cloud-based, or based on image/point-cloud fusion.
  • Here a target detection algorithm based on the CenterPoint network model is used, generated from point clouds only, with a large amount of collected point-cloud data annotated for training.
  • The annotated data are divided into a training set, a validation set and a test set.
  • The model trained on the training set reaches an mAP of 91% on the test set, and the detection rate for targets within 50 m (meters) of the point-cloud origin is 95%.
  • The detection accuracy of the OCR recognition method reaches 99%.
  • The heading angle takes values in (-π, π), and the confidence score takes values in (0, 1).
  • The detected center position of each point-cloud connected-vehicle target is filter-corrected: the confidence score of the target detected by the detection algorithm and the time-deviation distribution parameters are used to compute the confidence gain, and the target's optimal position is re-estimated by re-filtering based on that gain.
  • The specific process is as follows:
  • In the normally distributed case, with deviation mean u_t, the horizontal coordinates x_k′, y_k′ of the target's point-cloud center are estimated with the confidence gain ε as

x_k′ = x_k·ε + (x_k + v_k·u_t·cos γ_k)(1 - ε)
y_k′ = y_k·ε + (y_k + v_k·u_t·sin γ_k)(1 - ε)

  • In the non-normal case, x_k′ and y_k′ are re-estimated in the same form, with the confidence gain ε′ and the median of the inter-quartile deviations in place of ε and u_t.
  • Step 3: Map the filter-corrected point-cloud connected-vehicle targets one by one into the corresponding image frames, and compute the distance difference between each target's mapping-point coordinates in the image and the center-point coordinates of every connected-vehicle target in the image.
  • The image target whose distance difference is smallest and below the threshold is the corresponding matching target in the image.
  • In this way the mapping, matching and alignment of all point-cloud and image connected-vehicle targets is completed.
  • Figure 3 shows the pseudo-3D box obtained by mapping the position-corrected point-cloud reference vehicle into the image. The specific process of this step is as follows:
  • The mapping-point coordinates and the point-cloud center coordinates satisfy the projection relation through H.
  • H is the 3×4 affine transformation matrix from point cloud to image, obtained from the joint lidar-camera extrinsic calibration and the camera intrinsic calibration, with real elements h_rs, 1 ≤ r ≤ 3, 1 ≤ s ≤ 4.
  • Let the center point of the i-th connected-vehicle target in the image be (m_i, n_i), with 1 ≤ i ≤ N_img_obj, where N_img_obj is the total number of connected-vehicle targets in the image; the distance difference between the mapping point (p, q) and the center of the i-th image target is then

$$d_i=\sqrt{(p-m_i)^{2}+(q-n_i)^{2}}$$

  • The minimum distance difference is

$$d_{\min}=\min_{1\le i\le N_{img\_obj}}d_i$$

  • If the minimum distance difference d_min is below the threshold Δ, the connected-vehicle target in the corresponding image is the matching target.
  • Steps (1)-(3) complete the mapping, matching and alignment of all point-cloud and image connected-vehicle targets.
  • Step 4: Fuse the point-cloud and image perception information of each matched and aligned target to obtain the license-plate number and instantaneous-speed information of the same target vehicle; report the plate information of vehicles whose instantaneous speed exceeds the maximum speed limit to the connected-vehicle cloud control platform, issue an overspeed warning, and remotely control the vehicle to decelerate to the regular speed.
  • The maximum speed limit is the park-specified limit for connected vehicles, 30 km/h; the regular speed is 25 km/h.
  • The connected-vehicle cloud control platform is based on a cloud server; it provides connected-vehicle management and control functions, holds the information of every connected vehicle, and can remotely control a specific connected vehicle.
  • The present invention also provides a connected-vehicle overspeed warning system based on filter correction.
  • The system includes a time-deviation distribution parameter determination module, a filter correction module, a matching alignment module and a perception information fusion module.
  • The time-deviation distribution parameter determination module is used to select a reference connected vehicle and, through a lidar and camera whose data frames are time-synchronized, acquire several frames of the reference vehicle's point-cloud and image data during continuous driving together with the center-point coordinates in both; it maps the reference vehicle's point-cloud center into the image with the affine transformation matrix, measures the position deviation between the mapping point and the vehicle's image center point, estimates the generation-time deviation of the reference target between the point cloud and the image, and computes the time-deviation distribution parameters. For the specific implementation, refer to the detailed description of step 1 of the method provided by the present invention.
  • The filter correction module is used to acquire point-cloud and image data in real time while connected vehicles drive on the road and to filter-correct the point-cloud center position of each target detected in any point-cloud frame with the confidence filtering method: the detector's confidence score and the distribution parameters obtained by the time-deviation distribution parameter determination module are used to compute the confidence gain, and the target's optimal position is re-estimated by re-filtering based on that gain. For the specific implementation, refer to the detailed description of step 2 of the method.
  • The matching alignment module is used to map the targets corrected by the filter correction module one by one into the corresponding image frames and to compute the distance difference between each target's mapping point in the image and the center point of every connected-vehicle target in the image; the image target with the smallest distance difference, provided it is below the threshold, is the corresponding matching target, and in this way all point-cloud and image targets are mapped, matched and aligned. For the specific implementation, refer to the detailed description of step 3 of the method.
  • The perception information fusion module is used to fuse the point-cloud and image perception information of the targets matched and aligned by the matching alignment module, so as to obtain each connected vehicle's license-plate number and instantaneous speed; the plate-number information of vehicles whose instantaneous speed exceeds the maximum speed limit is reported to the connected-vehicle cloud control platform, an overspeed warning is issued, and the vehicle is remotely decelerated to a safe speed. For the specific implementation, refer to the detailed description of step 4 of the method.
  • Corresponding to the foregoing method embodiments, the present invention also provides embodiments of a connected-vehicle overspeed warning device based on filter correction.
  • The device provided by an embodiment of the present invention includes a memory and one or more processors.
  • The memory stores executable code.
  • When the processors execute the executable code, they implement the connected-vehicle overspeed warning method based on filter correction of the above embodiments.
  • The device embodiments can be applied to any equipment with data processing capability, such as a computer.
  • The device embodiments may be implemented in software, in hardware, or in a combination of the two. Taking software implementation as an example, the device in the logical sense is formed by the processor of the host equipment reading the corresponding computer program instructions from non-volatile storage into memory and running them. In hardware terms, Figure 5 shows the hardware structure of the equipment hosting the connected-vehicle overspeed warning device based on filter correction of the present invention.
  • Besides the hardware shown in Figure 5, the host equipment may also include other hardware according to its actual functions, which will not be repeated here.
  • Since the device embodiments essentially correspond to the method embodiments, refer to the partial description of the method embodiments for relevant details.
  • The device embodiments described above are only illustrative.
  • Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present invention, which persons of ordinary skill in the art can understand and implement without creative effort.
  • Embodiments of the present invention also provide a computer-readable storage medium on which a program is stored.
  • When the program is executed by a processor, the connected-vehicle overspeed warning method based on filter correction of the above embodiments is implemented.
  • The computer-readable storage medium may be an internal storage unit of any equipment with data processing capability described in any of the foregoing embodiments, such as a hard disk or memory.
  • The computer-readable storage medium may also be an external storage device of such equipment, such as a plug-in hard disk, a Smart Media Card (SMC), an SD card or a Flash Card provided on the equipment.
  • The computer-readable storage medium may also include both an internal storage unit and an external storage device of the equipment.
  • The computer-readable storage medium is used to store the computer program and the other programs and data required by the equipment, and may also be used to temporarily store data that has been or will be output.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

A connected-vehicle overspeed warning method and system based on filter correction. The generation-time deviation of the same target in the lidar and camera sensing data is estimated, and the estimated time-deviation distribution is used to filter-correct point-cloud target positions, achieving multi-source target mapping and alignment and solving the low point-cloud/image fusion accuracy caused by time deviation. The center-point coordinates of a reference connected vehicle during continuous driving are marked in the point-cloud and image data respectively; an affine transformation matrix maps the reference vehicle's point-cloud center point into the image; the distance difference between the mapping point and the center point in the image is used to derive the target's generation-time deviation; and a confidence filtering method is designed to re-estimate the optimal position of each connected-vehicle point-cloud target, realizing vehicle overspeed recognition and warning based on high-precision fusion of point clouds and images and providing technical support for the safe driving of intelligent connected vehicles.

Description

Connected-vehicle overspeed warning method and system based on filter correction
Technical Field
The present invention relates to the field of intelligent transportation technology, and in particular to a connected-vehicle overspeed warning method and system based on filter correction.
Background Art
With the rapid development of smart-transportation construction, technologies related to intelligent connected vehicles have been developing and emerging. Connected vehicles are an important part of smart-park construction and the main deployed application of C-V2X vehicle-road cooperation technology. Safe driving of intelligent connected vehicles is an important topic, involving perception, coordination, decision-making, control and other aspects; accurately perceiving the surrounding environment and controlling the vehicle's driving speed are basic principles of safe driving. Vehicle-road cooperation technology senses vehicle speed through roadside sensing equipment and thereby further controls the safe driving of connected vehicles. In the past, monitoring vehicle speed with millimeter-wave radar was gradually abandoned because it cannot accurately distinguish individual vehicles; it has been replaced by vehicle speed monitoring based on the fused perception of lidar and cameras.
At present, hardware-wired triggering is used for the hardware time synchronization of the lidar and the camera. Uncertain factors such as the exposure mechanisms of the lidar and camera sensors, target motion, Ethernet transmission delay and data encoding/decoding mean that the contents of the data frames captured by the two sensor devices are not exactly synchronized: the generation time of a moving target deviates within a certain range, so the same target cannot be exactly aligned when the two data sources are fused. Because the same target cannot be associated accurately, the detection accuracy of vehicle overspeed detection methods based on fusing these two kinds of sensing data is low. The present invention therefore proposes to estimate the generation-time deviation of the same target in the lidar and camera sensing data and to use the estimated time-deviation distribution to filter-correct point-cloud target positions, thereby improving point-cloud/image fusion alignment accuracy, realizing vehicle overspeed recognition and warning based on high-precision fusion of point clouds and images, and providing reliable technical support for connected-vehicle safety monitoring based on multi-sensor fusion.
Summary of the Invention
The purpose of the present invention is to address the shortcomings of the existing technology by providing a connected-vehicle overspeed warning method and system based on filter correction, solving the problem that the time deviation between the existing lidar and camera prevents the same target from being matched and aligned, which lowers the detection accuracy of connected-vehicle overspeed warning systems based on lidar-camera multi-sensor fusion. The invention marks the center-point coordinates of a reference connected vehicle in the point-cloud and image data respectively during continuous driving, maps the reference vehicle's point-cloud center point into the image with an affine transformation matrix, derives the target's generation-time deviation from the distance difference between the mapping point and the center point in the image, and re-filters, according to the time-deviation distribution, the optimal position of the connected-vehicle point-cloud target's center point, achieving high-precision fusion of point clouds and images.
The purpose of the invention is achieved through the following technical solution: a connected-vehicle overspeed warning method based on filter correction, comprising the following steps:
Step 1: Select a reference connected vehicle; through a lidar and camera whose data frames are time-synchronized, collect several frames of the reference vehicle's point-cloud and image data during continuous driving; mark the reference vehicle's center-point coordinates in the point-cloud and image data; use the affine transformation matrix to map the reference vehicle's point-cloud center point into the image; measure the position deviation between the mapping point on the image and the reference vehicle's center-point coordinates on the image; estimate the generation-time deviation of the reference vehicle target between the point cloud and the image; and compute the time-deviation distribution parameters.
Step 2: Acquire point-cloud and image data in real time while connected vehicles drive continuously on the road, and filter-correct the point-cloud center position of each connected-vehicle target detected in any point-cloud frame with a confidence filtering method; specifically: compute the confidence gain from the confidence score of the detected point-cloud target and the time-deviation distribution parameters, and re-filter to estimate the target's optimal position based on the confidence gain.
Step 3: Map the filter-corrected point-cloud connected-vehicle targets one by one into the corresponding image frames, and compute the distance difference between each target's mapping-point coordinates in the image and the center-point coordinates of every connected-vehicle target in the image; the image target with the smallest distance difference, provided it is below the threshold, is the corresponding matching target. In this way the mapping, matching and alignment of all point-cloud and image connected-vehicle targets is completed.
Step 4: Fuse the point-cloud and image perception information of each matched and aligned target to obtain the connected vehicle's license-plate number and instantaneous speed; report the plate numbers of vehicles whose instantaneous speed exceeds the maximum speed limit to the connected-vehicle cloud control platform, issue an overspeed warning, and remotely control the vehicle to decelerate to a safe speed.
Further, in step 1, hardware-wired control is used to time-synchronize the lidar and camera data frames.
Further, in step 1, suppose the generation-time deviation of the reference connected-vehicle target between the point cloud and the image to be estimated is t; the marked center point of the reference vehicle in the point cloud is (x, y, z), its center point in the image is (u′, v′), its heading angle is γ, its computed instantaneous speed is v_t, the mapping point of the point-cloud center on the image is (u, v), and the measured position deviation between the mapping point (u, v) and the reference vehicle's image center point (u′, v′) is d.

The reference vehicle's point-cloud center (x, y, z) and its mapping point (u, v) on the image then satisfy, up to the homogeneous scale factor s,

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=H\begin{bmatrix}x\\y\\z\\1\end{bmatrix}\qquad(1)$$

where H is the affine transformation matrix from point cloud to image. H is of dimension 3×4 and is obtained from the joint lidar-camera extrinsic calibration and the camera intrinsic calibration:

$$H=\begin{bmatrix}h_{11}&h_{12}&h_{13}&h_{14}\\h_{21}&h_{22}&h_{23}&h_{24}\\h_{31}&h_{32}&h_{33}&h_{34}\end{bmatrix}$$

where the elements h_rs are all real numbers, 1 ≤ r ≤ 3, 1 ≤ s ≤ 4.

From the reference vehicle's instantaneous driving speed, its point-cloud coordinates x′, y′, z′ after moving within the time deviation t are

x′ = x + v_t·t·cos γ
y′ = y + v_t·t·sin γ
z′ = z

where v_t is the instantaneous speed of the reference vehicle and (x′, y′, z′) is its position after the movement.

The moved position (x′, y′, z′) and the image center point (u′, v′) then satisfy

$$s'\begin{bmatrix}u'\\v'\\1\end{bmatrix}=H\begin{bmatrix}x'\\y'\\z'\\1\end{bmatrix}\qquad(2)$$

Therefore, from the measured position deviation d between the mapping point (u, v) and the image center point (u′, v′), the following equation can be written:

$$d=\sqrt{(u-u')^{2}+(v-v')^{2}}\qquad(3)$$

It can then be derived that the time deviation t is a value determined by the known affine transformation matrix, point-cloud center coordinates, heading angle, instantaneous speed and position deviation; its closed form, in which the capital letters A and B denote coefficients collected from these known quantities, is given in the original filing as formula images (PCTCN2022116972-appb-000005 to -000007).
Further, the instantaneous driving speed of the reference connected vehicle is computed as the ratio of the reference target's center-point displacement between the two nearest frames (the frames immediately before and after) to the frame interval time.
Further, the specific process of computing the time-deviation distribution parameters is as follows:
(1) Use the Kolmogorov-Smirnov test to check whether the time deviations follow a normal distribution. Suppose N groups of time deviations have been estimated; compute their mean μ and variance σ², and set the test significance level to α. The Kolmogorov-Smirnov test yields the P value; if P is less than or equal to the significance level, the time deviations do not follow a normal distribution, and if P is greater than the significance level, they do.
(2) If the time deviations follow a normal distribution, write it as X ~ N(μ, σ²), where X denotes the N groups of time-deviation data.
(3) If the time deviations do not follow a normal distribution, sort the time-deviation data in ascending order and compute the median and variance of all data whose values lie between the second and third quartiles.
Further, step 2 includes the following steps:
(1) For the k-th point-cloud connected-vehicle target, the center position detected by the deep-learning detection algorithm is (x_k, y_k, z_k), the heading angle is γ_k, the confidence score is c, and the computed instantaneous speed is v_k.
(2) Compute the confidence gain from the time-deviation distribution parameters, specifically:
(2.1) If the time deviations follow a normal distribution with parameter mean u_t and variance σ_t², the confidence gain ε is computed from c, u_t and σ_t² (closed form given in the original filing as formula image PCTCN2022116972-appb-000008); the horizontal coordinates x_k′, y_k′ of the target's point-cloud center are then re-estimated with the confidence gain as

x_k′ = x_k·ε + (x_k + v_k·u_t·cos γ_k)(1 - ε)
y_k′ = y_k·ε + (y_k + v_k·u_t·sin γ_k)(1 - ε)

(2.2) If the time deviations do not follow a normal distribution, let the median of the data between the second and third quartiles of the ascending-sorted deviations be denoted ũ_t and their variance τ_t²; the confidence gain ε′ is computed from them (formula image -000010), and x_k′, y_k′ are re-estimated in the same form as in (2.1), with ε′ and ũ_t in place of ε and u_t.
(3) Since the connected vehicle is a rigid object, moving its position does not change its vertical coordinate, i.e. the z-axis value: the re-estimated vertical coordinate is z_k′ = z_k. The optimal center position of the target after re-filtering is therefore (x_k′, y_k′, z_k′).
Further, step 3 includes the following steps:
(1) For each position-filter-corrected point-cloud connected-vehicle target, map its point-cloud center coordinates into the image with the affine transformation matrix.
(2) Compute the distance difference between the mapping point of each point-cloud target's center in the image and the center-point coordinates of every connected-vehicle target in the image. Suppose the corrected point-cloud center maps to the point (p, q) on the image, and the center point of the i-th connected-vehicle target in the image is (m_i, n_i), with 1 ≤ i ≤ N_img_obj, where N_img_obj is the total number of connected-vehicle targets in the image. The distance difference d_i between the mapping point and the center of the i-th image target is then

$$d_i=\sqrt{(p-m_i)^{2}+(q-n_i)^{2}}$$

(3) Compute the minimum distance difference and check whether it is below the set threshold Δ, where the minimum distance difference is

$$d_{\min}=\min_{1\le i\le N_{img\_obj}}d_i$$

If the minimum distance difference d_min is below the threshold Δ, the connected-vehicle target in the corresponding image is the matching target.
(4) Following steps (1)-(3), complete the mapping, matching and alignment of all point-cloud and image connected-vehicle targets.
Further, in step 4, an OCR license-plate recognition method is applied to the image data to identify the license-plate number of the connected-vehicle target in the image.
On the other hand, the present invention also provides a connected-vehicle overspeed warning system based on filter correction, comprising a time-deviation distribution parameter determination module, a filter correction module, a matching alignment module and a perception information fusion module.
The time-deviation distribution parameter determination module is used to select a reference connected vehicle, collect several frames of its point-cloud and image data during continuous driving through a lidar and camera whose data frames are time-synchronized, mark the reference vehicle's center-point coordinates in the point-cloud and image data, map its point-cloud center into the image with the affine transformation matrix, measure the position deviation between the mapping point on the image and the vehicle's image center-point coordinates, estimate the generation-time deviation of the reference target between the point cloud and the image, and compute the time-deviation distribution parameters.
The filter correction module is used to acquire point-cloud and image data in real time while connected vehicles drive continuously on the road, and to filter-correct the point-cloud center position of each connected-vehicle target detected in any point-cloud frame with the confidence filtering method; specifically: the confidence score of the detected target and the distribution parameters obtained by the time-deviation distribution parameter determination module are used to compute the confidence gain, and the target's optimal position is re-estimated by re-filtering based on that gain.
The matching alignment module is used to map the targets corrected by the filter correction module one by one into the corresponding image frames and to compute the distance difference between each target's mapping point in the image and the center point of every connected-vehicle target in the image; the image target with the smallest distance difference, provided it is below the threshold, is the corresponding matching target, and in this way all point-cloud and image targets are mapped, matched and aligned.
The perception information fusion module is used to fuse the point-cloud and image perception information of the targets matched and aligned by the matching alignment module, so as to obtain each connected vehicle's license-plate number and instantaneous speed; the plate-number information of vehicles whose instantaneous speed exceeds the maximum speed limit is reported to the connected-vehicle cloud control platform, an overspeed warning is issued, and the vehicle is remotely decelerated to a safe speed.
The beneficial effects of the present invention are as follows: the invention proposes a connected-vehicle overspeed warning method and system based on filter correction; a deep-learning target detection method detects connected-vehicle targets in the moving point-cloud and image data, and the positions of moving vehicles in consecutive video frames are filter-corrected, achieving fusion matching and alignment of the same moving target and significantly alleviating the low accuracy of lidar-camera-fusion overspeed detection caused by time deviation. The implementation is simple and efficient, can be effectively applied to the safety monitoring of connected-vehicle overspeed driving based on multi-sensor fusion, and provides reliable technical support for accurate decision-making in the safe-driving management of intelligent connected vehicles.
Brief Description of the Drawings
Figure 1 is a flow chart of the connected-vehicle overspeed warning method based on filter correction of the present invention.
Figure 2 shows, under different time deviations, the position deviation between the mapping box (the reference vehicle's point-cloud detection box mapped onto the image) and the image detection box.
Figure 3 shows the pseudo-3D box obtained by mapping the position-corrected point-cloud reference vehicle into the image.
Figure 4 is a structural diagram of the connected-vehicle overspeed warning system based on filter correction of the present invention.
Figure 5 is a structural diagram of the connected-vehicle overspeed warning device based on filter correction of the present invention.
Detailed Description of the Embodiments
The present invention is described in detail below with reference to the drawings, whereby its purpose and effects will become clearer. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
As shown in Figure 1, the present invention proposes a connected-vehicle overspeed warning method based on filter correction, used to solve the low detection accuracy that arises in lidar-camera-fusion overspeed warning because a moving target is not fully aligned between the point-cloud and image data during fusion. The fusion method is decision-level fusion: the position and speed of each connected-vehicle target are detected in the point cloud and the license-plate number is detected in the image; the targets from the two data sources are then matched and aligned and the target information from both sources is fused, so that the complementary sensing characteristics of multi-source heterogeneous data are exploited. The method includes the following steps:
Step 1: Install a solid-state lidar and a camera at the park's connected-vehicle speed monitoring point, and use hardware-wired control to time-synchronize the lidar and camera data frames.
The solid-state lidar is a DJI Livox AVIA, which uses a non-repetitive scanning pattern with a horizontal FOV of 70.4° and a vertical FOV of 77.2°; one frame of point-cloud data contains 48,000 reflection points. The camera is a network camera. The two sensors are mounted facing the same direction on the vertical pole of the monitoring point, hardware-wired control triggers them to expose simultaneously, and both collect data at 10 Hz. However, uncertain factors such as the two sensors' exposure mechanisms, target motion, Ethernet transmission delay and data encoding/decoding mean that the contents of the frames captured by the two devices are not exactly synchronized; the generation time of a moving target deviates within a certain range, so the same target does not align exactly when the two data sources are fused. The frame time deviation therefore needs to be estimated first.
Select a reference connected vehicle and drive it repeatedly into the monitoring area of the solid-state lidar and camera; mark the vehicle's center-point coordinates in the point-cloud and image data respectively during continuous driving; map the reference vehicle's point-cloud center into the image with the affine transformation matrix; measure the position deviation between the mapping point and the vehicle's image center-point coordinates; and estimate the target generation-time deviation. The specific process is as follows:
(1) Select a reference connected vehicle, drive it into the monitoring area several times, collect several frames of its point-cloud and image data during continuous driving, and mark its center-point coordinates in every point-cloud and image frame.
(2) For any pair of synchronized data frames, measure the position deviation between the mapping point of the vehicle's point-cloud center in the image and its center point in the image data, and from this position deviation estimate the target generation-time deviation between the point-cloud and image data.
Suppose the target generation-time deviation to be estimated is t; the marked center point of the reference vehicle in the point-cloud data is (x, y, z), its center point in the image data is (u′, v′), its heading angle is γ, its computed instantaneous speed is v_t, the mapping point of the point-cloud center on the image is (u, v), and the measured position deviation between the mapping point (u, v) and the reference vehicle's image center point (u′, v′) is d.
The reference vehicle's point-cloud center (x, y, z) and its mapping point (u, v) on the image then satisfy, up to the homogeneous scale factor s,

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=H\begin{bmatrix}x\\y\\z\\1\end{bmatrix}\qquad(1)$$

where H is the affine transformation matrix from point cloud to image. H is of dimension 3×4 and can be obtained from the joint lidar-camera extrinsic calibration and the camera intrinsic calibration:

$$H=\begin{bmatrix}h_{11}&h_{12}&h_{13}&h_{14}\\h_{21}&h_{22}&h_{23}&h_{24}\\h_{31}&h_{32}&h_{33}&h_{34}\end{bmatrix}$$

where the elements h_rs are all real numbers, 1 ≤ r ≤ 3, 1 ≤ s ≤ 4.
From the vehicle's instantaneous driving speed, the reference vehicle's point-cloud center coordinates x′, y′, z′ after moving within the time deviation t are

x′ = x + v_t·t·cos γ
y′ = y + v_t·t·sin γ
z′ = z

where v_t is the instantaneous speed of the reference vehicle, computable from the ratio of the target's center-point displacement between the two nearest frames to the frame interval time, and (x′, y′, z′) is the position after the movement; since the connected vehicle is a rigid object, its vertical coordinate, i.e. the z-axis value, does not change as the vehicle moves.
The moved point-cloud center (x′, y′, z′) and the image center point (u′, v′) then satisfy

$$s'\begin{bmatrix}u'\\v'\\1\end{bmatrix}=H\begin{bmatrix}x'\\y'\\z'\\1\end{bmatrix}\qquad(2)$$

Therefore, from the measured position deviation d between the mapping point (u, v) and the reference vehicle's image center point (u′, v′), the following equation can be written:

$$d=\sqrt{(u-u')^{2}+(v-v')^{2}}\qquad(3)$$

From formulas (1), (2) and (3) it can be derived that the time deviation t is a value determined by the known affine transformation parameters, point-cloud center coordinates, heading angle, instantaneous speed and position deviation; its closed form, in which the capital letters A and B denote coefficients collected from these known quantities, is given in the original filing as formula images (PCTCN2022116972-appb-000019 to -000021).
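The measurement underlying this step can be illustrated with a short Python sketch (not part of the filing): it assumes a standard pinhole-style projection with perspective division for the 3×4 matrix H, and all numeric values below are placeholders, not real calibration data.

```python
import numpy as np

# Placeholder 3x4 projection matrix; in practice H comes from the joint
# lidar-camera extrinsic calibration and the camera intrinsic calibration.
H = np.array([[1000.0, 0.0, 600.0, 0.1],
              [0.0, 1000.0, 350.0, 0.2],
              [0.0, 0.0, 1.0, 0.3]])

def project(H, xyz):
    """Map a lidar-frame point (x, y, z) to pixel coordinates (u, v)."""
    uvw = H @ np.append(xyz, 1.0)   # homogeneous projection
    return uvw[:2] / uvw[2]         # perspective division by the scale factor

xyz = np.array([12.0, 3.5, -1.2])        # marked point-cloud center (meters)
uv_image = np.array([642.0, 388.0])      # hand-labelled image center (u', v')

uv_mapped = project(H, xyz)              # mapping point (u, v) on the image
d = np.linalg.norm(uv_mapped - uv_image) # position deviation d, in pixels
```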
As shown in Figure 2, with the point-cloud reference target not yet position-corrected, the figure compares, under different time deviations, the mapping box (the reference vehicle's point-cloud detection box mapped onto the image) with the reference vehicle's detection box in the image: when the time deviation is small, the mapping box and the image detection box overlap strongly; when the time deviation is large, their position deviation is large and they barely overlap.
Suppose N groups of time deviations have been estimated. Use the Kolmogorov-Smirnov test to check whether the time deviations follow a normal distribution. If they do, derive the normal-distribution expression; if they do not, compute the median and variance of the sorted deviations lying between the second and third quartiles.
The Kolmogorov-Smirnov test is commonly used to check whether a data distribution conforms to a given distribution, here the normal distribution: the P value of the data is estimated to decide whether the normality hypothesis holds; if the P value exceeds the significance level the hypothesis is accepted, otherwise it is rejected. The specific process is as follows:
(1) Use the Kolmogorov-Smirnov test to check whether the time deviations follow a normal distribution. For the N groups of time-deviation data, compute the mean μ and variance σ², and set the test significance level to α. The test yields the P value; if P is less than or equal to the significance level, the time deviations do not follow a normal distribution, and if P is greater than the significance level, they do. Here N is at least 100.
(2) If the time deviations follow a normal distribution, the distribution can be written as X ~ N(μ, σ²), where X denotes the N groups of time-deviation data.
(3) If the time deviations do not follow a normal distribution, sort the time-deviation data in ascending order and compute the median and variance of all data whose values lie between the second and third quartiles.
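The normality check and the quartile fallback can be sketched as follows; scipy.stats.kstest is one standard Kolmogorov-Smirnov implementation, and the sample data here are synthetic placeholders, not measured deviations.

```python
import numpy as np
from scipy import stats

# t_samples: N >= 100 estimated time deviations (seconds); placeholder data.
rng = np.random.default_rng(0)
t_samples = rng.normal(loc=0.02, scale=0.005, size=120)

mu, sigma = t_samples.mean(), t_samples.std(ddof=1)
alpha = 0.05  # significance level

# Kolmogorov-Smirnov test of the data against N(mu, sigma^2).
_, p_value = stats.kstest(t_samples, "norm", args=(mu, sigma))

if p_value > alpha:
    # normal case: X ~ N(mu, sigma^2)
    dist_params = ("normal", mu, sigma ** 2)
else:
    # non-normal case: median and variance of the data between Q2 and Q3
    q2, q3 = np.percentile(t_samples, [50, 75])
    mid = t_samples[(t_samples >= q2) & (t_samples <= q3)]
    dist_params = ("iqr", np.median(mid), mid.var(ddof=1))
```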
Step 2: Acquire point-cloud and image data in real time while connected vehicles drive continuously on the park road, and detect the center point, heading angle, confidence score and instantaneous speed of each point-cloud connected-vehicle target, and the license-plate number of each image target.
Specifically, the CenterPoint-based 3D target detection algorithm is applied to the point-cloud data to detect each connected-vehicle target's center point, heading angle and confidence score, and the vehicle's instantaneous speed is computed from the ratio of the target's center-point displacement between the two nearest frames to the frame interval time; an OCR recognition method detects the connected-vehicle targets in the image data and reads their license-plate numbers.
3D target detection algorithms can be image-based, point-cloud-based, or based on image/point-cloud fusion; here a target detection algorithm based on the CenterPoint network model is used, generated from point clouds only. A large amount of collected point-cloud data was annotated and divided into a training set, a validation set and a test set; the model trained on the training set reaches an mAP of 91% on the test set, and the detection rate for targets within 50 m (meters) of the point-cloud origin is 95%. The detection accuracy of the OCR recognition method reaches 99%. The heading angle takes values in (-π, π), and the confidence score takes values in (0, 1).
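The instantaneous-speed computation reduces to a displacement-over-time ratio; a minimal sketch, assuming the 10 Hz frame rate of this embodiment and reading "the two nearest frames" as the frames immediately before and after the current one (that reading is ours, not stated in the filing):

```python
import numpy as np

FRAME_DT = 0.1  # seconds per frame at the 10 Hz capture rate

def instantaneous_speed_kmh(center_prev, center_next, frames_apart=2):
    """Speed from the center displacement between two nearby frames."""
    dist_m = np.linalg.norm(np.asarray(center_next) - np.asarray(center_prev))
    return dist_m / (frames_apart * FRAME_DT) * 3.6  # m/s -> km/h
```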
The detected center position of each point-cloud connected-vehicle target is then filter-corrected with the confidence filtering method: the confidence score of the target detected by the detection algorithm and the time-deviation distribution parameters are used to compute the confidence gain, and the target's optimal position is re-estimated by re-filtering based on that gain. The specific process is as follows:
(1) For the k-th point-cloud connected-vehicle target, suppose the center position detected by the deep-learning detection algorithm is (x_k, y_k, z_k), the heading angle is γ_k, the confidence score is c, and the computed instantaneous speed is v_k.
(2) Compute the confidence gain from the time-deviation distribution parameters, specifically:
(2.1) If the time deviations follow a normal distribution, suppose the parameter mean in its normal-distribution expression is u_t and the variance is σ_t²; the confidence gain ε (closed form given in the original filing as a formula image) is then used to re-estimate the horizontal coordinates x_k′, y_k′ of the target's point-cloud center:

x_k′ = x_k·ε + (x_k + v_k·u_t·cos γ_k)(1 - ε)
y_k′ = y_k·ε + (y_k + v_k·u_t·sin γ_k)(1 - ε)

(2.2) If the time deviations do not follow a normal distribution, let the median of the data between the second and third quartiles of the ascending-sorted deviations be ũ_t and their variance τ_t²; the confidence gain ε′ is computed from them, and x_k′, y_k′ are re-estimated in the same form as in (2.1), with ε′ and ũ_t in place of ε and u_t.
(3) Since the connected vehicle is a rigid object, moving its position does not change its vertical (z-axis) coordinate, i.e. z_k′ = z_k; the optimal center position of the target after re-filtering is (x_k′, y_k′, z_k′).
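A minimal sketch of this confidence-gain update follows. The closed form of the gain itself is given only as a formula image in the filing, so the gain is taken here as an input rather than computed:

```python
import numpy as np

def filter_center(xk, yk, vk, gamma_k, eps, u_t):
    """Re-estimate a point-cloud center with confidence gain eps.

    eps: confidence gain from the detection confidence c and the
         time-deviation distribution parameters (closed form not shown
         in the published text, so supplied by the caller).
    u_t: mean of the time deviation in the normal case, or the
         inter-quartile median in the non-normal case.
    """
    xk2 = xk * eps + (xk + vk * u_t * np.cos(gamma_k)) * (1.0 - eps)
    yk2 = yk * eps + (yk + vk * u_t * np.sin(gamma_k)) * (1.0 - eps)
    return xk2, yk2  # z is unchanged for a rigid vehicle
```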
Step 3: Map the filter-corrected point-cloud connected-vehicle targets one by one into the corresponding image frames, and compute the distance difference between each target's mapping-point coordinates in the image and the center-point coordinates of every connected-vehicle target in the image; the image target with the smallest distance difference, provided it is below the threshold, is the corresponding matching target. In this way the mapping, matching and alignment of all point-cloud and image connected-vehicle targets is completed. As shown in Figure 3, a pseudo-3D box is obtained by mapping the position-corrected point-cloud reference vehicle into the image. The specific process of this step is as follows:
(1) For each position-filter-corrected point-cloud connected-vehicle target, map its point-cloud center coordinates into the image with the affine transformation matrix.
(2) Compute the distance difference between the mapping point of each point-cloud target's center in the image and the center-point coordinates of every connected-vehicle target in the image. Suppose the corrected point-cloud center is (x′, y′, z) and its mapping point on the image is (p, q); the mapping point and the point-cloud center then satisfy, up to the homogeneous scale factor s,

$$s\begin{bmatrix}p\\q\\1\end{bmatrix}=H\begin{bmatrix}x'\\y'\\z\\1\end{bmatrix}$$

where H is the 3×4 affine transformation matrix from point cloud to image, obtained from the joint lidar-camera extrinsic calibration and the camera intrinsic calibration, with real elements h_rs, 1 ≤ r ≤ 3, 1 ≤ s ≤ 4.
Suppose the center point of the i-th connected-vehicle target in the image is (m_i, n_i), with 1 ≤ i ≤ N_img_obj, where N_img_obj is the total number of connected-vehicle targets in the image. The distance difference between the mapping point and the center of the i-th image target is then

$$d_i=\sqrt{(p-m_i)^{2}+(q-n_i)^{2}}$$

(3) Compute the minimum distance difference and check whether it is below the set threshold Δ, where the minimum distance difference is

$$d_{\min}=\min_{1\le i\le N_{img\_obj}}d_i$$

If the minimum distance difference d_min is below the threshold Δ, the connected-vehicle target in the corresponding image is the matching target.
(4) Following steps (1)-(3), complete the mapping, matching and alignment of all point-cloud and image connected-vehicle targets.
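The matching rule is a nearest-neighbor search under a distance threshold; a minimal Python sketch, where the array shapes and the threshold value are our assumptions:

```python
import numpy as np

def match_targets(mapped_uv, image_centers, threshold):
    """Match each mapped point-cloud target to its nearest image target.

    mapped_uv:     (K, 2) array of mapping points of corrected targets.
    image_centers: (N, 2) array of image-target centers (m_i, n_i), N >= 1.
    Returns a list of image indices, or None where no center is closer
    than the threshold.
    """
    matches = []
    for p, q in mapped_uv:
        d = np.hypot(image_centers[:, 0] - p, image_centers[:, 1] - q)
        i = int(np.argmin(d))                 # d_min = min_i d_i
        matches.append(i if d[i] < threshold else None)
    return matches
```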
Step 4: Fuse the point-cloud and image perception information of each matched and aligned connected-vehicle target to obtain the license-plate number and instantaneous-speed information of the same target vehicle; report the plate-number information of vehicles whose instantaneous speed exceeds the maximum speed limit to the connected-vehicle cloud control platform, issue an overspeed warning, and remotely control the vehicle to decelerate to the regular speed. The maximum speed limit is the park-specified limit for connected vehicles, 30 km/h; the regular speed is 25 km/h.
The connected-vehicle cloud control platform is based on a cloud server; it provides connected-vehicle management and control functions, holds the information of every connected vehicle, and can remotely control a specific connected vehicle.
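The reporting logic of step 4 amounts to a threshold check on the fused (plate, speed) pairs; a minimal sketch, with the platform's reporting and remote-deceleration APIs left abstract since the filing does not specify them:

```python
MAX_SPEED_KMH = 30.0  # park-wide limit given in this embodiment

def overspeed_alerts(fused_targets):
    """Yield plate numbers to report to the cloud control platform.

    fused_targets: iterable of (plate_number, speed_kmh) pairs produced
    by the fusion step; hypothetical data shape, used for illustration.
    """
    for plate, speed_kmh in fused_targets:
        if speed_kmh > MAX_SPEED_KMH:
            yield plate

# e.g. list(overspeed_alerts([("浙A12345", 33.8), ("浙B67890", 24.1)]))
# -> ["浙A12345"]
```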
On the other hand, as shown in Figure 4, the present invention also provides a connected-vehicle overspeed warning system based on filter correction, comprising a time-deviation distribution parameter determination module, a filter correction module, a matching alignment module and a perception information fusion module.
The time-deviation distribution parameter determination module is used to select a reference connected vehicle and, through a lidar and camera whose data frames are time-synchronized, acquire several frames of the reference vehicle's point-cloud and image data during continuous driving together with the center-point coordinates in both; it maps the reference vehicle's point-cloud center into the image with the affine transformation matrix, measures the position deviation between the mapping point on the image and the vehicle's image center-point coordinates, estimates the generation-time deviation of the reference target between the point cloud and the image, and computes the time-deviation distribution parameters. For the specific implementation of this module, refer to the detailed description of step 1 of the connected-vehicle overspeed warning method based on filter correction provided by the present invention.
The filter correction module is used to acquire point-cloud and image data in real time while connected vehicles drive on the road, and to filter-correct the point-cloud center position of each connected-vehicle target detected in any point-cloud frame with the confidence filtering method; specifically: the confidence score of the detected target and the distribution parameters obtained by the time-deviation distribution parameter determination module are used to compute the confidence gain, and the target's optimal position is re-estimated by re-filtering based on that gain. For the specific implementation of this module, refer to the detailed description of step 2 of the method.
The matching alignment module is used to map the targets corrected by the filter correction module one by one into the corresponding image frames and to compute the distance difference between each target's mapping point in the image and the center point of every connected-vehicle target in the image; the image target with the smallest distance difference, provided it is below the threshold, is the corresponding matching target, and in this way all point-cloud and image targets are mapped, matched and aligned. For the specific implementation of this module, refer to the detailed description of step 3 of the method.
The perception information fusion module is used to fuse the point-cloud and image perception information of the targets matched and aligned by the matching alignment module, so as to obtain each connected vehicle's license-plate number and instantaneous speed; the plate-number information of vehicles whose instantaneous speed exceeds the maximum speed limit is reported to the connected-vehicle cloud control platform, an overspeed warning is issued, and the vehicle is remotely decelerated to a safe speed. For the specific implementation of this module, refer to the detailed description of step 4 of the method.
Corresponding to the foregoing embodiments of the connected-vehicle overspeed warning method based on filter correction, the present invention also provides embodiments of a connected-vehicle overspeed warning device based on filter correction.
Referring to Figure 5, the connected-vehicle overspeed warning device based on filter correction provided by an embodiment of the present invention includes a memory and one or more processors; the memory stores executable code, and when the processors execute the executable code they implement the connected-vehicle overspeed warning method based on filter correction of the above embodiments.
The embodiments of the device can be applied to any equipment with data processing capability, such as a computer. The device embodiments may be implemented in software, or in hardware, or in a combination of software and hardware. Taking software implementation as an example, the device in the logical sense is formed by the processor of the host equipment reading the corresponding computer program instructions from non-volatile storage into memory and running them. In hardware terms, Figure 5 is a hardware structure diagram of the equipment hosting the device; besides the processor, memory, network interface and non-volatile storage shown in Figure 5, the host equipment may also include other hardware according to its actual functions, which will not be repeated here.
For the implementation of the functions and roles of each unit of the device, see the implementation of the corresponding steps of the method above, which will not be repeated here.
Since the device embodiments essentially correspond to the method embodiments, refer to the partial description of the method embodiments for relevant details. The device embodiments described above are only illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units; some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present invention, which persons of ordinary skill in the art can understand and implement without creative effort.
An embodiment of the present invention also provides a computer-readable storage medium on which a program is stored; when the program is executed by a processor, it implements the connected-vehicle overspeed warning method based on filter correction of the above embodiments.
The computer-readable storage medium may be an internal storage unit of any equipment with data processing capability described in any of the foregoing embodiments, such as a hard disk or memory. It may also be an external storage device of such equipment, such as a plug-in hard disk, a Smart Media Card (SMC), an SD card or a Flash Card provided on the equipment. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the equipment. It is used to store the computer program and the other programs and data required by the equipment, and may also be used to temporarily store data that has been or will be output.
The above embodiments are used to explain the present invention, not to limit it; within the spirit of the invention and the scope of protection of the claims, any modification or change made to the invention falls within its scope of protection.

Claims (9)

  1. A connected-vehicle overspeed warning method based on filter correction, characterized in that the method comprises the following steps:
    Step 1: selecting a reference connected vehicle; collecting, through a lidar and a camera whose data frames are time-synchronized, several frames of the reference vehicle's point-cloud and image data during continuous driving; marking the reference vehicle's center-point coordinates in the point-cloud and image data; mapping the reference vehicle's point-cloud center point into the image with an affine transformation matrix; measuring the position deviation between the mapping point on the image and the reference vehicle's center-point coordinates on the image; estimating the generation-time deviation of the reference vehicle target between the point cloud and the image; and computing the time-deviation distribution parameters;
    Step 2: acquiring point-cloud and image data in real time while connected vehicles drive continuously on the road, and filter-correcting the point-cloud center position of each connected-vehicle target detected in any point-cloud frame with a confidence filtering method; specifically: computing the confidence gain from the confidence score of the detected point-cloud target and the time-deviation distribution parameters, and re-filtering to estimate the target's optimal position based on the confidence gain;
    Step 3: mapping the filter-corrected point-cloud connected-vehicle targets one by one into the corresponding image frames, and computing the distance difference between each target's mapping-point coordinates in the image and the center-point coordinates of every connected-vehicle target in the image; the image target with the smallest distance difference, provided it is below the threshold, is the corresponding matching target; completing in this way the mapping, matching and alignment of all point-cloud and image connected-vehicle targets;
    Step 4: fusing the point-cloud and image perception information of each matched and aligned target to obtain the connected vehicle's license-plate number and instantaneous speed; reporting the plate numbers of vehicles whose instantaneous speed exceeds the maximum speed limit to the connected-vehicle cloud control platform, issuing an overspeed warning, and remotely controlling the vehicle to decelerate to a safe speed.
  2. The connected-vehicle overspeed warning method based on filter correction according to claim 1, characterized in that in step 1, hardware-wired control is used to time-synchronize the lidar and camera data frames.
  3. The connected-vehicle overspeed warning method based on filter correction according to claim 1, characterized in that in step 1, the generation-time deviation of the reference vehicle target between the point cloud and the image to be estimated is assumed to be t; the marked center point of the reference vehicle in the point cloud is (x, y, z), its center point in the image is (u′, v′), its heading angle is γ, its computed instantaneous speed is v_t, the mapping point of the point-cloud center on the image is (u, v), and the measured position deviation between the mapping point (u, v) and the reference vehicle's image center point (u′, v′) is d;
    the reference vehicle's point-cloud center (x, y, z) and the mapping point (u, v) then satisfy, up to the homogeneous scale factor s,

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=H\begin{bmatrix}x\\y\\z\\1\end{bmatrix}$$

    where H is the affine transformation matrix from point cloud to image, of dimension 3×4, obtained from the joint lidar-camera extrinsic calibration and the camera intrinsic calibration:

$$H=\begin{bmatrix}h_{11}&h_{12}&h_{13}&h_{14}\\h_{21}&h_{22}&h_{23}&h_{24}\\h_{31}&h_{32}&h_{33}&h_{34}\end{bmatrix}$$

    with real elements h_rs, 1 ≤ r ≤ 3, 1 ≤ s ≤ 4;
    from the reference vehicle's instantaneous driving speed, its point-cloud coordinates x′, y′, z′ after moving within the time deviation t are obtained as

    x′ = x + v_t·t·cos γ
    y′ = y + v_t·t·sin γ
    z′ = z

    where v_t is the instantaneous speed of the reference vehicle and (x′, y′, z′) is its position after the movement;
    the moved position (x′, y′, z′) and the image center point (u′, v′) then satisfy

$$s'\begin{bmatrix}u'\\v'\\1\end{bmatrix}=H\begin{bmatrix}x'\\y'\\z'\\1\end{bmatrix}$$

    therefore, from the measured position deviation d between the mapping point (u, v) and the reference vehicle's image center point (u′, v′), the following equation is written:

$$d=\sqrt{(u-u')^{2}+(v-v')^{2}}$$

    it can then be derived that the time deviation t is a value determined by the known affine transformation matrix, point-cloud center coordinates, heading angle, instantaneous speed and position deviation; its closed form, in which the capital letters A and B denote coefficients collected from these known quantities, is given in the original filing as formula images (PCTCN2022116972-appb-100005 and -100006).
  4. The connected-vehicle overspeed warning method based on filter correction according to claim 3, characterized in that the reference vehicle's instantaneous driving speed is computed as the ratio of the reference target's center-point displacement between the two nearest frames to the frame interval time.
  5. The connected-vehicle overspeed warning method based on filter correction according to claim 1, characterized in that the specific process of computing the time-deviation distribution parameters is as follows:
    (1) using the Kolmogorov-Smirnov test to check whether the time deviations follow a normal distribution: supposing the estimated time-deviation data form N groups, computing their mean μ and variance σ², and setting the test significance level to α; the Kolmogorov-Smirnov test yields the P value, and if P is less than or equal to the significance level the time deviations do not follow a normal distribution, while if P is greater than the significance level they do;
    (2) if the time deviations follow a normal distribution, writing the distribution as X ~ N(μ, σ²), where X denotes the N groups of time-deviation data;
    (3) if the time deviations do not follow a normal distribution, sorting the time-deviation data in ascending order and computing the median and variance of all data whose values lie between the second and third quartiles.
  6. The connected-vehicle overspeed warning method based on filter correction according to claim 5, characterized in that step 2 includes the following steps:
    (1) for the k-th point-cloud connected-vehicle target, the center position detected by the deep-learning detection algorithm is (x_k, y_k, z_k), the heading angle is γ_k, the confidence score is c, and the computed instantaneous speed is v_k;
    (2) computing the confidence gain from the time-deviation distribution parameters, specifically:
    (2.1) if the time deviations follow a normal distribution with parameter mean u_t and variance σ_t², the confidence gain ε is computed from them (closed form given in the original filing as a formula image), and the horizontal coordinates x_k′, y_k′ of the target's point-cloud center are re-estimated with the confidence gain as

    x_k′ = x_k·ε + (x_k + v_k·u_t·cos γ_k)(1 - ε)
    y_k′ = y_k·ε + (y_k + v_k·u_t·sin γ_k)(1 - ε)

    (2.2) if the time deviations do not follow a normal distribution, the median of the data between the second and third quartiles of the ascending-sorted deviations is denoted ũ_t and their variance τ_t²; the confidence gain ε′ is computed from them, and x_k′, y_k′ are re-estimated in the same form as in (2.1), with ε′ and ũ_t in place of ε and u_t;
    (3) since the connected vehicle is a rigid object, moving its position does not change its vertical coordinate, i.e. the z-axis value: the vertical coordinate re-estimated by confidence-gain filtering is z_k′ = z_k, and the optimal center position of the target after re-filtering is (x_k′, y_k′, z_k′).
  7. The connected-vehicle overspeed warning method based on filter correction according to claim 1, characterized in that step 3 includes the following steps:
    (1) for each position-filter-corrected point-cloud connected-vehicle target, mapping its point-cloud center coordinates into the image with the affine transformation matrix;
    (2) computing the distance difference between the mapping point of each point-cloud target's center in the image and the center-point coordinates of every connected-vehicle target in the image: supposing the corrected point-cloud center maps to the point (p, q) on the image and the center point of the i-th connected-vehicle target in the image is (m_i, n_i), with 1 ≤ i ≤ N_img_obj, where N_img_obj is the total number of connected-vehicle targets in the image, the distance difference d_i between the mapping point and the center of the i-th image target is

$$d_i=\sqrt{(p-m_i)^{2}+(q-n_i)^{2}}$$

    (3) computing the minimum distance difference and checking whether it is below the set threshold Δ, where the minimum distance difference is

$$d_{\min}=\min_{1\le i\le N_{img\_obj}}d_i$$

    and if the minimum distance difference d_min is below the threshold Δ, the connected-vehicle target in the corresponding image is the matching target;
    (4) following steps (1)-(3), completing the mapping, matching and alignment of all point-cloud and image connected-vehicle targets.
  8. The connected-vehicle overspeed warning method based on filter correction according to claim 1, characterized in that in step 4, an OCR license-plate recognition method is applied to the image data to identify the license-plate number of the connected-vehicle target in the image.
  9. A connected-vehicle overspeed warning system based on filter correction, characterized in that the system comprises a time-deviation distribution parameter determination module, a filter correction module, a matching alignment module and a perception information fusion module;
    the time-deviation distribution parameter determination module is used to select a reference connected vehicle, collect several frames of its point-cloud and image data during continuous driving through a lidar and camera whose data frames are time-synchronized, mark the reference vehicle's center-point coordinates in the point-cloud and image data, map its point-cloud center into the image with the affine transformation matrix, measure the position deviation between the mapping point on the image and the vehicle's image center-point coordinates, estimate the generation-time deviation of the reference target between the point cloud and the image, and compute the time-deviation distribution parameters;
    the filter correction module is used to acquire point-cloud and image data in real time while connected vehicles drive continuously on the road, and to filter-correct the point-cloud center position of each connected-vehicle target detected in any point-cloud frame with the confidence filtering method; specifically: computing the confidence gain from the confidence score of the detected target and the distribution parameters obtained by the time-deviation distribution parameter determination module, and re-filtering to estimate the target's optimal position based on the confidence gain;
    the matching alignment module is used to map the targets corrected by the filter correction module one by one into the corresponding image frames and to compute the distance difference between each target's mapping point in the image and the center point of every connected-vehicle target in the image; the image target with the smallest distance difference, provided it is below the threshold, is the corresponding matching target, and in this way the mapping, matching and alignment of all point-cloud and image connected-vehicle targets is completed;
    the perception information fusion module is used to fuse the point-cloud and image perception information of the targets matched and aligned by the matching alignment module, so as to obtain each connected vehicle's license-plate number and instantaneous speed; the plate-number information of vehicles whose instantaneous speed exceeds the maximum speed limit is reported to the connected-vehicle cloud control platform, an overspeed warning is issued, and the vehicle is remotely decelerated to a safe speed.
PCT/CN2022/116972 2022-06-13 2022-09-05 Connected-vehicle overspeed warning method and system based on filter correction WO2023240805A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210661541.3A CN114758504B (zh) 2022-06-13 2022-06-13 Connected-vehicle overspeed warning method and system based on filter correction
CN202210661541.3 2022-06-13

Publications (1)

Publication Number Publication Date
WO2023240805A1 true WO2023240805A1 (zh) 2023-12-21

Family

ID=82337228

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/116972 WO2023240805A1 (zh) 2022-06-13 2022-09-05 Connected-vehicle overspeed warning method and system based on filter correction

Country Status (2)

Country Link
CN (1) CN114758504B (zh)
WO (1) WO2023240805A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118124536A (zh) * 2024-05-06 2024-06-04 江苏大块头智驾科技有限公司 Unmanned-vehicle braking device and roadside environment perception system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758504B (zh) * 2022-06-13 2022-10-21 之江实验室 一种基于滤波校正的网联车超速预警方法及***
CN114937081B (zh) * 2022-07-20 2022-11-18 之江实验室 基于独立非均匀增量采样的网联车位置估计方法及装置
CN115272493B (zh) * 2022-09-20 2022-12-27 之江实验室 一种基于连续时序点云叠加的异常目标检测方法及装置

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040248589A1 (en) * 2003-06-05 2004-12-09 Docomo Communications Laboratories Usa, Inc. Method and apparatus for location estimation using region of confidence filtering
CN105549050A (zh) * 2015-12-04 2016-05-04 合肥工业大学 BeiDou deformation monitoring and positioning method based on fuzzy confidence filtering
CN106228570A (zh) * 2016-07-08 2016-12-14 百度在线网络技术(北京)有限公司 Ground-truth data determination method and device
CN107564069A (zh) * 2017-09-04 2018-01-09 北京京东尚科信息技术有限公司 Method and device for determining calibration parameters, and computer-readable storage medium
CN108932736A (zh) * 2018-05-30 2018-12-04 南昌大学 2D-lidar point-cloud data processing method and dynamic robot pose calibration method
CN109872370A (zh) * 2017-12-04 2019-06-11 通用汽车环球科技运作有限责任公司 Detection and recalibration of camera systems using lidar data
CN110243358A (zh) * 2019-04-29 2019-09-17 武汉理工大学 Multi-source-fusion indoor and outdoor positioning method and system for unmanned vehicles
CN110850403A (zh) * 2019-11-18 2020-02-28 中国船舶重工集团公司第七0七研究所 Multi-sensor decision-level-fusion perception and recognition method for surface targets of intelligent ships
CN112085801A (zh) * 2020-09-08 2020-12-15 清华大学苏州汽车研究院(吴江) Neural-network-based calibration method for fusing 3D point clouds and 2D images
CN114078145A (zh) * 2020-08-19 2022-02-22 北京万集科技股份有限公司 Blind-zone data processing method and apparatus, computer device and storage medium
CN114545434A (zh) * 2022-01-13 2022-05-27 燕山大学 Roadside-view speed measurement method and system, electronic device and storage medium
CN114612795A (zh) * 2022-03-02 2022-06-10 南京理工大学 Road-scene target recognition method based on lidar point clouds
CN114758504A (zh) 2022-06-13 2022-07-15 之江实验室 Connected-vehicle overspeed warning method and system based on filter correction

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016013028A1 (de) * 2016-11-02 2018-05-03 Friedrich-Schiller-Universität Jena Method and device for the precise determination of the position of arrow-like objects relative to surfaces
CN108983248A (zh) * 2018-06-26 2018-12-11 长安大学 Connected-vehicle positioning method based on 3D lidar and V2X
CN109147370A (zh) * 2018-08-31 2019-01-04 南京锦和佳鑫信息科技有限公司 Expressway control system and specific-route service method for intelligent connected vehicles
CN110942449B (zh) * 2019-10-30 2023-05-23 华南理工大学 Vehicle detection method based on laser-vision fusion
CN114076918A (zh) * 2020-08-20 2022-02-22 北京万集科技股份有限公司 Joint calibration method and apparatus for millimeter-wave radar, lidar and camera
CN113112817A (zh) * 2021-04-13 2021-07-13 天津职业技术师范大学(中国职业培训指导教师进修中心) Tunnel vehicle positioning and early-warning system and method based on the Internet of Vehicles and car-following behavior
CN113092807B (zh) * 2021-04-21 2024-05-14 上海浦江桥隧运营管理有限公司 Urban elevated-road vehicle speed measurement method based on a multi-object tracking algorithm
CN114359181B (zh) * 2021-12-17 2024-01-26 上海应用技术大学 Intelligent-traffic target fusion detection method and system based on images and point clouds


Also Published As

Publication number Publication date
CN114758504B (zh) 2022-10-21
CN114758504A (zh) 2022-07-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22946478

Country of ref document: EP

Kind code of ref document: A1