CN112509333A - Roadside parking vehicle track identification method and system based on multi-sensor sensing - Google Patents


Info

Publication number
CN112509333A
Authority
CN
China
Prior art keywords
vehicle
visible light
data
camera
light camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011125302.3A
Other languages
Chinese (zh)
Inventor
闫军
张恒
项炎平
王艳清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Intercommunication Technology Co ltd
Original Assignee
Smart Intercommunication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Intercommunication Technology Co ltd filed Critical Smart Intercommunication Technology Co ltd
Priority to CN202011125302.3A priority Critical patent/CN112509333A/en
Publication of CN112509333A publication Critical patent/CN112509333A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a roadside parking vehicle track identification method and system based on multi-sensor sensing, relating to the field of intelligent parking management and recognition. The method comprises the following steps: acquiring vehicle target foreground region information from images captured by an infrared camera and a visible light camera respectively; performing 2D target detection fusion according to the confidence weights of the visible light camera and the infrared thermal imaging camera, the vehicle target foreground region information, and the combined calibration data; and, when the radar sensor data are reliable, obtaining the 3D spatial position information of the vehicle target from the laser point cloud data, the combined calibration data, and the 2D target detection fusion result, and from it the 3D track of the vehicle. By acquiring the 3D track behavior information of vehicles in roadside parking scenarios through multi-sensor fusion sensing, the method addresses the difficulty of accurately identifying vehicle behavior under varying illumination and weather conditions, especially under severe occlusion, and improves the accuracy of roadside parking vehicle track identification.

Description

Roadside parking vehicle track identification method and system based on multi-sensor sensing
Technical Field
The invention relates to the field of intelligent parking management and recognition, in particular to a roadside parking vehicle track recognition method and system based on multi-sensor sensing.
Background
In urban intelligent transportation systems, parking management accounts for a significant share. As urban motor vehicle ownership keeps growing, parking is no longer limited to dedicated lots, and roadside parking plays an increasingly important role. Making both parking lots and roadside parking intelligent depends on accurate analysis of vehicle behavior, and in particular on accurate acquisition of vehicle trajectories.
Existing vehicle track recognition technology generally analyzes data from a single type of sensor. One approach obtains the trajectory of the vehicle and of target objects in the environment from data collected in real time by vehicle-mounted sensors and radar, so that collision risks can be judged in time. However, this approach perceives black vehicles poorly, and when there is a large angle between the illuminated surface of the vehicle and the radar, complete vehicle information is difficult to obtain, which increases the difficulty of vehicle recognition. Another approach installs multiple cameras on the vehicle, each with its own viewing angle, focal length, and position; the result of each monocular camera can be processed individually, or the cameras can be processed jointly, finally yielding trajectory information for the targets around the vehicle. However, when vehicle targets occlude each other severely, this approach suffers a higher target track misrecognition rate.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a roadside parking vehicle track recognition method and system based on multi-sensor sensing, which can address the low accuracy and high misrecognition rate of existing roadside parking vehicle track recognition in such specific scenes.
In order to achieve the purpose, the invention provides a roadside parking vehicle track identification method based on multi-sensor sensing, which comprises the following steps:
respectively acquiring vehicle target foreground area information according to images acquired by an infrared camera and a visible light camera;
performing 2D target detection fusion according to confidence weights corresponding to a visible light camera and an infrared thermal imaging camera, vehicle target foreground region information and combined calibration data, wherein the combined calibration data is obtained by performing space synchronization on data of the infrared camera, the visible light camera and a radar sensor, and the confidence weights are configured according to current illumination conditions;
when the radar sensor data are reliable, obtaining 3D space position information of the vehicle target according to the laser point cloud data obtained by the radar sensor, the combined calibration data and the 2D target detection fusion result;
and obtaining a 3D track of the vehicle according to the 3D space position information of the vehicle target.
Further, when the data of the radar sensor is reliable, before the step of obtaining the 3D spatial position information of the vehicle target according to the laser point cloud data obtained by the radar sensor, the combined calibration data, and the 2D target detection fusion result, the method further includes:
identifying current rain or snow weather and its severity according to the image data of the visible light camera;
and judging whether the radar sensor data are reliable according to the rain/snow weather recognition result and the severity recognition result.
Further, when the data of the radar sensor is reliable, before the step of obtaining the 3D spatial position information of the vehicle target according to the laser point cloud data obtained by the radar sensor, the combined calibration data, and the 2D target detection fusion result, the method further includes:
preprocessing laser point cloud data obtained by a radar sensor;
and acquiring a vehicle point cloud target from the preprocessed laser point cloud data according to a preset clustering operation algorithm.
Further, when the data of the radar sensor is reliable, the step of obtaining the 3D spatial position information of the vehicle target according to the laser point cloud data obtained by the radar sensor, the combined calibration data, and the 2D target detection fusion result includes:
and when the radar sensor data is reliable, obtaining the 3D space position information of the vehicle target according to the vehicle point cloud target, the combined calibration data and the 2D target detection fusion result.
Further, the step of obtaining the 3D trajectory of the vehicle according to the 3D spatial position information of the vehicle object includes:
and obtaining a 3D track of the vehicle through a Kalman filtering target tracking algorithm and a data association algorithm according to the 3D space position information of the vehicle target.
Further, the method further comprises:
and when the data of the radar sensor is unreliable, carrying out binocular intersection on image information and configuration parameters acquired by the visible light camera and the infrared thermal imaging camera to obtain the 3D space position information of the vehicle target.
Further, before the step of respectively acquiring the foreground region information of the vehicle target according to the images acquired by the infrared camera and the visible light camera, the method further includes:
acquiring image brightness corresponding to a visible light camera through image data acquired by the visible light camera, and acquiring the edge definition of the image by using a preset edge extraction algorithm;
acquiring the illumination condition of the scene according to the edge definition;
and respectively configuring confidence coefficient weights for the visible light camera and the infrared thermal imaging camera according to the current illumination condition.
Further, before the step of respectively acquiring the foreground region information of the vehicle target according to the images acquired by the infrared camera and the visible light camera, the method further includes:
time synchronizing the visible light camera, the infrared thermal imaging camera, and the radar sensor.
Further, before the step of respectively acquiring the foreground region information of the vehicle target according to the images acquired by the infrared camera and the visible light camera, the method further includes:
selecting a radar coordinate system as a world coordinate system;
acquiring internal parameters of a visible light camera and an infrared thermal imaging camera by using a preset calibration object;
configuring a plurality of rectangular calibration plates in the field of view ranges of a visible light camera, an infrared thermal imaging camera and a radar sensor;
detecting the corner position of a rectangular calibration plate in image data collected by a visible light camera and an infrared thermal imaging camera;
identifying point clouds at the edge of the calibration plate on data collected by a radar sensor, and respectively fitting the point clouds at the four edges of the calibration plate to obtain the three-dimensional line segment positions of the four edges of each calibration plate;
and acquiring the three-dimensional coordinates of the four corners of the calibration plate in the point cloud data, in the least-squares sense, from the three-dimensional line segments, and solving the spatial synchronization relation between the radar sensor and the visible light and infrared thermal imaging cameras through the intersection principle.
Further, the invention provides a roadside parking vehicle track recognition system based on multi-sensor perception, which comprises:
the acquisition module is used for respectively acquiring the information of the foreground area of the vehicle target according to the images acquired by the infrared camera and the visible light camera;
the detection fusion module is used for carrying out 2D target detection fusion according to confidence weights corresponding to the visible light camera and the infrared thermal imaging camera, vehicle target foreground region information and combined calibration data, wherein the combined calibration data is obtained by carrying out space synchronization on data of the infrared camera, the visible light camera and the radar sensor, and the confidence weights are configured according to the current illumination condition;
the acquisition module is further used for acquiring 3D space position information of the vehicle target according to the laser point cloud data obtained by the radar sensor, the combined calibration data and the 2D target detection fusion result when the radar sensor data are reliable; and obtaining a 3D track of the vehicle according to the 3D space position information of the vehicle target.
Further, the system further comprises:
the identification module is used for identifying current rain or snow weather and its severity according to the image data of the visible light camera;
and the judging module is used for judging whether the radar sensor data are reliable according to the rain/snow weather recognition result and the severity recognition result.
Further, the system further comprises:
the preprocessing module is used for preprocessing the laser point cloud data obtained by the radar sensor;
the acquisition module is also used for acquiring a vehicle point cloud target from the preprocessed laser point cloud data according to a preset clustering operation algorithm.
Further, the obtaining module is specifically configured to, when the radar sensor data is reliable, obtain 3D spatial position information of the vehicle target according to the vehicle point cloud target, the joint calibration data, and the 2D target detection fusion result.
Further, the obtaining module is specifically configured to obtain a vehicle 3D trajectory through a Kalman filtering target tracking algorithm and a data association algorithm according to the vehicle target 3D spatial position information.
Further, the acquisition module is further configured to perform binocular intersection on image information and configuration parameters acquired by the visible light camera and the infrared thermal imaging camera when the radar sensor data is unreliable, so as to obtain the 3D spatial position information of the vehicle target.
Further, the system further comprises: a configuration module;
the configuration module is used for acquiring the image brightness corresponding to the visible light camera from the image data collected by the visible light camera and acquiring the edge definition of the image with a preset edge extraction algorithm; judging the illumination condition of the scene from the edge definition; and configuring confidence weights for the visible light camera and the infrared thermal imaging camera respectively according to the current illumination condition.
Further, the system further comprises: a synchronization module;
and the synchronization module is used for performing time synchronization on the visible light camera, the infrared thermal imaging camera, and the radar sensor.
Furthermore, the synchronization module is also used for selecting a radar coordinate system as the world coordinate system; acquiring internal parameters of the visible light camera and the infrared thermal imaging camera by using a preset calibration object; configuring a plurality of rectangular calibration plates within the fields of view of the visible light camera, the infrared thermal imaging camera, and the radar sensor; detecting the corner positions of the rectangular calibration plates in the image data collected by the visible light camera and the infrared thermal imaging camera; identifying point clouds at the edges of the calibration plates in the data collected by the radar sensor and fitting the point clouds of the four edges respectively to obtain the three-dimensional line segment positions of the four edges of each calibration plate; and acquiring the three-dimensional coordinates of the four corners of each calibration plate in the point cloud data, in the least-squares sense, from the three-dimensional line segments, and solving the spatial synchronization relation between the radar sensor and the visible light and infrared thermal imaging cameras through the intersection principle.
According to the roadside parking vehicle track identification method and system based on multi-sensor sensing, acquiring vehicle position information on the image through the perception advantages of the infrared thermal imaging camera and a detection and recognition network solves the difficulty of identifying vehicles from visible light camera data under weak illumination conditions. Meanwhile, combining laser point cloud 3D data with binocular positioning from the visible light camera and the infrared thermal imaging camera accurately locates vehicle targets of any color, compensating for the laser radar's weak perception of black objects, which otherwise easily loses the position information of black vehicles. In addition, because laser radar data is easily disturbed in rain and snow, the weather state and its severity are identified from the visible light camera data to decide when the radar can be trusted, improving the overall degree of automation of the system. Finally, the vehicle 3D track is acquired from the laser point cloud data obtained by the radar sensor, the combined calibration data, and the 2D target detection fusion result, elevating the 2D tracking problem to a 3D tracking problem; acquiring the 3D information of the vehicle and combining it with a Kalman filtering algorithm greatly improves the stability and accuracy of vehicle track calculation.
Drawings
FIG. 1 is a flow chart of a roadside parking vehicle track identification method based on multi-sensor sensing provided by the invention;
FIG. 2 is a schematic diagram of a roadside parked vehicle trajectory recognition system based on multi-sensor sensing provided by the invention.
FIG. 3 is a flowchart of an overall implementation of the method for recognizing the track of a roadside parked vehicle based on multi-sensor sensing provided by the invention.
Detailed Description
The structure and implementation of the device of the present invention are further described in detail below with reference to the accompanying drawings and examples.
The embodiment of the invention provides a roadside parking vehicle track identification method based on multi-sensor sensing, which specifically comprises the following steps as shown in FIG. 1:
101. and respectively acquiring the information of the foreground area of the vehicle target according to the images acquired by the infrared camera and the visible light camera.
For the embodiment of the present invention, step 101 may specifically include: for the infrared thermal imaging camera, identifying vehicle information in the image data by using a detection network; for the visible light camera, identifying vehicle position information in the image data by using a detection network; and respectively acquiring the information of the foreground area of the vehicle target in the identified vehicle position areas by using an example segmentation algorithm.
For the embodiment of the present invention, time and space synchronization are also required before step 101, as follows: temporal synchronization of the sensor data is one of the foundations of fusion perception, and the embodiment of the invention adopts both an external trigger synchronization mode and a timing-service synchronization mode. Specifically, for sensors that support external triggering, an Ethernet clock synchronization protocol is adopted. For sensors without an external trigger function, the internal clock is adjusted from the same timing source, and timestamp information is added to the sensor's data frames. Ethernet clock synchronization protocols include, but are not limited to, IEEE 1588 and IEEE 802.1AS; timing sources include, but are not limited to, GPS and BeiDou.
Further, spatial synchronization of sensor data is one of the key links in fusion perception. Spatially synchronizing the data of each sensor means jointly calibrating the sensors so that information from different sensors can be converted into a common frame for pixel-level fusion; system positioning error is minimized by calculating the spatio-temporal calibration, coordinate system transformations, sensor position offsets, detection offsets, and so on, so that subsequent multi-sensor data integration can be carried out correctly and effectively.
Specifically, registration between the image sensors (the visible light and infrared thermal imaging cameras) and the laser radar is implemented as follows:
1) selecting a radar coordinate system as a world coordinate system;
2) acquiring internal parameters of a camera and an infrared thermal imaging camera by using a calibration object;
3) arranging a plurality of rectangular calibration plates in the visual field range of the visible light or infrared thermal imaging camera and the laser radar;
4) detecting the corner position of a rectangular calibration board in the image data;
5) identifying point clouds at edges of the flat plate on radar data, respectively fitting the point clouds at four sides of the flat plate, obtaining three-dimensional line segment positions of four sides of each flat plate, and obtaining four angular point three-dimensional coordinates of the flat plate in point cloud data in the least square sense through the three-dimensional line segments.
6) solving the spatial synchronization relation between the radar sensor and the visible light or infrared thermal imaging camera through the intersection principle.
The calibration objects include, but are not limited to, checkerboards, star calibration targets, and the like. For synchronization between the image and a millimeter-wave radar in the roadside parking scene, the transformation matrix between the two is calculated from the positions of vehicles in the scene.
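Steps 4) to 6) above (fitting the plate-edge point clouds to 3D lines, then recovering the plate corners in the least-squares sense) can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the helper names `fit_line_3d` and `corner_from_edges` are hypothetical, and the final camera-to-radar intersection solve is omitted.

```python
import numpy as np

def fit_line_3d(points):
    """Fit a 3D line to plate-edge points: returns (centroid, unit direction),
    the principal direction coming from an SVD of the centered points."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def corner_from_edges(p1, d1, p2, d2):
    """Least-squares corner: the point minimizing the summed squared distance
    to two fitted edge lines (works even for noisy, slightly skew lines)."""
    def ortho(d):
        # I - d d^T projects onto the plane orthogonal to the line direction
        return np.eye(3) - np.outer(d, d)
    A = ortho(d1) + ortho(d2)
    b = ortho(d1) @ p1 + ortho(d2) @ p2
    return np.linalg.solve(A, b)
```

For a full calibration plate, `corner_from_edges` would be applied to each pair of adjacent fitted edges to recover all four corner coordinates.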
102. And carrying out 2D target detection fusion according to the confidence coefficient weights corresponding to the visible light camera and the infrared thermal imaging camera, the vehicle target foreground area information and the combined calibration data.
The combined calibration data is obtained by performing spatial synchronization on data of an infrared camera, a visible light camera and a radar sensor, and the confidence coefficient weight is configured according to the current illumination condition.
It should be noted that, the configuration process of the confidence weight may be as follows: acquiring image brightness corresponding to a visible light camera through image data acquired by the visible light camera, and acquiring the edge definition of the image by using a preset edge extraction algorithm; acquiring the illumination condition of the scene according to the edge definition; and respectively configuring confidence coefficient weights for the visible light camera and the infrared thermal imaging camera according to the current illumination condition. For the embodiment of the invention, under different environmental conditions, different confidence weights are respectively configured for the visible light camera and the infrared thermal imaging camera, so that the accuracy of image acquisition can be further improved, and the accuracy of vehicle track identification can be further improved.
Specifically, for illumination conditions, the image brightness is evaluated from the visible light camera data, and edge definition is evaluated with an edge extraction algorithm, so as to judge the illumination condition of the scene; combined with clock information, this supports the later choice of a reliable perception sensor. For rain and snow weather and its severity, a deep-learning classification network is applied to the visible light camera image data to recognize rain/snow weather and grade its severity. The weather classification and severity recognition networks include, but are not limited to, classification networks such as ResNet, ResNeXt, and GoogLeNet. Edge extraction algorithms include, but are not limited to, traditional methods such as Sobel and Canny, and deep-learning-based methods such as HED and CASENet.
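As a rough sketch of the illumination assessment described above, the following combines mean image brightness with a simple gradient-based edge definition measure and maps the result to per-camera confidence weights. The scoring formula, the finite-difference stand-in for a Sobel/Canny pass, and the `lo`/`hi` thresholds are all illustrative assumptions; the patent does not fix them.

```python
import numpy as np

def illumination_score(gray):
    """Score scene illumination from a grayscale image (values 0-255):
    combines mean brightness with mean gradient magnitude (edge definition)."""
    img = np.asarray(gray, dtype=float)
    brightness = img.mean() / 255.0
    # Simple finite-difference gradients stand in for a Sobel/Canny pass
    gx = np.abs(np.diff(img, axis=1)).mean()
    gy = np.abs(np.diff(img, axis=0)).mean()
    sharpness = min((gx + gy) / 255.0, 1.0)
    return 0.5 * brightness + 0.5 * sharpness

def confidence_weights(score, lo=0.2, hi=0.6):
    """Map an illumination score to (visible, thermal) confidence weights.
    The lo/hi thresholds are illustrative, not taken from the patent."""
    w_vis = float(np.clip((score - lo) / (hi - lo), 0.0, 1.0))
    return w_vis, 1.0 - w_vis
```

In a well-lit, sharp scene the visible light camera dominates; as the scene darkens and edges wash out, weight shifts to the infrared thermal imaging camera.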
103. And when the data of the radar sensor is reliable, obtaining 3D space position information of the vehicle target according to the laser point cloud data obtained by the radar sensor, the combined calibration data and the 2D target detection fusion result.
For the embodiment of the present invention, step 103 may specifically include: and when the radar sensor data is reliable, obtaining the 3D space position information of the vehicle target according to the vehicle point cloud target, the combined calibration data and the 2D target detection fusion result.
For the embodiment of the present invention, step 103 may further include: identifying current rain or snow weather and its severity according to the image data of the visible light camera; and judging whether the radar sensor data are reliable according to the rain/snow weather recognition result and the severity recognition result.
For the embodiment of the present invention, step 103 may further include: preprocessing the laser point cloud data obtained by the radar sensor; and acquiring vehicle point cloud targets from the preprocessed laser point cloud data according to a preset clustering algorithm. Point cloud preprocessing can eliminate interference such as leaves, birds, the ground, green belts, and traffic signs, further improving analysis reliability; clustering is then performed according to the nearest neighbor principle to obtain point cloud targets of whole vehicles or of local parts such as license plates, wheel hubs, and headlight rings.
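The preprocessing and nearest-neighbor clustering described above might be sketched as follows; the thresholds and the greedy Euclidean clustering are illustrative assumptions rather than the patent's exact algorithm.

```python
import numpy as np

def preprocess(points, z_ground=0.2, max_range=50.0):
    """Crude preprocessing: drop near-ground returns and far-range clutter.
    The thresholds are illustrative assumptions, not values from the patent."""
    pts = np.asarray(points, dtype=float)
    keep = (pts[:, 2] > z_ground) & (np.linalg.norm(pts[:, :2], axis=1) < max_range)
    return pts[keep]

def euclidean_cluster(points, radius=0.5, min_size=3):
    """Greedy nearest-neighbor clustering: grow a cluster by repeatedly
    absorbing any unvisited point within `radius` of a cluster member."""
    pts = np.asarray(points, dtype=float)
    unvisited = set(range(len(pts)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if np.linalg.norm(pts[i] - pts[j]) <= radius]
            for j in near:
                unvisited.discard(j)
            cluster.extend(near)
            frontier.extend(near)
        if len(cluster) >= min_size:   # discard tiny clusters as residual noise
            clusters.append(pts[cluster])
    return clusters
```

Each surviving cluster is then a candidate vehicle (or vehicle-part) point cloud target for the subsequent 3D fusion step.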
Further, when the radar sensor data is unreliable, binocular intersection is performed on the image information collected by the visible light camera and the infrared thermal imaging camera, together with their configuration parameters, to obtain the 3D spatial position information of the vehicle target. In this case, the embodiment of the invention obtains the 3D spatial position of the target through binocular intersection, realizing a dual 3D positioning scheme so that vehicle position information is obtained stably.
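The binocular intersection used when the radar data is unreliable amounts to triangulating the two viewing rays of the visible light and infrared thermal imaging cameras. A minimal midpoint-triangulation sketch, assuming the camera centers and unit ray directions are already known from the joint calibration:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: closest point between the two viewing rays
    from camera centers c1, c2 along unit directions d1, d2."""
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    # Solve for ray parameters s, t minimizing |c1 + s*d1 - (c2 + t*d2)|
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    # Return the midpoint of the closest approach between the two rays
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```

In practice the ray directions would be back-projected from the fused 2D detection centers using each camera's intrinsics and the combined calibration data.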
104. And obtaining a 3D track of the vehicle according to the 3D space position information of the vehicle target.
For the embodiment of the present invention, step 104 may specifically include: obtaining the vehicle 3D track through a Kalman filtering target tracking algorithm and a data association algorithm according to the vehicle target 3D spatial position information. It should be noted that when vehicles occlude each other, existing algorithms are prone to confusing vehicle IDs during tracking even where the images can still be told apart; tracking in 3D alleviates this problem.
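A minimal sketch of the per-target Kalman filtering step, assuming a constant-velocity model over the 3D position. The state layout, noise magnitudes, and time step are illustrative assumptions, and the data association step (e.g. Hungarian matching of tracks to detections) is omitted.

```python
import numpy as np

class Track3D:
    """Constant-velocity Kalman filter over state [x, y, z, vx, vy, vz]:
    a per-target 3D tracking sketch with illustrative noise parameters."""
    def __init__(self, xyz, dt=0.1):
        self.x = np.hstack([np.asarray(xyz, dtype=float), np.zeros(3)])
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
        self.Q = 0.01 * np.eye(6)                          # process noise (assumed)
        self.R = 0.1 * np.eye(3)                           # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```

The sequence of filtered positions for each associated target then forms that vehicle's 3D track.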
It should be noted that the vehicle detection methods adopted in the embodiment of the present invention include, but are not limited to, target detection networks such as YOLO, SSD, and CenterNet; the overlap measure includes, but is not limited to, intersection-over-union variants such as IoU, CIoU, DIoU, and GIoU; vehicle instance segmentation methods include, but are not limited to, Mask R-CNN, BlendMask, SOLO, PolarMask, and the like; data association algorithms include, but are not limited to, the Hungarian algorithm.
The specific application process of the embodiment of the present invention may be as shown in fig. 3, but is not limited thereto, and includes: setting different confidence weights for the visible light camera and the infrared thermal imaging camera according to the illumination recognition result; for the infrared thermal imaging camera, recognizing vehicle information in the image data with a detection network; for the visible light camera, recognizing vehicle position information in the image data with a detection network; within the vehicle position regions recognized by each camera, acquiring the vehicle foreground regions with an instance segmentation algorithm; performing 2D target detection fusion based on the spatial calibration data and the confidence weights, merging homonymous (same-target) detections by their intersection-over-union; judging whether the laser radar data are credible from the rain/snow weather and severity recognition results; and, when the laser radar data are credible, preprocessing the point cloud data and clustering by the nearest neighbor principle to obtain point cloud targets of whole vehicles or of local parts such as license plates, wheel hubs, and headlight rings.
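The merging of homonymous (same-target) detections by intersection ratio mentioned above can be sketched as follows; the IoU threshold and the confidence-weighted box fusion are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def merge_detections(box_vis, box_ir, w_vis, w_ir, iou_thresh=0.5):
    """Treat the visible-light and thermal boxes as the same target when their
    IoU exceeds the threshold, and fuse them with the per-camera confidence
    weights (the threshold and fusion rule are illustrative)."""
    if iou(box_vis, box_ir) < iou_thresh:
        return None  # different targets: do not merge
    total = w_vis + w_ir
    return tuple((w_vis * v + w_ir * r) / total for v, r in zip(box_vis, box_ir))
```

Under weak illumination the thermal weight dominates and the merged box leans toward the infrared detection; in daylight the reverse holds.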
When the laser point cloud data are reliable, the point cloud 3D data are preferred: the vehicle is fused in 3D using the confidence weights of the visible light and infrared thermal imaging cameras, the vehicle target foreground region information, and the combined calibration data, yielding the measured 3D spatial position information. When the laser point cloud data are unreliable, binocular intersection of the visible light and infrared thermal imaging cameras is computed from their confidence weights and the vehicle target foreground region information to obtain the three-dimensional position of the target. Target position information is then tracked with a Kalman filtering target tracking algorithm and a data association algorithm, finally yielding the vehicle 3D track.
According to the roadside parking vehicle track identification method based on multi-sensor sensing provided by the embodiment of the invention, acquiring vehicle position information on the image through the sensing advantage of the infrared thermal imaging camera and a detection and recognition network solves the problem that vehicles are difficult to recognize from visible light camera data under weak illumination; meanwhile, combining laser point cloud 3D data with binocular positioning by the visible light camera and the infrared thermal imaging camera accurately positions vehicle targets of any color, solving the problem that position information of black vehicles is easily lost because the laser radar perceives black objects weakly; in addition, aiming at the problem that laser radar data is easily disturbed in rainy and snowy weather, the weather state and its degree are identified to evaluate whether the laser radar data is reliable, improving the overall automation of the system; finally, the 3D track of the vehicle is acquired from the laser point cloud data obtained by the radar sensor, the combined calibration data and the 2D target detection fusion result, elevating the 2D tracking problem to a 3D tracking problem, and acquiring the 3D information of the vehicle combined with a Kalman filtering algorithm greatly improves the stability and accuracy of vehicle track calculation.
As a specific implementation manner of the method shown in fig. 1, an embodiment of the present invention provides a roadside parked vehicle trajectory recognition system based on multi-sensor sensing, and as shown in fig. 2, the system includes:
The acquisition module 21 is configured to acquire vehicle target foreground region information from the images captured by the infrared camera and the visible light camera, respectively.
For the embodiment of the present invention, time and space synchronization is also required before the acquisition module 21 performs its function, as follows: time synchronism of the sensor data is one of the bases of fusion perception, and the embodiment of the invention adopts an external-trigger synchronization mode and a time-service synchronization mode. Specifically, for sensors that support external triggering, an Ethernet clock synchronization protocol is adopted; for sensors without an external trigger function, the internal clock is set from the same time service and timestamp information is added to each data frame. The Ethernet clock synchronization protocol includes, but is not limited to, IEEE 1588, IEEE 802.1AS, and the like. The time service source includes, but is not limited to, GPS, BeiDou, and the like.
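Once every data frame carries a timestamp from the common time service, frames from different sensors can be paired by nearest timestamp. A sketch of such pairing; the 50 ms tolerance is an arbitrary illustrative choice:

```python
from bisect import bisect_left

def nearest_frame(timestamps, t, tol=0.05):
    """Return the index of the frame whose timestamp (in a sorted list,
    seconds) is closest to t, or None if the best match exceeds tol."""
    if not timestamps:
        return None
    i = bisect_left(timestamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    best = min(candidates, key=lambda j: abs(timestamps[j] - t))
    return best if abs(timestamps[best] - t) <= tol else None
```

For each lidar sweep, the camera frame returned by `nearest_frame` would be the one fused with it; a `None` result means the sensors drifted apart and the frame is skipped.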
Further, spatial synchronization of the sensor data is one of the key links of fusion perception. Spatially synchronizing the sensor data means jointly calibrating the sensors so that information from the different sensors can be converted into a common reference and fused at the pixel level; by computing the spatio-temporal calibration, coordinate system conversions, sensor position offsets, detection offsets, and the like, the system positioning error is minimized so that subsequent multi-sensor data integration can be carried out correctly and effectively.
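The joint calibration ultimately yields extrinsics (R, t) from the radar frame to each camera frame plus camera intrinsics K. A hypothetical sketch of the resulting coordinate conversion, projecting a lidar point into pixel coordinates with a standard pinhole model (the matrices here are stand-ins, not calibrated values):

```python
import numpy as np

def lidar_point_to_pixel(p_lidar, R, t, K):
    """Project a 3D lidar point into camera pixel coordinates using
    extrinsics (R, t) from joint calibration and camera intrinsics K."""
    p_cam = R @ p_lidar + t      # lidar frame -> camera frame
    if p_cam[2] <= 0:
        return None              # point is behind the camera
    uvw = K @ p_cam              # pinhole projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

This is the conversion that lets a clustered point cloud target be compared against the 2D boxes from the cameras.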
The detection fusion module 22 is configured to perform 2D target detection fusion according to the confidence weights corresponding to the visible light camera and the infrared thermal imaging camera, the vehicle target foreground region information, and the joint calibration data.
The combined calibration data is obtained by spatially synchronizing the data of the infrared camera, the visible light camera, and the radar sensor, and the confidence weights are configured according to the current illumination condition.
It should be noted that the confidence weights may be configured as follows: obtain the image brightness from the image data captured by the visible light camera, and obtain the edge definition of the image with a preset edge extraction algorithm; determine the illumination condition of the scene from the edge definition; and configure confidence weights for the visible light camera and the infrared thermal imaging camera according to the current illumination condition.
Specifically, for the illumination condition, the image brightness is evaluated from the visible light camera data and the edge definition is evaluated with an edge extraction algorithm to judge the illumination condition of the scene; combined with clock information, this supports the subsequent selection of a reliable perception sensor. For rain and snow weather and its degree, a deep-learning classification network is applied to the visible light camera image data to identify rain and snow weather and grade its severity. The weather classification and degree recognition network includes, but is not limited to, classification networks such as ResNet, ResNeXt, and GoogLeNet. The edge extraction algorithm includes, but is not limited to, traditional edge extraction methods such as Sobel and Canny, and deep-learning edge extraction methods such as HED and CASENet.
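A toy version of this illumination assessment, using mean brightness and a Sobel-based edge score; the thresholds and the weight values (0.7/0.3) are arbitrary stand-ins for whatever a real deployment would tune:

```python
import numpy as np

def sobel_sharpness(gray):
    """Mean Sobel gradient magnitude as a crude edge-definition score."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):          # cross-correlate with the 3x3 kernels
        for j in range(3):
            win = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return float(np.hypot(gx, gy).mean())

def camera_weights(gray, bright_thr=60.0, sharp_thr=20.0):
    """Favor the visible-light camera in good light, the thermal
    camera when the scene is dark or edges are washed out."""
    good = gray.mean() >= bright_thr and sobel_sharpness(gray) >= sharp_thr
    return (0.7, 0.3) if good else (0.3, 0.7)   # (w_visible, w_thermal)
```

At night the low brightness alone tips the weights toward the thermal camera, matching the patent's motivation for using it under weak illumination.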
The obtaining module 21 is further configured to obtain 3D spatial position information of the vehicle target according to the laser point cloud data obtained by the radar sensor, the joint calibration data, and the 2D target detection fusion result when the radar sensor data is reliable; and obtaining a 3D track of the vehicle according to the 3D space position information of the vehicle target.
Further, the system further comprises: the identification module 23 is configured to perform current rain and snow weather identification and degree identification according to the visible light camera image data; and the judging module 24 is used for judging whether the data of the radar sensor is reliable or not according to the rain and snow weather identification result and the degree identification result.
Further, the system further comprises: a preprocessing module 25, configured to preprocess the laser point cloud data obtained by the radar sensor; the obtaining module 21 is further configured to obtain a vehicle point cloud target from the preprocessed laser point cloud data according to a preset clustering algorithm.
It should be noted that point cloud preprocessing can eliminate interference such as leaves, birds, the ground, green belts and traffic signboards, further improving analysis reliability; clustering by the nearest neighbor principle then yields point cloud targets of whole or partial vehicles, such as license plates, wheel hubs and headlight rings.
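A minimal nearest-neighbor (Euclidean) clustering sketch; the eps radius and minimum cluster size are illustrative, and a real system would use a KD-tree rather than this O(n²) scan:

```python
from math import dist

def euclidean_cluster(points, eps=0.5, min_pts=3):
    """Group 3D points whose neighbours lie within eps of some cluster
    member; drop clusters smaller than min_pts (stray returns, leaves)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:                      # region-grow from the seed
            i = frontier.pop()
            near = [j for j in unvisited
                    if dist(points[i], points[j]) <= eps]
            for j in near:
                unvisited.discard(j)
            cluster.extend(near)
            frontier.extend(near)
        if len(cluster) >= min_pts:
            clusters.append(sorted(cluster))
    return clusters
```

Each surviving cluster is a candidate vehicle (or vehicle part) to be matched against the fused 2D detections.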
Further, the obtaining module 21 is specifically configured to, when the radar sensor data is reliable, obtain 3D spatial position information of the vehicle target according to the vehicle point cloud target, the joint calibration data, and the 2D target detection fusion result.
Further, the obtaining module 21 is specifically configured to obtain a vehicle 3D trajectory through a Kalman filtering target tracking algorithm and a data association algorithm according to the vehicle target 3D spatial position information.
Further, the acquisition module 21 is further configured to perform binocular intersection on the image information and configuration parameters acquired by the visible light camera and the infrared thermal imaging camera when the radar sensor data is unreliable, so as to obtain the 3D spatial position information of the vehicle target. That is, when the radar sensor data is unreliable, the embodiment of the invention obtains the 3D spatial position of the vehicle target through binocular intersection, realizing a dual 3D positioning method that stably obtains the vehicle position information.
Further, the system further comprises: a configuration module 26; the configuration module is used for acquiring the image brightness corresponding to the visible light camera through the image data acquired by the visible light camera and acquiring the edge definition of the image by utilizing a preset edge extraction algorithm; acquiring the illumination condition of the scene according to the edge definition; and respectively configuring confidence weights for the visible light camera and the infrared thermal imaging camera according to the current illumination condition.
Further, the system further comprises a synchronization module 27, used for time-synchronizing the visible light camera, the infrared thermal imaging camera, and the radar sensor. The synchronization module 27 is further configured to select the radar coordinate system as the world coordinate system; acquire the internal parameters of the visible light camera and the infrared thermal imaging camera with a preset calibration object; place a plurality of rectangular calibration plates within the fields of view of the visible light camera, the infrared thermal imaging camera, and the radar sensor; detect the corner positions of the rectangular calibration plates in the image data collected by the visible light camera and the infrared thermal imaging camera; identify the calibration plate edge point clouds in the data collected by the radar sensor and fit the point clouds of the four edges separately to obtain the three-dimensional line segment positions of the four edges of each calibration plate; and, from these three-dimensional line segments, obtain the least-squares three-dimensional coordinates of the four corners of each calibration plate in the point cloud data and solve the spatial synchronization relation between the radar sensor and the visible light camera and the infrared thermal imaging camera by the intersection principle.
The calibration objects include, but are not limited to, checkerboards, star calibration stands, and the like. Specifically, for image–millimeter-wave-radar synchronization in the roadside parking scenario, the transformation matrix between the image and the millimeter wave radar is calculated according to the positions of vehicles in the scene.
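The least-squares corner recovery from the fitted edge lines can be sketched as follows: each edge is fit with a 3D line, and the "corner" is the point minimizing the summed squared distance to both lines, since noisy point cloud edges rarely intersect exactly. The SVD line fit and the two-line setup are illustrative:

```python
import numpy as np

def line_fit(points):
    """Fit a 3D line to edge points: returns (centroid, unit direction),
    the direction being the principal axis of the point scatter."""
    pts = np.asarray(points, float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]

def corner_from_edges(edge_a, edge_b):
    """Least-squares 'intersection' of two fitted edge lines: the point
    minimizing the summed squared distance to both lines."""
    (ca, da), (cb, db) = line_fit(edge_a), line_fit(edge_b)
    # distance to a line through c with unit direction d is
    # ||(I - d d^T)(x - c)||, so sum the projectors and solve A x = b
    A, b = np.zeros((3, 3)), np.zeros(3)
    for c, d in ((ca, da), (cb, db)):
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ c
    return np.linalg.solve(A, b)
```

Applying this to adjacent edge pairs yields the four 3D corners of each calibration plate in the radar frame, which are then matched with the corners detected in the camera images.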
It should be noted that the vehicle detection method adopted in the embodiment of the present invention includes, but is not limited to, target detection networks such as YOLO, SSD, and CenterNet; the intersection-over-union calculation method includes, but is not limited to, IoU, CIoU, DIoU, and GIoU; vehicle instance segmentation methods include, but are not limited to, Mask R-CNN, BlendMask, SOLO, PolarMask, and the like; and the data association algorithm includes, but is not limited to, the Hungarian algorithm.
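The role the Hungarian algorithm plays in data association — choosing the assignment of detections to tracks with minimal total cost — can be illustrated with a brute-force equivalent that is adequate for a handful of parked cars; the 1D positions and the gating distance are simplifying assumptions:

```python
from itertools import permutations

def associate(tracks, detections, gate=2.0):
    """Optimal track-to-detection assignment by total distance (the role
    the Hungarian algorithm fills; brute force suffices for small scenes).
    Pairs further apart than `gate` are left unmatched."""
    n, m = len(tracks), len(detections)
    best, best_cost = None, float("inf")
    for perm in permutations(range(m), min(n, m)):
        pairs = [(i, j) for i, j in zip(range(n), perm)
                 if abs(tracks[i] - detections[j]) <= gate]
        cost = sum(abs(tracks[i] - detections[j]) for i, j in pairs)
        key = (-len(pairs), cost)   # prefer more matches, then lower cost
        if best is None or key < (-len(best), best_cost):
            best, best_cost = pairs, cost
    return best or []
```

A production system would replace this with a real Hungarian solver and use distances between predicted 3D positions and fused detections as the cost.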
According to the roadside parking vehicle track recognition system based on multi-sensor sensing provided by the embodiment of the invention, acquiring vehicle position information on the image through the sensing advantage of the infrared thermal imaging camera and a detection and recognition network solves the problem that vehicles are difficult to recognize from visible light camera data under weak illumination; meanwhile, combining laser point cloud 3D data with binocular positioning by the visible light camera and the infrared thermal imaging camera accurately positions vehicle targets of any color, solving the problem that position information of black vehicles is easily lost because the laser radar perceives black objects weakly; in addition, aiming at the problem that laser radar data is easily disturbed in rainy and snowy weather, the weather state and its degree are identified to evaluate whether the laser radar data is reliable, improving the overall automation of the system; finally, the 3D track of the vehicle is acquired from the laser point cloud data obtained by the radar sensor, the combined calibration data and the 2D target detection fusion result, elevating the 2D tracking problem to a 3D tracking problem, and acquiring the 3D information of the vehicle combined with a Kalman filtering algorithm greatly improves the stability and accuracy of vehicle track calculation.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in either the specification or the claims is intended to mean a "non-exclusive or".
Those of skill in the art will further appreciate that the various illustrative logical blocks, units, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate the interchangeability of hardware and software, various illustrative components, elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks, or elements, described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside in different components in a user terminal.
In one or more exemplary designs, the functions described above in connection with the embodiments of the invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on, or transmitted over as, one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media that facilitate transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general purpose or special purpose computer. For example, such computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be read by a general-purpose or special-purpose computer or processor. Additionally, any connection is properly termed a computer-readable medium: if the software is transmitted from a website, server, or other remote source via coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless means such as infrared, radio, or microwave, those media are included in the definition of medium. Disk and disc, as used herein, include compact disc, laser disc, optical disc, DVD, floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically while discs reproduce data optically with lasers. Combinations of the above may also be included in the computer-readable medium.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (18)

1. A roadside parked vehicle track identification method based on multi-sensor perception is characterized by comprising the following steps:
respectively acquiring vehicle target foreground area information according to images acquired by an infrared camera and a visible light camera;
performing 2D target detection fusion according to confidence weights corresponding to a visible light camera and an infrared thermal imaging camera, vehicle target foreground region information and combined calibration data, wherein the combined calibration data is obtained by performing space synchronization on data of the infrared camera, the visible light camera and a radar sensor, and the confidence weights are configured according to current illumination conditions;
when the radar sensor data are reliable, obtaining 3D space position information of the vehicle target according to the laser point cloud data obtained by the radar sensor, the combined calibration data and the 2D target detection fusion result;
and obtaining a 3D track of the vehicle according to the 3D space position information of the vehicle target.
2. The method for roadside parking vehicle track recognition based on multi-sensor sensing of claim 1, wherein when radar sensor data is reliable, before the step of obtaining vehicle target 3D spatial position information according to laser point cloud data obtained by a radar sensor, the combined calibration data and the 2D target detection fusion result, the method further comprises:
identifying current rain and snow weather and its degree according to the visible light camera image data;
and judging whether the data of the radar sensor is reliable or not according to the rain and snow weather identification result and the degree identification result.
3. The method for recognizing the track of the roadside parked vehicle based on multi-sensor perception according to claim 1 or 2, wherein when radar sensor data is reliable, before the step of obtaining the 3D spatial position information of the vehicle target according to the laser point cloud data obtained by the radar sensor, the combined calibration data and the 2D target detection fusion result, the method further comprises:
preprocessing laser point cloud data obtained by a radar sensor;
and acquiring a vehicle point cloud target from the preprocessed laser point cloud data according to a preset clustering algorithm.
4. The method for roadside parked vehicle track recognition based on multi-sensor sensing of claim 3, wherein when radar sensor data is reliable, the step of obtaining vehicle target 3D spatial position information according to laser point cloud data obtained by a radar sensor, the combined calibration data and the 2D target detection fusion result comprises:
and when the radar sensor data is reliable, obtaining the 3D space position information of the vehicle target according to the vehicle point cloud target, the combined calibration data and the 2D target detection fusion result.
5. The method for recognizing the track of the roadside parked vehicle based on multi-sensor perception according to claim 1, wherein the step of obtaining the 3D track of the vehicle according to the 3D spatial position information of the vehicle target comprises:
and obtaining a 3D track of the vehicle through a Kalman filtering target tracking algorithm and a data association algorithm according to the 3D space position information of the vehicle target.
6. The method for roadside parked vehicle trajectory identification based on multi-sensor perception according to claim 1 or 2, further comprising:
and when the data of the radar sensor is unreliable, carrying out binocular intersection on image information and configuration parameters acquired by the visible light camera and the infrared thermal imaging camera to obtain the 3D space position information of the vehicle target.
7. The method for recognizing the track of the roadside parked vehicle based on multi-sensor perception according to claim 1, wherein before the step of respectively acquiring the information of the foreground region of the vehicle target according to the images collected by the infrared camera and the visible light camera, the method further comprises:
acquiring image brightness corresponding to a visible light camera through image data acquired by the visible light camera, and acquiring the edge definition of the image by using a preset edge extraction algorithm;
acquiring the illumination condition of the scene according to the edge definition;
and respectively configuring confidence coefficient weights for the visible light camera and the infrared thermal imaging camera according to the current illumination condition.
8. The method for recognizing the track of the roadside parked vehicle based on multi-sensor perception according to claim 1, wherein before the step of respectively acquiring the information of the foreground region of the vehicle target according to the images collected by the infrared camera and the visible light camera, the method further comprises:
time synchronizing the visible light camera, the infrared thermal imaging camera, and the radar sensor.
9. The method for recognizing the track of the roadside parked vehicle based on multi-sensor perception according to claim 1, wherein before the step of respectively acquiring the information of the foreground region of the vehicle target according to the images collected by the infrared camera and the visible light camera, the method further comprises:
selecting a radar coordinate system as a world coordinate system;
acquiring internal parameters of a visible light camera and an infrared thermal imaging camera by using a preset calibration object;
configuring a plurality of rectangular calibration plates in the field of view ranges of a visible light camera, an infrared thermal imaging camera and a radar sensor;
detecting the corner position of a rectangular calibration plate in image data collected by a visible light camera and an infrared thermal imaging camera;
identifying point clouds at the edge of the calibration plate on data collected by a radar sensor, and respectively fitting the point clouds at the four edges of the calibration plate to obtain the three-dimensional line segment positions of the four edges of each calibration plate;
and acquiring the least-squares three-dimensional coordinates of the four corners of the calibration plate in the point cloud data through the three-dimensional line segments, and solving the spatial synchronization relation between the radar sensor and the visible light camera and the infrared thermal imaging camera through an intersection principle.
10. A roadside parked vehicle trajectory recognition system based on multi-sensor perception, the system comprising:
the acquisition module is used for respectively acquiring the information of the foreground area of the vehicle target according to the images acquired by the infrared camera and the visible light camera;
the detection fusion module is used for carrying out 2D target detection fusion according to confidence weights corresponding to the visible light camera and the infrared thermal imaging camera, vehicle target foreground region information and combined calibration data, wherein the combined calibration data is obtained by carrying out space synchronization on data of the infrared camera, the visible light camera and the radar sensor, and the confidence weights are configured according to the current illumination condition;
the acquisition module is further used for acquiring 3D space position information of the vehicle target according to the laser point cloud data obtained by the radar sensor, the combined calibration data and the 2D target detection fusion result when the radar sensor data are reliable; and obtaining a 3D track of the vehicle according to the 3D space position information of the vehicle target.
11. The system of claim 10, further comprising:
the identification module is used for identifying the current rain and snow weather and the degree according to the image data of the visible light camera;
and the judging module is used for judging whether the data of the radar sensor is reliable or not according to the rain and snow weather identification result and the degree identification result.
12. The system of claim 10 or 11, further comprising:
the preprocessing module is used for preprocessing the laser point cloud data obtained by the radar sensor;
the acquisition module is also used for acquiring a vehicle point cloud target from the preprocessed laser point cloud data according to a preset clustering algorithm.
13. The roadside parked vehicle trajectory recognition system based on multi-sensor perception according to claim 12, wherein
the acquisition module is specifically used for acquiring 3D space position information of the vehicle target according to the vehicle point cloud target, the combined calibration data and the 2D target detection fusion result when the radar sensor data is reliable.
14. The roadside parked vehicle trajectory recognition system based on multi-sensor perception according to claim 10, wherein
the acquisition module is further specifically used for obtaining a vehicle 3D track through a Kalman filtering target tracking algorithm and a data association algorithm according to the vehicle target 3D space position information.
15. The system for roadside parked vehicle trajectory recognition based on multi-sensor perception according to claim 10 or 11,
the acquisition module is further used for carrying out binocular intersection on image information and configuration parameters acquired by the visible light camera and the infrared thermal imaging camera when the radar sensor data are unreliable, and obtaining the 3D space position information of the vehicle target.
16. The system of claim 10, further comprising: a configuration module;
the configuration module is used for acquiring the image brightness corresponding to the visible light camera through the image data acquired by the visible light camera and acquiring the edge definition of the image by utilizing a preset edge extraction algorithm; acquiring the illumination condition of the scene according to the edge definition; and respectively configuring confidence coefficient weights for the visible light camera and the infrared thermal imaging camera according to the current illumination condition.
17. The system of claim 10, further comprising: a synchronization module;
and the time synchronization module is used for performing time synchronization on the visible light camera, the infrared thermal imaging camera and the radar sensor.
18. The roadside parked vehicle trajectory recognition system based on multi-sensor perception according to claim 17, wherein
the synchronization module is also used for selecting a radar coordinate system as a world coordinate system; acquiring internal parameters of a visible light camera and an infrared thermal imaging camera by using a preset calibration object; configuring a plurality of rectangular calibration plates in the field of view ranges of a visible light camera, an infrared thermal imaging camera and a radar sensor; detecting the corner position of a rectangular calibration plate in image data collected by a visible light camera and an infrared thermal imaging camera; identifying point clouds at the edge of the calibration plate on data collected by a radar sensor, and respectively fitting the point clouds at the four edges of the calibration plate to obtain the three-dimensional line segment positions of the four edges of each calibration plate; and acquiring the least-squares three-dimensional coordinates of the four corners of the calibration plate in the point cloud data through the three-dimensional line segments, and solving the spatial synchronization relation between the radar sensor and the visible light camera and the infrared thermal imaging camera through an intersection principle.
CN202011125302.3A 2020-10-20 2020-10-20 Roadside parking vehicle track identification method and system based on multi-sensor sensing Withdrawn CN112509333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011125302.3A CN112509333A (en) 2020-10-20 2020-10-20 Roadside parking vehicle track identification method and system based on multi-sensor sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011125302.3A CN112509333A (en) 2020-10-20 2020-10-20 Roadside parking vehicle track identification method and system based on multi-sensor sensing

Publications (1)

Publication Number Publication Date
CN112509333A true CN112509333A (en) 2021-03-16

Family

ID=74954181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011125302.3A Withdrawn CN112509333A (en) 2020-10-20 2020-10-20 Roadside parking vehicle track identification method and system based on multi-sensor sensing

Country Status (1)

Country Link
CN (1) CN112509333A (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255779A (en) * 2021-05-28 2021-08-13 中国航天科工集团第二研究院 Multi-source perception data fusion identification method and system and computer readable storage medium
CN113420805A (en) * 2021-06-21 2021-09-21 车路通科技(成都)有限公司 Dynamic track image fusion method, device, equipment and medium for video and radar
CN113433548A (en) * 2021-06-24 2021-09-24 中国第一汽车股份有限公司 Data monitoring method, device, equipment and storage medium
CN113516687A (en) * 2021-07-09 2021-10-19 东软睿驰汽车技术(沈阳)有限公司 Target tracking method, device, equipment and storage medium
CN113554866A (en) * 2021-06-03 2021-10-26 广东未来智慧城市科技有限公司 3D vehicle track calculation analysis display system
CN113962301A (en) * 2021-10-20 2022-01-21 北京理工大学 Multi-source input signal fused pavement quality detection method and system
CN114166137A (en) * 2021-11-26 2022-03-11 沪东中华造船(集团)有限公司 Intelligent detection system and method for ship-to-ship filling interval
CN114463976A (en) * 2022-02-09 2022-05-10 超级视线科技有限公司 Vehicle behavior state determination method and system based on 3D vehicle track
CN114913399A (en) * 2022-05-12 2022-08-16 苏州大学 Vehicle track optimization method and intelligent traffic system
CN114999217A (en) * 2022-05-27 2022-09-02 北京筑梦园科技有限公司 Vehicle detection method and device and parking management system
CN114999216A (en) * 2022-05-27 2022-09-02 北京筑梦园科技有限公司 Vehicle detection method and device and parking management system
CN115131980A (en) * 2022-04-20 2022-09-30 汉得利(常州)电子股份有限公司 Target identification system and method for intelligent automobile road driving
WO2022206978A1 (en) * 2021-01-01 2022-10-06 许军 Roadside millimeter-wave radar calibration method based on vehicle-mounted positioning apparatus
CN115359681A (en) * 2022-07-20 2022-11-18 贵州大学 Optimized layout method of roadside structure light cameras supporting automatic driving
CN115861366A (en) * 2022-11-07 2023-03-28 成都融达昌腾信息技术有限公司 Multi-source perception information fusion method and system for target detection
CN116304994A (en) * 2023-05-22 2023-06-23 浙江交科交通科技有限公司 Multi-sensor target data fusion method, device, equipment and storage medium
CN116449347A (en) * 2023-06-14 2023-07-18 蘑菇车联信息科技有限公司 Calibration method and device of roadside laser radar and electronic equipment
CN116679319A (en) * 2023-07-28 2023-09-01 深圳市镭神智能***有限公司 Multi-sensor combined tunnel early warning method, system, device and storage medium
CN117095540A (en) * 2023-10-18 2023-11-21 四川数字交通科技股份有限公司 Early warning method and device for secondary road accidents, electronic equipment and storage medium
CN117197019A (en) * 2023-11-07 2023-12-08 山东商业职业技术学院 Vehicle three-dimensional point cloud image fusion method and system
CN117238143A (en) * 2023-09-15 2023-12-15 北京卓视智通科技有限责任公司 Traffic data fusion method, system and device based on radar double-spectrum camera
CN117636671A (en) * 2024-01-24 2024-03-01 四川君迪能源科技有限公司 Cooperation scheduling method and system for intelligent vehicle meeting of rural roads


Similar Documents

Publication Publication Date Title
CN112509333A (en) Roadside parking vehicle track identification method and system based on multi-sensor sensing
US11080995B2 (en) Roadway sensing systems
US10140855B1 (en) Enhanced traffic detection by fusing multiple sensor data
AU2014202300B2 (en) Traffic monitoring system for speed measurement and assignment of moving vehicles in a multi-target recording module
WO2019071212A1 (en) System and method of determining a curve
CN112740225B (en) Method and device for determining road surface elements
WO2014160027A1 (en) Roadway sensing systems
CN111582256A (en) Parking management method and device based on radar and visual information
WO2019198076A1 (en) Real-time raw data- and sensor fusion
CN111739338A (en) Parking management method and system based on multiple types of sensors
Wang et al. A roadside camera-radar sensing fusion system for intelligent transportation
CN113743171A (en) Target detection method and device
Wang et al. Road edge detection in all weather and illumination via driving video mining
CN114550142A (en) Parking space detection method based on fusion of 4D millimeter wave radar and image recognition
US20230177724A1 (en) Vehicle to infrastructure extrinsic calibration system and method
CN114627409A (en) Method and device for detecting abnormal lane change of vehicle
CN116027283A (en) Method and device for automatic calibration of a road side sensing unit
CN111693998A (en) Method and device for detecting vehicle position based on radar and image data
CN114252859A (en) Target area determination method and device, computer equipment and storage medium
US7860640B1 (en) Marker means for determining direction and zoom of a means for viewing
CN115166722B (en) Non-blind-area single-rod multi-sensor detection device for road side unit and control method
CN114863695B (en) Overproof vehicle detection system and method based on vehicle-mounted laser and camera
TW202341006A (en) Object tracking integration method and integrating apparatus
CN115457488A (en) Roadside parking management method and system based on binocular stereo vision
CN113283367A (en) Safety detection method for visual blind area of underground garage in low-visibility environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210316