CN118070232B - Vehicle space-time track chain extraction method based on radar fusion perception - Google Patents


Info

Publication number
CN118070232B
CN118070232B · CN202410461772.9A
Authority
CN
China
Prior art keywords
time
nodes
radar
designated position
data
Prior art date
Legal status
Active
Application number
CN202410461772.9A
Other languages
Chinese (zh)
Other versions
CN118070232A
Inventor
李松明
彭丽娟
李志斌
Current Assignee
Donglan Nanjing Intelligent Technology Co ltd
Original Assignee
Donglan Nanjing Intelligent Technology Co ltd
Filing date
Publication date
Application filed by Donglan Nanjing Intelligent Technology Co ltd
Priority to CN202410461772.9A
Publication of CN118070232A
Application granted
Publication of CN118070232B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a vehicle space-time track chain extraction method based on radar fusion perception, comprising the following steps: data acquisition, data preprocessing, data optimization, association analysis, track processing, data storage, and track visualization. The method relates to the technical field of track extraction. The driving data perceived by the radar and the video camera are processed synchronously, which guarantees the time-sequence consistency of the data and allows the vehicle's running track to be tracked in real time. The data of a plurality of designated position nodes in the target area are target-matched through an association algorithm to form a preliminary vehicle track, improving data-processing efficiency. Meanwhile, the horizontal distance between the target vehicle and the radar position is calculated from the distance and angle parameters through a trigonometric function, so the radar-perceived speed parameter can be effectively verified and calibrated, avoiding large errors in the speed characteristic value caused by inaccurate radar perception data.

Description

Vehicle space-time track chain extraction method based on radar fusion perception
Technical Field
The invention relates to the technical field of track extraction, and in particular to a vehicle space-time track chain extraction method based on radar fusion perception.
Background
With the development of technology, vehicle monitoring technology continues to advance. Radar and video fusion perception is a vehicle monitoring technology that has emerged in recent years. It collects vehicle driving data through radar and video cameras and extracts the vehicle's track chain information through data processing and analysis, thereby realizing real-time vehicle monitoring.
However, when existing vehicle monitoring technology processes and analyzes vehicle running data, problems such as inaccurate data, low processing efficiency, and unstable analysis results often arise. These problems seriously affect the accuracy and efficiency of vehicle monitoring and limit the further development of vehicle monitoring technology.
Therefore, how to process and analyze vehicle driving data accurately and efficiently, so as to extract accurate vehicle track chain information, is an urgent problem in the current vehicle monitoring field.
Disclosure of Invention
The invention aims to provide a vehicle space-time track chain extraction method based on radar fusion perception that solves the technical problems identified in the background section.
The aim of the invention can be achieved by the following technical scheme:
a vehicle space-time track chain extraction method based on radar fusion perception comprises the following steps:
Step one, data acquisition
Acquiring running data of a target vehicle at a designated position node of a target area through a radar and a video camera installed at that node; the driving data comprise distance, speed, and angle parameters of the target vehicle extracted from the radar, and shape information, color information, and license plate information of the target vehicle extracted from the video data;
Step two, data preprocessing
According to the real-time acquisition time point of the running data, selecting one time point as a standard time node, synchronizing the running data perceived by the radar and the video camera, and obtaining a corresponding data packet;
Step three, data optimization
Carrying out target matching on the running data of a plurality of designated position nodes in the target area through a correlation algorithm, and obtaining the characteristic information of the corresponding target vehicle, wherein the characteristic information comprises query characteristics, speed characteristic values and time characteristic nodes;
Step four, association analysis
Searching, at other designated position nodes, for target vehicle characteristic information containing the same query characteristic as one designated position node; calculating an estimated time from the pre-obtained geographic distance and the speed characteristic value; then calculating the time difference between time characteristic nodes of adjacent designated position nodes; and comparing the two to determine a data matching result;
Step five, track processing
Extracting time feature nodes corresponding to running data related to all the position nodes, sequencing the time feature nodes according to time sequence according to time stamps of the time feature nodes, connecting the designated position nodes in the track packet according to time sequence according to sequencing results to form a continuous track chain, and integrating speed feature values corresponding to each node and geographic distance information between adjacent nodes into the track chain;
Step six, storing and displaying
And storing the obtained vehicle track chain information in a database, and simultaneously visually displaying the track chain of the obtained target vehicle on a map.
As a further scheme of the invention: wherein the distance parameter refers to the distance between the radar and the target vehicle;
the speed parameter refers to the running speed of the target vehicle;
the angle parameter refers to the azimuth angle and the pitch angle of the target vehicle monitored by the radar;
Azimuth is the angle measured from the horizontal direction of the radar device to the target and is used to assist the radar in determining the position of the target relative to the radar on the horizontal plane.
The pitch angle is an angle measured from the vertical direction of the radar apparatus to the target, the pitch angle being used to assist the radar in determining the position of the target on the vertical plane;
The shape information refers to the style of the target vehicle;
The color information refers to a color identified for the target vehicle by image data captured by the video camera.
As a further scheme of the invention: the synchronization processing mode is as follows:
Firstly, the distance parameter, speed parameter, angle parameter, shape information, and color information of the target vehicle are extracted, each tagged with its real-time acquisition time point;
Then, within the time period in which the radar and the video camera perceive a passing target, a time point is selected as a standard time node; the distance parameter, speed parameter, angle parameter, shape information, color information, and license plate information obtained at the corresponding time are selected according to the standard time node, and the standard time node is bound together with them into a data packet;
Similarly, selecting a plurality of time points as standard time nodes, and obtaining a corresponding number of data packets;
as a further scheme of the invention: the target matching method is as follows:
StepA1, acquiring data packets of each node at a designated position, and importing the data packets into a pre-trained association matching model;
StepA2, performing locking processing on a target vehicle corresponding to the data packet by using a correlation matching model, and obtaining characteristic information of the target vehicle, wherein the characteristic information comprises a speed characteristic value, a time characteristic node and a query characteristic;
as a further scheme of the invention: the locking treatment mode is as follows:
Step a1, taking shape information, color information and license plate information in each data packet as query characteristics of a target vehicle on a node at a designated position;
Step a2, extracting distance parameters and angle parameters from each data packet, and calculating the horizontal distance between the target vehicle and the radar position through a trigonometric function;
Then extracting two groups of data packets in a pairwise combination mode, acquiring corresponding standard time nodes and horizontal distances from the data packets, and calculating corresponding speed parameters through a speed calculation formula;
Step a3, screening the speed parameters obtained in Step a2 together with the speed parameters perceived by the radar by means of variance analysis, screening out the corresponding speed parameters, calculating their average value, and marking the average value as a speed characteristic value;
Step a4, acquiring all standard time nodes from each data packet on a designated position node, sequencing the standard time nodes according to time sequence, and selecting intermediate values as time feature nodes.
As a further scheme of the invention: the screening by means of analysis of variance was as follows:
All corresponding speed parameters are marked as Si, i = 1, 2, …, n, where n is the number of speed parameters;
They are then substituted into the formula P = (1/n)·Σ(Si − Sp)², summed over i = 1 to n, to calculate a deviation value P of the speed parameters, where Sp is the average of the speed parameters participating in the calculation; P is then compared with a preset deviation threshold Py:
If P is larger than Py, the deviation of this group of speed parameters is too large; the Si values are deleted one by one in descending order of |Si − Sp|, recalculating the deviation value P of the remaining values after each deletion, until P is smaller than or equal to Py; once P ≤ Py, the average of all remaining Si is calculated.
As a further scheme of the invention: the specific mode of the fourth step is as follows:
StepB1, acquiring the characteristic information of the target vehicle with the same query characteristic from other corresponding designated position nodes of the target area according to the characteristic information of one designated position node;
StepB2, acquiring the geographic distance between each two adjacent designated position nodes in the target area on a target map preset in the target area;
Then extracting two speed characteristic values between adjacent designated position nodes, solving the average value of the two speed characteristic values, and calculating the estimated time through a speed calculation formula;
StepB3, extracting a plurality of time feature nodes in each adjacent designated position node, and calculating the time difference of the corresponding two time feature nodes between the adjacent designated position nodes;
StepB4, comparing the estimated time with a plurality of time differences by combining a preset compensation time factor:
if the estimated time plus the preset compensation time factor is less than or equal to the time difference, representing that two pieces of running data on the adjacent designated position nodes correspond, and extracting the group of adjacent designated position nodes and the running data thereof;
And so on, acquiring adjacent designated position nodes corresponding to the comparison result and the running data thereof from the plurality of adjacent designated position nodes until the adjacent designated position nodes do not contain the corresponding two running data;
StepB5, later binding the acquired designated position node and the running data thereof into a track packet;
As a further scheme of the invention: the formula in StepB2 is: estimated time = geographic distance / average of the two speed characteristic values.
As a further scheme of the invention: in StepB4, if the estimated time+the preset compensation time factor > the time difference, it indicates that the two driving data on the adjacent designated position nodes do not correspond, and the next group of comparison is performed.
As a further scheme of the invention: the specific processing mode of the fifth step is as follows:
StepC1, acquiring time feature nodes corresponding to running data in all nodes at specified positions in a track packet;
StepC2, sorting the time feature nodes according to the time sequence, and generating a time sequence table;
StepC3, connecting all the designated position nodes in the track packet according to a time sequence table, obtaining a track chain corresponding to the target vehicle, and adding the corresponding speed characteristic value and the geographic distance between every two adjacent designated position nodes on the track chain.
The invention has the beneficial effects that:
Accuracy: according to the method, the radar and the video monitoring equipment are used for acquiring the running data of the target vehicle, including the distance, the speed, the angle, the shape, the color and the license plate information, so that the running state of the vehicle can be comprehensively and accurately depicted.
Real-time performance: the running data perceived by the radar and the video camera are synchronously processed, so that the time sequence consistency of the data is ensured, and the running track of the vehicle can be tracked in real time.
High efficiency: and carrying out target matching on the data of the plurality of designated position nodes in the target area through a correlation algorithm to form a preliminary vehicle track, so that the data processing efficiency is improved.
Flexibility: the method can be applied to various scenarios, such as expressway monitoring and urban traffic management; a user can identify abnormal behaviors of the target vehicle, such as overspeed and wrong-way driving, according to its track chain and speed characteristic values, and generate corresponding alarm information.
Visibility: and the obtained track chain of the target vehicle is visually displayed on a map, so that the track chain is convenient for a user to understand and analyze.
Data storage: the obtained vehicle track chain information is stored in a database for later query, facilitating the management and use of data.
In general, the invention provides an accurate, real-time, efficient, flexible and visual vehicle space-time track chain extraction method, which has important application value for vehicle monitoring and management.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a flow diagram of a vehicle space-time trajectory chain extraction method based on the radar fusion perception.
Fig. 2 is a schematic diagram of a locking process flow of a vehicle space-time trajectory chain extraction method based on the radar fusion perception.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1 and 2, the invention discloses a vehicle space-time track chain extraction method based on radar fusion perception, which comprises the following steps:
Step one, data acquisition
At a designated position node of a target area, collecting running data of a target vehicle by using a radar and video monitoring equipment equipped for the node;
The driving data comprise distance parameters, speed parameters and angle parameters of the target vehicle extracted from the radar, and shape information, color information and license plate information of the target vehicle extracted from the video data;
wherein the distance parameter refers to the distance between the radar and the target vehicle;
the speed parameter refers to the running speed of the target vehicle;
the angle parameter refers to the azimuth angle and the pitch angle of the target vehicle monitored by the radar;
Azimuth is the angle measured from the horizontal direction of the radar device to the target and is used to assist the radar in determining the position of the target relative to the radar on the horizontal plane.
The pitch angle is an angle measured from the vertical direction of the radar apparatus to the target, the pitch angle being used to assist the radar in determining the position of the target on the vertical plane;
The shape information refers to the style of the target vehicle. In this embodiment, it is obtained as follows: a fixed video camera performs background separation by combining a pre-captured background image with a captured image containing the vehicle, and the resulting vehicle body image is compared with preset vehicle style models; as this technique is prior art, it is not described in detail here;
The color information refers to the color identified by the target vehicle through the image data captured by the video camera;
Step two, data preprocessing
Carrying out synchronous processing on the driving data perceived by the radar and the video camera to ensure the time sequence consistency of the data of the radar and the video camera;
The synchronization processing mode is as follows:
Firstly, the distance parameter, speed parameter, angle parameter, shape information, and color information of the target vehicle are extracted, each tagged with its real-time acquisition time point;
Then, within the time period in which the radar and the video camera perceive a passing target, a time point is selected as a standard time node; the distance parameter, speed parameter, angle parameter, shape information, color information, and license plate information obtained at the corresponding time are selected according to the standard time node, and the standard time node is bound together with them into a data packet;
Similarly, selecting a plurality of time points as standard time nodes, and obtaining a corresponding number of data packets;
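The synchronization and binding step above can be sketched in Python as follows; the record fields, the nearest-in-time selection rule, and the sample values are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class DataPacket:
    time_node: float   # standard time node (s)
    distance: float    # radar distance parameter (m)
    speed: float       # radar speed parameter (m/s)
    azimuth: float     # radar azimuth angle (deg)
    pitch: float       # radar pitch angle (deg)
    shape: str         # vehicle style from video
    color: str         # vehicle color from video
    plate: str         # license plate from video

def bind_packet(t, radar_samples, video_samples):
    """Bind the radar and video readings nearest in time to standard time node t."""
    r = min(radar_samples, key=lambda s: abs(s["t"] - t))
    v = min(video_samples, key=lambda s: abs(s["t"] - t))
    return DataPacket(t, r["dist"], r["speed"], r["az"], r["pitch"],
                      v["shape"], v["color"], v["plate"])

radar = [{"t": 9.8, "dist": 45.0, "speed": 16.0, "az": 2.5, "pitch": 1.8},
         {"t": 10.0, "dist": 42.0, "speed": 16.5, "az": 3.0, "pitch": 2.0}]
video = [{"t": 10.1, "shape": "sedan", "color": "white", "plate": "ABC123"}]
pkt = bind_packet(10.0, radar, video)
print(pkt.distance, pkt.plate)  # 42.0 ABC123
```

Repeating `bind_packet` for several standard time nodes yields the "corresponding number of data packets" described above.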
Step three, data optimization
Carrying out target matching on the running data of a plurality of designated position nodes in the target area through a correlation algorithm, and forming a preliminary vehicle track, wherein the target matching mode is as follows:
StepA1, acquiring data packets of each node at a designated position, and importing the data packets into a pre-trained association matching model;
StepA2, performing locking processing on a target vehicle corresponding to the data packet by using a correlation matching model, and obtaining characteristic information of the target vehicle, wherein the characteristic information comprises a speed characteristic and a query characteristic, and the locking processing mode is as follows:
Step a1, taking shape information, color information and license plate information in each data packet as query characteristics of a target vehicle on a node at a designated position;
Step a2, extracting distance parameters and angle parameters from each data packet, and calculating the horizontal distance between the target vehicle and the radar position through a trigonometric function;
Then extracting two groups of data packets in a pairwise combination mode, acquiring corresponding standard time nodes and horizontal distances from the data packets, and calculating corresponding speed parameters through a speed calculation formula;
Step a3, calculating an average value corresponding to all the speed parameters, and recording the average value as a speed characteristic value;
In the embodiment, the horizontal distance between the target vehicle and the radar position is calculated by combining the distance parameter and the angle parameter through the trigonometric function, so that the speed parameter perceived by the radar can be effectively verified and calibrated, and the error of the speed characteristic value caused by inaccurate radar perception data is avoided;
Step a4, acquiring all standard time nodes from each data packet on a designated position node, sequencing the standard time nodes according to time sequence, and selecting intermediate values as time feature nodes;
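Steps a2–a3 and the verification note above can be sketched in Python; using the cosine of the pitch angle to project the radar's slant range onto the horizontal plane is an assumption about the unspecified trigonometric function:

```python
import math

def horizontal_distance(slant_range, pitch_deg):
    """Project the radar slant range onto the horizontal plane."""
    return slant_range * math.cos(math.radians(pitch_deg))

def speed_between(pkt1, pkt2):
    """Speed from the change in horizontal distance between two data packets."""
    h1 = horizontal_distance(pkt1["dist"], pkt1["pitch"])
    h2 = horizontal_distance(pkt2["dist"], pkt2["pitch"])
    return abs(h2 - h1) / (pkt2["t"] - pkt1["t"])

p1 = {"t": 10.0, "dist": 50.0, "pitch": 0.0}
p2 = {"t": 12.0, "dist": 20.0, "pitch": 0.0}
# 30 m covered in 2 s -> 15 m/s, independent of the radar's own speed reading
print(speed_between(p1, p2))  # 15.0
```

Speeds computed this way can then be averaged with, or checked against, the radar-perceived speed parameters as the embodiment describes.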
Step four, association analysis
StepB1, acquiring the characteristic information of the target vehicle with the same query characteristic from other corresponding designated position nodes of the target area according to the characteristic information of one designated position node;
StepB2, acquiring the geographic distance between each two adjacent designated position nodes in the target area on a target map preset in the target area;
Then extracting two speed characteristic values between adjacent designated position nodes, solving the average value of the two speed characteristic values, and calculating the estimated time through a speed calculation formula;
In this embodiment, the formula is:
Estimated time = geographic distance/average of two speed eigenvalues;
StepB3, extracting a plurality of time feature nodes in each adjacent designated position node, and calculating the time difference of the corresponding two time feature nodes between the adjacent designated position nodes;
In this embodiment, each pair of adjacent designated position nodes contains a plurality of time feature nodes because the target vehicle may pass the location multiple times in different time periods, generating multiple corresponding pieces of driving data;
StepB4, comparing the estimated time with a plurality of time differences by combining a preset compensation time factor:
If the estimated time plus the preset compensation time factor is larger than the time difference, the two pieces of running data on the adjacent designated position nodes are not corresponding, and the next group of comparison is carried out;
if the estimated time plus the preset compensation time factor is less than or equal to the time difference, representing that two pieces of running data on the adjacent designated position nodes correspond, and extracting the group of adjacent designated position nodes and the running data thereof;
And so on, acquiring adjacent designated position nodes corresponding to the comparison result and the running data thereof from the plurality of adjacent designated position nodes until the adjacent designated position nodes do not contain the corresponding two running data;
StepB5, later binding the acquired designated position node and the running data thereof into a track packet;
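The comparison rule of StepB2–StepB4 can be sketched as follows; the compensation value of 2.0 s is an illustrative assumption, since the patent leaves the factor to be preset by the practitioner:

```python
def records_match(geo_distance, v1, v2, time_diff, compensation=2.0):
    """Decide whether two driving-data records at adjacent designated
    position nodes correspond to the same vehicle passage.

    estimated time = geographic distance / mean of the two speed feature
    values; per the rule stated above, the records correspond when
    estimated time + compensation factor <= observed time difference.
    """
    estimated = geo_distance / ((v1 + v2) / 2.0)
    return estimated + compensation <= time_diff

# 300 m at a mean of 15 m/s -> estimated 20 s; with 2 s compensation,
# an observed gap of 25 s matches, an observed gap of 21 s does not
print(records_match(300.0, 14.0, 16.0, 25.0))  # True
print(records_match(300.0, 14.0, 16.0, 21.0))  # False
```

Each matching pair of nodes and its driving data would then be bound into the track packet of StepB5.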
Step five, track processing
StepC1, acquiring time feature nodes corresponding to running data in all nodes at specified positions in a track packet;
StepC2, sorting the time feature nodes according to the time sequence, and generating a time sequence table;
StepC3, connecting all the designated position nodes in the track packet according to a time sequence table, so as to obtain a track chain corresponding to the target vehicle, and adding the corresponding speed characteristic value and the geographic distance between every two adjacent designated position nodes on the track chain;
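StepC1–StepC3 amount to a sort-and-chain operation over the track packet; a minimal sketch with illustrative field names:

```python
def build_track_chain(track_packet):
    """Order the designated position nodes by time feature node and connect
    them into a chain, attaching the speed feature value and geographic
    distance to each link (field names are illustrative)."""
    ordered = sorted(track_packet, key=lambda rec: rec["time_node"])
    chain = []
    for prev, cur in zip(ordered, ordered[1:]):
        chain.append({
            "from": prev["node"], "to": cur["node"],
            "speed": prev["speed"],                    # speed feature value at departing node
            "geo_distance": prev.get("dist_to_next"),  # distance to the next node
        })
    return ordered, chain

packet = [
    {"node": "B", "time_node": 120.0, "speed": 15.2, "dist_to_next": 450.0},
    {"node": "A", "time_node": 60.0,  "speed": 14.8, "dist_to_next": 300.0},
    {"node": "C", "time_node": 180.0, "speed": 16.0, "dist_to_next": None},
]
ordered, chain = build_track_chain(packet)
print([r["node"] for r in ordered])  # ['A', 'B', 'C']
```

The `ordered` list is the time sequence table of StepC2, and `chain` is the continuous track chain of StepC3.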
Example two
Referring to fig. 1 and fig. 2, as a second embodiment of the present application, the technical solution of this embodiment differs from the first embodiment only in that Step a3 is replaced as follows:
screening the speed parameters obtained in Step a2 together with the speed parameters perceived by the radar by means of variance analysis, screening out the corresponding speed parameters, and then calculating their average value;
The specific mode is as follows:
All corresponding speed parameters are marked as Si, i = 1, 2, …, n, where n is the number of speed parameters;
They are then substituted into the formula P = (1/n)·Σ(Si − Sp)², summed over i = 1 to n, to calculate a deviation value P of the speed parameters, where Sp is the average of the speed parameters participating in the calculation; P is then compared with a preset deviation threshold Py:
If P is larger than Py, the deviation of this group of speed parameters is too large; the Si values are deleted one by one in descending order of |Si − Sp|, recalculating the deviation value P of the remaining values after each deletion, until P is smaller than or equal to Py; once P ≤ Py, the average of all remaining Si is calculated, and this average value is recorded as the speed characteristic value;
According to the method, the influence of abnormal values on calculation of the speed characteristic values can be reduced by eliminating the speed parameters with larger deviation mean values, so that the accuracy of the speed characteristic values is improved, the screened speed parameters are more stable, and the reliability of subsequent analysis is improved due to the fact that the speed parameters are closer to the mean level of whole data; in general, the method makes the obtained statistical result more robust and reliable, and is suitable for various data analysis and decision support scenes.
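A runnable sketch of this screening loop, assuming the deviation value P is the variance of the retained speeds (the formula as reconstructed above) and an illustrative threshold Py = 1.0:

```python
def speed_feature_value(speeds, py):
    """Iteratively drop the speed farthest from the mean until the
    deviation value P = (1/n)·Σ(Si − Sp)² falls to Py or below,
    then return the mean of the remaining speeds."""
    s = list(speeds)
    while len(s) > 1:
        sp = sum(s) / len(s)
        p = sum((x - sp) ** 2 for x in s) / len(s)
        if p <= py:
            break
        # remove the value with the largest |Si - Sp| and recompute P
        s.remove(max(s, key=lambda x: abs(x - sp)))
    return sum(s) / len(s)

# one outlier (40.0) inflates the variance and is removed first
print(speed_feature_value([15.0, 15.5, 14.5, 40.0], py=1.0))  # 15.0
```

With the outlier removed, the remaining speeds cluster near the whole-data mean, which is the robustness property the paragraph above describes.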
Example III
As a third embodiment of the present application, the technical solution of this embodiment combines the solutions of the first and second embodiments; it differs from them only in that it additionally includes the following steps:
step six, data storage
The obtained vehicle track chain information is stored in a database for later inquiry;
Step seven, track visualization
The obtained track chain of the target vehicle is visually displayed on a map, so that the user can understand and analyze the track chain conveniently;
Example IV
As a fourth embodiment of the present application, the technical solution of this embodiment combines the solutions of the first, second, and third embodiments.
This embodiment may be applied to various scenarios, such as expressway monitoring and urban traffic management. A user may identify abnormal behavior of the target vehicle, such as overspeed and wrong-way driving, according to its track chain and speed characteristic values, and generate corresponding alarm information. Overspeed behavior can be determined by analyzing the speed characteristic value in combination with the geographic distance between adjacent designated position nodes, and wrong-way driving can be determined from a radar and video camera arranged on one side of the road.
The above formulas are all dimensionless formulas for numerical calculation, obtained by software simulation of a large amount of collected data so as to approximate the real situation; the preset parameters and threshold values in the formulas are set by those skilled in the art according to the actual situation.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. The vehicle space-time track chain extraction method based on the radar fusion perception is characterized by comprising the following steps of:
Step one, acquiring running data of a target vehicle in a designated position node of a target area through a radar and a video camera on the designated position node; the driving data comprise distance parameters, speed parameters and angle parameters of the target vehicle extracted from the radar, and shape information, color information and license plate information of the target vehicle extracted from the video data;
Step two, according to the real-time acquisition time point of the running data, selecting one time point as a standard time node, synchronizing the running data perceived by the radar and the video camera, and obtaining a corresponding data packet, wherein the synchronous processing mode is as follows:
firstly, extracting a distance parameter, a speed parameter, an angle parameter, shape information and color information of a target vehicle, wherein the distance parameter, the speed parameter, the angle parameter, the shape information and the color information correspond to time points acquired in real time;
Then in the time period that the radar and the video camera sense that a target passes, a time point is selected as a standard time node, and distance parameters, speed parameters, angle parameters, shape information, color information and license plate information which are obtained in corresponding time are selected according to the standard time node, and then the standard time node and the distance parameters, the speed parameters, the angle parameters, the shape information, the color information and the license plate information are bound into a data packet;
Similarly, selecting a plurality of time points as standard time nodes, and obtaining a corresponding number of data packets;
Step three, carrying out target matching on the running data of a plurality of designated position nodes in the target area through an association algorithm, and obtaining corresponding characteristic information of the target vehicle, wherein the characteristic information comprises query characteristics, speed characteristic values and time characteristic nodes;
Step four, searching the other designated position nodes for target vehicle characteristic information containing the same query characteristics as those of one designated position node, calculating an estimated time from the pre-acquired geographic distance and the speed characteristic value, then calculating the time difference between the time characteristic nodes of adjacent designated position nodes, and comparing the two to determine the data matching result;
Step five, extracting the time characteristic nodes corresponding to the running data of all the designated position nodes, sorting them in chronological order by their time stamps, connecting the designated position nodes in the track packet in chronological order according to the sorting result to form a continuous track chain, and integrating the speed characteristic value of each node and the geographic distance between adjacent nodes into the track chain;
Step six, storing the obtained vehicle track chain information in a database, and simultaneously visually displaying the track chain of the target vehicle on a map.
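The synchronization in step two pairs radar and video records against a chosen standard time node. A minimal sketch of this binding, assuming simple `(timestamp, payload)` record lists sorted by time (the record structure and function names are illustrative, not taken from the claims):

```python
from bisect import bisect_left

def bind_packet(std_time, radar_records, video_records):
    """Bind the radar and video records nearest to a standard time node
    into one data packet. Each record list holds (timestamp, payload dict)
    tuples sorted by timestamp."""
    def nearest(records):
        times = [t for t, _ in records]
        i = bisect_left(times, std_time)
        # Consider the records on either side of the insertion point.
        candidates = records[max(0, i - 1):i + 1]
        return min(candidates, key=lambda r: abs(r[0] - std_time))[1]

    packet = {"time": std_time}
    packet.update(nearest(radar_records))   # distance, speed, angle
    packet.update(nearest(video_records))   # shape, color, license plate
    return packet
```

Selecting several standard time nodes within the perception window and calling `bind_packet` for each yields the plurality of data packets described in the claim.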
2. The method for extracting a space-time track chain of a vehicle based on the radar fusion perception according to claim 1, wherein the distance parameter refers to the distance between the radar and the target vehicle;
the speed parameter refers to the running speed of the target vehicle;
the angle parameter refers to the azimuth angle and the pitch angle of the target vehicle monitored by the radar;
The azimuth angle is the angle, measured in the horizontal plane, from the radar device's reference direction to the target; it is used to help the radar determine the target's position relative to the radar in the horizontal plane;
The pitch angle is the angle, measured in the vertical plane, from the radar device's reference direction to the target; it is used to help the radar determine the target's position in the vertical plane;
The shape information refers to the style of the target vehicle;
The color information refers to a color identified for the target vehicle by image data captured by the video camera.
3. The vehicle space-time trajectory chain extraction method based on the radar fusion perception according to claim 1, wherein the target matching mode is as follows:
StepA1, acquiring data packets of each node at a designated position, and importing the data packets into a pre-trained association matching model;
StepA2, performing locking processing on the target vehicle corresponding to the data packet by using the association matching model, and obtaining characteristic information of the target vehicle, wherein the characteristic information comprises a speed characteristic value, a time characteristic node and a query characteristic.
4. The vehicle space-time trajectory chain extraction method based on the radar fusion perception according to claim 3, wherein the locking processing mode is as follows:
Step a1, taking shape information, color information and license plate information in each data packet as query characteristics of a target vehicle on a node at a designated position;
Step a2, extracting distance parameters and angle parameters from each data packet, and calculating the horizontal distance between the target vehicle and the radar position through a trigonometric function;
Then extracting two groups of data packets in a pairwise combination mode, acquiring corresponding standard time nodes and horizontal distances from the data packets, and calculating corresponding speed parameters through a speed calculation formula;
Step a3, screening the speed parameters obtained in step a2 together with the speed parameters perceived by the radar in a variance analysis mode, screening out the corresponding speed parameters, calculating their average value, and marking the average value as the speed characteristic value;
Step a4, acquiring all standard time nodes from each data packet on a designated position node, sequencing the standard time nodes according to time sequence, and selecting intermediate values as time feature nodes.
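Steps a2 and a4 can be sketched briefly: the horizontal distance is the radar's slant range projected through the pitch angle, a speed follows from two data packets, and the time feature node is the middle of the sorted standard time nodes. This is a hedged sketch; degree-valued angles and the dictionary keys are assumptions, not specified in the claims.

```python
import math

def horizontal_distance(slant_range, pitch_deg):
    # Step a2: project the radar's slant range onto the horizontal plane
    # using the pitch angle (assumed here to be given in degrees).
    return slant_range * math.cos(math.radians(pitch_deg))

def speed_from_packets(pkt_a, pkt_b):
    # Step a2: speed = change in horizontal distance / change in
    # standard time nodes between two data packets.
    d_a = horizontal_distance(pkt_a["distance"], pkt_a["pitch"])
    d_b = horizontal_distance(pkt_b["distance"], pkt_b["pitch"])
    return abs(d_b - d_a) / (pkt_b["time"] - pkt_a["time"])

def time_feature_node(standard_times):
    # Step a4: sort the standard time nodes and take the middle value.
    s = sorted(standard_times)
    return s[len(s) // 2]
```

Speeds computed this way from packet pairs serve as the cross-check against the radar-perceived speed parameter in step a3.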
5. The method for extracting the space-time trajectory chain of the vehicle based on the radar fusion perception according to claim 4, wherein the screening mode by the analysis of variance is as follows:
Marking all corresponding speed parameters as Si, i = 1, 2, …, n, where n is the number of speed parameters;
then substituting them into the formula P = (1/n)·Σ(Si − Sp)² to calculate the deviation value P of the speed parameters, wherein Sp is the average value of the speed parameters participating in the calculation of the deviation value, and then comparing P with a preset deviation threshold Py:
If P > Py, the deviation of this group of speed parameters is too large; the Si values are then deleted one by one in descending order of |Si − Sp|, and the deviation value P of the remaining values is recalculated after each deletion, until P ≤ Py; the Si values remaining when P ≤ Py are retained, and the average value of all these Si is calculated.
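The screening loop of claim 5 can be sketched as follows. The exact form of the deviation value P is not reproduced in this text, so the mean squared deviation is assumed here; the function name and threshold handling are likewise illustrative.

```python
def speed_feature_value(speeds, threshold):
    """Iteratively discard the speed farthest from the mean until the
    deviation value P (assumed mean squared deviation) falls to the
    threshold Py, then return the average of the remaining speeds."""
    s = list(speeds)
    while len(s) > 1:
        sp = sum(s) / len(s)
        p = sum((x - sp) ** 2 for x in s) / len(s)  # deviation value P
        if p <= threshold:
            break
        # Delete the Si with the largest |Si - Sp| and recompute.
        s.remove(max(s, key=lambda x: abs(x - sp)))
    return sum(s) / len(s)
```

With one outlier in the input, the loop drops it on the first pass and the remaining tightly clustered speeds pass the threshold, so the returned speed characteristic value reflects only the consistent measurements.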
6. The vehicle space-time trajectory chain extraction method based on the radar fusion perception according to claim 1, wherein the specific mode of the fourth step is as follows:
StepB1, acquiring the characteristic information of the target vehicle with the same query characteristic from other corresponding designated position nodes of the target area according to the characteristic information of one designated position node;
StepB2, acquiring the geographic distance between each two adjacent designated position nodes in the target area on a target map preset in the target area;
then extracting the two speed characteristic values of the adjacent designated position nodes, taking their average value, and calculating the estimated time through the speed calculation formula;
StepB3, extracting a plurality of time feature nodes in each adjacent designated position node, and calculating the time difference of the corresponding two time feature nodes between the adjacent designated position nodes;
StepB4, comparing the estimated time with a plurality of time differences by combining a preset compensation time factor:
if the estimated time plus the preset compensation time factor is less than or equal to the time difference, representing that two pieces of running data on the adjacent designated position nodes correspond, and extracting the group of adjacent designated position nodes and the running data thereof;
And so on, acquiring adjacent designated position nodes corresponding to the comparison result and the running data thereof from the plurality of adjacent designated position nodes until the adjacent designated position nodes do not contain the corresponding two running data;
StepB5, binding the obtained designated position node and the obtained running data thereof into a track packet.
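The comparison in StepB2 through StepB4 reduces to a single predicate per pair of adjacent designated position nodes. A minimal sketch, assuming scalar inputs (the function name and parameter layout are illustrative):

```python
def nodes_match(geo_distance, speed_a, speed_b, time_a, time_b, compensation):
    # StepB2: estimated travel time = geographic distance / average of
    # the two speed characteristic values.
    estimated = geo_distance / ((speed_a + speed_b) / 2)
    # StepB3: observed time difference between the two time feature nodes.
    time_diff = abs(time_b - time_a)
    # StepB4: the two pieces of running data correspond if the estimated
    # time plus the compensation time factor does not exceed the difference.
    return estimated + compensation <= time_diff
```

Matching pairs found this way are the ones bound into the track packet in StepB5; a failed comparison moves on to the next group, as claim 8 describes.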
7. The method for extracting a vehicle space-time trajectory chain based on radar fusion perception according to claim 6, wherein the speed calculation formula in StepB2 is as follows: estimated time = geographic distance / average of the two speed characteristic values.
8. The method for extracting a vehicle space-time trajectory chain based on radar fusion perception according to claim 6, wherein in StepB4, if the estimated time + the preset compensation time factor > the time difference, it indicates that the two pieces of running data on the adjacent designated position nodes do not correspond, and the next group of comparisons is performed.
9. The vehicle space-time trajectory chain extraction method based on the radar fusion perception according to claim 1, wherein the specific processing mode of the fifth step is as follows:
StepC1, acquiring time feature nodes corresponding to running data in all nodes at specified positions in a track packet;
StepC2, sorting the time feature nodes according to the time sequence, and generating a time sequence table;
StepC3, connecting all the designated position nodes in the track packet according to a time sequence table, obtaining a track chain corresponding to the target vehicle, and adding the corresponding speed characteristic value and the geographic distance between every two adjacent designated position nodes on the track chain.
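StepC1 through StepC3 amount to a chronological sort followed by pairwise linking. A sketch under assumed data structures (a track packet as a list of per-node dicts, and a mapping of node pairs to geographic distances; none of these names come from the claims):

```python
def build_track_chain(track_packet, geo_distance):
    """Order the designated position nodes by their time feature nodes
    and link adjacent nodes, attaching speed and distance to each link."""
    # StepC2: sort by time feature node to produce the time sequence table.
    ordered = sorted(track_packet, key=lambda r: r["time_feature"])
    chain = []
    for prev, curr in zip(ordered, ordered[1:]):
        # StepC3: connect adjacent nodes and annotate the link.
        chain.append({
            "from": prev["node"],
            "to": curr["node"],
            "speed": prev["speed_feature"],
            "distance": geo_distance[(prev["node"], curr["node"])],
        })
    return chain
```

The resulting list of annotated links is the continuous track chain that step six then stores in the database and renders on the map.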
CN202410461772.9A 2024-04-17 Vehicle space-time track chain extraction method based on radar fusion perception Active CN118070232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410461772.9A CN118070232B (en) 2024-04-17 Vehicle space-time track chain extraction method based on radar fusion perception


Publications (2)

Publication Number Publication Date
CN118070232A CN118070232A (en) 2024-05-24
CN118070232B true CN118070232B (en) 2024-06-21


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116165654A (en) * 2023-02-22 2023-05-26 江苏恒超智能技术有限公司 Millimeter wave radar and video combined vehicle track monitoring method
CN116453346A (en) * 2023-06-20 2023-07-18 山东高速信息集团有限公司 Vehicle-road cooperation method, device and medium based on radar fusion layout



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant