CN112946628A - Road running state detection method and system based on radar and video fusion - Google Patents

Road running state detection method and system based on radar and video fusion

Publication number
CN112946628A
Authority
CN
China
Prior art keywords
radar, millimeter wave radar, coordinate system, video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110172642.XA
Other languages
Chinese (zh)
Inventor
张志祥
杨阳
刘强
关永胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Sinoroad Engineering Technology Research Institute Co ltd
Original Assignee
Jiangsu Sinoroad Engineering Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Sinoroad Engineering Technology Research Institute Co ltd filed Critical Jiangsu Sinoroad Engineering Technology Research Institute Co ltd
Priority to CN202110172642.XA
Publication of CN112946628A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/867 Combination of radar systems with cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/91 Radar or analogous systems specially adapted for specific applications for traffic control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions


Abstract

The invention relates to the technical field of road traffic state detection, and in particular to a road running state detection method and system based on radar and video fusion. The method comprises the following steps. Step one: collect data on moving targets with a millimeter wave radar and a video detector, and preprocess the data. Step two: convert the millimeter wave radar data coordinates into the video pixel coordinate system and register the timestamps of the millimeter wave radar and the video detector, so that the radar's redundant vehicle regions of interest are mapped onto the video image, completing the fusion in space and time. Step three: identify vehicles in the regions of interest with a convolutional neural network trained on historical image data, determine the regions of interest that contain vehicles, and remove radar false-alarm information. Step four: compute traffic statistics from the vehicle identification results to obtain traffic running state parameters over a given period, including flow, average speed, and queue length.

Description

Road running state detection method and system based on radar and video fusion
Technical Field
The invention relates to the technical field of road traffic state detection, in particular to a method and a system for detecting a road running state based on radar and video fusion.
Background
The rapid development of the national economy continues to stimulate residents' travel demand: per-capita motor vehicle ownership rises year by year, urban traffic activity grows ever more frequent, and expressways, as the main arteries of road traffic, carry an increasing share of travel traffic. Building intelligent expressways is an effective means of improving expressway throughput and operational safety, and an all-round intelligent perception system is the basic precondition for intelligent management and control, intelligent services, and intelligent decision-making on expressways. As the technology has developed, traffic detectors have evolved continuously toward intelligent perception systems, traffic state detection performance has been optimized step by step, and the equipment has diversified.
Most existing expressway perception systems are based on video detectors. A video detector integrates image processing, camera, and vehicle detection technology. A camera is typically fixed on a road gantry and monitors a video image of an area of the road in real time; when a vehicle passes through the camera's detection area, the gray values of the video image change relative to the vehicle-free state. Image processing compares these gray-level changes, and vehicle detection then yields traffic information such as flow, vehicle type, and speed. Common video vehicle detection algorithms include the optical flow method and the background difference method, but their accuracy is poor; deep-learning-based target detection algorithms need no hand-crafted features, run fast, and are gradually being adopted. A video-based traffic state detection unit generally consists of a camera, a video processing module, a traffic analysis module, a communication module, and so on. The image information a video detector collects is intuitive and its detection accuracy is high, but in heavy fog, rain, snow, and other adverse weather with poor illumination its observation range is impaired, and occlusion by large vehicles easily causes small vehicles to be missed.
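For reference, the background difference method mentioned above reduces to a few OpenCV calls. The sketch below is illustrative only; the video path, thresholds, and minimum contour area are assumed values, not part of the patent:

```python
import cv2

# Minimal background-difference vehicle detector ("traffic.mp4" is a
# hypothetical roadside video source).
cap = cv2.VideoCapture("traffic.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # pixels whose gray value departs from the background model
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]
    # 'boxes' now holds candidate vehicle bounding rectangles for this frame.
cap.release()
```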
Millimeter wave radar is widely used in intelligent expressway construction and can be mounted at the roadside or above a lane. A millimeter wave radar detects in the millimeter wave band: using electromagnetic waves with a specific modulation frequency generated by a high-frequency circuit, it computes, from the difference frequency between the transmitted and echo signals, the radial distance to a target and the component of the target's velocity along the line connecting target and radar. Depending on how the electromagnetic waves are radiated, millimeter wave radar has two main working regimes, pulsed and continuous wave, and most current traffic-information detection radars operate with frequency modulated continuous waves (FMCW). A millimeter-wave-radar-based traffic state detection unit generally consists of an antenna, a transceiver module, a signal processing module, a data transmission module, and so on. Millimeter wave radar penetrates fog, smoke, and dust, can track multiple targets in multiple lanes by capturing reflected signals, and acquires parameters such as distance, speed, and angle; however, detection errors arise when a large vehicle occludes a small one for a long time, and its high resolution can cause different parts of one large vehicle to be recognized several times.
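For orientation, the difference-frequency ranging described above follows the standard FMCW relations; the symbols below (chirp bandwidth $B$, sweep duration $T_c$, beat frequency $f_b$, Doppler shift $f_d$, wavelength $\lambda$) are not named in the patent and are supplied here as an assumption:

$$R = \frac{c\, f_b\, T_c}{2B}, \qquad v_r = \frac{\lambda\, f_d}{2}$$

where c is the speed of light, R the radial distance to the target, and v_r the radial velocity component along the radar-to-target line.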
In view of the above problems, and drawing on many years of practical experience and professional knowledge in the engineering of such products, together with the application of theory, the designers have actively researched and innovated to create a road running state detection method and system based on radar and video fusion that is more practical.
Disclosure of Invention
In order to achieve the above purpose, the invention adopts the following technical scheme: a road running state detection method based on radar and video fusion, comprising the following steps:
Step one: acquire data on moving targets with a millimeter wave radar and a video detector; the millimeter wave radar acquires the position, speed, and angle of each moving target in real time, and the video detector collects high-definition image information in real time; perform primary processing on the data, filtering out interference and invalid information to ensure sensor data quality (a preprocessing sketch follows these steps);
Step two: convert the millimeter wave radar data coordinates into the video pixel coordinate system and register the timestamps of the millimeter wave radar and the video detector, mapping the radar's redundant vehicle regions of interest onto the video image and completing the sensors' fusion in space and time;
Step three: identify vehicles in the regions of interest with a convolutional neural network trained on historical image data, determine the regions of interest that contain vehicles, and remove radar false-alarm information;
Step four: compute traffic statistics from the vehicle identification results to obtain traffic running state parameters over a given period, including flow, average speed, and queue length.
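A preprocessing pass of the kind described in step one might look as follows; the radar point fields and all thresholds are assumptions for illustration, not values from the patent:

```python
from dataclasses import dataclass

@dataclass
class RadarPoint:
    range_m: float    # radial distance r to the target
    speed_mps: float  # radial speed component
    azimuth_rad: float
    rcs_dbsm: float   # strength of the radar return

def preprocess(points: list[RadarPoint],
               max_range_m: float = 250.0,
               min_rcs_dbsm: float = -10.0) -> list[RadarPoint]:
    """Filter interference and invalid radar returns before fusion."""
    kept = []
    for p in points:
        if not (0.0 < p.range_m <= max_range_m):
            continue  # outside the detection zone: invalid
        if p.rcs_dbsm < min_rcs_dbsm:
            continue  # return too weak: likely clutter or interference
        kept.append(p)
    return kept
```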
Further, when the spatial fusion is realized in the second step, the method comprises the following steps:
step S210: acquiring the installation height and angle of the millimeter wave radar;
step S211: convert the spherical coordinate system of the millimeter wave radar into a world coordinate system. Take the position of the millimeter wave radar as the coordinate origin, let the target detected by the radar be P, the radial distance between target P and the radar be r, and the azimuth angle be α. Let the installation height of the radar be h, with the radar beam directed obliquely downward; let s be the radial distance from the radar to the intersection of the beam's central axis with the ground plane, and let θ be the angle, within the vertical plane containing the central axis, between the radar-to-target line and the vertical direction.
The radial distance r is decomposed by the azimuth angle α into a component in the vertical plane containing the beam's central axis and a component perpendicular to it:

$$r_{\parallel} = r\cos\alpha, \qquad r_{\perp} = r\sin\alpha$$

The direction in which the line s slopes toward the ground is taken as the positive O-Z_w axis direction, the horizontal-left direction of the radar plane as the positive O-X_w axis direction, and the direction perpendicular to the O-X_w axis as the positive O-Y_w axis direction, establishing the world coordinate system O-X_wY_wZ_w. For a target on the road surface the following can be obtained:

$$x_w = r\sin\alpha, \qquad y_w = r\cos\alpha\cos\theta, \qquad z_w = r\cos\alpha\sin\theta$$

From the trigonometric relationship (the vertical drop from the radar to a road-surface target equals the installation height h):

$$\cos\theta = \frac{h}{r\cos\alpha}$$
step S212: based on the video detector installation position, convert the millimeter wave radar world coordinate system into the camera coordinate system of the video detector. A point (x_w, y_w, z_w) is converted from the world coordinate system to the camera coordinate system by:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T$$

where R is a 3×3 coordinate-system rotation matrix, T is a 3×1 translation vector, and (x_c, y_c, z_c) are the coordinates of the point (x_w, y_w, z_w) in the camera coordinate system. The values of R and T depend on the installation position of the camera;
step S213: convert a point (x_c, y_c, z_c) in the camera coordinate system to a point (x_i, y_i) in the image coordinate system; by the pinhole model the coordinates are:

$$x_i = \frac{f\,x_c}{z_c}, \qquad y_i = \frac{f\,y_c}{z_c}$$

where f is the focal length of the camera;
Further, in step S213 the image coordinate system and the image pixel coordinate system are also calibrated; specifically, a point (x_i, y_i) in the image coordinate system is converted to a point (u, v) in the image pixel coordinate system:

$$u = \frac{x_i}{d_x} - \frac{y_i}{d_x\tan\delta} + u_0, \qquad v = \frac{y_i}{d_y\sin\delta} + v_0$$

where (u_0, v_0) are the coordinates of the origin of the image coordinate system O-x_iy_i in the pixel coordinate system O-uv, δ is the included angle between the coordinate axes of the pixel coordinate system O-uv, and d_x and d_y are the physical width and height of one pixel.
Further, in the second step, the fusion in time includes the following steps:
step S220: setting a time delay to make the time acquisition starting points of the millimeter wave radar and the video detector consistent, specifically:
tradar=tcamera±τcal
wherein, tradarAs time coordinate of the millimeter-wave radar, tcameraAs a time coordinate of the video detector, τcalSetting a time delay;
step S221: register the times of the millimeter wave radar and the video detector. Specifically: set an interpolation interval; the three measurements in the interval are

$$x(t_{k-1}),\quad x(t_k),\quad x(t_{k+1})$$

Assume the three instants t_{k-1}, t_k, t_{k+1} are equally spaced, i.e. t_k - t_{k-1} = T. When the interpolation time is t = t_k + Δt, the measured value at time t is computed by Lagrange three-point interpolation as:

$$x(t_k + \Delta t) = \frac{\Delta t(\Delta t - T)}{2T^2}\,x(t_{k-1}) + \frac{(T+\Delta t)(T-\Delta t)}{T^2}\,x(t_k) + \frac{\Delta t(\Delta t + T)}{2T^2}\,x(t_{k+1})$$

thereby achieving temporal registration.
Further, in step S221, when time alignment of the millimeter wave radar and the video detector is performed, a sensor with a long data acquisition period is used as a reference, and it is assumed that the time of alignment is located in the middle of the interpolation interval.
Further, in step three, the region of interest is determined according to the following steps:
step S300: define the vehicle width-to-height ratio r_wh:

$$r_{wh} = \frac{width}{height}$$

where width denotes the vehicle width and height the vehicle height;
step S301: establishing the size of the region of interest according to the aspect ratio of the vehicle, specifically:
$$P_s = P_{st} \times r_{wh} \times \alpha \times \beta \times \gamma$$

where P_s is the longitudinal (vertical) pixel size of the region of interest and P_st is the initial vertical pixel size; α indicates whether the vehicle is a truck (2 for trucks, 1 for all other vehicles); β is an amplification factor; γ is a distance coefficient. The center of the region of interest is taken as the target center.
Further, the size of the region of interest pixels is inversely related to the target distance.
Further, in the third step, a convolutional neural network Mask-R-CNN is adopted to identify redundant interested areas, the neural network processor obtains a training sample set test sample set according to the processed historical data, the number and the structure of convolutional layers, pooling layers and full-connection layers of the convolutional neural network are determined, vehicle detection model parameters are stored, the characteristics of target elements in the video collector image are extracted, and the trained vehicle detection model is used for deeply learning the image to determine the interested areas where vehicles exist.
The invention also comprises a road running state detection system based on the integration of radar and video, which comprises:
the data acquisition module comprises a millimeter wave radar and a video detector, wherein the millimeter wave radar is used for detecting the position, the speed and the angle of a moving target; the video detector is used for detecting image information of a moving object;
the data processing module is in communication connection with the data acquisition module, processes the original data acquired by the data acquisition module and outputs traffic parameters, and comprises a microprocessor and an embedded neural network processor;
the data storage module is in communication connection with the data processing module and is used for storing traffic parameters.
And the data communication module is in communication connection with the data storage module and the data center, and transmits the traffic parameters to the data center or realizes the call of the data center on the traffic parameters.
The invention has the beneficial effects that:
1. The information of the video detector and the millimeter wave radar is combined: the information collected by the two sensors is fused in space and time, richer and more accurate traffic running state information is obtained, and the robustness of the road detection system is improved;
2. Traffic running state information can be acquired entirely at the roadside, reducing the computing load on the upstream data center;
3. Video image information is processed by a convolutional neural network for target detection and target parameter acquisition, with higher accuracy than traditional target detection methods such as inter-frame difference and background difference;
4. Because of its high resolution, the millimeter wave radar essentially covers all valid targets but produces redundant output; performing image detection only within the radar's regions of interest improves the real-time speed and accuracy of the system.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a flow chart of the detection method of the present invention;
FIG. 2 is a schematic view of a detection system of the present invention;
FIG. 3 is a schematic diagram of a millimeter wave radar installation;
FIG. 4 is a diagram illustrating the transformation between an image coordinate system and a pixel coordinate system.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
As shown in fig. 1, a method for detecting a road running state based on radar and video fusion includes the following steps:
Step one: acquire data on moving targets with a millimeter wave radar and a video detector; the millimeter wave radar acquires the position, speed, and angle of each moving target in real time, and the video detector collects high-definition road-surface image information in real time. Preprocess the data, filtering out interference and invalid information to ensure sensor data quality;
Step two: convert the millimeter wave radar data coordinates into the video pixel coordinate system and register the timestamps of the millimeter wave radar and the video detector, so that the radar's redundant vehicle regions of interest are mapped onto the video image, completing the sensors' fusion in space and time;
Step three: identify vehicles in the regions of interest with a convolutional neural network trained on historical image data, determine the regions of interest that contain vehicles, and remove radar false-alarm information;
Step four: compute traffic statistics from the vehicle identification results to obtain traffic running state parameters over a given period, including flow, average speed, and queue length (an aggregation sketch follows these steps).
The invention has the beneficial effects that:
1. The information of the video detector and the millimeter wave radar is combined: the information collected by the two sensors is fused in space and time, richer and more accurate traffic running state information is obtained, and the robustness of the road detection system is improved;
2. Traffic running state information can be acquired entirely at the roadside, reducing the computing load on the upstream data center;
3. Video image information is processed by a convolutional neural network for target detection and target parameter acquisition, with higher accuracy than traditional target detection methods such as inter-frame difference and background difference;
4. Because of its high resolution, the millimeter wave radar essentially covers all valid targets but produces redundant output; performing image detection only within the radar's regions of interest improves the real-time speed and accuracy of the system.
The reference coordinate systems and data rates of the millimeter wave radar and the video detector differ, so the precondition for their joint traffic detection is a unified coordinate system and matched data rates; both spatial and temporal synchronization must be considered. Spatial fusion of the millimeter wave radar and the video detector unifies the measurements of the different sensor coordinate systems into a single coordinate system: the millimeter wave radar coordinate system is converted into a world coordinate system, the world coordinate system into the video detector (camera) coordinate system, and finally into the image coordinate system, thereby locating on the video image the moving targets collected by the millimeter wave radar.
As shown in fig. 3, when the spatial fusion is implemented in step two, the method includes the following steps:
step S210: acquiring the installation height and angle of the millimeter wave radar;
step S211: convert the spherical coordinate system of the millimeter wave radar into a world coordinate system. Take the position of the millimeter wave radar as the coordinate origin, let the target detected by the radar be P, the radial distance between target P and the radar be r, and the azimuth angle be α. Let the installation height of the radar be h, with the radar beam directed obliquely downward; let s be the radial distance from the radar to the intersection of the beam's central axis with the ground plane, and let θ be the angle, within the vertical plane containing the central axis, between the radar-to-target line and the vertical direction.
From the geometry of FIG. 3, the radial distance r can be decomposed as:
Figure BDA0002939326520000101
Figure BDA0002939326520000102
the inclined direction of the straight line s to the ground is O-ZwIn the positive direction of the axis, the horizontal left direction of the plane of the radar is O-XwPositive axial direction, perpendicular to O-XwAxial direction is set to O-YwIn the positive direction of the axis, establish O-XwYwZwA world coordinate system. The following can be obtained:
Figure BDA0002939326520000111
from the trigonometric relationship:
Figure BDA0002939326520000112
step S212: based on the video detector installation position, convert the millimeter wave radar world coordinate system into the camera coordinate system of the video detector. A point (x_w, y_w, z_w) is converted from the world coordinate system to the camera coordinate system by:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T$$

where R is a 3×3 coordinate-system rotation matrix, T is a 3×1 translation vector, and (x_c, y_c, z_c) are the coordinates of the point (x_w, y_w, z_w) in the camera coordinate system. The values of R and T depend on the installation position of the camera. In practical applications, the installation positions of the millimeter wave radar and the video detector should be chosen according to inspection requirements, detection performance, and so on; the two may be some distance apart, and adopting a coordinate-system conversion that handles spatial fusion of a millimeter wave radar and a video detector at different positions increases the method's applicability.
Step S213: point (x) in the camera coordinate systemc,yc,zc) Conversion to a point (x) in the image coordinate systemi,yi) The coordinates are:
Figure BDA0002939326520000121
f in the formula is the focal length of the camera;
Through the above steps, the spherical coordinate system of the millimeter wave radar is converted into the world coordinate system, the world coordinate system into the camera coordinate system, and the camera coordinate system into the image coordinate system, realizing the spatial conversion.
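Taken together, steps S210 to S213 define one deterministic mapping from a radar measurement (r, α) to a pixel (u, v). The sketch below follows the formulas above; the ground-target geometry, the skew-free pixel model (δ = 90°), and all numeric parameters are assumptions for illustration:

```python
import numpy as np

def radar_to_pixel(r, alpha, h, R, T, f, dx, dy, u0, v0):
    """Map a radar range/azimuth measurement to pixel coordinates (u, v)."""
    # S211: spherical radar measurement -> world coordinates.
    # For a target on the road surface, cos(theta) = h / (r cos(alpha));
    # valid only while r * cos(alpha) >= h.
    cos_t = h / (r * np.cos(alpha))
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    pw = np.array([r * np.sin(alpha),           # x_w: lateral offset
                   r * np.cos(alpha) * cos_t,   # y_w: vertical component (= h)
                   r * np.cos(alpha) * sin_t])  # z_w: down-range distance
    # S212: world -> camera coordinates (R: 3x3 rotation, T: 3x1 translation).
    pc = R @ pw + T
    # S213: camera -> image plane by the pinhole model (focal length f).
    xi, yi = f * pc[0] / pc[2], f * pc[1] / pc[2]
    # image -> pixel coordinates, assuming perpendicular pixel axes.
    return xi / dx + u0, yi / dy + v0

# Illustrative parameters: radar 6 m high, camera frame aligned with the world frame.
u, v = radar_to_pixel(r=80.0, alpha=np.deg2rad(3.0), h=6.0,
                      R=np.eye(3), T=np.zeros(3),
                      f=0.008, dx=4e-6, dy=4e-6, u0=960.0, v0=540.0)
```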
As shown in fig. 4, in step S213, the image coordinate system and the image pixel coordinate system are calibrated, specifically: the point (x) in the image coordinate systemi,yi) Converted into points (u, v) in the image pixel coordinate system,
Figure BDA0002939326520000122
image coordinate system O-xiyiHas a coordinate of (u) in the pixel coordinate system O-uv0,v0) Delta is the angle between the coordinate axes in the pixel coordinate system O-uv, u0,v0Can be obtained by the Zhangyingyou camera calibration method.
In an ideal state, the image coordinate system and the pixel coordinate system are coincident, but in practice, due to geometric structural difference and distortion of the lens, the image coordinate system and the pixel coordinate system need to be calibrated when being converted into the pixel coordinate system, so that the accuracy of data processing is increased.
Preferably, in the second step, the fusion in time includes the following steps:
step S220: setting a time delay to make the time acquisition starting points of the millimeter wave radar and the video detector consistent, specifically:
tradar=tcamera±τcal
wherein, tradarAs time coordinate of the millimeter-wave radar, tcameraAs a time coordinate of the video detector, τcalSetting a time delay;
step S221: registering the time of the millimeter wave radar and the video detector; the method specifically comprises the following steps: setting an interpolation interval, and the three-point estimation value in the interval is
Figure BDA0002939326520000131
Let three instants t be assumedk-1、tk、tk+1Are equally spaced, i.e. tk-tk-1T; when the interpolation point time is t ═ tkAnd + delta t, calculating a measured value at the time t by using a Lagrange three-point interpolation method as follows:
Figure BDA0002939326520000132
thereby achieving temporal registration.
Different sensor data acquisition frequencies are different, acquisition time starting points are different, and the millimeter wave radar and the video detector are synchronized in time, so that the millimeter wave radar and the video detector describe information at the same time, and subsequent processing is facilitated.
As a preferable example of the above embodiment, in step S221, when time alignment of the millimeter wave radar and the video detector is performed, a sensor with a long data acquisition period is used as a reference, and it is assumed that the time of alignment is located in the middle of the interpolation interval.
In practical application, because the types of the adopted sensors are different, the data acquisition frequency of the millimeter wave radar may be greater than or less than that of the video detector, and in order to avoid loss of generality, the sensor with a long data acquisition period is taken as a reference, so that the accuracy of data processing is effectively improved.
Due to the fact that the resolution ratio is high, the millimeter wave radar can detect different positions of the vehicle in sequence to form a plurality of tracks, redundancy exists in the detection output result of the single millimeter wave radar, and the area where the vehicle possibly exists, namely the region of interest, can be found on the image by utilizing radar information.
The definition of the region of interest based on the millimeter wave radar directly affects the fusion result of the detector, and the region of interest can be determined by adopting the following method:
step S300: define the vehicle width-to-height ratio r_wh:

$$r_{wh} = \frac{width}{height}$$

where width denotes the vehicle width and height the vehicle height;
According to typical three-dimensional point cloud images of vehicles, the geometric characteristics of a vehicle are comparatively stable, and the size of the region of interest can be obtained from the vehicle outline.
step S301: establishing the size of the region of interest according to the aspect ratio of the vehicle, specifically:
$$P_s = P_{st} \times r_{wh} \times \alpha \times \beta \times \gamma$$

where P_s is the longitudinal (vertical) pixel size of the region of interest and P_st is the initial vertical pixel size, which can be taken as 100; α indicates whether the vehicle is a truck (2 for trucks, 1 for all other vehicles); β is an amplification factor, aimed at reducing the missed-detection rate, which can be taken as 1.2; γ is a distance coefficient. The center of the region of interest is taken as the target center; the pixel size of the region of interest is inversely related to the target distance.
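Under this parameterization the ROI size is a one-line computation; the distance-coefficient form γ = s_ref / r below is an assumed realization of the stated inverse relation between ROI pixel size and target distance:

```python
def roi_height_px(r_wh: float, distance_m: float, is_truck: bool,
                  p_st: float = 100.0, beta: float = 1.2,
                  s_ref_m: float = 50.0) -> float:
    """Longitudinal ROI pixel size P_s = P_st * r_wh * alpha * beta * gamma."""
    alpha = 2.0 if is_truck else 1.0  # trucks get a taller region
    gamma = s_ref_m / distance_m      # assumed form: shrinks with distance
    return p_st * r_wh * alpha * beta * gamma

# A car with width/height ratio ~1.1 at 80 m:
print(roi_height_px(r_wh=1.1, distance_m=80.0, is_truck=False))  # ~82.5 px
```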
Preferably, in the third step, a convolutional neural network Mask-R-CNN is used to identify a redundant region of interest, the neural network processor obtains a training sample set test sample set according to the processed historical data, determines the number and structure of convolutional layers, pooling layers and full-link layers of the convolutional neural network, stores parameters of a vehicle detection model, extracts features of target elements in the video collector image, and determines the region of interest where the vehicle exists by deep learning of the image through the trained vehicle detection model.
The convolutional neural network is adopted for recognition, in the recognition process, the image is deeply learned through a vehicle detection model obtained through training, a training sample set testing sample set is obtained from processed historical data, the number and the structure of convolutional layers, pooling layers and full-connection layers of the convolutional neural network are determined, parameters of the vehicle detection model are saved, and the recognition accuracy is improved.
As shown in fig. 2, the present invention further includes a system for detecting a road running state based on the fusion of radar and video, including:
the data acquisition module comprises a millimeter wave radar and a video detector, and the millimeter wave radar is used for detecting the position, the speed and the angle of a moving target; the video detector is used for detecting image information of a moving object;
the data processing module is in communication connection with the data acquisition module, processes the original data acquired by the data acquisition module and outputs traffic parameters, and the data processing module comprises a microprocessor and an embedded neural network processor;
and the data storage module is in communication connection with the data processing module and is used for storing the traffic parameters.
And the data communication module is in communication connection with the data storage module and the data center, and transmits the traffic parameters to the data center or realizes the call of the data center on the traffic parameters.
By arranging the data acquisition module, the data processing module, the data storage module and the data communication module, edge calculation is completed in the roadside detection unit, the processing pressure of a data center at the upper stage is reduced, traffic parameter statistical data including flow, queuing length, average speed and the like within a certain time are output, information of the video detector and the millimeter wave radar is combined, the information acquired by the millimeter wave radar and the information acquired by the video detector are fused in space and time, richer and more accurate traffic running state information is acquired, and the robustness of a road detection system is improved; the acquisition of traffic running state information can be completed at the road side, and the operation pressure of a data center at the upper stage is reduced; video image information is processed through a convolutional neural network Mask-R-CNN, target detection and target parameter acquisition are achieved, and compared with methods such as interframe difference and background difference of traditional target detection, accuracy is higher;
It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which, together with the description, merely illustrate the principle of the invention; various changes and improvements may be made without departing from the spirit and scope of the invention, and all of them fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (9)

1. A road running state detection method based on radar and video fusion is characterized by comprising the following steps:
step one: acquiring data on moving targets through a millimeter wave radar and a video detector, the millimeter wave radar acquiring the position, speed, and angle of each moving target in real time, and the video detector collecting high-definition road-surface image information in real time; preprocessing the data and filtering out interference and invalid information;
step two: converting the data coordinates of the millimeter wave radar into the video pixel coordinate system and registering the times of the millimeter wave radar and the video detector, so as to map the radar's redundant vehicle regions of interest onto the video image and complete the sensors' fusion in space and time;
step three: identifying vehicles in the regions of interest by a convolutional neural network trained on historical image data, determining the regions of interest in which vehicles exist, and removing radar false-alarm information;
step four: computing traffic statistics from the vehicle identification results to obtain traffic running state parameters over a given period, including flow, average speed, and queue length.
2. The method for detecting the road running state based on the radar and video fusion as claimed in claim 1, wherein in the second step, when the spatial fusion is realized, the method comprises the following steps:
step S210: acquiring the installation height and angle of the millimeter wave radar;
step S211: converting the spherical coordinate system of the millimeter wave radar into a world coordinate system, taking the position of the millimeter wave radar as the coordinate origin, denoting the target detected by the radar as P, the radial distance between target P and the radar as r, and the azimuth angle as α; the installation height of the radar is h and the radar beam is directed obliquely downward; s is the radial distance from the radar to the intersection of the beam's central axis with the ground plane, and θ is the angle, within the vertical plane containing the central axis, between the radar-to-target line and the vertical direction;
the radial distance r is decomposed by the azimuth angle α into a component in the vertical plane containing the beam's central axis and a component perpendicular to it:

$$r_{\parallel} = r\cos\alpha, \qquad r_{\perp} = r\sin\alpha$$

the direction in which the line s slopes toward the ground is taken as the positive O-Z_w axis direction, the horizontal-left direction of the radar plane as the positive O-X_w axis direction, and the direction perpendicular to the O-X_w axis as the positive O-Y_w axis direction, establishing the world coordinate system O-X_wY_wZ_w; for a target on the road surface:

$$x_w = r\sin\alpha, \qquad y_w = r\cos\alpha\cos\theta, \qquad z_w = r\cos\alpha\sin\theta$$

from the trigonometric relationship:

$$\cos\theta = \frac{h}{r\cos\alpha}$$
step S212: based on the video detector installation position, converting the millimeter wave radar world coordinate system into the camera coordinate system of the video detector, a point (x_w, y_w, z_w) being converted from the world coordinate system to the camera coordinate system by:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T$$

where R is a 3×3 coordinate-system rotation matrix, T is a 3×1 translation vector, and (x_c, y_c, z_c) are the coordinates of the point (x_w, y_w, z_w) in the camera coordinate system; the values of R and T depend on the installation position of the camera;
step S213: converting a point (x_c, y_c, z_c) in the camera coordinate system to a point (x_i, y_i) in the image coordinate system, the coordinates being:

$$x_i = \frac{f\,x_c}{z_c}, \qquad y_i = \frac{f\,y_c}{z_c}$$

where f is the focal length of the camera.
3. The method for detecting a road running state based on radar and video fusion according to claim 2, wherein in step S213 the image coordinate system and the image pixel coordinate system are calibrated; specifically, a point (x_i, y_i) in the image coordinate system is converted to a point (u, v) in the image pixel coordinate system:

$$u = \frac{x_i}{d_x} - \frac{y_i}{d_x\tan\delta} + u_0, \qquad v = \frac{y_i}{d_y\sin\delta} + v_0$$

where (u_0, v_0) are the coordinates of the origin of the image coordinate system O-x_iy_i in the pixel coordinate system O-uv, δ is the included angle between the coordinate axes of the pixel coordinate system O-uv, and d_x and d_y are the physical width and height of one pixel.
4. The method for detecting road running state based on radar and video fusion as claimed in claim 3, wherein in the second step, the fusion in time comprises the following steps:
step S220: setting a time delay so that the acquisition start times of the millimeter wave radar and the video detector coincide; specifically:

$$t_{radar} = t_{camera} \pm \tau_{cal}$$

where t_radar is the time coordinate of the millimeter wave radar, t_camera is the time coordinate of the video detector, and τ_cal is the set time delay;

step S221: registering the times of the millimeter wave radar and the video detector; specifically: setting an interpolation interval, the three measurements in the interval being

$$x(t_{k-1}),\quad x(t_k),\quad x(t_{k+1})$$

assuming the three instants t_{k-1}, t_k, t_{k+1} are equally spaced, i.e. t_k - t_{k-1} = T; when the interpolation time is t = t_k + Δt, the measured value at time t is computed by Lagrange three-point interpolation as:

$$x(t_k + \Delta t) = \frac{\Delta t(\Delta t - T)}{2T^2}\,x(t_{k-1}) + \frac{(T+\Delta t)(T-\Delta t)}{T^2}\,x(t_k) + \frac{\Delta t(\Delta t + T)}{2T^2}\,x(t_{k+1})$$

thereby achieving temporal registration.
5. The method for detecting a road running state based on radar and video fusion according to claim 4, wherein in step S221, when aligning the times of the millimeter wave radar and the video detector, the sensor with the longer data acquisition period is taken as the reference, and the alignment instant is assumed to lie in the middle of the interpolation interval.
6. The method for detecting the road running state based on the radar and video fusion as claimed in claim 1, wherein in the third step, the region of interest is determined according to the following steps:
step S300: defining the vehicle width-to-height ratio r_wh:

$$r_{wh} = \frac{width}{height}$$

where width denotes the vehicle width and height the vehicle height;
step S301: establishing the size of the region of interest according to the aspect ratio of the vehicle, specifically:
$$P_s = P_{st} \times r_{wh} \times \alpha \times \beta \times \gamma$$

where P_s is the longitudinal pixel size of the region of interest and P_st is the initial vertical pixel size; α indicates whether the vehicle is a truck (2 for trucks, 1 for all other vehicles); β is an amplification factor; γ is a distance coefficient; the center of the region of interest is taken as the target center.
7. The method for detecting a road running state based on radar and video fusion according to claim 6, wherein the pixel size of the region of interest is inversely related to the target distance.
8. The method for detecting a road running state based on radar and video fusion, characterized in that in step three the convolutional neural network Mask R-CNN is adopted to identify the redundant regions of interest; a neural network processor builds a training sample set and a test sample set from the processed historical data, determines the number and structure of the convolutional, pooling, and fully connected layers of the convolutional neural network, and saves the vehicle detection model parameters; features of target elements in the video collector's images are extracted, and the trained vehicle detection model deep-learns the images to determine the regions of interest in which vehicles exist.
9. A road running state detection system based on radar and video fusion is characterized by comprising:
a data acquisition module, comprising a millimeter wave radar and a video detector, the millimeter wave radar being used to detect the position, speed, and angle of a moving target and the video detector being used to detect image information of the moving target;
a data processing module, in communication connection with the data acquisition module, which processes the raw data collected by the data acquisition module and outputs traffic parameters, and which comprises a microprocessor and an embedded neural network processor;
a data storage module, in communication connection with the data processing module, for storing the traffic parameters;
and a data communication module, in communication connection with the data storage module and a data center, which transmits the traffic parameters to the data center or enables the data center to retrieve the traffic parameters.
CN202110172642.XA 2021-02-08 2021-02-08 Road running state detection method and system based on radar and video fusion Pending CN112946628A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110172642.XA CN112946628A (en) 2021-02-08 2021-02-08 Road running state detection method and system based on radar and video fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110172642.XA CN112946628A (en) 2021-02-08 2021-02-08 Road running state detection method and system based on radar and video fusion

Publications (1)

Publication Number Publication Date
CN112946628A true CN112946628A (en) 2021-06-11

Family

ID=76244200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110172642.XA Pending CN112946628A (en) 2021-02-08 2021-02-08 Road running state detection method and system based on radar and video fusion

Country Status (1)

Country Link
CN (1) CN112946628A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609522A (en) * 2017-09-19 2018-01-19 东华大学 A kind of information fusion vehicle detecting system based on laser radar and machine vision
US20200041612A1 (en) * 2018-08-02 2020-02-06 Metawave Corporation Recurrent super-resolution radar for autonomous vehicles
CN110068818A (en) * 2019-05-05 2019-07-30 中国汽车工程研究院股份有限公司 The working method of traffic intersection vehicle and pedestrian detection is carried out by radar and image capture device
CN111368706A (en) * 2020-03-02 2020-07-03 南京航空航天大学 Data fusion dynamic vehicle detection method based on millimeter wave radar and machine vision
CN112147612A (en) * 2020-08-19 2020-12-29 上海图丽信息技术有限公司 Method for real-time tracking of vehicle by fusing radar video
CN112130136A (en) * 2020-09-11 2020-12-25 中国重汽集团济南动力有限公司 Traffic target comprehensive sensing system and method

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113581199A (en) * 2021-06-30 2021-11-02 银隆新能源股份有限公司 Vehicle control method and device
CN113450580A (en) * 2021-08-19 2021-09-28 浙江安沿科技有限公司 Radar for monitoring traffic flow
CN113506440A (en) * 2021-09-08 2021-10-15 四川国蓝中天环境科技集团有限公司 Traffic state estimation method for multi-source data fusion under Lagrange coordinate system
CN113506440B (en) * 2021-09-08 2021-11-30 四川国蓝中天环境科技集团有限公司 Traffic state estimation method for multi-source data fusion under Lagrange coordinate system
CN113627569A (en) * 2021-09-27 2021-11-09 浙江高速信息工程技术有限公司 Data fusion method for radar video all-in-one machine used for traffic large scene
CN114202931B (en) * 2021-12-10 2022-07-08 深圳市旗扬特种装备技术工程有限公司 5G air upgrading method of radar-vision fusion traffic incident detection system
CN114202931A (en) * 2021-12-10 2022-03-18 深圳市旗扬特种装备技术工程有限公司 5G air upgrading method of radar-vision fusion traffic incident detection system
CN114495520A (en) * 2021-12-30 2022-05-13 北京万集科技股份有限公司 Vehicle counting method, device, terminal and storage medium
CN114495520B (en) * 2021-12-30 2023-10-03 北京万集科技股份有限公司 Counting method and device for vehicles, terminal and storage medium
CN114627409A (en) * 2022-02-25 2022-06-14 海信集团控股股份有限公司 Method and device for detecting abnormal lane change of vehicle
CN114814825A (en) * 2022-03-23 2022-07-29 合肥工业大学 Vehicle track sensing and state extraction method based on radar and video fusion
CN114419572B (en) * 2022-03-31 2022-06-17 国汽智控(北京)科技有限公司 Multi-radar target detection method and device, electronic equipment and storage medium
CN114419572A (en) * 2022-03-31 2022-04-29 国汽智控(北京)科技有限公司 Multi-radar target detection method and device, electronic equipment and storage medium
CN114842643A (en) * 2022-04-20 2022-08-02 深圳市旗扬特种装备技术工程有限公司 Video vehicle detection model online updating method and device and radar fusion system
CN115083088A (en) * 2022-05-11 2022-09-20 长江慧控科技(武汉)有限公司 Railway perimeter intrusion early warning method
CN115019512A (en) * 2022-07-05 2022-09-06 北京动视元科技有限公司 Road event detection system based on radar video fusion
CN115376312A (en) * 2022-07-22 2022-11-22 交通运输部路网监测与应急处置中心 Road monitoring method and system based on radar and video fusion
CN115421136A (en) * 2022-07-28 2022-12-02 广西北投信创科技投资集团有限公司 Vehicle detection system and detection method thereof
CN115346368A (en) * 2022-07-30 2022-11-15 东南大学 Traffic roadside sensing system and method based on integration of far and near view multiple sensors
CN115346368B (en) * 2022-07-30 2024-01-05 东南大学 Traffic road side sensing system and method based on integrated fusion of far-view and near-view multiple sensors
CN115440056A (en) * 2022-08-02 2022-12-06 天津光电聚能专用通信设备有限公司 Intelligent safety protection system based on millimeter wave radar and vision fusion
CN115527364A (en) * 2022-08-25 2022-12-27 西安电子科技大学广州研究院 Traffic accident tracing method and system based on radar vision data fusion
CN115527364B (en) * 2022-08-25 2023-11-21 西安电子科技大学广州研究院 Traffic accident tracing method and system based on radar data fusion
CN115331190A (en) * 2022-09-30 2022-11-11 北京闪马智建科技有限公司 Road hidden danger identification method and device based on radar fusion
CN115331190B (en) * 2022-09-30 2022-12-09 北京闪马智建科技有限公司 Road hidden danger identification method and device based on radar vision fusion
CN115985095A * 2022-12-23 2023-04-18 河北德冠隆电子科技有限公司 Multi-dimensional radar-video integrated machine for intelligent traffic
CN115830032A (en) * 2023-02-13 2023-03-21 杭州闪马智擎科技有限公司 Road expansion joint lesion identification method and device based on old facilities
CN116189116A (en) * 2023-04-24 2023-05-30 江西方兴科技股份有限公司 Traffic state sensing method and system
CN116189116B (en) * 2023-04-24 2024-02-23 江西方兴科技股份有限公司 Traffic state sensing method and system
CN116913097A (en) * 2023-09-14 2023-10-20 江西方兴科技股份有限公司 Traffic state prediction method and system
CN116913097B (en) * 2023-09-14 2024-01-19 江西方兴科技股份有限公司 Traffic state prediction method and system
CN117233725A (en) * 2023-11-15 2023-12-15 中国空气动力研究与发展中心计算空气动力研究所 Coherent radar target detection method based on graph neural network multi-feature fusion
CN117233725B (en) * 2023-11-15 2024-01-23 中国空气动力研究与发展中心计算空气动力研究所 Coherent radar target detection method based on graph neural network multi-feature fusion
CN117672007A * 2024-02-03 2024-03-08 福建省高速公路科技创新研究院有限公司 Road construction area safety precaution system based on radar-video fusion
CN117672007B * 2024-02-03 2024-04-26 福建省高速公路科技创新研究院有限公司 Road construction area safety precaution system based on radar-video fusion

Similar Documents

Publication Publication Date Title
CN112946628A (en) Road running state detection method and system based on radar and video fusion
CN109444911B (en) Unmanned ship water surface target detection, identification and positioning method based on monocular camera and laser radar information fusion
WO2022141914A1 (en) Multi-target vehicle detection and re-identification method based on radar and video fusion
EP3885794A1 (en) Track and road obstacle detecting method
CN110379178B (en) Intelligent unmanned automobile parking method based on millimeter wave radar imaging
Sugimoto et al. Obstacle detection using millimeter-wave radar and its visualization on image sequence
CN110568433A (en) High-altitude parabolic detection method based on millimeter wave radar
CN111045000A (en) Monitoring system and method
CN112363167A (en) Extended target tracking method based on fusion of millimeter wave radar and monocular camera
CN113359097A (en) Millimeter wave radar and camera combined calibration method
CN113850102B (en) Vehicle-mounted vision detection method and system based on millimeter wave radar assistance
CN113566833A (en) Multi-sensor fusion vehicle positioning method and system
CN110764083B (en) Anti-intrusion data fusion method and system for millimeter wave radar
Cui et al. 3D detection and tracking for on-road vehicles with a monovision camera and dual low-cost 4D mmWave radars
CN113504525B (en) Fog region visibility inversion method and system
CN112784679A (en) Vehicle obstacle avoidance method and device
CN115690746A (en) Non-blind area sensing method and system based on vehicle-road cooperation
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN113504543B (en) Unmanned aerial vehicle LiDAR system positioning and attitude determination system and method
CN117029840A (en) Mobile vehicle positioning method and system
CN116863382A (en) Expressway multi-target tracking method based on radar fusion
CN113947141B (en) Roadside beacon sensing system of urban intersection scene
CN110865367A (en) Intelligent fusion method for radar video data
Huang et al. An efficient multi-threshold selection method for lane detection based on lidar
CN113848825B (en) AGV state monitoring system and method for flexible production workshop

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210611