CN116311903A - Method for evaluating road running index based on video analysis - Google Patents


Info

Publication number
CN116311903A
CN116311903A (application CN202310042682.1A)
Authority
CN
China
Prior art keywords
vehicle
lane
coordinate system
vehicles
speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310042682.1A
Other languages
Chinese (zh)
Inventor
许梦菲
李湾
郑晏群
朱宇
林松荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Comprehensive Transportation Operation Command Center
Original Assignee
Shenzhen Comprehensive Transportation Operation Command Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Comprehensive Transportation Operation Command Center filed Critical Shenzhen Comprehensive Transportation Operation Command Center
Priority claimed from CN202310042682.1A
Publication of CN116311903A
Legal status: Pending

Classifications

    • G08G1/0104 — Measuring and analysing of parameters relative to traffic conditions
    • G08G1/0116 — Measuring and analysing of traffic parameters based on data from roadside infrastructure, e.g. beacons
    • G08G1/0125 — Traffic data processing
    • G08G1/0129 — Traffic data processing for creating historical data or processing based on historical data
    • G08G1/017 — Detecting movement of traffic to be counted or controlled; identifying vehicles
    • G08G1/052 — Detecting movement of traffic with provision for determining speed or overspeed
    • G08G1/065 — Traffic control by counting the vehicles in a section of the road
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V20/54 — Surveillance or monitoring of activities of traffic, e.g. cars on the road
    • Y02T10/40 — Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a method for evaluating a road running index based on video analysis, comprising the following steps: acquiring surveillance video data of the road section to be evaluated; performing lane detection on the surveillance video data and demarcating a dividing line in the region where each lane line is located; performing vehicle detection on the surveillance video data, and starting a preset vehicle speed measurement module to assign a vehicle ID, restore the vehicle track and measure the speed of each motor vehicle; matching the ID-tagged vehicles and their tracks against the dividing lines of each lane to obtain the number of vehicles and the average running speed in each lane; and comparing the average vehicle speed of each lane against a preset speed range and the number of vehicles in each lane against a preset vehicle-count range, and evaluating the road running index of each lane accordingly. The method can evaluate the road running index of each lane individually, so as to reflect the actual running condition of each lane.

Description

Method for evaluating road running index based on video analysis
Technical Field
The invention relates to the technical field of Internet, in particular to a method for evaluating a road running index based on video analysis.
Background
In recent years, the income level of residents in China has risen rapidly, and part of the public-transport passenger volume has been diverted as private-car use has grown; on the other hand, ride-hailing services, shared bicycles and shared electric bicycles have been deployed on a large scale, putting further pressure on the urban passenger transport system. Traffic congestion has long been a widespread problem in China, and is particularly pronounced in the urban traffic of large cities.
In China's highway network, even the most basic trunk lines have two lanes in one direction (four lanes in both directions), and the trunk lines of some large transport hubs have four lanes in one direction (eight lanes in both directions). However, current mainstream technical schemes evaluate road congestion only for a whole direction of travel, without specifically evaluating the congestion of an individual lane; such an evaluation cannot reflect the actual situation and adversely affects traffic-dispersion decisions. It is therefore necessary to design a method for evaluating the road running index based on video analysis.
Disclosure of Invention
The invention aims to provide a method for evaluating a road running index based on video analysis, which can evaluate the road running index of each lane individually, so as to reflect the actual running condition of each lane.
In order to achieve the above object, the present invention provides the following solutions:
a method for evaluating a road running index based on video analysis, comprising the steps of:
step one: acquiring monitoring video data of a road section to be evaluated;
step two: carrying out lane detection on the monitoring video data, and demarcating a boundary line in a region where a lane line is located;
step three: constructing a road vehicle detection data set; building a vehicle detection model and a vehicle speed measurement model; training both models with the road vehicle detection data set; performing vehicle detection on the surveillance video data with the trained vehicle detection model; and performing vehicle ID assignment, vehicle track restoration and vehicle speed measurement for each motor vehicle with the trained vehicle speed measurement model;
step four: matching the ID-tagged vehicles and their tracks against the dividing lines of each lane from step two to obtain the number of vehicles and the average running speed in each lane;
step five: comparing the average vehicle speed of each lane obtained in step four against the preset speed range, and the number of vehicles in each lane obtained in step four against the preset vehicle-count range, and evaluating the road running index of each lane accordingly, thereby providing data support for traffic-dispersion decisions.
Optionally, in the second step, lane detection is performed on the monitoring video data, and a boundary line is defined in an area where a lane line is located, specifically:
the lane detection adopts the FLD algorithm, which defines lane-line detection as finding the set of positions of the lane lines on certain rows of the image, i.e. as row-wise position selection and classification; a dividing line is then demarcated in the region where each lane line is located.
Optionally, in the third step, a road vehicle detection data set is constructed, specifically:
and constructing a road vehicle detection data set containing 10 ten thousand pictures of real scenes, wherein the pictures in the data set are all obtained by shooting by a high-definition camera erected in the real road scenes.
Optionally, in step three, a vehicle detection model is built, trained with the road vehicle detection data set, and used to perform vehicle detection on the surveillance video data, specifically:
A YOLOV5s target detection model is built, and the road vehicle detection data set is split in the ratio 7:2:1 into a training set, a validation set and a test set. The YOLOV5s model is trained with these sets to obtain the trained vehicle detection model, which then performs vehicle detection on the surveillance video data.
Optionally, in step three, a vehicle speed measurement model is built and trained with the road vehicle detection data set, and vehicle ID assignment, vehicle track restoration and vehicle speed measurement are performed for each motor vehicle with the trained model, specifically:
The vehicle speed measurement model comprises a vehicle tracking model and a vehicle speed measurement module. The vehicle tracking model is a FairMOT multi-object tracking model, which is trained with the road vehicle detection data set to obtain the trained vehicle tracking model; vehicle ID assignment is performed on the surveillance video data by the vehicle tracking model, and vehicle track restoration and vehicle speed measurement are performed on the surveillance video data by the vehicle speed measurement module.
Optionally, vehicle track restoration and vehicle speed measurement are performed on the surveillance video data by the vehicle speed measurement module, specifically:
Surveillance video data are acquired, and the vehicle speed measurement module maps the track of the vehicle moving in the world coordinate system into the pixel coordinate system, obtaining the movement track and timing of the vehicle, from which the movement speed of the vehicle is calculated.
Optionally, the vehicle speed measurement part applies the idea of conversion between machine-vision coordinate systems to map the track of the vehicle moving in the world coordinate system into the pixel coordinate system, specifically:
Let the coordinates of a point P be (u, v) in the pixel coordinate system, (x, y) in the image coordinate system, (X_c, Y_c, Z_c) in the camera coordinate system and (X_w, Y_w, Z_w) in the world coordinate system. Rotating the point P by θ around the z axis of the world coordinate system gives:
x' = x\cos\theta + y\sin\theta, \quad y' = -x\sin\theta + y\cos\theta, \quad z' = z
which is deduced in matrix form as:
\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R_1\begin{bmatrix} x \\ y \\ z \end{bmatrix}
Similarly, rotating the point P by φ around the x axis and by ω around the y axis gives the corresponding matrices:
R_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & \sin\varphi \\ 0 & -\sin\varphi & \cos\varphi \end{bmatrix}
R_3 = \begin{bmatrix} \cos\omega & 0 & -\sin\omega \\ 0 & 1 & 0 \\ \sin\omega & 0 & \cos\omega \end{bmatrix}
From this the 3×3 rotation matrix R = R_1 R_2 R_3 is obtained, and the coordinate transformation from the world coordinate system to the camera coordinate system can be written as:
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix}\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
where T is a 3×1 translation (offset) vector;
From the camera coordinate system to the image coordinate system, three-dimensional coordinates are converted into two-dimensional coordinates:
Z_c\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
where f is the focal length of the camera, i.e. the distance from the optical centre of the camera to the imaging plane;
From the image coordinate system to the pixel coordinate system, the corresponding matrix is:
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
where dx and dy denote the physical length represented by each column and each row of pixels respectively, and (u_0, v_0) are the coordinates of the origin of the image coordinate system in the pixel coordinate system;
In summary, the mathematical transformation of the point P from the world coordinate system to the pixel coordinate system is:
Z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} R & T \end{bmatrix}\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
where the first matrix on the right is the camera intrinsic matrix and [R T] is the camera extrinsic matrix, obtained through Zhang Zhengyou calibration; the motion track and timing of the vehicle are obtained through the above formula.
Optionally, in step four, the ID-tagged vehicles and their tracks from step three are matched against the dividing lines of each lane from step two to obtain the number of vehicles and the average running speed in each lane, specifically:
Each vehicle is assigned to the corresponding lane by comparing the position of its track with the dividing lines of each lane; the number of vehicles and the average vehicle speed of each lane are then obtained from the vehicles assigned to that lane and their movement speeds.
Optionally, in step five, the average vehicle speed of each lane obtained in step four is compared against the preset speed range, the number of vehicles in each lane obtained in step four is compared against the preset vehicle-count range, and the road running index of each lane is evaluated accordingly, specifically:
An upper and a lower speed threshold are set for the preset speed range, an upper and a lower vehicle-count threshold are set for the preset vehicle-count range, and the road running index is divided into smooth, slightly congested and severely congested. The road running index of each lane is estimated from the relation of the average vehicle speed to the speed thresholds and of the number of vehicles to the vehicle-count thresholds, specifically:
if the average vehicle speed is greater than the upper speed threshold and the number of vehicles is less than the lower vehicle-count threshold, the road running index of the lane is estimated as smooth;
if the average vehicle speed lies within the preset speed range and the number of vehicles lies within the preset vehicle-count range, the road running index of the lane is estimated as slightly congested; or, if the average vehicle speed is less than the lower speed threshold and the number of vehicles is greater than the upper vehicle-count threshold, the road running index of the lane is estimated as severely congested.
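The threshold logic above can be sketched as a small function. The label strings and the treatment of boundary cases (any combination that is neither clearly smooth nor clearly severely congested falls back to slight congestion) are illustrative assumptions, not part of the claim:

```python
def road_running_index(avg_speed, n_vehicles,
                       speed_lo, speed_hi, count_lo, count_hi):
    """Classify one lane's running index from its average speed and
    vehicle count against the preset threshold pairs."""
    if avg_speed > speed_hi and n_vehicles < count_lo:
        return "smooth"
    if avg_speed < speed_lo and n_vehicles > count_hi:
        return "severely congested"
    # both quantities inside their preset ranges, or a mixed case
    return "slightly congested"
```

The concrete threshold values would be chosen per road section; the ones used in any deployment are not specified in the claim.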
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: the method can evaluate the road running index of each lane individually, so as to reflect the actual running condition of each lane. The FLD algorithm defines lane-line detection as finding the set of positions of the lane lines on certain rows of the image, which reduces the computational complexity to a very small range, solves the low speed of segmentation-based approaches and greatly accelerates the lane-line detection algorithm. For vehicle detection on surveillance video, a YOLOV5s target detection model is trained on a road vehicle detection data set of 100,000 real-scene images, which improves the robustness of the vehicle detection model in the application scene. The vehicle speed measurement part applies the idea of conversion between machine-vision coordinate systems, which simplifies the calculation and uses only existing surveillance facilities, without purchasing additional ranging or monitoring equipment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for evaluating a road running index according to an embodiment of the present invention;
FIG. 2 is a lane-dividing diagram of step four;
FIG. 3 is a schematic diagram of the FLD algorithm detecting lane lines;
FIG. 4 is a flow chart of the conversion between machine vision coordinate systems;
FIG. 5 is a schematic illustration of rotation θ about the z-axis;
FIG. 6 is a schematic diagram of the conversion from a camera coordinate system to an image coordinate system;
fig. 7 is a schematic diagram of the conversion from an image coordinate system to a pixel coordinate system.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a method for evaluating a road running index based on video analysis, which can evaluate the road running index of each lane individually, so as to reflect the actual running condition of each lane.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
As shown in fig. 1, a method for evaluating a road running index based on video analysis includes the steps of:
step one: acquiring monitoring video data of a road section to be evaluated;
step two: carrying out lane detection on the monitoring video data, and demarcating a boundary line in a region where a lane line is located;
step three: constructing a road vehicle detection data set; building a vehicle detection model and a vehicle speed measurement model; training both models with the road vehicle detection data set; performing vehicle detection on the surveillance video data with the trained vehicle detection model; and performing vehicle ID assignment, vehicle track restoration and vehicle speed measurement for each motor vehicle with the trained vehicle speed measurement model;
step four: matching the ID-tagged vehicles and their tracks against the dividing lines of each lane from step two to obtain the number of vehicles and the average running speed in each lane;
step five: comparing the average vehicle speed of each lane obtained in step four against the preset speed range, and the number of vehicles in each lane obtained in step four against the preset vehicle-count range, and evaluating the road running index of each lane accordingly, thereby providing data support for traffic-dispersion decisions.
In the second step, lane detection is performed on the monitoring video data, and a boundary line is defined in an area where a lane line is located, specifically:
the lane detection adopts the FLD algorithm, which defines lane-line detection as finding the set of positions of the lane lines on certain rows of the image, i.e. as row-wise position selection and classification; a dividing line is then demarcated in the region where each lane line is located.
Lane-line detection is a basic module in automatic driving, and many early lane-line detection algorithms were implemented with traditional image processing. However, as research has progressed, the scenes addressed by the lane-line detection task have become increasingly diverse and have gradually moved beyond the low-level understanding of white and yellow lines. The FLD algorithm reproduced in the present invention aims to detect lane lines in the semantic sense, even when a lane line is blurred, affected by lighting, or completely occluded.
A common segmentation algorithm classifies every pixel in the image, performing very dense computation to segment the lane lines, with the result that it is relatively slow. As shown in fig. 3, the FLD algorithm instead defines lane-line detection as finding the set of positions of the lane lines on certain rows of the image, i.e. row-wise position selection and classification. Assuming the image in which a lane line is to be detected has size H×W, a segmentation approach must handle H×W classification problems.
Because the scheme of the invention selects along the row direction, assuming selections are made on h rows, only h classification problems need to be handled, each of which is W-dimensional. The original H×W classification problems are thus reduced to h classification problems, where h is set as required but is generally much smaller than the image height H.
The method therefore reduces the number of classification problems directly from H×W to h, with h far smaller than H×W. This reduces the computational complexity to a very small range, solves the low speed of segmentation-based approaches and greatly accelerates the lane-line detection algorithm.
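A minimal sketch of the row-wise selection that replaces per-pixel segmentation. The score layout (one W-dimensional score vector per anchor row) is an assumption for illustration, not the exact output format of the FLD network:

```python
def row_anchor_positions(row_scores):
    """For each of the h anchor rows, pick the column with the highest
    score as the predicted lane-line position on that row.

    row_scores: h lists of W scores each, so only h W-way selections
    are made per lane line instead of H*W per-pixel classifications."""
    return [max(range(len(scores)), key=scores.__getitem__)
            for scores in row_scores]
```

For an image of height H = 720 with h = 56 anchor rows, this replaces 720 × W pixel decisions with 56 row selections, which is the complexity reduction described above.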
In the third step, a road vehicle detection data set is constructed, specifically:
and constructing a road vehicle detection data set containing 10 ten thousand pictures of real scenes, wherein the pictures in the data set are all obtained by shooting by a high-definition camera erected in the real road scenes.
In step three, a vehicle detection model is built, trained with the road vehicle detection data set, and used to perform vehicle detection on the surveillance video data, specifically:
A YOLOV5s target detection model is built, and the road vehicle detection data set is split in the ratio 7:2:1 into a training set, a validation set and a test set. The YOLOV5s model is trained with these sets to obtain the trained vehicle detection model, which then performs vehicle detection on the surveillance video data. After training on the training set, the vehicle detection model achieves 96% accuracy and 95% recall on the test set.
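The 7:2:1 split can be sketched as a shuffled index split; the fixed seed and the truncation-based rounding are illustrative choices, not specified by the source:

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=0):
    """Shuffle the image list and split it into training, validation
    and test sets in the 7:2:1 ratio used for the YOLOV5s model."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * ratios[0])
    n_val = int(len(items) * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

Applied to the 100,000-image data set this yields 70,000 training, 20,000 validation and 10,000 test images.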
In step three, a vehicle speed measurement model is built and trained with the road vehicle detection data set, and vehicle ID assignment, vehicle track restoration and vehicle speed measurement are performed for each motor vehicle with the trained model, specifically:
The vehicle speed measurement model comprises a vehicle tracking model and a vehicle speed measurement module. The vehicle tracking model is a FairMOT multi-object tracking model, which is trained with the road vehicle detection data set to obtain the trained vehicle tracking model, so that tracking in the application scene is more accurate. Vehicle ID assignment is performed on the surveillance video data by the vehicle tracking model, and vehicle track restoration and vehicle speed measurement are performed on the surveillance video data by the vehicle speed measurement module.
Vehicle track restoration and vehicle speed measurement are performed on the surveillance video data by the vehicle speed measurement module, specifically:
Surveillance video data are acquired and, applying the idea of conversion between machine-vision coordinate systems, the vehicle speed measurement module maps the track of the vehicle moving in the world coordinate system into the pixel coordinate system, obtaining the movement track and timing of the vehicle, from which the movement speed of the vehicle is calculated; the specific flow is shown in fig. 4.
The coordinate systems in machine vision comprise the pixel coordinate system, the image coordinate system, the camera coordinate system and the world coordinate system. The pixel coordinate system (u, v) is measured in pixels, a discrete image coordinate, and its origin is at the upper-left corner of the picture. The image coordinate system (x, y) takes the intersection of the picture's diagonals as its origin and is a continuous image (spatial) coordinate. The camera coordinate system (X_c, Y_c, Z_c) is the coordinate system from the camera's own viewpoint: its origin is at the camera's optical centre and its Z axis is parallel to the camera's optical axis, i.e. the shooting direction of the lens. The world coordinate system (X_w, Y_w, Z_w) is the reference frame for the position of the target object; its origin can be placed freely for computational convenience, e.g. on a robot base or end effector.
The vehicle speed measurement part applies the idea of conversion between machine-vision coordinate systems to map the track of the vehicle moving in the world coordinate system into the pixel coordinate system, specifically:
Let the coordinates of a point P be (u, v) in the pixel coordinate system, (x, y) in the image coordinate system, (X_c, Y_c, Z_c) in the camera coordinate system and (X_w, Y_w, Z_w) in the world coordinate system. As shown in fig. 5, rotating the point P by θ around the z axis of the world coordinate system gives:
x' = x\cos\theta + y\sin\theta, \quad y' = -x\sin\theta + y\cos\theta, \quad z' = z
which is deduced in matrix form as:
\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R_1\begin{bmatrix} x \\ y \\ z \end{bmatrix}
Similarly, rotating the point P by φ around the x axis and by ω around the y axis gives the corresponding matrices:
R_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & \sin\varphi \\ 0 & -\sin\varphi & \cos\varphi \end{bmatrix}
R_3 = \begin{bmatrix} \cos\omega & 0 & -\sin\omega \\ 0 & 1 & 0 \\ \sin\omega & 0 & \cos\omega \end{bmatrix}
From this the 3×3 rotation matrix R = R_1 R_2 R_3 is obtained, and the coordinate transformation from the world coordinate system to the camera coordinate system can be written as:
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix}\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
where T is a 3×1 translation (offset) vector;
as shown in fig. 6, the process from the camera coordinate system to the image coordinate system is a process of converting three-dimensional coordinates into two-dimensional coordinates, which is called perspective projective transformation. In order to solve the relation between the two, the common image coordinates (x, y) are expanded into a certain point in the homogeneous coordinate (x, y, 1) space, the point projected onto the image plane and the optical center of the camera are on the same straight line, a camera coordinate system is established by taking the optical center as the origin, and the coordinate system can be obtained according to the similar triangle relation:
△ABO c ~△oCO C
△PBO c ~△pCO C
Figure BDA0004051060080000102
Figure BDA0004051060080000103
the following formula can be obtained:
Figure BDA0004051060080000104
wherein f is the focal length of the camera, i.e. the distance from the optical center of the camera to the imaging plane;
as shown in fig. 7, the transformation from the image coordinate system to the pixel coordinate system follows from

$$u=\frac{x}{dx}+u_0,\qquad v=\frac{y}{dy}+v_0$$

whose matrix form is derived as follows:

$$\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}\frac{1}{dx}&0&u_0\\0&\frac{1}{dy}&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}x\\y\\1\end{bmatrix}$$
the pixel coordinate system and the image coordinate system both lie on the imaging plane; they differ only in origin and measurement unit. The origin of the image coordinate system is the intersection of the camera optical axis with the imaging plane, usually the midpoint of the imaging plane. The image coordinate system is measured in mm, a physical unit, whereas the pixel coordinate system is measured in pixels, i.e. row and column indices. In the conversion above, dx and dy denote how many mm one column and one row occupy, respectively (i.e. 1 pixel = dx mm horizontally and dy mm vertically), and (u_0, v_0) are the coordinates of the origin of the image coordinate system in the pixel coordinate system;
in summary, the mathematical transformation of the point P from the world coordinate system to the pixel coordinate system is:

$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}\frac{f}{dx}&0&u_0&0\\0&\frac{f}{dy}&v_0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}R&T\\0^{\mathsf T}&1\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}$$

wherein

$$\begin{bmatrix}\frac{f}{dx}&0&u_0\\0&\frac{f}{dy}&v_0\\0&0&1\end{bmatrix}$$

is the camera intrinsic matrix and

$$\begin{bmatrix}R&T\\0^{\mathsf T}&1\end{bmatrix}$$

is the camera extrinsic matrix; these parameters are obtained through Zhang Zhengyou's calibration method, and the motion trajectory and time of the vehicle are obtained through the above formula.
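The complete world-to-pixel chain, and the speed estimate it supports, can be sketched as below; the intrinsic matrix K, the extrinsic R and T, and the sample points are placeholders standing in for the values the patent obtains via calibration:

```python
import numpy as np

def world_to_pixel(Pw, K, R, T):
    """Map a world point to pixel coordinates:
    Zc [u v 1]^T = K (R @ Pw + T)."""
    Pc = R @ Pw + T          # world -> camera
    uvw = K @ Pc             # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]  # divide out Zc

def speed_between(Pw1, Pw2, dt):
    """Average speed (m/s) between two world positions dt seconds apart."""
    return float(np.linalg.norm(Pw2 - Pw1) / dt)

# placeholder intrinsics: f/dx = f/dy = 1000 px, principal point (960, 540)
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R, T = np.eye(3), np.array([0.0, 0.0, 20.0])   # placeholder extrinsics

uv = world_to_pixel(np.array([1.0, 0.0, 0.0]), K, R, T)  # -> (1010, 540)
v = speed_between(np.array([0.0, 0.0, 0.0]),
                  np.array([8.0, 0.0, 0.0]), dt=0.5)     # 8 m in 0.5 s
```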
In the fourth step, the vehicles with IDs and their tracks obtained in the third step are matched against the boundary lines of each lane defined in the second step to obtain the number of vehicles and the average running speed of the vehicles in each lane, specifically:
each vehicle is assigned to its corresponding lane by comparing the position of its track with the boundary lines of each lane, and the number of vehicles and the average vehicle speed of each lane are then obtained from the vehicle count of that lane and the movement speed of each vehicle. For example, if the track of vehicle a in fig. 2 falls between lane line A and lane line B, vehicle a belongs to lane one; the number of vehicles and the average vehicle speed of every other lane are obtained in the same way.
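The lane assignment just described can be sketched as a comparison of a track's mean lateral position against ordered lane-boundary positions; the boundary coordinates, tracks and speeds below are hypothetical:

```python
def assign_lane(track_x, boundaries):
    """Assign a vehicle to a lane from the mean lateral position of its
    track and the ordered lateral positions of the lane boundary lines.
    Lanes are numbered from 1; returns None if outside all lanes."""
    mean_x = sum(track_x) / len(track_x)
    for lane, (left, right) in enumerate(zip(boundaries, boundaries[1:]), start=1):
        if left <= mean_x < right:
            return lane
    return None

def lane_stats(tracks, speeds, boundaries):
    """Per-lane vehicle count and average speed."""
    totals = {}
    for track_x, speed in zip(tracks, speeds):
        lane = assign_lane(track_x, boundaries)
        if lane is None:
            continue
        count, total = totals.get(lane, (0, 0.0))
        totals[lane] = (count + 1, total + speed)
    return {lane: (n, total / n) for lane, (n, total) in totals.items()}

boundaries = [0.0, 3.5, 7.0, 10.5]            # hypothetical lane lines (m)
tracks = [[1.0, 1.2], [4.0, 4.1], [4.5, 4.3]]  # lateral positions per vehicle
speeds = [15.0, 12.0, 10.0]                    # m/s per vehicle
stats = lane_stats(tracks, speeds, boundaries)
# lane 1: one vehicle at 15 m/s; lane 2: two vehicles averaging 11 m/s
```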
In the fifth step, the average vehicle speed of each lane obtained in the fourth step is compared with the preset speed range, the number of vehicles obtained in the fourth step is compared with the preset vehicle number range, and the road running index of each lane is evaluated accordingly, specifically:
an upper and a lower speed threshold are set for the preset speed range, an upper and a lower vehicle-count threshold are set for the preset vehicle number range, and the road running index is divided into smooth, slight congestion and severe congestion; the road running index corresponding to each lane is evaluated from the relation of the average vehicle speed to the speed thresholds and the relation of the number of vehicles to the vehicle-count thresholds, specifically as follows:
if the average vehicle speed is greater than the upper speed threshold and the number of vehicles is less than the lower vehicle-count threshold, the road running index of the lane is evaluated as smooth;
if the average vehicle speed lies within the preset speed range and the number of vehicles lies within the preset vehicle number range, the road running index of the lane is evaluated as slight congestion; and if the average vehicle speed is less than the lower speed threshold and the number of vehicles is greater than the upper vehicle-count threshold, the road running index of the lane is evaluated as severe congestion.
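The threshold logic of step five can be sketched as a small classifier; all threshold values below are illustrative assumptions rather than values from the patent, and mixed cases, which the patent leaves unspecified, default to the middle class here:

```python
def road_running_index(avg_speed, n_vehicles,
                       speed_lo=20.0, speed_hi=40.0,  # km/h, illustrative
                       count_lo=5, count_hi=15):      # vehicles, illustrative
    """Classify a lane as 'smooth', 'slight congestion' or
    'severe congestion' from average speed and vehicle count."""
    if avg_speed > speed_hi and n_vehicles < count_lo:
        return "smooth"
    if avg_speed < speed_lo and n_vehicles > count_hi:
        return "severe congestion"
    if speed_lo <= avg_speed <= speed_hi and count_lo <= n_vehicles <= count_hi:
        return "slight congestion"
    # mixed cases are not specified by the patent; default to the middle class
    return "slight congestion"

# fast and empty -> smooth; moderate -> slight; slow and full -> severe
labels = [road_running_index(50.0, 3),
          road_running_index(30.0, 10),
          road_running_index(10.0, 20)]
```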
The method for evaluating the road running index based on video analysis can evaluate the road running index of each lane individually. It adopts the FLD algorithm, which frames lane line detection as finding the set of positions of certain lines of the lane line in the image; this reduces the computational complexity to a minimum, solves the problem of low segmentation speed, and greatly accelerates the lane line detection algorithm. For vehicle detection on the monitoring video, a YOLOV5s target detection model is trained on a road vehicle detection data set of 100,000 real-scene images, which improves the robustness of the vehicle detection model in the application scene. The vehicle speed measurement part applies the idea of conversion between machine-vision coordinate systems, which simplifies the calculation and uses only the existing monitoring facilities, with no need to purchase additional ranging or monitoring equipment.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the description is intended only to assist in understanding the method of the present invention and its core ideas. Modifications made by those of ordinary skill in the art in light of the present teachings likewise fall within the scope of the present invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (9)

1. A method for evaluating a road running index based on video analysis, comprising the steps of:
step one: acquiring monitoring video data of a road section to be evaluated;
step two: carrying out lane detection on the monitoring video data, and demarcating a boundary line in a region where a lane line is located;
step three: constructing a road vehicle detection data set, building a vehicle detection model and a vehicle speed measurement model, training the vehicle detection model and the vehicle speed measurement model on the road vehicle detection data set, performing vehicle detection on the monitoring video data through the trained vehicle detection model, and performing vehicle ID allocation, vehicle track restoration and vehicle speed measurement for each motor vehicle through the trained vehicle speed measurement model;
step four: matching the ID-carrying vehicles and the tracks thereof with the boundary of each lane in the step two to obtain the number of vehicles and the average running speed of the vehicles of each lane;
step five: comparing the average vehicle speed of each lane obtained in step four with the preset speed range, comparing the number of vehicles obtained in step four with the preset vehicle number range, and evaluating the road running index of each lane accordingly, thereby providing data support for traffic dispersion decisions.
2. The method for evaluating a road running index based on video analysis according to claim 1, wherein in the second step, lane detection is performed on the monitoring video data, and a dividing line is defined in a region where a lane line is located, specifically:
the lane detection adopts the FLD algorithm, which defines lane detection as finding the set of positions of the lane lines in the image, namely position selection and classification along the direction of the lines, after which a dividing line is defined in the region where each lane line is located.
3. The method for estimating a road running index based on video analysis according to claim 1, wherein in step three, a road vehicle detection data set is constructed, specifically:
a road vehicle detection data set containing 100,000 images of real scenes is constructed, wherein the images in the data set are all captured by high-definition cameras erected in real road scenes.
4. The method for evaluating a road running index based on video analysis according to claim 3, wherein in step three, a vehicle detection model is built, the vehicle detection model is trained by a road detection vehicle data set, and the vehicle detection is performed on the monitoring video data by the trained vehicle detection model, specifically:
a YOLOV5s target detection model is built, the road vehicle detection data set is divided at a ratio of 7:2:1 into a training set, a verification set and a test set, the YOLOV5s target detection model is trained through the training set, the verification set and the test set to obtain the trained vehicle detection model, and vehicle detection is performed on the monitoring video data through the trained vehicle detection model.
5. The method for evaluating a road running index based on video analysis according to claim 4, wherein in step three, a vehicle speed measurement model is built, the vehicle speed measurement model is trained by a road detection vehicle data set, and each motor vehicle is subjected to vehicle ID allocation, vehicle track restoration and vehicle speed measurement by the trained vehicle speed measurement model, specifically comprising the following steps:
the vehicle speed measurement model comprises a vehicle tracking model and a vehicle speed measurement module, wherein the vehicle tracking model is a FairMOT multi-target tracking model; the FairMOT multi-target tracking model is trained on the road vehicle detection data set to obtain the trained vehicle tracking model, and vehicle ID allocation is performed on the monitoring video data through the vehicle tracking model; vehicle track restoration and vehicle speed measurement are performed on the monitoring video data through the vehicle speed measurement module.
6. The method for evaluating a road running index based on video analysis according to claim 5, wherein the vehicle track restoration and the vehicle speed measurement are performed on the monitoring video data by the vehicle speed measurement module, specifically:
the monitoring video data are acquired, the track of the vehicle moving in the world coordinate system is mapped into the pixel coordinate system through the vehicle speed measurement module to obtain the movement track and time of the vehicle, and the movement speed of the vehicle is calculated.
7. The method for estimating a road running index based on video analysis according to claim 6, wherein the vehicle speed measurement part applies the idea of conversion between machine vision coordinate systems to map the trajectory of the vehicle moving in the world coordinate system into the pixel coordinate system, specifically:
let the coordinates of a point P be (u, v) in the pixel coordinate system, (x, y) in the image coordinate system, (X_c, Y_c, Z_c) in the camera coordinate system and (X_w, Y_w, Z_w) in the world coordinate system; rotating the point P by an angle θ about the z axis of the world coordinate system gives:

$$\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}\cos\theta&\sin\theta&0\\-\sin\theta&\cos\theta&0\\0&0&1\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}=R_1\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}$$
the deduction is as follows:

$$x=X_w\cos\theta+Y_w\sin\theta,\qquad y=-X_w\sin\theta+Y_w\cos\theta,\qquad z=Z_w$$
similarly, rotating the point P by φ about the x axis and by ω about the y axis gives the corresponding matrices:

$$R_2=\begin{bmatrix}1&0&0\\0&\cos\varphi&\sin\varphi\\0&-\sin\varphi&\cos\varphi\end{bmatrix}$$

$$R_3=\begin{bmatrix}\cos\omega&0&-\sin\omega\\0&1&0\\\sin\omega&0&\cos\omega\end{bmatrix}$$
from these the 3×3 rotation matrix $R=R_1R_2R_3$ can be obtained;
from

$$\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}=R\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}+T$$

the coordinate transformation from the world coordinate system to the camera coordinate system can be written in homogeneous form as:

$$\begin{bmatrix}X_c\\Y_c\\Z_c\\1\end{bmatrix}=\begin{bmatrix}R&T\\0^{\mathsf T}&1\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}$$

wherein T is a 3×1 offset (translation) vector;
the transformation from the camera coordinate system to the image coordinate system converts three-dimensional coordinates into two-dimensional coordinates, giving the following formula:

$$Z_c\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\0&f&0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}X_c\\Y_c\\Z_c\\1\end{bmatrix}$$
wherein f is the focal length of the camera, i.e. the distance from the optical center of the camera to the imaging plane;
from the image coordinate system to the pixel coordinate system, the corresponding matrix is:

$$\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}\frac{1}{dx}&0&u_0\\0&\frac{1}{dy}&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}x\\y\\1\end{bmatrix}$$
wherein dx and dy respectively represent how many mm each column and each row occupy (i.e. 1 pixel = dx mm horizontally and dy mm vertically), and (u_0, v_0) are the coordinates of the origin of the image coordinate system in the pixel coordinate system;
in summary, the mathematical transformation of the point P from the world coordinate system to the pixel coordinate system is:

$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}\frac{f}{dx}&0&u_0&0\\0&\frac{f}{dy}&v_0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}R&T\\0^{\mathsf T}&1\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}$$

wherein

$$\begin{bmatrix}\frac{f}{dx}&0&u_0\\0&\frac{f}{dy}&v_0\\0&0&1\end{bmatrix}$$

is the camera intrinsic matrix and

$$\begin{bmatrix}R&T\\0^{\mathsf T}&1\end{bmatrix}$$

is the camera extrinsic matrix; these parameters are obtained through Zhang Zhengyou's calibration method, and the motion trajectory and time of the vehicle are obtained through the above formula.
8. The method for estimating a road running index based on video analysis according to claim 7, wherein in step four, the vehicles with IDs and their tracks from step three are matched with the boundary lines of each lane from step two to obtain the number of vehicles and the average running speed of the vehicles in each lane, specifically:
each vehicle is assigned to its corresponding lane by comparing the position of its track with the boundary lines of each lane, and the number of vehicles and the average vehicle speed of each lane are then obtained from the vehicle count of that lane and the movement speed of each vehicle.
9. The method according to claim 8, wherein in step five, the average vehicle speed of each lane obtained in step four is compared with the preset speed range, the number of vehicles obtained in step four is compared with the preset vehicle number range, and the road running index of each lane is evaluated accordingly, specifically:
an upper and a lower speed threshold are set for the preset speed range, an upper and a lower vehicle-count threshold are set for the preset vehicle number range, and the road running index is divided into smooth, slight congestion and severe congestion; the road running index corresponding to each lane is evaluated from the relation of the average vehicle speed to the speed thresholds and the relation of the number of vehicles to the vehicle-count thresholds, specifically as follows:
if the average vehicle speed is greater than the upper speed threshold and the number of vehicles is less than the lower vehicle-count threshold, the road running index of the lane is evaluated as smooth;
if the average vehicle speed lies within the preset speed range and the number of vehicles lies within the preset vehicle number range, the road running index of the lane is evaluated as slight congestion; and if the average vehicle speed is less than the lower speed threshold and the number of vehicles is greater than the upper vehicle-count threshold, the road running index of the lane is evaluated as severe congestion.
CN202310042682.1A 2023-01-28 2023-01-28 Method for evaluating road running index based on video analysis Pending CN116311903A (en)

Publications (1)

Publication Number Publication Date
CN116311903A true CN116311903A (en) 2023-06-23

Family

ID=86778765


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116631196A (en) * 2023-07-25 2023-08-22 南京农业大学 Traffic road condition prediction method and device based on big data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230254A (en) * 2017-08-31 2018-06-29 北京同方软件股份有限公司 A kind of full lane line automatic testing method of the high-speed transit of adaptive scene switching
CN111915883A (en) * 2020-06-17 2020-11-10 西安交通大学 Road traffic condition detection method based on vehicle-mounted camera shooting
WO2021004548A1 (en) * 2019-07-08 2021-01-14 中原工学院 Vehicle speed intelligent measurement method based on binocular stereo vision system
CN112562330A (en) * 2020-11-27 2021-03-26 深圳市综合交通运行指挥中心 Method and device for evaluating road operation index, electronic equipment and storage medium
US11068713B1 (en) * 2018-07-23 2021-07-20 University Of South Florida Video-based intelligent road traffic universal analysis




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination