CN117994741A - Vehicle speed detection method, system and storage medium based on video monitoring - Google Patents


Info

Publication number
CN117994741A
Authority
CN (China)
Prior art keywords
speed, vehicle, video image, image frame, vehicle speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410009006.9A
Other languages
Chinese (zh)
Other versions
CN117994741B (en)
Inventor
陈红君
黄华茂
关金发
霍启锋
Current Assignee
Guangdong Zhishi Cloud Control Technology Co ltd
Original Assignee
Guangdong Zhishi Cloud Control Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Zhishi Cloud Control Technology Co ltd filed Critical Guangdong Zhishi Cloud Control Technology Co ltd
Priority claimed from CN202410009006.9A
Publication of CN117994741A
Application granted
Publication of CN117994741B
Status: Active
Anticipated expiration

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of artificial intelligence, and in particular to a vehicle speed detection method, system, and storage medium based on video monitoring. The method comprises the following steps: in response to a trigger instruction for executing a detection line generation process, start the process, determine edge line information of a speed measurement area based on the current video image frame, and from it determine a vehicle speed detection line in the area; perform target detection and target tracking on video image frames containing the speed measurement area to obtain bounding box information in the area, and determine the average speed and speed direction of vehicles in the area based on the vehicle speed detection line and the bounding box information; determine whether the field of view of the speed measurement area has changed based on the bounding boxes of the field-of-view change markers in two adjacent video image frames; if it has changed, start the detection line generation process to process the next video image frame; if it has not changed, execute the vehicle speed detection process. The invention can improve the accuracy and processing speed of vehicle speed detection.

Description

Vehicle speed detection method, system and storage medium based on video monitoring
Technical Field
The invention relates to the field of artificial intelligence, in particular to a vehicle speed detection method, a vehicle speed detection system and a storage medium based on video monitoring.
Background
Vehicle speed is one of the important factors affecting traffic safety. Excessive speed increases the braking distance of a vehicle and raises the risk of rear-end collisions; through speed detection, the running speed of vehicles can be effectively controlled and the occurrence of traffic accidents reduced. In addition, vehicle speed detection reflects the running state of vehicles on the road, helps traffic management departments understand traffic flow characteristics, road safety conditions, and driving behavior patterns, and provides a basis for road construction and reconstruction. Vehicle speed detection is therefore of great importance in traffic management.
Vehicle speed detection methods mainly include radar speed measurement, laser speed measurement, induction coil speed measurement, interval timing speed measurement, video monitoring speed measurement, and the like. With the development of computer technology and the falling cost of monitoring cameras, video monitoring has become a common means of monitoring traffic states in real time. Based on existing monitoring video, vehicle speed detection can be carried out by fully exploiting the information in consecutive video image frames. Compared with other speed measurement methods, video monitoring requires no extra equipment on the road surface and involves neither expensive devices nor high maintenance costs, so it can reduce the cost of vehicle speed detection and has broad development prospects.
In existing video monitoring speed measurement methods, a frame is generally extracted from the monitoring video and a vehicle speed detection line is drawn manually, then used to detect vehicle speed in all subsequent image frames of the monitoring video. Manually drawing vehicle speed detection lines involves a heavy workload, and the timeliness and accuracy of subsequent speed detection are difficult to guarantee. Moreover, a commercial monitoring camera cannot return exactly to its original working point after zooming or rotating, or may deviate from that point under external force, so the field of view changes and the vehicle speed detection line drawn manually at deployment time deviates from the traffic lane, causing large speed detection errors or outright failure. A method that automatically adapts to field-of-view changes of the monitoring camera to achieve accurate and reliable vehicle speed detection therefore has important application value.
Disclosure of Invention
Accordingly, an objective of the embodiments of the present invention is to provide a vehicle speed detection method, system, and storage medium based on video monitoring, so as to solve one or more technical problems in the prior art and at least provide a beneficial alternative or the conditions for creating one.
In one aspect, an embodiment of the present invention provides a vehicle speed detection method based on video monitoring, the method including the following steps:
S100, responding to a trigger instruction for executing a detection line generation process, starting the detection line generation process, acquiring a video image frame, and taking the video image frame as a current video image frame;
S200, determining edge line information of a speed measurement area based on a current video image frame, and determining a vehicle speed detection line in the speed measurement area based on the edge line information; the edge line information comprises edge lines of a speed measuring area, edge lines of speed measuring markers in the speed measuring area and edge lines of each vehicle in the speed measuring area;
S300, executing a vehicle speed detection process, carrying out target detection and target tracking on a video image frame containing a speed measurement area, obtaining boundary frame information in the speed measurement area, and determining the average vehicle speed and the vehicle speed direction of a vehicle in the speed measurement area based on the vehicle speed detection line and the boundary frame information; the boundary frame information comprises boundary frames and identifiers of each vehicle, boundary frames and identifiers of each field-of-view change marker and boundary frames and identifiers of each traffic flow direction marker in the speed measuring area;
S400, determining whether the field of view of the speed measurement area has changed based on the bounding boxes of the field-of-view change markers in two adjacent video image frames; if the field of view has changed, in response to the change, starting the detection line generation process, acquiring the next video image frame, taking it as the current video image frame, and returning to S200; if the field of view has not changed, returning to S300.
Optionally, in S200, the determining edge line information of the speed measurement area based on the current video image frame, and determining the vehicle speed detection line in the speed measurement area based on the edge line information includes:
S201, performing target detection and instance segmentation on the current video image frame using the pre-generated speed measurement area segmentation model to obtain the edge line information of the speed measurement area; the edge line information comprises the edge lines of the speed measurement area, the edge lines of the speed measurement markers in the area, and the edge line of each vehicle in the area;
S202, dividing the edge line into four segments at the abrupt change points of its slope, and taking the region enclosed by the four segments as the speed measurement area; the two shorter segments lie in the cross-section direction of the traffic lane and the two longer segments lie along the traffic lane direction;
S203, setting the generated vehicle speed detection line to be temporarily valid; if the detection line generation process was started in response to a field-of-view change, executing S205; if it was started in response to the trigger instruction, executing S204;
S204, determining the total number kN of vehicles in the speed measurement area based on the edge line of each vehicle therein; if kN is 0, extracting the current video image frame, setting the generated vehicle speed detection line to remain valid until the field of view changes, and executing S205; otherwise, taking the next video image frame as the current video image frame and returning to S201;
S205, determining the vehicle speed detection lines in the speed measurement area based on the speed measurement area and the speed measurement markers and vehicles therein.
Optionally, in S205, the determining a vehicle speed detection line in the speed measurement area based on the speed measurement area and the speed measurement markers and the vehicles therein includes:
S251, for a video image frame f, dropping a perpendicular from the centroid of each speed measurement marker to the centerline of the speed measurement area's edge lines along the traffic lane direction to obtain the foot of the perpendicular corresponding to each marker; grouping feet whose mutual distances are smaller than a distance threshold into the same foot array, grouping the corresponding centroids into the same centroid array, and treating each corresponding pair of foot array and centroid array as one marker array;
S252, for each marker array, fitting a straight line using all centroids and feet in the array as control points; connecting the intersection points of the fitted line with the edge lines of the speed measurement area along the traffic lane direction yields vehicle speed detection line i, the ith vehicle speed detection line being denoted A_iB_i;
S253, calculating the number of intersection points between the centerline of the speed measurement area's edge lines along the traffic lane direction and the edge line of vehicle j; if the number of intersection points is 0 for every vehicle, marking the occlusion identifier block_Q_i of each vehicle speed detection line as non-occluded; if there exists a vehicle j with more than 0 intersection points, marking the occlusion identifiers of the two vehicle speed detection lines nearest to vehicle j as occluded.
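The straight-line fit in S252 can be illustrated with a short sketch. The patent does not specify the fitting method, so ordinary least squares is assumed here; the function name and point format are ours, not the patent's:

```python
def fit_detection_line(points):
    """S252 sketch: least-squares straight-line fit through the control
    points (marker centroids plus their perpendicular feet) of one marker
    array. Returns slope m and intercept b of y = m*x + b; a vertical
    line is not handled in this simplified sketch."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    denom = n * sxx - sx * sx
    m = (n * sxy - sx * sy) / denom
    b = (sy - m * sx) / n
    return m, b
```

In practice the two endpoints A_i and B_i would then be obtained by intersecting this fitted line with the two long edge lines of the speed measurement area.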
Optionally, after S205, the method further includes:
If the generated vehicle speed detection line remains valid until the field of view changes, setting the detection line generation process to be started in response to a field-of-view change, initializing the speed measurement array F_j to empty, and then executing S300; the speed measurement array F_j comprises elements [j, A_iB_i, block_Q_i, f], where j denotes the number of the vehicle in the speed measurement area, A_iB_i denotes the ith vehicle speed detection line, block_Q_i denotes the occlusion identifier of the ith vehicle speed detection line, and f denotes the video image frame.
Optionally, after S205, the method further includes:
If the generated vehicle speed detection line is only temporarily valid, taking the next video image frame as the current video image frame and returning to S200; meanwhile, setting the detection line generation process to be started in response to a field-of-view change, initializing the speed measurement array F_j to empty, and then executing S300.
Optionally, in S300, the performing object detection and object tracking on the video image frame including the speed measurement area to obtain bounding box information in the speed measurement area includes:
Acquiring a video image frame containing the speed measurement area, and performing target detection and target tracking on it using a pre-generated field-of-view change detection model to obtain the bounding box information in the speed measurement area; the bounding box information includes the bounding box and identifier of each vehicle in the speed measurement area, the bounding box and identifier of each field-of-view change marker, and the bounding box and identifier of each traffic flow direction marker.
Optionally, in S300, the determining, based on the vehicle speed detection line and the bounding box information, an average vehicle speed and a vehicle speed direction of the vehicle in the speed measurement area includes:
S301, obtaining the bounding box information of vehicles in the speed measurement area in the current video image frame; the bounding box information of the vehicles comprises the bounding box and identifier of each vehicle in the speed measurement area and the bounding box and identifier of each traffic flow direction marker;
S302, acquiring the most recently stored speed measurement array F_j, and reading the vehicle speed detection line [A_iB_i]_save and the occlusion identifier block_Q_i from it;
S303, reading the bounding box of the vehicle whose identifier is j in the current video image frame f;
S304, calculating the number of intersection points between the vehicle speed detection line [A_iB_i]_save and the bounding box of vehicle j, and recording the element [j, A_iB_i, block_Q_i, f] when the number of intersection points is greater than 0;
S305, determining the length of the speed measurement array F_j; if the length of F_j equals 0, executing S306; if the length of F_j is greater than 0, executing S307;
S306, assigning the element [j, A_iB_i, block_Q_i, f] as the first element of the speed measurement array F_j, and returning to S301;
S307, judging whether the current vehicle speed detection line [A_iB_i]_now is the same as the A_iB_i of the last element of the speed measurement array F_j; if so, taking the next video image frame as the current video image frame and returning to S301; if not, appending the element [j, A_iB_i, block_Q_i, f] as the last element of F_j;
S308, for each vehicle j, if [block_Q_i]_last = 0 and [block_Q_i]_last-1 = 0 in the speed measurement array F_j, proceeding to S309 to calculate the average vehicle speed v between the two vehicle speed detection lines [A_iB_i]_last and [A_iB_i]_last-1; otherwise, taking the next video image frame as the current video image frame and returning to S301; where last is the index of the last element in F_j and last-1 is the index of the second-to-last element;
S309, for each vehicle j, calculating the frame number difference Δf from the video image frames [f]_last and [f]_last-1 in the speed measurement array F_j, and dividing Δf by the camera frame rate to obtain the time difference Δt; calculating the index difference ΔN between the vehicle speed detection lines [A_iB_i]_last and [A_iB_i]_last-1, and multiplying ΔN by the distance between speed measurement markers to obtain the distance difference Δs; the average vehicle speed is then v = Δs/Δt, and the vehicle speed direction is judged from the labels of the bounding boxes of the traffic flow direction markers.
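The arithmetic of S309 can be sketched directly. All names here are illustrative; the only assumptions are a known camera frame rate and a known, uniform spacing between adjacent speed measurement markers:

```python
def average_speed(frame_last, frame_prev, line_last, line_prev,
                  fps, marker_spacing_m):
    """Sketch of S309: average speed between two detection-line crossings.

    frame_last / frame_prev: video frame indices [f]_last and [f]_last-1
    line_last / line_prev:   detection-line indices of [A_iB_i]_last / _last-1
    fps:                     camera frame rate (frames per second)
    marker_spacing_m:        distance between speed measurement markers (m)
    """
    delta_t = (frame_last - frame_prev) / fps                 # Δt in seconds
    delta_s = abs(line_last - line_prev) * marker_spacing_m   # Δs in metres
    return delta_s / delta_t                                  # v = Δs/Δt (m/s)
```

For example, with a 25 fps camera and markers 15 m apart, a vehicle crossing one detection-line interval in 30 frames gives Δt = 1.2 s, Δs = 15 m, so v = 12.5 m/s (45 km/h).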
Optionally, in S400, the determining whether the field of view of the speed measurement area has changed based on the bounding boxes of the field-of-view change marker in two adjacent video image frames includes:
S410, reading the bounding box of the field-of-view change marker in the current video image frame and its bounding box in the next video image frame;
S420, determining the center coordinates of the two bounding boxes respectively, calculating the distance between the two centers, and determining that the field of view of the speed measurement area has changed if the calculated distance is greater than a distance threshold.
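A minimal sketch of S410-S420, assuming axis-aligned bounding boxes given as (x1, y1, x2, y2); the function name and box format are ours, not the patent's:

```python
def field_of_view_changed(box_a, box_b, dist_threshold):
    """S410-S420 sketch: compare the centers of a field-of-view change
    marker's bounding boxes in two adjacent frames. Returns True when
    the center-to-center distance exceeds the threshold (in pixels)."""
    ax = (box_a[0] + box_a[2]) / 2.0   # center of box in current frame
    ay = (box_a[1] + box_a[3]) / 2.0
    bx = (box_b[0] + box_b[2]) / 2.0   # center of box in next frame
    by = (box_b[1] + box_b[3]) / 2.0
    dist = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    return dist > dist_threshold
```

A marker that stays put yields a distance near zero, so small detection jitter is absorbed by the threshold rather than triggering regeneration of the detection lines.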
In another aspect, an embodiment of the present invention provides a vehicle speed detection system based on video monitoring, including:
At least one processor;
At least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
In another aspect, embodiments of the present invention provide a computer-readable storage medium in which a processor-executable program is stored, which when executed by a processor is configured to perform the above-described method.
The embodiments of the invention have the following beneficial effects: the invention automatically detects field-of-view changes in the speed measurement area and automatically generates the vehicle speed detection lines; even if a vehicle occludes a speed measurement marker, the average speed and speed direction of a vehicle can still be determined using the subset of generated vehicle speed detection lines that remain unoccluded, thereby realizing accurate, real-time vehicle speed detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of steps of a vehicle speed detection method based on video monitoring according to an embodiment of the present invention;
FIG. 2 is a workflow diagram of two concurrent processes in an embodiment of the invention;
FIG. 3 is a workflow diagram from the dataset to the two concurrent processes in an embodiment of the invention;
FIG. 4 is a flow chart of one implementation of step S300 in FIG. 1;
FIG. 5 is a flow chart of one implementation of step S400 in FIG. 1;
fig. 6 is a block diagram of a vehicle speed detection system based on video monitoring according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that although functions are divided into blocks in the schematic diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the block division or the flowchart order. The terms "first", "second", and the like in the description, claims, and figures are used to distinguish between similar elements and do not necessarily describe a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the inventive aspects may be practiced without one or more of the specific details, or with other methods, components, steps, etc. In other instances, well-known methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
As shown in fig. 1, fig. 2 and fig. 3, an embodiment of the present invention provides a vehicle speed detection method based on video monitoring, which includes the following steps:
S100, responding to a trigger instruction for executing a detection line generation process, starting the detection line generation process, acquiring a video image frame, and taking the video image frame as a current video image frame;
S200, determining edge line information of a speed measurement area based on a current video image frame, and determining a vehicle speed detection line in the speed measurement area based on the edge line information; the edge line information comprises edge lines of a speed measuring area, edge lines of speed measuring markers in the speed measuring area and edge lines of each vehicle in the speed measuring area;
S300, executing a vehicle speed detection process, carrying out target detection and target tracking on a video image frame containing a speed measurement area, obtaining boundary frame information in the speed measurement area, and determining the average vehicle speed and the vehicle speed direction of a vehicle in the speed measurement area based on the vehicle speed detection line and the boundary frame information; the boundary frame information comprises boundary frames and identifiers of each vehicle, boundary frames and identifiers of each field-of-view change marker and boundary frames and identifiers of each traffic flow direction marker in the speed measuring area;
S400, determining whether the field of view of the speed measurement area has changed based on the bounding boxes of the field-of-view change markers in two adjacent video image frames; if the field of view has changed, in response to the change, starting the detection line generation process, acquiring the next video image frame, taking it as the current video image frame, and returning to S200; if the field of view has not changed, returning to S300.
In the embodiments provided by the invention, once the vehicle speed detection line in the speed measurement area is determined, the vehicle speed detection process of step S300 can be executed to determine the average speed and speed direction of vehicles in the area. Meanwhile, the field of view of the monitoring camera is monitored automatically: when the camera deviates from its original working point through zooming, rotation, or external force, the field-of-view change is detected automatically and a new vehicle speed detection line is generated quickly, avoiding the failure mode of traditionally hand-drawn vehicle speed detection lines. Through deep learning algorithms, accurate detection and tracking of vehicles is realized, the accuracy and processing speed of vehicle speed detection are improved, and the method is suitable for a variety of complex road environments.
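As a rough illustration of how the two processes interleave, the S100-S400 control flow can be serialized as follows (the patent runs detection line generation and vehicle speed detection as concurrent processes; here the callables are hypothetical stand-ins for S200, S300 and S400, and all names are ours):

```python
def run_speed_detection(frames, generate_lines, detect_speed, fov_changed):
    """Serialized skeleton of the S100-S400 loop.

    frames:         iterator of video image frames (S100 takes the first)
    generate_lines: stand-in for S200 (detection line generation process)
    detect_speed:   stand-in for S300 (vehicle speed detection process)
    fov_changed:    stand-in for S400 (field-of-view change test on two
                    adjacent frames)
    """
    frame = next(frames)                 # S100: first frame
    lines = generate_lines(frame)        # S200: build detection lines
    prev = frame
    results = []
    for frame in frames:
        results.append(detect_speed(frame, lines))   # S300
        if fov_changed(prev, frame):                 # S400
            lines = generate_lines(frame)            # regenerate on change
        prev = frame
    return results
```

The sketch only captures the branching logic; in the patented method the detection line generation continues concurrently while speeds are already being measured between the lines generated so far.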
In some preferred embodiments, in S200, the determining edge line information of the speed measurement area based on the current video image frame, and determining a vehicle speed detection line in the speed measurement area based on the edge line information, includes:
S201, performing target detection and instance segmentation on the current video image frame using the pre-generated speed measurement area segmentation model to obtain the edge line information of the speed measurement area; the edge line information comprises the edge lines of the speed measurement area, the edge lines of the speed measurement markers in the area, and the edge line of each vehicle in the area;
S202, dividing the edge line into four segments at the abrupt change points of its slope, and taking the region enclosed by the four segments as the speed measurement area; the two shorter segments lie in the cross-section direction of the traffic lane and the two longer segments lie along the traffic lane direction;
Specifically, the edge line q_k of the speed measurement area in the current video image frame, the edge line w_i of each speed measurement marker in the area, and the edge line c_j of each vehicle in the area are obtained, where i denotes the number of a speed measurement marker detected in the area and j denotes the number of a vehicle detected in the area;
For the speed measurement area, the edge line q_k is divided into four segments at the abrupt change points of its slope: the two shorter segments q_k1 and q_k2 lie in the cross-section direction of the traffic lane, and the two longer segments q_k3 and q_k4 lie along the traffic lane direction; the centerline q_k0 of the speed measurement area is then calculated from the edge lines q_k3 and q_k4.
S203, setting the generated vehicle speed detection line to be temporarily valid; if the detection line generation process was started in response to a field-of-view change, executing S205; if it was started in response to the trigger instruction, executing S204;
S204, determining the total number kN of vehicles in the speed measurement area based on the edge line of each vehicle therein; if kN is 0, extracting the current video image frame, setting the generated vehicle speed detection line to remain valid until the field of view changes, and executing S205; otherwise, taking the next video image frame as the current video image frame and returning to S201;
S205, determining the vehicle speed detection lines in the speed measurement area based on the speed measurement area and the speed measurement markers and vehicles therein.
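The division of the edge line q_k at slope discontinuities (S202) can be sketched as a corner-detection pass over the edge polyline. The angle threshold and the point format are assumptions of ours; the patent only speaks of abrupt points of the slope:

```python
import math

def split_at_corners(polyline, angle_thresh_deg=30.0):
    """Sketch of S202: return the indices of abrupt slope changes (corners)
    in a closed edge polyline given as a list of (x, y) points. Points
    between consecutive corners form the four edge segments q_k1..q_k4."""
    n = len(polyline)
    corners = []
    for i in range(n):
        x0, y0 = polyline[i - 1]           # previous point (wraps around)
        x1, y1 = polyline[i]
        x2, y2 = polyline[(i + 1) % n]     # next point (wraps around)
        a1 = math.atan2(y1 - y0, x1 - x0)  # incoming segment direction
        a2 = math.atan2(y2 - y1, x2 - x1)  # outgoing segment direction
        # absolute turn angle, wrapped into [0, pi]
        turn = abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
        if math.degrees(turn) > angle_thresh_deg:
            corners.append(i)
    return corners
```

For a roughly quadrilateral speed measurement area this returns four corner indices, and the points between consecutive corners give the two short segments (lane cross-section) and the two long segments (lane direction) from which the centerline q_k0 can then be computed.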
Referring to fig. 2, in some embodiments, flag_auto = 0 indicates that the detection line generation process was started in response to a trigger instruction, and flag_auto = 1 indicates that it was started in response to a field-of-view change; in S100, during system deployment, the detection line generation process is started manually and flag_auto is initialized to 0. Similarly, flag_temp = 0 indicates that the generated vehicle speed detection line remains valid until the field of view changes, flag_temp = 1 indicates that it is only temporarily valid, and flag_temp is initialized to 1.
When no vehicle is present in the speed measurement area, the generated vehicle speed detection line is set to remain valid until the field of view changes, which avoids interference from vehicles with the vehicle speed detection line.
In the present invention, vehicle speed detection line generation and vehicle speed detection are performed in parallel: as soon as two or more vehicle speed detection lines have been generated, the average vehicle speed between two of them can be measured while the remaining detection lines continue to be generated.
Referring to fig. 3, it should be noted that in some embodiments a dataset is constructed by collecting expressway surveillance videos, extracting image frames, and labeling vehicles, field-of-view change markers, speed measurement areas, speed measurement markers, and traffic flow direction markers. A vehicle is any vehicle to be measured, such as a truck, passenger car, engineering vehicle, ambulance, oil tanker, or motorcycle. A field-of-view change marker is an anchor point that does not move relative to the traffic lane surface, such as a street lamp, sign board, house, or gantry, and is used to judge whether the field of view of the monitoring camera has changed. A speed measurement area is a traffic lane segment without bifurcations that contains speed measurement markers; it is divided into single-line and multi-line speed measurement areas. A speed measurement marker is an anchor point with a known length or spacing in the field of view, such as a lane dividing dash or a street lamp, used to establish the true length or distance on the road surface. A single-line speed measurement area is one in which the speed measurement markers are distributed along a single curve in the traffic lane direction, such as the lane dividing dashes in the middle of a two-lane road or a row of street lamps beside the lane. A multi-line speed measurement area is one in which the speed measurement markers are distributed along multiple curves in the traffic lane direction, such as the two lane dividing dash lines in the middle of a three-lane road, the three dash lines of a four-lane road, or two rows of street lamps on both sides of the lanes.
A traffic flow direction marker is a marker in the field of view that indicates the driving direction conforming to the traffic rules, such as an arrow on the road surface or a roadside sign board bearing an arrow.
It should be noted that, in some embodiments, the speed measurement region segmentation model is pre-generated as follows:
The speed measurement region segmentation model is a deep learning neural network model comprising an input module, a feature extraction module, a feature fusion module, a detection branch module, a segmentation branch module, a linear combination module, and an output module; it is trained for instance segmentation on a data set of vehicles, speed measurement areas, and speed measurement markers to obtain a parameter weight file of the optimized model. The input module includes a data enhancement layer and a picture scaling layer. The feature extraction module comprises convolution layers, batch normalization layers, activation function layers, and pooling layers. The feature fusion module up-samples deep feature layers and fuses them with shallow feature layers to form feature fusion layers. The detection branch module detects targets and generates mask prototype confidence coefficients for each target. The segmentation branch module generates mask prototype maps. The linear combination module linearly combines the mask confidence coefficients from the detection branch module with the mask prototypes from the segmentation branch module to obtain the instance segmentation result. The output module outputs the bounding boxes of the instances and the instance segmentation edge lines.
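The linear combination of detection-branch coefficients with segmentation-branch prototypes can be sketched as follows. This is a minimal illustration assuming a YOLACT-style design in which each detected instance carries a coefficient vector that weights shared mask prototype maps; the function name and toy shapes are hypothetical, not taken from the patent:

```python
import numpy as np

def combine_masks(prototypes, coefficients, threshold=0.5):
    """Linearly combine mask prototype maps with per-instance
    coefficients, apply a sigmoid, and threshold to binary masks.

    prototypes:   (H, W, k) array of k shared prototype maps
    coefficients: (n, k) array, one k-vector per detected instance
    returns:      (H, W, n) boolean array of instance masks
    """
    combined = prototypes @ coefficients.T        # (H, W, n)
    probs = 1.0 / (1.0 + np.exp(-combined))       # sigmoid
    return probs > threshold

# Toy check: two 4x4 prototypes; one instance selecting the first.
protos = np.stack([np.full((4, 4), 3.0), np.full((4, 4), -3.0)], axis=-1)
coeffs = np.array([[1.0, 0.0]])
inst_masks = combine_masks(protos, coeffs)
```

Because the prototypes are shared across instances, only one small coefficient vector needs to be predicted per detection, which is what makes this combination step cheap.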
In some preferred embodiments, in S205, the determining a vehicle speed detection line in the speed measurement area based on the speed measurement area and the speed measurement markers and the vehicles therein includes:
S251, for a video image frame f, drawing a perpendicular from the centroid of each speed measurement marker to the centerline, along the traffic lane direction, of the edge lines of the speed measurement area, to obtain the foot of the perpendicular corresponding to each speed measurement marker; grouping feet whose mutual distance is smaller than a distance threshold into the same foot array, grouping the centroids corresponding to the same foot array into the same centroid array, and taking each corresponding pair of foot array and centroid array as the same marker array;
S252, for each marker array, fitting a straight line using all centroids and feet in the marker array as control points, and connecting the intersection points of the fitted line with the edge lines of the speed measurement area along the traffic lane direction to obtain a vehicle speed detection line, the ith vehicle speed detection line being denoted A_iB_i;
S253, calculating the number of intersection points between the centerline, along the traffic lane direction, of the edge lines of the speed measurement area and the edge line of each vehicle j; if the number of intersection points is 0 for every vehicle, marking the blocking identifier block_Q_i of each vehicle speed detection line as non-blocked; if there is a vehicle j for which the number of intersection points is greater than 0, marking the blocking identifiers of the two vehicle speed detection lines nearest to vehicle j as blocked.
block_Q_i = 0 indicates that the ith vehicle speed detection line is not affected by an occluding vehicle and can be used to detect the vehicle speed; block_Q_i = 1 indicates that the ith vehicle speed detection line is affected by an occluding vehicle and is not used to detect the vehicle speed.
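The grouping in S251 and the fitting in S252 can be sketched as below. This simplified illustration represents each foot of a perpendicular by a scalar position along the lane centerline and fits a straight line through the control points by least squares; the function names and the scalar simplification are assumptions for illustration, not the patent's exact geometry:

```python
import numpy as np

def group_marker_feet(foot_positions, dist_threshold):
    """Group feet (scalar positions along the centerline) whose spacing
    to the previous foot is below dist_threshold, as in S251."""
    order = np.argsort(foot_positions)
    groups, current = [], [int(order[0])]
    for a, b in zip(order, order[1:]):
        if foot_positions[b] - foot_positions[a] < dist_threshold:
            current.append(int(b))
        else:
            groups.append(current)
            current = [int(b)]
    groups.append(current)
    return groups

def fit_detection_line(points):
    """Least-squares straight-line fit through (x, y) control points,
    as in S252; returns slope m and intercept c of y = m*x + c."""
    pts = np.asarray(points, dtype=float)
    m, c = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return m, c
```

Each group of indices would then yield one fitted line, clipped against the two lane-direction edge lines of the speed measurement area to give a detection line A_iB_i.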
In some preferred embodiments, after S205, the method further comprises:
If the generated vehicle speed detection line is valid until the field of view changes, setting the detection line generation process to be started in response to a field-of-view change, initializing the speed measurement array F_j to empty, and then executing S300; the speed measurement array F_j comprises elements [j, A_iB_i, block_Q_i, f], where j denotes the number of the vehicle in the speed measurement area, A_iB_i denotes the ith vehicle speed detection line, block_Q_i denotes the blocking identifier of the ith vehicle speed detection line, and f denotes the video image frame.
If the generated vehicle speed detection line is only temporarily valid, taking the next video image frame as the current video image frame and returning to S200; meanwhile, setting the detection line generation process to be started in response to a field-of-view change, initializing the speed measurement array F_j to empty, and then executing S300.
In some preferred embodiments, in S300, the performing target detection and target tracking on the video image frame containing the speed measurement area to obtain bounding box information in the speed measurement area includes:
Acquiring a video image frame containing a speed measuring region, and performing target detection and target tracking on the video image frame by using a pre-generated field change detection model to obtain boundary frame information in the speed measuring region; wherein the bounding box information includes a bounding box and an identifier for each vehicle in the speed measurement zone, a bounding box and an identifier for each field of view change marker, and a bounding box and an identifier for each traffic direction marker.
It should be noted that, in some embodiments, the field of view change detection model is pre-generated by:
The field-of-view change detection model is a deep learning neural network model comprising an input module, a feature extraction module, a feature fusion module, a prediction module, a target cropping module, a target feature extraction module, a target similarity calculation module, and a target association module; it is trained for target detection and target tracking on a data set of vehicles, field-of-view change markers, and traffic flow direction markers to obtain a parameter weight file of the optimized model. The input module includes a data enhancement layer and a picture scaling layer. The feature extraction module comprises convolution layers, batch normalization layers, activation function layers, and pooling layers. The feature fusion module up-samples deep feature layers and fuses them with shallow feature layers to form feature fusion layers. The prediction module detects large targets from deep feature fusion layers and small targets from shallow feature fusion layers, and optimizes the target detection parameter weights according to the loss function and the optimizer. The target cropping module crops the picture according to the bounding box to obtain the detected target. The target feature extraction module extracts the appearance features and motion features of the target. The target similarity calculation module calculates a similarity matrix between the preceding and following frame images in the video from the extracted features. The target association module associates the same target across frames according to the similarity matrix and a threshold, and assigns a unique identifier.
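The target association step can be sketched with a simple greedy matcher over the similarity matrix. The patent does not name a specific matching algorithm, so this greedy scheme and the function name are assumptions for illustration (Hungarian assignment would be a common alternative):

```python
import numpy as np

def associate_targets(similarity, threshold=0.5):
    """Greedily match previous-frame targets (rows) to current-frame
    targets (columns). A pair is accepted only while its similarity
    exceeds the threshold; each accepted current target inherits the
    identifier (row index) of its matched previous target.
    """
    sim = np.asarray(similarity, dtype=float).copy()
    matches = {}
    while sim.size and sim.max() > threshold:
        r, c = np.unravel_index(np.argmax(sim), sim.shape)
        matches[int(c)] = int(r)
        sim[r, :] = -np.inf   # consume the matched row and column
        sim[:, c] = -np.inf
    return matches
```

Current-frame targets left unmatched after the loop would be assigned fresh unique identifiers by the caller.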
Referring to fig. 4, in some preferred embodiments, in S300, the determining the average speed and the speed direction of the vehicle in the speed measurement area based on the speed detection line and the bounding box information includes:
S301, obtaining boundary frame information of a vehicle in a speed measuring area in a current video image frame; the boundary frame information of the vehicles comprises boundary frames and identifiers of each vehicle in a speed measuring area and boundary frames and identifiers of each traffic flow direction marker;
S302, acquiring the most recently stored speed measurement array F_j, and reading the vehicle speed detection line [A_iB_i]_save and the blocking identifier block_Q_i from the most recently stored speed measurement array F_j;
S303, reading the bounding box whose target identifier is vehicle j in the current video image frame f;
S304, calculating the number of intersection points between the vehicle speed detection line [A_iB_i]_save and the boundary frame of the vehicle j, and recording the element [j, A_iB_i, block_Q_i, f] when the number of intersection points is greater than 0;
S305, determining the length of the speed measurement array F_j; if the length of the speed measurement array F_j = 0, executing S306; if the length of the speed measurement array F_j is greater than 0, executing S307;
S306, assigning the element [j, A_iB_i, block_Q_i, f] as the first element of the speed measurement array F_j, and returning to execute S301;
S307, judging whether the current vehicle speed detection line [A_iB_i]_now is the same as the [A_iB_i] of the last element of the speed measurement array F_j; if so, taking the next video image frame as the current video image frame and returning to S301; if not, adding the element [j, A_iB_i, block_Q_i, f] as the last element of the speed measurement array F_j;
S308, for each vehicle j, in the speed measurement array F_j, if [block_Q_i]_last = 0 and [block_Q_i]_last-1 = 0, executing S309 to calculate the average vehicle speed v between the two vehicle speed detection lines [A_iB_i]_last and [A_iB_i]_last-1; otherwise, taking the next video image frame as the current video image frame and returning to execute S301; where last is the index of the last element in the speed measurement array F_j and last-1 is the index of the penultimate element in the speed measurement array F_j;
S309, for each vehicle j, calculating the frame number difference Δf from the video image frames [f]_last and [f]_last-1 of the speed measurement array F_j, and dividing Δf by the camera frame rate to obtain the time difference Δt; calculating the serial number difference ΔN between the vehicle speed detection lines [A_iB_i]_last and [A_iB_i]_last-1, multiplying ΔN by the distance between adjacent speed measurement markers to obtain the distance difference Δs, then calculating the average vehicle speed v = Δs/Δt, and judging the vehicle speed direction according to the labels of the bounding boxes of the traffic flow direction markers.
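The arithmetic of S309 can be sketched in a few lines. The function and parameter names are illustrative; marker_spacing_m stands for the known real-world distance between adjacent speed measurement markers:

```python
def average_speed(frame_last, frame_prev, line_last, line_prev,
                  fps, marker_spacing_m):
    """Average speed between two crossed detection lines, per S309:
    the time difference comes from the frame difference and camera
    frame rate, the distance from the lines crossed and marker spacing."""
    delta_t = (frame_last - frame_prev) / fps    # seconds
    delta_n = abs(line_last - line_prev)         # detection lines crossed
    delta_s = delta_n * marker_spacing_m         # metres
    return delta_s / delta_t                     # metres per second

# A vehicle crossing from line 1 to line 3 in 30 frames at 30 fps,
# with 15 m between markers, averages 30 m/s.
v = average_speed(60, 30, 3, 1, fps=30.0, marker_spacing_m=15.0)
```

Note that the accuracy of v is bounded by the frame rate (Δt is quantized to whole frames) and by how precisely the marker spacing is known.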
Referring to fig. 5, in some preferred embodiments, in S400, the determining whether the field of view of the speed measurement area is changed based on the bounding boxes of the field-of-view change marker in two adjacent video image frames includes:
S410, reading a boundary frame of a field change marker in a current video image frame and a boundary frame of a field change marker in a next video image frame;
S420, respectively determining the center coordinates of the two boundary frames, calculating the distance between the two center coordinates, and, if the calculated distance is greater than a distance threshold, determining that the field of view of the speed measurement area has changed.
Specifically, reading the bounding box whose identifier is field-of-view change marker e in the current video image frame f, and recording the parameters (xE_e, yE_e, wE_e, hE_e) of the bounding box, where e is the number of the field-of-view change marker detected in the speed measurement area, and xE_e, yE_e, wE_e, hE_e are respectively the center x coordinate, center y coordinate, width, and height of the bounding box;
Reading the bounding box whose identifier is field-of-view change marker e in the next video image frame f_next, and recording the parameters (xE_e, yE_e, wE_e, hE_e)_next;
Calculating the distance between (xE_e, yE_e) and (xE_e, yE_e)_next; when the distance is greater than the distance threshold, judging that the field of view has changed, starting the detection line generation process, and returning to S100.
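The distance test of S410-S420 reduces to comparing bounding-box centres; a minimal sketch, assuming boxes are given in (center x, center y, width, height) form as recorded above (the function name is hypothetical):

```python
import math

def view_changed(box_now, box_next, dist_threshold):
    """Return True if the centre of the same field-of-view change
    marker moved more than dist_threshold between adjacent frames,
    i.e. the camera field of view is judged to have changed."""
    dx = box_next[0] - box_now[0]
    dy = box_next[1] - box_now[1]
    return math.hypot(dx, dy) > dist_threshold
```

In practice the threshold would be tuned so that small detection jitter on a static camera does not trigger regeneration of the detection lines.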
After the detection line generation process is started, flag_auto = 1 indicates that the detection line generation process was started automatically by the vehicle speed detection process.
In summary, the embodiment provided by the invention has the following beneficial effects:
1. Compared with the traditional scheme of manually drawing vehicle speed detection lines, the invention generates vehicle speed detection lines automatically, with high processing speed and high positioning accuracy.
2. The detection line generation process can be started manually, and the field of view of the monitoring camera is detected automatically: when the camera deviates from its original working point through zooming, rotation, or the action of external force, the field-of-view change is detected automatically and the vehicle speed detection lines are quickly regenerated, avoiding the failure that afflicts traditionally hand-drawn vehicle speed detection lines.
3. In the present invention, vehicle speed detection line generation and vehicle speed detection are performed in parallel: as long as two or more vehicle speed detection lines have been generated, the average vehicle speed between two lines can be measured while other vehicle speed detection lines continue to be generated.
Referring to fig. 6, an embodiment of the present invention provides a vehicle speed detection system based on video monitoring, including:
At least one processor;
At least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
It can be seen that the content of the above method embodiment is applicable to this system embodiment; the functions specifically implemented and the beneficial effects achieved by this system embodiment are the same as those of the above method embodiment.
Furthermore, the embodiment of the invention also discloses a computer program product or a computer program, and the computer program product or the computer program is stored in a computer readable storage medium. The computer program may be read from a computer readable storage medium by a processor of a computer device, the processor executing the computer program causing the computer device to perform the method as described above. Similarly, the content in the above method embodiment is applicable to the present storage medium embodiment, and the specific functions of the present storage medium embodiment are the same as those of the above method embodiment, and the achieved beneficial effects are the same as those of the above method embodiment.
The embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that the functional modules/units in all or some of the steps of the methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the invention and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present invention, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including multiple instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or other various media capable of storing a program.
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings, and are not thereby limiting the scope of the claims of the embodiments of the present invention. Any modifications, equivalent substitutions and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present invention shall fall within the scope of the claims of the embodiments of the present invention.

Claims (10)

1. The vehicle speed detection method based on video monitoring is characterized by comprising the following steps of:
S100, responding to a trigger instruction for executing a detection line generation process, starting the detection line generation process, acquiring a video image frame, and taking the video image frame as a current video image frame;
S200, determining edge line information of a speed measurement area based on a current video image frame, and determining a vehicle speed detection line in the speed measurement area based on the edge line information; the edge line information comprises edge lines of a speed measuring area, edge lines of speed measuring markers in the speed measuring area and edge lines of each vehicle in the speed measuring area;
S300, executing a vehicle speed detection process, carrying out target detection and target tracking on a video image frame containing a speed measurement area, obtaining boundary frame information in the speed measurement area, and determining the average vehicle speed and the vehicle speed direction of a vehicle in the speed measurement area based on the vehicle speed detection line and the boundary frame information; the boundary frame information comprises boundary frames and identifiers of each vehicle, boundary frames and identifiers of each field-of-view change marker and boundary frames and identifiers of each traffic flow direction marker in the speed measuring area;
S400, determining whether the field of view of the speed measuring area is changed or not based on the boundary frame of the field of view change marker in two adjacent frames of video image frames, if the field of view of the speed measuring area is determined to be changed, responding to the field of view change, starting a detection line generation process, acquiring a next video image frame, taking the next video image frame as a current video image frame, and returning to execute S200; if it is determined that the field of view of the velocimetry zone is unchanged, S300 is performed.
2. The method according to claim 1, wherein in S200, the determining edge line information of the speed measurement area based on the current video image frame, and determining a vehicle speed detection line in the speed measurement area based on the edge line information, comprises:
S201, performing target detection and instance segmentation on a current video image frame by using a pre-generated velocimetry area segmentation model to obtain edge line information of a velocimetry area; the edge line information comprises edge lines of a speed measuring area, edge lines of speed measuring markers in the speed measuring area and edge lines of each vehicle in the speed measuring area;
S202, dividing the edge line into four sections according to abrupt points of the slope of the edge line, and taking a region surrounded by the four sections of the edge line as a velocity measurement region; wherein, the shorter two sections of edge lines are in the cross section direction of the traffic lane, and the longer two sections of edge lines are in the direction of the traffic lane;
S203, setting the generated vehicle speed detection line to be effective temporarily, and executing S205 if the detection line generation process is started in response to the field of view change; if the detection line generation process is started in response to the trigger instruction, S204 is performed;
S204, determining the total number kN of vehicles in the speed measurement area based on the edge line of each vehicle in the speed measurement area, extracting a current video image frame if the total number kN of the vehicles is 0, setting the generated vehicle speed detection line to be effective all the time before the field of view changes, and executing S205; otherwise, taking the next video image frame as the current video image frame, and returning to execute S201;
S205, determining a vehicle speed detection line in the speed measurement area based on the speed measurement area, the speed measurement markers in the speed measurement area and the vehicle.
3. The method according to claim 2, wherein in S205, the determining a vehicle speed detection line in the speed measurement area based on the speed measurement area and the speed measurement markers and the vehicles therein comprises:
S251, for a video image frame f, drawing a perpendicular from the centroid of each speed measurement marker to the centerline, along the traffic lane direction, of the edge lines of the speed measurement area, to obtain the foot of the perpendicular corresponding to each speed measurement marker; grouping feet whose mutual distance is smaller than a distance threshold into the same foot array, grouping the centroids corresponding to the same foot array into the same centroid array, and taking each corresponding pair of foot array and centroid array as the same marker array;
S252, for each marker array, fitting a straight line using all centroids and feet in the marker array as control points, and connecting the intersection points of the fitted line with the edge lines of the speed measurement area along the traffic lane direction to obtain a vehicle speed detection line, the ith vehicle speed detection line being denoted A_iB_i;
S253, calculating the number of intersection points between the centerline, along the traffic lane direction, of the edge lines of the speed measurement area and the edge line of each vehicle j; if the number of intersection points is 0 for every vehicle, marking the blocking identifier block_Q_i of each vehicle speed detection line as non-blocked; if there is a vehicle j for which the number of intersection points is greater than 0, marking the blocking identifiers of the two vehicle speed detection lines nearest to vehicle j as blocked.
4. A method according to claim 3, wherein after S205, the method further comprises:
If the generated vehicle speed detection line is valid until the field of view changes, setting the detection line generation process to be started in response to a field-of-view change, initializing the speed measurement array F_j to empty, and then executing S300; the speed measurement array F_j comprises elements [j, A_iB_i, block_Q_i, f], where j denotes the number of the vehicle in the speed measurement area, A_iB_i denotes the ith vehicle speed detection line, block_Q_i denotes the blocking identifier of the ith vehicle speed detection line, and f denotes the video image frame.
5. A method according to claim 3, wherein after S205, the method further comprises:
If the generated vehicle speed detection line is only temporarily valid, taking the next video image frame as the current video image frame and returning to S200; meanwhile, setting the detection line generation process to be started in response to a field-of-view change, initializing the speed measurement array F_j to empty, and then executing S300.
6. A method according to claim 3, wherein in S300, the performing object detection and object tracking on the video image frame including the speed measurement region to obtain bounding box information in the speed measurement region includes:
Acquiring a video image frame containing a speed measuring region, and performing target detection and target tracking on the video image frame by using a pre-generated field change detection model to obtain boundary frame information in the speed measuring region; wherein the bounding box information includes a bounding box and an identifier for each vehicle in the speed measurement zone, a bounding box and an identifier for each field of view change marker, and a bounding box and an identifier for each traffic direction marker.
7. The method of claim 6, wherein in S300, the determining the average speed and the speed direction of the vehicle in the speed measurement zone based on the speed detection line and bounding box information comprises:
S301, obtaining boundary frame information of a vehicle in a speed measuring area in a current video image frame; the boundary frame information of the vehicles comprises boundary frames and identifiers of each vehicle in a speed measuring area and boundary frames and identifiers of each traffic flow direction marker;
S302, acquiring the most recently stored speed measurement array F_j, and reading the vehicle speed detection line [A_iB_i]_save and the blocking identifier block_Q_i from the most recently stored speed measurement array F_j;
S303, reading the bounding box whose target identifier is vehicle j in the current video image frame f;
S304, calculating the number of intersection points between the vehicle speed detection line [A_iB_i]_save and the boundary frame of the vehicle j, and recording the element [j, A_iB_i, block_Q_i, f] when the number of intersection points is greater than 0;
S305, determining the length of the speed measurement array F_j; if the length of the speed measurement array F_j = 0, executing S306; if the length of the speed measurement array F_j is greater than 0, executing S307;
S306, assigning the element [j, A_iB_i, block_Q_i, f] as the first element of the speed measurement array F_j, and returning to execute S301;
S307, judging whether the current vehicle speed detection line [A_iB_i]_now is the same as the [A_iB_i] of the last element of the speed measurement array F_j; if so, taking the next video image frame as the current video image frame and returning to S301; if not, adding the element [j, A_iB_i, block_Q_i, f] as the last element of the speed measurement array F_j;
S308, for each vehicle j, in the speed measurement array F_j, if [block_Q_i]_last = 0 and [block_Q_i]_last-1 = 0, executing S309 to calculate the average vehicle speed v between the two vehicle speed detection lines [A_iB_i]_last and [A_iB_i]_last-1; otherwise, taking the next video image frame as the current video image frame and returning to execute S301; where last is the index of the last element in the speed measurement array F_j and last-1 is the index of the penultimate element in the speed measurement array F_j;
S309, for each vehicle j, calculating the frame number difference Δf from the video image frames [f]_last and [f]_last-1 of the speed measurement array F_j, and dividing Δf by the camera frame rate to obtain the time difference Δt; calculating the serial number difference ΔN between the vehicle speed detection lines [A_iB_i]_last and [A_iB_i]_last-1, multiplying ΔN by the distance between adjacent speed measurement markers to obtain the distance difference Δs, then calculating the average vehicle speed v = Δs/Δt, and judging the vehicle speed direction according to the labels of the bounding boxes of the traffic flow direction markers.
8. The method according to claim 6, wherein in S400, the determining whether the field of view of the speed measurement area is changed based on the bounding box of the field of view change marker in the two adjacent frames of video image frames includes:
S410, reading a boundary frame of a field change marker in a current video image frame and a boundary frame of a field change marker in a next video image frame;
S420, respectively determining the center coordinates of the two boundary frames, calculating the distance between the two center coordinates, and, if the calculated distance is greater than a distance threshold, determining that the field of view of the speed measurement area has changed.
9. A vehicle speed detection system based on video monitoring, comprising:
At least one processor;
At least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of any one of claims 1 to 8.
10. A computer readable storage medium, in which a processor executable program is stored, characterized in that the processor executable program is for performing the method according to any one of claims 1 to 8 when being executed by a processor.
CN202410009006.9A 2024-01-03 2024-01-03 Vehicle speed detection method, system and storage medium based on video monitoring Active CN117994741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410009006.9A CN117994741B (en) 2024-01-03 2024-01-03 Vehicle speed detection method, system and storage medium based on video monitoring


Publications (2)

Publication Number Publication Date
CN117994741A true CN117994741A (en) 2024-05-07
CN117994741B CN117994741B (en) 2024-07-12

Family

ID=90900161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410009006.9A Active CN117994741B (en) 2024-01-03 2024-01-03 Vehicle speed detection method, system and storage medium based on video monitoring

Country Status (1)

Country Link
CN (1) CN117994741B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080166023A1 (en) * 2007-01-05 2008-07-10 Jigang Wang Video speed detection system
CN105869413A (en) * 2016-06-23 2016-08-17 常州海蓝利科物联网技术有限公司 Method for measuring traffic flow and speed based on camera video
CN106056926A (en) * 2016-07-18 2016-10-26 华南理工大学 Video vehicle speed detection method based on dynamic virtual coil
CN113160299A (en) * 2021-01-28 2021-07-23 西安电子科技大学 Vehicle video speed measurement method based on Kalman filtering and computer readable storage medium
KR102323437B1 (en) * 2021-06-01 2021-11-09 시티아이랩 주식회사 Method, System for Traffic Monitoring Using Deep Learning
CN114926791A (en) * 2022-05-10 2022-08-19 北京市公安局公安交通管理局 Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
CN114937358A (en) * 2022-05-20 2022-08-23 内蒙古工业大学 Method for counting traffic flow of multiple lanes of highway
KR102448944B1 (en) * 2022-07-29 2022-09-30 시티아이랩 주식회사 Method and Device for Measuring the Velocity of Vehicle by Using Perspective Transformation
WO2023127250A1 (en) * 2021-12-27 2023-07-06 NTT DOCOMO, INC. Detection line determination device
WO2023124383A1 (en) * 2021-12-28 2023-07-06 BOE Technology Group Co., Ltd. Vehicle speed measurement method, collision early-warning method, and electronic device
CN116434159A (en) * 2023-04-13 2023-07-14 西安电子科技大学 Traffic flow statistics method based on improved YOLO V7 and Deep-Sort
CN116503818A (en) * 2023-04-27 2023-07-28 内蒙古工业大学 Multi-lane vehicle speed detection method and system
CN116884235A (en) * 2023-08-09 2023-10-13 广东省交通运输规划研究中心 Video vehicle speed detection method, device and equipment based on wire collision and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hu Yunlu (胡云鹭): "Research on Vehicle Detection, Tracking and Vehicle Speed Detection Algorithms Based on Video Images", China Master's Theses Full-text Database (Engineering Science and Technology II), no. 2018, 15 March 2018 (2018-03-15), pages 034-943 *


Similar Documents

Publication Publication Date Title
Chang et al. Argoverse: 3d tracking and forecasting with rich maps
Soilán et al. Segmentation and classification of road markings using MLS data
CN112133089B (en) Vehicle track prediction method, system and device based on surrounding environment and behavior intention
CN103155015B (en) Moving-object prediction device, virtual-mobile-object prediction device, program module, mobile-object prediction method, and virtual-mobile-object prediction method
EP3647728A1 (en) Map information system
JP2021531462A (en) Intelligent navigation methods and systems based on topology maps
CN107705577B (en) Real-time detection method and system for calibrating illegal lane change of vehicle based on lane line
CN114842450A (en) Driving region detection method, device and equipment
CN113759391A (en) Passable area detection method based on laser radar
CN114694060A (en) Road shed object detection method, electronic equipment and storage medium
CN115795808A (en) Automatic driving decision dangerous scene generation method, system, equipment and medium
Alpar et al. Intelligent collision warning using license plate segmentation
CN117372969B (en) Monitoring scene-oriented abnormal event detection method
CN117994741B (en) Vehicle speed detection method, system and storage medium based on video monitoring
CN117557600A (en) Vehicle-mounted image processing method and system
CN113189610A (en) Map-enhanced autonomous driving multi-target tracking method and related equipment
CN114492544B (en) Model training method and device and traffic incident occurrence probability evaluation method and device
Chiang et al. Fast multi-resolution spatial clustering for 3D point cloud data
CN116580551A (en) Vehicle driving behavior evaluation method, device, equipment and storage medium
Alam et al. Faster RCNN based robust vehicle detection algorithm for identifying and classifying vehicles
Karsten et al. Automated framework to audit traffic signs using remote sensing data
CN114677662A (en) Method, device, equipment and storage medium for predicting vehicle front obstacle state
CN114820931A (en) Virtual reality-based CIM (common information model) visual real-time imaging method for smart city
CN112949595A (en) Improved pedestrian and vehicle safety distance detection algorithm based on YOLOv5
CN117853975B (en) Multi-lane vehicle speed detection line generation method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Chen Hongjun

Inventor after: Huo Qifeng

Inventor before: Chen Hongjun

Inventor before: Huang Huamao

Inventor before: Guan Jinfa

Inventor before: Huo Qifeng

GR01 Patent grant