CN113689705B - Method and device for detecting red light running of vehicle, computer equipment and storage medium


Info

Publication number: CN113689705B (application CN202010425689.8A)
Authority: CN (China)
Prior art keywords: vehicle, signal lamp, image frame, frame, detection
Other languages: Chinese (zh)
Other versions: CN113689705A
Inventor: 李京 (Li Jing)
Current and original assignee: Shenzhen Fengchi Shunxing Information Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Application CN202010425689.8A filed by Shenzhen Fengchi Shunxing Information Technology Co Ltd; publication of CN113689705A; application granted; publication of CN113689705B
Legal status: Active (the legal status is an assumption and is not a legal conclusion)

Classifications

    • G (Physics) > G08 (Signalling) > G08G (Traffic control systems) > G08G1/0175 Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G (Physics) > G06 (Computing) > G06T (Image data processing or generation) > G06T7/223 Image analysis, analysis of motion using block-matching
    • H (Electricity) > H04 (Electric communication technique) > H04W (Wireless communication networks) > H04W4/025 Services making use of location information using location based information parameters
    • H04W4/027 Services making use of location information using movement velocity, acceleration information
    • H04W4/029 Location-based management or tracking services
    • H04W4/44 Services specially adapted for vehicles, for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • G06T2207/30232 Indexing scheme for image analysis: subject of image, surveillance
    • G06T2207/30236 Indexing scheme for image analysis: subject of image, traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method and an apparatus for detecting a vehicle running a red light, a computer device and a storage medium. The method comprises the following steps: acquiring a first image frame; when the vehicle is in a stationary state, detecting the signal lamp state corresponding to the first image frame; when the signal lamp state is a red light, judging from the first image frame whether the vehicle is the first vehicle behind the zebra crossing; when the vehicle is determined to be the first vehicle behind the zebra crossing, detecting the target vehicles in the first image frame; tracking the target vehicles and judging from the tracking records whether a target vehicle has run the red light; and when a target vehicle is judged to have a red light running event, sending the vehicle red light running information corresponding to that target vehicle to a server. Adopting the method improves the accuracy of detecting vehicles running red lights.

Description

Method and device for detecting red light running of vehicle, computer equipment and storage medium
Technical Field
The application relates to the field of traffic technology, and in particular to a method and an apparatus for detecting a vehicle running a red light, a computer device and a storage medium.
Background
With the construction of modern highways and the growing number of vehicles, driving one's own car has gradually become a mainstream mode of travel, bringing convenience to people's lives. However, some drivers, lacking awareness of safe driving, engage in illegal driving behaviors such as running red lights, and the frequent occurrence of such behaviors has led to a gradual increase in traffic accidents. How to detect red light running events of vehicles is therefore a matter of wide concern.
At present, cameras are generally deployed at fixed locations such as public roads, highway trunk roads, important entrances and exits, and main traffic flow channels. The cameras send the captured surveillance video to a server, and the server detects red light running events of vehicles based on that video. However, this detection method is limited by the positions and number of the cameras: its coverage area is limited, so its detection accuracy is low.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a method, an apparatus, a computer device and a storage medium for detecting a red light running of a vehicle, which can improve the accuracy of detecting the red light running of the vehicle.
A vehicle red light running detection method, the method comprising:
acquiring a first image frame;
when the vehicle is in a static state, detecting a signal lamp state corresponding to the first image frame;
when the signal lamp state is a red lamp, judging whether the vehicle is a first vehicle behind a zebra crossing according to the first image frame;
when the vehicle is determined to be the first vehicle behind the zebra crossing, detecting a target vehicle in the first image frame;
tracking the target vehicle, and judging whether the target vehicle runs the red light according to the tracking record;
and when the target vehicle is judged to have the red light running event, sending the vehicle red light running information corresponding to the target vehicle to a server.
A vehicle red light running detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring a first image frame;
the state detection module is used for detecting the state of a signal lamp corresponding to the first image frame when the vehicle is in a static state;
the zebra crossing detection module is used for judging whether the vehicle is a first vehicle behind the zebra crossing or not according to the first image frame when the signal lamp state is a red lamp;
the vehicle detection module is used for detecting a target vehicle in the first image frame when the vehicle is determined to be the first vehicle behind the zebra crossing;
the tracking module is used for tracking the target vehicle and judging whether the target vehicle runs the red light according to the tracking record;
and the sending module is used for sending the vehicle red light running information corresponding to the target vehicle to a server when the target vehicle is judged to have the red light running event.
A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the above method embodiments.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the steps of the corresponding method embodiment.
According to the above method and apparatus for detecting a vehicle running a red light, computer device and storage medium, the first image frame used for detection is dynamically acquired by a terminal arranged on a vehicle, and red light running events are detected based on that dynamically acquired frame. This enables detection of red light running events in dynamic scenes and improves their detection accuracy. Furthermore, by sequentially performing the detection stages of checking that the host vehicle is stationary, detecting the signal lamp state corresponding to the first image frame, detecting the zebra crossing, and detecting and tracking the target vehicles, the detection accuracy can be further improved while the detection efficiency is also improved.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of a method for detecting red light running of a vehicle;
FIG. 2 is a schematic flow chart illustrating a method for detecting red light running of a vehicle according to an embodiment;
FIG. 3 is a schematic diagram illustrating an embodiment of determining a signal light status corresponding to a first image frame based on signal light statuses of signal lights in the first image frame;
FIG. 4 is a schematic diagram illustrating how the candidate signal light statuses of a plurality of first image frames are combined to determine the signal light status corresponding to the currently acquired first image frame in one embodiment;
FIG. 5 is a schematic diagram illustrating a principle of determining whether a vehicle in which a terminal is located is a first vehicle behind a zebra crossing according to a first image frame in one embodiment;
FIG. 6 is a diagram illustrating the effect of obtaining a corresponding zebra crossing fit line segment from a first image frame according to an embodiment;
FIG. 7 is a schematic diagram illustrating the principle of tracking a target vehicle and determining red light running of the target vehicle according to a tracking record in one embodiment;
FIG. 8 is a schematic diagram of a target vehicle with a red light running event marked in an image frame in one embodiment;
FIG. 9 is a schematic diagram of a method for detecting red light running of a vehicle according to one embodiment;
FIG. 10 is a block diagram of a red light running detection device of a vehicle according to an embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for detecting a vehicle running a red light provided by the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The terminal 102 acquires a first image frame. When the vehicle in which the terminal 102 is located is in a stationary state, the terminal detects the signal lamp state corresponding to the first image frame. When the signal lamp state is a red light, the terminal judges from the first image frame whether the vehicle in which it is located is the first vehicle behind the zebra crossing. When the vehicle is determined to be the first vehicle behind the zebra crossing, the terminal detects and tracks the target vehicles in the first image frame and judges from the tracking records whether a target vehicle has run the red light. When a target vehicle is judged to have a red light running event, the terminal sends the corresponding red light running image frames and vehicle information to the server 104. The terminal 102 may be, but is not limited to, a personal computer, a smart phone, a portable wearable device, or another vehicle-mounted device capable of detecting red light running, such as a vehicle event data recorder, a Central Processing Unit (CPU), an embedded device, or a processor. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a method for detecting red light running of a vehicle is provided, which is described by taking the method as an example of being applied to the terminal in fig. 1, and includes the following steps:
at step 202, a first image frame is acquired.
The first image frame is a video frame extracted from a video, or an image frame extracted from an image set. An image set is a set consisting of a plurality of image frames. The image set may particularly be a sequence of image frames, i.e. a sequence consisting of a plurality of image frames ordered by an acquisition time stamp.
In one embodiment, the terminal acquires video in real time and extracts a first image frame from the acquired video. The terminal may also acquire an image set in real-time and extract a first image frame from the acquired image set. The terminal collects videos or image sets in real time through the camera. The camera can be used as a component part to be arranged in the terminal, and can also be used as an independent device to be externally connected with the terminal.
In one embodiment, the terminal may be fixedly mounted to the vehicle and may also be placed on the vehicle when it is desired to detect a red light violation by the vehicle. It is understood that the vehicle in which the terminal is located may be a motor vehicle or a non-motor vehicle, and is not limited in particular herein.
In one embodiment, the terminal acquires first image frames according to a preset detection frequency and, after each acquisition, performs the red light running detection steps based on the currently acquired first image frame. The terminal extracts the first image frames from the video or image set captured in real time according to the preset detection frequency. The preset detection frequency is the preset frequency at which first image frames are extracted from the video or image set and red light running detection is performed on them; it may be set according to actual conditions, for example 2 frames/second.
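For illustration only, the following Python snippet sketches this sampling step, extracting candidate first image frames from a live video at the preset detection frequency; the OpenCV capture source and the fallback frame rate are assumptions, since the embodiment does not specify how the video is read:

```python
import cv2

DETECTION_FREQ_HZ = 2.0  # preset detection frequency from the example above

def detection_frames(source=0):
    """Yield 'first image frames' sampled from the video at DETECTION_FREQ_HZ."""
    cap = cv2.VideoCapture(source)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # assumed fallback if FPS is unreported
    step = max(1, round(fps / DETECTION_FREQ_HZ))  # keep every step-th frame
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield frame  # candidate first image frame for red light detection
        index += 1
    cap.release()
```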
And 204, detecting the signal lamp state corresponding to the first image frame when the vehicle is in a static state.
The stationary state is the state in which the vehicle is not moving, as opposed to the driving state. The signal lamp state refers to the state of a traffic signal lamp, and may specifically include red, green, yellow and invalid, or simply red and non-red. The signal lamp state corresponding to the first image frame is the state, determined from the first image frame, of the signal lamp that the vehicle in which the terminal is located refers to when driving; specifically, it may refer to the state of that signal lamp as it appears in the first image frame. It can be understood that the signal lamp referred to when the vehicle is driving is the one that indicates whether the vehicle should keep driving, remain stationary, change from stationary to driving, or change from driving to stationary.
Besides the signal lamp that the vehicle in which the terminal is located refers to when driving, the first image frame may also contain sidewalk signal lamps and signal lamps for driving directions that differ from that vehicle's direction of travel.
Specifically, after acquiring a first image frame, the terminal judges whether the vehicle in which it is located was in a stationary state when the frame was acquired, that is, at the time point of the acquisition timestamp corresponding to the first image frame. If the vehicle is judged to be stationary, the terminal detects from the acquired first image frame the signal lamp state corresponding to it, i.e. the state of the signal lamp that the vehicle in which the terminal is located refers to when driving.
In one embodiment, the terminal acquires positioning information and inertial measurement data corresponding to the acquired first image frame and corresponding to a vehicle where the terminal is located, and judges whether the vehicle is in a static state according to the acquired positioning information and the inertial measurement data.
In one embodiment, the terminal determines a candidate signal lamp state for the currently acquired first image frame based on that frame alone, and then determines the signal lamp state corresponding to the frame from its candidate state together with the candidate states of several adjacent first image frames. It can be understood that determining the signal lamp state from a single first image frame may lead to misjudgment. The state determined from the frame itself is therefore treated only as a candidate state that the frame may correspond to, and combining the candidate states of the frame and its neighbors improves the accuracy of the final signal lamp state.
In one embodiment, the terminal detects the signal lamp state corresponding to the first image frame through a trained signal lamp detection model.
In one embodiment, when the vehicle is determined to be in a driving state, the corresponding vehicle red light running detection operation is not performed based on the first image frame.
And step 206, when the signal lamp state is a red lamp, judging whether the vehicle is the first vehicle behind the zebra crossing according to the first image frame.
Specifically, when the signal lamp state corresponding to the first image frame is judged to be a red light, the terminal performs zebra crossing semantic segmentation on the first image frame to segment out the target zebra crossing region closest to the vehicle in which it is located, and judges from that region whether the vehicle is the first vehicle behind the zebra crossing, i.e. whether it occupies the first position behind the zebra crossing.
In one embodiment, the terminal performs zebra crossing semantic segmentation on the first image frame through a trained road surface semantic segmentation model, so that whether the vehicle in which the terminal is located is the first vehicle behind the zebra crossing can be judged from the resulting segmentation, which improves both the efficiency and the accuracy of the judgment.
In one embodiment, when the signal lamp state corresponding to the first image frame is determined to be not a red lamp, the terminal does not perform the step of determining whether the vehicle is the first vehicle behind the zebra crossing based on the first image frame.
And step 208, when the vehicle is determined to be the first vehicle behind the zebra crossing, detecting the target vehicle in the first image frame.
Specifically, when it is determined from the first image frame that the vehicle in which the terminal is located is the first vehicle behind the zebra crossing, the terminal detects the target vehicles in the first image frame to obtain a vehicle detection frame for each target vehicle. The vehicle detection frame can be used to identify the position of the corresponding target vehicle in the first image frame.
In one embodiment, the terminal detects the target vehicles in the first image frame and obtains vehicle detection frame data and a vehicle classification label for each target vehicle. The vehicle detection frame data can be used to uniquely determine the corresponding vehicle detection frame in the first image frame. The vehicle classification label comprises the enterprise affiliation of the target vehicle and may also comprise a vehicle orientation, i.e. the orientation of the target vehicle relative to the vehicle in which the terminal is located, specifically the same direction or the opposite direction. The enterprise affiliation is not specifically limited; examples include "Meituan", "Ele.me", "none", and the like. In this way, when a target vehicle is judged to have a red light running event, the red light running events can be aggregated, analyzed and managed according to the vehicle classification labels.
In one embodiment, the terminal detects the target vehicle in the first image frame through the trained vehicle detection model, and therefore detection efficiency and accuracy of the target vehicle can be improved.
In one embodiment, the target vehicle includes a motor vehicle and a non-motor vehicle, and is not limited herein.
In one embodiment, when the vehicle where the terminal is located is determined not to be the first vehicle behind the zebra crossing, the terminal does not continue to perform the step of detecting the target vehicle in the first image frame.
And step 210, tracking the target vehicle, and judging the red light running of the target vehicle according to the tracking record.
Specifically, the terminal acquires a second image frame from the video or the image set, tracks each target vehicle detected from the first image frame according to the second image frame, and judges whether the target vehicle has a red light running event according to a tracking record corresponding to each target vehicle.
In one embodiment, the terminal acquires a second image frame from the video or the image set according to a preset tracking frequency, and after the second image frame is acquired each time, each target vehicle detected from the first image frame is tracked according to the currently acquired second image frame so as to update a tracking record corresponding to each target vehicle. And when the tracking record meets the red light running judgment condition, the terminal judges whether the corresponding target vehicle has a red light running event or not according to the tracking record. The red light running judgment condition is a basis or condition for judging whether the red light running judgment operation is executed on the target vehicle, for example, the tracking record corresponding to the target vehicle includes a preset number of tracking positions, that is, the tracking record is determined by the preset number of second image frames which are sequentially acquired. The terminal can determine the movement track of the target vehicle according to the corresponding tracking record of the target vehicle so as to judge the red light running of the vehicle based on the movement track. The preset tracking frequency may be customized, such as 6 frames/second.
In one embodiment, the terminal extracts first image frames from the video or image set according to the preset detection frequency, and extracts second image frames from it according to the preset tracking frequency. From each first image frame the terminal determines the target vehicles to be tracked, and adds a tracking record or corrects the current tracking record for each of them. From each second image frame the terminal tracks the target vehicles determined by the most recently acquired first image frame, so as to update their tracking records. It is to be understood that a first image frame may be understood as a detection image frame and a second image frame as a tracking image frame. The preset detection frequency need not match the preset tracking frequency; for example, the detection frequency may be 2 frames/second and the tracking frequency 6 frames/second.
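A minimal sketch of this dual-frequency loop follows. The frame sequence is assumed to arrive at the tracking frequency, so with 2 frames/second detection and 6 frames/second tracking every third frame is a detection frame; detect_vehicles and update_tracks are hypothetical stand-ins for the vehicle detection model and the tracker:

```python
DETECT_EVERY = 3  # 6 Hz tracking / 2 Hz detection: every 3rd frame is a detection frame

def run_detection_and_tracking(frames, detect_vehicles, update_tracks):
    """frames: iterable of images arriving at the tracking frequency."""
    tracks = {}  # track id -> tracking record (e.g. list of positions)
    for i, frame in enumerate(frames):
        if i % DETECT_EVERY == 0:
            # First image frame: (re)detect target vehicles and add or
            # correct the corresponding tracking records.
            detections = detect_vehicles(frame)
            tracks = update_tracks(tracks, frame, detections=detections)
        else:
            # Second image frame: only propagate the existing tracks.
            tracks = update_tracks(tracks, frame, detections=None)
    return tracks
```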
In one embodiment, after acquiring a second image frame, the terminal judges whether the vehicle in which it is located is in a stationary state at the time point of the frame's acquisition timestamp. If the vehicle is judged to be stationary, the terminal tracks the corresponding target vehicles according to the second image frame. If the vehicle is judged to be in a driving state, the terminal does not perform the tracking operation based on that frame; it may end the current red light running detection process and delete the corresponding detection records, or it may extract the next second image frame, judge again whether the vehicle is stationary, and act according to the result. If the vehicle is judged to be in a driving state for a preset number of sequentially extracted second image frames, the terminal ends the current red light running detection process and deletes the corresponding detection records. The preset number is, for example, 3.
And step 212, when the target vehicle is judged to have the red light running event, vehicle red light running information corresponding to the target vehicle is sent to a server.
The vehicle red light running information is the record or evidence that a corresponding target vehicle has run the red light. It specifically comprises the red light running image frames recording the evidence and the vehicle information, and may also comprise a camera identifier. The red light running image frames may specifically include first and/or second image frames carrying the target vehicle's red light running record. The vehicle information includes the vehicle classification label and the vehicle position information of the target vehicle. The vehicle position information refers to the position of the target vehicle, specifically its longitude and latitude. The camera identifier uniquely identifies the camera that captured the video or image set.
Specifically, when it is determined that the target vehicle detected from the first image frame has the red light running event, the terminal acquires corresponding red light running image frames and vehicle information for each target vehicle having the red light running event respectively, and sends the acquired red light running image frames and vehicle information to the server. The terminal may determine vehicle location information corresponding to the target vehicle according to the location information of the vehicle in which the terminal is located, for example, determine longitude and latitude information in the location information as vehicle location information corresponding to the target vehicle.
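A sketch of the reported payload is shown below. The field names are assumptions for illustration; the content (evidence frames, classification label, longitude and latitude taken from the terminal's own positioning, camera identifier) follows the description above:

```python
def build_red_light_report(vehicle_label, evidence_frames, terminal_lat, terminal_lon, camera_id):
    """Assemble the 'vehicle red light running information' for one target vehicle."""
    return {
        "camera_id": camera_id,              # uniquely identifies the capturing camera
        "vehicle_label": vehicle_label,      # e.g. enterprise affiliation and orientation
        "latitude": terminal_lat,            # terminal position stands in for the target's
        "longitude": terminal_lon,
        "evidence_frames": evidence_frames,  # first/second image frames carrying the record
    }
```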
In one embodiment, the server updates the information of the vehicle running the red light reported by the terminal to the database. The server can also perform statistical analysis on the vehicle red light running events of all enterprises according to the enterprise affiliations in the vehicle red light running information, so that accountability and processing are performed on corresponding enterprises based on analysis results.
According to the above method for detecting a vehicle running a red light, the first image frame used for detection is dynamically acquired by a terminal arranged on a vehicle, and red light running events are detected based on that dynamically acquired frame. This enables detection of red light running events in dynamic scenes and improves their detection accuracy. Furthermore, by sequentially performing the detection stages of checking that the host vehicle is stationary, detecting the signal lamp state corresponding to the first image frame, detecting the zebra crossing, and detecting and tracking the target vehicles, the detection accuracy can be further improved while the detection efficiency is also improved.
In one embodiment, before step 204, the method for detecting a red light running of a vehicle further includes: acquiring positioning information and inertia measurement data of a vehicle; the positioning information and the inertial measurement data correspond to a first image frame; and judging whether the vehicle is in a static state or not according to the positioning information and the inertia measurement data.
The Positioning information is information for representing a current position of a vehicle in which the terminal is located, and specifically may be GPS (Global Positioning System) information. The instantaneous speed of the vehicle in which the terminal is located may be included in the positioning information. The inertial measurement data is data collected by the inertial measurement unit, and may specifically include angular velocity and acceleration. The inertial measurement device may be specifically an IMU (inertial measurement unit), which measures angular velocity and acceleration of the respective vehicle in three-dimensional space, typically through a three-axis gyroscope, an accelerometer in three directions, and the like.
Specifically, the terminal acquires positioning information and inertial measurement data of a vehicle where the terminal is located according to the acquired first image frame, and acquisition timestamps corresponding to the positioning information, the inertial measurement data and the first image frame are all consistent. And the terminal judges whether the vehicle where the terminal is positioned is in a static state or not according to the instantaneous speed in the positioning information and the acceleration in the inertia measurement data.
In one embodiment, when the instantaneous speed in the positioning information is zero and the acceleration in the inertial measurement data is zero, the terminal judges that the vehicle is in a stationary state. It can be understood that the terminal could judge the vehicle stationary as soon as the instantaneous speed in the positioning information is zero; further checking that the acceleration in the inertial measurement data is also zero, however, improves the accuracy of the stationary-state judgment.
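A minimal sketch of this two-gate check follows, assuming the speed comes from the GPS fix and the acceleration magnitude from the IMU, both timestamped to match the first image frame; the small tolerances are assumptions, as the text compares against exact zero:

```python
def is_stationary(gps_speed_mps, imu_accel_mps2, eps_v=0.05, eps_a=0.1):
    """Return True if the host vehicle is judged to be in a stationary state."""
    # First gate: the GPS instantaneous speed must be (near) zero.
    if abs(gps_speed_mps) > eps_v:
        return False
    # Second gate: the IMU acceleration must also be (near) zero, which
    # guards against a stale or noisy GPS fix and improves accuracy.
    return abs(imu_accel_mps2) <= eps_a
```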
In one embodiment, the terminal collects the positioning information of the vehicle in which the terminal is positioned through the positioning device. The positioning device can be arranged in the terminal as a component part, and can also be externally connected to the terminal as an independent device. A positioning device such as a GPS positioning device.
In the above embodiment, according to the positioning information and the inertial measurement data of the vehicle at the time point of the acquisition timestamp corresponding to the first image frame where the terminal is located, whether the vehicle is in the static state at the time point of the acquisition timestamp is dynamically determined, so that the accuracy of determining the static state can be improved. Therefore, when the vehicle red light running detection is carried out based on the first image frame collected in the static state, the detection accuracy can be improved.
In one embodiment, detecting a signal light state corresponding to a first image frame comprises: determining the signal lamp state, the signal lamp detection frame and the corresponding initial confidence coefficient of each signal lamp in the first image frame through the trained signal lamp detection model; determining the bias degree of each signal lamp detection frame according to the designated point of each signal lamp detection frame and the vertical central axis of the first image frame; determining a corresponding target confidence coefficient according to the initial confidence coefficient and the bias degree corresponding to each signal lamp detection frame; and determining the signal lamp state corresponding to the first image frame according to the target confidence.
The signal lamp detection frame is the detection frame corresponding to a signal lamp detected in the first image frame and identifies the position of that signal lamp in the frame. The initial confidence, predicted from the first image frame by the trained signal lamp detection model, represents how credible it is that the signal lamp in the corresponding detection frame is the one that the vehicle in which the terminal is located refers to when driving. The offset degree (or degree of bias) characterizes how far the designated point of a signal lamp detection frame deviates from the vertical central axis of the first image frame, i.e. how far the corresponding signal lamp deviates from that axis. The designated point of a signal lamp detection frame may be, for example, its center point, upper left corner, upper right corner or lower right corner. The target confidence, determined from the initial confidence and the offset degree of each signal lamp detection frame, represents how credible it is that the corresponding signal lamp is the one referred to while driving. It can be understood that the initial confidence, offset degree and target confidence of a signal lamp detection frame are also those of the corresponding signal lamp.
Specifically, when the vehicle where the terminal is located is judged to be in a static state, the terminal inputs the acquired first image frame into a trained signal lamp detection model, and the signal lamps in the first image frame are detected through the signal lamp detection model, so that the signal lamp state, the signal lamp detection frame and the initial confidence coefficient corresponding to each signal lamp in the first image frame are obtained. And the terminal determines the corresponding bias degree of each signal lamp detection frame according to the distance between the designated point of each signal lamp detection frame in the first image frame and the vertical central axis of the first image frame. The terminal determines a target confidence corresponding to each signal lamp detection frame according to the initial confidence and the bias corresponding to each signal lamp detection frame, and determines a signal lamp state corresponding to the first image frame according to the target confidence corresponding to each signal lamp detection frame corresponding to the first image frame.
In one embodiment, the terminal can obtain signal lamp detection frame data corresponding to each signal lamp in the first image frame through the signal lamp detection model, and the position of the corresponding signal lamp detection frame in the first image frame can be determined according to the signal lamp detection frame data, that is, the corresponding signal lamp detection frame can be uniquely marked in the first image frame. The signal light detection frame data includes, for example, coordinates of a center point, coordinates of an upper left corner, coordinates of a lower right corner, and the like of the signal light detection frame, and further includes, for example, coordinates of a center point of the signal light detection frame, and a length and a width of the signal light detection frame, which are not specifically limited herein.
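The two box encodings mentioned above are interchangeable; a small illustrative helper (the function name is hypothetical) converts center-plus-size data into the corner coordinates:

```python
def corners_from_center(cx, cy, w, h):
    """(center x, center y, width, height) -> (top-left, bottom-right) corners."""
    return (cx - w / 2.0, cy - h / 2.0), (cx + w / 2.0, cy + h / 2.0)
```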
In one embodiment, the terminal determines the offset degree of each signal lamp detection frame through a first preset mapping relation, based on the distance between the designated point of the detection frame and the vertical central axis of the first image frame and on the width of the first image frame. Taking the center point as the designated point, the first preset mapping relation is, for example:

[formula given only as an image in the source: dscore_i as a function of d_i, width and S_dis]

where dscore_i denotes the offset degree of the i-th signal lamp detection frame, d_i denotes the distance between the center point of the i-th signal lamp detection frame and the vertical central axis of the first image frame, width denotes the image width of the first image frame, and S_dis is a custom constant value, such as 0.2.
In one embodiment, the terminal sums the initial confidence corresponding to each signal lamp detection frame with the bias degree to obtain a corresponding target confidence. The summation can be direct arithmetic summation or weighted summation, and the weight of the weighted summation can be customized.
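The computation can be sketched as follows. The exact mapping from distance to offset degree appears only as an image in the source, so the linear falloff used here (maximum bonus S_dis on the central axis, decaying to zero at the image edge) is an assumption consistent with the surrounding description; the direct arithmetic sum follows the embodiment above:

```python
S_DIS = 0.2  # custom constant value from the example in the text

def offset_degree(box_center_x, image_width, s_dis=S_DIS):
    """Assumed form of the first preset mapping relation."""
    d = abs(box_center_x - image_width / 2.0)     # distance to the vertical central axis
    return s_dis * (1.0 - 2.0 * d / image_width)  # s_dis on-axis, 0 at the image edge

def target_confidence(initial_confidence, box_center_x, image_width):
    # Direct arithmetic sum of initial confidence and offset degree; a
    # weighted sum with custom weights would also fit the embodiment.
    return initial_confidence + offset_degree(box_center_x, image_width)
```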
In an embodiment, after obtaining the target confidence corresponding to each signal lamp detection frame corresponding to the first image frame, the terminal may use the signal lamp state corresponding to the signal lamp detection frame with the highest target confidence as the signal lamp state corresponding to the first image frame. The terminal can also use the signal lamp state corresponding to the signal lamp detection frame with the maximum target confidence as the candidate signal lamp state corresponding to the first image frame, and determine the signal lamp state corresponding to the current first image frame according to the candidate signal lamp states corresponding to the first image frame and a plurality of adjacent first image frames.
In one embodiment, the training step of the signal lamp detection model includes: the method comprises the steps of obtaining a plurality of sample image frames, labeling each sample image frame, obtaining a sample signal lamp state, a sample signal lamp detection frame and a corresponding sample confidence coefficient corresponding to each signal lamp in each sample image frame, sequentially using each sample image frame as an input feature, and performing iterative training of a model by using the corresponding sample signal lamp state, the corresponding sample signal lamp detection frame and the corresponding sample confidence coefficient as expected output features to obtain a trained signal lamp detection model. The training step of the signal lamp detection model may be performed by a terminal or a server, and is not limited in this respect.
In one embodiment, the machine learning algorithm involved in training the signal lamp detection model is a neural network algorithm, and may specifically be a convolutional neural network algorithm.
In one embodiment, when only one signal lamp is detected in the first image frame, the signal lamp state of the signal lamp is used as the signal lamp state corresponding to the first image frame.
In one embodiment, after obtaining the signal lamp state, the signal lamp detection frame, and the corresponding initial confidence level of each signal lamp in the first image frame through the signal lamp detection model, the terminal may determine the signal lamp state corresponding to the first image frame directly according to the initial confidence level and the signal lamp state corresponding to each signal lamp in the first image frame, for example, determine the signal lamp state with the highest initial confidence level as the signal lamp state corresponding to the first image frame. In this embodiment, in the training stage of the signal lamp detection model, when the initial confidence corresponding to each signal lamp in the sample image frame is labeled, the position of the signal lamp in the sample image frame needs to be considered.
Fig. 3 is a schematic diagram illustrating the principle of determining the signal light state corresponding to the first image frame based on the signal light state of each signal light in the first image frame in one embodiment. The first image frame comprises a first signal lamp and a second signal lamp, the signal lamp states corresponding to the first signal lamp and the second signal lamp are respectively determined to be an invalid lamp and a red lamp through a signal lamp detection model, the initial confidence degrees corresponding to the first signal lamp and the second signal lamp are respectively 0.5 and 0.6, and signal lamp detection frames shown as dotted line frames in fig. 3 are also respectively determined. As shown in fig. 3, each signal lamp in the first image frame corresponds to a signal lamp detection box for identifying the position of the corresponding signal lamp in the first image frame. In addition, the distances from the first signal lamp and the second signal lamp to the vertical central axis of the first image frame are d1 and d2, respectively, the degree of offset of the corresponding signal lamp can be determined based on the distances, the corresponding target confidence level can be determined based on the degree of offset and the corresponding initial confidence level, and if the signal lamp state of the signal lamp with the maximum target confidence level is determined as the signal lamp state corresponding to the first image frame, the signal lamp state corresponding to the first image frame shown in fig. 3 is red. It should be noted that, as shown in fig. 3, the distance between the center point of the signal light detection frame and the vertical central axis of the first image frame is determined as the distance between the signal light corresponding to the signal light detection frame and the vertical central axis, and this distance calculation manner is only an example and is not limited in particular, for example, the distance between the upper left corner of the signal light detection frame and the vertical central axis of the first image frame may also be determined as the distance between the signal light corresponding to the signal light detection frame and the vertical central axis.
In the above embodiment, the trained signal lamp detection model is used to detect the signal lamp state, the signal lamp detection frame and the corresponding initial confidence corresponding to each signal lamp in the first image frame, so as to improve the detection efficiency and accuracy, and further, the target confidence is determined according to the offset determined by the signal lamp detection frame and the vertical central axis of the first image frame and the corresponding initial confidence, so that when the signal lamp state of the first image frame is determined based on the target confidence, the accuracy of the signal lamp state can be further improved.
In one embodiment, determining the signal lamp state corresponding to the first image frame according to the target confidence comprises: determining the signal lamp state corresponding to the signal lamp detection frame with the maximum target confidence coefficient as a candidate signal lamp state corresponding to the first image frame; updating the candidate signal lamp state to a signal lamp state queue; and determining the signal lamp state corresponding to the first image frame according to the red light occupation ratio in the updated signal lamp state queue and a preset occupation ratio threshold value.
The signal lamp state queue is a queue used for storing signal lamp states corresponding to a plurality of first image frames which are acquired in sequence. The queue length of the semaphore state queue can be preset, such as 3 or 5. The preset duty ratio threshold is a preset duty ratio threshold, and can be customized, such as 60%.
Specifically, after determining a target confidence corresponding to each signal lamp in a first image frame, the terminal screens the signal lamp with the maximum target confidence from the signal lamps according to the determined target confidence, determines a signal lamp state corresponding to the screened signal lamp as a candidate signal lamp state corresponding to the first image frame, updates and stores the candidate signal lamp state corresponding to the first image frame into a signal lamp state queue, and obtains an updated signal lamp state queue. And the terminal calculates the proportion of the red light in the updated signal lamp state queue and compares the proportion of the red light with a preset proportion threshold value. And when the red light occupation ratio is larger than or equal to a preset occupation ratio threshold, the terminal determines the signal lamp state corresponding to the first image frame as the red light. And when the ratio of the red light is smaller than a preset ratio threshold value, the terminal determines the signal lamp state corresponding to the first image frame as a non-red light.
In one embodiment, if the number of the candidate signal lamp states stored in the signal lamp state queue is greater than or equal to the preset queue length before the candidate signal lamp state corresponding to the first image frame is updated to the signal lamp state queue, the terminal deletes the candidate signal lamp state with the earliest writing time in the signal lamp state queue while writing the candidate signal lamp state corresponding to the first image frame into the signal lamp state queue.
For example, assume the existing signal lamp state queue is [1,0,1], that a red light is recorded as 1, and that the preset ratio threshold is 60%. If the candidate signal lamp state of the currently acquired first image frame is a red light, the updated queue is [0,1,1]; since the red-light ratio in the updated queue is 67%, which is greater than the 60% threshold, the signal lamp state corresponding to the first image frame is determined to be a red light. If instead the candidate state is a green light, recorded as 0, the updated queue is [0,1,0]; since the red-light ratio is then 33%, which is below the 60% threshold, the signal lamp state corresponding to the first image frame is determined to be a non-red light. It can be understood that if the candidate states include red, green, yellow and invalid lights, a red light can be recorded as 1 and green, yellow and invalid lights as 0.
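A compact sketch of this vote, using the queue length 3 and the 60% threshold from the example (the class shape itself is an illustrative assumption):

```python
from collections import deque

class SignalStateQueue:
    def __init__(self, maxlen=3, red_ratio_threshold=0.60):
        self.states = deque(maxlen=maxlen)  # oldest candidate drops automatically
        self.threshold = red_ratio_threshold

    def update(self, candidate_is_red):
        """Push one candidate state, return the voted state for the current frame."""
        self.states.append(1 if candidate_is_red else 0)  # red -> 1, anything else -> 0
        red_ratio = sum(self.states) / len(self.states)
        return "red" if red_ratio >= self.threshold else "non-red"

# With the queue holding [1, 0, 1], update(True) yields [0, 1, 1]: the red
# ratio is 67% >= 60%, so the frame's signal lamp state is judged red.
```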
Fig. 4 is a schematic diagram illustrating a principle of determining a signal lamp status corresponding to a currently acquired first image frame by combining candidate signal lamp statuses of a plurality of first image frames according to an embodiment. As shown in fig. 4, the terminal determines the bias degree corresponding to each signal lamp in the first image frame according to the distance between each signal lamp in the first image frame and the vertical central axis of the first image frame, adds the bias degree corresponding to each signal lamp and the initial confidence degree to obtain a corresponding target confidence degree, determines the signal lamp state of the signal lamp with the maximum target confidence degree as a candidate signal lamp state corresponding to the first image frame, writes the candidate signal lamp state into an existing signal lamp state queue, and votes for the candidate signal lamp state in the signal lamp state queue to determine the signal lamp state corresponding to the first image frame according to the voting result.
In the above embodiment, after determining the candidate signal lamp state corresponding to the first image frame based on the first image frame itself, the candidate signal lamp state corresponding to the first image frame and the candidate signal lamp states corresponding to the multiple first image frames adjacent to the first image frame are integrated, and the signal lamp state corresponding to the currently acquired first image frame is dynamically determined, so that the problem of false detection or missing detection when determining the corresponding signal lamp state based on a single first image frame is avoided, and the detection accuracy of the signal lamp state can be improved.
In one embodiment, the determining whether the vehicle is the first vehicle behind the zebra crossing according to the first image frame includes: performing zebra crossing semantic segmentation on the first image frame through a trained pavement semantic segmentation model to obtain a zebra crossing segmentation result; determining a target zebra crossing region according to the zebra crossing segmentation result; performing straight line fitting and truncation on the target zebra crossing region to obtain a zebra crossing fitting line segment; and judging whether the vehicle is the first vehicle behind the zebra crossing or not according to the zebra crossing fitting line segment.
The zebra crossing segmentation result marks each zebra crossing region detected in the first image frame. It can be a binary image in which the zebra crossing regions are marked with white pixels and the remaining background of the first image frame is represented by black pixels. The target zebra crossing region is the zebra crossing region closest to the vehicle in which the terminal is located.
Specifically, the terminal inputs a first image frame into a trained road surface semantic segmentation model, and performs zebra crossing semantic segmentation on the first image frame through the road surface semantic segmentation model to obtain a corresponding zebra crossing segmentation result. And the terminal screens out the zebra crossing region with the largest connected domain according to the zebra crossing segmentation result to serve as a target zebra crossing region, and performs straight line fitting and truncation on the target zebra crossing region to obtain a corresponding zebra crossing fitting line segment. And the terminal judges whether the vehicle in which the terminal is positioned is the first vehicle behind the zebra crossing or not according to the position relation between the zebra crossing fitting line segment and the first image frame.
In one embodiment, the terminal calls a pre-configured connected domain screening algorithm, detects a connected domain corresponding to each zebra crossing region from the zebra crossing segmentation result, and screens the zebra crossing region corresponding to the largest connected domain as a target zebra crossing region. Connected component screening algorithms such as opencv based algorithms. It is to be understood that the first image frame may include a plurality of zebra crossing regions, for example, a zebra crossing region closest to the vehicle where the terminal is located and a zebra crossing region opposite to the road, and the zebra crossing region closest to the vehicle can be screened out in the above manner.
In one embodiment, the terminal calls a pre-configured straight line fitting function to perform straight line fitting on a target zebra crossing region to obtain a corresponding zebra crossing fitting straight line, and cuts the zebra crossing fitting straight line according to the target zebra crossing region to obtain a corresponding zebra crossing fitting line segment. A straight line fitting function such as an opencv function. And the terminal traverses the target zebra crossing area to obtain the maximum abscissa and the minimum abscissa of the target zebra crossing area in the first image frame, and cuts the corresponding zebra crossing fitting straight line according to the maximum abscissa and the minimum abscissa to obtain the corresponding zebra crossing fitting line segment. The terminal may use the upper left corner of the first image frame as the origin of coordinates, the horizontal direction (i.e., the wide side of the first image frame) as the horizontal axis, and the vertical direction (i.e., the narrow side of the first image frame) as the vertical axis.
In one embodiment, the terminal judges whether the vehicle is the first vehicle behind the zebra crossing according to the slope of the zebra crossing fitting line segment, its maximum and minimum abscissa, and the width of the first image frame. When the slope of the fitting line segment is smaller than a first preset slope threshold, and the ratio between the segment's abscissa span and the width of the first image frame is larger than a preset ratio threshold, the terminal judges that the vehicle is the first vehicle behind the zebra crossing; otherwise, it judges that the vehicle is not. The abscissa span of the fitting line segment is the difference between its maximum and minimum abscissa. The first preset slope threshold may be, for example, 0.1, and the preset ratio threshold, for example, 0.5.
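By way of illustration only, the connected domain screening, straight line fitting, truncation and line segment morphology judgment of the above embodiments can be sketched in Python with OpenCV as follows. The function name and its defaults are illustrative assumptions; mask stands for the binary zebra crossing segmentation result, and the thresholds 0.1 and 0.5 are the example values mentioned above, not fixed parameters of the method.

import cv2
import numpy as np

def is_first_vehicle_behind_zebra(mask, slope_thresh=0.1, ratio_thresh=0.5):
    # mask: uint8 binary zebra crossing segmentation result (white = zebra).
    # Screen the zebra crossing region with the largest connected domain.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if num < 2:
        return False  # no zebra crossing region was segmented
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    ys, xs = np.nonzero(labels == largest)

    # Straight line fitting on the target zebra crossing region, then
    # truncation by the region's minimum and maximum abscissa.
    pts = np.column_stack([xs, ys]).astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    if abs(vx) < 1e-9:
        return False  # a vertical fit cannot be the zebra crossing ahead
    slope = abs(vy / vx)
    x_min, x_max = int(xs.min()), int(xs.max())

    # Line segment morphology judgment: nearly horizontal, and spanning
    # more than half of the image width.
    width = mask.shape[1]
    return slope < slope_thresh and (x_max - x_min) / width > ratio_thresh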
In one embodiment, the training step of the road surface semantic segmentation model includes: obtaining a plurality of sample image frames; labeling the zebra crossing regions in each sample image frame; binarizing the labeled frames to obtain a sample zebra crossing segmentation result for each sample image frame; and iteratively training the model, with each sample image frame taken in turn as the input feature and the corresponding sample segmentation result as the expected output feature, to obtain the trained road surface semantic segmentation model.
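As a minimal sketch of this training step only: the snippet below assumes a PyTorch environment and a data loader yielding (sample image frame, sample zebra crossing segmentation result) pairs. The framework, model architecture and hyperparameters are left unspecified by the text and are assumptions here.

import torch
import torch.nn as nn

def train_segmentation_model(model, loader, epochs=10, lr=1e-3):
    # `loader` yields (frame, mask) pairs; `mask` is the binarized labeled
    # frame described above, i.e. the expected output feature.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # per-pixel binary mask supervision
    for _ in range(epochs):
        for frame, mask in loader:
            opt.zero_grad()
            loss = loss_fn(model(frame), mask)  # compare prediction to mask
            loss.backward()
            opt.step()
    return model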
Fig. 5 is a schematic diagram of the principle of determining, from the first image frame, whether the vehicle in which the terminal is located is the first vehicle behind the zebra crossing, according to one embodiment. The terminal performs zebra crossing semantic segmentation on the first image frame, searches the resulting segmentation result for the largest connected domain as the target zebra crossing region, performs straight line fitting and truncation on that connected domain to obtain the zebra crossing fitting line segment, performs line segment morphology judgment on the segment, and determines from the judgment result whether the vehicle is the first vehicle behind the zebra crossing.
FIG. 6 illustrates the effect of obtaining the zebra crossing fitting line segment from the first image frame in one embodiment. In fig. 6, reference numeral 601 denotes a first image frame, and 601a denotes the zebra crossing in that frame closest to the vehicle in which the terminal is located. Reference numeral 602 denotes the result of performing zebra crossing semantic segmentation on the image frame shown at 601 and screening out the target zebra crossing region 602a corresponding to the zebra crossing at 601a; 602b denotes the zebra crossing fitting line segment obtained by performing straight line fitting and truncation on the target zebra crossing region.
In the above embodiment, performing zebra crossing semantic segmentation on the first image frame through the road surface semantic segmentation model yields the corresponding segmentation result quickly and accurately, which in turn improves the accuracy of judging whether the vehicle in which the terminal is located is the first vehicle behind the zebra crossing.
In one embodiment, detecting the target vehicles in the first image frame comprises: inputting the first image frame into a trained vehicle detection model to obtain a vehicle detection frame corresponding to each target vehicle in the first image frame. The method for detecting red light running of the vehicle further comprises: when a tracker matching the vehicle detection frame exists, updating the vehicle tracking frame in the matched tracker according to that detection frame; and when no matching tracker exists, newly building a tracker for the corresponding target vehicle and initializing its vehicle tracking frame according to the vehicle detection frame.
A tracker can also be understood as a tracking queue: each target vehicle corresponds to one tracker, which stores the vehicle tracking frames obtained by tracking that vehicle. Tracking of the target vehicle is realized from the vehicle tracking frames stored in sequence in its tracker; that is, the motion trail of the vehicle can be determined from them. A vehicle tracking frame is the tracking frame obtained when the target vehicle is tracked in a second image frame, and marks the position of the target vehicle in that frame.
Specifically, when it is judged from the first image frame that the vehicle in which the terminal is located is the first vehicle behind the zebra crossing, the terminal inputs the first image frame into the trained vehicle detection model and detects each target vehicle, obtaining the corresponding vehicle detection frames. For each vehicle detection frame, the terminal judges whether a matching tracker exists by comparing the detection frame against the vehicle tracking frame with the latest update time in each existing tracker. When a matching tracker exists, the target vehicle corresponding to the detection frame is already being tracked; the terminal updates the vehicle tracking frame in the matched tracker according to the detection frame so as to correct it, and records the current time as that tracker's update time. When no matching tracker exists, the target vehicle corresponding to the detection frame has newly entered the camera's field of view; the terminal newly builds a tracker for it, with the vehicle detection frame as the initial vehicle tracking frame.
In one embodiment, the training step of the vehicle detection model includes: obtaining a training sample set comprising a plurality of sample image frames and, for each target vehicle in each frame, a sample vehicle detection frame; and performing model training with the sample image frames as input features and the corresponding sample detection frames as expected output features, to obtain the trained vehicle detection model. It is to be understood that the training sample set may further include sample vehicle classification labels, so that the trained model detects both a vehicle detection frame and a vehicle classification label for each target vehicle in the first image frame; in this case the vehicle detection model integrates the detection and classification operations. Alternatively, the terminal may detect the vehicle detection frames through the trained vehicle detection model and obtain the classification labels through a separate trained vehicle classification model, using each detection frame together with the first image frame. The terminal or the server may jointly train the vehicle detection model and the vehicle classification model on the training sample set.
In one embodiment, after obtaining the vehicle detection frames of the first image frame, the terminal computes the intersection over union between each vehicle detection frame and the vehicle tracking frame with the latest update time in each existing tracker, and determines the matching relationships between detection frames and trackers from these values. A vehicle detection frame matches a tracker when their intersection over union is greater than or equal to a preset threshold, which may be customized, for example 60%. Concretely, the terminal forms a matrix of the intersection-over-union values between every detection frame and every latest tracking frame, screens the largest value from the matrix, establishes the match between the corresponding detection frame and tracker, deletes the row and column containing the screened value so as to reduce the matrix's dimensions, and then repeats the screening step on the reduced matrix, and so on, until every vehicle detection frame that has a matching tracker has been matched.
During this matching process, if the largest remaining intersection-over-union value is smaller than the preset threshold, the corresponding vehicle detection frame has no matching tracker, and no matching relationship is established for it. When the iteration ends, that is, after every vehicle detection frame of the first image frame or every latest vehicle tracking frame has been traversed, the terminal newly builds a tracker for each detection frame left without a match; a tracker left without a match simply has no vehicle tracking frame updated from this first image frame.
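A sketch of this greedy matching procedure, under the assumption of axis-aligned (x1, y1, x2, y2) boxes, is given below. Blanking a row and column of the matrix plays the role of the deletion and dimension reduction described above, and the 60% threshold is the example value from the text.

import numpy as np

def iou(a, b):
    # Intersection over union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_detections_to_trackers(detections, latest_tracking_frames, iou_thresh=0.6):
    # Build the IoU matrix between every vehicle detection frame and the
    # latest vehicle tracking frame of every tracker, then greedily screen
    # the largest entry, establish that match, and blank its row and column.
    if not detections or not latest_tracking_frames:
        return [], list(range(len(detections)))
    m = np.array([[iou(d, t) for t in latest_tracking_frames] for d in detections])
    matches, unmatched = [], set(range(len(detections)))
    while True:
        i, j = np.unravel_index(np.argmax(m), m.shape)
        if m[i, j] < iou_thresh:
            break  # remaining detection frames have no matching tracker
        matches.append((int(i), int(j)))
        unmatched.discard(int(i))
        m[i, :] = -1.0  # equivalent to deleting row i ...
        m[:, j] = -1.0  # ... and column j from the matrix
    return matches, sorted(unmatched)  # unmatched detections spawn new trackers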
In one embodiment, if over several sequentially acquired first image frames no vehicle detection frame matches a given tracker, the target vehicle corresponding to that tracker has exited the camera's field of view, and the terminal deletes or locks the tracker.
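Continuing the sketch above, the tracker bookkeeping of the preceding embodiments — correcting matched trackers, newly building trackers for unmatched detection frames, and removing trackers whose targets have left the field of view — might look as follows. The Tracker container and the miss limit are illustrative assumptions, not structures fixed by the text.

import time
from dataclasses import dataclass, field

@dataclass
class Tracker:
    frames: list = field(default_factory=list)  # vehicle tracking frames in record order
    updated_at: float = 0.0                     # update time of the latest tracking frame
    misses: int = 0                             # consecutive frames without a match

def update_trackers(trackers, detections, matches, unmatched_dets, max_misses=5):
    now = time.time()
    matched_trk = {j for _, j in matches}
    for i, j in matches:
        # Correct the matched tracker's latest vehicle tracking frame and
        # record the current time as its update time.
        trackers[j].frames.append(detections[i])
        trackers[j].updated_at = now
        trackers[j].misses = 0
    for j, trk in enumerate(trackers):
        if j not in matched_trk:
            trk.misses += 1  # target may be leaving the camera's field of view
    for i in unmatched_dets:
        # A vehicle newly entering the field of view: newly build a tracker,
        # with the detection frame as the initial vehicle tracking frame.
        trackers.append(Tracker(frames=[detections[i]], updated_at=now))
    # Delete trackers whose targets have exited the field of view.
    trackers[:] = [t for t in trackers if t.misses <= max_misses]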
In the above embodiment, detecting the vehicle detection frames through the trained vehicle detection model improves both the accuracy and the efficiency of detection; newly building trackers from these more accurate detection frames, or correcting the latest vehicle tracking frame in a matched tracker, in turn improves the accuracy of the red light running judgment made from the vehicle tracking frames recorded in each tracker.
In one embodiment, step 210 includes: acquiring a second image frame; updating the vehicle tracking frame in the tracker corresponding to each target vehicle according to the second image frame; performing straight line fitting on the motion trail of the corresponding target vehicle according to the vehicle tracking frames recorded in each tracker meeting the red light running judgment condition, to obtain a trajectory fitting straight line; and judging whether the corresponding target vehicle runs the red light according to the trajectory fitting straight line.
The red light running judgment condition may be, for example, that the tracker contains a preset number of vehicle tracking frames, each of which determines the tracking position of the corresponding target vehicle in the corresponding second image frame.
Specifically, the terminal acquires a second image frame from the video or image set and, through a preset tracking algorithm, updates the vehicle tracking frame with the latest update time in the tracker of each target vehicle currently being tracked. After the vehicle tracking frames are updated, the terminal checks each tracker's tracking record against the red light running judgment condition; for each tracker meeting the condition, it performs straight line fitting on the motion trail of the corresponding target vehicle from the recorded vehicle tracking frames, obtaining a trajectory fitting straight line, and judges from that line whether a red light running event exists for the corresponding target vehicle.
In one embodiment, when a tracker meets the red light running judgment condition, the terminal performs straight line fitting on the motion trail of the corresponding target vehicle using the center points of the vehicle tracking frames recorded in sequence in that tracker, obtaining the trajectory fitting straight line. It can be understood that connecting the center points of the vehicle tracking frames in recording order gives the motion trail of the target vehicle, and fitting a straight line to those center points gives the trajectory fitting straight line of that trail.
In one embodiment, the terminal judges whether a red light running event exists for the corresponding target vehicle according to the slope of the trajectory fitting straight line and the positional relationship between that line and the second image frame. The terminal judges that a red light running event exists when the slope of the trajectory fitting straight line is greater than or equal to a second preset slope threshold and the line intersects the wide side of the second image frame, or when the slope is greater than or equal to the second preset slope threshold and the ratio of the intersection point's abscissa to the width of the second image frame falls within a preset ratio range. The abscissa of the intersection point refers to where the trajectory fitting straight line meets the lower edge of the second image frame. The second preset slope threshold may be, for example, 2, and the preset ratio range, for example, [0.1, 0.9].
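The trajectory fitting and judgment of the above two embodiments can be sketched as follows; centers are the center points of the recorded vehicle tracking frames in image coordinates (origin at the upper left), and the thresholds are the example values 2 and [0.1, 0.9] rather than fixed parameters.

import numpy as np

def runs_red_light(centers, frame_w, frame_h,
                   slope_thresh=2.0, ratio_range=(0.1, 0.9)):
    # `centers` holds the center points of the vehicle tracking frames in
    # recording order; the upper left corner of the frame is the origin.
    if len(centers) < 2:
        return False  # the red light running judgment condition is not met
    xs = np.array([c[0] for c in centers], dtype=float)
    ys = np.array([c[1] for c in centers], dtype=float)
    if np.ptp(xs) < 1e-6:
        slope, x_bottom = np.inf, xs.mean()  # vertical trail: straight ahead
    else:
        k, b = np.polyfit(xs, ys, 1)         # trajectory fitting straight line
        if abs(k) < 1e-9:
            return False                     # horizontal line never meets the lower edge
        slope = abs(k)
        x_bottom = (frame_h - b) / k         # intersection with the lower edge
    ratio = x_bottom / frame_w
    return slope >= slope_thresh and ratio_range[0] <= ratio <= ratio_range[1]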
In one embodiment, when it is judged from a tracker meeting the red light running judgment condition that a red light running event exists for the corresponding target vehicle, the terminal marks or deletes that tracker so as to avoid repeatedly detecting the same event from it. When it is judged that no red light running event exists, the target vehicle continues to be tracked in the manner provided in one or more embodiments of the present application: the vehicle tracking frame in its tracker continues to be updated, and the red light running judgment continues to be performed on it from the updated tracker.
In one embodiment, the terminal sequentially acquires second image frames from the video or the image set according to a preset tracking frequency, and respectively executes the target vehicle tracking and red light running judgment process for each sequentially acquired second image frame.
Fig. 7 is a schematic diagram of tracking the target vehicle and judging red light running from the tracking record in one embodiment. As shown in fig. 7, reference numeral 701 denotes the vehicle tracking frame obtained by tracking a target vehicle in the (N-1)-th second image frame, 702 the tracking frame obtained in the N-th second image frame, 703 the tracking frame obtained in the (N+1)-th second image frame, and 704 the trajectory fitting straight line obtained by straight line fitting over the vehicle tracking frames of the target vehicle. It is to be understood that fig. 7 merely illustrates the tracking frames determined for one target vehicle from three second image frames, and is not intended as a limitation.
In the above embodiment, the vehicle tracking frame in the tracker corresponding to each target vehicle is updated based on the second image frame, and when the red light running judgment condition is met, the red light running judgment is performed on the corresponding target vehicle based on the vehicle tracking frame recorded in the corresponding tracker, so that the judgment accuracy can be improved.
FIG. 8 is a schematic diagram of a target vehicle marked in an image frame for the presence of a red light running event in one embodiment. Reference numerals 801 and 802 in fig. 8 each denote an image frame in which a target vehicle with a red light running event is marked, and 801a and 802a denote those target vehicles. It can be understood that, after detecting through the above vehicle red light running detection process that a target vehicle has a red light running event, the terminal may send to the server vehicle red light running information carrying the corresponding red-light-running image frames and the vehicle tracking frame of the target vehicle in those frames; or it may first mark the target vehicle in the corresponding image according to the vehicle tracking frame and send vehicle red light running information carrying the marked image frames.
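A minimal sketch of this marking step, assuming OpenCV; the field names of the report payload are illustrative, not a format prescribed by the text.

import cv2

def mark_and_build_report(image_frame, tracking_frame, enterprise, position):
    # Mark the target vehicle in the red-light-running image frame with its
    # vehicle tracking frame, then assemble the information to be reported.
    x1, y1, x2, y2 = map(int, tracking_frame)
    cv2.rectangle(image_frame, (x1, y1), (x2, y2), (0, 0, 255), 2)  # red box (BGR)
    return {
        "image_frame": image_frame,  # red-light-running image frame, marked
        "enterprise": enterprise,    # enterprise attribution
        "position": position,        # vehicle position information
    }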
FIG. 9 is a schematic flow diagram of the method for detecting red light running of a vehicle in one embodiment. When the detection process starts, the terminal acquires a first image frame and judges whether the vehicle in which it is located is in a static state at the acquisition timestamp corresponding to that frame. If the vehicle is static, the terminal performs signal lamp detection on the first image frame and judges whether the signal lamp state is red. If it is red, the terminal performs zebra crossing semantic segmentation on the first image frame and judges whether the vehicle is the first vehicle behind the zebra crossing. If it is, the terminal detects the target vehicles in the first image frame and tracks them to judge whether a red light running event exists. When a red light running event is found for a target vehicle, the terminal stores the vehicle red light running information, carrying the corresponding red-light-running image frames, enterprise attribution, vehicle position information and the like, and reports it to the central management platform so that the platform updates its database accordingly; after this operation completes, the detection process ends. It is to be understood that the central management platform is also referred to as the server in one or more embodiments of the present application.
Further, when the vehicle in which the terminal is located is judged not to be in a static state, or the signal lamp state corresponding to the first image frame is judged not to be red, or the vehicle is judged not to be the first vehicle behind the zebra crossing, the terminal returns to the step of acquiring a first image frame and continues. When it is judged that a target vehicle has no red light running event, the corresponding tracking record is discarded or deleted after the detection process ends.
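Putting the whole flow of fig. 9 together, a high-level skeleton might look as follows. Every stage is injected as a callable so the sketch stays self-contained; all stage names are stand-ins for the detection steps described above, not a prescribed API.

def red_light_running_detection_loop(frames, is_stationary, signal_state,
                                     first_behind_zebra, detect_vehicles,
                                     track_and_judge, report):
    # Failing any early check returns to acquiring the next first image
    # frame, mirroring the back edges of the flowchart in fig. 9.
    for frame in frames:                      # acquire a first image frame
        if not is_stationary(frame):          # vehicle static state detection
            continue
        if signal_state(frame) != "red":      # signal lamp state detection
            continue
        if not first_behind_zebra(frame):     # zebra crossing judgment
            continue
        detections = detect_vehicles(frame)   # target vehicle detection
        for violator in track_and_judge(detections):  # tracking + judgment
            report(violator)                  # vehicle red light running information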
In the above embodiment, the monitoring of the red light running event of the vehicle is realized by sequentially executing the detection processes of vehicle static state detection, signal lamp state detection, zebra crossing detection, target vehicle detection and tracking and the like, so that the robustness and the accuracy of the monitoring can be improved.
In one embodiment, the vehicle red light running detection provided by the application is universal: it applies not only to the dynamic scene of a vehicle-mounted camera but also to static scenes such as traffic intersections and road gates. Terminals executing the detection can be deployed per application scene, and the detection process adjusted accordingly. For example, in a static scene the zebra crossing positions and signal lamp states are known, so operations such as signal lamp detection and zebra crossing detection need not be performed. As another example, videos or image sets acquired in a static scene may be sent to a server, which performs the vehicle red light running detection.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated herein, they may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages, which need not be performed at the same time or in sequence, but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 10, there is provided a vehicle red light running detection apparatus 1000, comprising: an obtaining module 1001, a state detecting module 1002, a zebra crossing detecting module 1003, a vehicle detecting module 1004, a tracking module 1005 and a sending module 1006, wherein:
an obtaining module 1001 configured to obtain a first image frame;
the state detection module 1002 is configured to detect a signal lamp state corresponding to a first image frame when the vehicle is in a stationary state;
the zebra crossing detection module 1003 is used for judging whether the vehicle is a first vehicle behind the zebra crossing or not according to the first image frame when the signal lamp state is a red lamp;
the vehicle detection module 1004 is used for detecting a target vehicle in a first image frame when the vehicle is judged to be a first vehicle behind the zebra crossing;
the tracking module 1005 is used for tracking the target vehicle and judging red light running of the target vehicle according to the tracking record;
and the sending module 1006 is configured to send the vehicle red light running information corresponding to the target vehicle to the server when it is determined that the target vehicle has the red light running event.
In one embodiment, the state detection module 1002 is further configured to obtain positioning information and inertial measurement data of the vehicle in which the terminal is located, the positioning information and the inertial measurement data both corresponding to the first image frame, and to judge whether the vehicle is in a static state according to the positioning information and the inertial measurement data.
In one embodiment, the state detection module 1002 is further configured to determine, through a trained signal lamp detection model, a signal lamp state, a signal lamp detection box, and a corresponding initial confidence of each signal lamp in the first image frame; determining the bias degree of each signal lamp detection frame according to the designated point of each signal lamp detection frame and the vertical central axis of the first image frame; determining a corresponding target confidence coefficient according to the initial confidence coefficient and the bias degree corresponding to each signal lamp detection frame; and determining the signal lamp state corresponding to the first image frame according to the target confidence.
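The text does not fix the designated point of a signal lamp detection frame or the rule combining initial confidence and bias degree. Purely as one assumed possibility, the target confidence could be computed as below, taking the detection frame's horizontal center as the designated point and decaying the confidence linearly with distance from the vertical central axis.

def target_confidence(detection_frame, initial_confidence, frame_width):
    x1, _, x2, _ = detection_frame
    cx = (x1 + x2) / 2.0                      # assumed designated point
    half = frame_width / 2.0
    bias = abs(cx - half) / half              # 0 on the central axis, 1 at the edge
    return initial_confidence * (1.0 - bias)  # assumed combination rule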
In one embodiment, the state detection module 1002 is further configured to determine the signal lamp state corresponding to the signal lamp detection frame with the maximum target confidence as the candidate signal lamp state corresponding to the first image frame; update the candidate signal lamp state into a signal lamp state queue; and determine the signal lamp state corresponding to the first image frame according to the updated ratio of red lamps in the signal lamp state queue and a preset ratio threshold.
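A sketch of this signal lamp state queue, with an assumed queue length and ratio threshold (the text leaves both as configurable values):

from collections import deque

class SignalStateQueue:
    def __init__(self, maxlen=10, red_ratio_thresh=0.5):
        self.states = deque(maxlen=maxlen)    # fixed-length signal lamp state queue
        self.red_ratio_thresh = red_ratio_thresh

    def update(self, candidate_state):
        # Update the candidate signal lamp state into the queue, then decide
        # the frame's signal lamp state from the ratio of red entries.
        self.states.append(candidate_state)
        ratio = sum(s == "red" for s in self.states) / len(self.states)
        return "red" if ratio >= self.red_ratio_thresh else "not red"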
In an embodiment, the zebra crossing detection module 1003 is configured to perform zebra crossing semantic segmentation on the first image frame through a trained road surface semantic segmentation model to obtain a zebra crossing segmentation result; determining a target zebra crossing region according to the zebra crossing segmentation result; performing straight line fitting and truncation on the target zebra crossing region to obtain a zebra crossing fitting line segment; and judging whether the vehicle is the first vehicle behind the zebra crossing or not according to the zebra crossing fitting line segment.
In one embodiment, the vehicle detection module 1004 is configured to input the first image frame into a trained vehicle detection model for detection, so as to obtain a vehicle detection frame corresponding to each target vehicle in the first image frame; when the tracker matched with the vehicle detection frame exists, correspondingly updating the vehicle tracking frame in the matched tracker according to the vehicle detection frame; and when the tracker matched with the vehicle detection frame does not exist, initializing the vehicle tracking frame in the tracker corresponding to the corresponding target vehicle according to the vehicle detection frame.
In one embodiment, the tracking module 1005 is further configured to acquire a second image frame; updating a vehicle tracking frame in a tracker corresponding to each target vehicle according to the second image frame; according to the vehicle tracking frame recorded in the tracker meeting the red light running judgment condition, performing linear fitting on the motion track of the corresponding target vehicle to obtain a track fitting linear line; and judging whether the corresponding target vehicle runs the red light according to the track fitting straight line.
For specific limitations of the vehicle red light running detection device, reference may be made to the above limitations of the vehicle red light running detection method, which are not described herein again. All or part of each module in the vehicle red light running detection device can be realized through software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 11. The computer device comprises a processor, a memory, a communication interface, a display screen and an input device which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The communication interface of the computer device is used for communicating with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of detecting a red light violation by a vehicle. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than shown, combine certain components, or arrange components differently.
In one embodiment, a computer device is provided comprising a memory having a computer program stored therein and a processor that implements the steps of the method embodiments when the processor executes the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments may be implemented by instructing relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not to be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.

Claims (10)

1. A method for detecting red light running of a vehicle, the method comprising:
acquiring a first image frame;
when the vehicle is in a static state, detecting a signal lamp state corresponding to the first image frame through a trained signal lamp detection model;
when the signal lamp state is a red lamp, judging whether the vehicle is a first vehicle behind a zebra crossing or not according to the first image frame;
when the vehicle is determined to be the first vehicle behind the zebra crossing, detecting a target vehicle in the first image frame;
tracking the target vehicle, and performing red light running judgment on the target vehicle according to the tracking record;
when the target vehicle is judged to have the red light running event, vehicle red light running information corresponding to the target vehicle is sent to a server;
the detecting the signal lamp state corresponding to the first image frame through the trained signal lamp detection model comprises:
determining a signal lamp state, a signal lamp detection frame and a corresponding initial confidence coefficient of each signal lamp in the first image frame through a trained signal lamp detection model;
determining the bias degree of each signal lamp detection frame according to the designated point of each signal lamp detection frame and the vertical central axis of the first image frame;
determining a corresponding target confidence coefficient according to the initial confidence coefficient and the bias degree corresponding to each signal lamp detection frame;
and determining a signal lamp state corresponding to the first image frame according to the target confidence.
2. The method according to claim 1, wherein before detecting a signal light state corresponding to the first image frame when the vehicle is in a stationary state, the method further comprises:
acquiring positioning information and inertia measurement data of a vehicle; the positioning information and the inertial measurement data both correspond to the first image frame;
and judging whether the vehicle is in a static state or not according to the positioning information and the inertia measurement data.
3. The method of claim 1, wherein said determining a signal light state for the first image frame based on the target confidence comprises:
determining the signal lamp state corresponding to the signal lamp detection frame with the maximum target confidence coefficient as a candidate signal lamp state corresponding to the first image frame;
updating the candidate signal lamp state to a signal lamp state queue;
and determining the signal lamp state corresponding to the first image frame according to the updated ratio of the red lamp in the signal lamp state queue and a preset ratio threshold.
4. The method of claim 1, wherein said determining from the first image frame whether the vehicle is a first vehicle behind a zebra crossing comprises:
performing zebra crossing semantic segmentation on the first image frame through a trained road surface semantic segmentation model to obtain a zebra crossing segmentation result;
determining a target zebra crossing region according to the zebra crossing segmentation result;
performing straight line fitting and truncation on the target zebra crossing region to obtain a zebra crossing fitting line segment;
and judging whether the vehicle is the first vehicle behind the zebra crossing or not according to the zebra crossing fitting line segment.
5. The method of any of claims 1-4, wherein the detecting the target vehicle in the first image frame comprises:
inputting the first image frames into a trained vehicle detection model for detection to obtain a vehicle detection frame corresponding to each target vehicle in the first image frames;
the method further comprises the following steps:
when the tracker matched with the vehicle detection frame exists, correspondingly updating the vehicle tracking frame in the matched tracker according to the vehicle detection frame;
and when the tracker matched with the vehicle detection frame does not exist, initializing according to the vehicle detection frame to obtain a vehicle tracking frame in the tracker corresponding to the corresponding target vehicle.
6. The method as claimed in claim 5, wherein the tracking the target vehicle and the red light running judgment of the target vehicle according to the tracking record comprise:
acquiring a second image frame;
updating a vehicle tracking frame in a tracker corresponding to each target vehicle according to the second image frame;
according to the vehicle tracking frame recorded in the tracker meeting the red light running judgment condition, performing linear fitting on the motion track of the corresponding target vehicle to obtain a track fitting linear line;
and judging whether the corresponding target vehicle runs the red light according to the track fitting straight line.
7. A vehicle red light running detection device, characterized in that the device includes:
the acquisition module is used for acquiring a first image frame;
the state detection module is used for detecting the state of a signal lamp corresponding to the first image frame through a trained signal lamp detection model when the vehicle is in a static state;
the zebra crossing detection module is used for judging whether the vehicle is a first vehicle behind the zebra crossing or not according to the first image frame when the signal lamp state is a red lamp;
the vehicle detection module is used for detecting a target vehicle in the first image frame when the vehicle is determined to be the first vehicle behind the zebra crossing;
the tracking module is used for tracking the target vehicle and performing red light running judgment on the target vehicle according to the tracking record;
the sending module is used for sending the vehicle red light running information corresponding to the target vehicle to a server when the target vehicle is judged to have the red light running event;
the state detection module is further used for determining a signal lamp state, a signal lamp detection frame and a corresponding initial confidence coefficient of each signal lamp in the first image frame through the trained signal lamp detection model, determining a bias degree of the corresponding signal lamp detection frame according to a designated point of each signal lamp detection frame and a vertical central axis of the first image frame, determining a corresponding target confidence coefficient according to the initial confidence coefficient and the bias degree corresponding to each signal lamp detection frame, and determining the signal lamp state corresponding to the first image frame according to the target confidence coefficient.
8. The apparatus of claim 7, wherein the state detection module is further configured to obtain positioning information and inertial measurement data of the located vehicle, the positioning information and the inertial measurement data both corresponding to the first image frame, and determine whether the vehicle is in a stationary state according to the positioning information and the inertial measurement data.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN202010425689.8A 2020-05-19 2020-05-19 Method and device for detecting red light running of vehicle, computer equipment and storage medium Active CN113689705B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010425689.8A CN113689705B (en) 2020-05-19 2020-05-19 Method and device for detecting red light running of vehicle, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010425689.8A CN113689705B (en) 2020-05-19 2020-05-19 Method and device for detecting red light running of vehicle, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113689705A CN113689705A (en) 2021-11-23
CN113689705B true CN113689705B (en) 2022-11-29

Family

ID=78576133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010425689.8A Active CN113689705B (en) 2020-05-19 2020-05-19 Method and device for detecting red light running of vehicle, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113689705B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113988110B (en) * 2021-12-02 2022-04-05 深圳比特微电子科技有限公司 Red light running behavior detection method and device and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010049535A (en) * 2008-08-22 2010-03-04 Mazda Motor Corp Vehicular running support apparatus
CN105046980A (en) * 2014-08-24 2015-11-11 薛青 Detection system for red light running of vehicles
CN107038420A (en) * 2017-04-14 2017-08-11 北京航空航天大学 A kind of traffic lights recognizer based on convolutional network

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010102601A (en) * 2008-10-27 2010-05-06 Rohm Co Ltd Vehicle traveling information recording device
CN203351030U (en) * 2013-03-19 2013-12-18 宝鸡市博安电子科技有限公司 Mobile vehicle-mounted regulation-violating and illegal parking snapshot system
CN106803353B (en) * 2015-11-26 2021-06-29 罗伯特·博世有限公司 Method for determining a transformation rule of a traffic light and on-board system
CN105551110A (en) * 2016-02-05 2016-05-04 北京奇虎科技有限公司 Traveling vehicle data recording method, device and system
CN205845328U (en) * 2016-07-14 2016-12-28 清华大学苏州汽车研究院(吴江) Collaborative and the active safety prior-warning device of 4G network based on bus or train route
CN206773925U (en) * 2017-06-15 2017-12-19 河南中安占海控股集团有限公司 Make a dash across the red light identification disposal plant
CN107689157B (en) * 2017-08-30 2021-04-30 电子科技大学 Traffic intersection passable road planning method based on deep learning
EP3707572B1 (en) * 2017-11-10 2023-08-23 Nvidia Corporation Systems and methods for safe and reliable autonomous vehicles
CN108093219A (en) * 2017-12-26 2018-05-29 山东易华录信息技术有限公司 Traffic offence accident information reporting system and method based on automobile data recorder
CN110598511A (en) * 2018-06-13 2019-12-20 杭州海康威视数字技术股份有限公司 Method, device, electronic equipment and system for detecting green light running event
CN110378276B (en) * 2019-07-16 2021-11-30 顺丰科技有限公司 Vehicle state acquisition method, device, equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010049535A (en) * 2008-08-22 2010-03-04 Mazda Motor Corp Vehicular running support apparatus
CN105046980A (en) * 2014-08-24 2015-11-11 薛青 Detection system for red light running of vehicles
CN107038420A (en) * 2017-04-14 2017-08-11 北京航空航天大学 A kind of traffic lights recognizer based on convolutional network

Also Published As

Publication number Publication date
CN113689705A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN110753892A (en) Method and system for instant object tagging via cross-modality verification in autonomous vehicles
US11734783B2 (en) System and method for detecting on-street parking violations
WO2015129045A1 (en) Image acquisition system, terminal, image acquisition method, and image acquisition program
US11590989B2 (en) Training data generation for dynamic objects using high definition map data
CN110869559A (en) Method and system for integrated global and distributed learning in autonomous vehicles
CN110799982A (en) Method and system for object-centric stereo vision in an autonomous vehicle
US20230260154A1 (en) Systems and Methods for Image-Based Location Determination and Parking Monitoring
Chang et al. Video analytics in smart transportation for the AIC'18 challenge
Zhang et al. Vehicle re-identification for lane-level travel time estimations on congested urban road networks using video images
CN113689705B (en) Method and device for detecting red light running of vehicle, computer equipment and storage medium
JP2017188164A (en) Image acquisition device, terminal, and image acquisition system
Jiang et al. itv: Inferring traffic violation-prone locations with vehicle trajectories and road environment data
Bhandari et al. Fullstop: A camera-assisted system for characterizing unsafe bus stopping
US20220335730A1 (en) System and method for traffic signage inspection through collection, processing and transmission of data
US11488390B2 (en) Map generation device, recording medium and map generation method
Ranjan et al. City scale monitoring of on-street parking violations with streethawk
CN114693722A (en) Vehicle driving behavior detection method, detection device and detection equipment
Nalavde et al. Driver assistant services using ubiquitous smartphone
TWI672642B (en) People count statistic system and method thereof
Saif et al. Szr5: A modern technique to detect potholes using convolution neural network
WO2022233099A1 (en) Networked adas-based method for investigating spatial-temporal characteristics of road area traffic violation behavior
CN113327414A (en) Vehicle reverse running detection method and device, computer equipment and storage medium
CN114663469A (en) Target object tracking method and device, electronic equipment and readable storage medium
CA3205033A1 (en) System and method for traffic signage inspection through collection, processing and transmission of data
JP2019071139A (en) Image acquisition device, terminal, and image acquisition system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant