CN114299414A - Deep learning-based vehicle red light running identification and determination method - Google Patents

Deep learning-based vehicle red light running identification and determination method

Info

Publication number
CN114299414A
CN114299414A (application number CN202111452264.7A)
Authority
CN
China
Prior art keywords
vehicle
state
red light
target vehicle
video stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111452264.7A
Other languages
Chinese (zh)
Other versions
CN114299414B (en)
Inventor
魏健康
张瑞龙
张星
吕晓鹏
张伟
刘晔
惠峰涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Datalake Information Technology Co ltd
Beijing E Hualu Information Technology Co Ltd
Original Assignee
Wuxi Datalake Information Technology Co ltd
Beijing E Hualu Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Datalake Information Technology Co ltd and Beijing E Hualu Information Technology Co Ltd
Priority to CN202111452264.7A
Publication of CN114299414A
Application granted
Publication of CN114299414B
Legal status: Active
Anticipated expiration

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a deep-learning-based method for identifying and determining vehicle red-light running, comprising the following steps: acquiring a video stream, performing frame extraction on it, and calibrating areas of the scene in the video stream; collecting and labeling vehicle data, establishing a vehicle-type annotation data set, and performing structured detection training on different vehicle types with a YOLOv5s model; classifying and detecting the traffic-light state with a MobileNetV1 model, and judging the linkage between the current traffic-light state and the vehicle passing state; tracking vehicles in the calibrated area and recording target-vehicle trajectories; and judging the traveling direction and red-light-running state of the target vehicle from the recorded trajectory. The invention greatly reduces hardware cost, lowers the labor and maintenance cost of field construction personnel, avoids secondary damage to roads, effectively saves resources, and improves accuracy.

Description

Deep learning-based vehicle red light running identification and determination method
Technical Field
The invention relates to the technical field of traffic information, and in particular to a deep-learning-based method for identifying and determining vehicle red-light running.
Background
With the gradual development of society, the number of privately owned cars has grown explosively, but drivers' safety awareness still needs to improve: violations such as running red lights occur from time to time, and the threats they pose to personal and property safety are frequent, so traffic-control departments need to capture and penalize red-light-running violations.
At present, most existing red-light-running snapshot technology captures behavior with high-cost hardware such as smart cameras, which is unfavorable to subsequent iterative optimization and carries very high manual-maintenance costs. Radar-based detection devices, for their part, usually damage the normal road surface, which hinders road maintenance, shortens service life, and wastes social resources.
Disclosure of Invention
The invention aims to provide a method that, based on deep learning, detects and judges red-light-running behavior in a video stream in real time and pushes the results.
To solve this technical problem, the invention provides a deep-learning-based vehicle red-light-running identification and determination method, comprising the following steps:
s1: acquiring a video stream, performing frame extraction processing on the video stream, and performing area calibration on a scene in the video stream;
s2: collecting and labeling vehicle data in the video stream, establishing a vehicle-type annotation data set, training a YOLOv5s structured-detection model on the different vehicle types, and optimizing the model with TensorRT;
s3: classifying and detecting the traffic-light state with a MobileNetV1 model, judging whether the current state is red, green, or yellow, and linking the traffic-light state with the vehicle passing state;
s4: tracking the trajectory of vehicles within the calibrated detection area, comprising the following steps:
s41: determining whether the vehicle is within the calibration area;
s42: acquiring vehicle initialization information of a vehicle entering a calibration area;
s43: matching vehicle features, and tracking and recording the target-vehicle trajectory through a ReID model;
s5: according to the trajectory recorded in step S43, determining the traveling direction of the target vehicle and whether it ran a red light, and pushing the determination result.
Further, in step S1, the area calibration includes the following steps:
s101: calibrating the position information of the road lines and the traffic lights in the video stream, and writing the road coordinates into a database;
s102: calibrating the road-line attributes from S101 and generating a calibration file in json format.
Further, the road-line attributes in S102 include straight, right turn, left turn, straight-plus-right-turn, or unrestricted.
Further, in S3, the classifying of the traffic light status includes the following steps:
s301: acquiring traffic-light data and establishing a traffic-light classification data set;
s302: training the traffic-light state classification model on the data set from S301 and optimizing it with TensorRT.
Further, the S41 includes the following steps:
s411: arranging the vertices of the calibration area in clockwise order as p1, p2, p3, and p4;
s412: calculating, from the interior point p0 of the calibration area, the vector relations between the positions:

$\vec{u}_i = \overrightarrow{p_i p_0} = p_0 - p_i, \quad i = 1, 2, 3, 4$

$\vec{v}_i = \overrightarrow{p_i p_{i+1}} = p_{i+1} - p_i, \quad i = 1, 2, 3, 4, \; p_5 = p_1$
S413: performing the cross-product calculation on the two groups of vectors, with the formula:

$n_i = \vec{u}_i \times \vec{v}_i = (x_0 - x_i)(y_{i+1} - y_i) - (y_0 - y_i)(x_{i+1} - x_i)$
s414: performing the calculation cyclically over the vertices, with the formula:

$n_i = \vec{u}_i \times \vec{v}_i, \quad i = 1, 2, 3, 4, \quad p_5 = p_1$
s415: statistically analyzing the results at the four vertices: if every n_i < 0, the point is judged to be inside the calibration area; otherwise it is not in the calibration area;
s416: recording, from the output of the YOLOv5s model, the vehicle center point p_center and the midpoints of the upper and lower edges of the vehicle detection frame, p_top and p_bottom (three points in total);
s417: determining, from the recorded vehicle center point p_center, whether the vehicle has entered the calibration area.
Further, the method for determining the target vehicle in S5 includes determining, from the position information of the target vehicle in consecutive extracted frames of the video stream, whether the target vehicle crosses the line; the calculation formula is:

$n_1 = \overrightarrow{l_1 l_2} \times \overrightarrow{l_1 p^{\,t-1}}, \qquad n_2 = \overrightarrow{l_1 l_2} \times \overrightarrow{l_1 p^{\,t}}$

where l_1 and l_2 are the endpoints of the line and p^{t-1}, p^t are the tracked point's positions in the two frames; if n_1 · n_2 < 0, a line-crossing behavior has occurred.
Further, the method for determining the target vehicle in S5 also includes determining whether the target vehicle crosses the stop line in front of the zebra crossing in the calibration area, as follows:
if the current traffic light is green when p_bottom crosses the stop line, the state is marked 0; if it is yellow when p_bottom crosses the stop line, the state is marked 1; if it is red when p_bottom crosses the stop line, the state is marked 2;
when a target vehicle in marked state 2 is tracked turning left or going straight, the red-light-running behavior is flagged and the result is pushed.
Further, the video stream comprises multiple real-time video streams from electronic-police and checkpoint cameras.
Compared with the related art, the invention has the following beneficial effects:
according to the method for identifying and judging the red light running of the vehicle based on the deep learning, an artificial intelligence technology is introduced into the identification of the red light running behavior, and the red light running behavior of the vehicle under different conditions is snapshot and judged through the combination of different models and algorithm types. The remote deployment of the multi-path video stream can reduce the labor and maintenance cost of field construction personnel to a certain extent, and also avoids secondary damage to the road.
The area calibration used in the method assists the judgment of the artificial-intelligence model, so that the whole system can progressively correct errors from video-stream processing onward, achieve an optimal model-inference process, and meet real-time requirements.
In order to make the aforementioned and other objects, features and advantages of the invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram illustrating the steps of a method according to a preferred embodiment of the present invention;
FIG. 2 is a flow chart of detection in an embodiment provided by the present invention;
FIG. 3 is a diagram illustrating a position mapping relationship between vertices of a calibration area according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the algorithm for determining whether a vehicle crosses the line in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Referring to fig. 1 and fig. 2, a method for identifying and determining red light running of a vehicle based on deep learning includes the following steps:
s1: acquiring a video stream, performing frame extraction processing on the video stream, and performing area calibration on a scene in the video stream;
The video streams come from real-time streams of multiple electronic-police and checkpoint cameras; video can also be recorded in advance. Recording covered 100 camera positions on urban roads, from six in the morning to eight in the evening over roughly one week; frame extraction and de-duplication were performed on the 500 collected videos and the frames saved as pictures. The samples are multi-angle, collected from multiple cities, scenes, and time periods, and every extracted frame is required to be clear and easily distinguishable by the human eye.
s101: calibrating the position information of the road lines and the traffic lights in the video stream, and writing the road coordinates into a database;
s102: calibrating the road-line attributes from S101 and generating a calibration file in json format; the road-line attributes in S102 include straight, right turn, left turn, straight-plus-right-turn, or unrestricted.
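For illustration only, such a calibration file might look as follows; every field name and value here is an assumption for this sketch, since the text specifies only that road-line coordinates and lane attributes are written out in json:

```json
{
  "camera_id": "EP-0001",
  "stop_line": [[412, 655], [938, 652]],
  "detection_area": [[380, 420], [980, 418], [1050, 700], [330, 705]],
  "lanes": [
    {"polygon": [[380, 420], [560, 418], [600, 700], [330, 705]], "attribute": "left"},
    {"polygon": [[560, 418], [760, 418], [820, 700], [600, 700]], "attribute": "straight"},
    {"polygon": [[760, 418], [980, 418], [1050, 700], [820, 700]], "attribute": "straight_right"}
  ],
  "traffic_light_roi": [1100, 80, 1180, 260]
}
```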
S2: collecting and labeling vehicle data in the video stream and establishing a vehicle-type annotation data set. The large number of sample pictures collected in step S1 are labeled manually, with the following requirements: polygon annotation is used, fitted closely to the vehicle edge so that the annotation frame covers the target vehicle body.
The annotation labels fall into three categories: small vehicles such as ordinary small passenger cars and small off-road passenger cars are labeled car; buses and coaches are labeled bus; and large trucks, container trucks, and other hazardous-goods trucks are labeled truck. A YOLOv5s model is used for structured detection training on the different vehicle types, and TensorRT model conversion is completed to further save inference resources.
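A minimal sketch of invoking such a trained detector through the public Ultralytics YOLOv5 hub API; the weights file name and confidence threshold are assumptions, while the car/bus/truck classes are the ones defined above:

```python
# Sketch: load custom YOLOv5s weights (assumed file name) via the public
# Ultralytics hub API and run detection on one extracted frame.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="vehicle_yolov5s.pt")
model.conf = 0.4  # confidence threshold (assumed value)

results = model("frame_000123.jpg")
# results.xyxy[0]: one row [x1, y1, x2, y2, conf, cls] per detection
for *box, conf, cls in results.xyxy[0].tolist():
    print(model.names[int(cls)], round(conf, 2), box)  # 'car', 'bus' or 'truck'
```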
S3: classifying and detecting the traffic-light state with a MobileNetV1 model, judging whether the current state is red, green, or yellow, and linking the traffic-light state with the vehicle passing state;
in S3, the classifying of the traffic light status includes the following steps:
s301: acquiring traffic-light data and establishing a traffic-light classification data set;
Specifically, traffic-light data of different styles are collected from real urban road scenes between six in the morning and eight in the evening; 7 traffic-light styles are collected covering 270 urban-road intersections, again as multi-angle samples from multiple scenes and time periods.
S302: training the traffic-light state classification model on the data set from S301 and completing TensorRT model conversion to further save inference resources.
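The document does not disclose its TensorRT conversion pipeline. A common route, sketched below purely as an assumption, is to export the trained classifier to ONNX and then build an engine with NVIDIA's trtexec tool; torchvision ships no MobileNetV1, so MobileNetV2 stands in here:

```python
# Sketch: export a traffic-light classifier to ONNX as the usual first step
# toward a TensorRT engine. MobileNetV2 is a stand-in (torchvision has no V1);
# the 3-class head (red/green/yellow) follows the text, the input size is assumed.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(num_classes=3)
model.eval()

dummy = torch.randn(1, 3, 64, 64)  # small traffic-light crop (assumed size)
torch.onnx.export(model, dummy, "light_classifier.onnx",
                  input_names=["input"], output_names=["logits"],
                  opset_version=13)
# Then offline:  trtexec --onnx=light_classifier.onnx --saveEngine=light.trt --fp16
```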
S4: tracking the trajectory of vehicles within the calibrated detection area, comprising the following steps:
s41: determining whether the vehicle is within the calibration area, where S41 comprises the following steps (a code sketch follows step S417):
s411: as shown in fig. 3, the vertex-position correspondence map of the vehicle-detection calibration area, the vertices of the calibration area are arranged in clockwise order as p1, p2, p3, and p4;
s412: calculating, from the interior point p0 of the calibration area, the vector relations between the positions:

$\vec{u}_i = \overrightarrow{p_i p_0} = p_0 - p_i, \quad i = 1, 2, 3, 4$

$\vec{v}_i = \overrightarrow{p_i p_{i+1}} = p_{i+1} - p_i, \quad i = 1, 2, 3, 4, \; p_5 = p_1$
S413: performing the cross-product calculation on the two groups of vectors, with the formula:

$n_i = \vec{u}_i \times \vec{v}_i = (x_0 - x_i)(y_{i+1} - y_i) - (y_0 - y_i)(x_{i+1} - x_i)$
s414: performing the calculation cyclically over the vertices, with the formula:

$n_i = \vec{u}_i \times \vec{v}_i, \quad i = 1, 2, 3, 4, \quad p_5 = p_1$
s415: statistically analyzing the results at the four vertices: if every n_i < 0, the point is judged to be inside the calibration area; otherwise it is not in the calibration area;
s416: recording, from the output of the YOLOv5s model, the vehicle center point p_center and the midpoints of the upper and lower edges of the vehicle detection frame, p_top and p_bottom (three points in total);
s417: determining, from the recorded vehicle center point p_center, whether the vehicle has entered the calibration area.
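A direct Python rendering of the S411 to S415 test; the function and variable names are mine, but the cross-product sign check is the one described above:

```python
# Point-in-region test from S411-S415: for a quadrilateral p1..p4 ordered
# clockwise (image coordinates, y down), an interior point yields
# n_i = (p0 - p_i) x (p_{i+1} - p_i) < 0 at every vertex.
def point_in_region(p0, vertices):
    """vertices: [(x1, y1), ..., (x4, y4)] in clockwise order."""
    x0, y0 = p0
    n = len(vertices)
    for i in range(n):
        xi, yi = vertices[i]
        xj, yj = vertices[(i + 1) % n]  # cyclic indexing: p5 = p1
        ni = (x0 - xi) * (yj - yi) - (y0 - yi) * (xj - xi)
        if ni >= 0:                     # one non-negative sign => outside
            return False
    return True

# Entry test per S416/S417, using the center of the YOLOv5s box.
def vehicle_entered(box_xyxy, region):
    x1, y1, x2, y2 = box_xyxy
    p_center = ((x1 + x2) / 2, (y1 + y2) / 2)
    return point_in_region(p_center, region)
```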
S42: acquiring initialization information for each vehicle entering the calibration area, in preparation for the subsequent ReID tracking;
s43: matching vehicle features, and tracking and recording the target-vehicle trajectory through a ReID model. The ReID part uses an OSNet model to compute similarity between detection boxes, and ID assignment to the detection boxes is finally realized through the Hungarian matching algorithm, as in the classic SORT algorithm.
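A minimal sketch of this matching step, assuming cosine similarity between embeddings and an illustrative similarity gate; only the use of ReID features plus Hungarian assignment is from the text:

```python
# Sketch of ReID-based ID assignment: cosine similarity between OSNet
# embeddings of existing tracks and current detections, then Hungarian
# matching as in SORT-style trackers. The 0.5 gate is an assumed value.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_ids(track_feats, det_feats, sim_gate=0.5):
    """track_feats: (T, D) array; det_feats: (N, D) array.
       Returns [(track_idx, det_idx), ...] accepted matches."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    sim = t @ d.T                              # cosine similarity matrix
    rows, cols = linear_sum_assignment(-sim)   # maximize total similarity
    return [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= sim_gate]
```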
S5: according to the trajectory recorded for the target vehicle in step S43, determining the traveling direction and red-light-running state of the target vehicle and pushing the determination result. The determination method for the target vehicle uses its position information in consecutive extracted frames of the video stream (the detection frame rate is 5 frames/s) to decide whether the target vehicle crosses the line, as shown in the line-crossing algorithm diagram of fig. 4. The calculation formula is:

$n_1 = \overrightarrow{l_1 l_2} \times \overrightarrow{l_1 p^{\,t-1}}, \qquad n_2 = \overrightarrow{l_1 l_2} \times \overrightarrow{l_1 p^{\,t}}$

where l_1 and l_2 are the endpoints of the line and p^{t-1}, p^t are the tracked point's positions in the two consecutive frames; if n_1 · n_2 < 0, a line-crossing behavior has occurred.
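A compact Python rendering of this test (names are mine; the sign-change criterion is the one above):

```python
# Line-crossing test from S5: with stop-line endpoints l1, l2 and the
# tracked point's positions in two consecutive extracted frames, the point
# crossed the line iff the two cross products change sign (n1 * n2 < 0).
def crossed_line(l1, l2, p_prev, p_curr):
    def side(p):
        return ((l2[0] - l1[0]) * (p[1] - l1[1])
                - (l2[1] - l1[1]) * (p[0] - l1[0]))
    return side(p_prev) * side(p_curr) < 0
```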
In addition, the determination method for the target vehicle also includes determining whether the target vehicle crosses the stop line in front of the zebra crossing in the calibration area, as follows:
if the current traffic light is green when p_bottom crosses the stop line, the state is marked 0; if it is yellow when p_bottom crosses the stop line, the state is marked 1; if it is red when p_bottom crosses the stop line, the state is marked 2;
when the data match a target vehicle in marked state 2 and the target vehicle is tracked turning left or going straight, the red-light-running behavior is flagged and the result is pushed.
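The marking logic of this step can be condensed into a small sketch; the 0/1/2 state encoding follows the text, while the direction strings and function boundaries are illustrative assumptions:

```python
# Sketch of the S5 marking logic: record the light state when p_bottom
# crosses the stop line (0 = green, 1 = yellow, 2 = red), then flag a
# red-light violation when a state-2 vehicle later goes straight or left.
LIGHT_MARK = {"green": 0, "yellow": 1, "red": 2}

def mark_on_stop_line(light_state):
    return LIGHT_MARK[light_state]

def is_red_light_violation(mark, exit_direction):
    return mark == 2 and exit_direction in ("straight", "left")
```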
In another embodiment, the same logical decision is applied to the special case of a right-turn traffic light.
For the construction of the detection data set, the invention collects structured vehicle data under checkpoint and electronic-police cameras in real scenes, covering 7 traffic-light styles across 270 urban-road intersections, to build a large-scale training data set. The data set is further expanded with several data-enhancement modes: geometric enhancement includes random flipping (more horizontal than vertical), random cropping, stretching, and rotation; color enhancement includes contrast enhancement, brightness enhancement, and, most importantly, HSV-space enhancement.
Second, on this data set, a YOLOv5s detection model with high real-time performance and low video-memory usage is trained for multi-scene, multi-position target-vehicle detection, and a MobileNetV1 classification model with fast inference, small video-memory footprint, and few parameters is used for traffic-light state judgment. TensorRT is adopted to further improve model inference performance.
Meanwhile, complete calibration configuration is adapted to the intersection scenes of traffic checkpoints and electronic police, and database information is stored and updated in sync by associating the device id. The line-calibration configuration effectively reduces interference from useless information in the full picture and keeps the record focused on the vehicle's real travel trajectory. The method can also judge several areas of a scene separately, which saves resources, improves accuracy, and is flexible and easy to use.
The invention meets the requirements of model training and fast, efficient structured vehicle detection in checkpoint and electronic-police areas. In tests on a Tesla P4 card with a YOLOv5s model, the model size is 130M, video-memory usage is 400M, single-frame detection takes 25 ms, and the mAP reaches 70.5% across multiple scenes. The MobileNetV1 classification model occupies 300M of video memory, single-frame inference takes 20 ms, and its mAP across multiple scenes reaches 88%.
The target-tracking detection module can track multiple detection targets in real time, with multi-scene accuracy above 98% and a single-frame trajectory-recording time of about 15 ms.
In the above steps, frame extraction is first performed on the video information in the real-time video stream, and the areas are then calibrated. A YOLOv5s model is trained for vehicle classification and structured detection and used to detect target vehicles. When a target vehicle is detected, its feature information is obtained and matched and associated through the ReID model, and the current traffic-light state is associated at the same time. The traveling direction of the target vehicle is confirmed, its movement trajectory is tracked, and its complete trajectory is recorded; the exit direction is judged, whether the target vehicle crossed the line during a red-light state is calculated, and whether red-light running occurred is determined. The method effectively saves resources, improves accuracy, and is flexible to use.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A vehicle red light running identification and determination method based on deep learning is characterized by comprising the following steps:
s1: acquiring a video stream, performing frame extraction processing on the video stream, and performing area calibration on a scene in the video stream;
s2: collecting and labeling vehicle data in the video stream, establishing a vehicle-type annotation data set, training a YOLOv5s structured-detection model on the different vehicle types, and optimizing the model with TensorRT;
s3: classifying and detecting the traffic-light state with a MobileNetV1 model, judging whether the current state is red, green, or yellow, and linking the traffic-light state with the vehicle passing state;
s4: tracking the trajectory of vehicles within the calibrated detection area, comprising the following steps:
s41: determining whether the vehicle is within the calibration area;
s42: acquiring vehicle initialization information of a vehicle entering a calibration area;
s43: matching vehicle features, and tracking and recording the target-vehicle trajectory through a ReID model;
s5: according to the trajectory recorded in step S43, determining the traveling direction of the target vehicle and whether it ran a red light, and pushing the determination result.
2. The method for identifying and determining red light running of a vehicle according to claim 1, wherein in step S1, the area calibration includes the steps of:
s101: calibrating the position information of the road lines and the traffic lights in the video stream, and writing the road coordinates into a database;
s102: calibrating the road-line attributes from S101 and generating a calibration file in json format.
3. The method according to claim 2, wherein the road-line attributes in S102 include straight, right turn, left turn, straight-plus-right-turn, or unrestricted.
4. The method for identifying and determining red light running of a vehicle according to claim 2, wherein in the step S3, the classification of the traffic light state includes the steps of:
s301: acquiring traffic-light data and establishing a traffic-light classification data set;
s302: training the traffic-light state classification model on the data set from S301 and optimizing it with TensorRT.
5. The method for identifying and determining red light running of a vehicle according to claim 1, wherein said S41 includes the steps of:
s411: arranging the vertices of the calibration area in clockwise order as p1, p2, p3, and p4;
s412: calculating, from the interior point p0 of the calibration area, the vector relations between the positions:

$\vec{u}_i = \overrightarrow{p_i p_0} = p_0 - p_i, \qquad \vec{v}_i = \overrightarrow{p_i p_{i+1}} = p_{i+1} - p_i, \qquad i = 1, 2, 3, 4, \; p_5 = p_1$
S413: performing the cross-product calculation on the two groups of vectors, with the formula:

$n_i = \vec{u}_i \times \vec{v}_i = (x_0 - x_i)(y_{i+1} - y_i) - (y_0 - y_i)(x_{i+1} - x_i)$
s414: performing the calculation cyclically over the vertices, with the formula:

$n_i = \vec{u}_i \times \vec{v}_i, \quad i = 1, 2, 3, 4, \quad p_5 = p_1$
s415: statistically analyzing the results at the four vertices: if every n_i < 0, the point is judged to be inside the calibration area; otherwise it is not in the calibration area;
s416: recording, from the output of the YOLOv5s model, the vehicle center point p_center and the midpoints of the upper and lower edges of the vehicle detection frame, p_top and p_bottom (three points in total);
s417: determining, from the recorded vehicle center point p_center, whether the vehicle has entered the calibration area.
6. The method for identifying and determining vehicle red-light running according to claim 5, wherein the method for determining the target vehicle in S5 includes determining, from the position information of the target vehicle in consecutive extracted frames of the video stream, whether the target vehicle crosses the line, the calculation formula being:

$n_1 = \overrightarrow{l_1 l_2} \times \overrightarrow{l_1 p^{\,t-1}}, \qquad n_2 = \overrightarrow{l_1 l_2} \times \overrightarrow{l_1 p^{\,t}}$

where l_1 and l_2 are the endpoints of the line and p^{t-1}, p^t are the tracked point's positions in the two frames; if n_1 · n_2 < 0, a line-crossing behavior has occurred.
7. The method for identifying and determining vehicle red-light running according to claim 5, wherein the method for determining the target vehicle in S5 further includes determining whether the target vehicle crosses the stop line in front of the zebra crossing in the calibration area, as follows:
if the current traffic light is green when p_bottom crosses the stop line, the state is marked 0; if it is yellow when p_bottom crosses the stop line, the state is marked 1; if it is red when p_bottom crosses the stop line, the state is marked 2;
when a target vehicle in marked state 2 is tracked turning left or going straight, the red-light-running behavior is flagged and the result is pushed.
8. The method for identifying and determining vehicle red-light running according to claim 7, wherein the video stream comprises real-time video streams from a plurality of electronic-police and checkpoint cameras.
CN202111452264.7A (priority date 2021-11-30, filing date 2021-11-30): Vehicle red light running recognition and judgment method based on deep learning. Granted as CN114299414B. Status: Active.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111452264.7A CN114299414B (en) 2021-11-30 2021-11-30 Vehicle red light running recognition and judgment method based on deep learning

Publications (2)

Publication Number Publication Date
CN114299414A true CN114299414A (en) 2022-04-08
CN114299414B CN114299414B (en) 2023-09-15

Family

ID=80965591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111452264.7A Active CN114299414B (en) 2021-11-30 2021-11-30 Vehicle red light running recognition and judgment method based on deep learning

Country Status (1)

Country Link
CN (1) CN114299414B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020000251A1 (en) * 2018-06-27 2020-01-02 潍坊学院 Method for identifying video involving violation at intersection based on coordinated relay of video cameras
CN109949579A (en) * 2018-12-31 2019-06-28 上海眼控科技股份有限公司 A kind of illegal automatic auditing method that makes a dash across the red light based on deep learning
CN110009913A (en) * 2019-03-27 2019-07-12 江苏智通交通科技有限公司 A kind of non-at-scene law enforcement picture intelligent checks method and system of vehicles running red light
WO2020192122A1 (en) * 2019-03-27 2020-10-01 江苏智通交通科技有限公司 Off-site law enforcement picture intelligent auditing method and system for vehicles running red light
CN110472496A (en) * 2019-07-08 2019-11-19 长安大学 A kind of traffic video intelligent analysis method based on object detecting and tracking
WO2021142944A1 (en) * 2020-01-13 2021-07-22 南京新一代人工智能研究院有限公司 Vehicle behaviour recognition method and apparatus
CN111325988A (en) * 2020-03-10 2020-06-23 北京以萨技术股份有限公司 Real-time red light running detection method, device and system based on video and storage medium
CN111968378A (en) * 2020-07-07 2020-11-20 浙江大华技术股份有限公司 Motor vehicle red light running snapshot method and device, computer equipment and storage medium
CN112767710A (en) * 2021-01-20 2021-05-07 青岛以萨数据技术有限公司 Vehicle illegal behavior detection method and device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王萍萍; 仇润鹤: "Vehicle Multi-Object Detection Based on YOLOv3" (基于YOLOv3的车辆多目标检测), 科技与创新 (Science and Technology & Innovation), no. 03

Also Published As

Publication number Publication date
CN114299414B (en) 2023-09-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant