CN111640308B - Deep learning red light running detection method based on embedded terminal - Google Patents


Info

Publication number: CN111640308B (application CN202010334070.6A)
Authority: CN (China)
Prior art keywords: automobile, red light, video, simulation, accident
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN111640308A
Inventors: 张中 (Zhang Zhong), 赵冲 (Zhao Chong)
Current assignee: Hefei Zhanda Intelligent Technology Co., Ltd.
Original assignee: Hefei Zhanda Intelligent Technology Co., Ltd.
Filed: 2020-04-24 by Hefei Zhanda Intelligent Technology Co., Ltd.
Published: CN111640308A on 2020-09-08; granted as CN111640308B on 2022-03-08

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/017: Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G1/0175: Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a deep learning red light running detection method based on an embedded terminal, belonging to the technical field of red light running detection. The method begins by extracting the videos shot at the traffic accident scene and by the peripheral cameras. A red-light-running incident is decomposed vehicle by vehicle: the footage is clipped into a separate video for each vehicle involved, and each vehicle's situation is analyzed individually. Each vehicle's behavior during the accident is judged from its speed at the time and the damage it sustained, while external impacts are also taken into account. The accident is then reconstructed from the collected data with 3D simulation, and the damage to the simulated vehicles and to the surrounding environment is compared with the real data, making red-light-running incidents easier to process and analyze.

Description

Deep learning red light running detection method based on embedded terminal
Technical Field
The invention relates to the technical field of red light running detection, in particular to a deep learning red light running detection method based on an embedded terminal.
Background
With the continuing development of society and the economy, automobiles play an increasingly important role in daily life. While they make life more convenient, they also bring a series of problems such as traffic violations and congestion, most of which are caused by drivers violating traffic rules or running red lights.
In existing practice, accidents happen easily when vehicles run red lights, and several vehicles may collide in the process. Because the vehicles move quickly and occlude one another, the specific situation of each vehicle involved cannot be captured in time, so red-light-running incidents are difficult and troublesome to handle.
Disclosure of Invention
In order to overcome the above drawbacks of the prior art, embodiments of the present invention provide a deep learning red light running detection method based on an embedded terminal. A red-light-running incident is decomposed vehicle by vehicle: the footage of the vehicles involved is clipped into a separate video per vehicle, and each vehicle's situation is analyzed individually. Each vehicle's behavior during the accident is judged from its speed at the time and the damage it sustained, while external impacts are also taken into account. The accident is then reconstructed from the collected data with 3D simulation, and the damage caused by the simulated vehicles and the damage to the surrounding environment are compared with the real data, so red-light-running incidents are easier to process and analyze.
In order to achieve the above purpose, the invention provides the following technical scheme: a deep learning red light running detection method based on an embedded terminal, comprising the following specific operation steps (an illustrative pipeline sketch follows the list):
Step one: extract the videos shot at the traffic accident scene and by the peripheral cameras, and then extract from them the vehicle information and the owners' personal information;
Step two: filter the videos to reduce the influence of the environment on their clarity, and then import them into an image processor for frame analysis;
Step three: record the trajectory of each automobile in the video and mark it in the picture; clip the recorded footage into a separate video of each automobile's motion, and watch and analyze each clipped video individually;
Step four: photograph the surrounding environment after the accident with a detector at the accident site, assess the degree of damage to the automobiles, and compile a data table;
Step five: restore the scene with 3D simulation technology, based on the environment shown in the video, the automobile information, the automobile trajectories in the footage and the signal lamp state at the time, and run an on-scene simulation;
Step six: record all data from the 3D simulation, tabulate them against the data collected in the field, and compute the coincidence degree of the 3D restoration from the similarity of the two datasets; when the coincidence degree falls below the standard value, adjust the data and continue the simulation;
Step seven: once the coincidence degree of the 3D simulation reaches the standard, record the simulated scene from a 360-degree high-definition camera view, with emphasis on the blind-spot situations of the vehicles at the scene;
Step eight: record the specific situation of each automobile in the 3D simulation, analyze and judge whether a red light was run, and then assign responsibility according to each automobile's situation.
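For orientation, the eight steps can be read as one processing pipeline. The sketch below is a minimal Python skeleton under that reading; every function name is a hypothetical placeholder (the patent names no code), and each stub merely marks where the corresponding step would run.

    # Hypothetical skeleton of the eight-step method. All names are placeholders;
    # each stub stands in for the processing described in the matching step.
    def extract_info(videos):                  # step one: vehicle and owner info
        return {}

    def filter_video(videos):                  # step two: filtering, frame analysis
        return videos

    def clip_per_vehicle(frames):              # step three: one clip per automobile
        return [frames]

    def tabulate_damage(field_data):           # step four: damage data table
        return dict(field_data)

    def simulate_scene(info, clips, damage):   # step five: 3D scene restoration
        return dict(damage)

    def coincidence(sim, real):                # step six: simulation vs. field data
        return 1.0 if sim == real else 0.0

    def adjust(sim):                           # step six: re-tune when below standard
        return sim

    def record_360_view(sim):                  # step seven: record, incl. blind spots
        pass

    def judge_violations(sim):                 # step eight: per-vehicle verdict
        return []

    def run_detection(videos, field_data, standard=0.85):
        info = extract_info(videos)
        clips = clip_per_vehicle(filter_video(videos))
        damage = tabulate_damage(field_data)
        sim = simulate_scene(info, clips, damage)
        while coincidence(sim, damage) < standard:
            sim = adjust(sim)                  # adjust data and continue simulating
        record_360_view(sim)
        return judge_violations(sim)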
In a preferred embodiment, the peripheral cameras in step one are all cameras within a radius of 200 m-300 m centered on the accident scene; the vehicle information in step one includes the license plate number, automobile type, mileage, load condition and maintenance records; and the device that extracts and views the videos shot at the accident scene and by the peripheral cameras in step one is an embedded terminal device.
In a preferred embodiment, the owner's personal information in step one includes age, driving experience, physical condition and whether there is a record of driving violations.
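For illustration only, the step-one records just described can be held in simple data structures; the field names below are assumptions chosen to mirror the two lists above, not terms from the patent.

    from dataclasses import dataclass, field

    @dataclass
    class VehicleInfo:
        """Step-one vehicle record: plate, type, mileage, load, maintenance."""
        license_plate: str
        vehicle_type: str
        mileage_km: float
        load_condition: str
        maintenance_records: list = field(default_factory=list)

    @dataclass
    class OwnerInfo:
        """Step-one owner record: age, driving experience, health, violations."""
        age: int
        driving_years: int
        physical_condition: str
        has_violation_record: bool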
In a preferred embodiment, the filtering in step two includes background processing, brightness adjustment and light adjustment, and the video frame rate in step two is 30-50 frames/s.
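A minimal sketch of such filtering, assuming OpenCV is available; the specific operations (Gaussian denoising as background noise suppression, a linear brightness/contrast correction as light adjustment) and their parameters are illustrative choices, since the patent does not prescribe particular filters.

    import cv2

    def preprocess(src, dst, alpha=1.2, beta=15):
        """Denoise and brightness-adjust a surveillance clip, frame by frame."""
        cap = cv2.VideoCapture(src)
        fps = cap.get(cv2.CAP_PROP_FPS)  # expected around 30-50 frames/s here
        size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.GaussianBlur(frame, (5, 5), 0)  # suppress sensor/weather noise
            frame = cv2.convertScaleAbs(frame, alpha=alpha, beta=beta)  # lighten dark footage
            out.write(frame)
        cap.release()
        out.release()

    preprocess("crossing_raw.mp4", "crossing_clean.mp4")  # hypothetical file names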
In a preferred embodiment, the automobile trajectories in step three cover each automobile's path for 20 s-30 s before and after the accident; the number of clipped videos in step three equals the number of automobiles involved, with one clipped video per automobile.
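The per-vehicle clipping can be sketched as exporting a window around the accident time for each automobile involved. The timestamps and file names below are hypothetical; the window width follows the 20 s-30 s range above.

    import cv2

    def export_clip(src, dst, accident_t, window_s=25.0):
        """Write the window_s seconds before and after accident_t to a new file."""
        cap = cv2.VideoCapture(src)
        fps = cap.get(cv2.CAP_PROP_FPS)
        size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        first = max(0, int((accident_t - window_s) * fps))
        last = int((accident_t + window_s) * fps)
        cap.set(cv2.CAP_PROP_POS_FRAMES, first)
        out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
        for _ in range(first, last):
            ok, frame = cap.read()
            if not ok:
                break
            out.write(frame)
        cap.release()
        out.release()

    # One clip per automobile involved, as the paragraph above requires:
    for i, t in enumerate([42.0, 42.6, 43.1]):  # hypothetical accident timestamps (s)
        export_clip("scene.mp4", f"vehicle_{i + 1}.mp4", t)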
In a preferred embodiment, the video analysis in step three uses 30-40 frames/s for frame analysis, and the photos of the surrounding environment in step four mainly cover public property damaged in the accident, including guardrails and greenery, as well as the vehicle marks left in the accident.
In a preferred embodiment, the automobile damage data in step five include the damaged area of the automobile's surface and the damaged internal devices and their positions, and the 3D simulation in step five covers the automobile type, automobile speed, surrounding environment and signal lamp state.
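Again purely as an illustration, the step-five simulation inputs and the damage data named above can be grouped into records; the field names are assumptions mirroring this paragraph.

    from dataclasses import dataclass, field

    @dataclass
    class SimulationInput:
        """What the step-five 3D simulation is driven by."""
        vehicle_type: str
        speed_kmh: float
        environment: dict            # road layout, weather, nearby structures, ...
        signal_state: str            # e.g. "red", "yellow", "green"

    @dataclass
    class DamageData:
        """Observed damage later compared against the simulation output."""
        surface_damage_area_m2: float
        damaged_devices: list = field(default_factory=list)   # damaged internal devices
        damage_positions: list = field(default_factory=list)  # where the damage sits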
In a preferred embodiment, the data collected after the 3D simulation in step six include the automobiles' post-accident positions, the automobile damage rate and the positions of damage to the surrounding environment, and the coincidence-degree standard in step six requires the coincidence degree to exceed 85%-95%.
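The coincidence check of step six can be sketched numerically. The metric below (mean relative agreement over shared fields) is an assumption, since the patent does not define how the coincidence degree is computed, and the data values are invented for illustration.

    def coincidence(sim: dict, real: dict) -> float:
        """Mean relative agreement across the fields both datasets share."""
        shared = sim.keys() & real.keys()
        scores = [1.0 - abs(sim[k] - real[k]) / max(abs(sim[k]), abs(real[k]), 1e-9)
                  for k in shared]
        return sum(scores) / len(scores) if scores else 0.0

    real = {"rest_position_m": 13.8, "damage_rate": 0.41, "env_damage_m2": 2.0}
    sim = {"rest_position_m": 16.0, "damage_rate": 0.30, "env_damage_m2": 2.6}
    standard = 0.85  # 85%-95% depending on the embodiment

    while coincidence(sim, real) < standard:
        # Hypothetical adjustment rule: pull simulated values toward the field
        # data and re-check; in practice the simulation would be re-parameterized.
        sim = {k: (sim[k] + real[k]) / 2 for k in sim}

    print(f"coincidence degree: {coincidence(sim, real):.2%}")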
In a preferred embodiment, the vehicle blind-spot situations in step seven include occlusion by other vehicles during the accident and shooting angles that cannot capture the complete scene.
In a preferred embodiment, the specific conditions of the automobile in step eight include the automobile's speed, load, the driver's driving state and the automobile's performance.
The technical effects and advantages of the invention are as follows:
A red-light-running incident is decomposed vehicle by vehicle: the footage of the vehicles involved is clipped into a separate video per vehicle, and each vehicle's situation is analyzed individually. Each vehicle's behavior during the accident is judged from its speed at the time and the damage it sustained, while external impacts are also taken into account. The accident is then reconstructed from the collected data with 3D simulation, and the damage caused by the simulated vehicles and the damage to the surrounding environment are compared with the real data, so red-light-running incidents are easier to process and analyze.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Example 1:
The invention provides a deep learning red light running detection method based on an embedded terminal, comprising the following specific operation steps:
Step one: extract the videos shot at the traffic accident scene and by the peripheral cameras, and then extract from them the vehicle information and the owners' personal information;
Step two: filter the videos to reduce the influence of the environment on their clarity, and then import them into an image processor for frame analysis;
Step three: record the trajectory of each automobile in the video and mark it in the picture; clip the recorded footage into a separate video of each automobile's motion, and watch and analyze each clipped video individually;
Step four: photograph the surrounding environment after the accident with a detector at the accident site, assess the degree of damage to the automobiles, and compile a data table;
Step five: restore the scene with 3D simulation technology, based on the environment shown in the video, the automobile information, the automobile trajectories in the footage and the signal lamp state at the time, and run an on-scene simulation;
Step six: record all data from the 3D simulation, tabulate them against the data collected in the field, and compute the coincidence degree of the 3D restoration from the similarity of the two datasets; when the coincidence degree falls below the standard value, adjust the data and continue the simulation;
Step seven: once the coincidence degree of the 3D simulation reaches the standard, record the simulated scene from a 360-degree high-definition camera view, with emphasis on the blind-spot situations of the vehicles at the scene;
Step eight: record the specific situation of each automobile in the 3D simulation, analyze and judge whether a red light was run, and then assign responsibility according to each automobile's situation.
Further, the peripheral cameras in step one are all cameras within a radius of 200 m centered on the accident scene; the vehicle information in step one includes the license plate number, automobile type, mileage, load condition and maintenance records; and the device that extracts and views the videos shot at the accident scene and by the peripheral cameras in step one is an embedded terminal device.
Further, the owner's personal information in step one includes age, driving experience, physical condition and whether there is a record of driving violations.
Further, the filtering in step two includes background processing, brightness adjustment and light adjustment, and the video frame rate in step two is 30 frames/s.
Further, the automobile trajectories in step three cover each automobile's path for 20 s-30 s before and after the accident; the number of clipped videos in step three equals the number of automobiles involved, with one clipped video per automobile.
Further, the video analysis in step three uses 30 frames/s for frame analysis, and the photos of the surrounding environment in step four mainly cover public property damaged in the accident, including guardrails and greenery, as well as the vehicle marks left in the accident.
Further, the automobile damage data in step five include the damaged area of the automobile's surface and the damaged internal devices and their positions, and the 3D simulation in step five covers the automobile type, automobile speed, surrounding environment and signal lamp state.
Further, the data collected after the 3D simulation in step six include the automobiles' post-accident positions, the automobile damage rate and the positions of damage to the surrounding environment, and the coincidence-degree standard in step six requires the coincidence degree to exceed 85%.
Further, the vehicle blind-spot situations in step seven include occlusion by other vehicles during the accident and shooting angles that cannot capture the complete scene.
Further, the specific conditions of the automobile in step eight include the automobile's speed, load, the driver's driving state and the automobile's performance.
Example 2:
The invention provides a deep learning red light running detection method based on an embedded terminal, comprising the following specific operation steps:
Step one: extract the videos shot at the traffic accident scene and by the peripheral cameras, and then extract from them the vehicle information and the owners' personal information;
Step two: filter the videos to reduce the influence of the environment on their clarity, and then import them into an image processor for frame analysis;
Step three: record the trajectory of each automobile in the video and mark it in the picture; clip the recorded footage into a separate video of each automobile's motion, and watch and analyze each clipped video individually;
Step four: photograph the surrounding environment after the accident with a detector at the accident site, assess the degree of damage to the automobiles, and compile a data table;
Step five: restore the scene with 3D simulation technology, based on the environment shown in the video, the automobile information, the automobile trajectories in the footage and the signal lamp state at the time, and run an on-scene simulation;
Step six: record all data from the 3D simulation, tabulate them against the data collected in the field, and compute the coincidence degree of the 3D restoration from the similarity of the two datasets; when the coincidence degree falls below the standard value, adjust the data and continue the simulation;
Step seven: once the coincidence degree of the 3D simulation reaches the standard, record the simulated scene from a 360-degree high-definition camera view, with emphasis on the blind-spot situations of the vehicles at the scene;
Step eight: record the specific situation of each automobile in the 3D simulation, analyze and judge whether a red light was run, and then assign responsibility according to each automobile's situation.
Further, the peripheral cameras in step one are all cameras within a radius of 250 m centered on the accident scene; the vehicle information in step one includes the license plate number, automobile type, mileage, load condition and maintenance records; and the device that extracts and views the videos shot at the accident scene and by the peripheral cameras in step one is an embedded terminal device.
Further, the owner's personal information in step one includes age, driving experience, physical condition and whether there is a record of driving violations.
Further, the filtering in step two includes background processing, brightness adjustment and light adjustment, and the video frame rate in step two is 40 frames/s.
Further, the automobile trajectories in step three cover each automobile's path for 20 s-30 s before and after the accident; the number of clipped videos in step three equals the number of automobiles involved, with one clipped video per automobile.
Further, the video analysis in step three uses 35 frames/s for frame analysis, and the photos of the surrounding environment in step four mainly cover public property damaged in the accident, including guardrails and greenery, as well as the vehicle marks left in the accident.
Further, the automobile damage data in step five include the damaged area of the automobile's surface and the damaged internal devices and their positions, and the 3D simulation in step five covers the automobile type, automobile speed, surrounding environment and signal lamp state.
Further, the data collected after the 3D simulation in step six include the automobiles' post-accident positions, the automobile damage rate and the positions of damage to the surrounding environment, and the coincidence-degree standard in step six requires the coincidence degree to exceed 90%.
Further, the vehicle blind-spot situations in step seven include occlusion by other vehicles during the accident and shooting angles that cannot capture the complete scene.
Further, the specific conditions of the automobile in step eight include the automobile's speed, load, the driver's driving state and the automobile's performance.
Example 3:
The invention provides a deep learning red light running detection method based on an embedded terminal, comprising the following specific operation steps:
Step one: extract the videos shot at the traffic accident scene and by the peripheral cameras, and then extract from them the vehicle information and the owners' personal information;
Step two: filter the videos to reduce the influence of the environment on their clarity, and then import them into an image processor for frame analysis;
Step three: record the trajectory of each automobile in the video and mark it in the picture; clip the recorded footage into a separate video of each automobile's motion, and watch and analyze each clipped video individually;
Step four: photograph the surrounding environment after the accident with a detector at the accident site, assess the degree of damage to the automobiles, and compile a data table;
Step five: restore the scene with 3D simulation technology, based on the environment shown in the video, the automobile information, the automobile trajectories in the footage and the signal lamp state at the time, and run an on-scene simulation;
Step six: record all data from the 3D simulation, tabulate them against the data collected in the field, and compute the coincidence degree of the 3D restoration from the similarity of the two datasets; when the coincidence degree falls below the standard value, adjust the data and continue the simulation;
Step seven: once the coincidence degree of the 3D simulation reaches the standard, record the simulated scene from a 360-degree high-definition camera view, with emphasis on the blind-spot situations of the vehicles at the scene;
Step eight: record the specific situation of each automobile in the 3D simulation, analyze and judge whether a red light was run, and then assign responsibility according to each automobile's situation.
Further, the peripheral cameras in step one are all cameras within a radius of 300 m centered on the accident scene; the vehicle information in step one includes the license plate number, automobile type, mileage, load condition and maintenance records; and the device that extracts and views the videos shot at the accident scene and by the peripheral cameras in step one is an embedded terminal device.
Further, the owner's personal information in step one includes age, driving experience, physical condition and whether there is a record of driving violations.
Further, the filtering in step two includes background processing, brightness adjustment and light adjustment, and the video frame rate in step two is 50 frames/s.
Further, the automobile trajectories in step three cover each automobile's path for 30 s before and after the accident; the number of clipped videos in step three equals the number of automobiles involved, with one clipped video per automobile.
Further, the video analysis in step three uses 40 frames/s for frame analysis, and the photos of the surrounding environment in step four mainly cover public property damaged in the accident, including guardrails and greenery, as well as the vehicle marks left in the accident.
Further, the automobile damage data in step five include the damaged area of the automobile's surface and the damaged internal devices and their positions, and the 3D simulation in step five covers the automobile type, automobile speed, surrounding environment and signal lamp state.
Further, the data collected after the 3D simulation in step six include the automobiles' post-accident positions, the automobile damage rate and the positions of damage to the surrounding environment, and the coincidence-degree standard in step six requires the coincidence degree to exceed 95%.
Further, the vehicle blind-spot situations in step seven include occlusion by other vehicles during the accident and shooting angles that cannot capture the complete scene.
Further, the specific conditions of the automobile in step eight include the automobile's speed, load, the driver's driving state and the automobile's performance.
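The three examples differ only in their parameter choices. For reference, the values above can be collected into configuration records (the key names are illustrative):

    EXAMPLES = {
        1: {"camera_radius_m": 200, "filter_fps": 30, "analysis_fps": 30, "coincidence_standard": 0.85},
        2: {"camera_radius_m": 250, "filter_fps": 40, "analysis_fps": 35, "coincidence_standard": 0.90},
        3: {"camera_radius_m": 300, "filter_fps": 50, "analysis_fps": 40, "coincidence_standard": 0.95},
    }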
Ten red-light-running accidents were analyzed according to the methods of Examples 1-3, giving the following table:
[Table: accident-analysis accuracy and analysis time for the methods of Examples 1-3; present only as an image (BDA0002465974070000101) in the original document]
As can be seen from the table, the method of Example 3 achieves high accuracy in accident analysis and a short analysis time.
Finally, it should be noted that the above description covers only preferred embodiments of the present invention and does not limit it; any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (8)

1. A deep learning red light running detection method based on an embedded terminal, characterized in that the specific operation steps are as follows:
Step one: extract the videos shot at the traffic accident scene and by the peripheral cameras, and then extract from them the vehicle information and the owners' personal information;
Step two: filter the videos to reduce the influence of the environment on their clarity, and then import them into an image processor for frame analysis;
Step three: record the trajectory of each automobile in the video and mark it in the picture; clip the recorded footage into a separate video of each automobile's motion, and watch and analyze each clipped video individually;
Step four: photograph the surrounding environment after the accident with a detector at the accident site, assess the degree of damage to the automobiles, and compile a data table;
Step five: restore the scene with 3D simulation technology, based on the environment shown in the video, the automobile information, the automobile trajectories in the footage and the signal lamp state at the time, and run an on-scene simulation;
Step six: record all data from the 3D simulation, tabulate them against the data collected in the field, and compute the coincidence degree of the 3D restoration from the similarity of the two datasets; when the coincidence degree falls below the standard value, adjust the data and continue the simulation; the data collected after the 3D simulation include the automobiles' post-accident positions, the automobile damage rate and the positions of damage to the surrounding environment, and the coincidence-degree standard in step six requires the coincidence degree to exceed 85%-95%;
Step seven: once the coincidence degree of the 3D simulation reaches the standard, record the simulated scene from a 360-degree high-definition camera view, with emphasis on the blind-spot situations of the vehicles at the scene; the vehicle blind-spot situations include occlusion by other vehicles during the accident and shooting angles that cannot capture the complete scene;
Step eight: record the specific situation of each automobile in the 3D simulation, analyze and judge whether a red light was run, and then assign responsibility according to each automobile's situation.
2. The deep learning red light running detection method based on the embedded terminal as claimed in claim 1, wherein: the peripheral cameras in step one are all cameras within a radius of 200-300 m centered on the accident site, and the vehicle information in step one includes the license plate number, automobile type, mileage, load condition and maintenance records.
3. The deep learning red light running detection method based on the embedded terminal as claimed in claim 1, wherein: the owner's personal information in step one includes age, driving experience, physical condition and whether there is a record of driving violations, and the device that extracts and views the videos shot at the accident scene and by the peripheral cameras in step one is an embedded terminal device.
4. The deep learning red light running detection method based on the embedded terminal as claimed in claim 1, wherein: the filtering in step two includes background processing, brightness adjustment and light adjustment, and the video frame rate in step two is 30-50 frames/s.
5. The deep learning red light running detection method based on the embedded terminal as claimed in claim 1, wherein: the automobile trajectories in step three cover each automobile's path for 20-30 s before and after the accident; the number of clipped videos in step three equals the number of automobiles involved, with one clipped video per automobile.
6. The deep learning red light running detection method based on the embedded terminal as claimed in claim 1, wherein: the video analysis in step three uses 30-40 frames/s for frame analysis, and the photos of the surrounding environment in step four mainly cover public property damaged in the accident, including guardrails and greenery, as well as the vehicle marks left in the accident.
7. The deep learning red light running detection method based on the embedded terminal as claimed in claim 1, wherein: the automobile damage data in step five include the damaged area of the automobile's surface and the damaged internal devices and their positions, and the 3D simulation in step five covers the automobile type, automobile speed, surrounding environment and signal lamp state.
8. The deep learning red light running detection method based on the embedded terminal as claimed in claim 1, wherein: the specific conditions of the automobile in step eight include the automobile's speed, load, the driver's driving state and the automobile's performance.
CN202010334070.6A (filed 2020-04-24): Deep learning red light running detection method based on embedded terminal. Status: Active. Granted publication: CN111640308B (en).

Priority Applications (1)

CN202010334070.6A (priority date 2020-04-24, filing date 2020-04-24): Deep learning red light running detection method based on embedded terminal

Applications Claiming Priority (1)

CN202010334070.6A (priority date 2020-04-24, filing date 2020-04-24): Deep learning red light running detection method based on embedded terminal

Publications (2)

CN111640308A (en): published 2020-09-08
CN111640308B (en): granted 2022-03-08

Family

ID=72331832

Family Applications (1)

CN202010334070.6A (priority date 2020-04-24, filing date 2020-04-24): Deep learning red light running detection method based on embedded terminal; status Active, granted as CN111640308B (en)

Country Status (1)

CN: CN111640308B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001043472A (en) * 1999-07-30 2001-02-16 Mazda Motor Corp Automatic report system
CN102819880B (en) * 2012-08-07 2015-09-09 广东威创视讯科技股份有限公司 Method for comprehensively restoring road accident images
CN103824346B (en) * 2014-02-17 2016-04-13 深圳市宇恒互动科技开发有限公司 Driving recording and replay method and system
CN104916132B (en) * 2015-05-14 2017-02-01 扬州大学 Method for determining traffic flow trajectories at an intersection
CN105336207B (en) * 2015-12-04 2018-12-25 黄左宁 Vehicle recorder and public security comprehensive monitoring system
CN110009903B (en) * 2019-03-05 2022-02-18 同济大学 Traffic accident scene restoration method
CN109919140B (en) * 2019-04-02 2021-04-09 浙江科技学院 Automatic determination method, system, equipment and storage medium for vehicle collision accident responsibility

Also Published As

CN111640308A (en): published 2020-09-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant