CN113761967A - Identification method and device - Google Patents

Identification method and device

Info

Publication number
CN113761967A
Authority
CN
China
Prior art keywords
traffic light
image
images
vehicle
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010486131.0A
Other languages
Chinese (zh)
Inventor
巫荣
柳圆圆
曹彬
何威
汤煜
李家乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Suzhou Software Technology Co Ltd
Priority to CN202010486131.0A
Publication of CN113761967A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/017: Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G 1/0175: Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses an identification method and device. The method includes: collecting at least two frames of images recorded by a vehicle event data recorder within a certain time, and preprocessing the at least two frames of images; identifying stop line information and traffic light information in each preprocessed frame of image; determining, according to the stop line information and the traffic light information, whether the vehicle was located at a traffic light intersection within the certain time; where the vehicle is identified as having been located at a traffic light intersection within the certain time, identifying a target image from the at least two preprocessed frames of images, the target image being the image, among the at least two frames, in which the vehicle presses the stop line for the first time; obtaining, from the at least two preprocessed frames of images, at least two associated images temporally adjacent to the target image; and determining, according to the at least two associated images, whether the vehicle ran a red light within the certain time.

Description

Identification method and device
Technical Field
The present application relates to image processing technologies, and in particular, to an identification method and apparatus.
Background
In the prior art, whether a vehicle runs a red light is generally identified by mounting multiple cameras at a traffic intersection and photographing vehicles from multiple angles. However, when an insurance company needs to settle a claim for a red-light running event, it cannot retrieve the images captured by the intersection cameras; whether the vehicle ran the red light can only be confirmed by manually reviewing the images stored in the vehicle's event data recorder, which consumes a great deal of manpower and time.
Disclosure of Invention
In order to solve the above technical problem, the present application provides an identification method and device that automatically identify whether a vehicle has run a red light, avoiding the large expenditure of manpower and time.
The technical scheme of the application is realized as follows:
The embodiment of the application provides an identification method, which comprises the following steps:
collecting at least two frames of images recorded by a vehicle event data recorder within a certain time;
preprocessing the at least two frames of images;
identifying stop line information and traffic light information in each preprocessed frame of image;
determining, according to the stop line information and the traffic light information, whether the vehicle was located at a traffic light intersection within the certain time;
where the vehicle is identified as having been located at a traffic light intersection within the certain time, identifying a target image from the at least two preprocessed frames of images, the target image being the image, among the at least two frames, in which the vehicle presses the stop line for the first time;
obtaining, from the at least two preprocessed frames of images, at least two associated images temporally adjacent to the target image;
and determining, according to the at least two associated images, whether the vehicle ran a red light within the certain time.
In the above scheme, the determining, according to the at least two associated images, whether the vehicle ran a red light within the certain time includes:
determining the driving state of the vehicle within the certain time based on the at least two associated images;
extracting the traffic light information of the at least two associated images from the traffic light information of the at least two preprocessed frames of images;
and determining, according to the driving state and the traffic light information of the at least two associated images, whether the vehicle ran a red light within the certain time.
In the foregoing solution, the identifying the stop line information in each preprocessed frame of image includes:
identifying a stop line region of interest in each preprocessed frame of image;
identifying whether a longitudinal lane line and/or a lateral lane line exists in the stop line region of interest;
and regarding the longitudinal lane line and/or lateral lane line identified in the stop line region of interest as the stop line information.
In the foregoing solution, the identifying a target image from the at least two preprocessed frames of images includes:
identifying the longitudinal lane line and/or lateral lane line in each preprocessed frame of image;
and, among the frames that include the longitudinal lane line and/or lateral lane line, determining as the target image the image in which a breakpoint first appears on the longitudinal lane line and/or the vehicle head first forms a longitudinal tangent point with the lateral lane line.
In the above scheme, the identifying the traffic light information in each preprocessed frame of image includes:
identifying a traffic light region of interest in each preprocessed frame of image;
screening out the traffic light regions of interest that meet the requirement according to their width-to-height ratios;
matching sample data against the traffic light regions of interest that meet the requirement, wherein the sample data characterize traffic lights of different colors and different shapes;
obtaining, according to the matching result, the traffic light region of each image that includes a qualifying traffic light region of interest, together with the traffic light information in that image;
wherein the traffic light information characterizes traffic lights of different colors and different shapes.
In this scheme, for at least part of the first images among the images including a qualifying traffic light region of interest, a first image being an image whose traffic light information cannot be obtained from the matching result, the method further includes:
obtaining the qualifying traffic light region of interest of the first image;
classifying the sample data by traffic light shape;
dividing the qualifying traffic light region of interest of the first image by shape according to the classified sample data;
and identifying the color and shape of the traffic light in the first image from the shape-divided traffic light region of interest.
In the foregoing solution, the determining the driving state of the vehicle within the certain time based on the at least two associated images includes:
acquiring the vertical position of the traffic light from the at least two preprocessed frames of images, and determining the driving state of the vehicle according to the change of the vertical position of the traffic light across the at least two associated images, the driving state including at least forward and stop;
and acquiring the horizontal position of the traffic light from the at least two preprocessed frames of images, and determining the driving direction of the vehicle according to the change of the horizontal position of the traffic light across the at least two associated images.
The embodiment of the application further provides an identification device, which includes an acquisition unit, a preprocessing unit, a first identification unit, a first determining unit, a second identification unit, an obtaining unit, and a second determining unit; wherein,
the acquisition unit is configured to collect at least two frames of images recorded by a vehicle event data recorder within a certain time;
the preprocessing unit is configured to preprocess the at least two frames of images;
the first identification unit is configured to identify stop line information and traffic light information in each preprocessed frame of image;
the first determining unit is configured to determine, according to the stop line information and the traffic light information, whether the vehicle was located at a traffic light intersection within the certain time;
the second identification unit is configured to, where the vehicle is identified as having been located at a traffic light intersection within the certain time, identify a target image from the at least two preprocessed frames of images, the target image being the image, among the at least two preprocessed frames, in which the vehicle presses the stop line for the first time;
the obtaining unit is configured to obtain, from the at least two preprocessed frames of images, at least two associated images temporally adjacent to the target image;
and the second determining unit is configured to determine, according to the at least two associated images, whether the vehicle ran a red light within the certain time.
In the foregoing solution, the second determining unit is further configured to:
determine the driving state of the vehicle within the certain time based on the at least two associated images;
extract the traffic light information of the at least two associated images from the traffic light information of the at least two preprocessed frames of images;
and determine, according to the driving state and the traffic light information of the at least two associated images, whether the vehicle ran a red light within the certain time.
In the foregoing solution, the first identification unit is further configured to:
identify a stop line region of interest in each preprocessed frame of image; identify whether a longitudinal lane line and/or a lateral lane line exists in the stop line region of interest; and regard the longitudinal lane line and/or lateral lane line identified in the stop line region of interest as the stop line information.
In the foregoing solution, the second identification unit is further configured to:
identify the longitudinal lane line and/or lateral lane line in each preprocessed frame of image;
and, among the images identified as including the longitudinal lane line and/or lateral lane line, determine as the target image the image in which a breakpoint first appears on the longitudinal lane line and/or the vehicle head first forms a longitudinal tangent point with the lateral lane line.
In the foregoing solution, the first identification unit is further configured to:
identify a traffic light region of interest in each preprocessed frame of image;
screen out the traffic light regions of interest that meet the requirement according to their width-to-height ratios;
match sample data against the traffic light regions of interest that meet the requirement, wherein the sample data characterize traffic lights of different colors and different shapes;
and obtain, according to the matching result, the traffic light region of each image that includes a qualifying traffic light region of interest, together with the traffic light information in that image;
wherein the traffic light information characterizes traffic lights of different colors and different shapes.
In the foregoing solution, for at least a part of a first image in an image including a traffic light interest region satisfying requirements, the first image is an image in which traffic light information in the image cannot be obtained according to the matching result, and the first identifying unit is further configured to:
obtaining a traffic light region of interest of a first image, which meets requirements;
classifying the sample data according to the shape of the traffic light; according to the classified sample data, dividing the shape of the traffic light interesting area meeting the requirements of the first image;
the traffic light color and shape in the first image are identified from a traffic light region of interest that is divided into a shape.
In the foregoing solution, the second determining unit is further configured to:
acquire the vertical position of the traffic light from the at least two preprocessed frames of images, and determine the driving state of the vehicle according to the change of the vertical position of the traffic light across the at least two associated images, the driving state including at least forward and stop;
and acquire the horizontal position of the traffic light from the at least two preprocessed frames of images, and determine the driving direction of the vehicle according to the change of the horizontal position of the traffic light across the at least two associated images.
The application provides an identification method and device. The identification method collects at least two frames of images recorded by a vehicle event data recorder within a certain time and preprocesses them; determines, according to the traffic light information and stop line information identified in the preprocessed frames, whether the vehicle was located at a traffic light intersection within the certain time; where it was, identifies the image in which the vehicle first presses the stop line as the target image; obtains at least two associated images temporally adjacent to the target image; and determines, according to the associated images, whether the vehicle ran a red light within the certain time. Because the images in the event data recorder are collected and identified automatically, no manual collection or review is needed, saving a great deal of manpower and time. In addition, the identification method in the embodiment of the application makes full use of the data already in the event data recorder, so no additional data acquisition is required. Whether the vehicle ran a red light is identified from the associated images adjacent to the image in which the stop line is first pressed, realizing intelligent identification without manual review and further saving labor and time; automatically identifying the red-light running behavior from the associated images also improves identification accuracy.
In the technical scheme of determining, according to the at least two associated images, whether the vehicle ran a red light at the traffic light intersection, the driving state of the vehicle at the intersection is determined from the vertical and horizontal positions of the traffic light in the at least two associated images, which improves identification accuracy.
In the technical scheme of identifying the stop line information in the at least two frames of images, the stop line information in each preprocessed frame of image is identified automatically, avoiding the large expenditure of manpower and time of manual review, improving identification efficiency, and reducing the cost of identification.
In the technical scheme of identifying the target image among the at least two frames of images, the image in which the vehicle first presses the stop line is determined as the target image according to the first appearance of a breakpoint on the longitudinal lane line and/or the first longitudinal tangent point formed between the vehicle head and the lateral lane line, improving identification accuracy.
In the technical scheme of identifying the traffic light information in the at least two frames of images, the at least two frames are preprocessed, the traffic light regions of interest are identified, and those meeting the requirement are screened out according to their width-to-height ratios; because qualifying regions can be screened automatically by this ratio, processing time is shortened compared with manual processing and the efficiency of identifying the traffic light information in each frame is improved. Furthermore, the sample data matched against the qualifying traffic light regions of interest are obtained by comprehensively considering the shapes and colors (the traffic light information) of traffic lights in real life, which improves matching accuracy.
The application also considers the images whose traffic light information cannot be obtained from the matching result: by matching these images against sample data classified by shape, the traffic light region and traffic light information of every image that includes a qualifying traffic light region of interest can be identified, none is omitted, and the rigor of identifying red-light running behavior is improved.
In the technical scheme of determining whether the vehicle was located at a traffic light intersection within a certain time, the intersection is identified from the combination of the stop line information and the traffic light information, which avoids inaccurate identification of the intersection and improves the accuracy of automatically identifying red-light running behavior from the at least two frames of images in the event data recorder.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the following drawings illustrate only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic flowchart of an identification method according to an embodiment of the present application;
Fig. 2 is a schematic coordinate diagram for identifying a traffic light region of interest that meets the requirement according to an embodiment of the present application;
Fig. 3 is a schematic diagram of traffic light sample data according to an embodiment of the present application;
Fig. 4 is a schematic coordinate diagram for identifying the driving state of a vehicle according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an identification device according to an embodiment of the present application;
Fig. 6 is a first schematic diagram of an application scenario of an identification method according to an embodiment of the present application;
Fig. 7 is a second schematic diagram of an application scenario of an identification method according to an embodiment of the present application;
Fig. 8 is a schematic flowchart of identifying traffic light information according to an embodiment of the present application;
Fig. 9 is a schematic flowchart of identifying stop line information and driving state according to an embodiment of the present application;
Fig. 10 is a schematic hardware configuration diagram of an identification device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them; all other embodiments derived by a person skilled in the art from the given embodiments without creative effort fall within the protection scope of the present application. In the present application, the embodiments and the features of the embodiments may be combined with each other arbitrarily provided there is no conflict.
In the prior art, cameras are generally mounted at multiple positions of an intersection (a traffic light intersection), and whether a vehicle has run a red light is identified by photographing the outside of the vehicle from multiple angles. To make it convenient for people or organizations outside the traffic department (such as insurance companies) to determine whether a vehicle ran a red light at an intersection, the embodiment of the present application provides an identification method that identifies red-light running behavior using the images captured by the vehicle event data recorder inside the vehicle. Fig. 1 is a schematic flowchart of the identification method provided in the embodiment of the present application; the method includes:
S101: collecting at least two frames of images recorded by a vehicle event data recorder within a certain time;
S102: preprocessing the at least two frames of images.
in order to determine whether a vehicle runs a red light when passing through a certain traffic light intersection within a certain time, at least two frames of images within a certain time recorded by a vehicle data recorder are collected, and the at least two frames of images are preprocessed. Specifically, the preprocessing step sequentially comprises graying, binarization and edge detection of each frame image in at least two frame images.
The graying is to set the parameter values of three parameters R (Red ), G (Green ) and B (Blue ) of each pixel point in the image to the same value to simplify the color data in the image, generally, the graying value may be an average value or a weighted average value of R, G and B parameters of the original pixel point, or may be a value of one parameter (e.g., R) of R, G and B. Taking a pixel point F in a frame of image as an example, R, G and B parameters of F are respectively 50, 180 and 220, the average value of 50, 180 and 220 is calculated to obtain 150, and R, G and B parameters of updated F are both 150.
The binary method is characterized in that R, G and B parameter values of pixel points which are larger than a set threshold value in a grayed image are set to be the maximum value 255, and R, G and B parameter values of pixel points which are smaller than or equal to the set threshold value are set to be the minimum value 0, so that R, G and B parameter values of each pixel point in the image are only 0 or 255, color data in the image is further simplified, meanwhile, the pixel points with obvious brightness change in the image are highlighted, and only a white area with high brightness and a black area with low brightness are presented in the image. Assuming that the set threshold is 150 and the R, G and B parameters of the grayed pixel F are 150, the R, G and B parameters of the updated pixel F are 0.
And performing edge detection on the binarized image, identifying pixel points with obvious brightness change in the image, and obtaining the edges of a high-brightness region and a low-brightness region in the image. Obviously, in each frame of image, the stop line information and the traffic light information are both areas with high brightness, and the edges of the areas with high brightness and low brightness are identified, so that the edges of the areas where the stop line information and the traffic light information are located can be determined in the image.
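As a concrete illustration, the following is a minimal preprocessing sketch in Python with OpenCV; the unweighted-mean graying, the threshold of 150, and the Canny edge detector are assumptions drawn from the examples above, not mandated by the method.
```python
# A minimal preprocessing sketch under stated assumptions.
import cv2
import numpy as np

def preprocess(frame):
    # Graying: set R, G, and B of each pixel to their mean, as in the example.
    gray = frame.mean(axis=2).astype(np.uint8)
    # Binarization: values above 150 become 255, the rest become 0.
    _, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    # Edge detection on the binarized image: boundaries between the
    # high-brightness and low-brightness regions.
    edges = cv2.Canny(binary, 100, 200)
    return gray, binary, edges
```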
After the at least two frames of images have been preprocessed, S103 and S104 are performed.
S103: identifying stop line information and traffic light information in each preprocessed frame of image;
S104: determining, according to the stop line information and the traffic light information, whether the vehicle was located at a traffic light intersection within the certain time.
It is understood that the purpose of S101 to S104 is to obtain the stop line information and traffic light information in the at least two preprocessed frames of images and, based on these two kinds of information, to determine whether the vehicle was at a traffic light intersection within the certain time. The traffic light information described in the embodiments of the present application comprises the shape and color of the traffic light; the identification of traffic light information described hereinafter means, specifically, the identification of the shape and color of the traffic light.
After the method in S102 is executed, S103 is executed to identify the stop line information and the traffic light information in each preprocessed frame of image. Since S103 involves recognizing two different kinds of information, the stop line information and the traffic light information, their recognition is described in a first part and a second part respectively.
Consider first the first part: identifying the stop line information in each preprocessed frame of image. The identifying the stop line information in the at least two preprocessed frames of images comprises:
firstly, identifying a stop line region of interest in each preprocessed frame of image;
secondly, identifying whether a longitudinal lane line and/or a lateral lane line exists in the stop line region of interest;
thirdly, regarding the longitudinal lane line and/or lateral lane line identified in the stop line region of interest as the stop line information.
As the first step indicates, in order to accurately determine whether the vehicle was located at a traffic light intersection within the certain time, it is necessary to identify whether stop line information exists in front of the vehicle head in each frame recorded by the event data recorder within the certain time; therefore, a stop line region of interest preset in front of the vehicle head must be identified. Specifically, the stop line region of interest is a rectangular area of preset length and width in front of the vehicle head in each frame of image, used to identify whether stop line information exists in front of the vehicle. When the vehicle is moving forward, the image content in the stop line region of interest changes continuously from frame to frame.
Following the first step, the second step identifies whether a lateral lane line and/or a longitudinal lane line exists in the stop line region of interest of each preprocessed frame; the lateral and/or longitudinal lane lines identified in the second step are the stop line information of the third step, so the stop line information is identified whenever a lateral and/or longitudinal lane line is identified in the stop line region of interest. A minimal lane-line detection sketch is given below.
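The following Python sketch illustrates one way to carry out the second step; the patent does not name a detector, so the probabilistic Hough transform, the ROI layout, and the angle thresholds are assumptions.
```python
# A sketch of the second step under stated assumptions: Hough line segments in
# the stop line ROI, separated into lateral and longitudinal lines by angle.
import cv2
import numpy as np

def find_lane_lines(edges, roi):
    """`edges` is the edge map from preprocessing; `roi` = (x, y, w, h) is the
    preset rectangle in front of the vehicle head."""
    x, y, w, h = roi
    segs = cv2.HoughLinesP(edges[y:y + h, x:x + w], 1, np.pi / 180,
                           threshold=50, minLineLength=40, maxLineGap=10)
    lateral, longitudinal = [], []
    if segs is not None:
        for x1, y1, x2, y2 in segs.reshape(-1, 4):
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if angle < 20 or angle > 160:      # near-horizontal: lateral line
                lateral.append((x1, y1, x2, y2))
            elif 70 < angle < 110:             # near-vertical: longitudinal line
                longitudinal.append((x1, y1, x2, y2))
    return lateral, longitudinal
```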
Through the scheme in the first part, the stop line information in each preprocessed frame of image can be identified automatically, avoiding the large expenditure of manpower and time of manual review, improving identification efficiency, and reducing the cost of identification.
The method of the second part, identifying the traffic light information in each preprocessed frame of image, may be carried out at the same time as the method of the first part or at a different time, depending on the actual situation (such as the hardware performance available to the running program).
Next, consider the second part: identifying the traffic light information in each preprocessed frame of image. The identifying the traffic light information in each frame of image comprises:
firstly, identifying a traffic light region of interest in each preprocessed frame of image;
secondly, screening out the traffic light regions of interest that meet the requirement according to their width-to-height ratios;
thirdly, matching sample data against the traffic light regions of interest that meet the requirement, wherein the sample data characterize traffic lights of different colors and different shapes;
fourthly, obtaining, according to the matching result, the traffic light region of each image that includes a qualifying traffic light region of interest, together with the traffic light information in that image; wherein the traffic light information characterizes traffic lights of different colors and different shapes.
First, in the first step, a traffic light region of interest is identified in each preprocessed frame of image. The specific method is to identify the high-brightness white areas in each preprocessed frame and take the identified white areas that approximate a quadrilateral as the traffic light regions of interest. Besides this direct approach, in order to obtain white areas whose edges are closer to quadrilaterals, a dilation operation may be applied to the preprocessed frames before identification. Specifically, for each preprocessed frame (each of which contains white areas), the dilation operation expands the white areas in the image so that each expanded white area forms a quadrilateral convex body; the one or more quadrilateral convex bodies in each preprocessed frame are then identified as the traffic light regions of interest. The dilation operation thus corresponds to a further preprocessing step, applied before identifying the traffic light region of interest, that highlights the white areas in the image and thereby allows the traffic light regions of interest to be identified more accurately. A sketch of this step follows.
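The sketch below illustrates the dilation-based variant in Python with OpenCV; the kernel size, iteration count, and the polygon-approximation test for "quadrilateral convex body" are illustrative assumptions.
```python
# A sketch of traffic light ROI extraction under stated assumptions: dilate the
# binarized image, then keep contours that approximate convex quadrilaterals.
import cv2
import numpy as np

def traffic_light_rois(binary):
    dilated = cv2.dilate(binary, np.ones((5, 5), np.uint8), iterations=2)
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            rois.append(cv2.boundingRect(c))   # (x, y, w, h) of one candidate
    return rois
```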
Then, in the second step, in order to screen out the qualifying traffic light regions of interest, one point on the vehicle in each frame is selected as the origin (for example, the center point of the vehicle in the image), the horizontal line through the origin is taken as the X axis and the vertical line through it as the Y axis, and a two-dimensional coordinate system is established. Taking the processing of one traffic light region of interest in one frame as an example: from the ordinates of its highest and lowest points and the abscissas of its leftmost and rightmost points, the minimum circumscribed rectangle of the region is obtained, together with its width and height; the ratio of width to height gives the width-to-height ratio of the region. Where this ratio is greater than a first threshold and smaller than a second threshold, the region is determined to be a qualifying traffic light region of interest; otherwise the region is discarded. The first threshold is smaller than the second, and both are set according to the actual situation, generally as decimals close to 1; for example, the first threshold may be set to 0.667 and the second to 1.8. For further explanation of this second step, reference may be made to the detailed description of Fig. 2 below, which is not repeated here. The second step screens qualifying traffic light regions automatically by their width-to-height ratio; compared with manual processing, this shortens the processing time and improves the efficiency of identifying the traffic light information in each frame. A minimal sketch of the screening follows.
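A minimal sketch of the width-to-height screening, assuming the contour points of one candidate region are given in the coordinate system just described; the thresholds 0.667 and 1.8 follow the example above.
```python
# A sketch of the aspect-ratio screen: width and height come from the extreme
# points of the region (its minimum circumscribed rectangle).
def meets_aspect_requirement(roi_points, low=0.667, high=1.8):
    xs = [p[0] for p in roi_points]
    ys = [p[1] for p in roi_points]
    width = max(xs) - min(xs)     # rightmost abscissa minus leftmost abscissa
    height = max(ys) - min(ys)    # highest ordinate minus lowest ordinate
    if height == 0:
        return False
    z = width / height            # width-to-height ratio Z
    return low < z < high         # qualifying if 0.667 < Z < 1.8
```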
Then, in the third step, in order to identify the traffic light information in the images that include a qualifying traffic light region of interest, traffic light information in its various forms is determined as sample data according to shape and color, and the qualifying traffic light regions of interest are matched against this sample data to obtain the traffic light region and the traffic light information of each image. The shapes of traffic lights include at least circles and arrows, and the colors are generally three (e.g., red, green, and yellow). Assuming the sample data are determined from two shapes (circle and arrow) and three colors, twelve forms of traffic light information are obtained as sample data: three forms for the circular light (red, green, and yellow circles) and nine forms for the arrow-shaped lights, namely three directions by three colors (left-turn arrow in red, green, and yellow; straight arrow in red, green, and yellow; right-turn arrow in red, green, and yellow); reference may be made to the twelve traffic light samples in Fig. 3. In a preferred scheme, a convolutional neural network is used to match the images including a qualifying traffic light region of interest against the sample data. Specifically, the sample data are placed into the convolutional neural network as reference objects, and the network is trained with traffic light images whose traffic light regions and traffic light information are known, yielding a network that can accurately match a known traffic light image with the sample data. An image including a qualifying traffic light region of interest is input to the network and matched against the sample data; the network outputs the traffic light region in the image and determines its traffic light information. If no traffic light region can be identified, the image is determined to contain no traffic light region and no traffic light information. The convolutional neural network used to identify the traffic light region and determine the traffic light information includes, but is not limited to, networks applied to image classification such as the Visual Geometry Group network (VGGNet) and AlexNet. Because the sample data are obtained by comprehensively considering the shapes and colors (the traffic light information) of traffic lights in real life, matching them against the images that include qualifying traffic light regions of interest achieves improved accuracy. A minimal sketch of such a matcher is given below.
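The following PyTorch sketch shows the flavor of such a classifier; the tiny architecture, the 32x32 crop size, the class layout, and the confidence threshold are illustrative assumptions standing in for the VGGNet/AlexNet-scale networks named above.
```python
# A minimal sketch of the matching network under stated assumptions: a small
# CNN scores a cropped ROI against the 12 sample classes
# ({circle, left arrow, straight arrow, right arrow} x {red, green, yellow}).
import torch
import torch.nn as nn

NUM_CLASSES = 12

class TrafficLightMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, NUM_CLASSES)

    def forward(self, x):              # x: (N, 3, 32, 32) ROI crops
        return self.classifier(self.features(x).flatten(1))

def match(model, roi_batch, threshold=0.8):
    """Return the best class per ROI, or None for 'no traffic light recognised'
    when the top score falls below the (assumed) confidence threshold."""
    probs = torch.softmax(model(roi_batch), dim=1)
    conf, cls = probs.max(dim=1)
    return [c.item() if s >= threshold else None for s, c in zip(conf, cls)]
```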
In the fourth step, according to the matching result of the third step, the traffic light region is identified in each image that includes a qualifying traffic light region of interest; whether a traffic light region and traffic light information exist in the image is judged, and the traffic light region and traffic light information are identified in the images where they exist. This automatic identification process improves the efficiency of identifying red-light running behavior.
It should be noted that, according to the matching result of the third step, some of the images including a qualifying traffic light region of interest yield an identified traffic light region and traffic light information, while in others no traffic light region or information can be identified; the latter are referred to as first images. In a preferred embodiment, in order to determine whether a traffic light region and traffic light information actually exist in a first image, a fifth step is performed so that the traffic light regions and traffic light information in the first images are not missed.
Fifthly, for at least part of the first images among the images including a qualifying traffic light region of interest, a first image being an image whose traffic light information cannot be obtained from the matching result: the qualifying traffic light region of interest of the first image is obtained; the sample data are classified by traffic light shape; the qualifying traffic light region of interest of the first image is divided by shape according to the classified sample data; and the color and shape of the traffic light in the first image are identified from the shape-divided traffic light region of interest.
In the fifth step, considering that matching all the sample data at once against the images including qualifying traffic light regions of interest may not be accurate enough, the sample data are classified by traffic light shape, and the sample data of each shape are matched separately against the qualifying traffic light regions of interest of the first images, so that the traffic light regions and traffic light information of some or all of the first images are identified. Taking the sample data of one shape as an example, in a preferred scheme a convolutional neural network is used to match the first images against the sample data of each shape. Specifically, the sample data of that shape are placed into a convolutional neural network as reference objects, and the network is trained with traffic light images of that shape whose traffic light regions and traffic light information are known, yielding a network that can accurately match a known traffic light image with the sample data of that shape. A first image is input to the network and matched against the sample data of that shape; according to the matching result, the network outputs the traffic light region corresponding to that shape and determines the traffic light information of the image. If no traffic light region can be identified, the traffic light region and traffic light information of that image cannot be identified with the sample data of that shape, and the first images still unidentified are matched against sample data of another shape. A convolutional neural network is trained on the same principle for each further shape, the still-unidentified first images are matched against it, and so on; any first image in which no traffic light region is ever identified is finally determined to be an image with no traffic light region. The convolutional neural network includes, but is not limited to, networks applied to image classification such as the Visual Geometry Group network (VGGNet) and AlexNet. Through this scheme, the traffic light region and traffic light information of every image that includes a qualifying traffic light region of interest can be identified, none is missed, and the rigor of identifying red-light running behavior is improved. A sketch of this cascade follows.
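A sketch of the shape-wise cascade under stated assumptions; `match` is the hypothetical helper from the previous sketch, and the shape names are illustrative.
```python
# A sketch of the fifth step: try one shape-specific matcher after another on a
# first image's qualifying ROIs until one recognises a traffic light.
def classify_first_image(shape_models, roi_batch):
    """`shape_models` maps a shape name to a matcher trained only on sample
    data of that shape (e.g. 'circle', 'left', 'straight', 'right')."""
    for shape, model in shape_models.items():
        for result in match(model, roi_batch):
            if result is not None:
                return shape, result      # colour class within this shape
    return None    # never recognised: an image with no traffic light region
```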
S104 is then executed according to the stop line information obtained in the first part and the traffic light information obtained in the second part, determining whether the vehicle was at a traffic light intersection.
It should be noted that, when determining whether the vehicle was located at a traffic light intersection within the certain time, it is not possible to conclude from the stop line information of a preprocessed frame alone that the vehicle in that frame was at a traffic light intersection; the traffic light information identified in the frame must also be considered. Judging by the stop line information alone would wrongly identify an intersection with a stop line but no traffic light as a traffic light intersection; judging by the traffic light information alone would select every frame in which traffic light information can be identified as a frame showing a traffic light intersection. Based on this consideration, whether the vehicle was located at a traffic light intersection is identified from the combination of the stop line information and the traffic light information, which avoids inaccurate identification of the intersection and improves the accuracy of automatically identifying red-light running behavior from the at least two frames of images in the event data recorder.
After determining whether the vehicle was at the traffic light intersection, S105 is executed to identify the target image from the at least two preprocessed frames, i.e., to determine in which image the stop line is pressed for the first time while the vehicle is at the intersection.
S105: where the vehicle is identified as having been located at the traffic light intersection within the certain time, identifying a target image from the at least two preprocessed frames of images, the target image being the image, among the at least two frames, in which the vehicle presses the stop line for the first time.
Specifically, the stop line information includes the lateral lane line and the longitudinal lane line. Correspondingly, the identifying the target image from the at least two preprocessed frames of images includes: identifying the longitudinal lane line and/or lateral lane line in each preprocessed frame of image; and, among the frames that include the longitudinal lane line and/or lateral lane line, determining as the target image the image in which a breakpoint first appears on the longitudinal lane line and/or the vehicle head first forms a longitudinal tangent point with the lateral lane line.
It can be understood that, when the vehicle presses the stop line, identifying the longitudinal and/or lateral lane line in each preprocessed frame reveals either a longitudinal tangent point formed between the vehicle head and the lateral lane line, or a breakpoint on the longitudinal lane line, or both; the image in which the vehicle first presses the stop line is determined as the target image from the first appearance of at least one of these two situations (see the sketch below). After the target image is determined from the at least two preprocessed frames, S106 and S107 are executed to determine at least two associated images temporally adjacent to the target image and, from them, whether the vehicle ran a red light within the certain time.
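Before turning to S106, a minimal sketch of the S105 target-image search; the per-frame flags are assumed to have been computed from the stop line ROI (for example with the lane-line sketch above), and the field names are hypothetical.
```python
# A sketch of S105: scan the preprocessed frames in order and return the index
# of the first frame in which the vehicle presses the stop line.
def find_target_image(frames_info):
    """Each element of `frames_info` is assumed to carry two per-frame flags:
    whether the longitudinal lane line shows a breakpoint, and whether the
    vehicle head forms a longitudinal tangent point with the lateral line."""
    for i, info in enumerate(frames_info):
        if info["longitudinal_breakpoint"] or info["head_tangent_with_lateral"]:
            return i          # first frame pressing the stop line: the target
    return None               # the stop line is never pressed in this clip
```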
S106: obtaining, from the at least two preprocessed frames of images, at least two associated images temporally adjacent to the target image;
S107: determining, according to the at least two associated images, whether the vehicle ran a red light within the certain time.
Specifically, the images temporally adjacent to the target image may be several frames before and several frames after the target image; the numbers of frames appearing before and after the target image may be the same or different, and the time spans before and after may likewise be the same or different. Here, the number of adjacent frames before and after may each be set to 30, and the time span of the adjacent frames before and after may be set to within 10 seconds; a minimal sketch follows.
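A minimal sketch of collecting the associated images, assuming the frames are held in a list and using the 30-frame window from the example above.
```python
# A sketch of S106: take up to 30 frames before and 30 frames after the target
# image as the temporally adjacent associated images.
def associated_frames(frames, target_idx, n_before=30, n_after=30):
    start = max(0, target_idx - n_before)
    end = min(len(frames), target_idx + n_after + 1)
    return frames[start:target_idx] + frames[target_idx + 1:end]
```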
Wherein the determining, according to the at least two associated images, whether the vehicle ran a red light within the certain time comprises:
firstly, determining the driving state of the vehicle within the certain time based on the at least two associated images;
secondly, extracting the traffic light information of the at least two associated images from the traffic light information of the at least two preprocessed frames of images;
thirdly, determining, according to the driving state and the traffic light information of the at least two associated images, whether the vehicle ran a red light within the certain time.
It should be noted that the method for determining the traffic light information of the at least two associated images may follow the method for determining the traffic light information of the at least two preprocessed frames (the second part above); since the embodiment of the present application has already identified the traffic light information of the at least two preprocessed frames, the traffic light information of each associated image can simply be extracted from it.
Based on the extracted traffic light information of the at least two associated images, the first step determines the driving state of the vehicle in each of the associated images, where the driving state comprises the running state and the driving direction of the vehicle. In the embodiments of the present application, the running state includes forward and stop, and the driving direction includes left turn, straight ahead, and right turn. Correspondingly, the determining the driving state of the vehicle within the certain time based on the at least two associated images includes:
(1) acquiring the vertical position of the traffic light from the at least two preprocessed frames of images, and determining the running state of the vehicle according to the change of the vertical position of the traffic light across the at least two associated images, the running state including at least forward and stop.
Specifically, a point on the vehicle captured in each preprocessed frame is selected as the origin (for example, the center of the vehicle in the image), the horizontal line through the origin is taken as the X axis and the vertical line through it as the Y axis, and a two-dimensional coordinate system is established. One or more points are taken from the traffic light region of each associated image, and the running state of the vehicle in the current frame is determined from the change in the ordinate (vertical position) of those points relative to the previous frame. The running state determined in each frame is tallied, and the running state of the vehicle at the traffic light intersection is obtained from the tally.
(2) acquiring the horizontal position of the traffic light from the at least two preprocessed frames of images, and determining the driving direction of the vehicle according to the change of the horizontal position of the traffic light across the at least two associated images.
Specifically, a point on the vehicle in each frame is selected as the origin (for example, the center of the vehicle in the image), the horizontal line through it as the X axis and the vertical line through it as the Y axis, establishing a two-dimensional coordinate system that may be the same as the one in (1). One or more points are taken from the traffic light region of each associated image, and the driving direction of the vehicle in the current frame is determined from the change in the abscissa (horizontal position) of those points relative to the previous frame. The driving direction determined in each frame is tallied, and the driving direction of the vehicle at the traffic light intersection is obtained from the tally; a sketch of both determinations follows.
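A minimal sketch of (1) and (2) together; the sign conventions (which way the light drifts for which manoeuvre) and the epsilon thresholds depend on the coordinate setup and are assumptions.
```python
# A sketch of the driving-state determination: per-frame changes in one tracked
# traffic-light point are averaged over the associated images.
import statistics

def driving_state_and_direction(light_points, dy_eps=1.0, dx_eps=1.0):
    """`light_points` is one tracked (x, y) point of the traffic light region,
    one per associated frame, in the vehicle-centred coordinate system."""
    dys = [b[1] - a[1] for a, b in zip(light_points, light_points[1:])]
    dxs = [b[0] - a[0] for a, b in zip(light_points, light_points[1:])]
    # The vertical position changes frame over frame while the vehicle advances;
    # a static vertical position is read as the vehicle having stopped.
    state = "forward" if abs(statistics.mean(dys)) > dy_eps else "stop"
    mean_dx = statistics.mean(dxs)
    if abs(mean_dx) <= dx_eps:
        direction = "straight"
    else:
        # Assumed convention: the light drifts one way on a left turn and the
        # other on a right turn; calibrate the sign against known clips.
        direction = "left" if mean_dx > 0 else "right"
    return state, direction
```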
In this scheme, the driving state of the vehicle at the traffic light intersection is determined from the vertical and horizontal positions of the traffic light in the at least two associated images, which improves identification accuracy.
In the second step, the traffic light information of the at least two associated images is extracted from the traffic light information already determined for the at least two preprocessed frames; for the specific method, reference is made to the description above, which is not repeated here.
In the third step, whether the vehicle ran a red light at the traffic light intersection is determined from the driving state determined in the first step and the traffic light information determined in the second step within the certain time: where the running state of the vehicle is forward, if the driving direction of the vehicle coincides with a direction in which a red light in the traffic light information forbids passage, it is determined that the vehicle ran a red light within the certain time. A minimal sketch of this decision follows.
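A minimal sketch of the final decision, assuming `light_info` maps each driving direction to the current colour of the light governing it (a hypothetical structure for illustration).
```python
# A sketch of the third step: red-light running is flagged only when the
# vehicle moves forward in a direction a red light currently forbids.
def ran_red_light(state, direction, light_info):
    """`light_info` example: {"left": "green", "straight": "red", "right": "green"}."""
    return state == "forward" and light_info.get(direction) == "red"
```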
Next, a detailed description will be made of an identification method provided by the embodiment of the present application with reference to fig. 2, fig. 4, and fig. 6 to 9. The traffic lights are assumed to be common traffic lights which are classified by shape, including circular traffic lights and arrow-shaped traffic lights. The round traffic lights comprise red round lights, green round lights and yellow round lights; arrow-shaped traffic lights have three directional arrows, a left-turn arrow, a straight arrow, and a right-turn arrow, each having three colors: red, green and yellow. Specifically, the sample data in fig. 3 can be referred to for various forms of traffic lights. As shown in fig. 6, the driving recorder captures a frame of image in an application scene of a round traffic light. As shown in fig. 7, the frame of image is captured by the car recorder in the application scene of the arrow-shaped traffic light. In both of these application scenarios, an identification method provided in the embodiments of the present application can be used. Next, a detailed description will be given of an identification method provided in an embodiment of the present application, taking fig. 6 as an example.
As shown in fig. 6, the frame image includes a stop line interest region and a circular traffic light region. The vehicle head forms a longitudinal tangent point with the transverse lane line, indicating that the vehicle presses the stop line in this frame. Assuming that the vehicle first presses the stop line in fig. 6, fig. 6 is the target image among the at least two frames.
Following the introduction of fig. 6, the specific principles of the identification method provided by the embodiments of the present application will be described in detail through fig. 2, fig. 4, fig. 8 and fig. 9, in conjunction with the application scenario in fig. 6.
In order to judge whether a vehicle runs a red light at a traffic light intersection within a certain time, at least two frames of images are preprocessed, whether traffic light information and stop line information exist is identified from the preprocessed images, and whether the vehicle is located at a traffic light intersection is judged according to these two kinds of information.
Read at least two frames of images recorded by the vehicle event data recorder within a certain time; taking one frame of image as an example (fig. 6), the specific principle of identifying the traffic light information in the frame is explained with reference to fig. 8:
S801: read a frame of image (as in fig. 6);
S802: preprocess the image. The specific method is as follows: first, the image is grayed by setting the R, G and B color parameters of each pixel to their average value. For example, for a pixel point P, the average value f of its three color parameters is calculated, and the R, G and B values of P are all updated to f. Then, the grayscale image is binarized: for a pixel with gray value f, when f is greater than a preset threshold (e.g., 150), its R, G and B values are set to 255; when f is less than or equal to the preset threshold, its R, G and B values are set to 0. Each pixel of the grayscale image is processed in this way. In the resulting binary image, the high-brightness areas and the low-brightness areas are both very prominent. Edge detection is then performed on the binary image to identify the edges of the high-brightness and low-brightness areas, and the high-brightness white areas in the image (suspected traffic light regions of interest) are determined; at this point there may be one or more white areas in the image.
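The graying, binarization and edge detection of S802 can be sketched as follows; this is a minimal illustration assuming 8-bit BGR input, the preset threshold of 150, and Canny as the edge detector (the patent does not name a specific edge-detection operator):

```python
import cv2
import numpy as np

def preprocess(frame_bgr: np.ndarray, threshold: int = 150):
    """Gray the image, binarize it, and detect edges (sketch of S802)."""
    # Graying: each pixel becomes the average of its B, G and R values.
    gray = np.mean(frame_bgr, axis=2).astype(np.uint8)
    # Binarization: pixels above the preset threshold become 255, others 0.
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    # Edge detection on the binary image; Canny is one common choice here.
    edges = cv2.Canny(binary, 50, 150)
    return binary, edges
```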
S803: identify the traffic light region of interest. The specific method is as follows: the preprocessed image includes a high-brightness, quadrilateral-like white area (as shown in fig. 2), which is identified as a traffic light region of interest. In addition to this method, the preprocessed image may be dilated; the dilation operation makes the edges of the one or more white areas in the preprocessed image closer to quadrilaterals, yielding one or more quadrilateral-like convex regions, which are determined as traffic light regions of interest.
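A minimal sketch of the dilation-based variant of S803, assuming OpenCV 4 morphology; the kernel size and iteration count are illustrative assumptions:

```python
import cv2
import numpy as np

def candidate_regions(binary: np.ndarray):
    """Sketch of S803: dilate, then take bright contours as candidate ROIs."""
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(binary, kernel, iterations=1)
    # Each external contour of a white area is one candidate region.
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```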
S804: judge whether the aspect ratio falls within a certain range; if yes, go to S804a, and if not, go to S804b. The specific method is as follows: as shown in fig. 2, a point on the vehicle in each frame image is selected as the origin (e.g., the center point of the vehicle in the image), the horizontal line through the origin is taken as the X axis and the vertical line through the origin as the Y axis, and a two-dimensional coordinate system is established. The ordinate g2 of the highest point, the ordinate g1 of the lowest point, the abscissa k1 of the leftmost point and the abscissa k2 of the rightmost point of the traffic light region of interest are acquired, the minimum circumscribed rectangle of the region is determined from these four coordinates, and the aspect ratio Z is calculated. In the case that 0.667 < Z < 1.8, the region is determined to be a traffic light region of interest meeting the requirement, and the flow goes to S804a; otherwise it goes to S804b.
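The aspect ratio screening of S804 can be sketched as follows, assuming Z is the width-to-height ratio of the minimum circumscribed rectangle (the patent gives the coordinates g1, g2, k1, k2 but not the exact ratio formula, so this definition is an assumption):

```python
import numpy as np

def aspect_ratio_ok(region_points: np.ndarray,
                    lo: float = 0.667, hi: float = 1.8) -> bool:
    """Sketch of S804: screen a candidate region by bounding-box aspect ratio."""
    xs, ys = region_points[:, 0], region_points[:, 1]
    k1, k2 = xs.min(), xs.max()   # leftmost / rightmost abscissae
    g1, g2 = ys.min(), ys.max()   # lowest / highest ordinates
    width, height = k2 - k1, g2 - g1
    if height == 0:
        return False
    z = width / height            # assumed definition of the aspect ratio Z
    return lo < z < hi
```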
S804a: determine the traffic light region of interest as meeting the requirement, and go to S805.
S804b: the traffic light region of interest does not meet the requirement. End.
S805: judge whether the traffic light region of interest meeting the requirement matches the sample data. The specific method is as follows: the trained first convolutional neural network is used to compare the region with the sample data; a region identified as consistent with any traffic light information in the sample data is taken as a traffic light area, the traffic light information of that area is recorded, and the flow goes to S805a; otherwise it goes to S805b.
S805a: a traffic light area and its traffic light information are identified.
S805b: the traffic light region of interest meeting the requirement does not match.
The scheme of S801 to S805b is applied to the at least two collected images to obtain the images that include traffic light areas and carry traffic light information. To avoid the case that some images that actually carry traffic light information (first images) fail to be recognized in S804 to S805, the sample data is classified into circular and arrow shapes. The trained second convolutional neural network is used to compare the unmatched traffic light regions of interest from S805b with the circular sample data; a region identified as consistent with any traffic light information in the circular sample data is taken as a traffic light area, and its traffic light information is recorded. The trained third convolutional neural network is then used to compare the regions not identified by the second convolutional neural network with the arrow-shaped sample data; a region identified as consistent with any traffic light information in the arrow-shaped sample data is taken as a traffic light area, and its traffic light information is recorded. The first to third convolutional neural networks can be implemented by, but are not limited to, convolutional neural networks used for image classification, such as the Visual Geometry Group network (VGGNet) and the Alex network (AlexNet). These three networks are nevertheless different, since different training samples yield different networks; it will be appreciated that at least some of their weight parameters differ as a result. The weight parameters of the first convolutional neural network are trained on traffic light sample data of all shapes (circular and arrow-shaped); those of the second convolutional neural network are trained on circular traffic light sample data; and those of the third convolutional neural network are trained on arrow-shaped traffic light sample data. To facilitate their distinction, they are regarded here as different convolutional neural networks.
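The three-stage matching can be sketched as follows; the classifier callables are assumptions standing in for the trained first to third convolutional neural networks, each returning a traffic light label on a match and None otherwise:

```python
from typing import Callable, Optional

Label = str  # e.g. "red_circle" or "green_left_arrow" (illustrative labels)

def classify_traffic_light(
    roi: object,
    first_cnn: Callable[[object], Optional[Label]],
    second_cnn: Callable[[object], Optional[Label]],
    third_cnn: Callable[[object], Optional[Label]],
) -> Optional[Label]:
    """Sketch of S805 plus the fallback pass over the first images."""
    # First CNN: trained on sample data of all shapes.
    label = first_cnn(roi)
    if label is not None:
        return label
    # Second CNN: trained on circular sample data only.
    label = second_cnn(roi)
    if label is not None:
        return label
    # Third CNN: trained on arrow-shaped sample data only.
    return third_cnn(roi)
```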
The traffic light information in the at least two frames of images is counted, and a left-turn sequence (denoted L), a straight-ahead sequence (denoted S) and a right-turn sequence (denoted R) are defined according to the direction of the traffic light. According to the color of the traffic light, 0 represents no traffic light, 1 represents a green light, 2 represents a yellow light, and 3 represents a red light. The colors of the lights representing left turns in each frame image are counted to obtain the sequence L, and the colors of the lights representing straight travel and right turns are processed in the same way.
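A minimal sketch of building the sequences L, S and R, assuming each frame's recognized traffic light information is available as a per-direction color mapping (the dict layout and key names are illustrative):

```python
# Color encoding as defined above: 0 no light, 1 green, 2 yellow, 3 red.
COLOR_CODE = {None: 0, "green": 1, "yellow": 2, "red": 3}

def build_sequences(frames_info: list) -> tuple:
    """One element per frame, e.g. {"left": "green", "straight": "red"}."""
    seq_l = [COLOR_CODE[f.get("left")] for f in frames_info]      # sequence L
    seq_s = [COLOR_CODE[f.get("straight")] for f in frames_info]  # sequence S
    seq_r = [COLOR_CODE[f.get("right")] for f in frames_info]     # sequence R
    return seq_l, seq_s, seq_r
```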
In order to accurately judge whether the vehicle runs the red light within the certain time, the traffic light information determined by combining steps S911 to S913 in fig. 9 with the process in fig. 8 is as follows: a left-turn sequence L, a straight-ahead sequence S and a right-turn sequence R. Weighted traffic light information values Ln, Sn and Rn are computed for the target image and the 30 frames of images before and after it, according to the formulas

Ln = Σ_{i=0}^{60} εi · li,  Sn = Σ_{i=0}^{60} εi · si,  Rn = Σ_{i=0}^{60} εi · ri

where Ln, Sn and Rn are the weighted traffic light information values in the left-turn, straight-ahead and right-turn directions respectively; i denotes the i-th frame image, with the target image taken as the 30th frame for reference, so i ranges from 0 to 60; εi is the weighting coefficient corresponding to the i-th frame image, a fixed value set according to the actual requirement of each frame; and li, si and ri are the i-th elements of the left-turn sequence L, the straight-ahead sequence S and the right-turn sequence R. The computed Ln, Sn and Rn are rounded to integers; a result of 0 means there was no traffic light when the vehicle pressed the line (the traffic light was broken or switching), 1 means the traffic light was green when the vehicle pressed the line, 2 means it was yellow, and 3 means it was red.
It should be noted here that, in the case of a circular traffic light, only one of Ln, Sn and Rn in one frame image can take the value 1, 2 or 3, and the others are all 0; in the case of an arrow-shaped traffic light, Ln, Sn and Rn in one frame image may take values different from one another.
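A minimal sketch of the weighted computation, assuming the plain weighted sum above; the uniform coefficients are an illustrative assumption, since the patent leaves each εi to be set according to actual requirements:

```python
def weighted_light_value(seq: list, weights: list) -> int:
    """Sketch of computing Ln/Sn/Rn: weighted sum over 61 frames, then round."""
    assert len(seq) == len(weights) == 61  # target image is frame index 30
    value = sum(w * x for w, x in zip(weights, seq))
    return round(value)  # 0: no light, 1: green, 2: yellow, 3: red

# Example with uniform coefficients (an assumption for illustration):
weights = [1 / 61] * 61
```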
Taking the processing of one frame of image in fig. 6 as an example, the specific steps of identifying the stop line information and the driving state in the frame are shown in fig. 9:
For the specific steps of S901 to S902, refer to S801 to S802 in fig. 8.
S903: identify the stop line region of interest. The specific method is as follows: referring to the stop line interest region in fig. 6, a rectangular region is determined in front of the vehicle head in each frame image according to a preset length (the side in the horizontal direction) and width; for example, the rectangular region is 2 meters long and 1.5 meters wide and is 0 cm away from the vehicle head.
S904: identify whether stop line information exists in the stop line region of interest. The specific method is as follows: based on the preprocessed image, judge whether strip-shaped white areas exist among the high-brightness areas within the stop line region of interest; a longitudinal strip-shaped white area is identified as a longitudinal lane line, a transverse strip-shaped white area is identified as a transverse lane line, and the longitudinal and transverse lane lines are regarded as stop line information.
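The strip-shape test of S904 can be sketched as follows, assuming OpenCV 4; the 3:1 elongation ratio used to separate longitudinal from transverse strips is an illustrative assumption:

```python
import cv2
import numpy as np

def find_lane_lines(binary_roi: np.ndarray):
    """Sketch of S904: classify strip-shaped white areas in the stop line ROI."""
    contours, _ = cv2.findContours(binary_roi, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    longitudinal, transverse = [], []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 3 * w:          # tall and narrow: longitudinal lane line
            longitudinal.append((x, y, w, h))
        elif w > 3 * h:        # wide and flat: transverse lane line
            transverse.append((x, y, w, h))
    return longitudinal, transverse
```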
S905: identify the longitudinal lane lines of each preprocessed frame image and judge whether the vehicle head forms a longitudinal tangent point with the transverse lane line (refer to point Z in fig. 6); if yes, go to S907; if not, go to S906 to continue the judgment.
S906: identify the transverse lane lines of each preprocessed frame image and judge whether the longitudinal lane line has a breakpoint (refer to point H in fig. 6); if yes, go to S907; if not, go to S908.
S907: and determining that the vehicle presses the stop line.
S908: and determining that the vehicle does not press the stop line.
Whether stop line information exists in the at least two frames of preprocessed images is recognized according to steps S901 to S908; an image showing a pressed stop line is marked as 1, and an image without a pressed stop line or without stop line information is marked as 0, yielding a sequence in time order such as … 1111000 …, denoted T.
S909: determine the image in which the vehicle presses the stop line for the first time as the target image. The specific method is as follows: the image characterized by 1 at the boundary between 0 and 1 in the sequence T is determined as the target image.
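A minimal sketch of S909, assuming the sequence T is in chronological order so that the first press is the first mark 1 that follows a 0:

```python
def find_target_index(t_seq: list) -> int:
    """Sketch of S909: index of the first frame marked 1 at a 0-to-1 boundary."""
    for i in range(1, len(t_seq)):
        if t_seq[i] == 1 and t_seq[i - 1] == 0:
            return i  # first frame in which the vehicle presses the stop line
    raise ValueError("no stop-line press found in the sequence")
```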
S910: determine the 30 frames of images before and after the target image. The specific method is as follows: the 30 frames before and the 30 frames after the target image (the value may vary with actual conditions) are extracted from the at least two frames of preprocessed images; these images are temporally adjacent to the target image and are regarded as the associated images.
S911: judge whether the vertical position difference of the traffic light areas in adjacent images is greater than a third threshold. The specific method is as follows: referring to fig. 4, it is assumed that the coordinate system established in fig. 4 is the same as the coordinate system in fig. 2. One or more points are obtained from the traffic light area of each of the at least two associated images (61 frames in total), and the vertical position difference (a positive value) of the traffic light area is determined based on the change of the ordinate (vertical position) of the point or points in the current frame relative to the previous frame. In the case that the vertical position difference is greater than the third threshold (for example, 0.2), the driving state of the vehicle in the current frame is determined as forward, and the flow goes to S911a; otherwise the driving state is stopped, and the flow goes to S911b.
It should be noted that S911 involves two methods for determining the vertical displacement difference in the current frame.
Method one: the vertical displacement difference (a positive value) is determined based on the change of the ordinate (vertical position) of one point in the current frame relative to the previous frame. In the case that the vertical displacement difference is greater than 0.2, the driving state of the vehicle in the current frame is determined as forward. As shown in fig. 4, assume the traffic light areas in adjacent images are green 1 and green 2, where green 1 belongs to the previous frame and green 2 to the current frame. The ordinate of green 1 is Y1 and the ordinate of green 2 is Y2, and (Y2 - Y1) is taken as the vertical displacement difference.
Method two: the vertical position difference (a positive value) is determined based on the changes of the ordinates (vertical positions) of a plurality of points in the current frame relative to the previous frame. Specifically, the ordinates {x11, x12, x13, ..., x1j} of j points in the current frame and their ordinates {x21, x22, x23, ..., x2j} in the previous frame are obtained, and the average vertical position difference d is calculated according to the Euclidean distance formula

d = (1/j) · sqrt( Σ_{k=1}^{j} (x1k − x2k)² )

d is then normalized to obtain the normalized average vertical position difference d1, and in the case that d1 is greater than 0.2, the driving state of the vehicle in the current frame is determined as forward.
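Method two can be sketched as follows; the normalization scheme is an assumption (the patent does not specify one), here dividing by the image height for illustration:

```python
import math

def vertical_forward(curr_ys: list, prev_ys: list,
                     image_height: float, threshold: float = 0.2) -> bool:
    """Sketch of S911 method two: average vertical displacement of j points."""
    j = len(curr_ys)
    d = math.sqrt(sum((c - p) ** 2 for c, p in zip(curr_ys, prev_ys))) / j
    d1 = d / image_height  # normalization scheme assumed for illustration
    return d1 > threshold  # True: the vehicle is moving forward in this frame
```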
S911 a: the vehicle is moving forward.
S911 b: the vehicle is stopped.
S912: judge whether the horizontal position difference of the traffic light areas in adjacent images is greater than a positive value, where the positive value may be taken as 0.2 in meters.
S913: judge whether the horizontal position difference of the traffic light areas in adjacent images is less than a negative value, where the negative value may be taken as -0.2 in meters.
The specific method of S912 to S913 is as follows: referring to fig. 4, it is assumed that the coordinate system established in fig. 4 follows the same principle as in fig. 2. The center point of the identified traffic light area is obtained, and the abscissa of this point characterizes the horizontal position of the traffic light area. Assume the traffic light areas in adjacent images are green 1 and green 2, where green 1 belongs to the previous frame and green 2 to the current frame. The abscissa of green 1 is Q1 and the abscissa of green 2 is Q2, and (Q2 - Q1) is the horizontal position difference. In the case that the horizontal position difference is greater than 0.2, go to S914; in the case that it is less than -0.2, go to S916; otherwise go to S915.
S914: the vehicle turns to the right.
S915: the vehicle is moving straight.
S916: the vehicle turns left.
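S912 to S916 can be sketched as a simple three-way classification on the abscissa change of the traffic light center:

```python
def driving_direction(q_prev: float, q_curr: float,
                      pos: float = 0.2, neg: float = -0.2) -> str:
    """Sketch of S912-S916: classify direction from the horizontal shift."""
    diff = q_curr - q_prev  # abscissa change of the traffic light center
    if diff > pos:
        return "right"     # S914: the vehicle turns right
    if diff < neg:
        return "left"      # S916: the vehicle turns left
    return "straight"      # S915: the vehicle moves straight
```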
In the above scheme, the traffic light information in the at least two frames of images can be obtained according to the method in fig. 8, and the stop line information can be obtained according to the method in fig. 9. The target image is determined from the stop line information of each frame, the 30 frames of images before and after the target image are then determined, and the traffic light information of the target image and of these 30 frames before and after is extracted from the traffic light information of the at least two frames of images, so that the traffic light information within the certain time is determined. The driving state and driving direction of the vehicle within the certain time are determined from the vertical and horizontal positions of the traffic light in the target image and the 30 frames of images before and after it. If the driving direction of the vehicle in the forward state coincides with a direction in which the traffic light information indicates travel is prohibited, it can be determined that the vehicle ran the red light at the traffic light intersection within the certain time.
Specifically: in the case that the vehicle turns left and advances within the certain time, if the traffic light information is characterized as a left-turn red light; in the case that the vehicle moves straight ahead within the certain time, if the traffic light information is characterized as a straight-ahead red light; or, in the case that the vehicle turns right and advances within the certain time, if the traffic light information is characterized as a right-turn red light; it is determined that the vehicle ran the red light at the traffic light intersection within the certain time.
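The final judgment can be sketched by combining the direction from S912 to S916 with the rounded weighted values Ln, Sn and Rn (3 represents red, as defined above):

```python
RED = 3  # rounded weighted value representing a red light

def ran_red_light(direction: str, l_n: int, s_n: int, r_n: int) -> bool:
    """Sketch of the final judgment over the certain time."""
    if direction == "left":
        return l_n == RED      # left-turn red light while turning left
    if direction == "straight":
        return s_n == RED      # straight-ahead red light while going straight
    if direction == "right":
        return r_n == RED      # right-turn red light while turning right
    return False
```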
The method in this scheme realizes automatic recognition of the at least two frames of images recorded by the automobile data recorder within a certain time, accurately judges whether the vehicle runs the red light at the traffic light intersection according to the recognized traffic light information, stop line information and vehicle driving state, and has good application prospects.
The embodiments of the present application further provide an identification apparatus. As shown in fig. 5, the apparatus includes an acquisition unit 51, a preprocessing unit 52, a first identification unit 53, a first determining unit 54, a second identification unit 55, an obtaining unit 56 and a second determining unit 57; wherein,
the acquisition unit 51 is configured to acquire at least two frames of images recorded by the automobile data recorder within a certain time;
the preprocessing unit 52 is configured to preprocess at least two frames of images;
the first identification unit 53 is configured to identify stop line information and traffic light information in each preprocessed frame image;
the first determining unit 54 is configured to determine whether the vehicle is located at a traffic light intersection within the certain time according to the stop line information and the traffic light information;
the second identifying unit 55 is configured to identify a target image from the at least two preprocessed images if the vehicle is identified to be located at the traffic light intersection within the certain time, where the target image is characterized by an image of the vehicle pressing a stop line for the first time in the at least two preprocessed images;
the obtaining unit 56 is configured to obtain at least two frame related images temporally adjacent to the target image from the at least two frame images after the preprocessing;
the second determining unit 57 is configured to determine whether the vehicle has a behavior of running a red light within the certain time according to the at least two frames of associated images.
In the foregoing solution, the second determining unit 57 is further configured to:
determining the driving state of the vehicle within the certain time based on the at least two frames of associated images;
extracting traffic light information in the at least two frames of associated images from the preprocessed traffic light information in the at least two frames of images;
and determining whether the vehicle has the behavior of running the red light within the certain time according to the driving state and the traffic light information in the at least two frames of associated images.
In the foregoing solution, the first identifying unit 53 is further configured to:
identifying a stop line interested area of each preprocessed frame image; identifying whether a longitudinal lane line and/or a transverse lane line exist in the stop line interested area; the longitudinal lane lines and/or the transverse lane lines existing in the identified stop line interested area are regarded as stop line information.
In the foregoing solution, the second identifying unit 55 is further configured to:
recognizing longitudinal lane lines and/or transverse lane lines of the preprocessed frames of images;
and aiming at each image which is identified to comprise the longitudinal lane line and/or the transverse lane line, determining the image which identifies the first break point of the longitudinal lane line and/or the first longitudinal tangent point formed by the head of the vehicle and the transverse lane line as a target image.
In the foregoing solution, the first identifying unit 53 is further configured to:
identifying an interested area of the traffic light from each preprocessed frame image;
screening out the traffic light interested areas meeting the requirements according to the aspect ratio of the traffic light interested areas;
matching sample data with the interesting area of the traffic light meeting the requirements, wherein the sample data are characterized by traffic lights with different colors and different shapes;
obtaining a traffic light area comprising an image of the traffic light interesting area meeting the requirement and traffic light information in the image according to the matching result;
wherein the traffic light information includes traffic light information characterizing different colors and different shapes.
In the above solution, for at least part of first images among the images including the traffic light region of interest meeting the requirement, where a first image is an image whose traffic light information cannot be obtained according to the matching result, the first identification unit 53 is further configured to:
obtaining a traffic light region of interest of a first image, which meets requirements;
classifying the sample data according to the shape of the traffic light, and classifying the shape of the traffic light region of interest of the first image meeting the requirement according to the classified sample data;
identifying the traffic light color and shape in the first image from the shape-classified traffic light region of interest.
In the foregoing solution, the second determining unit 57 is further configured to:
acquiring the vertical position of the traffic light from the at least two preprocessed images, and determining the driving state of the vehicle according to the change of the vertical position of the traffic light in the at least two associated images; the driving state includes at least forward and stop;
and acquiring the horizontal position of the traffic light from the at least two preprocessed images, and determining the driving direction of the vehicle in the target image according to the change of the horizontal position of the traffic light in the at least two associated images.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs at least the steps of the method shown in any one of fig. 1 to 9. The computer-readable storage medium may specifically be a memory.
The embodiments of the present application further provide an identification device. Fig. 10 is a schematic diagram of the hardware structure of an identification device according to an embodiment of the present application. As shown in fig. 10, the device includes: a communication component 10.3 for data transmission, at least one processor 10.1, and a memory 10.2 for storing a computer program capable of running on the processor 10.1. The various components in the device are coupled together by a bus system 10.4. It will be appreciated that the bus system 10.4 is used to enable connection and communication between these components. In addition to the data bus, the bus system 10.4 comprises a power bus, a control bus and a status signal bus. For clarity of illustration, however, the various buses are labeled as bus system 10.4 in fig. 10.
Wherein the processor 10.1, when executing the computer program, performs at least the steps of the method of any of fig. 1 to 9.
It will be appreciated that the memory 10.2 may be a volatile memory or a nonvolatile memory, and may also include both volatile and nonvolatile memories. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 10.2 described in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed in the above embodiments of the present application may be applied to the processor 10.1, or implemented by the processor 10.1. The processor 10.1 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by hardware integrated logic circuits in the processor 10.1 or by instructions in the form of software. The processor 10.1 may be a general purpose processor, a Digital Signal Processor (DSP), or another programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like. The processor 10.1 may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium in the memory 10.2, and the processor 10.1 reads the information in the memory 10.2 and, in combination with its hardware, performs the steps of the method described above.
In an exemplary embodiment, the identification Device may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general purpose processors, controllers, MCUs, microprocessors (microprocessors), or other electronic components for performing the aforementioned identification method.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution that a person skilled in the art can easily conceive within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. An identification method, characterized in that the method comprises:
collecting at least two frames of images recorded by a vehicle event data recorder within a certain time;
preprocessing the at least two frames of images;
identifying stop line information and traffic light information in each preprocessed frame image;
determining whether the vehicle is positioned at a traffic light intersection within the certain time according to the stop line information and the traffic light information;
under the condition that the vehicle is identified to be positioned at the traffic light intersection within the certain time, identifying a target image from the at least two preprocessed images, wherein the target image is characterized in that the vehicle presses a stop line for the first time in the at least two images;
obtaining at least two frame related images adjacent to the target image in time from the at least two frame images after preprocessing;
and determining whether the vehicle has the behavior of running the red light within the certain time or not according to the at least two frames of associated images.
2. The method as claimed in claim 1, wherein said determining whether the vehicle has a behavior of running a red light within the certain time period according to the at least two frames of associated images comprises:
determining the driving state of the vehicle within the certain time based on the at least two frames of associated images;
extracting traffic light information in the at least two frames of associated images from the preprocessed traffic light information in the at least two frames of images;
and determining whether the vehicle has the behavior of running the red light within the certain time according to the driving state and the traffic light information in the at least two frames of associated images.
3. The method according to claim 1, wherein the identifying the stop line information in the preprocessed frames of images comprises:
identifying a stop line interested area of each preprocessed frame image;
identifying whether a longitudinal lane line and/or a transverse lane line exist in the stop line interested area;
the longitudinal lane lines and/or the transverse lane lines existing in the identified stop line interested area are regarded as stop line information.
4. The method according to claim 3, wherein the identifying a target image from the at least two preprocessed frames of images comprises:
recognizing longitudinal lane lines and/or transverse lane lines of the preprocessed frames of images;
and aiming at each frame image comprising the longitudinal lane line and/or the transverse lane line, determining an image for identifying that the longitudinal lane line has a breakpoint for the first time and/or that the head of the vehicle and the transverse lane line form a longitudinal tangent point as a target image.
5. The method of claim 1, wherein the identifying traffic light information in the preprocessed frames of images comprises:
identifying an interested area of the traffic light from each preprocessed frame image;
screening out the traffic light interested areas meeting the requirements according to the aspect ratio of the traffic light interested areas;
matching sample data with the interesting area of the traffic light meeting the requirements, wherein the sample data are characterized by traffic lights with different colors and different shapes;
obtaining a traffic light area comprising an image of the traffic light interesting area meeting the requirement and traffic light information in the image according to the matching result;
wherein the traffic light information includes traffic light information characterizing different colors and different shapes.
6. The method of claim 5, wherein, for at least part of first images among the images including the traffic light interesting region meeting the requirement, a first image being an image whose traffic light information cannot be obtained according to the matching result, the method further comprises:
obtaining a traffic light region of interest of a first image, which meets requirements;
classifying the sample data according to the shape of the traffic light;
classifying the shape of the traffic light interesting region of the first image meeting the requirement according to the classified sample data;
identifying the traffic light color and shape in the first image from the shape-classified traffic light interesting region.
7. The method of claim 2, wherein the determining the driving state of the vehicle within the certain time based on the at least two frames of associated images comprises:
acquiring the vertical position of the traffic light from the at least two preprocessed images, and determining the driving state of the vehicle according to the change of the vertical position of the traffic light in the at least two associated images; the driving state includes at least forward and stop;
and acquiring the horizontal position of the traffic light from the at least two preprocessed images, and determining the driving direction of the vehicle according to the change of the horizontal position of the traffic light in the at least two associated images.
8. An identification device is characterized by comprising a collecting unit, a preprocessing unit, a first identification unit, a first determining unit, a second identification unit, an obtaining unit and a second determining unit; wherein,
the acquisition unit is used for acquiring at least two frames of images within a certain time recorded by the automobile data recorder;
the preprocessing unit is used for preprocessing the at least two frames of images;
the first identification unit is used for identifying stop line information and traffic light information in each preprocessed frame image;
the first determining unit is used for determining whether the vehicle is positioned at a traffic light intersection within the certain time according to the stop line information and the traffic light information;
the second identification unit is used for identifying a target image from the at least two preprocessed images under the condition that the vehicle is identified to be positioned at the traffic light intersection within the certain time, wherein the target image is characterized in that the vehicle presses a stop line for the first time in the at least two preprocessed images;
the acquisition unit is used for acquiring at least two frame related images adjacent to the target image in terms of time from the at least two frame images after preprocessing;
the second determining unit is used for determining whether the vehicle has the behavior of running the red light within the certain time according to the at least two frames of associated images.
9. The apparatus of claim 8, wherein the second determining unit is further configured to:
determining the driving state of the vehicle within the certain time based on the at least two frames of associated images;
extracting traffic light information in the at least two frames of associated images from the preprocessed traffic light information in the at least two frames of images;
and determining whether the vehicle has the behavior of running the red light within the certain time according to the driving state and the traffic light information in the at least two frames of associated images.
10. The apparatus of claim 8, wherein the first identifying unit is further configured to:
identifying a stop line interested area of each preprocessed frame image; identifying whether a longitudinal lane line and/or a transverse lane line exist in the stop line interested area; the longitudinal lane lines and/or the transverse lane lines existing in the identified stop line interested area are regarded as stop line information.
11. The apparatus of claim 10, wherein the second identifying unit is further configured to:
recognizing longitudinal lane lines and/or transverse lane lines of the preprocessed frames of images;
and aiming at each image which is identified to comprise the longitudinal lane line and/or the transverse lane line, determining the image which identifies the first break point of the longitudinal lane line and/or the first longitudinal tangent point formed by the head of the vehicle and the transverse lane line as a target image.
12. The apparatus of claim 8, wherein the first identifying unit is further configured to:
identifying an interested area of the traffic light from each preprocessed frame image;
screening out the traffic light interested areas meeting the requirements according to the aspect ratio of the traffic light interested areas;
matching sample data with the interesting area of the traffic light meeting the requirements, wherein the sample data are characterized by traffic lights with different colors and different shapes;
obtaining a traffic light area comprising an image of the traffic light interesting area meeting the requirement and traffic light information in the image according to the matching result;
wherein the traffic light information includes traffic light information characterizing different colors and different shapes.
13. The apparatus according to claim 12, wherein, for at least part of first images among the images including the traffic light interesting region meeting the requirement, a first image being an image whose traffic light information cannot be obtained according to the matching result, the first identifying unit is further configured to:
obtaining a traffic light region of interest of a first image, which meets requirements;
classifying the sample data according to the shape of the traffic light; classifying the shape of the traffic light interesting region of the first image meeting the requirement according to the classified sample data;
identifying the traffic light color and shape in the first image from the shape-classified traffic light interesting region.
14. The apparatus of claim 9, wherein the second determining unit is further configured to:
acquiring the vertical position of the traffic light from the at least two preprocessed images, and determining the driving state of the vehicle according to the change of the vertical position of the traffic light in the at least two associated images; the driving state includes at least forward and stop;
and acquiring the horizontal position of the traffic light from the at least two preprocessed images, and determining the driving direction of the vehicle according to the change of the horizontal position of the traffic light in the at least two associated images.