CN112164221A - Image data mining method, device and equipment and road side equipment - Google Patents


Info

Publication number
CN112164221A
CN112164221A
Authority
CN
China
Prior art keywords: target, color, difference, image, region
Prior art date
Legal status
Granted
Application number
CN202011009094.0A
Other languages
Chinese (zh)
Other versions
CN112164221B (en)
Inventor
Liu Bo (刘博)
Current Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111344177.XA (published as CN114092717A)
Priority to CN202011009094.0A (published as CN112164221B)
Publication of CN112164221A
Application granted
Publication of CN112164221B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an image data mining method, relating to the fields of intelligent transportation and automatic driving, and in particular to computer vision and image processing. The image data mining method comprises: acquiring an image containing a traffic light; determining, in the image, a first region including a lighthead of a first color, a second region including a lighthead of a second color, and a target region including a lighthead of a target color; acquiring a first difference value representing the difference between the target region and the first region, a second difference value representing the difference between the target region and the second region, and a third difference value representing the difference between the first region and the second region; and determining whether the image is a target image according to the first, second, and third difference values, and storing the target image as the result image data of the image data mining. The disclosure also provides an image data mining apparatus, a device, a storage medium, and a roadside device.

Description

Image data mining method, device and equipment and road side equipment
Technical Field
The present disclosure relates to the field of intelligent transportation or automatic driving, specifically to the field of computer vision or image processing, and more specifically to an image data mining method, apparatus, device, storage medium, and roadside device.
Background
In intelligent transportation, the driving environment is sensed through the vehicle-road cooperation technique, which combines roadside sensors with on-vehicle sensors. A roadside sensor is erected at the roadside and acquires image data within a fixed sensing area. The image data collected by the roadside sensor can be used to construct a training data set for training a neural network model that identifies the colors of traffic lights. However, because yellow-light data accounts for only a small proportion of the training data set, the recognition performance of the trained neural network model suffers, and yellow traffic lights may fail to be identified.
Disclosure of Invention
In view of the above, the present disclosure provides an image data mining method, apparatus, device and storage medium.
A first aspect of the present disclosure provides an image data mining method, including:
acquiring an image containing a traffic light, the traffic light including a plurality of lightheads respectively having a first color, a second color, and a target color;
determining, in the image, a first region including the lighthead of the first color, a second region including the lighthead of the second color, and a target region including the lighthead of the target color, wherein the first region, the second region, and the target region have the same shape and size as each other;
acquiring a first difference value representing the difference between the target region and the first region, a second difference value representing the difference between the target region and the second region, and a third difference value representing the difference between the first region and the second region; and
determining whether the image is a target image according to the first difference value, the second difference value, and the third difference value, and, in a case where the image is determined to be the target image, storing the target image as result image data of the image data mining.
A second aspect of the present disclosure provides an image data mining method, including:
acquiring a consecutively captured first frame image and second frame image containing a traffic light, the traffic light including a plurality of lightheads respectively having a first color, a second color, and a target color;
determining whether the lighthead of the target color is lit according to an inter-frame difference between the first frame image and the second frame image along a first difference direction;
setting the state of the lighthead of the target color to lit, in a case where it is determined that the lighthead of the target color is lit;
determining whether the lighthead of the target color is extinguished according to an inter-frame difference between the first frame image and the second frame image along a second difference direction;
setting the state of the lighthead of the target color to extinguished, in a case where it is determined that the lighthead of the target color is extinguished;
when the lighthead of the target color is in the lit state, determining whether the second frame image is a target image according to the differences between a first region including the lighthead of the first color, a second region including the lighthead of the second color, and a first target region including the lighthead of the target color in the second frame image; and
in a case where it is determined that the second frame image is a target image, storing the target image as result image data of the image data mining.
A third aspect of the present disclosure provides an image data mining apparatus, including:
an image acquisition module configured to acquire an image containing a traffic light, the traffic light including a plurality of lightheads respectively having a first color, a second color, and a target color;
a region determination module configured to determine, in the image, a first region including the lighthead of the first color, a second region including the lighthead of the second color, and a target region including the lighthead of the target color, wherein the first region, the second region, and the target region have the same shape and size as each other;
a difference value acquisition module configured to acquire a first difference value representing the difference between the target region and the first region, a second difference value representing the difference between the target region and the second region, and a third difference value representing the difference between the first region and the second region; and
a judgment and storage module configured to determine whether the image is a target image according to the first difference value, the second difference value, and the third difference value, and to store the target image as result image data of the image data mining in a case where the image is determined to be the target image.
A fourth aspect of the present disclosure provides an image data mining device, including:
a memory storing program instructions; and
a processor configured to execute the program instructions to perform the image data mining method provided according to the first aspect of the present disclosure.
A fifth aspect of the present disclosure provides an image data mining apparatus, including:
an image acquisition module configured to acquire a consecutively captured first frame image and second frame image containing a traffic light, the traffic light including a plurality of lightheads respectively having a first color, a second color, and a target color;
a first lighting judgment module configured to determine whether the lighthead of the target color is lit according to an inter-frame difference between the first frame image and the second frame image along a first difference direction;
a first state setting module configured to set the state of the lighthead of the target color to lit, in a case where it is determined that the lighthead of the target color is lit;
an extinguishing judgment module configured to determine whether the lighthead of the target color is extinguished according to an inter-frame difference between the first frame image and the second frame image along a second difference direction;
a second state setting module configured to set the state of the lighthead of the target color to extinguished, in a case where it is determined that the lighthead of the target color is extinguished;
a second lighting judgment module configured to determine, when the state of the lighthead of the target color is lit, whether the second frame image is a target image according to the differences between a first region including the lighthead of the first color, a second region including the lighthead of the second color, and a first target region including the lighthead of the target color in the second frame image; and
a result data storage module configured to store the target image as result image data of the image data mining in a case where it is determined that the second frame image is the target image.
A sixth aspect of the present disclosure provides an image data mining device, including:
a memory storing program instructions; and
a processor configured to execute the program instructions to perform the image data mining method provided according to the second aspect of the present disclosure.
A seventh aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the image data mining method as described above when executed.
An eighth aspect of the present disclosure provides a roadside apparatus including:
a memory storing program instructions; and
a processor configured to execute the program instructions to perform the image data mining methods provided according to the first and second aspects of the present disclosure.
According to the embodiments of the present disclosure, image data containing traffic lights is screened by acquiring a plurality of difference values representing the differences among the regions that respectively include the lighthead of the first color, the lighthead of the second color, and the lighthead of the target color, and by determining from those difference values whether the lighthead of the target color in the image is lit. Fully automatic data mining of traffic light image data is thereby realized, saving labor cost and the time otherwise needed to label the image data. In addition, by adding a lighting check and an extinguishing check for the target color and the colors other than the target color, the accuracy of the data mining is improved.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates a system architecture for intelligent transportation vehicle-to-road coordination, in accordance with an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow diagram of an image data mining method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates an example of inter-region differences for an image data mining method according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow diagram of an image data mining method according to another embodiment of the present disclosure;
fig. 5 schematically illustrates an example of an inter-frame difference of an image data mining method according to another embodiment of the present disclosure;
FIG. 6 schematically illustrates an implementation of an image data mining method according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a block diagram of an image data mining device according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a block diagram of an image data mining device according to another embodiment of the present disclosure; and
FIG. 9 schematically illustrates a block diagram of an electronic device adapted to perform image data mining according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
When a training data set constructed from image data collected by a roadside sensor is used to train a light-color-recognition neural network model, yellow-light data accounts for only a small proportion of the training data set, which degrades the recognition performance of the trained model. The embodiments of the present disclosure provide an image data mining method for automatically screening, from the image data acquired by the roadside sensor, images in which the yellow light is lit, so as to supplement the training data set and thereby improve the recognition performance of the trained neural network model.
The intelligent traffic vehicle-road cooperative system realizes real-time information interaction between vehicles, between vehicles and roads, between vehicles and people and between vehicles and networks by adopting a communication technology. Fig. 1 schematically illustrates a system architecture for intelligent transportation vehicle-road coordination according to an embodiment of the present disclosure. As shown in fig. 1, the in-vehicle device is provided in the autonomous or assisted driving vehicle 20, and may communicate with the server device 40 to transmit driving information of the vehicle to the server device 40 in real time and receive real-time traffic information from the server device. The roadside apparatus 30 is erected at the roadside and includes a roadside sensing apparatus 301 and a roadside calculating apparatus 302. The roadside sensing device 301 (e.g., a roadside camera) is connected to a roadside computing device 302 (e.g., a roadside computing unit RSCU), and the roadside computing device 302 is connected to the server device 40. The server device 40 may communicate with the autonomous or assisted driving vehicle 20 in various ways. In another system architecture, the roadside sensing device 301 includes a computing function, and the roadside sensing device 301 may be directly connected to the server device 40. The above connections may be wired or wireless. In the present application, the server device 40 may be, for example, a cloud control platform, a vehicle-road cooperative management platform, a central subsystem, an edge computing platform, a cloud computing platform, and the like.
An image of a traffic light including a plurality of lightheads respectively having red (the first color), green (the second color), and yellow (the target color) is captured with the roadside sensing device 301; because the device is fixed, the traffic light and each of its lightheads occupy fixed positions in the image. That is, as shown in fig. 1, the positions of the red light 11, the green light 12, and the yellow light 13 are the same in every captured frame image, so the color of a lighthead can be determined from its position. For example, as shown in fig. 1, the yellow light 13 may be represented by the position, relative to the entire frame image, of the region 130 that includes the yellow light 13 and is enclosed by a bounding box (indicated by a dotted line in fig. 1). By analogy, the red light 11 is represented by the position of the region 110 including the red light 11 relative to the entire frame image, and the green light 12 is represented by the position of the region 120 including the green light 12 relative to the entire frame image.
FIG. 2 schematically shows a flow diagram of an image data mining method 200 according to an embodiment of the present disclosure. As shown in fig. 2, the image data mining method 200 includes the steps of:
in step S210, an image containing a traffic light is acquired.
According to an embodiment, the captured image containing the traffic light is typically a color image. Each pixel in a color image may be represented by three color components, namely a red (R) component, a green (G) component, and a blue (B) component; that is, each pixel corresponds to a triple whose elements take values in [0, 255]. To reduce the amount of computation when determining whether the yellow light (the lighthead of the target color) in an image is lit, processing is performed on a grayscale image corresponding to the color image. In a grayscale image, each pixel has only one sample value, which represents its degree of lightness or intensity. The captured color image may be converted to a grayscale image using a conversion algorithm.
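The disclosure does not specify which grayscale conversion is used; a minimal sketch in Python, assuming the common ITU-R BT.601 luma weights:

```python
def rgb_to_gray(image):
    """Convert an image given as rows of (R, G, B) triples (0-255 each)
    to rows of single gray values, rounded to the nearest integer.
    Uses the BT.601 weighting 0.299 R + 0.587 G + 0.114 B (an assumption;
    any standard conversion would serve the method equally well)."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in image
    ]

# One row containing a pure red, pure green, and pure blue pixel:
gray = rgb_to_gray([[(255, 0, 0), (0, 255, 0), (0, 0, 255)]])
```

Each subsequent step of the method then operates on such single-channel gray values rather than RGB triples.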
In step S220, a first region including the lighthead of the first color, a second region including the lighthead of the second color, and a target region including the lighthead of the target color are determined in the image.
According to an embodiment, the first region may be a region including the red light (the lighthead of the first color), and the second region may be a region including the green light (the lighthead of the second color). The determined first region, second region, and target region have the same shape and size as each other; that is, they are rectangular regions of equal length and width, and each contains the same number of pixels.
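Since the roadside camera is fixed, the three bounding boxes can be known in advance. A hypothetical helper for cutting equally sized regions from a grayscale image represented as a list of pixel rows (the coordinates below are illustrative, not from the disclosure):

```python
def crop_region(gray_image, top, left, height, width):
    """Extract a height x width rectangle whose top-left corner is (top, left).
    Cutting every lighthead's region with the same height and width satisfies
    the requirement that the regions have the same shape and size."""
    return [row[left:left + width] for row in gray_image[top:top + height]]

# A 4x4 toy grayscale image; two 1x2 "regions" on different rows.
image = [
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [90, 100, 110, 120],
    [130, 140, 150, 160],
]
first_region = crop_region(image, 0, 1, 1, 2)   # e.g. around the red light
target_region = crop_region(image, 2, 1, 1, 2)  # e.g. around the yellow light
```

Because both crops use the same height and width, the regions can later be compared pixel by pixel.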
In step S230, a first difference value representing the difference between the target region and the first region, a second difference value representing the difference between the target region and the second region, and a third difference value representing the difference between the first region and the second region are respectively acquired.
According to an embodiment, acquiring the first, second, and third difference values may further include: acquiring, for each pair of regions (the target region and the first region, the target region and the second region, and the first region and the second region), a difference region from the differences between the gray values of the corresponding pixels in the two regions; and performing binarization processing and normalization processing on each acquired difference region to obtain the first difference value, the second difference value, and the third difference value, respectively.
In step S240, it is determined whether the image is the target image according to the first difference value, the second difference value, and the third difference value, and in case that it is determined that the image is the target image, the target image is stored as the result image data of the image data mining.
According to an embodiment, determining whether the image is the target image according to the first, second, and third difference values may further include comparing the absolute values of the first and second difference values with a first threshold and the absolute value of the third difference value with a second threshold, and determining the image to be the target image in a case where the absolute values of the first and second difference values are both greater than the first threshold and the absolute value of the third difference value is less than the second threshold. After a target image is identified, it may be stored in a designated set. Once a certain number of target images have accumulated in the designated set, the set may be added to the training data set to supplement the yellow-light (target-color lighthead) data.
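The threshold comparison just described can be sketched as a small predicate (the argument names and threshold values are illustrative):

```python
def is_target_image(d1, d2, d3, first_threshold, second_threshold):
    """d1, d2: target-vs-first and target-vs-second difference values;
    d3: first-vs-second difference value. A lit target-color lighthead
    makes |d1| and |d2| large while |d3| (red vs. green, both unlit or
    one lit in either frame) stays comparatively small."""
    return (abs(d1) > first_threshold
            and abs(d2) > first_threshold
            and abs(d3) < second_threshold)

# Yellow lit: large differences against red and green, small red-green gap.
hit = is_target_image(170.0, 160.0, 5.0, 100, 20)
# Yellow unlit: target-vs-red difference too small.
miss = is_target_image(15.0, 160.0, 5.0, 100, 20)
```

The two thresholds correspond to the first and second thresholds of the disclosure; their concrete values would be tuned empirically.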
It is easily understood that the image data mining method 200 may be used not only for mining yellow-light data but also for mining red-light or green-light data, as long as the red or green light is taken as the lighthead of the target color and the lightheads of the other two colors are taken as the lightheads of the first color and the second color, respectively.
According to the image data mining method of the embodiments of the present disclosure, the regions respectively including the lighthead of the first color, the lighthead of the second color, and the lighthead of the target color may be analyzed by acquiring the first, second, and third difference values, and whether the lighthead of the target color is lit is determined from the differences between the regions, thereby implementing data mining for a lighthead of a designated color. The method is simple to operate and efficient to execute, and can significantly reduce labor cost and the time needed to label the image data.
Fig. 3 schematically illustrates an example of the inter-region differences of the image data mining method according to an embodiment of the present disclosure. The process of acquiring the first, second, and third difference values and of identifying the target image from them is described in detail below with reference to fig. 3. For simplicity of description, the red light serves as the lighthead of the first color, the green light as the lighthead of the second color, and the yellow light as the lighthead of the target color. As shown in fig. 3, the region 310 is the first region including the red light 31, the region 320 is the second region including the green light 32, and the region 330 is the target region including the yellow light 33; only three pixels are shown in each region for explanation.
According to an embodiment, a difference is first computed between each pair of regions; the direction of the subtraction does not matter. For example, the difference between the target region 330 and the first region 310 may be obtained by subtracting the gray value of the pixel 3111 in the first region 310 from the gray value of the pixel 3311 in the target region 330, subtracting the gray value of the pixel 3112 from that of the pixel 3312, subtracting the gray value of the pixel 3113 from that of the pixel 3313, and so on. Equally, it may be obtained in the opposite direction, by subtracting the gray value of the pixel 3311 in the target region 330 from the gray value of the pixel 3111 in the first region 310, and so on for the remaining pixels. This difference operation is performed for every pixel of the target region 330 and the first region 310, and likewise between the target region 330 and the second region 320 and between the first region 310 and the second region 320. Three difference regions are thereby obtained, in which each pixel is the difference between the gray values of the corresponding pixels of the two source regions.
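The pixel-by-pixel subtraction can be sketched as follows; the three-pixel regions and their gray values are illustrative only (a bright, lit target region against a dark, unlit first region):

```python
def difference_region(region_a, region_b):
    """Subtract region_b from region_a pixel by pixel (gray values).
    Both regions must have the same shape and size."""
    return [
        [a - b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(region_a, region_b)
    ]

target_330 = [[220, 210, 215]]  # lit yellow lighthead: high gray values
first_310 = [[30, 25, 35]]      # unlit red lighthead: low gray values
diff = difference_region(target_330, first_310)
```

Reversing the arguments merely flips the sign of every entry, which is why the direction of the subtraction does not matter once magnitudes are considered in the binarization step.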
Next, the binarization processing of each obtained difference region specifically includes setting a threshold THB for the binarization processing and comparing the value of each pixel in the difference region with THB. The value of a pixel whose value is greater than the threshold THB is set to 255, and the value of a pixel whose value is less than or equal to the threshold THB is set to 0, so that large gray-value differences contribute to the subsequent sum.
Next, the normalization processing of the binarized difference regions specifically includes adding up the values of the pixels in each binarized difference region and taking the resulting sums as the first difference value, the second difference value, and the third difference value, respectively. According to another embodiment, each sum may instead be divided by the number of pixels in the difference region, i.e., the number of pixels contained in each of the first region 310, the second region 320, and the target region 330, and the resulting quotients taken as the first, second, and third difference values. In either case, the first, second, and third difference values characterize the difference between each pair of the first region 310, the second region 320, and the target region 330.
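A sketch of the binarization and per-pixel normalization steps. Two assumptions are made explicit here: the magnitude of the difference is compared with THB (since the subtraction direction is arbitrary, raw differences may be negative), and differences exceeding THB map to 255 so that a lit lighthead yields a larger difference value:

```python
def binarize(diff_region, thb):
    """Set a pixel to 255 when the magnitude of its gray-value difference
    exceeds THB, and to 0 otherwise (assumed orientation; see lead-in)."""
    return [[255 if abs(v) > thb else 0 for v in row] for row in diff_region]

def normalize(binary_region):
    """Average the binarized pixel values over the number of pixels,
    per the second normalization variant described above."""
    total = sum(sum(row) for row in binary_region)
    count = sum(len(row) for row in binary_region)
    return total / count

# Two large differences (one negative) and one small one, THB = 50:
binary = binarize([[190, -185, 10]], 50)
value = normalize(binary)
```

Chaining `difference_region`, `binarize`, and `normalize` over the three region pairs yields the first, second, and third difference values used in step S240.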
Next, the absolute values of the first, second, and third difference values are compared with the set thresholds. The pixels of a lit lighthead have larger gray values than those of an unlit lighthead, so the two difference values computed with the lit lighthead's region will be large, while the remaining difference value, computed without it, will be small. The thresholds may be set separately, for example as a first threshold and a second threshold. According to an embodiment, an image in which the absolute values of the first and second difference values are both greater than the first threshold and the absolute value of the third difference value is less than the second threshold may be determined to be the target image. If the first threshold and the second threshold are the same value, an image in which the absolute values of the first and second difference values both exceed the absolute value of the third difference value may be determined to be the target image. According to another embodiment, the first threshold may be k times the second threshold, in which case an image in which the absolute values of the first and second difference values are both greater than k times the absolute value of the third difference value may be determined to be the target image.
It will be readily understood that the first threshold and the second threshold may be set according to empirical values, the characteristics of the sensor, the tolerated detection error, contrast requirements, or the like, as long as the difference in gray values caused by the lighting of a lighthead can be screened out. The embodiments of the present disclosure do not limit the settings of the first and second thresholds or the conditions under which they are used to screen target images.
The image data mining method 200 according to the embodiment of the present disclosure is easy to implement and efficient to execute, but it lacks a verification safeguard and may therefore produce false detections. For example, if the bounding box around a lighthead is not tight enough and the formed region includes several pixels of the background, such interference cannot be eliminated from a single frame image, and a false detection may result.
FIG. 4 schematically illustrates a flowchart of an image data mining method 400 according to another embodiment of the present disclosure. As shown in FIG. 4, the image data mining method 400 includes the following steps:
in step S410, a first frame image and a second frame image that are continuously captured and contain a traffic light are acquired.
According to an embodiment, the continuously captured images may be consecutive image frames captured by a roadside sensor or consecutive image frames obtained from captured video data. A change in the state of a lighthead can be judged from the change, between two consecutively captured images, of the region containing that lighthead, from which the lighting and extinguishing of the lighthead can be inferred. For example, if the gray values of the pixels in the region where the yellow light is located change significantly between the first frame image and the second frame image, it is determined that the yellow light was turned on or off during that interval.
In step S420, it is determined whether the lighthead of the target color is lit according to an inter-frame difference between the first frame image and the second frame image in a first difference direction.
According to an embodiment, the first difference direction refers to the direction of subtracting the first frame image from the second frame image. According to an embodiment, determining whether the lighthead of the target color is lit according to the inter-frame difference between the first frame image and the second frame image in the first difference direction further includes determining a first target region and a second target region containing the lighthead of the target color in the second frame image and in the first frame image, respectively, wherein the first target region and the second target region have the same shape and size. A first difference target region is acquired from the differences between the gray values of the pixels in the first target region and the corresponding pixels in the second target region, and binarization and normalization are performed on the first difference target region to obtain a target lighting difference value. The target lighting difference value is compared with a first lighting threshold, and if the target lighting difference value is greater than the first lighting threshold, it is determined that the lighthead of the target color is lit.
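The lighting check of step S420 can be sketched as below. This is an illustrative sketch under assumed values: the binarization threshold and first_lighting_threshold are placeholders, and the naming follows the text's convention that the first target region comes from the second (current) frame and the second target region from the first (previous) frame.

```python
import numpy as np

def target_lighting_diff_value(first_target_region, second_target_region,
                               bin_threshold=40):
    # First difference direction: current-frame region minus previous-frame
    # region. Only pixels whose gray value *increased* by more than the
    # binarization threshold are counted, normalized by the pixel count.
    diff = (first_target_region.astype(np.int32)
            - second_target_region.astype(np.int32))
    brightened = diff > bin_threshold
    return brightened.mean()

def lighthead_lit(first_target_region, second_target_region,
                  first_lighting_threshold=0.5):
    value = target_lighting_diff_value(first_target_region, second_target_region)
    return bool(value > first_lighting_threshold)
```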
In step S430, in the case where it is determined that the lighthead of the target color is lit, the state of the lighthead of the target color is set to lit.
According to an embodiment, the state of the lighthead of the target color refers to the static state of that lighthead in one frame image, as distinct from the dynamic event of the lighthead being turned on. For example, if the yellow light is lit in the first frame image and remains lit in the second frame image, the state of the yellow light in both frames is lit, yet no turning-on event of the yellow light occurred between the two frames. The state of the lighthead of the target color being lit means that it is lit in the current frame (e.g., the second frame); it may have been either lit or extinguished in the previous frame (e.g., the first frame).
In step S440, it is determined whether the lighthead of the target color is extinguished based on the inter-frame difference between the first frame image and the second frame image in a second difference direction.
According to an embodiment, the second difference direction refers to the direction of subtracting the second frame image from the first frame image. According to an embodiment, determining whether the lighthead of the target color is extinguished based on the inter-frame difference between the first frame image and the second frame image in the second difference direction further includes determining a first target region and a second target region containing the lighthead of the target color in the second frame image and in the first frame image, respectively, the two regions having the same shape and size. A second difference target region is acquired from the differences between the gray values of the pixels in the second target region and the corresponding pixels in the first target region, and binarization and normalization are performed on the second difference target region to obtain a target extinction difference value. The target extinction difference value is compared with a first extinction threshold, and if the target extinction difference value is greater than the first extinction threshold, it is determined that the lighthead of the target color is extinguished.
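A hedged sketch of the extinction check of step S440, with assumed threshold values; it mirrors the lighting check of step S420 but subtracts in the opposite direction (previous frame minus current frame), so it counts pixels that dimmed.

```python
import numpy as np

def target_extinction_diff_value(first_target_region, second_target_region,
                                 bin_threshold=40):
    # Second difference direction: previous-frame region (second target
    # region) minus current-frame region (first target region).
    diff = (second_target_region.astype(np.int32)
            - first_target_region.astype(np.int32))
    dimmed = diff > bin_threshold
    return dimmed.mean()

def lighthead_extinguished(first_target_region, second_target_region,
                           first_extinction_threshold=0.5):
    value = target_extinction_diff_value(first_target_region,
                                         second_target_region)
    return bool(value > first_extinction_threshold)
```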
In step S450, in the case where it is determined that the lighthead of the target color is extinguished, the state of the lighthead of the target color is set to extinguished.
According to the embodiment, the state of the lighthead of the target color being extinguished means that it is extinguished in the current frame (e.g., the second frame); it may have been either lit or extinguished in the previous frame (e.g., the first frame).
In step S460, in the case where the state of the lighthead of the target color is lit, it is determined whether the second frame image is the target image according to the inter-region differences among a first region containing the lighthead of the first color, a second region containing the lighthead of the second color, and the first target region containing the lighthead of the target color in the second frame image.
According to an embodiment, determining whether the second frame image is the target image according to the inter-region differences among the first region containing the lighthead of the first color, the second region containing the lighthead of the second color, and the first target region containing the lighthead of the target color in the second frame image further includes acquiring difference regions from the differences in the gray values of the pixels between the first target region and the first region, between the first target region and the second region, and between the first region and the second region, respectively, and performing binarization and normalization on the acquired difference regions to obtain a first difference value, a second difference value, and a third difference value. The absolute values of the first, second, and third difference values are compared with a first threshold and a second threshold, and the image is determined to be the target image when the absolute values of the first and second difference values are both greater than the first threshold and the absolute value of the third difference value is less than the second threshold. The operation of determining the target image and the setting of the first and second thresholds in this embodiment are the same as in the method 200 of the foregoing embodiment and are not repeated here.
In step S470, in the case where it is determined that the second frame image is the target image, the target image is stored as the result image data of the image data mining.
According to an embodiment, a target image may be selected and stored in a designated set. If the designated set does not yet contain the required number of target images, the processing returns to step S410 and the next frame image is processed. Once a certain number of target images have been stored in the designated set, the set may be added to the training data set to supplement the yellow-light (target-color lighthead) data.
According to the image data mining method of the embodiment of the present disclosure, determining whether the lighthead of the target color is lit or extinguished based on the inter-frame difference between two consecutively captured images avoids the false detections that can occur when the formed region includes several background pixels outside an unlit lighthead of the target color. Performing both the lamp-color lighting check and the lamp-color extinction check helps determine the interval during which the lighthead of the target color is lit, and the two checks verify each other, so that if the lighting check produces a false detection, the extinction check can expose it early. For example, if the lighting check falsely reports that the yellow light is lit while it is not, and the extinction check then detects several frames later that the green light is extinguished, it can be concluded that the earlier yellow-light detection was false, and any images erroneously stored between the determination that the yellow light was lit and the determination that the green light was extinguished can easily be located.
Fig. 5 schematically illustrates an example of the inter-frame difference of an image data mining method according to another embodiment of the present disclosure. As shown in Fig. 5, 530_1 is the second target region containing the lighthead 53_1 of the target color in the first frame image, and 530_2 is the first target region containing the lighthead 53_2 of the target color in the second frame image. The dashed arrow L1 indicates the first difference direction: the inter-frame difference between the two images in this direction is computed by subtracting the gray value of each pixel in the second target region 530_1 from the gray value of the corresponding pixel in the first target region 530_2. The dashed arrow L2 indicates the second difference direction: the inter-frame difference in this direction is computed by subtracting the gray value of each pixel in the first target region 530_2 from the gray value of the corresponding pixel in the second target region 530_1.
As can be seen from the method 400 of the foregoing embodiment, the operation in step S420 of determining whether the lighthead of the target color is lit according to the inter-frame difference in the first difference direction, and the operation in step S440 of determining whether it is extinguished according to the inter-frame difference in the second difference direction, are performed only for the lighthead of the target color (e.g., the yellow light) and not for the lightheads of the other colors (e.g., the red and green lights). In this case, if the turning-off of the yellow light goes undetected, frequent invalid operations are likely to follow, increasing the amount of computation. If detection for the red and green lights is added, then even when the turning-off of the yellow light is missed, whether the yellow light is extinguished, or whether its earlier turning-on was a false detection, can be judged from a detected turning-on of the red or green light. That is, lighting and extinction checks for the lightheads of the other colors can further improve the reliability of the inspection.
According to an embodiment, the lighting check for the red light (the lighthead of the first color) and the green light (the lighthead of the second color) may include: determining, in the second frame image and the first frame image respectively, a first region and a third region containing the lighthead of the first color, and a second region and a fourth region containing the lighthead of the second color, the first, second, third, and fourth regions having the same shape and size as the first and second target regions; and acquiring difference regions between the first region and the third region and between the second region and the fourth region from the differences of the gray values of the pixels in each pair of regions, and performing binarization and normalization on the acquired difference regions to obtain a first lighting difference value and a second lighting difference value. The first and second lighting difference values are each compared with a second lighting threshold, and the lighthead of the target color is determined to be lit when the target lighting difference value is greater than the first lighting threshold and the first and second lighting difference values are both smaller than the second lighting threshold. The target lighting difference value, the first lighting difference value, and the second lighting difference value are compared with the second lighting threshold, a third lighting threshold, and a fourth lighting threshold, respectively: the lighthead of the first color is determined to be lit when the first lighting difference value is greater than the third lighting threshold and the target and second lighting difference values are both smaller than the second lighting threshold; and the lighthead of the second color is determined to be lit when the second lighting difference value is greater than the fourth lighting threshold and the target and first lighting difference values are both smaller than the second lighting threshold.
According to an embodiment, the extinction check for the red and green lights may include: determining, in the second frame image and the first frame image respectively, a first region and a third region containing the lighthead of the first color, and a second region and a fourth region containing the lighthead of the second color, the first, second, third, and fourth regions having the same shape and size as the first and second target regions; and acquiring difference regions between the third region and the first region and between the fourth region and the second region from the differences of the gray values of the pixels in each pair of regions, and performing binarization and normalization on the acquired difference regions to obtain a first extinction difference value and a second extinction difference value. The first and second extinction difference values are each compared with a second extinction threshold, and the lighthead of the target color is determined to be extinguished when the target extinction difference value is greater than the first extinction threshold and the first and second extinction difference values are both smaller than the second extinction threshold. The target extinction difference value, the first extinction difference value, and the second extinction difference value are compared with the second extinction threshold, a third extinction threshold, and a fourth extinction threshold, respectively: the lighthead of the first color is determined to be extinguished when the first extinction difference value is greater than the third extinction threshold and the target and second extinction difference values are both smaller than the second extinction threshold; and the lighthead of the second color is determined to be extinguished when the second extinction difference value is greater than the fourth extinction threshold and the target and first extinction difference values are both smaller than the second extinction threshold.
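The per-color decision rule in the two checks above can be reduced to the following sketch. The threshold parameter names mirror the text; their numeric values are assumptions, and the same structure serves both the lighting and extinction variants.

```python
def decide_changed_lighthead(target_value, first_value, second_value,
                             first_thr=0.5, second_thr=0.1,
                             third_thr=0.5, fourth_thr=0.5):
    # Exactly one color is declared changed only when its difference value
    # exceeds its own threshold while the other two stay below second_thr.
    if target_value > first_thr and first_value < second_thr \
            and second_value < second_thr:
        return "target"  # e.g. yellow
    if first_value > third_thr and target_value < second_thr \
            and second_value < second_thr:
        return "first"   # e.g. red
    if second_value > fourth_thr and target_value < second_thr \
            and first_value < second_thr:
        return "second"  # e.g. green
    return None          # no unambiguous change detected
```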
According to the above-described embodiment, it is possible to further judge whether the lighthead of the first color and the lighthead of the second color are lit or extinguished. According to an embodiment, the method 400 further includes setting the state of the lighthead of the target color to extinguished if it is determined that the lighthead of the first color or the second color is lit. According to an embodiment, the method 400 further includes, if it is determined that no lighthead is lit, querying the state of the lighthead of the target color and acting on the query result. If the state of the lighthead of the target color is lit, the operation of step S460 of the method 400 may be continued. If the state is extinguished, the processing of the next frame image may be started directly, or, if the lamp-color extinction check has not yet been performed, started after the extinction check is performed.
In addition, for a false detection, if the corresponding image has already been stored, the stored image may be deleted, or it may be retained as a normal data image.
The lighting thresholds and extinction thresholds involved in the embodiments of the present disclosure can be obtained as follows: a continuously captured sequence of frames containing a traffic light is acquired. For any two frames in the sequence, the lighting thresholds used to determine whether the lightheads of the first color, the second color, and the target color are lit are obtained by computing the inter-frame difference of the two frames in the first difference direction, and the extinction thresholds used to determine whether those lightheads are extinguished are obtained by computing the inter-frame difference of the two frames in the second difference direction.
Fig. 6 schematically shows an implementation of the image data mining method according to an embodiment of the present disclosure. As shown in Fig. 6, the image data mining method starts, and in step S601 initialization is performed to acquire the thresholds required during execution. According to an embodiment, the initialization may include acquiring the specific position of each lighthead of the traffic light in the image, for example the coordinates of the upper-left and lower-right corner points of the region containing the lighthead, from which the number n_pixel of pixels in the region can be obtained. In the first frame image, the initial state of each lighthead is unknown, and the historical lit lamp color is initialized to unknown. Next, the lighting and extinction thresholds are acquired by a threshold initialization process. The threshold initialization process may include setting initial values: first, the number of frames init_waiting_frame_number required by the threshold initialization process is set. According to an embodiment, init_waiting_frame_number may be set to cover one or more complete traffic-light periods, or to a fixed duration spanning several periods, for example 10 minutes; the embodiments of the present disclosure are not limited in this respect. Then the red lighting threshold red_max_diff_sum1, the green lighting threshold green_max_diff_sum1, and the yellow lighting threshold yellow_max_diff_sum1 are set to 0, and the red extinction threshold red_max_diff_sum2, the green extinction threshold green_max_diff_sum2, and the yellow extinction threshold yellow_max_diff_sum2 are set to 0. Finally, the Flag used for the secondary check of the yellow light is set to 0.
The threshold initialization process further includes: acquiring a first frame image; and, for each determined region containing a lighthead, performing the inter-frame difference between the first frame image and the second frame image in the first difference direction and applying binarization and normalization to the resulting difference images to obtain a first, second, and third difference threshold for the red, green, and yellow lights, respectively. The maximum of these three difference thresholds is taken, recorded as max_diff_sum_current1, and the color of the lighthead corresponding to it is recorded according to the positions of the three lightheads; max_diff_sum_current1 is then assigned to the lighting threshold of the lighthead of that color. Next, for each determined region containing a lighthead, the inter-frame difference between the first frame image and the second frame image is performed in the second difference direction, and binarization and normalization are applied to the resulting difference image to obtain a first, second, and third difference threshold for the red, green, and yellow lights, respectively. The maximum of these three is taken, recorded as max_diff_sum_current2, and the corresponding lighthead color is recorded; max_diff_sum_current2 is then assigned to the extinction threshold of the lighthead of that color.
The above threshold initialization process is repeated over every pair of frames in the multi-frame sequence until the init_waiting_frame_number-th frame image has been processed, yielding the lighting thresholds and extinction thresholds of the red, green, and yellow lights.
Note that in the above threshold initialization process, the maximum of the first, second, and third difference thresholds is selected as max_diff_sum_current1 or max_diff_sum_current2. Depending on the embodiment, however, other values may be selected, such as the minimum of the three. The embodiments of the present disclosure do not limit the selection of the difference threshold, as long as the desired screening result can be achieved. For example, the difference thresholds computed from every pair of frames may first be stored, after which a median, or a value satisfying some other condition, may be selected from them. In this embodiment, selecting the maximum as max_diff_sum_current1 or max_diff_sum_current2 makes the subsequent lamp-color lighting and extinction checks that use the lighting or extinction thresholds more reliable.
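The threshold-initialization loop can be sketched as follows. This simplified sketch keeps a per-color running maximum rather than recording which color achieved the overall maximum for each frame pair, as the text describes; the region coordinates and the binarization threshold are hypothetical.

```python
import numpy as np

def init_thresholds(frames, regions, bin_threshold=40):
    # frames: list of grayscale images; regions: {color: (y0, y1, x0, x1)}.
    lighting = {color: 0.0 for color in regions}    # cf. red_max_diff_sum1, ...
    extinction = {color: 0.0 for color in regions}  # cf. red_max_diff_sum2, ...
    for prev, cur in zip(frames, frames[1:]):
        for color, (y0, y1, x0, x1) in regions.items():
            a = prev[y0:y1, x0:x1].astype(np.int32)
            b = cur[y0:y1, x0:x1].astype(np.int32)
            lit_value = ((b - a) > bin_threshold).mean()  # first direction
            off_value = ((a - b) > bin_threshold).mean()  # second direction
            lighting[color] = max(lighting[color], lit_value)
            extinction[color] = max(extinction[color], off_value)
    return lighting, extinction
```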
Next, after the initialization ends, a lamp-color lighting check is performed in step S602. In step S603, it is determined from the result of the lighting check whether any lighthead has been lit. If so, it is further determined in step S604 whether the lit lamp is the yellow light. If it is, the Flag indicating the yellow-light state is set to 1 in step S605. Next, in step S606, a secondary check is performed on the current frame (i.e., the second of the two consecutively captured frames) to confirm again that the state of the yellow light is lit. In step S607, it is determined whether the secondary check passes. If it passes, indicating that the yellow light in the current frame (i.e., the second frame) is lit, the current frame is saved in step S608; otherwise the current frame is not saved. To further improve the reliability of the lamp-color check, the process then proceeds to step S609 to perform the lamp-color extinction check, regardless of whether the current frame was saved. Next, in step S610, it is determined from the result of the extinction check whether the yellow light has been extinguished. If so, the check results are unreliable, since one or both of the lighting check and the extinction check may have produced a false detection; the Flag is therefore set to 0 in step S611, and the processing of the next frame begins in step S612.
If the determination in step S610 is no, the check result can be considered reliable; the Flag need not be set, and the process proceeds directly to step S612 to process the next frame. Further, if the determination in step S604 is no, the red or green light has been lit, so the Flag is set to 0 in step S613 and the process proceeds to step S609 to perform the lamp-color extinction check. On the other hand, if the determination in step S603 is no, all lightheads remain in their previous states, and the value of the Flag indicating the yellow-light state is examined in step S614. If the Flag is 1, indicating that the yellow light remains lit, the process proceeds to step S606 and the secondary check is performed on the current frame. If the Flag is 0, indicating that the yellow light remains extinguished, the process proceeds to step S609, where the lamp-color extinction check is performed.
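The control flow of Fig. 6 can be condensed into a per-frame transition function. This is an illustrative simplification of the flowchart: the event names, the boolean stubs for the secondary check (S606/S607) and the extinction check (S609/S610), and the returned save decision are all assumptions.

```python
def process_frame(flag, lit_color, yellow_extinguished, secondary_check_ok):
    # flag: 1 if the yellow light is believed lit, else 0 (S605/S611/S613).
    # Returns (new_flag, save_current_frame) for this frame (S608, S612).
    save = False
    if lit_color == "yellow":                    # S603 yes, S604 yes
        flag = 1                                 # S605
        save = secondary_check_ok                # S606-S608
    elif lit_color in ("red", "green"):          # S604 no
        flag = 0                                 # S613
    elif flag == 1:                              # S603 no, S614: yellow held lit
        save = secondary_check_ok                # S606-S608
    if yellow_extinguished:                      # S610 yes: result unreliable
        flag = 0                                 # S611
    return flag, save
```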
According to the embodiment of the present disclosure, while the yellow-light state is detected through the lamp-color lighting check, means such as the secondary check and the lamp-color extinction check prevent the increase in computation that false detections would otherwise cause, achieving fast and accurate automatic extraction of image data.
Fig. 7 schematically shows a block diagram of an image data mining apparatus 700 according to an embodiment of the present disclosure. As shown in fig. 7, the image data mining apparatus 700 includes an image acquisition module 710, a region determination module 720, a difference value acquisition module 730, and a judgment and storage module 740.
According to an embodiment, the image acquisition module 710 is configured to acquire an image containing a traffic light. The region determination module 720 is configured to determine a first region including a first color of lighthead, a second region including a second color of lighthead, and a target region including a target color of lighthead in the image. The differential value acquiring module 730 is configured to acquire a first differential value representing a difference between the target area and the first area, a second differential value representing a difference between the target area and the second area, and a third differential value representing a difference between the first area and the second area, respectively. The judging and storing module 740 is configured to determine whether the image is the target image according to the first difference value, the second difference value, and the third difference value, and in case that it is determined that the image is the target image, store the target image as the result image data of the image data mining.
The specific operations of the functional modules may be obtained by referring to the operation steps of the image data mining method 200 in the foregoing embodiment, and are not described herein again.
Fig. 8 schematically shows a block diagram of an image data mining apparatus according to another embodiment of the present disclosure. As shown in Fig. 8, the image data mining apparatus 800 includes an image acquisition module 810, a first lighting judgment module 820, a first state setting module 830, an extinction judgment module 840, a second state setting module 850, a second lighting judgment module 860, and a result data storage module 870.
According to an embodiment, the image acquisition module 810 is configured to acquire a continuously captured first frame image and second frame image containing a traffic light. The first lighting judgment module 820 is configured to determine whether the lighthead of the target color is lit according to the inter-frame difference between the first frame image and the second frame image in the first difference direction. The first state setting module 830 is configured to set the state of the lighthead of the target color to lit if it is determined that the lighthead of the target color is lit. The extinction judgment module 840 is configured to determine whether the lighthead of the target color is extinguished according to the inter-frame difference between the first frame image and the second frame image in the second difference direction. The second state setting module 850 is configured to set the state of the lighthead of the target color to extinguished if it is determined that the lighthead of the target color is extinguished. The second lighting judgment module 860 is configured to determine, in the case where the state of the lighthead of the target color is lit, whether the second frame image is the target image according to the inter-region differences among the first region containing the lighthead of the first color, the second region containing the lighthead of the second color, and the first target region containing the lighthead of the target color in the second frame image. The result data storage module 870 is configured to store the target image as result image data of the image data mining in the case where it is determined that the second frame image is the target image.
The specific operations of the above functional modules may be obtained by referring to the operation steps of the image data mining method 400 in the foregoing embodiment, and are not described herein again.
Fig. 9 schematically illustrates a block diagram of an electronic device 900 adapted to perform image data mining according to an embodiment of the present disclosure. The image data mining method according to the embodiment of the present disclosure may be performed using the electronic device shown in fig. 9.
As shown in fig. 9, an electronic device 900 according to an embodiment of the disclosure includes a processor 901 and a memory 902. The processor 901 may perform various appropriate actions and processes in accordance with programs or instructions stored in the memory 902. Processor 901 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 901 may also include on-board memory for caching purposes. The processor 901 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
The processor 901 and the memory 902 are connected to each other via a bus. The processor 901 performs various operations of the method flows according to embodiments of the present disclosure by executing programs in the memory 902. It is noted that the program may also be stored in one or more storage devices other than the memory 902. The processor 901 may also perform various operations of the method flows according to the embodiments of the present disclosure by executing programs stored in the one or more storage devices.
The electronic device 900 may further include an input device 903 and an output device 904, the input device 903 and the output device 904 also being connected to the bus, according to embodiments of the disclosure. Further, the electronic device 900 may also include one or more of the following components: an input section including a keyboard, a mouse, and the like; an output section including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section including a hard disk and the like; and a communication section including a network interface card such as a LAN card, a modem, or the like.
The image data mining method according to the embodiments of the present disclosure may be executed by a standalone image data mining device or by a roadside device. The standalone image data mining device may obtain images offline from a storage device or online from a capture device. The roadside device is, for example, a roadside sensing device with computing capability, a roadside computing device connected to a roadside sensing device, a server device connected to the roadside computing device, or a server device directly connected to the roadside sensing device. The server device in this application is, for example, a cloud control platform, a vehicle-road cooperative management platform, a central subsystem, an edge computing platform, or a cloud computing platform.
According to embodiments of the present disclosure, the method flows described above may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable storage medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. The computer program, when executed by the processor 901, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, and the like described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer readable storage medium carries one or more programs which, when executed by the processor 901, implement the methods according to the embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example and without limitation: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or couplings of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or couplings are not expressly recited in the present disclosure. In particular, various combinations and/or couplings of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or couplings are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (19)

1. An image data mining method, comprising:
acquiring an image containing a traffic light, the traffic light including a plurality of lightheads having a first color, a second color, and a target color, respectively;
determining, in the image, a first region including the lighthead of the first color, a second region including the lighthead of the second color, and a target region including the lighthead of the target color, wherein the first region, the second region, and the target region have the same shape and size as each other;
obtaining a first differential value representing a difference between the target area and the first area, a second differential value representing a difference between the target area and the second area, and a third differential value representing a difference between the first area and the second area, respectively; and
determining whether the image is a target image according to the first differential value, the second differential value, and the third differential value, and storing, if it is determined that the image is a target image, the target image as result image data of the image data mining.
2. The method of claim 1, wherein obtaining a first differential value representing a difference between the target region and the first region, a second differential value representing a difference between the target region and the second region, and a third differential value representing a difference between the first region and the second region, respectively, comprises:
obtaining differential regions between the target region and the first region, between the target region and the second region, and between the first region and the second region according to the differences between the gray values of corresponding pixels in each pair of regions, and performing binarization processing and normalization processing on the obtained differential regions to obtain the first differential value, the second differential value, and the third differential value.
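The difference-binarize-normalize step of this claim can be sketched as follows. The binarization threshold of 30 gray levels is an illustrative assumption; the claim does not fix a value.

```python
import numpy as np

def differential_value(region_a, region_b, bin_thresh=30):
    # Pixel-wise gray-value difference between the two regions
    diff = region_a.astype(np.int32) - region_b.astype(np.int32)
    # Binarization: keep only pixels whose difference exceeds bin_thresh
    binary = diff > bin_thresh
    # Normalization: fraction of changed pixels, so the value lies in [0, 1]
    return binary.mean()

lit  = np.full((10, 10), 220, dtype=np.uint8)  # bright (lit) lighthead region
dark = np.full((10, 10), 20, dtype=np.uint8)   # dark (unlit) lighthead region
print(differential_value(lit, dark))   # 1.0: every pixel is brighter
print(differential_value(dark, lit))   # 0.0: all differences are negative
```

Because the subtraction is ordered, the value is direction-sensitive: a lit-minus-dark comparison scores high while the reverse scores zero, which is what the later claims exploit as the "first" and "second" difference directions.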
3. The method of claim 1, wherein determining whether the image is a target image according to the first, second, and third differential values comprises:
comparing the absolute value of the first differential value, the absolute value of the second differential value, and the absolute value of the third differential value with a first threshold and a second threshold, respectively, and determining the image as a target image if the absolute value of the first differential value and the absolute value of the second differential value are both greater than the first threshold and the absolute value of the third differential value is less than the second threshold.
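The decision rule of this claim admits a direct sketch; the two threshold values below are illustrative assumptions, not values from the patent.

```python
def is_target_image(d1, d2, d3, first_threshold=0.5, second_threshold=0.2):
    """The image is a target image when the lit target lighthead differs
    strongly from both of the other lightheads (|d1| and |d2| above the
    first threshold) while the two unlit lightheads look alike to each
    other (|d3| below the second threshold)."""
    return (abs(d1) > first_threshold
            and abs(d2) > first_threshold
            and abs(d3) < second_threshold)

print(is_target_image(0.9, 0.8, 0.05))  # True: target lit, other two dark and alike
print(is_target_image(0.9, 0.1, 0.05))  # False: second differential too small
```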
4. An image data mining method, comprising:
acquiring a continuously captured first frame image and second frame image containing a traffic light, wherein the traffic light includes a plurality of lightheads having a first color, a second color, and a target color, respectively;
determining whether the lighthead of the target color is lit according to an inter-frame difference between the first frame image and the second frame image along a first difference direction;
setting the state of the lighthead of the target color to lit if it is determined that the lighthead of the target color is lit;
determining whether the lighthead of the target color is extinguished according to an inter-frame difference between the first frame image and the second frame image along a second difference direction;
setting the state of the lighthead of the target color to extinguished if it is determined that the lighthead of the target color is extinguished;
determining, while the lighthead of the target color is in the lit state, whether the second frame image is a target image according to inter-region differences among a first region including the lighthead of the first color, a second region including the lighthead of the second color, and a first target region including the lighthead of the target color in the second frame image; and
storing, if it is determined that the second frame image is a target image, the target image as result image data of the image data mining.
5. The method of claim 4, further comprising:
setting the state of the lighthead of the target color to extinguished if it is determined that the lighthead of the first color or the lighthead of the second color is lit.
6. The method of claim 4, further comprising:
querying the state of the lighthead of the target color if it is determined that no lighthead is lit.
7. The method of claim 4, wherein determining whether the lighthead of the target color is lit according to an inter-frame difference between the first frame image and the second frame image along the first difference direction comprises:
determining a first target region and a second target region including the lighthead of the target color in the second frame image and the first frame image, respectively, the first target region and the second target region having the same shape and size as each other;
obtaining a first differential target region according to the differences between the gray values of pixels in the first target region and corresponding pixels in the second target region, and performing binarization processing and normalization processing on the first differential target region to obtain a target lighting differential value; and
comparing the target lighting differential value with a first lighting threshold, and determining that the lighthead of the target color is lit if the target lighting differential value is greater than the first lighting threshold.
8. The method of claim 7, wherein determining whether the lighthead of the target color is lit according to an inter-frame difference between the first frame image and the second frame image along the first difference direction further comprises:
determining a first region and a third region including the lighthead of the first color, and a second region and a fourth region including the lighthead of the second color, in the second frame image and the first frame image, respectively, the first region, the second region, the third region, and the fourth region having the same shape and size as the first target region and the second target region;
obtaining differential regions between the first region and the third region and between the second region and the fourth region according to the differences between the gray values of corresponding pixels in each pair of regions, and performing binarization processing and normalization processing on the obtained differential regions to obtain a first lighting differential value and a second lighting differential value; and
comparing the first lighting differential value and the second lighting differential value with a second lighting threshold, respectively, and determining that the lighthead of the target color is lit if the target lighting differential value is greater than the first lighting threshold and both the first lighting differential value and the second lighting differential value are less than the second lighting threshold.
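Claims 7 and 8 together can be read as the following combined check; the function name and the threshold values are illustrative assumptions. The point of claim 8 is the guard on the other two colors: a global brightness change (e.g. sunlight) raises all three differentials, so it is not mistaken for a lighting event.

```python
def target_lit(target_diff, first_diff, second_diff,
               first_lighting_threshold=0.5, second_lighting_threshold=0.2):
    """The target lighthead is judged lit when its own inter-frame
    differential is large while the differentials of the other two
    colors stay small."""
    return (target_diff > first_lighting_threshold
            and first_diff < second_lighting_threshold
            and second_diff < second_lighting_threshold)

print(target_lit(0.8, 0.05, 0.02))  # True: only the target region changed
print(target_lit(0.8, 0.6, 0.7))    # False: the whole image got brighter
```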
9. The method of claim 4, wherein determining whether the second frame image is a target image according to inter-region differences among a first region including the lighthead of the first color, a second region including the lighthead of the second color, and a first target region including the lighthead of the target color in the second frame image comprises:
obtaining differential regions between the first target region and the first region, between the first target region and the second region, and between the first region and the second region according to the differences between the gray values of corresponding pixels in each pair of regions, and performing binarization processing and normalization processing on the obtained differential regions to obtain a first differential value, a second differential value, and a third differential value; and
comparing the absolute value of the first differential value, the absolute value of the second differential value, and the absolute value of the third differential value with a first threshold and a second threshold, respectively, and determining the second frame image as a target image if the absolute value of the first differential value and the absolute value of the second differential value are both greater than the first threshold and the absolute value of the third differential value is less than the second threshold.
10. The method of claim 4, wherein determining whether the lighthead of the target color is extinguished according to an inter-frame difference between the first frame image and the second frame image along the second difference direction comprises:
determining a first target region and a second target region including the lighthead of the target color in the second frame image and the first frame image, respectively, the first target region and the second target region having the same shape and size as each other;
obtaining a second differential target region according to the differences between the gray values of pixels in the second target region and corresponding pixels in the first target region, and performing binarization processing and normalization processing on the second differential target region to obtain a target extinguishment differential value; and
comparing the target extinguishment differential value with a first extinguishment threshold, and determining that the lighthead of the target color is extinguished if the target extinguishment differential value is greater than the first extinguishment threshold.
11. The method of claim 10, wherein determining whether the lighthead of the target color is extinguished according to an inter-frame difference between the first frame image and the second frame image along the second difference direction further comprises:
determining a first region and a third region including the lighthead of the first color, and a second region and a fourth region including the lighthead of the second color, in the second frame image and the first frame image, respectively, the first region, the second region, the third region, and the fourth region having the same shape and size as the first target region and the second target region;
obtaining differential regions between the third region and the first region and between the fourth region and the second region according to the differences between the gray values of corresponding pixels in each pair of regions, and performing binarization processing and normalization processing on the obtained differential regions to obtain a first extinguishment differential value and a second extinguishment differential value; and
comparing the first extinguishment differential value and the second extinguishment differential value with a second extinguishment threshold, respectively, and determining that the lighthead of the target color is extinguished if the target extinguishment differential value is greater than the first extinguishment threshold and both the first extinguishment differential value and the second extinguishment differential value are less than the second extinguishment threshold.
12. The method of claim 4, further comprising, before determining whether the lighthead of the target color is lit according to an inter-frame difference between the first frame image and the second frame image along the first difference direction:
acquiring a plurality of continuously captured frame images containing a traffic light, the traffic light including a plurality of lightheads having a first color, a second color, and a target color, respectively;
obtaining a lighting threshold for determining whether the lightheads of the first color, the second color, and the target color are lit, by calculating an inter-frame difference along the first difference direction between any two of the plurality of frame images; and
obtaining an extinguishing threshold for determining whether the lightheads of the first color, the second color, and the target color are extinguished, by calculating an inter-frame difference along the second difference direction between any two of the plurality of frame images.
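One way to realize this calibration step is sketched below. The pairing scheme (all ordered pairs) and the midpoint statistic are assumptions, since the claim does not fix how the thresholds are derived from the pairwise inter-frame differences; with frames that include both lit and unlit states, the differential values cluster near 0 and near 1, and the midpoint separates them.

```python
import numpy as np
from itertools import permutations

def calibrate_thresholds(regions, bin_thresh=30):
    """Compute the directional differential value for every ordered pair
    of frame regions, then place each threshold midway between the
    smallest and largest value observed in that difference direction."""
    on_vals, off_vals = [], []
    for a, b in permutations(regions, 2):
        diff = b.astype(np.int32) - a.astype(np.int32)
        on_vals.append((diff > bin_thresh).mean())    # first difference direction
        off_vals.append((-diff > bin_thresh).mean())  # second difference direction
    lighting_threshold = (max(on_vals) + min(on_vals)) / 2
    extinguishing_threshold = (max(off_vals) + min(off_vals)) / 2
    return lighting_threshold, extinguishing_threshold

dark = np.zeros((8, 8), np.uint8)
lit  = np.full((8, 8), 200, np.uint8)
lighting_th, extinguishing_th = calibrate_thresholds([dark, dark, lit, lit])
print(lighting_th, extinguishing_th)   # 0.5 0.5
```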
13. An image data mining apparatus comprising:
an image acquisition module configured to acquire an image containing a traffic light, the traffic light including a plurality of lightheads having a first color, a second color, and a target color, respectively;
a region determination module configured to determine, in the image, a first region including the lighthead of the first color, a second region including the lighthead of the second color, and a target region including the lighthead of the target color, wherein the first region, the second region, and the target region have the same shape and size as each other;
a differential value obtaining module configured to obtain a first differential value representing a difference between the target region and the first region, a second differential value representing a difference between the target region and the second region, and a third differential value representing a difference between the first region and the second region, respectively; and
a judgment and storage module configured to determine whether the image is a target image according to the first differential value, the second differential value, and the third differential value, and to store, if it is determined that the image is a target image, the target image as result image data of the image data mining.
14. An image data mining device comprising:
a memory storing program instructions; and
a processor configured to execute the program instructions to perform the image data mining method of any of claims 1 to 3.
15. An image data mining apparatus comprising:
an image acquisition module configured to acquire a continuously captured first frame image and second frame image containing a traffic light, the traffic light including a plurality of lightheads having a first color, a second color, and a target color, respectively;
a first lighting judgment module configured to determine whether the lighthead of the target color is lit according to an inter-frame difference between the first frame image and the second frame image along a first difference direction;
a first state setting module configured to set the state of the lighthead of the target color to lit if it is determined that the lighthead of the target color is lit;
an extinguishment judgment module configured to determine whether the lighthead of the target color is extinguished according to an inter-frame difference between the first frame image and the second frame image along a second difference direction;
a second state setting module configured to set the state of the lighthead of the target color to extinguished if it is determined that the lighthead of the target color is extinguished;
a second lighting judgment module configured to determine, while the state of the lighthead of the target color is lit, whether the second frame image is a target image according to inter-region differences among a first region including the lighthead of the first color, a second region including the lighthead of the second color, and a first target region including the lighthead of the target color in the second frame image; and
a result data storage module configured to store the target image as result image data of image data mining in a case where it is determined that the second frame image is the target image.
16. An image data mining device comprising:
a memory storing program instructions; and
a processor configured to execute the program instructions to perform the image data mining method of any of claims 4 to 12.
17. A computer-readable storage medium storing computer-executable instructions for implementing the image data mining method of any one of claims 1 to 3 when executed.
18. A computer-readable storage medium storing computer-executable instructions for implementing the image data mining method of any one of claims 4 to 12 when executed.
19. A roadside apparatus comprising:
a memory storing program instructions; and
a processor configured to execute the program instructions to perform the image data mining method of any of claims 1 to 12.
CN202011009094.0A 2020-09-23 2020-09-23 Image data mining method, device and equipment and road side equipment Active CN112164221B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111344177.XA CN114092717A (en) 2020-09-23 2020-09-23 Image data mining method, device, equipment, road side equipment and edge computing platform
CN202011009094.0A CN112164221B (en) 2020-09-23 2020-09-23 Image data mining method, device and equipment and road side equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011009094.0A CN112164221B (en) 2020-09-23 2020-09-23 Image data mining method, device and equipment and road side equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111344177.XA Division CN114092717A (en) 2020-09-23 2020-09-23 Image data mining method, device, equipment, road side equipment and edge computing platform

Publications (2)

Publication Number Publication Date
CN112164221A true CN112164221A (en) 2021-01-01
CN112164221B CN112164221B (en) 2022-01-25

Family

ID=73863385

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011009094.0A Active CN112164221B (en) 2020-09-23 2020-09-23 Image data mining method, device and equipment and road side equipment
CN202111344177.XA Pending CN114092717A (en) 2020-09-23 2020-09-23 Image data mining method, device, equipment, road side equipment and edge computing platform

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111344177.XA Pending CN114092717A (en) 2020-09-23 2020-09-23 Image data mining method, device, equipment, road side equipment and edge computing platform

Country Status (1)

Country Link
CN (2) CN112164221B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033464A (en) * 2021-04-10 2021-06-25 阿波罗智联(北京)科技有限公司 Signal lamp detection method, device, equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104574960A (en) * 2014-12-25 2015-04-29 宁波中国科学院信息技术应用研究院 Traffic light recognition method
CN104598912A (en) * 2015-01-23 2015-05-06 湖南科技大学 Traffic light detection and recognition method based on CPU and GPU cooperative computing
US20160098924A1 (en) * 2013-10-31 2016-04-07 Bayerische Motoren Werke Aktiengesellschaft Systems and Methods for Estimating Traffic Signal Information
CN108736972A (en) * 2018-06-07 2018-11-02 华南理工大学 LED vision-based detections based on ITS-VLC and tracking and its system

Non-Patent Citations (1)

Title
叶茂胜 (Ye Maosheng) et al.: "Traffic Light Recognition Based on Color Gamut Difference and Gamma Correction" (基于色域差分与伽马校正的交通灯识别), 《软件导刊》 (Software Guide) *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN113033464A (en) * 2021-04-10 2021-06-25 阿波罗智联(北京)科技有限公司 Signal lamp detection method, device, equipment and storage medium
CN113033464B (en) * 2021-04-10 2023-11-21 阿波罗智联(北京)科技有限公司 Signal lamp detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114092717A (en) 2022-02-25
CN112164221B (en) 2022-01-25

Similar Documents

Publication Publication Date Title
CN109284674B (en) Method and device for determining lane line
CN110660254B (en) Traffic signal lamp detection and intelligent driving method and device, vehicle and electronic equipment
CN110197589B (en) Deep learning-based red light violation detection method
JP5747549B2 (en) Signal detector and program
CN111950536A (en) Signal lamp image processing method and device, computer system and road side equipment
US20180060986A1 (en) Information processing device, road structure management system, and road structure management method
US20080013789A1 (en) Apparatus and System for Recognizing Environment Surrounding Vehicle
US8199971B2 (en) Object detection system with improved object detection accuracy
US9493108B2 (en) Apparatus for detecting other vehicle lights and light control apparatus for vehicles
CN112101272B (en) Traffic light detection method, device, computer storage medium and road side equipment
US10853936B2 (en) Failed vehicle estimation system, failed vehicle estimation method and computer-readable non-transitory storage medium
CN110335273B (en) Detection method, detection device, electronic apparatus, and medium
CN107644538B (en) Traffic signal lamp identification method and device
CN111931726B (en) Traffic light detection method, device, computer storage medium and road side equipment
CN114143940B (en) Tunnel illumination control method, device, equipment and storage medium
CN103927548A (en) Novel vehicle collision avoiding brake behavior detection method
CN112785850A (en) Method and device for identifying vehicle lane change without lighting
CN103324957A (en) Identification method and identification device of state of signal lamps
CN112164221B (en) Image data mining method, device and equipment and road side equipment
CN113989772A (en) Traffic light detection method and device, vehicle and readable storage medium
CN111277956A (en) Method and device for collecting vehicle blind area information
CN109784317B (en) Traffic signal lamp identification method and device
CN117237907A (en) Traffic signal lamp identification method and device, storage medium and electronic equipment
CN113989774A (en) Traffic light detection method and device, vehicle and readable storage medium
CN112784817B (en) Method, device and equipment for detecting lane where vehicle is located and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211012

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd.

Address before: 2 / F, *** building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant