CN111582216B - Unmanned vehicle-mounted traffic signal lamp identification system and method - Google Patents


Publication number
CN111582216B
CN111582216B · Application CN202010416680.0A
Authority
CN
China
Prior art keywords
signal lamp
color
module
traffic
algorithm
Prior art date
Legal status
Active
Application number
CN202010416680.0A
Other languages
Chinese (zh)
Other versions
CN111582216A (en)
Inventor
郭晴
程莹
赵琪
谢小娟
Current Assignee
Anhui Normal University
Original Assignee
Anhui Normal University
Priority date
Filing date
Publication date
Application filed by Anhui Normal University
Priority to CN202010416680.0A
Publication of CN111582216A
Application granted
Publication of CN111582216B
Legal status: Active


Classifications

    • G06V 20/584 — Recognition of traffic objects: vehicle lights or traffic lights (context of the image exterior to a vehicle, using sensors mounted on the vehicle)
    • G06F 18/24 — Pattern recognition; analysing; classification techniques
    • G06V 10/28 — Image preprocessing; quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/56 — Extraction of image or video features relating to colour
    • Y02T 10/40 — Climate change mitigation technologies related to transportation; engine management systems


Abstract

The embodiment of the invention discloses an unmanned vehicle-mounted traffic signal lamp recognition system. The system comprises a detection and recognition module that performs combined color-and-shape analysis on traffic signal lamp pictures; a transformation prediction module that builds a motion model from the dynamic video captured by the camera assembly; and a GIS module that attaches probability-model-based geographic position information to the results of the detection and recognition module and the transformation prediction module. The recognition method comprises: extracting candidate regions from images acquired by the camera module by extracting the color and shape features of traffic lights; reducing the image resolution of the candidate regions with an image downsampling algorithm to obtain a low-resolution pixel group; assigning each pixel of the low-resolution pixel group a color class label with a linear color classifier; and then obtaining signal lamp candidate regions of different color classes with an image separation algorithm. The recognition efficiency of the signal lamp is thereby greatly improved at both the physical and the algorithmic level.

Description

Unmanned vehicle-mounted traffic signal lamp identification system and method
Technical Field
The embodiment of the invention relates to the technical field of unmanned driving, in particular to an unmanned vehicle-mounted traffic signal lamp identification system and method.
Background
Traffic light recognition is an important component of driverless-vehicle technology and has a very wide range of applications. With social and economic development and technological progress, the automobile has become an important means of transport. It has greatly eased daily life, made large-scale cities possible, and greatly shortened the effective distance between cities. By technical means, a vehicle can recognize the indication of a traffic signal lamp at a distance beyond the range of the human eye and remind the driver of the right-of-way rules at the intersection ahead.
The main features of a traffic signal lamp are its color, its morphology, and its position. The position feature is mainly used to reduce the amount of computation: because traffic signal lamps are mounted high, only the upper half of the image needs to be processed. The color and morphological features are the main basis for identification. For the color features, the color distribution is analyzed in different color spaces, corresponding screening conditions are set, and the traffic signal lamp, whose colors are fixed (red, green and yellow), is segmented from the original image. When the image contains objects whose colors are similar to those of traffic lights, i.e. non-traffic-light regions that also satisfy the screening conditions, false identification easily occurs.
RGB is the most common color space, but it is vulnerable to changing lighting conditions. Masako et al. normalize the RGB space and set thresholds to extract red and green regions as candidates. They then extract edge information, search for circular contours with the Hough transform, and count the number of color pixels contained in each circular contour found; the contour with the highest count is taken to be the traffic signal lamp. This method combines color with circular morphological features, but the Hough transform is computationally expensive and sensitive to deformation, so recognition failures occur easily.
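As an illustration, the normalized-RGB screening step described above can be sketched as follows. This is a minimal sketch: the 0.5 thresholds and the function name are assumptions, since the actual values used by Masako et al. are not given here.

```python
import numpy as np

def extract_color_candidates(img, r_thresh=0.5, g_thresh=0.5):
    """Extract red/green candidate masks from an HxWx3 uint8 RGB image
    using normalized RGB components (thresholds are illustrative)."""
    rgb = img.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9          # avoid division by zero
    r = rgb[:, :, 0] / total                # normalized red component
    g = rgb[:, :, 1] / total                # normalized green component
    red_mask = r > r_thresh                 # candidate red-lamp pixels
    green_mask = g > g_thresh               # candidate green-lamp pixels
    return red_mask, green_mask

# A pure-red pixel normalizes to r = 1.0 and is flagged as a red candidate.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 0, 0)   # red
img[1, 1] = (0, 255, 0)   # green
red_mask, green_mask = extract_color_candidates(img)
```

Normalization makes the screening less sensitive to overall brightness, which is why it is preferred over raw RGB thresholds under varying illumination.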
Traffic signal lamp identification generally starts from features such as color, morphology and position. Recognition methods are applied to each feature, the features are examined in some order, and the results of each stage are fused to identify and extract the traffic signal lamp from the image. However, if the image contains an object similar to a traffic signal lamp in both color and shape, it may also pass the screening during recognition, and it becomes difficult to single out the correct traffic signal lamp.
Disclosure of Invention
Therefore, the embodiment of the invention provides an unmanned vehicle-mounted traffic signal lamp recognition system and method, which address the problems of image processing, algorithm architecture, and the many interfering objects encountered in conventional signal lamp recognition.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
an unmanned vehicle-mounted traffic signal lamp identification system, comprising:
the detection and identification module, which physically reduces the resolution of the static traffic signal lamp picture taken by the camera assembly and, after cropping, fuzzy classification and coordinate projection operations, analyzes the signal lamp pixel factors in the picture by combined color-and-shape matching;
the algorithm processing module, which comprises a pixel classifier for classifying signal lamp pixel factors together with a stored daytime traffic light algorithm and a stored nighttime traffic light algorithm;
the daytime traffic light algorithm extracts the color-space distribution features of a number of signal lamp pixel factors; the nighttime traffic light algorithm thresholds the same pixel factors in RG space to screen out halo images, binarizes the screening result, eliminates the halo by screening on the number of zero-valued (hole) regions enclosed by each area and inverting the local region, extracts the complete traffic light contour, and then identifies it by regional morphological features; the two algorithms are combined into an intermediate algorithm that recognizes the bright and dark regions of the signal lamp pixel factors synchronously;
the transformation prediction module, which, based on data on the speed and position of the moving automobile, performs transformation prediction on the signal lamp pixel factors of consecutive frames of the dynamic signal lamp video taken by the camera assembly, takes an abrupt change of state across consecutive frames as a cut point, and matches the result for fitness against the signal lamp pixel factors processed by the algorithm processing module;
and the GIS module, which uses a GPS system to attach probability-model-based geographic position information to the analysis results of the detection and identification module and the transformation prediction module.
As a preferred scheme of the invention, the detection and identification module is internally provided with a cropper for cropping the traffic signal lamp picture taken by the camera assembly, a fuzzy classifier for screening the color features of the cropped picture, and a projection analyzer for performing coordinate projection analysis on the picture data classified by the fuzzy classifier.
As a preferred scheme of the invention, the transformation prediction module acquires the motion state of the automobile through an MCU connected to the automobile, acquires the geographic coordinates of the automobile relative to the signal lamp through the GPS system, and simultaneously estimates the relative motion state of the signal lamp in the dynamic traffic signal lamp video taken by the camera assembly, thereby constructing a motion model; under this motion model, the transformation prediction module makes use of the detection, identification and analysis of consecutive frames of the dynamic video.
As a preferred mode of the invention, the specific parameters the transformation prediction module extracts from the detection, identification and analysis of consecutive frames of the dynamic traffic signal video include the signal position, its height, and its orientation relative to the lane direction.
As a preferred scheme of the invention, the detection and identification module further comprises a mounting shell and a main camera assembly, a secondary camera assembly and a color sensor embedded in it; an image separation mechanism for physically cropping and enlarging the pixels of the picture taken by the main camera assembly is arranged around the mounting shell;
the image separation mechanism comprises flexible shielding strips arranged around the mounting shell that extend and retract along the shooting direction of the main camera assembly; arc-shaped guide slots through which the flexible shielding strips extend are formed at the four corners of the front surface of the mounting shell, and one end of each flexible shielding strip is connected to a driving device.
As a preferred scheme of the invention, a film-coated transparent plate is arranged in the middle of one of the flexible shielding strips; when that strip is fully extended, its front end extends just into the arc-shaped guide slot opposite it.
The invention provides an unmanned vehicle-mounted traffic signal lamp identification method, which comprises the following steps:
s100, extracting candidate regions from the images acquired by the camera module by extracting the color and shape features of traffic lights;
s200, reducing the image resolution of the candidate regions with an image downsampling algorithm to obtain a low-resolution pixel group;
s300, assigning each pixel of the low-resolution pixel group a color class label with a linear color classifier, then obtaining signal lamp candidate regions of different color classes with an image separation algorithm;
s400, using the signal lamp candidate regions of the different color classes from the previous frames as prior information, predicting the position of the traffic signal lamp in the next frame, and repeating steps S200 and S300 to output the recognition result.
As a preferred scheme of the invention, the linear color classifier is based on the HSV color space, and the color class labels it assigns to the pixels of the low-resolution pixel group comprise four classes: red, yellow, green, and other;
in S300, the low-resolution pixel group produced by the image downsampling algorithm is stored as a gray-scale map in which the different color class labels are marked with different gray values.
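The downsampling and label-map storage of S200/S300 can be sketched as follows. The gray values chosen for the labels, the block-average downsampler, and the hue-style thresholds are all assumptions for illustration; the patent does not specify them.

```python
import numpy as np

# Assumed label values; the patent only says labels get distinct gray values.
LABELS = {"other": 0, "red": 50, "yellow": 100, "green": 150}

def downsample(img, factor):
    """Block-average downsampling: each factor x factor block of the
    candidate region becomes one pixel of the low-resolution group."""
    h, w, c = img.shape
    h2, w2 = h // factor, w // factor
    return (img[:h2 * factor, :w2 * factor]
            .reshape(h2, factor, w2, factor, c)
            .mean(axis=(1, 3)))

def classify_pixel(rgb):
    """Assign one of the four color class labels from HSV-style values.
    Thresholds are illustrative, not taken from the patent."""
    r, g, b = (float(x) / 255.0 for x in rgb)
    v = max(r, g, b)
    s = 0.0 if v == 0 else (v - min(r, g, b)) / v
    if v < 0.3 or s < 0.4:            # too dark or too gray to be a lamp
        return LABELS["other"]
    if r == v and g < 0.5 * r:
        return LABELS["red"]
    if r == v or (g == v and r > 0.6 * g):
        return LABELS["yellow"]
    if g == v:
        return LABELS["green"]
    return LABELS["other"]

def label_map(img, factor=4):
    """Store the classified low-resolution pixels as a gray-scale map."""
    low = downsample(img, factor)
    h, w, _ = low.shape
    gray = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            gray[y, x] = classify_pixel(low[y, x])
    return gray

# An 8x8 pure-red patch collapses to a 2x2 map of the "red" gray value.
gray = label_map(np.tile(np.array([[[255, 0, 0]]], dtype=np.uint8), (8, 8, 1)))
```

Storing labels as gray values lets the later image separation step operate on a single-channel image, which is cheap to scan.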
As a preferred scheme of the invention, in S400, the prior information of consecutive frames is used to estimate, in real time and with a probability model, the distribution of the signal lamp's position in the image; the signal position and lane orientation parameters obtained by GPS dotting are added, packed, and stored into a GIS module consisting of an offline data extraction part and an online data extraction part, and they are retrieved from the GIS module as prior information the next time the vehicle passes the same intersection.
As a preferred scheme of the invention, in S400, the predicted position of the traffic signal lamp in the next frame is turned into a trigger signal for focused high-definition shooting by a second camera module; the candidate regions of the different color classes are matched by aspect ratio and size, and the pixels carrying the same color label within the candidate regions that satisfy the matching are enclosed in rectangular frames as the output result.
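The aspect-ratio/size matching and rectangular-frame output can be sketched on the gray-scale label map. The minimum area and aspect-ratio range are illustrative assumptions; the patent does not give concrete values.

```python
import numpy as np
from collections import deque

def candidate_boxes(label_img, label, min_area=4, ar_range=(0.5, 2.0)):
    """Group connected pixels of one color label and keep components whose
    bounding box passes the size/aspect-ratio check; returns rectangles
    (x0, y0, x1, y1). Thresholds are illustrative assumptions."""
    h, w = label_img.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for y in range(h):
        for x in range(w):
            if label_img[y, x] != label or seen[y, x]:
                continue
            # flood-fill one 4-connected component of this label
            q, comp = deque([(y, x)]), []
            seen[y, x] = True
            while q:
                cy, cx = q.popleft()
                comp.append((cy, cx))
                for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
                            and label_img[ny, nx] == label:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            ys = [p[0] for p in comp]; xs = [p[1] for p in comp]
            bw, bh = max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
            if len(comp) >= min_area and ar_range[0] <= bw / bh <= ar_range[1]:
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

lab = np.zeros((10, 10), dtype=np.uint8)
lab[2:5, 2:5] = 50        # a 3x3 lamp-sized blob of the "red" label
lab[8, 8] = 50            # isolated noise pixel, rejected by min_area
boxes = candidate_boxes(lab, 50)
```

The size filter discards single-pixel noise, while the aspect-ratio check rejects elongated reflections that share a lamp's color.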
Embodiments of the present invention have the following advantages:
the invention effectively realizes the recognition, detection tracking and data storage of the signal lamp position in unmanned driving from the physical structure and software algorithm, can predict the target position of the next frame, and improves the detection and recognition efficiency; the transformation prediction module links the information of the same target in a plurality of continuous frames, so that the information of the current frame can be utilized when making a decision, the information of the previous frames can be synthesized to make an optimal decision, the burden of a system algorithm is reduced, and the possibility of algorithm failure is reduced; the position and the orientation of each traffic light are stored, and the set of the traffic lights is used as priori knowledge to form a priori knowledge database which can be called in real time at a later stage.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings in the following description are exemplary only, and other implementations can plainly be derived from them by a person of ordinary skill in the art without inventive effort.
The structures, proportions and sizes shown in this specification are intended only for illustration and description and do not limit the scope of the invention, which is defined by the claims; any structural modification, change of proportion or adjustment of size that does not affect the efficacy or purpose of the invention falls within the scope of the technical disclosure.
FIG. 1 is a block diagram of an unmanned vehicle-mounted traffic signal lamp recognition system in an embodiment of the invention;
FIG. 2 is a schematic diagram of a GIS module structure of an identification system according to an embodiment of the present invention;
FIG. 3 is a block diagram of an unmanned vehicle-mounted traffic signal lamp recognition method in an embodiment of the invention;
FIG. 4 is a flowchart of the halo-elimination method designed for the glare phenomenon of night traffic lights according to an embodiment of the present invention;
FIG. 5 is a schematic cross-sectional view of a detection and recognition module according to an embodiment of the present invention;
fig. 6 is a schematic front view of a detection and identification module according to an embodiment of the present invention.
In the figure:
1 - mounting shell; 2 - main camera assembly; 3 - image separation mechanism; 4 - color sensor; 5 - transparent cover; 6 - honeycomb-shaped protrusions; 7 - iris shutter mechanism; 8 - secondary camera assembly;
301 - flexible shielding strip; 302 - arc-shaped guide slot; 303 - film-coated transparent plate; 304 - micro drive motor; 305 - lead screw output shaft.
Detailed Description
Other advantages and benefits of the present invention will become apparent to those skilled in the art from the following detailed description, which illustrates certain specific embodiments, but not all embodiments. All other embodiments obtained by those skilled in the art from the embodiments of the invention without inventive effort fall within the scope of the invention.
As shown in fig. 1 and 2, the present invention provides an unmanned vehicle-mounted traffic signal lamp recognition system, comprising:
the detection and identification module, which physically reduces the resolution of the static traffic signal lamp picture taken by the camera assembly and, after cropping, fuzzy classification and coordinate projection operations, analyzes the signal lamp pixel factors in the picture by combined color-and-shape matching;
the algorithm processing module, which comprises a pixel classifier for classifying signal lamp pixel factors together with a stored daytime traffic light algorithm and a stored nighttime traffic light algorithm;
the daytime traffic light algorithm extracts the color-space distribution features of a number of signal lamp pixel factors; the nighttime traffic light algorithm thresholds the same pixel factors in RG space to screen out halo images, binarizes the screening result, eliminates the halo by screening on the number of zero-valued (hole) regions enclosed by each area and inverting the local region, extracts the complete traffic light contour, and then identifies it by regional morphological features; the two algorithms are combined into an intermediate algorithm that recognizes the bright and dark regions of the signal lamp pixel factors synchronously;
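The hole-counting cue used by the nighttime algorithm — a halo ring encloses dark pixels, a solid lamp disc does not — can be sketched on a binarized image. This is a minimal sketch under the assumption that "area surrounding zero number screening" means counting enclosed zero-valued regions; the patent gives no more detail at this level.

```python
import numpy as np
from collections import deque

def count_enclosed_holes(binary):
    """Count zero-valued regions fully enclosed by foreground pixels in a
    binarized image: a halo ring yields >= 1, a solid disc yields 0."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)

    def flood(sy, sx):
        q = deque([(sy, sx)])
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            for ny, nx in ((y-1,x),(y+1,x),(y,x-1),(y,x+1)):
                if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
                        and binary[ny, nx] == 0:
                    seen[ny, nx] = True
                    q.append((ny, nx))

    # mark all background reachable from the image border
    for y in range(h):
        for x in (0, w - 1):
            if binary[y, x] == 0 and not seen[y, x]:
                flood(y, x)
    for x in range(w):
        for y in (0, h - 1):
            if binary[y, x] == 0 and not seen[y, x]:
                flood(y, x)

    # any remaining zero pixels belong to enclosed holes
    holes = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 0 and not seen[y, x]:
                holes += 1
                flood(y, x)
    return holes

ring = np.zeros((5, 5), dtype=np.uint8)
ring[1:4, 1:4] = 1
ring[2, 2] = 0            # halo ring: bright annulus around a dark center
disc = np.zeros((5, 5), dtype=np.uint8)
disc[1:4, 1:4] = 1        # solid lamp disc
```

Regions flagged as halos can then be inverted locally, as the text describes, so the complete lamp contour survives for the morphological check.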
the transformation prediction module, which, based on data on the speed and position of the moving automobile, performs transformation prediction on the signal lamp pixel factors of consecutive frames of the dynamic signal lamp video taken by the camera assembly, takes an abrupt change of state across consecutive frames as a cut point, and matches the result for fitness against the signal lamp pixel factors processed by the algorithm processing module;
and the GIS module, which uses a GPS system to attach probability-model-based geographic position information to the analysis results of the detection and identification module and the transformation prediction module.
The detection and identification module is internally provided with a cropper for cropping the traffic signal lamp picture taken by the camera assembly, a fuzzy classifier for screening the color features of the cropped picture, and a projection analyzer for performing coordinate projection analysis on the picture data classified by the fuzzy classifier.
The transformation prediction module acquires the motion state of the automobile through an MCU connected to the automobile, acquires the geographic coordinates of the automobile relative to the signal lamp through a GPS system, and simultaneously estimates the relative motion state of the signal lamp in the dynamic traffic signal lamp video taken by the camera assembly, thereby constructing a motion model. Under this motion model, the transformation prediction module makes use of the detection, identification and analysis of consecutive frames of the dynamic video, and its function is twofold:
on the one hand, the transformation prediction module predicts the target position in the next frame, improving detection and recognition efficiency;
on the other hand, it links the information of the same target across several consecutive frames, so that a decision can use the current frame together with the preceding frames to reach an optimal decision.
A tracking module is introduced into the traffic signal lamp identification system: it narrows the detector's region of interest and associates the detection results of multiple frames. Once a signal lamp has been locked, the transformation prediction module supplies prior information to the detection of the next frame.
The positions in the previous frames are used as prior information to predict the position of the traffic signal lamp in the next frame. In other words, if some information about the signal lamp, such as its position, is obtained in advance, that prior information can be exploited in a targeted way during the detection and identification stage.
In this way the burden on the algorithm is reduced and, at the same time, so is the likelihood of its failure. The position and orientation of each traffic signal lamp are stored, and this set is used as prior knowledge to form a prior-knowledge database.
How to obtain such a prior database is the first problem this approach encounters: building such a huge database by manual surveying and labeling is clearly infeasible.
The process is therefore divided into an offline part and an online part: the data are acquired offline and retrieved directly at run time.
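The offline/online split can be sketched as a position-keyed store. Everything here — the field names, the grid size used to key intersections, and the function names — is a hypothetical illustration; the patent does not specify the database layout.

```python
# Minimal sketch of the offline/online prior database described above.
GRID = 1e-4  # ~10 m of latitude; coarse key so nearby GPS fixes match

def _key(lat, lon):
    """Quantize a GPS fix to a grid cell identifying the intersection."""
    return f"{round(lat / GRID)}:{round(lon / GRID)}"

def record_offline(db, lat, lon, lamp):
    """Offline pass: store each surveyed lamp (height, orientation
    relative to the lane, expected pixel position) by location."""
    db.setdefault(_key(lat, lon), []).append(lamp)

def lookup_online(db, lat, lon):
    """Online pass: retrieve the stored lamps as prior information when
    the vehicle approaches the same intersection again."""
    return db.get(_key(lat, lon), [])

db = {}
record_offline(db, 31.33001, 118.37002,
               {"height_m": 5.5, "orientation_deg": 92, "px": (1400, 310)})
priors = lookup_online(db, 31.33004, 118.37001)   # same grid cell, 1 hit
```

Quantizing the key tolerates GPS jitter between the offline survey pass and the online query, which is why an exact-coordinate key would not work.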
The specific parameters the transformation prediction module extracts from the detection, identification and analysis of consecutive frames of the dynamic traffic signal video include the signal position, its height, and its orientation relative to the lane direction.
As shown in figs. 5 and 6, the physical structure of the detection and identification module comprises a mounting shell 1 with a main camera assembly 2, a secondary camera assembly 8 and a color sensor 4 embedded in it; an image separation mechanism 3 for physically cropping and enlarging the pixels of the picture taken by the main camera assembly 2 is arranged around the mounting shell 1.
The image separation mechanism 3 comprises flexible shielding strips 301 arranged around the mounting shell 1 that extend and retract along the shooting direction of the main camera assembly 2; arc-shaped guide slots 302 through which the flexible shielding strips 301 extend are arranged at the four corners of the front surface of the mounting shell 1, and one end of each flexible shielding strip 301 is connected to a driving device.
The main features of a traffic signal lamp are its color, its morphology, and its position. The position feature is mainly used to reduce the amount of computation: because traffic signal lamps are mounted high, only the upper half of the image needs to be processed. The color and morphological features are the main basis for identification. For the color features, the color distribution is analyzed in different color spaces, corresponding screening conditions are set, and the traffic signal lamp, whose colors are fixed, is segmented from the original image. When the image contains objects whose colors are similar to those of traffic lights, i.e. non-traffic-light regions that also satisfy the screening conditions, false identification easily occurs.
Color features can efficiently remove a large number of irrelevant objects from the image, so screening by color is usually the first step of traffic signal lamp image recognition. The morphological features of a traffic signal lamp comprise the rectangular lamp housing and the circular or arrow-shaped lamp body.
That is, candidate regions are extracted from the acquired image using these features. This step is the basis of the traffic light identification system and cannot be replaced by any other module.
A detector should have a low miss rate and a low false-detection rate while remaining efficient and real-time, but these requirements conflict in two ways. A detector of a given structure cannot classify all samples correctly using a limited set of features: reducing the false-detection rate requires shrinking the classification region, while reducing the miss rate requires enlarging it, which is the first contradiction. On the other hand, achieving both a low miss rate and a low false-detection rate requires a comparatively complex classifier, which contradicts the real-time requirement.
The invention performs preliminary localization of the traffic light in real space through the color sensor 4. The recognition criterion of the color sensor 4 combines the correlations of the three components red (R), green (G) and blue (B), which resists interference from other ambient light and suits the localization of red and green lights in the image.
The final picture taken by the camera is then physically separated by the image separation mechanism 3. The separation principle is as follows: when the color sensor 4 has located the traffic light, the system judges from its shape whether the light is arranged vertically or horizontally, and accordingly drives either the left and right or the upper and lower flexible shielding strips 301 on the mounting shell 1 to extend, physically cropping the pixels of the picture taken by the main camera assembly 2.
Meanwhile, compared with the RGB color model, the HSI (hue, saturation, intensity) color model better matches human visual characteristics: all the color information of an image is contained in the H component, which facilitates the preliminary traffic light localization. A processor in the system locates the traffic light and crops it to its minimal bounding rectangle; the secondary camera assembly 8 then shoots the traffic light in focus according to the cropping result, while the iris shutter mechanism 7 controls the aperture during focusing. This further reduces the displayed pixel information of each lamp in the traffic light, and a superposition analysis with the pictures taken by the main camera assembly improves recognition accuracy.
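The hue-based localization and minimal-bounding-rectangle cropping can be sketched as follows. The hue thresholds used for the green band are assumptions; the conversion itself is the standard RGB-to-hue formula.

```python
import numpy as np

def hue_channel(img):
    """Compute the H component (degrees, 0-360) of an HxWx3 uint8 RGB
    image; hue carries the color information used for localization."""
    rgb = img.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    d = mx - mn
    h = np.zeros_like(mx)
    nz = d > 0
    rmax = nz & (mx == r); gmax = nz & (mx == g); bmax = nz & (mx == b)
    h[rmax] = (60 * (g - b)[rmax] / d[rmax]) % 360
    h[gmax] = 60 * (b - r)[gmax] / d[gmax] + 120
    h[bmax] = 60 * (r - g)[bmax] / d[bmax] + 240
    return h

def min_bounding_rect(mask):
    """Minimal axis-aligned bounding rectangle (x0, y0, x1, y1) of the
    True pixels in a boolean mask, or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Locate green-ish pixels (the 90-150 degree band is an assumption).
img = np.zeros((6, 6, 3), dtype=np.uint8)
img[2:4, 3:5] = (0, 200, 40)
h = hue_channel(img)
rect = min_bounding_rect((h > 90) & (h < 150))   # -> (3, 2, 4, 3)
```

The rectangle is what the secondary camera assembly would be told to focus on in the scheme described above.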
Lowering the resolution of the picture the processor has to handle shortens the time the system spends on the high-definition picture taken by the high-definition camera; interference from other lamp bodies in the shot is reduced as far as possible, and the accuracy of traffic light identification is greatly improved.
When stray light projected into the main camera assembly is strong, it is filtered through the film-coated transparent plate 303 arranged in the middle of one of the flexible shielding strips 301.
When a flexible shielding strip 301 is fully extended, its front end extends just into the arc-shaped guide slot 302 opposite it.
A transparent cover 5 is arranged at the front end of the mounting shell 1; its surface carries honeycomb-shaped protrusions 6 whose height decreases from the edge of the cover toward the center.
The driving device comprises a micro drive motor 304 and a screw output shaft 305; the micro drive motor 304 drives the flexible shielding strip 301 in linear motion by driving the screw output shaft to rotate.
The color sensor 4 is a TCS3200D color sensor.
As shown in fig. 3 and 4, the invention provides an unmanned vehicle-mounted traffic signal lamp identification method, comprising the following steps:
S100, extracting candidate areas from the images acquired by the camera module by extracting the color and shape characteristics of traffic lights;
S200, reducing the image resolution of the candidate areas with an image downsampling algorithm to obtain a low-resolution pixel group;
S300, assigning each pixel of the low-resolution pixel group a color class label through a linear color classifier, then processing with an image separation algorithm to obtain signal lamp candidate areas of different color classes;
S400, using the signal lamp candidate areas of different color classes from the previous frames acquired by the camera module as prior information, predicting the position of the traffic signal lamp in the next frame of the image, and repeating steps S200 and S300 to output the signal lamp identification result.
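The patent does not fix a particular downsampling kernel for step S200; as a hypothetical sketch, block averaging would reduce the candidate region's resolution like this:

```python
import numpy as np

def downsample(region: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping factor x factor blocks of an H x W (x C) image.

    A hypothetical minimal down-sampler, not the patent's exact algorithm.
    """
    h, w = region.shape[:2]
    h2, w2 = h - h % factor, w - w % factor      # crop to a multiple of factor
    region = region[:h2, :w2].astype(float)
    shape = (h2 // factor, factor, w2 // factor, factor) + region.shape[2:]
    return region.reshape(shape).mean(axis=(1, 3))
```

Each output pixel summarizes one factor × factor block, which is what makes the later per-pixel color labeling in S300 cheap.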
The invention recognizes traffic signal lamps by combining color and morphological characteristics in the signal lamp images captured by the camera module, using a camera resolution of six megapixels (2736 × 2192). Since signal lamps are installed well above the horizon and the camera is mounted horizontally, in actual use the region of interest is the upper half of the image, approximately three megapixels (2736 × 1096).
For the signal lamp image captured by the camera module, the shutter opening time is adjusted according to the pixel gray level of the cropped portion, controlling the amount of incoming light to ensure image quality.
The image color features are screened by a fuzzy classifier, whose accuracy and robustness are better than those of an ordinary color-threshold method; whether an object is a traffic signal lamp is then judged by combining the lamp's perimeter-to-area ratio with tracking over consecutive frames.
Color identification:
First, the traffic light is located and cropped to its minimum circumscribed rectangle. Second, the color space is converted: the RGB color model is the most common one, but its three components, red (R), green (G) and blue (B), are highly correlated and resist external interference poorly, so the RGB model is ill-suited to image segmentation.
The HSI (hue, saturation, intensity) color model matches human visual characteristics better than the RGB model, and all of the image's color information is contained in the H component, which is convenient to operate on; the lamp color is therefore recognized from the H component.
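Computing the H component from RGB can be sketched with the standard HSI hue formula (assumed here; the patent does not spell out the conversion):

```python
import numpy as np

def rgb_to_hue(rgb: np.ndarray) -> np.ndarray:
    """Return the HSI hue angle in degrees for an H x W x 3 float RGB image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9   # avoid division by zero
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return np.where(b <= g, theta, 360.0 - theta)            # hue in [0, 360)
```

Red lamps then cluster near 0° and green lamps near 120°, which is what makes preliminary positioning on the H component convenient.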
Shape recognition: signal lamp types are diverse; an arrow lamp carries both color and direction information, while a circular lamp carries color information only.
To identify round and arrow-shaped signal lamps flexibly, a coordinate projection analysis method is proposed: the shape is identified by comparing the projections against those of standard arrow and circle templates.
The abscissa projection of the binarized light-source region is essentially the area of the region accumulated along each row, while the ordinate projection is the corresponding accumulation along each column; both can be used for identification in practice.
Before comparison, the images are compressed or stretched to a uniform size; here they are uniformly resized to 20 × 20.
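A sketch of the projection comparison; the nearest-neighbour resize to 20 × 20 and the L1 distance between projections are illustrative assumptions, not the patent's exact matching rule:

```python
import numpy as np

def resize_nearest(mask: np.ndarray, size: int = 20) -> np.ndarray:
    """Nearest-neighbour resize of a binary mask to size x size."""
    h, w = mask.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return mask[rows][:, cols]

def projection_signature(mask: np.ndarray):
    """Row and column projections (area per row / per column) of a binary mask."""
    m = resize_nearest(mask.astype(np.uint8))
    return m.sum(axis=1), m.sum(axis=0)

def classify_shape(mask: np.ndarray, templates: dict) -> str:
    """Pick the template (e.g. 'circle', 'arrow') whose projections are closest."""
    rp, cp = projection_signature(mask)
    def dist(t):
        trp, tcp = projection_signature(t)
        return np.abs(rp - trp).sum() + np.abs(cp - tcp).sum()
    return min(templates, key=lambda name: dist(templates[name]))
```

Templates for the standard circle and each arrow direction would be registered once and reused for every candidate region.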
Standard circle and arrow templates are set before the comparison is made.
The linear color classifier is a classifier based on the HSV color space; it assigns each pixel of the low-resolution pixel group a color class label of one of four types: red, yellow, green and other;
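A hypothetical pixel classifier of this kind over HSV; the hue, saturation and value gates below are illustrative assumptions, not the patent's trained decision boundaries:

```python
import colorsys
import numpy as np

LABELS = {0: "other", 1: "red", 2: "yellow", 3: "green"}

def classify_pixel(r, g, b) -> int:
    """Assign one of four color-class labels to an RGB pixel (0..255 each)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    deg = h * 360.0
    if s < 0.35 or v < 0.25:
        return 0                      # too grey or too dark: "other"
    if deg < 15 or deg > 345:
        return 1                      # red
    if 40 <= deg <= 70:
        return 2                      # yellow
    if 90 <= deg <= 160:
        return 3                      # green
    return 0

def label_map(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel label image, stored as grey values as in step S300."""
    h, w, _ = rgb.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            out[i, j] = classify_pixel(*rgb[i, j])
    return out
```

Storing one grey value per class is exactly the representation described for S300 below.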
In S200, an image downsampling algorithm is adopted. For the daytime traffic light recognition algorithm, the procedure is: collect sample images of traffic signal lamps, extract the pixels in the lamp region, and statistically analyze a sufficient number of points to obtain the distribution characteristics of each color in the corresponding color space.
Common color spaces include RGB, YCbCr and HSV. They represent three classes of color space: those characterized by primary-color mixing ratios (RGB), by luminance and color difference (YCbCr), and by saturation, hue and brightness parameters (HSV); the three are mutually convertible. The original image is converted from RGB into YCbCr and HSV, and the distribution of pixel values is analyzed.
Corresponding screening thresholds are then set. RGB is the most common color space, and most image acquisition and display devices use it; its three components represent the values of red, green and blue respectively, each related to its gray level. Geometrically, the RGB space is a cube whose three coordinate axes are the R, G and B components.
Through value analysis of a sufficient number of pixels, the screening conditions for red and green are set as follows:
Red: R − G ≥ 70 and R − B ≥ 70,
Green: G − R ≥ 70 and G − B ≥ 0.
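These two screening conditions translate directly into vectorized masks; a minimal sketch:

```python
import numpy as np

def screen_day(rgb: np.ndarray):
    """Apply the daytime RGB screening conditions to an H x W x 3 integer image."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    red_mask = (r - g >= 70) & (r - b >= 70)      # red condition
    green_mask = (g - r >= 70) & (g - b >= 0)     # green condition
    return red_mask, green_mask
```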
The transformation relation between the YCbCr space and the RGB space is then used. Like RGB, the YCbCr space is a cube, with Y, Cb and Cr as its three coordinate axes.
The YCbCr space represents color characteristics by the Cb and Cr values, while the Y component represents the gray value; thresholds are therefore set only on Cb and Cr. The red and green threshold conditions in YCbCr space are:
Red: 80 ≤ Cb ≤ 135 and 155 ≤ Cr,
Green: 95 ≤ Cb ≤ 165 and Cr ≤ 115.
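A sketch of the YCbCr screening; the BT.601 full-range RGB-to-YCbCr coefficients are an assumption, since the patent only states that the two spaces are interconvertible:

```python
import numpy as np

def rgb_to_cbcr(rgb: np.ndarray):
    """Assumed BT.601 full-range RGB -> (Cb, Cr) conversion."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def screen_ycbcr(rgb: np.ndarray):
    """Red / green masks from the Cb, Cr threshold conditions above."""
    cb, cr = rgb_to_cbcr(rgb)
    red_mask = (80 <= cb) & (cb <= 135) & (155 <= cr)
    green_mask = (95 <= cb) & (cb <= 165) & (cr <= 115)
    return red_mask, green_mask
```

Note that only Cb and Cr are computed; as stated above, the Y component carries the gray value and is not thresholded.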
Design of an identification method for the halation caused by the glare of traffic lights at night
First, the halation in the image is extracted by RGB-space threshold screening and the screening result is binarized; the halation is then eliminated by counting the zeros enclosed by each region and negating the local region, so that the complete traffic light contour is extracted; further identification then proceeds by the regional morphological characteristic method.
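The halation elimination can be sketched as a border flood fill followed by negation of the enclosed zero regions; this is a hypothetical reading of "region surrounding zero number screening and local region negation", not the patent's exact procedure:

```python
from collections import deque
import numpy as np

def fill_halo_holes(mask: np.ndarray) -> np.ndarray:
    """Negate zero regions enclosed by a bright halo ring in a binary mask.

    Flood-fills the background from the image border; any zero pixel not
    reached is a hole inside a halo and is set to 1 (local negation).
    """
    h, w = mask.shape
    outside = np.zeros((h, w), dtype=bool)
    q = deque()
    for r in range(h):
        for c in range(w):
            if (r in (0, h - 1) or c in (0, w - 1)) and not mask[r, c]:
                outside[r, c] = True
                q.append((r, c))
    while q:                                   # BFS over background zeros
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] and not outside[nr, nc]:
                outside[nr, nc] = True
                q.append((nr, nc))
    return ~outside                            # lamp pixels plus filled holes
```

Binarized halation rings then become solid lamp blobs, from which the complete contour can be taken.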
Threshold screening is performed on night samples with the RGB-space threshold screening method to verify whether it can satisfy the requirement of extracting traffic signal lamp candidate areas.
The distribution of red and green in RGB space at night is substantially the same as during the day.
To ensure that the area of a night-time traffic signal lamp is captured as completely as possible, the RGB-space threshold screening conditions are relaxed; the color screening conditions for red and green night-time traffic lights are:
Red: R − G ≥ 60 and R − B ≥ 30,
Green: G − R ≥ 30 and G − B ≥ 0,
and color threshold screening is performed on the samples using these conditions.
In S300, the low-resolution pixel group produced by the image downsampling algorithm is stored as a gray-scale map, with each color class label marked by a distinct gray value.
In S400, the prior information of consecutive frames is used to estimate, in real time and based on a probability model, the distribution of positions in the image where the signal lamp may lie. Signal lamp position and lane orientation parameters obtained by GPS dotting are added, packaged and stored in a GIS module comprising offline and online data-extraction parts, and are extracted from the GIS module as prior information the next time the vehicle passes the same intersection.
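The position prediction over consecutive frames can be sketched with a constant-velocity prior on the observed lamp centres (a strong simplification of the probability model described above):

```python
import numpy as np

def predict_next_center(history):
    """Predict the next-frame signal-lamp centre from previously observed
    centres, using the last two observations as a constant-velocity prior.
    """
    pts = np.asarray(history, dtype=float)
    if len(pts) < 2:
        return tuple(pts[-1])          # no motion information yet
    velocity = pts[-1] - pts[-2]       # displacement between last two frames
    return tuple(pts[-1] + velocity)
```

The prediction would seed the search window for S200/S300 in the next frame, with GIS priors replacing it at known intersections.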
In S400, the predicted position of the traffic signal lamp in the next frame is processed into a trigger signal for focused high-definition shooting by another group of camera modules; the signal lamp candidate areas of different color classes are matched by aspect ratio and size, and the pixels bearing the same color label within the candidate areas that satisfy the matching result are connected by rectangular frames as the output result.
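Connecting same-label pixels with rectangular frames, gated by an assumed aspect-ratio range, can be sketched as:

```python
import numpy as np

def label_boxes(label_img: np.ndarray, min_pixels: int = 4):
    """For each non-zero color label, connect its pixels with one axis-aligned
    rectangle (row0, col0, row1, col1). The aspect-ratio gate of 0.3..3.0 is an
    illustrative assumption for the matching step.
    """
    boxes = {}
    for lbl in np.unique(label_img):
        if lbl == 0:
            continue                              # skip the "other" class
        rows, cols = np.nonzero(label_img == lbl)
        if rows.size < min_pixels:
            continue                              # too few pixels to be a lamp
        box = (rows.min(), cols.min(), rows.max(), cols.max())
        height = box[2] - box[0] + 1
        width = box[3] - box[1] + 1
        if 0.3 <= height / width <= 3.0:
            boxes[int(lbl)] = box
    return boxes
```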
While the invention has been described in detail through the foregoing general description and specific examples, it will be apparent to those skilled in the art that modifications and improvements can be made. Such modifications or improvements made without departing from the spirit of the invention are intended to fall within its claimed scope.

Claims (9)

1. An unmanned vehicle-mounted traffic signal lamp identification system, comprising:
the detection and identification module, used to reduce, by a physical structure, the resolution of the static traffic signal lamp picture shot by the camera assembly, and, after cropping, fuzzy classification and coordinate projection operations, to analyze the signal lamp pixel factors in the picture by combined color and shape matching;
the algorithm processing module, comprising a pixel classifier for classifying signal lamp pixel factors, together with a stored daytime traffic light algorithm and a stored night-time traffic light algorithm;
the daytime traffic light algorithm extracts the color space distribution characteristics of a certain number of signal lamp pixel factors; the night-time traffic light algorithm extracts and screens halation images by RGB-space thresholding of a certain number of signal lamp pixel factors, binarizes the screening results, eliminates halation through region-enclosed zero-count screening and local-region negation, extracts the complete traffic light contour, and then identifies by the regional morphological characteristic method; the daytime and night-time traffic light algorithms are combined into an intermediate algorithm that synchronously identifies the bright and dark areas of the signal lamp pixel factors;
the transformation prediction module, used to perform transformation prediction of the signal lamp pixel factors over consecutive frames of the signal lamp dynamic video shot by the camera assembly based on data of the automobile's speed and position, taking the sudden-change state of the image across the signal lamp's consecutive frames as a cut-off point, and performing fitness matching on the signal lamp pixel factors processed by the processing module;
the GIS module, used to add geographic position information, based on a probability model, to the data analysis results of the detection and identification module and the transformation prediction module by means of a GPS system;
the transformation prediction module acquires the motion state of the automobile through an MCU connected to the automobile, acquires the geographic position of the automobile relative to the signal lamp through the GPS system, and simultaneously estimates the relative motion state of the signal lamp in the traffic signal lamp dynamic video shot by the camera assembly, thereby constructing a motion model; under this motion model, the transformation prediction module uses detection, identification and analysis of the traffic signal lamp dynamic video over consecutive frames, the analysis content comprising:
predicting the target position of a signal lamp in a next frame in the shot traffic signal lamp dynamic video;
the information of the same target across several consecutive frames of the shot traffic signal dynamic video is correlated, so that when making a prediction decision, the information of the current frame and of the preceding frames is integrated to reach an optimal prediction decision.
2. The unmanned vehicle-mounted traffic light recognition system according to claim 1, wherein the detection and identification module is internally provided with a cutter for cropping the traffic signal lamp picture shot by the camera assembly, a fuzzy classifier for screening the image color characteristics of the picture cropped by the cutter, and a projection analyzer for performing coordinate projection analysis on the picture data classified by the fuzzy classifier.
3. The unmanned vehicle-mounted traffic light recognition system according to claim 1, wherein the transformation prediction module uses detection of the traffic light dynamic video over consecutive frames to recognize specific parameters, including the lamp position and the height and orientation at which lamps are installed relative to the lane direction.
4. The unmanned vehicle-mounted traffic light recognition system according to claim 1, wherein the detection recognition module further comprises a mounting housing (1), and a main camera assembly (2), a sub camera assembly (8) and a color sensor (4) which are embedded on the mounting housing (1), wherein an image separation mechanism (3) for physically shrinking and enlarging pixels of a picture taken by the main camera assembly (2) is arranged around the mounting housing (1);
the image separation mechanism (3) comprises flexible shielding strips (301) which are arranged around the installation shell (1) and stretch out and draw back along the shooting direction of the main camera component (2), arc-shaped guide slots (302) for the flexible shielding strips (301) to stretch out are formed in four corners of the front surface of the installation shell (1), and one end of each flexible shielding strip (301) is connected with a driving device.
5. The unmanned vehicle-mounted traffic light recognition system according to claim 4, wherein a coated transparent plate (303) is arranged in the middle of one flexible shielding strip (301), and when the flexible shielding strip (301) is fully extended, the front end of the flexible shielding strip (301) can just extend into an arc-shaped guide slot hole (302) opposite to the flexible shielding strip.
6. A method of unmanned vehicle traffic signal recognition according to the system of any one of claims 1 to 5, comprising the following steps:
S100, extracting candidate areas from the images acquired by the camera module by extracting the color and shape characteristics of traffic lights;
S200, reducing the image resolution of the candidate areas with an image downsampling algorithm to obtain a low-resolution pixel group;
S300, assigning each pixel of the low-resolution pixel group a color class label through a linear color classifier, then processing with an image separation algorithm to obtain signal lamp candidate areas of different color classes;
S400, using the signal lamp candidate areas of different color classes from the previous frames acquired by the camera module as prior information, predicting the position of the traffic signal lamp in the next frame of the image, and repeating steps S200 and S300 to output the signal lamp identification result.
7. The method for identifying the unmanned vehicle traffic signal according to claim 6, wherein the linear color classifier is a classifier based on the HSV color space, and each pixel of the low-resolution pixel group is assigned a color class label of one of four types: red, yellow, green and other;
in S300, the low-resolution pixel group produced by the image downsampling algorithm is stored as a gray-scale map, with each color class label marked by a distinct gray value.
8. The method for recognizing unmanned vehicle-mounted traffic lights according to claim 6, wherein in S400 the image position distribution of the signal lights is estimated in real time based on a probability model using the prior information of consecutive frames; signal lamp position and lane orientation parameters based on a GPS dotting technique are added, packaged and stored in a GIS module comprising offline and online data-extraction parts, and are extracted from the GIS module as prior information the next time the vehicle passes the same intersection.
9. The method for identifying the unmanned vehicle-mounted traffic signal lamp according to claim 8, wherein in step S400 the predicted position of the traffic signal lamp in the next frame is processed into a trigger signal for focused high-definition shooting by another group of camera modules; the signal lamp candidate areas of different color classes are matched by aspect ratio and size, and the pixels bearing the same color label within the candidate areas that satisfy the matching result are connected by rectangular frames as the output result.
CN202010416680.0A 2020-05-15 2020-05-15 Unmanned vehicle-mounted traffic signal lamp identification system and method Active CN111582216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010416680.0A CN111582216B (en) 2020-05-15 2020-05-15 Unmanned vehicle-mounted traffic signal lamp identification system and method


Publications (2)

Publication Number Publication Date
CN111582216A CN111582216A (en) 2020-08-25
CN111582216B true CN111582216B (en) 2023-08-04

Family

ID=72118925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010416680.0A Active CN111582216B (en) 2020-05-15 2020-05-15 Unmanned vehicle-mounted traffic signal lamp identification system and method

Country Status (1)

Country Link
CN (1) CN111582216B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018176000A1 (en) 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US10671349B2 (en) 2017-07-24 2020-06-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11215999B2 (en) 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11361457B2 (en) 2018-07-20 2022-06-14 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
CN113039556B (en) 2018-10-11 2022-10-21 特斯拉公司 System and method for training machine models using augmented data
US11196678B2 (en) 2018-10-25 2021-12-07 Tesla, Inc. QOS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11150664B2 (en) 2019-02-01 2021-10-19 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US10997461B2 (en) 2019-02-01 2021-05-04 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US10956755B2 (en) 2019-02-19 2021-03-23 Tesla, Inc. Estimating object properties using visual image data
CN112233415A (en) * 2020-09-04 2021-01-15 南京航空航天大学 Traffic signal lamp recognition device for unmanned driving
CN113065466B (en) * 2021-04-01 2024-06-04 安徽嘻哈网络技术有限公司 Deep learning-based traffic light detection system for driving training
CN113642521B (en) * 2021-09-01 2024-02-09 东软睿驰汽车技术(沈阳)有限公司 Traffic light identification quality evaluation method and device and electronic equipment
CN113989771B (en) * 2021-10-26 2024-07-19 中国人民解放***箭军工程大学 Traffic signal lamp identification method based on digital image processing
CN116152784B (en) * 2023-04-21 2023-07-07 深圳市夜行人科技有限公司 Signal lamp early warning method and system based on image processing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913041A (en) * 2016-04-27 2016-08-31 浙江工业大学 Pre-marked signal lights based identification method
WO2016203616A1 (en) * 2015-06-18 2016-12-22 日産自動車株式会社 Traffic light detection device and traffic light detection method
CN108492601A (en) * 2018-04-13 2018-09-04 济南浪潮高新科技投资发展有限公司 A kind of vehicle DAS (Driver Assistant System) and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fu Qiang. Research on traffic light recognition methods for intelligent vehicles. China Excellent Dissertations Full-text Database, 2017, description page I, paragraphs 2-3. *

Also Published As

Publication number Publication date
CN111582216A (en) 2020-08-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant