CN111582216A - Unmanned vehicle-mounted traffic signal lamp identification system and method - Google Patents

Info

Publication number
CN111582216A
CN111582216A
Authority
CN
China
Prior art keywords
signal lamp
color
image
module
traffic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010416680.0A
Other languages
Chinese (zh)
Other versions
CN111582216B (en)
Inventor
郭晴
程莹
赵琪
谢小娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Normal University
Original Assignee
Anhui Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Normal University filed Critical Anhui Normal University
Priority to CN202010416680.0A priority Critical patent/CN111582216B/en
Publication of CN111582216A publication Critical patent/CN111582216A/en
Application granted granted Critical
Publication of CN111582216B publication Critical patent/CN111582216B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an unmanned vehicle-mounted traffic signal lamp identification system, comprising: a detection and identification module for performing combined color and shape analysis on the traffic signal lamp image; a traffic signal lamp transformation prediction module for building a motion-model-based prediction from the dynamic video shot by the camera assembly; and a GIS module that uses a GPS system to add probability-model-based geographical location information to the data analysis results of the detection and identification module and the transformation prediction module. The identification method comprises the following steps: extracting candidate regions from the image acquired by the camera module according to the color and shape characteristics of the traffic lights; reducing the image resolution of the candidate regions with an image down-sampling algorithm to obtain a low-resolution pixel group; assigning a color class label to each pixel of the low-resolution pixel group with a linear color classifier; and then obtaining signal lamp candidate regions of different color classes through an image separation algorithm, so that the identification efficiency of the signal lamp is greatly improved in both the physical and the algorithmic structure.

Description

Unmanned vehicle-mounted traffic signal lamp identification system and method
Technical Field
The embodiment of the invention relates to the technical field of unmanned driving, in particular to an unmanned vehicle-mounted traffic signal lamp identification system and method.
Background
Traffic signal lamp recognition technology is an important component of unmanned driving technology and has a wide range of applications. With social and economic development and technological progress, the automobile has become an important means of transportation. It has greatly facilitated daily life, made large-scale cities possible, and greatly shortened the distance between cities. Through technical means, a vehicle can recognize the indication of a traffic signal lamp at a distance farther than the human eye can see, and remind the driver of the passing rule of the intersection ahead.
The main features of traffic lights are color, shape, and position. The position feature is mainly used to reduce the amount of computation: because traffic signal lamps are installed relatively high, usually only the upper half of the image is processed. The color and shape features are the main basis for traffic signal lamp identification. For the color feature, the distribution of colors is analyzed in different color spaces, corresponding screening conditions are set, and the traffic signal lamp with fixed color characteristics (red, green, and yellow) is segmented from the original image. When an object whose color is close to that of a traffic light appears in the image, i.e., a non-traffic-light region also meets the screening conditions, false recognition easily occurs.
RGB is the most common color space, but it is vulnerable to lighting conditions. Masako et al. normalize the RGB space, set thresholds, and screen out red and green areas as candidate regions. Edge information is then extracted from the image, a Hough transform searches for circular contours, the number of color pixels contained in each circle found by the Hough transform is counted, and the circle with the highest count is regarded as the traffic signal lamp. This method combines color and circular shape features, but the Hough transform is computationally expensive and sensitive to deformation, so recognition failures easily occur.
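The normalized-RGB screening step described above can be sketched as follows. This is only a minimal illustration of the idea, not the code or thresholds used by Masako et al.; the threshold values are assumptions.

```python
import numpy as np

def normalized_rgb_mask(img, r_thresh=0.5, g_thresh=0.5):
    """Return boolean masks for red- and green-dominant pixels.

    img is an H x W x 3 uint8 array. Each channel is divided by the
    per-pixel channel sum (chromaticity normalization), which makes the
    screening less sensitive to overall brightness. Thresholds are
    illustrative placeholders.
    """
    f = img.astype(np.float64)
    total = f.sum(axis=2) + 1e-6      # avoid division by zero
    r = f[..., 0] / total             # normalized red component
    g = f[..., 1] / total             # normalized green component
    return r > r_thresh, g > g_thresh
```

The surviving pixels would then be grouped into candidate regions and tested for circularity (e.g., by a Hough circle search).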
Traffic signal lamp identification is thus mainly based on the color, shape, and position features of the lamp, using identification methods tailored to each feature, a chosen order of feature identification, and fusion of the results of each stage to identify and extract the traffic signal lamp from the image. However, if an object whose color and shape are similar to those of a traffic signal lamp appears in the image, it can also pass the screening during identification, and the correct traffic signal lamp is difficult to distinguish.
Disclosure of Invention
Therefore, the embodiment of the invention provides an unmanned vehicle-mounted traffic signal lamp identification system and method, which effectively solve the problems of excessive interference items in image processing, algorithm architecture, and image identification in the traditional signal lamp identification process.
In order to achieve the above object, an embodiment of the present invention provides the following:
an unmanned on-board traffic signal light identification system, comprising:
the detection and identification module is used for reducing, by means of a physical structure, the resolution of the static traffic signal lamp picture shot by the camera assembly, and, after cutting, fuzzy classification, and coordinate projection operations, analyzing the signal lamp pixel factors in the picture through combined color and shape matching;
the algorithm processing module comprises a pixel classifier for classifying signal lamp pixel factors, and a stored daytime traffic light algorithm and a stored night traffic light algorithm;
the method comprises the steps that a daytime traffic light algorithm carries out color space distribution characteristic extraction on a certain number of signal light pixel factors, a night traffic light algorithm carries out extraction and screening of halo images in RG space threshold values on the certain number of signal light pixel factors, screening results are binarized, halo is eliminated through area surrounding zero number screening and local area negation operation, a complete traffic light outline is extracted, and then the traffic light outline is identified through a region morphological characteristic method;
the transformation prediction module is used for performing transformation prediction of consecutive multi-frame signal lamp pixel factors on the signal lamp dynamic video shot by the camera assembly, based on the speed and position data of the automobile's motion, and for performing fit-degree matching on the signal lamp pixel factors processed by the algorithm processing module, taking the color mutation state in the consecutive multi-frame dynamic video of the signal lamp as an interception point;
and the GIS module is used for adding geographical position information based on a probability model to the data analysis result of the detection identification module and the transformation prediction module by utilizing a GPS system.
As a preferable scheme of the present invention, the detection and identification module is provided with a cutter for cutting the traffic signal lamp picture shot by the camera assembly, a fuzzy classifier for screening the image color features of the picture cut by the cutter, and a projection analyzer for performing coordinate projection analysis on the picture data classified by the fuzzy classifier.
As a preferred scheme of the invention, the transformation prediction module acquires the motion state of the automobile through an MCU connected to the automobile, acquires the geographic coordinates from the automobile to a signal lamp through the GPS system, and estimates the relative motion state of the signal lamp in the traffic signal lamp dynamic video shot by the camera assembly, thereby constructing a motion model; under this motion model, the transformation prediction module performs detection, identification, and analysis on consecutive multi-frame traffic signal lamp dynamic videos.
As a preferable scheme of the invention, the transformation prediction module uses detection and identification of consecutive multi-frame traffic signal lamp dynamic videos to analyze specific parameters including the signal lamp position, height, and orientation based on the lane direction.
As a preferred scheme of the invention, the detection and identification module further comprises a mounting shell, and a main camera assembly, a secondary camera assembly, and a color sensor embedded in the mounting shell; an image separation mechanism for physically reducing and enlarging the pixels of the picture shot by the main camera assembly is disposed around the mounting shell;
the image separation mechanism comprises flexible shielding strips which are arranged around the mounting shell and extend along the shooting direction of the main camera assembly; the four corners of the front surface of the mounting shell are provided with arc-shaped guide slot holes through which the flexible shielding strips extend, and one end of each flexible shielding strip is connected with a driving device.
As a preferable scheme of the present invention, a coated transparent plate is disposed in the middle of one of the flexible shielding strips, and when that flexible shielding strip is completely extended, its front end can just extend into the arc-shaped guide slot hole directly opposite it.
The invention provides a method for identifying an unmanned vehicle-mounted traffic signal lamp, which comprises the following steps:
s100, extracting a candidate region from an image acquired by a camera module through the color and shape characteristics of a traffic light;
s200, reducing the image resolution of the candidate area by adopting an image down-sampling algorithm to obtain a low-resolution pixel group;
s300, endowing each pixel of the low-resolution pixel group with a color class label through a linear color classifier, and then processing by adopting an image separation algorithm to obtain signal lamp candidate regions with different color classes;
s400, using signal lamp candidate areas of different color types of the previous frames of images acquired by the camera module as prior information, predicting the position of the next frame of traffic signal lamp in the images, and repeating the steps S200 and S300 to output a signal lamp identification result.
As a preferred scheme of the present invention, the linear color classifier is based on the HSV color space, and the color class label given to each pixel of the low-resolution pixel group belongs to one of four classes: red, yellow, green, and other colors;
in S300, the low-resolution pixel group obtained by the image down-sampling algorithm is stored as a gray-scale map, and different color class labels are marked with different gray values.
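A minimal sketch of such an HSV-based four-class pixel classifier follows. The hue bands, saturation/value cut-offs, and gray-label values are illustrative assumptions; the patent does not disclose them.

```python
import colorsys

# Hypothetical gray values for the four color classes of step S300.
LABELS = {"red": 50, "yellow": 100, "green": 150, "other": 0}

def classify_pixel(r, g, b, s_min=0.4, v_min=0.3):
    """Assign one of four color-class labels to an RGB pixel via HSV.

    Dim or washed-out pixels fall into "other"; otherwise the hue
    angle (degrees) selects red, yellow, or green. The bands are
    rough approximations of signal-lamp colors, not the patent's.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < s_min or v < v_min:
        return LABELS["other"]
    deg = h * 360.0
    if deg < 20 or deg > 340:
        return LABELS["red"]
    if 40 <= deg <= 70:
        return LABELS["yellow"]
    if 90 <= deg <= 150:
        return LABELS["green"]
    return LABELS["other"]
```

Writing the returned gray value into a single-channel map gives the gray-scale label image described above.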
As a preferable scheme of the invention, in S400, using the prior information of consecutive frames, the image position distribution of the signal lamp is estimated in real time based on a probability model; the signal lamp position and lane orientation parameters obtained by a GPS dotting technique are added, packaged, and stored in a GIS module with both offline and online data extraction, and are extracted from the GIS module as prior information the next time the vehicle passes the same intersection.
In S400, the predicted position of the traffic signal lamp in the next frame is processed into a trigger signal so that another set of camera modules performs focused high-definition shooting; matching is performed according to the length-width ratio and size of the signal lamp candidate regions of different color classes, and pixels with the same color label in the candidate regions that satisfy the matching result are enclosed by a rectangular frame as the output result.
The embodiment of the invention has the following advantages:
the invention effectively realizes the identification, detection tracking and data storage of the signal lamp position in unmanned driving on the basis of physical structure and software algorithm, can predict the target position of the next frame, and improves the efficiency of detection and identification; the transformation prediction module links the information of the same target in a plurality of continuous frames, so that the optimal decision can be made by using the information of the current frame and integrating the information of the previous frames when the decision is made, the burden of a system algorithm is reduced, and the possibility of algorithm failure is reduced; and storing the position and the orientation of each traffic signal lamp, and forming a priori knowledge database which can be called in real time at a later stage by using the set of the positions and the orientation as the priori knowledge.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It should be apparent that the drawings in the following description are merely exemplary, and other drawings can be derived from them by those of ordinary skill in the art without inventive effort.
The structures, ratios, sizes, and the like shown in this specification are only used to match the contents disclosed herein, so that those skilled in the art can understand and read them, and are not used to limit the conditions under which the present invention can be implemented; they therefore have no essential technical significance. Any structural modification, change in ratio, or adjustment of size that does not affect the effects achievable by the present invention shall still fall within the scope covered by the technical contents disclosed herein.
FIG. 1 is a block diagram of an unmanned vehicle-mounted traffic signal light identification system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a GIS module structure of the recognition system in the embodiment of the present invention;
FIG. 3 is a block diagram of a method for identifying an unmanned vehicle-mounted traffic signal lamp according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for identifying a halo design caused by a glare phenomenon of a night traffic light according to an embodiment of the present invention;
FIG. 5 is a schematic cross-sectional view of a specific detection and identification module according to an embodiment of the present invention;
fig. 6 is a schematic front view structure diagram of a specific detection and identification module according to an embodiment of the present invention.
In the figure:
1 - mounting shell; 2 - main camera assembly; 3 - image separation mechanism; 4 - color sensor; 5 - transparent cover; 6 - honeycomb-shaped protrusion; 7 - iris shutter mechanism; 8 - secondary camera assembly;
301 - flexible shielding strip; 302 - arc-shaped guide slot hole; 303 - coated transparent plate; 304 - micro driving motor; 305 - screw output shaft.
Detailed Description
The present invention is described below by way of particular embodiments, and other advantages and effects of the invention will become apparent to those skilled in the art from this disclosure. It is to be understood that the described embodiments are merely some of the embodiments of the invention, not all of them, and are not intended to limit the invention. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
As shown in fig. 1 and 2, the present invention provides an unmanned on-vehicle traffic signal light recognition system, including:
the detection and identification module is used for reducing, by means of a physical structure, the resolution of the static traffic signal lamp picture shot by the camera assembly, and, after cutting, fuzzy classification, and coordinate projection operations, analyzing the signal lamp pixel factors in the picture through combined color and shape matching;
the algorithm processing module comprises a pixel classifier for classifying signal lamp pixel factors, and a stored daytime traffic light algorithm and a stored night traffic light algorithm;
the method comprises the steps that a daytime traffic light algorithm carries out color space distribution characteristic extraction on a certain number of signal light pixel factors, a night traffic light algorithm carries out extraction and screening of halo images in RG space threshold values on the certain number of signal light pixel factors, screening results are binarized, halo is eliminated through area surrounding zero number screening and local area negation operation, a complete traffic light outline is extracted, and then the traffic light outline is identified through a region morphological characteristic method;
the transformation prediction module is used for performing transformation prediction of consecutive multi-frame signal lamp pixel factors on the signal lamp dynamic video shot by the camera assembly, based on the speed and position data of the automobile's motion, and for performing fit-degree matching on the signal lamp pixel factors processed by the algorithm processing module, taking the color mutation state in the consecutive multi-frame dynamic video of the signal lamp as an interception point;
and the GIS module is used for adding geographical position information based on a probability model to the data analysis result of the detection identification module and the transformation prediction module by utilizing a GPS system.
The detection and identification module is internally provided with a cutter for cutting the traffic signal lamp picture shot by the camera assembly, a fuzzy classifier for screening the image color features of the picture cut by the cutter, and a projection analyzer for performing coordinate projection analysis on the picture data classified by the fuzzy classifier.
The transformation prediction module acquires the motion state of the automobile through an MCU connected to the automobile, acquires the geographic coordinates from the automobile to a signal lamp through the GPS system, and estimates the relative motion state of the signal lamp in the traffic signal lamp dynamic video shot by the camera assembly, thereby constructing a motion model. Under this motion model, the transformation prediction module performs detection, identification, and analysis on consecutive multi-frame traffic signal lamp dynamic videos; its functions are as follows:
on one hand, the transformation prediction module can predict the target position of the next frame, and the detection and identification efficiency is improved;
on the other hand, the transformation prediction module associates the information of the same target across multiple consecutive frames, so that at decision time the information of the current frame can be combined with that of previous frames to make the optimal decision.
A tracking module is introduced into the traffic signal lamp identification system to narrow the detector's region of interest and correlate the detection results of multiple frames; once a signal lamp is locked, the transformation prediction module can provide advance information for the detection of the next frame.
The positions in the previous frames are used as prior information to predict where the traffic signal lamp will appear in the next frame. In other words, if some information about the signal lamp, such as its position, is obtained in advance, this prior information can be used for targeted processing in the detection and identification stage.
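As a rough illustration of using previous-frame positions as prior information, a constant-velocity extrapolation of the lamp's image center might look like this; the patent's actual motion model is not disclosed, so this function is only a stand-in.

```python
def predict_next_center(centers):
    """Predict the lamp's image center in the next frame from the
    centers observed in previous frames, assuming constant velocity
    in image coordinates (a simple stand-in for the patent's motion
    model). centers is a list of (x, y) tuples, oldest first."""
    if not centers:
        return None
    if len(centers) < 2:
        return centers[-1]            # no velocity estimate yet
    (x1, y1), (x2, y2) = centers[-2], centers[-1]
    return (2 * x2 - x1, 2 * y2 - y1)  # extrapolate one frame ahead
```

The detector would then search only a window around the predicted center instead of the whole frame.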
In this way, the burden on the algorithm is reduced, and the possibility of algorithm failure is also reduced. The position and orientation of each traffic signal lamp are stored, and the set of them is used as prior knowledge to form a prior knowledge database.
But how to acquire such a prior database is the first problem this approach encounters. Building such a large database by manual mapping and annotation is obviously infeasible.
Therefore, the process is divided into an offline part and an online part: the data is acquired offline and called directly during online operation.
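The offline/online split can be illustrated with a toy prior store keyed by coarse GPS cells. The class name, cell size, and record format are hypothetical, not taken from the patent.

```python
class SignalPriorDB:
    """Toy GIS prior store: signal-lamp records (position, height,
    lane orientation, ...) indexed by a coarse GPS grid cell."""

    def __init__(self, cell=1e-4):    # roughly 10 m cells at mid latitudes
        self.cell = cell
        self.db = {}

    def _key(self, lat, lon):
        # Quantize coordinates so nearby fixes map to the same cell.
        return (round(lat / self.cell), round(lon / self.cell))

    def store(self, lat, lon, record):
        """Offline pass: accumulate lamp records per grid cell."""
        self.db.setdefault(self._key(lat, lon), []).append(record)

    def lookup(self, lat, lon):
        """Online pass: fetch prior records for the current position."""
        return self.db.get(self._key(lat, lon), [])
```

Online, the vehicle's current GPS fix selects the stored records, which seed the detector's region of interest at that intersection.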
The transformation prediction module uses detection and identification of consecutive multi-frame traffic signal lamp dynamic videos to analyze specific parameters including the signal lamp position, height, and orientation based on the lane direction.
As shown in fig. 5 and 6, the physical structure of the detection and identification module specifically includes a mounting shell 1, and a main camera assembly 2, a secondary camera assembly 8, and a color sensor 4 embedded in the mounting shell 1; an image separation mechanism 3 for physically reducing and enlarging the pixels of the picture taken by the main camera assembly 2 is disposed around the mounting shell 1.
The image separation mechanism 3 comprises flexible shielding strips 301 which are arranged around the mounting shell 1 and extend along the shooting direction of the main camera assembly 2; the four corners of the front surface of the mounting shell 1 are provided with arc-shaped guide slot holes 302 through which the flexible shielding strips 301 extend and retract, and one end of each flexible shielding strip 301 is connected with a driving device.
The main features of traffic lights are color, shape, and position. The position feature is mainly used to reduce the amount of computation: because traffic signal lamps are installed relatively high, usually only the upper half of the image is processed. The color and shape features are the main basis for traffic signal lamp identification. For the color feature, the distribution of colors is analyzed in different color spaces, corresponding screening conditions are set, and the traffic signal lamp with fixed color characteristics is segmented from the original image. When an object whose color is close to that of a traffic light appears in the image, i.e., a non-traffic-light region also meets the screening conditions, false recognition easily occurs.
A large number of irrelevant objects in the image can be effectively removed using the color feature, which is therefore often the first step of traffic signal lamp image identification. Morphologically, a traffic signal lamp consists of a rectangular lamp frame and a circular or arrow-shaped lamp body.
That is, candidate regions are extracted from the acquired image using these features. This step is the basis of the traffic light identification system and is an irreplaceable module.
In terms of performance, the detection identifier is required to have both a low missed detection rate and a low false detection rate, and at the same time to be efficient and real-time. However, two pairs of contradictions exist here. For a detection identifier with a fixed structure, limited features cannot classify all samples correctly: shrinking the classification region reduces the false detection rate but raises the missed detection rate, while enlarging it reduces the missed detection rate but raises the false detection rate, which forms the first contradiction. On the other hand, achieving both a low missed detection rate and a low false detection rate requires a more complex classifier, which contradicts the real-time requirement.
The invention performs preliminary positioning of the traffic light in real space through the color sensor 4. The identification standard set for the color sensor 4 combines the relative proportions of the three components red (R), green (G), and blue (B) to resist interference from other external light, and is suitable for positioning the traffic light in the image.
The final picture shot by the camera is then physically separated by the image separation mechanism 3. The specific separation principle is as follows: after the color sensor 4 has located the traffic lights, whether they are arranged longitudinally or transversely is judged from their shape, and according to the result either the left and right or the upper and lower groups of flexible shielding strips 301 on the mounting shell 1 are driven to extend, thereby performing physical pixel cutting on the picture shot by the main camera assembly 2.
Meanwhile, compared with the RGB color model, the HSI (hue, saturation, intensity) color model is closer to human visual characteristics, and all the color information of an image is contained in the H component, which facilitates the preliminary traffic light positioning operation. A processor in the system locates and cuts the traffic light in the image down to its minimum enclosing rectangle; according to the cutting result, the secondary camera assembly 8 performs focused shooting of the traffic light in space, with the iris shutter mechanism 7 controlling the shooting aperture during focusing. The displayed pixel information of each lamp in the traffic light is thereby reduced and then superimposed on, and analyzed together with, the picture shot by the main camera assembly, improving identification accuracy.
Reducing the resolution of the pictures handled by the processor shortens the time the system needs to process the high-definition pictures shot by the high-definition camera, minimizes the interference of other lamp bodies in the camera picture, and greatly improves the accuracy of traffic light identification.
When the stray light projected into the main camera assembly is strong, the light is filtered by the coated transparent plate 303 arranged in the middle of one of the flexible shielding strips 301.
When the flexible shielding strip 301 is fully extended, its front end can just extend into the arc-shaped guide slot hole 302 directly opposite it.
A transparent cover 5 is arranged at the front end of the mounting shell 1; its surface carries honeycomb-shaped protrusions 6 whose height decreases gradually from the edge of the cover toward its center.
The driving device 4 includes a micro drive motor 304 and a screw output shaft 305; the motor drives the flexible shielding strip 301 in a linear motion by rotating the screw output shaft.
The color sensor 4 is a TCS3200D color sensor.
As shown in fig. 3 and 4, the present invention provides an unmanned vehicle-mounted traffic signal lamp recognition method, comprising the steps of:
S100, extracting a candidate region from the image acquired by the camera module using the color and shape characteristics of the traffic light;
S200, reducing the image resolution of the candidate region with an image down-sampling algorithm to obtain a low-resolution pixel group;
S300, assigning each pixel of the low-resolution pixel group a color-class label with a linear color classifier, then applying an image separation algorithm to obtain signal-lamp candidate regions of different color classes;
S400, using the signal-lamp candidate regions of different color classes from the previous frames acquired by the camera module as prior information, predicting the position of the traffic signal in the next frame, and repeating steps S200 and S300 to output the recognition result.
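As an illustration of step S200, block averaging is one simple down-sampling scheme. The short Python sketch below reduces a grayscale image by an integer factor; the function name and the factor are illustrative, since the patent does not specify its exact down-sampling algorithm.

```python
# A minimal block-averaging down-sampler for a grayscale image given as a
# nested list of pixel values. Names are illustrative, not from the patent.

def downsample(image, factor):
    """Reduce resolution by averaging each factor-by-factor block."""
    rows = len(image) // factor
    cols = len(image[0]) // factor
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [image[r * factor + i][c * factor + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

# A 4x4 image reduced to 2x2: each output pixel is the mean of one block.
img = [[10, 10, 20, 20],
       [10, 10, 20, 20],
       [30, 30, 40, 40],
       [30, 30, 40, 40]]
print(downsample(img, 2))  # [[10, 20], [30, 40]]
```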
The traffic-signal recognition method combines the color and morphological characteristics of the signal-lamp image captured by the camera module. The camera used has a resolution of six million pixels (2736 × 2192). Because signal lamps are installed above the horizon and the camera is mounted horizontally, the region of interest is the upper half of the image, about three million pixels (2736 × 1096) in actual use.
For the signal-lamp image captured by the camera module, the shutter's open time is adjusted according to the pixel gray levels of the cropped portion, controlling the amount of incoming light to guarantee image quality.
A fuzzy classifier screens the image's color features with better precision and robustness than an ordinary color-threshold method; whether an object is a traffic signal is then judged by combining the signal's perimeter-to-area ratio with tracking across consecutive frames.
Color recognition:
The traffic light is first positioned and cropped to its minimum bounding rectangle. Next the color space is converted: RGB is the most common color model, but its red (R), green (G) and blue (B) components are highly correlated and resist external interference poorly, so it is unsuitable for image segmentation.
The HSI (hue, saturation, intensity) color model suits human visual characteristics better than RGB, and all of an image's color information is contained in the H component, which makes it convenient to recognize the lamp color from H alone.
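To illustrate extracting the H component, the standard-library colorsys module computes an HSV hue, which serves here as a stand-in for the HSI hue angle (the two hue definitions agree for the saturated colors of lit lamps). The function name is ours.

```python
import colorsys  # standard library

def hue_of(r, g, b):
    """Hue in degrees of an 8-bit RGB pixel; colorsys returns HSV hue,
    used here as a stand-in for the HSI hue component H."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

# Pure red sits at 0 degrees on the hue circle, pure green at 120 degrees.
print(round(hue_of(255, 0, 0)))  # 0
print(round(hue_of(0, 255, 0)))  # 120
```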
Shape recognition: signal lamps come in several types; an arrow lamp carries both color and direction information, while a round lamp carries only color information.
To recognize round and arrow-shaped signal lamps flexibly, a coordinate-projection analysis method is proposed: the shape is identified by comparison against the coordinate projections of a standard arrow and a standard circle.
The abscissa (row-direction) projection of the binarized light-source region essentially measures the region's area row by row, and the ordinate projection measures it column by column; in actual use, identification can be performed from the abscissa projection alone.
For comparison, the images are compressed or stretched to a uniform size, here 20 × 20. A standard circle and a standard arrow template are set up before the comparison.
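The projection comparison can be sketched as follows. For brevity the templates are 6 × 6 rather than the 20 × 20 size used in the text, and all function names and templates are illustrative; an unknown shape would be assigned to whichever standard template its projection matches most closely.

```python
def x_projection(binary):
    """Abscissa (row-direction) projection: lit-pixel count per row."""
    return [sum(row) for row in binary]

def match_score(proj, template):
    """Sum of absolute differences; lower means closer to the template."""
    return sum(abs(a - b) for a, b in zip(proj, template))

# Toy 6x6 binary templates standing in for the 20x20 standard shapes.
circle = [[0, 0, 1, 1, 0, 0],
          [0, 1, 1, 1, 1, 0],
          [1, 1, 1, 1, 1, 1],
          [1, 1, 1, 1, 1, 1],
          [0, 1, 1, 1, 1, 0],
          [0, 0, 1, 1, 0, 0]]
arrow = [[0, 0, 1, 1, 0, 0],
         [0, 1, 1, 1, 1, 0],
         [1, 1, 1, 1, 1, 1],
         [0, 0, 1, 1, 0, 0],
         [0, 0, 1, 1, 0, 0],
         [0, 0, 1, 1, 0, 0]]

p = x_projection(circle)
print(p)  # [2, 4, 6, 6, 4, 2]
# The circular projection matches the circle template better than the arrow.
print(match_score(p, x_projection(circle)) < match_score(p, x_projection(arrow)))  # True
```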
The linear color classifier works in HSV color space and assigns each pixel of the low-resolution pixel group one of four color-class labels: red, yellow, green, or other.
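A minimal sketch of such a four-class HSV labeler follows. The hue, saturation and value thresholds here are illustrative assumptions, not values taken from the patent.

```python
import colorsys  # standard library

def color_label(r, g, b):
    """Assign one of four labels - 'red', 'yellow', 'green', 'other' -
    from the HSV components of an 8-bit RGB pixel.
    All threshold values are illustrative assumptions."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    deg = h * 360.0
    if s < 0.3 or v < 0.3:
        return 'other'           # too dull or too dark to be a lit lamp
    if deg < 20 or deg > 340:
        return 'red'
    if 40 <= deg <= 70:
        return 'yellow'
    if 90 <= deg <= 160:
        return 'green'
    return 'other'

print(color_label(250, 30, 30))   # red
print(color_label(240, 220, 40))  # yellow
print(color_label(40, 230, 60))   # green
print(color_label(128, 128, 128)) # other
```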
In S200 an image down-sampling algorithm is used. The recognition algorithm for daytime traffic works as follows: sample images of traffic signals are collected, the pixels in the signal-lamp region are extracted, and statistical analysis over a sufficient number of points yields the distribution characteristics of each color in the corresponding color space.
Common color spaces include RGB, YCbCr, HSV, and the like.
They represent three families of color space: those characterized by primary-color mixing ratios (RGB), by luminance and chrominance (YCbCr), and by hue, saturation and value parameters (HSV); all three are mutually interconvertible. The original image is converted from RGB into YCbCr and HSV respectively, and the distribution of pixel values is analyzed.
Corresponding screening thresholds are then set. RGB is the most common color space and is used by image-capture and display devices; its three components give the red, green and blue values, each related to its gray level.
The RGB space is a cube, and the three coordinate axes are RGB components.
From value analysis of a sufficient number of pixels, the red and green screening conditions are set as follows:
R-G ≥ 70 and R-B ≥ 70 (red),
G-R ≥ 70 and G-B ≥ 0 (green).
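Applied per pixel, the daytime RGB screening conditions above translate directly into code; the function names are ours, the inequalities are from the text.

```python
# Daytime RGB-space screening conditions from the text, applied per pixel.
def is_red_day(r, g, b):
    """Red lamp test: R-G >= 70 and R-B >= 70."""
    return r - g >= 70 and r - b >= 70

def is_green_day(r, g, b):
    """Green lamp test: G-R >= 70 and G-B >= 0."""
    return g - r >= 70 and g - b >= 0

print(is_red_day(200, 60, 50))    # True  - strongly red pixel
print(is_green_day(60, 180, 90))  # True  - strongly green pixel
print(is_red_day(120, 110, 100))  # False - near-gray pixel rejected
```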
The transformation between YCbCr space and RGB space is then applied. Like RGB, the YCbCr space is a cube, with Y, Cb and Cr as its three coordinate axes.
The YCbCr space expresses color characteristics through the Cb and Cr values, while the Y component carries the gray value; thresholds are therefore set only on Cb and Cr. The red and green threshold conditions in YCbCr space are:
80 ≤ Cb ≤ 135 and 155 ≤ Cr (red),
95 ≤ Cb ≤ 165 and Cr ≤ 115 (green).
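A sketch combining an RGB-to-YCbCr conversion with the thresholds above. The patent does not name the exact transform it uses, so the common full-range BT.601 variant is assumed here; function names are ours.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB-to-YCbCr conversion (an assumption: the
    patent does not specify which YCbCr transform it uses)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_red_ycbcr(r, g, b):
    """Red condition from the text: 80 <= Cb <= 135 and Cr >= 155."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 80 <= cb <= 135 and cr >= 155

def is_green_ycbcr(r, g, b):
    """Green condition from the text: 95 <= Cb <= 165 and Cr <= 115."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 95 <= cb <= 165 and cr <= 115

print(is_red_ycbcr(220, 40, 40))     # True
print(is_green_ycbcr(60, 200, 120))  # True
```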
Recognition method designed for the halo caused by the glare of traffic lights at night:
First, screening through RGB-space thresholds extracts the halos in the image, and the screening result is binarized. The halos are then eliminated by screening on the number of zero points enclosed by each region and by a negation (inversion) operation on the local region, so that the complete traffic-light outline is extracted; recognition then proceeds by the region-morphology method.
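The "enclosed zero points" screening can be sketched with a flood fill: zeros reachable from the image border are ordinary background, while zeros sealed inside a lit ring betray a halo. This is a pure-Python sketch; all names are illustrative.

```python
# Count zero pixels enclosed by lit regions in a binary image. A halo ring
# around a lamp encloses a patch of zeros, so a high count flags a halo.

def enclosed_zero_count(binary):
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    # Seed the flood fill with every zero pixel on the image border.
    stack = [(r, c) for r in range(rows) for c in range(cols)
             if (r in (0, rows - 1) or c in (0, cols - 1)) and binary[r][c] == 0]
    for r, c in stack:
        seen[r][c] = True
    # Flood-fill all zeros 4-connected to the border.
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not seen[nr][nc] and binary[nr][nc] == 0:
                seen[nr][nc] = True
                stack.append((nr, nc))
    # Zeros never reached from the border are enclosed by lit pixels.
    return sum(1 for r in range(rows) for c in range(cols)
               if binary[r][c] == 0 and not seen[r][c])

# A 5x5 halo ring: the 5-pixel hole inside it is enclosed.
ring = [[0, 1, 1, 1, 0],
        [1, 1, 0, 1, 1],
        [1, 0, 0, 0, 1],
        [1, 1, 0, 1, 1],
        [0, 1, 1, 1, 0]]
print(enclosed_zero_count(ring))  # 5
```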
Threshold screening is performed on the night samples using the RGB-space method to verify whether it can still extract traffic-signal candidate regions.
The distribution of red and green in the RGB space at night is substantially the same as during the daytime.
To capture the region of a night-time traffic signal as completely as possible, the RGB-space screening conditions are relaxed. The night-time color-screening condition for red signals is given by the first formula below, and the condition for green signals by the second:
R-G ≥ 60 and R-B ≥ 30 (red),
G-R ≥ 30 and G-B ≥ 0 (green),
and these formulas are used for color-threshold screening of the samples.
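The relaxed night-time conditions can be put side by side with the daytime rule in a short sketch (function names are ours; the inequalities are from the text):

```python
def is_red_night(r, g, b):
    """Relaxed night-time red condition: R-G >= 60 and R-B >= 30."""
    return r - g >= 60 and r - b >= 30

def is_green_night(r, g, b):
    """Relaxed night-time green condition: G-R >= 30 and G-B >= 0."""
    return g - r >= 30 and g - b >= 0

# A washed-out red pixel, typical of night-time glare: it fails the strict
# daytime rule (R-B >= 70 does not hold) but passes the relaxed night rule.
r, g, b = 200, 130, 150
print(r - g >= 70 and r - b >= 70)  # False (daytime condition)
print(is_red_night(r, g, b))        # True
```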
In S300, the low-resolution pixel group produced by the image down-sampling algorithm is stored as a gray-scale map, with each color-class label marked by a distinct gray value.
In S400, the prior information from several consecutive frames is used to estimate the image-position distribution of the signal lamp in real time with a probability model. The signal-lamp position and lane-orientation parameters obtained by GPS dotting are added, packaged, and stored in a GIS module comprising offline and online data-extraction parts; the next time the vehicle passes the same intersection, they are extracted from the GIS module as prior information.
In S400, the predicted position of the traffic signal in the next frame is processed into a trigger signal for the second camera module to take a focused high-definition shot. The signal-lamp candidate regions of different color classes are matched by aspect ratio and size, and the pixels sharing a color label within the candidate regions that fit the matching result are enclosed by a rectangular frame as the output.
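The aspect-ratio and size matching of candidate regions can be sketched as follows; the area and ratio thresholds are illustrative assumptions, not values from the patent.

```python
# Accept a candidate region (a set of same-label pixel coordinates) only if
# its bounding rectangle has lamp-like size and aspect ratio.

def bounding_box(pixels):
    """Return (row_min, col_min, row_max, col_max) of the pixel set."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return min(rows), min(cols), max(rows), max(cols)

def is_lamp_candidate(pixels, min_area=4, max_ratio=1.5):
    """Thresholds are illustrative: a lamp is roughly square and not tiny."""
    r0, c0, r1, c1 = bounding_box(pixels)
    h, w = r1 - r0 + 1, c1 - c0 + 1
    ratio = max(h, w) / min(h, w)
    return h * w >= min_area and ratio <= max_ratio

# A roughly square blob of same-label pixels passes; a thin streak does not.
blob = [(2, 2), (2, 3), (3, 2), (3, 3)]
streak = [(5, 0), (5, 1), (5, 2), (5, 3), (5, 4), (5, 5)]
print(is_lamp_candidate(blob))    # True
print(is_lamp_candidate(streak))  # False
```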
Although the invention has been described in detail above with reference to a general description and specific embodiments, it will be apparent to those skilled in the art that modifications or improvements can be made on this basis. Such modifications and improvements fall within the scope of the claimed invention.

Claims (10)

1. An unmanned on-vehicle traffic signal light identification system, comprising:
the detection and identification module, which reduces the resolution of the static traffic-signal picture taken by the camera assembly through a physical structure and, after cutting, fuzzy-classification and coordinate-projection operations, analyzes the signal-lamp pixel factors in the picture by combined color and shape matching;
the algorithm processing module comprises a pixel classifier for classifying signal lamp pixel factors, and a stored daytime traffic light algorithm and a stored night traffic light algorithm;
wherein the daytime traffic-light algorithm extracts color-space distribution characteristics from a number of signal-lamp pixel factors, while the night traffic-light algorithm extracts and screens halo images by RGB-space thresholds from a number of signal-lamp pixel factors, binarizes the screening result, eliminates the halos by screening on the number of zero points enclosed by each region and by local-region negation, extracts the complete traffic-light outline, and then performs identification by the region-morphology method;
the conversion prediction module, which performs conversion prediction of signal-lamp pixel factors over consecutive frames of the dynamic signal-lamp video shot by the camera assembly, based on data of the vehicle's speed and position, and matches the degree of fit of the signal-lamp pixel factors processed by the processing module, taking the abrupt color change in the consecutive frames of the dynamic signal-lamp video as the interception point;
and the GIS module is used for adding geographical position information based on a probability model to the data analysis result of the detection identification module and the transformation prediction module by utilizing a GPS system.
2. The unmanned on-board traffic signal light recognition system of claim 1, wherein the detection and identification module comprises a cutter for cutting the traffic-signal image captured by the camera assembly, a fuzzy classifier for screening the color features of the image cut by the cutter, and a projection analyzer for performing coordinate-projection analysis on the image data classified by the fuzzy classifier.
3. The unmanned vehicle-mounted traffic signal lamp identification system of claim 1, wherein the transformation prediction module acquires the vehicle motion state through a connection to an MCU of the vehicle, acquires the geographic coordinates from the vehicle to a signal lamp through a GPS system, and estimates the relative motion state of the signal lamp in the dynamic traffic-signal video shot by the camera assembly, thereby constructing a motion model; under this motion model, the transformation prediction module analyzes by means of detection and identification of consecutive frames of the dynamic traffic-signal video.
4. The unmanned on-board traffic signal identification system of claim 3, wherein the transformation prediction module uses detection and identification of consecutive frames of the dynamic traffic-signal video to analyze specific parameters, including the signal-lamp position, height and orientation relative to the lane direction, used for the installation of signal lights.
5. The unmanned vehicle-mounted traffic signal lamp identification system according to claim 1, wherein the detection identification module further comprises a mounting housing (1), and a main camera assembly (2), a sub-camera assembly (8) and a color sensor (4) embedded on the mounting housing (1), wherein an image separation mechanism (3) for physically reducing and enlarging the pixels of the picture taken by the main camera assembly (2) is arranged around the mounting housing (1);
the image separation mechanism (3) comprises flexible shielding strips (301) arranged around the mounting housing (1) that extend and retract along the shooting direction of the main camera assembly (2); arc-shaped guide slot holes (302) through which the flexible shielding strips (301) extend are provided at the four corners of the front surface of the mounting housing (1), and one end of each flexible shielding strip (301) is connected with a driving device (4).
6. The unmanned on-board traffic signal light recognition system of claim 5, wherein a transparent plate (303) coated with a film is disposed in the middle of one of the flexible shielding strips (301), and when the flexible shielding strip (301) is fully extended, the front end of the flexible shielding strip (301) can just extend into the arc-shaped guide slot hole (302) opposite to the front end.
7. The unmanned on-board traffic signal light identification method according to any one of claims 1 to 6, comprising the steps of:
S100, extracting a candidate region from the image acquired by the camera module using the color and shape characteristics of the traffic light;
S200, reducing the image resolution of the candidate region with an image down-sampling algorithm to obtain a low-resolution pixel group;
S300, assigning each pixel of the low-resolution pixel group a color-class label with a linear color classifier, then applying an image separation algorithm to obtain signal-lamp candidate regions of different color classes;
S400, using the signal-lamp candidate regions of different color classes from the previous frames acquired by the camera module as prior information, predicting the position of the traffic signal in the next frame, and repeating steps S200 and S300 to output the recognition result.
8. The method according to claim 7, wherein the linear color classifier is an HSV color-space-based classifier, and each pixel of the low-resolution pixel group is assigned a color-class label from four classes: red, yellow, green and other colors;
in S300, the low-resolution pixel group obtained by the image downsampling algorithm is stored in the gray scale map, and different color category labels are marked with different gray scale values.
9. The method of claim 7, wherein in S400, the distribution of the positions of the traffic lights in the image is estimated in real time based on a probability model by using the prior information of a plurality of consecutive frames, the traffic light positions and the lane orientation parameters based on a GPS dotting technique are added, and the traffic light positions and the lane orientation parameters are packaged and stored in a GIS module with two parts of off-line data extraction and on-line data extraction, and are extracted from the GIS module as the prior information when the traffic light passes through the same intersection next time.
10. The method for identifying the unmanned on-vehicle traffic signal lamp of claim 9, wherein in step S400, the predicted position of the traffic signal lamp of the next frame in the image is processed into a trigger signal for another group of camera modules to perform focusing high-definition shooting, matching is performed according to the length-width ratio and the size of the candidate regions of the signal lamp of different color categories, and pixels of the same color label in the candidate regions of the signal lamp of different color categories which meet the matching result are connected by a rectangular frame as an output result.
CN202010416680.0A 2020-05-15 2020-05-15 Unmanned vehicle-mounted traffic signal lamp identification system and method Active CN111582216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010416680.0A CN111582216B (en) 2020-05-15 2020-05-15 Unmanned vehicle-mounted traffic signal lamp identification system and method

Publications (2)

Publication Number Publication Date
CN111582216A true CN111582216A (en) 2020-08-25
CN111582216B CN111582216B (en) 2023-08-04

Family

ID=72118925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010416680.0A Active CN111582216B (en) 2020-05-15 2020-05-15 Unmanned vehicle-mounted traffic signal lamp identification system and method

Country Status (1)

Country Link
CN (1) CN111582216B (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233415A (en) * 2020-09-04 2021-01-15 南京航空航天大学 Traffic signal lamp recognition device for unmanned driving
CN113065466A (en) * 2021-04-01 2021-07-02 安徽嘻哈网络技术有限公司 Traffic light detection system for driving training based on deep learning
CN113642521A (en) * 2021-09-01 2021-11-12 东软睿驰汽车技术(沈阳)有限公司 Traffic light identification quality evaluation method and device and electronic equipment
CN113989771A (en) * 2021-10-26 2022-01-28 中国人民解放***箭军工程大学 Traffic signal lamp identification method based on digital image processing
US11403069B2 (en) 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11487288B2 (en) 2017-03-23 2022-11-01 Tesla, Inc. Data synthesis for autonomous control systems
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
CN116152784A (en) * 2023-04-21 2023-05-23 深圳市夜行人科技有限公司 Signal lamp early warning method and system based on image processing
US11665108B2 (en) 2018-10-25 2023-05-30 Tesla, Inc. QoS manager for system on a chip communications
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11734562B2 (en) 2018-06-20 2023-08-22 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11748620B2 (en) 2019-02-01 2023-09-05 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11790664B2 (en) 2019-02-19 2023-10-17 Tesla, Inc. Estimating object properties using visual image data
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11841434B2 (en) 2018-07-20 2023-12-12 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11893774B2 (en) 2018-10-11 2024-02-06 Tesla, Inc. Systems and methods for training machine models with augmented data
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US12014553B2 (en) 2019-02-01 2024-06-18 Tesla, Inc. Predicting three-dimensional features for autonomous driving

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913041A (en) * 2016-04-27 2016-08-31 浙江工业大学 Pre-marked signal lights based identification method
WO2016203616A1 (en) * 2015-06-18 2016-12-22 日産自動車株式会社 Traffic light detection device and traffic light detection method
CN108492601A (en) * 2018-04-13 2018-09-04 济南浪潮高新科技投资发展有限公司 A kind of vehicle DAS (Driver Assistant System) and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fu Qiang (付强), "Research on Traffic Light Recognition Methods for Intelligent Vehicles" *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11487288B2 (en) 2017-03-23 2022-11-01 Tesla, Inc. Data synthesis for autonomous control systems
US12020476B2 (en) 2017-03-23 2024-06-25 Tesla, Inc. Data synthesis for autonomous control systems
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11403069B2 (en) 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11797304B2 (en) 2018-02-01 2023-10-24 Tesla, Inc. Instruction set architecture for a vector computational unit
US11734562B2 (en) 2018-06-20 2023-08-22 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11841434B2 (en) 2018-07-20 2023-12-12 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11983630B2 (en) 2018-09-03 2024-05-14 Tesla, Inc. Neural networks for embedded devices
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11893774B2 (en) 2018-10-11 2024-02-06 Tesla, Inc. Systems and methods for training machine models with augmented data
US11665108B2 (en) 2018-10-25 2023-05-30 Tesla, Inc. QoS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11908171B2 (en) 2018-12-04 2024-02-20 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US12014553B2 (en) 2019-02-01 2024-06-18 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US11748620B2 (en) 2019-02-01 2023-09-05 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US11790664B2 (en) 2019-02-19 2023-10-17 Tesla, Inc. Estimating object properties using visual image data
CN112233415A (en) * 2020-09-04 2021-01-15 南京航空航天大学 Traffic signal lamp recognition device for unmanned driving
CN113065466B (en) * 2021-04-01 2024-06-04 安徽嘻哈网络技术有限公司 Deep learning-based traffic light detection system for driving training
CN113065466A (en) * 2021-04-01 2021-07-02 安徽嘻哈网络技术有限公司 Traffic light detection system for driving training based on deep learning
CN113642521B (en) * 2021-09-01 2024-02-09 东软睿驰汽车技术(沈阳)有限公司 Traffic light identification quality evaluation method and device and electronic equipment
CN113642521A (en) * 2021-09-01 2021-11-12 东软睿驰汽车技术(沈阳)有限公司 Traffic light identification quality evaluation method and device and electronic equipment
CN113989771A (en) * 2021-10-26 2022-01-28 中国人民解放***箭军工程大学 Traffic signal lamp identification method based on digital image processing
CN113989771B (en) * 2021-10-26 2024-07-19 中国人民解放***箭军工程大学 Traffic signal lamp identification method based on digital image processing
CN116152784A (en) * 2023-04-21 2023-05-23 深圳市夜行人科技有限公司 Signal lamp early warning method and system based on image processing

Also Published As

Publication number Publication date
CN111582216B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN111582216A (en) Unmanned vehicle-mounted traffic signal lamp identification system and method
CN107729801B (en) Vehicle color recognition system based on multitask deep convolution neural network
CN109740478B (en) Vehicle detection and identification method, device, computer equipment and readable storage medium
CN108108761B (en) Rapid traffic signal lamp detection method based on deep feature learning
CN107506763B (en) Multi-scale license plate accurate positioning method based on convolutional neural network
Lafuente-Arroyo et al. Traffic sign shape classification evaluation I: SVM using distance to borders
CN107622502B (en) Path extraction and identification method of visual guidance system under complex illumination condition
Rotaru et al. Color image segmentation in HSI space for automotive applications
CN103093249B (en) A kind of taxi identification method based on HD video and system
Romdhane et al. An improved traffic signs recognition and tracking method for driver assistance system
Nandi et al. Traffic sign detection based on color segmentation of obscure image candidates: a comprehensive study
Le et al. Real time traffic sign detection using color and shape-based features
CN111027475A (en) Real-time traffic signal lamp identification method based on vision
CN112464731B (en) Traffic sign detection and identification method based on image processing
Maldonado-Bascon et al. Traffic sign recognition system for inventory purposes
CN111860509A (en) Coarse-to-fine two-stage non-constrained license plate region accurate extraction method
Yao et al. Coupled multivehicle detection and classification with prior objectness measure
CN115424217A (en) AI vision-based intelligent vehicle identification method and device and electronic equipment
CN105975949A (en) Visual-information-based automobile identification method
Flores-Calero et al. Ecuadorian traffic sign detection through color information and a convolutional neural network
CN112183427B (en) Quick extraction method for arrow-shaped traffic signal lamp candidate image area
CN116704490B (en) License plate recognition method, license plate recognition device and computer equipment
Kim Detection of traffic signs based on eigen-color model and saliency model in driver assistance systems
Tilakaratna et al. Image analysis algorithms for vehicle color recognition
Gerhardt et al. Neural network-based traffic sign recognition in 360° images for semi-automatic road maintenance inventory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant