CN112150364B - Pairing and splicing method for split type candidate image areas of arrow-shaped traffic signal lamp - Google Patents


Info

Publication number
CN112150364B
CN112150364B (application CN202011079511.9A)
Authority
CN
China
Prior art keywords
arrow
candidate image
traffic signal
signal lamp
lamp
Prior art date
Legal status
Active
Application number
CN202011079511.9A
Other languages
Chinese (zh)
Other versions
CN112150364A (en)
Inventor
钟铭恩
汤世福
Current Assignee
Xiamen University of Technology
Original Assignee
Xiamen University of Technology
Priority date
Filing date
Publication date
Application filed by Xiamen University of Technology
Priority to CN202011079511.9A
Publication of CN112150364A
Application granted
Publication of CN112150364B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pairing and splicing method for split-type candidate image areas of an arrow-shaped traffic signal lamp, wherein the candidate image areas comprise a pointing-arrow candidate image set BinaryArrows and a tail-straight-line candidate image set BinaryLines of the arrow-shaped traffic signal lamp. The images in BinaryArrows and BinaryLines are paired one by one; let the circumscribed rectangle of the paired image from BinaryArrows be R1k and the circumscribed rectangle of the paired image from BinaryLines be R2k. The pairing constraints are: the intersection of R1k and R2k must not be empty; exactly two vertices of R2k lie inside R1k, and the line connecting these two vertices is the shorter side of R2k; and the ratios of the area and perimeter of R1k and of R2k to the area and perimeter of their union each lie within preset variation-range thresholds. If the constraints are met, the pairing succeeds, and the two successfully paired images are spliced to serve as a lamp-body candidate image area. The invention fully considers the actual imaging of split arrow lamps and greatly improves the accuracy of image identification.

Description

Pairing and splicing method for split type candidate image areas of arrow-shaped traffic signal lamp
Technical Field
The invention relates to the technical field of intelligent driving assistance and unmanned driving of vehicles, and in particular to a pairing and splicing method for candidate image areas of arrow-shaped traffic signal lamps that image as split parts.
Background
Automatic traffic-light recognition, especially automatic red-light recognition, is one of the key supporting technologies for vehicle safe-driving assistance and unmanned driving, and an important component of vehicle traffic-environment perception. Existing recognition schemes fall into four main categories: schemes based on vehicle-road communication, on perception of surrounding-vehicle states, on GPS navigation, and on vehicle-mounted vision. Although fusion of multiple schemes is the trend of future development, vehicle-vision-based recognition remains a research hotspot in academia and industry and a key field of intelligent transportation systems. Because the red light is a prohibition signal, it matters most for traffic order and safety, so detecting red lights from traffic-environment video images quickly, accurately and reliably in real time is a basic requirement of engineering application.
Existing automatic red-light image detection techniques fall into two categories: traditional image recognition and deep-learning-based recognition. The former extracts red-light candidate image areas based on features such as color and geometry and offers good real-time performance. The latter requires a large number of image samples to train a model and achieves good accuracy, but the trained neural-network model has a huge number of parameters, places high demands on hardware processing speed, is hard to deploy on vehicles, and has poor real-time performance. These factors directly restrict the application of vehicle-vision-based automatic red-light detection in intelligent driving assistance and unmanned driving.
Because arrow-shaped traffic signal lamps actually have a split structure, existing traffic-light image extraction techniques have the following deficiencies in practical recognition:
in fact, the manufacture, the arrangement and the installation of the traffic lights in China are respectively specified in standards such as national standard GB14886-2006 set and installation standard for road traffic signal lights, national standard GB14886-2016 set and installation standard for road traffic signal lights, GB14887-2011 set road traffic signal lights and GB14887-2016 set road traffic signal lights. According to these standards, a conventional arrow-shaped traffic signal lamp mainly has two kinds of a left turn prohibition direction lamp with an arrow facing left and a straight direction prohibition lamp with an arrow facing up, as shown in fig. 1.
Ideally, the arrow-shaped signal appears in the video image in the split form shown in fig. 1, comprising two separate parts: a "pointing arrow" and a "tail line". In reality, however, the image may merge into one piece because of halo, excessive distance and so on, as shown in fig. 2. In prior studies, some scholars did not consider that arrow lamps actually consist of two separate parts, a separation that is especially evident in close-range imaging; others considered the problem but mostly merged the parts with morphological operations, and choosing a suitable morphological kernel size is difficult when handling split lamp bodies imaged at different distances, at different angles and against complex, changeable backgrounds. This limits recognition accuracy.
In addition, images of arrow-shaped traffic signal lamps deform in various ways because the vehicle's position relative to the lamp body changes randomly in roll, pitch and yaw while driving. Without a sound basis for setting the specific feature parameters, recognition accuracy suffers.
Disclosure of Invention
The invention aims to provide a pairing and splicing method for split-type candidate image areas of an arrow-shaped traffic signal lamp that fully considers the actual imaging of split arrow lamps.
Another technical problem solved by the invention is to provide a method for obtaining reasonable geometric-feature thresholds that accounts for randomly changing roll, pitch and yaw of the vehicle, so as to reduce both the miss rate and the interference rate.
In order to solve the technical problems, the technical solution of the invention is as follows:
a pairing and splicing method for a split type candidate image area of an arrow-shaped traffic signal lamp is disclosed, wherein the candidate image area comprises a pointing arrow candidate image set Binarylarrows and a tail straight line candidate image set BinaryLines of the arrow-shaped traffic signal lamp; the images in the set BinaryArrows and the set BinaryLines are paired one by one, and the external rectangle of the paired images in the set BinaryArrows is set as R1kThe circumscribed rectangle of the paired images in the set BinaryLines is R2kThe constraint conditions of pairing are as follows: r1kAnd R2kThe intersection of (a) cannot be empty; r2kWith and only two vertices at R1kInside, and the line connecting the two vertices is R2kThe shorter side of (2); r1k、R2kThe ratio of the area and the perimeter of the filter to the area and the perimeter of the union of the area and the perimeter of the filter is respectively positioned in a set change range threshold; if the constraint conditions are met, the pairing is successful, otherwise, the pairing is unsuccessful, and the next group of pairing is carried out until the pairing is completed; and splicing the two successfully matched images to serve as a lamp body candidate image and adding the lamp body candidate image into the arrow-shaped traffic signal lamp candidate image set BinaryLight.
Preferably, the candidate image regions are extracted as follows: extract all outer contour lines related to lit areas of arrow-shaped traffic signal lamps in the original image to be identified, compute the aspect ratio of each circumscribed rectangle, and divide the images, according to preset aspect-ratio variation thresholds, into the arrow-shaped traffic-signal-lamp candidate image set BinaryLights, the pointing-arrow candidate image set BinaryArrows, and the tail-straight-line candidate image set BinaryLines.
Preferably, the threshold is obtained as follows: set the camera mounting position in the vehicle as the origin O, the center coordinates of the traffic signal lamp as (x, y, z), and the roll, pitch and yaw angles of the vehicle camera axis relative to the vertical line through the lamp-face center as δ, φ and ω respectively; then, according to the standard dimensions of the traffic signal lamp, obtain through spatial coordinate transformation the variation formulas of the corresponding geometric feature parameters (area ratio, perimeter ratio or width-to-height ratio) of each kind of arrow-shaped traffic signal lamp; then, under constraint conditions on x, y, z, δ, φ and ω that cover the driving conditions of the great majority of vehicles, obtain the variation interval of each lamp body's feature parameters and hence the threshold.
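As a minimal sketch of this threshold-derivation idea, the snippet below samples an assumed envelope of pitch and yaw angles and takes the min/max of one feature, the whole-lamp width-to-height ratio. The cosine foreshortening model, the ideal ratio `MU0_IDEAL` and the angle ranges are all simplifying assumptions, not the patent's actual spatial transform or constraint values.

```python
import math

MU0_IDEAL = 1.0                          # hypothetical ideal aspect ratio

def frange(rng, steps):
    lo, hi = rng
    return [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]

def mu0(pitch_deg, yaw_deg):
    # Width foreshortens with yaw, height with pitch (crude small-object
    # approximation of the projection).
    w = MU0_IDEAL * math.cos(math.radians(yaw_deg))
    h = math.cos(math.radians(pitch_deg))
    return w / h

def threshold_interval(pitch_range=(-15, 15), yaw_range=(-25, 25), steps=50):
    # Sample the assumed driving-condition envelope and return the
    # feature's variation interval, i.e. the threshold.
    vals = [mu0(p, y) for p in frange(pitch_range, steps)
                      for y in frange(yaw_range, steps)]
    return min(vals), max(vals)
```

Any observed ratio outside the returned interval can then be rejected as a non-lamp region.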
Preferably, the constraint conditions are given by the inequalities of the original document, together with the auxiliary expressions they reference and the conditions those expressions must satisfy (these formulas appear only as images in the source and are not reproduced here).
preferably, the images in the candidate image set BinaryLights are further verified and the type is judged; the method comprises the following steps: (1) carrying out binarization processing on each candidate image area in the BinaryLights set, then dividing the candidate image area into a plurality of subblocks, extracting the pixel density characteristic of each subblock, and judging whether the candidate image area is a lamp body and the category of the lamp body by using a pre-trained primary support vector machine; (2) and (2) expanding each binary candidate image area in the set BinaryLights screened in the step (1) according to a certain expansion coefficient, cutting out the same area from the original image SrcImage according to the expanded candidate image area, extracting HOG characteristics, and further verifying by using a pre-trained secondary support vector machine.
Preferably, the sample images for the first-stage and second-stage support vector machines are manually labeled with their categories and divided into five classes: false lamp bodies, and arrow lamp heads facing left, right, up and down; training samples are constructed from the corresponding features and categories.
Preferably, in step (2) the candidate image region is expanded as follows: taking the center of the circumscribed rectangle of the candidate image area as the base point, extend it in the width and height directions with an expansion coefficient of 2.1 to obtain the expanded candidate image area.
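A sketch of this expansion step, assuming rectangles given as (x, y, w, h); clipping the grown rectangle to the image bounds is an added safeguard not spelled out in the text.

```python
def expand_rect(rect, img_w, img_h, coeff=2.1):
    # Grow (x, y, w, h) about its center by `coeff` in width and height,
    # then clip to the image bounds (simple truncation to integers).
    x, y, w, h = rect
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * coeff, h * coeff
    nx = max(0, int(cx - nw / 2.0))
    ny = max(0, int(cy - nh / 2.0))
    nx2 = min(img_w, int(cx + nw / 2.0))
    ny2 = min(img_h, int(cy + nh / 2.0))
    return (nx, ny, nx2 - nx, ny2 - ny)
```

The expanded rectangle is what gets cut out of the original image SrcImage before HOG extraction.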
Preferably, the HOG feature vector is extracted as follows: (1) scale the candidate lamp-body image to 40 × 40 pixels; (2) apply Gamma correction; (3) compute the gradient at each pixel of the image; (4) divide the image into 5 × 5-pixel cells; (5) compute the gradient histogram of each cell; (6) group 2 × 2 cells into a block, compute the HOG features of each block and concatenate them, finally obtaining a 1764-dimensional vector.
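The stated 1764-dimensional result can be checked arithmetically; the 9 orientation bins per cell are an assumption (HOG's common default), since the text does not state the bin count.

```python
# 40x40 image, 5x5-pixel cells, 2x2-cell blocks slid one cell at a time.
IMG, CELL, BLOCK = 40, 5, 2
BINS = 9   # assumed orientation-bin count (not stated in the text)

cells_per_side = IMG // CELL                      # 8 cells per side
blocks_per_side = cells_per_side - BLOCK + 1      # 7 block positions
dims = blocks_per_side ** 2 * BLOCK ** 2 * BINS   # 49 blocks * 4 cells * 9 bins

print(dims)  # 1764, matching the dimension stated in the text
```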
With this scheme, the invention fully considers that an arrow lamp actually consists of two separate parts, a separation that is especially obvious in close-range imaging, and provides a method for pairing and splicing the split images, which can greatly improve the accuracy of image identification.
In addition, the method fully considers the randomly changing roll, pitch and yaw of the vehicle relative to the lamp body, derives the geometric-feature-parameter ranges of arrow-shaped traffic signal lamps from spatial transformation, and finally gives a complete method for quickly extracting candidate image areas of arrow-shaped traffic signal lamps from a static image. It can improve both the accuracy of traditional image-recognition methods and the real-time performance of deep-learning-based recognition schemes for arrow-shaped traffic signal lamps.
Furthermore, candidate lamp-body images can be verified and classified by a two-stage support-vector-machine scheme: the first stage processes binarized images and is therefore computationally efficient, and because only the images it passes reach the second stage, the number of images requiring high-dimensional feature extraction is reduced, lowering the total time; the second stage extracts HOG features from the original color image, which improves judgment accuracy.
The arrow-shaped traffic-signal-lamp candidate-area images finally obtained by the method can feed both traditional recognition and deep-learning recognition, acting much like a preprocessing filter: they greatly reduce the amount of image data to be processed while retaining good real-time performance, robustness and adaptability, and thus have good application value. The method applies to arrow-shaped red, yellow and green lamps. Traffic-light recognition in particular demands strong real-time performance, and fewer candidate objects improve it, which is the original intention of this filtering algorithm.
Drawings
FIG. 1 is an image of two typical split arrow-shaped traffic signals;
FIG. 2 is an actual one-piece image of two typical arrow-shaped traffic signals;
FIG. 3 is a size-parameter diagram of a left-turn arrow lamp in "Road traffic signal lamps" (GB14887-2011);
FIG. 4 is a diagram of the ideal vehicle-lamp position relationship;
FIG. 5 is a schematic view of the body attitude during vehicle travel;
FIG. 6 is a diagram showing a vehicle-lamp position relationship during vehicle running;
fig. 7 is a flowchart of the method for extracting candidate image regions of arrow-shaped traffic signal lamps according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
The invention discloses a pairing and splicing method for split-type candidate image areas of an arrow-shaped traffic signal lamp, wherein the extracted candidate image areas comprise a pointing-arrow candidate image set BinaryArrows and a tail-straight-line candidate image set BinaryLines of the arrow-shaped traffic signal lamp. The images in the sets BinaryArrows and BinaryLines are paired one by one; let the circumscribed rectangle of the paired image from BinaryArrows be R1k and the circumscribed rectangle of the paired image from BinaryLines be R2k. The pairing constraints are: the intersection of R1k and R2k must not be empty; exactly two vertices of R2k lie inside R1k, and the line connecting these two vertices is the shorter side of R2k; and the ratios of the area and perimeter of R1k and of R2k to the area and perimeter of their union each lie within preset variation-range thresholds. If the constraints are met, the pairing succeeds; otherwise it fails and the next pair is tried, until pairing is complete. The two successfully paired images are spliced to serve as a lamp-body candidate image and added to the arrow-shaped traffic-signal-lamp candidate image set BinaryLights.
In fact, the method of the invention can constitute one or several steps of a complete method for extracting candidate image areas of arrow-shaped traffic signals; for convenience of description, it is presented here as part of that extraction method. The extraction method (shown in fig. 7) may include the following steps:
Firstly, image preprocessing: the original image SrcImage to be recognized is preprocessed as follows: oversized pictures are reduced so that they are no more than 1280 pixels wide and 720 pixels tall, and the image is suitably Gaussian-smoothed.
Secondly, geometric-shape variation thresholds of the arrow-shaped traffic signal lamp are preset, comprising: (1) the variation range of the aspect ratio μ0 of the circumscribed rectangle R0 of the whole arrow-shaped traffic signal lamp; (2) for split imaging, the variation ranges of the aspect ratio μ1 of the circumscribed rectangle R1 of the pointing arrow, of the ratio λ10 of the area of R1 to the area of R0, and of the ratio ρ10 of the perimeter of R1 to the perimeter of R0; (3) for split imaging, the variation ranges of the aspect ratio μ2 of the circumscribed rectangle R2 of the tail line, of the ratio λ20 of the area of R2 to the area of R0, and of the ratio ρ20 of the perimeter of R2 to the perimeter of R0.
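The routing of lit-region bounding boxes into the three candidate sets by aspect ratio can be sketched as follows; the numeric ranges are illustrative placeholders, not the preset thresholds μ0, μ1, μ2 of the text.

```python
# Placeholder aspect-ratio ranges for each candidate set.
RANGES = {
    "BinaryLights": (0.8, 1.3),   # whole lamp body, roughly square
    "BinaryArrows": (0.4, 0.8),   # pointing-arrow part alone
    "BinaryLines":  (2.0, 6.0),   # tail line, much wider than tall
}

def classify(boxes):
    sets = {name: [] for name in RANGES}
    for box in boxes:
        x, y, w, h = box
        mu = w / h                        # width-to-height ratio
        for name, (lo, hi) in RANGES.items():
            if lo <= mu <= hi:
                sets[name].append(box)
                break                     # each box joins one set at most
    return sets
```

Boxes whose ratio falls in none of the ranges are discarded outright, which is what makes this step act as a fast pre-filter.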
The above geometry-variation thresholds can be taken from the prior art, such as the papers discussed in the background. The invention adopts the following method to improve accuracy.
Road traffic lights have definite size standards, both national and international. "Road traffic signal lamps" (GB14887-2011) specifies in detail the shapes and sizes of arrow lamps, circular lamps (also called full-screen lamps) and forked lamps. Taking the left-arrow lamp of fig. 3 as an example, the width l0 of the lighting area, the overall height h0 and width w0 of the lamp body, and the heights h1, h2 and widths w1, w2 of the two components of the lamp body, the head "<" and the tail "-", all have well-defined sizes.
For standard lamp-body dimensions the geometric parameters μ0, μ1, μ2, λ10, λ20, ρ10, ρ20 above are all constants; in an actual photo, however, they vary within a certain range as the direction and distance between the center of the signal-lamp body and the camera change. This step obtains their reasonable variation ranges, i.e. the geometric-shape variation thresholds of the arrow-shaped traffic signal lamp.
In the ideal situation shown in fig. 4, the center of the traffic signal lamp lies on the Z-axis along the vehicle driving direction, the lamp-body plane is parallel to the X-Y plane, the origin is the camera mounting position, and the optical axis of the imaging lens is parallel to the Z-axis. The X and Y coordinates of the signal lamp are then both 0, and the distance z to the vehicle affects only the size of the signal-lamp image, without changing its geometric shape. In this case the geometric feature parameters of the left-arrow lamp are as shown in table 3.
TABLE 3 characteristics of the shape of a left-turn arrow lamp in the ideal case
[Table 3: image not reproduced]
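As a rough illustration of how the constant ideal-case parameters of Table 3 arise from the standard lamp dimensions, the sketch below computes μ0, μ1, μ2, λ10, λ20, ρ10, and ρ20 from the widths and heights of the whole lamp, the arrow head, and the tail. The numeric dimensions are illustrative placeholders, not the actual GB 14887-2011 values, and the function name is ours, not the patent's.

```python
# Ideal-case geometric parameters (cf. Table 3).
# All dimensions are hypothetical example values.

def ideal_parameters(w0, h0, w1, h1, w2, h2):
    """Return (mu0, mu1, mu2, lam10, lam20, rho10, rho20) for a lamp
    whose overall bounding box R0 is w0 x h0, whose arrow-head ("<")
    box R1 is w1 x h1, and whose tail ("-") box R2 is w2 x h2."""
    mu0 = w0 / h0                       # aspect ratio of R0
    mu1 = w1 / h1                       # aspect ratio of R1 (head)
    mu2 = w2 / h2                       # aspect ratio of R2 (tail)
    lam10 = (w1 * h1) / (w0 * h0)       # area ratio R1 / R0
    lam20 = (w2 * h2) / (w0 * h0)       # area ratio R2 / R0
    rho10 = (w1 + h1) / (w0 + h0)       # perimeter ratio R1 / R0
    rho20 = (w2 + h2) / (w0 + h0)       # perimeter ratio R2 / R0
    return mu0, mu1, mu2, lam10, lam20, rho10, rho20

# Placeholder dimensions (units arbitrary):
params = ideal_parameters(w0=240, h0=240, w1=150, h1=240, w2=120, h2=60)
```

For a fixed lamp model these values never change; only the imaging geometry of the next paragraphs makes them vary.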
However, in reality the signal lamp does not lie on the Z-axis but is translated relative to it. During actual driving, the random roll (vehicle body rotating about the Z-axis), pitch (rotating about the X-axis), and yaw (rotating about the Y-axis) between the vehicle and the signal lamp, which arise for various reasons, should further be considered, as shown in Fig. 5. The positional relationship between the vehicle and the signal lamp can then be expressed simply by Fig. 6.
Similarly, set the origin O at the camera mounting position and the signal-lamp center coordinates at (x, y, z). Assuming the vehicle-mounted camera is installed pointing approximately straight ahead along the horizontal direction of the vehicle body, any angular deviation of the camera can be coupled into the roll, pitch, and yaw angles of the vehicle body relative to the signal lamp and needs no special treatment. Finally, the calculation formulas of each geometric-shape characteristic parameter of the left-arrow lamp with respect to the vehicle body's roll angle δ, pitch angle, and yaw angle ω can be obtained through spatial coordinate transformation, as shown in Table 4.
Table 4 Shape-characteristic calculation formulas of the left-turn arrow lamp under real conditions
[Table 4: image not reproduced]
wherein:
[formula image not reproduced]
and satisfies:
[formula image not reproduced]
and the geometric characteristic parameter change formulas of the right arrow lamp, the upper arrow lamp and the lower arrow lamp can be obtained in the same way. Circular lamps and fork lamps also allow the calculation formulas to be obtained because of the relative simplicity of the geometric symmetry algorithm. The calculation formulas are related to the central coordinates (x, y, z) of the arrow-shaped traffic signal lamp and the roll angle delta and the pitch angle of the vehicle camera relative to the arrow-shaped traffic signal lamp
Figure BDA0002718154810000111
And the yaw angle omega, and is also related to the specific model size of the arrow-shaped traffic signal lamp.
Finally, given a set of constraint conditions θ (covering the spatial transformation range under most vehicle driving conditions; the specific conditions are given by a formula image not reproduced here), the variation interval of each lamp body's characteristic parameters is obtained as shown in Table 5.
TABLE 5 Interval thresholds of Signal geometry characteristics under Condition θ
[Table 5: image not reproduced]
Taking a Φ300 arrow lamp as an example, for the left-facing red arrow lamp indicating that left turns are prohibited, the parameters can be calculated according to the following formulas:
[seven formula images, one for each of the geometric parameters μ0, μ1, μ2, λ10, λ20, ρ10, and ρ20, not reproduced]
For the upward-facing red arrow lamp indicating that straight-ahead travel is prohibited, the calculation formulas are:
[seven formula images, one for each of the geometric parameters μ0, μ1, μ2, λ10, λ20, ρ10, and ρ20, not reproduced]
wherein δ, the pitch angle, and ω are respectively the roll angle, pitch angle, and yaw angle of the vehicle camera's axis relative to the vertical line passing through the center of the red arrow lamp surface, and k1 and k2 are respectively the horizontal and vertical compression ratios caused by the position offset (x, y, z) of the red arrow lamp surface center relative to the vehicle camera.
Under the constraints δ ≤ 30°, ω ≤ 45°, 0.8 ≤ k1 ≤ 1, 0.8 ≤ k2 ≤ 1, k1·cos ω ≥ 0.7, together with further conditions on the pitch angle given by formula images not reproduced here (these constraints cover the spatial transformation range under most vehicle driving conditions), the following intervals hold. For the left-facing red arrow lamp prohibiting left turns and the right-facing red arrow lamp prohibiting right turns: μ0 varies within [0.576, 1.917], μ1 within [0.418, 1.575], μ2 within [0.787, 6.854], λ10 within [0.668, 0.893], λ20 within [0.096, 0.303], ρ10 within [0.782, 0.961], and ρ20 within [0.326, 0.581]. For the upward-facing red arrow lamp prohibiting straight-ahead travel and the downward-facing red arrow lamp: μ0 varies within [0.522, 1.736], μ1 within [0.635, 2.393], μ2 within [0.146, 1.270], λ10 within [0.668, 0.893], λ20 within [0.096, 0.303], ρ10 within [0.782, 0.961], and ρ20 within [0.326, 0.581].
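The interval thresholds above lend themselves to a small lookup structure. The sketch below stores only the numeric intervals taken from the text; the dictionary layout, key names, and helper function are our own, not the patent's.

```python
# Interval thresholds of the geometric characteristic parameters
# (values taken from the text; structure and names are assumptions).

THRESHOLDS = {
    # left- and right-facing arrow lamps
    "horizontal": {
        "mu0": (0.576, 1.917), "mu1": (0.418, 1.575), "mu2": (0.787, 6.854),
        "lam10": (0.668, 0.893), "lam20": (0.096, 0.303),
        "rho10": (0.782, 0.961), "rho20": (0.326, 0.581),
    },
    # up- and down-facing arrow lamps
    "vertical": {
        "mu0": (0.522, 1.736), "mu1": (0.635, 2.393), "mu2": (0.146, 1.270),
        "lam10": (0.668, 0.893), "lam20": (0.096, 0.303),
        "rho10": (0.782, 0.961), "rho20": (0.326, 0.581),
    },
}

def within_thresholds(params: dict, orientation: str) -> bool:
    """True when every measured parameter in `params` lies inside the
    corresponding interval for the given lamp orientation."""
    limits = THRESHOLDS[orientation]
    return all(lo <= params[name] <= hi
               for name, (lo, hi) in limits.items()
               if name in params)
```

A candidate region is kept only while all of its measured parameters stay inside every relevant interval.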
In the prior art, some scholars ignore that the arrow lamp actually consists of two separated parts, a phenomenon especially obvious in short-distance imaging, and use only the characteristic parameter μ0 as the basis for rejecting traffic-light candidate image objects. Others recognize the problem but mostly merge the parts with morphological operations, and choosing a suitable morphological kernel size is difficult when processing split lamp-body images at different distances, at different angles, and against complex, variable backgrounds. Some scholars set the geometric-shape characteristic thresholds by increasing, decreasing, or scaling μ0 and the related parameters, but the effectiveness of that approach remains to be verified. Still others derive thresholds by statistically measuring the distribution of lamp-body geometric features over large image sets, yet such thresholds are limited by the camera mounting angle and by the image sets themselves, which are not necessarily broadly representative, and are therefore seldom referenced.
To verify the validity of the above geometric-feature thresholds, the red and green candidate lamp bodies were filtered on the basis of the image set P12; the results are shown in Table 6, which also gives the filtering results under existing published threshold conditions as a comparison reference.
TABLE 6 traffic light image filtering results under different geometric feature thresholds
[Table 6: image not reproduced]
The results in Table 6 show that filtering the lamp-body images with the shape-feature thresholds proposed by the invention yields the lowest loss rate, 0.13%, better than existing research, while the jam value of 22.7 is the second lowest and differs little from the existing minimum of 22.4.
Third, divide the candidate images using the variation ranges of the thresholds μ0, μ1, and μ2: extract all outermost contour lines of BinaryROIs, traverse them, compute the circumscribed rectangle of each outer contour and its aspect ratio μ, and, according to the variation ranges of μ0, μ1, and μ2, further divide BinaryROIs into the arrow-shaped traffic-signal-lamp candidate image set BinaryLights, the pointing-arrow candidate image set BinaryArrows of the arrow-shaped traffic signal lamp, and the tail straight-line candidate image set BinaryLines of the arrow-shaped traffic signal lamp.
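The third step can be sketched as follows. Here a contour is represented only by its bounding box (x, y, w, h) rather than by an OpenCV contour, the interval constants are the left/right-lamp values derived earlier, and a region may deliberately join several sets because the intervals overlap; the function name is ours.

```python
# Dividing candidate regions by bounding-box aspect ratio.
# A region may join several sets at once: the mu0/mu1/mu2
# intervals overlap, and missed detections are costlier than
# false ones.

MU0 = (0.576, 1.917)   # whole-lamp aspect ratio (left/right lamps)
MU1 = (0.418, 1.575)   # pointing-arrow aspect ratio
MU2 = (0.787, 6.854)   # tail-line aspect ratio

def divide_candidates(regions):
    """regions: iterable of bounding boxes (x, y, w, h).
    Returns (lights, arrows, lines) candidate lists."""
    lights, arrows, lines = [], [], []
    for (x, y, w, h) in regions:
        mu = w / h
        if MU0[0] <= mu <= MU0[1]:
            lights.append((x, y, w, h))   # BinaryLights candidate
        if MU1[0] <= mu <= MU1[1]:
            arrows.append((x, y, w, h))   # BinaryArrows candidate
        if MU2[0] <= mu <= MU2[1]:
            lines.append((x, y, w, h))    # BinaryLines candidate
    return lights, arrows, lines
```

A near-square region falls into all three sets, while a long flat region qualifies only as a tail line; this mirrors the deliberate redundancy described above.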
This step deletes some targets that obviously belong neither to a lamp body nor to an independent lamp-body component. Any retained target may be a complete lamp-body image, an arrow head, or an arrow-lamp tail; since the same image may simultaneously represent several possibilities, missed detections are avoided as far as possible.
Through the above steps, an original image SrcImage to be identified can be regarded as having undergone a preliminary screening of candidate image regions; owing to the split imaging characteristic of the signal lamps, the screened regions include not only whole-signal-lamp image regions but also the pointing-arrow candidate image set BinaryArrows and the tail straight-line candidate image set BinaryLines. Of course, the invention is not limited to the above method of obtaining BinaryArrows and BinaryLines.
Fourth, pair and splice the separated arrow-lamp images: traverse the pointing-arrow candidate image set BinaryArrows and the tail straight-line candidate image set BinaryLines in search of possible pairs, construct the corresponding separated arrow-shaped traffic-signal candidate images, and merge them into the arrow-shaped traffic-signal candidate image set BinaryLights. The pairing process comprises the following steps:
(1) Compute the circumscribed rectangle R1k of one pointing-arrow candidate image from the set BinaryArrows, the circumscribed rectangle R2k of one tail straight-line candidate image from the set BinaryLines, and the circumscribed rectangle R0k of the union of the two image areas;
(2) Judge whether the two pair to form an arrow-shaped traffic signal lamp according to the following constraints; the pairing succeeds if and only if all constraints are met:
constraint 1: r1kAnd R2kThe intersection of (a) cannot be empty;
constraint 2: r2kWith and only two vertices at R1kInside, and the line connecting the two vertices is R2kThe shorter side of (2);
constraint 3: r1kArea of (D) and R0kMust lie at the lambda set in the third step10Between the ranges of variation of (1); r1kAnd R0kMust be located at p set in the third step10Between the ranges of variation of (1);
constraint 4: r2kArea of (D) and R0kMust be located at the lambda set in the third step20Between the ranges of variation of (1); r2kAnd R0kMust be located at p set in the third step20Between the ranges of variation of (1);
(3) If R1k and R2k pair successfully, they are the two independent parts of one arrow-shaped traffic signal lamp; combine the contour lines corresponding to R1k and R2k into one arrow-shaped traffic-signal-lamp candidate image area and merge it into the set BinaryLights;
(4) Return to step (1) and select the next group consisting of a pointing-arrow candidate image and a tail straight-line candidate image that have not yet been successfully paired, until all elements in the sets BinaryArrows and BinaryLines have undergone pairing attempts.
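The pairing steps above can be sketched with axis-aligned rectangles as follows. The helper names are our own, the λ/ρ intervals are passed in rather than hard-coded, and the rectangles stand in for the circumscribed rectangles of real contours.

```python
# Pairing a pointing-arrow box r1 with a tail-line box r2
# (constraints 1-4 of the pairing process). A rectangle is (x, y, w, h).

def rect_vertices(r):
    x, y, w, h = r
    return [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]

def inside(pt, r):
    x, y, w, h = r
    return x <= pt[0] <= x + w and y <= pt[1] <= y + h

def intersects(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def union_rect(a, b):
    x0 = min(a[0], b[0]); y0 = min(a[1], b[1])
    x1 = max(a[0] + a[2], b[0] + b[2]); y1 = max(a[1] + a[3], b[1] + b[3])
    return (x0, y0, x1 - x0, y1 - y0)

def try_pair(r1, r2, lam10, lam20, rho10, rho20):
    """Return the union rectangle R0 if (r1, r2) pass all four
    pairing constraints, else None."""
    if not intersects(r1, r2):                          # constraint 1
        return None
    inside_pts = [p for p in rect_vertices(r2) if inside(p, r1)]
    if len(inside_pts) != 2:                            # constraint 2
        return None
    _, _, w2, h2 = r2
    # the two inner vertices must span the SHORTER side of R2
    dx = abs(inside_pts[0][0] - inside_pts[1][0])
    dy = abs(inside_pts[0][1] - inside_pts[1][1])
    if dx + dy != min(w2, h2):      # edge vertices differ in one axis only
        return None
    r0 = union_rect(r1, r2)
    area = lambda r: r[2] * r[3]
    per = lambda r: 2 * (r[2] + r[3])
    if not (lam10[0] <= area(r1) / area(r0) <= lam10[1]):  # constraint 3
        return None
    if not (rho10[0] <= per(r1) / per(r0) <= rho10[1]):
        return None
    if not (lam20[0] <= area(r2) / area(r0) <= lam20[1]):  # constraint 4
        return None
    if not (rho20[0] <= per(r2) / per(r0) <= rho20[1]):
        return None
    return r0
```

For a left arrow "<-", the tail's left (shorter) edge enters the head's box, so constraint 2 accepts exactly that geometry and rejects, e.g., a tail merely crossing a corner of the head.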
This step is one of the key technologies of the invention: it fully considers the imaging phenomena that may occur with the split arrow lamp and pairs and splices the two parts of a split-imaged arrow lamp, thereby avoiding missed detections.
The BinaryLights set is the set of candidate image areas obtained by segmenting all arrow-shaped traffic lamps from one original image SrcImage. It is worth noting that an extracted candidate area is only a candidate, not necessarily a signal lamp itself; this benefits the fast subsequent judgment and confirmation of arrow-shaped signal lamps. Equivalently, the invention provides a filtering algorithm that screens suspected arrow-shaped traffic-signal-lamp target images from an image quickly and accurately, in preparation for subsequent traffic-light identification. Of course, the images in the BinaryLights set may themselves be traffic lamps and may be used directly as identification results.
And fifthly, further verifying and judging the type of the images in the candidate image set BinaryLights.
Because the variation ranges of the geometric parameters of the whole arrow lamp, the arrow head, and the tail, i.e., μ0, μ1, and μ2, intersect, any candidate image region whose own parameter falls into a given interval is copied into the corresponding set; that is, one candidate region may simultaneously represent several possible lamp bodies or independent lamp-body components and is placed into several sets at once. This is done to avoid missed detection, which has more serious consequences than false detection. Furthermore, these geometric parameters are identical for the left/right and up/down lamp types. The arrow-shaped traffic-light candidate image set BinaryLights therefore contains images that are not complete lamp shapes, and the lamp types still need to be determined, so the images in BinaryLights require further verification and judgment. To this end the invention subsequently uses support vector machines in a two-stage arrangement: the first-stage machine uses low-dimensional pixel-density features of the binarized (black-and-white) candidate lamp-body image to judge whether a lamp body exists and its type, and the second-stage machine uses high-dimensional HOG features of the expanded image area in the original color image for further verification and confirmation, thereby further reducing false detections and misjudgments.
The method comprises the following specific steps:
Step 1: use the class-I SVM1 to determine whether a candidate lamp-body image is a complete lamp body and, if so, its type.
Traverse each element of the arrow-shaped traffic-signal-lamp candidate image set BinaryLights and, based on the width and height of its circumscribed rectangle, divide it into several equal sub-blocks (the same number as used for the sample images). Compute the pixel density of each sub-block, construct an N-dimensional feature vector from these densities, and use it as the input of the class-I SVM1 to judge whether the candidate lamp-body image is a complete lamp body and its orientation type; candidates judged to be non-lamp-bodies are deleted.
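The pixel-density feature of Step 1 can be sketched as below, a minimal NumPy version; splitting the box into sub-blocks via linspace is one reasonable interpretation of "equal divisions", and the function name is ours.

```python
import numpy as np

def pixel_density_features(binary, n=3, m=3):
    """Split a binary (0/1) candidate image into n x m sub-blocks and
    return the per-block pixel densities as a flat vector
    (9-dimensional for the 3 x 3 split used in the text)."""
    h, w = binary.shape
    ys = np.linspace(0, h, n + 1, dtype=int)   # row boundaries
    xs = np.linspace(0, w, m + 1, dtype=int)   # column boundaries
    feats = []
    for i in range(n):
        for j in range(m):
            block = binary[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            feats.append(block.mean() if block.size else 0.0)
    return np.asarray(feats)
```

The resulting 9-dimensional vector is what the class-I SVM1 would receive as input.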
SVM1 is inherently a binary classifier, but its sample set has 5 classes: class 0 is a pseudo lamp body, and classes 1 to 4 are the left-, right-, up-, and down-facing arrow lamps respectively. The invention achieves multi-class classification with the one-versus-rest (OVR) method: in each training round one class serves as the positive samples and the remaining classes as the negative samples, iterating over the classes. A large penalty factor is introduced to handle the asymmetry between the positive and negative sample proportions.
The training sample set of the class-I SVM1 of the invention may be constructed as follows: (1) collect a large number of traffic images containing arrow-shaped traffic signal lamps under various conditions, extract all traffic-signal-lamp candidate images from each image using the method described above to obtain a binarized signal-lamp sample image set C0, and manually label the class T0(i) of each sample image C0(i); T0(i) takes 5 values, where class 0 is a pseudo lamp body and classes 1 to 4 are the left-, right-, up-, and down-facing arrow lamps respectively. (2) Divide each sample image C0(i) into n × m sub-blocks (the number is determined by the actual situation; here 3 × 3), compute the pixel density of each sub-block, and extract the 9-dimensional pixel-density feature vector f0(i) of C0(i). (3) Construct training samples from f0(i) and T0(i). (4) Train the class-I vector machine SVM1 on these samples until convergence.
To improve operating efficiency, the preprocessed image elements may be binarized before this step is performed. In addition, since only qualified candidates undergo subsequent processing, the number of images requiring high-dimensional feature extraction is effectively reduced, lowering the total time consumed.
Step 2: verify using the class-II SVM2.
Take the circumscribed rectangle R0(i) of each image element in the candidate lamp-body image set BinaryLights screened by the class-I SVM1 and, with its center as the base point, extend it appropriately in the width w0(i) and height h0(i) directions to obtain a rectangle R'0(i). Based on R'0(i), segment the same-region image C'0(i) from the original image SrcImage, where the width of R'0(i) is w'0(i) = (1 + α)w0(i), its height is h'0(i) = (1 + β)h0(i), and α and β are extension coefficients. The purpose of extending the image is to make the candidate image include the black backboard around the lit area of the lamp body, so as to fully exploit the inherent characteristics of the traffic-signal-lamp system. Comparison shows that an extension coefficient of 2.1 is preferred for both. Then extract the HOG feature vector f'0(i) of C'0(i) and verify the lamp-body type with the class-II SVM.
The HOG feature vector may be extracted as follows: (1) scale C'0(i) to 40 × 40 pixels; (2) perform Gamma correction; (3) compute the gradient at each image pixel; (4) take each 5 × 5-pixel region as one cell; (5) compute the gradient histogram of each cell; (6) combine every 2 × 2 cells into one block, compute the HOG features of each block, and concatenate them, finally obtaining a 1764-dimensional vector. The parameter values and dimensions above are not the only possible choices, but comparison shows they work better.
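The stated numbers are internally consistent: a 40 × 40 input with 5 × 5-pixel cells gives 8 × 8 cells; 2 × 2-cell blocks slid one cell at a time give 7 × 7 blocks; and with 9 orientation bins (forced by the arithmetic, since 1764 / (7 × 7) = 36 = 4 cells × 9 bins) the vector is 7 × 7 × 4 × 9 = 1764-dimensional. The following is a simplified sketch of that pipeline; Gamma correction is omitted and the block normalization is simplified, so this is not the patent's exact implementation.

```python
import numpy as np

def hog_1764(img):
    """Minimal HOG sketch matching the stated dimensions: 40x40 input,
    5x5-pixel cells (8x8 cells), 9 unsigned orientation bins, 2x2-cell
    blocks slid one cell at a time (7x7 blocks) -> 1764 features."""
    img = img.astype(float)
    gy, gx = np.gradient(img)                    # per-pixel gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation
    cells = np.zeros((8, 8, 9))
    for cy in range(8):
        for cx in range(8):
            m = mag[cy * 5:(cy + 1) * 5, cx * 5:(cx + 1) * 5]
            a = ang[cy * 5:(cy + 1) * 5, cx * 5:(cx + 1) * 5]
            bins = (a // 20).astype(int) % 9     # 9 bins of 20 degrees
            for b in range(9):
                cells[cy, cx, b] = m[bins == b].sum()
    feats = []
    for by in range(7):                          # overlapping 2x2 blocks
        for bx in range(7):
            block = cells[by:by + 2, bx:bx + 2].ravel()   # 36 values
            feats.append(block / (np.linalg.norm(block) + 1e-6))
    return np.concatenate(feats)                 # 49 * 36 = 1764
```

Production code would more likely use an existing HOG implementation; the sketch only makes the dimension bookkeeping of steps (1)-(6) concrete.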
The pre-training sample set of the class-II SVM is constructed as follows: (1) collect a large number of traffic images containing arrow-shaped traffic signal lamps under various conditions, extract all traffic-signal-lamp candidate images from each image using the method described above to obtain a binarized signal-lamp sample image set C0; extend each picture element C0(i) appropriately, segment the same-region image C'0(i) from the original color image based on the extended C0(i), and manually label its class T'0(i); likewise T'0(i) takes 5 values, where class 0 is a pseudo lamp body and classes 1 to 4 are the left-, right-, up-, and down-facing arrow lamps respectively. (2) Extract the feature vector f'0(i) as described above. (3) Construct training samples from f'0(i) and T'0(i). (4) Train the class-II vector machine SVM2 on these samples until convergence.
Step 3: the final judgment and identification criterion is: a signal-lamp candidate image is judged true, and its lamp-body type output, if and only if it passes verification by both SVM stages and the classification results of the two stages agree.
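The fusion criterion of Step 3 reduces to a few lines; the class labels follow the 0-4 convention defined earlier (0 = pseudo lamp body, 1-4 = left/right/up/down arrow lamp), and the function name is ours.

```python
def final_decision(svm1_class, svm2_class):
    """Accept a candidate only when both SVM stages agree on a
    non-background class; otherwise reject (return 0)."""
    if svm1_class == svm2_class and svm1_class != 0:
        return svm1_class      # confirmed lamp-body type
    return 0                   # rejected candidate
```

Requiring agreement of both stages trades a slight loss of recall for a marked reduction in false detections, consistent with the two-stage design.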
The above embodiments are only preferred embodiments of the invention and do not limit its technical scope; any changes and modifications made according to the claims and specification of the invention shall fall within its protection scope.

Claims (8)

1. A pairing and splicing method for split candidate image areas of arrow-shaped traffic signal lamps, characterized in that: the candidate image areas comprise a pointing-arrow candidate image set BinaryArrows and a tail straight-line candidate image set BinaryLines of the arrow-shaped traffic signal lamp; the images in the set BinaryArrows and the set BinaryLines are paired with each other one by one, the circumscribed rectangle of the paired image from the set BinaryArrows being R1k and the circumscribed rectangle of the paired image from the set BinaryLines being R2k; the pairing constraints are: the intersection of R1k and R2k cannot be empty; R2k has exactly two vertices inside R1k, and the line connecting these two vertices is a shorter side of R2k; the ratios of the area and perimeter of R1k, and of R2k, to the area and perimeter of the circumscribed rectangle of their union each lie within set variation-range thresholds; if all constraints are met the pairing succeeds, otherwise the next group is paired, until pairing is complete; the two successfully paired images are spliced as one lamp-body candidate image and added to the arrow-shaped traffic-signal-lamp candidate image set BinaryLights.
2. The pairing and splicing method for split candidate image areas of the arrow-shaped traffic signal lamp according to claim 1, wherein the candidate image areas are extracted as follows: extract all outer contour lines related to the lighting areas of arrow-shaped traffic signal lamps in the original image to be identified, calculate the aspect ratios of their circumscribed rectangles, and divide the images, according to set aspect-ratio change thresholds, into the arrow-shaped traffic-signal-lamp candidate image set BinaryLights, the pointing-arrow candidate image set BinaryArrows of the arrow-shaped traffic signal lamp, and the tail straight-line candidate image set BinaryLines of the arrow-shaped traffic signal lamp.
3. The pairing and splicing method for split candidate image areas of the arrow-shaped traffic signal lamp according to claim 1 or 2, wherein the threshold is obtained by: setting the mounting position of the camera in the vehicle as the origin O and the center coordinates of the traffic signal lamp as (x, y, z), with δ, the pitch angle, and ω denoting the roll, pitch, and yaw angles of the vehicle camera axis relative to the vertical line passing through the surface center of the traffic signal lamp; then obtaining, from the standard sizes of traffic signal lamps and through spatial coordinate transformation, the variation formulas of the corresponding geometric characteristic parameters (area ratio, perimeter ratio, or aspect ratio) of the various arrow-shaped traffic signal lamps; and then, under given constraint conditions on x, y, z, δ, the pitch angle, and ω that cover the driving conditions of the majority of vehicles, obtaining the characteristic-parameter variation interval of each lamp body, and thereby the threshold.
4. The pairing and splicing method for split candidate image areas of the arrow-shaped traffic signal lamp according to claim 3, wherein the constraint conditions are:
[constraint-condition formula image not reproduced]
wherein:
[definition formula images not reproduced]
and satisfy:
[formula image not reproduced]
5. The pairing and splicing method for split candidate image areas of the arrow-shaped traffic signal lamp according to claim 1, 2 or 4, characterized in that the types of the images in the candidate image set BinaryLights are further verified and judged, comprising the steps of: (1) binarizing each candidate image area in the set BinaryLights, dividing it into several sub-blocks, extracting the pixel-density feature of each sub-block, and judging, with a pre-trained first-stage support vector machine, whether the candidate image area is a lamp body and its category; (2) expanding each binarized candidate image area in the set BinaryLights screened in step (1) by a set expansion coefficient, cutting the same area out of the original image SrcImage according to the expanded candidate image area, extracting its HOG feature vector, and performing further verification with a pre-trained second-stage support vector machine.
6. The pairing and splicing method for split candidate image areas of the arrow-shaped traffic signal lamp according to claim 5, wherein: the sample images of the first-stage and second-stage support vector machines are manually labeled with their categories, divided into 5 classes, namely pseudo lamp bodies and left-, right-, up-, and down-facing arrow lamp heads; the training samples are constructed based on the corresponding features and categories.
7. The pairing and splicing method for split candidate image areas of the arrow-shaped traffic signal lamp according to claim 5, wherein: in step (2), the candidate image area is expanded as follows: taking the center of the circumscribed rectangle of the candidate image area as the base point, extend in the width and height directions with an extension coefficient of 2.1 to obtain the expanded candidate image area.
8. The pairing and splicing method for split candidate image areas of the arrow-shaped traffic signal lamp according to claim 5, wherein: the HOG feature vector is extracted as follows: (1) scale the candidate lamp-body image to 40 × 40 pixels; (2) perform Gamma correction; (3) compute the gradient at each image pixel; (4) take each 5 × 5-pixel region as one unit; (5) compute the gradient histogram of each unit; (6) combine every 2 × 2 units into one block, compute and concatenate the HOG features of each block, finally obtaining a 1764-dimensional vector.
CN202011079511.9A 2020-10-10 2020-10-10 Pairing and splicing method for split type candidate image areas of arrow-shaped traffic signal lamp Active CN112150364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011079511.9A CN112150364B (en) 2020-10-10 2020-10-10 Pairing and splicing method for split type candidate image areas of arrow-shaped traffic signal lamp


Publications (2)

Publication Number Publication Date
CN112150364A CN112150364A (en) 2020-12-29
CN112150364B true CN112150364B (en) 2022-06-07

Family

ID=73952938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011079511.9A Active CN112150364B (en) 2020-10-10 2020-10-10 Pairing and splicing method for split type candidate image areas of arrow-shaped traffic signal lamp

Country Status (1)

Country Link
CN (1) CN112150364B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117496486B (en) * 2023-12-27 2024-03-26 安徽蔚来智驾科技有限公司 Traffic light shape recognition method, readable storage medium and intelligent device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009244946A (en) * 2008-03-28 2009-10-22 Fujitsu Ltd Traffic light recognizing apparatus, traffic light recognizing method, and traffic light recognizing program
CN104021378A (en) * 2014-06-07 2014-09-03 北京联合大学 Real-time traffic light recognition method based on space-time correlation and priori knowledge
CN104408424A (en) * 2014-11-26 2015-03-11 浙江大学 Multiple signal lamp recognition method based on image processing
CN105893971A (en) * 2016-04-01 2016-08-24 上海理工大学 Traffic signal lamp recognition method based on Gabor and sparse representation
CN111723625A (en) * 2019-03-22 2020-09-29 上海海拉电子有限公司 Traffic light image recognition processing method and device, auxiliary traffic system and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Traffic lights detection and recognition based on multi-feature fusion; Wenhao Wang et al.; Multimedia Tools and Applications; 2016-12-13; Vol. 76; full text *
Real-time recognition algorithm for arrow-shaped traffic signal lamps in urban environments; Gu Mingqin et al.; Journal of Central South University (Science and Technology); 2013-04-26; Vol. 44, No. 04; full text *
Traffic signal light detection and recognition based on saliency features; Xu Mingwen et al.; Computer & Digital Engineering; 2017-07-20; Vol. 45, No. 07; full text *

Also Published As

Publication number Publication date
CN112150364A (en) 2020-12-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant