CN106919910B - Traffic sign identification method based on HOG-CTH combined features - Google Patents


Info

Publication number
CN106919910B
CN106919910B CN201710075888.9A
Authority
CN
China
Prior art keywords
traffic sign
cth
hog
gradient
image
Prior art date
Legal status
Active
Application number
CN201710075888.9A
Other languages
Chinese (zh)
Other versions
CN106919910A (en)
Inventor
张尤赛
孙露霞
李永顺
周旭
张硕
Current Assignee
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology
Publication of CN106919910A
Application granted
Publication of CN106919910B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/09 Recognition of logos
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a traffic sign recognition method based on HOG-CTH combined features, comprising the following steps: training a classifier model with training samples; detecting and locating the traffic sign image in a live-action picture; extracting the HOG-CTH fusion feature vector of the located traffic sign image; and identifying the traffic sign type of the test picture. The method greatly reduces computational complexity while achieving a high recognition rate and strong robustness.

Description

Traffic sign identification method based on HOG-CTH combined features
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a traffic sign recognition method based on HOG-CTH combined features.
Background
The development of the automobile industry has greatly facilitated people's work and life, but it has also brought traffic safety problems. One way to address these problems is to set up road traffic signs accurately and effectively, providing drivers with rich prohibition, warning and indication information and thereby reducing traffic accidents. To ensure that traffic sign information is conveyed promptly and accurately, Traffic Sign Recognition (TSR) systems have attracted wide attention from researchers. Such systems have broad application prospects, mainly in driver assistance, traffic sign maintenance and autonomous driving, and draw on many disciplines, including machine vision, pattern recognition, image processing, digital signal processing, artificial intelligence, and communication and information technology; like face recognition and target tracking, TSR is a typical pattern recognition application system.
A traffic sign recognition system comprises two basic technical stages: first, traffic sign detection, including locating the sign and any necessary preprocessing; second, traffic sign classification and recognition, including feature extraction and classification. The detection stage mainly uses methods such as threshold segmentation and template matching; the classification stage seeks solutions in terms of both the learner and the image features, i.e., a suitable learner is combined with suitable image features to complete the final traffic sign recognition task. In general, current traffic sign recognition methods suffer from low recognition accuracy and long running time, and struggle to meet the real-time requirements of in-vehicle use.
Disclosure of Invention
Aiming at the problems and the defects in the prior art, the invention aims to provide a traffic sign identification method based on HOG-CTH combined characteristics.
In order to achieve the purpose, the invention adopts the following technical scheme:
a traffic sign identification method based on HOG-CTH combined features comprises the following steps:
s1: training classifier models using training samples
Step S1-1: determining a training set;
step S1-2: respectively extracting the HOG (Histogram of Oriented Gradients) features and the CTH (Census Transform Histogram) features of the training sample images in the training set determined in step S1-1, refining the CTH features, and concatenating the HOG and CTH features to obtain the HOG-CTH fusion features;
step S1-3: training the HOG-CTH fusion feature vector obtained in the step S1-2 by using a linear Support Vector Machine (SVM) algorithm to obtain an SVM traffic sign classifier;
s2: detecting and positioning traffic sign image in live-action picture
Step S2-1: carrying out color segmentation on the traffic sign in the natural scene in the HSV color space;
step S2-2: performing morphological image processing on the region subjected to color segmentation in the step S2-1;
step S2-3: positioning and cutting out the divided traffic sign images;
s3: identifying traffic sign images using classifier models
Step S3-1: carrying out graying processing on the traffic sign image in the detected live-action image in the S2;
step S3-2: extracting HOG-CTH fusion characteristics of the traffic sign image subjected to gray processing;
step S3-3: and identifying the type of the traffic sign by using the trained SVM traffic sign classifier in the S1.
The step S1-2 is specifically as follows:
A gamma correction method is used to normalize the color space of the input traffic sign image. The image is first converted to a gray-scale image; let I_g(x,y) be the gray value of the pixel at coordinate (x,y). The gamma compression formula is:
I_g(x,y) = I_g(x,y)^γ, taking γ = 0.5 (1)
The gradient magnitude and direction of each pixel of the traffic sign image are computed by convolving the image with the 3×3 Sobel templates; the gradient of I_g(x,y) is:
G_x(x,y) = S_x ∗ I_g(x,y) (2)
G_y(x,y) = S_y ∗ I_g(x,y) (3)
where S_x and S_y are the horizontal and vertical Sobel kernels, and G_x(x,y), G_y(x,y) are the gradients of the horizontal and vertical edge detection, respectively. The gradient magnitude G(x,y) and direction θ(x,y) of pixel (x,y) are given by equations (4) and (5):
G(x,y) = √(G_x(x,y)² + G_y(x,y)²) (4)
θ(x,y) = arctan(G_y(x,y) / G_x(x,y)) (5)
For color images, the gradient of each color channel can be computed separately, and the value with the largest magnitude is taken as the gradient of the pixel;
Gradient histograms are counted within cells: the image window area is divided into uniformly distributed cell units, each containing 4×4 pixels. The unsigned gradient direction range [0°, 180°) of a cell is divided evenly into 9 bin intervals; histogram statistics are then computed over the gradient values of all pixels in each cell unit for each bin interval, yielding a directional gradient histogram for every cell unit;
The gradient-direction histograms are normalized over region blocks: every 2×2 cells form a block, so one block yields a 36-dimensional feature vector; the whole block is normalized with the L2-norm, as shown in equation (6):
v ← v / √(‖v‖₂² + ε²) (6)
where v is the feature vector, ‖v‖₂ is the 2-norm of v, and ε is a small constant that avoids a zero denominator;
The HOG feature vector is then formed by concatenation: the HOG features of all blocks are concatenated in sequence to form the final HOG feature vector of the training sample or detection-window image.
The census transform of each pixel is computed; the eight neighbours of the current pixel p_c are numbered clockwise as p_0 ~ p_7, and the sign differences between pixels are obtained from their magnitude relations:
T = t(s(p_c − p_0), s(p_c − p_1), …, s(p_c − p_7)) (7)
where:
s(x) = 1 if x ≥ 0, and s(x) = 0 if x < 0 (8)
In the formula, the function t(·) denotes the joint distribution of the sign differences, and s(p_c − p_i) is the sign difference between the current pixel p_c and its i-th neighbour p_i. The CTH value of the current pixel p_c is then calculated by equation (9):
CTH(p_c) = Σ_{i=0}^{N−1} s(p_c − p_i)·2^i (9)
in the formula, N is the number of neighborhood pixels, and R is the CTH calculation radius;
Fine quantization: the concept of a 0-1 transition is introduced, where a change from 0 to 1 or from 1 to 0 counts as one 0-1 transition; the number of 0-1 transitions u of the centre pixel p_c can be calculated by equation (10):
u(p_c) = |s(p_c − p_{N−1}) − s(p_c − p_0)| + Σ_{i=1}^{N−1} |s(p_c − p_i) − s(p_c − p_{i−1})| (10)
The CTH feature values are unified according to the number of 0-1 transitions of the pixel, as shown in equation (11):
CTH_u(p_c) = Σ_{i=0}^{N−1} s(p_c − p_i), if u(p_c) ≤ 2; N + 1, otherwise (11)
The corresponding features are obtained by counting the histogram of the sparsified CTH feature values;
Block histograms are counted and normalized: the CTH histogram within each block is normalized with the L2-norm, so that the HOG and CTH features can be fused into the HOG-CTH fusion feature vector; the HOG-CTH fusion feature vector is then fed to the linear Support Vector Machine (SVM) algorithm for training, yielding the SVM traffic sign classifier.
Beneficial effects: compared with the prior art, the method recognizes traffic signs using HOG-CTH fusion features. In the detection stage, an HSV-based detection method performs color threshold segmentation in HSV space, then fills and dilates the segmented regions, labels a series of region attributes, screens candidate regions by checking whether those attributes satisfy the given conditions, and finally cuts the traffic sign region out of the original image to complete detection. The method is scale-invariant, can perform reliable traffic sign detection in complex scenes, and prepares the sign for the next recognition step. In the recognition stage, combining gradient and texture features gives a richer description than a single feature and compensates for the limitations of either one, thereby improving the recognition rate. While the fused features raise the recognition rate, the fine quantization of the CTH features greatly reduces the feature dimension, which substantially shortens recognition time and improves the robustness of the system.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of the HOG-CTH combined feature extraction in the present invention;
fig. 3 is a CTH characteristic calculation chart in the present invention.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a traffic sign identification method based on HOG-CTH combination features includes the following steps:
step A: selecting a training sample, wherein a national traffic sign database is selected as a positive sample, a sample set contains 43 different traffic signs, each traffic sign is a category, category labels are set for the 43 different traffic signs, and a plurality of image sets which do not contain the traffic signs are randomly shot as negative samples. And B, step B: as shown in FIG. 2, the HOG (Histogram of ordered) of the training sample in step A is extracted
(ii) a Gradient feature;
step B1: the gamma correction method is adopted to carry out color space normalization on the input image, and square root gamma normalization can well eliminate the influence of integral illumination and contrast of the image. Since the color information has little effect, it is usually converted into a gray scale image, set I g (x, y) is the gray value of the pixel point of the (x, y) coordinate, and the gamma compression formula:
I g (x,y)=I g (x,y) gamma take gamma =0.5 (1)
Step B2: the gradient magnitude and direction of each pixel of the image are computed by convolving the traffic sign image with the 3×3 Sobel templates; the gradient of I_g(x,y) is:
G_x(x,y) = S_x ∗ I_g(x,y) (2)
G_y(x,y) = S_y ∗ I_g(x,y) (3)
where S_x and S_y are the horizontal and vertical Sobel kernels, and G_x(x,y), G_y(x,y) are the gradients of the horizontal and vertical edge detection, respectively. The gradient magnitude G(x,y) and direction θ(x,y) of pixel (x,y) are given by equations (4) and (5):
G(x,y) = √(G_x(x,y)² + G_y(x,y)²) (4)
θ(x,y) = arctan(G_y(x,y) / G_x(x,y)) (5)
For color images, the gradient of each color channel can be computed separately, and the value with the largest magnitude is taken as the gradient of the pixel;
Step B3: gradient histograms are counted within cells. The image window area is divided into uniformly distributed cell units, each containing 4×4 pixels. The unsigned gradient direction range [0°, 180°) of a cell is divided evenly into 9 intervals (bins); histogram statistics are then computed over the gradient values of all pixels in each cell unit for each bin interval, so that each cell unit yields a 9-dimensional feature vector, and a directional gradient histogram is obtained for every cell unit;
and step B4: normalizing the histogram of gradient directions in the region block, forming a block (block) by every 2 × 2 cell units, forming a 36-dimensional feature vector by using one block, and normalizing the whole block by using an L2-norm, as shown in formula (6):
Figure GDA0003926646500000061
wherein v is a feature vector, | v | | non-calculation 2 Represents the norm of order 2 of v, and epsilon represents a small constant to avoid a denominator of 0;
and step B5: and (4) forming HOG feature vectors through serialization, and forming the final HOG feature vectors of training samples or detection window images by carrying out serialization processing on the HOG features of all block blocks.
Step C: the CTH (CENsus TRansform hISTogram, CENTRIST) features of the training samples in step A are extracted;
step C1: calculating the statistical transformation of the pixel point, as shown in (a) of FIG. 3, for the current calculated pixel point p c And the space relation graph between the adjacent points is sequentially numbered clockwise as follows: p is a radical of 0 ~p 7 (ii) a Fig. 3 (b) shows the gray scale of the current pixel. Based on the amplitude relationship, the pixel can be obtainedThe symbol difference value between them is shown in equations (7) and (8):
T=t(s(p c -p 0 ),s(p c -p 1 ),…,s(p c -p 7 )) (7)
wherein:
Figure GDA0003926646500000062
in the formula, the t () function represents a joint distribution function of the symbol difference values; s (p) c -p i ) Representing the current pixel point p c And the ith neighborhood point p i The sign difference value between them. As shown in fig. 3 (c), the corresponding sign difference values are for all neighborhood points. Fig. 3 (d) shows a weight template for CTH feature calculation, where the current pixel point p c The corresponding CTH value of (a) can be calculated by equation (9):
Figure GDA0003926646500000063
where N is the number of neighbourhood pixels and R is the CTH calculation radius; for the example of FIG. 3, the computed CTH value of the current pixel is 47. The CTH value depends only on the relative relations between pixels, not on their absolute magnitudes. The corresponding image features can be obtained by counting a statistical histogram of the CTH values.
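The census-transform computation of equations (7) to (9) can be illustrated directly. The clockwise bit ordering and the most-significant-bit weighting below are assumptions; the patent fixes the exact weights via the template of FIG. 3(d).

```python
def census_value(patch):
    """CT value of the centre pixel of a 3x3 patch (list of lists):
    s(pc - pi) = 1 when pc >= pi (Eq. 8), bits weighted by powers
    of two (Eq. 9 with N = 8, R = 1)."""
    pc = patch[1][1]
    # neighbours p0..p7, numbered clockwise from the top-left corner
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    s = [1 if pc >= patch[r][c] else 0 for r, c in order]
    # p0 taken as the most significant bit (an assumed convention)
    return sum(bit << (7 - i) for i, bit in enumerate(s))
```

Because the value depends only on sign relations, adding a constant to every pixel of the patch leaves it unchanged, which is the monotonic-illumination invariance noted in the text.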
Step C2: fine quantization. To avoid an excessively large CTH feature dimension, the CTH features must be sparsified. The concept of a 0-1 transition is introduced: a change from 0 to 1 or from 1 to 0 counts as one 0-1 transition. The number of 0-1 transitions u of the centre pixel p_c can then be calculated by equation (10):
u(p_c) = |s(p_c − p_{N−1}) − s(p_c − p_0)| + Σ_{i=1}^{N−1} |s(p_c − p_i) − s(p_c − p_{i−1})| (10)
The CTH values can then be unified according to the number of 0-1 transitions of the pixel, as shown in equation (11):
CTH_u(p_c) = Σ_{i=0}^{N−1} s(p_c − p_i), if u(p_c) ≤ 2; N + 1, otherwise (11)
The corresponding features can be obtained by counting the histogram of the sparsified CTH values;
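The fine quantization can be sketched as follows. The mapping in `unified_code` follows the uniform-pattern scheme (patterns with at most two 0-1 transitions keep a reduced code, all others share one bin), which is an assumption consistent with, but not spelled out by, the text.

```python
def transitions(bits):
    """Number of circular 0-1 transitions u of a sign string (Eq. 10)."""
    return sum(bits[i] != bits[(i + 1) % len(bits)]
               for i in range(len(bits)))

def unified_code(bits):
    """Sketch of Eq. (11): uniform patterns (u <= 2) map to their count
    of 1-bits (0..8); all non-uniform patterns share the bin N + 1.
    This shrinks the 256-bin CT histogram to 10 bins."""
    return sum(bits) if transitions(bits) <= 2 else len(bits) + 1
```

The dimensionality reduction is what shortens recognition time: each block contributes a 10-bin rather than a 256-bin CTH histogram.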
and C3: the histogram of the block is counted and normalized, in order to fuse HOG and CTH characteristics, the invention adopts a unified normalization scheme for the HOG characteristics and the CTH characteristics, namely, the normalization processing is carried out on the CTH histogram in the block by adopting an L2-norm, thereby combining/fusing the two characteristics.
Step D: the combined features are fed to the linear Support Vector Machine (SVM) algorithm for training to obtain the traffic sign classifier.
Step E: color segmentation of traffic signs in natural scenes is performed in the HSV (Hue, Saturation, Value) color space. Let (r, g, b) be the red, green and blue coordinates of a color, each a real number in the interval [0, 1]; let max be the largest and min the smallest of r, g and b. The RGB-to-HSV conversion formulas are:
H = 0°, if max = min;
H = (60° × (g − b)/(max − min)) mod 360°, if max = r;
H = 60° × (b − r)/(max − min) + 120°, if max = g;
H = 60° × (r − g)/(max − min) + 240°, if max = b (12)
S = 0, if max = 0; S = (max − min)/max, otherwise (13)
V = max (14)
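Equations (12) to (14) translate directly into code; this sketch returns H in degrees on [0, 360), before any rescaling of the channels.

```python
def rgb_to_hsv(r, g, b):
    """RGB -> HSV per equations (12)-(14); r, g, b are reals in [0, 1].
    Returns H in degrees [0, 360), S and V in [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    d = mx - mn
    if d == 0:                       # grey: hue undefined, set to 0
        h = 0.0
    elif mx == r:
        h = (60.0 * (g - b) / d) % 360.0
    elif mx == g:
        h = 60.0 * (b - r) / d + 120.0
    else:                            # mx == b
        h = 60.0 * (r - g) / d + 240.0
    s = 0.0 if mx == 0 else d / mx   # Eq. (13)
    return h, s, mx                  # V = max, Eq. (14)
```

For example, pure red maps to (0°, 1, 1) and pure blue to (240°, 1, 1).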
To facilitate threshold selection, H, S and V are normalized to [0, 255]. The segmentation thresholds for red, yellow and blue are:
Red: H ∈ (0, 10) ∪ (240, 255), S ∈ (40, 255), V ∈ (30, 255)
Yellow: H ∈ (18, 45), S ∈ (148, 255), V ∈ (66, 255)
Blue: H ∈ (140, 255), S ∈ (60, 255), V ∈ (20, 255)
These thresholds are used to judge whether the current pixel is a color point of interest (red, yellow or blue); if it is, its value is set to white, otherwise to black, which completes the binarization of the image.
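Applying the red thresholds above to per-pixel H, S and V planes (already rescaled to [0, 255]) gives the binary image. A minimal sketch, with plain nested lists assumed as the plane layout for illustration:

```python
def red_mask(h, s, v):
    """Binarization of step E for the red thresholds: pixels with
    H in [0,10] or [240,255], S in [40,255], V in [30,255] become
    white (255), all others black (0)."""
    rows, cols = len(h), len(h[0])

    def is_red(hv, sv, vv):
        return ((0 <= hv <= 10 or 240 <= hv <= 255)
                and 40 <= sv <= 255 and 30 <= vv <= 255)

    return [[255 if is_red(h[r][c], s[r][c], v[r][c]) else 0
             for c in range(cols)] for r in range(rows)]
```

The yellow and blue masks follow the same pattern with their respective ranges, and the three masks can be processed independently in the subsequent morphological stage.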
Step F: the traffic sign image is located and cropped. A morphological closing operation is applied to the image obtained in step E to fill holes; the binary image is dilated, interior cavities are filled, and each separate part of a connected object is labelled, using 8-connectivity. A series of attributes of the labelled regions is then measured, using three features computed in turn: the total number of pixels in each region, the centroid (center of gravity) of each region, and the smallest rectangle containing the region; then, according to the area of each filled block, the 3 largest blocks are found and stored.
Taking one of the filled blocks as an example, let M be the smaller of the length and width of the smallest rectangle circumscribing the region, X and Y the abscissa and ordinate of the region's centroid, H and L the numbers of pixel rows and columns of the original image, T the binary image just before labelling, and A the total number of pixels in the region. To qualify as a traffic sign candidate region, the following five sets of conditions must all be satisfied:
[Conditions (15) to (18): constraints on the region size M, the centroid coordinates (X, Y) and the area A relative to the image dimensions H and L; the formulas are not recoverable from the source.]
A/(H×L)>0.01 (19)
According to these conditions, at most 3 marked regions are determined; the original image is then cropped according to the horizontal and vertical coordinates of the upper-left pixel of each region's rectangle and the rectangle's length and width. The cropped images are the traffic sign images detected after locating.
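The labelling-and-screening logic of step F can be sketched with a BFS flood fill over the 8-neighbourhood. Only the published area condition, equation (19), is applied here; conditions (15) to (18) are not reproduced, and the helper name and list-of-lists mask format are illustrative assumptions.

```python
from collections import deque

def candidate_boxes(binary, min_area_ratio=0.01):
    """Label 8-connected regions of a 0/1 mask, keep regions passing
    Eq. (19), A / (H * L) > 0.01, and return bounding boxes
    (top, left, bottom, right) of up to the 3 largest regions."""
    H, L = len(binary), len(binary[0])
    seen = [[False] * L for _ in range(H)]
    regions = []
    for sr in range(H):
        for sc in range(L):
            if binary[sr][sc] and not seen[sr][sc]:
                q = deque([(sr, sc)])
                seen[sr][sc] = True
                pix = []
                while q:                            # BFS flood fill
                    r, c = q.popleft()
                    pix.append((r, c))
                    for dr in (-1, 0, 1):           # 8-neighbourhood
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < H and 0 <= cc < L
                                    and binary[rr][cc] and not seen[rr][cc]):
                                seen[rr][cc] = True
                                q.append((rr, cc))
                if len(pix) / (H * L) > min_area_ratio:   # Eq. (19)
                    rows = [p[0] for p in pix]
                    cols = [p[1] for p in pix]
                    regions.append((len(pix), (min(rows), min(cols),
                                               max(rows) + 1, max(cols) + 1)))
    regions.sort(reverse=True)       # largest areas first, keep at most 3
    return [box for _, box in regions[:3]]
```

Cropping the original image with each returned box yields the candidate sign images passed on to step G.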
Step G: the HOG-CTH fusion feature vectors of the cropped and located traffic sign images are extracted and fed into the trained SVM classifier for classification and recognition.

Claims (2)

1. A traffic sign identification method based on HOG-CTH combined features is characterized by comprising the following steps:
s1: training classifier models using training samples
Step S1-1: determining a training set;
step S1-2: respectively extracting the HOG (Histogram of Oriented Gradients) features and the CTH (Census Transform Histogram) features of the training sample images in the training set determined in step S1-1, refining the CTH features, and concatenating the HOG and CTH features to obtain the HOG-CTH fusion features;
step S1-3: training the HOG-CTH fusion feature vector obtained in the step S1-2 by using a linear Support Vector Machine (SVM) algorithm to obtain an SVM traffic sign classifier;
s2: detecting and positioning traffic sign image in live-action picture
Step S2-1: carrying out color segmentation on the traffic sign in the natural scene in the HSV color space;
step S2-2: performing morphological image processing on the region subjected to color segmentation in the step S2-1;
step S2-3: positioning and cutting out the divided traffic sign images;
s3: identifying traffic sign images using classifier models
Step S3-1: carrying out graying processing on the traffic sign image in the detected live-action image in the S2;
step S3-2: extracting HOG-CTH fusion characteristics of the traffic sign image subjected to gray processing;
step S3-3: and identifying the type of the traffic sign by using the trained SVM traffic sign classifier in the S1.
2. The method for recognizing the traffic sign based on the HOG-CTH combined features as claimed in claim 1, wherein the step S1-2 is specifically as follows:
A gamma correction method is used to normalize the color space of the input traffic sign image. The image is first converted to a gray-scale image; let I_g(x,y) be the gray value of the pixel at coordinate (x,y). The gamma compression formula is:
I_g(x,y) = I_g(x,y)^γ, taking γ = 0.5 (1)
The gradient magnitude and direction of each pixel of the traffic sign image are computed by convolving the image with the 3×3 Sobel templates; the gradient of I_g(x,y) is:
G_x(x,y) = S_x ∗ I_g(x,y) (2)
G_y(x,y) = S_y ∗ I_g(x,y) (3)
where S_x and S_y are the horizontal and vertical Sobel kernels, and G_x(x,y), G_y(x,y) are the gradients of the horizontal and vertical edge detection, respectively. The gradient magnitude G(x,y) and direction θ(x,y) of pixel (x,y) are given by equations (4) and (5):
G(x,y) = √(G_x(x,y)² + G_y(x,y)²) (4)
θ(x,y) = arctan(G_y(x,y) / G_x(x,y)) (5)
For color images, the gradient of each color channel can be computed separately, and the value with the largest magnitude is taken as the gradient of the pixel;
Gradient histograms are counted within cells: the image window area is divided into uniformly distributed cell units, each containing 4×4 pixels. The unsigned gradient direction range [0°, 180°) of a cell is divided evenly into 9 bin intervals; histogram statistics are then computed over the gradient values of all pixels in each cell unit for each bin interval, yielding a directional gradient histogram for every cell unit;
The gradient-direction histograms are normalized over region blocks: every 2×2 cells form a block, so one block yields a 36-dimensional feature vector; the whole block is normalized with the L2-norm, as shown in equation (6):
v ← v / √(‖v‖₂² + ε²) (6)
where v is the feature vector, ‖v‖₂ is the 2-norm of v, and ε is a small constant that avoids a zero denominator;
The HOG feature vector is then formed by concatenation: the HOG features of all blocks are concatenated in sequence to form the final HOG feature vector of the training sample or detection-window image;
The census transform of each pixel is computed; the eight neighbours of the current pixel p_c are numbered clockwise as p_0 ~ p_7, and the sign differences between pixels are obtained from their magnitude relations:
T = t(s(p_c − p_0), s(p_c − p_1), …, s(p_c − p_7)) (7)
where:
s(x) = 1 if x ≥ 0, and s(x) = 0 if x < 0 (8)
In the formula, the function t(·) denotes the joint distribution of the sign differences, and s(p_c − p_i) is the sign difference between the current pixel p_c and its i-th neighbour p_i. The CTH value of the current pixel p_c is then calculated by equation (9):
CTH(p_c) = Σ_{i=0}^{N−1} s(p_c − p_i)·2^i (9)
in the formula, N is the number of neighborhood pixels, and R is the CTH calculation radius;
Fine quantization: the concept of a 0-1 transition is introduced, where a change from 0 to 1 or from 1 to 0 counts as one 0-1 transition; the number of 0-1 transitions u of the centre pixel p_c can be calculated by equation (10):
u(p_c) = |s(p_c − p_{N−1}) − s(p_c − p_0)| + Σ_{i=1}^{N−1} |s(p_c − p_i) − s(p_c − p_{i−1})| (10)
The CTH feature values are unified according to the number of 0-1 transitions of the pixel, as shown in equation (11):
CTH_u(p_c) = Σ_{i=0}^{N−1} s(p_c − p_i), if u(p_c) ≤ 2; N + 1, otherwise (11)
The corresponding features are obtained by counting the histogram of the sparsified CTH feature values;
Block histograms are counted and normalized: the CTH histogram within each block is normalized with the L2-norm, so that the two kinds of features, HOG and CTH, can be fused into the HOG-CTH fusion feature vector; the HOG-CTH fusion feature vector is then fed to the linear Support Vector Machine (SVM) algorithm for training, yielding the SVM traffic sign classifier.
CN201710075888.9A 2016-05-12 2017-02-13 Traffic sign identification method based on HOG-CTH combined features Active CN106919910B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2016103191156 2016-05-12
CN201610319115 2016-05-12

Publications (2)

Publication Number Publication Date
CN106919910A CN106919910A (en) 2017-07-04
CN106919910B true CN106919910B (en) 2023-03-24

Family

ID=59453581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710075888.9A Active CN106919910B (en) 2016-05-12 2017-02-13 Traffic sign identification method based on HOG-CTH combined features

Country Status (1)

Country Link
CN (1) CN106919910B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622229B (en) * 2017-08-29 2021-02-02 中山大学 Video vehicle re-identification method and system based on fusion features
CN108009472B (en) * 2017-10-25 2020-07-21 五邑大学 Finger back joint print recognition method based on convolutional neural network and Bayes classifier
CN109190451B (en) * 2018-07-12 2021-07-06 广西大学 Remote sensing image vehicle detection method based on LFP characteristics
CN109086687A (en) * 2018-07-13 2018-12-25 东北大学 The traffic sign recognition method of HOG-MBLBP fusion feature based on PCA dimensionality reduction
CN110801208B (en) * 2019-11-27 2022-04-05 东北师范大学 Tooth crack detection method and system
CN113420633B (en) * 2021-06-18 2022-04-12 桂林电子科技大学 Traffic sign identification method based on UM enhancement and SIFT feature extraction
CN113674270B (en) * 2021-09-06 2023-11-14 深邦智能科技集团(青岛)有限公司 Tire pattern consistency detection system and method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881160A (en) * 2012-07-18 2013-01-16 广东工业大学 Outdoor traffic sign identification method under low-illumination scene
CN103366190A (en) * 2013-07-26 2013-10-23 中国科学院自动化研究所 Method for identifying traffic sign
CN103824081A (en) * 2014-02-24 2014-05-28 北京工业大学 Method for detecting rapid robustness traffic signs on outdoor bad illumination condition
CN104992165A (en) * 2015-07-24 2015-10-21 天津大学 Extreme learning machine based traffic sign recognition method

Similar Documents

Publication Publication Date Title
CN106919910B (en) Traffic sign identification method based on HOG-CTH combined features
CN108108761B (en) Rapid traffic signal lamp detection method based on deep feature learning
CN107545239B (en) Fake plate detection method based on license plate recognition and vehicle characteristic matching
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN108090459B (en) Traffic sign detection and identification method suitable for vehicle-mounted vision system
CN105809138A (en) Road warning mark detection and recognition method based on block recognition
CN108921120B (en) Cigarette identification method suitable for wide retail scene
CN110705639B (en) Medical sperm image recognition system based on deep learning
CN104134079A (en) Vehicle license plate recognition method based on extremal regions and extreme learning machine
CN103544484A (en) Traffic sign identification method and system based on SURF
CN104573685A (en) Natural scene text detecting method based on extraction of linear structures
CN103413147A (en) Vehicle license plate recognizing method and system
CN104408424A (en) Multiple signal lamp recognition method based on image processing
CN102915544A (en) Video image motion target extracting method based on pattern detection and color segmentation
CN112464731B (en) Traffic sign detection and identification method based on image processing
CN109145964B (en) Method and system for realizing image color clustering
CN107818321A (en) A kind of watermark date recognition method for vehicle annual test
CN107330365A (en) Traffic sign recognition method based on maximum stable extremal region and SVM
CN108073940B (en) Method for detecting 3D target example object in unstructured environment
CN111860509A (en) Coarse-to-fine two-stage non-constrained license plate region accurate extraction method
CN113256624A (en) Continuous casting round billet defect detection method and device, electronic equipment and readable storage medium
Xing et al. Traffic sign detection and recognition using color standardization and Zernike moments
CN108664969A (en) Landmark identification method based on condition random field
CN104182769A (en) Number plate detection method and system
Mammeri et al. North-American speed limit sign detection and recognition for smart cars

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant