CN109598218B - Method for quickly identifying vehicle type - Google Patents

Method for quickly identifying vehicle type

Info

Publication number
CN109598218B
Authority
CN
China
Prior art keywords
sample
image
vehicle type
matrix
color space
Prior art date
Legal status
Active
Application number
CN201811410036.1A
Other languages
Chinese (zh)
Other versions
CN109598218A (en
Inventor
李洪均
周泽
胡伟
陈俊杰
李壮伟
王娇
孙婉婷
张雯敏
Current Assignee
Nantong University
Nantong Research Institute for Advanced Communication Technologies Co Ltd
Original Assignee
Nantong University
Nantong Research Institute for Advanced Communication Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Nantong University, Nantong Research Institute for Advanced Communication Technologies Co Ltd filed Critical Nantong University
Priority to CN201811410036.1A priority Critical patent/CN109598218B/en
Publication of CN109598218A publication Critical patent/CN109598218A/en
Application granted granted Critical
Publication of CN109598218B publication Critical patent/CN109598218B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for rapidly identifying a vehicle type, addressing both the accuracy and the real-time performance of vehicle type recognition. The method first combines color space conversion with a multi-channel HOG feature extraction algorithm to reduce the influence of the illumination environment while extracting the front-face features of the vehicle; a PCA dimension-reduction operation then lowers the feature dimension of the samples and with it the computational complexity; next, sparse representation and nonlinear mapping are applied to the sample features to reduce the correlation among them; finally, a relation between the sample features and the sample labels is established and the weight coefficients between them are solved, achieving rapid vehicle type recognition. Experimental results on the BIT-Vehicle database show a recognition accuracy of 96.69% at a recognition speed of 70.3 fps, improving vehicle type recognition accuracy while guaranteeing real-time performance.

Description

Method for quickly identifying vehicle type
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a method for quickly identifying a vehicle type.
Background
In recent years, with the popularization of traffic monitoring equipment and the rapid development of computer vision, computer vision technologies based on traffic video images have been applied to modern intelligent transportation systems. Real-time vehicle type recognition is an important component of an intelligent transportation system and has a wide range of applications, such as highway toll systems, traffic flow statistics, urban traffic monitoring and assistance in criminal investigation [Document 1] (Zhang F, Wilkie D, Zheng Y, et al. Sensing the Pulse of Urban Refueling Behavior [C]// Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing. New York: ACM, 2013.).
At present, research in the field of vehicle type recognition falls mainly into 3 categories. (1) Vehicle type recognition based on a 3D model [Document 2] (Zhang Z, Tan T, Huang K, et al. Three-Dimensional Deformable-Model-Based Localization and Recognition of Road Vehicles [J]. IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society, 2012, 21(1): 1-13.): different types of vehicles are modeled in 3D and recognition is then realized by model matching. Prokaj et al. [Document 3] (Prokaj J, Medioni G. 3-D Model Based Vehicle Recognition [C]// Proceedings of the Workshop on Applications of Computer Vision. Snowbird: IEEE Press, 2013.) establish a 3D model of each vehicle type, project the model of the vehicle to be identified into the image space, and realize vehicle type recognition by matching, with a reported error of 5.87%. (2) Vehicle type recognition based on deep network models [Document 4] (Voulodimos A, Doulamis N, Doulamis A, et al. Deep Learning for Computer Vision: A Brief Review [J]. Comput Intell Neurosci, 2018, 2018: 1-13.): features of the vehicle to be recognized are first extracted, a network classifier is trained with the obtained feature vectors, and the trained classifier then recognizes the vehicle type. Lei Qian et al. [Document 5] (Lei Qian, Hao Cunming, Zhang Weiping. Vehicle type recognition based on super-resolution and deep neural networks [J]. Computer Science, 2018, 45(s1): 230-233.) use a 13-layer deep neural network to realize vehicle type recognition with an accuracy as high as 95.2%, but GPU acceleration is required to train the network. (3) Vehicle type recognition based on feature extraction [Document 6] (Manzoor M A, Morgan Y. Vehicle Make and Model Classification System using Bag of SIFT Features [C]// Proceedings of the Annual Computing and Communication Workshop and Conference. Las Vegas: IEEE Press, 2017.): a feature extractor designed with prior knowledge extracts fixed features of vehicle images, such as SIFT features, Harris corner features and HOG features. [Document 7] (Zhang Tong, Zhang Ping. A vehicle type recognition method based on improved Harris detection [J]. Computer Science, (s2): 257-259.) proposes an improved Harris detection method and realizes vehicle type recognition by template matching, with an accuracy of 90%. The 3D-model-based methods have a simple matching principle, but the modeling process is complex, robustness is poor and recognition accuracy is low. The deep-network-based methods have strong fault tolerance and high recognition accuracy, but they need large numbers of training samples, have high computational complexity and long running times, and struggle to meet real-time requirements. The feature-extraction-based methods, thanks to their fixed extraction pipeline, extract features faster than deep network models, but their recognition accuracy is lower.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a rapid vehicle type identification method. The method reduces the influence of the illumination environment and extracts the front-face features of the vehicle through color space conversion and multi-channel HOG feature extraction; reduces computational complexity through principal component analysis (PCA) dimension reduction; performs sparse representation and nonlinear mapping on the sample features to reduce feature correlation; and establishes a relation between the sample features and the sample labels, solving the weight coefficients between them to achieve rapid vehicle type identification. This is realized by the following technical scheme:
the method for quickly identifying the vehicle type comprises the following steps:
step 1) color space conversion: convert the color sample image X ∈ R^(N×M) in the RGB color space to the YCbCr color space to obtain the sample image X_YCbCr ∈ R^(N×M), separating the luminance information and chrominance information of the color image, where N is the number of samples and M the feature dimension;
step 2) multi-channel HOG feature extraction: extract the HOG features of the three color channels Y, Cb and Cr of the sample image X_YCbCr, obtaining the sample feature X_H ∈ R^(N×M_1) after multi-channel HOG feature extraction, where M_1 is the extracted feature dimension;
step 3) PCA dimension reduction: perform a dimension-reduction operation on the sample feature X_H to obtain the reduced sample feature U ∈ R^(N×M_2), where M_2 is the feature dimension after reduction;
step 4) sparse representation: train n complete dictionaries W_{e_i} (i = 1...n) by the alternating direction method of multipliers (ADMM), then map the sample feature U through a mapping function φ to generate n sparse feature maps Z_i, i = 1...n, and define Z^n = [Z_1, ..., Z_n] as the sparse feature map group;
step 5) nonlinear mapping: randomly generate m orthonormal matrices W_{h_j} (j = 1...m), map the sparse feature map group Z^n to an orthogonal space through the nonlinear mapping function ξ to generate the enhancement nodes H_j, j = 1...m, and define H^m = [H_1, ..., H_m] as the enhancement node group;
step 6) calculate the weight coefficient matrix: establish the relation between the sparse feature map group Z^n, the enhancement node group H^m and the sample label matrix Y, and solve the weight coefficient matrix W between them, where Y ∈ R^(N×C) and C is the number of vehicle type categories;
step 7) rapid vehicle type recognition: establish the test sample labels Y_test; apply color space conversion, multi-channel HOG feature extraction and PCA dimension reduction to the samples under test according to steps 1), 2) and 3); then, directly through the complete dictionaries W_{e_i} and the orthonormal matrices W_{h_j}, realize the sparse representation and nonlinear mapping of the test samples to obtain the feature map group Z^n_test and the enhancement node group H^m_test of the samples under test; finally multiply Q_test = [Z^n_test | H^m_test] by the weight coefficient matrix W to obtain the predicted label matrix Y_pre, and compare Y_pre with the test sample labels Y_test to identify the vehicle type, where Y_pre ∈ R^(N_test×C), N_test is the number of test samples and C_pre is the predicted vehicle type category.
The rapid vehicle type recognition algorithm is further designed in that in step 1) the color sample image X ∈ R^(N×M) in the RGB color space is converted to the YCbCr color space by equation (1), obtaining the sample image X_YCbCr ∈ R^(N×M):

Y_(x,y)  =  0.299·R_(x,y) + 0.587·G_(x,y) + 0.114·B_(x,y)
Cb_(x,y) = −0.169·R_(x,y) − 0.331·G_(x,y) + 0.500·B_(x,y) + 128    (1)
Cr_(x,y) =  0.500·R_(x,y) − 0.419·G_(x,y) − 0.081·B_(x,y) + 128

where Y_(x,y), Cb_(x,y), Cr_(x,y) are the pixel values of the 3 color channels of pixel (x,y) in the YCbCr color space, and R_(x,y), G_(x,y), B_(x,y) are the pixel values of the 3 color channels in the RGB color space.
The rapid vehicle type recognition algorithm is further designed in that the step 2) comprises the following steps:
step 2-1) carrying out normalization operation on an input image by adopting a Gamma correction method, and adjusting the image contrast to reduce the influence of uneven illumination and noise of the image;
step 2-2) calculate the gradient magnitude G(x,y) and direction D(x,y) of each pixel (x,y) in the image by equation (2), capturing the contour information of the target object;

G(x,y) = sqrt( G_x(x,y)² + G_y(x,y)² ),   D(x,y) = arctan( G_y(x,y) / G_x(x,y) )    (2)

where G_x(x,y), G_y(x,y) are the gradients of the pixel along the x-axis and y-axis of the two-dimensional image coordinate system;
step 2-3) divide the image area into two layers: the first layer consists of image cells that do not overlap one another; the second layer consists of image blocks, each composed of several image cells, and the blocks may overlap; step 2-4) divide the 360° gradient direction range of each image cell equally into l direction bins, accumulate the gradient magnitudes of the cell into the bin to which each gradient direction belongs, and then apply contrast normalization to each image block; finally, concatenate the HOG features of all image blocks in the image to obtain the HOG feature vector of the image.
The rapid vehicle type recognition algorithm is further designed in that G_x(x,y), G_y(x,y) in step 2-2) are given by equation (3),

G_x(x,y) = H(x+1, y) − H(x−1, y),   G_y(x,y) = H(x, y+1) − H(x, y−1)    (3)

where H(x,y) is the luminance value of pixel (x,y).
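The per-pixel gradient computation of steps 2-2) can be sketched as a minimal numpy function. The central-difference form of G_x, G_y is assumed here (the patent's equation (3) is reproduced only as an image), and the function name `hog_gradients` is ours, not the patent's:

```python
import numpy as np

def hog_gradients(H):
    """Per-pixel gradient magnitude G(x,y) and direction D(x,y) for a
    grayscale channel H, assuming central differences for Gx and Gy.
    A minimal sketch, not the patented implementation."""
    H = H.astype(np.float64)
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    # Gx(x,y) = H(x+1,y) - H(x-1,y); Gy(x,y) = H(x,y+1) - H(x,y-1)
    Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]
    Gy[1:-1, :] = H[2:, :] - H[:-2, :]
    G = np.sqrt(Gx**2 + Gy**2)                # gradient magnitude
    D = np.degrees(np.arctan2(Gy, Gx)) % 360  # direction in [0, 360)
    return G, D
```

The magnitudes G are later accumulated into the l direction bins selected by D when building the cell histograms.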
The rapid vehicle type recognition algorithm is further designed in that step 3) performs the dimension-reduction operation on the sample feature X_H by equation (4), obtaining the reduced sample feature U ∈ R^(N×M_2),

U = (X_H − x̄)·V_p    (4)

where x̄ is the sample mean of X_H; C = (1/N)·(X_H − x̄)ᵀ·(X_H − x̄) is the covariance matrix of X_H; and V_p holds the eigenvectors of C corresponding to its p largest eigenvalues.
The rapid vehicle type recognition algorithm is further designed in that step 4) trains the n complete dictionaries W_{e_i} (i = 1...n) by the ADMM iteration of equation (5),

W_{e_i}^{t+1} = (UᵀU + ρI)^{−1}·(UᵀU + ρ(Ŵ_{e_i}^{t} − P^{t}))
Ŵ_{e_i}^{t+1} = S_{λ/ρ}( W_{e_i}^{t+1} + P^{t} )    (5)
P^{t+1} = P^{t} + W_{e_i}^{t+1} − Ŵ_{e_i}^{t+1}

where ρ is a constant with ρ > 0; I is the identity matrix; S_{λ/ρ}(·) is the soft-thresholding function of equation (6); Ŵ_{e_i} is the dual term of W_{e_i}; P^t is the Lagrange multiplier; λ is the penalty coefficient; e_i indexes the ith complete dictionary W_{e_i} and its dual term Ŵ_{e_i}; and t is the iteration number.

S_κ(a) = sign(a)·max(|a| − κ, 0)    (6)

The reduced sample feature U is sparsified by equation (7) to obtain the n sparse feature maps Z_i, i = 1...n, with Z^n = [Z_1, ..., Z_n] defined as the sparse feature map group,

Z_i = φ(U·W_{e_i})    (7)

where the activation function φ(x) = (x − min(x))/(max(x) − min(x)).
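The two elementwise operations named in this step — the soft-thresholding function used inside the ADMM updates and the min-max activation φ(x) = (x − min(x))/(max(x) − min(x)) — can be sketched in numpy. The function names and the dictionary shape are our assumptions; only φ's formula and the role of soft-thresholding come from the source:

```python
import numpy as np

def soft_threshold(a, kappa):
    """Elementwise soft-thresholding S_kappa(a), the proximal operator
    of the l1 norm used in the ADMM dictionary update (sketch)."""
    return np.sign(a) * np.maximum(np.abs(a) - kappa, 0.0)

def phi(x):
    """Min-max activation: rescales features to [0, 1], as in the
    source's phi(x) = (x - min(x)) / (max(x) - min(x))."""
    return (x - x.min()) / (x.max() - x.min())

def sparse_feature_map(U, We):
    """Z_i = phi(U @ We_i): one sparse feature map from a trained
    complete dictionary We_i (shape assumed compatible with U)."""
    return phi(U @ We)
```

Repeating `sparse_feature_map` with n independently trained dictionaries yields the sparse feature map group Z^n.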
The rapid vehicle type recognition algorithm is further designed in that step 5) randomly generates the m orthonormal matrices W_{h_j} (j = 1...m) and performs the nonlinear mapping of the sparse feature map group Z^n by equation (8),

H_j = ξ(Z^n·W_{h_j})    (8)

where the nonlinear mapping function ξ is given by equation (9):

ξ(x) = (e^{s·x} − e^{−s·x}) / (e^{s·x} + e^{−s·x})    (9)

where s ∈ (0,1] is a shrink factor and h_j indexes the jth orthonormal matrix W_{h_j}.
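The enhancement-node generation of step 5) can be sketched as follows. Since the source reproduces ξ only as an image, a tanh scaled by the shrink factor s ∈ (0,1] is assumed here, and the helper names are ours. Orthonormal random matrices are obtained via QR decomposition:

```python
import numpy as np

def xi(x, s=0.8):
    """Nonlinear mapping function, assumed here to be a tanh scaled by
    the shrink factor s in (0, 1] (the source only names s)."""
    return np.tanh(s * x)

def random_orthonormal(d, k, rng=np.random.default_rng(0)):
    """Random d x k matrix with orthonormal columns via QR."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))
    return Q

def enhancement_node(Zn, Wh, s=0.8):
    """H_j = xi(Z^n @ Wh_j): one enhancement node mapped into the
    orthogonal space spanned by the orthonormal matrix Wh_j (sketch)."""
    return xi(Zn @ Wh, s)
```

Repeating with m independently drawn matrices yields the enhancement node group H^m.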
The rapid vehicle type recognition algorithm is further designed in that step 6) establishes the relation between Q and the sample label matrix Y according to equation (10),

Y = QW    (10)

where Q = [Z^n | H^m], giving W = Q⁺Y, where Q⁺ is the pseudo-inverse of Q, obtained by equation (11); the weight coefficient matrix W then follows from equation (12),

Q⁺ = lim_{λ→0} (λI + QᵀQ)^{−1}·Qᵀ    (11)

W = (λI + QᵀQ)^{−1}·Qᵀ·Y    (12)

where I is the identity matrix and λ is the penalty coefficient.
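Solving Y = QW via the penalised pseudo-inverse reduces to one regularised linear solve. A minimal sketch, assuming the standard ridge form (λI + QᵀQ)⁻¹QᵀY with a small penalty λ; the function name is ours:

```python
import numpy as np

def solve_weights(Q, Y, lam=1e-8):
    """W = (lam*I + Q^T Q)^{-1} Q^T Y: the ridge-regularised
    pseudo-inverse solution of Y = Q W (sketch). lam > 0 keeps the
    system well-posed even when Q^T Q is singular."""
    d = Q.shape[1]
    return np.linalg.solve(lam * np.eye(d) + Q.T @ Q, Q.T @ Y)
```

As λ → 0 this converges to Q⁺Y; a single solve like this (rather than iterative training) is what makes the final recognition step fast.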
The invention has the beneficial effects that:
the method for quickly identifying the vehicle type has the characteristics of high identification precision, high identification speed and the like. The method effectively separates the brightness information and the chrominance information of the image by adopting color space conversion, and extracts the key characteristics of the front face of the vehicle by using a multi-channel HOG characteristic extraction method; the dimensionality reduction is realized through PCA, so that the calculation complexity is reduced; performing sparse representation and nonlinear mapping operation on the reduced sample characteristics to further reduce characteristic correlation; and (4) solving a weight coefficient between the sample characteristics and the sample label to realize the effect of quick vehicle type identification.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention.
FIG. 2 is a 3-channel diagram of the RGB and YCbCr color spaces of the present invention.
FIG. 3 is a schematic representation of a portion of a sample image of the present invention.
FIG. 4 is a confusion matrix diagram of vehicle type recognition accuracy.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the method for quickly identifying a vehicle type provided by the present invention specifically comprises the following steps:
step 1) color space conversion: convert the color sample image X in the RGB color space to the YCbCr color space using the conversion relation of equation (13) to obtain the sample image X_YCbCr, separating the luminance information and chrominance information of the color image and reducing the influence of the illumination environment.

Y_(x,y)  =  0.299·R_(x,y) + 0.587·G_(x,y) + 0.114·B_(x,y)
Cb_(x,y) = −0.169·R_(x,y) − 0.331·G_(x,y) + 0.500·B_(x,y) + 128    (13)
Cr_(x,y) =  0.500·R_(x,y) − 0.419·G_(x,y) − 0.081·B_(x,y) + 128

where Y_(x,y), Cb_(x,y), Cr_(x,y) are the pixel values of the 3 color channels of pixel (x,y) in the YCbCr color space, and R_(x,y), G_(x,y), B_(x,y) are the pixel values of the 3 color channels in the RGB color space; the 3-channel maps of the original image and of the converted image are shown in FIG. 2.
Step 2) multi-channel HOG feature extraction: extract the HOG features of the three color channels Y, Cb and Cr of the sample image X_YCbCr, setting the image cell size to 10×10 pixels with a step size of 10, grouping 2×2 image cells into an image block, and using l = 18 gradient direction bins per cell; the sample feature after multi-channel HOG feature extraction is X_H.
Step 3) PCA dimension reduction: first compute the sample mean x̄ of the sample feature X_H by equation (14); then compute the covariance matrix C of the input sample features by equation (15); finally select the first p principal components of the covariance matrix and reduce X_H by equation (16), obtaining the reduced sample feature U.

x̄ = (1/N)·Σ_{i=1}^{N} x_i    (14)

C = (1/N)·(X_H − x̄)ᵀ·(X_H − x̄)    (15)

U = (X_H − x̄)·V_p    (16)

where V_p holds the eigenvectors of C corresponding to its p largest eigenvalues.
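The three PCA sub-steps above (sample mean, covariance, projection onto the first p principal components) can be sketched in a few lines of numpy; the function name is ours:

```python
import numpy as np

def pca_reduce(XH, p):
    """Project the samples (rows of XH) onto the top-p principal
    components of their covariance matrix (sketch)."""
    mean = XH.mean(axis=0)           # sample mean
    Xc = XH - mean                   # centered samples
    C = (Xc.T @ Xc) / XH.shape[0]    # covariance matrix
    vals, vecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    Vp = vecs[:, ::-1][:, :p]        # eigenvectors of the p largest
    return Xc @ Vp                   # projection onto p components
```

Reducing the HOG feature dimension here is what keeps the later dictionary training and pseudo-inverse solve cheap.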
Step 4) sparse representation: sparsely represent the reduced sample feature U by training complete dictionaries, as follows: first randomly generate a coefficient matrix W_r; then solve the optimal solution of equation (17) with the ADMM iteration described in equation (5), and take this optimum as the complete dictionary W_{e_i}; finally substitute the trained complete dictionary W_{e_i} and the sample feature U into equation (7) to realize the sparse representation of the sample feature.

W_{e_i} = argmin_W ‖φ(U·W_r)·W − U‖₂² + λ‖W‖₁    (17)

Repeat the above steps n times to obtain the n sparse feature maps Z_i, i = 1...n, and define Z^n = [Z_1, ..., Z_n] as the sparse feature map group.
Step 5) nonlinear mapping: complete the nonlinear mapping of the sparse feature map group Z^n with randomly generated orthonormal matrices, as follows: first randomly generate an orthonormal matrix W_{h_j}; then substitute the sparse feature map group Z^n and the orthonormal matrix W_{h_j} into equation (8) to generate the enhancement node H_j. Repeat the above steps m times to obtain the m enhancement nodes H_j, j = 1...m, and define H^m = [H_1, ..., H_m] as the enhancement node group.
Step 6) calculate the weight coefficient matrix: define Q = [Z^n | H^m] and establish the relation between Q and Y as equation (18).

Y = QW    (18)

where Y ∈ R^(N×C) and C is the number of vehicle type categories; the columns correspond to the 6 vehicle types Bus, Sedan, Microbus, Minivan, SUV and Truck. If the first sample is a Sedan, then column 2 of row 1 of the sample label matrix is 1 and the remaining columns of row 1 are 0; similarly, if the Num-th sample is an SUV, then column 5 of row Num is 1 and the remaining columns of row Num are 0.
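The one-hot label matrix described above can be sketched directly from the 6 class names given in the source; the helper name is ours:

```python
import numpy as np

# The 6 vehicle type categories, in column order, as given in the source.
CLASSES = ["Bus", "Sedan", "Microbus", "Minivan", "SUV", "Truck"]

def one_hot_labels(names):
    """Build the N x C sample label matrix Y: row i has a 1 in the
    column of sample i's vehicle class and 0 elsewhere."""
    Y = np.zeros((len(names), len(CLASSES)))
    for i, name in enumerate(names):
        Y[i, CLASSES.index(name)] = 1.0
    return Y
```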
Thus W = Q⁺Y, where Q⁺ is the pseudo-inverse of Q, obtained by equation (19); the weight coefficient matrix W then follows,

Q⁺ = lim_{λ→0} (λI + QᵀQ)^{−1}·Qᵀ,   W = (λI + QᵀQ)^{−1}·Qᵀ·Y    (19)

where I is the identity matrix and λ is the penalty coefficient.
Step 7) rapid vehicle type recognition: establish the test sample labels Y_test; apply color space conversion, multi-channel HOG feature extraction and PCA dimension reduction to the samples under test according to steps 1), 2) and 3); then, directly through the complete dictionaries W_{e_i} and the orthonormal matrices W_{h_j}, realize the sparse representation and nonlinear mapping of the test samples to obtain the feature map group Z^n_test and the enhancement node group H^m_test of the samples under test; finally multiply Q_test = [Z^n_test | H^m_test] by the weight coefficient matrix W to obtain the predicted label matrix Y_pre, and compare Y_pre with the test sample labels Y_test to identify the vehicle type, where Y_pre ∈ R^(N_test×C), N_test is the number of test samples and C_pre is the predicted vehicle type category.
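The prediction step above multiplies the test feature matrix by the learned weights and compares the result with the test labels. A minimal sketch, assuming (as is conventional for one-hot label matrices) that the predicted class of each test sample is the arg-max column of its row; the function names are ours:

```python
import numpy as np

def predict_types(Q_test, W, classes):
    """Classify test samples: Y_pre = Q_test @ W, where Q_test is the
    concatenated feature-map / enhancement-node matrix, then take the
    arg-max column of each row as the predicted class (sketch)."""
    Y_pre = Q_test @ W
    return [classes[i] for i in np.argmax(Y_pre, axis=1)]

def accuracy(pred, truth):
    """Fraction of test samples whose prediction matches the label."""
    return float(np.mean([p == t for p, t in zip(pred, truth)]))
```

Because recognition reduces to a single matrix multiplication per batch, the real-time throughput reported for the method is plausible on CPU hardware.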
The technical scheme of the invention was experimentally verified on the BIT-Vehicle database; some of the sample images are shown in FIG. 3. The experimental results show that the invention can effectively identify 6 vehicle types, namely Bus, Sedan, Microbus, Minivan, SUV and Truck, with a recognition accuracy of 96.69% and a recognition speed as high as 70.3 fps, achieving real-time vehicle type recognition; the vehicle type recognition accuracy confusion matrix is shown in FIG. 4. In addition, to illustrate the superiority of the method of the present invention, it was compared with the currently mainstream vehicle type identification methods; the experimental results are shown in Table 1:
TABLE 1
(Table 1 is reproduced only as an image in the original publication.)
Comparing the vehicle type recognition accuracies of Table 1, the recognition accuracy of the method of the present invention is superior to other methods such as the semi-supervised CNN method [Document 8] (Dong Z, Wu Y, Pei M, et al. Vehicle Type Classification Using Unsupervised Convolutional Neural Network [C]// Proceedings of the International Conference on Pattern Recognition. Stockholm: IEEE Press, 2014.). The experimental apparatus was configured as: a computer with a 3.40 GHz Core i7-6800K CPU and 16 GB RAM running Windows 10, with MATLAB 2016b 64-bit as the running environment.
The method combines color space conversion and a multi-channel HOG feature extraction method, reduces the influence of the illumination environment and simultaneously extracts the front face features of the vehicle; the PCA dimension reduction operation is used, so that the calculation complexity is reduced; carrying out sparse representation and nonlinear mapping on the sample characteristics to reduce the correlation among the sample characteristics; and establishing a relation between the sample characteristics and the sample labels, and solving a weight coefficient between the sample characteristics and the sample labels. The method has the advantages of high recognition precision, high recognition speed and the like in the aspect of vehicle type recognition.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A method for quickly identifying a vehicle type, characterized by comprising the following steps:
step 1) color space conversion: convert the color sample image X ∈ R^(N×M) in the RGB color space to the YCbCr color space to obtain the sample image X_YCbCr ∈ R^(N×M), separating the luminance information and chrominance information of the color image, where N is the number of samples and M the feature dimension;
step 2) multi-channel HOG feature extraction: extract the HOG features of the three color channels Y, Cb and Cr of the sample image X_YCbCr, obtaining the sample feature X_H ∈ R^(N×M_1) after multi-channel HOG feature extraction, where M_1 is the extracted feature dimension;
step 3) PCA dimension reduction: perform a dimension-reduction operation on the sample feature X_H to obtain the reduced sample feature U ∈ R^(N×M_2), where M_2 is the feature dimension after reduction;
step 4) sparse representation: train n complete dictionaries W_{e_i} (i = 1...n) by the alternating direction method of multipliers, then map the sample feature U through a mapping function φ to generate n sparse feature maps Z_i, i = 1...n, and define Z^n = [Z_1, ..., Z_n] as the sparse feature map group;
step 5) nonlinear mapping: randomly generate m orthonormal matrices W_{h_j} (j = 1...m), map the sparse feature map group Z^n to an orthogonal space through the nonlinear mapping function ξ to generate the enhancement nodes H_j, j = 1...m, and define H^m = [H_1, ..., H_m] as the enhancement node group;
step 6) calculate the weight coefficient matrix: establish the relation between the sparse feature map group Z^n, the enhancement node group H^m and the sample label matrix Y, and solve the weight coefficient matrix W between them, where Y ∈ R^(N×C) and C is the number of vehicle type categories;
step 7) rapid vehicle type recognition: establish the test sample labels Y_test; apply color space conversion, multi-channel HOG feature extraction and PCA dimension reduction to the samples under test according to steps 1), 2) and 3); then, directly through said complete dictionaries W_{e_i} and said orthonormal matrices W_{h_j}, realize the sparse representation and nonlinear mapping of the test samples to obtain the feature map group Z^n_test and the enhancement node group H^m_test of the samples under test; finally multiply Q_test = [Z^n_test | H^m_test] by the weight coefficient matrix W to obtain the predicted label matrix Y_pre, and compare Y_pre with the test sample labels Y_test to identify the vehicle type, where Y_pre ∈ R^(N_test×C), N_test is the number of test samples and C_pre is the predicted vehicle type category.
2. The method for rapidly identifying the vehicle type according to claim 1, characterized in that in step 1) the color sample image X ∈ R^(N×M) in the RGB color space is converted to the YCbCr color space by equation (1), obtaining the sample image X_YCbCr ∈ R^(N×M):

Y_(x,y)  =  0.299·R_(x,y) + 0.587·G_(x,y) + 0.114·B_(x,y)
Cb_(x,y) = −0.169·R_(x,y) − 0.331·G_(x,y) + 0.500·B_(x,y) + 128    (1)
Cr_(x,y) =  0.500·R_(x,y) − 0.419·G_(x,y) − 0.081·B_(x,y) + 128

where Y_(x,y), Cb_(x,y), Cr_(x,y) are the pixel values of the 3 color channels of pixel (x,y) in the YCbCr color space, and R_(x,y), G_(x,y), B_(x,y) are the pixel values of the 3 color channels in the RGB color space.
3. The method for rapidly identifying a vehicle type according to claim 1, wherein the step 2) comprises the steps of:
step 2-1) carrying out normalization operation on an input image by adopting a Gamma correction method, and adjusting the image contrast to reduce the influence of uneven illumination and noise of the image;
step 2-2) calculating the gradient magnitude G(x, y) and direction D(x, y) of each pixel point (x, y) in the image by formula (2), so as to capture the contour information of the target object,

G(x, y) = sqrt(G_x(x, y)² + G_y(x, y)²),
D(x, y) = arctan(G_y(x, y) / G_x(x, y)),    (2)

wherein G_x(x, y) and G_y(x, y) respectively represent the gradients of the pixel point along the x axis and the y axis of the two-dimensional plane rectangular coordinate system;
step 2-3) dividing the image area into two layers: the first layer consists of connected image units that do not overlap one another; the second layer consists of image blocks, each formed of several image units, and the image blocks may overlap one another;
step 2-4) equally dividing the gradient direction range of each image unit into l direction bins over 360 degrees, accumulating the gradient magnitudes of the image unit according to the bin to which each gradient direction belongs, and then carrying out contrast normalization on each image block; finally, concatenating the HOG features of all image blocks in the image to obtain the HOG feature vector of the image.
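The binning of step 2-4) can be sketched as follows; the number of bins l = 9 and the toy per-pixel data are illustrative assumptions, not values fixed by the claim.

```python
import numpy as np

# Accumulate the gradient magnitudes of one image unit (cell) into
# l direction bins spanning 360 degrees, then L2-normalize (a simple
# stand-in for the block-level contrast normalization).
l = 9
bin_width = 360.0 / l

# toy per-pixel gradient directions (degrees) and magnitudes of one cell
directions = np.array([10.0, 50.0, 50.0, 359.0])
magnitudes = np.array([1.0, 2.0, 0.5, 1.0])

hist = np.zeros(l)
bins = (directions // bin_width).astype(int) % l
np.add.at(hist, bins, magnitudes)            # magnitude accumulated per bin

hist = hist / (np.linalg.norm(hist) + 1e-6)  # contrast normalization
```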
4. The method for rapidly identifying the vehicle type according to claim 3, wherein the expressions of G_x(x, y) and G_y(x, y) in step 2-2) are as shown in formula (3),

G_x(x, y) = H(x+1, y) − H(x−1, y),
G_y(x, y) = H(x, y+1) − H(x, y−1),    (3)

wherein H(x, y) represents the luminance value of the pixel point (x, y).
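The central-difference gradients of formula (3) and the magnitude/direction of formula (2) can be computed vectorized; the toy image below is an assumption for illustration (arctan2 is used in place of arctan to cover all quadrants).

```python
import numpy as np

# Central-difference gradients on a toy luminance image H, then
# per-pixel gradient magnitude G and direction D.
H = np.array([[0., 0., 0.],
              [1., 2., 3.],
              [0., 0., 0.]])

Gx = np.zeros_like(H)
Gy = np.zeros_like(H)
Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]   # H(x+1, y) - H(x-1, y)
Gy[1:-1, :] = H[2:, :] - H[:-2, :]   # H(x, y+1) - H(x, y-1)

G = np.sqrt(Gx**2 + Gy**2)           # gradient magnitude
D = np.arctan2(Gy, Gx)               # gradient direction (radians)
```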
5. The method for rapidly identifying the vehicle type according to claim 1, wherein in step 3) the sample features X_H are reduced in dimension by formula (4) to obtain the dimension-reduced sample features U,

U = (X_H − μ)P,    (4)

wherein μ = (1/N)·Σ_(i=1)^N x_i is the sample mean of X_H; P consists of the eigenvectors corresponding to the largest eigenvalues of the covariance matrix S = (1/N)·Σ_(i=1)^N (x_i − μ)(x_i − μ)^T of X_H.
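A minimal PCA sketch for step 3), projecting centered features onto the leading eigenvectors of the covariance matrix; sizes and the choice k = 3 are illustrative assumptions.

```python
import numpy as np

# PCA dimension reduction: center the HOG features, form the covariance
# matrix, keep the top-k eigenvectors, and project.
rng = np.random.default_rng(0)
X_H = rng.standard_normal((50, 10))   # N samples x M-dimensional HOG features

mu = X_H.mean(axis=0)                 # sample mean
Xc = X_H - mu                         # centered features
S = (Xc.T @ Xc) / len(X_H)            # covariance matrix
eigval, eigvec = np.linalg.eigh(S)    # eigenvalues in ascending order
k = 3
P = eigvec[:, -k:]                    # top-k principal directions
U = Xc @ P                            # dimension-reduced features
```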
6. The method for rapidly identifying the vehicle type according to claim 1, wherein step 4) trains the n complete dictionaries D̂_i by the alternating direction method of multipliers of formula (5),

D̂^(t+1) = (U^T U + ρI)^(−1) (U^T X_H + ρ(ê^t − P^t)),
ê^(t+1) = S_(λ/ρ)(D̂^(t+1) + P^t),    (5)
P^(t+1) = P^t + D̂^(t+1) − ê^(t+1),

wherein ρ is a constant and ρ > 0; I is an identity matrix; S_(λ/ρ)(·) is the soft-threshold function of formula (6); ê_i is the dual term of D̂_i; P^t represents the Lagrange multiplier; λ represents a penalty coefficient; i indexes the ith complete dictionary D̂_i and its dual term ê_i; t represents the number of iterations;

S_κ(x) = sign(x)·max(|x| − κ, 0);    (6)

the dimension-reduced sample features U are sparsified by formula (7) to obtain the n sparse feature maps Z_i, i = 1, ..., n, and Z^n = [Z_1, ..., Z_n] is defined as the sparse feature map group,

Z_i = φ(U D̂_i), i = 1, ..., n,    (7)

wherein the activation function φ(x) = (x − min(x)) / (max(x) − min(x)).
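The ADMM iteration of claim 6 follows the standard lasso pattern: a quadratic subproblem, a soft-threshold step, and a multiplier update. The sketch below is a generic textbook ADMM with the soft-threshold operator of formula (6); the variable names, sizes, and the self-representation target are illustrative assumptions, not the patent's exact updates.

```python
import numpy as np

def soft_threshold(x, kappa):
    # elementwise soft-threshold operator of formula (6)
    return np.sign(x) * np.maximum(np.abs(x) - kappa, 0.0)

def admm_lasso(U, X, lam=0.1, rho=1.0, iters=100):
    # ADMM for min_D ||U D - X||_F^2 + lam * ||D||_1
    M, K = U.shape[1], X.shape[1]
    D = np.zeros((M, K)); E = np.zeros_like(D); P = np.zeros_like(D)
    G = np.linalg.inv(U.T @ U + rho * np.eye(M))  # cached factor
    for _ in range(iters):
        D = G @ (U.T @ X + rho * (E - P))         # quadratic subproblem
        E = soft_threshold(D + P, lam / rho)      # sparsifying step
        P = P + D - E                             # dual (multiplier) update
    return E

rng = np.random.default_rng(0)
U = rng.standard_normal((30, 8))
D = admm_lasso(U, U)   # sparse self-representation dictionary (assumption)
```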
7. The method for rapidly identifying the vehicle type according to claim 1, wherein in step 5) the m orthogonal norm matrices Ĥ_j, j = 1, ..., m, are randomly generated, and the nonlinear mapping of the sparse feature map group Z^n is realized by formula (8),

H^m = [ξ(Z^n Ĥ_1), ..., ξ(Z^n Ĥ_m)],    (8)

wherein the nonlinear mapping function ξ is as shown in formula (9):

ξ(x) = (e^(s·x) − e^(−s·x)) / (e^(s·x) + e^(−s·x)),    (9)

wherein s ∈ (0, 1] is a contraction factor and j indexes the jth orthogonal norm matrix Ĥ_j.
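A sketch of this mapping, assuming formula (9) is the shrunken hyperbolic tangent tanh(s·x) (consistent with the contraction factor s, though the patent's exact form is not fully recoverable); the matrix sizes and m = 3 are illustrative.

```python
import numpy as np

# Map the sparse feature group through random orthonormal matrices and
# a squashing nonlinearity to build the enhancement node group.
rng = np.random.default_rng(0)
Zn = rng.standard_normal((20, 6))   # sparse feature map group

def random_orthonormal(d, rng):
    # QR of a Gaussian matrix yields an orthonormal matrix
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return Q

s = 0.8                              # contraction factor in (0, 1]
H_groups = [np.tanh(s * (Zn @ random_orthonormal(6, rng)))
            for _ in range(3)]       # m = 3 mappings (assumption)
Hm = np.hstack(H_groups)             # enhancement node group, shape (20, 18)
```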
8. The method for rapidly identifying the vehicle type according to claim 1, wherein in step 6) the relation between Q and the sample label matrix Y is established according to formula (10),

Y = QW,    (10)

wherein Q = [Z^n | H^m], so that W = Q^+ Y, wherein Q^+ is the pseudo-inverse of Q; Q^+ is obtained by formula (11), and the weight coefficient matrix W is finally obtained,

Q^+ = lim_(λ→0) (λI + Q^T Q)^(−1) Q^T,    (11)
W = (λI + Q^T Q)^(−1) Q^T Y,

wherein I is an identity matrix and λ represents a penalty coefficient.
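The ridge-regularized pseudo-inverse of formula (11) is a one-liner in practice; the sketch below (sizes and random data are assumptions) also checks that for a tiny λ it matches the Moore-Penrose solution.

```python
import numpy as np

# Ridge-regularized least-squares weights W = (lam*I + Q^T Q)^(-1) Q^T Y,
# which tends to the pseudo-inverse solution Q^+ Y as lam -> 0.
rng = np.random.default_rng(0)
Q = rng.standard_normal((40, 12))   # [Z^n | H^m], illustrative sizes
Y = rng.standard_normal((40, 3))    # one-hot label matrix in practice

lam = 1e-8
W = np.linalg.solve(lam * np.eye(12) + Q.T @ Q, Q.T @ Y)

# Moore-Penrose reference solution for comparison
W_pinv = np.linalg.pinv(Q) @ Y
```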
CN201811410036.1A 2018-11-23 2018-11-23 Method for quickly identifying vehicle type Active CN109598218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811410036.1A CN109598218B (en) 2018-11-23 2018-11-23 Method for quickly identifying vehicle type


Publications (2)

Publication Number Publication Date
CN109598218A CN109598218A (en) 2019-04-09
CN109598218B true CN109598218B (en) 2023-04-18

Family

ID=65958812


Country Status (1)

Country Link
CN (1) CN109598218B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930812A (en) * 2016-04-27 2016-09-07 东南大学 Vehicle brand type identification method based on fusion feature sparse coding model
CN107330463A (en) * 2017-06-29 2017-11-07 南京信息工程大学 Model recognizing method based on CNN multiple features combinings and many nuclear sparse expressions
CN108122008A (en) * 2017-12-22 2018-06-05 杭州电子科技大学 SAR image recognition methods based on rarefaction representation and multiple features decision level fusion
CN108647690A (en) * 2017-10-17 2018-10-12 南京工程学院 The sparse holding projecting method of differentiation for unconstrained recognition of face

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324047B (en) * 2011-09-05 2013-06-12 西安电子科技大学 Hyper-spectral image ground object recognition method based on sparse kernel representation (SKR)
CN105447503B (en) * 2015-11-05 2018-07-03 长春工业大学 Pedestrian detection method based on rarefaction representation LBP and HOG fusion
CN106971196A (en) * 2017-03-02 2017-07-21 南京信息工程大学 A kind of fire fighting truck recognition methods of the nuclear sparse expression grader based on cost-sensitive


Also Published As

Publication number Publication date
CN109598218A (en) 2019-04-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant