CN109598218A - Fast vehicle type recognition method - Google Patents

Fast vehicle type recognition method Download PDF

Info

Publication number
CN109598218A
CN109598218A CN201811410036.1A CN201811410036A CN 109598218 A
Authority
CN
China
Prior art keywords
sample
image
matrix
formula
sparsification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811410036.1A
Other languages
Chinese (zh)
Other versions
CN109598218B (en)
Inventor
李洪均
周泽
胡伟
陈俊杰
李壮伟
王娇
孙婉婷
张雯敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Nantong Research Institute for Advanced Communication Technologies Co Ltd
Original Assignee
Nantong University
Nantong Research Institute for Advanced Communication Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University, Nantong Research Institute for Advanced Communication Technologies Co Ltd filed Critical Nantong University
Priority to CN201811410036.1A priority Critical patent/CN109598218B/en
Publication of CN109598218A publication Critical patent/CN109598218A/en
Application granted granted Critical
Publication of CN109598218B publication Critical patent/CN109598218B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention proposes a fast vehicle type recognition method that mainly addresses the accuracy and real-time problems in vehicle type recognition. The method first combines color space conversion with a multi-channel HOG feature extraction algorithm, reducing the influence of the lighting environment while extracting vehicle front-face features; PCA dimensionality reduction is then applied to reduce the sample feature dimension and the computational complexity; next, sparse representation and nonlinear mapping are applied to the sample features to reduce the correlation between features; finally, the relationship between the sample features and the sample labels is established and the weight coefficients between them are solved, achieving fast vehicle type recognition. Experimental results on the BIT-Vehicle database show that the recognition accuracy of the proposed method is 96.69% and the recognition speed is 70.3 fps, improving vehicle type recognition accuracy while ensuring real-time performance.

Description

Fast vehicle type recognition method
Technical field
The invention belongs to the technical field of computer vision, and in particular relates to a fast vehicle type recognition method.
Background technique
In recent years, with the popularization of traffic monitoring equipment and the rapid development of computer vision, computer vision techniques based on traffic video images have been applied in modern intelligent transportation systems. Real-time vehicle type recognition, as an important component of intelligent transportation systems, has a wide range of applications, such as highway toll collection, traffic flow statistics, urban traffic monitoring and criminal investigation assistance [Document 1] (Zhang F, Wilkie D, Zheng Y, et al. Sensing the Pulse of Urban Refueling Behavior[C]//Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing. New York: ACM, 2013: 13-22.).
Current research on vehicle type recognition can be divided into three main categories. (1) Methods based on 3D models [Document 2] (Zhang Z, Tan T, Huang K, et al. Three-dimensional Deformable-model[J]. IEEE Transactions on Image Processing, 2012, 21(1): 1-13.) build a 3D model for each vehicle type and then recognize vehicles by model matching. Prokaj et al. [Document 3] (Prokaj J, Medioni G. 3-D Model Based Vehicle Recognition[C]//Proceedings of the Workshop on Applications of Computer Vision. Snowbird: IEEE Press, 2013: 1-7.) established a 3D model for each vehicle class, projected the vehicle to be recognized into the model space, and achieved a recognition accuracy of 87.5% by model matching. (2) Methods based on deep network models [Document 4] (Voulodimos A, Doulamis N, Doulamis A, et al. Deep Learning for Computer Vision: A Brief Review[J]. Comput Intell Neurosci, 2018, 2018: 1-13.) first extract features from the vehicle to be recognized, then train a network classifier with the obtained feature vectors, and use the trained classifier to identify the vehicle type. Lei Qian et al. [Document 5] (Lei Qian, Hao Cunming, Zhang Weiping. Vehicle Recognition Based on Super-resolution and Deep Neural Networks[J]. Computer Science, 2018, 45(s1): 230-233.) used the 13-layer deep neural network CaffeNet to realize vehicle type recognition with an accuracy of 95.2%, but GPU acceleration was required for network training. (3) Methods based on feature extraction [Document 6] (Manzoor M A, Morgan Y. Vehicle Make and Model Classification System Using Bag of SIFT Features[C]//Proceedings of the Annual Computing and Communication Workshop and Conference. Las Vegas: IEEE Press, 2017: 1-5.) use feature extractors designed with prior knowledge to extract fixed features of vehicle images, such as SIFT features, Harris corner features and HOG features. Zhang Tong et al. [Document 7] (Zhang Tong, Zhang Ping. Vehicle Type Recognition Method Based on Improved Harris Corner Detection[J]. Computer Science, 2017, 44(s2): 257-259.) proposed an improved Harris corner detection method to extract the corner features of vehicle images and recognized 5 vehicle classes by corner matching with an accuracy of 90%. The matching principle of 3D-model-based methods is simple, but the modeling process is complicated, the robustness is poor and the recognition accuracy is relatively low; methods based on deep network models have strong fault tolerance and higher recognition accuracy, but require a large number of training samples, have high computational complexity and are time-consuming, making it difficult to meet real-time requirements; methods based on feature extraction, owing to their fixed feature extraction schemes, extract features faster than deep network models, but their recognition accuracy is relatively low.
Summary of the invention
It is an object of the invention to overcome the deficiencies of the above prior art by proposing a fast vehicle type recognition method. The method combines color space conversion with multi-channel HOG feature extraction to reduce the influence of the lighting environment while extracting vehicle front-face features; computational complexity is reduced by principal component analysis (PCA) dimensionality reduction; sparse representation and nonlinear mapping are applied to the sample features to reduce feature correlation; and the relationship between the sample features and the sample labels is established and the weight coefficients between them are solved, achieving fast vehicle type recognition. The method is specifically realized by the following technical scheme:
The fast vehicle type recognition method includes the following steps:
Step 1) Color space conversion: the color vehicle image X ∈ R^{N×M} in the RGB color space is transformed into the YCbCr color space to obtain the sample image X_YCbCr ∈ R^{N×M}, separating the luminance information and the chrominance information of the color image, where N denotes the number of samples and M denotes the feature dimension;
Step 2) Multi-channel HOG feature extraction: the HOG features of the three color channels Y, Cb and Cr are extracted from the sample image X_YCbCr, respectively, to obtain the sample features X_H ∈ R^{N×M1} after multi-channel HOG feature extraction, where M1 denotes the extracted feature dimension;
Step 3) PCA dimensionality reduction: dimensionality reduction is applied to the sample features X_H to obtain the reduced sample features U ∈ R^{N×M2}, where M2 denotes the feature dimension after reduction;
Step 4) Sparse representation: n dictionaries are trained by the alternating direction method of multipliers; the sample features U are then mapped through the mapping function φ to generate n sparse feature maps Z_i, i = 1...n, and Z^n = [Z_1, ..., Z_n] is defined as the sparse feature map group;
Step 5) Nonlinear mapping: m orthonormal matrices are generated at random; through the nonlinear mapping function ξ, the sparse feature map group Z^n is nonlinearly mapped into the orthogonal space to generate the enhancement nodes H_j, j = 1...m, and H^m = [H_1, ..., H_m] is defined as the enhancement node group;
Step 6) Calculation of the weight coefficient matrix: the relationship between the sparse feature map group Z^n, the enhancement node group H^m and the sample label matrix Y is established, and the weight coefficient matrix W between them is solved, where Y ∈ R^{N×C} and C denotes the number of vehicle classes;
Step 7) Fast vehicle type recognition: the test sample labels Y_test are established, and the samples to be tested are processed according to Step 1), Step 2) and Step 3) to perform color space conversion, multi-channel HOG feature extraction and PCA dimensionality reduction, respectively; the trained dictionaries and orthonormal matrices are then used directly to realize the sparse representation and nonlinear mapping of the test samples, giving the feature map group and the enhancement node group of the samples to be tested; finally, these are multiplied by the weight coefficient matrix W to obtain the predicted label matrix Y_pre, and Y_pre is matched against the test sample labels Y_test to identify the vehicle type, where N_test is the number of test samples and C_pre is the predicted vehicle class.
In a further design of the fast vehicle type recognition method, in Step 1) the color vehicle image X ∈ R^{N×M} in the RGB color space is transformed into the YCbCr color space through formula (1) to obtain the sample image X_YCbCr ∈ R^{N×M},
Wherein, Y_(x,y), Cb_(x,y) and Cr_(x,y) respectively denote the pixel values of the three color channels of pixel (x, y) in the YCbCr color space; R_(x,y), G_(x,y) and B_(x,y) respectively denote the pixel values of the three color channels in the RGB color space.
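For illustration, a minimal Python sketch of the Step 1) conversion is given below. Since the exact constants of formula (1) are not reproduced here, the sketch assumes the full-range ITU-R BT.601 coefficients commonly used for RGB-to-YCbCr conversion.

```python
import numpy as np

def rgb_to_ycbcr(img_rgb):
    """Convert an H x W x 3 RGB image to YCbCr (float).

    Assumes full-range ITU-R BT.601 coefficients; the constants of the
    patent's formula (1) may differ slightly.
    """
    img = img_rgb.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299   * r + 0.587   * g + 0.114   * b
    cb = -0.1687  * r - 0.3313  * g + 0.5     * b + 128.0
    cr =  0.5     * r - 0.4187  * g - 0.0813  * b + 128.0
    return np.stack([y, cb, cr], axis=-1)
```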
In a further design of the fast vehicle type recognition method, Step 2) includes the following steps:
Step 2-1) The input image is normalized by Gamma correction to adjust the image contrast and reduce the influence of uneven illumination and noise;
Step 2-2) The gradient magnitude G(x, y) and direction D(x, y) of each pixel (x, y) in the image are calculated through formula (2) to capture the contour information of the target object;
Wherein, G_x(x, y) and G_y(x, y) respectively denote the gradients of the pixel in the x-axis and y-axis directions of the two-dimensional rectangular coordinate system;
Step 2-3) The image region is divided into two layers: the first layer consists of interconnected image cells that do not overlap with each other, and the second layer consists of image blocks composed of several image cells, where the image blocks may overlap;
Step 2-4) The 360-degree gradient direction range of each image cell is divided into l direction bins; the gradient magnitudes of the image cell are accumulated according to the bin to which each gradient direction belongs, and contrast normalization is then applied to each image block; the HOG features of all image blocks in the image are concatenated to obtain the HOG feature vector of the image.
In a further design of the fast vehicle type recognition method, in Step 2-2) the expressions of G_x(x, y) and G_y(x, y) are given by formula (3),
Wherein, H(x, y) denotes the brightness value of pixel (x, y).
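A sketch of the multi-channel HOG extraction of Step 2), using the hog() function of scikit-image, is given below; the cell size, block size and number of direction bins follow the values given in the embodiment (10 × 10 cells, 2 × 2 cells per block, l = 18 bins). Note that scikit-image bins unsigned gradients over 180 degrees and computes central-difference gradients internally, so this only approximates formulas (2) and (3) and is not the patent's exact implementation.

```python
import numpy as np
from skimage.feature import hog

def multichannel_hog(img_ycbcr):
    """Concatenate HOG descriptors of the Y, Cb and Cr channels.

    Approximation: skimage bins unsigned gradients over 180 degrees,
    while the patent divides the full 360-degree range into l = 18 bins.
    Cell/block sizes follow the embodiment (10x10 cells, 2x2 cells per block).
    """
    feats = []
    for c in range(3):
        feats.append(hog(img_ycbcr[..., c],
                         orientations=18,
                         pixels_per_cell=(10, 10),
                         cells_per_block=(2, 2),
                         block_norm='L2-Hys'))
    # one row of the sample feature matrix X_H
    return np.concatenate(feats)
```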
In a further design of the fast vehicle type recognition method, in Step 3) dimensionality reduction is applied to the sample features X_H through formula (4) to obtain the reduced sample features U,
Wherein the two quantities in formula (4) are the sample mean of X_H and the covariance matrix of X_H, respectively.
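A minimal sketch of the PCA projection of Step 3), written from the description of formulas (14)-(16) (sample mean, covariance matrix, projection onto the first p principal components); the eigen-decomposition route shown here is an assumption, since formula (4) itself is not reproduced.

```python
import numpy as np

def pca_reduce(X_H, p):
    """Project the N x M1 feature matrix X_H onto its first p principal components."""
    mean = X_H.mean(axis=0)                    # sample mean (formula (14))
    Xc = X_H - mean                            # centered features
    cov = Xc.T @ Xc / (X_H.shape[0] - 1)       # covariance matrix (formula (15))
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigen-decomposition of the symmetric covariance
    order = np.argsort(eigvals)[::-1][:p]      # indices of the top-p principal directions
    P = eigvecs[:, order]                      # M1 x p projection matrix
    U = Xc @ P                                 # reduced sample features (formula (16))
    return U, mean, P
```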
In a further design of the fast vehicle type recognition method, in Step 4) the n dictionaries are trained by the alternating direction method of multipliers given by formula (5),
Wherein, ρ is a constant and ρ > 0; I is the identity matrix; the soft-thresholding function is given by formula (6); the corresponding dual term of each dictionary is also updated; P_t denotes the Lagrange multiplier; λ denotes the penalty coefficient; e_i denotes the index of the i-th dictionary and its dual term; t denotes the number of iterations.
The reduced sample features U are sparsified through formula (7) to obtain the n sparse feature maps Z_i, i = 1...n, and Z^n = [Z_1, ..., Z_n] is defined as the sparse feature map group,
Wherein, the activation function φ(x) = (x - min(x)) / (max(x) - min(x)).
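Since formulas (5)-(7) are not reproduced here, the sketch below shows only a generic form of this step: a standard lasso problem solved by ADMM with soft-thresholding (the usual shape of such a dictionary-fitting iteration), followed by the min-max activation φ of formula (7). The objective min (1/2)||AW - B||² + λ||W||₁ and the way the trained dictionary is applied to U are assumptions.

```python
import numpy as np

def soft_threshold(X, tau):
    """Element-wise soft-thresholding (the role played by formula (6))."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def admm_lasso(A, B, lam=1e-3, rho=1.0, iters=50):
    """Solve min_W 0.5*||A W - B||_F^2 + lam*||W||_1 by ADMM.

    A standard lasso/ADMM iteration with soft-thresholding, assumed here
    as the generic form of the dictionary-training step of formula (5).
    """
    n = A.shape[1]
    W = np.zeros((n, B.shape[1]))
    O = np.zeros_like(W)          # auxiliary (sparse) variable
    P = np.zeros_like(W)          # scaled Lagrange multiplier
    inv = np.linalg.inv(A.T @ A + rho * np.eye(n))
    AtB = A.T @ B
    for _ in range(iters):
        W = inv @ (AtB + rho * (O - P))       # quadratic subproblem
        O = soft_threshold(W + P, lam / rho)  # sparsity-inducing update
        P = P + W - O                         # multiplier update
    return O

def minmax_activation(Z):
    """phi(x) = (x - min(x)) / (max(x) - min(x)), as given for formula (7)."""
    return (Z - Z.min()) / (Z.max() - Z.min() + 1e-12)

# Assumed usage: given reduced features U and a trained dictionary D_i of
# compatible shape, one sparse feature map would be Z_i = minmax_activation(U @ D_i).
```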
In a further design of the fast vehicle type recognition method, in Step 5) the m orthonormal matrices are generated at random and the nonlinear mapping of the sparse feature map group Z^n is realized through formula (8),
Wherein the nonlinear mapping function ξ is given by formula (9):
Wherein, s ∈ (0, 1] is the contraction factor and h_j denotes the index of the j-th orthonormal matrix.
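A sketch of the Step 5) enhancement-node generation is given below. The tanh nonlinearity used for ξ and the placement of the contraction factor s are assumptions, since formulas (8) and (9) are not reproduced here; only the randomly generated orthonormal matrices and s ∈ (0, 1] follow the text.

```python
import numpy as np

def random_orthonormal(n_in, n_out, seed=None):
    """Random n_in x n_out matrix with orthonormal columns (requires n_out <= n_in)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n_in, n_out)))
    return Q

def enhancement_node(Zn, Wh, s=0.8):
    """One enhancement node H_j = xi(s * Zn @ Wh).

    tanh is assumed for the nonlinear mapping function xi of formula (9);
    s in (0, 1] is the contraction factor.
    """
    return np.tanh(s * Zn @ Wh)
```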
In a further design of the fast vehicle type recognition method, in Step 6) the relationship between Q and the sample label matrix Y is established according to formula (10),
Y = QW (10)
Wherein, Q = [Z^n | H^m], so that W = Q⁺Y, where Q⁺ is the pseudo-inverse of Q; Q⁺ is solved through formula (11), and the weight coefficient matrix W is finally obtained,
Wherein I is the identity matrix and λ denotes the penalty coefficient.
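The weight solution of formulas (10)-(11) admits a direct implementation. The ridge-regularized pseudo-inverse below, Q⁺ = (QᵀQ + λI)⁻¹Qᵀ, is the usual closed form for this kind of step and is assumed here, since formula (11) itself is not reproduced.

```python
import numpy as np

def solve_output_weights(Zn, Hm, Y, lam=1e-4):
    """W = Q^+ Y with the ridge-regularized pseudo-inverse of Q = [Zn | Hm]."""
    Q = np.hstack([Zn, Hm])                            # N x (feature + enhancement) matrix
    I = np.eye(Q.shape[1])
    Q_pinv = np.linalg.solve(Q.T @ Q + lam * I, Q.T)   # (Q^T Q + lam*I)^{-1} Q^T
    return Q_pinv @ Y                                  # weight coefficient matrix W
```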
Beneficial effects of the present invention:
The fast vehicle type recognition method of the invention features high recognition accuracy and fast recognition speed. In this method, color space conversion is used to effectively separate the luminance information and the chrominance information of the image, and the multi-channel HOG feature extraction method extracts the key features of the vehicle front face; PCA dimensionality reduction reduces the computational complexity; sparse representation and nonlinear mapping are applied to the reduced sample features to further reduce feature correlation; and the weight coefficients between the sample features and the sample labels are solved, achieving fast vehicle type recognition.
Detailed description of the invention
Fig. 1 is a flow diagram of the method of the invention.
Fig. 2 shows the three-channel images of the RGB and YCbCr color spaces.
Fig. 3 shows part of the sample images.
Fig. 4 is the confusion matrix of the vehicle type recognition accuracy of the invention.
Specific embodiment
The present invention will be described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the specific implementation of the fast vehicle type recognition method provided by the invention includes the following steps:
Step 1) Color space conversion: using the transformation relationship of formula (13), the color vehicle image X in the RGB color space is transformed into the YCbCr color space to obtain the sample image X_YCbCr, realizing the separation of the luminance information and the chrominance information of the color image and reducing the influence of the lighting environment.
Wherein Y_(x,y), Cb_(x,y) and Cr_(x,y) respectively denote the pixel values of the three color channels of pixel (x, y) in the YCbCr color space; R_(x,y), G_(x,y) and B_(x,y) respectively denote the pixel values of the three color channels in the RGB color space. The three color channel images of the original image and the three color channel images after conversion are shown in Fig. 2.
Step 2) Multi-channel HOG feature extraction: the HOG features of the three color channels Y, Cb and Cr are extracted from the sample image X_YCbCr, respectively. The image cell size is set to size = 10, the stride to stride = 10, 2 × 2 image cells form one image block, and the number of gradient direction bins of each image cell is l = 18; the sample features obtained after multi-channel HOG feature extraction are X_H.
Step 3) PCA dimensionality reduction: the sample mean of the sample features X_H is first calculated by formula (14); the covariance matrix of the input sample features is then calculated by formula (15); finally, the first p principal components of the covariance matrix are selected and X_H is reduced in dimension through formula (16) to obtain the reduced sample features U.
Step 4) Sparse representation: the reduced sample features U are sparsely represented by training dictionaries. The specific steps are as follows:
A coefficient matrix is first generated at random; the optimal solution of formula (17) is then obtained by the alternating-direction-method-of-multipliers iteration described in formula (5) and taken as the dictionary; finally, the trained dictionary and the sample features U are substituted into formula (7) to realize the sparse representation of the sample features.
The above steps are repeated n times to obtain n sparse feature maps Z_i, i = 1...n, and Z^n = [Z_1, ..., Z_n] is defined as the sparse feature map group.
Step 5) Nonlinear mapping: the generated orthonormal matrices are used to complete the nonlinear mapping of the sparse feature map group Z^n. The specific steps are as follows:
An orthonormal matrix is first generated at random; the sparse feature map group Z^n and the orthonormal matrix are then substituted into formula (8) to generate an enhancement node H_j. The above steps are repeated m times to obtain m enhancement nodes H_j, j = 1...m, and H^m = [H_1, ..., H_m] is defined as the enhancement node group.
Step 6) Calculate the weight coefficient matrix: Q = [Z^n | H^m] is defined, and the relationship between Q and Y is established as formula (18).
Y=QW (18)
Wherein Y ∈ R^{N×C}, C denotes the number of vehicle classes, and the columns respectively represent the 6 vehicle classes Bus, Sedan, Microbus, Minivan, SUV and Truck. If the first sample is a Sedan, the 2nd column of the 1st row of the sample label matrix is set to 1 and the remaining columns of the 1st row are 0; similarly, if the Num-th sample is an SUV, the 5th column of the Num-th row of the sample label matrix is set to 1 and the remaining columns of the Num-th row are 0.
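The construction of the sample label matrix Y described above is a standard one-hot encoding; a small sketch follows, with the class names and column order taken from the text.

```python
import numpy as np

CLASSES = ['Bus', 'Sedan', 'Microbus', 'Minivan', 'SUV', 'Truck']

def one_hot_labels(class_names):
    """Build the N x C sample label matrix Y from a list of class names."""
    Y = np.zeros((len(class_names), len(CLASSES)))
    for row, name in enumerate(class_names):
        Y[row, CLASSES.index(name)] = 1.0
    return Y

# e.g. one_hot_labels(['Sedan', 'SUV']) puts a 1 in the 2nd column of row 0
# (Sedan) and in the 5th column of row 1 (SUV), all other entries being 0.
```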
Therefore W = Q⁺Y can be obtained, where Q⁺ is the pseudo-inverse of Q; Q⁺ is solved through formula (19), and the weight coefficient matrix W is finally obtained,
Wherein I is the identity matrix and λ denotes the penalty coefficient.
Step 7) Fast vehicle type recognition: the test sample labels Y_test are established, and the samples to be tested are processed according to Step 1), Step 2) and Step 3) to perform color space conversion, multi-channel HOG feature extraction and PCA dimensionality reduction, respectively; the trained dictionaries and orthonormal matrices are then used directly to realize the sparse representation and nonlinear mapping of the test samples, giving the feature map group and the enhancement node group of the samples to be tested; finally, these are multiplied by the weight coefficient matrix W to obtain the predicted label matrix Y_pre, and Y_pre is matched against the test sample labels Y_test to identify the vehicle type, where N_test is the number of test samples and C_pre is the predicted vehicle class.
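A sketch of the Step 7) prediction stage, assuming helper routines like those sketched earlier produce Q_test = [Z^n_test | H^m_test]; taking the predicted class as the column with the largest response in each row of the predicted label matrix is an assumption about how the matching against Y_test is performed.

```python
import numpy as np

CLASSES = ('Bus', 'Sedan', 'Microbus', 'Minivan', 'SUV', 'Truck')

def predict(Q_test, W, class_names=CLASSES):
    """Predict vehicle classes from Q_test = [Zn_test | Hm_test] and the trained W."""
    Y_pre = Q_test @ W                  # predicted label matrix
    idx = np.argmax(Y_pre, axis=1)      # assumed decision rule: largest response per row
    return [class_names[i] for i in idx]

def accuracy(pred, truth):
    """Fraction of test samples whose predicted class matches the ground truth."""
    return float(np.mean([p == t for p, t in zip(pred, truth)]))
```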
The technical scheme of the invention was experimentally verified on the BIT-Vehicle database; part of the sample images are shown in Fig. 3. The experimental results show that the invention can effectively identify 6 vehicle classes, namely Bus, Sedan, Microbus, Minivan, SUV and Truck, with a recognition accuracy of 96.69% and a recognition speed of up to 70.3 fps, achieving real-time vehicle type recognition; the confusion matrix of the vehicle type recognition accuracy is shown in Fig. 4. In addition, to illustrate the superiority of the method of the invention, it is compared with current mainstream vehicle type recognition methods; the experimental results are shown in Table 1:
Table 1
From the vehicle type recognition accuracy comparison in Table 1, the recognition accuracy of the method of the invention is better than that of the other methods, such as the semi-supervised CNN method [Document 8] (Dong Z, Wu Y, Pei M, et al. Vehicle Type Classification Using Unsupervised Convolutional Neural Network[C]//Proceedings of the International Conference on Pattern Recognition. Stockholm: IEEE Press, 2014: 172-177.), the deep Boltzmann machine method [Document 9] (Santos D F S, Souza G B D, Marana A N. A 2D Deep Boltzmann Machine for Robust and Fast Vehicle Classification[C]//Proceedings of the Sibgrapi Conference on Graphics, Patterns and Images. Niteroi: IEEE Press, 2017: 155-162.), the deformable-part-model-based method [Document 10] (Bai S, Liu Z, Yao C. Classify Vehicles in Traffic Scene Images with Deformable Part-based Models[J]. Machine Vision & Applications, 2017, 29(3): 1-11.) and the broad learning system method [Document 11] (Chen C, Liu Z. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture[J]. IEEE Transactions on Neural Networks & Learning Systems, 2018, 29(1): 10-24.), while the recognition speed also meets the real-time requirement. The experimental equipment was a Windows 10 assembled computer with a Core i7-6800K CPU at 3.40 GHz and 16 GB RAM, and the running environment was 64-bit MATLAB 2016b.
The invention combines color space conversion with the multi-channel HOG feature extraction method to reduce the influence of the lighting environment while extracting vehicle front-face features; PCA dimensionality reduction reduces the computational complexity; sparse representation and nonlinear mapping are applied to the sample features to reduce the correlation between the sample features; and the relationship between the sample features and the sample labels is established and the weight coefficients between them are solved. The method of the invention has the advantages of high recognition accuracy and fast recognition speed in vehicle type recognition.
The foregoing is only a preferred embodiment of the invention, but the protection scope of the invention is not limited thereto. Any change or substitution that can readily be conceived by a person skilled in the art within the technical scope disclosed by the invention shall be covered by the protection scope of the invention. Therefore, the protection scope of the invention shall be subject to the protection scope of the claims.

Claims (8)

1. A fast vehicle type recognition method, characterized by comprising the following steps:
Step 1) Color space conversion: the color vehicle image X ∈ R^{N×M} in the RGB color space is transformed into the YCbCr color space to obtain the sample image X_YCbCr ∈ R^{N×M}, separating the luminance information and the chrominance information of the color image, where N denotes the number of samples and M denotes the feature dimension;
Step 2) Multi-channel HOG feature extraction: the HOG features of the three color channels Y, Cb and Cr are extracted from the sample image X_YCbCr, respectively, to obtain the sample features X_H ∈ R^{N×M1} after multi-channel HOG feature extraction, where M1 denotes the extracted feature dimension;
Step 3) PCA dimensionality reduction: dimensionality reduction is applied to the sample features X_H to obtain the reduced sample features U ∈ R^{N×M2}, where M2 denotes the feature dimension after reduction;
Step 4) Sparse representation: n dictionaries are trained by the alternating direction method of multipliers; the sample features U are then mapped through the mapping function φ to generate n sparse feature maps Z_i, i = 1...n, and Z^n = [Z_1, ..., Z_n] is defined as the sparse feature map group;
Step 5) Nonlinear mapping: m orthonormal matrices are generated at random; through the nonlinear mapping function ξ, the sparse feature map group Z^n is nonlinearly mapped into the orthogonal space to generate the enhancement nodes H_j, j = 1...m, and H^m = [H_1, ..., H_m] is defined as the enhancement node group;
Step 6) Calculation of the weight coefficient matrix: the relationship between the sparse feature map group Z^n, the enhancement node group H^m and the sample label matrix Y is established, and the weight coefficient matrix W between them is solved, where Y ∈ R^{N×C} and C denotes the number of vehicle classes;
Step 7) Fast vehicle type recognition: the test sample labels Y_test are established, and the samples to be tested are processed according to Step 1), Step 2) and Step 3) to perform color space conversion, multi-channel HOG feature extraction and PCA dimensionality reduction, respectively; the trained dictionaries and orthonormal matrices are then used directly to realize the sparse representation and nonlinear mapping of the test samples, giving the feature map group and the enhancement node group of the samples to be tested; finally, these are multiplied by the weight coefficient matrix W to obtain the predicted label matrix Y_pre, and Y_pre is matched against the test sample labels Y_test to identify the vehicle type, where N_test is the number of test samples and C_pre is the predicted vehicle class.
2. The fast vehicle type recognition method according to claim 1, characterized in that in Step 1) the color vehicle image X ∈ R^{N×M} in the RGB color space is transformed into the YCbCr color space through formula (1) to obtain the sample image X_YCbCr ∈ R^{N×M},
Wherein, Y_(x,y), Cb_(x,y) and Cr_(x,y) respectively denote the pixel values of the three color channels of pixel (x, y) in the YCbCr color space; R_(x,y), G_(x,y) and B_(x,y) respectively denote the pixel values of the three color channels in the RGB color space.
3. The fast vehicle type recognition method according to claim 1, characterized in that Step 2) includes the following steps:
Step 2-1) The input image is normalized by Gamma correction to adjust the image contrast and reduce the influence of uneven illumination and noise;
Step 2-2) The gradient magnitude G(x, y) and direction D(x, y) of each pixel (x, y) in the image are calculated through formula (2) to capture the contour information of the target object;
Wherein, G_x(x, y) and G_y(x, y) respectively denote the gradients of the pixel in the x-axis and y-axis directions of the two-dimensional rectangular coordinate system;
Step 2-3) The image region is divided into two layers: the first layer consists of interconnected image cells that do not overlap with each other, and the second layer consists of image blocks composed of several image cells, where the image blocks may overlap;
Step 2-4) The 360-degree gradient direction range of each image cell is divided into l direction bins; the gradient magnitudes of the image cell are accumulated according to the bin to which each gradient direction belongs, and contrast normalization is then applied to each image block;
The HOG features of all image blocks in the image are concatenated to obtain the HOG feature vector of the image.
4. The fast vehicle type recognition method according to claim 1, characterized in that in Step 2-2) the expressions of G_x(x, y) and G_y(x, y) are given by formula (3),
Wherein, H(x, y) denotes the brightness value of pixel (x, y).
5. The fast vehicle type recognition method according to claim 1, characterized in that in Step 3) dimensionality reduction is applied to the sample features X_H through formula (4) to obtain the reduced sample features U,
Wherein the two quantities in formula (4) are the sample mean of X_H and the covariance matrix of X_H, respectively.
6. The fast vehicle type recognition method according to claim 1, characterized in that in Step 4) the n dictionaries are trained by the alternating direction method of multipliers given by formula (5),
Wherein, ρ is a constant and ρ > 0; I is the identity matrix; the soft-thresholding function is given by formula (6); the corresponding dual term of each dictionary is also updated; P_t denotes the Lagrange multiplier; λ denotes the penalty coefficient; e_i denotes the index of the i-th dictionary and its dual term; t denotes the number of iterations;
The reduced sample features U are sparsified through formula (7) to obtain the n sparse feature maps Z_i, i = 1...n, and Z^n = [Z_1, ..., Z_n] is defined as the sparse feature map group,
Wherein, the activation function φ(x) = (x - min(x)) / (max(x) - min(x)).
7. The fast vehicle type recognition method according to claim 1, characterized in that in Step 5) the m orthonormal matrices are generated at random and the nonlinear mapping of the sparse feature map group Z^n is realized through formula (8),
Wherein the nonlinear mapping function ξ is given by formula (9):
Wherein, s ∈ (0, 1] is the contraction factor and h_j denotes the index of the j-th orthonormal matrix.
8. The fast vehicle type recognition method according to claim 1, characterized in that in Step 6) the relationship between Q and the sample label matrix Y is established according to formula (10),
Y = QW (10)
Wherein, Q = [Z^n | H^m], so that W = Q⁺Y, where Q⁺ is the pseudo-inverse of Q; Q⁺ is solved through formula (11), and the weight coefficient matrix W is finally obtained,
Wherein I is the identity matrix and λ denotes the penalty coefficient.
CN201811410036.1A 2018-11-23 2018-11-23 Method for quickly identifying vehicle type Active CN109598218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811410036.1A CN109598218B (en) 2018-11-23 2018-11-23 Method for quickly identifying vehicle type

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811410036.1A CN109598218B (en) 2018-11-23 2018-11-23 Method for quickly identifying vehicle type

Publications (2)

Publication Number Publication Date
CN109598218A true CN109598218A (en) 2019-04-09
CN109598218B CN109598218B (en) 2023-04-18

Family

ID=65958812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811410036.1A Active CN109598218B (en) 2018-11-23 2018-11-23 Method for quickly identifying vehicle type

Country Status (1)

Country Link
CN (1) CN109598218B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324047A (en) * 2011-09-05 2012-01-18 西安电子科技大学 High spectrum image atural object recognition methods based on sparse nuclear coding SKR
CN105447503A (en) * 2015-11-05 2016-03-30 长春工业大学 Sparse-representation-LBP-and-HOG-integration-based pedestrian detection method
CN105930812A (en) * 2016-04-27 2016-09-07 东南大学 Vehicle brand type identification method based on fusion feature sparse coding model
CN106971196A (en) * 2017-03-02 2017-07-21 南京信息工程大学 A kind of fire fighting truck recognition methods of the nuclear sparse expression grader based on cost-sensitive
CN107330463A (en) * 2017-06-29 2017-11-07 南京信息工程大学 Model recognizing method based on CNN multiple features combinings and many nuclear sparse expressions
CN108122008A (en) * 2017-12-22 2018-06-05 杭州电子科技大学 SAR image recognition methods based on rarefaction representation and multiple features decision level fusion
CN108647690A (en) * 2017-10-17 2018-10-12 南京工程学院 The sparse holding projecting method of differentiation for unconstrained recognition of face

Also Published As

Publication number Publication date
CN109598218B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN108875608B (en) Motor vehicle traffic signal identification method based on deep learning
Caner et al. Efficient embedded neural-network-based license plate recognition system
Hoang et al. Enhanced detection and recognition of road markings based on adaptive region of interest and deep learning
CN112766291B (en) Matching method for specific target object in scene image
JP2016062610A (en) Feature model creation method and feature model creation device
Chen et al. Moving vehicle detection based on optical flow estimation of edge
Poggi et al. Crosswalk recognition through point-cloud processing and deep-learning suited to a wearable mobility aid for the visually impaired
CN104850857B (en) Across the video camera pedestrian target matching process of view-based access control model spatial saliency constraint
CN113393503B (en) Classification-driven shape prior deformation category-level object 6D pose estimation method
CN111931683B (en) Image recognition method, device and computer readable storage medium
Tang et al. Integrated feature pyramid network with feature aggregation for traffic sign detection
CN109034136A (en) Image processing method, device, picture pick-up device and storage medium
WO2024037408A1 (en) Underground coal mine pedestrian detection method based on image fusion and feature enhancement
Zang et al. Traffic lane detection using fully convolutional neural network
Chen et al. Mechanical assembly monitoring method based on depth image multiview change detection
Wu et al. Vehicle detection based on adaptive multi-modal feature fusion and cross-modal vehicle index using RGB-T images
Sugirtha et al. Semantic segmentation using modified u-net for autonomous driving
Srinidhi et al. Pothole detection using CNN and AlexNet
Amudhan et al. RFSOD: a lightweight single-stage detector for real-time embedded applications to detect small-size objects
CN109598218A (en) A kind of method for quickly identifying of vehicle
CN113269088A (en) Scene description information determining method and device based on scene feature extraction
Said et al. Wavelet networks for facial emotion recognition
KhabiriKhatiri et al. Road Traffic Sign Detection and Recognition using Adaptive Color Segmentation and Deep Learning
Gohilot et al. Detection of pedestrian, lane and traffic signal for vision based car navigation
Wicaksono et al. Traffic sign image recognition using gabor wavelet and principle component analysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant