CN109508731A - Vehicle re-identification method, system and device based on fusion features - Google Patents

Vehicle re-identification method, system and device based on fusion features (Download PDF)

Info

Publication number
CN109508731A
CN109508731A (application CN201811172525.8A)
Authority
CN
China
Prior art keywords
vehicle
feature
fusion feature
image
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811172525.8A
Other languages
Chinese (zh)
Inventor
李熙莹
周智豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201811172525.8A priority Critical patent/CN109508731A/en
Publication of CN109508731A publication Critical patent/CN109508731A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 Detecting or categorising vehicles


Abstract

The invention discloses a vehicle re-identification method, system and device based on fusion features. The method comprises the following steps: after obtaining an image of a vehicle to be identified, computing region information for at least two preset components in the image according to a preset component detection model; performing feature extraction on the image according to each piece of region information, then concatenating the extracted features to obtain a fusion feature; and computing Euclidean distances between the fusion feature and a preset retrieval database, and performing vehicle re-identification according to the computed Euclidean distances. The invention purposefully selects representative components and performs component detection and feature-extraction fusion on the selected components, reducing, to a certain extent, the interference introduced by the remaining non-representative vehicle parts. For re-identification of different vehicle instances of the same vehicle model, better results can be obtained and the re-identification accuracy is improved. The invention can be widely applied in the field of vehicle re-identification technology.

Description

Vehicle re-identification method, system and device based on fusion features
Technical field
The present invention relates to the field of vehicle re-identification technology, and more particularly to a vehicle re-identification method, system and device based on fusion features.
Background art
Vehicle re-identification can usually be realized through license plate recognition, but in situations where license plate recognition cannot be applied (such as fake plates, cloned plates, occluded plates, or missing plates), other regions of the vehicle must be used for re-identification. In vehicle re-identification, the greatest difficulty lies in distinguishing different vehicle instances of the same vehicle model, and the key step is to extract the features that differ most between instances.
Research on vehicle re-identification can be divided into two classes: the first uses means such as sensors to match tags of the same vehicle, thereby re-identifying it; the second uses computer vision techniques to extract effective features from vehicle images and realizes re-identification by computing distance metrics between different vehicle images.
The first class of methods mainly uses multiple sensor nodes and determines the vehicle state according to the matching results of the same vehicle's tag obtained at different nodes. Such methods are highly accurate, but they require deploying a large number of sensors and fitting vehicles with tags; they are difficult to implement and hard to popularize.
Common examples of the second class are license plate recognition and feature-extraction-based methods. Re-identifying vehicles through license plate recognition is the most effective and most widely used approach and has great application value; at this stage license plate recognition technology is relatively mature and performs well. However, in police investigation scenarios the vehicles being sought are often illegal vehicles, and cases of missing, fake, or occluded plates frequently arise; in such an application environment, vehicle re-identification through license plate recognition is infeasible. Feature-extraction-based methods mainly extract effective features from vehicle images and realize re-identification by computing distance metrics between different vehicle images. According to the type of feature, they can be further divided into re-identification methods based on traditional hand-crafted features and those based on deep convolutional features; because deep convolutional features have stronger representational power than traditional hand-crafted features, deep convolutional features are used more often in vehicle re-identification research. However, most of these methods address re-identification between vehicles with large differences in type, brand, and so on; for the application scenario of re-identifying different vehicle instances of the same vehicle type, their accuracy is low.
Some current related work still mainly studies re-identification of vehicles of different types, brands, and models. When a large number of different instances of the same vehicle model exist, the extracted features are insufficiently descriptive and discriminative, so the vehicle re-identification accuracy under the same vehicle model is low.
Summary of the invention
In order to solve the above technical problems, the first object of the present invention is to provide a vehicle re-identification method based on fusion features with high re-identification accuracy under the same vehicle model and good applicability.
The second object of the present invention is to provide a vehicle re-identification system based on fusion features with high re-identification accuracy under the same vehicle model and good applicability.
The third object of the present invention is to provide a vehicle re-identification device based on fusion features with high re-identification accuracy under the same vehicle model and good applicability.
The first technical solution of the present invention is:
A vehicle re-identification method based on fusion features, comprising the following steps:
S1: after obtaining an image of a vehicle to be identified, computing region information for at least two preset components in the image according to a preset component detection model;
S2: performing feature extraction on the image according to each piece of region information, then concatenating the extracted features to obtain a fusion feature;
S3: computing Euclidean distances between the fusion feature and a preset retrieval database, and performing vehicle re-identification according to the computed Euclidean distances.
Further, the preset components include a vehicle-face component and a vehicle-window component, and the region information includes the position information and category information of each component.
Further, the preset component detection model is built by the following steps:
inputting a training set of vehicle images, clustering the annotation data of the training set using the K-means algorithm, and obtaining the width, height, and aspect-ratio information of the cluster centers;
constructing the component detection model by using the width, height, and aspect-ratio information as input data of the Faster R-CNN algorithm.
Further, step S1 specifically includes the following steps:
extracting a vehicle feature map from the image of the vehicle to be identified, and obtaining initial candidate regions by combining the vehicle feature map with the width, height, and aspect-ratio information of the vehicle components;
obtaining the feature map of each candidate region by combining the initial candidate regions with the vehicle feature map;
obtaining the region information of the vehicle-face component and the region information of the vehicle-window component by combining the candidate-region feature maps with bounding-box regression and a softmax classifier.
Further, step S2 specifically includes the following steps:
S21: obtaining a vehicle-face picture and a vehicle-window picture according to the region information of the vehicle-face component and of the vehicle-window component, respectively;
S22: extracting the deep convolutional features of the vehicle-face picture and the vehicle-window picture respectively using a preset feature extraction model;
S23: concatenating the two deep convolutional features using a first preset formula to obtain the fusion feature.
Further, the first preset formula is:
f(X1, X2, …, Xk) = ω1·f(X1) ⊙ ω2·f(X2) ⊙ … ⊙ ωk·f(Xk)
where f(X1, X2, …, Xk) denotes the fusion feature obtained after the fusion operation; ω1, ω2, …, ωk denote the feature-fusion weights of the different regions; ⊙ denotes the concatenation operation; and f(Xk) denotes the image feature of the k-th region.
Further, step S3 is specifically:
computing the Euclidean distance between the fusion feature of the image of the vehicle to be identified and the fusion feature of every image in the retrieval database in turn according to a second preset formula, and performing vehicle re-identification according to the computed Euclidean distances.
Further, the second preset formula is:
Dif = sqrt( Σ_{k=1}^{n} (q_k − r_k)² )
where Dif denotes the Euclidean distance, n is the number of feature dimensions, q_k is the k-th dimension of the fusion feature of the vehicle image Q to be identified, and r_k is the k-th dimension of the fusion feature of an image in the retrieval data set R.
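A minimal sketch of this Euclidean distance computation, assuming the fusion features are plain numeric vectors (the function name is our own, not from the patent):

```python
import numpy as np

def fusion_distance(q, r):
    """Second preset formula: Dif = sqrt(sum_k (q_k - r_k)^2),
    the Euclidean distance between two fusion-feature vectors."""
    q = np.asarray(q, dtype=float)
    r = np.asarray(r, dtype=float)
    return float(np.sqrt(np.sum((q - r) ** 2)))

d = fusion_distance([0.0, 0.0], [3.0, 4.0])  # classic 3-4-5 triangle
```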
The second technical solution of the present invention is:
A vehicle re-identification system based on fusion features, comprising:
at least one processor; and
at least one memory for storing at least one program;
wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the above vehicle re-identification method based on fusion features.
The third technical solution of the present invention is:
A vehicle re-identification device based on fusion features, comprising:
a component detection module, for obtaining an image of a vehicle to be identified and computing region information for at least two preset components in the image according to a preset component detection model;
a feature extraction and fusion module, for performing feature extraction on the image according to each piece of region information and concatenating the extracted features to obtain a fusion feature; and
a computing module, for computing Euclidean distances between the fusion feature and a preset retrieval database and performing vehicle re-identification according to the computed Euclidean distances.
The beneficial effects of the present invention are: the present invention purposefully selects representative components and performs component detection and feature-extraction fusion on the selected components, reducing, to a certain extent, the interference introduced by the remaining non-representative vehicle parts. For re-identification of different vehicle instances of the same vehicle model, better results can be obtained and the re-identification accuracy is improved.
Description of the drawings
Fig. 1 is a flow chart of the steps of the fusion-feature-based vehicle re-identification method of the present invention;
Fig. 2 is a structural block diagram of the fusion-feature-based vehicle re-identification device of the present invention;
Fig. 3 is a flow chart of the steps of a specific embodiment of the fusion-feature-based vehicle re-identification method of the present invention;
Fig. 4 is a flow chart of the component-detection steps in the specific embodiment.
Specific embodiment
As shown in Fig. 1, a vehicle re-identification method based on fusion features comprises the following steps:
A1: after obtaining an image of a vehicle to be identified, computing region information for at least two preset components in the image according to a preset component detection model.
A2: performing feature extraction on the image according to each piece of region information, then concatenating the extracted features to obtain a fusion feature.
A3: computing Euclidean distances between the fusion feature and a preset retrieval database, and performing vehicle re-identification according to the computed Euclidean distances.
In the above method, the strategy of substituting local regions for the whole image is adopted, amplifying the contribution of key local regions to vehicle re-identification and reducing the interference of weakly discriminative component regions. For example, re-identification can be judged from representative parts such as the vehicle face, lights, windows, or wheels. By extracting the features of multiple components, fusing them, and then computing the Euclidean distance between images, vehicle re-identification is realized. This method purposefully selects representative components and performs component detection and feature-extraction fusion on the selected components, reducing the interference introduced by the remaining non-representative vehicle parts. For re-identification of different vehicle instances of the same vehicle model, better results can be obtained and the re-identification accuracy improved; the method is also suitable for different detection targets.
Specifically, the preset components include a vehicle-face component and a vehicle-window component, and the region information includes the position information and category information of each component.
The preset component detection model is built by the following steps:
B1: after inputting a training set of vehicle images, clustering the annotation data of the training set using the K-means algorithm and obtaining the width, height, and aspect-ratio information of the cluster centers.
B2: constructing the component detection model by using the width, height, and aspect-ratio information as input data of the Faster R-CNN algorithm.
Step A1 specifically includes steps A11–A13:
A11: after extracting a vehicle feature map from the image of the vehicle to be identified, obtaining initial candidate regions by combining the vehicle feature map with the width, height, and aspect-ratio information of the vehicle components.
A12: obtaining the feature map of each candidate region by combining the initial candidate regions with the vehicle feature map.
A13: obtaining the region information of the vehicle-face component and of the vehicle-window component by combining the candidate-region feature maps with bounding-box regression and a softmax classifier.
Step A2 specifically includes steps A21–A23:
A21: obtaining a vehicle-face picture and a vehicle-window picture according to the region information of the vehicle-face component and of the vehicle-window component, respectively.
A22: extracting the deep convolutional features of the vehicle-face picture and the vehicle-window picture respectively using a preset feature extraction model.
A23: concatenating the two deep convolutional features using the first preset formula to obtain the fusion feature.
The first preset formula is:
f(X1, X2, …, Xk) = ω1·f(X1) ⊙ ω2·f(X2) ⊙ … ⊙ ωk·f(Xk)
where f(X1, X2, …, Xk) denotes the fusion feature obtained after the fusion operation; ω1, ω2, …, ωk denote the feature-fusion weights of the different regions; ⊙ denotes the concatenation operation; and f(Xk) denotes the image feature of the k-th region.
Step A3 is specifically: computing the Euclidean distance between the fusion feature of the image of the vehicle to be identified and the fusion feature of every image in the retrieval database in turn according to the second preset formula, and performing vehicle re-identification according to the computed Euclidean distances.
The second preset formula is:
Dif = sqrt( Σ_{k=1}^{n} (q_k − r_k)² )
where Dif denotes the Euclidean distance, n is the number of feature dimensions, q_k is the k-th dimension of the fusion feature of the vehicle image Q to be identified, and r_k is the k-th dimension of the fusion feature of an image in the retrieval data set R.
In the above method, the image-component annotation data are clustered using K-means, which determines the region-proposal network parameters in the component-detection process, so that the generated candidate regions are more accurate; on the basis of more accurate candidate-region positions, the recognition rate of the components is improved to a certain extent. Differences between different vehicle models are mainly reflected by the vehicle-face part: by detecting the vehicle-face region and performing feature extraction on it, the feature differences between different vehicle models are obtained. The gaps between different vehicle instances of the same model, by contrast, are mostly concentrated in the vehicle-window region: by detecting the vehicle window and performing feature extraction on it, the feature differences between vehicle instances are obtained. Fusing the two component-region features allows different vehicle types and different instances to be characterized simultaneously. The overall process purposefully selects the most representative regions, performs component detection and feature-extraction fusion on the selected regions, and reduces, to a certain extent, the interference introduced by the remaining non-representative vehicle parts; for re-identification of different vehicle instances of the same vehicle model, better results can be obtained.
Specific embodiment
The above method is explained in detail below in conjunction with Fig. 3 and Fig. 4.
As shown in Fig. 3, this embodiment adopts the strategy of substituting local regions for the whole image, amplifying the contribution of key local regions to vehicle re-identification and reducing the interference of weakly discriminative component regions. First, the Faster R-CNN (faster region-based convolutional neural network) algorithm is optimized to build a vehicle component (window, face) detection and localization model (the component detection module). Then, with the VGG16 model as the base model, the network structure is adjusted for the fusion target, the deep convolutional features of the corresponding regions are extracted, and the extracted features are fused (the feature extraction and fusion module). Finally, the Euclidean distance between the query picture and the pictures in the retrieval library is computed to realize vehicle re-identification.
Component detection refers to first extracting object candidate regions with a region-proposal network and then learning the candidate-region categories with a classifier. The component-detection part of the present technique uses the Faster R-CNN algorithm to detect the vehicle-face and vehicle-window parts of the vehicle and accurately obtain their positions.
Feature extraction and fusion refers to performing feature extraction on the detected vehicle-face and vehicle-window parts respectively and then fusing the extracted features to obtain the fusion feature. The feature extraction and fusion part of the present technique mainly extracts the deep convolutional features of the vehicle-face and vehicle-window images; the network model uses the VGG16 model and extracts the fully connected layer (FC6) features. After the deep features of the two components are extracted, the two features are concatenated to form a new fusion feature.
In the specific implementation of component detection, the Faster R-CNN algorithm is mainly used to localize the vehicle-window and vehicle-face regions of the vehicle. To detect these regions more accurately, K-means clustering is first applied to the annotation data of the training set, and the target widths and heights obtained from clustering are used to adjust the candidate-region generation parameters of the Faster R-CNN algorithm, so as to obtain more accurate object candidate regions and accurately localize the vehicle-face and vehicle-window regions. Referring to Fig. 4, the specific steps of component detection are as follows:
1) The K-means algorithm is applied as follows:
Step 1: obtain the picture target data and perform 2-D clustering with the width and height of the target objects as the coordinate axes.
Step 2: assuming there are m data objects, arbitrarily select k of them, μ_1, μ_2, …, μ_k, as the initial cluster centers; assign each remaining object to the cluster whose center is most similar to it. The cluster c^(i) to which the i-th data object belongs is computed as:
c^(i) = argmin_j ‖x^(i) − μ_j‖²   (1)
where c^(i) is the cluster to which the i-th data object belongs, μ_j is the j-th initial cluster center selected among the m data, and x^(i) is the i-th data object.
Step 3: compute the mean of all objects in each cluster and update each cluster center according to formula (2):
μ_j = Σ_{i=1}^{m} 1{c^(i) = j} x^(i) / Σ_{i=1}^{m} 1{c^(i) = j}   (2)
where 1{c^(i) = j} is the indicator function, equal to 1 when c^(i) = j and 0 otherwise. The numerator sums all the data belonging to class j and the denominator counts the data belonging to class j, so the formula computes the mean of the class-j data, and this mean point becomes the updated cluster center.
Step 4: repeat Steps 2 and 3 until the sum of the distances between all data objects and their assigned cluster centers reaches a minimum, as shown in formula (3):
J = Σ_{i=1}^{m} min_{j=1..k} dist(x^(i), μ_j)²   (3)
where k is the number of clusters and dist(x, μ_i) denotes the distance between the data point x and the i-th cluster center μ_i.
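The steps above can be sketched in code. This is a minimal NumPy illustration under our own assumptions (plain Euclidean distance on width/height pairs, random initialisation); the function name and the example data are hypothetical, not from the patent:

```python
import numpy as np

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster annotated (width, height) pairs with K-means.

    boxes: (N, 2) array of object widths and heights from the training
    annotations. Returns the k cluster centres and their aspect ratios (w/h).
    """
    rng = np.random.default_rng(seed)
    centres = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # Step 2: assign every box to its nearest centre (Euclidean distance).
        d = np.linalg.norm(boxes[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 3: move each centre to the mean of the boxes assigned to it.
        new = np.array([boxes[labels == j].mean(axis=0) if (labels == j).any()
                        else centres[j] for j in range(k)])
        # Step 4: stop once the centres no longer move.
        if np.allclose(new, centres):
            break
        centres = new
    return centres, centres[:, 0] / centres[:, 1]

# Four annotated boxes forming two obvious groups (widths 10-11 and 49-50).
boxes = np.array([[10., 20.], [11., 21.], [50., 25.], [49., 24.]])
centres, ratios = kmeans_anchors(boxes, k=2)
```

The resulting widths, heights, and aspect ratios would then parameterize the candidate-region generation of the detector, as described below.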
2) The Faster R-CNN algorithm is applied as follows:
Step 1: first, extract the feature map of the image using a group of basic convolution and pooling layers. This feature map is shared by the subsequent region-proposal network and the fully connected layers. The input of the feature extraction module is the vehicle picture; its output is the feature map, i.e. the image features.
Step 2: the feature map is input to the region-proposal network to generate candidate regions. This layer performs binary classification with a classifier, judging whether a candidate region belongs to the foreground or the background, and then applies bounding-box regression to obtain accurate candidate regions. The inputs of the region-proposal module are the feature map and the width, height, and aspect-ratio information of the cluster centers obtained by the K-means optimization module; the width, height, and aspect-ratio information is used to optimize the sizes of the reference anchors in the region-proposal network. The output of the region-proposal module is the initially obtained candidate target positions and categories, where the categories here are the two classes target and non-target.
Step 3: the shared feature map and the candidate regions extracted by the region-proposal network are fed into the pooling layer; after this information is integrated, the extracted candidate-region feature maps are fed into the subsequent fully connected layers to determine the target category. The inputs of the initial candidate-region pooling module are the feature map and the candidate positions obtained by the region-proposal module; its outputs are the initial candidate-region feature maps and the initial candidate positions.
Step 4: the category of each candidate region is computed from its feature map, while bounding-box regression is applied again to obtain the final exact position of the detection box. The inputs of the classification and position-regression module are the initial candidate-region feature maps and the initial candidate positions; its outputs are the final target positions and categories, where the categories here are the three classes vehicle window, vehicle face, and non-target.
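As an illustration of how the clustered widths and heights size the reference anchors in Step 2 (a sketch under our own assumptions; the helper name and the example sizes are hypothetical):

```python
import numpy as np

def make_anchors(centre_xy, sizes):
    """Build reference anchors (x1, y1, x2, y2) centred at one feature-map
    location, one anchor per clustered (width, height) pair."""
    cx, cy = centre_xy
    anchors = []
    for w, h in sizes:
        anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(anchors)

# Hypothetical cluster centres for the window/face components.
sizes = [(64, 32), (128, 48)]
A = make_anchors((100, 100), sizes)
```

Using cluster centres instead of a fixed hand-picked scale set is the point of the K-means optimization: the anchors start out close to the true component shapes.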
In the specific implementation of feature extraction and fusion, the deep convolutional network uses the VGG16 model, directly taking a network model pre-trained on the ImageNet data set. The convolutional network VGG comprises five convolution blocks {C1, C2, C3, C4, C5}, two fully connected operations {FC6, FC7}, and one Softmax classification layer, where each convolution block comprises convolution and max-pooling. The Softmax layer is used for training the network parameters; once training is completed, the Softmax layer can be removed and the output of the fully connected layer FC6 is used as the feature vector. In the training stage, the parameters of the model are fine-tuned with the training data; the training process of the feature extraction model is shown in Fig. 3. After training, the fine-tuned model is used to extract the deep convolutional features (fully connected layer FC6) of the test images; each component yields a 4096-dimensional vector, and the features extracted from the two components of the same picture are concatenated, finally giving an 8192-dimensional feature vector. Vehicle re-identification is finally realized by computing the Euclidean distances of the features between pictures.
Fusing the two component-region features yields a new fusion feature describing the differences between vehicle instances:
f(X1, X2, …, Xk) = ω1·f(X1) ⊙ ω2·f(X2) ⊙ … ⊙ ωk·f(Xk)   (4)
where f(X1, X2, …, Xk) denotes the fusion feature obtained after the fusion operation; ω1, ω2, …, ωk are the feature-fusion weights of the different regions; ⊙ denotes the concatenation operation; and f(Xk) denotes the image feature of the k-th region.
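A minimal sketch of formula (4)'s weighted tandem fusion, assuming the region features are plain NumPy vectors and the weights are scalars (4-d stand-in vectors here; the real features would be the 4096-d FC6 outputs):

```python
import numpy as np

def tandem_fusion(features, weights):
    """Weighted tandem (series) fusion: scale each region's feature
    vector by its weight, then concatenate them end to end."""
    assert len(features) == len(weights)
    return np.concatenate([w * f for w, f in zip(weights, features)])

# Two regions (e.g. vehicle face and vehicle window), equal weights.
face = np.ones(4)          # stand-in for a 4096-d FC6 feature
window = np.full(4, 2.0)
fused = tandem_fusion([face, window], [0.5, 0.5])
```

With two 4096-d inputs this produces the 8192-d fusion vector described above.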
In the concrete operation of computing the feature distance metric, given a vehicle image Q to be identified and a retrieval data set R, comparison is performed using the fusion features: the fusion feature {q1, q2, …, qn} of the vehicle image Q to be identified and the fusion feature {r1, r2, …, rn} of every image in the retrieval data set R are measured as follows:
Dif = sqrt( Σ_{k=1}^{n} (q_k − r_k)² )   (5)
where Dif denotes the distance metric, n is the number of feature dimensions, q_k is the k-th dimension of the fusion feature of the vehicle image Q to be identified, and r_k is the k-th dimension of the fusion feature of an image in the retrieval data set R.
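The query-against-gallery comparison can be sketched as follows (a hypothetical 3-d example of our own; real fusion features would be 8192-d, and the gallery would be the retrieval library):

```python
import numpy as np

def rank_gallery(query, gallery):
    """Rank retrieval-set images by Euclidean distance to the query's
    fusion feature; the smallest distance is the best re-identification match."""
    query = np.asarray(query, dtype=float)
    gallery = np.asarray(gallery, dtype=float)
    dif = np.sqrt(((gallery - query) ** 2).sum(axis=1))
    order = np.argsort(dif)
    return order, dif[order]

q = np.array([1.0, 0.0, 0.0])
R = np.array([[0.0, 1.0, 0.0],   # different vehicle
              [0.9, 0.1, 0.0],   # same vehicle, slightly different view
              [0.0, 0.0, 2.0]])  # different vehicle
order, dists = rank_gallery(q, R)
```

The top-ranked gallery image (here index 1, the nearest feature) is returned as the re-identification result.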
It carries out vehicle using the method for above-described embodiment to identify again, the beneficial effect of the acquisition at least following:
1, in component detection-phase, image component labeled data is clustered using K-means cluster, determines that component is examined Region detection network parameter in flow gauge so that generate candidate region it is more accurate, candidate region position more On the basis of accurately, the discrimination of component is improved to a certain extent.
2, the difference between different vehicle models can be handled by the reflection of vehicle face part, by detecting vehicle face region, And feature extraction is carried out to vehicle face region, obtain the feature difference between different vehicle model;Same vehicle model different vehicle Gap between individual is then gathered in vehicle window region mostly, carries out feature extraction by detection vehicle window and to vehicle window region, obtains Feature difference between vehicle individual.Two component area features are merged, can simultaneously to different automobile types Different Individual into Row character representation.
3. The overall procedure purposefully selects the most representative regions, and performs part detection and feature-extraction fusion on the selected regions, which reduces to a certain extent the interference introduced by the remaining, non-representative vehicle parts, so that better results are obtained in the re-identification of different vehicle individuals of the same vehicle model.
Compared with prior-art schemes, the method of the present invention has the following advantages: it purposefully selects the most representative regions and performs part detection and feature-extraction fusion on them, reducing to a certain extent the interference introduced by the remaining, non-representative vehicle parts, so that better results can be obtained in the re-identification of different vehicle individuals of the same vehicle model. Meanwhile, in the part-detection stage, the image part annotation data is clustered with K-means, which determines the region-detection network parameters in the part-detection process, so that the generated candidate regions are more accurate; on the basis of more accurately positioned candidate regions, the recognition rate of the parts is improved to a certain extent, giving better applicability to different detection targets.
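The K-means step described above — clustering the annotated part boxes to obtain the width, height and aspect-ratio inputs for the Faster R-CNN region-proposal stage — can be sketched as follows. This is an illustrative implementation under our own naming (`kmeans_anchor_dims`), not the patented one:

```python
import numpy as np

def kmeans_anchor_dims(boxes_wh, k=3, iters=100, seed=0):
    """Cluster (width, height) pairs of annotated part boxes with plain
    K-means; return the cluster-centre widths/heights and their aspect
    ratios, to serve as region-detection network (anchor) parameters."""
    boxes = np.asarray(boxes_wh, dtype=float)
    rng = np.random.default_rng(seed)
    # initialise centres with k distinct annotated boxes
    centres = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # assign every box to its nearest centre (Euclidean distance)
        d = np.linalg.norm(boxes[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centre as the mean of its assigned boxes
        new_centres = np.array([boxes[labels == j].mean(axis=0)
                                if np.any(labels == j) else centres[j]
                                for j in range(k)])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    ratios = centres[:, 0] / centres[:, 1]  # width / height
    return centres, ratios
```

The returned centre widths/heights and aspect ratios would play the role of the anchor scales and ratios handed to the region-detection network.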
Embodiment two
A vehicle re-identification system based on fusion features, configured to execute the vehicle re-identification method based on fusion features described in embodiment one.
The vehicle re-identification system based on fusion features of this embodiment can execute the vehicle re-identification method based on fusion features provided by the method embodiment of the present invention, can execute any combination of the implementation steps of the method embodiment, and has the corresponding functions and beneficial effects of the method.
Embodiment three
As shown in Fig. 2, a vehicle re-identification device based on fusion features comprises a part-detection module, a feature-extraction and fusion module, and a computing module;
the part-detection module is configured to obtain, after the image of the vehicle to be identified is acquired, the region information of at least two preset parts in the image according to a preset part-detection model;
the feature-extraction and fusion module is configured to serially fuse the extracted features after performing feature extraction on the image according to each piece of region information, and to obtain a fusion feature;
the computing module is configured to compute Euclidean distances by combining the fusion feature with a preset retrieval database, so as to realize vehicle re-identification.
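How the three modules fit together can be sketched as below; the class and the placeholder callables are illustrative assumptions of ours, not the patented implementation:

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class ReIdDevice:
    """Composition of the three modules named by this embodiment:
    part detection, feature extraction + fusion, and distance computation."""
    detect_parts: Callable   # image -> region information of the preset parts
    extract_fuse: Callable   # region information -> fused feature vector
    distance: Callable       # (query_feature, gallery_feature) -> float

    def reidentify(self, image, gallery_feats: Sequence) -> List[int]:
        """Return gallery indices ranked by ascending distance to the query."""
        regions = self.detect_parts(image)
        query_feat = self.extract_fuse(regions)
        dists = [self.distance(query_feat, g) for g in gallery_feats]
        return sorted(range(len(dists)), key=dists.__getitem__)
```

In practice the three callables would wrap the trained part-detection model, the feature-extraction network plus serial fusion, and the Euclidean-distance metric, respectively.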
The vehicle re-identification device based on fusion features of this embodiment can execute the vehicle re-identification method based on fusion features provided by the method embodiment of the present invention, can execute any combination of the implementation steps of the method embodiment, and has the corresponding functions and beneficial effects of the method.
The above is a description of preferred implementations of the present invention, but the invention is not limited to the above embodiments; those skilled in the art can also make various equivalent variations or replacements without departing from the spirit of the present invention, and these equivalent variations or replacements are all included within the scope defined by the claims of the present application.

Claims (10)

1. A vehicle re-identification method based on fusion features, characterized by comprising the following steps:
S1, after an image of a vehicle to be identified is obtained, computing region information of at least two preset parts in the image according to a preset part-detection model;
S2, after feature extraction is performed on the image according to each piece of region information, serially fusing the extracted features to obtain a fusion feature;
S3, computing Euclidean distances by combining the fusion feature with a preset retrieval database, and performing vehicle re-identification according to the computed Euclidean distances.
2. The vehicle re-identification method based on fusion features according to claim 1, characterized in that the preset parts comprise a car-face part and a window part, and the region information comprises position information and category information of the parts.
3. The vehicle re-identification method based on fusion features according to claim 2, characterized in that the preset part-detection model is built by the following steps:
after a training set of vehicle images is input, clustering the annotation data of the training set with the K-means algorithm to obtain the width, height and aspect-ratio information of the cluster centres;
constructing the part-detection model with the width, height and aspect-ratio information as input data of the Faster R-CNN algorithm.
4. The vehicle re-identification method based on fusion features according to claim 3, characterized in that step S1 specifically comprises the following steps:
after a vehicle feature map is extracted from the image of the vehicle to be identified, obtaining initial candidate regions by combining the vehicle feature map with the width, height and aspect-ratio information of the vehicle;
obtaining feature maps of the candidate regions by combining the initial candidate regions with the vehicle feature map;
obtaining the region information of the car-face part and the region information of the window part by combining the feature maps of the candidate regions with bounding-box regression and a softmax classifier.
5. The vehicle re-identification method based on fusion features according to claim 4, characterized in that step S2 specifically comprises the following steps:
S21, obtaining a car-face picture and a window picture according to the region information of the car-face part and the region information of the window part, respectively;
S22, extracting deep convolutional features of the car-face picture and the window picture, respectively, with a preset feature-extraction model;
S23, serially fusing the two deep convolutional features with a first preset formula to obtain the fusion feature.
6. The vehicle re-identification method based on fusion features according to claim 5, characterized in that the first preset formula is:
f(X1, X2…Xk) = ω1f(X1) ⊙ ω2f(X2) ⊙ … ⊙ ωkf(Xk)
where f(X1, X2…Xk) denotes the fusion feature obtained after the fusion operation; ω1, ω2…ωk denote the fusion weights of the features of the different regions; ⊙ denotes the serial operation; and f(Xk) denotes the image feature of the k-th region.
7. The vehicle re-identification method based on fusion features according to claim 1, characterized in that step S3 is specifically:
computing, according to a second preset formula, the Euclidean distance between the fusion feature of the image of the vehicle to be identified and the fusion feature of each image in the retrieval database in turn, and performing vehicle re-identification according to the computed Euclidean distances.
8. The vehicle re-identification method based on fusion features according to claim 7, characterized in that the second preset formula is:
Dif = sqrt( Σ (qk − rk)² ), summed over k = 1 to n
where Dif denotes the Euclidean distance, n is the number of feature dimensions, qk is the k-th dimension value of the fusion feature of the vehicle image Q to be identified, and rk is the k-th dimension value of the fusion feature of an image in the retrieval data set R.
9. A vehicle re-identification system based on fusion features, characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the vehicle re-identification method based on fusion features according to any one of claims 1-8.
10. A vehicle re-identification device based on fusion features, characterized by comprising:
a part-detection module, configured to compute, after the image of the vehicle to be identified is obtained, region information of at least two preset parts in the image according to a preset part-detection model;
a feature-extraction and fusion module, configured to serially fuse the extracted features after performing feature extraction on the image according to each piece of region information, to obtain a fusion feature;
a computing module, configured to compute Euclidean distances by combining the fusion feature with a preset retrieval database, and to perform vehicle re-identification according to the computed Euclidean distances.
CN201811172525.8A 2018-10-09 2018-10-09 Vehicle re-identification method, system and device based on fusion features Pending CN109508731A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811172525.8A CN109508731A (en) 2018-10-09 2018-10-09 A kind of vehicle based on fusion feature recognition methods, system and device again


Publications (1)

Publication Number Publication Date
CN109508731A true CN109508731A (en) 2019-03-22

Family

ID=65746425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811172525.8A Pending CN109508731A (en) 2018-10-09 2018-10-09 A kind of vehicle based on fusion feature recognition methods, system and device again

Country Status (1)

Country Link
CN (1) CN109508731A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130268155A1 (en) * 2006-10-27 2013-10-10 International Electronic Machines Corporation Vehicle Evaluation Using Infrared Data
AU2014207250A1 (en) * 2013-01-17 2015-08-20 Sensen Networks Pty Ltd Automated vehicle recognition
CN106469299A (en) * 2016-08-31 2017-03-01 北京邮电大学 A kind of vehicle search method and device
CN106548165A (en) * 2016-11-28 2017-03-29 中通服公众信息产业股份有限公司 A kind of face identification method of the convolutional neural networks weighted based on image block
CN107622229A (en) * 2017-08-29 2018-01-23 中山大学 A kind of video frequency vehicle based on fusion feature recognition methods and system again
CN107729818A (en) * 2017-09-21 2018-02-23 北京航空航天大学 A kind of multiple features fusion vehicle recognition methods again based on deep learning
CN108171136A (en) * 2017-12-21 2018-06-15 浙江银江研究院有限公司 A kind of multitask bayonet vehicle is to scheme to search the system and method for figure


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHIHAO ZHOU et al.: "A Car Face Parts Detection Algorithm Based on Faster R-CNN", 《HTTPS://ASCELIBRARY.ORG/DOI/10.1061/9780784481523.029》 *
宓超: "《装卸机器视觉及其应用》 (Machine Vision for Cargo Handling and Its Applications)", 31 January 2016, 上海科学技术出版社 *
王盼盼: "基于特征融合和度量学习的车辆重识别 (Vehicle Re-identification Based on Feature Fusion and Metric Learning)", 《电子科技》 *
郭艺帆: "基于融合特征的车辆识别 (Vehicle Recognition Based on Fusion Features)", 《万方学位论文》 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046599A (en) * 2019-04-23 2019-07-23 东北大学 Intelligent control method based on depth integration neural network pedestrian weight identification technology
CN110334586A (en) * 2019-05-22 2019-10-15 深圳壹账通智能科技有限公司 A kind of automobile recognition methods, device, computer system and readable storage medium storing program for executing
CN110188708A (en) * 2019-06-03 2019-08-30 西安工业大学 A kind of facial expression recognizing method based on convolutional neural networks
CN110458086A (en) * 2019-08-07 2019-11-15 北京百度网讯科技有限公司 Vehicle recognition methods and device again
CN110781975B (en) * 2019-10-31 2022-11-29 深圳市商汤科技有限公司 Image processing method and device, electronic device and storage medium
CN110781975A (en) * 2019-10-31 2020-02-11 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111460891A (en) * 2020-03-01 2020-07-28 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Automatic driving-oriented vehicle-road cooperative pedestrian re-identification method and system
CN111783654A (en) * 2020-06-30 2020-10-16 苏州科达科技股份有限公司 Vehicle weight identification method and device and electronic equipment
CN111797782A (en) * 2020-07-08 2020-10-20 上海应用技术大学 Vehicle detection method and system based on image features
CN111797782B (en) * 2020-07-08 2024-04-16 上海应用技术大学 Vehicle detection method and system based on image features
CN112270228A (en) * 2020-10-16 2021-01-26 西安工程大学 Pedestrian re-identification method based on DCCA fusion characteristics
CN114067293A (en) * 2022-01-17 2022-02-18 武汉珞信科技有限公司 Vehicle weight identification rearrangement method and system based on dual attributes and electronic equipment
CN114067293B (en) * 2022-01-17 2022-04-22 武汉珞信科技有限公司 Vehicle weight identification rearrangement method and system based on dual attributes and electronic equipment

Similar Documents

Publication Publication Date Title
CN109508731A (en) A kind of vehicle based on fusion feature recognition methods, system and device again
Li et al. Automatic pavement crack detection by multi-scale image fusion
CN109816024B (en) Real-time vehicle logo detection method based on multi-scale feature fusion and DCNN
Eisenbach et al. How to get pavement distress detection ready for deep learning? A systematic approach
CN109087510B (en) Traffic monitoring method and device
CN111696128B (en) High-speed multi-target detection tracking and target image optimization method and storage medium
CN110533722A (en) A kind of the robot fast relocation method and system of view-based access control model dictionary
CN103366602B (en) Method of determining parking lot occupancy from digital camera images
CN110753892A (en) Method and system for instant object tagging via cross-modality verification in autonomous vehicles
CN106485740B (en) A kind of multidate SAR image registration method of combination stable point and characteristic point
Mallikarjuna et al. Traffic data collection under mixed traffic conditions using video image processing
CN113191459A (en) Road-side laser radar-based in-transit target classification method
CN110869936A (en) Method and system for distributed learning and adaptation in autonomous vehicles
CN110533695A (en) A kind of trajectory predictions device and method based on DS evidence theory
CN110799982A (en) Method and system for object-centric stereo vision in an autonomous vehicle
CN112101278A (en) Hotel point cloud classification method based on k nearest neighbor feature extraction and deep learning
CN108681693A (en) Licence plate recognition method based on trusted area
CN109299644A (en) A kind of vehicle target detection method based on the full convolutional network in region
CN102609720B (en) Pedestrian detection method based on position correction model
CN105205486A (en) Vehicle logo recognition method and device
CN103208008A (en) Fast adaptation method for traffic video monitoring target detection based on machine vision
CN105404886A (en) Feature model generating method and feature model generating device
CN108492298A (en) Based on the multispectral image change detecting method for generating confrontation network
CN105956632A (en) Target detection method and device
CN111027481A (en) Behavior analysis method and device based on human body key point detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190322