CN106326927B - Shoe print new category detection method - Google Patents

Shoe print new category detection method

Info

Publication number
CN106326927B
Authority
CN
China
Prior art keywords
image
detected
similarity
training image
training
Prior art date
Legal status
Active
Application number
CN201610716111.1A
Other languages
Chinese (zh)
Other versions
CN106326927A (en)
Inventor
王新年
刘风竹
张涛
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN201610716111.1A
Publication of CN106326927A
Application granted
Publication of CN106326927B
Active legal status
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by matching or filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a shoe print new category detection method, comprising: extracting the features of training images and an image to be detected, the training images being shoe print images of known classes and being used to determine whether the image to be detected is a new-category shoe print image; computing, from the features, the similarity matrix between the training images and the similarity matrix between the training images and the image to be detected; determining a discriminant function from the similarity matrix between the training images; determining, from the discriminant function, the mappings of the training images and of the image to be detected into the null space; computing the Euclidean distances between the mapping of the image to be detected and the mappings of the training images; determining, from the Euclidean distances, whether the image to be detected is a new category; and detecting the shoe print image according to the new category result. The present invention enables effective management of shoe prints and more accurate detection of new shoe print categories.

Description

Shoe print new category detection method
Technical field
Embodiments of the present invention relate to the field of image analysis, and in particular to a shoe print new category detection method.
Background art
New category detection algorithms currently exist for shoe print datasets, face datasets and general datasets. The shoe-print-based new category detection algorithm is a multi-layer cascaded open-set trace image classification method. The face-based new category detection algorithms include: an open-set face recognition algorithm based on similarity distributions, an Adaboost open-set face recognition algorithm combined with geometric transformations, and an open-set face recognition algorithm combining transductive inference with KNN. For general datasets there is a new category detection algorithm combining the null space with the kernel Foley-Sammon transform. The main ideas of these methods are as follows. (1) The shoe-print-based new category detection algorithm is the multi-layer cascaded open-set trace image classification method: it first screens candidate categories for the image to be classified according to the similarity between the preprocessed image to be classified and each trace image in the trace image library, then computes the similarity between the image to be classified and the top-ranked trace image of each candidate category and its corresponding representative image, and finally uses a multi-layer cascade to decide whether the image to be classified belongs to a category in the trace image library or is a new-category image. (2) The face-based new category detection algorithms are of several kinds. The open-set face recognition algorithm based on similarity distributions first obtains similarity vectors from a large number of labelled test samples and divides the test samples into three cases: 1) the test sample belongs to a known class and is classified correctly; 2) the test sample belongs to a known class but is misclassified; 3) the test sample does not belong to any known class. Vectors of the first case are labelled class 0, and vectors of the second and third cases are labelled class 1. A linear discriminant is then used to find the optimal separating hyperplane; when a new test sample enters the recognition system, the learned hyperplane is used to decide whether it belongs to a known class. The Adaboost open-set face recognition algorithm combined with geometric transformations exploits the classifier's bias toward positive samples, so that new samples generated by transforming the original positive samples still pass the classifier while samples merely resembling negative samples do not; it can effectively separate positive and negative samples in the overlapping region, and its two-layer recognition structure reduces the time overhead, greatly lowering the false acceptance rate while keeping the correct recognition rate unchanged. The open-set face recognition algorithm combining transductive inference with KNN first computes the P value of the test sample, i.e., an effective randomness test of the test sample: if the maximum P value is clearly larger than the other P values, the label corresponding to the maximum P value is output; if the maximum P value differs little from the neighbouring scores but differs greatly from the other P values, the label corresponding to the maximum P value is still output, only with very low confidence; if all P values are randomly distributed and no P value is clearly larger than the others, the sample is rejected. (3) The general-dataset algorithm is a new category detection method combining the null space with the kernel Foley-Sammon transform: it maps the training images and the image to be detected into the null space, computes the Euclidean distances between the mapping of the image to be detected and the mappings of the training images, takes the smallest distance as the final new category score, and then decides, according to a preset threshold, whether the image to be detected lies inside or outside the known set.
The above new category detection algorithms have the following problems. (1) The shoe-print-based algorithm, the multi-layer cascaded open-set trace image classification method, requires different combination strategies for different score ranges, and the setting of the threshold is critical and requires considerable skill. (2) The face-based algorithms have the following problems: the open-set recognition method based on similarity distributions discriminates using the entire similarity vector of the test sample rather than considering only the similarities of the K nearest samples, and it does not discard samples with particularly low similarity values, which may harm the discrimination; the Adaboost open-set recognition combined with geometric transformations must pass the n transformed samples through the classifier one by one to check whether they all pass, which multiplies the recognition time by n and makes it unsuitable for recognition on large face databases; the open-set face recognition algorithm combining transductive inference with KNN must first compute the nonconformity of the test sample with respect to all training classes, then compute P values from the nonconformity to form the prediction set of the test sample, and then determine the rejection threshold from the distribution of the P values; its computational complexity and memory consumption are high, so it is unsuitable for datasets with a large number of training classes. (3) The general-dataset algorithm uses only the distance to the nearest class for new category detection and discards the distance information to the other classes, which reduces the effectiveness of the discrimination. In summary, the detection accuracy of the prior art for new shoe print categories is low.
Summary of the invention
Embodiments of the present invention provide a shoe print new category detection method to overcome the above technical problems.
The shoe print new category detection method of the present invention comprises:
extracting the features of the training images and the image to be detected, the training images being shoe print images of known classes and being used to determine whether the image to be detected is a new-category shoe print image;
computing, from the features, the similarity matrix between the training images and the similarity matrix between the training images and the image to be detected;
determining a discriminant function from the similarity matrix between the training images;
determining, from the discriminant function, the mappings of the training images and of the image to be detected into the null space;
computing the Euclidean distances between the mapping of the image to be detected and the mappings of the training images;
determining, from the Euclidean distances, whether the image to be detected is a new category.
Further, extracting the features of the training images and the image to be detected comprises:
taking, according to a preset value, the upper part of the training images and of the image to be detected as the sole and the remainder as the heel;
mirroring the sole and heel images;
applying a wavelet transform to the original and mirrored sole and heel images to reduce them to one quarter of the original image;
applying a polar coordinate transform to the wavelet-transformed images and extracting the features by means of the Fourier transform.
Further, computing, from the features, the similarity matrix between the training images and the similarity matrix between the training images and the image to be detected comprises:
computing, from the features, the similarity between the soles of two training images before and after mirroring, comparing the two similarities, and taking the larger one as the sole similarity of the two training images;
computing, from the features, the similarity between the heels of two training images before and after mirroring, comparing the two similarities, and taking the larger one as the heel similarity of the two training images;
computing, from the features, the similarity between the soles of a training image and the image to be detected before and after mirroring, comparing the two similarities, and taking the larger one as the sole similarity between the training image and the image to be detected;
computing, from the features, the similarity between the heels of a training image and the image to be detected before and after mirroring, comparing the two similarities, and taking the larger one as the heel similarity between the training image and the image to be detected.
Further, determining the discriminant function from the similarity matrix comprises:
centering the similarity matrix of the training images;
computing the eigenvalues and eigenvectors of the centered similarity matrix and discarding the eigenvectors whose eigenvalues are less than zero;
computing the discriminant function from the remaining eigenvectors.
Further, determining, from the discriminant function, the mappings of the training images and the image to be detected into the null space comprises:
mapping the training images of the same class one by one into the null space according to the discriminant function to obtain one training image mapping for that class;
computing the similarities between the image to be detected and all the training images to obtain a similarity vector, sorting the components of the similarity vector in descending order, and multiplying the first K similarity components by the discriminant function to obtain the mapping of the image to be detected.
Further, determining, from the Euclidean distances, whether the image to be detected is a new category comprises:
sorting the Euclidean distances in ascending order, computing the ratios of the first distance to each of the last N distances, and summing the N ratios and the first Euclidean distance to obtain a new category value;
comparing the new category value with a threshold: if the new category value is greater than the threshold, the image to be detected is determined to be a new category; if the new category value is less than the threshold, the image to be detected is determined to belong to an existing category.
The present invention enables effective management of shoe prints, detects new shoe print categories more accurately, and improves working efficiency.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the shoe print new category detection method of the present invention;
Fig. 2 is an overall flow chart of the shoe print new category detection method of the present invention;
Fig. 3 is a schematic diagram of an image to be detected that belongs to a class inside the known set;
Fig. 4 is a schematic diagram of an image to be detected that falls outside the known set.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flow chart of the shoe print new category detection method of the present invention. As shown in Fig. 1, the method of this embodiment may include:
Step 101: extracting the features of the training images and the image to be detected, the training images being shoe print images of known classes and being used to determine whether the image to be detected is a new-category shoe print image;
Step 102: computing, from the features, the similarity matrix between the training images and the similarity matrix between the training images and the image to be detected;
Step 103: determining a discriminant function from the similarity matrix between the training images;
Step 104: determining, from the discriminant function, the mappings of the training images and the image to be detected into the null space;
Step 105: computing the Euclidean distances between the mapping of the image to be detected and the mappings of the training images;
Step 106: determining, from the Euclidean distances, whether the image to be detected is a new category.
Further, the feature for extracting the training image and described image to be detected, comprising:
It will be used as sole above the training image and image to be detected according to preset value, remainder is as heel;
By the sole, heel image mirrors processing;
Sole, heel image after original image and mirror image is subjected to a quarter that wavelet transformation is the original image;
Image after the wavelet transformation is subjected to polar coordinate transform, and extracts Fourier transformation and obtains feature.
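By way of illustration only, the following Python sketch implements one possible reading of this feature extraction step: the upper part of the print is taken as the sole and the rest as the heel, each part and its mirror image are reduced to a quarter of the original size by one level of 2-D wavelet decomposition, transformed to polar coordinates, and described by the magnitude of their Fourier transform. The Haar wavelet, OpenCV's warpPolar for the polar transform, and the use of the raw FFT magnitude as the feature vector are assumptions of this sketch; the text does not fix these details.

```python
import numpy as np
import cv2    # assumed available, used here for the polar coordinate transform
import pywt   # assumed available, used here for the 2-D wavelet transform

def part_feature(part):
    """Wavelet -> polar -> Fourier-magnitude feature for one image part."""
    # One level of 2-D DWT: the approximation band is a quarter of the input image.
    approx, _ = pywt.dwt2(part.astype(np.float32), 'haar')
    h, w = approx.shape
    # Polar coordinate transform around the image centre (assumed implementation).
    polar = cv2.warpPolar(approx, (w, h), (w / 2.0, h / 2.0),
                          min(h, w) / 2.0, cv2.WARP_POLAR_LINEAR)
    # Fourier transform; the magnitude spectrum is used as the feature.
    return np.abs(np.fft.fft2(polar)).ravel()

def extract_features(img, sole_ratio=0.6):
    """Return (sole, mirrored sole, heel, mirrored heel) features of one print image."""
    split = int(img.shape[0] * sole_ratio)   # upper part = sole, remainder = heel
    sole, heel = img[:split], img[split:]
    return (part_feature(sole), part_feature(np.fliplr(sole)),
            part_feature(heel), part_feature(np.fliplr(heel)))
```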
Further, computing, from the features, the similarity matrix between the training images and the similarity matrix between the training images and the image to be detected comprises:
computing, from the features, the similarity between the soles of two training images before and after mirroring, comparing the two similarities, and taking the larger one as the sole similarity of the two training images;
computing, from the features, the similarity between the heels of two training images before and after mirroring, comparing the two similarities, and taking the larger one as the heel similarity of the two training images;
computing, from the features, the similarity between the soles of a training image and the image to be detected before and after mirroring, comparing the two similarities, and taking the larger one as the sole similarity between the training image and the image to be detected;
computing, from the features, the similarity between the heels of a training image and the image to be detected before and after mirroring, comparing the two similarities, and taking the larger one as the heel similarity between the training image and the image to be detected.
Specifically, in this embodiment the upper 60 percent of the shoe print image is taken as the sole and the remaining 40 percent as the heel. The sole and heel are mirrored, a wavelet transform reduces each part to one quarter of the original image, a polar coordinate transform is applied, and the Fourier transform is then taken to obtain the features; the features of these four parts together form the overall feature of an image. The similarity between corresponding soles and the similarity between a sole and the mirrored sole are computed and, in order to cancel the influence of left and right feet, the larger of the two is taken as the sole similarity of the two images. Likewise, the similarity between corresponding heels and between a heel and the mirrored heel are computed, and the larger one is taken as the heel similarity. Finally the sole similarity is weighted by 0.6 and the heel similarity by 0.4, and their sum is taken as the similarity between the two images. The similarity between any two training images is computed in the same way.
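A minimal sketch of this similarity computation, assuming cosine similarity as the per-part measure (the text does not name a specific similarity function): for both the sole and the heel the larger of the original/mirrored similarities is kept, and the two parts are combined with the weights 0.6 and 0.4 given in this embodiment.

```python
import numpy as np

def cos_sim(a, b):
    # Assumed per-part similarity measure (cosine); not fixed by the text.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def print_similarity(feat_a, feat_b, w_sole=0.6, w_heel=0.4):
    """Similarity between two shoe print images.

    feat_a / feat_b are (sole, sole_mirror, heel, heel_mirror) feature tuples
    such as those produced by extract_features above."""
    sole_a, sole_am, heel_a, heel_am = feat_a
    sole_b, _, heel_b, _ = feat_b
    # Take the larger of the original/mirrored similarity to cancel left/right foot effects.
    s_sole = max(cos_sim(sole_a, sole_b), cos_sim(sole_am, sole_b))
    s_heel = max(cos_sim(heel_a, heel_b), cos_sim(heel_am, heel_b))
    return w_sole * s_sole + w_heel * s_heel

def similarity_matrix(features):
    """Pairwise similarity matrix over a list of training image feature tuples."""
    m = len(features)
    K = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            K[i, j] = print_similarity(features[i], features[j])
    return K
```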
Further, determining the discriminant function from the similarity matrix comprises:
centering the similarity matrix of the training images;
computing the eigenvalues and eigenvectors of the centered similarity matrix and discarding the eigenvectors whose eigenvalues are less than zero;
computing the discriminant function from the remaining eigenvectors.
Specifically, the eigenvectors of S whose eigenvalues are less than zero are discarded, and the remaining eigenvectors are normalized according to
eigenvector = eigenvector × (1/eigenvalue).
The discriminant function is then computed from the normalized eigenvector matrix as follows:
A. From the normalized matrix H, compute T = H^T H and the eigenvectors N of the matrix T.
B. Combining the above eigenvectors N with the eigenvectors S of the training image similarity matrix, the final discriminant function can be obtained, where I is the identity matrix of size m × m, L is an m × m matrix in which every element is 1/m, and m is the number of training images.
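The closed-form expression for the final discriminant function is given in the original as a formula that is not reproduced in this text, so the sketch below only follows the steps stated explicitly: centering the training similarity matrix with the matrices I and L, eigendecomposition, discarding non-positive eigenvalues, the 1/eigenvalue normalization, and T = H^T H with its eigenvectors N. The last line, which combines H and N into a projection matrix, is an assumption of the sketch, not the patent's formula.

```python
import numpy as np

def discriminant_function(K, tol=1e-10):
    """Construct a projection ("discriminant function") from the m x m training
    similarity matrix K, following the steps stated in the text above."""
    m = K.shape[0]
    I = np.eye(m)
    L = np.full((m, m), 1.0 / m)      # every element equals 1/m
    Kc = (I - L) @ K @ (I - L)        # centering of the similarity matrix

    # Eigendecomposition of the centered matrix; discard non-positive eigenvalues.
    eigvals, S = np.linalg.eigh(Kc)
    keep = eigvals > tol
    eigvals, S = eigvals[keep], S[:, keep]

    # Normalization as stated in the text: eigenvector * (1 / eigenvalue).
    H = S * (1.0 / eigvals)

    # T = H^T H and its eigenvectors N.
    T = H.T @ H
    _, N = np.linalg.eigh(T)

    W = H @ N   # assumed combination of H and N; the patent's exact formula is not reproduced here
    return W
```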
Further, determining, from the discriminant function, the mappings of the training images and the image to be detected into the null space comprises:
mapping the training images of the same class one by one into the null space according to the discriminant function to obtain one training image mapping for that class; since the coordinates of training samples of the same class are identical in every dimension of the null space, each class forms a single independent point;
computing the similarities between the image to be detected and all the training images to obtain a similarity vector, sorting the components of the similarity vector in descending order, and multiplying the first K similarity components by the discriminant function to obtain the mapping of the image to be detected.
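A sketch of this mapping step, under the assumptions that W is a projection matrix such as the one returned by discriminant_function above, that a training image is projected by multiplying its row of the similarity matrix with W, and that "multiplying the first K similarity components by the discriminant function" means zeroing the remaining components of the similarity vector; the value of K is not fixed by the text.

```python
import numpy as np

def class_mappings(K_train, labels, W):
    """One null-space point per training class. The projections of a class's
    training images should coincide, so the mean is only a numerical convenience."""
    X = K_train @ W                                  # projections of all training images
    classes = sorted(set(labels))
    return {c: X[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
            for c in classes}

def test_mapping(sims, W, K_neighbours=10):
    """Project an image to be detected using only its K largest similarity components."""
    sims = np.asarray(sims, dtype=float)
    kept = np.zeros_like(sims)
    top = np.argsort(sims)[::-1][:K_neighbours]      # indices of the K largest similarities
    kept[top] = sims[top]                            # remaining components are zeroed (assumed reading)
    return kept @ W
```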
Further, determining, from the Euclidean distances, whether the image to be detected is a new category comprises:
sorting the Euclidean distances in ascending order, computing the ratios of the first distance to each of the last N distances, and summing the N ratios and the first Euclidean distance to obtain the new category value.
Specifically, suppose there are 5 training classes. The Euclidean distances between the image to be detected and these five classes are computed; suppose that, sorted in ascending order, they are 1, 2, 3, 4 and 5. The ratios of the first distance to the last and to the second-to-last distance are 0.2 and 0.25 respectively, and the sum of these two ratios and the first Euclidean distance gives the new category value 1.45.
The new category value is compared with a threshold: if the new category value is greater than the threshold, the image to be detected is determined to be a new category; if the new category value is less than the threshold, the image to be detected is determined to belong to an existing category. The overall flow of the method of the present invention is shown in Fig. 2.
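A sketch of the new category score and the threshold decision. With distances 1, 2, 3, 4, 5 and N = 2 it reproduces the worked example above (1 + 0.25 + 0.2 = 1.45); the values of N and of the threshold are parameters that the text leaves open.

```python
import numpy as np

def novelty_score(test_map, class_maps, N=2):
    """New category value: the smallest Euclidean distance to a class point plus
    the ratios of that smallest distance to the N largest distances."""
    d = np.sort([np.linalg.norm(test_map - p) for p in class_maps.values()])
    ratios = d[0] / d[-N:]            # first place over each of the last N distances
    return float(d[0] + ratios.sum())

def is_new_category(test_map, class_maps, threshold, N=2):
    # Greater than the threshold -> new category; otherwise an existing category.
    return novelty_score(test_map, class_maps, N) > threshold
```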
The beneficial effects of the invention are as follows:
1) The present invention uses not only the nearest-distance information but also the useful information contained in the two farthest classes: if the image to be detected belongs to a known class, it is close to one class and relatively far from the other classes; if the image to be detected is a new category, it is far from all training classes. By exploiting this comparative distance information, the method increases the difference between the scores of in-set and out-of-set images to be detected and improves the detection accuracy. The distances between an image to be detected and the training classes in the in-set and out-of-set cases are shown in Fig. 3 and Fig. 4 respectively, where the triangle represents the image to be detected and the other shapes represent training classes.
2) When computing the discriminant function, the present invention uses only the training set of known classes and makes the criterion reach infinity, obtaining the optimal discriminant function, so that the discriminant function does not change when unknown-class samples change.
3) The present invention is applied to the detection of new categories of sole prints, enables effective management of shoe prints, detects shoe prints more accurately, and improves working efficiency.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications or replacements do not depart the essence of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A shoe print new category detection method, characterized by comprising:
extracting the features of training images and an image to be detected, the training images being shoe print images of known classes and being used to determine whether the image to be detected is a new-category shoe print image;
computing, from the features, the similarity matrix between the training images and the similarity matrix between the training images and the image to be detected;
determining a discriminant function from the similarity matrix between the training images;
determining, from the discriminant function, the mappings of the training images and the image to be detected into the null space;
computing the Euclidean distances between the mapping of the image to be detected and the mappings of the training images;
determining, from the Euclidean distances, whether the image to be detected is a new category, comprising:
sorting the Euclidean distances in ascending order, computing the ratios of the first distance to each of the last N distances, and summing the N ratios and the first Euclidean distance to obtain a new category value;
comparing the new category value with a threshold: if the new category value is greater than the threshold, determining that the image to be detected is a new category; if the new category value is less than the threshold, determining that the image to be detected belongs to an existing category.
2. The method according to claim 1, characterized in that extracting the features of the training images and the image to be detected comprises:
taking, according to a preset value, the upper part of the training images and of the image to be detected as the sole and the remainder as the heel;
mirroring the sole and heel images;
applying a wavelet transform to the original and mirrored sole and heel images to reduce them to one quarter of the original image;
applying a polar coordinate transform to the wavelet-transformed images and extracting the features by means of the Fourier transform.
3. The method according to claim 2, characterized in that computing, from the features, the similarity matrix between the training images and the similarity matrix between the training images and the image to be detected comprises:
computing, from the features, the similarity between the soles of two training images before and after mirroring, comparing the two similarities, and taking the larger one as the sole similarity of the two training images;
computing, from the features, the similarity between the heels of two training images before and after mirroring, comparing the two similarities, and taking the larger one as the heel similarity of the two training images;
computing, from the features, the similarity between the soles of a training image and the image to be detected before and after mirroring, comparing the two similarities, and taking the larger one as the sole similarity between the training image and the image to be detected;
computing, from the features, the similarity between the heels of a training image and the image to be detected before and after mirroring, comparing the two similarities, and taking the larger one as the heel similarity between the training image and the image to be detected.
4. The method according to claim 1, characterized in that determining the discriminant function from the similarity matrix comprises:
centering the similarity matrix of the training images;
computing the eigenvalues and eigenvectors of the centered similarity matrix and discarding the eigenvectors whose eigenvalues are less than zero;
computing the discriminant function from the remaining eigenvectors.
5. The method according to claim 1, characterized in that determining, from the discriminant function, the mappings of the training images and the image to be detected into the null space comprises:
mapping the training images of the same class one by one into the null space according to the discriminant function to obtain one training image mapping for that class;
computing the similarities between the image to be detected and all the training images to obtain a similarity vector, sorting the components of the similarity vector in descending order, and multiplying the first K similarity components by the discriminant function to obtain the mapping of the image to be detected in the null space.
CN201610716111.1A 2016-08-24 2016-08-24 Shoe print new category detection method Active CN106326927B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610716111.1A CN106326927B (en) Shoe print new category detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610716111.1A CN106326927B (en) Shoe print new category detection method

Publications (2)

Publication Number Publication Date
CN106326927A CN106326927A (en) 2017-01-11
CN106326927B true CN106326927B (en) 2019-06-04

Family

ID=57791986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610716111.1A Active CN106326927B (en) 2016-08-24 2016-08-24 Shoe print new category detection method

Country Status (1)

Country Link
CN (1) CN106326927B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778922A (en) * 2017-02-16 2017-05-31 Dalian Maritime University Shoe print new category detection method suitable for high-dimensional features
US10621473B1 (en) * 2019-01-30 2020-04-14 StradVision, Inc. Method for providing object detecting system capable of updating types of detectable classes in real-time by using continual learning and devices using the same
CN111476297A (en) * 2020-04-07 2020-07-31 中国民航信息网络股份有限公司 Category determination method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101650944A (en) * 2009-09-17 2010-02-17 浙江工业大学 Method for distinguishing speakers based on protective kernel Fisher distinguishing method
CN105023025A (en) * 2015-08-03 2015-11-04 Dalian Maritime University Open-set trace image classification method and system
CN105608443A (en) * 2016-01-22 2016-05-25 合肥工业大学 Multi-feature description and local decision weighting face identification method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101650944A (en) * 2009-09-17 2010-02-17 浙江工业大学 Method for distinguishing speakers based on protective kernel Fisher distinguishing method
CN105023025A (en) * 2015-08-03 2015-11-04 Dalian Maritime University Open-set trace image classification method and system
CN105608443A (en) * 2016-01-22 2016-05-25 合肥工业大学 Multi-feature description and local decision weighting face identification method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Kernel Null Space Methods for Novelty Detection; Bodesheim P; IEEE Conference on Computer Vision and Pattern Recognition; 2013-12-31; pp. 3374-3381
Research on feedback-based retrieval algorithms for on-site sole trace patterns; 孙会会; China Master's Theses Full-text Database; 2015-03-15 (No. 3); pp. 39-89
Open-set face recognition method based on similarity distribution; 张凯; Pattern Recognition and Artificial Intelligence; 2011-02-15; Vol. 24, No. 1; full text
Kernel null space linear discriminant analysis and its application in face recognition; 甘俊英; Chinese Journal of Computers; 2014-11-15; Vol. 37, No. 11; full text

Also Published As

Publication number Publication date
CN106326927A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
US10650237B2 (en) Recognition process of an object in a query image
CN110197502B (en) Multi-target tracking method and system based on identity re-identification
CN106295124B (en) The method of a variety of image detecting technique comprehensive analysis gene subgraph likelihood probability amounts
CN103136504B (en) Face identification method and device
CN111126482B (en) Remote sensing image automatic classification method based on multi-classifier cascade model
Li et al. Human sperm health diagnosis with principal component analysis and K-nearest neighbor algorithm
CN106326927B (en) A kind of shoes print new category detection method
CN111815582A (en) Two-dimensional code area detection method for improving background prior and foreground prior
CN110874576A (en) Pedestrian re-identification method based on canonical correlation analysis fusion features
CN112613474B (en) Pedestrian re-identification method and device
Singhal et al. Image classification using bag of visual words model with FAST and FREAK
CN112307894A (en) Pedestrian age identification method based on wrinkle features and posture features in community monitoring scene
Arsa et al. Vehicle detection using dimensionality reduction based on deep belief network for intelligent transportation system
Dutra et al. Re-identifying people based on indexing structure and manifold appearance modeling
CN106778922A (en) A kind of footwear print new category detection method of suitable high dimensional feature
Bakheet et al. Content-based image retrieval using brisk and surf as bag-of-visual-words for naïve Bayes classifier
Guerrero-Peña et al. Search-space sorting with hidden Markov models for occluded object recognition
Sezavar et al. Multi-depth deep similarity learning for person re-identification
Umarani et al. Z-Score Normalized Features with Maximum Distance Measure Based k-NN Automated Blood Cancer Diagnosis System
Milgram et al. Two-stage classification system combining model-based and discriminative approaches
CN115019365B (en) Hierarchical face recognition method based on model applicability measurement
CN110390309B (en) Finger vein illegal user identification method based on residual distribution
Musa Using Machine Learning to Overcome Facial Recognition Bias in Africa.
Yuan et al. Parameter sensitive detectors
Taffar et al. Viewpoint invariant face detection

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant