CN110009049A - A supervised image classification method based on a self-paced constraint mechanism - Google Patents

A supervised image classification method based on a self-paced constraint mechanism Download PDF

Info

Publication number
CN110009049A
CN110009049A CN201910283982.2A CN201910283982A
Authority
CN
China
Prior art keywords
training
sample
image classification
model
indicate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910283982.2A
Other languages
Chinese (zh)
Inventor
张涛
于宏斌
冯长安
葛格
石慧
许志强
崔光明
潘详
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201910283982.2A priority Critical patent/CN110009049A/en
Publication of CN110009049A publication Critical patent/CN110009049A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a supervised image classification method based on a self-paced constraint mechanism, comprising: dividing the training samples into easy and hard types; establishing a sparse representation model and bringing the samples into it for training; obtaining an image classification model and a prediction model; and constructing a category decision maker. The training-sample types comprise easy training samples and hard training samples, and the division is performed by a self-paced constraint matrix. By partitioning the training samples with a self-paced constraint matrix and successively bringing the easy and hard samples into the defined sparse representation model for continuous training, the invention constitutes a self-paced-constrained image classification scheme that exploits more discriminative information and is robust to sample noise, thereby solving the problem that supervised dictionary-learning mechanisms are no longer applicable when facing complex samples containing noise and large intra-class variation, and improving image recognition performance.

Description

A supervised image classification method based on a self-paced constraint mechanism
Technical field
The present invention relates to the technical field of image classification and recognition, and in particular to a supervised image classification method based on a self-paced constraint mechanism.
Background technique
Classifying natural images containing a large number of object categories is one of the most challenging problems in pattern recognition. Mainstream solutions include the wavelet relevance vector machine (WRVM), global and local salient-feature coding, and the bag-of-words (BoW) model. Previous image classification algorithms have focused on obtaining visual representations of image features while ignoring class-specific information. To find methods better suited to data representation, a large number of schemes have been developed. Among these recently developed models, supervised sparse representation classification has attracted wide interest because of its powerful image modeling ability, and many studies have shown that sparse-representation-based classification (SRC) algorithms perform very well in computer vision research. However, when facing complex samples containing noise and large intra-class variation, supervised dictionary-learning mechanisms are no longer applicable; moreover, learning a discriminative and representative dictionary from complex training samples remains a challenge.
Summary of the invention
This section is intended to summarize some aspects of embodiments of the present invention and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, in the abstract of the description, and in the title of the invention to avoid obscuring their purpose; such simplifications or omissions cannot be used to limit the scope of the invention.
In view of the problems existing in supervised image classification, the present invention proposes a supervised image classification method based on a self-paced constraint mechanism, addressing the problem of how to perform supervised image classification on the basis of a self-paced constraint mechanism.
Therefore, an object of the present invention is to provide a supervised image classification method based on a self-paced constraint mechanism. The training samples are divided by a self-paced constraint matrix, and the easy and hard training samples are successively brought into the defined sparse representation model for continuous training. This constitutes a self-paced-constrained image classification scheme that exploits more discriminative information and is robust to sample noise, thereby solving the problem that supervised dictionary learning is no longer applicable when facing complex samples containing noise and large intra-class variation, and improving image recognition performance.
In order to solve the above technical problems, the invention provides the following technical scheme: the training samples are divided by a self-paced constraint matrix, and the easy and hard samples are successively brought into the defined sparse representation model for training. This process solves the problem that supervised dictionary learning is no longer applicable when facing complex samples containing noise and large intra-class variation, and improves image recognition performance.
As a preferred scheme of the supervised image classification method based on a self-paced constraint mechanism according to the present invention: the method comprises dividing the training samples into easy and hard types; establishing a sparse representation model and bringing the samples into it for training; obtaining an image classification model and a prediction model; and constructing a category decision maker; wherein the training-sample types comprise easy training samples and hard training samples.
As a preferred scheme of the supervised image classification method based on a self-paced constraint mechanism according to the present invention: the division of the training samples into easy and hard types is performed by a self-paced constraint matrix.
As a preferred scheme of the supervised image classification method based on a self-paced constraint mechanism according to the present invention: the self-paced constraint matrix V is the diagonal matrix whose entries are V(ii) = 1 if the Euclidean distance between the training sample and the test sample y is no greater than the threshold λ, and V(ii) = 0 otherwise;
where ai,j denotes the j-th training sample belonging to the i-th class, i = 1, ..., K, j = 1, ..., ni, K is the total number of classes, y denotes the test sample, and λ is a parameter; V(ii) = 1 indicates an easy training sample and V(ii) = 0 indicates a hard training sample.
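As an illustrative sketch of this division (the rule that V(ii) = 1 marks easy samples within a distance threshold λ is from the text; the use of plain Euclidean distance to the test sample y, and all names below, are assumptions of this sketch):

```python
import numpy as np

def self_paced_matrix(A, y, lam):
    """Build the diagonal self-paced constraint matrix V.

    A   : (n, d) array, one training sample a_{i,j} per row
    y   : (d,) test sample
    lam : distance threshold lambda; samples closer than lam are 'easy'
    """
    dists = np.linalg.norm(A - y, axis=1)    # Euclidean distances to y
    easy = (dists <= lam).astype(float)      # V(ii) = 1 easy, 0 hard
    return np.diag(easy)

A = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y = np.array([0.5, 0.5])
V = self_paced_matrix(A, y, lam=2.0)
print(np.diag(V))   # -> [1. 1. 0.]
```

Samples flagged with V(ii) = 0 would simply be withheld from the sparse representation process until λ grows.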
As a preferred scheme of the supervised image classification method based on a self-paced constraint mechanism according to the present invention: the sparse representation model is a model related to self-paced constraint regularization.
As a preferred scheme of the supervised image classification method based on a self-paced constraint mechanism according to the present invention: the sparse representation model is defined as formula (2),
where the learned dictionary with class labels is expressed as D = [D1, D2, ..., DK], Di being the sub-dictionary associated with class i; Ai denotes the input sample data, A = [A1, A2, ..., Ai], Ai = [ai1, ai2, ..., ain]; Xi is the sub-matrix of coefficients of Ai over Di; Wi denotes the Euclidean distance between training sample ai,j and test sample y; α denotes the weighting coefficient learned by self-paced constraint learning; V denotes the self-paced constraint matrix; and λ1, λ2, ξ1, ξ2, ξ3 are scalar tuning parameters.
As a preferred scheme of the supervised image classification method based on a self-paced constraint mechanism according to the present invention: the step of obtaining the image classification model and the prediction model comprises:
training with the easy training samples;
updating X;
obtaining the sparse codes X, the dictionary D, and the self-paced weighting coefficient α;
determining the image classification model and the prediction model.
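A minimal sketch of this four-step pipeline, under stated assumptions: the "easy samples first, then all samples" ordering and the alternating X/D updates follow the text, but the concrete proximal-gradient and least-squares updates below are simplified stand-ins for formulas (2)-(5), whose full forms are not reproduced in this text.

```python
import numpy as np

def soft(u, t):
    # soft-thresholding, the proximal step used when updating the sparse codes X
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def train(A, easy_mask, n_atoms=4, iters=30, tau=0.05, seed=0):
    """Alternate X- and D-updates, first on the easy samples, then on all samples.

    A         : (d, n) data matrix, one sample per column
    easy_mask : boolean mask of the 'easy' columns (V(ii) = 1)
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((A.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, A.shape[1]))
    for stage_mask in (easy_mask, np.ones(A.shape[1], dtype=bool)):
        As = A[:, stage_mask]                      # current curriculum stage
        X = np.zeros((n_atoms, As.shape[1]))
        for _ in range(iters):
            # update X by one ISTA step with D fixed
            step = 1.0 / max(np.linalg.norm(D.T @ D, 2), 1e-12)
            X = soft(X - step * (D.T @ (D @ X - As)), tau * step)
            # update D by least squares, then renormalise its atoms
            if np.any(X):
                D = As @ np.linalg.pinv(X)
                D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, X
```

The returned D and X would then feed the prediction model and the category decision maker described below.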
As a preferred scheme of the supervised image classification method based on a self-paced constraint mechanism according to the present invention: the step of updating X comprises fixing the dictionary D and α, whereupon formula (2) is further rewritten as formula (3).
As a preferred scheme of the supervised image classification method based on a self-paced constraint mechanism according to the present invention: the step of obtaining the sparse codes X, the dictionary D and the self-paced weighting coefficient α comprises updating D = [D1, D2, ..., DK] with X and α held fixed, whereupon formula (2) reduces to formula (4).
As a preferred scheme of the supervised image classification method based on a self-paced constraint mechanism according to the present invention: the prediction model ei is determined from the self-paced-constrained coefficients, where mi denotes the coefficient mean vector of class Ai, and β1 and β2 denote preset values of the classification model.
As a preferred scheme of the supervised image classification method based on a self-paced constraint mechanism according to the present invention: the category decision maker uses the following formula:
identity(y) = arg min_i {ei}.
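A hedged sketch of this decision rule: the residual-minimising classification identity(y) = arg min_i {ei} is from the text, but the simple least-squares residual used for ei below is a stand-in for the full prediction model with mi, β1 and β2, which is not reproduced in this text.

```python
import numpy as np

def classify(y, sub_dicts):
    """SRC-style category decision: identity(y) = argmin_i e_i.

    sub_dicts: list of per-class dictionaries D_i ((d, p_i) arrays).
    The per-class code is taken as the least-squares solution here,
    standing in for the sparse code of the full model.
    """
    errs = []
    for D_i in sub_dicts:
        x_i, *_ = np.linalg.lstsq(D_i, y, rcond=None)
        errs.append(np.linalg.norm(y - D_i @ x_i))   # residual e_i
    return int(np.argmin(errs))

D0 = np.array([[1.0], [0.0]])   # class-0 atoms span the x-axis
D1 = np.array([[0.0], [1.0]])   # class-1 atoms span the y-axis
print(classify(np.array([0.9, 0.1]), [D0, D1]))   # -> 0
```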
Beneficial effects of the present invention: by partitioning the training samples with a self-paced constraint matrix and successively bringing the easy and hard samples into the defined sparse representation model for continuous training, the invention constitutes a self-paced-constrained image classification scheme that exploits more discriminative information and is robust to sample noise, thereby solving the problem that supervised dictionary learning is no longer applicable when facing complex samples containing noise and large intra-class variation, and improving image recognition performance.
Detailed description of the invention
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; other drawings can be obtained from them by those of ordinary skill in the art without any creative labor. In the drawings:
Fig. 1 is an overall structural schematic diagram of a first embodiment of the supervised image classification method based on a self-paced constraint mechanism according to the present invention.
Fig. 2 is a schematic diagram of the step of obtaining the image classification model and the prediction model in a second embodiment of the method.
Fig. 3 is an example diagram of the verification process of a third embodiment of the method.
Fig. 4 is an example diagram of the Caltech-101 database used in a fourth embodiment of the method.
Fig. 5 is a schematic diagram of verification results on the Caltech-101 data set in the fourth embodiment.
Fig. 6 is an example diagram of the VOC 2012 database used in the fourth embodiment.
Fig. 7 is a schematic diagram of verification results on the VOC 2012 database in the fourth embodiment.
Specific embodiment
In order to make the above objects, features and advantages of the present invention clearer and easier to understand, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In the following description, numerous specific details are set forth to facilitate a full understanding of the present invention. The invention can, however, also be implemented in ways other than those described here, and those skilled in the art can make similar generalizations without departing from its spirit; the invention is therefore not limited by the specific embodiments disclosed below.
Secondly, "one embodiment" or "an embodiment" referred to herein means a particular feature, structure or characteristic that may be included in at least one implementation of the present invention. "In one embodiment" appearing in different places of this specification does not always refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive with other embodiments.
Thirdly, the present invention is described in detail with reference to schematic diagrams. When describing the embodiments, for convenience of explanation, sectional views showing the device structure may be partially enlarged out of general proportion; the schematic diagrams are examples and should not limit the scope of protection of the present invention. In addition, the three dimensions of length, width and depth should be taken into account in actual fabrication.
Embodiment 1
Referring to Fig. 1, an overall structural schematic diagram of a supervised image classification method based on a self-paced constraint mechanism is provided. As shown in Fig. 1, the method comprises S1: dividing the training samples into easy and hard types; S2: establishing a sparse representation model and bringing the samples into it for training; S3: obtaining an image classification model and a prediction model; and S4: constructing a category decision maker; wherein the training-sample types comprise easy training samples and hard training samples.
Specifically, the main structure of the present invention comprises S1: dividing the training samples into easy and hard types; the operator partitions the database images into training samples and test samples, and the training samples are then divided by difficulty into easy training samples and hard training samples. S2: establishing a sparse representation model; it should be noted that the sparse representation model is a model related to self-paced constraint regularization, and the easy and hard training samples obtained by the division are successively brought into it for training. It should be emphasized that training proceeds "easy first, hard later": the easy training samples are first brought into the sparse representation model for training, and the hard training samples are brought in afterwards. S3: obtaining the image classification model and the prediction model; through S2, the unknown coefficients of the sparse representation model (i.e. the sparse codes X, the dictionary D and the self-paced weighting coefficient α) can be solved. S4: constructing the category decision maker according to the prediction model obtained in S3. By partitioning the training samples with the self-paced constraint matrix and successively bringing the easy and hard samples into the defined sparse representation model for continuous training, the present invention constitutes a self-paced-constrained image classification scheme that exploits more discriminative information and is robust to sample noise, thereby solving the problem that supervised dictionary learning is no longer applicable when facing complex samples containing noise and large intra-class variation, and improving image recognition performance.
Further, the division of the training samples into easy and hard types uses the self-paced constraint matrix V, whose diagonal entries are V(ii) = 1 if the Euclidean distance between the training sample and the test sample y is no greater than the threshold λ, and V(ii) = 0 otherwise;
where ai,j denotes the j-th training sample belonging to the i-th class, i = 1, ..., K, j = 1, ..., ni, K is the total number of classes, y denotes the test sample, and the parameter λ represents a threshold of self-paced constraint learning determined in advance. It should be noted that V(ii) = 1 indicates an easy training sample, while V(ii) = 0 indicates a hard training sample, which does not yet participate in the sparse representation process.
Further, the sparse representation model is a model related to self-paced constraint regularization, defined as formula (2),
where the learned dictionary with class labels is expressed as D = [D1, D2, ..., DK], Di being the sub-dictionary associated with class i; Ai denotes the input sample data, A = [A1, A2, ..., Ai], Ai = [ai1, ai2, ..., ain]; Xi is the sub-matrix of coefficients of Ai over Di; Wi denotes the Euclidean distance between training sample ai,j and test sample y; α denotes the weighting coefficient learned by self-paced constraint learning; V denotes the self-paced constraint matrix; and λ1, λ2, ξ1, ξ2, ξ3 are scalar control parameters. As the threshold λ increases, the hard training samples are gradually selected into the dictionary-learning process; the representation continuously refined in this way can effectively control the smoothness of the decision maker and accommodate the various structures of the data, thereby improving classification accuracy.
Embodiment 2
Referring to Fig. 2, this embodiment differs from the first embodiment in that the step of obtaining the image classification model and the prediction model is refined into: S41: training with the easy training samples; S42: updating X; S43: obtaining the sparse codes X, the dictionary D and the self-paced weighting coefficient α; and S44: determining the image classification model and the prediction model.
The overall flow S1-S4 is as described in Embodiment 1: the training samples are divided by the self-paced constraint matrix, the easy and hard samples are successively brought into the defined sparse representation model for continuous training ("easy first, hard later"), the unknown coefficients (the sparse codes X, the dictionary D and the self-paced weighting coefficient α) are solved, and the category decision maker is constructed from the resulting prediction model, which improves image recognition performance.
Further, the training samples are divided into easy and hard types by the self-paced constraint matrix V defined in Embodiment 1,
where ai,j denotes the j-th training sample belonging to the i-th class, i = 1, ..., K, j = 1, ..., ni, K is the total number of classes, y denotes the test sample, and the parameter λ represents a threshold of self-paced constraint learning determined in advance; V(ii) = 1 indicates an easy training sample, while V(ii) = 0 indicates a hard training sample, which does not yet participate in the sparse representation process.
Further, the sparse representation model is the self-paced-constraint-regularized model of formula (2), with the dictionary D = [D1, D2, ..., DK], the sub-dictionaries Di, the input samples Ai, the coefficient matrices Xi, the distances Wi, the self-paced weighting coefficient α, the self-paced constraint matrix V and the scalar parameters λ1, λ2, ξ1, ξ2, ξ3 defined as in Embodiment 1; as the threshold λ increases, the hard training samples are gradually selected into the dictionary-learning process, which improves classification accuracy. The step of obtaining the image classification model and the prediction model then comprises:
S41: training with the easy training samples; specifically, the simplest samples, i.e. the easy training samples, are trained first by bringing them into the sparse representation model, i.e. formula (2);
S42: updating X; specifically, the dictionary D and α are fixed, and formula (2) is treated as a sparse coding problem in order to solve X = [X1, X2, ..., XK]; that is, the sparse representation model of formula (2) is further rewritten as formula (3),
whose smooth part is convex with a Lipschitz-continuous gradient.
The algorithm for obtaining the sparse code Xi is as follows. Input: the class-i training subset Ai, the dictionary D, and parameters ρ, τ > 0; the iterates are then initialized.
Until the maximum number of iterations is reached, a gradient step is taken followed by soft thresholding,
where the gradient is the derivative of the smooth part of formula (3).
The soft-thresholding operator soft(u, τ/ρ) is defined componentwise as: soft(u, τ/ρ) = 0 if |uj| ≤ τ/ρ; otherwise uj − sign(uj)·τ/ρ is assigned to soft(u, τ/ρ). The loop then ends and Xi is output.
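The soft-thresholding operator just defined can be written directly (the function name and the vectorised form are assumptions of this sketch):

```python
import numpy as np

def soft(u, t):
    """soft(u, t): 0 where |u_j| <= t, otherwise u_j - sign(u_j) * t."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= t, 0.0, u - np.sign(u) * t)

u = np.array([-1.0, -0.2, 0.0, 0.3, 2.0])
print(soft(u, 0.5))   # -> [-0.5  0.   0.   0.   1.5]
```

Applied after each gradient step, this shrinkage is what produces the sparsity of the codes Xi.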
S43: obtaining the sparse codes X, the dictionary D and the self-paced weighting coefficient α; specifically, D = [D1, D2, ..., DK] is updated while X and α are kept constant, whereupon formula (2) reduces to formula (4).
The algorithm for updating the dictionary Di is as follows.
Input: the class-i training subset Ai; the initial dictionary Di; the coefficients Xi.
Let Zi = [z1; z2; ...; zpi] and Di = [d1, d2, ..., dpi], where each zj is a row vector and each dj is a column vector.
For j = 1 to pi do:
Fix all dl with l ≠ j and update dj.
Let Y = Λi − Σ_{l≠j} dl zl.
The minimization of Eq. (5) then reduces to a rank-one fitting problem, from which the solution for dj is obtained.
End for.
Output: all dj have been updated, i.e. the entire dictionary D has been updated.
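A sketch of this atom-by-atom update: the loop structure and Y = Λi − Σ_{l≠j} dl zl follow the algorithm above, but since Eq. (5) and its printed solution are not reproduced in this text, the unit-norm least-squares solution dj ∝ Y zjᵀ used below is the standard closed form for the rank-one fit and is an assumption of this sketch.

```python
import numpy as np

def update_dictionary(Lam, D, Z):
    """Atom-by-atom dictionary update: fix d_l (l != j), refresh d_j.

    Lam : (d, n) target matrix (Lambda_i in the text)
    D   : (d, p) dictionary, columns d_j
    Z   : (p, n) codes, rows z_j
    """
    D = D.copy()
    for j in range(D.shape[1]):
        # Y = Lam - sum_{l != j} d_l z_l  (add back atom j's contribution)
        Y = Lam - D @ Z + np.outer(D[:, j], Z[j])
        g = Y @ Z[j]                       # unnormalised minimiser of ||Y - d_j z_j||_F^2
        n = np.linalg.norm(g)
        if n > 1e-12:
            D[:, j] = g / n                # keep the atom unit-norm
    return D
```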
The self-paced weighting coefficient α of self-paced constraint learning is learned by the following algorithm.
Initialization: Di; test sample y; λ, ε and μ > 1.
Update the coefficient α:
While not converged do:
Solve Wi and Vi based on formula (2);
Optimize the objective;
Return: compute the actual output;
Update the coefficient: λ = μλ;
until the maximum number of iterations is reached.
End while; output α.
For the determination of α: once the dictionary D and X are fixed, substituting them into the above solution procedure yields the value of α.
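The annealing λ = μλ in the above algorithm progressively admits harder samples into training; a sketch of that schedule (the growth rule λ = μλ and the easy/hard rule are from the text, the surrounding bookkeeping and names are assumptions):

```python
import numpy as np

def self_paced_schedule(dists, lam0, mu, max_iters):
    """Track which samples count as 'easy' as lambda grows by lam = mu * lam.

    dists : per-sample distances W_i to the test sample
    Returns one array of easy-sample indices per iteration.
    """
    lam = lam0
    stages = []
    for _ in range(max_iters):
        easy = np.flatnonzero(dists <= lam)   # V(ii) = 1 for these samples
        stages.append(easy)
        lam = mu * lam                        # update coefficient: lam = mu * lam
    return stages

dists = np.array([0.5, 1.5, 4.0])
stages = self_paced_schedule(dists, lam0=1.0, mu=2.0, max_iters=3)
print([list(s) for s in stages])   # -> [[0], [0, 1], [0, 1, 2]]
```

Each stage would trigger a fresh round of X/D/α updates on the currently admitted samples.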
S44: determining the image classification model and the prediction model. Specifically, the image classification model is the formula obtained by bringing the acquired sparse codes X, the dictionary D and the self-paced weighting coefficient α into formula (2).
The prediction model ei is then determined by the corresponding residual formula:
where the self-paced-constrained coefficients are used; if y comes from class i, the residual is small and the within-class term is large, mi denotes the coefficient mean vector of class Ai, and β1 and β2 denote preset values of the classification model. It should be noted that, in the transformed space spanned by the dictionary D, the mean vector mi can be regarded as the center of class Ai.
S4: constructing the category decision maker. Specifically, the construction of the category decision maker depends on ei; the category decision maker uses the following formula:
identity(y) = arg min_i {ei}.
Embodiment 3
Referring to Fig. 4, this embodiment differs from the above embodiments in that it applies the supervised image classification method based on a self-paced constraint mechanism to the technical field of plant image classification and recognition.
The overall flow is as in the above embodiments (see Fig. 1): the training samples are divided into easy and hard types by the self-paced constraint matrix (S1); the easy and hard samples are successively brought into the sparse representation model for training, "easy first, hard later" (S2); the unknown coefficients, i.e. the sparse codes X, the dictionary D and the self-paced weighting coefficient α, are solved (S3); and the category decision maker is constructed from the resulting prediction model (S4).
Further, it divides training sample difficulty or ease type to divide using from step constraint matrix, from step constraint matrix V are as follows:
Wherein, ai,jIndicate j-th of training sample for belonging to the i-th class, i=1 ..., K, j=1 ..., ni, K is class Sum, y indicates test sample, and λ is parameter, and parameter lambda represents the threshold value determined in advance from step constraint study;It should be noted It is, as V (ii)=1, to indicate the easy sample of training, the difficult sample of training is indicated as V (ii)=0, will not participate in rarefaction representation Process.
Further, sparse representation model is model relevant to from constraint regularization is walked, it should be noted that rarefaction representation Model is defined as:
Wherein, the class label in learning dictionary is expressed as D=[D1,D2,...,DK], wherein DiIt is and the associated son of class i Collection, AiIndicate the sample data of input, A=[A1,A2,...,Ai],Ai=[ai1,ai2,...,ain], XiIt is A in DiSub- square Battle array, Indicate AiCoefficient,WiIndicate training sample Thisi,jEuclidean distance between test sample y, α indicate that the weighting coefficient from step constraint study, V are indicated from step constraint Matrix, λ1, λ2, ξ1, ξ2, ξ3It is the scalar for controlling term;With the increase of parameter lambda threshold value, training hardly possible sample will be gradually selected into The process of dictionary study, the representation method so constantly improved in study can the effectively smoothness of control tactics devices and data it is each Kind structure, is incorporated hereinWithImprove the accuracy of classification.And The step of obtaining image classification model and prediction model includes: S41: the easy sample of training training;Specifically, training is most simple first The sample i.e. easy sample of training, the easy sample of training is brought into sparse representation model, i.e., in formula (2);
S42: update X. Specifically, with the dictionary D and α fixed, formula (2) is treated as a sparse coding problem to solve X = [X_1, X_2, …, X_K]; that is, the sparse representation model (2) can be further rewritten as:
That is, the resulting objective is convex with a Lipschitz-continuous gradient;
The algorithm for obtaining the sparse code X_i: the input is the class-i training subset A_i, where D denotes the dictionary and ρ, τ > 0 are parameters; initialization:
Repeat until the maximum number of iterations is reached:
here the gradient term is the derivative of the smooth data-fitting part of the objective;
the operator soft(u, τ/ρ) is defined elementwise as soft(u, τ/ρ) = 0 if |u_j| ≤ τ/ρ, and soft(u, τ/ρ) = u_j − sign(u_j)·τ/ρ otherwise; then end the loop and output:
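The soft-thresholding operator just defined is the proximal map of the ℓ1 penalty, so the X-update can be sketched as a standard ISTA-style iteration. The objective form 0.5·‖a − Dx‖² + τ·‖x‖₁ and the step size 1/ρ are assumptions consistent with the surrounding description, not the patent's exact formula (2).

```python
import numpy as np

def soft(u, t):
    """Elementwise soft-thresholding: zero where |u_j| <= t,
    otherwise shrink u_j toward zero by t (as defined in the text)."""
    u = np.asarray(u, dtype=float)
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def ista_sparse_code(D, a, tau, rho, n_iter=100):
    """ISTA-style sketch of the sparse-coding step (assumed objective):
    minimize 0.5*||a - D x||^2 + tau*||x||_1 with step size 1/rho.
    rho should be at least the largest eigenvalue of D^T D
    (the Lipschitz constant of the smooth part)."""
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - a)          # gradient of the quadratic term
        x = soft(x - grad / rho, tau / rho)  # proximal (soft-threshold) step
    return x
```

With `D` the identity, the iteration reduces to a single soft-threshold of the input, which makes the operator easy to sanity-check.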
S43: obtain the sparse code X, the dictionary D, and the self-paced constraint weighting coefficient α. Specifically, update D = [D_1, D_2, …, D_K]; with X and α held constant, formula (2) becomes:
Wherein,
That is, formula (4) becomes:
Wherein,
The algorithm for obtaining the sub-dictionary D_i:
Input: the class-i training subset A_i; the initial dictionary D_i; the coefficients X_i.
Let Z_i = [z_1; z_2; …; z_{p_i}] and D_i = [d_1, d_2, …, d_{p_i}], where z_j is a row vector and d_j is a column vector.
For j = 1 to p_i do
Fix all d_l, l ≠ j, and update d_j.
Let Y = Λ_i − Σ_{l≠j} d_l z_l.
The minimization of Eq. (5) then becomes:
and we obtain the solution
end for
Output: every d_j is updated, i.e., the entire dictionary D is updated.
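The atom-by-atom loop above can be sketched as follows. The closed-form solution shown (least-squares fit of the residual Y onto the coefficient row z_j, followed by unit-norm scaling) is an assumed standard form, since the patent's Eq. (5) solution is not reproduced in this text.

```python
import numpy as np

def update_dictionary(Lambda_i, D, Z):
    """Sequential atom update sketch (assumed form of the loop above).

    Lambda_i : target matrix approximated as D @ Z
    D        : dictionary with columns d_j
    Z        : coefficient matrix with rows z_j, so Lambda_i ~= sum_j outer(d_j, z_j)
    For each atom j, fix the others, form the residual
    Y = Lambda_i - sum_{l != j} d_l z_l, then take the least-squares
    solution d_j = Y z_j^T / ||z_j||^2 and rescale it to unit norm.
    """
    D = D.copy()
    for j in range(D.shape[1]):
        zj = Z[j, :]
        if np.dot(zj, zj) == 0:
            continue  # atom unused by any sample; leave it unchanged
        # Residual with atom j's contribution added back in.
        Y = Lambda_i - D @ Z + np.outer(D[:, j], zj)
        dj = Y @ zj / np.dot(zj, zj)
        D[:, j] = dj / max(np.linalg.norm(dj), 1e-12)  # keep atoms unit-norm
    return D
```

Updating atoms sequentially (rather than all at once) lets each update see the already-refreshed neighbors, which is the usual reason for the "fix all d_l, l ≠ j" structure.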
Wherein, the weighting coefficient α of self-paced constraint learning is learned by the following algorithm:
Initialization: D_i; test sample y; λ, ε and μ > 1
Update the coefficient α:
while not converged do
solve W_i, V_i based on formula (2);
optimize:
return: compute the actual output:
update the threshold: λ = μλ
until the maximum number of iterations is reached
end while
Output: α
Wherein, for the determination of α: once the dictionary D and X are fixed, substituting them into the above solution procedure yields the value of α.
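The outer self-paced loop, with its λ = μλ annealing, can be sketched as below. The inner solve is abstracted as a callback because the patent's optimization step for α is not reproduced here; the function and argument names are assumptions.

```python
import numpy as np

def self_paced_anneal(step_fn, lam0, mu, max_iter=15, tol=1e-6):
    """Outer self-paced annealing loop (illustrative sketch).

    step_fn(lam) -> (alpha, objective): one inner solve for the weighting
    coefficient alpha at threshold lam (formula (2) with D and X fixed).
    Each round the threshold grows, lam = mu * lam with mu > 1, so harder
    samples are progressively admitted, until the objective stops changing
    or the iteration cap (15 in the experiments) is reached.
    """
    lam, prev, alpha = lam0, np.inf, None
    for _ in range(max_iter):
        alpha, obj = step_fn(lam)
        if abs(prev - obj) < tol:
            break                 # converged: objective no longer changes
        prev, lam = obj, mu * lam  # anneal the self-paced threshold
    return alpha
```

With μ = 1.1 and λ = 1600 as in the experiments, each round admits a slightly larger pool of "hard" samples before re-solving for α.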
S44: determine the image classification model and the prediction model. Specifically, the image classification model is the formula obtained by bringing the acquired sparse code X, dictionary D, and self-paced constraint weighting coefficient α into formula (2), as follows:
and the prediction model e_i is determined using the following formula:
wherein the self-paced constrained coefficient is used; if y comes from class i, the residual for class i is small and the corresponding coefficient term is large, where m_i denotes the coefficient mean vector of class A_i and β_1 and β_2 denote preset values of the classification model. Note that, in the transformed space spanned by the dictionary D, the mean vector m_i can be regarded as the center of class A_i.
S4: build the classification decider. Specifically, the decider is built from the prediction model e_i and uses the following formula:
identity(y) = arg min_i {e_i}.
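The decision rule identity(y) = arg min_i {e_i} can be sketched as below. For illustration e_i is taken as the plain class-wise reconstruction residual; the full prediction model in the text additionally weights a distance to the class mean vector m_i by the preset β_1, β_2, which this sketch omits.

```python
import numpy as np

def classify(y, sub_dicts, codes):
    """Residual-based decision rule identity(y) = argmin_i e_i (sketch).

    sub_dicts : list of class sub-dictionaries D_i
    codes     : list of class-wise coefficient vectors x_i for the query y
    e_i here is the reconstruction residual ||y - D_i x_i||^2 only; the
    m_i / beta_1 / beta_2 terms of the full prediction model are omitted.
    """
    residuals = [np.sum((y - D @ x) ** 2) for D, x in zip(sub_dicts, codes)]
    return int(np.argmin(residuals))
```

The query is assigned to whichever class sub-dictionary reconstructs it best, which is the standard decision structure for class-specific dictionary classifiers.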
A botanist climbs a mountain and takes 300 plant photographs. Note that these photographs cover 50 plant species, each photographed from different angles, and the botanist knows the names of all 50 species well. 200 of the photographs are randomly selected as training samples, and the samples collected by the botanist are brought into the models built with the present method and with the other methods for verification.
Concrete operations process is as follows:
Step 1: the botanist opens a picture-recognition link/app. Note that the link/app is built with this method and with the other methods, and is opened on an electronic device with the required processing capability; electronic devices include computers, tablets, mobile phones, and the like.
Step 2: the botanist uploads a picture, pastes a URL, or drags a picture directly onto the link/app interface.
Step 3: the link/app interface receives the picture, then captures the image features A of the input picture and brings them into the classification decider formula obtained by this method.
Step 4: identical or similar pictures are retrieved according to the classification decider and displayed on the interface of the electronic device.
This method and the other methods (such as sparse representation classification (SRC), support vector machine (SVM), label-consistent KSVD (LCKSVD), Fisher discrimination dictionary learning (FDDL), and multimodal self-paced learning (MMSPL)) are verified in turn; the corresponding results are shown in Table 1.
Table 1: verification sample comparison (accuracy: %)
Embodiment 4
Referring to Figs. 5–7, this embodiment differs from the above embodiments in that it covers the operating procedure and its test comparison. Specifically, referring to Fig. 1, the main structure includes S1: divide the training samples by difficulty type; the operator partitions the database images into training samples and test samples, and the training samples are further divided by difficulty into easy training samples and hard training samples. S2: establish the sparse representation model, which is a model coupled with self-paced constraint regularization; the easy and hard training samples are brought in turn into the sparse representation model for training; it is emphasized that training proceeds "easy first, hard later", i.e., the easy training samples are brought into the sparse representation model and trained first, and the hard training samples are brought in and trained afterwards. S3: obtain the image classification model and the prediction model; S2 yields the unknown coefficients of the sparse representation model (i.e.,
the sparse code X, the dictionary D, and the self-paced constraint weighting coefficient α). S4: build the classification decider from the prediction model obtained in S3. The present invention partitions the training samples with a self-paced constraint matrix and brings the easy and hard training samples in turn into the defined sparse representation model for continued training, forming a class-specific image classification scheme under self-paced constraints that exploits more discriminative information and is robust to sample noise, thereby solving the problem that supervised dictionary learning mechanisms are no longer applicable on complex samples with noise and large intra-class variation, and improving image recognition performance.
With reference to Figs. 5–7, this embodiment lists comparative tests of the model established by the supervisable image classification method based on the self-paced constraint mechanism, verified on the Caltech-101 and VOC 2012 datasets, as follows: on the basis of the established sparse representation model, the dictionary learning stage uses the parameters λ_1 = 0.005, λ_2 = 0.01, ξ_1 = 0.01, ξ_2 = 0.02, ξ_3 = 0.01, λ = 1600, μ = 1.1, with a maximum of 15 iterations; in the classification stage, the parameters are β_1 = 0.05, β_2 = 0.005, and i in D_i is set to the number of training samples. Note that the values of λ_1, λ_2, ξ_1, ξ_2, ξ_3, β_1 and β_2 are obtained from training experience, and their magnitudes depend on the dimensionality of the features and the dictionary.
The experimental procedure on Caltech-101 is as follows. Note that the Caltech-101 dataset contains objects in 101 categories, each consisting of 40 to 800 images; the image samples were collected by Fei-Fei Li and have a resolution of about 300 × 200 pixels. To demonstrate the superiority of the constructed class-specific sparse-model classifier with supervised self-paced constraint learning, it is compared with other methods: sparse representation classification (SRC), support vector machine (SVM), label-consistent KSVD (LCKSVD), Fisher discrimination dictionary learning (FDDL), and multimodal self-paced learning (MMSPL); the corresponding results are shown in Table 2.
Table 2: algorithm comparison on the Caltech-101 dataset (accuracy: %)
From the tests above it can be seen that the SVM method is more accurate than SRC, because the original SRC merely selects training samples at random as the dictionary and may discard the optimal dictionary; LCKSVD outperforms FDDL, showing that in this application representing a query example over the entire dictionary is more effective than representing it over each class-specific sub-dictionary; both FDDL and LCKSVD perform worse than the proposed method, meaning that the self-paced constraint regularization can effectively extract local information from the samples. In addition, the result of MMSPL, which is also attributable to self-paced learning, is close to ours, but MMSPL ignores the sparsity constraint, and the sparsity constraint plays a significant role in improving classification accuracy. That is, the results on this dataset show that the proposed model, with its incorporated self-paced constraint learning scheme, is highly effective.
Experiments on VOC 2012
The proposed method is verified on a further dataset (VOC 2012) and compared with other state-of-the-art methods, including multimodal self-paced learning (MMSPL), a multi-stage self-paced scheme, SRC, FDDL, and a convolutional network model; the average accuracies are shown in Table 3.
Table 3: algorithm comparison on the VOC 2012 dataset (average accuracy)
As seen from the table, the overall mean average precision (MAP) on the VOC 2012 dataset is about 83.3%; for most categories the classification accuracy obtained by the proposed method is above 80%, and the constructed model outperforms the other models with similar self-paced learning on most categories.
It is important to note that the construction and arrangement of the application shown in the various exemplary embodiments is illustrative only. Although only a few embodiments are described in detail in this disclosure, those reviewing this disclosure will readily appreciate that many modifications are possible without materially departing from the novel teachings and advantages of the subject matter described herein (for example, variations in the sizes, scales, structures, shapes and proportions of the various elements, in parameter values (e.g., temperature, pressure, etc.), mounting arrangements, use of materials, colors, orientations, etc.). For example, elements shown as integrally formed may be constructed of multiple parts or elements, the positions of elements may be reversed or otherwise varied, and the nature, number or position of discrete elements may be altered or varied. Accordingly, all such modifications are intended to be included within the scope of the present invention. The order or sequence of any process or method steps may be varied or re-sequenced according to alternative embodiments. In the claims, any means-plus-function clause is intended to cover the structures described herein as performing the recited function, including not only structural equivalents but also equivalent structures. Other substitutions, modifications, changes and omissions may be made in the design, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the invention. The invention is therefore not limited to particular embodiments, but extends to various modifications that nevertheless fall within the scope of the appended claims.
Moreover, to provide a concise description of the exemplary embodiments, not all features of an actual implementation may be described (i.e., features unrelated to the best mode presently contemplated for carrying out the invention, or features unrelated to practicing the invention).
It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions may be made. Such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of design, fabrication and manufacture for those of ordinary skill having the benefit of this disclosure, without undue experimentation.
It should finally be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present invention; although the invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the invention may be modified or equivalently substituted without departing from the spirit and scope of the technical solutions of the invention, all of which should be covered by the scope of the claims of the present invention.

Claims (10)

1. A supervisable image classification method based on a self-paced constraint mechanism, characterized by comprising:
dividing training samples by difficulty type;
establishing a sparse representation model, and bringing the samples into the sparse representation model for training;
obtaining an image classification model and a prediction model; and
building a classification decider;
wherein the training sample difficulty types comprise easy training samples and hard training samples.
2. The supervisable image classification method based on a self-paced constraint mechanism according to claim 1, characterized in that the division of training samples by difficulty type uses a self-paced constraint matrix.
3. The supervisable image classification method based on a self-paced constraint mechanism according to claim 2, characterized in that the self-paced constraint matrix V is:
wherein a_{i,j} denotes the j-th training sample belonging to the i-th class, i = 1, …, K, j = 1, …, n_i, K is the total number of classes, y denotes the test sample, λ is a parameter, V(ii) = 1 marks an easy training sample, and V(ii) = 0 marks a hard training sample.
4. The supervisable image classification method based on a self-paced constraint mechanism according to any one of claims 1 to 3, characterized in that the sparse representation model is a model coupled with self-paced constraint regularization.
5. The supervisable image classification method based on a self-paced constraint mechanism according to claim 4, characterized in that the sparse representation model is defined as:
wherein the learned dictionary with class labels is D = [D_1, D_2, …, D_K], K denotes the number of classes of dictionary D, D_i is the sub-dictionary associated with class i, A_i denotes the input training sample data, A = [A_1, A_2, …, A_i], A_i = [a_{i1}, a_{i2}, …, a_{in}], X_i is the sub-matrix of coefficients of A_i over D, W_i denotes the Euclidean distance between training sample a_{i,j} and the test sample y, α denotes the weighting coefficient of self-paced constraint learning, V denotes the self-paced constraint matrix, and λ_1, λ_2, ξ_1, ξ_2, ξ_3 are rescaling parameters.
6. The supervisable image classification method based on a self-paced constraint mechanism according to claim 5, characterized in that the steps of obtaining the image classification model and the prediction model comprise:
training the easy training samples;
updating X;
obtaining the sparse code X, the dictionary D, and the self-paced constraint weighting coefficient α; and
determining the image classification model and the prediction model.
7. The supervisable image classification method based on a self-paced constraint mechanism according to claim 6, characterized in that the step of updating X comprises: fixing the dictionary D and α, and further rewriting formula (2) as:
that is,
8. The supervisable image classification method based on a self-paced constraint mechanism according to claim 7, characterized in that the step of obtaining the sparse code X, the dictionary D, and the self-paced constraint weighting coefficient α comprises: updating D = [D_1, D_2, …, D_K] while X and α are held fixed, i.e., formula (2) becomes:
Wherein,
that is, formula (4) becomes:
Wherein,
9. The supervisable image classification method based on a self-paced constraint mechanism according to claim 8, characterized in that the prediction model e_i is determined using the following formula:
wherein the self-paced constrained coefficient is used, m_i denotes the coefficient mean vector of class A_i, and β_1 and β_2 denote preset values of the classification model.
10. The supervisable image classification method based on a self-paced constraint mechanism according to claim 9, characterized in that the classification decider uses the following formula:
identity(y) = arg min_i {e_i}.
CN201910283982.2A 2019-04-10 2019-04-10 It is a kind of based on from step tied mechanism can supervision image classification method Pending CN110009049A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910283982.2A CN110009049A (en) 2019-04-10 2019-04-10 It is a kind of based on from step tied mechanism can supervision image classification method


Publications (1)

Publication Number Publication Date
CN110009049A true CN110009049A (en) 2019-07-12

Family

ID=67170691


Country Status (1)

Country Link
CN (1) CN110009049A (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564107A (en) * 2018-03-21 2018-09-21 温州大学苍南研究院 The sample class classifying method of semi-supervised dictionary learning based on atom Laplce's figure regularization


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TAO ZHANG等: "Supervised Image Classification with Self-paced Regularization", 《2018 IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW)》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967499A (en) * 2020-07-21 2020-11-20 电子科技大学 Data dimension reduction method based on self-learning
CN111967499B (en) * 2020-07-21 2023-04-07 电子科技大学 Data dimension reduction method based on self-learning
CN112288027A (en) * 2020-11-05 2021-01-29 河北工业大学 Heterogeneous multi-modal image genetics data feature analysis method
CN112288027B (en) * 2020-11-05 2022-05-03 河北工业大学 Heterogeneous multi-modal image genetics data feature analysis method
CN115242539B (en) * 2022-07-29 2023-06-06 广东电网有限责任公司 Network attack detection method and device for power grid information system based on feature fusion


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination