CN110210553A - Method, apparatus, electronic device and computer-readable storage medium for training a classifier - Google Patents

Method, apparatus, electronic device and computer-readable storage medium for training a classifier Download PDF

Info

Publication number
CN110210553A
CN110210553A
Authority
CN
China
Prior art keywords
image
classifier
label information
target object
item
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910453997.9A
Other languages
Chinese (zh)
Inventor
王诗吟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910453997.9A priority Critical patent/CN110210553A/en
Publication of CN110210553A publication Critical patent/CN110210553A/en
Pending legal-status Critical Current

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method of training a classifier, comprising: obtaining an image set, the image set including a first subset, a second subset and a third subset, each subset corresponding to respective label information; determining the output items of the classifier, the output items corresponding respectively to the label information; and training the classifier according to the image set. In the method, apparatus, electronic device and computer-readable storage medium for training a classifier provided by the embodiments of the present disclosure, the image set used during training includes both subsets of target objects of different categories and a subset that does not include the target object. A single classifier can therefore both judge whether an input image includes the target object and, if so, judge the category of the target object it includes, which simplifies the training process.

Description

Method, apparatus, electronic device and computer-readable storage medium for training a classifier
Technical field
The present disclosure relates to the field of information processing, and in particular to a method, apparatus, electronic device and computer-readable storage medium for training a classifier.
Background technique
With the progress of computer technology, image-related applications have become more abundant; for example, classifiers based on convolutional neural networks can be used to identify and/or classify input images.
A classifier based on a convolutional neural network (also referred to as a convolutional neural network classifier) must be trained on a training set before it can be used to identify and/or classify input images. Taking the task of identifying whether an input image includes a person object as an example, a large number of images that include a person object and images that do not are needed as the training set. A typical training process performs, for each image in the training set, the computation of convolutional layers, nonlinear layers and/or pooling layers, after which a fully connected layer computes the classification result (for example, classifying the image as one that includes a person object or one that does not). The convolutional layers, nonlinear layers, pooling layers and fully connected layers together constitute the framework of the convolutional neural network. The classification result computed by the fully connected layer is compared with the image's own label information (whether it includes a person object) to construct a loss function; parameters such as weights and biases involved in training are then updated via gradient descent or similar algorithms, and the classification result is recomputed with the updated parameters. Training iterates in this way until an optimal classification result is computed, at which point the classifier is trained and can identify and/or classify input images — for example, outputting for an input image whether it includes a person object or not. Similarly, a classifier for classifying the target object in an image can be trained; for example, a large set of images that include person objects can be used as the training set to train a classifier to classify and/or identify men versus women.
As described above, if one wishes to classify the target object in an input image but it is uncertain whether the input image includes the target object at all, one classifier must first be trained to judge whether the input image includes the target object, and, if it does, another classifier must then be trained to judge the category of the target object. Two classifiers must therefore be trained in total, making the training process very cumbersome.
Summary of the invention
The embodiments of the present disclosure provide a method, apparatus, electronic device and computer-readable storage medium for training a classifier. The image set used during training includes subsets of target objects of different categories as well as a subset that does not include the target object, so that a single classifier can both judge whether an input image includes the target object and judge the category of the target object included in the input image, simplifying the training process.
In a first aspect, an embodiment of the present disclosure provides a method of training a classifier, comprising: obtaining an image set, the image set including a first subset, a second subset and a third subset, the images in the first subset corresponding to first label information, the images in the second subset corresponding to second label information, and the images in the third subset corresponding to third label information, wherein the first label information indicates that an image corresponding to it includes a target object of a first category, the second label information indicates that an image corresponding to it includes a target object of a second category, and the third label information indicates that an image corresponding to it does not include the target object; determining the output items of the classifier, the output items including a first item, a second item and a third item, the first label information corresponding to the first item, the second label information corresponding to the second item, and the third label information corresponding to the third item; and training the classifier according to the image set.
Further, training the classifier according to the image set comprises: determining the loss function of the classifier from a softmax loss function and a center loss function; and training the classifier according to the image set and the loss function of the classifier.
Further, the softmax loss function is

$$L_s = -\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{W_{y_i}^{T}x_i + b_{y_i}}}{\sum_{j=1}^{n}e^{W_{j}^{T}x_i + b_{j}}}$$

where m is the number of images chosen from the image set, n is the number of output items of the classifier, x_i is the image feature of the i-th chosen image input to the output layer of the classifier, W_{y_i} is the weight of the output item corresponding to the label information of the i-th image, W_j is the weight of the j-th of the n output items for the i-th image, and b_{y_i} and b_j are bias parameters. The center loss function is

$$L_c = \frac{1}{2m}\sum_{i=1}^{m}\left\|x_i - c_{y_i}\right\|_2^{2}$$

where m is the number of images chosen from the image set, x_i is the image feature of the i-th chosen image input to the output layer of the classifier, and c_{y_i} is the mean of the image features, input to the output layer, of the images whose label information matches that of the i-th image. The loss function of the classifier is L = L_s + λL_c, where λ is a balance parameter.
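A minimal NumPy sketch of this combined loss, assuming the standard softmax cross-entropy and center-loss definitions; variable names, shapes and the λ value are illustrative, not taken from the patent:

```python
import numpy as np

def softmax_loss(X, y, W, b):
    """Softmax (cross-entropy) loss Ls over a batch.

    X: (m, d) features at the output layer; y: (m,) integer labels;
    W: (d, n) output-item weights; b: (n,) biases.
    """
    logits = X @ W + b                           # (m, n)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    probs = exp / exp.sum(axis=1, keepdims=True)
    m = X.shape[0]
    return -np.mean(np.log(probs[np.arange(m), y]))

def center_loss(X, y, centers):
    """Center loss Lc: mean squared distance of each feature to its class center."""
    diffs = X - centers[y]                       # (m, d)
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def total_loss(X, y, W, b, centers, lam=0.1):
    """Combined loss L = Ls + lambda * Lc."""
    return softmax_loss(X, y, W, b) + lam * center_loss(X, y, centers)
```

In practice the class centers c_{y_i} would be updated per mini-batch during training; here they are passed in as a fixed array for illustration.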
Further, the images in the first subset include the target object of the first category, the images in the second subset include the target object of the second category, and the images in the third subset do not include the target object.
Further, the value output by each output item of the classifier is greater than or equal to 0 and less than or equal to 1, and the values output by the output items of the classifier sum to 1.
Further, the classifier includes a classifier based on a convolutional neural network.
Further, the convolutional neural network includes MobileNetV2.
Further, the target object includes a food object.
In a second aspect, an embodiment of the present disclosure provides an apparatus for training a classifier, comprising: an obtaining module for obtaining an image set, the image set including a first subset, a second subset and a third subset, the images in the first subset corresponding to first label information, the images in the second subset corresponding to second label information, and the images in the third subset corresponding to third label information, wherein the first label information indicates that an image corresponding to it includes a target object of a first category, the second label information indicates that an image corresponding to it includes a target object of a second category, and the third label information indicates that an image corresponding to it does not include the target object; a determining module for determining the output items of the classifier, the output items including a first item, a second item and a third item, the first label information corresponding to the first item, the second label information corresponding to the second item, and the third label information corresponding to the third item; and a training module for training the classifier according to the image set.
Further, the training module is also used to: determine the loss function of the classifier from a softmax loss function and a center loss function; and train the classifier according to the image set and the loss function of the classifier.
Further, the softmax loss function is

$$L_s = -\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{W_{y_i}^{T}x_i + b_{y_i}}}{\sum_{j=1}^{n}e^{W_{j}^{T}x_i + b_{j}}}$$

where m is the number of images chosen from the image set, n is the number of output items of the classifier, x_i is the image feature of the i-th chosen image input to the output layer of the classifier, W_{y_i} is the weight of the output item corresponding to the label information of the i-th image, W_j is the weight of the j-th of the n output items for the i-th image, and b_{y_i} and b_j are bias parameters. The center loss function is

$$L_c = \frac{1}{2m}\sum_{i=1}^{m}\left\|x_i - c_{y_i}\right\|_2^{2}$$

where m is the number of images chosen from the image set, x_i is the image feature of the i-th chosen image input to the output layer of the classifier, and c_{y_i} is the mean of the image features, input to the output layer, of the images whose label information matches that of the i-th image. The loss function of the classifier is L = L_s + λL_c, where λ is a balance parameter.
Further, the images in the first subset include the target object of the first category, the images in the second subset include the target object of the second category, and the images in the third subset do not include the target object.
Further, the value output by each output item of the classifier is greater than or equal to 0 and less than or equal to 1, and the values output by the output items of the classifier sum to 1.
Further, the classifier includes a classifier based on a convolutional neural network.
Further, the convolutional neural network includes MobileNetV2.
Further, the target object includes a food object.
In a third aspect, an embodiment of the present disclosure provides an electronic device, comprising: a memory for storing computer-readable instructions; and one or more processors coupled with the memory for running the computer-readable instructions, such that when run by the processors, the method of training a classifier of any one of the foregoing first aspect is implemented.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions which, when executed by a computer, cause the computer to execute the method of training a classifier of any one of the foregoing first aspect.
The present disclosure discloses a method, apparatus, electronic device and computer-readable storage medium for training a classifier. The method comprises: obtaining an image set including a first subset, a second subset and a third subset, the images in the first subset corresponding to first label information, the images in the second subset corresponding to second label information, and the images in the third subset corresponding to third label information, wherein the first label information indicates that a corresponding image includes a target object of a first category, the second label information indicates that a corresponding image includes a target object of a second category, and the third label information indicates that a corresponding image does not include the target object; determining the output items of the classifier, the output items including a first item corresponding to the first label information, a second item corresponding to the second label information, and a third item corresponding to the third label information; and training the classifier according to the image set. Because the image set used during training includes subsets of target objects of different categories as well as a subset that does not include the target object, a single classifier can judge both whether an input image includes the target object and the category of the target object included in the input image, simplifying the training process.
The above description is only an overview of the technical solution of the present disclosure. In order that the technical means of the present disclosure may be better understood and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present disclosure may be more clearly understood, preferred embodiments are described in detail below with reference to the accompanying drawings.
Detailed description of the invention
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the accompanying drawings needed in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings in the following description are only some embodiments of the present disclosure; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of an embodiment of the method of training a classifier provided by an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of a neural network model provided by the present disclosure;
Fig. 3 is a schematic diagram of a processing flow for training a classifier provided by an embodiment of the present disclosure;
Fig. 4 is a structural schematic diagram of an embodiment of the apparatus for training a classifier provided by an embodiment of the present disclosure;
Fig. 5 is a structural schematic diagram of an electronic device provided according to an embodiment of the present disclosure.
Specific embodiment
The embodiments of the present disclosure are illustrated below by way of specific examples, from which those skilled in the art can readily understand other advantages and effects of the disclosure. Obviously, the described embodiments are only a part of the embodiments of the disclosure, not all of them. The disclosure may also be implemented or applied through other, different embodiments, and the details in this specification may be modified or varied from different viewpoints and applications without departing from the spirit of the disclosure. It should be noted that, in the absence of conflict, the following embodiments and the features in them may be combined with each other. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the disclosure without creative effort belong to the scope of protection of the disclosure.
It should be noted that various aspects of embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms, and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, those of ordinary skill in the art will appreciate that an aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, a device may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such a device may be implemented and/or such a method practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the diagrams provided in the following embodiments only schematically illustrate the basic concept of the disclosure; the diagrams show only the components related to the disclosure rather than being drawn according to the number, shape and size of the components in actual implementation. In actual implementation, the form, quantity and proportion of each component may vary arbitrarily, and the component layout may also be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects may be practiced without these specific details.
The method of training a classifier provided in this embodiment may be executed by an apparatus for training a classifier. The apparatus may be implemented as software, as hardware, or as a combination of software and hardware; for example, the apparatus includes a computer device, and the method of training a classifier provided in this embodiment is executed by that computer device. As understood by those skilled in the art, the computer device may be a desktop or portable computer device, a mobile terminal device, or the like.
Fig. 1 is a flowchart of an embodiment of the method of training a classifier provided by an embodiment of the present disclosure. As shown in Fig. 1, the method of training a classifier of the embodiment of the present disclosure includes the following steps:
Step S101: obtain an image set, the image set including a first subset, a second subset and a third subset, the images in the first subset corresponding to first label information, the images in the second subset corresponding to second label information, and the images in the third subset corresponding to third label information, wherein the first label information indicates that an image corresponding to it includes a target object of a first category, the second label information indicates that an image corresponding to it includes a target object of a second category, and the third label information indicates that an image corresponding to it does not include the target object;
The classifier needs to be trained on a training set; therefore, in step S101, an image set is obtained as the training set for training the classifier.
The embodiments of the present disclosure aim to train a single classifier that can judge both whether an input image includes the target object and the category of the target object included in the input image. For example, the classification result of the classifier includes an output item indicating that the input image does not include the target object, an output item indicating that the input image includes a target object of the first category, and an output item indicating that the input image includes a target object of the second category. The embodiments of the present disclosure therefore propose that the images in the training set of the classifier carry label information corresponding to the output items of the classification result (the first label information, second label information and third label information), and in particular include the images in the third subset, whose third label information indicates that the corresponding images do not include the target object. In this way, during training, the classification results can be compared with the label information of the images in the training set to construct a loss function, and parameters such as weights and biases involved in training can then be corrected, so that training a single classifier realizes both the judgement of whether the input image includes the target object and the judgement of the category of the target object included in the input image.
In an optional embodiment, the images in the first subset include the target object of the first category, the images in the second subset include the target object of the second category, and the images in the third subset do not include the target object. Since the images in the first subset correspond to the first label information, and the first label information indicates that a corresponding image includes a target object of the first category, when the images in the first subset do include the target object of the first category, the content indicated by the first label information is consistent with the target-object information of those images. During training, the classifier can then accurately learn the features of images that include the target object of the first category and verify such images against the corresponding first label information, so that parameters such as weights and biases involved in training can be better corrected to improve the accuracy of the classifier. Similarly, the images in the second subset correspond to the second label information and include the target object of the second category, and the second label information indicates that a corresponding image includes a target object of the second category; the images in the third subset correspond to the third label information and do not include the target object, and the third label information indicates that a corresponding image does not include the target object. That is, the content indicated by the second label information is consistent with the target-object information of the images in the second subset, and the content indicated by the third label information is consistent with the target-object information of the images in the third subset (the images in the third subset contain no target object, so their target-object information is that there is no target object). Training the classifier on label information consistent with the target-object information of the images allows the weights, biases and other parameters involved in training to be better corrected, improving the accuracy of the classifier.
In the embodiments of the present disclosure, optionally, the target object includes a person object; correspondingly, the classifier can judge whether the input image includes a person object and, if so, also judge the category of the person object it includes (for example, the classification result of the classifier includes an output item indicating that the input image does not include a person object, an output item indicating that the input image includes a person object of a first category, such as a man, and an output item indicating that the input image includes a person object of a second category, such as a woman). Optionally, the target object includes a physical object, such as a food object; correspondingly, the classifier can judge whether the input image includes a food object and, if so, also judge the category of the food object it includes (for example, the classification result of the classifier includes an output item indicating that the input image does not include a food object, an output item indicating that the input image includes a food object of a first category, such as sweet food, and an output item indicating that the input image includes a food object of a second category, such as salty food).
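The three-subset training set described above can be sketched as a simple label assignment. This is an illustrative sketch only — the label names and integer values are assumptions, not from the patent; the key point is that the "no target object" case gets its own label alongside the two category labels:

```python
# Hypothetical label scheme: one integer label per subset, matching the
# three output items of the single classifier.
LABELS = {
    "sweet_food": 0,   # first subset: target object of the first category
    "salty_food": 1,   # second subset: target object of the second category
    "no_food": 2,      # third subset: no target object at all
}

def build_training_set(first_subset, second_subset, third_subset):
    """Return (image, label) pairs covering all three subsets."""
    pairs = []
    pairs += [(img, LABELS["sweet_food"]) for img in first_subset]
    pairs += [(img, LABELS["salty_food"]) for img in second_subset]
    pairs += [(img, LABELS["no_food"]) for img in third_subset]
    return pairs
```

Because the "no target" images are part of the same labelled set, a single classifier trained on these pairs learns both the presence judgement and the category judgement at once.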
Step S102: determine the output items of the classifier, the output items including a first item, a second item and a third item, the first label information corresponding to the first item, the second label information corresponding to the second item, and the third label information corresponding to the third item;
The output items of the classifier in step S102 represent the classification result of the classifier. As understood by those skilled in the art, depending on the computation method used by the classifier, the output items may represent the classification result in various forms. As an optional embodiment, one of the output items outputs a value A and the other output items output a value B, meaning that the output item whose output value is A indicates the classifier's classification of the input image. As another optional embodiment, the value output by each output item of the classifier is greater than or equal to 0 and less than or equal to 1, and the values output by the output items sum to 1; that is, the value output by each output item represents the classifier's class probability for the input image, and the output item with the largest probability value indicates the classifier's classification of the input image.
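The second embodiment described above is exactly a softmax normalization. A minimal sketch, with assumed raw scores for the three output items (the numbers are illustrative):

```python
import numpy as np

def softmax(logits):
    """Normalize raw scores so each value lies in [0, 1] and they sum to 1."""
    shifted = logits - np.max(logits)   # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Hypothetical raw scores for the three output items:
# [first category, second category, no target object]
probs = softmax(np.array([2.0, 0.5, -1.0]))
decision = int(np.argmax(probs))        # index of the most probable output item
```

Here `decision` picks the output item with the largest probability, which is the classifier's classification of the input image.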
As mentioned above, the embodiments of the present disclosure aim to train a single classifier that can judge both whether an input image includes the target object and the category of the target object included in the input image. The classification result of the trained classifier should therefore include output items corresponding to those judgements, and the label information of the images in the training set corresponds to the output items of the classifier. In an optional embodiment, among the output items of the classifier, the first item indicates that the input image of the classifier includes a target object of the first category, the second item indicates that the input image includes a target object of the second category, and the third item indicates that the input image does not include the target object. As in step S101, the first label information indicates that an image corresponding to it includes a target object of the first category, and the first label information corresponds to the first item; therefore the first item indicates that the input image of the classifier includes a target object of the first category. Similarly, the second item indicates that the input image includes a target object of the second category, and the third item indicates that the input image does not include the target object.
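The single classifier's output can then be decoded into the two judgements in one step. A hypothetical helper (the index assignment of the third item is an assumption carried over from the illustrative label scheme above):

```python
# Hypothetical decoding of the classifier's three output items into the two
# judgements: presence of the target object, and (if present) its category.
NO_TARGET = 2   # assumed index of the third output item (no target object)

def decode(probs):
    """Return (contains_target, category) from the three output-item probabilities."""
    item = max(range(len(probs)), key=lambda i: probs[i])
    if item == NO_TARGET:
        return (False, None)
    return (True, item)   # item 0 = first category, item 1 = second category
```

A two-classifier pipeline would need one model for the presence question and another for the category question; here both fall out of a single argmax.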
Step S103, training the classifier according to the image set.
The image set for training the classifier is determined in step S101, and the output items of the classifier are determined in step S102, so that in step S103 the classifier can be trained according to the image set, the output items of the classifier including the output items determined in step S102.
Optionally, the classifier includes a classifier based on a convolutional neural network; optionally, the convolutional neural network includes, but is not limited to, MobileNetV2, AlexNet, GoogLeNet, VGGNet, DenseNet and the like. For example, the MobileNetV2 network is a lightweight deep neural network proposed by Google for embedded devices; it improves on MobileNetV1 by using an expand-then-compress strategy to reduce the amount of calculation while improving precision.
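Much of the computational saving in MobileNet-style networks comes from replacing standard convolutions with depthwise-separable ones (a depthwise filter followed by a 1x1 pointwise projection). A back-of-the-envelope multiply-accumulate count (the layer sizes below are arbitrary illustration values, not taken from the disclosure) shows the effect:

```python
def conv_macs(h, w, k, c_in, c_out):
    # Multiply-accumulates for a standard k x k convolution, stride 1, "same" output size.
    return h * w * k * k * c_in * c_out

def separable_macs(h, w, k, c_in, c_out):
    # Depthwise k x k convolution followed by a 1 x 1 pointwise convolution.
    return h * w * k * k * c_in + h * w * c_in * c_out

std = conv_macs(56, 56, 3, 32, 64)
sep = separable_macs(56, 56, 3, 32, 64)
print(std / sep)  # ~7.9: roughly 8x fewer operations for this layer
```

For a 3x3 kernel the ratio approaches k*k = 9 as the output channel count grows, which is why such layers suit embedded devices.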
It will be appreciated by those skilled in the art that different convolutional neural networks have different architectures, which may include different layers and different numbers of layers. As shown in Fig. 2, the framework of a typical convolutional neural network includes convolutional layers, non-linear layers, pooling layers and fully connected layers.
The convolutional layer is mainly used to extract image features from the input image; one or more filters (also called feature detectors) may extract image features from the input image according to a preset stride. As understood by those skilled in the art, an image is composed of pixels, and each pixel of the image can be characterized by a color parameter and a location parameter. For example, if the input image includes 48*48 pixels, a 5*5 filter extracting image features with a stride of 1 yields a convolutional layer output of a 44*44 image feature matrix.
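The 48*48 to 44*44 example follows the standard convolution output-size formula, which can be checked with a small sketch:

```python
def conv_output_size(n, k, stride=1, padding=0):
    # Standard formula: floor((n + 2*padding - k) / stride) + 1
    return (n + 2 * padding - k) // stride + 1

# A 48x48 input with a 5x5 filter and stride 1 (no padding) gives 44x44,
# matching the example in the text.
print(conv_output_size(48, 5))  # 44
```

With a stride of 2 the same filter would instead give a 22*22 feature matrix, illustrating how the preset stride controls the output dimension.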
A non-linear layer or a pooling layer may be connected after the convolutional layer. The non-linear layer performs further feature extraction on the image features output by the convolutional layer, while the pooling layer can process the output of the convolutional layer or the non-linear layer by means of average pooling or maximum pooling, which reduces the dimension of the image features and reduces the number of operations.
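Maximum pooling as described here can be sketched in plain Python (the 4*4 feature matrix below is a made-up example):

```python
def max_pool(matrix, size=2):
    # Non-overlapping size x size maximum pooling (stride equal to size).
    n = len(matrix)
    return [
        [
            max(matrix[r + dr][c + dc] for dr in range(size) for dc in range(size))
            for c in range(0, n, size)
        ]
        for r in range(0, n, size)
    ]

features = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 2],
    [2, 2, 1, 3],
]
print(max_pool(features))  # [[4, 2], [2, 5]] -- the 4x4 input is reduced to 2x2
```

Each output value keeps only the strongest response in its window, which is how pooling reduces the feature dimension and the subsequent number of operations.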
The last part of the convolutional neural network is the fully connected layers, and the last of the fully connected layers is the output layer, which may also be called the output layer of the classifier. The fully connected layers receive the image features from the preceding layers and process them layer by layer; finally, the processed image features are input to the output layer, where an activation function calculates over the image features and maps the calculation results to the multiple output items included in the output layer. These output items can serve as the output items of the classifier.
In the process of training the classifier based on the convolutional neural network, an image in the training set is input to the convolutional neural network and calculated and processed layer by layer according to the architecture of the network, finally outputting a classification result at the output layer of the fully connected layers. A loss function is then constructed by comparing the classification result with the label information of the image; the parameters involved in training, such as the weights and biases, are updated according to the loss function by gradient descent or a similar algorithm, and the classification result is recalculated according to the updated parameters. This is iterated until an optimal classification result is obtained, completing the training of the classifier, so that an input image can be classified and/or recognized by the classifier.
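The iterate-forward/loss/gradient-update procedure described above can be sketched with a deliberately tiny stand-in model: a one-feature logistic classifier trained by full-batch gradient descent rather than an actual convolutional network (the data and learning rate are made up for illustration):

```python
import math

# Six (feature, label) pairs standing in for images and their label information.
data = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]
w, b, lr = 0.0, 0.0, 0.5

def loss_and_grads(w, b):
    total, gw, gb = 0.0, 0.0, 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))           # forward pass
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)  # loss vs. label
        gw += (p - y) * x                                  # gradient w.r.t. weight
        gb += p - y                                        # gradient w.r.t. bias
    return total, gw, gb

start_loss, _, _ = loss_and_grads(w, b)
for _ in range(200):                                       # iterate
    _, gw, gb = loss_and_grads(w, b)
    w, b = w - lr * gw, b - lr * gb                        # update parameters
final_loss, _, _ = loss_and_grads(w, b)
print(final_loss < start_loss)  # True: training reduces the loss
```

The same loop structure (forward pass, loss against label information, parameter update, iterate) applies to the convolutional network, only with many more parameters and layer-by-layer backpropagation.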
Therefore, in step S103, in the manner described above, the images in the image set are input to the convolutional neural network, and the classifier is trained through the feature extraction and calculation of each layer, so that the classifier obtained by the training realizes both the judgement of whether the input image includes the target object and the judgement of the category of the target object included in the input image.
Considering that the image set for training the classifier in the embodiment of the present disclosure includes the third subset, and the images in the third subset do not include the target object and may include various other objects, considerable interference with training the classifier will be caused. To solve this technical problem, the loss function can be improved when training the classifier provided by the embodiment of the present disclosure, so as to reduce the interference. Referring to Fig. 3, step S103, training the classifier according to the image set, may further include:
Step S301, determining the loss function of the classifier according to a flexible maximum value loss function (softmax loss function) and a center loss function;
Step S302, training the classifier according to the image set and the loss function of the classifier.
Using the flexible maximum value loss function as the loss function of the classifier improves the discrimination between the classes corresponding to the output items of the classifier, while using the center loss function as the loss function of the classifier makes the distance between the image features of the images within the class corresponding to each output item smaller, so that each class is clearly distinguished. As mentioned above, since the third subset of the image set in the embodiment of the present disclosure does not include the target object and may include various objects, it will cause considerable interference with training the classifier. Therefore, in order to reduce the interference and improve the accuracy of classification and/or recognition by the classifier, the loss function of the classifier is determined in step S301 according to the flexible maximum value loss function and the center loss function, and the classifier is trained in step S302 according to the loss function determined in step S301, so that the distance between the classes corresponding to the output items of the classifier increases while the distance between the image features of the images within a class decreases, obtaining a good classification effect.
Optionally, the flexible maximum value loss function is Ls = −Σ_{i=1}^{m} log( e^{W_{y_i}·x_i + b_{y_i}} / Σ_{j=1}^{n} e^{W_j·x_i + b_j} ), where m is the number of images chosen from the image set, n is the number of output items of the classifier, x_i is the image feature, input to the output layer of the classifier, of the i-th image among the m images chosen from the image set, W_{y_i} is the value output by the output item corresponding to the label information of the i-th image among the m images chosen from the image set, W_j is the value output for the i-th image by the j-th output item among the n output items of the classifier, and b_{y_i} and b_j are offset parameters. Optionally, the offset parameters are constants, for example b_{y_i} and b_j equal to 0.
The center loss function is Lc = (1/2) Σ_{i=1}^{m} ‖x_i − c_{y_i}‖², where m is the number of images chosen from the image set, x_i is the image feature, input to the output layer of the classifier, of the i-th image among the m images chosen from the image set, and c_{y_i} is the average value of the image features, input to the output layer of the classifier, of the images corresponding to the label information of the i-th image among the m images chosen from the image set;
The loss function of the classifier is L = Ls + λLc, where λ is a balance parameter. λ is used to adjust the relative weight of Ls and Lc, so as to better control the classification result. Optionally, λ is a constant greater than 0, for example λ equal to 1.
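Under the definitions above, Ls, Lc and the combined L = Ls + λLc can be sketched in plain Python. This is a minimal sketch with made-up two-dimensional features, biases taken as 0 (as the text permits) and λ = 1; it is not the disclosure's implementation:

```python
import math

def softmax_loss(features, weights, labels):
    # Ls: cross-entropy over the classifier's output items, biases taken as 0.
    total = 0.0
    for x, y in zip(features, labels):
        logits = [sum(wj * xj for wj, xj in zip(w, x)) for w in weights]
        m = max(logits)
        log_z = m + math.log(sum(math.exp(z - m) for z in logits))
        total += -(logits[y] - log_z)
    return total

def center_loss(features, centers, labels):
    # Lc: half the squared distance of each feature to its class centre c_{y_i}.
    return 0.5 * sum(
        sum((xj - cj) ** 2 for xj, cj in zip(x, centers[y]))
        for x, y in zip(features, labels)
    )

features = [[1.0, 0.1], [0.9, 0.0], [0.0, 1.1]]   # made-up output-layer features
labels = [0, 0, 1]
weights = [[1.0, 0.0], [0.0, 1.0]]                # one weight row per output item
centers = [[0.95, 0.05], [0.0, 1.1]]              # per-class feature averages
lam = 1.0                                         # balance parameter λ
L = softmax_loss(features, weights, labels) + lam * center_loss(features, centers, labels)
print(L > 0)  # True
```

The softmax term pushes classes apart while the center term pulls each feature toward its class average, matching the two effects described above.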
Fig. 4 shows a structural schematic diagram of an embodiment of a device 400 for training a classifier provided by an embodiment of the present disclosure. As shown in Fig. 4, the device 400 for training a classifier includes an obtaining module 401, a determining module 402 and a training module 403.
The obtaining module 401 is configured to obtain an image set, the image set including a first subset, a second subset and a third subset, the images in the first subset corresponding to first label information, the images in the second subset corresponding to second label information, and the images in the third subset corresponding to third label information, wherein the first label information indicates that an image corresponding to the first label information includes a target object of a first category, the second label information indicates that an image corresponding to the second label information includes a target object of a second category, and the third label information indicates that an image corresponding to the third label information does not include the target object. The determining module 402 is configured to determine the output items of the classifier, the output items including a first item, a second item and a third item, the first label information corresponding to the first item, the second label information corresponding to the second item, and the third label information corresponding to the third item. The training module 403 is configured to train the classifier according to the image set.
The device shown in Fig. 4 can execute the method of the embodiment shown in Fig. 1 and/or Fig. 3. For the parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in Fig. 1 and/or Fig. 3. For the execution process and technical effect of the technical solution, refer to the description in the embodiment shown in Fig. 1 and/or Fig. 3; details are not repeated here.
Referring now to Fig. 5, it shows a structural schematic diagram of an electronic device 500 suitable for implementing the embodiments of the present disclosure. The electronic device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 5 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
As shown in Fig. 5, the electronic device 500 may include a processing device (such as a central processing unit, a graphics processor, etc.) 501, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. Various programs and data required for the operation of the electronic device 500 are also stored in the RAM 503. The processing device 501, the ROM 502 and the RAM 503 are connected to each other through a bus or communication line 504. An input/output (I/O) interface 505 is also connected to the bus or communication line 504.
In general, the following devices can be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 508 including, for example, a magnetic tape, hard disk, etc.; and a communication device 509. The communication device 509 can allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 5 shows the electronic device 500 with various devices, it should be understood that it is not required to implement or have all the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication device 509, or installed from the storage device 508, or installed from the ROM 502. When the computer program is executed by the processing device 501, the above-mentioned functions defined in the method of the embodiment of the present disclosure are executed.
It should be noted that the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to execute the method of training a classifier in the above embodiments.
The computer program code for executing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the internet using an internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functions and operations of the systems, methods and computer program products according to the various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment or part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two blocks shown in succession may actually be executed substantially in parallel, or sometimes in the opposite order, depending on the functions involved. It should also be noted that each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software or by means of hardware, wherein the name of a unit does not, under certain circumstances, constitute a limitation on the unit itself.
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should appreciate that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above disclosed concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.

Claims (11)

1. A method of training a classifier, characterized by comprising:
obtaining an image set, the image set including a first subset, a second subset and a third subset, the images in the first subset corresponding to first label information, the images in the second subset corresponding to second label information, and the images in the third subset corresponding to third label information, wherein the first label information indicates that an image corresponding to the first label information includes a target object of a first category, the second label information indicates that an image corresponding to the second label information includes a target object of a second category, and the third label information indicates that an image corresponding to the third label information does not include the target object;
determining output items of the classifier, the output items including a first item, a second item and a third item, the first label information corresponding to the first item, the second label information corresponding to the second item, and the third label information corresponding to the third item;
training the classifier according to the image set.
2. The method of training a classifier according to claim 1, characterized in that training the classifier according to the image set comprises:
determining the loss function of the classifier according to a flexible maximum value loss function (softmax loss function) and a center loss function;
training the classifier according to the image set and the loss function of the classifier.
3. The method of training a classifier according to claim 2, characterized in that the flexible maximum value loss function is Ls = −Σ_{i=1}^{m} log( e^{W_{y_i}·x_i + b_{y_i}} / Σ_{j=1}^{n} e^{W_j·x_i + b_j} ), where m is the number of images chosen from the image set, n is the number of output items of the classifier, x_i is the image feature, input to the output layer of the classifier, of the i-th image among the m images chosen from the image set, W_{y_i} is the value output by the output item corresponding to the label information of the i-th image among the m images chosen from the image set, W_j is the value output for the i-th image by the j-th output item among the n output items of the classifier, and b_{y_i} and b_j are offset parameters;
the center loss function is Lc = (1/2) Σ_{i=1}^{m} ‖x_i − c_{y_i}‖², where m is the number of images chosen from the image set, x_i is the image feature, input to the output layer of the classifier, of the i-th image among the m images chosen from the image set, and c_{y_i} is the average value of the image features, input to the output layer of the classifier, of the images corresponding to the label information of the i-th image among the m images chosen from the image set;
the loss function of the classifier is L = Ls + λLc, where λ is a balance parameter.
4. The method of training a classifier according to any one of claims 1 to 3, characterized in that the images in the first subset include the target object of the first category, the images in the second subset include the target object of the second category, and the images in the third subset do not include the target object.
5. The method of training a classifier according to any one of claims 1 to 3, characterized in that the value output by each output item of the classifier is greater than or equal to 0 and less than or equal to 1, and the sum of the values output by the output items of the classifier is 1.
6. The method of training a classifier according to any one of claims 1 to 3, characterized in that the classifier includes a classifier based on a convolutional neural network.
7. The method of training a classifier according to claim 6, characterized in that the convolutional neural network includes MobileNetV2.
8. The method of training a classifier according to any one of claims 1 to 3, characterized in that the target object includes a food item.
9. A device for training a classifier, characterized by comprising:
an obtaining module, configured to obtain an image set, the image set including a first subset, a second subset and a third subset, the images in the first subset corresponding to first label information, the images in the second subset corresponding to second label information, and the images in the third subset corresponding to third label information, wherein the first label information indicates that an image corresponding to the first label information includes a target object of a first category, the second label information indicates that an image corresponding to the second label information includes a target object of a second category, and the third label information indicates that an image corresponding to the third label information does not include the target object;
a determining module, configured to determine output items of the classifier, the output items including a first item, a second item and a third item, the first label information corresponding to the first item, the second label information corresponding to the second item, and the third label information corresponding to the third item;
a training module, configured to train the classifier according to the image set.
10. An electronic device, comprising:
a memory for storing computer-readable instructions; and
a processor for running the computer-readable instructions, such that when the instructions are run by the processor, the method of training a classifier according to any one of claims 1-8 is realized.
11. A non-transient computer-readable storage medium for storing computer-readable instructions which, when executed by a computer, cause the computer to perform the method of training a classifier according to any one of claims 1-8.
CN201910453997.9A 2019-05-28 2019-05-28 Method, apparatus, electronic equipment and the computer readable storage medium of training classifier Pending CN110210553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910453997.9A CN110210553A (en) 2019-05-28 2019-05-28 Method, apparatus, electronic equipment and the computer readable storage medium of training classifier

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910453997.9A CN110210553A (en) 2019-05-28 2019-05-28 Method, apparatus, electronic equipment and the computer readable storage medium of training classifier

Publications (1)

Publication Number Publication Date
CN110210553A true CN110210553A (en) 2019-09-06

Family

ID=67789160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910453997.9A Pending CN110210553A (en) 2019-05-28 2019-05-28 Method, apparatus, electronic equipment and the computer readable storage medium of training classifier

Country Status (1)

Country Link
CN (1) CN110210553A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446729A (en) * 2018-03-13 2018-08-24 天津工业大学 Egg embryo classification method based on convolutional neural networks
CN109145840A (en) * 2018-08-29 2019-01-04 北京字节跳动网络技术有限公司 video scene classification method, device, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446729A (en) * 2018-03-13 2018-08-24 天津工业大学 Egg embryo classification method based on convolutional neural networks
CN109145840A (en) * 2018-08-29 2019-01-04 北京字节跳动网络技术有限公司 video scene classification method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cheng Guang et al.: "Encrypted Traffic Measurement and Analysis", 31 December 2018, Southeast University Press *
Yuan Jianxin: "Research on Detection and Recognition Methods for Dangerous Instruments Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
CN107633218A (en) Method and apparatus for generating image
CN108446387A (en) Method and apparatus for updating face registration library
CN107578017A (en) Method and apparatus for generating image
CN108846440A (en) Image processing method and device, computer-readable medium and electronic equipment
CN110288082A (en) Convolutional neural networks model training method, device and computer readable storage medium
CN109740018A (en) Method and apparatus for generating video tab model
CN110276346A (en) Target area identification model training method, device and computer readable storage medium
CN109410253B (en) For generating method, apparatus, electronic equipment and the computer-readable medium of information
CN108989882A (en) Method and apparatus for exporting the snatch of music in video
CN110188719A (en) Method for tracking target and device
CN108509892A (en) Method and apparatus for generating near-infrared image
CN110222726A (en) Image processing method, device and electronic equipment
CN109872276A (en) Method and apparatus for generating image super-resolution model
CN109815365A (en) Method and apparatus for handling video
CN109919244A (en) Method and apparatus for generating scene Recognition model
CN109376268A (en) Video classification methods, device, electronic equipment and computer readable storage medium
CN110399847A (en) Extraction method of key frame, device and electronic equipment
CN110070499A (en) Image processing method, device and computer readable storage medium
CN109961032A (en) Method and apparatus for generating disaggregated model
CN109800730A (en) The method and apparatus for generating model for generating head portrait
CN110222641A (en) The method and apparatus of image for identification
CN110062157A (en) Render method, apparatus, electronic equipment and the computer readable storage medium of image
CN110069191A (en) Image based on terminal pulls deformation implementation method and device
CN110287816A (en) Car door motion detection method, device and computer readable storage medium
CN110263255A (en) Acquisition methods, system, server and the storage medium of customer attribute information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190906