WO2020108474A1 - Method, apparatus, device and medium for picture classification and for generating a classification recognition model - Google Patents

Method, apparatus, device and medium for picture classification and for generating a classification recognition model

Info

Publication number
WO2020108474A1
WO2020108474A1 (application PCT/CN2019/120903)
Authority
WO
WIPO (PCT)
Prior art keywords
classification
level
picture
neural network
training
Prior art date
Application number
PCT/CN2019/120903
Other languages
English (en)
French (fr)
Inventor
潘跃
刘振强
梁柱锦
Original Assignee
广州市百果园信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州市百果园信息技术有限公司
Publication of WO2020108474A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Definitions

  • Embodiments of the present application relate to data processing technologies, for example, to a method, apparatus, device, and medium for picture classification and for generating a classification recognition model.
  • The related technologies described above have at least the following problems. First, because a deep neural network is trained mainly by backward gradient propagation, training becomes gradually harder as the network depth increases. Second, because the forward inference of a deep neural network involves a huge amount of computation, the amount of computation also grows as the depth increases, which further reduces classification efficiency.
  • Embodiments of the present application provide a method, apparatus, device, and medium for picture classification and for generating a classification recognition model, so as to improve the accuracy and efficiency of picture classification.
  • An embodiment of the present application provides a picture classification method.
  • the method includes:
  • obtaining a picture set to be classified, the picture set including at least two pictures; inputting the picture set into a pre-trained current-level classification recognition model to obtain the classification score of each picture; when the classification score of each picture satisfies a preset condition, determining the classification recognition result of each picture according to the classification score of each picture; and when the classification score of each picture does not satisfy the preset condition, inputting each picture into a pre-trained next-level classification recognition model until the classification recognition result of each picture in the picture set is obtained, where each level of classification recognition model is generated based on neural network training.
  • An embodiment of the present application also provides a method for generating a classification and recognition model.
  • the method includes:
  • obtaining a training sample, the training sample including a training picture and an original classification label of the training picture;
  • inputting the training picture and the original classification label of the training picture into a neural network model to obtain the classification score for the training picture of each level of neural network layer in a multi-level neural network layer, and to obtain the classification score and classification label for the training picture of each level of fully connected layer in a multi-level fully connected layer, where the neural network model includes N levels of neural network layers and N-1 levels of fully connected layers, the level-i fully connected layer is located after the level-(i+1) neural network layer, N ≥ 3, and i ∈ [1, N-1];
  • obtaining a first-level loss function of the first-level neural network layer according to the classification score of the first-level neural network layer for the training picture and the original classification label of the training picture; obtaining a level-P loss function of the level-P neural network layer according to the classification score and classification label of the level-(P-1) fully connected layer for the training picture, where P ∈ [2, N];
  • determining the loss function of the neural network model according to the multi-level loss functions, adjusting the network parameters of the multi-level neural network layers and of the multi-level fully connected layers according to the loss function of the neural network model, and recalculating the loss function of the neural network model based on the adjusted network parameters until the loss function of the neural network model reaches a preset function value, after which each level of neural network layer is used as the classification recognition model of the corresponding level.
  • An embodiment of the present application also provides a picture classification device, which includes:
  • a picture collection obtaining module configured to obtain a picture collection to be classified, the picture collection including at least two pictures;
  • a classification result generation module configured to input the picture set into a pre-trained current-level classification recognition model to obtain the classification score of each picture;
  • a classification recognition result generation module configured to: when the classification score of each picture satisfies a preset condition, determine the classification recognition result of each picture according to the classification score of each picture; and when the classification score of each picture does not satisfy the preset condition, input each picture into a pre-trained next-level classification recognition model until the classification recognition result of each picture is obtained, where each level of classification recognition model is generated based on neural network training.
  • An embodiment of the present application further provides a device for generating a classification and recognition model.
  • the device includes:
  • a training sample acquisition module configured to acquire a training sample, the training sample including a training picture and an original classification label of the training picture
  • a classification score and classification label generation module configured to input the training picture and the original classification label of the training picture into the neural network model to obtain the classification score for the training picture of each level of neural network layer in the multi-level neural network layer, and to obtain the classification score and classification label for the training picture of each level of fully connected layer in the multi-level fully connected layer, where the neural network model includes N levels of neural network layers and N-1 levels of fully connected layers, the level-i fully connected layer is located after the level-(i+1) neural network layer, N ≥ 3, and i ∈ [1, N-1];
  • a first-level loss function generation module configured to obtain the first-level loss function of the first-level neural network layer according to the classification score of the first-level neural network layer for the training picture and the original classification label of the training picture;
  • a level-P loss function generation module configured to obtain the level-P loss function of the level-P neural network layer according to the classification score and classification label of the level-(P-1) fully connected layer for the training picture, where P ∈ [2, N];
  • a classification recognition model generation module configured to determine the loss function of the neural network model according to the multi-level loss functions, adjust the network parameters of the multi-level neural network layers and of the multi-level fully connected layers according to the loss function of the neural network model, and recalculate the loss function of the neural network model based on the adjusted network parameters until the loss function of the neural network model reaches a preset function value, after which each level of neural network layer is used as the classification recognition model of the corresponding level.
  • An embodiment of the present application also provides a device, which includes:
  • one or more processors;
  • a memory configured to store one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in the embodiments of the present application.
  • An embodiment of the present application also provides a computer-readable storage medium that stores a computer program, and when the program is executed by a processor, the method described in the embodiment of the present application is implemented.
  • FIG. 1 is a flowchart of a picture classification method provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of another picture classification method provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for generating a classification recognition model provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of another method for generating a classification and recognition model provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a neural network model provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a picture classification device according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a device for generating a classification and recognition model according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • Each embodiment provides multiple features and examples at the same time.
  • The multiple features described in the embodiments of this application can be combined to form multiple schemes.
  • Each numbered embodiment should not be regarded as only a single technical solution.
  • The present application will be described below with reference to the drawings and embodiments. The specific embodiments described herein are only used to explain the present application, not to limit it. For ease of description, the drawings show only some, but not all, of the structures related to the present application.
  • a classification and recognition model generated based on deep neural network training can be used to classify pictures.
  • To give the classification recognition model higher classification accuracy, that is, to let it accurately classify both simple pictures and difficult pictures, the depth of the deep neural network can be increased, but this causes the following problems. Since the deep neural network mainly uses backward gradient propagation during training, training becomes gradually harder as the network depth increases. In addition, since the forward inference of a deep neural network involves a huge amount of computation, the amount of computation also grows as the depth increases, which further reduces classification efficiency.
  • Multi-level refers to different levels of difficulty; the classification recognition model of each level is used to classify and recognize pictures of the corresponding difficulty level. This is described below in conjunction with specific embodiments.
  • FIG. 1 is a flowchart of a picture classification method provided by an embodiment of the present application. This embodiment can be applied to improve the accuracy and efficiency of picture classification.
  • The method can be performed by a picture classification device, which can be implemented in software and/or hardware and can be configured in a device such as a computer or a mobile terminal. As shown in Figure 1, the method includes the following steps:
  • Step 110 Acquire a picture collection to be classified, the picture collection includes at least two pictures.
  • the picture collection to be classified may be a picture collection uploaded by the user to the network platform, or may be a pre-stored picture collection.
  • The source of the picture collection may be selected according to actual conditions and is not limited here.
  • The picture set includes at least two pictures, and the classification recognition difficulty of the pictures may be the same or different; in other words, different pictures in the picture set may need to be recognized by classification recognition models of different levels.
  • Step 120 Input the picture set into the pre-trained current-level classification and recognition model to obtain the classification score of each picture.
  • Step 130 Determine whether the classification score of each picture meets the preset condition; if the classification score of a picture meets the preset condition, perform step 140; if the classification score of a picture does not meet the preset condition, perform step 150.
  • Step 140 Determine the classification recognition result of each picture according to the classification score of each picture.
  • Step 150 Continue to input each picture into the pre-trained next-level classification recognition model to obtain a classification score for each picture, and return to step 130.
  • Each level of classification recognition model can be used to classify and recognize pictures of the corresponding difficulty level. Because the difficulty of the pictures handled by each level differs, the complexity of the network structure of each classification recognition model usually also differs: the more complicated the network structure of a classification recognition model, the harder the pictures it is used to classify and recognize. In the hierarchical recognition process described above, the number of pictures passing through each successive level of classification recognition model is continuously reduced; accordingly, the amount of computation is reduced, thereby improving classification efficiency.
  • the complexity of the network structure described here is relative.
  • Each level of classification recognition model is generated based on neural network training, and the models are generated by collaborative training rather than trained separately; that is, during the training process, the classification scores of the multi-level classification recognition models affect each other.
  • The current-level classification recognition model can refer to the classification recognition model used to classify and recognize the simplest pictures.
  • The next-level classification recognition models can be understood as the second-level classification recognition model, the third-level classification recognition model, and so on.
  • After the picture set to be classified is obtained, it is input into the pre-trained current-level classification recognition model to obtain the classification score of each picture in the picture set, and the classification score of each picture determines whether the picture needs to be input into the next-level classification recognition model, until the classification recognition result of each picture in the picture set is obtained.
  • In other words, the picture set is input into the pre-trained current-level classification recognition model to obtain the classification score of each picture, and it is determined whether the classification score of each picture satisfies the preset condition. If the classification score of a picture satisfies the preset condition, the classification recognition result of the picture is determined according to its classification score, and the picture is no longer input into the next-level classification recognition model. If the classification score of a picture does not satisfy the preset condition, the picture continues to be input into the next-level classification recognition model to obtain a new classification score, and the judgment is repeated.
  • Pictures are input into next-level classification recognition models in this way until the classification recognition result of each picture in the picture set is obtained.
  • The preset condition may be that the classification probability of the picture is greater than or equal to a probability threshold, where the classification probability of the picture is calculated from the classification score of the picture.
  • The technical solution provided by the embodiments of the present application is directed to the problem of binary classification of pictures.
  • Binary classification means that the classification result is yes or no.
  • Yes or no can be represented by a preset identifier. For example, when determining whether a picture contains illegal content, "yes" is represented by "1" and "no" by "0": a classification score of "1" (that is, yes) means that the picture contains illegal content, and a classification score of "0" (that is, no) means that it does not.
  • Determining the classification recognition result of a picture according to its classification score can therefore be understood as follows: the classification scores are set in advance, for example "1" meaning "yes" and "0" meaning "no", and the meaning of "yes" and "no" depends on the content to be recognized, as in the illegal-content example above.
  • Because the difficulty of the pictures in the picture set differs, the number of pictures input into each next-level classification recognition model decreases level by level.
  • Most pictures in the picture set can be accurately classified and recognized by the current-level classification recognition model; therefore, the next-level classification recognition model has fewer pictures to classify, which improves the classification efficiency of the classification recognition models.
  • The above process also embodies classification and recognition according to the difficulty level of the picture; compared with having pictures of all difficulty levels classified and recognized by the same classification recognition model, the classification accuracy is improved.
  • The improvement in classification accuracy can be understood as follows: when a single classification recognition model is trained for all pictures, the classification scores of pictures of every difficulty level affect backward gradient propagation, not only the classification scores of difficult pictures.
  • In the hierarchical scheme, backward gradient propagation is not affected by the classification scores of pictures that have already been classified and recognized at earlier levels; that is, when a later-level classification recognition model is trained, backward gradient propagation is driven by the classification scores of the harder pictures only.
  • This training mechanism makes the specificity of each trained classification recognition model more prominent.
  • For pictures that reach the last level, the classification recognition results can be determined according to the classification scores these pictures obtain from the last-level classification recognition model.
  • The N-level classification recognition models may include a first-level classification recognition model, a second-level classification recognition model, ..., an (N-1)-th-level classification recognition model, and an N-th-level classification recognition model, and the picture set to be classified includes M pictures.
  • the picture set includes at least two pictures, and the picture set is input into a pre-trained current-level classification recognition model to obtain a classification score for each picture.
  • When the classification score of a picture satisfies the preset condition, its classification recognition result is determined according to its classification score; when the classification score of a picture does not satisfy the preset condition, the picture continues to be input into the pre-trained next-level classification recognition model until the classification recognition result of each picture is obtained. Each level of classification recognition model is generated based on neural network training, and using multi-level classification recognition models to classify pictures improves the accuracy and efficiency of picture classification.
  • The classification score of a picture satisfying the preset condition includes: the classification probability of the picture is greater than or equal to a probability threshold. The classification score of a picture not satisfying the preset condition includes: the classification probability of the picture is less than the probability threshold.
  • Inputting the picture set into the pre-trained current-level classification recognition model yields the classification score of each picture; a classification score can be understood as a vector.
  • The classification score of the picture set is composed of the classification scores of the multiple pictures.
  • A classifier is used to calculate the classification probability of a picture according to the classification score of the picture.
  • Determining the classification recognition result of each picture according to the classification score of each picture may include: when the classification probability of a picture is greater than or equal to the probability threshold, determining the classification recognition result of the picture according to its classification score.
  • When the classification score of a picture does not satisfy the preset condition, inputting the picture into the pre-trained next-level classification recognition model until the classification recognition result of each picture is obtained may include: when the classification probability of a picture is less than the probability threshold, inputting the picture into the pre-trained next-level classification recognition model until the classification recognition result of each picture is obtained.
  • the classifier can be Softmax or Logistic.
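  • The embodiment leaves the classifier open (Softmax or Logistic) and does not give formulas. As a minimal, non-authoritative sketch, the classification probability that is compared against the probability threshold could be derived from a two-class score vector as follows; the function names are illustrative and not taken from the application.

```python
import math

def softmax(scores):
    """Turn a vector of classification scores into class probabilities."""
    m = max(scores)  # subtract the max score for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def logistic(score):
    """Two-class alternative: squash a single score into a probability."""
    return 1.0 / (1.0 + math.exp(-score))

# A binary classification score vector: [score for "no", score for "yes"].
probs = softmax([0.5, 2.5])
classification_probability = max(probs)  # compared against the threshold
```

Whichever classifier is used, the resulting classification probability is the quantity the preset condition checks against the probability threshold.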
  • FIG. 2 is a flowchart of another image classification method provided by an embodiment of the present application. As shown in FIG. 2, the method includes the following steps:
  • Step 210 Acquire a picture collection to be classified, the picture collection includes at least two pictures.
  • Step 220 Input the picture set into the pre-trained current-level classification and recognition model to obtain the classification score of each picture.
  • Step 230 Obtain the classification probability of each picture according to the classification score of each picture.
  • Step 240 Determine whether the classification probability of each picture is greater than or equal to the probability threshold; if the classification probability of a picture is greater than or equal to the probability threshold, perform step 250; if the classification probability of a picture is less than the probability threshold, perform step 260.
  • Step 250 Determine the classification recognition result of each picture according to the classification score of each picture.
  • Step 260 Input each picture into the pre-trained next-level classification recognition model to obtain the classification score of each picture, and return to step 240.
  • each level of classification and recognition model is generated based on neural network training.
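  • The loop of steps 220 to 260 can be sketched as an early-exit cascade. The sketch below assumes each level's model is a callable returning a per-class score list; all names and the probability threshold value are hypothetical, not taken from the application.

```python
import math

def softmax(scores):
    """Convert classification scores into class probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_with_cascade(pictures, models, prob_threshold=0.9):
    """Route each picture through successive classification recognition
    models until its classification probability meets the threshold.

    `models` is ordered from the current (simplest) level to the last
    level; the last level always produces a result.
    """
    results = {}
    remaining = list(pictures)
    for level, model in enumerate(models):
        still_unsure = []
        for pic in remaining:
            scores = model(pic)
            probs = softmax(scores)
            if max(probs) >= prob_threshold or level == len(models) - 1:
                # Preset condition met (or last level reached):
                # determine the classification recognition result.
                results[pic] = scores.index(max(scores))
            else:
                still_unsure.append(pic)  # escalate to the next level
        remaining = still_unsure
        if not remaining:
            break
    return results
```

Because confidently classified pictures exit early, each deeper (and costlier) model sees fewer pictures, which is the efficiency gain the embodiment describes.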
  • the picture set includes at least two pictures, and the picture set is input into a pre-trained current-level classification recognition model to obtain a classification score for each picture.
  • The classification probability of each picture is obtained according to its classification score. When the classification probability of a picture is greater than or equal to the probability threshold, the classification recognition result of the picture is determined according to its classification score; when the classification probability of a picture is less than the probability threshold, the picture continues to be input into the pre-trained next-level classification recognition model until the classification recognition result of each picture is obtained.
  • Each level of classification recognition model is generated based on neural network training, and using multi-level classification recognition models to classify pictures improves the accuracy and efficiency of picture classification.
  • FIG. 3 is a flowchart of a method for generating a classification and recognition model according to an embodiment of the present application. This embodiment can be applied to improve the accuracy and efficiency of image classification.
  • the method can be executed by a device for generating a classification and recognition model
  • the device can be implemented in software and/or hardware.
  • the device can be configured in a device, such as a computer or mobile terminal. As shown in Figure 3, the method includes the following steps:
  • Step 310 Obtain training samples, which include training pictures and original classification labels of the training pictures.
  • a training sample is obtained, and the training sample may include a training picture and an original classification label of the training picture, and the number of training pictures is at least two.
  • the classification label is used to characterize the classification of the training picture.
  • Step 320 Input the training picture and the original classification label of the training picture into the neural network model to obtain the classification score for the training picture of each level of neural network layer in the multi-level neural network layer, and to obtain the classification score and classification label for the training picture of each level of fully connected layer in the multi-level fully connected layer. The neural network model includes N levels of neural network layers and N-1 levels of fully connected layers; the level-i fully connected layer is located after the level-(i+1) neural network layer, N ≥ 3, i ∈ [1, N-1].
  • The neural network model may include N levels of neural network layers and N-1 levels of fully connected layers, where the level-i fully connected layer is located between the level-(i+1) neural network layer and the level-(i+2) neural network layer, N ≥ 3, i ∈ [1, N-1].
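  • The layout described above (N stages of neural network layers, with branch fully connected layer i placed after stage i+1) can be sketched structurally as follows. The stages and branches here are placeholder callables standing in for real layers; this is a sketch of the wiring, not the application's actual implementation.

```python
def forward_multilevel(x, stages, fc_branches):
    """Forward pass over N neural-network stages with N-1 branch fully
    connected layers, where branch i follows stage i+1 (1-based, N >= 3).

    Each stage maps features -> (new_features, classification_score);
    each branch maps features -> (classification_score, classification_label).
    """
    assert len(stages) >= 3 and len(fc_branches) == len(stages) - 1
    stage_scores, branch_outputs = [], []
    feats = x
    for i, stage in enumerate(stages):  # 0-based i is stage level i+1
        feats, score = stage(feats)
        stage_scores.append(score)
        if i >= 1:  # branch level i (1-based) sits after stage level i+1
            branch_outputs.append(fc_branches[i - 1](feats))
    return stage_scores, branch_outputs
```

With N = 3 this returns three stage scores and two branch outputs, matching the N-level / (N-1)-level structure of the embodiment.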
  • A neural network is a mathematical model, grounded in the basic principles of biological neural networks, that simulates the complex information-processing mechanism of the nervous system of the human brain; it is obtained by understanding and abstracting the structure of the human brain and its response mechanism to external stimuli, with network topology knowledge as its theoretical basis. The model relies on the complexity of the system and processes information by adjusting the weights of the interconnections between a large number of internal nodes (neurons).
  • Neural networks can include convolutional neural networks, recurrent neural networks, and deep neural networks.
  • the following uses convolutional neural networks as an example.
  • The core problem solved by convolutional neural networks is how to automatically extract and abstract features and then map the features to the task goal in order to solve practical problems.
  • A convolutional neural network generally consists of three parts: the first part is the input layer; the second part is a combination of convolution layers, excitation layers, and pooling layers (or downsampling layers); the third part consists of a fully connected multi-layer perceptron classifier (that is, fully connected layers). Convolutional neural networks have the feature of weight sharing.
  • Weight sharing refers to the convolution kernel: through the operation of a single convolution kernel, the same feature can be extracted at different positions of the image data, because the characteristics of the same target are basically the same at different positions in an image. One convolution kernel can obtain only one kind of feature, so multiple convolution kernels can be set up, each learning a different feature, to extract the features of the picture. In picture classification, the role of the convolution layers is to extract low-level features and abstract them into high-level features. Low-level features are basic features such as textures and edges; high-level features, such as the shapes of faces and objects, better represent the attributes of the sample. This process is the hierarchical nature of the convolutional neural network.
  • The fully connected layer acts as a "classifier" in the entire convolutional neural network. If operations such as the convolution, excitation, and pooling layers map the original data to a hidden feature space, the fully connected layer maps the learned "distributed feature representation" to the sample label space.
  • The fully connected layer can be realized by a convolution operation: a fully connected layer whose previous layer is also fully connected can be converted into a convolution with a 1x1 kernel, and a fully connected layer whose previous layer is a convolution layer can be converted into a global convolution with an H x W kernel, where H and W are the height and width of the convolution result of the previous layer, respectively.
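  • The equivalence described above can be checked numerically: a fully connected layer over a flattened H x W feature map computes the same value as a single global H x W convolution whose kernel is the FC weight vector reshaped to H x W. The toy single-output sketch below is illustrative only.

```python
def fc_over_map(fmap, weights, bias):
    """Fully connected layer applied to a flattened H x W feature map."""
    flat = [v for row in fmap for v in row]
    return sum(w * v for w, v in zip(weights, flat)) + bias

def global_conv(fmap, kernel, bias):
    """Global convolution: one H x W kernel covering the whole map."""
    h, w = len(fmap), len(fmap[0])
    return sum(fmap[r][c] * kernel[r][c]
               for r in range(h) for c in range(w)) + bias

# The FC weights, reshaped to H x W, form the global convolution kernel.
fmap = [[1.0, 2.0], [3.0, 4.0]]
weights = [1.0, 0.0, -1.0, 2.0]
kernel = [[1.0, 0.0], [-1.0, 2.0]]
assert fc_over_map(fmap, weights, 0.5) == global_conv(fmap, kernel, 0.5)
```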
  • The N-1 levels of fully connected layers provided in the embodiment of the present application are fully connected layers outside the N levels of neural network layers; that is, each level of neural network layer may itself include fully connected layers.
  • Those internal fully connected layers are different from the N-1 levels of fully connected layers in the neural network model.
  • Inputting the training samples into the neural network model, that is, inputting the training pictures and the original classification labels of the training pictures, yields the classification scores of the multi-level neural network layers for the training pictures and the classification scores and classification labels of the multi-level fully connected layers for the training pictures.
  • The classification score and classification label of a fully connected layer for the training picture are used to calculate the loss function of the corresponding neural network layer.
  • The classification score of a neural network layer for the training picture is used to calculate the classification score and classification label of the following fully connected layer.
  • Step 330 Obtain the first-level loss function of the first-level neural network layer according to the classification score of the first-level neural network layer for the training image and the original classification label of the training image.
• Step 340 According to the classification score and classification label of the P-1 level fully connected layer for the training picture, obtain the P level loss function of the P level neural network layer, P∈[2, N].
• Step 350 Determine the loss function of the neural network model according to the multi-level loss functions, adjust the network parameters of the multi-level neural network layers and the network parameters of the multi-level fully connected layers according to the loss function of the neural network model, and recalculate the loss function of the neural network model based on the adjusted network parameters until the loss function of the neural network model reaches the preset function value; then use each level of the neural network layer as the classification and recognition model of the corresponding level.
• The loss function is a function that maps an event or the values of one or more variables to a real number that intuitively represents an associated "cost"; that is, the loss function maps events of one or more variables to real numbers associated with a cost. The loss function can be used to measure model performance through the inconsistency between the actual value and the predicted value: model performance improves as the value of the loss function decreases.
• The predicted values here are the classification scores of the first-level neural network layer for the training picture and the classification scores of the multi-level fully connected layers for the training picture; the actual value is the original classification label of the training picture.
  • the loss function can be a cross-entropy loss function, a 0-1 loss function, a square loss function, an absolute loss function, a log loss function, etc., which can be set according to the actual situation and is not limited herein.
• The training process of the neural network model is to calculate the loss function of the neural network model through forward propagation, calculate the partial derivatives of the loss function with respect to the network parameters, adjust the network parameters of the multi-level neural network layers and the multi-level fully connected layers accordingly, and recalculate the loss function of the neural network model based on the adjusted network parameters until the loss function of the neural network model reaches the preset function value.
• When the loss function value of the neural network model reaches the preset function value, the neural network model has been trained, and the network parameters of the multi-level neural network layers and the multi-level fully connected layers are determined.
• The neural network layer of each level is used as the classification and recognition model of the corresponding level; that is, the first-level neural network layer is used as the first-level classification and recognition model, and the P-level neural network layer is used as the P-level classification and recognition model, P∈[2, N].
  • the loss function of the neural network model described in the embodiment of the present application is obtained by weighted summation of the loss functions of the N-level neural network layer.
  • the first-level loss function of the first-level neural network layer is calculated according to the classification score of the first-level neural network layer for the training image and the original classification label of the training image.
• The P-level loss function is calculated based on the classification score and classification label of the P-1 level fully connected layer for the training picture, P∈[2, N].
• Each of the multi-level fully connected layers has a classification label for the training picture, and the classification label of the training picture is updated each time it passes through a level of fully connected layer. The update is explained below.
• After the update, the classification label of the P-level fully connected layer for the training picture may be the same as the classification label of the superior fully connected layers for the training picture, or may be different from it. The superior levels mentioned here are all levels before the P-th level, so the update mentioned here refers to the execution of the update operation: the result of the update operation may be that the classification label of the P-level fully connected layer for the training picture differs from that of the superior fully connected layers, or that the classification label of the training picture has not actually changed (that is, the classification label of the P-level fully connected layer for the training picture is the same as that of the superior fully connected layers).
• Because the network parameters of the multi-level neural network layers and the multi-level fully connected layers are adjusted based on the loss function of the neural network model, the structural complexity of the resulting multi-level neural network layers differs, and correspondingly the structural complexity of the multi-level classification and recognition models differs. The multi-level classification and recognition models can therefore classify pictures of corresponding difficulty levels: a simple picture can obtain a satisfactory classification result from a classification and recognition model with a simple structure, while a difficult picture needs a more complex classification and recognition model to obtain a satisfactory classification result. In other words, each level of the classification and recognition model processes only the training pictures of the corresponding difficulty, instead of every level processing all the training pictures, which greatly improves classification efficiency.
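The idea that simple pictures stop at a simple model while difficult pictures are passed to deeper levels can be sketched as an early-exit cascade at inference time. This is illustrative only: `cascade_classify`, the toy stand-in models, and the thresholds are assumptions, not the patent's concrete structure.

```python
import numpy as np

def softmax(s):
    # Map scores into (0, 1) so they can be read as probabilities.
    e = np.exp(s - s.max())
    return e / e.sum()

def cascade_classify(picture, models, thresholds):
    """Try the simplest classification and recognition model first and only
    pass the picture to a more complex level when the current level is not
    confident enough. `models` are callables returning class scores
    (assumption: each level's model can be called independently)."""
    for level, (model, t) in enumerate(zip(models, thresholds), start=1):
        probs = softmax(model(picture))
        if probs.max() >= t or level == len(models):
            return level, int(probs.argmax())

# Toy stand-ins: level 1 is ambiguous about this picture, level 2 is confident.
def level1(x): return np.array([0.6, 0.5, 0.4])
def level2(x): return np.array([5.0, 0.1, 0.2])

exit_level, label = cascade_classify(None, [level1, level2], [0.9, 0.9])
```

Here the picture falls through level 1 (its top probability is well below 0.9) and exits at level 2 with class 0.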
• In addition, the N levels of neural network layers and the N-1 fully connected layers are generated by collaborative training rather than trained separately, so the results of the multi-level neural network layers and the multi-level fully connected layers affect each other. The performance of the multi-level neural network layers trained in this way is better than that of a neural network layer trained in isolation; and since each level of the neural network layer serves as the classification and recognition model of the corresponding level, the performance of the multi-level classification and recognition models trained in this way is likewise better than that of a classification and recognition model trained on a single neural network layer.
• When training the N levels of neural network layers, they can be initialized by loading a pre-trained model. The pre-trained model mentioned here is a model that has already been trained; both the pre-trained model and the N levels of neural network layers to be trained are used to classify similar training samples.
• In the technical solution of this embodiment, the training samples include the training pictures and the original classification labels of the training pictures, and the training pictures and their original classification labels are input into the neural network model to obtain the classification scores of the multi-level neural network layers. The neural network model includes N levels of neural network layers and N-1 fully connected layers, where the i-th fully connected layer is located after the (i+1)-th neural network layer, N≥3, i∈[1, N-1]. The first-level loss function of the first-level neural network layer is obtained, and the P-level loss function of the P-level neural network layer is obtained according to the classification score and classification label of the P-1 level fully connected layer for the training picture, P∈[2, N]. The loss function of the neural network model is determined according to the multi-level loss functions, the network parameters of the multi-level neural network layers and the multi-level fully connected layers are adjusted according to it, and the loss function of the neural network model is recalculated based on the adjusted network parameters until it reaches the preset function value. The multi-level neural network layers and fully connected layers are thus trained collaboratively to obtain multi-level classification and recognition models, which improves the accuracy and efficiency of picture classification.
• In an embodiment, the classification scores of the multi-level fully connected layers for the training pictures can be generated as follows: the classification score of the first-level fully connected layer for the training picture is obtained according to the classification score of the first-level neural network layer for the training picture and the classification score of the second-level neural network layer for the training picture; the classification score of the P-level fully connected layer for the training picture is obtained according to the classification score of the P-1 level fully connected layer for the training picture and the classification score of the P+1 level neural network layer for the training picture, P∈[2, N].
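One way such a fully connected layer could combine the two incoming score vectors is to concatenate them and apply the layer's affine map. This is a hedged sketch: the patent does not fix the fusion form, so `fused_score`, the concatenation, and all shapes here are assumptions for illustration.

```python
import numpy as np

def fused_score(prev_scores, curr_scores, weight, bias):
    """Fuse two levels' classification scores by concatenating them and
    passing the result through the fully connected layer's affine map
    (the concatenation is an assumption, not the patent's fixed form)."""
    x = np.concatenate([prev_scores, curr_scores])
    return weight @ x + bias

rng = np.random.default_rng(1)
n_classes = 4
s1 = rng.standard_normal(n_classes)            # level-1 network scores
s2 = rng.standard_normal(n_classes)            # level-2 network scores
W1 = rng.standard_normal((n_classes, 2 * n_classes))
b1 = np.zeros(n_classes)
fc1 = fused_score(s1, s2, W1, b1)              # level-1 fully connected scores
```

The same call with `fc1` and the third level's scores would then yield the second fully connected layer's scores, and so on down the cascade.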
• The classification labels of the multi-level fully connected layers for the training pictures can be generated as follows: the original classification label of the training picture is updated according to the classification score of the first-level neural network layer for the training picture to obtain the classification label of the first-level fully connected layer for the training picture; the classification label of the P-1 level fully connected layer for the training picture is updated to obtain the classification label of the P-level fully connected layer for the training picture, P∈[2, N].
• Each of the multi-level fully connected layers has a classification label for the training picture, and the classification label of the training picture is updated once after each level of fully connected layer. The classification labels can be generated by updating the original classification label of the training picture according to the classification score of the first-level neural network layer for the training picture, obtaining the classification label of the first-level fully connected layer for the training picture, and then updating the classification label of the P-1 level fully connected layer for the training picture according to the classification score of the P-1 level fully connected layer for the training picture.
• The update mentioned here refers to performing the update operation; whether the classification label is actually changed can be determined by whether the classification score of the network layer for the training picture satisfies a preset condition. The preset condition described here may be: the classification probability of the network layer for the training picture, obtained from the classification score of the network layer for the training picture, is greater than or equal to a probability threshold.
• In an embodiment, updating the original classification label of the training picture according to the classification score of the first-level neural network layer for the training picture to obtain the classification label of the first-level fully connected layer for the training picture may include: obtaining the classification probability of the first-level neural network layer for the training picture according to its classification score; if the classification probability is greater than or equal to the first probability threshold, modifying the original classification label of the training picture to the preset classification label and using the preset classification label as the classification label of the first-level fully connected layer for the training picture; if the classification probability of the first-level neural network layer for the training picture is less than the first probability threshold, keeping the original classification label of the training picture unchanged and using it as the classification label of the first-level fully connected layer for the training picture.
• A classifier may be used to convert the classification score of the training picture into the classification probability of the training picture. The classifier described here may be a Softmax function, which maps the classification scores into the interval (0, 1) so that they can be understood as probabilities; that is, the classification score of the training picture can be converted into the classification probability of the training picture through Softmax. The classifier can also be a Logistic function; which classifier to choose can be determined according to the actual situation and is not limited here.
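For reference, a minimal Softmax that maps classification scores into (0, 1):

```python
import numpy as np

def softmax(scores):
    # Subtract the maximum for numerical stability; the outputs lie in
    # (0, 1) and sum to 1, so they can be read as classification probabilities.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
```

The highest score receives the highest probability, and the probabilities sum to 1.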
• According to the classification score of the first-level neural network layer for the training picture, the classification probability of the first-level neural network layer for the training picture is obtained. If this classification probability is greater than or equal to the first probability threshold, the original classification label of the training picture can be modified to the preset classification label, which serves as the classification label of the first-level fully connected layer for the training picture; if it is less than the first probability threshold, the original classification label of the training picture can be kept unchanged and serves as the classification label of the first-level fully connected layer for the training picture. In this embodiment, the first probability threshold is the standard for deciding whether to modify the original classification label of the training picture, and its value can be set according to the actual situation, which is not limited here. The object here is each training picture: the relationship between each training picture's classification probability and the first probability threshold is determined, and the classification label of that training picture is modified or retained according to the result.
• The reason for modifying the classification label of the training picture to the preset classification label is that if the classification probability of the first-level neural network layer for the training picture is greater than or equal to the first probability threshold, the classification result of the first-level neural network layer for the training picture already meets the requirements; modifying the label to the preset classification label ensures that, when the network parameters are adjusted according to the loss function, the training picture does not participate in the adjustment of the network parameters of the lower-level neural network layers and fully connected layers. Conversely, the reason for keeping the original classification label of the training picture unchanged is that if the classification probability of the first-level neural network layer for the training picture is less than the first probability threshold, the classification result of the first-level neural network layer for the training picture does not meet the requirements; the original classification label of the training picture remains unchanged so that, when the network parameters are adjusted according to the loss function, the training picture participates in the adjustment of the network parameters of the lower-level neural network layers and fully connected layers.
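The first-level label update can be sketched as follows. The sentinel `PRESET_LABEL`, the use of the maximum Softmax probability as "the classification probability of the training picture", and the threshold value are all assumptions made for illustration.

```python
import numpy as np

PRESET_LABEL = -1  # hypothetical sentinel meaning "already classified well enough"

def update_label(score, label, threshold):
    """Return the classification label seen by the next fully connected layer.
    If the picture's top classification probability reaches the threshold,
    the picture is considered handled and its label becomes the preset label;
    otherwise the label is kept so the picture keeps training lower levels."""
    e = np.exp(score - score.max())
    prob = e / e.sum()
    return PRESET_LABEL if prob.max() >= threshold else label

easy = np.array([8.0, 0.1, 0.2])   # confidently classified picture
hard = np.array([1.0, 0.9, 1.1])   # ambiguous picture
```

Calling `update_label(easy, 0, 0.9)` yields the preset label, while `update_label(hard, 2, 0.9)` keeps the original label 2.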
• In an embodiment, updating the classification label of the P-1 level fully connected layer for the training picture according to the classification score of the P-1 level fully connected layer for the training picture, to obtain the classification label of the P-level fully connected layer for the training picture, P∈[2, N], may include: obtaining the classification probability of the P-1 level fully connected layer for the training picture according to its classification score; if this classification probability is greater than or equal to the P-th probability threshold, modifying the classification label of the P-1 level fully connected layer for the training picture to the preset classification label and using the preset classification label as the classification label of the P-level fully connected layer for the training picture; if it is less than the P-th probability threshold, keeping the classification label of the P-1 level fully connected layer for the training picture unchanged and using it as the classification label of the P-level fully connected layer for the training picture.
• Here, too, a classifier can be used to convert the classification score of the training picture into the classification probability of the training picture. The classifier described here can be a Softmax function or a Logistic function; which classifier to choose can be determined according to the actual situation and is not limited here.
• According to the classification score of the P-1 level fully connected layer for the training picture, the classification probability of the P-1 level fully connected layer for the training picture is obtained, P∈[2, N]. If this classification probability is greater than or equal to the P-th probability threshold, the classification label of the P-1 level fully connected layer for the training picture can be modified to the preset classification label, which serves as the classification label of the P-th fully connected layer for the training picture; if it is less than the P-th probability threshold, the classification label of the P-1 level fully connected layer for the training picture can be kept unchanged and serves as the classification label of the P-level fully connected layer for the training picture. The P-th probability threshold is the standard for deciding whether to modify the classification label of the P-1 level fully connected layer for the training picture, and its value can be set according to the actual situation, which is not limited here. The object here is still each training picture: the relationship between each training picture's classification probability and the P-th probability threshold is determined, and the classification label of that training picture is modified or retained according to the result.
• The reason for modifying the classification label of the P-1 level fully connected layer for the training picture to the preset classification label is that if the classification probability of the P-1 level fully connected layer for the training picture is greater than or equal to the P-th probability threshold, the classification result of the P-level neural network layer for the training picture already meets the requirements; modifying the label to the preset classification label ensures that, when the network parameters are adjusted according to the loss function, the training picture corresponding to the preset classification label does not participate in the adjustment of the network parameters of the lower-level neural network layers and fully connected layers. Conversely, the reason for keeping the classification label of the P-1 level fully connected layer for the training picture unchanged is that if the classification probability of the P-1 level fully connected layer for the training picture is less than the P-th probability threshold, the classification result of the P-level neural network layer for the training picture does not meet the requirements; the classification label of the training picture remains unchanged so that, in subsequent adjustments of the network parameters, the training picture participates in the adjustment of the network parameters of the lower-level neural network layers and fully connected layers.
• The classification label of the training picture is updated each time it passes through a level of fully connected layer, so that simple training pictures do not participate in the adjustment of the network parameters of the lower-level neural network layers and fully connected layers, and the structural complexity of the multi-level neural network layers obtained by training therefore differs.
• In an embodiment, determining the loss function of the neural network model according to the multi-level loss functions, adjusting the network parameters of the multi-level neural network layers and the multi-level fully connected layers according to the loss function of the neural network model, recalculating the loss function of the neural network model based on the adjusted network parameters until it reaches the preset function value, and using each level of the neural network layer as the classification and recognition model of the corresponding level, may include: determining the loss function of the neural network model according to the multi-level loss functions; calculating the partial derivatives of the loss function with respect to the network parameters; adjusting the network parameters of the multi-level neural network layers and the multi-level fully connected layers according to the partial derivatives; recalculating the loss function based on the adjusted network parameters until it reaches the preset function value; and using each level of the neural network layer as the classification and recognition model of the corresponding level.
• Determining the loss function of the neural network model according to the multi-level loss functions can be understood as follows: the multi-level loss functions are weighted and summed to obtain the loss function of the neural network model. Each level's loss function can be assigned a corresponding proportional coefficient; each loss function is multiplied by its coefficient to obtain a weighted value, and the weighted values are added to obtain the loss function of the neural network model. If the loss function of the i-th level neural network layer is denoted Loss(f_i) and its proportional coefficient is T_i, where i∈[1, N], the loss function of the neural network model can be expressed as Loss = Σ_{i=1}^{N} T_i · Loss(f_i). Based on the above, the proportion of each level's loss function Loss(f_i) in the loss function of the neural network model can be adjusted by adjusting its proportional coefficient T_i.
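A minimal sketch of the weighted summation Loss = Σ T_i · Loss(f_i); the per-level loss values and coefficients below are arbitrary numbers chosen for the example.

```python
import numpy as np

def model_loss(level_losses, coefficients):
    """Weighted sum of the per-level losses: Loss = sum_i T_i * Loss(f_i)."""
    return float(np.dot(coefficients, level_losses))

losses = [0.8, 0.5, 0.3]   # Loss(f_1), Loss(f_2), Loss(f_3) for a 3-level model
T = [1.0, 0.5, 0.25]       # proportional coefficients T_i
total = model_loss(losses, T)
```

Raising one T_i increases that level's share of the model loss, which is exactly the tuning knob the paragraph above describes.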
  • the loss function can be a cross-entropy loss function, a 0-1 loss function, a square loss function, an absolute loss function, a log loss function, etc., which can be set according to the actual situation and is not limited herein.
• The network parameters described here include weights and biases. Using reverse gradient propagation, the network parameters of the multi-level neural network layers and the multi-level fully connected layers are adjusted according to the partial derivatives, and the loss function is recalculated based on the adjusted network parameters until the loss function reaches the preset function value. The preset function value described here can be the minimum loss function value; when the loss function reaches it, the neural network model has been trained, the multi-level neural network layers are determined by their network parameters, and each level of the neural network layer serves as the classification and recognition model of the corresponding level. If the classification label of a training picture is the preset classification label, the target partial derivative is set to zero, where the target partial derivative is the partial derivative, with respect to the network parameters of the lower-level fully connected layers, of the loss function obtained after substituting the classification score of that training picture into the loss function of the neural network model; that is, the training picture corresponding to the preset classification label does not participate in the adjustment of the network parameters of the lower-level neural network layers and fully connected layers.
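Zeroing the target partial derivative can be realized by simply excluding preset-label pictures from a level's loss, so no gradient flows from them into that level's parameters. A hedged cross-entropy sketch (`PRESET_LABEL` is a made-up sentinel; real frameworks offer similar mechanisms, e.g. an ignored label index):

```python
import numpy as np

PRESET_LABEL = -1  # hypothetical sentinel label

def masked_level_loss(scores, labels):
    """Mean cross-entropy over a batch, skipping pictures whose label is the
    preset label; skipped pictures contribute zero loss, so their partial
    derivatives with respect to this level's parameters are zero."""
    total, count = 0.0, 0
    for s, y in zip(scores, labels):
        if y == PRESET_LABEL:
            continue  # picture already handled by an upper level
        e = np.exp(s - s.max())
        p = e / e.sum()
        total += -np.log(p[y])
        count += 1
    return total / max(count, 1)

scores = np.array([[2.0, 0.5], [0.1, 3.0]])
labels = [PRESET_LABEL, 1]   # the first picture exits early
loss = masked_level_loss(scores, labels)
```

Only the second picture contributes, so the result equals the cross-entropy of that single picture.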
• FIG. 4 is a flowchart of another method for generating a classification and recognition model according to an embodiment of the present application. This embodiment can be applied to improve the accuracy and efficiency of picture classification. The method can be performed by a device for generating a classification and recognition model, which can be implemented in software and/or hardware and configured in a device such as a computer or mobile terminal. As shown in FIG. 4, the method includes the following steps:
  • Step 410 Obtain a training sample.
  • the training sample includes the training picture and the original classification label of the training picture.
  • Step 420 Input the training picture and the original classification label of the training picture into the neural network model to obtain a classification score for the training picture of each neural network layer in the multi-level neural network layer.
• Step 430 According to the classification score of the first-level neural network layer for the training picture and the classification score of the second-level neural network layer for the training picture, obtain the classification score of the first-level fully connected layer for the training picture; according to the classification score of the P-1 level fully connected layer for the training picture and the classification score of the P+1 level neural network layer for the training picture, obtain the classification score of the P-level fully connected layer for the training picture, P∈[2, N].
• Step 440 Update the original classification label of the training picture according to the classification score of the first-level neural network layer for the training picture to obtain the classification label of the first-level fully connected layer for the training picture; update the classification label of the P-1 level fully connected layer for the training picture according to its classification score for the training picture to obtain the classification label of the P-level fully connected layer for the training picture.
• Step 450 Obtain the first-level loss function of the first-level neural network layer according to the classification score of the first-level neural network layer for the training picture and the original classification label of the training picture; obtain the P-level loss function of the P-level neural network layer according to the classification score and classification label of the P-1 level fully connected layer for the training picture.
  • Step 460 Determine the loss function of the neural network model according to the multi-level loss function.
  • Step 470 Calculate the partial derivative of the loss function of the neural network model on the network parameters of the multi-level neural network layer and the partial derivative of the loss function of the neural network model on the network parameters of the multi-level fully connected layer.
• If the classification label of a training picture is the preset classification label, the target partial derivative is set to zero, wherein the target partial derivative is the partial derivative, with respect to the network parameters of the lower-level fully connected layers, of the loss function obtained after substituting the classification score of that training picture into the loss function of the neural network model.
• Step 480 Adjust the network parameters of the multi-level neural network layers and the network parameters of the multi-level fully connected layers according to the partial derivatives, and recalculate the loss function of the neural network model based on the adjusted network parameters until the loss function of the neural network model reaches the preset function value; then use each level of the neural network layer as the classification and recognition model of the corresponding level.
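Steps 420 to 460 for a single training picture can be condensed into the following sketch. How each level computes its scores and how the fully connected layers fuse them is abstracted away; the per-level scores, thresholds, proportional coefficients, and the `PRESET` sentinel are all assumptions made for illustration.

```python
import numpy as np

PRESET = -1  # hypothetical sentinel label

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def train_step(scores, label, thresholds, coeffs):
    """One pass over a single training picture: per-level cross-entropy
    losses with label updates between levels, combined by weighted sum.
    `scores[i]` stands in for the i-th level's class scores."""
    losses, y = [], label
    for i, s in enumerate(scores):
        if y == PRESET:
            losses.append(0.0)        # zeroed target partial derivative
        else:
            p = softmax(s)
            losses.append(-np.log(p[y]))
            if p.max() >= thresholds[i]:
                y = PRESET            # picture exits the deeper levels
    return float(np.dot(coeffs, losses))

scores = [np.array([4.0, 0.1]), np.array([0.2, 0.3])]
step_loss = train_step(scores, 0, thresholds=[0.9, 0.9], coeffs=[1.0, 0.5])
```

Here the picture is confidently classified at the first level, so it contributes a small level-1 loss and nothing to level 2.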
• In an embodiment, updating the original classification label of the training picture according to the classification score of the first-level neural network layer for the training picture to obtain the classification label of the first-level fully connected layer for the training picture may include: obtaining the classification probability of the first-level neural network layer for the training picture according to its classification score; if the classification probability of the first-level neural network layer for the training picture is greater than or equal to the first probability threshold, modifying the original classification label of the training picture to the preset classification label and using the preset classification label as the classification label of the first-level fully connected layer for the training picture; if the classification probability is less than the first probability threshold, keeping the original classification label of the training picture unchanged and using it as the classification label of the first-level fully connected layer for the training picture.
• In an embodiment, updating the classification label of the P-1 level fully connected layer for the training picture to obtain the classification label of the P-level fully connected layer for the training picture, P∈[2, N], may include: obtaining the classification probability of the P-1 level fully connected layer for the training picture according to its classification score, P∈[2, N]; if this classification probability is greater than or equal to the P-th probability threshold, modifying the classification label of the P-1 level fully connected layer for the training picture to the preset classification label and using the preset classification label as the classification label of the P-level fully connected layer for the training picture; if it is less than the P-th probability threshold, keeping the classification label of the P-1 level fully connected layer for the training picture unchanged and using it as the classification label of the P-level fully connected layer for the training picture.
  • the following description uses a neural network model including three levels of neural network layers and two levels of fully connected layers as an example.
  • the neural network model includes three levels of neural network layers, namely a first-level, a second-level, and a third-level neural network layer, and two levels of fully connected layers, namely a first-level and a second-level fully connected layer; the first-level fully connected layer is located after the second-level neural network layer, and the second-level fully connected layer is located after the third-level neural network layer.
  • the training samples include the training pictures and the original classification labels of the training pictures.
  • the training samples are input into the neural network model to obtain the classification scores for the training pictures of each neural network layer in the multi-level neural network layer.
  • according to the classification score of the first-level neural network layer for the training picture and the classification score of the second-level neural network layer for the training picture, the classification score of the first-level fully connected layer for the training picture is obtained; according to the classification score of the first-level fully connected layer for the training picture and the classification score of the third-level neural network layer for the training picture, the classification score of the second-level fully connected layer for the training picture is obtained.
  • according to the classification score of the first-level fully connected layer for the training picture, the classification label of the first-level fully connected layer for the training picture is updated to obtain the classification label of the second-level fully connected layer for the training picture.
  • according to the classification score of the first-level neural network layer for the training picture and the original classification label of the training picture, the first-level loss function of the first-level neural network layer is obtained; according to the classification score and classification label of the first-level fully connected layer for the training picture, the second-level loss function of the second-level neural network layer is obtained.
  • the loss function of the neural network model is determined according to the first-level loss function and the second-level loss function.
  • the target partial derivative is set to zero, wherein the target partial derivative is the partial derivative, with respect to the network parameters of the fully connected layer, of the loss function obtained after substituting the classification score of the one training picture into the loss function of the neural network model.
  • each level of neural network layer is used as the classification recognition model of the corresponding level, that is, the first-level neural network layer as the first-level classification recognition model, the second-level neural network layer as the second-level classification recognition model, and the third-level neural network layer as the third-level classification recognition model.
  • in the technical solution of this embodiment, the training sample includes the training picture and its original classification label, which are input into the neural network model to obtain the classification score of each level of the multi-level neural network layers for the training picture, as well as the classification score and classification label of each level of the multi-level fully connected layers for the training picture.
  • the neural network model includes an N-level neural network layer and an N-1 level fully connected layer.
  • the i-th level fully connected layer is located after the (i+1)-th level neural network layer, N≥3, i∈[1, N-1].
  • according to the classification score of the first-level neural network layer for the training picture and the original classification label of the training picture, the first-level loss function of the first-level neural network layer is obtained; according to the classification score and classification label of the (P-1)-th level fully connected layer for the training picture, the P-th level loss function of the P-th level neural network layer is obtained, P∈[2, N]; the loss function of the neural network model is determined according to the multi-level loss functions, the network parameters of the multi-level neural network layers and the multi-level fully connected layers are adjusted according to the loss function of the neural network model, and the loss function is recalculated based on the adjusted network parameters until it reaches the preset function value, whereupon each level of neural network layer serves as the classification recognition model of the corresponding level. The multi-level classification recognition model is thus obtained through collaborative training, which improves the accuracy and efficiency of picture classification by the classification recognition models.
  • FIG. 6 is a schematic structural diagram of a picture classification device provided by an embodiment of the present application. This embodiment can be applied to improve the accuracy and efficiency of picture classification.
  • the device can be implemented in software and/or hardware. It can be configured in devices, such as computers or mobile terminals. As shown in Figure 6, the device includes:
  • the picture collection obtaining module 510 is configured to obtain a picture collection to be classified, and the picture collection includes at least two pictures.
  • the classification result generation module 520 is set to input the picture set into the current-level classification recognition model trained in advance to obtain the classification score of each picture.
  • the classification recognition result generating module 530 is set to determine the classification recognition result of each picture according to the classification score of each picture when the classification score of each picture meets a preset condition; When the classification score of the pictures does not satisfy the preset condition, input each picture into the pre-trained next-level classification recognition model until the classification recognition result of each picture is obtained; wherein, each level of classification The recognition model is generated based on neural network training.
  • in the technical solution of this embodiment, the picture set includes at least two pictures and is input into a pre-trained current-level classification recognition model to obtain a classification score for each picture. When the classification score of a picture satisfies a preset condition, its classification recognition result is determined from the score; when it does not, the picture is input into the pre-trained next-level classification recognition model until its classification recognition result is obtained. Each level of classification recognition model is generated based on neural network training, and classifying pictures with a multi-level classification recognition model improves the accuracy and efficiency of picture classification.
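The cascade inference performed by the modules above can be sketched as follows. This is an illustrative sketch only: the scoring functions stand in for trained models, and the score-magnitude confidence and function name `classify_cascade` are assumptions, not details from the patent.

```python
import math

def classify_cascade(pictures, models, threshold):
    """Hierarchical inference: each picture passes through successive models
    until its classification probability meets the threshold; pictures that
    never meet it are decided by the last-level model's score."""
    results = {}
    remaining = list(pictures)
    for level, model in enumerate(models):
        still_hard = []
        for picture in remaining:
            score = model(picture)
            # Confidence derived from the score magnitude (illustrative choice).
            prob = 1.0 / (1.0 + math.exp(-abs(score)))
            if prob >= threshold or level == len(models) - 1:
                results[picture] = 1 if score >= 0 else 0  # binary yes/no result
            else:
                still_hard.append(picture)
        remaining = still_hard
        if not remaining:
            break
    return results
```

Note how the set of "remaining" (hard) pictures shrinks at each level, which is the source of the efficiency gain the description claims.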
  • FIG. 7 is a schematic structural diagram of a device for generating a classification and recognition model according to an embodiment of the present application. This embodiment can be applied to improve the accuracy and efficiency of image classification.
  • the device can be implemented in software and/or hardware.
  • the device can be configured in devices, such as computers or mobile terminals. As shown in Figure 7, the device includes:
  • the training sample acquisition module 610 is configured to acquire a training sample, and the training sample includes the training picture and the original classification label of the training picture.
  • the classification score and classification label generation module 620 is set to input the training picture and the original classification label of the training picture into the neural network model, to obtain the classification score of each level of the multi-level neural network layers for the training picture, and the classification score and classification label of each level of the multi-level fully connected layers for the training picture.
  • the neural network model includes an N-level neural network layer and an N-1 level fully connected layer.
  • the i-th level fully connected layer is located after the (i+1)-th level neural network layer, N≥3, i∈[1, N-1].
  • the first-level loss function generation module 630 is set to obtain the first-level loss function of the first-level neural network layer according to the classification score of the first-level neural network layer for the training image and the original classification label of the training image.
  • the P-level loss function generation module 640 is set to obtain the P-th level loss function of the P-th level neural network layer according to the classification score and classification label of the (P-1)-th level fully connected layer for the training picture, P∈[2, N].
  • the classification recognition model generation module 650 is set to determine the loss function of the neural network model according to the multi-level loss functions, adjust the network parameters of the multi-level neural network layers and of the multi-level fully connected layers according to the loss function of the neural network model, and recalculate the loss function based on the adjusted network parameters until it reaches the preset function value, whereupon each level of neural network layer is used as the classification recognition model of the corresponding level.
  • in the technical solution of this embodiment, the training sample includes the training picture and its original classification label, which are input into the neural network model to obtain the classification score of each level of the multi-level neural network layers for the training picture and the classification score and classification label of each level of the multi-level fully connected layers for the training picture; the neural network model includes N levels of neural network layers and N-1 levels of fully connected layers, the i-th level fully connected layer being located after the (i+1)-th level neural network layer, N≥3, i∈[1, N-1]; the first-level loss function of the first-level neural network layer is obtained according to the classification score of the first-level neural network layer for the training picture and the original classification label; the P-th level loss function of the P-th level neural network layer is obtained according to the classification score and classification label of the (P-1)-th level fully connected layer for the training picture, P∈[2, N]; the loss function of the neural network model is determined according to the multi-level loss functions, the network parameters of the multi-level neural network layers and fully connected layers are adjusted according to it, and the loss function is recalculated based on the adjusted parameters until it reaches the preset function value.
  • a multi-level classification and recognition model is obtained through collaborative training, which improves the accuracy and efficiency of the classification and recognition model for picture classification.
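The training control flow described above (adjust parameters, recompute the model loss, stop at the preset function value) can be sketched generically. Everything here is a placeholder: `loss_fn` and `update_fn` stand in for the real model loss and backward-gradient step, and the iteration cap is an added safety assumption.

```python
def train_until(loss_fn, params, update_fn, preset_value, max_iters=1000):
    """Control flow of the training described above: adjust the parameters,
    recompute the model loss, and stop once the loss reaches the preset
    function value (or a safety iteration cap is hit)."""
    loss = loss_fn(params)
    iters = 0
    while loss > preset_value and iters < max_iters:
        params = update_fn(params)   # stands in for a backward-gradient update
        loss = loss_fn(params)
        iters += 1
    return params, loss
```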
  • FIG. 8 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • FIG. 8 shows a block diagram of an exemplary device 712 suitable for implementing embodiments of the present application.
  • the device 712 shown in FIG. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
  • the device 712 is represented in the form of a general-purpose computing device.
  • the components of device 712 may include, but are not limited to, one or more processors 716, system memory 728, and bus 718 connected to different system components (including system memory 728 and processor 716).
  • the processor 716 executes a variety of functional applications and data processing by running programs stored in the system memory 728, for example, to implement a picture classification method provided by an embodiment of the present application, the method including: obtaining a picture set to be classified, the picture set including at least two pictures, and inputting the picture set into the pre-trained current-level classification recognition model to obtain the classification score of each picture.
  • the classification recognition result of each picture is determined according to its classification score when the classification score satisfies a preset condition; when the classification score of a picture does not satisfy the preset condition, the picture is input into the pre-trained next-level classification recognition model until its classification recognition result is obtained; wherein each level of classification recognition model is generated based on neural network training.
  • An embodiment of the present application further provides another device, including: one or more processors; a memory configured to store one or more programs; when the one or more programs are executed by the one or more processors , So that the one or more processors implement a method for generating a classification and recognition model provided by an embodiment of the present application.
  • the method includes: obtaining a training sample, where the training sample includes a training picture and the original classification label of the training picture; and inputting the training picture and its original classification label into the neural network model to obtain the classification score of each level of the multi-level neural network layers for the training picture, and the classification score and classification label of each level of the multi-level fully connected layers for the training picture.
  • the neural network model includes an N-level neural network layer and an N-1 level fully connected layer.
  • the i-th level fully connected layer is located after the (i+1)-th level neural network layer, N≥3, i∈[1, N-1].
  • the first-level loss function of the first-level neural network layer is obtained according to the classification score of the first-level neural network layer for the training image and the original classification label of the training image.
  • the P-th level loss function of the P-th level neural network layer is obtained according to the classification score and classification label of the (P-1)-th level fully connected layer for the training picture, P∈[2, N].
  • the processor may also implement the technical solution of the picture classification method applied to the device or the technical solution of the classification recognition model generation method applied to the device, as provided in any embodiment of the present application.
  • the hardware structure and function of the device can be explained with reference to the content of the embodiment.
  • An embodiment of the present application also provides a computer-readable storage medium that stores a computer program, and when the program is executed by a processor, a picture classification method as provided in the embodiment of the present application is implemented.
  • the method includes: obtaining a picture set to be classified, the picture set including at least two pictures, and inputting the picture set into the pre-trained current-level classification recognition model to obtain the classification score of each picture.
  • the classification recognition result of each picture is determined according to its classification score when the classification score satisfies a preset condition; when the classification score of a picture does not satisfy the preset condition, the picture is input into the pre-trained next-level classification recognition model until its classification recognition result is obtained; wherein each level of classification recognition model is generated based on neural network training.
  • An embodiment of the present application also provides another computer-readable storage medium.
  • the computer-executable instructions, when executed by a computer processor, are used to execute a method for generating a classification recognition model.
  • the method includes: obtaining a training sample, where the training sample includes a training picture and the original classification label of the training picture; and inputting the training picture and its original classification label into the neural network model to obtain the classification score of each level of the multi-level neural network layers for the training picture, and the classification score and classification label of each level of the multi-level fully connected layers for the training picture.
  • the neural network model includes an N-level neural network layer and an N-1 level fully connected layer.
  • the i-th level fully connected layer is located after the (i+1)-th level neural network layer, N≥3, i∈[1, N-1].
  • the first-level loss function of the first-level neural network layer is obtained according to the classification score of the first-level neural network layer for the training image and the original classification label of the training image.
  • the P-th level loss function of the P-th level neural network layer is obtained according to the classification score and classification label of the (P-1)-th level fully connected layer for the training picture, P∈[2, N].
  • the computer-executable instructions in the computer-readable storage medium provided by the embodiments of the present application are not limited to the method operations described above, and can also execute related operations in the picture classification method and the classification recognition model generation method provided by any embodiment of the present application. For an introduction to the storage medium, refer to the content explanation in the embodiments.


Abstract

A picture classification method, a classification recognition model generation method, and corresponding apparatus, device, and medium. The picture classification method includes: obtaining a picture set to be classified, the picture set including at least two pictures (110); inputting the picture set into a pre-trained current-level classification recognition model to obtain a classification score for each picture (120); determining whether the classification score of each picture satisfies a preset condition (130); if so, determining the classification recognition result of each picture according to its classification score (140); if not, continuing to input each picture into a pre-trained next-level classification recognition model until the classification score of each picture is obtained (150); wherein each level of classification recognition model is generated based on neural network training.

Description

Picture classification method, classification recognition model generation method, apparatus, device, and medium

This application claims priority to Chinese patent application No. 201811457125.1, filed with the China Patent Office on November 30, 2018, the entire contents of which are incorporated herein by reference.

Technical Field

Embodiments of the present application relate to data processing technologies, for example, to a picture classification method, a classification recognition model generation method, and corresponding apparatus, device, and medium.

Background

With the rapid development of deep learning technology, deep neural networks are widely used in the field of picture classification.

In the related art, in order for a classification recognition model generated by training a deep neural network to achieve high classification accuracy, the depth of the deep neural network is usually increased.

The above related art has at least the following problems. First, since deep neural networks mainly rely on backward gradient propagation during training, training becomes progressively more difficult as the network depth increases. Second, since the forward inference of a deep neural network is computationally expensive, the amount of computation also grows as the depth increases, which in turn reduces classification efficiency.
Summary

Embodiments of the present application provide a picture classification method, a classification recognition model generation method, and corresponding apparatus, device, and medium, to improve the accuracy and efficiency of picture classification.

An embodiment of the present application provides a picture classification method, including:

obtaining a picture set to be classified, the picture set including at least two pictures;

inputting the picture set into a pre-trained current-level classification recognition model to obtain a classification score for each picture;

in a case where the classification score of each picture satisfies a preset condition, determining the classification recognition result of each picture according to the classification score of each picture; in a case where the classification score of each picture does not satisfy the preset condition, inputting each picture into a pre-trained next-level classification recognition model until the classification recognition result of each picture is obtained; wherein each level of classification recognition model is generated based on neural network training.

An embodiment of the present application further provides a classification recognition model generation method, including:

obtaining a training sample, the training sample including a training picture and an original classification label of the training picture;

inputting the training picture and the original classification label of the training picture into a neural network model to obtain a classification score of each level of multiple levels of neural network layers for the training picture, and a classification score and a classification label of each level of multiple levels of fully connected layers for the training picture, the neural network model including N levels of neural network layers and N-1 levels of fully connected layers, the i-th level fully connected layer being located after the (i+1)-th level neural network layer, N≥3, i∈[1, N-1];

obtaining a first-level loss function of the first-level neural network layer according to the classification score of the first-level neural network layer for the training picture and the original classification label of the training picture;

obtaining a P-th level loss function of the P-th level neural network layer according to the classification score and the classification label of the (P-1)-th level fully connected layer for the training picture, P∈[2, N];

determining a loss function of the neural network model according to the multiple levels of loss functions, adjusting network parameters of the multiple levels of neural network layers and network parameters of the multiple levels of fully connected layers according to the loss function of the neural network model, and recalculating the loss function of the neural network model based on the adjusted network parameters until the loss function of the neural network model reaches a preset function value, whereupon each level of neural network layer serves as the classification recognition model of the corresponding level.
An embodiment of the present application further provides a picture classification apparatus, including:

a picture set obtaining module, configured to obtain a picture set to be classified, the picture set including at least two pictures;

a classification result generation module, configured to input the picture set into a pre-trained current-level classification recognition model to obtain a classification score for each picture;

a classification recognition result generation module, configured to: in a case where the classification score of each picture satisfies a preset condition, determine the classification recognition result of each picture according to the classification score of each picture; in a case where the classification score of each picture does not satisfy the preset condition, input each picture into a pre-trained next-level classification recognition model until the classification recognition result of each picture is obtained; wherein each level of classification recognition model is generated based on neural network training.

An embodiment of the present application further provides an apparatus for generating classification recognition models, including:

a training sample obtaining module, configured to obtain a training sample, the training sample including a training picture and an original classification label of the training picture;

a classification score and classification label generation module, configured to input the training picture and the original classification label of the training picture into a neural network model to obtain a classification score of each level of multiple levels of neural network layers for the training picture, and a classification score and a classification label of each level of multiple levels of fully connected layers for the training picture, the neural network model including N levels of neural network layers and N-1 levels of fully connected layers, the i-th level fully connected layer being located after the (i+1)-th level neural network layer, N≥3, i∈[1, N-1];

a first-level loss function generation module, configured to obtain a first-level loss function of the first-level neural network layer according to the classification score of the first-level neural network layer for the training picture and the original classification label of the training picture;

a P-th level loss function generation module, configured to obtain a P-th level loss function of the P-th level neural network layer according to the classification score and the classification label of the (P-1)-th level fully connected layer for the training picture, P∈[2, N];

a classification recognition model generation module, configured to determine a loss function of the neural network model according to the multiple levels of loss functions, adjust network parameters of the multiple levels of neural network layers and network parameters of the multiple levels of fully connected layers according to the loss function of the neural network model, and recalculate the loss function of the neural network model based on the adjusted network parameters until the loss function of the neural network model reaches a preset function value, whereupon each level of neural network layer serves as the classification recognition model of the corresponding level.
An embodiment of the present application further provides a device, including:

one or more processors;

a memory configured to store one or more programs;

wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in the embodiments of the present application.

An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method described in the embodiments of the present application.
Brief Description of the Drawings

FIG. 1 is a flowchart of a picture classification method provided by an embodiment of the present application;

FIG. 2 is a flowchart of another picture classification method provided by an embodiment of the present application;

FIG. 3 is a flowchart of a classification recognition model generation method provided by an embodiment of the present application;

FIG. 4 is a flowchart of another classification recognition model generation method provided by an embodiment of the present application;

FIG. 5 is a schematic structural diagram of a neural network model provided by an embodiment of the present application;

FIG. 6 is a schematic structural diagram of a picture classification apparatus provided by an embodiment of the present application;

FIG. 7 is a schematic structural diagram of an apparatus for generating classification recognition models provided by an embodiment of the present application;

FIG. 8 is a schematic structural diagram of a device provided by an embodiment of the present application.

Detailed Description
In the following embodiments, multiple features and examples are provided in each embodiment; the features described in the embodiments of the present application may be combined to form multiple solutions, and each numbered embodiment should not be regarded as only one technical solution. The present application is described below with reference to the drawings and embodiments. The specific embodiments described herein are merely intended to explain the present application, not to limit it. For ease of description, only the parts related to the present application, rather than the entire structure, are shown in the drawings.

With the continuous development of network technology, networks have become increasingly powerful. People can upload pictures they have taken to a network platform, such as a short-video application or a live-streaming platform, for other users of the platform to view. Since the quality of uploaded pictures varies greatly, some pictures may not only harm other users' physical and mental health but may also violate the law. Uploaded pictures therefore need to be reviewed, and the premise of review is accurate classification and recognition of the uploaded pictures. Moreover, uploaded pictures differ: there are easy pictures and hard pictures, where "easy" and "hard" refer to the difficulty of classification recognition. If the classification of a picture is easy to determine, the picture may be called an easy picture; if it is not, the picture may be called a hard picture. The above is merely one application scenario that requires picture classification.

In the related art, a classification recognition model generated by training a deep neural network may be used to classify pictures. To give the model high classification accuracy, that is, the ability to correctly classify both easy and hard pictures, the depth of the deep neural network can be increased, but this brings the following problems: since deep neural networks mainly rely on backward gradient propagation during training, training becomes harder as the depth grows; and since forward inference of a deep neural network is computationally expensive, the amount of computation also grows with the depth, which reduces classification efficiency.

To solve the above problems, that is, to achieve high classification accuracy and improve classification efficiency without increasing the network depth, a multi-level classification recognition model may be adopted. "Multi-level" here refers to classification recognition models of different levels, each level being used to classify and recognize pictures of a corresponding difficulty. This is described below with reference to specific embodiments.
FIG. 1 is a flowchart of a picture classification method provided by an embodiment of the present application. This embodiment is applicable to improving the accuracy and efficiency of picture classification. The method may be performed by a picture classification apparatus, which may be implemented in software and/or hardware and may be configured in a device such as a computer or a mobile terminal. As shown in FIG. 1, the method includes the following steps:

Step 110: Obtain a picture set to be classified, the picture set including at least two pictures.

In the embodiments of the present application, the picture set to be classified may be a picture set uploaded by users to a network platform or a pre-stored picture set; the source of the picture set may be chosen according to the actual situation and is not limited here. The picture set includes at least two pictures, whose classification recognition difficulty may be the same or different; in other words, the pictures in the set may need different levels of classification recognition models to be classified.

Step 120: Input the picture set into a pre-trained current-level classification recognition model to obtain a classification score for each picture.

Step 130: Determine whether the classification score of each picture satisfies a preset condition; if so, perform step 140; if not, perform step 150.

Step 140: Determine the classification recognition result of each picture according to its classification score.

Step 150: Continue to input each picture into the pre-trained next-level classification recognition model to obtain its classification score, and return to step 130.
In the embodiments of the present application, there are classification recognition models of different levels, each level being used to classify pictures of a corresponding difficulty. Because each level handles pictures of different difficulty, the complexity of the network structure of each level usually also differs: the more complex the network structure of a classification recognition model, the harder the pictures it is used to classify. The models of the different levels thus achieve hierarchical recognition of pictures. During hierarchical recognition, the number of pictures passing through each successive level keeps decreasing, which correspondingly reduces the amount of computation and improves classification efficiency. The complexity of a network structure here is relative. Every level of classification recognition model is generated by neural network training, and the levels are trained collaboratively rather than separately; that is, during training, the classification scores of the multiple levels influence one another.

The current-level classification recognition model may refer to the model used to classify the easiest pictures and may be understood as the first-level classification recognition model; the next-level model may refer to the models used to classify pictures of the other difficulties and may be understood as the second-level classification recognition model, the third-level classification recognition model, and so on.

After the picture set to be classified is obtained, it is input into the pre-trained current-level classification recognition model to obtain the classification score of each picture in the set, and whether a picture needs to be input into the next-level model is determined according to its classification score, until the classification score of every picture in the set is obtained. In one embodiment, the picture set is input into the pre-trained current-level model to obtain the classification score of each picture, and whether the score satisfies a preset condition is determined. If it does, the classification recognition result of the picture is determined according to its classification score, and the picture is no longer input into the next-level model. If it does not, the picture is input into the next-level model to obtain a new classification score, and the determination is repeated, until the classification recognition result of every picture in the set is obtained. In one embodiment, the preset condition may be that the classification probability of the picture, calculated from its classification score, is greater than or equal to a probability threshold.

The technical solutions provided by the embodiments of the present application address binary picture classification, where the classification result is yes or no. In one embodiment, yes or no may be represented by preset identifiers. For example, when determining whether a picture contains illegal content, "yes" may be represented by "1" and "no" by "0". A classification score of "1" (yes) indicates that the picture contains illegal content, and a score of "0" (no) indicates that it does not.

Based on the above, determining the classification recognition result of a picture according to its classification score may be understood as follows: the classification scores are set in advance, for example, "1" for "yes" and "0" for "no", where the meanings of "yes" and "no" depend on the content to be recognized. As in the example above of determining whether a picture contains illegal content, a score of "1" (yes) indicates that the picture contains illegal content, and a score of "0" (no) indicates that it does not.

If the pictures in the set differ in difficulty, the number of pictures input into each successive level decreases. Moreover, easy pictures are usually the majority; in other words, most pictures in the set can be accurately classified by the current-level model, so the next-level model faces fewer pictures to classify, which improves classification efficiency. The process also embodies hierarchical recognition according to picture difficulty, which improves classification accuracy compared with classifying pictures of all difficulties with a single model. The accuracy improvement can be understood as follows: when a single model is trained, the classification scores that drive backward gradient propagation include pictures of all difficulties, not only hard pictures. When models of different levels are trained, the scores of pictures that have already been classified no longer drive backward gradient propagation; that is, for later-level models, it is the scores of ever harder pictures that drive backward propagation during training. This training mechanism makes the specificity of the resulting classification recognition models more pronounced.

If, after passing through all pre-trained classification recognition models, some pictures still have classification scores that do not satisfy the preset condition, the classification recognition results of those pictures may be determined from the classification scores produced by the last-level model.

For example, suppose there are N levels of classification recognition models, including a first-level model, a second-level model, ..., an (N-1)-th level model, and an N-th level model, and the picture set to be classified includes M pictures. The M pictures are input into the first-level model to obtain the classification score of each picture. If the scores of U pictures satisfy the preset condition, the classification recognition results of those U pictures are determined from their scores, and the remaining (M-U) pictures are input into the second-level model. If the scores of K of those pictures satisfy the preset condition, their results are determined from their scores, and the remaining (M-U-K) pictures are input into the third-level model. If the scores of these (M-U-K) pictures satisfy the preset condition, their results are determined from their scores. At this point, the classification recognition result of every picture in the set has been obtained, and the classification recognition of the picture set to be classified ends.
In the technical solution of this embodiment, a picture set to be classified is obtained, the picture set including at least two pictures; the picture set is input into a pre-trained current-level classification recognition model to obtain a classification score for each picture. When the classification score of a picture satisfies a preset condition, its classification recognition result is determined from the score; when it does not, the picture continues to be input into the pre-trained next-level model until its classification recognition result is obtained. Each level of model is generated by neural network training, and classifying pictures with a multi-level classification recognition model improves the accuracy and efficiency of picture classification.

In one embodiment, on the basis of the above technical solution, after the picture set is input into the pre-trained current-level model to obtain the classification score of each picture, the method may further include: obtaining the classification probability of each picture according to its classification score. The classification score of a picture satisfying the preset condition includes: the classification probability of the picture being greater than or equal to a probability threshold; the classification score not satisfying the preset condition includes: the classification probability being less than the probability threshold.

In the embodiments of the present application, inputting the picture set into the pre-trained current-level model yields the classification score of each picture. A classification score may be understood as a vector, and the classification scores of the picture set are composed of the scores of the individual pictures.

A classifier is used to calculate the classification probability of a picture from its classification score. Correspondingly, determining the classification recognition result of a picture when its classification score satisfies the preset condition may include: determining the result from the score when the classification probability of the picture is greater than or equal to the probability threshold. Inputting a picture into the next-level model when its score does not satisfy the preset condition may include: inputting the picture into the pre-trained next-level model when its classification probability is less than the probability threshold, until its classification recognition result is obtained. The classifier may be Softmax or Logistic, among others.
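The score-to-probability conversion mentioned above (e.g., via Softmax) can be sketched as follows. This is a minimal, self-contained illustration of the standard Softmax function, not code from the patent.

```python
import math

def softmax(scores):
    """Map a picture's per-class score vector to probabilities in (0, 1)
    that sum to 1, so they can be compared against a probability threshold."""
    shifted = [s - max(scores) for s in scores]  # subtract max for numerical stability
    exps = [math.exp(s) for s in shifted]
    total = sum(exps)
    return [e / total for e in exps]
```

For the binary case described in the text, the two-element output gives the probabilities of "yes" and "no", and the larger one is compared against the probability threshold.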
FIG. 2 is a flowchart of another picture classification method provided by an embodiment of the present application. As shown in FIG. 2, the method includes the following steps:

Step 210: Obtain a picture set to be classified, the picture set including at least two pictures.

Step 220: Input the picture set into a pre-trained current-level classification recognition model to obtain a classification score for each picture.

Step 230: Obtain the classification probability of each picture according to its classification score.

Step 240: Determine whether the classification probability of each picture is greater than or equal to a probability threshold; if so, perform step 250; if not, perform step 260.

Step 250: Determine the classification recognition result of each picture according to its classification score.

Step 260: Continue to input each picture into the pre-trained next-level classification recognition model to obtain its classification score, and return to step 240.

In the embodiments of the present application, each level of classification recognition model is generated based on neural network training.

In the technical solution of this embodiment, a picture set to be classified is obtained, the picture set including at least two pictures; the picture set is input into a pre-trained current-level classification recognition model to obtain a classification score for each picture, and the classification probability of each picture is obtained from its score. When the classification probability of a picture is greater than or equal to the probability threshold, its classification recognition result is determined from its classification score; when the probability is less than the threshold, the picture continues to be input into the pre-trained next-level model until its classification recognition result is obtained. Each level of model is generated by neural network training, and classifying pictures with a multi-level classification recognition model improves the accuracy and efficiency of picture classification.
FIG. 3 is a flowchart of a classification recognition model generation method provided by an embodiment of the present application. This embodiment is applicable to improving the accuracy and efficiency of picture classification. The method may be performed by an apparatus for generating classification recognition models, which may be implemented in software and/or hardware and may be configured in a device such as a computer or a mobile terminal. As shown in FIG. 3, the method includes the following steps:

Step 310: Obtain a training sample, the training sample including a training picture and an original classification label of the training picture.

In the embodiments of the present application, a training sample is obtained, which may include training pictures and the original classification labels of the training pictures; there are at least two training pictures. A classification label indicates the classification to which a training picture belongs.

Step 320: Input the training picture and its original classification label into a neural network model to obtain the classification score of each level of the multi-level neural network layers for the training picture, and the classification score and classification label of each level of the multi-level fully connected layers for the training picture; the neural network model includes N levels of neural network layers and N-1 levels of fully connected layers, the i-th level fully connected layer being located after the (i+1)-th level neural network layer, N≥3, i∈[1, N-1].

In the embodiments of the present application, the neural network model may include N levels of neural network layers and N-1 levels of fully connected layers, the i-th level fully connected layer being located between the (i+1)-th and (i+2)-th level neural network layers, N≥3, i∈[1, N-1]. In this embodiment, a neural network is a mathematical model that, based on the basic principles of neural networks in biology and an understanding and abstraction of the structure of the human brain and its response mechanism to external stimuli, simulates the way the human brain's nervous system processes complex information, with network topology knowledge as its theoretical basis. The model processes information by relying on the complexity of the system and adjusting the weights of the interconnections among a large number of internal nodes (neurons).

Neural networks may include convolutional neural networks, recurrent neural networks, and deep neural networks. A convolutional neural network is taken as an example below. The core problem a convolutional neural network solves is how to automatically extract and abstract features and then map them to the task objective to solve practical problems. A convolutional neural network generally consists of three parts: the first part is the input layer; the second part is a combination of convolutional layers, activation layers, and pooling (or downsampling) layers; the third part is a fully connected multilayer perceptron classifier (i.e., the fully connected layer). Convolutional neural networks have the property of weight sharing. Weight sharing refers to the convolution kernel: one kernel can extract the same feature at different positions of image data; in other words, the features of the same object at different positions in an image are basically the same. One kernel yields only part of the features, so multiple kernels can be configured, each learning a different feature, to extract the features of a picture. In picture classification, the role of the convolutional layers is to abstract low-level features into high-level features. Low-level features are basic features such as texture and edges; high-level features, such as faces and object shapes, better represent the attributes of a sample. This process is the hierarchical nature of convolutional neural networks.

The fully connected layer acts as the "classifier" of the entire convolutional neural network. If the convolutional, activation, and pooling operations map the raw data into a hidden feature space, the fully connected layer maps the learned "distributed feature representation" into the sample label space. In practice, a fully connected layer can be implemented by convolution operations: a fully connected layer whose preceding layer is fully connected can be converted into a convolution with a 1x1 kernel, while a fully connected layer whose preceding layer is a convolutional layer can be converted into a global convolution with an H×W kernel, where H and W are the height and width of the preceding layer's convolution output. Because fully connected layer parameters are redundant (they alone can account for about 80% of all network parameters), some high-performing network models, such as residual network models, replace the fully connected layer with global average pooling to fuse the learned deep features; that is, a convolutional neural network may not include a fully connected layer.

The N-1 levels of fully connected layers provided by the embodiments of the present application are fully connected layers outside the N levels of neural network layers; that is, each level of neural network layer may itself include fully connected layers, but those differ from the N-1 levels of fully connected layers of the neural network model.

The training sample is input into the neural network model; that is, the training picture and its original classification label are input into the model to obtain the classification score of each of the multi-level neural network layers for the training picture, and the classification score and classification label of each of the multi-level fully connected layers for the training picture. The classification score and label of a fully connected layer are used to compute the loss function of a neural network layer, and the classification score of a neural network layer is used to compute the classification score and label of a fully connected layer.

Step 330: Obtain the first-level loss function of the first-level neural network layer according to the classification score of the first-level neural network layer for the training picture and the original classification label of the training picture.

Step 340: Obtain the P-th level loss function of the P-th level neural network layer according to the classification score and classification label of the (P-1)-th level fully connected layer for the training picture, P∈[2, N].

Step 350: Determine the loss function of the neural network model according to the multi-level loss functions, adjust the network parameters of the multi-level neural network layers and of the multi-level fully connected layers according to the loss function of the neural network model, and recalculate the loss function based on the adjusted parameters until it reaches a preset function value; each level of neural network layer then serves as the classification recognition model of the corresponding level.
In the embodiments of the present application, a loss function maps an event or values of one or more variables onto a real number that intuitively represents an associated "cost". Loss functions can measure model performance and the inconsistency between actual and predicted values; model performance increases as the loss value decreases. For the embodiments of the present application, the predicted values are the classification score of the first-level neural network layer for the training picture and the classification scores of the multi-level fully connected layers for the training picture, and the actual values are the original classification label of the training picture and the classification labels of the multi-level fully connected layers for the training picture. The loss function may be a cross-entropy loss, 0-1 loss, squared loss, absolute loss, or logarithmic loss, among others, and may be set according to the actual situation without limitation here.

The training process of the neural network model computes its loss function through forward propagation, computes the partial derivatives of the loss function with respect to the network parameters, and adjusts the parameters of the multi-level neural network layers and multi-level fully connected layers with backward gradient propagation; the loss function is then recalculated with the adjusted parameters until it reaches the preset function value. When the value of the loss function reaches the preset function value, the model has finished training, and the parameters of the multi-level neural network layers and fully connected layers are thereby determined. On this basis, each level of neural network layer serves as the classification recognition model of the corresponding level; that is, the first-level neural network layer serves as the first-level classification recognition model and the P-th level neural network layer serves as the P-th level classification recognition model, P∈[2, N].

The loss function of the neural network model described in the embodiments of the present application is obtained as a weighted sum of the loss functions of the N levels of neural network layers. In one embodiment, the first-level loss function of the first-level neural network layer is calculated from the classification score of the first-level neural network layer for the training picture and the original classification label of the training picture, and the P-th level loss function of the P-th level neural network layer is calculated from the classification score and classification label of the (P-1)-th level fully connected layer for the training picture, P∈[2, N].
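The weighted-sum combination of the per-level losses can be sketched as follows. The uniform default weights are an assumption for illustration; the description does not specify a weighting scheme.

```python
def model_loss(level_losses, weights=None):
    """Weighted sum of the per-level loss functions; uniform weights of 1.0
    are assumed when none are given (the weighting scheme itself is not
    specified in the description)."""
    if weights is None:
        weights = [1.0] * len(level_losses)
    return sum(w * l for w, l in zip(weights, level_losses))
```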
Every level of fully connected layer has a classification label for the training picture; in other words, the classification label of the training picture is updated once after each level of fully connected layer. The update is explained here. In one embodiment, for the classification label of each training picture, the label of the P-th level fully connected layer may be the same as that of the higher-level fully connected layers, or it may differ, where "higher levels" refers to all levels before the P-th. The update described here therefore refers to performing an update operation, whose result may be that the label was actually changed (the P-th level label differs from the higher-level label) or that it was not (the P-th level label is the same as the higher-level label).

Because the classification scores and labels on which the loss functions of the multi-level neural network layers are based differ, the resulting loss functions of the multi-level neural network layers also differ. Therefore, when the network parameters of the multi-level neural network layers and fully connected layers are adjusted based on the loss function of the neural network model, the complexity of the finally determined multi-level neural network layer structures also differs, and correspondingly so does the complexity of the multi-level classification recognition model structures. Based on the above, the multi-level models can classify pictures of the corresponding difficulty: easy pictures can obtain satisfactory classification results from a model with a simple structure, while hard pictures require a model with a more complex structure. That is, the multi-level models each process training pictures of the corresponding difficulty, rather than all models processing all training pictures, which greatly improves classification efficiency.

In this embodiment, the N levels of neural network layers and the N-1 levels of fully connected layers are generated by collaborative training rather than separate training, and the results of the multi-level neural network layers and fully connected layers influence one another. Multi-level neural network layers trained in this way perform better than a neural network layer obtained by training a single neural network layer, and since each level of neural network layer serves as the classification recognition model of the corresponding level, the resulting multi-level classification recognition models likewise perform better than a model obtained by training a single neural network layer.

In one embodiment, when training the N levels of neural network layers, they may be initialized by loading a pre-trained model. The pre-trained model here refers to an already trained model that, like the N levels of neural network layers to be trained, is used to classify similar training samples.

In the technical solution of this embodiment, a training sample is obtained, including a training picture and its original classification label; the training picture and its original classification label are input into a neural network model to obtain the classification score of each of the multi-level neural network layers for the training picture and the classification score and classification label of each of the multi-level fully connected layers for the training picture. The neural network model includes N levels of neural network layers and N-1 levels of fully connected layers, the i-th level fully connected layer being located after the (i+1)-th level neural network layer, N≥3, i∈[1, N-1]. The first-level loss function of the first-level neural network layer is obtained from the classification score of the first-level neural network layer for the training picture and the original classification label; the P-th level loss function of the P-th level neural network layer is obtained from the classification score and classification label of the (P-1)-th level fully connected layer, P∈[2, N]. The loss function of the neural network model is determined from the multi-level loss functions, the network parameters of the multi-level neural network layers and fully connected layers are adjusted according to it, and the loss function is recalculated with the adjusted parameters until it reaches the preset function value; each level of neural network layer then serves as the classification recognition model of the corresponding level. Obtaining the multi-level classification recognition models through collaborative training improves the accuracy and efficiency of picture classification by the classification recognition models.
On the basis of the above technical solutions, the classification scores of the multi-level fully connected layers for the training picture may be generated as follows: the classification score of the first-level fully connected layer for the training picture is obtained according to the classification score of the first-level neural network layer and the classification score of the second-level neural network layer for the training picture; the classification score of the P-th level fully connected layer for the training picture is obtained according to the classification score of the (P-1)-th level fully connected layer and the classification score of the (P+1)-th level neural network layer for the training picture, P∈[2, N].

In the embodiments of the present application, except for the first-level fully connected layer, the classification scores of the other fully connected layers for the training picture may be generated as follows: the classification score of the P-th level fully connected layer for the training picture is obtained according to the classification score of the (P-1)-th level fully connected layer for the training picture and the classification score of the (P+1)-th level neural network layer for the training picture, P∈[2, N].

The classification score of the first-level fully connected layer for the training picture may be generated as follows: it is obtained according to the classification score of the first-level neural network layer for the training picture and the classification score of the second-level neural network layer for the training picture.

On the basis of the above technical solutions, the classification labels of the multi-level fully connected layers for the training picture may be generated as follows: the original classification label of the training picture is updated according to the classification score of the first-level neural network layer to obtain the classification label of the first-level fully connected layer for the training picture; the classification label of the (P-1)-th level fully connected layer is updated according to the classification score of the (P-1)-th level fully connected layer to obtain the classification label of the P-th level fully connected layer for the training picture, P∈[2, N].

In the embodiments of the present application, every level of fully connected layer has a classification label for the training picture; in other words, the label is updated once after each level of fully connected layer. The labels may be generated as follows: the original classification label is updated according to the classification score of the first-level neural network layer to obtain the classification label of the first-level fully connected layer, and the classification label of the (P-1)-th level fully connected layer is updated according to the classification score of the (P-1)-th level fully connected layer. The update here refers to performing an update operation; whether the label is actually changed may be determined by whether the layer's classification score for the training picture satisfies a preset condition, which may be: the classification probability of the layer for the training picture, obtained from its classification score, is greater than or equal to a probability threshold.

On the basis of the above technical solutions, updating the original classification label of the training picture according to the classification score of the first-level neural network layer to obtain the classification label of the first-level fully connected layer may include: obtaining the classification probability of the first-level neural network layer for the training picture according to its classification score; in a case where this probability is greater than or equal to a first probability threshold, modifying the original classification label to the preset classification label and using the preset label as the classification label of the first-level fully connected layer for the training picture; in a case where this probability is less than the first probability threshold, keeping the original classification label unchanged and using it as the classification label of the first-level fully connected layer for the training picture.

In the embodiments of the present application, a classifier may be used to convert the classification score of the training picture into a classification probability. The classifier here may be the Softmax function, which maps classification scores into the interval (0, 1) so that they can be understood as probabilities; that is, the score can be converted into a probability through Softmax. The classifier may also be the Logistic function; which classifier to choose may be determined according to the actual situation and is not limited here.

The classification probability of the first-level neural network layer for the training picture is obtained according to its classification score. If this probability is greater than or equal to the first probability threshold, the original classification label of the training picture may be modified to the preset classification label, which is then used as the classification label of the first-level fully connected layer; if it is less than the first probability threshold, the original label may be kept unchanged and used as the classification label of the first-level fully connected layer. In this embodiment, the first probability threshold serves as the criterion for whether to modify the original classification label of the training picture; its value may be set according to the actual situation and is not limited here.
这里针对的对象是每张训练图片,即需要确定每张训练图片的分类概率与第一概率阈值的关系,并根据结果确定对该张训练图片的分类标签是修改还是保留。
如果第一级神经网络层对于训练图片的分类概率大于或等于第一概率阈值,则将训练图片的分类标签修改为预设分类标签的原因在于:如果第一级神经网络层对于训练图片的分类概率大于或等于第一概率阈值,则可以说明第一级神经网络层对于该训练图片的分类结果已符合要求,通过将该训练图片的分类标签修改为预设分类标签,使得后续根据损失函数对网络参数进行调整时,预设分类标签对应的训练图片不参与对下级神经网络层的网络参数和全连接层的网络参数的调整。
如果第一级神经网络层对于训练图片的分类概率小于第一概率阈值,则保持训练图片的原始分类标签不变的原因在于:如果第一级神经网络层对于训练图片的分类概率小于第一概率阈值,则可以说明第一级神经网络层对于该训练图片的分类结果不符合要求,通过保持该训练图片的原始分类标签不变,使得后续根据损失函数对网络参数进行调整时,该训练图片参与对下级神经网络层的网络参数和全连接层的网络参数的调整。
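上述"按概率阈值决定修改或保留分类标签"的逻辑,可用如下Python片段示意(其中预设分类标签取-1、阈值取0.9均为假设,仅作说明):

```python
PRESET_LABEL = -1        # 假设用-1表示预设分类标签
FIRST_THRESHOLD = 0.9    # 第一概率阈值,数值仅为示例

def update_label(prob, original_label, threshold=FIRST_THRESHOLD, preset=PRESET_LABEL):
    """若该训练图片的分类概率达到阈值,说明本级分类结果已符合要求,
    将其标签改为预设分类标签,使其不再参与下级网络参数的调整;
    否则保留原始分类标签,继续参与下级训练。"""
    if prob >= threshold:
        return preset
    return original_label

print(update_label(0.95, 3))   # 概率达标,返回预设分类标签
print(update_label(0.40, 3))   # 概率未达标,保留原始标签
```

逐张训练图片执行该判断,即得到第一级全连接层对于每张训练图片的分类标签。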
在上述技术方案的基础上,根据第P-1级全连接层对于训练图片的分类得分更新第P-1级全连接层对于训练图片的分类标签,得到第P级全连接层对于训练图片的分类标签,P∈[2,N],可以包括:根据第P-1级全连接层对于训练图片的分类得分,得到第P-1级全连接层对于训练图片的分类概率,P∈[2,N]。在第P-1级全连接层对于训练图片的分类概率大于或等于第P概率阈值的情况下,将第P-1级全连接层对于训练图片的分类标签修改为预设分类标签,并将预设分类标签作为第P级全连接层对于训练图片的分类标签。在第P-1级全连接层对于训练图片的分类概率小于第P概率阈值的情况下,保持第P-1级全连接层对于训练图片的分类标签不变,并将第P-1级全连接层对于训练图片的分类标签作为第P级全连接层对于训练图片的分类标签。
在本申请的实施例中,如前文所述,同样可以采用分类器将训练图片的分类得分转换为训练图片的分类概率,这里所述的分类器可以为Softmax函数,还可以为Logistic函数,选用哪种分类器可根据实际情况进行确定,在此不作限定。
根据第P-1级全连接层对于训练图片的分类得分,得到第P-1级全连接层对于训练图片的分类概率,P∈[2,N],如果第P-1级全连接层对于训练图片的分类概率大于或等于第P概率阈值,则可将第P-1级全连接层对于训练图片的分类标签修改为预设分类标签,并将预设分类标签作为第P级全连接层对于训练图片的分类标签;如果第P-1级全连接层对于训练图片的分类概率小于第P概率阈值,则可保持第P-1级全连接层对于训练图片的分类标签不变,并将第P-1级全连接层对于训练图片的分类标签作为第P级全连接层对于训练图片的分类标签。本实施例中,第P概率阈值可作为是否对第P-1级全连接层对于训练图片的分类标签进行修改的标准,其数值大小可根据实际情况进行设定,在此不作限定。
如前文所述,这里针对的对象依然是每张训练图片,即需要确定每张训练图片的分类概率与第P概率阈值的关系,并根据结果确定对该张训练图片的分类标签是修改还是保留。
如果第P-1级全连接层对于训练图片的分类概率大于或等于第P概率阈值,则将第P-1级全连接层对于训练图片的分类标签修改为预设分类标签的原因在于:如果第P-1级全连接层对于训练图片的分类概率大于或等于第P概率阈值,则可以说明第P级神经网络层对于该训练图片的分类结果已符合要求,通过将该训练图片的分类标签修改为预设分类标签,使得后续根据损失函数对网络参数进行调整时,预设分类标签对应的训练图片不参与对下级神经网络层的网络参数和全连接层的网络参数的调整。
如果第P-1级全连接层对于训练图片的分类概率小于第P概率阈值,则保持第P-1级全连接层对于训练图片的分类标签不变的原因在于:如果第P-1级全连接层对于训练图片的分类概率小于第P概率阈值,则可以说明第P级神经网络层对于该训练图片的分类结果不符合要求,通过保持该训练图片的分类标签不变,使得后续根据损失函数对网络参数进行调整时,该训练图片参与对下级神经网络层的网络参数和全连接层的网络参数的调整。
上述每经过一级全连接层便对训练图片的分类标签更新一次,实现了简单训练图片不参与对下级神经网络层和全连接层的网络参数的调整,进而使得训练得到的多级神经网络层结构的复杂程度不同。
在上述技术方案的基础上,根据多级损失函数确定神经网络模型的损失函数,并且根据神经网络模型的损失函数调整多级神经网络层的网络参数和多级全连接层的网络参数,基于调整后的网络参数重新计算神经网络模型的损失函数,直至神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型,可以包括:根据多级损失函数确定神经网络模型的损失函数。计算神经网络模型的损失函数对多级神经网络层的网络参数的偏导数和神经网络模型的损失函数对多级全连接层的网络参数的偏导数,并在一级全连接层的一张训练图片的分类标签为预设分类标签的情况下,将目标偏导数设置为零,其中,所述目标偏导数为将所述一张训练图片的分类得分代入所述神经网络模型的损失函数后,得到的损失函数对所述一级全连接层的网络参数的偏导数。根据偏导数调整多级神经网络层的网络参数和多级全连接层的网络参数,并基于调整后的网络参数重新计算神经网络模型的损失函数,直至神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型。
在本申请的实施例中,根据多级损失函数确定神经网络模型的损失函数可作如下理解:对多级损失函数进行加权求和,得到神经网络模型的损失函数,可以设定多级损失函数对应的比例系数,将多级损失函数分别与对应的比例系数相乘得到加权值,再将多个加权值相加得到神经网络模型的损失函数。示例性的,如每级神经网络层的损失函数用Loss(f_i)表示,每级损失函数对应的比例系数为T_i,其中,i∈[1,N],则神经网络模型的损失函数可表示为
Loss = \sum_{i=1}^{N} T_i \cdot Loss(f_i)
基于上述,可以通过调整每级损失函数对应的比例系数来调整每级损失函数Loss(f_i)在神经网络模型的损失函数中所占的比例。损失函数可以为交叉熵损失函数、0-1损失函数、平方损失函数、绝对损失函数和对数损失函数等,可根据实际情况进行设定,在此不作限定。
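上述加权求和可用如下Python片段示意(各级损失值与比例系数T_i的取值均为假设示例):

```python
def total_loss(losses, coeffs):
    """按每级损失函数对应的比例系数加权求和,得到整个神经网络模型的损失函数值。"""
    assert len(losses) == len(coeffs)
    return sum(t * l for t, l in zip(coeffs, losses))

level_losses = [0.8, 0.5, 0.3]   # 三级损失函数 Loss(f_i) 的当前取值(示例)
coeffs = [1.0, 0.5, 0.25]        # 对应比例系数 T_i(示例)
print(total_loss(level_losses, coeffs))  # 0.8 + 0.25 + 0.075 = 1.125
```

调大某级的T_i,即增大该级损失在整体损失中的占比。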
根据多级损失函数确定神经网络模型的损失函数后,计算神经网络模型的损失函数对多级神经网络层和多级全连接层的网络参数的偏导数,这里所述的网络参数包括权值和偏置,采用反向梯度传播方法,根据偏导数调整多级神经网络层和多级全连接层的网络参数,并基于调整后的网络参数重新计算损失函数,直至损失函数达到预设函数值,这里所述的预设函数值可以为最小损失函数值,当损失函数达到预设函数值后,则可以说明神经网络模型已训练完成,可以根据网络参数确定多级神经网络层,并将每级神经网络层作为对应级的分类识别模型。
在神经网络模型训练过程中,在一级全连接层的一张训练图片的分类标签为预设分类标签的情况下,将目标偏导数设置为零,其中,所述目标偏导数为将所述一张训练图片的分类得分代入所述神经网络模型的损失函数后,得到的损失函数对所述一级全连接层的网络参数的偏导数,即预设分类标签对应的训练图片不参与下级神经网络层和全连接层的网络参数的调整。
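"预设分类标签对应的训练图片不参与参数调整"在实现上通常相当于把这些样本对应的偏导数(目标偏导数)置零,下面用纯Python给出一个不依赖具体深度学习框架的假设性示意(掩码方式与数值均为示例):

```python
PRESET_LABEL = -1  # 假设的预设分类标签

def mask_gradients(grads, labels, preset=PRESET_LABEL):
    """对分类标签为预设分类标签的样本,将其对应的偏导数置零,
    使其不参与下级神经网络层和全连接层网络参数的调整。"""
    return [0.0 if lab == preset else g for g, lab in zip(grads, labels)]

grads = [0.12, -0.30, 0.05, 0.40]          # 各样本对某网络参数的偏导数(示例)
labels = [2, PRESET_LABEL, 1, PRESET_LABEL]
print(mask_gradients(grads, labels))       # 被置零的项不再影响反向梯度传播
```

在常见深度学习框架中,类似效果也可通过在损失函数中忽略特定标签的样本来实现。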
图4为本申请实施例提供的另一种分类识别模型的生成方法的流程图,本实施例可适用于提高图片分类的准确率和效率的情况,该方法可以由分类识别模型的生成装置来执行,该装置可以采用软件和/或硬件的方式实现,该装置可以配置于设备中,例如计算机或移动终端等。如图4所示,该方法包括如下步骤:
步骤410、获取训练样本,训练样本包括训练图片和训练图片的原始分类标签。
步骤420、将训练图片和训练图片的原始分类标签输入至神经网络模型中,得到多级神经网络层中每级神经网络层对于训练图片的分类得分。
步骤430、根据第一级神经网络层对于训练图片的分类得分和第二级神经网络层对于训练图片的分类得分,得到第一级全连接层对于训练图片的分类得分;根据第P-1级全连接层对于训练图片的分类得分和第P+1级神经网络层对于训练图片的分类得分,得到第P级全连接层对于训练图片的分类得分,P∈[2,N]。
步骤440、根据第一级神经网络层对于训练图片的分类得分更新训练图片的原始分类标签,得到第一级全连接层对于训练图片的分类标签;根据第P-1级全连接层对于训练图片的分类得分更新第P-1级全连接层对于训练图片的分类标签,得到第P级全连接层对于训练图片的分类标签。
步骤450、根据第一级神经网络层对于训练图片的分类得分和训练图片的原始分类标签,得到第一级神经网络层的第一级损失函数;根据第P-1级全连接层对于训练图片的分类得分和分类标签,得到第P级神经网络层的第P级损失函数。
步骤460、根据多级损失函数确定神经网络模型的损失函数。
步骤470、计算神经网络模型的损失函数对多级神经网络层的网络参数的偏导数和神经网络模型的损失函数对多级全连接层的网络参数的偏导数,并在一级全连接层的一张训练图片的分类标签为预设分类标签的情况下,将目标偏导数设置为零,其中,所述目标偏导数为将所述一张训练图片的分类得分代入所述神经网络模型的损失函数后,得到的损失函数对所述一级全连接层的网络参数的偏导数。
步骤480、根据偏导数调整多级神经网络层的网络参数和多级全连接层的网络参数,并基于调整后的网络参数重新计算神经网络模型的损失函数,直至神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型。
在本申请的实施例中,根据第一级神经网络层对于训练图片的分类得分更新训练图片的原始分类标签,得到第一级全连接层对于训练图片的分类标签,可以包括:根据第一级神经网络层对于训练图片的分类得分,得到第一级神经网络层对于训练图片的分类概率。在第一级神经网络层对于训练图片的分类概率大于或等于第一概率阈值的情况下,将训练图片的原始分类标签修改为预设分类标签,并将预设分类标签作为第一级全连接层对于训练图片的分类标签。在第一级神经网络层对于训练图片的分类概率小于第一概率阈值的情况下,保持训练图片的原始分类标签不变,并将训练图片的原始分类标签作为第一级全连接层对于训练图片的分类标签。
根据第P-1级全连接层对于训练图片的分类得分更新第P-1级全连接层对于训练图片的分类标签,得到第P级全连接层对于训练图片的分类标签,P∈[2,N],可以包括:根据第P-1级全连接层对于训练图片的分类得分,得到第P-1级全连接层对于训练图片的分类概率,P∈[2,N]。在第P-1级全连接层对于训练图片的分类概率大于或等于第P概率阈值的情况下,将第P-1级全连接层对于训练图片的分类标签修改为预设分类标签,并将预设分类标签作为第P级全连接层对于训练图片的分类标签。在第P-1级全连接层对于训练图片的分类概率小于第P概率阈值的情况下,保持第P-1级全连接层对于训练图片的分类标签不变,并将第P-1级全连接层对于训练图片的分类标签作为第P级全连接层对于训练图片的分类标签。
为了理解本申请实施例所提供的技术方案,下面以神经网络模型包括三级神经网络层和两级全连接层为例进行说明。
图5为本申请实施例提供的一种神经网络模型的结构示意图。该神经网络模型包括三级神经网络层和两级全连接层,分别为第一级神经网络层、第二级神经网络层和第三级神经网络层,以及,第一级全连接层和第二级全连接层,第一级全连接层位于第二级神经网络层之后,第二级全连接层位于第三级神经网络层之后。
获取训练样本,训练样本包括训练图片和训练图片的原始分类标签,将训练样本输入至神经网络模型中,得到多级神经网络层中每级神经网络层对于训练图片的分类得分。根据第一级神经网络层对于训练图片的分类得分和第二级神经网络层对于训练图片的分类得分,得到第一级全连接层对于训练图片的分类得分;根据第一级全连接层对于训练图片的分类得分和第三级神经网络层对于训练图片的分类得分,得到第二级全连接层对于训练图片的分类得分。
根据第一级神经网络层对于训练图片的分类得分更新训练图片的原始分类标签,得到第一级全连接层对于训练图片的分类标签;根据第一级全连接层对于训练图片的分类得分更新第一级全连接层对于训练图片的分类标签,得到第二级全连接层对于训练图片的分类标签。
根据第一级神经网络层对于训练图片的分类得分和训练图片的原始分类标签,得到第一级神经网络层的第一级损失函数;根据第一级全连接层对于训练图片的分类得分和分类标签,得到第二级神经网络层的第二级损失函数。
根据第一级损失函数和第二级损失函数确定神经网络模型的损失函数。
计算神经网络模型的损失函数对多级神经网络层和多级全连接层的网络参数的偏导数,并在一级全连接层的一张训练图片的分类标签为预设分类标签的情况下,将目标偏导数设置为零,其中,所述目标偏导数为将所述一张训练图片的分类得分代入所述神经网络模型的损失函数后,得到的损失函数对所述一级全连接层的网络参数的偏导数。
根据偏导数调整多级神经网络层的网络参数和多级全连接层的网络参数,并基于调整后的网络参数重新计算神经网络模型的损失函数,直至神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型,即第一级神经网络层作为第一级分类识别模型,第二级神经网络层作为第二级分类识别模型,第三级神经网络层作为第三级分类识别模型。
本实施例的技术方案,通过获取训练样本,训练样本包括训练图片和训练图片的原始分类标签,将训练图片和训练图片的原始分类标签输入至神经网络模型中,分别得到多级神经网络层对于训练图片的分类得分,以及,分别得到多级全连接层对于训练图片的分类得分和分类标签,神经网络模型包括N级神经网络层和N-1级全连接层,第i级全连接层位于第i+1级神经网络层之后,N≥3,i∈[1,N-1]。根据第一级神经网络层对于训练图片的分类得分和训练图片的原始分类标签,得到第一级神经网络层的第一级损失函数,根据第P-1级全连接层对于训练图片的分类得分和分类标签,得到第P级神经网络层的第P级损失函数,P∈[2,N],根据多级损失函数确定神经网络模型的损失函数,并且根据神经网络模型的损失函数调整多级神经网络层和多级全连接层的网络参数,基于调整后的网络参数重新计算神经网络模型的损失函数,直至神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型,采用协同训练方式得到多级分类识别模型,提高了分类识别模型进行图片分类的准确率和效率。
图6为本申请实施例提供的一种图片分类装置的结构示意图,本实施例可适用于提高图片分类的准确率和效率的情况,该装置可以采用软件和/或硬件的方式实现,该装置可以配置于设备中,例如计算机或移动终端等。如图6所示,该装置包括:
图片集获取模块510,设置为获取待分类的图片集,图片集包括至少两张图片。
分类结果生成模块520,设置为将图片集输入至预先训练的当前级分类识别模型中,得到每张图片的分类得分。
分类识别结果生成模块530,设置为在所述每张图片的分类得分满足预设条件的情况下,根据所述每张图片的分类得分确定所述每张图片的分类识别结果;在所述每张图片的分类得分不满足预设条件的情况下,将所述每张图片输入至预先训练的下一级分类识别模型中,直至得到所述每张图片的分类识别结果;其中,每级分类识别模型基于神经网络训练生成。
本实施例的技术方案,通过获取待分类的图片集,图片集包括至少两张图片,将图片集输入至预先训练的当前级分类识别模型中,得到每张图片的分类得分,在所述每张图片的分类得分满足预设条件的情况下,根据所述每张图片的分类得分确定所述每张图片的分类识别结果;在所述每张图片的分类得分不满足预设条件的情况下,将所述每张图片输入至预先训练的下一级分类识别模型中,直至得到所述每张图片的分类识别结果,每级分类识别模型基于神经网络训练生成,采用多级分类识别模型对图片进行分类,提高了图片分类的准确率和效率。
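多级分类识别模型的级联推理流程可用如下Python片段示意(各级模型用简单函数模拟、阈值取值均为假设的占位,实际中每级分类识别模型由前述协同训练得到):

```python
def cascade_classify(picture, models, thresholds):
    """依次将图片输入各级分类识别模型;一旦某级的最大分类概率达到该级阈值,
    即以该级结果作为分类识别结果;否则继续送入下一级模型。
    所有级均未达阈值时,以最后一级的结果为准。"""
    for model, th in zip(models, thresholds):
        probs = model(picture)
        best = max(range(len(probs)), key=probs.__getitem__)
        if probs[best] >= th:
            return best, probs[best]
    return best, probs[best]

# 用简单函数模拟两级分类识别模型(假设的概率输出)
level1 = lambda pic: [0.55, 0.45]   # 第一级:置信度不足,交给下一级
level2 = lambda pic: [0.05, 0.95]   # 第二级:置信度达标
label, prob = cascade_classify("pic.jpg", [level1, level2], [0.9, 0.9])
print(label, prob)
```

简单图片在前面的级即可得到结果,复杂图片才进入更深的级,这正是多级模型提高分类效率的原因。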
图7为本申请实施例提供的一种分类识别模型的生成装置的结构示意图,本实施例可适用于提高图片分类的准确率和效率的情况,该装置可以采用软件和/或硬件的方式实现,该装置可以配置于设备中,例如计算机或移动终端等。如图7所示,该装置包括:
训练样本获取模块610,设置为获取训练样本,训练样本包括训练图片和训练图片的原始分类标签。
分类得分和分类标签生成模块620,设置为将训练图片和训练图片的原始分类标签输入至神经网络模型中,得到多级神经网络层中每级神经网络层对于训练图片的分类得分,以及,得到多级全连接层中每级全连接层对于训练图片的分类得分和分类标签,神经网络模型包括N级神经网络层和N-1级全连接层,第i级全连接层位于第i+1级神经网络层之后,N≥3,i∈[1,N-1]。
第一级损失函数生成模块630,设置为根据第一级神经网络层对于训练图片的分类得分和训练图片的原始分类标签,得到第一级神经网络层的第一级损失函数。
第P级损失函数生成模块640,设置为根据第P-1级全连接层对于训练图片的分类得分和分类标签,得到第P级神经网络层的第P级损失函数,P∈[2,N]。
分类识别模型生成模块650,设置为根据多级损失函数确定神经网络模型的损失函数,并且根据神经网络模型的损失函数调整多级神经网络层的网络参数和多级全连接层的网络参数,基于调整后的网络参数重新计算神经网络模型的损失函数,直至神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型。
本实施例的技术方案,通过获取训练样本,训练样本包括训练图片和训练图片的原始分类标签,将训练图片和训练图片的原始分类标签输入至神经网络模型中,得到多级神经网络层中每级神经网络层对于训练图片的分类得分,以及,得到多级全连接层中每级全连接层对于训练图片的分类得分和分类标签,神经网络模型包括N级神经网络层和N-1级全连接层,第i级全连接层位于第i+1级神经网络层之后,N≥3,i∈[1,N-1]。根据第一级神经网络层对于训练图片的分类得分和训练图片的原始分类标签,得到第一级神经网络层的第一级损失函数,根据第P-1级全连接层对于训练图片的分类得分和分类标签,得到第P级神经网络层的第P级损失函数,P∈[2,N],根据多级损失函数确定神经网络模型的损失函数,并且根据神经网络模型的损失函数调整多级神经网络层的网络参数和多级全连接层的网络参数,基于调整后的网络参数重新计算神经网络模型的损失函数,直至神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型,采用协同训练方式得到多级分类识别模型,提高了分类识别模型进行图片分类的准确率和效率。
图8为本申请实施例提供的一种设备的结构示意图。图8示出了适于用来实现本申请实施方式的示例性设备712的框图。图8显示的设备712仅仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。
如图8所示,设备712以通用计算设备的形式表现。设备712的组件可以包括但不限于:一个或者多个处理器716,***存储器728,连接于不同***组件(包括***存储器728和处理器716)的总线718。
处理器716通过运行存储在***存储器728中的程序,从而执行多种功能应用以及数据处理,例如实现本申请实施例所提供的一种图片分类方法,该方法包括:获取待分类的图片集,图片集包括至少两张图片。将图片集输入至预先训练的当前级分类识别模型中,得到每张图片的分类得分。在所述每张图片的分类得分满足预设条件的情况下,根据所述每张图片的分类得分确定所述每张图片的分类识别结果;在所述每张图片的分类得分不满足预设条件的情况下,将所述每张图片输入至预先训练的下一级分类识别模型中,直至得到所述每张图片的分类识别结果;其中,每级分类识别模型基于神经网络训练生成。
本申请实施例还提供了另一种设备,包括:一个或多个处理器;存储器,设置为存储一个或多个程序;当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现本申请实施例所提供的一种分类识别模型的生成方法,该方法包括:获取训练样本,训练样本包括训练图片和训练图片的原始分类标签。将训练图片和所述训练图片的原始分类标签输入至神经网络模型中,得到多级神经网络层中每级神经网络层对于训练图片的分类得分,以及,得到多级全连接层中每级全连接层对于训练图片的分类得分和分类标签,神经网络模型包括N级神经网络层和N-1级全连接层,第i级全连接层位于第i+1级神经网络层之后,N≥3,i∈[1,N-1]。根据第一级神经网络层对于训练图片的分类得分和训练图片的原始分类标签,得到第一级神经网络层的第一级损失函数。根据第P-1级全连接层对于训练图片的分类得分和分类标签,得到第P级神经网络层的第P级损失函数,P∈[2,N]。根据多级损失函数确定神经网络模型的损失函数,并且根据神经网络模型的损失函数调整多级神经网络层的网络参数和多级全连接层的网络参数,基于调整后的网络参数重新计算神经网络模型的损失函数,直至神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型。
处理器还可以实现本申请任意实施例所提供应用于设备的图片分类方法的技术方案或者应用于设备的分类识别模型的生成方法的技术方案。该设备的硬件结构以及功能可参见实施例的内容解释。
本申请实施例还提供了一种计算机可读存储介质,存储有计算机程序,该程序被处理器执行时实现如本申请实施例所提供的一种图片分类方法,该方法包括:获取待分类的图片集,图片集包括至少两张图片。将图片集输入至预先训练的当前级分类识别模型中,得到每张图片的分类得分。在所述每张图片的分类得分满足预设条件的情况下,根据所述每张图片的分类得分确定所述每张图片的分类识别结果;在所述每张图片的分类得分不满足预设条件的情况下,将所述每张图片输入至预先训练的下一级分类识别模型中,直至得到所述每张图片的分类识别结果;其中,每级分类识别模型基于神经网络训练生成。
本申请实施例还提供了另一种计算机可读存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行一种分类识别模型的生成方法,该方法包括:获取训练样本,训练样本包括训练图片和训练图片的原始分类标签。将训练图片和所述训练图片的原始分类标签输入至神经网络模型中,得到多级神经网络层中每级神经网络层对于训练图片的分类得分,以及,得到多级全连接层中每级全连接层对于训练图片的分类得分和分类标签,神经网络模型包括N级神经网络层和N-1级全连接层,第i级全连接层位于第i+1级神经网络层之后,N≥3,i∈[1,N-1]。根据第一级神经网络层对于训练图片的分类得分和训练图片的原始分类标签,得到第一级神经网络层的第一级损失函数。根据第P-1级全连接层对于训练图片的分类得分和分类标签,得到第P级神经网络层的第P级损失函数,P∈[2,N]。根据多级损失函数确定神经网络模型的损失函数,并且根据神经网络模型的损失函数调整多级神经网络层的网络参数和多级全连接层的网络参数,基于调整后的网络参数重新计算神经网络模型的损失函数,直至神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型。
本申请实施例所提供的一种计算机可读存储介质,该计算机可读存储介质中的计算机可执行指令不限于如上所述的方法操作,还可以执行本申请任意实施例所提供的设备的图片分类方法和分类识别模型的生成方法中的相关操作。对存储介质的介绍可参见实施例中的内容解释。

Claims (12)

  1. 一种图片分类方法,包括:
    获取待分类的图片集,所述图片集包括至少两张图片;
    将所述图片集输入至预先训练的当前级分类识别模型中,得到每张图片的分类得分;
    在所述每张图片的分类得分满足预设条件的情况下,根据所述每张图片的分类得分确定所述每张图片的分类识别结果;在所述每张图片的分类得分不满足所述预设条件的情况下,将所述每张图片输入至预先训练的下一级分类识别模型中,直至得到所述每张图片的分类识别结果;其中,每级分类识别模型基于神经网络训练生成。
  2. 根据权利要求1所述的方法,在所述将所述图片集输入至预先训练的当前级分类识别模型中,得到每张图片的分类得分之后,还包括:
    根据每张图片的分类得分,得到所述每张图片的分类概率;
    其中,所述每张图片的分类得分满足预设条件为所述每张图片的分类概率大于或等于概率阈值;所述每张图片的分类得分不满足预设条件为所述每张图片的分类概率小于所述概率阈值。
  3. 一种分类识别模型的生成方法,包括:
    获取训练样本,所述训练样本包括训练图片和所述训练图片的原始分类标签;
    将所述训练图片和所述训练图片的原始分类标签输入至神经网络模型中,得到多级神经网络层中每级神经网络层对于所述训练图片的分类得分,以及,得到多级全连接层中每级全连接层对于所述训练图片的分类得分和分类标签,所述神经网络模型包括N级神经网络层和N-1级全连接层,第i级全连接层位于第i+1级神经网络层之后,N≥3,i∈[1,N-1];
    根据第一级神经网络层对于所述训练图片的分类得分和所述训练图片的原始分类标签,得到所述第一级神经网络层的第一级损失函数;
    根据第P-1级全连接层对于所述训练图片的分类得分和分类标签,得到第P级神经网络层的第P级损失函数,P∈[2,N];
    根据多级损失函数确定所述神经网络模型的损失函数,并且根据所述神经网络模型的损失函数调整所述多级神经网络层的网络参数和所述多级全连接层的网络参数,基于调整后的所述网络参数重新计算所述神经网络模型的损失函数,直至所述神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型。
  4. 根据权利要求3所述的方法,其中,所述多级全连接层对于所述训练图片的分类得分通过如下方式生成:
    根据所述第一级神经网络层对于所述训练图片的分类得分和第二级神经网络层对于所述训练图片的分类得分,得到第一级全连接层对于所述训练图片的分类得分;
    根据第P-1级全连接层对于所述训练图片的分类得分和第P+1级神经网络层对于所述训练图片的分类得分,得到第P级全连接层对于所述训练图片的分类得分,P∈[2,N]。
  5. 根据权利要求3或4所述的方法,其中,所述多级全连接层对于所述训练图片的分类标签通过如下方式生成:
    根据所述第一级神经网络层对于所述训练图片的分类得分更新所述训练图片的原始分类标签,得到第一级全连接层对于所述训练图片的分类标签;
    根据第P-1级全连接层对于所述训练图片的分类得分更新所述第P-1级全连接层对于所述训练图片的分类标签,得到第P级全连接层对于所述训练图片的分类标签,P∈[2,N]。
  6. 根据权利要求5所述的方法,其中,所述根据所述第一级神经网络层对于所述训练图片的分类得分更新所述训练图片的原始分类标签,得到第一级全连接层对于所述训练图片的分类标签,包括:
    根据所述第一级神经网络层对于所述训练图片的分类得分,得到所述第一级神经网络层对于所述训练图片的分类概率;
    在所述第一级神经网络层对于所述训练图片的分类概率大于或等于第一概率阈值的情况下,将所述训练图片的原始分类标签修改为预设分类标签,并将所述预设分类标签作为第一级全连接层对于所述训练图片的分类标签;
    在所述第一级神经网络层对于所述训练图片的分类概率小于所述第一概率阈值的情况下,保持所述训练图片的原始分类标签不变,并将所述训练图片的原始分类标签作为所述第一级全连接层对于所述训练图片的分类标签。
  7. 根据权利要求6所述的方法,其中,所述根据第P-1级全连接层对于所述训练图片的分类得分更新所述第P-1级全连接层对于所述训练图片的分类标签,得到第P级全连接层对于所述训练图片的分类标签,P∈[2,N],包括:
    根据第P-1级全连接层对于所述训练图片的分类得分,得到所述第P-1级全连接层对于所述训练图片的分类概率,P∈[2,N];
    在所述第P-1级全连接层对于所述训练图片的分类概率大于或等于第P概率阈值的情况下,将所述第P-1级全连接层对于所述训练图片的分类标签修改为所述预设分类标签,并将所述预设分类标签作为第P级全连接层对于所述训练图片的分类标签;
    在所述第P-1级全连接层对于所述训练图片的分类概率小于所述第P概率阈值的情况下,保持所述第P-1级全连接层对于所述训练图片的分类标签不变,并将所述第P-1级全连接层对于所述训练图片的分类标签作为所述第P级全连接层对于所述训练图片的分类标签。
  8. 根据权利要求7所述的方法,其中,所述根据多级损失函数确定所述神经网络模型的损失函数,并且根据所述神经网络模型的损失函数调整所述多级神经网络层的网络参数和所述多级全连接层的网络参数,基于调整后的所述网络参数重新计算所述神经网络模型的损失函数,直至所述神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型,包括:
    根据多级损失函数确定所述神经网络模型的损失函数;
    计算所述神经网络模型的损失函数对所述多级神经网络层的网络参数的偏导数和所述神经网络模型的损失函数对所述多级全连接层的网络参数的偏导数,并在一级全连接层的一张图片的分类标签为所述预设分类标签的情况下,将目标偏导数设置为零,其中,所述目标偏导数为将所述一张训练图片的分类得分代入所述神经网络模型的损失函数后,得到的损失函数对所述一级全连接层的网络参数的偏导数;
    根据所述偏导数调整所述多级神经网络层的网络参数和所述多级全连接层的网络参数,并基于调整后的所述网络参数重新计算所述神经网络模型的损失函数,直至所述神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型。
  9. 一种图片分类装置,包括:
    图片集获取模块,设置为获取待分类的图片集,所述图片集包括至少两张图片;
    分类结果生成模块,设置为将所述图片集输入至预先训练的当前级分类识别模型中,得到每张图片的分类得分;
    分类识别结果生成模块,设置为在所述每张图片的分类得分满足预设条件的情况下,根据所述每张图片的分类得分确定所述每张图片的分类识别结果;在所述每张图片的分类得分不满足所述预设条件的情况下,将所述每张图片输入至预先训练的下一级分类识别模型中,直至得到所述每张图片的分类识别结果;其中,每级分类识别模型基于神经网络训练生成。
  10. 一种分类识别模型的生成装置,包括:
    训练样本获取模块,设置为获取训练样本,所述训练样本包括训练图片和所述训练图片的原始分类标签;
    分类得分和分类标签生成模块,设置为将所述训练图片和所述训练图片的原始分类标签输入至神经网络模型中,得到多级神经网络层中每级神经网络层对于所述训练图片的分类得分,以及,得到多级全连接层中每级全连接层对于所述训练图片的分类得分和分类标签,所述神经网络模型包括N级神经网络层和N-1级全连接层,第i级全连接层位于第i+1级神经网络层之后,N≥3,i∈[1,N-1];
    第一级损失函数生成模块,设置为根据第一级神经网络层对于所述训练图片的分类得分和所述训练图片的原始分类标签,得到所述第一级神经网络层的第一级损失函数;
    第P级损失函数生成模块,设置为根据第P-1级全连接层对于所述训练图片的分类得分和分类标签,得到第P级神经网络层的第P级损失函数,P∈[2,N];
    分类识别模型生成模块,设置为根据多级损失函数确定所述神经网络模型的损失函数,并且根据所述神经网络模型的损失函数调整所述多级神经网络层的网络参数和所述多级全连接层的网络参数,基于调整后的所述网络参数重新计算所述神经网络模型的损失函数,直至所述神经网络模型的损失函数达到预设函数值,将每级神经网络层作为对应级的分类识别模型。
  11. 一种设备,包括:
    至少一个处理器;
    存储器,设置为存储至少一个程序;
    当所述至少一个程序被所述至少一个处理器执行,使得所述至少一个处理器实现如权利要求1-8任一所述的方法。
  12. 一种计算机可读存储介质,存储有计算机程序,所述程序被处理器执行时实现如权利要求1-8任一所述的方法。
PCT/CN2019/120903 2018-11-30 2019-11-26 图片分类、分类识别模型的生成方法、装置、设备及介质 WO2020108474A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811457125.1 2018-11-30
CN201811457125.1A CN109583501B (zh) 2018-11-30 2018-11-30 图片分类、分类识别模型的生成方法、装置、设备及介质

Publications (1)

Publication Number Publication Date
WO2020108474A1 true WO2020108474A1 (zh) 2020-06-04

Family

ID=65926768

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120903 WO2020108474A1 (zh) 2018-11-30 2019-11-26 图片分类、分类识别模型的生成方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN109583501B (zh)
WO (1) WO2020108474A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783861A (zh) * 2020-06-22 2020-10-16 北京百度网讯科技有限公司 数据分类方法、模型训练方法、装置和电子设备
CN112084944A (zh) * 2020-09-09 2020-12-15 清华大学 一种动态演化表情的识别方法与***
CN112182269A (zh) * 2020-09-27 2021-01-05 北京达佳互联信息技术有限公司 图像分类模型的训练、图像分类方法、装置、设备及介质
CN112465042A (zh) * 2020-12-02 2021-03-09 中国联合网络通信集团有限公司 一种分类网络模型的生成方法及装置
CN113361451A (zh) * 2021-06-24 2021-09-07 福建万福信息技术有限公司 基于多级模型和预置点自动调节的生态环境目标识别方法
CN113935407A (zh) * 2021-09-29 2022-01-14 光大科技有限公司 一种异常行为识别模型确定方法及装置
US20220215792A1 (en) * 2019-05-23 2022-07-07 Lg Electronics Inc. Display device
US12033385B2 (en) 2020-02-27 2024-07-09 Lg Electronics Inc. Display device

Families Citing this family (12)

Publication number Priority date Publication date Assignee Title
CN109583501B (zh) * 2018-11-30 2021-05-07 广州市百果园信息技术有限公司 图片分类、分类识别模型的生成方法、装置、设备及介质
CN110222724B (zh) * 2019-05-15 2023-12-19 平安科技(深圳)有限公司 一种图片实例检测方法、装置、计算机设备及存储介质
CN110210356A (zh) * 2019-05-24 2019-09-06 厦门美柚信息科技有限公司 一种图片鉴别方法、装置及***
CN110738267B (zh) * 2019-10-18 2023-08-22 北京达佳互联信息技术有限公司 图像分类方法、装置、电子设备及存储介质
CN111738290B (zh) * 2020-05-14 2024-04-09 北京沃东天骏信息技术有限公司 图像检测方法、模型构建和训练方法、装置、设备和介质
CN111782905B (zh) * 2020-06-29 2024-02-09 中国工商银行股份有限公司 一种数据组包方法和装置、终端设备和可读存储介质
CN112286440A (zh) * 2020-11-20 2021-01-29 北京小米移动软件有限公司 触摸操作分类、模型训练方法及装置、终端及存储介质
CN112445410B (zh) * 2020-12-07 2023-04-18 北京小米移动软件有限公司 触控事件识别方法、装置及计算机可读存储介质
CN112686289A (zh) * 2020-12-24 2021-04-20 微梦创科网络科技(中国)有限公司 图片分类方法和装置
CN112784985A (zh) * 2021-01-29 2021-05-11 北京百度网讯科技有限公司 神经网络模型的训练方法及装置、图像识别方法及装置
CN113063843A (zh) * 2021-02-22 2021-07-02 广州杰赛科技股份有限公司 一种管道缺陷识别方法、装置及存储介质
CN113705735A (zh) * 2021-10-27 2021-11-26 北京值得买科技股份有限公司 一种基于海量信息的标签分类方法及***

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103679185A (zh) * 2012-08-31 2014-03-26 富士通株式会社 卷积神经网络分类器***、其训练方法、分类方法和用途
CN106096670A (zh) * 2016-06-17 2016-11-09 北京市商汤科技开发有限公司 级联卷积神经网络训练和图像检测方法、装置及***
CN107403198A (zh) * 2017-07-31 2017-11-28 广州探迹科技有限公司 一种基于级联分类器的官网识别方法
US20180211164A1 (en) * 2017-01-23 2018-07-26 Fotonation Limited Method of training a neural network
CN109583501A (zh) * 2018-11-30 2019-04-05 广州市百果园信息技术有限公司 图片分类、分类识别模型的生成方法、装置、设备及介质

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US20170161592A1 (en) * 2015-12-04 2017-06-08 Pilot Ai Labs, Inc. System and method for object detection dataset application for deep-learning algorithm training
CN108875456B (zh) * 2017-05-12 2022-02-18 北京旷视科技有限公司 目标检测方法、目标检测装置和计算机可读存储介质
CN108509978B (zh) * 2018-02-28 2022-06-07 中南大学 基于cnn的多级特征融合的多类目标检测方法及模型

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN103679185A (zh) * 2012-08-31 2014-03-26 富士通株式会社 卷积神经网络分类器***、其训练方法、分类方法和用途
CN106096670A (zh) * 2016-06-17 2016-11-09 北京市商汤科技开发有限公司 级联卷积神经网络训练和图像检测方法、装置及***
US20180211164A1 (en) * 2017-01-23 2018-07-26 Fotonation Limited Method of training a neural network
CN107403198A (zh) * 2017-07-31 2017-11-28 广州探迹科技有限公司 一种基于级联分类器的官网识别方法
CN109583501A (zh) * 2018-11-30 2019-04-05 广州市百果园信息技术有限公司 图片分类、分类识别模型的生成方法、装置、设备及介质

Cited By (12)

Publication number Priority date Publication date Assignee Title
US20220215792A1 (en) * 2019-05-23 2022-07-07 Lg Electronics Inc. Display device
US11798457B2 (en) * 2019-05-23 2023-10-24 Lg Electronics Inc. Display device
US12033385B2 (en) 2020-02-27 2024-07-09 Lg Electronics Inc. Display device
CN111783861A (zh) * 2020-06-22 2020-10-16 北京百度网讯科技有限公司 数据分类方法、模型训练方法、装置和电子设备
CN112084944A (zh) * 2020-09-09 2020-12-15 清华大学 一种动态演化表情的识别方法与***
CN112182269A (zh) * 2020-09-27 2021-01-05 北京达佳互联信息技术有限公司 图像分类模型的训练、图像分类方法、装置、设备及介质
CN112182269B (zh) * 2020-09-27 2023-11-28 北京达佳互联信息技术有限公司 图像分类模型的训练、图像分类方法、装置、设备及介质
CN112465042A (zh) * 2020-12-02 2021-03-09 中国联合网络通信集团有限公司 一种分类网络模型的生成方法及装置
CN112465042B (zh) * 2020-12-02 2023-10-24 中国联合网络通信集团有限公司 一种分类网络模型的生成方法及装置
CN113361451A (zh) * 2021-06-24 2021-09-07 福建万福信息技术有限公司 基于多级模型和预置点自动调节的生态环境目标识别方法
CN113361451B (zh) * 2021-06-24 2024-04-30 福建万福信息技术有限公司 基于多级模型和预置点自动调节的生态环境目标识别方法
CN113935407A (zh) * 2021-09-29 2022-01-14 光大科技有限公司 一种异常行为识别模型确定方法及装置

Also Published As

Publication number Publication date
CN109583501A (zh) 2019-04-05
CN109583501B (zh) 2021-05-07

Similar Documents

Publication Publication Date Title
WO2020108474A1 (zh) 图片分类、分类识别模型的生成方法、装置、设备及介质
WO2021155706A1 (zh) 利用不平衡正负样本对业务预测模型训练的方法及装置
CN108875807B (zh) 一种基于多注意力多尺度的图像描述方法
US11537884B2 (en) Machine learning model training method and device, and expression image classification method and device
WO2019228317A1 (zh) 人脸识别方法、装置及计算机可读介质
CN110674850A (zh) 一种基于注意力机制的图像描述生成方法
CN111507993A (zh) 一种基于生成对抗网络的图像分割方法、装置及存储介质
CN111598968B (zh) 一种图像处理方法、装置、存储介质和电子设备
WO2021051987A1 (zh) 神经网络模型训练的方法和装置
CN109033107A (zh) 图像检索方法和装置、计算机设备和存储介质
Zeng et al. CNN model design of gesture recognition based on tensorflow framework
CN111898703B (zh) 多标签视频分类方法、模型训练方法、装置及介质
JP6908302B2 (ja) 学習装置、識別装置及びプログラム
WO2021042857A1 (zh) 图像分割模型的处理方法和处理装置
CN114842267A (zh) 基于标签噪声域自适应的图像分类方法及***
US20220132050A1 (en) Video processing using a spectral decomposition layer
CN108492301A (zh) 一种场景分割方法、终端及存储介质
CN112580728B (zh) 一种基于强化学习的动态链路预测模型鲁棒性增强方法
CN113987236B (zh) 基于图卷积网络的视觉检索模型的无监督训练方法和装置
EP3769270A1 (en) A method, an apparatus and a computer program product for an interpretable neural network representation
CN113330462A (zh) 使用软最近邻损失的神经网络训练
CN112749737A (zh) 图像分类方法及装置、电子设备、存储介质
CN111079930B (zh) 数据集质量参数的确定方法、装置及电子设备
CN115238909A (zh) 一种基于联邦学习的数据价值评估方法及其相关设备
CN112883931A (zh) 基于长短期记忆网络的实时真假运动判断方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19888992

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19888992

Country of ref document: EP

Kind code of ref document: A1