CN109711386B - Method and device for obtaining recognition model, electronic equipment and storage medium - Google Patents

Info

Publication number: CN109711386B
Application number: CN201910023511.8A
Authority: CN (China)
Prior art keywords: identity, class, recognition model, sample data, probability
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN109711386A
Inventor: Yang Fan (杨帆)
Current Assignee: Beijing Dajia Internet Information Technology Co Ltd
Original Assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910023511.8A
Publication of CN109711386A (application) and CN109711386B (granted patent)

Landscapes

  • Image Analysis (AREA)

Abstract

The disclosure relates to a method and a device for obtaining a recognition model, an electronic device and a storage medium, belonging to the technical field of deep learning. The method includes: in the process of training the recognition model, determining a first identity class to which each sample data belongs according to a first recognition model; determining the weight of a first probability in the model training process according to the first identity class and a second identity class to which the sample data belongs, where the first probability is the probability that the sample data belongs to the second identity class, and the second identity class is the identity class to which the sample data actually belongs; and performing iterative training on the first recognition model according to the first probability and the weight of the sample data to obtain a second recognition model. Because the loss value of each sample data is calculated from the weighted first probability during model training, the correlation between classes is fully considered, so that the accuracy of model recognition is improved.

Description

Method and device for obtaining recognition model, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of deep learning technologies, and in particular, to a method and an apparatus for obtaining a recognition model, an electronic device, and a storage medium.
Background
With the development of artificial intelligence, deep learning technology is more and more widely applied. In particular, when identity information needs to be identified from an image, it is generally identified by an image recognition model. Therefore, model training is typically required to build the image recognition model before image recognition. For example, before the age corresponding to the face in a face image is recognized by an image recognition model, model training is required to establish the face age recognition model.
In the related art, when an image recognition model is trained, ages are divided into several age groups, the probability that each of a plurality of sample images belongs to each age group is determined, and the image recognition model is trained based on the probabilities of the plurality of sample images in each age group.
In the above related art, when the recognition model is trained, model training is performed directly on the probabilities of the plurality of sample images in each age group; this ignores the correlation between age groups, which may make model training inaccurate, and thus the accuracy of the recognition model obtained by training is low.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for obtaining a recognition model, which can overcome the problem in the related art that, when a recognition model is trained, model training is performed directly on the probabilities of a plurality of sample images in each age group, which may make model training inaccurate and thus result in low accuracy of the recognition model obtained by training.
According to a first aspect of embodiments of the present disclosure, there is provided a method of obtaining a recognition model, the method including:
in the process of training the recognition model, for each sample data, determining a first identity class to which the sample data belongs according to the first recognition model;
determining the weight of a first probability in a model training process according to a first identity class and a second identity class to which the sample data belongs, wherein the first probability is the probability that the sample data belongs to the second identity class, and the second identity class is the identity class to which the sample data actually belongs;
and performing iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model.
In a possible implementation manner, the determining, according to the first identity class and the second identity class to which the sample data belongs, a weight of the first probability in a model training process includes:
determining a discrimination error between the first identity class and the second identity class;
and determining the weight of the first probability in a model training process according to the discrimination error, wherein the discrimination error is in positive correlation with the weight.
In another possible implementation manner, the determining a discrimination error between the first identity class and the second identity class includes:
determining a first class value for the first identity class and a second class value for the second identity class;
and taking the difference value between the first class value and the second class value as the discrimination error between the first identity class and the second identity class.
In another possible implementation manner, after performing iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model, the method further includes:
and identifying the target data according to the second identification model to obtain the identity information corresponding to the target data.
In another possible implementation manner, the recognizing the target data according to the second recognition model to obtain the identity information corresponding to the target data includes:
inputting the target data into the second recognition model, and outputting the probability that the target data respectively belongs to each specified identity category;
and performing weighted summation on the probabilities that the target data respectively belongs to each designated identity category and the identity values corresponding to the designated identity categories, to obtain the identity value corresponding to the target data.
In another possible implementation manner, the target data is an image of a target object or an audio signal of the target object, and the identity value is an age of the target object.
In another possible implementation manner, the iteratively training the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model includes:
determining a loss value of each designated identity category according to the first probability and the weight of each sample data;
determining a loss value of the first recognition model according to the loss value of each designated identity category;
determining the first recognition model as the second recognition model when the loss value satisfies an iteration stop condition;
and when the loss value does not meet the iteration stop condition, updating the first identification model until the loss value meets the iteration stop condition, and determining the first identification model meeting the iteration stop condition as the second identification model.
In another possible implementation manner, the determining, according to the first recognition model, a first identity class to which the sample data belongs includes:
inputting the sample data into the first identification model, and outputting the probability that the sample data respectively belongs to each specified identity category;
selecting a maximum probability from the probabilities that the sample data respectively belongs to each designated identity category;
and determining the specified identity class corresponding to the maximum probability as the first identity class to which the sample data belongs.
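Taken together, the steps of the first aspect can be sketched in code. This is a minimal illustrative sketch, not the patented implementation: the linear weight function `1 + |first_class - second_class|` is an assumption consistent with the stated positive correlation between discrimination error and weight, and the class probabilities are hypothetical.

```python
import math

def predicted_class(probs):
    """Index of the designated identity class with the highest probability."""
    return max(range(len(probs)), key=lambda i: probs[i])

def weight_for_sample(first_class, second_class):
    """Weight of the first probability: assumed 1 + |discrimination error|,
    so the weight grows with the error (the stated positive correlation)."""
    return 1.0 + abs(first_class - second_class)

def weighted_loss(probs, true_class):
    """Weighted cross-entropy term for one sample: -w * log(p_true)."""
    w = weight_for_sample(predicted_class(probs), true_class)
    return -w * math.log(probs[true_class])

# Hypothetical 8-class output: the model puts most mass on class 5
# while the sample actually belongs to class 7, so the weight is 3.
probs = [0.01, 0.01, 0.02, 0.05, 0.10, 0.40, 0.21, 0.20]
print(round(weighted_loss(probs, 7), 4))  # 3 * -log(0.20) ≈ 4.8283
```

A misclassification far from the true class thus contributes a larger loss than one into an adjacent class, which is how the inter-class correlation enters the training.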
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for obtaining a recognition model, the apparatus including:
the first determination module is configured to determine, according to a first recognition model, a first identity category to which each sample data belongs in a process of training the recognition model;
a second determining module configured to determine, according to a first identity class and a second identity class to which the sample data belongs, a weight of a first probability in a model training process, the first probability being a probability that the sample data belongs to the second identity class, the second identity class being an identity class to which the sample data actually belongs;
and the training module is configured to perform iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model.
In one possible implementation, the second determining module is further configured to determine a discrimination error between the first identity class and the second identity class; determining the weight of the first probability in the model training process according to the discrimination error; wherein the discrimination error is positively correlated with the weight.
In another possible implementation, the second determining module is further configured to determine a first class value of the first identity class and a second class value of the second identity class; and taking the difference value between the first class value and the second class value as the discrimination error between the first identity class and the second identity class.
In another possible implementation manner, the apparatus further includes:
and the identification module is configured to identify the target data according to the second identification model to obtain identity information corresponding to the target data.
In another possible implementation manner, the identification module is further configured to input the target data into the second recognition model, and output the probability that the target data respectively belongs to each designated identity category; and perform weighted summation on the probabilities that the target data respectively belongs to each designated identity category and the identity values corresponding to the designated identity categories, to obtain the identity value corresponding to the target data.
In another possible implementation manner, the target data is an image of a target object or an audio signal of the target object, and the identity value is an age of the target object.
In another possible implementation manner, the training module is further configured to determine a loss value of each specified identity class according to the first probability and the weight of each sample data; determining a loss value of the first recognition model according to the loss value of each designated identity category; determining the first recognition model as the second recognition model when the loss value satisfies an iteration stop condition; and when the loss value does not meet the iteration stop condition, updating the first identification model until the loss value meets the iteration stop condition, and determining the first identification model meeting the iteration stop condition as the second identification model.
In another possible implementation manner, the first determining module is further configured to input the sample data into the first recognition model, and output probabilities that the sample data respectively belong to each designated identity category; selecting a maximum probability from the probabilities that the sample data respectively belongs to each designated identity category; and determining the specified identity class corresponding to the maximum probability as the first identity class to which the sample data belongs.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
in the process of training the recognition model, for each sample data, determining a first identity class to which the sample data belongs according to the first recognition model;
determining the weight of a first probability in a model training process according to a first identity class and a second identity class to which the sample data belongs, wherein the first probability is the probability that the sample data belongs to the second identity class, and the second identity class is the identity class to which the sample data actually belongs;
and performing iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having instructions which, when executed by a processor of an electronic device, enable the electronic device to perform a method of obtaining a recognition model, the method comprising:
in the process of training the recognition model, for each sample data, determining a first identity class to which the sample data belongs according to the first recognition model;
determining the weight of a first probability in a model training process according to a first identity class and a second identity class to which the sample data belongs, wherein the first probability is the probability that the sample data belongs to the second identity class, and the second identity class is the identity class to which the sample data actually belongs;
and performing iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model.
According to a fifth aspect of embodiments of the present disclosure, there is provided an application program, when instructions in the application program are executed by a processor of an electronic device, the application program enabling the electronic device to execute the method for acquiring a recognition model, the method including:
in the process of training the recognition model, for each sample data, determining a first identity class to which the sample data belongs according to the first recognition model;
determining the weight of a first probability in a model training process according to a first identity class and a second identity class to which the sample data belongs, wherein the first probability is the probability that the sample data belongs to the second identity class, and the second identity class is the identity class to which the sample data actually belongs;
and performing iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiment of the present disclosure, in the process of model training, the first identity class with the highest probability for the sample data is determined, and according to the first identity class and the second identity class to which the sample data actually belongs, the weight occupied in the training process by the first probability that the sample data is recognized as the second identity class to which it actually belongs is determined. By improving the cross-entropy loss function and calculating the loss value of the sample data with the weighted first probability for model training, the correlation between classes in the model training process is fully considered, so that the accuracy of model recognition is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of obtaining a recognition model according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a method of obtaining a recognition model in accordance with an exemplary embodiment.
FIG. 3 is a block diagram illustrating an apparatus for obtaining a recognition model according to an example embodiment.
FIG. 4 is a block diagram illustrating an electronic device for obtaining a recognition model in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a method of obtaining a recognition model according to an exemplary embodiment, where the method of obtaining a recognition model, as shown in fig. 1, includes:
in step S101, in the process of training the recognition model, for each sample data, according to the first recognition model, a first identity class to which the sample data belongs is determined.
In step S102, a weight of a first probability in a model training process is determined according to a first identity class and a second identity class to which the sample data belongs, where the first probability is a probability that the sample data belongs to the second identity class, and the second identity class is an identity class to which the sample data actually belongs.
In step S103, the first recognition model is iteratively trained according to the first probability of the sample data and the weight, so as to obtain a second recognition model.
In one possible implementation manner, the determining, according to the first identity class and the second identity class to which the sample data belongs, a weight of the first probability in the model training process includes:
determining a discrimination error between the first identity class and the second identity class;
and determining the weight of the first probability in the model training process according to the discrimination error, wherein the discrimination error is positively correlated with the weight.
In another possible implementation manner, the determining a discrimination error between the first identity class and the second identity class includes:
determining a first class value for the first identity class and a second class value for the second identity class;
and taking the difference value between the first class value and the second class value as the discrimination error between the first identity class and the second identity class.
In another possible implementation manner, after performing iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model, the method further includes:
and identifying the target data according to the second identification model to obtain the identity information corresponding to the target data.
In another possible implementation manner, the recognizing the target data according to the second recognition model to obtain the identity information corresponding to the target data includes:
inputting the target data into the second recognition model, and outputting the probability that the target data respectively belongs to each specified identity category;
and carrying out weighted summation on the probability that the target data respectively belongs to each specified identity category and the identity value corresponding to each specified identity category to obtain the identity value corresponding to the target data.
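The weighted summation in this step amounts to taking the expectation of the identity values under the model's output distribution. A small sketch, where both the probabilities and the per-group identity values (mid-points of hypothetical age groups) are illustrative assumptions:

```python
def identity_value(probs, class_values):
    """Weighted sum of each designated identity class's value by the
    probability that the target data belongs to that class."""
    return sum(p * v for p, v in zip(probs, class_values))

# Hypothetical 4-class example: mid-point ages of groups 0-4, 5-9, 10-14, 15-19.
class_values = [2.0, 7.0, 12.0, 17.0]
probs = [0.1, 0.2, 0.6, 0.1]
print(identity_value(probs, class_values))  # 0.1*2 + 0.2*7 + 0.6*12 + 0.1*17
```

Averaging over all classes rather than taking only the most probable one yields a continuous identity value (e.g. an age) instead of a coarse group label.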
In another possible implementation, the target data is an image of a target object or an audio signal of the target object, and the identity value is an age of the target object.
In another possible implementation manner, the iteratively training the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model includes:
determining a loss value of each designated identity category according to the first probability and the weight of each sample data;
determining a loss value of the first recognition model according to the loss value of each designated identity category;
determining the first recognition model as the second recognition model when the loss value satisfies an iteration stop condition;
and when the loss value does not meet the iteration stop condition, updating the first identification model until the loss value meets the iteration stop condition, and determining the first identification model meeting the iteration stop condition as the second identification model.
In another possible implementation manner, the determining, according to the first recognition model, a first identity class to which the sample data belongs includes:
inputting the sample data into the first identification model, and outputting the probability that the sample data respectively belongs to each specified identity category;
selecting a maximum probability from the probabilities that the sample data respectively belongs to each designated identity category;
and determining the specified identity class corresponding to the maximum probability as the first identity class to which the sample data belongs.
In the embodiment of the present disclosure, in the process of model training, the first identity class with the highest probability for the sample data is determined, and according to the first identity class and the second identity class to which the sample data actually belongs, the weight occupied in the training process by the first probability that the sample data is recognized as the second identity class to which it actually belongs is determined. By improving the cross-entropy loss function and calculating the loss value of the sample data with the weighted first probability for model training, the correlation between classes in the model training process is fully considered, so that the accuracy of model recognition is improved.
Fig. 2 is a flowchart illustrating a method of obtaining a recognition model according to an exemplary embodiment, where the method of obtaining a recognition model, as shown in fig. 2, includes:
in step S201, in the process of training the recognition model, for each sample data, the electronic device determines, according to the first recognition model, a first identity class to which the sample data belongs.
In an embodiment of the present disclosure, the electronic device trains a second recognition model based on a first recognition model and a plurality of sample data, and then identifies the identity information corresponding to target data through the second recognition model. The first recognition model can be an initial recognition model or an intermediate recognition model in the iterative training process; it may be a deep-learning-based neural network model, such as a VGG (Visual Geometry Group) network model. The identity information may be age, gender, height, and the like; correspondingly, the identity category may be an age group, a gender, a height range, and the like. The sample data is data of a known identity class; for example, the sample data may be a sample image or a sample audio signal. Accordingly, the target data may also be an image of a target object or an audio signal of the target object. The target object may be a human, an animal, or the like.
The first identity class may be a designated identity class for which the probability of sample data output is greatest after the sample data is input into the first recognition model. Accordingly, this step can be realized by the following steps (1) to (3), including:
(1) the electronic equipment inputs the sample data into the first identification model and outputs the probability that the sample data respectively belongs to each specified identity category.
The electronic equipment is preset with a plurality of specified identity categories, and the first identification model is used for identifying the probability that input sample data belongs to each specified identity category. In this step, the electronic device inputs the sample data into a first identification model, the first identification model performs classification identification on the sample data, and the first identification model outputs the probability that the sample data is identified into each designated identity category.
The number of the designated identity categories can be set and changed according to the identity information, and in the embodiment of the disclosure, the number of the designated identity categories is not specifically limited; for example, when the identity information is age, the number of specified identity categories may be 10 or 13, etc. When the identity information is gender, the number of assigned identity categories may be 2. When the identity information is height, the number of specified identity categories may be 10.
For example, when the identity information is age, the plurality of designated identity categories are a plurality of age groups, the number of the plurality of age groups can be 13, and the number of the plurality of age groups is 0 to 4, 5 to 9, 10 to 14, 15 to 19, 20 to 24, 25 to 29, 30 to 34, 35 to 39, 40 to 44, 45 to 49, 50 to 54, 55 to 59, and greater than 60.
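As an illustration, an age can be mapped to its group among these 13 age groups; the 1-based numbering from youngest to oldest is an assumption consistent with the numbering example given later in the description.

```python
def age_group_index(age):
    """Return the 1-based index of the age group (0-4, 5-9, ..., 55-59, >=60)
    that the given integer age falls into."""
    if age >= 60:
        return 13
    return age // 5 + 1

print(age_group_index(22))  # 22 falls in the 20-24 group, index 5
```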
When the identity information is gender and the plurality of designated identity categories are genders, the number of the plurality of designated identity categories is 2, and the plurality of designated identity categories are male and female respectively.
When the identity information is height, the plurality of designated identity categories are a plurality of height ranges; the number of height ranges can be 10, namely 0 to 100 cm, 100 to 110 cm, 110 to 120 cm, 120 to 130 cm, 130 to 140 cm, 140 to 150 cm, 150 to 160 cm, 160 to 170 cm, 170 to 180 cm, and greater than 180 cm.
It should be noted that, for a plurality of sample data, the probability that each sample data belongs to each designated identity category is determined according to step (1). Moreover, the plurality of sample data need to cover all of the designated identity categories; that is, the plurality of sample data need to include sample data of each designated identity category, and the samples are balanced across the designated identity categories, so that the number of sample data in any one designated identity category is at most twice that of any other. That is, for any two designated identity categories, the ratio of the first number to the second number is not larger than 2, where the first number is the number of sample data of one designated identity category and the second number is the number of sample data of the other.
For example, when the identity information is age, the plurality of designated identity categories are a plurality of age groups, the number of the plurality of age groups can be 13, and the number of the plurality of age groups is 0 to 4, 5 to 9, 10 to 14, 15 to 19, 20 to 24, 25 to 29, 30 to 34, 35 to 39, 40 to 44, 45 to 49, 50 to 54, 55 to 59, and greater than 60. The number of samples in each class may be 50, 55, 60, 65, 70, 75, 80, 75, 70, 65, 60, 55, 50, respectively.
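The balance condition above (no class has more than twice the samples of any other) can be checked with a short helper; the counts are the hypothetical ones from the example:

```python
def samples_balanced(counts, max_ratio=2.0):
    """True if the largest per-class sample count is at most
    max_ratio times the smallest, as required above."""
    return max(counts) <= max_ratio * min(counts)

counts = [50, 55, 60, 65, 70, 75, 80, 75, 70, 65, 60, 55, 50]
print(samples_balanced(counts))  # 80 <= 2 * 50, so True
```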
(2) The electronic device selects a maximum probability from the probabilities that the sample data respectively belongs to each of the specified identity categories.
(3) And the electronic equipment determines the specified identity class corresponding to the maximum probability as the first identity class to which the sample data belongs.
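Steps (1) to (3) correspond to a softmax over the model's raw per-class scores followed by an argmax; the softmax output layer is an assumption about the model, and the scores are hypothetical.

```python
import math

def softmax(logits):
    """Step (1): convert raw per-class scores into probabilities."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def first_identity_class(logits):
    """Steps (2)-(3): the designated class with the maximum probability."""
    probs = softmax(logits)
    return max(range(len(probs)), key=lambda i: probs[i])

print(first_identity_class([0.2, 1.5, 3.1, 0.4]))  # index 2 has the largest score
```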
In step S202, the electronic device determines a discrimination error between the first identity class and the second identity class.
The second identity class is the designated identity class to which the sample data actually belongs; the discrimination error is used to indicate the magnitude of the error made when the sample data is classified into the first identity class. In one possible implementation, the electronic device may store the discrimination error between each pair of identity classes; in this step, the electronic device determines the discrimination error between the first identity class and the second identity class from the stored discrimination errors according to the first identity class and the second identity class.
In another possible implementation manner, the discrimination error between the first identity class and the second identity class may be determined according to a first class value of the first identity class and a second class value of the second identity class; accordingly, this step can be realized by the following steps (1) to (2), including:
(1) the electronic device determines a first class value for the first identity class and a second class value for the second identity class.
In one possible implementation, the first class value may be a first class number of the first identity class, and correspondingly, the second class value may be a second class number of the second identity class. Therefore, the step can be: the electronic device determines a first category number of the first identity category, determines the first category number as a first category value, determines a second category number of the second identity category, and determines the second category number as a second category value.
Before determining the class values, the electronic device numbers the specified identity categories. The numbering sequence may be a system default or user-defined, or the categories may be numbered according to their content. For example, when the designated identity category is an age group, the age groups may be numbered from the largest age to the smallest, or from the smallest age to the largest.
For example, when the identity information is age, the plurality of designated identity categories are a plurality of age groups. The number of age groups may be 13: 0 to 4, 5 to 9, 10 to 14, 15 to 19, 20 to 24, 25 to 29, 30 to 34, 35 to 39, 40 to 44, 45 to 49, 50 to 54, 55 to 59, and greater than 60. The 13 age groups are numbered 1 to 13 in order of increasing age. The first identity category is 20 to 24, so the first category number is 5; the second identity category is 30 to 34, so the second category number is 7.
In another possible implementation, the first category value may be a mean value of a range of values of the first identity category, and correspondingly, the second category value may be a mean value of a range of values of the second identity category. Therefore, the step can be: the electronic equipment determines a first average value of a numerical range of a first identity category, and determines the first average value as a first category value; a second mean of the range of values for the second identity class is determined, the second mean being determined as the second class value.
For example, when the designated identity category is an age group, the numerical range of the first age group is 0, 1, 2, 3, so the first class value is 1.5; correspondingly, the numerical range of the second age group is 4, 5, 6, 7, so the second class value is 5.5.
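The mean-based class value can be sketched as follows (the helper name is hypothetical, not from the source):

```python
def class_value_from_range(values):
    """Mean of the numerical range of an identity class
    (the second possible implementation of the class value)."""
    return sum(values) / len(values)

# First age group 0-3 and second age group 4-7 from the example above.
print(class_value_from_range([0, 1, 2, 3]))  # → 1.5
print(class_value_from_range([4, 5, 6, 7]))  # → 5.5
```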
(2) The electronic device uses the difference between the first category value and the second category value as the discrimination error between the first identity category and the second identity category.
The larger the difference between the first class value and the second class value, the larger the discrimination error between the first identity class and the second identity class, and the larger the error after the sample data is identified as the first identity class.
For example, when the identity information is age, the 13 age groups listed above are numbered 1 to 13 in order of increasing age. When the first identity category is 20 to 24 and the second identity category is 30 to 34, the first category number is 5 and the second category number is 7, so the discrimination error is 2. When the first identity category is 15 to 19 and the second identity category is 30 to 34, the first category number is 4 and the second category number is 7, so the discrimination error is 3. The error when the sample data is identified as 15 to 19 is therefore greater than the error when it is identified as 20 to 24.
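Steps (1) and (2) can be sketched as follows, using the category numbers from the example above (the function name is illustrative):

```python
def discrimination_error(first_class_value, second_class_value):
    """Step (2): the difference between the first class value and the
    second class value is the discrimination error between the classes."""
    return abs(first_class_value - second_class_value)

# Category numbers from the 13 age groups above.
print(discrimination_error(5, 7))  # 20-24 vs 30-34 → 2
print(discrimination_error(4, 7))  # 15-19 vs 30-34 → 3
```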
In step S203, the electronic device determines a weight of the first probability in a model training process according to the discriminant error.
The first probability is the probability that the sample data belongs to the second identity class. The discrimination error is positively correlated with the weight; the electronic device can determine the weight of the first probability in the model training process from the discrimination error by any algorithm that positively correlates the two. For example, the algorithm may sum the discrimination error with a specified value, which may be 1, 1.1, 1.2, or the like. When the specified value is 1, the electronic device determines the weight of the first probability in the model training process from the discrimination error by the following formula I.
The formula I is as follows: β = |t - j| + 1
where β is the weight of the first probability in the model training process, |t - j| is the discrimination error between the first identity class and the second identity class, t is the class value of the first identity class, and j is the class value of the second identity class.
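A minimal sketch of formula I, assuming the specified value is 1 and that t and j are integer class values (the function name is hypothetical):

```python
def first_probability_weight(t, j):
    """Formula I: beta = |t - j| + 1.
    t and j are the class values of the first and second identity classes;
    a larger discrimination error yields a larger training weight."""
    return abs(t - j) + 1

print(first_probability_weight(5, 7))  # → 3
print(first_probability_weight(7, 7))  # → 1 (correct prediction: no extra weight)
```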
In the embodiment of the present disclosure, the larger the difference between the first identity class and the second identity class, the larger the discrimination error between them. Because the first probability is weighted by the discrimination error, a larger difference between the two classes gives the first probability a larger weight in the model training process. The loss value of the sample data is then calculated from the weighted first probability, so the larger the error with which the sample data is identified as the first identity class, the larger its loss value, and the first recognition model continues to iterate during training. This improves the precision of model training and makes recognition more accurate. Note that the first probability is the probability that the sample data belongs to the second identity class, that is, the probability that the first recognition model correctly identifies the sample data as the second identity class.
In step S204, the electronic device performs iterative training on the first recognition model according to the first probability of the sample data and the weight, so as to obtain a second recognition model.
In a possible implementation manner, the electronic device determines the loss value of sample data through a loss function, and stops iteration to obtain the second recognition model when the sum of the loss values of a plurality of sample data satisfies an iteration stop condition. Accordingly, the process can be realized by the following steps (1) to (4), including:
(1) the electronic device determines a loss value for each of the specified identity classes based on the first probability and the weight for each of the sample data.
For each specified identity category, the electronic device determines a plurality of first sample data classified into the specified identity category; and determining a third quantity of the first sample data, and determining a loss value of the specified identity class according to the third quantity, the first probability of each first sample data and the weight of the first probability by the following formula II.
The formula II is as follows:
L_j = -(1/N) Σ_{i=1}^{N} β_i · log(p_i(j))

where L_j is the loss value of the specified identity class, N is the third quantity, p_i(j) is the first probability that the i-th first sample data is classified into the j-th class, β_i is the weight of that first probability, i is the serial number of the first sample data, and j is the specified identity class.
It should be noted that, when the first identity class is the same as the second identity class, that is, t = j, the loss value of the specified identity class is determined by the following formula III.
The formula III is as follows:
L_j = -(1/N) Σ_{i=1}^{N} log(p_i(j))

where L_j is the loss value of the specified identity class, N is the third quantity, p_i(j) is the first probability that the i-th first sample data is classified into the j-th class, i is the serial number of the first sample data, and j is the specified identity class.
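Formulas II and III can be read together as a weighted cross entropy per specified identity class; a minimal sketch under that reading (the helper name and the per-sample weight list are assumptions, not from the source):

```python
import math

def class_loss(first_probs, weights):
    """Formula II: L_j = -(1/N) * sum_i beta_i * log(p_i(j)).
    With every weight equal to 1 (the t = j case), this reduces to
    formula III, the plain cross-entropy term for the class."""
    n = len(first_probs)  # the third quantity N
    return -sum(b * math.log(p) for p, b in zip(first_probs, weights)) / n

# Two samples correctly classified with probability 0.5 and weight 1:
# loss = -(log 0.5 + log 0.5) / 2 = log 2 ≈ 0.693.
loss = class_loss([0.5, 0.5], [1, 1])
```

A misclassified sample with a large discrimination error gets a weight greater than 1, so its contribution to the class loss grows accordingly.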
(2) The electronic device determines a loss value for the first recognition model based on the loss value for each of the assigned identity classes.
The electronic device determines the loss value of each designated identity class in step (1); in this step, the electronic device may sum the loss values of the designated identity classes to obtain the loss value of the first recognition model. The electronic device then determines whether the loss value satisfies the iteration stop condition: when it does, step (3) is executed; when it does not, step (4) is executed.
(3) When the loss value satisfies the iteration stop condition, the electronic equipment determines the first recognition model as the second recognition model, and the method is ended.
In one possible implementation, the iteration stop condition is that the loss value of the first recognition model is smaller than a preset loss value. Correspondingly, when the loss value is smaller than the preset loss value, the electronic device determines that the loss value satisfies the iteration stop condition; when the loss value is not smaller than the preset loss value, the electronic device determines that the loss value does not satisfy the iteration stop condition, and step (4) is executed.
In another possible implementation manner, the iteration stop condition is that a difference value between two adjacent loss values is smaller than a preset threshold; correspondingly, the electronic equipment acquires a loss value of the first identification model obtained by the last iteration, determines a difference value between the loss value obtained by the last iteration and the loss value obtained by the current iteration, and determines that the loss value meets an iteration stop condition when the difference value is smaller than a preset threshold value; and when the difference is not less than the preset threshold, the electronic equipment determines that the loss value does not meet the iteration stop condition, and the step (4) is executed.
It should be noted that when the loss value satisfies the iteration stop condition, that is, the model training is completed, the iteration training is stopped, and the first recognition model is determined as the second recognition model.
(4) When the loss value does not satisfy the iteration stop condition, the electronic device updates the first recognition model until the loss value satisfies the iteration stop condition, and determines the first recognition model satisfying the iteration stop condition as the second recognition model.
And when the loss value does not meet the iteration stop condition, the electronic equipment updates the first recognition model, and performs iteration training according to the plurality of sample data and the updated first recognition model until the iteration stop condition is met.
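Steps (1) to (4) can be sketched as a single iteration loop. Here `model.loss` and `model.update` are hypothetical interfaces standing in for the loss computation and the model update, and the toy model exists only to exercise the stop conditions:

```python
def iterate_until_stop(model, samples, preset_loss=0.01, threshold=1e-4, max_iters=1000):
    """Update the first recognition model until the loss value satisfies
    either iteration stop condition described above."""
    previous = None
    for _ in range(max_iters):
        loss = model.loss(samples)  # sum of the per-class loss values
        if loss < preset_loss:
            break  # first condition: loss below the preset loss value
        if previous is not None and abs(previous - loss) < threshold:
            break  # second condition: adjacent losses differ by less than the threshold
        model.update(samples)  # step (4): update the first recognition model
        previous = loss
    return model  # the model satisfying the stop condition is the second recognition model

class ToyModel:
    """Hypothetical stand-in whose loss halves on every update."""
    def __init__(self):
        self.current_loss = 1.0
    def loss(self, samples):
        return self.current_loss
    def update(self, samples):
        self.current_loss /= 2

second_model = iterate_until_stop(ToyModel(), [])
```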
It should be noted that steps S201 to S204 above constitute the training process for obtaining the second recognition model. The training process may be performed by an electronic device or a server, and the execution subject of the training process is not specifically limited in this disclosure. When the training process is completed by the electronic device, the electronic device stores the trained second recognition model and, when performing identity recognition on target data, directly performs step S205. When the training process is completed by the server, the electronic device needs to acquire the second recognition model from the server before performing identity recognition on the target data. The process of the electronic device obtaining the second recognition model from the server may be:
the electronic equipment sends an acquisition request to the server, wherein the acquisition request carries the electronic equipment identifier of the electronic equipment and is used for acquiring the second recognition model; the server receives an acquisition request sent by the electronic equipment, and sends the second recognition model to the electronic equipment according to the electronic equipment identifier in the acquisition request; the electronic device receives the second recognition model sent by the server.
Another point to note is that before classifying the target data, the electronic device only needs to train the second recognition model once, or obtain it from the server once; identity information can then be recognized directly through the second recognition model without repeated training or repeated acquisition.
In step S205, when the target data is identified, the electronic device identifies the target data according to the second identification model, and obtains identity information corresponding to the target data.
The target data may be an image of the target object or an audio signal of the target object, and the identity information may be age, gender, height, or the like. The electronic device determines the category of the target data according to the probability that the target data belongs to each designated identity category and the identity value corresponding to each designated identity category. Correspondingly, the process can be realized by the following steps, including:
(1) the electronic equipment inputs the target data into the second recognition model and outputs the probability that the target data respectively belongs to each specified identity category.
When the target data is an image of a target object and the identity information is an age, the electronic device inputs the image of the target object into the second identification model and outputs the probability that the target object belongs to each age group.
(2) And the electronic equipment carries out weighted summation on the probability that the target data respectively belongs to each appointed identity category and the identity value corresponding to each appointed identity category to obtain the identity value corresponding to the target data.
The electronic device performs a weighted summation of the probability that the target data belongs to each designated identity category and the identity value corresponding to each designated identity category to obtain a first numerical value, and determines the identity value corresponding to the target data based on the first numerical value. When the first numerical value is an integer, the electronic device directly determines the first numerical value as the identity value corresponding to the target data. When the first numerical value is a non-integer, the electronic device rounds the first numerical value up to obtain the identity value corresponding to the target data.
For example, when the target data is an image of a target object and the identity information is an age, the electronic device obtains the age of the target object by performing weighted summation on the probability that the target object belongs to each age group and the age value corresponding to each age group according to the following formula four.
The formula four is as follows:
A = Σ_{m=1}^{M} p_m · a_m

where A is the age of the target object, M is the fourth number of age groups, m is the serial number of the m-th age group, p_m is the probability that the target object is identified as the m-th age group, and a_m is the age value corresponding to the m-th age group.
The identity value corresponding to each designated identity category may be the maximum value, the minimum value, or the average value of the numerical range of that category; the identity value corresponding to each designated identity category is not specifically limited in the present disclosure. For example, the identity value may be the average value of each designated identity category, and the designated identity categories may be age groups: a first designated age group with a value range of 0, 1, 2, 3 (identity value 1.5) and a second designated age group with a value range of 4, 5, 6, 7 (identity value 5.5). If the probability that the target data belongs to the first age group is 0.4 and the probability that it belongs to the second age group is 0.6, the age corresponding to the target data is 0.4 × 1.5 + 0.6 × 5.5 = 3.9, which is rounded up to give an age of 4 for the target object.
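Formula IV together with the round-up rule of step (2) can be sketched as follows (the helper name is illustrative); the example reproduces the 0.4/0.6 case above:

```python
import math

def identity_value(probabilities, class_values):
    """Formula IV: weighted sum of the per-class identity values,
    rounded up when the result is not an integer."""
    value = sum(p * a for p, a in zip(probabilities, class_values))
    return int(value) if float(value).is_integer() else math.ceil(value)

# Class values 1.5 and 5.5, probabilities 0.4 and 0.6: 3.9 rounds up to 4.
age = identity_value([0.4, 0.6], [1.5, 5.5])
print(age)  # → 4
```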
When the target data is an image of a target object and the identity information is an age, in another possible implementation, the electronic device determines the age of the target object according to the age group to which the target object belongs. Correspondingly, the steps can be as follows: the electronic equipment inputs the image of the target object into a second identification model to obtain the probability that the target object belongs to each appointed age group; the maximum probability is selected from the probabilities that the target object belongs to each of the designated age groups, the age group corresponding to the maximum probability is set as the age group of the target object, and the age of the target object is determined based on the age group.
The electronic device may randomly select an age from the age group as the age of the target object, or may use the maximum age, the minimum age, or the average age of the age group as the age of the target object.
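The alternative, maximum-probability implementation can be sketched as follows (the helper name and the two-group ranges are hypothetical; the mean age is one of the options mentioned above):

```python
def age_from_best_group(group_probs, group_ranges):
    """Select the age group with the maximum probability, then return
    the average age of that group as the age of the target object."""
    best = max(range(len(group_probs)), key=lambda m: group_probs[m])
    ages = group_ranges[best]
    return sum(ages) / len(ages)

# Hypothetical two-group example: ages 0-3 and 4-7.
age = age_from_best_group([0.3, 0.7], [[0, 1, 2, 3], [4, 5, 6, 7]])
print(age)  # → 5.5
```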
In the embodiment of the present disclosure, in the process of model training, the first identity class with the highest probability for the sample data is determined, and according to the first identity class and the second identity class to which the sample data actually belongs, the weight occupied in the training process by the first probability that the sample data is identified as the second identity class to which it actually belongs is determined. By improving the cross-entropy loss function and using the weighted first probability to calculate the loss value of the sample data for model training, the correlation between classes in the model training process is fully considered, so the accuracy of model recognition is improved. And when identity information is recognized through the recognition model, the accuracy of identity information recognition is improved.
Fig. 3 is a block diagram illustrating an apparatus for acquiring a recognition model according to an exemplary embodiment, as shown in fig. 3, the apparatus including:
the first determining module 301 is configured to, in the process of training the recognition model, for each sample data, determine, according to the first recognition model, a first identity class to which the sample data belongs.
A second determining module 302 configured to determine a weight of a first probability in a model training process according to a first identity class and a second identity class to which the sample data belongs, the first probability being a probability that the sample data belongs to the second identity class, the second identity class being an identity class to which the sample data actually belongs.
The training module 303 is configured to perform iterative training on the first recognition model according to the first probability of the sample data and the weight, so as to obtain a second recognition model.
In a possible implementation, the second determining module 302 is further configured to determine a discrimination error between the first identity class and the second identity class; determining the weight of the first probability in the model training process according to the discrimination error; wherein the discrimination error is positively correlated with the weight.
In another possible implementation, the second determining module 302 is further configured to determine a first class value of the first identity class and a second class value of the second identity class; and taking the difference value between the first class value and the second class value as the discrimination error between the first identity class and the second identity class.
In another possible implementation manner, the apparatus further includes:
and the identification module is configured to identify the target data according to the second identification model to obtain the identity information corresponding to the target data.
In another possible implementation manner, the identification module is further configured to input the target data into the second identification model, and output probabilities that the target data respectively belong to each of the specified identity categories; and carrying out weighted summation on the probability that the target data respectively belongs to each specified identity category and the identity value corresponding to each specified identity category to obtain the identity value corresponding to the target data.
In another possible implementation, the target data is an image of a target object or an audio signal of the target object, and the identity value is an age of the target object.
In another possible implementation manner, the training module 303 is further configured to determine a loss value of each specified identity class according to the first probability and the weight of each sample data; determining a loss value of the first recognition model according to the loss value of each designated identity category; determining the first recognition model as the second recognition model when the loss value satisfies an iteration stop condition; and when the loss value does not meet the iteration stop condition, updating the first identification model until the loss value meets the iteration stop condition, and determining the first identification model meeting the iteration stop condition as the second identification model.
In another possible implementation manner, the first determining module 301 is further configured to input the sample data into the first recognition model, and output probabilities that the sample data respectively belongs to each of the designated identity categories; selecting a maximum probability from the probabilities that the sample data respectively belongs to each designated identity category; and determining the specified identity class corresponding to the maximum probability as the first identity class to which the sample data belongs.
With regard to the apparatus for acquiring a recognition model in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the method for acquiring a recognition model, and will not be described in detail here.
In the embodiment of the present disclosure, in the process of model training, the first identity class with the highest probability for the sample data is determined, and according to the first identity class and the second identity class to which the sample data actually belongs, the weight occupied in the training process by the first probability that the sample data is identified as the second identity class to which it actually belongs is determined. By improving the cross-entropy loss function and using the weighted first probability to calculate the loss value of the sample data for model training, the correlation between classes in the model training process is fully considered, so the accuracy of model recognition is improved.
FIG. 4 is a block diagram illustrating an electronic device 400 for obtaining a recognition model in accordance with an exemplary embodiment. For example, the electronic device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, electronic device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an interface for input/output (I/O) 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls overall operation of the electronic device 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the above-described method of obtaining a recognition model. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the device 400. Examples of such data include instructions for any application or method operating on the electronic device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 406 provides power to the various components of the electronic device 400. Power components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 400.
The multimedia component 408 includes a screen that provides an output interface between the electronic device 400 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 400 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing various aspects of status assessment for the electronic device 400. For example, the sensor component 414 can detect an open/closed state of the device 400 and the relative positioning of components, such as the display and keypad of the electronic device 400; it can also detect a change in the position of the electronic device 400 or one of its components, the presence or absence of user contact with the electronic device 400, the orientation or acceleration/deceleration of the electronic device 400, and a change in the temperature of the electronic device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the electronic device 400 and other devices. The electronic device 400 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications.
In an exemplary embodiment, the electronic device 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described method of obtaining a recognition model.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the electronic device 400 to perform the above-described method of obtaining a recognition model is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a non-transitory computer readable storage medium having instructions which, when executed by a processor of an electronic device, enable the electronic device to perform a method of obtaining a recognition model, the method comprising:
in the process of training the recognition model, for each sample data, determining a first identity class to which the sample data belongs according to the first recognition model;
determining the weight of a first probability in a model training process according to a first identity class and a second identity class to which the sample data belongs, wherein the first probability is the probability that the sample data belongs to the second identity class, and the second identity class is the identity class to which the sample data actually belongs;
and performing iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model.
In an exemplary embodiment, there is also provided an application program, when instructions in the application program are executed by a processor of an electronic device, enabling the electronic device to perform a method of obtaining a recognition model, the method comprising:
in the process of training the recognition model, for each sample data, determining a first identity class to which the sample data belongs according to the first recognition model;
determining the weight of a first probability in a model training process according to a first identity class and a second identity class to which the sample data belongs, wherein the first probability is the probability that the sample data belongs to the second identity class, and the second identity class is the identity class to which the sample data actually belongs;
and performing iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model.
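The method steps above can be sketched as a per-sample weighted loss. This is an illustrative reconstruction under stated assumptions (softmax probabilities, a weight of the form 1 + α·error), not the implementation disclosed in the embodiments; all function and variable names are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def weighted_sample_loss(logits, true_class, class_values, alpha=1.0):
    """Per-sample loss: the negative log of the first probability
    (the probability of the true, i.e. second, identity class),
    weighted by the discrimination error between the predicted and
    true class values."""
    probs = softmax(np.asarray(logits, dtype=float))
    pred_class = int(np.argmax(probs))               # first identity class
    error = abs(class_values[pred_class] - class_values[true_class])
    weight = 1.0 + alpha * error                     # grows with the error
    first_prob = probs[true_class]                   # first probability
    return -weight * float(np.log(first_prob))
```

When the model already predicts the true class, the error is zero and the loss reduces to ordinary cross-entropy for that sample; the further the predicted class value lies from the true one, the more the sample is up-weighted during training.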
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. A method of obtaining a recognition model, the method comprising:
in the process of training the recognition model, for each sample data, determining a first identity class to which the sample data belongs according to a first recognition model;
determining a discrimination error between the first identity class and a second identity class, wherein the second identity class is an identity class to which the sample data actually belongs;
determining a weight of a first probability in a model training process according to the discrimination error, wherein the discrimination error and the weight have a positive correlation, and the first probability is the probability that the sample data belongs to the second identity class;
and performing iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model.
2. The method of claim 1, wherein determining the discrimination error between the first identity class and the second identity class comprises:
determining a first class value for the first identity class and a second class value for the second identity class;
and taking the difference value between the first class value and the second class value as the discrimination error between the first identity class and the second identity class.
3. The method of claim 1, wherein after iteratively training the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model, the method further comprises:
and identifying the target data according to the second recognition model to obtain the identity information corresponding to the target data.
4. The method according to claim 3, wherein the recognizing the target data according to the second recognition model to obtain the identity information corresponding to the target data comprises:
inputting the target data into the second recognition model, and outputting the probability that the target data respectively belongs to each specified identity category;
and carrying out weighted summation on the probabilities that the target data respectively belongs to each specified identity category and the identity value corresponding to each specified identity category to obtain the identity value corresponding to the target data.
5. The method of claim 4, wherein the target data is an image of a target object or an audio signal of a target object, and the identity value is an age of the target object.
6. The method according to any of claims 1 to 5, wherein said iteratively training said first recognition model according to said first probability of said sample data and said weight to obtain a second recognition model comprises:
determining a loss value of each specified identity class according to the first probability and the weight of each sample data;
determining a loss value of the first recognition model according to the loss value of each specified identity class;
determining the first recognition model as the second recognition model when the loss value satisfies an iteration stop condition;
and when the loss value does not satisfy the iteration stop condition, updating the first recognition model until the loss value satisfies the iteration stop condition, and determining the first recognition model satisfying the iteration stop condition as the second recognition model.
7. The method of any of claims 1 to 5, wherein said determining a first identity class to which said sample data belongs according to a first recognition model comprises:
inputting the sample data into the first recognition model, and outputting the probabilities that the sample data respectively belongs to each specified identity category;
selecting a maximum probability from the probabilities that the sample data respectively belongs to each specified identity category;
and determining the specified identity class corresponding to the maximum probability as the first identity class to which the sample data belongs.
8. An apparatus for obtaining a recognition model, the apparatus comprising:
a first determination module configured to determine, according to a first recognition model, a first identity class to which each sample data belongs in a process of training the recognition model;
a second determination module configured to determine a discrimination error between the first identity class and a second identity class, the second identity class being an identity class to which the sample data actually belongs, and to determine a weight of a first probability in a model training process according to the discrimination error, wherein the discrimination error and the weight have a positive correlation, and the first probability is the probability that the sample data belongs to the second identity class;
and the training module is configured to perform iterative training on the first recognition model according to the first probability of the sample data and the weight to obtain a second recognition model.
9. The apparatus of claim 8,
the second determination module is further configured to determine a first class value for the first identity class and a second class value for the second identity class, and to take the difference between the first class value and the second class value as the discrimination error between the first identity class and the second identity class.
10. The apparatus of claim 8, further comprising:
and the identification module is configured to identify the target data according to the second recognition model to obtain the identity information corresponding to the target data.
11. The apparatus of claim 10,
the identification module is further configured to input the target data into the second recognition model, and output the probabilities that the target data respectively belongs to each specified identity category; and carry out weighted summation on the probabilities that the target data respectively belongs to each specified identity category and the identity value corresponding to each specified identity category to obtain the identity value corresponding to the target data.
12. The apparatus of claim 11, wherein the target data is an image of a target object or an audio signal of a target object, and the identity value is an age of the target object.
13. The apparatus according to any one of claims 8 to 12,
the training module is further configured to: determine a loss value of each specified identity class according to the first probability and the weight of each sample data; determine a loss value of the first recognition model according to the loss value of each specified identity class; determine the first recognition model as the second recognition model when the loss value satisfies an iteration stop condition; and when the loss value does not satisfy the iteration stop condition, update the first recognition model until the loss value satisfies the iteration stop condition, and determine the first recognition model satisfying the iteration stop condition as the second recognition model.
14. The apparatus according to any one of claims 8 to 12,
the first determination module is further configured to: input the sample data into the first recognition model, and output the probabilities that the sample data respectively belongs to each specified identity category; select a maximum probability from the probabilities that the sample data respectively belongs to each specified identity category; and determine the specified identity class corresponding to the maximum probability as the first identity class to which the sample data belongs.
15. An electronic device, characterized in that the electronic device comprises:
one or more processors;
one or more memories for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the method of obtaining a recognition model of any one of claims 1-7.
16. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of obtaining a recognition model of any one of claims 1-7.
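The weighted summation recited in claim 4 amounts to taking an expected value of the identity value under the predicted class distribution. A minimal sketch, with illustrative function and argument names that are not taken from the disclosure:

```python
import numpy as np

def estimate_identity_value(probs, class_values):
    """Weighted summation per claim 4: the value of each specified
    identity category, weighted by the probability the model assigns
    to that category, summed into a single identity value."""
    probs = np.asarray(probs, dtype=float)
    values = np.asarray(class_values, dtype=float)
    return float(np.dot(probs, values))
```

For example, with age classes valued 10, 20 and 30 and predicted probabilities 0.2, 0.5 and 0.3, the estimate is approximately 21 — a continuous identity value (e.g. an age) rather than a hard class label.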
CN201910023511.8A 2019-01-10 2019-01-10 Method and device for obtaining recognition model, electronic equipment and storage medium Active CN109711386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910023511.8A CN109711386B (en) 2019-01-10 2019-01-10 Method and device for obtaining recognition model, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910023511.8A CN109711386B (en) 2019-01-10 2019-01-10 Method and device for obtaining recognition model, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109711386A CN109711386A (en) 2019-05-03
CN109711386B true CN109711386B (en) 2020-10-09

Family

ID=66259996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910023511.8A Active CN109711386B (en) 2019-01-10 2019-01-10 Method and device for obtaining recognition model, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109711386B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112890572B (en) * 2021-02-07 2021-08-17 广州一盒科技有限公司 Intelligent control system and method for cooking food materials

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8315962B1 (en) * 2009-11-25 2012-11-20 Science Applications International Corporation System and method for multiclass discrimination of neural response data
CN101719222B (en) * 2009-11-27 2014-02-12 北京中星微电子有限公司 Method and device for training classifiers and method and device for identifying human face
CN105069400B (en) * 2015-07-16 2018-05-25 北京工业大学 Facial image gender identifying system based on the sparse own coding of stack
US20170220951A1 (en) * 2016-02-02 2017-08-03 Xerox Corporation Adapting multiple source classifiers in a target domain
CN107301380A (en) * 2017-06-01 2017-10-27 华南理工大学 One kind is used for pedestrian in video monitoring scene and knows method for distinguishing again
CN107633223A (en) * 2017-09-15 2018-01-26 深圳市唯特视科技有限公司 A kind of video human attribute recognition approach based on deep layer confrontation network
CN107832672B (en) * 2017-10-12 2020-07-07 北京航空航天大学 Pedestrian re-identification method for designing multi-loss function by utilizing attitude information
CN107679513B (en) * 2017-10-20 2021-07-13 北京达佳互联信息技术有限公司 Image processing method and device and server
CN107886062B (en) * 2017-11-03 2019-05-10 北京达佳互联信息技术有限公司 Image processing method, system and server
CN107844781A (en) * 2017-11-28 2018-03-27 腾讯科技(深圳)有限公司 Face character recognition methods and device, electronic equipment and storage medium
CN107862300A (en) * 2017-11-29 2018-03-30 东华大学 A kind of descending humanized recognition methods of monitoring scene based on convolutional neural networks
CN107766850B (en) * 2017-11-30 2020-12-29 电子科技大学 Face recognition method based on combination of face attribute information
CN108256555B (en) * 2017-12-21 2020-10-16 北京达佳互联信息技术有限公司 Image content identification method and device and terminal
CN108334816B (en) * 2018-01-15 2021-11-23 桂林电子科技大学 Multi-pose face recognition method based on contour symmetric constraint generation type countermeasure network
CN108563999A (en) * 2018-03-19 2018-09-21 特斯联(北京)科技有限公司 A kind of piece identity's recognition methods and device towards low quality video image
CN108765279A (en) * 2018-03-19 2018-11-06 北京工业大学 A kind of pedestrian's face super-resolution reconstruction method towards monitoring scene
CN108537143B (en) * 2018-03-21 2019-02-15 光控特斯联(上海)信息科技有限公司 A kind of face identification method and system based on key area aspect ratio pair
CN108564029B (en) * 2018-04-12 2020-12-01 厦门大学 Face attribute recognition method based on cascade multitask learning deep neural network
CN108921051B (en) * 2018-06-15 2022-05-20 清华大学 Pedestrian attribute identification network and technology based on cyclic neural network attention model
CN109145876A (en) * 2018-09-29 2019-01-04 北京达佳互联信息技术有限公司 Image classification method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109711386A (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN106651955B (en) Method and device for positioning target object in picture
CN106202330B (en) Junk information judgment method and device
CN110782468B (en) Training method and device of image segmentation model and image segmentation method and device
RU2577188C1 (en) Method, apparatus and device for image segmentation
CN107945133B (en) Image processing method and device
CN109446961B (en) Gesture detection method, device, equipment and storage medium
CN107133354B (en) Method and device for acquiring image description information
CN106557759B (en) Signpost information acquisition method and device
CN111539443A (en) Image recognition model training method and device and storage medium
CN111461304B (en) Training method of classified neural network, text classification method, device and equipment
CN107194464B (en) Training method and device of convolutional neural network model
CN110781323A (en) Method and device for determining label of multimedia resource, electronic equipment and storage medium
US11335348B2 (en) Input method, device, apparatus, and storage medium
CN109255784B (en) Image processing method and device, electronic equipment and storage medium
CN112148980B (en) Article recommending method, device, equipment and storage medium based on user click
CN111046927B (en) Method and device for processing annotation data, electronic equipment and storage medium
CN111814538A (en) Target object type identification method and device, electronic equipment and storage medium
CN110941727A (en) Resource recommendation method and device, electronic equipment and storage medium
CN108154090B (en) Face recognition method and device
CN107832691B (en) Micro-expression identification method and device
CN113656557A (en) Message reply method, device, storage medium and electronic equipment
CN113920293A (en) Information identification method and device, electronic equipment and storage medium
CN111797746B (en) Face recognition method, device and computer readable storage medium
CN109711386B (en) Method and device for obtaining recognition model, electronic equipment and storage medium
CN110738267B (en) Image classification method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant