CN112926479A - Cat face identification method and system, electronic device and storage medium - Google Patents

Cat face identification method and system, electronic device and storage medium

Info

Publication number
CN112926479A
Authority
CN
China
Prior art keywords
cat
face
face recognition
recognition model
cat face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110253387.1A
Other languages
Chinese (zh)
Inventor
申啸尘
周有喜
乔国坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinjiang Aiwinn Information Technology Co Ltd
Original Assignee
Xinjiang Aiwinn Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinjiang Aiwinn Information Technology Co Ltd filed Critical Xinjiang Aiwinn Information Technology Co Ltd
Priority to CN202110253387.1A
Publication of CN112926479A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The application discloses a cat face identification method, a system, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring a front face image of a cat collected at a predetermined place; extracting cat face key points from the front face image; carrying out affine transformation on the cat face key points to obtain fixed points; inputting the fixed points into a pre-trained cat face recognition model to obtain an output result of the cat face recognition model, wherein the output result at least comprises one cat face label, and the cat face labels of different cats are different; and acquiring behavior information and time information of the cat at the predetermined place, associating them with the cat face label, and recording and storing them. By performing cat face recognition on the front face image collected at the predetermined place, the label of the cat appearing at that place can be accurately identified, and by associating the cat face label with the behavior information and time information, a record of what the cat did at the predetermined place and when it did it, that is, the cat's living habits, is formed, so that the living habits of each cat are accurately recorded.

Description

Cat face identification method and system, electronic device and storage medium
Technical Field
The application relates to the technical field of cat face identification, in particular to a cat face identification method, a cat face identification system, an electronic device and a storage medium.
Background
Cats account for a large proportion of household pets. When a cat becomes ill, its living habits (such as the times at which it eats, drinks, and excretes) become abnormal, so it is useful to record the living habits of cats.
However, some cat owners keep a relatively large number of cats and may not have the energy to record the living habits of each one, and therefore may fail to notice when a cat's living habits become abnormal.
Some current technologies identify cat faces with a face recognition model. Although such models perform well on human faces, a cat face is covered with fur and differs from a human face in other respects, which makes it difficult for a human face recognition model to identify. Existing face recognition methods therefore cannot accurately identify individual cats, and the living habits of each cat cannot be accurately recorded.
Disclosure of Invention
In view of this, the present application provides a cat face recognition method, a system, an electronic device, and a storage medium, so as to solve the problem that the current face recognition method cannot accurately recognize each cat and record the living habits of each cat.
The application provides a cat face identification method in a first aspect, including: acquiring a front face image of a cat collected at a predetermined place; extracting key points of a cat face from the front face image, wherein the key points of the cat face comprise key points of two eyes and a nose of a cat; carrying out affine transformation on the key points of the cat face to obtain fixed points; inputting the fixed point into a pre-trained cat face recognition model to obtain an output result of the cat face recognition model, wherein the output result at least comprises one cat face label, and the cat face labels of different cats are different; and acquiring behavior information and time information of the cat in the preset place, associating the behavior information and the time information with the cat face label, and recording and storing the behavior information and the time information.
The training method of the cat face recognition model comprises the following steps: establishing an original face recognition model of a ten-thousand-level feature library, wherein the original face recognition model at least comprises a full connection layer and a feature layer; optimizing an original face recognition model; and inputting the cat face sample data into the original face recognition model for training to obtain the cat face recognition model, wherein the cat face sample data at least comprises cat face characteristics and cat face labels of cats.
Wherein, optimizing the original face recognition model comprises: constructing a cross entropy loss function; acquiring a feature vector output by a feature layer and a weight output by a classification network; and performing L2 norm normalization on the feature vectors and the weights to express the distance between the feature vectors as cosine similarity, and completing the optimization of the original face recognition model.
Wherein, optimizing the original face recognition model further comprises: screening difficult samples in the cat face sample data and constructing a triplet loss function; and calculating the cosine similarity of the difficult samples using the triplet loss function to complete the optimization of the original face recognition model.
Wherein the difficult samples include: N sample data that have the same cat face label and the largest cosine similarity difference, and N sample data that have different cat face labels and the smallest cosine similarity difference, where N is greater than or equal to two.
Wherein, optimizing the original face recognition model comprises: inputting two groups of cat face sample data into the original face recognition model, and constructing a cross entropy loss function and a triplet loss function on the feature layer; calculating a cross entropy loss value for the first group of cat face sample data and their cat face labels using the cross entropy loss function; screening out difficult samples from the second group of cat face sample data using the original face recognition model; calculating a triplet loss value for the difficult samples using the triplet loss function; and summing the triplet loss value and the cross entropy loss value to obtain a final loss value of the original face recognition model, completing the optimization of the original face recognition model.
Wherein the amount of data processed by the cross entropy loss function is ten times that processed by the triplet loss function.
A second aspect of the present application provides a cat face identification system, including: an image acquisition module, used for acquiring a front face image of a cat collected at a predetermined place; a cat face key point extraction module, used for extracting cat face key points from the front face image, wherein the cat face key points comprise key points of the two eyes and the nose of the cat; an affine transformation module, used for carrying out affine transformation on the cat face key points extracted by the cat face key point extraction module to obtain fixed points; a cat face identification module, used for a cat face identification model trained in advance with cat face sample data and capable of identifying a cat face, wherein the output result of the cat face identification model at least comprises one cat face label, and the cat face labels of different cats are different; a data interaction module, used for inputting the fixed points obtained by the affine transformation module into the cat face identification module and obtaining the output result of the cat face recognition model in the cat face identification module; and an association recording module, used for acquiring behavior information and time information of the cat at the predetermined place, associating them with the cat face label, and recording and storing them.
A third aspect of the present application provides an electronic device, comprising a memory and a processor, wherein the memory stores a program which, when executed by the processor, implements the cat face recognition method described above.
A fourth aspect of the present application provides a readable storage medium, on which a program is stored, wherein the program is configured to implement the cat face recognition method according to any one of the above aspects when executed by a processor.
According to the cat face identification method, on one hand, the cat face is identified through a pre-trained cat face recognition model, which reduces the influence of fur and other factors on identification and thus the difficulty of cat face identification; on the other hand, performing cat face recognition on the front face image of a cat collected at a predetermined place allows the label of the cat appearing at that place to be accurately identified, and when the method is applied to a scene in which the predetermined place is where the cat eats, drinks, excretes, and so on, the cat's label and its behavior information and time information at the predetermined place are associated, recorded, and stored, so that the living habits of each cat can be accurately recorded.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a cat face identification method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an optimization process of an original face recognition model by the cat face recognition method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of another optimization process of the original face recognition model according to the cat face recognition method of the embodiment of the present application;
FIG. 4 is a block diagram illustrating the structure of a cat face identification system according to an embodiment of the present application;
fig. 5 is a block diagram illustrating a structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. The following embodiments and their technical features may be combined with each other without conflict.
Referring to fig. 1, a cat face identification method provided in an embodiment of the present application includes: S1, acquiring a front face image of a cat collected at a predetermined place; S2, extracting cat face key points from the front face image; S3, carrying out affine transformation on the cat face key points to obtain fixed points; S4, inputting the fixed points into a pre-trained cat face recognition model to obtain an output result of the cat face recognition model, wherein the output result at least comprises one cat face label, and the cat face labels of different cats are different; and S5, acquiring behavior information and time information of the cat at the predetermined place, associating them with the cat face label, and recording and storing them.
In this embodiment, the cat face key points are the key points of the cat's two eyes and nose. A predetermined place is a place where a living habit of the cat is carried out, and each predetermined place corresponds to one living habit; the predetermined place may be a litter box, a food bowl, a cat tree, a cat bed, and the like, which correspond respectively to the cat's habits of excreting, eating, drinking water, leisure, and sleeping, and the time at which the front face image is collected is the time information. In this embodiment, the predetermined place is the litter box, the corresponding habit is the cat's excretion, and the time information is the time at which the front face image is collected when the cat enters the litter box.
A camera may be set up at the litter box to collect the front face image of the cat. A closed litter box may also be used, with the camera arranged inside it; when the cat enters the closed litter box its face always points inward, so a front face image of the cat can be obtained from the corresponding direction.
After the front face image of the cat is acquired, the cat face is identified. Because the mouth and the nose of a cat are connected, only one of them is used as a key point; in this embodiment, the three key points used are those of the nose and the two eyes.
After the cat face key points are extracted, they are affine-transformed to the fixed points, and the pre-trained cat face recognition model then takes the fixed points as input and produces an output result containing the cat face label. From the label it can be determined which cat is excreting at the litter box, so that cat's living habits can be recorded. In this embodiment, the acquisition time of the image can also be taken as the excretion time of the cat, which makes it convenient for the cat owner to check which cat excreted at the litter box and whether the excretion time is normal, or which cat did not go to the litter box to excrete. Since excretion is itself a living habit, the cat owner can check from the record whether a cat's excretion habit is abnormal and thereby judge in time whether the cat is ill.
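One common way to realize this alignment step is to warp the face image so that the detected key points land on the fixed points. The sketch below is only an illustration of that idea, not the patent's own implementation: it assumes OpenCV is available, that the detector returns the three key points in the order left eye, right eye, nose, and that the fixed-point template and crop size are hypothetical values chosen here.

```python
import cv2
import numpy as np

# Hypothetical canonical ("fixed point") positions, in pixels, for a 112x112 crop.
FIXED_POINTS = np.float32([
    [38.0, 46.0],   # left eye
    [74.0, 46.0],   # right eye
    [56.0, 78.0],   # nose
])

def align_cat_face(image, keypoints, size=(112, 112)):
    """Warp a frontal cat-face image so the detected key points
    (left eye, right eye, nose) land on the fixed points."""
    src = np.float32(keypoints)                     # shape (3, 2), detector output
    matrix = cv2.getAffineTransform(src, FIXED_POINTS)
    return cv2.warpAffine(image, matrix, size)
```

In practice the aligned crop produced this way would be fed to the recognition model; the embodiment above describes this as inputting the fixed points.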
In one embodiment, the method for training the cat face recognition model comprises the following steps: establishing an original face recognition model with a ten-thousand-level feature library, wherein the original face recognition model at least comprises a full connection layer and a feature layer; optimizing the original face recognition model; and inputting cat face sample data into the original face recognition model for training to obtain a cat face recognition model capable of recognizing cat faces, wherein the cat face sample data at least comprise cat face features and cat face labels of cats.
In this embodiment, the original face recognition model may be any suitable model from the prior art, for example a residual network model or a network such as MobileNetV3; after training it becomes the cat face recognition model. A ten-thousand-level feature library ensures the feature extraction capability of the cat face recognition model, and optimizing the original face recognition model improves its performance, so that a cat face recognition model with better performance can be trained when the cat face sample data are subsequently used for training.
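As a concrete illustration of the "feature layer plus full connection layer" structure, the following sketch builds a small embedding model on top of a ResNet-18 backbone (one of the candidate residual network models mentioned above). The class count, embedding size, and use of torchvision are assumptions for illustration only, not values specified by the application.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CatFaceModel(nn.Module):
    def __init__(self, num_ids: int = 10000, emb_dim: int = 512):
        super().__init__()
        backbone = resnet18()
        backbone.fc = nn.Identity()                # drop the ImageNet classification head
        self.backbone = backbone
        self.feature = nn.Linear(512, emb_dim)     # "feature layer": per-image feature vector
        self.fc = nn.Linear(emb_dim, num_ids, bias=False)  # "full connection layer": ten-thousand-level classes

    def forward(self, x: torch.Tensor):
        emb = self.feature(self.backbone(x))       # feature vector used by the losses below
        logits = self.fc(emb)                      # class scores over the feature library
        return emb, logits
```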
In one embodiment, optimizing the original face recognition model comprises: constructing a cross entropy loss function; acquiring a feature vector output by a feature layer and a weight output by a classification network; and performing L2 norm normalization on the feature vectors and the weights to express the distance between the feature vectors as cosine similarity, and completing the optimization of the original face recognition model.
Expressing the distance between feature vectors as cosine similarity makes the optimization direction of the feature vectors and the loss values clearer, which makes the original face recognition model easier to work with and strengthens the consistency of the optimization direction when this loss is later combined with other loss functions.
In this embodiment, the cross entropy loss function first outputs a probability value $p_j$, which can be expressed by Equation 1:

$$p_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}} \qquad (1)$$

where $z_j = w \cdot x$ is the value computed by the original face recognition model from the normalized feature vector and weight, $w$ is the L2-norm-normalized weight of the full connection layer in the classification network, $x$ is the L2-norm-normalized feature vector of the feature layer, $K$ is the number of samples in the sample set, $j$ denotes the j-th sample in the sample set, and $z_k$ is the corresponding normalized feature-vector and weight value for the k-th sample, obtained from the true distribution model of the sample set.

The cross entropy loss function then outputs a loss value, which can be expressed by Equation 2:

$$L = -\sum_{j=1}^{K} \hat{p}_j \log p_j \qquad (2)$$

where $\hat{p}$ represents the true distribution model of the sample set, $p$ represents a probability value, $K$ represents the number of samples in the sample set, $\hat{p}_j$ represents the label value of the j-th sample in the true distribution model, and $p_j$ represents the probability value of the j-th sample.
In this embodiment, cross entropy is a direct measure of the difference between two distributions: the output of the original face recognition model and the true distribution model of the sample set. The cross entropy loss function therefore measures how well the model's output and the true distribution model each explain the sample set. The true distribution model of the sample set is conventional and is not described further in this embodiment.
The label value mentioned above is 0 or 1. The closer the predicted output for the corresponding category is to the true label value, the smaller the value of the cross entropy loss function and the closer the recognition result is to the sample, that is, the higher the model's recognition accuracy; conversely, the farther the predicted output is from the true label value, the larger the value of the cross entropy loss function and the farther the recognition result is from the sample, that is, the lower the model's recognition accuracy.
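A minimal sketch of this normalized cross entropy is given below, assuming the PyTorch framework. Both the feature vectors and the full connection layer weights are L2-normalized so the logits become cosine similarities; the scale factor s is an assumption commonly used with cosine-based softmax and is not specified by the application.

```python
import torch
import torch.nn.functional as F

def cosine_cross_entropy(features: torch.Tensor,   # (B, D) feature-layer output
                         weight: torch.Tensor,     # (K, D) full connection layer weight
                         labels: torch.Tensor,     # (B,) cat face labels
                         s: float = 30.0) -> torch.Tensor:
    x = F.normalize(features, p=2, dim=1)          # L2-normalized feature vectors
    w = F.normalize(weight, p=2, dim=1)            # L2-normalized weights
    cos = F.linear(x, w)                           # z_j = w . x, cosine-similarity logits
    return F.cross_entropy(s * cos, labels)        # softmax (Eq. 1) + cross entropy (Eq. 2)
```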
In one embodiment, optimizing the original face recognition model further comprises: screening difficult samples in the cat face sample data and constructing a triplet loss function; and calculating the cosine similarity of the difficult samples using the triplet loss function to complete the optimization of the original face recognition model.
Using the triplet loss function after the cross entropy loss function makes it possible to handle the difficult samples that the cross entropy loss function cannot process, which improves the performance of the trained cat face recognition model.
Referring to fig. 2, in this embodiment the cross entropy loss function and the triplet loss function are used in sequence to compute the cosine similarity of ordinary samples and of difficult samples respectively, thereby optimizing the original face recognition model; here the ordinary samples are all samples contained in the training set. In this process, because the cross entropy loss function does not by itself require normalizing the weight of the full connection layer and the features of the feature layer, there would be an inconsistency in optimization direction when the cross entropy loss function and the cosine-similarity-based triplet loss function are used together. To ensure consistent optimization directions and make the training of the original face recognition model converge better, when the cross entropy loss function is used the feature vectors obtained from the feature layer and the weight of the full connection layer are L2-normalized, so that the distance between feature vectors is finally expressed as cosine similarity. The difficult samples in the cat face sample data are then selected and their cosine similarity is optimized with the triplet loss function, so that the original face recognition model learns the difficult samples and the trained cat face recognition model can adapt to cat face image recognition under more conditions.
In one embodiment, the difficult samples include: N sample data that have the same cat face label but the largest cosine similarity difference, and N sample data that have different cat face labels but the smallest cosine similarity difference, where N is greater than or equal to two.
The N sample data that have the same cat face label but the largest cosine similarity difference represent the same cat whose face changes slightly under different conditions; for example, the facial features change slightly when the cat is ill, and the nose and eyes also change when the cat meows. When the cat face changes in this way, the cosine similarity difference it produces is larger than that of a normal cat face, so such samples can be used as positive samples for the original face recognition model to learn from, strengthening the model's tolerance of intra-class variation within the same cat face.
The N sample data that have different cat face labels but the smallest cosine similarity difference represent different cats; such samples can be used as negative samples for the original face recognition model to learn from, strengthening the model's ability to discriminate between different cat face classes.
Therefore, after the original face recognition model learns from the positive samples, the trained cat face recognition model is more likely to recognize the same cat face in an image, and after it learns from the negative samples it is more likely to filter out images that are not of the same cat face, further increasing the probability that the cat face recognition model recognizes correctly.
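The hard-sample definition above can be realized with in-batch mining, as in the following sketch: for each anchor, the hardest positive is the same-label sample with the lowest cosine similarity and the hardest negative is the different-label sample with the highest cosine similarity. The margin value is an illustrative assumption; the application does not specify one.

```python
import torch
import torch.nn.functional as F

def hard_triplet_cosine_loss(features: torch.Tensor,   # (B, D)
                             labels: torch.Tensor,     # (B,)
                             margin: float = 0.3) -> torch.Tensor:
    x = F.normalize(features, p=2, dim=1)
    sim = x @ x.t()                                     # (B, B) pairwise cosine similarity
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-label mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hardest positive: same label, largest similarity difference (lowest similarity).
    pos_sim = sim.masked_fill(~same | eye, 2.0).min(dim=1).values
    # Hardest negative: different label, smallest similarity difference (highest similarity).
    neg_sim = sim.masked_fill(same, -2.0).max(dim=1).values

    # Positives should be more similar to the anchor than negatives by at least the margin.
    return F.relu(neg_sim - pos_sim + margin).mean()
```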
In other embodiments, the original face recognition model can also be optimized as follows: inputting two groups of cat face sample data into the original face recognition model and constructing a cross entropy loss function and a triplet loss function at the feature layer; calculating a cross entropy loss value for the first group of cat face sample data and their cat face labels using the cross entropy loss function; screening out difficult samples from the second group of cat face sample data using the original face recognition model; calculating a triplet loss value for the difficult samples using the triplet loss function; and summing the triplet loss value and the cross entropy loss value to obtain the final loss value of the original face recognition model, thereby completing its optimization. In this embodiment, the amount of data processed by the cross entropy loss function is ten times that processed by the triplet loss function. The definition and selection of the difficult samples are the same as in the above embodiments and are not repeated here.
Referring to fig. 3, in this embodiment the cross entropy loss function and the triplet loss function are used simultaneously to compute the cosine similarity of ordinary samples and of difficult samples respectively, thereby optimizing the original face recognition model; here an ordinary sample is a sample whose cosine similarity the cross entropy loss function can compute. In this process the cosine-similarity-based cross entropy loss function is used as the main loss function, keeping the cat face recognition model close to the result that would be obtained by training on the cross entropy loss function alone, while the triplet loss function is used as an auxiliary loss function to learn the difficult samples, improving the original face recognition model's ability to learn from them so that the trained cat face recognition model is better suited to cat face recognition scenarios.
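The sketch below illustrates this joint optimization as a single training step, assuming the hypothetical helpers defined in the earlier sketches (CatFaceModel, cosine_cross_entropy, hard_triplet_cosine_loss); the first group of samples drives the cross entropy term (roughly ten times more data, per the ratio above) and the second group, mined in-batch for difficult samples, drives the triplet term.

```python
def train_step(model, optimizer, group1, group2):
    imgs1, labels1 = group1          # first group: ordinary samples (about 10x the volume)
    imgs2, labels2 = group2          # second group: mined in-batch for difficult samples

    emb1, _ = model(imgs1)
    ce_loss = cosine_cross_entropy(emb1, model.fc.weight, labels1)

    emb2, _ = model(imgs2)
    tri_loss = hard_triplet_cosine_loss(emb2, labels2)

    loss = ce_loss + tri_loss        # final loss value: sum of the two terms
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```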
In any of the embodiments described above, the cat face recognition method may further include: recording the facial features of cats using the cat face recognition model and entering them into a feature library. In this way, after a cat enters the litter box, an image can be collected, the cat face key points extracted for recognition, the cosine similarity against the recorded facial features calculated, and the cat's label, name, or code identified.
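A matching step against such a feature library could look like the sketch below: enrolled cat face features are compared with the query feature by cosine similarity and the best-matching label is returned if it clears a threshold. The threshold value is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def identify(query: torch.Tensor,       # (D,) feature of the newly collected image
             library: torch.Tensor,     # (N, D) enrolled cat face features
             labels: list,              # N cat face labels, names, or codes
             threshold: float = 0.6):
    q = F.normalize(query, p=2, dim=0)
    lib = F.normalize(library, p=2, dim=1)
    sims = lib @ q                      # cosine similarity to every enrolled feature
    best = int(torch.argmax(sims))
    return labels[best] if float(sims[best]) >= threshold else None
```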
On one hand, identifying the cat face through a pre-trained cat face recognition model reduces the influence of fur and other factors on identification and thus the difficulty of cat face identification; on the other hand, performing cat face recognition on the front face image of a cat collected at a predetermined place allows the label of the cat appearing at that place to be accurately identified, and when the method is applied to a scene in which the predetermined place is where the cat eats, drinks, excretes, and so on, the cat's label and its behavior information and time information at the predetermined place are associated, recorded, and stored, so that the living habits of each cat can be accurately recorded.
Referring to fig. 4, a cat face recognition system provided in an embodiment of the present application includes: the system comprises an image acquisition module 1, a cat face key point extraction module 2, an affine transformation module 3, a cat face identification module 4, a data interaction module 5 and an associated recording module 6.
The image acquisition module 1 is used for acquiring a front face image of a cat acquired at a preset place;
the cat face key point extraction module 2 is used for extracting cat face key points from the front face image, wherein the cat face key points comprise key points of two eyes and a nose of a cat; the affine transformation module 3 is used for carrying out affine transformation on the cat face key points extracted by the cat face key point extraction module 2 to obtain fixed points; the cat face identification module 4 is used for training a cat face identification model capable of identifying a cat face by integrating and using cat face sample data in advance, the output result of the cat face identification model at least comprises a cat face label, and the cat face labels of different cats are different; the data interaction module 5 is used for inputting the fixed point obtained by the affine transformation module 3 into the cat face recognition model module and obtaining an output result of the cat face recognition model in the cat face recognition model module; and the association recording module 6 is used for acquiring behavior information and time information of the cat at a preset place, associating the behavior information and the time information with the cat face label, and recording and storing the behavior information and the time information.
In this embodiment, the cat face key points are the key points of the cat's two eyes and nose. A predetermined place is a place where a living habit of the cat is carried out, and each predetermined place corresponds to one living habit; the predetermined place may be a litter box, a food bowl, a cat tree, a cat bed, and the like, which correspond respectively to the cat's habits of excreting, eating, drinking water, leisure, and sleeping, and the time at which the front face image is collected is the time information. In this embodiment, the predetermined place is the litter box, the corresponding habit is the cat's excretion, and the time information is the time at which the front face image is collected when the cat enters the litter box.
In one embodiment, cat face identification module 4 includes: the system comprises an original face recognition model establishing unit, an optimizing unit and a training unit; the original face recognition model establishing unit is used for establishing an original face recognition model of a ten-thousand-level feature library, and the original face recognition model at least comprises a full connection layer and a feature layer; the optimization unit is used for optimizing the original face recognition model; the training unit is used for inputting the cat face sample data into the original face recognition model for training to obtain the cat face recognition model capable of recognizing the cat face, and the cat face sample data at least comprises cat face features and cat face labels of the cat.
In one embodiment, the optimization unit comprises: the system comprises a first function construction subunit, a data acquisition subunit and a normalization subunit; the first function constructing subunit is used for constructing a cross entropy loss function; the data acquisition subunit is used for acquiring the feature vectors output by the feature layer and the weight output by the classification network; and the normalization subunit is used for performing L2 norm normalization on the feature vectors and the weights so as to express the distance between the feature vectors as cosine similarity, and thus, the optimization of the original face recognition model is completed.
In one embodiment, the optimization unit further comprises: a second function construction subunit, used for screening the difficult samples in the cat face sample data and constructing a triplet loss function; and a difficult sample optimization subunit, used for separately calculating and optimizing the cosine similarity of the difficult samples using the triplet loss function to complete the optimization of the original face recognition model.
In this embodiment, the difficult samples include: the method comprises the steps of providing N sample data with the same cat face label and the largest cosine similarity difference, providing N sample data with different cat face labels and the smallest cosine similarity difference, wherein N is larger than or equal to two.
In other embodiments, the optimization unit may further include: a third function construction subunit, used for inputting two groups of cat face sample data into the original face recognition model and constructing a cross entropy loss function and a triplet loss function at the feature layer; a cross entropy loss value calculation subunit, used for calculating a cross entropy loss value for the first group of cat face sample data and their cat face labels using the cross entropy loss function; a difficult sample screening subunit, used for screening out difficult samples from the second group of cat face sample data using the original face recognition model; a triplet loss value calculation subunit, used for calculating a triplet loss value for the difficult samples using the triplet loss function; and a summation subunit, used for summing the triplet loss value and the cross entropy loss value to obtain the final loss value of the original face recognition model and complete its optimization. In this embodiment, the difficult samples are defined as in the above embodiments; please refer to those embodiments for details, which are not repeated here.
In addition, in this embodiment, the amount of data processed by the cross entropy loss function is ten times that processed by the triplet loss function.
According to the cat face recognition system provided by this embodiment, on one hand, identifying the cat face through a pre-trained cat face recognition model reduces the influence of fur and other factors on recognition and thus the difficulty of cat face recognition; on the other hand, performing cat face recognition on the front face image of a cat collected at a predetermined place allows the label of the cat appearing at that place to be accurately recognized, and when the system is applied to a scene in which the predetermined place is where the cat eats, drinks, excretes, and so on, the cat's label and its behavior information and time information at the predetermined place are associated, recorded, and stored, so that the living habits of each cat can be accurately recorded.
Referring to fig. 5, an embodiment of the present application provides an electronic device, including: a memory 601, a processor 602, wherein the memory 601 stores a program executable on the processor 602, and the program is used for implementing the cat face recognition method described in the foregoing when being executed by the processor 602.
Further, the electronic device further includes: at least one input device 603 and at least one output device 604.
The memory 601, the processor 602, the input device 603, and the output device 604 are connected by a bus 605.
The input device 603 may be a camera, a touch panel, a physical button, a mouse, or the like. The output device 604 may be embodied as a display screen.
The memory 601 may be a high-speed random access memory (RAM) or a non-volatile memory, such as a disk memory. The memory 601 is used for storing a set of executable program code, and the processor 602 is coupled to the memory 601.
Further, an embodiment of the present application also provides a readable storage medium, which may be provided in the electronic device in the foregoing embodiments, and the readable storage medium may be the memory 601 in the foregoing. The readable storage medium has a program stored thereon, and the program is configured to implement the cat face recognition method described in the foregoing embodiment when executed by the processor 602.
Further, the storage medium may be any medium that can store the program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present invention is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no acts or modules are necessarily required of the invention.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the foregoing description, various details have been set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes are not shown in detail to avoid obscuring the description of the present application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Those skilled in the art may vary the specific implementation and scope of application according to the concepts of the embodiments of the present invention; in summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A cat face identification method, comprising:
acquiring a front face image of a cat collected at a predetermined place;
extracting key points of a cat face from the front face image, wherein the key points of the cat face comprise key points of two eyes and a nose of a cat;
carrying out affine transformation on the key points of the cat face to obtain fixed points;
inputting the fixed point into a pre-trained cat face recognition model to obtain an output result of the cat face recognition model, wherein the output result at least comprises one cat face label, and the cat face labels of different cats are different;
and acquiring behavior information and time information of the cat in the preset place, associating the behavior information and the time information with the cat face label, and recording and storing the behavior information and the time information.
2. The cat face recognition method according to claim 1,
the training method of the cat face recognition model comprises the following steps:
establishing an original face recognition model of a ten-thousand-level feature library, wherein the original face recognition model at least comprises a full connection layer and a feature layer;
optimizing the original face recognition model;
and inputting cat face sample data into the original face recognition model for training to obtain a cat face recognition model, wherein the cat face sample data at least comprises cat face characteristics and cat face labels of cats.
3. The cat face recognition method according to claim 2,
the optimizing the original face recognition model comprises:
constructing a cross entropy loss function;
acquiring a feature vector output by the feature layer and the weight output by the classification network;
and performing L2 norm normalization on the feature vectors and the weights to express the distance between the feature vectors as cosine similarity, and completing the optimization of the original face recognition model.
4. The cat face recognition method according to claim 3,
the optimizing the original face recognition model further comprises:
screening difficult samples in the cat face sample data, and constructing a triplet loss function;
and calculating the cosine similarity of the difficult samples by using the triplet loss function, and finishing the optimization of the original face recognition model.
5. The cat face recognition method according to claim 4,
the difficult samples include: N sample data that have the same cat face label and the largest cosine similarity difference, and N sample data that have different cat face labels and the smallest cosine similarity difference, wherein N is greater than or equal to two.
6. The cat face recognition method according to claim 2,
the optimizing the original face recognition model comprises:
inputting two groups of cat face sample data into the original face recognition model, and constructing a cross entropy loss function and a triplet loss function on the feature layer;
calculating a cross entropy loss value of the first group of cat face sample data and the cat face label by using the cross entropy loss function;
screening out difficult samples in the second group of cat face sample data by using the original face recognition model;
calculating a triplet loss value using the triplet loss function for the difficult sample;
and summing the triplet loss value and the cross entropy loss value to obtain a final loss value of the original face recognition model, and finishing the optimization of the original face recognition model.
7. The cat face recognition method according to claim 6,
the data magnitude calculated by the cross entropy loss function is ten times that calculated by the triplet loss function.
8. A cat face identification system comprising:
the image acquisition module is used for acquiring a front face image of the cat acquired at a preset place;
the cat face key point extraction module is used for extracting cat face key points from the front face image, wherein the cat face key points comprise key points of the two eyes and the nose of the cat;
the affine transformation module is used for carrying out affine transformation on the cat face key points extracted by the cat face key point extraction module to obtain fixed points;
the cat face identification module is used for training a cat face identification model capable of identifying a cat face by integrating and using cat face sample data in advance, the output result of the cat face identification model at least comprises a cat face label, and the cat face labels of different cats are different;
the data interaction module is used for inputting the fixed point obtained by the affine transformation module into the cat face recognition module and obtaining an output result of a cat face recognition model in the cat face recognition module;
and the association recording module is used for acquiring behavior information and time information of the cat in the preset place, associating the behavior information and the time information with the cat face tag, and recording and storing the behavior information and the time information.
9. An electronic device, comprising a memory and a processor, the memory having stored thereon a program, characterized in that the program is adapted to carry out the method of any one of claims 1 to 7 when executed by the processor.
10. A readable storage medium on which a program is stored, the program being adapted to carry out the method of any one of claims 1 to 7 when executed by a processor.
CN202110253387.1A 2021-03-08 2021-03-08 Cat face identification method and system, electronic device and storage medium Pending CN112926479A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110253387.1A CN112926479A (en) 2021-03-08 2021-03-08 Cat face identification method and system, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110253387.1A CN112926479A (en) 2021-03-08 2021-03-08 Cat face identification method and system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN112926479A true CN112926479A (en) 2021-06-08

Family

ID=76171995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110253387.1A Pending CN112926479A (en) 2021-03-08 2021-03-08 Cat face identification method and system, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112926479A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060088207A1 (en) * 2004-10-22 2006-04-27 Henry Schneiderman Object recognizer and detector for two-dimensional images using bayesian network based classifier
US20150085139A1 (en) * 2010-07-02 2015-03-26 Sony Corporation Image processing device and image processing method
JP2019062436A (en) * 2017-09-27 2019-04-18 キヤノン株式会社 Image processing apparatus, image processing method, and program
WO2020019591A1 (en) * 2018-07-27 2020-01-30 北京字节跳动网络技术有限公司 Method and device used for generating information
CN109614928A (en) * 2018-12-07 2019-04-12 成都大熊猫繁育研究基地 Panda recognition algorithms based on limited training data
CN109670440A (en) * 2018-12-14 2019-04-23 央视国际网络无锡有限公司 The recognition methods of giant panda face and device
CN109858435A (en) * 2019-01-29 2019-06-07 四川大学 A kind of lesser panda individual discrimination method based on face image
CN110909618A (en) * 2019-10-29 2020-03-24 泰康保险集团股份有限公司 Pet identity recognition method and device
CN110956149A (en) * 2019-12-06 2020-04-03 中国平安财产保险股份有限公司 Pet identity verification method, device and equipment and computer readable storage medium
CN112241723A (en) * 2020-10-27 2021-01-19 新疆爱华盈通信息技术有限公司 Sex and age identification method, system, electronic device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Maheen Rashid et al.: "Interspecies Knowledge Transfer for Facial Keypoint Detection", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
喜欢打酱油的老鸟: "Cat face keypoint detection competition: three methods to easily implement cat face recognition!" (in Chinese), page 5, Retrieved from the Internet <URL:https://blog.csdn.net/weixin_42137700/article/details/103723324/> *

Similar Documents

Publication Publication Date Title
CN107169454B (en) Face image age estimation method and device and terminal equipment thereof
Norouzzadeh et al. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning
CN110414432B (en) Training method of object recognition model, object recognition method and corresponding device
WO2021077984A1 (en) Object recognition method and apparatus, electronic device, and readable storage medium
CN112183577A (en) Training method of semi-supervised learning model, image processing method and equipment
CN111523621A (en) Image recognition method and device, computer equipment and storage medium
CN110659665B (en) Model construction method of different-dimension characteristics and image recognition method and device
CA3066029A1 (en) Image feature acquisition
CN109101946B (en) Image feature extraction method, terminal device and storage medium
CN110909618B (en) Method and device for identifying identity of pet
CN111291809A (en) Processing device, method and storage medium
CN110222718B (en) Image processing method and device
CN113205002B (en) Low-definition face recognition method, device, equipment and medium for unlimited video monitoring
CN112001438B (en) Multi-mode data clustering method for automatically selecting clustering number
CN110414541B (en) Method, apparatus, and computer-readable storage medium for identifying an object
CN111694954B (en) Image classification method and device and electronic equipment
CN113449548A (en) Method and apparatus for updating object recognition model
Bouguila On multivariate binary data clustering and feature weighting
CN112749737A (en) Image classification method and device, electronic equipment and storage medium
US20230401838A1 (en) Image processing method and related apparatus
CN110675312B (en) Image data processing method, device, computer equipment and storage medium
Lake et al. Application of artificial intelligence algorithm in image processing for cattle disease diagnosis
CN113449550A (en) Human body weight recognition data processing method, human body weight recognition method and device
CN112926479A (en) Cat face identification method and system, electronic device and storage medium
CN115115910A (en) Training method, using method, device, equipment and medium of image processing model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination