CN110909618A - Pet identity recognition method and device


Info

Publication number
CN110909618A
CN110909618A
Authority
CN
China
Prior art keywords
pet
image
feature point
accurate positioning
training
Prior art date
Legal status
Granted
Application number
CN201911039645.5A
Other languages
Chinese (zh)
Other versions
CN110909618B (en)
Inventor
刘岩
Current Assignee
Taikang Insurance Group Co Ltd
Original Assignee
Taikang Insurance Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Taikang Insurance Group Co Ltd
Priority to CN201911039645.5A
Publication of CN110909618A
Application granted
Publication of CN110909618B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148 Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a pet identity recognition method and device, wherein the method comprises the following steps: determining a pet image area in an image to be identified according to a target image detection network obtained by pre-training; determining a pet face image area in the pet image area according to a face detection classifier obtained by pre-training; carrying out image alignment on the pet face image area to obtain an aligned image; and extracting the feature vector from the aligned image, and identifying the pet in the image to be identified according to the feature vector to obtain the identity information of the pet. The method provided by the embodiment of the invention can accurately identify the identity of a pet, and solves the problem that the identity of a pet cannot be accurately verified when the insurance industry insures pets.

Description

Pet identity recognition method and device
Technical Field
The invention relates to the field of pet identity recognition, in particular to a method and a device for recognizing pet identity.
Background
Most current animal identification methods remain at the level of distinguishing different species of animals, i.e., identifying whether an animal is a cat, a dog, or another species. Identification techniques for distinguishing different individuals of the same species are not yet mature. Meanwhile, the number of pets is gradually increasing, and people are paying more and more attention to their pets. Pet insurance is a property insurance product developed in the last two years, which mainly covers the health and accidents of various pets (such as cats and dogs).
However, because the existing techniques for identifying different individuals of the same species have poor accuracy, the identification of pet identity remains a major challenge.
Disclosure of Invention
The embodiment of the invention provides a pet identity recognition method and device, so as to solve the problem of poor accuracy in pet identity recognition in the prior art.
According to an aspect of the present invention, there is provided a method for identifying an identity of a pet, the method comprising:
determining a pet image area in an image to be identified according to a target image detection network obtained by pre-training;
determining a pet face image area in the pet image area according to a face detection classifier obtained by pre-training;
carrying out image alignment on the pet face image area to obtain an aligned image;
extracting the feature vector from the aligned image, and identifying the pet in the image to be identified according to the feature vector to obtain the identity information of the pet.
Optionally, the step of training to obtain the target image detection network includes:
acquiring the special training data determined for the pet in the image to be identified;
and training a target detection network by adopting a two-classification mode according to the special training data to obtain the trained target detection network as the target image detection network.
Optionally, the step of determining the pet face image region in the pet image region according to the pre-trained face detection classifier includes:
removing image areas with areas smaller than a preset area threshold value in the pet image area to obtain candidate areas;
inputting the candidate region into the face detection classifier, and determining a pet face image region in the candidate region.
Optionally, the step of training the face detection classifier includes:
acquiring a plurality of training images containing pet images and training images not containing pet images;
obtaining Haar characteristics of each training image;
constructing a training sample feature set according to the Haar features and whether the training image contains a pet face image;
and training the training sample feature set by adopting an AdaBoost method to obtain the face detection classifier.
Optionally, the step of performing image alignment on the pet face image region to obtain an aligned image includes:
acquiring coarse positioning feature points of the pet face organ positions in the pet face image region according to a feature point detection network obtained through pre-training;
segmenting the pet face image region into a plurality of local regions, wherein each local region comprises at least one coarse positioning feature point;
revising the feature point detection network according to the feature points of each local area to obtain a plurality of revised feature point detection networks, wherein each revised feature point detection network corresponds to one local area;
obtaining accurate positioning feature points corresponding to the coarse positioning feature points according to each local area and the corresponding revised feature point detection network;
reversely mapping the local areas back to the pet face image area, and determining the position relation among the accurate positioning feature points;
and aligning the pet face image region according to the position relation among the accurate positioning feature points to obtain an aligned image.
Optionally, the coarse positioning feature points of the pet face organ positions at least include: one coarse positioning feature point at the left eye position, one coarse positioning feature point at the right eye position, one coarse positioning feature point at the nose tip position, three coarse positioning feature points at the mouth position, and two coarse positioning feature points at the ear root position.
Optionally, the step of aligning the pet face image region according to the positional relationship between the accurate positioning feature points to obtain an aligned image includes:
determining whether the pet face image areas are aligned or not according to the position relation among the accurate positioning feature points;
if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relationship between the organ positions of the selected accurate positioning feature points;
determining an affine transformation matrix according to the selected accurate positioning characteristic points and the expected characteristic points;
and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
Optionally, the step of selecting at least three accurate positioning feature points, and calculating an expected feature point corresponding to each selected accurate positioning feature point according to a relationship between positions of the organ where the selected accurate positioning feature points are located includes:
selecting a first accurate positioning feature point on the left eye position, a second accurate positioning feature point on the right eye position and a third accurate positioning feature point on the nose tip position in the pet face image area;
calculating a first distance, which is the distance between the first accurate positioning feature point and the second accurate positioning feature point; a rotation angle, which is the included angle between the line connecting the first and second accurate positioning feature points and the horizontal straight line; and a second distance, which is the distance between the midpoint of that connecting line and the third accurate positioning feature point;
obtaining, according to the first distance, the rotation angle and the second distance, a first expected feature point corresponding to the first accurate positioning feature point, a second expected feature point corresponding to the second accurate positioning feature point, and a third expected feature point corresponding to the third accurate positioning feature point, wherein the second accurate positioning feature point simultaneously serves as the second expected feature point; the first expected feature point is located on the right side of the second expected feature point on the same horizontal straight line, and the distance between the first expected feature point and the second expected feature point is equal to the first distance; the third expected feature point is located below the midpoint of the connecting line, the distance between the third expected feature point and the midpoint of the connecting line is equal to the second distance, and the line connecting the third expected feature point and the midpoint of the connecting line is perpendicular to the horizontal straight line.
Optionally, the step of extracting the feature vector in the aligned image includes:
converting the size of the alignment image into a preset size;
and inputting the alignment image with the preset size into a preset residual error network model to obtain the multi-dimensional characteristic vector.
Optionally, the step of identifying the pet in the image to be identified according to the feature vector to obtain the identity information of the pet includes:
calculating the distance between each identity vector in a preset pet identity feature library and the feature vector to obtain a plurality of identity distance values;
and if the minimum value in the identity distance values is smaller than a preset threshold value, the pet identity in the image to be recognized is a warehousing identity, wherein the warehousing identity is identity information indicated by the identity vector corresponding to the minimum value in the identity distance values.
According to a further aspect of the present invention, there is provided a pet identification device, the device comprising:
the first area confirmation module is used for determining a pet image area in the image to be recognized according to a target image detection network obtained by pre-training;
the second area confirmation module is used for determining a pet face image area in the pet image area according to a face detection classifier obtained through pre-training;
the alignment module is used for carrying out image alignment on the pet face image area to obtain an aligned image;
and the identification module is used for extracting the feature vector from the aligned image and identifying the pet in the image to be identified according to the feature vector to obtain the identity information of the pet.
Optionally, the second area confirmation module includes:
the screening unit is used for removing image areas with the area smaller than a preset area threshold value in the pet image area to obtain a candidate area;
and the region confirmation unit is used for inputting the candidate region into the face detection classifier and determining the pet face image region in the candidate region.
Optionally, the alignment module includes:
the first feature point unit is used for acquiring rough positioning feature points of the pet face organ position in the pet face image region according to a feature point detection network obtained through pre-training;
the segmentation unit is used for segmenting the pet face image region into a plurality of local regions, wherein each local region comprises at least one coarse positioning feature point;
the network revising unit is used for revising the feature point detection network according to the feature points of each local area to obtain a plurality of revised feature point detection networks, wherein each revised feature point detection network corresponds to one local area;
a second feature point unit, configured to detect a network according to each local region and the corresponding revised feature point, to obtain an accurate positioning feature point corresponding to the coarse positioning feature point;
the reverse mapping unit is used for reversely mapping the local areas back to the pet face image area and determining the position relation between the accurate positioning feature points;
and the alignment unit is used for aligning the pet face image region according to the position relation between the accurate positioning feature points to obtain an aligned image.
Optionally, the coarse positioning feature points of the pet face organ positions at least include: one coarse positioning feature point at the left eye position, one coarse positioning feature point at the right eye position, one coarse positioning feature point at the nose tip position, three coarse positioning feature points at the mouth position, and two coarse positioning feature points at the ear root position.
Optionally, the aligning unit is specifically configured to determine whether the pet face image region is aligned according to a position relationship between the accurate positioning feature points;
if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relationship between the organ positions of the selected accurate positioning feature points;
determining an affine transformation matrix according to the selected accurate positioning characteristic points and the expected characteristic points;
and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
Optionally, the identification module includes:
a conversion unit configured to convert a size of the alignment image into a preset size;
and the extraction unit is used for inputting the alignment image with the preset size into a preset residual error network model to obtain the multi-dimensional characteristic vector.
Optionally, the identification module includes:
the calculating unit is used for calculating the distance between each identity vector in a preset pet identity feature library and the feature vector to obtain a plurality of identity distance values;
and the identification unit is used for determining the pet identity in the image to be identified as the warehousing identity if the minimum value in the identity distance values is smaller than a preset threshold value, wherein the warehousing identity is the identity information indicated by the identity vector corresponding to the minimum value in the identity distance values.
According to a further aspect of the present invention, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the steps of the pet identification method as described above.
According to yet another aspect of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for identifying an identity of a pet as described above.
In the embodiment of the invention, a pet image area in the image to be identified is first determined according to a target image detection network obtained by pre-training; then a pet face image area is determined in the pet image area according to a face detection classifier obtained by pre-training. The pet face image area is located by gradually narrowing the target area, which reduces the interference of surrounding pixels and improves the accuracy of locating the pet face image area. After the pet face image area is determined in the pet image area, image alignment is performed on the pet face image area to obtain an aligned image; the feature vector is then extracted from the aligned image, and the pet in the image to be identified is identified according to the feature vector to obtain the identity information of the pet. In this way, the pet in the image to be identified can be accurately identified and its identity information obtained, which solves the problem that the identity of a pet cannot be accurately verified when the insurance industry insures pets.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a flowchart illustrating steps of a method for identifying pet identities according to an embodiment of the present invention;
FIG. 2 is a flowchart of the steps for training a derived face detection classifier according to an embodiment of the present invention;
FIG. 3 is a block diagram of a pet identification device according to an embodiment of the present invention;
fig. 4 is a block diagram of a second area confirmation module according to an embodiment of the present invention;
FIG. 5 is a block diagram of an alignment module according to an embodiment of the present invention;
FIG. 6 is a block diagram of an identification module according to an embodiment of the present invention;
fig. 7 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The pet identity recognition method provided by the invention is suitable for most pets, such as cats and dogs, but is not limited thereto. When identity recognition is performed for one type of pet, different individuals of that type can be distinguished. For example, Persian cats are hard to tell apart with the naked eye, and some similar Persian cats cannot be distinguished by the human eye at all, yet the method provided by the invention can accurately identify the identities of different Persian cat individuals. In the following embodiments, the pet cat is used only as an example; the method is not limited to pet cat identification.
Referring to fig. 1, an embodiment of the present invention provides a method for identifying pet identity, including:
step 101: determining a pet image area in an image to be identified according to a target image detection network obtained by pre-training;
it should be noted that the target image detection network may identify the pet image area in the image. For example, after an image with a pet cat is detected by the target image detection network, the pet cat in the image can be selected out through the rectangular frame, so that the pet image area is determined to be the selected part of the rectangular frame. In order to accurately identify the pet image area in the image to be identified, a target detection network for detecting a target needs to be trained, and the obtained trained target detection network is the target image detection network. The target detection network can adopt a common YOLO-V3 network, a ResNet network, a GoogleNet network and the like.
Specifically, the step of training to obtain the target image detection network includes:
acquiring the special training data determined for the pet in the image to be identified;
and training the target detection network by adopting a two-classification mode according to the special training data to obtain the trained target detection network as a target image detection network.
Taking pet cats as an example, a predetermined number of common kinds of pet cats are acquired and classified as one class, while the rest are classified as the other class, and the special training data is obtained from the two classes. The predetermined number may be set as desired and may be, for example, 42, i.e., the common predetermined kinds of pet cats may be the 42 common kinds of pet cats, but this is not limited thereto.
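As an illustration of the two-classification training-data construction described above, the following is a minimal sketch; the directory layout and breed list are hypothetical assumptions, not part of the patent:

```python
# Sketch: build binary (pet vs. non-pet) labels for training the target detection network.
# The directory layout and the COMMON_BREEDS list are illustrative assumptions.
from pathlib import Path

COMMON_BREEDS = {"persian", "siamese", "ragdoll"}  # in practice, e.g. 42 common breeds

def build_binary_labels(image_root: str):
    """Map every image to class 1 (pet) if its folder is a common breed, else 0."""
    samples = []
    for img_path in Path(image_root).rglob("*.jpg"):
        breed = img_path.parent.name.lower()
        label = 1 if breed in COMMON_BREEDS else 0
        samples.append((str(img_path), label))
    return samples

if __name__ == "__main__":
    for path, label in build_binary_labels("data/cats")[:5]:
        print(label, path)
```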
Step 102: determining a pet face image area in the pet image area according to a face detection classifier obtained by pre-training;
it should be noted that the face detection classifier may select the pet face image region within the pet image region by means of a bounding frame. The pet face occupies the main part of the pet face image region selected by the frame.
Since the number of the pet image regions may be multiple, the number of the determined pet face image regions may also be multiple, and in order to reduce interference, the pet face image region with a smaller area may be filtered out, and only the pet face image region with a larger area is reserved. Specifically, the step of determining the pet face image region in the pet image region according to the pre-trained face detection classifier includes:
removing image areas with areas smaller than a preset area threshold value in the pet image area to obtain candidate areas;
and inputting the candidate region into a face detection classifier, and determining a pet face image region in the candidate region.
The preset area threshold may be set by itself, and may be, for example, one fourth of the area of the image to be recognized, but is not limited thereto.
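A minimal sketch of the area-based filtering described above; the (x, y, width, height) box format and the one-quarter ratio are assumptions taken from the example in the text:

```python
# Sketch: keep only detected pet regions whose area exceeds a preset threshold.
# Boxes are assumed to be (x, y, width, height) tuples; the ratio of one quarter
# of the image area follows the example given in the text.
def filter_candidate_regions(boxes, image_width, image_height, ratio=0.25):
    min_area = image_width * image_height * ratio
    return [(x, y, w, h) for (x, y, w, h) in boxes if w * h >= min_area]

candidates = filter_candidate_regions([(10, 10, 200, 180), (5, 5, 20, 20)], 400, 300)
print(candidates)  # only the large region survives
```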
Step 103: carrying out image alignment on the pet face image area to obtain an aligned image;
it should be noted that, since the pet face shown in the obtained pet face image region may be a side face or otherwise not aligned, the features extracted from such an image cannot accurately represent the features of the pet in the image. Therefore, an image alignment technique can be adopted to align the obtained pet face image regions.
Step 104: and extracting the characteristic vector in the aligned image, and identifying the pet in the image to be identified according to the characteristic vector to obtain the identity information of the pet.
It should be noted that, a database may be preset, and feature vectors of known pet identity information are stored in the database, that is, pet identity information of a plurality of pets is stored in the database, and each piece of pet identity information corresponds to one feature vector, and preferably, the feature vector is extracted in the same manner as the feature vector in step 104.
When the pet in the image to be recognized is recognized through the extracted feature vector, the obtained feature vector is matched with the feature vector in the database, and if the matched first feature vector is obtained, the identity information of the pet in the image to be recognized can be determined to be the identity of the pet corresponding to the first feature vector; if the matched characteristic vector is not obtained, the identity information of the pet in the image to be recognized can be determined to be not the known pet identity information in the database.
In the embodiment of the invention, a pet image area in the image to be identified is first determined according to a target image detection network obtained by pre-training; then a pet face image area is determined in the pet image area according to a face detection classifier obtained by pre-training. The pet face image area is located by gradually narrowing the target area, which reduces the interference of surrounding pixels and improves the accuracy of locating the pet face image area. After the pet face image area is determined in the pet image area, image alignment is performed on the pet face image area to obtain an aligned image; the feature vector is then extracted from the aligned image, and the pet in the image to be identified is identified according to the feature vector to obtain the identity information of the pet. In this way, the pet in the image to be identified can be accurately identified and its identity information obtained, which solves the problem that the identity of a pet cannot be accurately verified when the insurance industry insures pets.
As shown in fig. 2, in order to obtain a face detection classifier with higher accuracy, on the basis of the above embodiment of the present invention, in the embodiment of the present invention, the step of training the obtained face detection classifier includes:
step 201: acquiring a plurality of training images containing pet images and training images not containing pet images;
it should be noted that, taking the identity recognition of pet cats as an example, the training images include training images containing a pet cat and training images not containing a pet cat.
Step 202: obtaining Haar characteristics of each training image;
it should be noted that Haar features include four categories of features: edge features, linear features, center features, and diagonal features. One or more categories of features may be acquired. For example, 4 edge features, 4 linear features, 2 center features, and 4 diagonal features may be selected, giving 14 features in total, but the selection is not limited thereto.
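As an illustration of how a single Haar feature value can be computed, the following sketch evaluates a two-rectangle edge feature with an integral image; the window coordinates and the NumPy implementation are illustrative, not prescribed by the patent:

```python
# Sketch: one two-rectangle (edge-type) Haar feature computed from an integral image.
import numpy as np

def integral_image(gray):
    ii = gray.astype(np.float64).cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))     # zero row/column so rect_sum needs no bounds checks

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def edge_feature(gray, x, y, w, h):
    """Edge-type Haar feature: sum of the left half minus sum of the right half."""
    ii = integral_image(gray)
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)
```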
Step 203: constructing a training sample feature set according to the Haar features and whether the training image contains the pet face image;
step 204: and training the training sample feature set by adopting an AdaBoost method to obtain the face detection classifier.
It should be noted that, for convenience of understanding, the embodiment of the present invention only shows how to obtain the face detection classifier by taking the selection of 14 features as an example. And the pet in this embodiment is exemplified by a pet cat. First, m samples, i.e., m training images, are acquired, and then the selected 14 features are acquired for each training image. And then constructing an initial weak classifier for each feature, constructing a training sample feature set, and training by an AdaBoost method to obtain a weak classifier. Finally, 14 weak classifiers are obtained, and then a strong classifier is generated according to the obtained 14 weak classifiers. The strong classifier is a face detection classifier.
The steps for training the weak classifier corresponding to one feature according to the embodiment of the present invention are as follows:
Let X denote the feature set of the sample data of the m samples, and let Y denote the label set of the cat face sample features; since cat face detection is a binary classification problem, Y = {-1, 1}. Let S = {(x_i, y_i) | i = 1, 2, ..., m} be the training sample feature set, where x_i ∈ X and y_i ∈ Y.
The weights of the m samples are initialized as D_1(i) = 1/m, where D_t(i) denotes the weight assigned to the sample (x_i, y_i) of the training sample feature set in the t-th training round. The sample weights are updated once per training round. Assume the weak classifier of the t-th round on the training sample feature set S is h_t; then, before the next round of training, the weights of the samples in the training sample feature set are updated according to the following rule.
The sum of the weights of the misclassified samples is calculated as:
ε_t = Σ_{i: h_t(x_i) ≠ y_i} D_t(i)
where ε_t represents the sum of the weights of the misclassified samples; h_t represents the weak classifier of the t-th round; x_i represents the x value of the i-th sample in the training sample feature set; and y_i represents the y value of the i-th sample in the training sample feature set.
Let
α_t = (1/2)·ln((1 − ε_t)/ε_t)
Then the weight of each training sample in the (t+1)-th round is:
D_{t+1}(i) = D_t(i)·exp(−α_t·y_i·h_t(x_i)) / Z_t
where Z_t is a normalization factor ensuring that Σ_i D_{t+1}(i) = 1.
The weak classifier corresponding to each feature can be obtained by the above method. A fixed-round training scheme can be used in the training process, i.e., a preset number of training rounds is set and training stops once that number of rounds is reached. Of course, the error can also be monitored during training, and training stopped when the error falls below a preset threshold.
Then, the strong classifier finally generated from all the weak classifiers is:
H(x_i) = 1 if V(x_i) ≥ Q, and −1 otherwise
where
V(x_i) = Σ_{t=1}^{n} α_t·h_t(x_i)
represents the accumulated weighted sum of the outputs of all weak classifiers for the sample x_i; n represents the number of weak classifiers, i.e., the number of selected Haar features; Q represents the binary classification threshold; and h_t denotes the weak classifier of the t-th round.
The binary classification threshold Q is calculated as follows:
Assume the set of k positive samples in the training sample feature set is denoted P, and the set of f negative samples is denoted N. Let Pv be the set of values obtained by passing all positive samples in P through V(x_i), and let Nv be the set of values obtained by passing all samples in N through V(x_i). Then the mean value of Pv is:
mean(Pv) = (1/k)·Σ_{x_i ∈ P} V(x_i)
the mean value of Nv is:
mean(Nv) = (1/f)·Σ_{x_i ∈ N} V(x_i)
and the threshold Q is:
Q = (mean(Pv) + mean(Nv)) / 2
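For concreteness, the following is a minimal NumPy sketch of the AdaBoost training loop described above; using decision stumps over the Haar feature columns as the per-feature weak classifiers and taking the threshold Q as the midpoint of the positive- and negative-sample means of V are implementation assumptions, not prescribed by the patent:

```python
# Sketch: AdaBoost training with per-feature decision stumps as weak classifiers.
import numpy as np

def train_adaboost(features, labels, n_rounds):
    """features: (m, n) matrix of Haar feature values; labels: (m,) array of +1 / -1."""
    m = len(labels)
    weights = np.full(m, 1.0 / m)                            # D_1(i) = 1/m
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = _best_stump(features, labels, weights)
        pred = stump["predict"](features)
        eps = np.clip(weights[pred != labels].sum(), 1e-10, 1 - 1e-10)  # epsilon_t
        alpha = 0.5 * np.log((1 - eps) / eps)                # alpha_t
        weights = weights * np.exp(-alpha * labels * pred)   # weight update
        weights /= weights.sum()                             # divide by Z_t so weights sum to 1
        stumps.append(stump)
        alphas.append(alpha)

    def V(x):                                                # accumulated weighted votes V(x_i)
        return sum(a * s["predict"](x) for a, s in zip(alphas, stumps))

    scores = V(features)
    Q = 0.5 * (scores[labels == 1].mean() + scores[labels == -1].mean())
    return lambda x: np.where(V(x) >= Q, 1, -1)              # strong classifier H

def _best_stump(features, labels, weights):
    """Pick the single feature/threshold/sign with the lowest weighted error."""
    best = {"err": np.inf, "predict": None}
    for j in range(features.shape[1]):
        for thr in np.unique(features[:, j]):
            for sign in (1, -1):
                pred = np.where(sign * (features[:, j] - thr) >= 0, 1, -1)
                err = weights[pred != labels].sum()
                if err < best["err"]:
                    best = {"err": err,
                            "predict": lambda f, j=j, t=thr, s=sign:
                                np.where(s * (f[:, j] - t) >= 0, 1, -1)}
    return best
```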
in order to improve the accuracy of image alignment, on the basis of the above embodiment of the present invention, in the embodiment of the present invention, the step of performing image alignment on the pet face image region to obtain an aligned image includes:
acquiring coarse positioning feature points of the pet face organ positions in the pet face image region according to a feature point detection network obtained through pre-training;
dividing the pet face image region into a plurality of local regions, wherein each local region comprises at least one coarse positioning characteristic point;
revising the characteristic point detection network according to the characteristic points of each local area to obtain a plurality of revised characteristic point detection networks, wherein each revised characteristic point detection network corresponds to one local area;
acquiring accurate positioning characteristic points corresponding to the rough positioning characteristic points according to each local area and the corresponding revised characteristic point detection network;
reversely mapping the local areas back to the pet face image area, and determining the position relation between the accurate positioning feature points;
and aligning the pet face image region according to the position relation among the accurate positioning feature points to obtain an aligned image.
It should be noted that the accuracy of the finally obtained feature points is improved by locating the feature points twice. When the coarse positioning feature points are determined, a softmax layer of a preset dimension is cascaded after the last fully connected layer of a ResNet network, the modified ResNet network is then trained on calibrated training data, and the trained ResNet network is used to determine the coarse positioning feature points. The coarse positioning feature points of the pet face organ positions at least include: one coarse positioning feature point at the left eye position, one coarse positioning feature point at the right eye position, one coarse positioning feature point at the nose tip position, three coarse positioning feature points at the mouth position, and two coarse positioning feature points at the ear root position.
When the pet face image area is divided into a plurality of local areas, preferably, the pet face image area can be divided into 5 local areas, namely, the first local area only covers one coarse positioning feature point on the nose tip position; a second local region containing only three coarse localization feature points on the mouth position; a third local region containing only one coarse localization feature point at the right eye position; a fourth partial area containing only one coarse localization feature point at the left eye position; a fifth partial region containing only two coarse localization feature points at the position of the ear root. And revising the residual error network trained when the coarse positioning feature points are determined according to the number of the feature points of each local area, and determining the precisely positioned feature points by using the revised residual error network. For the convenience of the subsequent description, the trained residual error network when determining the coarse positioning feature points is also called a positioning network.
Specifically, when the accurate positioning feature point corresponding to the coarse positioning feature point in the first local area is determined:
and adjusting the input of the positioning network according to the size of the first local area, then defining the output of the positioning network as 1 characteristic point and 2-dimensional vector, and revising the output layer softmax of the positioning network as 2-dimensional output. And retraining the adjusted positioning network, inputting the first local area into the retrained positioning network, and obtaining the accurate positioning feature points corresponding to the coarse positioning feature points in the first local area.
Specifically, when the accurate positioning feature point corresponding to the coarse positioning feature point in the second local area is determined:
and adjusting the input of the positioning network according to the size of the second local area, then defining the output of the positioning network as 3 characteristic points and 6-dimensional vectors, and then revising the output layer softmax of the positioning network as 6-dimensional output. And retraining the adjusted positioning network, inputting the second local area into the retrained positioning network, and obtaining the accurate positioning feature points corresponding to the coarse positioning feature points in the second local area.
Specifically, when the accurate positioning feature point corresponding to the coarse positioning feature point in the third local area is determined:
and adjusting the input of the positioning network according to the size of the third local area, then defining the output of the positioning network as 1 characteristic point and 2-dimensional vector, and then revising the output layer softmax of the positioning network as 2-dimensional output. And inputting the retrained positioning network into the third local area to obtain the accurate positioning feature points corresponding to the coarse positioning feature points in the third local area.
Specifically, when the accurate positioning feature point corresponding to the coarse positioning feature point in the fourth local area is determined:
and adjusting the input of the positioning network according to the size of the fourth local area, then defining the output of the positioning network as 1 characteristic point and 2-dimensional vector, and then revising the output layer softmax of the positioning network as 2-dimensional output. And inputting the retrained positioning network into the fourth local area to obtain the accurate positioning feature points corresponding to the coarse positioning feature points in the fourth local area.
Specifically, when the accurate positioning feature point corresponding to the coarse positioning feature point in the fifth local area is determined:
and adjusting the input of the positioning network according to the size of the fifth local area, then defining the output of the positioning network as 2 characteristic points and 4-dimensional vectors, and then revising the output layer softmax of the positioning network as 4-dimensional output. And inputting the retrained positioning network into the fifth local area to obtain the accurate positioning feature points corresponding to the coarse positioning feature points in the fifth local area.
On the basis of the above embodiments of the present invention, in the embodiments of the present invention, the step of aligning the pet face image region according to the positional relationship between the precise positioning feature points to obtain an aligned image includes:
determining whether the pet face image areas are aligned or not according to the position relation among the accurate positioning feature points;
if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relationship between the organ positions of the selected accurate positioning feature points;
determining an affine transformation matrix according to the selected accurate positioning characteristic points and the expected characteristic points;
and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
It should be noted that, when judging whether the pet face image region is aligned, it can be judged whether the included angle formed by the nose tip and the line connecting the two eyes can be bisected by a straight line that passes through the nose tip and is perpendicular to the horizontal line, or whether the included angle between the line connecting the two eyes and the horizontal line is smaller than a preset angle. If the image is already aligned, the feature vector in the aligned image is extracted directly, and the pet in the image to be identified is identified according to the feature vector to obtain the identity information of the pet.
The accurately positioned feature points and the expected feature points corresponding thereto refer to points on the unaligned image and points on the aligned image, respectively, at a given position of the pet's face. The step of selecting at least three accurate positioning feature points and calculating the expected feature point corresponding to each selected accurate positioning feature point according to the relationship between the positions of the organs where the selected points are located includes:
selecting a first accurate positioning feature point on the left eye position, a second accurate positioning feature point on the right eye position and a third accurate positioning feature point on the nose tip position in the pet face image area;
calculating a first distance, which is the distance between the first accurate positioning feature point and the second accurate positioning feature point; a rotation angle, which is the included angle between the line connecting the first and second accurate positioning feature points and the horizontal straight line; and a second distance, which is the distance between the midpoint of that connecting line and the third accurate positioning feature point;
obtaining, according to the first distance, the rotation angle and the second distance, a first expected feature point corresponding to the first accurate positioning feature point, a second expected feature point corresponding to the second accurate positioning feature point, and a third expected feature point corresponding to the third accurate positioning feature point, wherein the second accurate positioning feature point simultaneously serves as the second expected feature point; the first expected feature point is located on the right side of the second expected feature point on the same horizontal straight line, and the distance between the first expected feature point and the second expected feature point is equal to the first distance; the third expected feature point is located below the midpoint of the connecting line, the distance between the third expected feature point and the midpoint of the connecting line is equal to the second distance, and the line connecting the third expected feature point and the midpoint of the connecting line is perpendicular to the horizontal straight line.
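A minimal sketch of the expected-point computation described above; treating points as (x, y) image coordinates with y increasing downwards, placing the expected eye points on a horizontal line (which absorbs the rotation angle), and using the midpoint of the expected eye line as the reference midpoint are assumptions of this sketch:

```python
# Sketch: expected (aligned) positions of the left eye, right eye and nose tip.
import math

def expected_points(left_eye, right_eye, nose_tip):
    d1 = math.dist(left_eye, right_eye)                      # first distance
    mid = ((left_eye[0] + right_eye[0]) / 2,
           (left_eye[1] + right_eye[1]) / 2)                 # midpoint of the eye line
    d2 = math.dist(mid, nose_tip)                            # second distance
    exp_right = right_eye                                    # second expected point = second accurate point
    exp_left = (right_eye[0] + d1, right_eye[1])             # same horizontal line, first distance apart
    exp_mid = ((exp_left[0] + exp_right[0]) / 2, exp_right[1])
    exp_nose = (exp_mid[0], exp_mid[1] + d2)                 # directly below the midpoint
    return exp_left, exp_right, exp_nose
```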
Taking 3 precisely positioned feature points and corresponding expected feature points as examples, the step of solving the affine transformation matrix is as follows:
assume the affine transformation matrix is:
M = [ a00  a01  b00
      a10  a11  b10 ]
the coordinates of six feature points, namely the above 3 accurately positioned feature points and the corresponding expected feature points, are known.
Assume the input point is the first accurately positioned feature point B = (x, y) and the corresponding first expected feature point is B1 = (x1, y1); then affine transformation of B gives:
a00·x + a01·y + b00 = x1
a10·x + a11·y + b10 = y1
similarly, another four equations can be obtained from the other four feature points. From these six equations the six unknowns of the affine transformation matrix, namely a00, a01, b00, a10, a11 and b10, can be solved, thereby determining the affine transformation matrix. Performing affine transformation on the pixel points of the pet face image region with the known affine transformation matrix to obtain the aligned image is a conventional method and is not repeated here.
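Given the three accurately positioned points and the three expected points, the affine matrix can also be solved and applied with OpenCV, as in this sketch (OpenCV is an illustrative choice; the patent solves the six linear equations directly):

```python
# Sketch: solve the affine transformation from three point pairs and warp the face region.
import cv2
import numpy as np

def align_face(face_img, src_points, dst_points):
    """src_points / dst_points: three (x, y) pairs, e.g. left eye, right eye, nose tip."""
    src = np.float32(src_points)
    dst = np.float32(dst_points)
    M = cv2.getAffineTransform(src, dst)     # 2x3 matrix [[a00 a01 b00], [a10 a11 b10]]
    h, w = face_img.shape[:2]
    return cv2.warpAffine(face_img, M, (w, h))
```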
On the basis of the above embodiments of the present invention, in the embodiments of the present invention, the step of extracting the feature vector in the aligned image includes:
converting the size of the aligned image into a preset size;
and inputting the alignment image with the preset size into a preset residual error network model to obtain the multi-dimensional characteristic vector.
It should be noted that converting the size of the aligned image to the preset size facilitates direct input into the preset residual network model. The residual network model can be obtained by training based on a ResNet network; the dimensionality of the feature vector can be defined as needed, the final classification layer of the ResNet network is removed, and a fully connected layer is used as the output. For example, a 1000-dimensional feature vector can be obtained from the preset residual network model.
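A minimal sketch of the feature-extraction step under stated assumptions (ResNet-50 backbone, 224x224 preset size, 1000-dimensional output); the patent only requires a residual network whose final classification layer is removed and whose output is a fully connected layer:

```python
# Sketch: feature-vector extraction with a ResNet whose classification layer is
# replaced by a fully connected output of the desired dimensionality.
import torch
from torch import nn
from torchvision import models, transforms

class FaceEmbedder(nn.Module):
    def __init__(self, dim: int = 1000):
        super().__init__()
        self.backbone = models.resnet50()
        # Replace the final classification layer with a fully connected layer
        # whose output size is the desired feature-vector dimensionality.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, dim)

    def forward(self, x):
        return self.backbone(x)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),           # convert the aligned image to the preset size
    transforms.ToTensor(),
])

def extract_feature_vector(aligned_image, model):
    x = preprocess(aligned_image).unsqueeze(0)   # aligned_image: a PIL image
    with torch.no_grad():
        return model(x).squeeze(0)               # e.g. a 1000-dimensional feature vector
```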
On the basis of the above embodiments of the present invention, in the embodiments of the present invention, the step of identifying the pet in the image to be identified according to the feature vector to obtain the identity information of the pet includes:
calculating the distance between each identity vector and the feature vector in a preset pet identity feature library to obtain a plurality of identity distance values;
and if the minimum value in the identity distance values is smaller than a preset threshold value, the pet identity in the image to be recognized is a warehousing identity, wherein the warehousing identity is identity information indicated by the identity vector corresponding to the minimum value in the identity distance values.
It should be noted that the distance between the identity vector and the feature vector may be a euclidean distance, a cosine distance, etc.
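The matching step can be sketched as follows, using Euclidean distance; the library structure and the threshold value are assumptions for illustration (a cosine distance could be substituted, as noted above):

```python
# Sketch: match the extracted feature vector against a preset identity feature library.
import numpy as np

def identify(feature_vec, identity_library, threshold=0.8):
    """identity_library: dict mapping pet identity -> stored identity vector."""
    names = list(identity_library)
    vectors = np.stack([identity_library[n] for n in names])
    dists = np.linalg.norm(vectors - feature_vec, axis=1)    # identity distance values
    best = int(np.argmin(dists))
    if dists[best] < threshold:
        return names[best]          # identity already stored in the library
    return None                     # unknown pet
```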
The pet identification method provided by the embodiment of the invention is described above, and the pet identification device provided by the embodiment of the invention is described below with reference to the accompanying drawings.
Referring to fig. 3, an embodiment of the present invention further provides a pet identification device, including:
the first area confirmation module 31 is configured to determine a pet image area in an image to be recognized according to a target image detection network obtained through pre-training;
a second region confirmation module 32, configured to determine a pet face image region in the pet image region according to a pre-trained face detection classifier;
the alignment module 33 is used for performing image alignment on the pet face image area to obtain an aligned image;
and the identification module 34 is configured to extract the feature vector in the aligned image, and identify the pet in the image to be identified according to the feature vector to obtain the identity information of the pet.
It should be noted that the step of training the target image detection network in the first area confirmation module 31 includes:
acquiring the special training data determined for the pet in the image to be identified;
and training the target detection network by adopting a two-classification mode according to the special training data to obtain the trained target detection network as a target image detection network.
Referring to fig. 4, the second area confirmation module 32 includes:
the screening unit 321 is configured to remove an image region with an area smaller than a preset area threshold in the pet image region to obtain a candidate region;
and a region confirmation unit 322, configured to input the candidate region into the face detection classifier, and determine a pet face image region in the candidate region.
Wherein the step of training to obtain the face detection classifier comprises:
acquiring a plurality of training images containing pet images and training images not containing pet images; obtaining Haar characteristics of each training image; constructing a training sample feature set according to the Haar features and whether the training image contains the pet face image; and training the training sample feature set by adopting an AdaBoost method to obtain the face detection classifier.
Referring to fig. 5, the alignment module 33 includes:
the first feature point unit 331 is configured to obtain a coarse positioning feature point of a pet face organ position in the pet face image region according to a pre-trained feature point detection network;
a segmentation unit 332 for segmenting the pet face image region into a plurality of local regions, wherein each local region includes at least one coarse positioning feature point;
a network revising unit 333, configured to revise the feature point detection networks according to the feature points of each local area, respectively, to obtain a plurality of revised feature point detection networks, where each revised feature point detection network corresponds to one local area;
a second feature point unit 334, configured to detect a network according to each local region and the corresponding revised feature point, to obtain an accurate positioning feature point corresponding to the coarse positioning feature point;
the reverse mapping unit 335 is used for reversely mapping the local areas back to the pet face image area and determining the position relation between the accurate positioning feature points;
and an alignment unit 336, configured to align the pet face image region according to the positional relationship between the accurate positioning feature points, so as to obtain an aligned image.
Wherein, the coarse positioning feature points of the pet face organ position at least comprise: one coarse positioning feature point at the left eye position, one coarse positioning feature point at the right eye position, one coarse positioning feature point at the nose tip position, three coarse positioning feature points at the mouth position, and two coarse positioning feature points at the ear root position.
An alignment unit 336, configured to determine whether the pet face image region is aligned according to the position relationship between the accurate positioning feature points;
if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relationship between the organ positions of the selected accurate positioning feature points;
determining an affine transformation matrix according to the selected accurate positioning characteristic points and the expected characteristic points;
and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
The alignment unit 336 is specifically configured to select a first accurate positioning feature point on the left eye position, a second accurate positioning feature point on the right eye position, and a third accurate positioning feature point on the nose tip position in the pet face image region;
calculating a first distance, which is the distance between the first accurate positioning feature point and the second accurate positioning feature point; a rotation angle, which is the included angle between the line connecting the first and second accurate positioning feature points and the horizontal straight line; and a second distance, which is the distance between the midpoint of that connecting line and the third accurate positioning feature point;
obtaining, according to the first distance, the rotation angle and the second distance, a first expected feature point corresponding to the first accurate positioning feature point, a second expected feature point corresponding to the second accurate positioning feature point, and a third expected feature point corresponding to the third accurate positioning feature point, wherein the second accurate positioning feature point simultaneously serves as the second expected feature point; the first expected feature point is located on the right side of the second expected feature point on the same horizontal straight line, and the distance between the first expected feature point and the second expected feature point is equal to the first distance; the third expected feature point is located below the midpoint of the connecting line, the distance between the third expected feature point and the midpoint of the connecting line is equal to the second distance, and the line connecting the third expected feature point and the midpoint of the connecting line is perpendicular to the horizontal straight line.
Referring to fig. 6, the identification module 34 includes:
a conversion unit 341 configured to convert the size of the alignment image into a preset size;
the extracting unit 342 is configured to input the aligned image with the preset size into a preset residual error network model to obtain a multi-dimensional feature vector.
The calculating unit 343 is configured to calculate a distance between each identity vector in the preset pet identity feature library and the feature vector, so as to obtain a plurality of identity distance values;
The identifying unit 344 is configured to, if the minimum value among the identity distance values is smaller than a preset threshold, determine that the pet identity in the image to be identified is a warehousing identity (i.e., an identity already enrolled in the library), where the warehousing identity is the identity information indicated by the identity vector corresponding to that minimum value.
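For concreteness, the conversion, extraction, calculation and identification units can be sketched as follows. This assumes PyTorch and torchvision; the choice of resnet18 as the residual network, the 224x224 preset size, Euclidean distance, and the 0.8 threshold are placeholder values for the example and are not fixed by this disclosure.

```python
import numpy as np
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

# Illustrative embedding extractor: a stock ResNet with its classification
# head removed, standing in for the preset residual network model.
_backbone = resnet18()                # randomly initialized for the sketch
_backbone.fc = torch.nn.Identity()    # drop the classifier, keep the 512-d features
_backbone.eval()

_preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),             # the "preset size"
    T.ToTensor(),
])

def extract_embedding(aligned_img):
    """Map an aligned face crop (H x W x 3 uint8 array) to a feature vector."""
    with torch.no_grad():
        x = _preprocess(aligned_img).unsqueeze(0)
        return _backbone(x).squeeze(0).numpy()

def identify(aligned_img, identity_library, threshold=0.8):
    """identity_library: dict mapping pet id -> enrolled identity vector.
    Returns the enrolled id whose vector is closest to the query, or None
    if even the closest distance exceeds the threshold."""
    query = extract_embedding(aligned_img)
    distances = {pid: np.linalg.norm(query - vec) for pid, vec in identity_library.items()}
    best_id = min(distances, key=distances.get)
    return best_id if distances[best_id] < threshold else None
```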
The pet identity recognition device provided by the embodiment of the invention can implement each process of the pet identity recognition method embodiments described above, which is not repeated here.
In the embodiment of the invention, a pet image region in an image to be identified is first determined according to a target image detection network obtained through training; a pet face image region is then determined within the pet image region according to a pre-trained face detection classifier. Because the pet face image region is located by progressively narrowing the target region, interference from surrounding pixels is reduced and the accuracy of locating the pet face image region is improved. After the pet face image region is determined, image alignment is performed on it to obtain an aligned image; a feature vector is then extracted from the aligned image, and the pet in the image to be identified is identified according to the feature vector to obtain the identity information of the pet. In this way, the pet in the image to be identified can be accurately identified and its identity information obtained, addressing the problem that a pet's identity cannot be accurately verified when the pet is insured in the insurance industry.
On the other hand, the embodiment of the present invention further provides an electronic device, which includes a memory, a processor, a bus, and a computer program stored on the memory and executable on the processor, where the processor implements the steps in the method for identifying an identity of a pet when executing the program.
For example, fig. 7 shows a schematic physical structure diagram of an electronic device.
As shown in fig. 7, the electronic device may include: a processor 1010, a communications interface 1020, a memory 1030, and a communication bus 1040, where the processor 1010, the communications interface 1020, and the memory 1030 communicate with one another via the communication bus 1040. The processor 1010 may call the logic instructions in the memory 1030 to perform the following method:
determining a pet image area in an image to be identified according to a target image detection network obtained by pre-training;
determining a pet face image area in the pet image area according to a face detection classifier obtained by pre-training;
carrying out image alignment on the pet face image area to obtain an aligned image;
extracting the characteristic vector in the aligned image, and identifying the pet in the image to be identified according to the characteristic vector to obtain the identity information of the pet.
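Read together, the four steps above form a single pipeline. The sketch below merely chains them; each stage is passed in as a callable because the concrete detector, face classifier, alignment routine and matcher are described elsewhere in this document, and the function name is illustrative.

```python
def recognize_pet(image, detect_pet_region, detect_pet_face, align_face,
                  extract_embedding, match_identity):
    """Chain the four steps executed by the processor: pet detection,
    face detection, alignment, and feature extraction plus identification.
    Each argument after `image` is a callable implementing one stage."""
    pet_region = detect_pet_region(image)        # target image detection network
    face_region = detect_pet_face(pet_region)    # pre-trained face detection classifier
    aligned = align_face(face_region)            # image alignment
    feature_vector = extract_embedding(aligned)  # feature vector from the residual network
    return match_identity(feature_vector)        # identity information of the pet
```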
Furthermore, the logic instructions in the memory 1030 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In still another aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the pet identity recognition method provided in the foregoing embodiments, the method including, for example:
determining a pet image area in an image to be identified according to a target image detection network obtained by pre-training;
determining a pet face image area in the pet image area according to a face detection classifier obtained by pre-training;
carrying out image alignment on the pet face image area to obtain an aligned image;
extracting the characteristic vector in the aligned image, and identifying the pet in the image to be identified according to the characteristic vector to obtain the identity information of the pet.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (13)

1. A pet identity recognition method is characterized by comprising the following steps:
determining a pet image area in an image to be identified according to a target image detection network obtained by pre-training;
determining a pet face image area in the pet image area according to a face detection classifier obtained by pre-training;
carrying out image alignment on the pet face image area to obtain an aligned image;
extracting the characteristic vector in the aligned image, and identifying the pet in the image to be identified according to the characteristic vector to obtain the identity information of the pet.
2. The method of claim 1, wherein the step of training the target image detection network comprises:
acquiring special training data determined for the pet in the image to be identified;
and training a target detection network in a binary classification mode according to the special training data, to obtain the trained target detection network as the target image detection network.
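Claim 2 does not fix a particular detector architecture. Purely as an illustration, the sketch below trains a stock torchvision Faster R-CNN with two classes (background versus pet), which is one way to realize the binary classification training described above; the data loader is assumed to yield images and bounding-box targets in torchvision's detection format.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def train_pet_detector(dataloader, num_epochs=10, device="cpu"):
    """Train a two-class (background vs. pet) detector on the special training data.
    `dataloader` is assumed to yield (images, targets): a list of image tensors
    and a list of dicts with 'boxes' and 'labels' (label 1 = pet)."""
    model = fasterrcnn_resnet50_fpn(num_classes=2)  # class 0 is reserved for background
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
    for _ in range(num_epochs):
        for images, targets in dataloader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            losses = model(images, targets)   # dict of detection losses in train mode
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```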
3. The method of claim 1, wherein the step of determining the pet face image region in the pet image region according to a pre-trained face detection classifier comprises:
removing image areas with areas smaller than a preset area threshold value in the pet image area to obtain candidate areas;
inputting the candidate region into the face detection classifier, and determining a pet face image region in the candidate region.
4. The method of claim 1 or 3, wherein the step of training the face detection classifier comprises:
acquiring a plurality of training images containing pet images and training images not containing pet images;
obtaining Haar features of each training image;
constructing a training sample feature set according to the Haar features and whether the training image contains a pet face image;
and training the training sample feature set by adopting an AdaBoost method to obtain the face detection classifier.
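A minimal sketch of the Haar-plus-AdaBoost training of claim 4, using scikit-image and scikit-learn as stand-ins for whatever implementation was actually used; only one Haar feature type is computed to keep the example small, and the patch size and estimator count are arbitrary.

```python
import numpy as np
from skimage.transform import integral_image
from skimage.feature import haar_like_feature
from sklearn.ensemble import AdaBoostClassifier

def haar_features(gray_patch):
    """Haar-like features of a small, fixed-size grayscale patch (e.g. 24x24),
    computed on its integral image; restricted to one feature type for brevity."""
    ii = integral_image(gray_patch)
    return haar_like_feature(ii, 0, 0, ii.shape[1], ii.shape[0],
                             feature_type="type-2-x")

def train_face_classifier(face_patches, non_face_patches):
    """face_patches / non_face_patches: lists of equally sized grayscale arrays,
    cropped from training images with and without pet faces."""
    X = np.array([haar_features(p) for p in face_patches + non_face_patches])
    y = np.array([1] * len(face_patches) + [0] * len(non_face_patches))
    clf = AdaBoostClassifier(n_estimators=200)
    clf.fit(X, y)
    return clf
```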
5. The method of claim 1, wherein the step of image aligning the pet face image region to obtain an aligned image comprises:
acquiring coarse positioning feature points of the pet face organ positions in the pet face image region according to a feature point detection network obtained through pre-training;
segmenting the pet face image region into a plurality of local regions, wherein each local region comprises at least one coarse positioning feature point;
revising the feature point detection network according to the feature points of each local region to obtain a plurality of revised feature point detection networks, wherein each revised feature point detection network corresponds to one local region;
obtaining accurate positioning feature points corresponding to the coarse positioning feature points according to each local region and its corresponding revised feature point detection network;
reversely mapping the local regions back to the pet face image region, and determining the positional relationship among the accurate positioning feature points;
and aligning the pet face image region according to the position relation among the accurate positioning feature points to obtain an aligned image.
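The coarse-to-fine refinement of claim 5 can be sketched as follows. The per-region models are assumed to be callables that return a point inside the cropped patch, and the patch size is an arbitrary example value.

```python
import numpy as np

def refine_landmarks(face_img, coarse_points, region_models, patch_size=48):
    """Coarse-to-fine refinement: crop a local region around each coarse feature
    point, let the per-region model predict the precise point inside that patch,
    then map the prediction back to face-image coordinates.
    `region_models[i]` is a callable patch -> (x, y) inside the patch."""
    half = patch_size // 2
    h, w = face_img.shape[:2]
    refined = []
    for (x, y), model in zip(coarse_points, region_models):
        # Clamp the local region to the face image bounds.
        x0, y0 = max(int(x) - half, 0), max(int(y) - half, 0)
        x1, y1 = min(int(x) + half, w), min(int(y) + half, h)
        patch = face_img[y0:y1, x0:x1]
        px, py = model(patch)               # accurate location inside the patch
        refined.append((x0 + px, y0 + py))  # reverse mapping to the face image
    return np.array(refined, dtype=np.float32)
```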
6. The method of claim 5, wherein the coarse positioning feature points of the pet facial organ positions comprise at least: one coarse positioning feature point at the left eye position, one coarse positioning feature point at the right eye position, one coarse positioning feature point at the nose tip position, three coarse positioning feature points at the mouth position, and two coarse positioning feature points at the ear root positions.
7. The method according to claim 5, wherein the step of aligning the pet face image region according to the positional relationship between the precise positioning feature points to obtain an aligned image comprises:
determining whether the pet face image region is aligned according to the positional relationship among the accurate positioning feature points;
if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relationship between the organ positions of the selected accurate positioning feature points;
determining an affine transformation matrix according to the selected accurate positioning characteristic points and the expected characteristic points;
and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
8. The method according to claim 7, wherein the step of selecting at least three precisely positioned feature points and calculating the expected feature point corresponding to each selected precisely positioned feature point according to the relationship between the organ positions where the selected precisely positioned feature points are located comprises:
selecting a first accurate positioning feature point on the left eye position, a second accurate positioning feature point on the right eye position and a third accurate positioning feature point on the nose tip position in the pet face image area;
calculating a first distance, which is the distance between the first accurate positioning feature point and the second accurate positioning feature point; a rotation angle, which is the included angle between the line connecting the first and second accurate positioning feature points and a horizontal straight line; and a second distance, which is the distance between the midpoint of that connecting line and the third accurate positioning feature point;
obtaining, according to the first distance, the rotation angle and the second distance, a first expected feature point corresponding to the first accurate positioning feature point, a second expected feature point corresponding to the second accurate positioning feature point, and a third expected feature point corresponding to the third accurate positioning feature point, wherein the second accurate positioning feature point also serves as the second expected feature point; the first expected feature point lies to the right of the second expected feature point on the same horizontal straight line, at a distance from it equal to the first distance; and the third expected feature point lies below the midpoint of the connecting line, at a distance from that midpoint equal to the second distance, with the line from the third expected feature point to the midpoint perpendicular to the horizontal straight line.
9. The method of claim 1, wherein the step of extracting feature vectors in the aligned images comprises:
converting the size of the aligned image to a preset size;
and inputting the aligned image of the preset size into a preset residual network model to obtain a multi-dimensional feature vector.
10. The method according to claim 1, wherein the step of identifying the pet in the image to be identified according to the feature vector to obtain the identity information of the pet comprises:
calculating the distance between each identity vector in a preset pet identity feature library and the feature vector to obtain a plurality of identity distance values;
and if the minimum value in the identity distance values is smaller than a preset threshold value, the pet identity in the image to be recognized is a warehousing identity, wherein the warehousing identity is identity information indicated by the identity vector corresponding to the minimum value in the identity distance values.
11. An apparatus for identifying the identity of a pet, the apparatus comprising:
the first area confirmation module is used for determining a pet image area in the image to be recognized according to a target image detection network obtained by pre-training;
the second area confirmation module is used for determining a pet face image area in the pet image area according to a face detection classifier obtained through pre-training;
the alignment module is used for carrying out image alignment on the pet face image area to obtain an aligned image;
and the identification module is used for extracting the characteristic vector in the aligned image and identifying the pet in the image to be identified according to the characteristic vector to obtain the identity information of the pet.
12. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, performs the steps of the method of identifying the identity of a pet of any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the pet identity recognition method according to any one of claims 1 to 10.
CN201911039645.5A 2019-10-29 2019-10-29 Method and device for identifying identity of pet Active CN110909618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911039645.5A CN110909618B (en) 2019-10-29 2019-10-29 Method and device for identifying identity of pet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911039645.5A CN110909618B (en) 2019-10-29 2019-10-29 Method and device for identifying identity of pet

Publications (2)

Publication Number Publication Date
CN110909618A true CN110909618A (en) 2020-03-24
CN110909618B (en) 2023-04-21

Family

ID=69814679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911039645.5A Active CN110909618B (en) 2019-10-29 2019-10-29 Method and device for identifying identity of pet

Country Status (1)

Country Link
CN (1) CN110909618B (en)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020106114A1 (en) * 2000-12-01 2002-08-08 Jie Yan System and method for face recognition using synthesized training images
KR20060027482A (en) * 2004-09-23 2006-03-28 전자부품연구원 Method for authenticating human face
CN101261678A (en) * 2008-03-18 2008-09-10 中山大学 A method for normalizing face light on feature image with different size
US20110188713A1 (en) * 2008-07-16 2011-08-04 Imprezzeo Pty Ltd Facial image recognition and retrieval
CN103914676A (en) * 2012-12-30 2014-07-09 杭州朗和科技有限公司 Method and apparatus for use in face recognition
CN103218610A (en) * 2013-04-28 2013-07-24 宁波江丰生物信息技术有限公司 Formation method of dogface detector and dogface detection method
CN103400105A (en) * 2013-06-26 2013-11-20 东南大学 Method identifying non-front-side facial expression based on attitude normalization
CN104504376A (en) * 2014-12-22 2015-04-08 厦门美图之家科技有限公司 Age classification method and system for face images
RU2610682C1 (en) * 2016-01-27 2017-02-14 Общество с ограниченной ответственностью "СТИЛСОФТ" Face recognition method
CN106295567A (en) * 2016-08-10 2017-01-04 腾讯科技(深圳)有限公司 The localization method of a kind of key point and terminal
CN107609459A (en) * 2016-12-15 2018-01-19 平安科技(深圳)有限公司 A kind of face identification method and device based on deep learning
CN107545249A (en) * 2017-08-30 2018-01-05 国信优易数据有限公司 A kind of population ages' recognition methods and device
CN107563328A (en) * 2017-09-01 2018-01-09 广州智慧城市发展研究院 A kind of face identification method and system based under complex environment
CN109190477A (en) * 2018-08-02 2019-01-11 平安科技(深圳)有限公司 Settlement of insurance claim method, apparatus, computer equipment and storage medium based on the identification of ox face
CN109344693A (en) * 2018-08-13 2019-02-15 华南理工大学 A kind of face multizone fusion expression recognition method based on deep learning
CN109784219A (en) * 2018-12-28 2019-05-21 广州海昇计算机科技有限公司 A kind of face identification method, system and device based on concentration cooperated learning
CN109829380A (en) * 2018-12-28 2019-05-31 北京旷视科技有限公司 A kind of detection method, device, system and the storage medium of dog face characteristic point
CN109558864A (en) * 2019-01-16 2019-04-02 苏州科达科技股份有限公司 Face critical point detection method, apparatus and storage medium
CN109858435A (en) * 2019-01-29 2019-06-07 四川大学 A kind of lesser panda individual discrimination method based on face image
CN109919048A (en) * 2019-02-21 2019-06-21 北京以萨技术股份有限公司 A method of face critical point detection is realized based on cascade MobileNet-V2
CN110309706A (en) * 2019-05-06 2019-10-08 深圳市华付信息技术有限公司 Face critical point detection method, apparatus, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHIWEI WANG ET AL: "V-Head: Face Detection and Alignment for Facial Augmented Reality Applications", 23rd International Conference on Multimedia Modeling (MMM) *
姜亚东 (Jiang Yadong): "Research on Low-Resolution Pedestrian and Face Detection and Recognition Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology Series *
王学彬 (Wang Xuebin): "Research on Driver Face Detection and Tracking Methods Based on Infrared Video", China Master's Theses Full-text Database, Engineering Science and Technology II Series *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444965A (en) * 2020-03-27 2020-07-24 泰康保险集团股份有限公司 Data processing method based on machine learning and related equipment
CN111444965B (en) * 2020-03-27 2024-03-12 泰康保险集团股份有限公司 Data processing method based on machine learning and related equipment
CN111753697A (en) * 2020-06-17 2020-10-09 新疆爱华盈通信息技术有限公司 Intelligent pet management system and management method thereof
US20220036054A1 (en) * 2020-07-31 2022-02-03 Korea Institute Of Science And Technology System and method for companion animal identification based on artificial intelligence
US11847849B2 (en) * 2020-07-31 2023-12-19 Korea Institute Of Science And Technology System and method for companion animal identification based on artificial intelligence
CN112926479A (en) * 2021-03-08 2021-06-08 新疆爱华盈通信息技术有限公司 Cat face identification method and system, electronic device and storage medium
CN113076886A (en) * 2021-04-09 2021-07-06 深圳市悦保科技有限公司 Face individual identification device and method for cat
WO2022213396A1 (en) * 2021-04-09 2022-10-13 深圳市悦保科技有限公司 Cat face recognition apparatus and method, computer device, and storage medium
CN113673422A (en) * 2021-08-19 2021-11-19 苏州中科先进技术研究院有限公司 Pet type identification method and identification system
CN115393904A (en) * 2022-10-20 2022-11-25 星宠王国(北京)科技有限公司 Dog nose print identification method and system
WO2024082714A1 (en) * 2022-10-20 2024-04-25 星宠王国(北京)科技有限公司 Dog noseprint recognition method and system
US11948390B1 (en) 2023-06-30 2024-04-02 Xingchong Kingdom (Beijing) Technology Co., Ltd Dog nose print recognition method and system

Also Published As

Publication number Publication date
CN110909618B (en) 2023-04-21

Similar Documents

Publication Publication Date Title
CN110909618B (en) Method and device for identifying identity of pet
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN109389074B (en) Facial feature point extraction-based expression recognition method
US10204283B2 (en) Image recognizing apparatus, image recognizing method, and storage medium
CN110363047B (en) Face recognition method and device, electronic equipment and storage medium
CN110674874B (en) Fine-grained image identification method based on target fine component detection
KR100647322B1 (en) Apparatus and method of generating shape model of object and apparatus and method of automatically searching feature points of object employing the same
JP4905931B2 (en) Human body region extraction method, apparatus, and program
CN107463865B (en) Face detection model training method, face detection method and device
US10262214B1 (en) Learning method, learning device for detecting lane by using CNN and testing method, testing device using the same
CN111652317B (en) Super-parameter image segmentation method based on Bayes deep learning
CN110414541B (en) Method, apparatus, and computer-readable storage medium for identifying an object
US10275667B1 (en) Learning method, learning device for detecting lane through lane model and testing method, testing device using the same
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
Wang et al. Head pose estimation with combined 2D SIFT and 3D HOG features
CN111709313B (en) Pedestrian re-identification method based on local and channel combination characteristics
US20240087368A1 (en) Companion animal life management system and method therefor
CN108596195B (en) Scene recognition method based on sparse coding feature extraction
Demirkus et al. Hierarchical temporal graphical model for head pose estimation and subsequent attribute classification in real-world videos
Kokkinos et al. Bottom-up & top-down object detection using primal sketch features and graphical models
KR102325250B1 (en) companion animal identification system and method therefor
CN113420709A (en) Cattle face feature extraction model training method and system and cattle insurance method and system
CN110175500B (en) Finger vein comparison method, device, computer equipment and storage medium
Chen et al. Image segmentation based on mathematical morphological operator
CN112215066A (en) Livestock face image recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant