CN111695405A - Method, device and system for detecting dog face characteristic points and storage medium - Google Patents

Method, device and system for detecting dog face characteristic points and storage medium

Info

Publication number
CN111695405A
Authority
CN
China
Prior art keywords: face, contour, feature points, points, dog face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010327006.5A
Other languages
Chinese (zh)
Other versions
CN111695405B (en)
Inventor
李庆
曾凯
赵宇
李广
陈旸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd
Priority to CN202010327006.5A
Publication of CN111695405A
Application granted
Publication of CN111695405B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method, a device and a system for detecting dog face characteristic points, and a computer storage medium. The detection method comprises: detecting feature points based on an image containing a dog face and a trained detection model to obtain precisely positioned feature points. The detection model comprises a first-level network and a second-level network, and obtaining the precisely positioned feature points comprises: detecting feature points based on the full-face image of the dog face and the first-level network of the detection model to obtain roughly positioned feature points; and positioning the roughly positioned feature points based on the local image of the dog face and the second-level network of the detection model to obtain the precisely positioned feature points. The method, the device, the system and the computer storage medium can effectively improve the accuracy and real-time performance of dog face characteristic point detection.

Description

Method, device and system for detecting dog face characteristic points and storage medium
This application is a divisional of the application filed on December 28, 2018, with application number 201811628345.6, entitled 'Method, device, system and storage medium for detecting dog face characteristic points'.
Technical Field
The invention relates to the technical field of image processing, in particular to processing of a dog face image.
Background
Feature point labeling, as an important step before image alignment, strongly affects the overall performance of image recognition, analysis and retrieval systems. At present there are many effective feature point labeling methods for human face recognition, but very few for animal recognition, for example the labeling of dog face feature points in dog face recognition.
If a traditional feature point labeling and detection method is used to label a dog face, each key point is detected independently and the global geometric information of the dog face is entirely ignored, so the result is very sensitive to small disturbances and has poor robustness to illumination changes, posture changes and the like. In addition, computation time and complexity are proportional to the number of feature points: the more feature points to be detected, the more detectors are required, which makes such methods difficult to apply where feature points are dense.
The prior art therefore lacks a good method for labeling dog face feature points: traditional feature point labeling methods are strongly affected by small disturbances, easily produce missed or false detections, have low accuracy and recall, and run inefficiently when many points must be labeled.
Disclosure of Invention
The present invention has been made in view of the above problems. The invention provides a method, a device and a system for detecting dog face characteristic points, and a computer storage medium.
According to an aspect of the present invention, there is provided a method for detecting a dog face feature point, including:
detecting the feature points based on the image containing the dog face and the trained detection model to obtain precisely positioned feature points;
the detection model comprises a first-level network and a second-level network, and the obtaining of the precisely positioned feature points comprises the following steps:
detecting feature points based on the full face image of the dog face and a first-level network of a detection model to obtain roughly positioned feature points;
and positioning the roughly positioned feature points based on the local image of the dog face and a second-level network of the detection model to obtain the precisely positioned feature points.
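The coarse-to-fine cascade described above can be sketched as follows. The patent does not specify the network architectures at this point, so `coarse_net` and `refine_net` below are hypothetical stubs standing in for the first-level and second-level networks, and `crop_local` is an assumed fixed-size organ crop; only the two-stage control flow mirrors the text.

```python
import numpy as np

def coarse_net(full_face):
    """Hypothetical first-level network: predicts rough (x, y) feature points
    from the full dog-face image. Here a stub returning fixed points."""
    h, w = full_face.shape[:2]
    # pretend the network outputs normalized coordinates for 3 landmarks
    return np.array([[0.3, 0.4], [0.7, 0.4], [0.5, 0.7]]) * [w, h]

def crop_local(full_face, center, size=32):
    """Crop a local (organ) patch around a coarsely positioned point."""
    x, y = int(center[0]), int(center[1])
    x0, y0 = max(0, x - size // 2), max(0, y - size // 2)
    return full_face[y0:y0 + size, x0:x0 + size], (x0, y0)

def refine_net(patch):
    """Hypothetical second-level network: predicts the refined point inside
    the local patch. Stub returning the patch center."""
    ph, pw = patch.shape[:2]
    return np.array([pw / 2.0, ph / 2.0])

def detect(full_face):
    coarse = coarse_net(full_face)                 # stage 1: rough points
    fine = []
    for pt in coarse:
        patch, origin = crop_local(full_face, pt)  # organ-level local image
        local = refine_net(patch)                  # stage 2: refine in patch
        fine.append(np.array(origin) + local)      # back to full-face coords
    return np.vstack(fine)

image = np.zeros((128, 128, 3))
points = detect(image)
print(points.shape)  # (3, 2)
```

In a real system the two stubs would be trained networks; the essential structure is that stage 2 operates on small patches around the stage-1 predictions and its outputs are mapped back into full-face coordinates.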
Illustratively, the method further comprises: and segmenting the full face image of the dog face according to the position of the dog face organ to obtain a local image of the dog face.
Illustratively, positioning the roughly positioned feature points based on the local image of the dog face and the second-level network of the detection model to obtain the precisely positioned feature points includes:
positioning the roughly positioned feature points based on the local image of the dog face and a second-level network of the detection model to obtain the feature points of the local image;
and carrying out coordinate transformation and integration on the characteristic points of the local image to obtain the precisely positioned characteristic points of the dog face.
Illustratively, performing coordinate transformation and integration on the feature points of the local image to obtain the precisely positioned feature points of the dog face includes:
obtaining a reference position and a rotation angle of a local image of the dog face relative to a full-face image;
performing coordinate transformation on the corresponding characteristic points of the local image according to the reference position and the rotation angle to obtain the characteristic points of the transformed local image;
and integrating the characteristic points of the transformed local images to obtain the precisely positioned characteristic points of the dog face.
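A minimal sketch of this coordinate transformation, assuming each local image was extracted at a known reference position (`origin`) and rotation angle (`angle_deg`) relative to the full-face image. These parameter names and the 2D rotation representation are illustrative assumptions; the patent does not fix a concrete form.

```python
import numpy as np

def to_full_face(points_local, origin, angle_deg):
    """Map feature points from a (possibly rotated) local-image frame back
    into full-face image coordinates. `origin` is the reference position of
    the local crop in the full-face image; `angle_deg` is the rotation that
    was applied when the crop was extracted (both assumed known)."""
    theta = np.radians(angle_deg)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])        # undoes the crop rotation
    return points_local @ rot.T + np.asarray(origin)

# eye-patch points; patch taken at (40, 30) with a 90-degree rotation
local_pts = np.array([[10.0, 0.0], [0.0, 10.0]])
full_pts = to_full_face(local_pts, origin=(40.0, 30.0), angle_deg=90.0)
print(full_pts)
```

Integration is then simply concatenating the transformed point sets of all local images, e.g. with `np.vstack`.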
Illustratively, the training of the detection model includes: marking the characteristic points of the dog face in the full-face image and the local image of the training sample based on a preset rule;
and training the detection model based on the marked full-face image and the marked local image of the training sample to obtain the trained detection model.
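The two-stage training described above needs two training sets: full-face images with all labeled points for the first-level network, and local (organ) images with patch-relative points for the second-level network. A hypothetical sketch of how such pairs might be assembled, assuming one fixed-size patch per labeled point; the patch size and cropping policy are illustrative, not specified by the patent.

```python
import numpy as np

def build_stage_datasets(images, labels, crop=32):
    """Prepare training pairs for both stages: stage 1 learns full-face
    image -> all labeled points; stage 2 learns local patch -> the labeled
    point expressed in patch coordinates. `labels` is a list of
    (n_points, 2) arrays of (x, y) coordinates."""
    stage1 = list(zip(images, labels))           # full-face pairs as-is
    stage2 = []
    for img, pts in zip(images, labels):
        for (x, y) in pts:                       # one organ patch per point
            x0, y0 = int(x) - crop // 2, int(y) - crop // 2
            patch = img[max(y0, 0):y0 + crop, max(x0, 0):x0 + crop]
            stage2.append((patch, np.array([x - x0, y - y0])))
    return stage1, stage2

imgs = [np.zeros((128, 128, 3))]
lbls = [np.array([[40.0, 50.0], [90.0, 60.0]])]
s1, s2 = build_stage_datasets(imgs, lbls)
print(len(s1), len(s2))  # 1 2
```

The first and second neural networks would then be fitted on `stage1` and `stage2` respectively with any standard supervised training loop.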
Illustratively, the predetermined rule includes labeling feature points based on at least one of an ear contour, an eye contour, a nose contour, a mouth contour, and a face contour of the dog face.
Illustratively, the labeling of the feature points based on the ear contour of the dog face comprises: marking left and right boundary characteristic points of the ear root, ear root center characteristic points, ear tip characteristic points and characteristic points which are marked equidistantly by taking the ear root center characteristic points to the ear tip characteristic points as references.
Illustratively, the labeling of the feature points based on the eye contour of the dog face comprises: marking a left eye center feature point, a feature point of the intersection of a left eye center horizontal line and the left and right sides of the left eye outline, and a feature point of the intersection of a left eye center vertical line and the upper and lower sides of the left eye outline; and
marking a right eye center feature point, a feature point of the intersection of a right eye center horizontal line and the left side and the right side of the right eye contour, and a feature point of the intersection of a right eye center vertical line and the upper side and the lower side of the right eye contour.
Illustratively, the labeling of the feature points based on the eye contour of the dog face further comprises: marking characteristic points at equal intervals along the left eye contour by taking the characteristic points of the intersection of the left eye central horizontal line and the left eye central vertical line and the left eye contour as a reference; and
and marking feature points at equal intervals along the right eye contour, taking as references the feature points where the right eye center horizontal line and the right eye center vertical line intersect the right eye contour.
Illustratively, labeling based on the nasal contour of the dog face comprises: and marking the characteristic point of the center of the nose tip.
Illustratively, labeling based on the mouth contour of the dog face comprises: marking a left mouth corner characteristic point, an upper lip left side contour inflection point characteristic point, an upper lip center characteristic point, an upper lip right side contour inflection point characteristic point, a right mouth corner characteristic point, a lower lip left side contour inflection point characteristic point, a lower lip center characteristic point and a lower lip right side contour inflection point characteristic point.
Illustratively, labeling based on the facial contour of the dog face comprises:
labeling a characteristic point of the top center of the head, a characteristic point of the intersection of the left eye center horizontal line and the left face contour, a characteristic point of the intersection of the nose tip center horizontal line and the left face contour, a characteristic point of the intersection of the right eye center horizontal line and the right face contour, and a characteristic point of the intersection of the nose tip center horizontal line and the right face contour.
Illustratively, labeling based on the facial contour of the dog face further comprises: marking feature points at equal intervals along the left face contour, taking as references the feature point where the left eye center horizontal line intersects the left face contour and the feature point where the nose tip center horizontal line intersects the left face contour; and marking feature points at equal intervals along the right face contour, taking as references the feature point where the right eye center horizontal line intersects the right face contour and the feature point where the nose tip center horizontal line intersects the right face contour.
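Several of the rules above mark feature points "at equal intervals" along a contour segment between reference points. A numpy sketch of that operation, under the assumption that the contour is available as an ordered polyline (the patent does not specify a contour representation):

```python
import numpy as np

def equidistant_points(contour, start_idx, end_idx, n):
    """Place n feature points at equal arc-length intervals along the
    contour segment between two reference points, given as indices into
    `contour`, an (N, 2) array of ordered boundary points."""
    seg = contour[start_idx:end_idx + 1]
    d = np.linalg.norm(np.diff(seg, axis=0), axis=1)      # edge lengths
    arc = np.concatenate([[0.0], np.cumsum(d)])           # cumulative arc length
    targets = np.linspace(0, arc[-1], n + 2)[1:-1]        # interior positions
    pts = []
    for t in targets:
        i = max(np.searchsorted(arc, t) - 1, 0)           # containing edge
        frac = (t - arc[i]) / max(arc[i + 1] - arc[i], 1e-12)
        pts.append(seg[i] + frac * (seg[i + 1] - seg[i])) # linear interpolation
    return np.array(pts)

# straight 10-unit contour: two equidistant points land at 1/3 and 2/3
contour = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])
print(equidistant_points(contour, 0, 2, 2))
```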
According to another aspect of the present invention, there is provided a detection apparatus for a dog face feature point, comprising:
the detection module is used for carrying out feature point labeling based on the image containing the dog face and the trained detection model to obtain precisely positioned feature points;
the detection model comprises a first-level network and a second-level network, wherein the first-level network is used for detecting feature points based on the full-face image of the dog face and the first-level network of the detection model to obtain coarsely positioned feature points;
and the second-level network is used for positioning the roughly positioned feature points based on the local image of the dog face and the second-level network of the detection model to obtain the precisely positioned feature points.
Illustratively, the training of the detection model includes: marking the characteristic points of the dog face in the full-face image and the local image of the training sample based on a preset rule;
and training the detection model based on the marked full-face image and the marked local image of the training sample to obtain the trained detection model.
Illustratively, the training of the first-level network of detection models includes:
obtaining a dog face full-face sample image before labeling based on the training sample image before labeling, and obtaining a dog face full-face sample image after labeling based on the training sample image after labeling;
and training the first neural network according to the marked full-face sample image of the dog face to obtain a first-stage network of the trained detection model.
Illustratively, the detection module is further configured to: and segmenting the full face image of the dog face according to the position of the dog face organ to obtain a local image of the dog face.
Illustratively, the training of the second-level network of detection models includes:
obtaining a dog face local sample image before labeling based on the training sample image before labeling, and obtaining a dog face local sample image after labeling based on the training sample image after labeling;
and training the second neural network according to the labeled dog face local sample image to obtain a second-level network of the trained detection model.
Illustratively, the second level network is further configured to:
positioning the roughly positioned characteristic points based on the local image of the dog face to obtain the characteristic points of the local image;
and carrying out coordinate transformation and integration on the characteristic points of the local image to obtain the precisely positioned characteristic points of the dog face.
Illustratively, the second level network is further configured to:
obtaining a reference position and a rotation angle of a local image of the dog face relative to a full-face image;
performing coordinate transformation on the corresponding characteristic points of the local image according to the reference position and the rotation angle to obtain the characteristic points of the transformed local image;
and integrating the characteristic points of the transformed local images to obtain the precisely positioned characteristic points of the dog face.
Illustratively, the apparatus further comprises: and the output module is used for outputting the dog face image comprising the dog face characteristic points and/or the dog face characteristic point coordinates.
According to another aspect of the present invention, there is provided a detection system for dog face feature points, comprising a memory, a processor and a computer program stored in the memory and running on the processor, wherein the processor implements the steps of the above method when executing the computer program.
According to another aspect of the present invention, there is provided a computer readable storage medium having a computer program stored thereon, wherein the computer program is adapted to carry out the steps of the above-mentioned method when executed by a computer.
According to the method, the device and the system for detecting the dog face characteristic points and the computer storage medium, the position information of the dog face characteristic points is gradually and accurately predicted through the cascade neural network established based on the whole face information and the local information, the high-precision positioning of the dog face characteristic points is realized, the accuracy and the real-time performance of the detection of the dog face characteristic points can be effectively improved, and the method, the device and the system can be widely applied to various occasions related to dog face image processing.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a schematic block diagram of an example electronic device for implementing a method and apparatus for detection of dog face feature points in accordance with embodiments of the invention;
FIG. 2 is a schematic flow chart of a method of detecting dog face feature points according to an embodiment of the invention;
FIG. 3 is an exemplary illustration of dog face feature points according to an embodiment of the invention;
FIG. 4 is a schematic block diagram of a detection apparatus for dog face feature points according to an embodiment of the present invention;
fig. 5 is a schematic block diagram of a detection system for dog face feature points according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
First, an example electronic device 100 for implementing the method and apparatus for detecting dog face feature points according to the embodiment of the present invention is described with reference to fig. 1.
As shown in FIG. 1, electronic device 100 includes one or more processors 101, one or more memory devices 102, an input device 103, an output device 104, an image sensor 105, which are interconnected via a bus system 106 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 101 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage device 102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 101 to implement the client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 103 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 104 may output various information (e.g., images or sounds) to an external (e.g., user), and may include one or more of a display, a speaker, and the like.
The image sensor 105 may take an image (e.g., a photograph, a video, etc.) desired by the user and store the taken image in the storage device 102 for use by other components.
For example, an example electronic device for implementing the method and apparatus for detecting a dog face feature point according to the embodiment of the present invention may be implemented as a smart phone, a tablet computer, a video capture terminal of an access control system, or the like.
Next, a method 200 for detecting dog face feature points according to an embodiment of the present invention will be described with reference to fig. 2. The method 200 comprises the following steps:
detecting the feature points based on the image containing the dog face and the trained detection model to obtain precisely positioned feature points;
the detection model comprises a first-level network and a second-level network, and the obtaining of the precisely positioned feature points comprises the following steps:
detecting feature points based on the full face image of the dog face and a first-level network of a detection model to obtain roughly positioned feature points;
and positioning the roughly positioned feature points based on the local image of the dog face and a second-level network of the detection model to obtain the precisely positioned feature points.
The first-level network roughly estimates the characteristic points of the dog face based on the full-face image of the dog face to obtain roughly positioned characteristic points of the dog face; in order to further improve the detection precision of the feature points, the second-level network adjusts the roughly positioned feature points based on the local image of the dog face on the basis of the roughly positioned feature points, and finally obtains the precisely positioned feature points. And the position information of the dog face characteristic points is gradually and accurately predicted based on a cascade neural network formed by a first-stage network established based on the full face information of the dog face and a second-stage network established based on the local information, so that the high-precision positioning of the dog face characteristic points is realized. Illustratively, the method for detecting dog face feature points according to the embodiments of the present invention may be implemented in a device, an apparatus or a system having a memory and a processor.
The method for detecting the dog face characteristic points according to the embodiment of the invention can be deployed at an image acquisition end, for example the image acquisition end of an access control system; it can also be deployed at a personal terminal such as a smart phone, a tablet computer or a personal computer. Alternatively, the detection method for the dog face characteristic points according to the embodiment of the invention can be deployed in a distributed manner across a server side (or cloud side) and a personal terminal.
According to the detection method of the dog face characteristic points, the position information of the dog face characteristic points is gradually and accurately predicted through the cascade neural network established based on the whole face and the local information, the high-precision positioning of the dog face characteristic points is realized, the accuracy and the real-time performance of the detection of the dog face characteristic points can be effectively improved, and the detection method can be widely applied to various occasions related to dog face image processing.
According to an embodiment of the present invention, the method 200 further comprises: marking the characteristic points of the dog face in the full-face image and the local image of the training sample based on a preset rule;
and training based on the marked training sample full face image and the training sample local image to obtain a trained detection model.
Illustratively, the predetermined rule includes labeling feature points based on at least one of an ear contour, an eye contour, a nose contour, a mouth contour, and a face contour of the dog face.
Illustratively, the feature point labeling based on the ear contour of the dog face comprises: marking left and right boundary characteristic points of the ear root, ear root center characteristic points, ear tip characteristic points and characteristic points which are marked equidistantly by taking the ear root center characteristic points to the ear tip characteristic points as references.
Illustratively, labeling based on the eye contour of the dog face comprises: marking a left eye center feature point, feature points where a left eye center horizontal line intersects the left and right sides of the left eye outline, and feature points where a left eye center vertical line intersects the upper and lower sides of the left eye outline; and
marking a right eye center feature point, a feature point of the intersection of a right eye center horizontal line and the left side and the right side of the right eye contour, and a feature point of the intersection of a right eye center vertical line and the upper side and the lower side of the right eye contour.
Illustratively, labeling based on the eye contour of the dog face further comprises: marking characteristic points at equal intervals along the left eye contour by taking the characteristic points of the intersection of the left eye central horizontal line and the left eye central vertical line and the left eye contour as a reference; and
and marking feature points at equal intervals along the right eye contour, taking as references the feature points where the right eye center horizontal line and the right eye center vertical line intersect the right eye contour.
Illustratively, labeling based on the nasal contour of the dog face comprises: and marking the characteristic point of the center of the nose tip.
Illustratively, labeling based on the mouth contour of the dog face comprises: marking a left mouth corner characteristic point, an upper lip left side contour inflection point characteristic point, an upper lip center characteristic point, an upper lip right side contour inflection point characteristic point, a right mouth corner characteristic point, a lower lip left side contour inflection point characteristic point, a lower lip center characteristic point and a lower lip right side contour inflection point characteristic point.
In one embodiment, labeling based on the mouth contour of the dog face comprises: starting from the mouth corner on one side and moving along the upper lip, sequentially marking the feature point of that mouth corner, the inflection point feature point of the upper lip contour on that side, the upper lip center feature point, the inflection point feature point of the upper lip contour on the other side, and the feature point of the other mouth corner;
and then, moving along the lower lip, sequentially marking the inflection point feature point of the lower lip contour on that side, the lower lip center feature point, and the inflection point feature point of the lower lip contour on the other side.
Illustratively, labeling based on the facial contour of the dog face comprises:
labeling a characteristic point of the top center of the head, a characteristic point of the intersection of the left eye center horizontal line and the left face contour, a characteristic point of the intersection of the nose tip center horizontal line and the left face contour, a characteristic point of the intersection of the right eye center horizontal line and the right face contour, and a characteristic point of the intersection of the nose tip center horizontal line and the right face contour.
Illustratively, labeling based on the facial contour of the dog face further comprises:
marking feature points at equal intervals along the left face contour, taking as references the feature point where the left eye center horizontal line intersects the left face contour and the feature point where the nose tip center horizontal line intersects the left face contour; and marking feature points at equal intervals along the right face contour, taking as references the feature point where the right eye center horizontal line intersects the right face contour and the feature point where the nose tip center horizontal line intersects the right face contour.
In one embodiment, fig. 3 shows an example of an image containing feature points of a dog face according to an embodiment of the present invention. Referring to fig. 3, the labeling of a dog face according to the predetermined rule is further illustrated, taking the dog face image as an example. Specifically, suitable feature points are selected for labeling based on five basic parts of the dog face shown in fig. 3: the ears, eyes, nose, mouth and face contour.
First, feature points are labeled based on the ear contour of the dog face. Specifically: left and right boundary feature points 1 and 2 of the left ear root are labeled, with points 1 and 2 arranged from top to bottom along the left ear root; a left ear root center feature point 3 and a left ear tip feature point 4 are labeled, and feature points 5 and 6 are labeled at equal intervals taking the segment from the left ear root center feature point 3 to the left ear tip feature point 4 as a reference. Similarly, left and right boundary feature points 7 and 8 of the right ear root are labeled, with points 7 and 8 arranged from top to bottom along the right ear root; a right ear root center feature point 9 and a right ear tip feature point 10 are labeled, and feature points 11 and 12 are labeled at equal intervals taking the segment from the right ear root center feature point 9 to the right ear tip feature point 10 as a reference.
Then, labeling is carried out based on the eye contour of the dog face. Specifically: a left eye center feature point 14 is labeled, together with feature points 15 and 16 where the left eye center horizontal line intersects the left and right sides of the left eye contour, and feature points 17 and 18 where the left eye center vertical line intersects the upper and lower sides of the left eye contour. Taking the four feature points 15, 16, 17, and 18 where the left eye center horizontal line and the left eye center vertical line intersect the left eye contour as references, the left eye contour is divided into four segments; proceeding clockwise from the upper left, one feature point is labeled equidistantly within each segment, giving 4 feature points in total: 19, 20, 21, and 22.
Likewise, a right eye center feature point 23 is labeled, together with feature points 24 and 25 where the right eye center horizontal line intersects the left and right sides of the right eye contour, and feature points 26 and 27 where the right eye center vertical line intersects the upper and lower sides of the right eye contour. Taking the four feature points 24, 25, 26, and 27 where the right eye center horizontal line and the right eye center vertical line intersect the right eye contour as references, the right eye contour is divided into four segments; proceeding clockwise from the upper left, one feature point is labeled equidistantly within each segment, giving 4 feature points in total: 28, 29, 30, and 31.
Then, labeling is carried out based on the nose contour of the dog face, and the labeling specifically comprises the following steps: the nose tip center feature point 32 is labeled.
Then, labeling is performed based on the face contour of the dog face. Specifically: a head top center feature point 13, a feature point 33 at the intersection of the left eye center horizontal line and the left face contour, and a feature point 34 at the intersection of the nose tip center horizontal line and the left face contour are labeled; taking the two feature points 33 and 34 as references, feature points are labeled at equal intervals along the left face contour, where the two feature points 35 and 36 may be labeled at equal intervals from top to bottom.
A feature point 37 at the intersection of the right eye center horizontal line and the right face contour and a feature point 38 at the intersection of the nose tip center horizontal line and the right face contour are labeled; taking the two feature points 37 and 38 as references, feature points 39 and 40 may be labeled at equal intervals from top to bottom along the right face contour.
Finally, labeling is carried out based on the mouth contour of the dog face. Specifically: a left mouth corner feature point 41, an upper lip left contour inflection feature point 42, an upper lip center feature point 43, an upper lip right contour inflection feature point 44, a right mouth corner feature point 45, a lower lip left contour inflection feature point 46, a lower lip center feature point 47, and a lower lip right contour inflection feature point 48 are labeled.
Starting from the left mouth corner, the left mouth corner feature point 41, the upper lip left contour inflection feature point 42, the upper lip center feature point 43, the upper lip right contour inflection feature point 44, and the right mouth corner feature point 45 are labeled in sequence along the upper lip contour; then the lower lip left contour inflection feature point 46, the lower lip center feature point 47, and the lower lip right contour inflection feature point 48 are labeled in sequence along the lower lip contour. It is understood that the corresponding feature points may also be labeled along the upper and lower lip contours starting from the right mouth corner.
Thus, in this embodiment, feature points are labeled on the dog face according to the predetermined rule: 6 feature points on each ear, 9 feature points on each eye, 8 feature points on the lips, 1 feature point on the nose, and 9 feature points on the face contour, for a total of 48 feature points.
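The 48-point scheme labeled above can be summarized as an index map. The group names below are illustrative labels chosen for this sketch, not terminology from the patent:

```python
# Index groups for the 48-point dog face scheme described in the text:
# 6 points per ear, 9 per eye, 1 nose point, 9 face-contour points, 8 mouth points.
POINT_GROUPS = {
    "left_ear":   list(range(1, 7)),    # points 1-6
    "right_ear":  list(range(7, 13)),   # points 7-12
    "head_top":   [13],                 # face contour: head top center
    "left_eye":   list(range(14, 23)),  # points 14-22
    "right_eye":  list(range(23, 32)),  # points 23-31
    "nose_tip":   [32],
    "left_face":  [33, 34, 35, 36],     # face contour, left side
    "right_face": [37, 38, 39, 40],     # face contour, right side
    "mouth":      list(range(41, 49)),  # points 41-48
}

TOTAL = sum(len(v) for v in POINT_GROUPS.values())  # 48 points in all
```

Such a map is also what a second-stage network would use to select the subset of coarse points belonging to each organ.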
It should be noted that the above labeling steps are merely examples and do not limit the predetermined rule; the predetermined rule imposes no restriction on the labeling order. In addition, under the predetermined rule the number of feature points may be increased according to design requirements and actual conditions, so as to improve labeling accuracy and provide a good data basis for subsequent procedures.
According to an embodiment of the present invention, the method 200 further comprises:
obtaining a dog face full-face sample image before labeling based on the training sample image before labeling, and obtaining a dog face full-face sample image after labeling based on the training sample image after labeling;
and training the first neural network according to the marked full-face sample image of the dog face to obtain a first-stage network of the trained detection model.
According to an embodiment of the present invention, the method 200 further comprises: and segmenting the full face image of the dog face according to the position of the dog face organ to obtain a local image of the dog face.
The partial image is an image obtained by dividing a full face image of a dog face into a plurality of parts according to the dog face organ, and each part contains the complete dog face organ. For example, dog face organs include: left ear, right ear, left eye, right eye, nose, mouth, left face, right face, etc., then the partial image of the dog's face may be an image that includes the left ear, right ear, left eye, right eye, nose, mouth, left face, or right face.
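The segmentation step can be sketched as cropping a padded bounding box around each organ's coarse feature points. The interface below is an assumption for illustration, not the patent's implementation:

```python
import numpy as np

def crop_organ(full_face, organ_points, pad=0.25):
    """Cut a local image for one organ out of the full-face image.

    full_face: (H, W, C) image array; organ_points: (K, 2) array of
    (x, y) coarse feature points for the organ. Returns the crop and
    its top-left reference position in the full-face image.
    """
    h, w = full_face.shape[:2]
    x0, y0 = organ_points.min(axis=0)
    x1, y1 = organ_points.max(axis=0)
    m = pad * max(x1 - x0, y1 - y0)          # padding margin around the box
    left, top = int(max(0, x0 - m)), int(max(0, y0 - m))
    right, bottom = int(min(w, x1 + m)), int(min(h, y1 + m))
    return full_face[top:bottom, left:right], (left, top)

# Toy example: an "eye" spanning x = 40..60 at y = 50 in a 100x100 image.
img = np.zeros((100, 100, 3), dtype=np.uint8)
eye_pts = np.array([[40.0, 50.0], [60.0, 50.0]])
crop, origin = crop_organ(img, eye_pts)
```

The returned top-left position is exactly the reference position needed later to map the refined local feature points back into full-face coordinates.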
According to an embodiment of the present invention, the method 200 further comprises:
obtaining a dog face local sample image before labeling based on the training sample image before labeling, and obtaining a dog face local sample image after labeling based on the training sample image after labeling;
and training the second neural network according to the labeled dog face local sample image to obtain a second-level network of the trained detection model.
According to an embodiment of the invention, the method 200 further comprises: positioning the roughly positioned feature points based on the local image of the dog face and a second-level network of the detection model to obtain the precisely positioned feature points, wherein the step of positioning the roughly positioned feature points comprises the following steps:
positioning the roughly positioned feature points based on the local image of the dog face and a second-level network of the detection model to obtain the feature points of the local image;
and carrying out coordinate transformation and integration on the characteristic points of the local image to obtain the precisely positioned characteristic points of the dog face.
The first-level network of the detection model performs feature point detection on the full-face image of the dog face to obtain the roughly positioned feature points of the full face. To further improve feature point accuracy, these coarse feature points are refined by the second-level network. The second-level network performs feature point detection using local information from the local images of the dog face. Before being input to the second-level network, local images corresponding to the same organ are normalized to a standard size and rotationally aligned to a uniform angle. The output of the first-level network can be divided into groups of roughly positioned feature points, one group per local image; for each local image, the second-level network adjusts the corresponding group of coarse feature points to obtain a group of refined feature points, and coordinate transformation and integration of the feature points of all local images then yield the precisely positioned feature points of the whole dog face.
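The coarse-to-fine flow can be sketched as follows. The networks and the crop helper are injected as stub callables here (assumed interfaces, not the patent's API), and rotation alignment is omitted in this toy version so that only the crop-origin translation is undone when mapping back:

```python
import numpy as np

def refine_with_cascade(full_face, stage1, stage2, organ_groups, crop_fn):
    """Two-stage detection: stage1 predicts coarse points on the full
    face; stage2[organ] refines each organ's point group on its crop."""
    coarse = stage1(full_face)                    # all coarse points, (N, 2)
    refined = {}
    for organ, idx in organ_groups.items():
        crop, origin = crop_fn(full_face, coarse[idx])
        local = stage2[organ](crop)               # refined points, crop frame
        refined[organ] = local + np.asarray(origin, float)  # back to full face
    return refined

# Stub networks: a single "nose" point is refined to (2, 2) in a crop
# whose top-left corner sits at (8, 8) in the full-face image.
out = refine_with_cascade(
    full_face=None,
    stage1=lambda img: np.array([[9.0, 9.0]]),
    stage2={"nose": lambda crop: np.array([[2.0, 2.0]])},
    organ_groups={"nose": [0]},
    crop_fn=lambda img, pts: (None, (8.0, 8.0)),
)
```

With real networks, `crop_fn` would also normalize the crop to a standard size and return a rotation angle, which the mapping back to full-face coordinates must invert as well.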
Exemplarily, the coordinate transformation and the integration of the feature points of the local image are performed to obtain the precisely located feature points of the dog face, and the method includes:
obtaining a reference position and a rotation angle of a local image of the dog face relative to a full-face image;
performing coordinate transformation on the corresponding characteristic points of the local image according to the reference position and the rotation angle to obtain the characteristic points of the transformed local image;
and integrating the characteristic points of the transformed local images to obtain the precisely positioned characteristic points of the dog face.
The coordinates of the local image feature points are defined relative to the local image, so the reference position of the local image relative to the full-face image must be obtained, and the positions of the local image feature points in the full-face image are then derived from that reference position. Further, since the local image input to the second-level network has been rotated, the feature points of the local image must be rotated back to their positions in the full face according to the rotation angle of the local image relative to the full-face image. The coordinate transformation is a linear transformation from two-dimensional coordinates to two-dimensional coordinates that preserves the straightness and parallelism of lines in the image; changes of object pose in the plane are described by a matrix, and the transformed image coordinates are obtained by matrix multiplication of the input coordinates with the transformation matrix.
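The matrix multiplication described above can be written in homogeneous form. The sketch below assumes the crop was rotated by `angle` about its own origin after being taken at `origin` in the full-face image; names and conventions are illustrative:

```python
import numpy as np

def local_to_full(points, origin, angle):
    """Map (K, 2) crop-frame feature points back to full-face
    coordinates: rotate by -angle to undo the alignment rotation,
    then translate by the crop's top-left reference position."""
    c, s = np.cos(-angle), np.sin(-angle)
    # 3x3 homogeneous matrix: rotation by -angle, then translation
    T = np.array([[c,  -s,  origin[0]],
                  [s,   c,  origin[1]],
                  [0.0, 0.0, 1.0]])
    homo = np.hstack([points, np.ones((len(points), 1))])  # append w = 1
    return (homo @ T.T)[:, :2]

# A crop aligned by rotating 90 degrees, taken at (5, 5): its point (1, 0)
# maps back to (5, 4) in the full-face image.
mapped = local_to_full(np.array([[1.0, 0.0]]), (5.0, 5.0), np.pi / 2)
```

Integrating the transformed groups then just concatenates them back into the full 48-point set in index order.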
In the process of the second-level network detecting the feature points of the local images, each local image is rotationally aligned to a uniform angle relative to its reference position in the full-face image, thereby ensuring the accuracy of feature point detection.
According to an embodiment of the present invention, the method 200 further comprises: outputting a dog face image including the dog face feature points.
Fig. 4 shows a schematic block diagram of a detection apparatus 400 for dog face feature points according to an embodiment of the present invention. As shown in fig. 4, the apparatus 400 for detecting dog face feature points according to an embodiment of the present invention includes:
the detection module 410 is used for detecting feature points based on the images containing the dog faces and the trained detection model to obtain precisely positioned feature points;
the detection model comprises a first-level network 411 and a second-level network 412, wherein the first-level network 411 is used for detecting feature points of a full-face image of the dog face to obtain roughly positioned feature points;
the second-level network 412 is configured to locate the roughly located feature points according to the local image of the dog face, so as to obtain the precisely located feature points.
According to an embodiment of the present invention, the training of the detection model includes: marking the characteristic points of the dog face in the full-face image and the local image of the training sample based on a preset rule;
and training the detection model based on the marked full-face image and the marked local image of the training sample to obtain the trained detection model.
Illustratively, the predetermined rule includes labeling feature points based on at least one of an ear contour, an eye contour, a nose contour, a mouth contour, and a face contour of the dog face.
Illustratively, the labeling of the feature points based on the ear contour of the dog face comprises: marking left and right boundary characteristic points of the ear root, ear root center characteristic points, ear tip characteristic points and characteristic points which are marked equidistantly by taking the ear root center characteristic points to the ear tip characteristic points as references.
Illustratively, the labeling of the feature points based on the eye contour of the dog face comprises: marking a left eye center feature point, a feature point of the intersection of a left eye center horizontal line and the left and right sides of the left eye contour, and a feature point of the intersection of a left eye center vertical line and the upper and lower sides of the eye contour; and
marking a right eye center feature point, a feature point of the intersection of a right eye center horizontal line and the left side and the right side of the right eye contour, and a feature point of the intersection of a right eye center vertical line and the upper side and the lower side of the right eye contour.
Illustratively, the labeling of the feature points based on the eye contour of the dog face further comprises: marking characteristic points at equal intervals along the left eye contour by taking the characteristic points of the intersection of the left eye central horizontal line and the left eye central vertical line and the left eye contour as a reference; and
marking feature points at equal intervals along the right eye contour, taking the feature points of the intersection of the right eye center horizontal line and the right eye center vertical line with the right eye contour as references.
Illustratively, the labeling of the feature points based on the nose contour of the dog face comprises: and marking the characteristic point of the center of the nose tip.
Illustratively, labeling feature points based on the mouth contour of the dog face comprises: marking a left mouth corner characteristic point, an upper lip left side contour inflection point characteristic point, an upper lip center characteristic point, an upper lip right side contour inflection point characteristic point, a right mouth corner characteristic point, a lower lip left side contour inflection point characteristic point, a lower lip center characteristic point and a lower lip right side contour inflection point characteristic point.
Illustratively, the labeling of the feature points based on the face contour of the dog face comprises:
labeling a characteristic point of the top center of the head, a characteristic point of the intersection of the left eye center horizontal line and the left face contour, a characteristic point of the intersection of the nose tip center horizontal line and the left face contour, a characteristic point of the intersection of the right eye center horizontal line and the right face contour, and a characteristic point of the intersection of the nose tip center horizontal line and the right face contour.
Illustratively, the labeling of the feature points based on the face contour of the dog face further comprises:
marking feature points at equal intervals along the left face contour, taking the feature points of the intersection of the left eye center horizontal line and the left face contour and of the intersection of the nose tip center horizontal line and the left face contour as datum points; and marking feature points at equal intervals along the right face contour, taking the feature points of the intersection of the right eye center horizontal line and the right face contour and of the intersection of the nose tip center horizontal line and the right face contour as datum points.
According to an embodiment of the present invention, the training of the first-level network of the detection model includes:
obtaining a dog face full-face sample image before labeling based on the training sample image before labeling, and obtaining a dog face full-face sample image after labeling based on the training sample image after labeling;
and training the first neural network according to the marked full-face sample image of the dog face to obtain a first-stage network of the trained detection model.
According to an embodiment of the present invention, the detecting module 410 is further configured to: and segmenting the full face image of the dog face according to the position of the dog face organ to obtain a local image of the dog face.
According to an embodiment of the present invention, the training of the second-level network of the detection model includes:
obtaining a dog face local sample image before labeling based on the training sample image before labeling, and obtaining a dog face local sample image after labeling based on the training sample image after labeling;
and training the second neural network according to the labeled dog face local sample image to obtain a second-level network of the trained detection model.
According to an embodiment of the present invention, the second level network 412 is further configured to:
positioning the roughly positioned characteristic points based on the local image of the dog face to obtain the characteristic points of the local image;
and carrying out coordinate transformation and integration on the characteristic points of the local image to obtain the precisely positioned characteristic points of the dog face.
Illustratively, the second level network 412 is further configured to:
obtaining a reference position and a rotation angle of a local image of the dog face relative to a full-face image;
performing coordinate transformation on the corresponding characteristic points of the local image according to the reference position and the rotation angle to obtain the characteristic points of the transformed local image;
and integrating the characteristic points of the transformed local images to obtain the precisely positioned characteristic points of the dog face.
According to an embodiment of the present invention, the apparatus 400 further comprises: an output module 420, configured to output a dog face image including the dog face feature point and/or the dog face feature point coordinate.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
FIG. 5 shows a schematic block diagram of a detection system 500 for dog face feature points according to an embodiment of the invention. The detection system 500 for dog face feature points includes an image sensor 510, a storage device 530, and a processor 540.
The image sensor 510 is used to collect image data.
The storage device 530 stores program codes for implementing respective steps in the detection method of the dog face feature point according to the embodiment of the present invention.
The processor 540 is configured to run the program codes stored in the storage 530 to execute the corresponding steps of the method for detecting dog face feature points according to the embodiment of the present invention, and is configured to implement the detection module 410 in the device for detecting dog face feature points according to the embodiment of the present invention.
Further, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored; when executed by a computer or processor, the program instructions are used to execute the corresponding steps of the method for detecting dog face feature points according to the embodiment of the present invention, and to implement the corresponding modules in the device for detecting dog face feature points according to the embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media, e.g., one containing computer-readable program code for randomly generating a sequence of action instructions and another containing computer-readable program code for labeling dog face feature points.
In one embodiment, the computer program instructions may implement, when executed by a computer, various functional modules of the detection apparatus for a dog face feature point according to the embodiment of the present invention, and/or may execute a detection method for a dog face feature point according to the embodiment of the present invention.
The modules in the detection system of the dog face characteristic point according to the embodiment of the present invention may be implemented by a processor of an electronic device for detection of the dog face characteristic point according to the embodiment of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer-readable storage medium of a computer program product according to the embodiment of the present invention are executed by a computer.
According to the method, device, system, and storage medium for detecting dog face feature points of the embodiments of the present invention, the position information of the dog face feature points is progressively and precisely predicted by a cascade neural network built on full-face and local information, achieving high-precision positioning of the dog face feature points. The accuracy and real-time performance of dog face feature point detection can thereby be effectively improved, and the invention can be widely applied in various scenarios involving dog face image processing.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules in an item analysis apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description is only for the specific embodiment of the present invention or the description thereof, and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A method for detecting dog face characteristic points is characterized by comprising the following steps:
detecting the feature points based on the image containing the dog face and the trained detection model to obtain precisely positioned feature points;
the detection model comprises a first-level network and a second-level network, and the obtaining of the precisely positioned feature points comprises the following steps:
detecting feature points based on the full face image of the dog face and a first-level network of a detection model to obtain roughly positioned feature points;
segmenting the full-face image of the dog face according to the position of the dog face organ to obtain a local image of the dog face;
positioning the roughly positioned feature points based on the local image of the dog face and a second-level network of the detection model to obtain the feature points of the local image;
and carrying out coordinate transformation and integration on the characteristic points of the local image to obtain the precisely positioned characteristic points of the dog face.
2. The detection method according to claim 1, wherein performing coordinate transformation and integration on the feature points of the local image to obtain the precisely positioned feature points of the dog face comprises:
obtaining a reference position and a rotation angle of the local image of the dog face relative to the full-face image;
performing coordinate transformation on the corresponding feature points of the local image according to the reference position and the rotation angle to obtain feature points of the transformed local image; and
integrating the feature points of the transformed local image to obtain the precisely positioned feature points of the dog face.
3. The detection method according to claim 1, wherein training of the detection model comprises:
labeling the feature points of the dog face in a full-face image and a local image of a training sample based on a predetermined rule; and
training the detection model based on the labeled full-face image and the labeled local image of the training sample to obtain the trained detection model.
4. The detection method according to claim 3, wherein the predetermined rule includes labeling feature points based on at least one of an ear contour, an eye contour, a nose contour, a mouth contour, and a face contour of the dog face.
5. The detection method according to claim 4, wherein labeling feature points based on the ear contour of the dog face comprises: labeling left and right ear-root boundary feature points, an ear-root center feature point, an ear-tip feature point, and feature points labeled at equal intervals along the line from the ear-root center feature point to the ear-tip feature point.
6. The detection method according to claim 4, wherein labeling feature points based on the eye contour of the dog face comprises: labeling a left eye center feature point, feature points where the left eye center horizontal line intersects the left and right sides of the left eye contour, and feature points where the left eye center vertical line intersects the upper and lower sides of the left eye contour; and
labeling a right eye center feature point, feature points where the right eye center horizontal line intersects the left and right sides of the right eye contour, and feature points where the right eye center vertical line intersects the upper and lower sides of the right eye contour.
7. The detection method according to claim 6, wherein labeling feature points based on the eye contour of the dog face further comprises: labeling feature points at equal intervals along the left eye contour, taking the feature points where the left eye center horizontal line and the left eye center vertical line intersect the left eye contour as references; and
labeling feature points at equal intervals along the right eye contour, taking the feature points where the right eye center horizontal line and the right eye center vertical line intersect the right eye contour as references.
8. The detection method according to claim 4, wherein labeling based on the nose contour of the dog face comprises: labeling a feature point at the center of the nose tip.
9. The detection method according to claim 4, wherein labeling based on the mouth contour of the dog face comprises: labeling a left mouth corner feature point, an upper lip left contour inflection feature point, an upper lip center feature point, an upper lip right contour inflection feature point, a right mouth corner feature point, a lower lip left contour inflection feature point, a lower lip center feature point, and a lower lip right contour inflection feature point.
10. The detection method according to claim 4, wherein labeling based on the face contour of the dog face comprises:
labeling a feature point at the center of the top of the head, a feature point where the left eye center horizontal line intersects the left face contour, a feature point where the nose tip center horizontal line intersects the left face contour, a feature point where the right eye center horizontal line intersects the right face contour, and a feature point where the nose tip center horizontal line intersects the right face contour.
11. The detection method according to claim 10, wherein labeling based on the face contour of the dog face further comprises: labeling feature points at equal intervals along the left face contour, taking the feature point where the left eye center horizontal line intersects the left face contour and the feature point where the nose tip center horizontal line intersects the left face contour as references; and labeling feature points at equal intervals along the right face contour, taking the feature point where the right eye center horizontal line intersects the right face contour and the feature point where the nose tip center horizontal line intersects the right face contour as references.
12. An apparatus for detecting dog face feature points, comprising:
a detection module configured to detect feature points based on an image containing a dog face and a trained detection model to obtain precisely positioned feature points;
wherein the detection model comprises a first-level network and a second-level network, the first-level network being configured to detect feature points of a full-face image of the dog face to obtain roughly positioned feature points;
the full-face image of the dog face is segmented according to the positions of the dog face organs to obtain a local image of the dog face; and
the second-level network is configured to position the roughly positioned feature points according to the local image of the dog face to obtain the precisely positioned feature points, comprising:
positioning the roughly positioned feature points based on the local image of the dog face and the second-level network of the detection model to obtain feature points of the local image; and
performing coordinate transformation and integration on the feature points of the local image to obtain the precisely positioned feature points of the dog face.
13. A system for detecting dog face feature points, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 11 when executing the computer program.
14. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a computer, implements the steps of the method according to any one of claims 1 to 11.
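The coordinate transformation and integration recited in claims 1, 2 and 12 can be sketched in code. The minimal illustration below maps feature points detected in a local organ crop back into full-face coordinates by rotating them through the crop's rotation angle and translating them by its reference position, then concatenates the per-organ point sets. The function names, the convention that the reference position is the crop origin expressed in full-face coordinates, and the counter-clockwise rotation direction are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

def local_to_full(points, ref_pos, angle_deg):
    """Map (N, 2) feature points from a rotated local crop into
    full-face image coordinates: rotate by the crop's angle, then
    translate by its reference position (both assumed conventions)."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    pts = np.asarray(points, dtype=float)
    return pts @ rot.T + np.asarray(ref_pos, dtype=float)

def integrate(parts):
    """Concatenate per-organ (points, ref_pos, angle) triples into one
    dog-face feature point set in full-face coordinates."""
    return np.vstack([local_to_full(p, r, a) for p, r, a in parts])
```

For example, a point at (1, 0) in a crop rotated 90 degrees with reference position (10, 20) lands at (10, 21) in the full-face image; the second-level network's eye, nose, mouth and ear outputs would each pass through this mapping before being merged.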
CN202010327006.5A 2018-12-28 2018-12-28 Dog face feature point detection method, device and system and storage medium Active CN111695405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010327006.5A CN111695405B (en) 2018-12-28 2018-12-28 Dog face feature point detection method, device and system and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010327006.5A CN111695405B (en) 2018-12-28 2018-12-28 Dog face feature point detection method, device and system and storage medium
CN201811628345.6A CN109829380B (en) 2018-12-28 2018-12-28 Method, device and system for detecting dog face characteristic points and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811628345.6A Division CN109829380B (en) 2018-12-28 2018-12-28 Method, device and system for detecting dog face characteristic points and storage medium

Publications (2)

Publication Number Publication Date
CN111695405A true CN111695405A (en) 2020-09-22
CN111695405B CN111695405B (en) 2023-12-12

Family

ID=66861476

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201811628345.6A Active CN109829380B (en) 2018-12-28 2018-12-28 Method, device and system for detecting dog face characteristic points and storage medium
CN202010327006.5A Active CN111695405B (en) 2018-12-28 2018-12-28 Dog face feature point detection method, device and system and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201811628345.6A Active CN109829380B (en) 2018-12-28 2018-12-28 Method, device and system for detecting dog face characteristic points and storage medium

Country Status (1)

Country Link
CN (2) CN109829380B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909618B (en) * 2019-10-29 2023-04-21 泰康保险集团股份有限公司 Method and device for identifying identity of pet
CN114155240A (en) * 2021-12-13 2022-03-08 韩松洁 Ear acupoint detection method and device and electronic equipment
CN115240230A (en) * 2022-09-19 2022-10-25 星宠王国(北京)科技有限公司 Canine face detection model training method and device, and detection method and device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103218610A (en) * 2013-04-28 2013-07-24 宁波江丰生物信息技术有限公司 Formation method of dogface detector and dogface detection method
US20130223726A1 (en) * 2012-02-29 2013-08-29 Canon Kabushiki Kaisha Method and apparatus of classification and object detection, image pickup and processing device
CN103824049A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded neural network-based face key point detection method

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JP2007094906A (en) * 2005-09-29 2007-04-12 Toshiba Corp Characteristic point detection device and method
CN103971112B (en) * 2013-02-05 2018-12-07 腾讯科技(深圳)有限公司 Image characteristic extracting method and device
CN103208133B (en) * 2013-04-02 2015-08-19 浙江大学 The method of adjustment that in a kind of image, face is fat or thin
CN104951743A (en) * 2015-03-04 2015-09-30 苏州大学 Active-shape-model-algorithm-based method for analyzing face expression
CN107403141B (en) * 2017-07-05 2020-01-10 中国科学院自动化研究所 Face detection method and device, computer readable storage medium and equipment
CN108875480A (en) * 2017-08-15 2018-11-23 北京旷视科技有限公司 A kind of method for tracing of face characteristic information, apparatus and system
CN107704817B (en) * 2017-09-28 2021-06-25 成都品果科技有限公司 Method for detecting key points of animal face
CN108985210A (en) * 2018-07-06 2018-12-11 常州大学 A kind of Eye-controlling focus method and system based on human eye geometrical characteristic

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20130223726A1 (en) * 2012-02-29 2013-08-29 Canon Kabushiki Kaisha Method and apparatus of classification and object detection, image pickup and processing device
CN103295024A (en) * 2012-02-29 2013-09-11 佳能株式会社 Method and device for classification and object detection and image shoot and process equipment
CN103218610A (en) * 2013-04-28 2013-07-24 宁波江丰生物信息技术有限公司 Formation method of dogface detector and dogface detection method
CN103824049A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded neural network-based face key point detection method

Non-Patent Citations (2)

Title
Kaipeng Zhang et al., "Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks", IEEE Signal Processing Letters, vol. 23, no. 10, XP011622636, DOI: 10.1109/LSP.2016.2603342 *
Chen Rui et al., "Face key point localization based on cascaded convolutional neural networks", Journal of Sichuan University of Science & Engineering (Natural Science Edition), vol. 30, no. 1, pages 33-34 *

Also Published As

Publication number Publication date
CN109829380B (en) 2020-06-02
CN109829380A (en) 2019-05-31
CN111695405B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
CN106203305B (en) Face living body detection method and device
CN108875524B (en) Sight estimation method, device, system and storage medium
CN105938552B (en) Face recognition method and device for automatically updating base map
CN106650662B (en) Target object shielding detection method and device
CN109543663B (en) Method, device and system for identifying identity of dog and storage medium
CN108875510B (en) Image processing method, device, system and computer storage medium
CN108875766B (en) Image processing method, device, system and computer storage medium
CN108961149B (en) Image processing method, device and system and storage medium
CN106385640B (en) Video annotation method and device
CN108875534B (en) Face recognition method, device, system and computer storage medium
CN108875731B (en) Target identification method, device, system and storage medium
CN109829380B (en) Method, device and system for detecting dog face characteristic points and storage medium
CN108932456B (en) Face recognition method, device and system and storage medium
JP6815707B2 (en) Face posture detection method, device and storage medium
JP2013012190A (en) Method of approximating gabor filter as block-gabor filter, and memory to store data structure for access by application program running on processor
CN106327546B (en) Method and device for testing face detection algorithm
CN110929612A (en) Target object labeling method, device and equipment
CN108875531B (en) Face detection method, device and system and computer storage medium
CN110826610A (en) Method and system for intelligently detecting whether dressed clothes of personnel are standard
CN109544516B (en) Image detection method and device
CN106156794B (en) Character recognition method and device based on character style recognition
GB2462903A (en) Single Stroke Character Recognition
WO2022247403A1 (en) Keypoint detection method, electronic device, program, and storage medium
CN106682187B (en) Method and device for establishing image base
CN109858363B (en) Dog nose print feature point detection method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant