CN115082996A - Face key point detection method and device, terminal equipment and storage medium - Google Patents

Face key point detection method and device, terminal equipment and storage medium

Info

Publication number
CN115082996A
CN115082996A (application CN202210747716.2A)
Authority
CN
China
Prior art keywords
face
face image
detected
key point
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210747716.2A
Other languages
Chinese (zh)
Inventor
胡束芒
陈现岭
赵龙
颉毅
林枝叶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Great Wall Motor Co Ltd
Original Assignee
Great Wall Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Great Wall Motor Co Ltd filed Critical Great Wall Motor Co Ltd
Priority claimed from CN202210747716.2A
Publication of CN115082996A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements using pattern recognition or machine learning
    • G06V10/82 Arrangements using neural networks
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification


Abstract

The application is applicable to the technical field of artificial intelligence, and provides a face key point detection method and device, a terminal device, and a computer-readable storage medium. The method includes: acquiring a face image to be detected, in which at least one face key point is occluded; performing face key point detection on the face image to be detected to obtain a first detection result for the face key points other than the at least one occluded face key point; searching for a target face image with the same person identity as the face image to be detected, in which no face key points are occluded; acquiring a second detection result for the at least one face key point from the target face image; and superimposing the first detection result and the second detection result to obtain the face key point detection result of the face image to be detected. The method improves the accuracy of key point detection on occluded face images.

Description

Face key point detection method and device, terminal equipment and storage medium
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a face key point detection method, a face key point detection device, terminal equipment and a computer-readable storage medium.
Background
Face key point detection refers to locating the key points of the face contour and the facial features (eyes, eyebrows, nose, mouth) on the basis of face detection.
Existing face key point detection methods train a pre-constructed face key point detection model on a large number of face image samples annotated with key point positions, and then detect the face key points in an image to be processed with the trained model. In actual use, however, the user's face pose, clothing, limb movements, lighting in the user's surroundings, and the like often occlude part of the user's face region, which lowers the accuracy of face key point detection.
Disclosure of Invention
The embodiments of the present application provide a face key point detection method and device, a terminal device, and a computer-readable storage medium, which can improve the accuracy of face key point detection on a face image with occlusion.
In a first aspect, an embodiment of the present application provides a method for detecting a face key point, including:
acquiring a face image to be detected, in which at least one face key point is occluded;
performing face key point detection on the face image to be detected to obtain a first detection result for the face key points other than the at least one face key point;
searching for a target face image with the same person identity as the face image to be detected, in which no face key points are occluded;
acquiring a second detection result for the at least one face key point from the target face image;
and superimposing the first detection result and the second detection result to obtain the face key point detection result of the face image to be detected.
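The five steps above can be sketched in Python. This is only a minimal illustration of the claimed flow, not the patented implementation; the keypoint dictionaries and index sets are hypothetical stand-ins for the outputs of the detection models described in the embodiments:

```python
from typing import Dict, Set, Tuple

Point = Tuple[float, float]

def superimpose(first: Dict[int, Point], second: Dict[int, Point]) -> Dict[int, Point]:
    # Step 5: merge the two partial detection results into one full result.
    merged = dict(first)
    merged.update(second)
    return merged

def detect_with_occlusion(probe_kps: Dict[int, Point],
                          occluded_ids: Set[int],
                          target_kps: Dict[int, Point]) -> Dict[int, Point]:
    # Steps 1-2: keep only the keypoints that are not occluded (first result).
    first = {i: p for i, p in probe_kps.items() if i not in occluded_ids}
    # Steps 3-4: take the occluded keypoints from the target image of the
    # same person (second result).
    second = {i: target_kps[i] for i in occluded_ids}
    return superimpose(first, second)
```

The key design point is that the two partial results are disjoint by construction, so superimposing them yields exactly one position per keypoint.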
Optionally, the face image to be detected carries the identification of the person to be detected, and searching for the target face image with the same person identity as the face image to be detected includes:
searching a pre-constructed face image library for candidate face images with the same person identity as the face image to be detected, according to the face key points contained in each face image in the library and the unoccluded face key points in the face image to be detected;
and selecting, from the candidate face images, the face image with the highest similarity to the face image to be detected as the target face image.
Optionally, the face image library includes a plurality of face angle data sets, each of which contains a plurality of face images at the same face angle, and each face image contains a plurality of face key points. Searching for candidate face images with the same person identity as the face image to be detected according to the face key points contained in each face image in the pre-constructed face image library and the unoccluded face key points in the face image to be detected includes:
performing face angle recognition on the face image to be detected to obtain the target face angle of the face image to be detected;
and searching, in the face angle data set corresponding to the target face angle, for candidate face images with the same person identity as the face image to be detected, according to the unoccluded face key points in the face image to be detected and the face key points contained in each face image in that data set.
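A rough sketch of this angle-bucketed matching, under stated assumptions: each library entry is a dict with hypothetical `person_id` and `keypoints` fields, and similarity is taken as a simple negative mean keypoint distance (the patent does not fix a particular similarity measure):

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def keypoint_similarity(visible: Dict[int, Point],
                        candidate: Dict[int, Point]) -> float:
    # Negative mean distance over the keypoints visible in the probe image;
    # a higher value means a more similar face.
    dists = [math.dist(visible[i], candidate[i]) for i in visible if i in candidate]
    return -sum(dists) / len(dists) if dists else float("-inf")

def pick_target_image(angle_set: List[dict],
                      visible: Dict[int, Point]) -> dict:
    # From the data set for the recognized target face angle, return the
    # candidate whose keypoints best match the unoccluded probe keypoints.
    return max(angle_set, key=lambda img: keypoint_similarity(visible, img["keypoints"]))
```

Restricting the search to a single face-angle data set keeps the comparison geometrically meaningful: keypoints of a frontal probe are never matched against a profile image.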
Optionally, obtaining the second detection result for the at least one face key point of the target face image includes:
determining the feature value of the at least one face key point of the face in the target face image at the target face angle;
and obtaining the second detection result for the at least one face key point according to the feature value of the at least one face key point.
Optionally, the face images contained in the face image library are obtained as follows:
acquiring sample face images of different person identities;
inputting each sample face image into a trained face occlusion detection model to obtain the proportion of occluded face key points in the sample face image;
and if the proportion is smaller than a first threshold, adding the sample face image to the face image library.
Optionally, after adding the sample face image to the face image library, the method further includes:
detecting the number of face images contained in the face image library;
and if the number of face images is smaller than a second threshold, returning to the step of acquiring sample face images of different person identities and the subsequent steps until the number of face images is greater than or equal to the second threshold.
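The library-building loop described in these two optional claims might look like the following sketch, where `sample_stream` and `occlusion_ratio` are hypothetical stand-ins for image acquisition and the trained face occlusion detection model:

```python
from typing import Callable, Iterable, List

def build_face_library(sample_stream: Callable[[], Iterable],
                       occlusion_ratio: Callable[[object], float],
                       first_threshold: float,
                       second_threshold: int) -> List:
    # S501-S503 plus the size check: keep acquiring samples, admit only
    # those whose occluded-keypoint proportion is below the first
    # threshold, and stop once the library holds at least
    # `second_threshold` images.
    library: List = []
    while len(library) < second_threshold:
        added = 0
        for sample in sample_stream():
            if occlusion_ratio(sample) < first_threshold:
                library.append(sample)
                added += 1
        if added == 0:  # no usable samples this pass; bail out rather than loop forever
            break
    return library
```

The `added == 0` guard is an addition of this sketch, not part of the claims: without it, a stream of fully occluded samples would make the "return to the acquisition step" loop run forever.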
Optionally, superimposing the first detection result and the second detection result to obtain the face key point detection result of the face image to be detected includes:
calculating an offset value for a target face key point according to the position information of that unoccluded target face key point in the face image to be detected and its position information in the target face image;
correcting the second detection result according to the offset value and the pre-stored distance values between the face key points corresponding to the person identity of the face image to be detected;
and superimposing the first detection result and the corrected second detection result to obtain the face key point detection result.
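A minimal sketch of this offset correction, assuming a single reference keypoint visible in both images and a pure translation; the pre-stored inter-keypoint distance values mentioned above, which would further refine the correction, are omitted here:

```python
from typing import Dict, Tuple

Point = Tuple[float, float]

def correct_and_merge(first: Dict[int, Point],
                      second: Dict[int, Point],
                      probe_ref: Point,
                      target_ref: Point) -> Dict[int, Point]:
    # Offset between a keypoint that is visible in both the probe image
    # and the target image.
    dx = probe_ref[0] - target_ref[0]
    dy = probe_ref[1] - target_ref[1]
    # Shift the second detection result into the probe image's frame,
    # then superimpose it onto the first detection result.
    corrected = {i: (x + dx, y + dy) for i, (x, y) in second.items()}
    merged = dict(first)
    merged.update(corrected)
    return merged
```

Without such a correction, the keypoints borrowed from the target image would sit wherever that person's face happened to be in the library photo rather than aligned with the probe image.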
In a second aspect, an embodiment of the present application provides a face keypoint detection apparatus, including:
the first image acquisition unit is used for acquiring a face image to be detected, at least one face key point of which is occluded;
the first processing unit is used for performing face key point detection on the face image to be detected to obtain a first detection result for the face key points other than the at least one face key point;
the first searching unit is used for searching for a target face image with the same person identity as the face image to be detected, none of the face key points of which are occluded;
the first result obtaining unit is used for obtaining a second detection result for the at least one face key point of the target face image;
and the first superimposing unit is used for superimposing the first detection result and the second detection result to obtain the face key point detection result of the face image to be detected.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the face keypoint detection method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the face keypoint detection method according to any one of the above first aspects.
In a fifth aspect, an embodiment of the present application provides a computer program product, which when running on a terminal device, enables the terminal device to execute the face keypoint detection method according to any one of the above first aspects.
The face key point detection method provided by the embodiments of the present application first performs face key point detection on the face image to be detected to obtain the first detection result, i.e. the detection result for the unoccluded face key points in the face image to be detected; it then obtains the detection result for the occluded face key points from a target face image that has the same person identity as the face image to be detected and in which no face key points are occluded; finally, it superimposes the two detection results to obtain the complete face key point detection result of the face image to be detected. This achieves accurate detection on face images with occlusion and improves the accuracy of face key point detection.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating an implementation of a method for detecting key points of a human face according to an embodiment of the present application;
FIG. 2 shows several real-world examples of occluded face key points according to an embodiment of the present application;
FIG. 3 is an exemplary diagram of different numbers of face key points provided in an embodiment of the present application;
fig. 4 is a flowchart illustrating an implementation of a face keypoint detection method according to another embodiment of the present application;
fig. 5 is a flowchart illustrating an implementation of a face keypoint detection method according to yet another embodiment of the present application;
fig. 6 is a flowchart illustrating an implementation of a face keypoint detection method according to yet another embodiment of the present application;
fig. 7 is a flowchart illustrating an implementation of a face keypoint detection method according to another embodiment of the present application;
fig. 8 is a flowchart illustrating an implementation of a face keypoint detection method according to yet another embodiment of the present application;
fig. 9 is a flowchart of an implementation of a face keypoint detection method according to yet another embodiment of the present application;
fig. 10 is a flowchart of establishing a face image library according to an embodiment of the present application;
fig. 11 is a specific application scene diagram of a face key point detection method according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a face key point detection apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
It should be noted that the face key point detection methods provided in all embodiments of the present application can be applied to a Driver Monitoring System (DMS), which detects the driver's state during driving. A DMS includes face authentication (FaceID), fatigue detection, distraction detection, expression recognition, gesture recognition, dangerous action recognition, gaze tracking, and the like.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a method for detecting key points of a human face according to an embodiment of the present application. In the embodiment of the application, the execution main body of the face key point detection method is terminal equipment.
As shown in fig. 1, the method for detecting a face key point according to an embodiment of the present application may include steps S101 to S105, which are detailed as follows:
in S101, a face image to be detected is obtained; and at least one face key point of the face image to be detected is shielded.
In an implementation manner of the embodiment of the application, the terminal device may acquire the face image to be detected through the camera device. The camera device may be a camera, or the like.
It should be noted that at least one face key point of the face image to be detected is occluded. As shown in FIG. 2, FIG. 2 illustrates several real-world cases in which face key points are occluded, as provided in an embodiment of the present application.
In practical applications, the face key points include, but are not limited to, the cheek contour, chin contour, upper lip, lower lip, nose, nose bridge, eyes, eyelids, pupils, eyebrows, and so on. As shown in FIG. 3, FIG. 3(a) shows an example with 5 face key points, FIG. 3(b) an example with 22 face key points, and FIG. 3(c) an example with 65 face key points.
In the embodiments of the present application, the camera device may record the scene within its shooting range to obtain a corresponding video, or it may photograph the scene at a preset shooting time interval to obtain multiple frames of video images. The preset shooting time interval may be set according to actual needs and is not limited here.
On this basis, the terminal device may acquire the video captured by the camera device and split it into frames to obtain the multiple frames of video images, or it may directly acquire the multiple frames of video images captured by the camera device.
After acquiring the multiple frames of video images captured by the camera device, the terminal device obtains the face image to be detected from them.
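The interval-based frame acquisition can be illustrated with a small sketch. In practice a video API (e.g. OpenCV's `VideoCapture`) would supply the frames; here, hypothetical frame timestamps stand in for them:

```python
from typing import List

def sample_frames(frame_times: List[float], interval: float) -> List[float]:
    # Keep frame timestamps at least `interval` seconds apart, mimicking
    # capture at a preset shooting time interval.
    picked: List[float] = []
    next_t = 0.0
    for t in frame_times:
        if t >= next_t:
            picked.append(t)
            next_t = t + interval
    return picked
```

For example, sampling a 30 fps stream with a 0.5 s interval keeps roughly one frame in fifteen, which is usually enough for per-frame face analysis in a DMS.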
In an embodiment of the present application, since both human bodies and non-human bodies may appear within the shooting range of the camera device, the captured frames may contain either. A non-human body may be a living creature other than a human, an inanimate object, and so on.
Based on this, the terminal device may determine that the image captured by the image capturing device is a human body image through the following steps, which are detailed as follows:
acquiring a plurality of frames of video images collected by the camera device.
And inputting the multi-frame video images into a preset target detection model for target identification to obtain target identification results corresponding to the multi-frame video images.
And determining the video image including the human body in the target recognition result as the human body image.
In this embodiment, the target detection model is used to detect a target object in an image and identify its type. The target detection model may be an existing convolutional-neural-network-based object detection model.
Wherein the target recognition result is used for describing the type of the target object contained in the video image.
The type of target object may include, but is not limited to, a human body or a non-human body. By way of example, the non-human body may include, but is not limited to, cats, dogs, mice, or dishware.
When the target recognition result corresponding to a certain frame of video image includes a human body, it is indicated that the video image includes the human body, and therefore, the terminal device can determine the video image including the human body in the target recognition result as a human body image. Wherein the human body image comprises a human face.
On this basis, after acquiring a human body image, the terminal device may input it into a trained face occlusion detection model to obtain the proportion of occluded face key points in the image. When this proportion is greater than or equal to the first threshold, the terminal device crops the human body image to an image containing only the face region and determines that image as the face image to be detected.
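This selection step can be sketched as a simple filter. The `occlusion_ratio` and `crop_face` callables are hypothetical stand-ins for the occlusion detection model and the cropping routine:

```python
from typing import Callable, List

def select_probe_images(body_images: List,
                        occlusion_ratio: Callable[[object], float],
                        crop_face: Callable[[object], object],
                        first_threshold: float) -> List:
    # Keep only human-body images whose occluded-keypoint proportion
    # meets or exceeds the threshold, cropped to the face region; these
    # become the face images to be detected by the two-stage method.
    return [crop_face(img) for img in body_images
            if occlusion_ratio(img) >= first_threshold]
```

Note the asymmetry with the library-building step: images below the threshold are good enough to serve as library references, while images at or above it are the occluded probes that need the library's help.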
In S102, a face key point detection process is performed on the face image to be detected, so as to obtain a first detection result of other face key points except the at least one face key point.
In the embodiments of the present application, after acquiring the face image to be detected, the terminal device may directly perform face key point detection on it to obtain the first detection result for the face key points (i.e. the unoccluded face key points) other than the at least one occluded face key point.
It should be noted that the first detection result may be the position information of those other face key points, and the position information may be represented by coordinates.
In S103, a target face image with the same person identity as the face image to be detected is searched for; none of the face key points of the target face image are occluded.
In the embodiments of the present application, after acquiring the face image to be detected, the terminal device may determine the unoccluded face key points in it, and determine the target face image with the same person identity according to those unoccluded face key points and the face key points contained in the pre-stored face images. The pre-stored face images are face images whose face key points are not occluded, as are the face key points of the target face image.
It should be noted that each of the pre-stored face images may carry a corresponding person identification, which includes, but is not limited to, a name, an identification number, and the like.
In an embodiment of the present application, the terminal device may specifically search for a target face image with the same person identity as that of the face image to be detected through S401 to S402 shown in fig. 4, which is detailed as follows:
in S401, a face image to be selected having the same person identity as the face image to be detected is searched for according to a face key point included in each face image in a pre-constructed face image library and an unshielded face key point in the face image to be detected.
In this embodiment, the terminal device may pre-construct a face image library for storing a plurality of face images, and each face image may carry a corresponding person identification.
In an embodiment of the present application, the terminal device may specifically obtain the face images in the face image library through S501 to S503 shown in fig. 5, which are detailed as follows:
in S501, sample face images of different person identities are acquired.
In one implementation of this embodiment, the terminal device may acquire a plurality of face images captured in advance by the camera device and input them one by one into a face recognition model for recognition, thereby obtaining sample face images of different person identities.
It should be noted that the face recognition model may be an existing face recognition model based on a convolutional neural network.
In S502, the sample face image is input to a trained face occlusion detection model for processing, so as to obtain a proportion of occluded face key points in the sample face image.
In this embodiment, the face occlusion detection model is used to detect the proportion of occluded face key points in a sample face image. It may be obtained by training a pre-constructed first deep learning model on a first preset sample set, in which each sample contains a historical face image and the proportion of occluded face key points corresponding to that image. During training, the historical face image of each sample serves as the input of the first deep learning model and the corresponding proportion of occluded face key points serves as its output; through training, the model learns the correspondence between face images and occluded face key points, and the trained first deep learning model is used as the face occlusion detection model.
On this basis, the terminal device can input the sample face image into the trained face occlusion detection model to obtain the proportion of occluded face key points in it. The proportion is the ratio of the number of occluded face key points in the face image to the number of all face key points in the face image.
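The proportion defined above is a simple ratio; a one-line sketch makes the definition explicit:

```python
def occluded_proportion(occluded_count: int, total_count: int) -> float:
    # Ratio of occluded face key points to all face key points in the image.
    # Returns 0.0 for an empty keypoint set to avoid division by zero.
    return occluded_count / total_count if total_count else 0.0
```

For the 22-keypoint scheme of FIG. 3(b), a face with 5 occluded keypoints would score about 0.23 and pass a first threshold of, say, 0.3.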
In this embodiment, after obtaining the proportion of the face key points that are shielded in the sample face image, the terminal device may compare the proportion with the first threshold. The first threshold may be set according to actual needs, and is not limited herein.
In an embodiment of the present application, when the terminal device detects that the proportion of occluded face key points in the sample face image is greater than or equal to the first threshold, this indicates that face key points in the sample face image are occluded; the terminal device therefore discards the sample face image and does not add it to the face image library.
In another embodiment of the present application, when the terminal device detects that the proportion of occluded face key points in the sample face image is smaller than the first threshold, it executes step S503.
In S503, if the ratio is smaller than a first threshold, the sample face image is added to the face image library.
In this embodiment, when the terminal device detects that the proportion of occluded face key points in the sample face image is smaller than the first threshold, this indicates that the face key points of the sample face image are essentially unoccluded; the terminal device may therefore add the sample face image to the face image library.
In this embodiment, after obtaining the face image library, the terminal device may search for a face image to be selected, which has the same person identity as that of the face image to be detected, in the pre-constructed face image library.
In an embodiment of the present application, the face image library may include a plurality of different face angle data sets. Each face angle data set contains a plurality of face images at the same face angle, and each of these face images includes a plurality of face key points. Therefore, to improve the search accuracy, the terminal device may specifically obtain the candidate face image through S601 to S602 shown in fig. 6, which are detailed as follows:
in S601, a face angle recognition process is performed on the face image to be detected to obtain a target face angle of the face image to be detected.
In an embodiment of the application, the terminal device may input the face image to be detected to the trained face angle recognition model for processing, so as to determine a target face angle of the face image to be detected.
In this embodiment, the face angle recognition model is used to detect a face angle in a face image to be detected. The face angle recognition model can be obtained by training a second deep learning model which is constructed in advance based on a second preset sample set. Each sample data in the second preset sample set comprises a historical face image and a face angle corresponding to the historical face image. When a pre-constructed second deep learning model is trained, the historical face images in each sample are used as the input of the second deep learning model, the face angles corresponding to the historical face images in each sample are used as the output of the second deep learning model, through training, the second deep learning model can learn the corresponding relation between all possible face images and face angles, and the trained second deep learning model is used as a face angle recognition model.
In S602, from the face angle data set corresponding to the target face angle, a face image to be selected having the same person identity as that of the face image to be detected is searched according to the face key point that is not covered in the face image to be detected and the plurality of face key points included in each face image in the face angle data set corresponding to the target face angle.
In this embodiment, each face image in each face angle data set includes a plurality of face key points, and therefore, in this embodiment, after the terminal device determines the target face angle of the face image to be detected, the terminal device may search the face image to be selected, which has the same person identity as that of the face image to be detected, from the face angle data set corresponding to the target face angle, according to the face key points that are not blocked in the face image to be detected and the plurality of face key points included in each face image in the face angle data set corresponding to the target face angle.
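The search in S601–S602 can be sketched as a lookup in a library keyed by face angle. The structure below is an assumption for illustration only: each stored face image is a record with a `person_id` and a `keypoints` mapping, and angles are discretised into named buckets.

```python
def find_candidates(library, target_angle, person_id, visible_keypoints):
    """Search the face angle data set matching the target face angle for
    face images with the same person identity (S602)."""
    candidates = []
    for image in library.get(target_angle, []):
        if image["person_id"] != person_id:
            continue
        # require every unoccluded key point of the probe image to also be
        # present among the stored image's face key points
        if all(name in image["keypoints"] for name in visible_keypoints):
            candidates.append(image)
    return candidates
```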
In S402, a face image with the highest similarity to the face image to be detected is selected from the face images to be selected as the target face image.
In this embodiment, there may be a plurality of candidate face images; therefore, after the terminal device obtains the plurality of candidate face images, it may calculate the similarity between each candidate face image and the face image to be detected.
In an embodiment of the application, after obtaining the similarity between each candidate face image and the face image to be detected, the terminal device may compare these similarities one by one and determine the candidate face image with the highest similarity as the target face image.
In another embodiment of the application, after the terminal device obtains the similarity between each candidate face image and the face image to be detected, the candidate face image with the similarity greater than or equal to the third threshold may be determined as the target face image. That is, the target face image may be plural. The third threshold may be set according to actual needs, and is not limited herein.
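The two selection strategies above can be sketched in one helper. This is an illustrative sketch: passing no threshold returns the single most similar candidate, while passing the third threshold returns every candidate at or above it, so there may be several target face images.

```python
def select_targets(candidates, similarities, third_threshold=None):
    """Pick the target face image(s) from the candidates: either the single
    most similar one, or every candidate whose similarity is at or above
    the third threshold."""
    if third_threshold is None:
        best = max(range(len(candidates)), key=lambda i: similarities[i])
        return [candidates[best]]
    return [c for c, s in zip(candidates, similarities) if s >= third_threshold]
```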
In S104, a second detection result of the at least one face key point of the target face image is obtained.
In the embodiment of the application, the second detection result refers to the position information of the at least one occluded face key point.
In an embodiment of the application, the terminal device prestores the correspondence between different historical face images and the position information of the face key points in those images. Therefore, after obtaining the target face image, the terminal device can obtain a second detection result of the at least one face key point of the target face image according to this prestored correspondence.
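The lookup just described reduces to reading a prestored table. The sketch below assumes a hypothetical mapping from a target face image identifier to its key-point positions; the storage format in the embodiment is not specified.

```python
def second_detection_result(correspondences, target_image_id, occluded_names):
    """Read the position information of the occluded face key points from the
    prestored correspondence for the target face image (S104)."""
    positions = correspondences[target_image_id]
    return {name: positions[name] for name in occluded_names}
```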
In another embodiment of the present application, in combination with S601 to S602, the terminal device may specifically obtain a second detection result of at least one face key point of the target face image through S701 to S702 shown in fig. 7, which is detailed as follows:
in S701, a feature value of the at least one face key point of the face in the target face image at the target face angle is determined.
In S702, a second detection result of the at least one face key point is obtained according to the feature value of the at least one face key point.
In this embodiment, the face key point feature value is used to describe an offset angle interval of a face key point of a face in a target face image at a target face angle. The offset angle is used for describing the offset degree of the face key point in the target face image relative to the face key point corresponding to the face key point in the front face image with the same person identity as the target face image. The offset angle interval represents an angular range of the offset.
For example, the feature values of the face key points may include, but are not limited to: the first face feature value, the second face feature value, the third face feature value, the fourth face feature value, the fifth face feature value and the sixth face feature value. The offset angle interval described by the first face feature value is [0°,60°), by the second face feature value [60°,120°), by the third face feature value [120°,180°), by the fourth face feature value [180°,240°), by the fifth face feature value [240°,300°), and by the sixth face feature value [300°,360°).
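The six 60-degree intervals above map directly to feature values. The sketch below encodes the six face feature values as the integers 1 to 6 for illustration; this integer encoding is an assumption, not part of the embodiment.

```python
def feature_value(offset_angle_deg):
    """Map a face key point's offset angle to one of the six feature values,
    encoded here as 1..6 for the intervals [0,60), [60,120), [120,180),
    [180,240), [240,300), [300,360)."""
    offset_angle_deg %= 360          # normalise into [0, 360)
    return int(offset_angle_deg // 60) + 1

print(feature_value(45))    # 1: first face feature value, interval [0, 60)
print(feature_value(300))   # 6: sixth face feature value, interval [300, 360)
```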
In this embodiment, for any face key point, the terminal device pre-stores a corresponding relationship between the feature value of the face key point and the position information of the face key point, so that after determining the feature value of at least one face key point of the face in the target face image at the target face angle, the terminal device may obtain a second detection result of at least one face key point in the face image to be detected according to the corresponding relationship between the feature values of different face key points and the position information of the face key point.
In another embodiment of the present application, with reference to S402, when there are a plurality of target face images, in order to improve the detection accuracy of at least one face key point in the face image to be detected, for any face key point in the at least one face key point, the terminal device may obtain, according to the face key point, a plurality of position information of a key point corresponding to the face key point in the plurality of target face images, and determine, according to the plurality of position information, target position information of the face key point. And finally, determining to obtain a second detection result of the at least one face key point in the face image to be detected according to the target position information corresponding to the at least one face key point in the face image to be detected.
In an implementation manner of this embodiment, for any face key point in at least one face key point in a face image to be detected, after acquiring multiple pieces of position information of the face key point, a terminal device may average the multiple pieces of position information, and determine the average as target position information of the face key point.
In another implementation manner of this embodiment, after acquiring multiple target face images, the terminal device may sort the multiple target face images in descending order of their similarity to the face image to be detected, determine the weights of the multiple pieces of position information of the key points corresponding to the face key point in the multiple target face images according to the sorting result, and calculate the target position information of the face key point from the multiple pieces of position information and their corresponding weights.
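Both fusion variants above are weighted averages. A minimal sketch, using the similarities themselves as weights (one plausible choice of weight ratio; the embodiment does not fix it): with equal weights this reduces to the plain average of the first implementation.

```python
def fused_position(positions, weights):
    """Fuse several pieces of position information for one face key point,
    weighting each target face image's contribution, e.g. by similarity."""
    total = sum(weights)
    x = sum(p[0] * w for p, w in zip(positions, weights)) / total
    y = sum(p[1] * w for p, w in zip(positions, weights)) / total
    return (x, y)
```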
In S105, the first detection result and the second detection result are superimposed to obtain a face key point detection result of the face image to be detected.
In the embodiment of the application, since the first detection result indicates position information of other face key points that are not blocked in the face image to be detected, and the second detection result indicates position information of at least one face key point that is blocked in the face image to be detected, the terminal device needs to superimpose the first detection result and the second detection result, so as to obtain a face key point detection result of the face image to be detected, that is, a detection result of all face key points in the face image to be detected.
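Representing each detection result as a mapping from key-point name to position (an assumed representation), the superposition in S105 is a disjoint merge of the two mappings:

```python
def superimpose(first_result, second_result):
    """Superimpose the two detection results (S105): the first result holds
    the unoccluded face key points, the second the recovered occluded ones."""
    merged = dict(first_result)
    merged.update(second_result)   # recovered occluded key points fill the gaps
    return merged
```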
In an embodiment of the present application, in order to further improve the detection accuracy of the face key points, the terminal device may specifically obtain the face key point detection result of the face image to be detected through S801 to S803 shown in fig. 8, which are detailed as follows:
in S801, an offset value of the target face key point is calculated according to the position information of the target face key point that is not covered in the to-be-detected face image and the position information of the target face key point in the target face image.
In this embodiment, the target face key point refers to any face key point existing in both the face image to be detected and the target face image.
Because the face image to be detected still has some slight differences with the target face image, the terminal device needs to calculate the offset value of the target face key point according to the position information of the target face key point in the face image to be detected, which is not shielded, and the position information of the target face key point in the target face image. The offset value specifically refers to a difference value between position information of a target face key point in the face image to be detected and position information of the target face key point in the target face image.
In S802, the second detection result is corrected according to the prestored distance value between each face key point corresponding to the person identity of the face image to be detected and the offset value.
In S803, the first detection result and the modified second detection result are superimposed to obtain the face key point detection result.
In practical application, the distance values between the face key points of a given person are fixed, so the terminal device can correct the second detection result of the face image to be detected according to the prestored distance values between the face key points corresponding to the person identity of the face image to be detected and the offset value, that is, adjust the position information of the at least one occluded face key point in the face image to be detected.
Specifically, after the terminal device obtains the offset value of the target face key point, for any one blocked face key point in the face image to be detected, the terminal device determines the position information of the face key point in the target face image corresponding to the blocked face key point as the position information of the blocked face key point in the face image to be detected, and then corrects the position information of the blocked face key point according to the offset value, so as to obtain the target position information of the blocked face key point in the face image to be detected.
Based on this, the terminal device can obtain the target position information of all the occluded face key points in the face image to be detected, namely the corrected second detection result.
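The correction in S801–S803 can be sketched as follows. This is a simplified sketch under two assumptions not made explicit by the embodiment: a single shared anchor key point supplies the offset value, and the offset is treated as a pure translation (the prestored inter-key-point distance constraints are omitted).

```python
def corrected_detection(first_result, target_positions, anchor_name):
    """S801-S803: compute the offset of a shared, unoccluded target face key
    point, shift the occluded key points copied from the target face image
    by that offset, and superimpose everything into the final result."""
    ax, ay = first_result[anchor_name]          # position in the probe image
    tx, ty = target_positions[anchor_name]      # position in the target image
    dx, dy = ax - tx, ay - ty                   # offset value (S801)
    result = dict(first_result)
    for name, (x, y) in target_positions.items():
        if name not in result:                  # occluded in the probe image
            result[name] = (x + dx, y + dy)     # corrected position (S802)
    return result                               # superimposed result (S803)
```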
Therefore, the face key point detection method provided by the embodiment of the application obtains a face image to be detected in which at least one face key point is occluded; performs face key point detection processing on the face image to be detected to obtain a first detection result of the face key points other than the at least one occluded face key point; searches for a target face image that has the same person identity as the face image to be detected and whose face key points are not occluded; obtains a second detection result of the at least one face key point from the target face image; and superimposes the first detection result and the second detection result to obtain the face key point detection result of the face image to be detected. In other words, the method first detects the unoccluded face key points, then recovers the occluded face key points from a target face image of the same person in which those key points are visible, and finally superimposes the two detection results into a complete face key point detection result, thereby accurately detecting key points in occluded face images and improving the accuracy of face key point detection.
In another embodiment of the present application, in combination with S501 to S503, in order to improve the detection accuracy of face occlusion, the number of face images in the face image library needs to be sufficient, so please refer to fig. 9, where fig. 9 is a method for detecting key points of a face according to another embodiment of the present application. With respect to the embodiment corresponding to fig. 5, after S503, the present embodiment may further include S901 to S902, which are detailed as follows:
in S901, the number of face images included in the face image library is detected.
In S902, if the number of face images is less than a second threshold, the step of obtaining sample face images with different person identities and subsequent steps are executed again until the number of face images is greater than or equal to the second threshold.
In this embodiment, after the terminal device adds the sample face image to the face image library, the number of face images included in the face image library may be detected in real time, and the number of face images may be compared with the second threshold. The second threshold may be set according to actual needs, and is not limited herein.
When the terminal device detects that the number of face images in the face image library is smaller than the second threshold, it indicates that the number of face images in the face image library is insufficient; therefore, the terminal device may return to execute steps S501 to S503 until the number of face images in the face image library is greater than or equal to the second threshold.
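The S901–S902 loop can be sketched as follows. The `sample_stream` of (image, occluded-proportion) pairs is an assumption standing in for repeated sample acquisition and occlusion detection; the illustrative first threshold of 0.1 is likewise hypothetical.

```python
def build_library(sample_stream, second_threshold, first_threshold=0.1):
    """Keep consuming sample face images until the library holds at least
    `second_threshold` images (S901-S902); each sample is kept only when
    its occluded proportion is below the first threshold (S503)."""
    library = []
    for image, occluded_ratio in sample_stream:
        if occluded_ratio < first_threshold:    # S503 filter
            library.append(image)
        if len(library) >= second_threshold:    # S902 stop condition
            break
    return library
```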
As can be seen from the above, the method for detecting key points of a human face provided by this embodiment detects the number of human face images contained in a human face image library; when the number of the detected face images is smaller than the second threshold value, the step of obtaining the sample face images of different personnel identities and the subsequent steps are returned to be executed until the number of the face images is larger than or equal to the second threshold value, so that the number of the face images in the face image library can be ensured to be enough, and the detection accuracy of face shielding can be improved.
Referring to fig. 10, fig. 10 is a flow chart of establishing a face image library according to an embodiment of the present application. As shown in fig. 10, in combination with all embodiments provided by the present application, when a face image library is constructed by a terminal device, a sample face image carrying a person identity is obtained by a camera device, the sample face image is compared with a plurality of historical non-occluded face images having the same person identity one by one, and when it is detected that a historical non-occluded face image having a similarity greater than a fourth threshold with the sample face image exists in the plurality of historical non-occluded face images, all face key points in the sample face image are detected, and feature extraction is performed on the face key points in the sample face image. The fourth threshold may be set according to actual needs, and is not limited herein, and for example, the fourth threshold may be set to 0.9.
Referring to fig. 10, after detecting that a plurality of historical unoccluded face images have a historical unoccluded face image whose similarity to the sample face image is greater than a fourth threshold, the terminal device may further determine a face angle in the sample face image, and classify the sample face image according to the face angle.
Based on this, the terminal device performs feature extraction on the face key points in the sample face image and classifies the sample face image according to the face angle, after which the sample face image may be added to the face image library; the terminal device then detects the number of face images in the face image library at that time, or detects whether the update duration of the face image library has reached a preset duration. When the terminal device detects that the number of face images in the face image library is smaller than the second threshold, or that the update duration of the face image library has not yet reached the preset duration, it returns to execute the step of obtaining a sample face image carrying the person identity through the camera device and the subsequent steps, until the number of face images in the face image library is greater than or equal to the second threshold, or the update duration reaches the preset duration, at which point construction of the face image library is complete.
It should be noted that, in the face key point detection methods provided in all embodiments of the present application, the face key point detection process can be implemented in one face key point detection model. Based on this, please refer to fig. 11, where fig. 11 is a specific application scenario diagram of a face key point detection method provided in an embodiment of the present application. As shown in fig. 11, in combination with all embodiments provided by the present application, when the terminal device performs face key point detection on a face image to be detected, the face image to be detected may be input into a face key point detection model for processing. The face key point detection model comprises a face candidate region extraction module 1, a face image library construction module 2, a face key point detection module 3 and a face feature verification and restoration module 4.
It should be noted that the face candidate region extraction module 1 includes a convolution layer and a down-sampling layer, and is used for extracting the unoccluded face key points and the occluded face key points in the face image A to be detected.
The face image library construction module 2 is used for detecting and extracting face key points from the acquired sample face image B, and storing the face key points in the sample face image B in association with the face angle of the face in the sample face image B. Specifically, after receiving a sample face image B, the face image library construction module 2 may perform face key point detection and extraction on the sample face image B through its convolution layer and fully connected layer, and finally store each face key point feature value a in the sample face image B in association with the face angle b of the face in the sample face image B.
The face key point detection module 3 is configured to detect occluded face key points in the face image a to be detected, and obtain a second detection result R2 of the occluded face key points. For a specific process, please refer to specific implementation processes of the face key point detection method described in the embodiments provided in fig. 4, fig. 5, fig. 6, and fig. 7, which are not described herein again.
The face feature verification and restoration module 4 first performs face key point detection processing on the face image a to be detected to obtain a first detection result R1 of the face key points which are not occluded in the face image to be detected, then obtains a face key point detection result R3 of the face image a to be detected according to the first detection result R1 and the second detection result R2, and finally outputs a target face image C with face key point detection marks. For a specific process, please refer to the specific implementation process of the face key point detection method described in the embodiment provided in fig. 8, which is not described herein again.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 12 shows a block diagram of a structure of a face keypoint detection apparatus provided in the embodiment of the present application, which corresponds to the face keypoint detection method described in the foregoing embodiment, and only shows portions related to the embodiment of the present application for convenience of description. Referring to fig. 12, the face keypoint detection apparatus 100 includes: a first image obtaining unit 11, a first processing unit 12, a first searching unit 13, a first result obtaining unit 14 and a first superimposing unit 15. Wherein:
the first image obtaining unit 11 is configured to obtain a face image to be detected; and at least one face key point of the face image to be detected is shielded.
The first processing unit 12 is configured to perform face key point detection processing on the face image to be detected, so as to obtain a first detection result of other face key points except the at least one face key point.
The first searching unit 13 is configured to search for a target face image with the same person identity as the face image to be detected; and the key points of the face of the target face image are not shielded.
The first result obtaining unit 14 is configured to obtain a second detection result of the at least one face key point of the target face image.
The first superimposing unit 15 is configured to superimpose the first detection result and the second detection result to obtain a face key point detection result of the face image to be detected.
In an embodiment of the application, the face image to be detected carries an identity of a person to be detected; the first search unit 13 specifically includes: a second searching unit and a selecting unit. Wherein:
the second searching unit is used for searching the face image to be selected with the same personnel identity as the face image to be detected according to the face key points contained in each face image in the face image library constructed in advance and the face key points which are not shielded in the face image to be detected.
The selecting unit is used for selecting the face image with the highest similarity with the face image to be detected from the face images to be selected as the target face image.
In one embodiment of the present application, the face image library includes: a plurality of different face angle data sets, each of which contains a plurality of face images at the same face angle; each face image in the plurality of face images comprises a plurality of face key points; the second search unit specifically includes: a second processing unit and a third searching unit. Wherein:
the second processing unit is used for carrying out face angle identification processing on the face image to be detected to obtain a target face angle of the face image to be detected.
And the third searching unit is used for searching the face image to be selected with the same personnel identity as that of the face image to be detected according to the face key points which are not shielded in the face image to be detected and the face key points contained in each face image in the face angle data set corresponding to the target face angle.
In an embodiment of the present application, the first result obtaining unit specifically includes: a characteristic value determining unit and a second result obtaining unit. Wherein:
the characteristic value determining unit is used for determining the characteristic value of the at least one face key point of the face in the target face image at the target face angle.
The second result obtaining unit is used for obtaining a second detection result of the at least one face key point according to the characteristic value of the at least one face key point.
In an embodiment of the present application, the facial images included in the facial image library are obtained by the following units:
the second image acquisition unit is used for acquiring sample face images of different personnel identities.
And the third processing unit is used for inputting the sample face image into a trained face occlusion detection model for processing to obtain the proportion of occluded face key points in the sample face image.
The adding unit is used for adding the sample face image to the face image library if the proportion is smaller than a first threshold value.
In an embodiment of the present application, the face keypoint detection apparatus 100 further includes: a detection unit and a return unit. Wherein:
the detection unit is used for detecting the number of the face images contained in the face image library.
And the returning unit is used for returning to execute the step of acquiring the sample face images with different personnel identities and the subsequent steps if the number of the face images is less than a second threshold value until the number of the face images is greater than or equal to the second threshold value.
In an embodiment of the present application, the first superimposing unit 15 specifically includes: a calculation unit, a correction unit and a second superposition unit. Wherein:
the calculation unit is used for calculating and obtaining the deviation value of the target face key point according to the position information of the target face key point which is not shielded in the face image to be detected and the position information of the target face key point in the target face image.
And the correction unit is used for correcting the second detection result according to the prestored distance value between each face key point corresponding to the personnel identity of the face image to be detected and the deviation value.
And the second superposition unit is used for superposing the first detection result and the corrected second detection result to obtain the face key point detection result.
It should be noted that, for the information interaction, execution process, and other contents between the above devices/units, the specific functions and technical effects thereof based on the same concept as those of the method embodiment of the present application can be specifically referred to the method embodiment portion, and are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, and the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other, and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
Fig. 13 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 13, the terminal device 2 of this embodiment includes: at least one processor 20 (only one shown in fig. 13), a memory 21, and a computer program 22 stored in the memory 21 and operable on the at least one processor 20, wherein the processor 20, when executing the computer program 22, implements the steps in any of the above-mentioned face keypoint detection method embodiments.
The terminal device may include, but is not limited to, a processor 20, a memory 21. Those skilled in the art will appreciate that fig. 13 is only an example of the terminal device 2, and does not constitute a limitation to the terminal device 2, and may include more or less components than those shown, or combine some components, or different components, for example, and may further include an input/output device, a network access device, and the like.
The processor 20 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 21 may, in some embodiments, be an internal storage unit of the terminal device 2, such as a memory of the terminal device 2. In other embodiments, the memory 21 may also be an external storage device of the terminal device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device 2. Further, the memory 21 may include both an internal storage unit and an external storage device of the terminal device 2. The memory 21 is used to store an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the method embodiments described above.
The embodiments of the present application further provide a computer program product which, when run on a terminal device, causes the terminal device to implement the steps in the method embodiments described above.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described or illustrated in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the corresponding technical solutions to depart in substance from the spirit and scope of the embodiments of the present application, and are intended to be included within the protection scope of the present application.

Claims (10)

1. A face key point detection method, characterized by comprising the following steps:
acquiring a face image to be detected, in which at least one face key point is occluded;
performing face key point detection processing on the face image to be detected to obtain a first detection result for the face key points other than the at least one occluded face key point;
searching for a target face image having the same person identity as the face image to be detected, the face key points of the target face image being unoccluded;
acquiring a second detection result for the at least one face key point from the target face image;
and superposing the first detection result and the second detection result to obtain a face key point detection result for the face image to be detected.
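The overall flow of claim 1 can be sketched as follows. This is an illustrative sketch only, not the patented implementation: it assumes key points are represented as `{name: (x, y)}` dictionaries and that the detections on both images are supplied by external detectors (all function and key point names here are hypothetical).

```python
def superpose_keypoints(first_result, second_result):
    """Merge the key points detected on the occluded image with the
    key points recovered from the unoccluded image of the same person."""
    merged = dict(first_result)
    merged.update(second_result)
    return merged

def detect_with_occlusion(probe_detections, target_detections, occluded_names):
    # First detection result: every key point except the occluded ones.
    first = {k: v for k, v in probe_detections.items() if k not in occluded_names}
    # Second detection result: only the occluded key points, read from the
    # unoccluded target face image of the same person identity.
    second = {k: target_detections[k] for k in occluded_names}
    return superpose_keypoints(first, second)
```

For example, with `occluded_names = {"mouth"}`, the mouth position is taken from the target image while the eyes and nose keep the positions detected directly on the image to be detected.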
2. The face key point detection method according to claim 1, wherein the searching for a target face image having the same person identity as the face image to be detected comprises:
searching for candidate face images having the same person identity as the face image to be detected, according to the face key points contained in each face image in a pre-constructed face image library and the unoccluded face key points in the face image to be detected;
and selecting, from the candidate face images, the face image with the highest similarity to the face image to be detected as the target face image.
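One way to realize the highest-similarity selection of claim 2 is the minimal sketch below. It assumes each library image is summarized by a key point dictionary, and measures similarity as the negative mean Euclidean distance over the key points that are unoccluded in the image to be detected; this particular similarity measure is an assumption, not one specified in the application.

```python
import math

def keypoint_similarity(visible_kps, candidate_kps):
    # Higher is more similar: negative mean Euclidean distance over the
    # key points that are unoccluded in the face image to be detected.
    dists = [math.dist(visible_kps[k], candidate_kps[k]) for k in visible_kps]
    return -sum(dists) / len(dists)

def select_target_image(visible_kps, candidate_images):
    # candidate_images: key point dicts of images with the same person identity.
    return max(candidate_images, key=lambda c: keypoint_similarity(visible_kps, c))
```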
3. The face key point detection method according to claim 2, wherein the face image library comprises a plurality of face angle data sets, each face angle data set containing a plurality of face images at the same face angle, and each of the face images containing a plurality of face key points; and the searching for candidate face images having the same person identity as the face image to be detected, according to the face key points contained in each face image in the pre-constructed face image library and the unoccluded face key points in the face image to be detected, comprises:
performing face angle identification processing on the face image to be detected to obtain a target face angle of the face image to be detected;
and searching, in the face angle data set corresponding to the target face angle, for candidate face images having the same person identity as the face image to be detected, according to the unoccluded face key points in the face image to be detected and the plurality of face key points contained in each face image in that data set.
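The angle-bucketed search of claim 3 can be sketched as below. The angle labels, the crude mean-distance identity test, and the threshold value are all illustrative assumptions; the application does not specify how identity matching is performed.

```python
import math

def find_candidates(face_library, target_angle, visible_kps, max_mean_dist=10.0):
    """face_library: {face_angle: [keypoint dicts]}.  Only the data set for
    the recognized target face angle is searched, as in claim 3."""
    candidates = []
    for image_kps in face_library.get(target_angle, []):
        # Crude identity check: mean distance over the unoccluded key points.
        dists = [math.dist(visible_kps[k], image_kps[k]) for k in visible_kps]
        if sum(dists) / len(dists) <= max_mean_dist:
            candidates.append(image_kps)
    return candidates
```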
4. The face key point detection method according to claim 3, wherein the acquiring a second detection result for the at least one face key point from the target face image comprises:
determining a characteristic value of the at least one face key point of the face in the target face image at the target face angle;
and acquiring the second detection result for the at least one face key point according to the characteristic value of the at least one face key point.
5. The face key point detection method according to claim 2, wherein the face images contained in the face image library are obtained by:
acquiring sample face images of different person identities;
inputting each sample face image into a trained face occlusion detection model for processing, to obtain the proportion of occluded face key points in the sample face image;
and, if the proportion is smaller than a first threshold, adding the sample face image to the face image library.
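The library-admission test of claim 5 reduces to an occlusion-ratio check. In the sketch below, the occlusion model is assumed to output one boolean per key point, and the threshold value of 0.1 is a placeholder, not a value given in the application.

```python
def occluded_ratio(occlusion_flags):
    # occlusion_flags: one boolean per face key point, True if occluded
    # (as produced by a trained face occlusion detection model).
    return sum(occlusion_flags) / len(occlusion_flags)

def admit_to_library(occlusion_flags, first_threshold=0.1):
    # Add the sample face image only if few enough key points are occluded.
    return occluded_ratio(occlusion_flags) < first_threshold
```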
6. The face key point detection method according to claim 5, further comprising, after the adding of the sample face image to the face image library:
detecting the number of face images contained in the face image library;
and, if the number of face images is smaller than a second threshold, returning to the step of acquiring sample face images of different person identities and the subsequent steps, until the number of face images is greater than or equal to the second threshold.
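Claims 5 and 6 together describe a loop that keeps sampling until the library is large enough. A minimal sketch, assuming a `sample_stream` iterator and an `occlusion_ratio` function (both hypothetical names):

```python
def build_face_image_library(sample_stream, occlusion_ratio,
                             first_threshold, second_threshold):
    """Collect sample face images until the library holds at least
    second_threshold images whose occluded-key-point ratio is below
    first_threshold (claims 5 and 6)."""
    library = []
    for sample in sample_stream:
        if occlusion_ratio(sample) < first_threshold:
            library.append(sample)
        if len(library) >= second_threshold:
            break
    return library
```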
7. The face key point detection method according to any one of claims 1 to 6, wherein the superposing of the first detection result and the second detection result to obtain the face key point detection result for the face image to be detected comprises:
calculating an offset value for the target face key points according to the position information of the unoccluded face key points in the face image to be detected and the position information of the corresponding face key points in the target face image;
correcting the second detection result according to the offset value and prestored distance values between the face key points corresponding to the person identity of the face image to be detected;
and superposing the first detection result and the corrected second detection result to obtain the face key point detection result.
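The offset correction of claim 7 can be sketched as follows: compute the mean displacement of the key points visible in both images, then shift the key points recovered from the target image by that displacement before merging. The additional correction using the prestored inter-key-point distances mentioned in the claim is omitted here for brevity, and all names are illustrative.

```python
def mean_offset(probe_kps, target_kps):
    # Average displacement of the key points that are unoccluded in the
    # face image to be detected and also present in the target face image.
    shared = probe_kps.keys() & target_kps.keys()
    dx = sum(probe_kps[k][0] - target_kps[k][0] for k in shared) / len(shared)
    dy = sum(probe_kps[k][1] - target_kps[k][1] for k in shared) / len(shared)
    return dx, dy

def correct_and_superpose(first_result, second_result, offset):
    # Shift the key points recovered from the target image, then merge
    # them with the key points detected directly on the occluded image.
    dx, dy = offset
    corrected = {k: (x + dx, y + dy) for k, (x, y) in second_result.items()}
    merged = dict(first_result)
    merged.update(corrected)
    return merged
```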
8. A face key point detection device, characterized by comprising:
a first image acquisition unit, configured to acquire a face image to be detected, in which at least one face key point is occluded;
a first processing unit, configured to perform face key point detection processing on the face image to be detected to obtain a first detection result for the face key points other than the at least one occluded face key point;
a first searching unit, configured to search for a target face image having the same person identity as the face image to be detected, the face key points of the target face image being unoccluded;
a first result acquisition unit, configured to acquire a second detection result for the at least one face key point from the target face image;
and a first superposition unit, configured to superpose the first detection result and the second detection result to obtain a face key point detection result for the face image to be detected.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the face keypoint detection method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the face keypoint detection method according to any one of claims 1 to 7.
CN202210747716.2A 2022-06-29 2022-06-29 Face key point detection method and device, terminal equipment and storage medium Pending CN115082996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210747716.2A CN115082996A (en) 2022-06-29 2022-06-29 Face key point detection method and device, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115082996A true CN115082996A (en) 2022-09-20

Family

ID=83256528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210747716.2A Pending CN115082996A (en) 2022-06-29 2022-06-29 Face key point detection method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115082996A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination