CN111783605B - Face image recognition method, device, equipment and storage medium


Info

Publication number
CN111783605B
CN111783605B (application CN202010592663.2A)
Authority
CN
China
Prior art keywords
face
network
image
sample
shielding image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010592663.2A
Other languages
Chinese (zh)
Other versions
CN111783605A (en)
Inventor
杨馥魁 (Yang Fukui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010592663.2A priority Critical patent/CN111783605B/en
Publication of CN111783605A publication Critical patent/CN111783605A/en
Priority to JP2022577076A priority patent/JP2023529225A/en
Priority to PCT/CN2020/123588 priority patent/WO2021258588A1/en
Priority to KR1020227036111A priority patent/KR20220154227A/en
Application granted granted Critical
Publication of CN111783605B publication Critical patent/CN111783605B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 40/168 - Human faces: feature extraction; face representation
    • G06V 40/171 - Human faces: local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/161 - Human faces: detection; localisation; normalisation
    • G06V 40/166 - Human faces: detection; localisation; normalisation using acquisition arrangements
    • G06V 40/172 - Human faces: classification, e.g. identification
    • G06N 3/045 - Neural networks: combinations of networks
    • G06N 3/08 - Neural networks: learning methods
    • G06V 10/7715 - Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; mappings, e.g. subspace methods
    • G06V 10/774 - Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a face image recognition method, device, equipment and storage medium, relating to the fields of deep learning, cloud computing and computer vision within artificial intelligence, and in particular to recognizing faces wearing masks. The implementation scheme is as follows: based on spatial network features of a pre-acquired occluded face image, spatially transform the occluded face image to obtain a frontal occluded face image; then input the frontal occluded face image into a face recognition network to obtain an identity recognition result. This reduces the error of the converted frontal occluded face image and improves the accuracy of face recognition.

Description

Face image recognition method, device, equipment and storage medium
Technical Field
The embodiments of the present application relate to the technical field of image processing, in particular to the fields of deep learning, cloud computing and computer vision within artificial intelligence, and specifically to recognizing faces wearing masks.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information, and it is widely applied across many areas of daily life. To improve recognition accuracy, a face image is usually converted into a frontal face image before identity recognition is performed. In the prior art, the face image to be recognized is typically converted into a frontal face image based on its face keypoint features. However, when recognizing an occluded face image in which an accessory such as a mask, sunglasses or a hat covers part of the face, the face keypoint features are difficult to identify accurately because part of the face region is occluded. As a result, the converted frontal face image has a large error, which seriously degrades the accuracy of subsequent face recognition, and improvement is needed.
Disclosure of Invention
The disclosure provides a face image recognition method, device, equipment and storage medium.
According to an aspect of the present disclosure, there is provided a face image recognition method, including:
based on spatial network features of a pre-acquired occluded face image, spatially transforming the occluded face image to obtain a frontal occluded face image;
and inputting the frontal occluded face image into a face recognition network to obtain an identity recognition result.
According to another aspect of the present disclosure, there is provided a face image recognition apparatus including:
a spatial transformation module, configured to spatially transform an occluded face image based on spatial network features of the pre-acquired occluded face image to obtain a frontal occluded face image;
and an identity recognition module, configured to input the frontal occluded face image into a face recognition network to obtain an identity recognition result.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the face image recognition method of any embodiment of the present application.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the face image recognition method of any embodiment of the present application.
The technology of the embodiments of the present application solves the problem that the prior art, which aligns an occluded face image based on face keypoint features, suffers from large conversion errors, and it improves the accuracy of identity recognition for occluded face images.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
Fig. 1 is a flowchart of a face image recognition method according to an embodiment of the present application;
fig. 2A is a flowchart of another face image recognition method according to an embodiment of the present application;
fig. 2B is a schematic diagram of a network structure of a spatial transformation network according to an embodiment of the present application;
FIG. 3 is a flowchart of another face image recognition method according to an embodiment of the present application;
Fig. 4A is a flowchart of another face image recognition method according to an embodiment of the present application;
FIG. 4B is a diagram of a model architecture for joint training of a spatial transformation network and a face recognition network provided in accordance with an embodiment of the present application;
fig. 5 is a schematic structural diagram of a face image recognition device according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device for implementing a face image recognition method according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flowchart of a face image recognition method according to an embodiment of the present application. The embodiment is applicable to performing user identity recognition on face images, and is particularly suitable for identity recognition on occluded face images where accessories such as masks, glasses or hats cover part of the face. The embodiment may be performed by a face image recognition apparatus configured in an electronic device, implemented in software and/or hardware. As shown in Fig. 1, the method includes:
s101, carrying out space transformation on the face shielding image based on the space network characteristics of the face shielding image obtained in advance, and obtaining a front face shielding image.
The occluded face image may be a face image captured while part of the face is covered by a worn accessory such as a mask, glasses or a hat. Due to the shooting angle or the position of the face, the captured occluded face image may not show a standard frontal face; it may, for example, show a side face or a lowered head. The spatial network features of an occluded face image are features that characterize the spatial position transformation relationship between the acquired occluded face image and its corresponding frontal occluded face image. Optionally, the type of spatial network feature depends on the type of spatial transformation applied to the occluded face image. For example, when a two-dimensional spatial transformation is performed on the acquired occluded face image, the spatial network features may be a 6-dimensional representation (translation, rotation and scaling along the X and Y directions). Note that spatial network features are not features that characterize face keypoints in the occluded face image.
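As an illustration of how six dimensions can encode these degrees of freedom (a worked example, not a parameterization given in the patent), the values can be packed into a 2x3 affine matrix; for rotation angle $\alpha$, scale factors $(s_x, s_y)$ and translation $(t_x, t_y)$, one possible parameterization is

$$A_\theta = \begin{bmatrix} s_x\cos\alpha & -s_y\sin\alpha & t_x \\ s_x\sin\alpha & s_y\cos\alpha & t_y \end{bmatrix}.$$

A general affine matrix has six free entries, so translation, rotation and scaling along the X and Y directions are all special cases of it.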
Optionally, in the embodiments of the present application, the pre-acquired occluded face image may be an image uploaded by a user to an electronic device of the face recognition system, or an image captured by that device through a configured image capture component (such as a camera). For the acquired occluded face image, its spatial network features may, for example, be computed by a preset algorithm relating the occluded face image to its corresponding frontal occluded face image, or they may be determined by a pre-trained spatial transformation network; the embodiments of the present application are not limited in this respect. Once the spatial network features of the acquired occluded face image are determined, translation, rotation, scaling and similar transformation operations can be applied to the occluded face image based on the spatial position transformation relationships encoded in those dimensions, yielding the frontal occluded face image corresponding to the occluded face image.
Optionally, when the image to be recognized, pre-acquired by the electronic device of the face recognition system, is an occluded face image containing only a face region, its spatial network features can be determined directly and the image spatially transformed into a frontal occluded face image based on those features. When the image to be recognized is a whole-body or half-body person image in which part of the face is occluded, the face position may first be detected and the face region extracted from the person image to serve as the occluded face image, in order to improve the accuracy of face recognition; the spatial network features of the extracted occluded face image are then determined and the spatial transformation applied to obtain the frontal occluded face image. There are many ways to detect and extract the face region from a whole-body or half-body person image, for example via a pre-trained face detection network or a face contour detection algorithm; this embodiment is not limited in this respect.
S102, input the frontal occluded face image into a face recognition network to obtain an identity recognition result.
The face recognition network may be a neural network that performs user identity recognition on an input face image. It may be composed of a convolution network for extracting image features, a feature extraction network for extracting face keypoints, an activation network for predicting the user identity, and so on.
Optionally, the frontal occluded face image obtained in S101 is input into the face recognition network, which analyzes it using what it learned during training: first the convolution network extracts a feature image of the frontal occluded face image, then the feature extraction network extracts face keypoint features from the feature image, and finally the activation network analyzes those keypoint features and predicts the user identity corresponding to the input frontal occluded face image.
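A minimal PyTorch sketch of such a three-part recognizer is shown below; the class name and all layer shapes are illustrative assumptions, not taken from the patent:

```python
import torch.nn as nn

class FaceRecognitionNetwork(nn.Module):
    """Sketch of the three-part recognizer described above; all
    layer shapes are illustrative assumptions."""

    def __init__(self, num_identities: int):
        super().__init__()
        # Convolution network: feature image of the frontal occluded face.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((14, 14)),
        )
        # Feature-extraction network: compact face descriptor
        # (stands in for the keypoint-feature stage of the patent).
        self.feature = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, 128), nn.ReLU(),
        )
        # Activation network: per-identity logits for the prediction.
        self.activation = nn.Linear(128, num_identities)

    def forward(self, x):
        return self.activation(self.feature(self.conv(x)))
```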
In the technical scheme above, the occluded face image is converted into a frontal occluded face image based on its spatial network features, and user identity recognition is then performed on the frontal image by a face recognition network. Compared with determining the frontal image from face keypoint features as in the prior art, determining it from spatial network features greatly reduces the error of the converted frontal occluded face image when the face region is occluded and the keypoints are hard to identify accurately, improving the accuracy of face recognition.
Fig. 2A is a flowchart of another face image recognition method according to an embodiment of the present application; Fig. 2B is a schematic diagram of the network structure of a spatial transformation network according to an embodiment of the present application. This embodiment further optimizes the previous one and details how the occluded face image is spatially transformed into a frontal occluded face image based on its pre-acquired spatial network features. As shown in Figs. 2A-2B, the method includes:
s201, inputting the pre-acquired face shielding image into a convolution network in a space transformation network to obtain a characteristic image of the face shielding image.
The spatial transformation network of this embodiment may be a neural network that determines the spatial network features of the acquired occluded face image and applies the corresponding spatial transformation to obtain the frontal occluded face image. As shown in Fig. 2B, the spatial transformation network 2 includes: a convolution network 21, a positioning network 22, a transformation network 23 and an interpolation network 24.
Optionally, the acquired occluded face image is input into the convolution network 21 of the spatial transformation network 2, which convolves the input image and extracts image features, yielding a feature image of the input occluded face image.
S202, input the feature image into the positioning network of the spatial transformation network to obtain the spatial network features of the occluded face image.
Optionally, after the convolution network 21 of the spatial transformation network 2 extracts the feature image of the occluded face image, the feature image is input into the positioning network 22. The positioning network 22 may be a network that regresses transformation parameters: it analyzes the input feature image and outputs the spatial network features of the occluded face image. For example, for a two-dimensional transformation, the output spatial network features may be a spatial transformation matrix of 6 dimensions (2x3) encoding translation, rotation and scaling along the X and Y directions.
S203, input the spatial network features and the feature image into the transformation network of the spatial transformation network to obtain pixel conversion data of the feature image.
Optionally, after the spatial network features of the occluded face image are obtained, they are input together with the feature image obtained in S201 into the transformation network 23 of the spatial transformation network 2. The transformation network 23 spatially transforms the feature image according to the input spatial network features: specifically, it transforms the original position coordinates of each pixel in the feature image into post-transformation position coordinates, which serve as the pixel conversion data of the feature image. For example, the pixel conversion data may be computed from the spatial network features according to the following formula (1).
$$\begin{pmatrix} x_i^{t} \\ y_i^{t} \end{pmatrix} = A_\theta \begin{pmatrix} x_i^{s} \\ y_i^{s} \\ 1 \end{pmatrix} \qquad (1)$$

where $(x_i^{t}, y_i^{t})$ are the transformed position coordinates of the i-th pixel in the feature image, i.e. the conversion data of the i-th pixel; $A_\theta$ is the 2x3 matrix formed from the 6-dimensional spatial network features; and $(x_i^{s}, y_i^{s})$ are the original position coordinates of the i-th pixel in the feature image.
S204, input the pixel conversion data and the feature image into the interpolation network of the spatial transformation network to obtain the frontal occluded face image.
Optionally, because the spatial network features obtained in S202 need not be integers, the pixel conversion data obtained in S203 may not be integers either, whereas the position coordinates of pixels in an image must be positive integers. In the embodiments of the present application, the pixel conversion data obtained in S203 therefore need further processing that adjusts each transformed coordinate onto the integer pixel grid to obtain the frontal occluded face image.
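A common concrete choice for this step (an assumption; the patent does not fix the interpolation kernel) is bilinear interpolation. Writing $(x_i, y_i)$ for the fractional coordinates associated with pixel $i$ and $U_{nm}$ for the feature-image value at integer location $(n, m)$, the interpolated value $V_i$ is

$$V_i = \sum_{n}\sum_{m} U_{nm}\,\max(0,\,1 - |x_i - m|)\,\max(0,\,1 - |y_i - n|),$$

so each output pixel is a weighted average of the four integer-grid neighbors of its fractional coordinate.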
Specifically, in the embodiments of the present application, the pixel conversion data obtained in S203 and the feature image obtained in S201 are input into the interpolation network 24 of the spatial transformation network 2. The interpolation network 24 interpolates each transformed coordinate, using the transformed coordinates in the pixel conversion data together with the pixels' original coordinates in the feature image, so that the transformed coordinates fall on the positive-integer grid, and it generates the interpolated image, that is, the frontal occluded face image, from the interpolated position coordinates and pixel values.
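The four stages above mirror a standard spatial transformer. A minimal PyTorch sketch is given below, assuming an affine (6-parameter) transform; `F.affine_grid` plays the role of the transformation network and `F.grid_sample` the role of the bilinear interpolation network. All layer sizes and names are illustrative, and for simplicity the sampling is applied to the input image rather than to the intermediate feature image:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformNetwork(nn.Module):
    """Sketch of the pipeline: convolution (S201), positioning (S202),
    transformation (S203) and interpolation (S204)."""

    def __init__(self):
        super().__init__()
        # Convolution network 21: extracts the feature image.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((28, 28)),
        )
        # Positioning network 22: regresses the 2x3 affine parameters,
        # i.e. the 6-dimensional spatial network features.
        self.loc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 28 * 28, 64), nn.ReLU(),
            nn.Linear(64, 6),
        )
        # Start at the identity transform so early training is stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        feat = self.conv(x)                    # S201: feature image
        theta = self.loc(feat).view(-1, 2, 3)  # S202: spatial network features
        # S203: transformation network -- per-pixel coordinate conversion
        # via the affine matrix, as in formula (1).
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        # S204: interpolation network -- bilinear sampling turns the
        # fractional coordinates into valid pixel values.
        return F.grid_sample(x, grid, align_corners=False)
```

Given an N x 3 x H x W batch `imgs`, `SpatialTransformNetwork()(imgs)` would return the spatially transformed batch; in this embodiment's pipeline that output is the frontal occluded face image passed on to S205.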
S205, input the frontal occluded face image into the face recognition network to obtain an identity recognition result.
In the technical scheme above, the convolution network in the spatial transformation network extracts a feature image from the acquired occluded face image; the feature image is passed to the positioning network to determine the spatial network features of the occluded face image; the spatial network features and the feature image are passed to the transformation network, which spatially transforms the feature image; the resulting pixel conversion data are input to the interpolation network for interpolation; and the resulting frontal occluded face image is passed to the face recognition network for identity recognition. Because the convolution, positioning, transformation and interpolation networks in the spatial transformation network cooperate to determine the frontal occluded face image from spatial network features, the accuracy of the converted frontal image, and hence of face recognition, is greatly improved even when the face region is occluded and face keypoints are hard to identify accurately.
Fig. 3 is a flowchart of another face image recognition method according to an embodiment of the present application. This embodiment further optimizes the previous ones and gives a preferred example of the face image recognition method. As shown in Fig. 3, the method includes:
S301, align the occluded face image based on face keypoint features of the pre-acquired occluded face image.
Optionally, after the face recognition system acquires the occluded face image, it may first align the image once based on face keypoint features. Note that this alignment operation is itself a process of converting the occluded face image toward a frontal view. Specifically, face keypoints may be extracted from the acquired occluded face image using a feature extraction algorithm or a pre-trained face alignment network; for example, 78 keypoint features may be extracted from regions such as the eyes, nose, mouth and eyebrows. An affine transformation is then applied to the acquired occluded face image based on the extracted face keypoint features, completing the alignment and yielding the aligned occluded face image.
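As an illustration of this keypoint-based alignment step (a sketch under the assumption that matched keypoints are already available; the function name and the least-squares formulation are not taken from the patent), the affine matrix can be estimated as follows:

```python
import numpy as np

def align_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Estimate the 2x3 affine matrix mapping detected keypoints
    src_pts (n x 2) onto canonical frontal template keypoints
    dst_pts (n x 2) by least squares."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])        # homogeneous coords, n x 3
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)  # solve A @ M ~= dst, M is 3 x 2
    return M.T                                        # 2 x 3 affine matrix
```

The resulting matrix can then be used to warp the image with any standard affine-warp routine before the spatial transformation of S302 refines the result.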
Note that the face image to be recognized in this embodiment has part of the face region covered by accessories such as a mask, glasses or a hat. Because of the occlusion, this step cannot accurately identify every face keypoint in the occluded face image, so the image aligned from the keypoint features still carries a relatively large error and is not yet an accurate frontal occluded face image. Operation S302 below therefore further spatially transforms the aligned occluded face image to obtain a standard frontal occluded face image.
S302, spatially transform the aligned occluded face image based on its spatial network features to obtain the frontal occluded face image.
Optionally, the occluded face image aligned in S301 is spatially transformed, according to the spatial network features of the keypoint-aligned image, into the final frontal occluded face image. Specifically, the aligned occluded face image may be input into the convolution network of a spatial transformation network to obtain its feature image; the feature image into the positioning network to obtain the spatial network features of the aligned image; the spatial network features and the feature image into the transformation network to obtain pixel conversion data of the feature image; and the pixel conversion data and the feature image into the interpolation network to obtain the frontal occluded face image. The detailed spatial transformation process has been described in the embodiments above and is not repeated here.
S303, input the frontal occluded face image into the face recognition network to obtain an identity recognition result.
In the technical scheme above, the occluded face image is first aligned based on its face keypoint features, and the aligned image is then spatially transformed based on its spatial network features to obtain the final frontal occluded face image, on which the face recognition network performs user identity recognition. By applying two successive spatial transformations to the acquired occluded face image, one from keypoint features and one from spatial network features, the accuracy of the final frontal occluded face image is further improved, as are the robustness and stability of the face recognition system.
Fig. 4A is a flowchart of another face image recognition method according to an embodiment of the present application; Fig. 4B is a model architecture diagram for joint training of a spatial transformation network and a face recognition network according to an embodiment of the present application. This embodiment further optimizes the previous ones and details how the spatial transformation network and the face recognition network are trained in the model training stage.
Optionally, in the model training stage, this embodiment jointly trains the spatial transformation network and the face recognition network. Specifically, as shown in Fig. 4B, the output of the spatial transformation network 2 is connected to the input of the face recognition network 4, fusing the two networks into a unified model framework for joint training. This improves the coupling between the spatial transformation network and the face recognition network and, in turn, the accuracy of user identity recognition on occluded face images. The joint training procedure is described next. As shown in Figs. 4A-4B, the method includes:
S401, input a sample occluded face image into the spatial transformation network to obtain a sample frontal occluded face image.
Optionally, the sample occluded face images may consist of a large number of face images in which part of the face region is covered by a worn accessory. Such images may be selected directly as samples, or obtained by adding occlusions to normal unoccluded face images. A sample occluded face image is input into the convolution network 21 of the spatial transformation network 2 shown in Fig. 4B, which extracts a feature image from it; the positioning network 22 analyzes the feature image to obtain the spatial network features; the transformation network 23 spatially transforms the position coordinates of each pixel in the feature image based on those features to obtain pixel conversion data; and the interpolation network 24, given the feature image and the pixel conversion data, interpolates the pixel conversion data to obtain the converted sample frontal occluded face image.
S402, input the sample frontal occluded face image into the face recognition network to obtain a sample identity recognition result.
Optionally, as shown in Fig. 4B, since the spatial transformation network 2 and the face recognition network 4 are fused in a unified model framework, the sample frontal occluded face image output by the interpolation network 24 of the spatial transformation network 2 can be input into the convolution network 41 of the face recognition network 4, which extracts a feature image of the sample frontal occluded face image; the feature extraction network 42 further extracts face keypoint features from that feature image; and the activation network 43 predicts the user identity of the sample frontal occluded face image based on the extracted keypoint features and outputs the sample identity recognition result.
S403, construct a joint loss function from the sample frontal occluded face image, the sample identity recognition result, and the face keypoints and true identity annotated in the sample occluded face image.
Optionally, since this embodiment jointly trains the spatial transformation network 2 and the face recognition network 4, the joint loss function used as the supervision signal must include both a spatial transformation loss function supervising the training of the spatial transformation network 2 and a recognition loss function supervising the training of the face recognition network 4.
Optionally, constructing the joint loss function may include: determining a spatial transformation loss function from the face keypoints annotated in the sample occluded face image and the sample frontal occluded face image; determining a recognition loss function from the true identity annotated in the sample occluded face image and the sample identity recognition result; and constructing the joint loss function from the spatial transformation loss function and the recognition loss function. Specifically, accurate face keypoints and the true identity are annotated in advance in each sample occluded face image. When constructing the joint loss function, the sample occluded face image can first be accurately aligned from its annotated keypoints, converting it into a standard frontal occluded face image; this standard frontal image is then matched against the sample frontal occluded face image output by the spatial transformation network 2 to compute the spatial transformation loss function. The true identity annotated in the sample image is matched against the sample identity recognition result predicted by the face recognition network 4 to compute the recognition loss function. The computed spatial transformation loss function and recognition loss function together form the joint loss function for this round of training.
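A minimal sketch of such a joint loss in PyTorch is shown below, assuming a mean-squared-error image match for the spatial transformation loss, cross-entropy for the recognition loss, and a weighting factor `lam`; none of these concrete choices are specified in the patent:

```python
import torch
import torch.nn.functional as F

def joint_loss(pred_frontal: torch.Tensor, std_frontal: torch.Tensor,
               id_logits: torch.Tensor, true_id: torch.Tensor,
               lam: float = 1.0) -> torch.Tensor:
    """Spatial transformation loss (match the network output to the
    standard frontal image built from annotated keypoints) plus the
    recognition loss (cross-entropy against the annotated identity)."""
    l_spatial = F.mse_loss(pred_frontal, std_frontal)
    l_recog = F.cross_entropy(id_logits, true_id)
    return l_spatial + lam * l_recog
```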
S404, perform supervised training of the spatial transformation network and the face recognition network based on the joint loss function.
Optionally, the spatial transformation loss function and the recognition loss function serve as supervision signals for training the spatial transformation network 2 and the face recognition network 4, continuously updating the network parameters of both. Specifically, supervising the spatial transformation network 2 with the spatial transformation loss function trains it to convert an input non-standard occluded face image into a standard frontal occluded face image; supervising the face recognition network 4 with the recognition loss function trains it to extract keypoint features more accurately from the frontal occluded face image and thus to recognize the user identity accurately. As sketched below, both networks can be updated in one end-to-end step.
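Putting the pieces together, one joint training step might look as follows, building on the hypothetical `SpatialTransformNetwork`, `FaceRecognitionNetwork` and `joint_loss` sketches above; the optimizer choice, learning rate and label count are assumptions:

```python
import itertools
import torch

stn = SpatialTransformNetwork()
recognizer = FaceRecognitionNetwork(num_identities=1000)  # assumed label count

# One optimizer over both parameter sets realizes the joint training.
optimizer = torch.optim.Adam(
    itertools.chain(stn.parameters(), recognizer.parameters()), lr=1e-4)

def train_step(occluded, std_frontal, true_id):
    pred_frontal = stn(occluded)              # S401
    id_logits = recognizer(pred_frontal)      # S402
    loss = joint_loss(pred_frontal, std_frontal, id_logits, true_id)  # S403
    optimizer.zero_grad()
    loss.backward()                           # S404: gradients flow through
    optimizer.step()                          #       both networks
    return loss.item()
```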
Optionally, after the joint supervised training of the spatial transformation network 2 and the face recognition network 4, at least one set of test data is used to test the accuracy of the trained networks; if both meet the preset accuracy requirement, the supervised training of the spatial transformation network 2 and the face recognition network 4 is complete.
S405, input the pre-acquired occluded face image into the trained spatial transformation network, which spatially transforms the image according to its spatial network features to obtain the frontal occluded face image.
S406, input the frontal occluded face image into the trained face recognition network to obtain an identity recognition result.
In the technical scheme above, the spatial transformation network and the face recognition network are fused in a unified model framework: a sample occluded face image is input into the spatial transformation network, the resulting sample frontal occluded face image is input into the face recognition network to obtain a sample identity recognition result, a joint loss function is constructed from the sample frontal image, the sample recognition result, and the face keypoints and true identity pre-annotated in the sample image, and both networks are jointly trained against that loss. After joint training, the spatial transformation network converts an occluded face image into a frontal occluded face image based on its spatial network features, and the face recognition network performs user identity recognition on the frontal image. Fusing the two networks in one framework and training them jointly against the joint loss function improves the coupling between them and further improves the accuracy of user identity recognition on occluded face images.
Fig. 5 is a schematic structural diagram of a face image recognition apparatus according to an embodiment of the present application. The embodiment is applicable to performing user identity recognition on face images, and is particularly suitable for identity recognition on occluded face images where accessories such as masks, glasses or hats cover part of the face. The apparatus can implement the face image recognition method of any embodiment of the present application and may be integrated into an electronic device. The apparatus 500 specifically includes:
The spatial transformation module 501 is configured to spatially transform an occluded face image based on spatial network features of the pre-acquired occluded face image to obtain a frontal occluded face image;
The identity recognition module 502 is configured to input the frontal occluded face image into a face recognition network to obtain an identity recognition result.
In the technical scheme above, the occluded face image is converted into a frontal occluded face image based on its spatial network features, and user identity recognition is then performed on the frontal image by a face recognition network. Compared with determining the frontal image from face keypoint features as in the prior art, determining it from spatial network features greatly reduces the error of the converted frontal occluded face image when the face region is occluded and the keypoints are hard to identify accurately, improving the accuracy of face recognition.
Further, the spatial transformation module 501 includes:
a feature image determining unit, configured to input the pre-acquired occluded face image into the convolution network of a spatial transformation network to obtain a feature image of the occluded face image;
a network feature determining unit, configured to input the feature image into the positioning network of the spatial transformation network to obtain the spatial network features of the occluded face image;
a data conversion unit, configured to input the spatial network features and the feature image into the transformation network of the spatial transformation network to obtain pixel conversion data of the feature image;
and a data interpolation unit, configured to input the pixel conversion data and the feature image into the interpolation network of the spatial transformation network to obtain the frontal occluded face image.
Further, the apparatus further includes:
an image alignment module, configured to align the occluded face image based on face keypoint features of the pre-acquired occluded face image.
Further, the frontal occluded face image is obtained based on a spatial transformation network, and the apparatus further includes:
a model training module, configured to jointly train the spatial transformation network and the face recognition network in the model training stage.
Further, the model training module includes:
a first data input unit, configured to input a sample occluded face image into the spatial transformation network to obtain a sample frontal occluded face image;
a second data input unit, configured to input the sample frontal occluded face image into the face recognition network to obtain a sample identity recognition result;
a loss function construction unit, configured to construct a joint loss function from the sample frontal occluded face image, the sample identity recognition result, and the face keypoints and true identity annotated in the sample occluded face image;
and a supervised training unit, configured to perform supervised training of the spatial transformation network and the face recognition network based on the joint loss function.
Further, the loss function construction unit is specifically configured to:
determine a spatial transformation loss function from the face keypoints annotated in the sample occluded face image and the sample frontal occluded face image;
determine a recognition loss function from the true identity annotated in the sample occluded face image and the sample identity recognition result;
and construct the joint loss function from the spatial transformation loss function and the recognition loss function.
According to an embodiment of the present application, the present application also provides an electronic device and a readable storage medium.
Fig. 6 shows a block diagram of an electronic device for the face image recognition method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the application described and/or claimed herein.
As shown in Fig. 6, the electronic device includes: one or more processors 601, a memory 602, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common motherboard or in other ways as needed. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, if desired, together with multiple memories. Likewise, multiple electronic devices may be connected, each providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multiprocessor system). One processor 601 is taken as an example in Fig. 6.
The memory 602 is a non-transitory computer readable storage medium provided by the present application. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the face image recognition method provided by the application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the face image recognition method provided by the present application.
The memory 602, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the face image recognition method of the embodiments of the present application (e.g., the spatial transformation module 501 and the identity recognition module 502 shown in Fig. 5). The processor 601 executes the various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 602, that is, implements the face image recognition method of the above method embodiments.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for a function; the storage data area may store data created according to the use of the electronic device of the face image recognition method, and the like. In addition, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 602 may optionally include memory remotely located with respect to the processor 601, which may be connected to the electronic device of the face image recognition method via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the face image recognition method may further include: an input device 603 and an output device 604. The processor 601, memory 602, input device 603 and output device 604 may be connected by a bus or otherwise, for example in fig. 6.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the facial image recognition method, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, etc. input devices. The output means 604 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiments of the present application, the occluded face image is converted into a frontal occluded face image based on its spatial network features, and user identity recognition is then performed on the frontal image by a face recognition network. Compared with determining the frontal image from face keypoint features as in the prior art, this greatly reduces the error of the converted frontal occluded face image when the face region is occluded and the keypoints are hard to identify accurately, improving the accuracy of face recognition.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed embodiments are achieved, and are not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (10)

1. A face image recognition method, comprising:
aligning a face occlusion image based on face key point features of the face occlusion image acquired in advance;
inputting the aligned face occlusion image into a convolution network in a spatial transformation network to obtain a feature image of the face occlusion image;
inputting the feature image into a localization network in the spatial transformation network to obtain spatial network features of the face occlusion image, wherein the spatial network features are features characterizing the spatial position transformation relationship between the aligned face occlusion image and the corresponding frontal face occlusion image;
inputting the spatial network features and the feature image into a transformation network in the spatial transformation network to obtain pixel point conversion data of the feature image;
inputting the pixel point conversion data and the feature image into an interpolation network in the spatial transformation network to obtain a frontal face occlusion image; and
inputting the frontal face occlusion image into a face recognition network to obtain an identity recognition result.
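For the first step of this method (key-point-based alignment ahead of the spatial transformation network), one common realization is a least-squares similarity transform onto a canonical landmark template. The sketch below uses OpenCV; the template coordinates and the 112x112 crop size follow a widely used ArcFace-style convention and are assumptions, not values specified by the claims.

```python
import cv2
import numpy as np

# Canonical 5-point target positions for a 112x112 face crop (a common
# convention in face recognition pipelines; assumed here).
TEMPLATE = np.float32([
    [38.2946, 51.6963],  # left eye
    [73.5318, 51.5014],  # right eye
    [56.0252, 71.7366],  # nose tip
    [41.5493, 92.3655],  # left mouth corner
    [70.7299, 92.2041],  # right mouth corner
])

def align_face(image, keypoints):
    """Align a face occlusion image from its (5, 2) detected key points
    by a least-squares similarity transform onto the canonical template."""
    matrix, _ = cv2.estimateAffinePartial2D(np.float32(keypoints), TEMPLATE)
    return cv2.warpAffine(image, matrix, (112, 112))
```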
2. The method of claim 1, wherein the frontal face occlusion image is derived based on a spatial transformation network, the method further comprising:
performing, in a model training stage, joint training on the spatial transformation network and the face recognition network.
3. The method of claim 2, wherein jointly training the spatial transformation network and the face recognition network comprises:
inputting a sample face occlusion image into the spatial transformation network to obtain a sample frontal face occlusion image;
inputting the sample frontal face occlusion image into the face recognition network to obtain a sample identity recognition result;
constructing a joint loss function according to the sample frontal face occlusion image, the sample identity recognition result, and the face key points and true identity labeled in the sample face occlusion image; and
performing supervised training on the spatial transformation network and the face recognition network based on the joint loss function.
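Sketched below is one joint-training step consistent with claims 2-3. It assumes the SpatialTransformer sketched earlier, an arbitrary face recognition network recognizer, and the joint_loss function sketched after claim 4; routing both networks' parameters through a single optimizer is what makes the training joint.

```python
import torch

def train_step(stn, recognizer, optimizer, batch):
    """One supervised joint-training step; `batch` carries sample face
    occlusion images with labeled key points and true identities."""
    samples, keypoints, identities = batch
    frontal, theta = stn(samples)         # sample frontal face occlusion image
    logits = recognizer(frontal)          # sample identity recognition result
    loss = joint_loss(theta, keypoints, logits, identities)
    optimizer.zero_grad()
    loss.backward()                       # gradients reach both networks,
    optimizer.step()                      # so they are trained jointly
    return loss.item()

# one optimizer spanning both networks (the joint part of the training):
#   optimizer = torch.optim.Adam(
#       list(stn.parameters()) + list(recognizer.parameters()), lr=1e-4)
```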
4. The method according to claim 3, wherein constructing the joint loss function according to the sample frontal face occlusion image, the sample identity recognition result, and the face key points and true identity labeled in the sample face occlusion image comprises:
determining a spatial transformation loss function according to the face key points labeled in the sample face occlusion image and the sample frontal face occlusion image;
determining an identity recognition loss function according to the true identity labeled in the sample face occlusion image and the sample identity recognition result; and
constructing the joint loss function according to the spatial transformation loss function and the identity recognition loss function.
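One plausible instantiation of the loss construction in claims 3-4 follows. It assumes the spatial transformation loss penalizes the predicted transform for mapping a canonical frontal key-point template away from the labeled key points (coordinates normalized to [-1, 1], matching grid_sample conventions), and that the identity recognition loss is a cross-entropy over identities; the concrete loss forms and the weighting alpha are not fixed by the claims.

```python
import torch
import torch.nn.functional as F

# Canonical frontal key-point template in normalized [-1, 1] coordinates
# (eyes, nose tip, mouth corners); illustrative values only.
FRONTAL_TEMPLATE = torch.tensor([[-0.4, -0.3], [0.4, -0.3], [0.0, 0.1],
                                 [-0.3, 0.5], [0.3, 0.5]])

def joint_loss(theta, keypoints, logits, identities, alpha=1.0):
    """theta: (B, 2, 3) predicted affine; keypoints: (B, 5, 2) labeled key
    points in normalized coordinates; logits: (B, num_identities)."""
    batch = theta.size(0)
    template = FRONTAL_TEMPLATE.to(theta).expand(batch, -1, -1)
    # Under grid_sample conventions, theta maps frontal (output) coordinates
    # to input coordinates, so the template pushed through theta should land
    # on the labeled key points of the occlusion image.
    mapped = (template @ theta[:, :, :2].transpose(1, 2)
              + theta[:, :, 2:].transpose(1, 2))
    l_spatial = F.smooth_l1_loss(mapped, keypoints)   # spatial transformation loss
    l_identity = F.cross_entropy(logits, identities)  # identity recognition loss
    return l_identity + alpha * l_spatial             # weighted joint loss (alpha assumed)
```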
5. A face image recognition apparatus comprising:
an image alignment module configured to align a face occlusion image based on face key point features of the face occlusion image acquired in advance;
a spatial transformation module, comprising: a feature image determination unit configured to input the aligned face occlusion image into a convolution network in a spatial transformation network to obtain a feature image of the face occlusion image; a network feature determination unit configured to input the feature image into a localization network in the spatial transformation network to obtain spatial network features of the face occlusion image, wherein the spatial network features are features characterizing the spatial position transformation relationship between the aligned face occlusion image and the corresponding frontal face occlusion image; a data transformation unit configured to input the spatial network features and the feature image into a transformation network in the spatial transformation network to obtain pixel point conversion data of the feature image; and a data interpolation unit configured to input the pixel point conversion data and the feature image into an interpolation network in the spatial transformation network to obtain a frontal face occlusion image; and
an identity recognition module configured to input the frontal face occlusion image into a face recognition network to obtain an identity recognition result.
6. The apparatus of claim 5, wherein the frontal face occlusion image is derived based on a spatial transformation network, the apparatus further comprising:
a model training module configured to perform, in a model training stage, joint training on the spatial transformation network and the face recognition network.
7. The apparatus of claim 6, wherein the model training module comprises:
a first data input module configured to input a sample face occlusion image into the spatial transformation network to obtain a sample frontal face occlusion image;
a second data input module configured to input the sample frontal face occlusion image into the face recognition network to obtain a sample identity recognition result;
a loss function construction unit configured to construct a joint loss function according to the sample frontal face occlusion image, the sample identity recognition result, and the face key points and true identity labeled in the sample face occlusion image; and
a supervised training unit configured to perform supervised training on the spatial transformation network and the face recognition network based on the joint loss function.
8. The apparatus of claim 7, wherein the loss function construction unit is specifically configured to:
determine a spatial transformation loss function according to the face key points labeled in the sample face occlusion image and the sample frontal face occlusion image;
determine an identity recognition loss function according to the true identity labeled in the sample face occlusion image and the sample identity recognition result; and
construct the joint loss function according to the spatial transformation loss function and the identity recognition loss function.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the face image recognition method of any one of claims 1-4.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the face image recognition method of any one of claims 1-4.
CN202010592663.2A 2020-06-24 2020-06-24 Face image recognition method, device, equipment and storage medium Active CN111783605B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010592663.2A CN111783605B (en) 2020-06-24 2020-06-24 Face image recognition method, device, equipment and storage medium
JP2022577076A JP2023529225A (en) 2020-06-24 2020-10-26 Face image recognition method, device, equipment and storage medium
PCT/CN2020/123588 WO2021258588A1 (en) 2020-06-24 2020-10-26 Face image recognition method, apparatus and device and storage medium
KR1020227036111A KR20220154227A (en) 2020-06-24 2020-10-26 Face image identification method, device, facility and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010592663.2A CN111783605B (en) 2020-06-24 2020-06-24 Face image recognition method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111783605A CN111783605A (en) 2020-10-16
CN111783605B true CN111783605B (en) 2024-05-24

Family

ID=72759827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010592663.2A Active CN111783605B (en) 2020-06-24 2020-06-24 Face image recognition method, device, equipment and storage medium

Country Status (4)

Country Link
JP (1) JP2023529225A (en)
KR (1) KR20220154227A (en)
CN (1) CN111783605B (en)
WO (1) WO2021258588A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783605B (en) * 2020-06-24 2024-05-24 北京百度网讯科技有限公司 Face image recognition method, device, equipment and storage medium
CN112364827B (en) * 2020-11-30 2023-11-10 腾讯科技(深圳)有限公司 Face recognition method, device, computer equipment and storage medium
CN112507833A (en) * 2020-11-30 2021-03-16 北京百度网讯科技有限公司 Face recognition and model training method, device, equipment and storage medium
CN112418190B (en) * 2021-01-21 2021-04-02 成都点泽智能科技有限公司 Mobile terminal medical protective shielding face recognition method, device, system and server
CN113963426B (en) * 2021-12-22 2022-08-26 合肥的卢深视科技有限公司 Model training method, mask wearing face recognition method, electronic device and storage medium
WO2023158408A1 (en) * 2022-02-16 2023-08-24 Bahcesehir Universitesi Face recognition method
CN114549369B (en) * 2022-04-24 2022-07-12 腾讯科技(深圳)有限公司 Data restoration method and device, computer and readable storage medium
CN116453201B (en) * 2023-06-19 2023-09-01 南昌大学 Face recognition method and system based on adjacent edge loss
CN117197865A (en) * 2023-08-21 2023-12-08 上饶高投智城科技有限公司 Elevator entrance guard security method and system based on image recognition model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104992148A (en) * 2015-06-18 2015-10-21 江南大学 ATM terminal human face key points partially shielding detection method based on random forest
CN111783605B (en) * 2020-06-24 2024-05-24 北京百度网讯科技有限公司 Face image recognition method, device, equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010157073A (en) * 2008-12-26 2010-07-15 Fujitsu Ltd Device, method and program for recognizing face
CN109960975A (en) * 2017-12-23 2019-07-02 四川大学 A kind of face generation and its face identification method based on human eye
CN109886121A (en) * 2019-01-23 2019-06-14 浙江大学 A kind of face key independent positioning method blocking robust
CN109948573A (en) * 2019-03-27 2019-06-28 厦门大学 A kind of noise robustness face identification method based on cascade deep convolutional neural networks
CN110232369A (en) * 2019-06-20 2019-09-13 深圳和而泰家居在线网络科技有限公司 A kind of face identification method and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Face Alignment Based on Deep Learning; Yan Xiang; China Master's Theses Full-text Database (Electronic Journal); abstract, Chapter 3, and Chapter 4 of the thesis *

Also Published As

Publication number Publication date
KR20220154227A (en) 2022-11-21
WO2021258588A1 (en) 2021-12-30
JP2023529225A (en) 2023-07-07
CN111783605A (en) 2020-10-16

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant