CA3174691A1 - Human face fuzziness detecting method, device, computer equipment and storage medium - Google Patents

Human face fuzziness detecting method, device, computer equipment and storage medium

Info

Publication number
CA3174691A1
Authority
CA
Canada
Prior art keywords
human face
fuzziness
plural
block
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CA3174691A
Other languages
French (fr)
Inventor
Benben ZHANG
Xin HANG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
10353744 Canada Ltd
Original Assignee
10353744 Canada Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 10353744 Canada Ltd
Publication of CA3174691A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the technical field of computer vision, and disclosed are a face blur detection method and apparatus, a computer device and a storage medium. The method comprises: respectively extracting block images in which a plurality of facial feature points are located from within a face image; performing prediction on each block image by means of a pre-trained blur detection model to obtain the degree of confidence of each block image corresponding to each of a plurality of level labels, wherein the plurality of level labels comprise a plurality of definition levels and a plurality of blurriness levels; according to the degree of confidence of each block image corresponding to each of the plurality of level labels, acquiring the definition and blurriness of each block image; and according to the definition and blurriness of all of the block images, calculating the blurriness of the face image. In the embodiments of the present invention, the accuracy of face blur detection may be effectively improved.

Description

HUMAN FACE FUZZINESS DETECTING METHOD, DEVICE, COMPUTER
EQUIPMENT AND STORAGE MEDIUM
BACKGROUND OF THE INVENTION
Technical Field
[0001] The present invention relates to the field of computer vision technology, and more particularly to a human face fuzziness detecting method, and corresponding device, computer equipment, and storage medium.
Description of Related Art
[0002] With the advent of the age of artificial intelligence, human face recognition technology has become increasingly important in such applications as payment by face swiping and passing through gates by face swiping, greatly facilitating people's daily life. However, the quality of the human face images input into the human face recognition model affects the recognition result, so it is particularly important to screen these human face images reasonably, for example to discard images whose fuzziness is unduly high.
[0003] Current human face fuzziness detection mainly includes full-reference methods and no-reference methods:
[0004] (1) The full-reference method requires an original human face image before quality degradation as a reference for comparison with the fuzzy image; its drawback is that such an original image is not easily obtainable;
[0005] (2) The no-reference method takes no image as a reference and judges fuzziness directly on the human face image, which gives it broader applicability.
[0006] The full-reference fuzziness detecting method first needs a reference image whose quality is not degraded, which is a restriction in many application scenarios; moreover, a human face captured by a camera is used directly for fuzziness judgment and cannot practically serve as a reference image, so the no-reference fuzziness detecting method is the one broadly employed.
[0007] In the usual no-reference fuzziness detecting method, an image containing a human face and background is input, the human face region is first detected in order to exclude interference from the background, and a gradient function such as the Brenner, Tenengrad, or Laplacian algorithm is then used to calculate the gradient value of the human face region; the greater the gradient value, the more definite the contour of the human face, i.e., the clearer the human face image, and conversely, the smaller the gradient value, the fuzzier the contour of the human face, i.e., the fuzzier the human face image. This method works on a small number of human face images but does not scale to large batches of them, since many definite images are judged as fuzzy ones, and the precision rate of detection is therefore not high.
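For illustration, a minimal sketch of this prior-art gradient approach, assuming OpenCV is available; the cut-off value in the usage comment is purely illustrative and not taken from the patent:

import cv2

def laplacian_sharpness(face_region_bgr):
    # Variance of the Laplacian over the cropped face region:
    # larger values indicate a more definite (sharper) contour.
    gray = cv2.cvtColor(face_region_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Illustrative usage with a hypothetical cut-off:
# face = cv2.imread("face_crop.jpg")
# is_fuzzy = laplacian_sharpness(face) < 100.0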
[0008] In addition, with the rapid development of deep learning, neural networks exhibit a strong capability to extract image features, and deep learning methods have been applied to the detection of human face fuzziness with some progress. Deep learning is usually employed to classify human face block images into only two categories, fuzzy and definite, but experiments show that some definite human face images are still judged as fuzzy, so the requirement for a high precision rate of detection cannot be met.

SUMMARY OF THE INVENTION
[0009] In order to solve at least one of the above problems in the prior art, the present invention provides a human face fuzziness detecting method, together with a corresponding device, computer equipment, and storage medium, which enable effective enhancement of the precision rate of human face fuzziness detection. The specific technical solutions provided by the embodiments of the present invention are as follows.
[0010] According to the first aspect, there is provided a human face fuzziness detecting method that comprises:
[0011] extracting, from a human face image, block images in which plural human face feature points respectively reside;
[0012] predicting each of the block images via a previously well-trained fuzziness detecting model, and obtaining a confidence degree of each of the block images corresponding to each grade label in plural grade labels, wherein the plural grade labels include plural definition grades and plural fuzziness grades;
[0013] obtaining definition and fuzziness of each of the block images according to the confidence degree of each of the block images corresponding to each grade label in plural grade labels; and
[0014] calculating fuzziness of the human face image according to definitions and fuzziness of all the block images.
[0015] Further, the step of extracting, from a human face image, block images in which plural human face feature points respectively reside includes:
[0016] detecting the human face image, and locating a human face region and plural human face feature points; and
[0017] adjusting a size of the human face region to a preset size, and extracting a block image in which each of the human face feature points resides from the adjusted human face region.

[0018] Further, the fuzziness detecting model is obtained through training by the following steps:
[0019] extracting, from plural human face image samples, a block image sample in which each of the human face feature points resides, wherein the plural image samples include definite human face image samples and fuzzy human face image samples;
[0020] marking a corresponding grade label on each of the block image samples, and classifying the plural block image samples marked with grade labels into a training set and a verifying set; and
[0021] iteratively training a preconstructed deep neural network according to the training set and the verifying set, and obtaining the fuzziness detecting model.
[0022] Further, the deep neural network includes a data input layer, a feature extraction layer, a first full connection layer, an activation function layer, a Dropout layer, a second full connection layer, and a loss function layer sequentially connected in cascades, wherein the feature extraction layer includes a convolution layer, a maximum pooling layer, a minimum pooling layer, and a concatenate layer, the data input layer, the maximum pooling layer, and the minimum pooling layer are respectively connected with the convolution layer, and the maximum pooling layer, the minimum pooling layer, and the first full connection layer are respectively connected with the concatenate layer.
[0023] Moreover, the method further comprises:
[0024] employing different testing sets to calculate an optimum threshold for the fuzziness detecting model according to an ROC curve.
[0025] Moreover, after the step of calculating fuzziness of the human face image according to definitions and fuzziness of all the block images, the method further comprises:
[0026] judging whether the fuzziness of the human face image obtained by calculation is higher than the optimum threshold;
[0027] if yes, deciding the human face image to be a fuzzy image, if not, deciding the human face image to be a definite image.
[0028] According to the second aspect, there is provided a human face fuzziness detecting device that comprises:
[0029] an extracting module, for extracting, from a human face image, block images in which plural human face feature points respectively reside;
[0030] a predicting module, for predicting each of the block images via a previously well-trained fuzziness detecting model, and obtaining a confidence degree of each of the block images corresponding to each grade label in plural grade labels, wherein the plural grade labels include plural definition grades and plural fuzziness grades;
[0031] an obtaining module, for calculating definition and fuzziness of each of the block images according to the confidence degree of each of the block images corresponding to each grade label in plural grade labels; and
[0032] a calculating module, for calculating fuzziness of the human face image according to definitions and fuzziness of all the block images.
[0033] Further, the extracting module is specifically employed for:
[0034] detecting the human face image, and locating a human face region and plural human face feature points; and
[0035] adjusting a size of the human face region to a preset size, and extracting a block image in which each of the human face feature points resides from the adjusted human face region.
[0036] Moreover, the device further comprises a training module that is specifically employed for:
[0037] extracting, from plural human face image samples, a block image sample in which each of the human face feature points resides, wherein the plural image samples include definite human face image samples and fuzzy human face image samples;
[0038] marking a corresponding grade label on each of the block image samples, and classifying the plural block image samples marked with grade labels into a training set and a verifying set; and
[0039] iteratively training a preconstructed deep neural network according to the training set and the verifying set, and obtaining the fuzziness detecting model.
[0040] Further, the deep neural network includes a data input layer, a feature extraction layer, a first full connection layer, an activation function layer, a Dropout layer, a second full connection layer, and a loss function layer sequentially connected in cascades, wherein the feature extraction layer includes a convolution layer, a maximum pooling layer, a minimum pooling layer, and a concatenate layer, the data input layer, the maximum pooling layer, and the minimum pooling layer are respectively connected with the convolution layer, and the maximum pooling layer, the minimum pooling layer, and the first full connection layer are respectively connected with the concatenate layer.
[0041] Moreover, the training module is specifically further employed for:
[0042] employing different testing sets to calculate an optimum threshold for the fuzziness detecting model according to an ROC curve.
[0043] Moreover, the device further comprises a judging module that is specifically employed for:
[0044] judging whether the fuzziness of the human face image obtained by calculation is higher than the optimum threshold;
[0045] if yes, deciding the human face image to be a fuzzy image, if not, deciding the human face image to be a definite image.
[0046] According to the third aspect, there is provided a computer equipment that comprises a memory, a processor, and a computer program stored on the memory and operable on the processor, and the human face fuzziness detecting method as recited in the first aspect is realized when the processor executes the computer program.

[0047] According to the fourth aspect, there is provided a computer-readable storage medium that stores a computer program thereon, and the human face fuzziness detecting method as recited in the first aspect is realized when the computer program is executed by a processor.
[0048] As can be seen from the above technical solutions, the present invention extracts, from a human face image, block images in which plural human face feature points respectively reside, then employs a previously well-trained fuzziness detecting model to predict a confidence degree of each block image corresponding to each grade label in plural grade labels, obtains the definition and fuzziness of each block image according to those confidence degrees, and finally calculates the fuzziness of the human face image according to the definitions and fuzziness of all the block images. In this way, the fuzziness of the plural block images in the human face image is predicted separately under a block prediction conception, and the prediction results are then combined to predict the fuzziness of the entire human face image, which avoids, to a certain extent, an incorrect overall result caused by misjudgment of a single block of the human face, so that accuracy in the detection of human face fuzziness is effectively enhanced. In addition, the present invention employs the previously well-trained fuzziness detecting model to predict the confidence degrees of different block images in the human face image corresponding to each grade label in plural grade labels, and obtains the fuzziness of each block image from those confidence degrees; since the plural grade labels include plural definition grades and plural fuzziness grades, in comparison with the prior-art binary-classification approach, in which deep learning is employed only to differentiate human face block images into the two categories of fuzzy and definite, the present invention converts the binary-classification problem into a multi-classification problem for processing and thereafter converts the result back to a binary classification to obtain the fuzziness result, which effectively avoids misjudging a definite image as a fuzzy image and further enhances the precision rate of image fuzziness detection.

BRIEF DESCRIPTION OF THE DRAWINGS
[0049] To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for the description of the embodiments are briefly introduced below. Apparently, the drawings introduced below are merely directed to some embodiments of the present invention, and persons ordinarily skilled in the art may derive other drawings from these drawings without creative effort.
[0050] Fig. 1 is a flowchart illustrating a human face fuzziness detecting method provided by an embodiment of the present invention;
[0051] Fig. 2 is a flowchart illustrating the process of training a fuzziness detecting model provided by an embodiment of the present invention;
[0052] Fig. 3 is a view schematically illustrating the structure of a deep neural network provided by an embodiment of the present invention;
[0053] Figs. 4a-4c are views illustrating ROC curves of the fuzziness detecting model on different testing sets provided by the embodiments of the present invention;
[0054] Fig. 5 is a view illustrating the structure of a human face fuzziness detecting device provided by an embodiment of the present invention; and
[0055] Fig. 6 is a view illustrating the internal structure of a computer equipment provided by an embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION
[0056] To make the objectives, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and comprehensively below in conjunction with the accompanying drawings of the embodiments. Apparently, the embodiments described below are merely some, rather than all, of the embodiments of the present invention. All other embodiments obtained by persons ordinarily skilled in the art on the basis of the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
[0057] As should be noted, unless explicitly demanded otherwise in the context, such wordings as "comprising", "including", "containing" and their various forms as used throughout the Description and Claims shall be understood to denote inclusion rather than exclusion or exhaustion; in other words, they denote "including, but not limited to". In addition, unless explained otherwise in the description of the present invention, the wordings "plural" and "a plurality of" denote "two or more".
[0058] Fig. 1 is a flowchart illustrating a human face fuzziness detecting method provided by an embodiment of the present invention. As shown in Fig. 1, the method can comprise the following steps.
[0059] Step 101 - extracting, from a human face image, block images in which plural human face feature points respectively reside.
[0060] Specifically, a human face region is detected from the human face image, and block images in which plural human face feature points respectively reside are extracted from the human face region.

[0061] The human face feature points can include feature points corresponding to the left pupil, the right pupil, the nose tip, the left corner of the mouth, and the right corner of the mouth, and can further include other feature points, such as the feature point corresponding to the brow.
[0062] In this embodiment, block images in which plural human face feature points respectively reside are extracted from the human face image, different human face feature points are contained in different block images, whereby plural block images can be extracted, for example, a left eye block image that contains the left pupil, and a right eye block image that contains the right pupil, etc.
[0063] Step 102 - predicting each block image via a previously well-trained fuzziness detecting model, and obtaining a confidence degree of each block image corresponding to each grade label in plural grade labels, wherein the plural grade labels include plural definition grades and plural fuzziness grades.
[0064] The confidence degree of a certain block image corresponding to a certain grade label is used to indicate the probability of the block image corresponding to this grade label.
[0065] The definition grades are classified in advance into three grades in descending order of definition degree, namely highly definite, mediumly definite, and lightly definite, with corresponding grade labels 0, 1, and 2 respectively; the fuzziness grades are classified in advance into three grades in ascending order of fuzziness degree, namely lightly fuzzy, mediumly fuzzy, and highly fuzzy, with corresponding grade labels 3, 4, and 5 respectively. Understandably, neither the number of definition grades nor the number of fuzziness grades is restricted to three, and the embodiments of the present invention place no specific limitation thereon.
[0066] Specifically, each block image is sequentially input in the fuzziness detecting model for prediction, and a confidence degree of each block image corresponding to each grade label in plural grade labels is obtained as output from the fuzziness detecting model.
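As one possible realization of this prediction step, a sketch using the Caffe Python interface that the later training description implies; the file names and the "data"/"prob" blob names are assumptions for illustration, not taken from the patent:

import caffe
import numpy as np

caffe.set_mode_cpu()
# Hypothetical deploy definition and trained weights of the fuzziness detecting model.
net = caffe.Net("deploy.prototxt", "fuzziness.caffemodel", caffe.TEST)

def predict_block(block_bgr):
    # block_bgr: one 48*48 block image; returns six confidence degrees,
    # one per grade label (three definition grades, three fuzziness grades).
    blob = block_bgr.astype(np.float32).transpose(2, 0, 1)[np.newaxis, ...]
    net.blobs["data"].reshape(*blob.shape)
    net.blobs["data"].data[...] = blob
    return net.forward()["prob"][0].copy()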
[0067] Step 103 - obtaining definition and fuzziness of each block image according to the confidence degree of each block image corresponding to each grade label in plural grade labels.
[0068] Specifically, for each block image, the definition and fuzziness of the block image are calculated from the confidence degrees of the block image corresponding to each grade label in plural grade labels. The confidence degrees of the block image corresponding to all definition grades can be directly added up to obtain the definition of the block image, and the confidence degrees of the block image corresponding to all fuzziness grades can be directly added up to obtain the fuzziness of the block image; other calculation modes may also be employed to obtain the definition and fuzziness of the block image, to which the embodiments of the present invention make no specific limitation.
[0069] Exemplarily, suppose the confidence degrees of a left eye block image of a certain human face image with respect to the aforementioned six grade labels are as follows: the probability corresponding to grade label "0" is 0, the probability corresponding to grade label "1" is 0.9, the probability corresponding to grade label "2" is 0.05, the probability corresponding to grade label "3" is 0.05, and the probabilities corresponding to grade label "4" and grade label "5" are both 0; the confidence degrees of the left eye block image corresponding to all definition grades are added up to give a definition of 0.95 for the block image, and the confidence degrees corresponding to all fuzziness grades are added up to give a fuzziness of 0.05.
[0070] Step 104 - calculating fuzziness of the human face image according to definitions and fuzziness of all the block images.
[0071] Specifically, the definitions of all the block images are added up and the result is divided by the total number of block images to obtain the definition of the human face image, and the fuzzinesses of all the block images are added up and the result is divided by the total number of block images to obtain the fuzziness of the human face image.
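Steps 103 and 104 can be summarized by the following sketch, in which labels 0-2 are the definition grades and labels 3-5 the fuzziness grades of this embodiment; the direct accumulation and averaging shown here are simply the calculation modes described above:

import numpy as np

DEFINITION_LABELS = (0, 1, 2)   # highly, mediumly, lightly definite
FUZZINESS_LABELS = (3, 4, 5)    # lightly, mediumly, highly fuzzy

def block_scores(confidences):
    # confidences: the six confidence degrees of one block image.
    definition = sum(confidences[i] for i in DEFINITION_LABELS)
    fuzziness = sum(confidences[i] for i in FUZZINESS_LABELS)
    return definition, fuzziness

def image_scores(all_block_confidences):
    # all_block_confidences: one six-element confidence vector per block image.
    scores = [block_scores(c) for c in all_block_confidences]
    definition = float(np.mean([d for d, _ in scores]))
    fuzziness = float(np.mean([f for _, f in scores]))
    return definition, fuzziness

# Worked example from paragraph [0069]: confidences [0, 0.9, 0.05, 0.05, 0, 0]
# give a block definition of 0.95 and a block fuzziness of 0.05.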
[0072] This embodiment of the present invention provides a human face fuzziness detecting method in which block images in which plural human face feature points respectively reside are extracted from a human face image, a previously well-trained fuzziness detecting model is then employed to predict a confidence degree of each block image corresponding to each grade label in plural grade labels, the definition and fuzziness of each block image are obtained from those confidence degrees, and the fuzziness of the human face image is finally calculated according to the definitions and fuzziness of all the block images. In this way, the fuzziness of the plural block images in the human face image is predicted separately under a block prediction conception, and the prediction results are then combined to predict the fuzziness of the entire human face image, which avoids, to a certain extent, an incorrect overall result caused by misjudgment of a single block of the human face, so that accuracy in the detection of human face fuzziness is effectively enhanced. In addition, this embodiment employs the previously well-trained fuzziness detecting model to predict the confidence degrees of different block images in the human face image corresponding to each grade label in plural grade labels, and obtains the fuzziness of each block image from those confidence degrees; since the plural grade labels include plural definition grades and plural fuzziness grades, in comparison with the prior-art binary-classification approach, in which deep learning is employed only to differentiate human face block images into the two categories of fuzzy and definite, this embodiment converts the binary-classification problem into a multi-classification problem for processing and thereafter converts the result back to a binary classification to obtain the fuzziness result, which effectively avoids misjudging a definite image as a fuzzy image and further enhances the precision rate of image fuzziness detection.
[0073] In a preferred embodiment, the process of extracting, from a human face image, feature block images in which plural human face feature points respectively reside can include:
[0074] detecting the human face image, locating a human face region and plural human face feature points, adjusting a size of the human face region to a preset size, and extracting a block image in which each human face feature point resides from the adjusted human face region.
[0075] Specifically, a well-trained MTCNN (multi-task convolutional neural network) human face detection model is employed to detect the human face image and locate a human face region and plural human face feature points; the MTCNN human face detection model here includes P-Net, R-Net, and O-Net network layers that are respectively responsible for generating a detection frame, refining the detection frame, and locating human face feature points. The MTCNN human face detection model can be trained with reference to prior-art model training methods, which is not repeated here.
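The detection and landmark localisation can be sketched with the open-source mtcnn Python package, which follows the same P-Net/R-Net/O-Net design; using this package (and the placeholder image path) is an assumption for illustration, not necessarily the model trained by the applicant:

import cv2
from mtcnn import MTCNN

detector = MTCNN()
image = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2RGB)
faces = detector.detect_faces(image)
box = faces[0]["box"]              # [x, y, width, height] of the human face region
keypoints = faces[0]["keypoints"]  # left_eye, right_eye, nose, mouth_left, mouth_right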
[0076] After the human face region and the plural human face feature points have been located, the human face region is scaled to a preset size, the coordinates of the various human face feature points are simultaneously converted from the human face image into the frame of the size-adjusted human face region, and pixels are expanded all around each human face feature point taken as a center, so that plural rectangular block images are obtained, with cross-boundary processing performed where necessary; in this embodiment, the preset size is 184*184, and 24 pixels are expanded all around each human face feature point to constitute block images sized 48*48.
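A sketch of the scaling and block extraction just described, using the 184*184 preset size and 24-pixel expansion of this embodiment; clamping the block centre is only one possible form of the cross-boundary processing, which the patent does not spell out:

import cv2

PRESET = 184   # preset size of the adjusted human face region
HALF = 24      # pixels expanded on each side, giving 48*48 block images

def extract_blocks(image, box, keypoints):
    x, y, w, h = box
    face = cv2.resize(image[y:y + h, x:x + w], (PRESET, PRESET))
    sx, sy = PRESET / w, PRESET / h
    blocks = {}
    for name, (px, py) in keypoints.items():
        # convert the feature-point coordinates into the size-adjusted region
        cx, cy = int((px - x) * sx), int((py - y) * sy)
        # keep the 48*48 block inside the region (one form of cross-boundary processing)
        cx = min(max(cx, HALF), PRESET - HALF)
        cy = min(max(cy, HALF), PRESET - HALF)
        blocks[name] = face[cy - HALF:cy + HALF, cx - HALF:cx + HALF]
    return blocks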

[0077] In a preferred embodiment, as shown in Fig. 2, the fuzziness detecting model is obtained through training by a method that comprises the following steps.
[0078] Step 201 - extracting, from a human face image sample, a block image sample in which each human face feature point resides, wherein the human face image sample includes definite human face image samples with different definition grades and fuzzy human face image samples with different fuzziness grades.
[0079] In this embodiment, human face image samples covering three definition grades and three fuzziness grades are first obtained, with the human face image samples of each grade reaching a certain number (200, for example). Human face regions are then detected from the human face image samples, and block image samples in which the human face feature points respectively reside are extracted from the human face regions, wherein a well-trained MTCNN human face detection model can be used to detect the human face regions and locate the human face feature points. Since the image sizes of the image samples are inconsistent with one another, and the sizes of the detected human face regions are also inconsistent with one another, the human face regions obtained are uniformly scaled to a preset size, the coordinates of the various human face feature points are simultaneously converted from the human face image into the frames of the size-adjusted human face regions, and pixels are expanded all around each human face feature point taken as a center, so that plural rectangular block images are obtained, with cross-boundary processing performed; in this embodiment, the preset size is 184*184, the left pupil, the right pupil, the nose tip, the left corner of the mouth, and the right corner of the mouth are selected to serve as the human face feature points, 24 pixels are expanded all around each human face feature point to constitute block image samples sized 48*48, and these are stored. In this way, by processing a small number of human face image samples, fivefold block image samples can be generated for use in model training.
[0080] Step 202 - marking a corresponding grade label on each block image sample, and classifying plural block image samples marked with grade labels into a training set and a verifying set.
[0081] In this embodiment, about 1,000 block image samples are obtained through the above Step 201 for the human face image samples of each grade. In this Step 202, a corresponding grade label is first manually marked on each block image sample, that is, each block image sample is assigned to the correct category according to its definition degree and fuzziness degree through manual examination, with the highly definite label being 0, the mediumly definite label being 1, the lightly definite label being 2, the lightly fuzzy label being 3, the mediumly fuzzy label being 4, and the highly fuzzy label being 5; the block image samples marked with grade labels are then classified into a training set and a verifying set in accordance with a preset proportion (9:1, for example), of which the training set is used for training the model parameters, and the verifying set is used for correcting the model during the training process.
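A minimal sketch of the 9:1 split into training and verifying sets; the (path, label) pair format is an assumption, and a stratified split per grade could equally be used:

import random

def split_samples(labelled_blocks, train_ratio=0.9, seed=0):
    # labelled_blocks: list of (block_image_path, grade_label) pairs, labels 0-5.
    shuffled = list(labelled_blocks)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]   # training set, verifying set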
[0082] Step 203 - iteratively training a preconstructed deep neural network according to the training set and the verifying set, and obtaining the fuzziness detecting model.
[0083] Specifically, the preconstructed deep neural network is trained with block image samples in the training set as inputs and with grade labels to which the block image samples correspond as outputs, and the trained deep neural network is verified according to the verifying set; if the verifying result does not conform to an iteration ending condition, the deep neural network is continually iteratively trained and verified until the verifying result conforms to the iteration ending condition, whereupon a fuzziness detecting model is obtained.
[0084] In a specific implementation, before the model is trained, the training set and the verifying set are packaged into data of the LMDB format, the preconstructed deep neural network structure is stored in a file with the suffix ".prototxt", the batch size by which the data is read can be set to a reasonable value according to hardware performance, the hyperparameters are set in "solver.prototxt", the learning rate is set to 0.005, the maximum number of iterations is set to 4,000, the number of verification iterations and the testing interval are set to 50 and 100 respectively, and all of these parameters are adjustable. The model is then trained to obtain a model file with the suffix ".caffemodel". The present invention employs the Caffe deep learning framework, and the use of other deep learning frameworks is similar in principle.
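The hyperparameters named above can be collected into a Caffe solver file; the sketch below writes such a file from Python, where the network path, learning-rate policy, and snapshot settings are placeholders not specified by the patent:

SOLVER_TEXT = """net: "train_val.prototxt"   # hypothetical path to the .prototxt network definition
base_lr: 0.005                  # learning rate from this embodiment
lr_policy: "fixed"              # assumed policy; the patent does not name one
max_iter: 4000                  # maximum number of iterations
test_iter: 50                   # verification batches per test
test_interval: 100              # iterations between verifications
snapshot: 1000                  # placeholder snapshot settings
snapshot_prefix: "fuzziness"
solver_mode: GPU
"""

with open("solver.prototxt", "w") as f:
    f.write(SOLVER_TEXT)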
[0085] Generally speaking, tens or even hundreds of thousands of training samples are required to train a deep learning model, but actual fuzzy samples are extremely limited in number in practical production, and Gaussian-blur or motion-blur samples generated by simulation through image processing differ noticeably from actual samples. The present invention instead collects definite human face image samples with different definition grades and fuzzy human face image samples with different fuzziness grades, extracts from these image samples the block image samples in which the plural human face feature points respectively reside, marks corresponding grade labels on them, and then uses the plural block image samples marked with grade labels to train the constructed deep neural network, whereby many times more actual training samples can be obtained from only a small quantity of human face image samples, so that the performance of the model can be further guaranteed and precision in the detection of image fuzziness can be effectively enhanced.
[0086] In addition, since being highly definite and being highly fuzzy are the two extremes in fuzziness detection, they are relatively easy to differentiate, while samples that are mediumly definite, lightly definite, lightly fuzzy or mediumly fuzzy, owing to the influence of illumination, shake of the photographer, or camera pixels, are not easily differentiated. In the process of training the fuzziness detecting model in the present invention, the binary-classification problem is converted into a multi-classification problem for processing, whereby interference from samples of the two extremes can be greatly reduced; by paying full attention to samples that are difficult to differentiate, a better detecting result is achieved than with direct binary-classification processing that does not differentiate definition grades and fuzziness grades, so that the problem of misjudging a definite image as a fuzzy image can be effectively avoided, and the precision rate of image fuzziness detection is further enhanced.
[0087] In a preferred embodiment, the aforementioned deep neural network includes a data input layer, a feature extraction layer, a first full connection layer, an activation function layer, a Dropout layer, a second full connection layer, and a loss function layer sequentially connected in cascades, of which the feature extraction layer includes a convolution layer, a maximum pooling layer, a minimum pooling layer, and a concatenate layer, the data input layer, the maximum pooling layer, and the minimum pooling layer are respectively connected with the convolution layer, and the maximum pooling layer, the minimum pooling layer, and the first full connection layer are respectively connected with the concatenate layer.
[0088] Fig. 3 schematically illustrates the structure of a deep neural network provided by an embodiment of the present invention. The first layer is the data input layer, whose function is to package the data and then feed it into the network in small batches. The convolution layer follows. Then come two separate pooling layers: one performs maximum value pooling and the other minimum value pooling, of which maximum value pooling retains the most notable features while minimum value pooling preserves the most easily neglected features; the combined use of the two pooling modes achieves an excellent effect, and the feature maps obtained by the two types of pooling are then concatenated by the concatenate layer to serve together as the input to the next layer. Next follow the first full connection layer, the activation function layer, and the Dropout layer, of which the full connection layer is used to classify the block image features input to it, the ReLU activation function in the activation function layer discards any neuron whose output value is smaller than 0 to create sparseness, and the Dropout layer drops a few parameters during each training pass to increase the generalization capability of the model. Then comes another full connection layer, which outputs score values for each definition grade and each fuzziness grade. The last is a normalization and loss function layer, which maps the output of the previous full connection layer to corresponding probability values and then employs a cross entropy loss function to progressively reduce the difference between them and the labels; the specific cross entropy loss function formula can be found in prior-art technology and is not repeated here.
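A sketch of the Fig. 3 structure follows; the patent implements it in Caffe, whereas PyTorch is used here for brevity, and the channel count, kernel size and dropout rate are illustrative assumptions since the patent does not give them. Minimum pooling is expressed as a negated maximum pooling:

import torch
import torch.nn as nn

class FuzzinessNet(nn.Module):
    # data input -> convolution -> parallel max/min pooling -> concatenate
    # -> full connection -> ReLU -> Dropout -> full connection; softmax and
    # cross entropy are applied by the loss function during training.
    def __init__(self, num_labels=6):
        super().__init__()
        self.conv = nn.Conv2d(3, 32, kernel_size=3, padding=1)   # 48*48 block images
        self.pool = nn.MaxPool2d(2)
        self.fc1 = nn.Linear(2 * 32 * 24 * 24, 128)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(0.5)
        self.fc2 = nn.Linear(128, num_labels)   # scores for 3 definition + 3 fuzziness grades

    def forward(self, x):
        feat = self.conv(x)
        pooled_max = self.pool(feat)       # retains the most notable features
        pooled_min = -self.pool(-feat)     # retains the most easily neglected features
        feat = torch.cat([pooled_max, pooled_min], dim=1).flatten(1)
        feat = self.dropout(self.relu(self.fc1(feat)))
        return self.fc2(feat)

# loss = nn.CrossEntropyLoss()(FuzzinessNet()(torch.randn(8, 3, 48, 48)),
#                              torch.randint(0, 6, (8,)))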
[0089] In a preferred embodiment, after the step of iteratively training a preconstructed deep neural network according to the training set and the verifying set, and obtaining the fuzziness detecting model, the method can further comprise:
[0090] employing different testing sets to calculate an optimum threshold for the fuzziness detecting model according to an ROC curve.
[0091] Each testing set includes block image testing samples in which the human face feature points reside, extracted from human face image testing samples; the specific extracting process is the same as in Step 201 and is not repeated here.
[0092] Specifically, fuzziness prediction is performed on each block image testing sample in each testing set on the basis of the fuzziness detecting model to obtain a prediction result, an ROC (receiver operating characteristic) curve to which each testing set corresponds is drawn according to the prediction result of each block image testing sample in each testing set and a preset threshold, the ROC curve to which each testing set corresponds is analyzed, and an optimum threshold is obtained.
[0093] In a practical application, 138,669 definite human face images, 2,334 semi-definite human face images, 19,050 definite human face images from security thumbnails, and 1,446 fuzzy human face images are collected to make up three image sets: definite human face images and fuzzy human face images, semi-definite human face images and fuzzy human face images, and definite human face images from security thumbnails and fuzzy human face images. Block image testing samples in which the human face feature points reside are extracted from the human face images in the three image sets respectively to form three testing sets, the fuzziness detecting model is then employed to predict on the various testing sets, and ROC curves are drawn according to the prediction result of each block image testing sample in each testing set and the preset thresholds, with reference to Figs. 4a-4c, of which Fig. 4a illustrates the ROC curve of the fuzziness detecting model on the testing set formed from definite and fuzzy human face images, Fig. 4b illustrates the ROC curve on the testing set formed from definite security thumbnails and fuzzy human face images, and Fig. 4c illustrates the ROC curve on the testing set formed from semi-definite and fuzzy human face images. In this embodiment, three preset threshold levels can be set through the expert experience method, the thresholds being, in ascending order, 0.19, 0.39 and 0.79, and 0.39 is selected as the optimum threshold through analysis of the ROC curves. With the threshold 0.39, tests on the testing set of definite and fuzzy human faces reach a precision rate of 99.3%.
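For the threshold selection, a sketch using scikit-learn's ROC utilities; Youden's J statistic is used here as one common selection rule, whereas the patent selects among expert-chosen candidate thresholds (0.19, 0.39, 0.79) by inspecting the curves:

import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(is_fuzzy_labels, fuzziness_scores):
    # is_fuzzy_labels: 1 for truly fuzzy images, 0 for definite ones;
    # fuzziness_scores: predicted image-level fuzziness values.
    fpr, tpr, thresholds = roc_curve(is_fuzzy_labels, fuzziness_scores)
    return float(thresholds[int(np.argmax(tpr - fpr))])

# The selected threshold is then applied as in paragraph [0095]:
# image_is_fuzzy = image_fuzziness > 0.39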
[0094] In a preferred embodiment, after the above step of calculating fuzziness of the human face image according to definitions and fuzziness of all the block images, the method can further comprise:
[0095] judging whether the fuzziness of the human face image obtained by calculation is higher than the optimum threshold; if yes, deciding the human face image to be a fuzzy image, if not, deciding the human face image to be a definite image.
[0096] In this embodiment, the optimum threshold is taken as the standard for judging whether the human face image is a fuzzy image: when the fuzziness of the human face image is higher than the optimum threshold, the human face image is decided to be a fuzzy image, whereby automatic detection of fuzzy images is achieved and image quality is enhanced.

[0097] Fig. 5 is a view illustrating the structure of a human face fuzziness detecting device provided by an embodiment of the present invention. As shown in Fig. 5, the device comprises:
[0098] an extracting module 51, for extracting, from a human face image, block images in which plural human face feature points respectively reside;
[0099] a predicting module 52, for predicting each of the block images via a previously well-trained fuzziness detecting model, and obtaining a confidence degree of each of the block images corresponding to each grade label in plural grade labels, wherein the plural grade labels include plural definition grades and plural fuzziness grades;
[0100] an obtaining module 53, for calculating definition and fuzziness of each of the block images according to the confidence degree of each of the block images corresponding to each grade label in plural grade labels; and
[0101] a calculating module 54, for calculating fuzziness of the human face image according to definitions and fuzziness of all the block images.
[0102] In a preferred embodiment, the extracting module 51 is specifically employed for:
[0103] detecting the human face image, and locating a human face region and plural human face feature points; and
[0104] adjusting a size of the human face region to a preset size, and extracting a block image in which each of the human face feature points resides from the adjusted human face region.
[0105] In a preferred embodiment, the device further comprises a training module 50 that is specifically employed for:
[0106] extracting, from plural human face image samples, a block image sample in which each of the human face feature points resides, wherein the plural image samples include definite human face image samples and fuzzy human face image samples;
[0107] marking a corresponding grade label on each of the block image samples, and classifying the plural block image samples marked with grade labels into a training set and a verifying set; and
[0108] iteratively training a preconstructed deep neural network according to the training set and the verifying set, and obtaining the fuzziness detecting model.
[0109] In a preferred embodiment, the deep neural network includes a data input layer, a feature extraction layer, a first full connection layer, an activation function layer, a Dropout layer, a second full connection layer, and a loss function layer sequentially connected in cascades, wherein the feature extraction layer includes a convolution layer, a maximum pooling layer, a minimum pooling layer, and a concatenate layer, the data input layer, the maximum pooling layer, and the minimum pooling layer are respectively connected with the convolution layer, and the maximum pooling layer, the minimum pooling layer, and the first full connection layer are respectively connected with the concatenate layer.
[0110] In a preferred embodiment, the training module 50 is specifically further employed for:
[0111] employing different testing sets to calculate an optimum threshold for the fuzziness detecting model according to an ROC curve.
[0112] In a preferred embodiment, the device further comprises a judging module 55 that is specifically employed for:
[0113] judging whether the fuzziness of the human face image obtained by calculation is higher than the optimum threshold;
[0114] if yes, deciding the human face image to be a fuzzy image, if not, deciding the human face image to be a definite image.
[0115] As should be noted, the human face fuzziness detecting device provided by the foregoing embodiment is described merely by way of example through its division into the aforementioned functional modules, whereas in actual application the above functions may be assigned to different functional modules as required, that is to say, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, since the human face fuzziness detecting device provided by this embodiment is based on the same conception as the human face fuzziness detecting method provided by the previous embodiment, its specific implementation process and the advantageous effects achieved thereby can be found in the embodiment of the human face fuzziness detecting method and are not repeated here.
[0116] Fig. 6 is a view illustrating the internal structure of a computer equipment provided by an embodiment of the present invention. The computer equipment can be a server, and its internal structure can be as shown in Fig. 6. The computer equipment comprises a processor, a memory, and a network interface connected to each other via a system bus. The processor of the computer equipment is employed to provide computing and controlling capabilities. The memory of the computer equipment includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores therein an operating system, a computer program and a database. The internal memory provides an environment for the running of the operating system and the computer program in the nonvolatile storage medium. The network interface of the computer equipment is employed to connect to an external terminal via a network for communication. The computer program realizes a human face fuzziness detecting method when it is executed by a processor.
[0117] As understandable to persons skilled in the art, the structure illustrated in Fig. 6 is merely a block diagram of the partial structure relevant to the solution of the present invention and does not constitute any restriction on the computer equipment to which the solution of the present invention is applied; the specific computer equipment may comprise more or fewer components than those illustrated in Fig. 6, may combine certain components, or may have a different arrangement of components.
[0118] In one embodiment, there is provided a computer equipment that comprises a memory, a processor and a computer program stored on the memory and operable on the processor, and the following steps are realized when the processor executes the computer program:

[0119] extracting, from a human face image, block images in which plural human face feature points respectively reside;
[0120] predicting each of the block images via a previously well-trained fuzziness detecting model, and obtaining a confidence degree of each of the block images corresponding to each grade label in plural grade labels, wherein the plural grade labels include plural definition grades and plural fuzziness grades;
[0121] obtaining definition and fuzziness of each of the block images according to the confidence degree of each of the block images corresponding to each grade label in plural grade labels; and
[0122] calculating fuzziness of the human face image according to definitions and fuzziness of all the block images.
[0123] In one embodiment, there is provided a computer-readable storage medium storing thereon a computer program, and the following steps are realized when the computer program is executed by a processor:
[0124] extracting, from a human face image, block images in which plural human face feature points respectively reside;
[0125] predicting each of the block images via a previously well-trained fuzziness detecting model, and obtaining a confidence degree of each of the block images corresponding to each grade label in plural grade labels, wherein the plural grade labels include plural definition grades and plural fuzziness grades;
[0126] obtaining definition and fuzziness of each of the block images according to the confidence degree of each of the block images corresponding to each grade label in plural grade labels; and
[0127] calculating fuzziness of the human face image according to definitions and fuzziness of all the block images.
[0128] As comprehensible to persons ordinarily skilled in the art, all or part of the flows in the methods of the aforementioned embodiments can be completed by a computer program instructing relevant hardware; the computer program can be stored in a nonvolatile computer-readable storage medium, and when executed the computer program can include the flows of the aforementioned method embodiments. Any reference to the memory, storage, database or other media used in the various embodiments provided by the present application can include nonvolatile and/or volatile memory. The nonvolatile memory can include a read-only memory (ROM), a programmable ROM (PROM), an electrically programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM) or a flash memory. The volatile memory can include a random access memory (RAM) or an external cache memory. By way of explanation rather than restriction, the RAM is obtainable in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
[0129] The technical features of the aforementioned embodiments can be combined arbitrarily; for the sake of brevity, not all possible combinations of the technical features in the aforementioned embodiments are described, but all such combinations should be considered to fall within the scope recorded in this Description as long as they are not mutually contradictory.
[0130] The foregoing embodiments merely express several modes of execution of the present invention, and their descriptions are relatively specific and detailed, but they should not therefore be understood as restrictions on the scope of the invention patent. As should be pointed out, persons of ordinary skill in the art may further make various modifications and improvements without departing from the conception of the present invention, and all of these fall within the protection scope of the present invention. Accordingly, the patent protection scope of the present invention shall be determined by the appended Claims.


Claims (10)

What is claimed is:
1. A human face fuzziness detecting method, characterized in that the method comprises:
extracting, from a human face image, block images in which plural human face feature points respectively reside;
predicting each of the block images via a previously well-trained fuzziness detecting model, and obtaining a confidence degree of each of the block images corresponding to each grade label in plural grade labels, wherein the plural grade labels include plural definition grades and plural fuzziness grades;
obtaining definition and fuzziness of each of the block images according to the confidence degree of each of the block images corresponding to each grade label in plural grade labels; and calculating fuzziness of the human face image according to definitions and fuzziness of all the block images.
2. The method according to Claim 1, characterized in that the step of extracting, from a human face image, feature block images in which plural human face feature points respectively reside includes:
detecting the human face image, and locating a human face region and plural human face feature points; and adjusting a size of the human face region to a preset size, and extracting a block image in which each of the human face feature points resides from the adjusted human face region.
3. The method according to Claim 1 or 2, characterized in that the fuzziness detecting model is trained to be obtained through the following steps:
extracting, from a human face image sample, a block image sample in which each of the human face feature points resides, wherein the human face image sample includes definite human face image samples with different definition grades and fuzzy human face image samples with different fuzziness grades;
marking a corresponding grade label on each of the block image samples, and classifying the plural block image samples marked with grade labels into a training set and a verifying set; and iteratively training a preconstructed deep neural network according to the training set and the verifying set, and obtaining the fuzziness detecting model.
4. The method according to Claim 3, characterized in that the deep neural network includes a data input layer, a feature extraction layer, a first full connection layer, an activation function layer, a Dropout layer, a second full connection layer, and a loss function layer sequentially connected in cascades, wherein the feature extraction layer includes a convolution layer, a maximum pooling layer, a minimum pooling layer, and a concatenate layer, the data input layer, the maximum pooling layer, and the minimum pooling layer are respectively connected with the convolution layer, and the maximum pooling layer, the minimum pooling layer, and the first full connection layer are respectively connected with the concatenate layer.
5. The method according to Claim 3, characterized in that the method further comprises:
employing different testing sets to calculate an optimum threshold for the fuzziness detecting model according to an ROC curve.
6. The method according to Claim 5, characterized in that, after the step of calculating fuzziness of the human face image according to definitions and fuzziness of all the block images, the method further comprises:
judging whether the fuzziness of the human face image obtained by calculation is higher than the optimum threshold;
if yes, deciding the human face image to be a fuzzy image; and if not, deciding the human face image to be a definite image.
7. A human face fuzziness detecting device, characterized in that the device comprises:
an extracting module, for extracting, from a human face image, block images in which plural human face feature points respectively reside;
a predicting module, for predicting each of the block images via a previously well-trained fuzziness detecting model, and obtaining a confidence degree of each of the block images corresponding to each grade label in plural grade labels, wherein the plural grade labels include plural definition grades and plural fuzziness grades;
an obtaining module, for calculating definition and fuzziness of each of the block images according to the confidence degree of each of the block images corresponding to each grade label in plural grade labels; and a calculating module, for calculating fuzziness of the human face image according to definitions and fuzziness of all the block images.
8. The device according to Claim 7, characterized in that the device further comprises a training module, and that the training module is specifically employed for:
extracting, from a human face image sample, a block image sample in which each of the human face feature points resides, wherein the human face image sample includes definite human face image samples with different definition grades and fuzzy human face image samples with different fuzziness grades;
marking a corresponding grade label on each of the block image samples, and classifying the plural block image samples marked with grade labels into a training set and a verifying set; and iteratively training a preconstructed deep neural network according to the training set and the verifying set, and obtaining the fuzziness detecting model.
9. Computer equipment, comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, characterized in that the human face fuzziness detecting method as recited in any one of Claims 1 to 6 is realized when the processor executes the computer program.
10. A computer-readable storage medium, storing a computer program thereon, characterized in that the human face fuzziness detecting method as recited in any one of Claims 1 to 6 is realized when the computer program is executed by a processor.
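As a concrete illustration of the block extraction and aggregation steps recited in Claims 1, 2 and 7, the following Python sketch crops a block around each located facial feature point and turns the per-block grade-label confidences into a single face-level fuzziness score. The block size, the `model.predict` interface, the ordering of the grade labels (definition grades first) and the mean aggregation are illustrative assumptions; the claims do not fix these details.

```python
import numpy as np

def extract_blocks(face_region, landmarks, block_size=32):
    """Crop a square block centred on each facial feature point.

    face_region : H x W (or H x W x C) array already resized to the preset size.
    landmarks   : iterable of (x, y) feature-point coordinates.
    """
    h, w = face_region.shape[:2]
    half = block_size // 2
    blocks = []
    for x, y in landmarks:
        x0 = int(np.clip(x - half, 0, w - block_size))   # keep the block inside the image
        y0 = int(np.clip(y - half, 0, h - block_size))
        blocks.append(face_region[y0:y0 + block_size, x0:x0 + block_size])
    return blocks

def face_fuzziness(blocks, model, n_clear_grades):
    """Aggregate per-block grade confidences into one face-level fuzziness score.

    `model.predict` is assumed to return, for one block, a confidence vector over
    all grade labels, with the first `n_clear_grades` entries being the definition
    (clear) grades and the remaining entries the fuzziness (blurred) grades.
    """
    scores = []
    for block in blocks:
        conf = np.asarray(model.predict(block))       # shape: (num_grade_labels,)
        definition = conf[:n_clear_grades].sum()      # total confidence of the clear grades
        fuzziness = conf[n_clear_grades:].sum()       # total confidence of the blurred grades
        scores.append(fuzziness / (definition + fuzziness + 1e-8))
    return float(np.mean(scores))                     # one possible aggregation: the mean over blocks
```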
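Claim 4 specifies a cascade of a data input layer, a convolution layer, parallel maximum and minimum pooling layers, a concatenate layer, a first fully connected layer, an activation function layer, a Dropout layer, and a second fully connected layer feeding the loss function layer. A minimal PyTorch sketch of such a cascade is given below; the channel count, kernel size, block size and number of grade labels are assumptions chosen only for illustration, and minimum pooling is emulated as maximum pooling on the negated feature map because PyTorch has no built-in minimum pooling layer.

```python
import torch
import torch.nn as nn

class BlockFuzzinessNet(nn.Module):
    """Sketch of the Claim 4 cascade: conv -> (max pool | min pool) -> concat
    -> FC -> activation -> Dropout -> FC, producing grade-label logits."""

    def __init__(self, num_grade_labels=6, block_size=32):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)   # feature extraction: convolution layer
        self.max_pool = nn.MaxPool2d(2)                          # maximum pooling branch
        feat = 16 * (block_size // 2) ** 2 * 2                   # two pooled branches are concatenated
        self.fc1 = nn.Linear(feat, 128)                          # first full connection layer
        self.act = nn.ReLU()                                     # activation function layer
        self.dropout = nn.Dropout(p=0.5)                         # Dropout layer
        self.fc2 = nn.Linear(128, num_grade_labels)              # second full connection layer

    def forward(self, x):                                        # x: (N, 1, block_size, block_size)
        f = self.conv(x)
        p_max = self.max_pool(f)                                 # maximum pooling layer
        p_min = -self.max_pool(-f)                               # minimum pooling, via negation
        f = torch.cat([p_max, p_min], dim=1).flatten(1)          # concatenate layer
        return self.fc2(self.dropout(self.act(self.fc1(f))))     # logits; a cross-entropy criterion can serve as the loss function layer during training
```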
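For Claims 5 and 6, one common way to derive an optimum threshold from an ROC curve computed on a testing set is to maximise Youden's J statistic (true positive rate minus false positive rate) and then compare the face-level fuzziness against that threshold. The claims only require that the threshold come from an ROC curve; the J-statistic criterion used here is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(labels, scores):
    """Pick the threshold that maximises Youden's J on the ROC curve.

    labels : 1 for truly fuzzy faces, 0 for clear faces (testing-set ground truth).
    scores : face-level fuzziness values, e.g. from face_fuzziness() above.
    """
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return thresholds[np.argmax(tpr - fpr)]

def classify(face_score, threshold):
    """Claim 6: above the threshold -> fuzzy image, otherwise -> definite (clear) image."""
    return "fuzzy" if face_score > threshold else "definite"
```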

CA3174691A 2020-03-09 2020-06-19 Human face fuzziness detecting method, device, computer equipment and storage medium Pending CA3174691A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN202010156039.8A CN111368758B (en) 2020-03-09 2020-03-09 Face ambiguity detection method, face ambiguity detection device, computer equipment and storage medium
CN202010156039.8 2020-03-09
PCT/CN2020/097009 WO2021179471A1 (en) 2020-03-09 2020-06-19 Face blur detection method and apparatus, computer device and storage medium

Publications (1)

Publication Number Publication Date
CA3174691A1 true CA3174691A1 (en) 2021-09-16

Family

ID=71206593

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3174691A Pending CA3174691A1 (en) 2020-03-09 2020-06-19 Human face fuzziness detecting method, device, computer equipment and storage medium

Country Status (3)

Country Link
CN (1) CN111368758B (en)
CA (1) CA3174691A1 (en)
WO (1) WO2021179471A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862040B (en) * 2020-07-20 2023-10-31 中移(杭州)信息技术有限公司 Portrait picture quality evaluation method, device, equipment and storage medium
CN112085701B (en) * 2020-08-05 2024-06-11 深圳市优必选科技股份有限公司 Face ambiguity detection method and device, terminal equipment and storage medium
CN111914939B (en) * 2020-08-06 2023-07-28 平安科技(深圳)有限公司 Method, apparatus, device and computer readable storage medium for recognizing blurred image
CN113239738B (en) * 2021-04-19 2023-11-07 深圳市安思疆科技有限公司 Image blurring detection method and blurring detection device
CN113362304B (en) * 2021-06-03 2023-07-21 北京百度网讯科技有限公司 Training method of definition prediction model and method for determining definition level
CN113627314A (en) * 2021-08-05 2021-11-09 Oppo广东移动通信有限公司 Face image blur detection method and device, storage medium and electronic equipment
CN113902740A (en) * 2021-12-06 2022-01-07 深圳佑驾创新科技有限公司 Construction method of image blurring degree evaluation model
CN117475091B (en) * 2023-12-27 2024-03-22 浙江时光坐标科技股份有限公司 High-precision 3D model generation method and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107689039B (en) * 2016-08-05 2021-01-26 同方威视技术股份有限公司 Method and device for estimating image fuzziness
CN106920229B (en) * 2017-01-22 2021-01-05 北京奇艺世纪科技有限公司 Automatic detection method and system for image fuzzy area
CN107844766A (en) * 2017-10-31 2018-03-27 北京小米移动软件有限公司 Acquisition methods, device and the equipment of facial image fuzziness
US11462052B2 (en) * 2017-12-20 2022-10-04 Nec Corporation Image processing device, image processing method, and recording medium
CN109389030B (en) * 2018-08-23 2022-11-29 平安科技(深圳)有限公司 Face characteristic point detection method and device, computer equipment and storage medium
CN110059642B (en) * 2019-04-23 2020-07-31 北京海益同展信息科技有限公司 Face image screening method and device
CN110163114B (en) * 2019-04-25 2022-02-15 厦门瑞为信息技术有限公司 Method and system for analyzing face angle and face blurriness and computer equipment
CN110363753B (en) * 2019-07-11 2021-06-22 北京字节跳动网络技术有限公司 Image quality evaluation method and device and electronic equipment
CN110705511A (en) * 2019-10-16 2020-01-17 北京字节跳动网络技术有限公司 Blurred image recognition method, device, equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359104A (en) * 2022-01-10 2022-04-15 北京理工大学 Cataract fundus image enhancement method based on hierarchical generation
CN114359104B (en) * 2022-01-10 2024-06-11 北京理工大学 Cataract fundus image enhancement method based on hierarchical generation

Also Published As

Publication number Publication date
WO2021179471A1 (en) 2021-09-16
CN111368758A (en) 2020-07-03
CN111368758B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CA3174691A1 (en) Human face fuzziness detecting method, device, computer equipment and storage medium
US11403876B2 (en) Image processing method and apparatus, facial recognition method and apparatus, and computer device
JP7078803B2 (en) Risk recognition methods, equipment, computer equipment and storage media based on facial photographs
US11182592B2 (en) Target object recognition method and apparatus, storage medium, and electronic device
CN111666995B (en) Vehicle damage assessment method, device, equipment and medium based on deep learning model
CN109241842B (en) Fatigue driving detection method, device, computer equipment and storage medium
CN113239874B (en) Behavior gesture detection method, device, equipment and medium based on video image
CN110706261A (en) Vehicle violation detection method and device, computer equipment and storage medium
CN111626123A (en) Video data processing method and device, computer equipment and storage medium
WO2021047484A1 (en) Text recognition method and terminal device
EP4300417A1 (en) Method and apparatus for evaluating image authenticity, computer device, and storage medium
WO2021000832A1 (en) Face matching method and apparatus, computer device, and storage medium
CN111191532A (en) Face recognition method and device based on construction area and computer equipment
CN110956628B (en) Picture grade classification method, device, computer equipment and storage medium
CN111666990A (en) Vehicle damage characteristic detection method and device, computer equipment and storage medium
CN109858375A (en) Living body faces detection method, terminal and computer readable storage medium
US20210224565A1 (en) Method for optical character recognition in document subject to shadows, and device employing method
CN112668462B (en) Vehicle damage detection model training, vehicle damage detection method, device, equipment and medium
CN111539317A (en) Vehicle illegal driving detection method and device, computer equipment and storage medium
CN116863522A (en) Acne grading method, device, equipment and medium
CN113034514A (en) Sky region segmentation method and device, computer equipment and storage medium
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN116740728A (en) Dynamic acquisition method and system for wafer code reader
CN111985340A (en) Face recognition method and device based on neural network model and computer equipment
CN113436735A (en) Body weight index prediction method, device and storage medium based on face structure measurement

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20220907
