CN113468925A - Occluded face recognition method, intelligent terminal and storage medium - Google Patents

Occluded face recognition method, intelligent terminal and storage medium

Info

Publication number
CN113468925A
CN113468925A (application CN202010244613.5A; granted as CN113468925B)
Authority
CN
China
Prior art keywords
image
face
occlusion
recognition
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010244613.5A
Other languages
Chinese (zh)
Other versions
CN113468925B (en)
Inventor
熊宇龙
李渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan TCL Group Industrial Research Institute Co Ltd
Original Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan TCL Group Industrial Research Institute Co Ltd filed Critical Wuhan TCL Group Industrial Research Institute Co Ltd
Priority to CN202010244613.5A priority Critical patent/CN113468925B/en
Publication of CN113468925A publication Critical patent/CN113468925A/en
Application granted granted Critical
Publication of CN113468925B publication Critical patent/CN113468925B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses an occluded face recognition method, an intelligent terminal and a storage medium. The method comprises: acquiring an occluded face image; performing image blocking on the occluded face image to obtain a plurality of block images, and extracting local images from the plurality of block images; performing image recognition on each local image to obtain a plurality of candidate recognition images of the occluded face image; determining an occlusion type according to the local images; and determining a final recognition image of the occluded face image from the plurality of candidate recognition images according to the occlusion type. Because the occluded face recognition technique is based on local image recognition, it imposes low data-processing requirements, does not require deliberately adding occlusion data to the training set, extracts local images quickly, accurately recognizes the whole face in the occluded face image, and recognizes faces accurately and efficiently.

Description

Occluded face recognition method, intelligent terminal and storage medium
Technical Field
The present invention relates to the technical field of image recognition, and in particular to an occluded face recognition method, an intelligent terminal and a storage medium.
Background
The wave of artificial intelligence has brought many technical innovations, and intelligent vision technology has become indispensable as it makes daily life more convenient. Face recognition is among the AI technologies easiest to put into practice: its pipeline is well established, data sets are plentiful, and improvements in camera technology have greatly reduced its difficulty. However, an occluded face (for example, one partially covered by sunglasses or a mask, or degraded by uneven illumination) loses part of its details, causing feature loss and posing a serious challenge to occluded face recognition accuracy.
Existing deep-learning-based occluded face recognition techniques fall mainly into two categories. The first increases the robustness of the data set: occlusion interference items are added to the training data, and parameter tuning is used to increase feature separability so that the model adapts to occluded faces. The second is constrained training: a constraint (a limiting condition) is imposed on the occluded region, and the face is recognized with a local weighting method (i.e., the face is divided into several regions that are recognized separately); its disadvantage is poor generalization to varied, hard occlusions.
Accordingly, the prior art still needs to be improved and developed.
Disclosure of Invention
The main object of the present invention is to provide an occluded face recognition method, an intelligent terminal and a storage medium, aiming to solve the problems of low speed and low accuracy of occluded face recognition in the prior art.
To achieve the above object, the present invention provides an occluded face recognition method comprising the following steps:
acquiring an occluded face image;
performing image blocking on the occluded face image to obtain a plurality of block images, and extracting local images from the plurality of block images;
performing image recognition on each local image to obtain a plurality of candidate recognition images of the occluded face image;
determining an occlusion type according to the local images;
and determining a final recognition image of the occluded face image from the plurality of candidate recognition images according to the occlusion type.
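The five claimed steps can be sketched end to end. This is a minimal toy illustration, not the patented implementation: the 4x4 "face" array, the quadrant blocking, the brightness-based occlusion test and the mean-difference similarity are all invented stand-ins for the components described later in the specification.

```python
import numpy as np

# Toy stand-ins for the claimed steps, operating on a 4x4 "face" whose
# quadrants play the role of the four facial regions (hypothetical layout).
REGIONS = ["left_eye", "right_eye", "nose", "mouth"]

def split_into_blocks(face):
    """Step 2: block the (aligned) face image into four quadrant images."""
    h, w = face.shape[0] // 2, face.shape[1] // 2
    return {"right_eye": face[:h, :w], "left_eye": face[:h, w:],
            "nose": face[h:, :w], "mouth": face[h:, w:]}

def is_occluded(block, threshold=0.1):
    """Step 4 helper: treat a near-black block as occluded (toy criterion)."""
    return block.mean() < threshold

def recognize(face, database, weights):
    """Steps 2-5: block the probe, derive the occlusion type, then score each
    database face by a weighted sum of per-region similarities."""
    blocks = split_into_blocks(face)
    occlusion_type = tuple(int(is_occluded(blocks[r])) for r in REGIONS)
    scores = {}
    for name, ref in database.items():
        ref_blocks = split_into_blocks(ref)
        total = 0.0
        for i, r in enumerate(REGIONS):
            if occlusion_type[i]:
                continue  # an occluded region contributes no weight
            # Toy similarity: 1 minus the mean absolute pixel difference.
            total += weights[r] * (1.0 - np.abs(blocks[r] - ref_blocks[r]).mean())
        scores[name] = total
    best = max(scores, key=scores.get)  # final recognition image
    return best, occlusion_type, scores
```

Here `recognize` returns the best-matching identity, the detected occlusion pattern and the full score set, mirroring the flow from acquisition through final selection.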
Optionally, in the occluded face recognition method, acquiring the occluded face image specifically comprises:
detecting a face present in a current image;
and extracting the occluded face image with a preset image size from the current image.
Optionally, in the occluded face recognition method, before performing image blocking on the occluded face image to obtain a plurality of block images and extracting local images from the plurality of block images, the method comprises:
aligning the occluded face image according to preset alignment face feature points to obtain an aligned occluded face image.
Optionally, in the occluded face recognition method, performing image blocking on the occluded face image to obtain a plurality of block images and extracting local images from the plurality of block images specifically comprises:
performing image blocking on the aligned occluded face image to obtain a plurality of block images;
and extracting the local images from the block images.
Optionally, in the occluded face recognition method, performing image recognition on each local image to obtain a plurality of candidate recognition images of the occluded face image specifically comprises:
performing image recognition on each local image to obtain at least one candidate recognition image corresponding to each local image;
and taking all the candidate recognition images corresponding to all the local images as the candidate recognition images of the occluded face image.
Optionally, in the occluded face recognition method, determining the final recognition image of the occluded face image from the plurality of candidate recognition images according to the occlusion type specifically comprises:
determining a region recognition weight for the image region corresponding to each local image according to the occlusion type;
calculating the similarity between each candidate recognition image and the occluded face image according to the region recognition weights;
and determining the final recognition image of the occluded face image from the candidate recognition images according to the similarity.
Optionally, in the occluded face recognition method, calculating the similarity between each candidate recognition image and the occluded face image according to the region recognition weights specifically comprises:
calculating, according to the region recognition weights, the similarity of each image region in the candidate recognition image corresponding to each local image;
and taking the sum of the region similarities in the candidate recognition image as the similarity between the candidate recognition image and the occluded face image.
Optionally, in the occluded face recognition method, determining the occlusion type according to the local images specifically comprises:
comparing the distribution of occlusion in the local images with a preset occlusion type table to judge the current occlusion type of the local images.
Optionally, in the occluded face recognition method, performing image recognition on each local image specifically comprises:
receiving a plurality of non-occluded faces input in advance and storing them in a face database;
and matching the local images against the face database to determine at least one non-occluded face image corresponding to each local image, wherein the non-occluded face image is a candidate recognition image of the occluded face image.
Optionally, in the occluded face recognition method, the occlusion type is any combination of one or more of left-eye occlusion, right-eye occlusion, nose occlusion and mouth occlusion.
In addition, to achieve the above object, the present invention further provides an intelligent terminal comprising: a memory, a processor, and an occluded face recognition program stored in the memory and executable on the processor, wherein the occluded face recognition program, when executed by the processor, implements the steps of the occluded face recognition method described above.
In addition, to achieve the above object, the present invention further provides a storage medium storing an occluded face recognition program which, when executed by a processor, implements the steps of the occluded face recognition method described above.
The method acquires an occluded face image; performs image blocking on the occluded face image to obtain a plurality of block images and extracts local images from them; performs image recognition on each local image to obtain a plurality of candidate recognition images of the occluded face image; determines an occlusion type according to the local images; and determines a final recognition image of the occluded face image from the plurality of candidate recognition images according to the occlusion type. Because the occluded face recognition technique is based on local image recognition, it imposes low data-processing requirements, does not require deliberately adding occlusion data to the training set, extracts local images quickly, accurately recognizes the whole face in the occluded face image, and recognizes faces accurately and efficiently.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the method for identifying an occluded face of the present invention;
FIG. 2 is a flowchart of step S10 in the preferred embodiment of the method for recognizing an occluded face of the present invention;
FIG. 3 is a flowchart of step S20 in the preferred embodiment of the method for recognizing an occluded face of the present invention;
FIG. 4 is a flowchart of step S30 in the preferred embodiment of the method for recognizing an occluded face of the present invention;
FIG. 5 is a flowchart of step S50 in the preferred embodiment of the method for recognizing an occluded face of the present invention;
FIG. 6 is a schematic diagram of face detection in the preferred embodiment of the method for identifying an occluded face according to the present invention;
FIG. 7 is a schematic view of face alignment in the preferred embodiment of the method for recognizing an occluded face according to the present invention;
FIG. 8 is a diagram of image pre-segmentation in accordance with a preferred embodiment of the present invention;
FIG. 9 is a schematic diagram of extracting a local image according to the preferred embodiment of the method for recognizing an occluded face of the present invention;
FIG. 10 is a schematic diagram illustrating various occlusion types in a preferred embodiment of the method for identifying an occluded face of the present invention;
fig. 11 is a schematic operating environment diagram of an intelligent terminal according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
As shown in fig. 1, the method for identifying an occluded face according to the preferred embodiment of the present invention includes the following steps:
and step S10, acquiring an occlusion face image.
The occlusion face image refers to a face image in which a face appearing in an image is occluded. For example, when the face in the face image is unclear, incomplete or missing due to wearing sunglasses, a mask, uneven lighting, etc. (for example, key parts such as eyes, a nose, a face, etc. are blocked), the missing part of details may cause feature loss, so that real face information cannot be recognized, and such an image is called a blocked face image.
Please refer to fig. 2, which is a flowchart of step S10 in the method for recognizing an occluded face according to the present invention.
As shown in fig. 2, the step S10 includes:
s11, detecting the face existing in the current image;
and S12, extracting the occlusion face image with the preset image size from the image.
After the occlusion face image with the preset image size is extracted, the method further comprises the following steps: and carrying out statistics according to a data set to obtain preset alignment face characteristic points in advance, and aligning the shielding face image according to the preset alignment face characteristic points to obtain the aligned shielding face image.
Specifically, a face present in the current image is detected quickly and accurately using deep learning, as shown in fig. 6 (a face detection diagram). Deep learning is a research direction in the field of machine learning that brings it closer to artificial intelligence; its ultimate goal is to enable machines to analyze and learn like humans and to recognize data such as text, images and sound. Its performance on speech and image recognition far exceeds that of earlier techniques, and it has achieved many results in search, data mining, machine learning, machine translation, natural language processing, multimedia learning, speech, recommendation and personalization, and other related fields. In the present invention, face region detection and face keypoint detection are combined, for example by MTCNN (Multi-Task Convolutional Neural Network), a multi-task neural network model for the face detection task. MTCNN mainly adopts three cascaded networks and uses the idea of candidate boxes plus classifiers to perform fast and efficient face detection. The multi-task convolutional neural network is used to detect the occluded face present in the current image; when an occluded face is detected, the occluded face image is acquired for subsequent processing.
The face information in the occluded face image is thereby detected accurately, facilitating subsequent face matching. Meanwhile, after the face in the current image is detected by the deep learning method, the occluded face image is extracted at a size limited by the preset image size, i.e., the size of the occluded face image falls within the preset image size range, so that the clarity of the occluded face image, and hence the subsequent recognition effect, can be ensured. The preset image size can be adjusted according to actual requirements and does not limit the present invention.
The extracted face image is then corrected: the extracted faces are placed at the same position (for example, the two face images in fig. 7 are placed at the same horizontal position), and the face images are aligned according to preset alignment face feature points (face feature points refer to key parts of the face such as the eyes, nose and mouth). Images whose positions are offset are adjusted into aligned images (alignment means that an image whose overall position is offset is adjusted into a registered one; taking a sheet of A4 paper as an example, the aligned state is horizontally and vertically centered), as shown in fig. 7 (a face alignment diagram), which ensures subsequent matching accuracy and image quality. The positions of the preset alignment face feature points can be obtained statistically from a high-quality data set (a set of training data): each feature point in the data set is labeled by face keypoint detection, and the labeled positions are then averaged.
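As a hedged illustration of this alignment step, the sketch below estimates a least-squares similarity transform (scale, rotation, translation) that maps detected keypoints onto a preset template of averaged feature-point positions. The patent does not specify the estimator, so the Umeyama-style solution is an assumption, and the keypoint values used in testing are invented.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform (Umeyama-style) mapping src to dst.

    src, dst: (N, 2) arrays of corresponding keypoints (detected points and
    the preset template of averaged feature-point positions, respectively).
    Returns a 2x3 affine matrix [s*R | t] usable for image warping.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance of the centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[1, 1] = -1  # guard against a reflection
    R = U @ S @ Vt
    scale = (D * S.diagonal()).sum() / src_c.var(0).sum()
    t = mu_d - scale * R @ mu_s
    return np.hstack([scale * R, t[:, None]])
```

In practice the returned 2x3 matrix would be passed to an image-warping routine (e.g. OpenCV's `cv2.warpAffine`) to produce the aligned occluded face image.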
Step S20: performing image blocking on the occluded face image to obtain a plurality of block images, and extracting local images from the plurality of block images.
Please refer to fig. 3, which is a flowchart of step S20 in the method for recognizing an occluded face according to the present invention.
As shown in fig. 3, the step S20 includes:
s21, carrying out image blocking processing on the aligned shielding face image to obtain a plurality of blocked images;
and S22, extracting the local image from the block image.
Specifically, the aligned face image is segmented according to prior knowledge (knowledge prior to the current experience, for example a rule derived from repeated experiments or analysis) and the image size required for subsequent local image extraction (the required size can be determined from a pre-input instruction), obtaining a plurality of block images; recognizing each single block image separately speeds up local image extraction. It will be appreciated that a single block image is recognized faster than a face image composed of several block images. For example, as shown in fig. 8 (a diagram of preset image segmentation), dividing the occluded face image into 4 blocks yields the segmentation result shown in fig. 9, where the content of each block image, from left to right, represents the right eye, the left eye, the nose and the mouth.
According to the input size required by the local feature extractor, the feature-point images detected in fig. 8 (i.e., the four feature-point images in fig. 9) are expanded by a certain proportion (an empirical value in the range of 10%-30%, for example 10% of the original image) to ensure the accuracy of local image extraction.
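A minimal sketch of the pre-segmentation and margin-expansion steps described above; the 112x112 face size, the quadrant layout and the 10% margin are assumptions consistent with the description rather than exact patent parameters.

```python
import numpy as np

def expand_box(box, margin, height, width):
    """Expand a (top, left, bottom, right) box by a fraction of its size,
    clipped to the image bounds; margin is e.g. 0.1-0.3 per the description."""
    t, l, b, r = box
    dh, dw = margin * (b - t), margin * (r - l)
    return (max(0, int(t - dh)), max(0, int(l - dw)),
            min(height, int(b + dh)), min(width, int(r + dw)))

def quadrant_boxes(height, width):
    """Pre-segment the aligned face into four quadrant boxes
    (fig. 9 order: right eye, left eye, nose, mouth)."""
    h, w = height // 2, width // 2
    return {"right_eye": (0, 0, h, w), "left_eye": (0, w, h, width),
            "nose": (h, 0, height, w), "mouth": (h, w, height, width)}

# Toy aligned face image and the expanded patches cut from it.
face = np.zeros((112, 112), dtype=np.float32)
boxes = {name: expand_box(b, 0.1, *face.shape)
         for name, b in quadrant_boxes(*face.shape).items()}
patches = {name: face[t:b, l:r] for name, (t, l, b, r) in boxes.items()}
```

Each patch is slightly larger than its quadrant, which is what gives the local feature extractor the margin the description calls for.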
A local image extractor obtained through machine learning (machine learning is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory and the like; it studies how a computer simulates or realizes human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its performance) judges whether a region of interest exists in a block image (i.e., a region that detection needs to attend to, which can be understood as the region the detector cares about in the occluded image, or as the target region, for example whether the mouth is occluded), and performs image extraction if it exists. As shown in fig. 9 (a diagram of local image extraction), the left and right eyes, the nose and the mouth are extracted from the block images respectively. The machine-learned local image extractor is not limited to one kind; it may be, for example, an AdaBoost-, Haar- or HOG-based extractor, the aim being to extract the region of interest quickly and efficiently.
Step S30: performing image recognition on each local image to obtain a plurality of candidate recognition images of the occluded face image.
Please refer to fig. 4, which is a flowchart of step S30 in the method for recognizing an occluded face according to the present invention.
As shown in fig. 4, the step S30 includes:
s31, respectively carrying out image recognition on each local image, and respectively obtaining at least one alternative recognition image corresponding to each local image;
and S32, taking all the alternative recognition images corresponding to all the local images as alternative recognition images of the occlusion face image.
Specifically, a face database is first established: a plurality of non-occluded faces (i.e., the recorded faces are clear face images without any occlusion, defined as non-occluded faces) are received in advance and stored in the face database (the database may be updated in real time, and each face image records the identity of the corresponding user, such as name, sex, age and other identity information, i.e., the real information of the face). The local images are then matched against the face database, and at least one non-occluded face image corresponding to each local image is determined (for example, if 1 local image corresponds to 5 non-occluded face images, then 4 local images correspond to 20 non-occluded face images). These non-occluded face images are candidate recognition images of the occluded face image, and all the candidate recognition images (for example, 20) corresponding to all the local images are taken as the candidate recognition images of the occluded face image.
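The matching of one local image against the face database can be sketched with embedding vectors and cosine similarity. The 4-dimensional embeddings, identities and top-k value below are invented for illustration; in practice each local image and each database face region would be embedded by a trained feature extractor.

```python
import numpy as np

def top_k_candidates(query_emb, database, k=5):
    """Return the k database identities whose region embeddings are most
    cosine-similar to the local image's embedding (candidate recognition
    images for that region)."""
    q = query_emb / np.linalg.norm(query_emb)
    scored = []
    for identity, emb in database.items():
        e = emb / np.linalg.norm(emb)
        scored.append((float(q @ e), identity))
    scored.sort(reverse=True)  # highest cosine similarity first
    return [identity for _, identity in scored[:k]]

# Toy database: identity -> illustrative 4-d embedding of the same region.
db = {"alice": np.array([1.0, 0.0, 0.0, 0.0]),
      "bob":   np.array([0.0, 1.0, 0.0, 0.0]),
      "carol": np.array([0.7, 0.7, 0.0, 0.0])}
candidates = top_k_candidates(np.array([0.9, 0.1, 0.0, 0.0]), db, k=2)
```

Running the lookup once per local image and pooling the results yields the full candidate set (e.g. 4 local images x 5 candidates = 20).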
Step S40: determining the occlusion type according to the local images.
Specifically, the current occlusion type of the local images is judged by comparing the distribution of occlusion in the local images with a preset occlusion type table.
In this embodiment, the occlusion type is any combination of one or more of left-eye occlusion, right-eye occlusion, nose occlusion and mouth occlusion; other embodiments may further include eyebrow occlusion, chin occlusion, left-face occlusion, right-face occlusion, forehead occlusion and the like. As shown in fig. 10 (examples of various occlusion types), right-eye occlusion, left-eye occlusion, and simultaneous mouth-and-nose occlusion are illustrated; the last is a composite occlusion. Excluding complete occlusion, which cannot appear in a face detection result, there are 11 occlusion types in total, as shown in the following table:
[Table rendered as an image in the original document.]
Table of different occlusion types (where 0 stands for no occlusion and 1 stands for occlusion)
The current occlusion type of the local images is judged according to this table.
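Since the occlusion type table survives only as an image in this copy of the document, its lookup can be sketched as a mapping from a 4-bit occlusion pattern (left eye, right eye, nose, mouth; 1 = occluded) to a type label. The labels and the subset of patterns shown are illustrative assumptions, not the patent's exact table.

```python
# Occlusion pattern: (left_eye, right_eye, nose, mouth), 1 = occluded.
# Illustrative subset of the table; full occlusion (1, 1, 1, 1) is excluded
# because a fully occluded face would not pass face detection.
OCCLUSION_TABLE = {
    (0, 0, 0, 0): "no occlusion",
    (1, 0, 0, 0): "left eye occluded",
    (0, 1, 0, 0): "right eye occluded",
    (0, 0, 1, 0): "nose occluded",
    (0, 0, 0, 1): "mouth occluded",
    (1, 1, 0, 0): "both eyes occluded",
    (0, 0, 1, 1): "mouth and nose occluded",  # e.g. wearing a mask
}

def occlusion_type(extracted):
    """Map which local images were successfully extracted to a type label.

    extracted: dict region -> bool (True if the region was extracted;
    a region that failed extraction is treated as occluded).
    """
    pattern = tuple(int(not extracted[r])
                    for r in ("left_eye", "right_eye", "nose", "mouth"))
    return OCCLUSION_TABLE.get(pattern, "composite occlusion")
```

Patterns absent from the table fall back to a generic composite label, matching the description's notion of composite occlusion.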
Step S50, determining a final recognition image of the occlusion face image from the plurality of candidate recognition images according to the occlusion type.
Please refer to fig. 5, which is a flowchart of step S50 in the method for recognizing an occluded face according to the present invention.
As shown in fig. 5, the step S50 includes:
s51, determining the area identification weight of the image area corresponding to each local image according to the shielding type;
s52, calculating the similarity between the alternative recognition image and the occlusion face image according to the region recognition weight;
and S53, determining the final recognition image of the occlusion face image from the alternative recognition images according to the similarity.
Specifically, different occlusion types are judged according to the number and distribution of the local images, and the subsequent recognition weights are determined accordingly. For example, an occluded face image is blocked and 4 local images are extracted: local images 1, 2, 3 and 4. After the occlusion type is determined, corresponding weights are assigned to the local images of the different regions during recognition: local image 1 has region recognition weight A, local image 2 has weight B, local image 3 has weight C, and local image 4 has weight D.
More specifically, during recognition the local images of the different regions are assigned their corresponding weights (as above, local image 1 corresponds to region recognition weight A, local image 2 to B, local image 3 to C, and local image 4 to D). The similarity of each image region in a candidate recognition image corresponding to each local image is then calculated according to the region recognition weights; the sum of the region similarities in the candidate recognition image is taken as the similarity between the candidate recognition image and the occluded face image; finally, the final recognition image of the occluded face image (i.e., the target image the invention ultimately seeks to recognize) is determined from the candidate recognition images according to the similarity.
Further, after the occlusion type is determined, image recognition and extraction are performed on each existing local image, and a preset number of candidate recognition images meeting preset requirements are extracted (for example, 1 local image corresponds to 5 candidate recognition images; in this invention, preferably, 4 local images correspond to 20 candidate recognition images).
The sum of the region similarities in a candidate recognition image is calculated as: Sn = An + Bn + Cn + Dn;
where n is the index of the candidate recognition image, n ≤ 20 (or another constant: if 1 local image corresponds to 5 candidate recognition images, then 4 local images correspond to 20 candidate recognition images), and Sn is the sum of the region similarities of candidate n, so the score set of all candidate recognition images is {S1, S2, S3, …, Sn}; that is, Sn represents the weighted sum of the region similarities. A, B, C and D denote the weights of the left-eye, right-eye, nose and mouth local images respectively (i.e., the coefficients needed during block image recognition; they may be the same or different for local images of different regions), and An, Bn, Cn and Dn are the corresponding weighted region similarities. For one region, the weighted similarities over its candidate images form the set {z·p1, x·p2, c·p3, v·p4, m·p5}, where p1 to p5 are candidate recognition images and z, x, c, v and m are weights that decrease with decreasing similarity.
The 4 local image weights (i.e., A, B, C and D) may be preset for a specific scene. In an ordinary scene they may take equal values (average weighting, i.e., A, B, C and D are equal), while other scenes use weights set in advance as required; that is, the setting is not limited to one type.
According to the result of the weighting calculation, the candidate recognition image with the highest value is obtained (i.e., the candidate whose sum of region similarities is largest; the face image corresponding to that candidate is the final recognition image to be recognized by the present invention, that is, among the 20 candidate recognition images, the one with the largest weighted sum is the final recognition image). The non-occluded face image (final recognition image) corresponding to the occluded face image is thus recognized, which is equivalent to restoring the original, unoccluded face image.
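The weighted summation and highest-score selection described above can be sketched numerically. The per-region weights and similarity values below are invented for illustration; Sn is the weighted sum of the region similarities of candidate n, and the candidate with the largest Sn is taken as the final recognition image.

```python
# Region weights A, B, C, D (left eye, right eye, nose, mouth). With the
# mouth occluded, its weight is set to zero so the occlusion cannot distort
# the score; the remaining weight values are illustrative only.
weights = {"left_eye": 0.35, "right_eye": 0.35, "nose": 0.30, "mouth": 0.0}

# Invented per-region similarities of each candidate to the probe image.
candidates = {
    "p1": {"left_eye": 0.95, "right_eye": 0.93, "nose": 0.90, "mouth": 0.10},
    "p2": {"left_eye": 0.60, "right_eye": 0.65, "nose": 0.70, "mouth": 0.95},
}

def weighted_score(sims, weights):
    """Sn = sum over regions of (region weight x region similarity)."""
    return sum(weights[r] * sims[r] for r in weights)

scores = {name: weighted_score(sims, weights)
          for name, sims in candidates.items()}
final = max(scores, key=scores.get)  # candidate with the largest Sn
```

Note that candidate p2's strong mouth similarity is irrelevant here because the mouth region carries zero weight under this occlusion type.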
The method identifies the occluded face through local images, adapts to various scenes, and has good robustness. Furthermore, the local-image extraction step can also be used to judge the face pose (for example a side face, in which extraction of the mouth image may fail), providing auxiliary information for the final face identification.
Further, as shown in fig. 11, based on the above method for identifying the occluded face, the present invention also provides an intelligent terminal, which includes a processor 10, a memory 20, and a display 30. Fig. 11 shows only some of the components of the smart terminal, but it should be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
The memory 20 may be an internal storage unit of the intelligent terminal in some embodiments, such as a hard disk or memory of the intelligent terminal. The memory 20 may also be an external storage device of the intelligent terminal in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the intelligent terminal. Further, the memory 20 may include both an internal storage unit and an external storage device of the intelligent terminal. The memory 20 is used for storing application software installed on the intelligent terminal and various kinds of data, such as the program code installed on the intelligent terminal. The memory 20 may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores an occluded face recognition program 40, and the occluded face recognition program 40 can be executed by the processor 10 to implement the occluded face recognition method of the present application.
The processor 10 may in some embodiments be a central processing unit (CPU), a microprocessor or another data processing chip, and is used for running the program code stored in the memory 20 or processing data, for example executing the occluded face recognition method.
The display 30 may in some embodiments be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch panel, or the like. The display 30 is used for displaying information on the intelligent terminal and for presenting a visual user interface. The components 10-30 of the intelligent terminal communicate with each other via a system bus.
In one embodiment, when executing the occluded face recognition program 40 in the memory 20, the processor 10 performs the following steps:
acquiring an occluded face image;
performing image blocking on the occluded face image to obtain a plurality of block images, and extracting local images from the plurality of block images;
performing image recognition on each local image to obtain a plurality of candidate recognition images of the occluded face image;
determining an occlusion type from the local images;
and determining, according to the occlusion type, a final recognition image of the occluded face image from the plurality of candidate recognition images.
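The five steps above can be sketched as a toy pipeline. Every helper below is a hypothetical stub standing in for the real detector, recognizer and weighting logic; the tiny 4x4 "image" and the database entries are invented:

```python
# Toy end-to-end sketch of the five processor steps. All function bodies
# are stand-ins for real models; data values are invented.

def acquire_occluded_face(frame):
    return frame  # stub for step 1: assume the frame is already a cropped face

def block_image(face):
    # step 2a: split a small "image" (list of rows) into four equal blocks
    h, w = len(face) // 2, len(face[0]) // 2
    return [[row[:w] for row in face[:h]], [row[w:] for row in face[:h]],
            [row[:w] for row in face[h:]], [row[w:] for row in face[h:]]]

def extract_local_images(blocks):
    return blocks  # stub for step 2b: keep every block as a local image

def recognize_local(img, database):
    # stub for step 3: an enrolled entry matches if it shares any pixel value
    flat = {v for row in img for v in row}
    return [name for name, vals in database.items() if flat & vals]

def determine_occlusion_type(local_images):
    # stub for step 4: an all-zero block is treated as an occluded region
    kinds = ["left_eye", "right_eye", "nose", "mouth"]
    return [k for k, img in zip(kinds, local_images)
            if all(v == 0 for row in img for v in row)]

def select_final(candidates, occlusion_type):
    # stub for step 5: the most frequently matched identity wins
    return max(set(candidates), key=candidates.count) if candidates else None

def recognize_occluded_face(frame, database):
    face = acquire_occluded_face(frame)
    local_images = extract_local_images(block_image(face))
    candidates = [m for img in local_images for m in recognize_local(img, database)]
    occlusion_type = determine_occlusion_type(local_images)
    return select_final(candidates, occlusion_type)

frame = [[1, 2, 0, 0], [3, 4, 0, 0], [5, 6, 7, 8], [9, 10, 11, 12]]
database = {"alice": {4, 7, 11}, "bob": {99}}
result = recognize_occluded_face(frame, database)
```

In this toy run the top-right block is all zeros (a "right eye occlusion"), yet the remaining blocks still match the enrolled identity.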
The acquiring of the occluded face image specifically comprises:
detecting a face present in the current image;
and extracting the occluded face image at a preset image size from the image.
Before performing image blocking on the occluded face image to obtain a plurality of block images and extracting local images from the plurality of block images, the method further comprises:
aligning the occluded face image according to preset alignment feature points to obtain an aligned occluded face image.
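One common way to carry out such feature-point alignment is to solve for the rotation and scale that map the detected eye centers onto preset template positions. This is a generic sketch under that assumption, not the patent's exact alignment procedure; all coordinates are invented:

```python
import math

# Compute the rotation angle and scale factor that would align a detected
# face with a template, using the two eye centers as the alignment feature
# points. Applying the resulting transform to the pixels is left out here.

def alignment_params(left_eye, right_eye, tmpl_left, tmpl_right):
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    tdx, tdy = tmpl_right[0] - tmpl_left[0], tmpl_right[1] - tmpl_left[1]
    angle = math.degrees(math.atan2(dy, dx) - math.atan2(tdy, tdx))
    scale = math.hypot(tdx, tdy) / math.hypot(dx, dy)
    return angle, scale  # rotate by -angle and scale the crop to align

# invented detected / template eye positions (x, y)
angle, scale = alignment_params((60, 110), (140, 90), (64, 100), (136, 100))
```

A tilted face (here the right eye sits higher than the left) yields a negative angle, i.e. the crop must be rotated back to level the eyes before blocking.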
Performing image blocking on the occluded face image to obtain a plurality of block images and extracting local images from the plurality of block images specifically comprises:
performing image blocking on the aligned occluded face image to obtain the plurality of block images;
and extracting the local images from the block images.
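Since alignment places the facial parts at roughly fixed positions, the local images can be cut out as fixed sub-rectangles of the aligned image. The 112x112 size and all region boxes below are invented for illustration, not taken from the patent:

```python
# Crop the four local images (left eye, right eye, nose, mouth) from an
# aligned face stored as a list of pixel rows. Box coordinates are invented.

REGIONS = {                      # (row0, row1, col0, col1) in the aligned face
    "left_eye":  (28, 52, 12, 52),
    "right_eye": (28, 52, 60, 100),
    "nose":      (48, 80, 36, 76),
    "mouth":     (76, 104, 28, 84),
}

def extract_local_images(aligned):
    return {name: [row[c0:c1] for row in aligned[r0:r1]]
            for name, (r0, r1, c0, c1) in REGIONS.items()}

# synthetic 112x112 "aligned face" filled with deterministic values
aligned = [[(r * 112 + c) % 251 for c in range(112)] for r in range(112)]
parts = extract_local_images(aligned)
```

Each part can then be fed to the per-region recognizer independently, which is what makes the method tolerant of one region being covered.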
Performing image recognition on each local image to obtain a plurality of candidate recognition images of the occluded face image specifically comprises:
performing image recognition on each local image and obtaining at least one candidate recognition image corresponding to each local image;
and taking all candidate recognition images corresponding to all local images as the candidate recognition images of the occluded face image.
Determining, according to the occlusion type, a final recognition image of the occluded face image from the plurality of candidate recognition images specifically comprises:
determining a region recognition weight for the image region corresponding to each local image according to the occlusion type;
calculating the similarity between each candidate recognition image and the occluded face image according to the region recognition weights;
and determining the final recognition image of the occluded face image from the candidate recognition images according to the similarity.
Calculating the similarity between a candidate recognition image and the occluded face image according to the region recognition weights specifically comprises:
calculating, according to the region recognition weights, the similarity of each image region corresponding to each local image in the candidate recognition image;
and taking the sum of the per-region similarity values of the candidate recognition image as its similarity to the occluded face image.
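One plausible scheme (not mandated by the patent) for deriving the region recognition weights from the occlusion type is to zero out occluded regions and share the weight equally among the visible ones; the candidate's similarity is then the weighted per-region sum described above:

```python
# Hypothetical weight assignment: occluded regions contribute nothing, and
# the visible regions split the total weight equally. All values invented.

ALL_REGIONS = ("left_eye", "right_eye", "nose", "mouth")

def region_weights(occluded):
    visible = [r for r in ALL_REGIONS if r not in occluded]
    return {r: (1.0 / len(visible) if r in visible else 0.0) for r in ALL_REGIONS}

def candidate_similarity(region_sims, weights):
    # sum of per-region similarity values, scaled by the region weights
    return sum(weights[r] * region_sims.get(r, 0.0) for r in ALL_REGIONS)

w = region_weights({"left_eye"})             # left-eye occlusion type
s = candidate_similarity({"right_eye": 0.9, "nose": 0.6, "mouth": 0.9}, w)
```

Zeroing the occluded region keeps a sunglasses-covered eye, say, from dragging down an otherwise strong match.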
Determining the occlusion type from the local images specifically comprises:
comparing the distribution of occlusion in the local images with a preset occlusion type table, and judging the current occlusion type of the local images.
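The "preset occlusion type table" can be as simple as a mapping from the set of occluded local images to a type label. The entries below are invented examples; a real table would enumerate whatever combinations the deployment needs:

```python
# Hypothetical occlusion type table: which local images are occluded -> label.

OCCLUSION_TABLE = {
    frozenset(): "no_occlusion",
    frozenset({"left_eye"}): "left_eye_occlusion",
    frozenset({"right_eye"}): "right_eye_occlusion",
    frozenset({"nose"}): "nose_occlusion",
    frozenset({"mouth"}): "mouth_occlusion",
    frozenset({"left_eye", "right_eye"}): "both_eyes_occlusion",
    frozenset({"nose", "mouth"}): "mask_occlusion",
}

def occlusion_type(occluded_regions):
    return OCCLUSION_TABLE.get(frozenset(occluded_regions), "unknown_combination")

t = occlusion_type(["nose", "mouth"])   # e.g. a face wearing a mask
```

Using frozenset keys makes the lookup order-independent, so "nose then mouth" and "mouth then nose" hit the same table entry.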
Performing image recognition on each local image specifically comprises:
receiving a plurality of non-occluded faces entered in advance and storing them in a face database;
and matching the local images against the face database to determine at least one non-occluded face image corresponding to the local images, the non-occluded face image serving as a candidate recognition image of the occluded face image.
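Matching a local image against the enrolled non-occluded faces can be sketched as nearest-neighbour retrieval over feature vectors. The patent does not specify the feature or the metric; cosine similarity and all vectors below are illustrative assumptions:

```python
import math

# Rank the enrolled identities by cosine similarity to a query feature
# vector and keep the top k as candidate recognition images. Toy 2-d
# vectors stand in for real face embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k_matches(query_vec, database, k=5):
    scored = sorted(((cosine(query_vec, v), name) for name, v in database.items()),
                    reverse=True)
    return [name for _, name in scored[:k]]

database = {"alice": (0.9, 0.1), "bob": (0.5, 0.5), "carol": (0.1, 0.9)}
matches = top_k_matches((0.8, 0.2), database, k=2)
```

Keeping several candidates per local image (here k=2) is what feeds the later weighted selection, where the regions vote on a single final identity.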
The occlusion type is any combination of one or more of left-eye occlusion, right-eye occlusion, nose occlusion, and mouth occlusion.
The present invention further provides a storage medium storing an occluded face recognition program which, when executed by a processor, implements the steps of the occluded face recognition method described above.
In summary, the present invention provides an occluded face recognition method, an intelligent terminal and a storage medium. The method comprises: acquiring an occluded face image; performing image blocking on the occluded face image to obtain a plurality of block images, and extracting local images from the block images; performing image recognition on each local image to obtain a plurality of candidate recognition images of the occluded face image; determining an occlusion type from the local images; and determining, according to the occlusion type, a final recognition image of the occluded face image from the candidate recognition images. This occluded face recognition technique based on local-image recognition has low data-processing requirements, does not need deliberately augmented occlusion data, extracts local images quickly, and recognizes the whole face in an occluded face image accurately and efficiently.
Of course, it will be understood by those skilled in the art that all or part of the processes of the above method embodiments may be implemented by a computer program instructing the relevant hardware (such as a processor or controller); the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk, or the like.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (12)

1. An occluded face recognition method, characterized in that the occluded face recognition method comprises:
acquiring an occluded face image;
performing image blocking on the occluded face image to obtain a plurality of block images, and extracting local images from the plurality of block images;
performing image recognition on each local image to obtain a plurality of candidate recognition images of the occluded face image;
determining an occlusion type from the local images;
determining, according to the occlusion type, a final recognition image of the occluded face image from the plurality of candidate recognition images.
2. The occluded face recognition method according to claim 1, wherein the acquiring of the occluded face image specifically comprises:
detecting a face present in the current image;
and extracting the occluded face image at a preset image size from the image.
3. The occluded face recognition method according to claim 2, wherein before performing image blocking on the occluded face image to obtain a plurality of block images and extracting local images from the plurality of block images, the method further comprises:
aligning the occluded face image according to preset alignment feature points to obtain an aligned occluded face image.
4. The occluded face recognition method according to claim 3, wherein performing image blocking on the occluded face image to obtain a plurality of block images and extracting local images from the plurality of block images specifically comprises:
performing image blocking on the aligned occluded face image to obtain the plurality of block images;
and extracting the local images from the block images.
5. The occluded face recognition method according to claim 1, wherein performing image recognition on each local image to obtain a plurality of candidate recognition images of the occluded face image specifically comprises:
performing image recognition on each local image and obtaining at least one candidate recognition image corresponding to each local image;
and taking all candidate recognition images corresponding to all local images as the candidate recognition images of the occluded face image.
6. The occluded face recognition method according to claim 1, wherein determining, according to the occlusion type, a final recognition image of the occluded face image from the plurality of candidate recognition images comprises:
determining a region recognition weight for the image region corresponding to each local image according to the occlusion type;
calculating the similarity between each candidate recognition image and the occluded face image according to the region recognition weights;
and determining the final recognition image of the occluded face image from the candidate recognition images according to the similarity.
7. The occluded face recognition method according to claim 6, wherein calculating the similarity between each candidate recognition image and the occluded face image according to the region recognition weights specifically comprises:
calculating, according to the region recognition weights, the similarity of each image region corresponding to each local image in the candidate recognition image;
and taking the sum of the per-region similarity values of the candidate recognition image as its similarity to the occluded face image.
8. The occluded face recognition method according to claim 4, wherein determining the occlusion type from the local images specifically comprises:
comparing the distribution of occlusion in the local images with a preset occlusion type table, and judging the current occlusion type of the local images.
9. The occluded face recognition method according to claim 1, wherein performing image recognition on each local image specifically comprises:
receiving a plurality of non-occluded faces entered in advance and storing them in a face database;
and matching the local images against the face database to determine at least one non-occluded face image corresponding to the local images, the non-occluded face image serving as a candidate recognition image of the occluded face image.
10. The occluded face recognition method according to claim 1, wherein the occlusion type is any combination of one or more of left-eye occlusion, right-eye occlusion, nose occlusion, and mouth occlusion.
11. An intelligent terminal, characterized in that the intelligent terminal comprises: a memory, a processor, and an occluded face recognition program stored in the memory and executable on the processor, the occluded face recognition program, when executed by the processor, implementing the steps of the occluded face recognition method according to any one of claims 1-10.
12. A storage medium, characterized in that the storage medium stores an occluded face recognition program which, when executed by a processor, implements the steps of the occluded face recognition method according to any one of claims 1-10.
CN202010244613.5A 2020-03-31 2020-03-31 Occlusion face recognition method, intelligent terminal and storage medium Active CN113468925B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010244613.5A CN113468925B (en) 2020-03-31 2020-03-31 Occlusion face recognition method, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010244613.5A CN113468925B (en) 2020-03-31 2020-03-31 Occlusion face recognition method, intelligent terminal and storage medium

Publications (2)

Publication Number Publication Date
CN113468925A true CN113468925A (en) 2021-10-01
CN113468925B CN113468925B (en) 2024-02-20

Family

ID=77865454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010244613.5A Active CN113468925B (en) 2020-03-31 2020-03-31 Occlusion face recognition method, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113468925B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115810214A (en) * 2023-02-06 2023-03-17 广州市森锐科技股份有限公司 Verification management method, system, equipment and storage medium based on AI face recognition
CN116128514A (en) * 2022-11-28 2023-05-16 武汉利楚商务服务有限公司 Face brushing payment method and device under multi-face intervention

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1975759A (en) * 2006-12-15 2007-06-06 中山大学 Human face identifying method based on structural principal element analysis
CN106372595A (en) * 2016-08-31 2017-02-01 重庆大学 Shielded face identification method and device
CN107463920A (en) * 2017-08-21 2017-12-12 吉林大学 A kind of face identification method for eliminating partial occlusion thing and influenceing
CN108446619A (en) * 2018-03-12 2018-08-24 清华大学 Face critical point detection method and device based on deeply study
CN108932456A (en) * 2017-05-23 2018-12-04 北京旷视科技有限公司 Face identification method, device and system and storage medium
CN109215131A (en) * 2017-06-30 2019-01-15 Tcl集团股份有限公司 The driving method and device of conjecture face
US20190279365A1 (en) * 2018-03-07 2019-09-12 Omron Corporation Imaging apparatus
CN110569756A (en) * 2019-08-26 2019-12-13 长沙理工大学 face recognition model construction method, recognition method, device and storage medium
CN110705337A (en) * 2018-07-10 2020-01-17 普天信息技术有限公司 Face recognition method and device aiming at glasses shielding
CN110782554A (en) * 2018-07-13 2020-02-11 宁波其兰文化发展有限公司 Access control method based on video photography

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZOPF R et al.: "Limits on visual awareness of object targets in the context of other object category masks: Investigating bottlenecks in the continuous flash suppression paradigm with hand and tool stimuli", Journal of Vision, vol. 19, no. 5
TONG Qionglin et al.: "A survey of occluded face recognition techniques", Journal of Yili Normal University (Natural Science Edition), vol. 14, no. 1

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116128514A (en) * 2022-11-28 2023-05-16 武汉利楚商务服务有限公司 Face brushing payment method and device under multi-face intervention
CN116128514B (en) * 2022-11-28 2023-10-13 武汉利楚商务服务有限公司 Face brushing payment method and device under multi-face intervention
CN115810214A (en) * 2023-02-06 2023-03-17 广州市森锐科技股份有限公司 Verification management method, system, equipment and storage medium based on AI face recognition
CN115810214B (en) * 2023-02-06 2023-05-12 广州市森锐科技股份有限公司 AI-based face recognition verification management method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN113468925B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
WO2021077984A1 (en) Object recognition method and apparatus, electronic device, and readable storage medium
CN108829900B (en) Face image retrieval method and device based on deep learning and terminal
Zhang et al. Fast and robust occluded face detection in ATM surveillance
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
KR100647322B1 (en) Apparatus and method of generating shape model of object and apparatus and method of automatically searching feature points of object employing the same
US20110293189A1 (en) Facial Analysis Techniques
CN111144366A (en) Strange face clustering method based on joint face quality assessment
EP3647992A1 (en) Face image processing method and apparatus, storage medium, and electronic device
Tarrés et al. A novel method for face recognition under partial occlusion or facial expression variations
US20080304699A1 (en) Face feature point detection apparatus and method of the same
CN105303150A (en) Method and system for implementing image processing
Parris et al. Face and eye detection on hard datasets
CN110443181A (en) Face identification method and device
CN113468925A (en) Shielded face recognition method, intelligent terminal and storage medium
CN113239739A (en) Method and device for identifying wearing article
Lahiani et al. Hand pose estimation system based on Viola-Jones algorithm for android devices
CN111666976A (en) Feature fusion method and device based on attribute information and storage medium
CN105701486A (en) Method for realizing human face information analysis and extraction in video camera
Wei et al. Omni-face detection for video/image content description
CN115115976A (en) Video processing method and device, electronic equipment and storage medium
Reddy et al. Comparison of HOG and fisherfaces based face recognition system using MATLAB
CN114241202A (en) Method and device for training dressing classification model and method and device for dressing classification
Lin et al. An effective eye states detection method based on the projection of the gray interval distribution
Frías-Velázquez et al. Object identification by using orthonormal circus functions from the trace transform
Paul et al. Extraction of facial feature points using cumulative distribution function by varying single threshold group

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant