CN111199176B - Face identity detection method and device - Google Patents

Face identity detection method and device

Info

Publication number: CN111199176B
Application number: CN201811385951.XA
Authority: CN (China)
Prior art keywords: image, face, adjustment, identity, target
Legal status: Active
Original language: Chinese (zh)
Other versions: CN111199176A
Inventors: 熊宇龙, 林志
Current and original assignee: Zhejiang Uniview Technologies Co Ltd
Application filed by Zhejiang Uniview Technologies Co Ltd

Classifications

    • G06V40/168 — Human faces: feature extraction; face representation
    • G06V40/171 — Human faces: local features and components; facial parts, e.g. occluding parts such as glasses; geometrical relationships
    • G06F21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06V40/161 — Human faces: detection; localisation; normalisation
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application provides a face identity detection method and device. The method comprises the following steps: extracting feature points from a face image to be detected to obtain the face feature points of the image and the adjacent pixel points corresponding to each face feature point; calculating an image topology operator vector group between each face feature point and its corresponding adjacent pixel points; obtaining, for each face feature point, the adjustment parameters corresponding to its image topology operator vector group; performing image adjustment on the face image to be detected based on the obtained adjustment parameters of all the face feature points to obtain a corresponding target adjustment image; and comparing the target adjustment image with the stored face identity images to obtain a target identity image matched with the target adjustment image, thereby completing face identity detection of the face image to be detected. The method can reduce the influence of beautification technology on the face identity recognition result and improve the accuracy of face identity recognition.

Description

Face identity detection method and device
Technical Field
The application relates to the technical field of face identity recognition, in particular to a face identity detection method and device.
Background
Image deformation and beautification processing technology ("beautification technology") lends a beautifying effect to photographs of people, but at the same time poses no small challenge to face identity recognition. There are numerous differences between the face features shown in a beautified face image and the real face features. If identity recognition is performed directly on a beautified face image, a face recognition technique that matches the image against stored patterns will output a face identity image representing a specific person that differs greatly from the real face identity image corresponding to the image under inspection, which harms the accuracy of face identity recognition.
Disclosure of Invention
In order to overcome the above deficiencies in the prior art, the purpose of the present application is to provide a face identity detection method and device, where the face identity detection method can reduce the influence of beautification technology on the face identity recognition result and improve the accuracy of face identity recognition.
In a first aspect, an embodiment of the present application provides a face identity detection method applied to an image processing device, where the image processing device stores a correspondence between image topology operator vector groups and adjustment parameters, as well as face identity images for indicating specific identities of persons, and the method includes:
extracting feature points of a face image to be detected to obtain face feature points of the face image to be detected and adjacent pixel points corresponding to each face feature point;
calculating an image topology operator vector group between each face feature point and a corresponding adjacent pixel point;
acquiring adjustment parameters of each face feature point corresponding to an image topology operator vector group of the face feature point;
performing image adjustment on the face image to be detected based on the acquired adjustment parameters of all the face feature points to obtain a corresponding target adjustment image;
and comparing the target adjustment image with the stored face identity images to obtain, from all the face identity images, a target identity image matched with the target adjustment image, so as to complete face identity detection of the face image to be detected.
In a second aspect, an embodiment of the present application provides a face identity detection apparatus applied to an image processing device, where the image processing device stores a correspondence between image topology operator vector groups and adjustment parameters, as well as face identity images for indicating specific identities of persons, and the apparatus includes:
the feature point extraction module is used for extracting feature points of the face image to be detected to obtain face feature points of the face image to be detected and adjacent pixel points corresponding to each face feature point;
the vector group calculation module is used for calculating an image topology operator vector group between each face characteristic point and the corresponding adjacent pixel point;
the parameter acquisition module is used for acquiring adjustment parameters of each face characteristic point, which correspond to the image topology operator vector group of the face characteristic point;
the image adjustment module is used for carrying out image adjustment on the face image to be detected based on the acquired adjustment parameters of all the face feature points to obtain a corresponding target adjustment image;
and the identity comparison module is used for comparing the target adjustment image with the stored face identity images to obtain, from all the face identity images, a target identity image matched with the target adjustment image, so as to complete face identity detection of the face image to be detected.
Compared with the prior art, the face identity detection method and device provided by the embodiments of the present application have the following beneficial effects: the face identity detection method can reduce the influence of beautification technology on the face identity recognition result and improve the accuracy of face identity recognition. First, the method extracts feature points from the face image to be detected to obtain the face feature points in the image and the adjacent pixel points corresponding to each face feature point. Next, by calculating the image topology operator vector group between each face feature point and its corresponding adjacent pixel points, the method obtains, from the correspondence between image topology operator vector groups and adjustment parameters stored in the image processing device, the adjustment parameters that reduce the beautification effect at each face feature point. The method then performs image adjustment on the face image to be detected based on the obtained adjustment parameters to produce a corresponding target adjustment image, weakening any beautification effect present in the image. Finally, by comparing the target adjustment image with the face identity images stored in the image processing device, the method obtains the target identity image matched with the target adjustment image, which indicates the specific identity of the person corresponding to the face image to be detected, thereby reducing the influence of beautification technology on face identity recognition and achieving face identity detection with high accuracy.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
For a clearer description of the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of the scope of protection of the claims; a person skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a face identity detection method according to an embodiment of the present application.
Fig. 3 is a schematic flow chart of the sub-steps included in step S230 shown in fig. 2.
Fig. 4 is a schematic flow chart of the sub-steps included in step S240 shown in fig. 2.
Fig. 5 is a schematic flow chart of the sub-steps included in step S250 shown in fig. 2.
Fig. 6 is another flow chart of a face identity detection method according to an embodiment of the present application.
Fig. 7 is a schematic block diagram of a face identity detection apparatus according to an embodiment of the present application.
Fig. 8 is another schematic block diagram of a face identity detection apparatus according to an embodiment of the present application.
Reference numerals: 10-image processing apparatus; 11-memory; 12-processor; 13-communication unit; 100-face identity detection device; 110-feature point extraction module; 120-vector group calculation module; 130-parameter acquisition module; 140-image adjustment module; 150-identity comparison module; 160-relationship configuration module.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, a block diagram of an image processing apparatus 10 according to an embodiment of the present application is shown. In this embodiment of the present application, the image processing apparatus 10 may be used for performing face identity detection with high accuracy on a face image to be detected, where the image processing apparatus 10 includes a face identity detection device 100, a memory 11, a processor 12, and a communication unit 13. The memory 11, the processor 12 and the communication unit 13 are electrically connected directly or indirectly to each other, so as to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The face identity detection apparatus 100 includes at least one software function module capable of being stored in the memory 11 in the form of software or firmware (firmware), and the processor 12 executes various function applications and data processing by executing the corresponding software function module of the face identity detection apparatus 100 stored in the memory 11. In the present embodiment, the image processing apparatus 10 may be, but is not limited to, a server, a mobile terminal, or the like.
In this embodiment, the memory 11 may be configured to store a feature point extraction model for extracting face feature points from a face image, where the face feature points include at least one or more of: the two corner points and center point of each eyebrow; the two corner points, upper and lower eyelid center points, and center point of each eye; the nose tip point, nose vertex, two nose wing points, and nasal septum point; and the two mouth corner points, mouth center point, uppermost point of the upper lip, and lowermost point of the lower lip. Through the feature point extraction model, the image processing apparatus 10 can extract from a face image all face feature points actually present within the model's coverage. The feature point extraction model is obtained by training on sample face images with manually calibrated face feature points based on a convolutional neural network (Convolutional Neural Network, CNN); it may be trained by the image processing apparatus 10 itself through sample training, or obtained from an external device and stored in the memory 11.
In this embodiment, the memory 11 may further be configured to store the correspondence between image topology operator vector groups and adjustment parameters. The adjustment parameters represent the image processing parameters required for a pixel point to reduce the corresponding beautification effect, and an image topology operator vector group represents the image topology relation in RGB color space between an image pixel point and its adjacent pixel points. The image topology operator can be expressed as a Laplacian operator or as a gradient operator, and the specific operator form can be configured according to requirements. In the following description, the Laplacian operator is taken as the image topology operator by way of example; it should be understood that the Laplacian operator is only one form of the image topology operator, the form is not limited to the Laplacian operator, and the related operations with other operator forms are similar to those with the Laplacian operator.
In this embodiment, when the image topology operator is expressed as a Laplacian operator, the Laplacian operator vector group represents the transformation difference between the color vectors (the R (Red), G (Green) and B (Blue) vectors) in RGB color space of an image pixel point and its adjacent pixel points, so as to reflect the degree of beautification processing applied to that pixel point. If the image processing apparatus 10 uses a 4-neighborhood system, it computes the Laplacian operator vector group from the image pixel point and the 4 adjacent pixel points around it; if it uses an 8-neighborhood system, it computes the group from the image pixel point and the 8 adjacent pixel points around it. In one implementation of this embodiment, the image processing apparatus 10 computes the Laplacian operator vector group using the 8-neighborhood system.
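As an illustrative sketch (not code from the patent), the 8-neighborhood Laplacian at a pixel, evaluated independently for the R, G and B channels, could be computed as follows:

```python
import numpy as np

def laplacian_vector_group(img, y, x):
    """Per-channel 8-neighborhood Laplacian at interior pixel (y, x):
    the sum of the eight surrounding pixels minus 8 times the center,
    evaluated separately for the R, G and B channels of an HxWx3 image."""
    patch = img[y - 1:y + 2, x - 1:x + 2, :].astype(np.int32)  # 3x3 window
    center = patch[1, 1, :]
    # patch.sum() includes the center pixel, so subtracting 9*center
    # yields (sum of the 8 neighbors) - 8*center.
    return patch.sum(axis=(0, 1)) - 9 * center
```

Collecting one such R/G/B vector per face feature point gives a vector group that can be looked up against the stored correspondence.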
The correspondence between Laplacian operator vector groups and adjustment parameters can be represented by a parameter comparison model obtained through convolutional neural network training. The training samples used are the Laplacian operator vector groups computed for images ranging from no beautification to beautification with different beauty parameters. The parameter comparison model compares the Laplacian operator vector group obtained under given beauty parameters with the group obtained without beautification (comparing both vector values and vector directions) and generates corresponding adjustment parameters based on the comparison result. The Laplacian operator vector group obtained under the beauty parameters is then vector-transformed using the generated adjustment parameters, and the overall difference between the transformed group and the un-beautified group is computed by the least squares method. The adjustment parameters are adjusted in reverse based on this overall difference until the difference after vector transformation is minimized, and the adjustment parameters at the minimum overall difference are taken as the adjustment parameters corresponding to that Laplacian operator vector group.
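The least-squares fitting step can be illustrated with a deliberately simplified sketch: here the adjustment is modeled as a single per-channel scale (a hypothetical stand-in for the patent's unspecified vector transformation), for which the minimizer has a closed form:

```python
import numpy as np

def fit_channel_scales(v_beauty, v_plain):
    """Solve min_s || s * v_beauty - v_plain ||^2 per channel in closed
    form, where the rows of v_beauty and v_plain are the Laplacian
    vectors of the same feature points with and without beautification.
    The per-channel-scale model is an illustrative simplification."""
    v_b = np.asarray(v_beauty, dtype=float)  # shape (N, 3)
    v_p = np.asarray(v_plain, dtype=float)   # shape (N, 3)
    return (v_b * v_p).sum(axis=0) / (v_b * v_b).sum(axis=0)
```

An iterative "reverse adjustment" as described in the text would converge to the same minimizer for this simple model.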
The beauty parameters include any one or combination of sub-parameters such as whitening operator parameters, skin smoothing operator parameters, stretching operator parameters and skin softening operator parameters, and the adjustment parameters include any one or combination of parameters such as image sharpening parameters, image stretching parameters and image noise parameters. The parameter comparison model may be obtained by the image processing apparatus 10 itself through sample training, or obtained from an external device and stored in the memory 11.
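As an example of one adjustment-parameter type listed above, an image sharpening parameter could act as the `amount` of a simple unsharp mask (an illustrative stand-in; the patent does not specify the sharpening operator):

```python
import numpy as np

def sharpen(img, amount):
    """Unsharp masking on a 2-D grayscale image: subtract a 3x3 box
    blur from the image and add back `amount` times the difference."""
    pad = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    # Average the nine 3x3-shifted views of the padded image = box blur.
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0, 255)
```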
In this embodiment, the memory 11 may also be used to store a face identity image for indicating a specific identity of a person. The memory 11 may be, but is not limited to, a random access memory, a read only memory, a programmable read only memory, an erasable programmable read only memory, an electrically erasable programmable read only memory, etc. The memory 11 may also be used to store various applications that the processor 12 executes after receiving execution instructions. Further, the software programs and modules within the memory 11 may also include an operating system, which may include various software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management, etc.), and may communicate with various hardware or software components to provide an operating environment for other software components.
In this embodiment, the processor 12 may be an integrated circuit chip with signal processing capabilities. The processor 12 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a network processor (Network Processor, NP), etc., and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In the present embodiment, the communication unit 13 is configured to establish a communication connection between the image processing apparatus 10 and other external apparatuses through a network, and perform data transmission through the network. For example, the image processing apparatus 10 may obtain, from an external apparatus, a face image to be detected, which may be a face image after the face beautifying process or an original face image without face beautifying, through the communication unit 13.
In this embodiment, the image processing apparatus 10 can reduce the influence degree of the beautifying technology on the face identification result by the face identification detection device 100 stored in the memory 11, and improve the accuracy of face identification.
It is to be understood that the configuration shown in fig. 1 is only one structural schematic diagram of the image processing apparatus 10, and that the image processing apparatus 10 may further include more or fewer components than those shown in fig. 1, or have a different configuration from that shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Fig. 2 is a schematic flow chart of a face identity detection method according to an embodiment of the present application. In the embodiment of the present application, the face identity detection method is applied to the image processing apparatus 10, where the image processing apparatus 10 stores the correspondence between the image topology operator vector set and the adjustment parameters, and the face identity image for indicating the specific identity of the person. The specific flow and steps of the face identity detection method shown in fig. 2 are described in detail below.
Step S210, extracting feature points of the face image to be detected to obtain face feature points of the face image to be detected and adjacent pixel points corresponding to each face feature point.
In this embodiment, after the image processing apparatus 10 obtains the face image to be detected, feature point extraction may be performed on the face image to be detected based on a feature point extraction model, so as to obtain face feature points of the face image to be detected, and simultaneously obtain adjacent pixel points corresponding to each face feature point in the face image to be detected.
Step S220, calculating an image topology operator vector group between each face feature point and the corresponding adjacent pixel point.
In this embodiment, taking the laplace operator as an example of the expression form of the image topology operator, the image processing apparatus 10 calculates the laplace operator vector set between each face feature point and the corresponding adjacent pixel point in the RGB color space after obtaining the face feature point and the adjacent pixel point of each face feature point in the face image to be detected.
Step S230, obtaining adjustment parameters of each face feature point corresponding to the image topology operator vector group of the face feature point.
Optionally, please refer to fig. 3, which is a flowchart illustrating the sub-steps included in step S230 shown in fig. 2. In this embodiment, the correspondence between the image topology operator vector group and the adjustment parameters includes adjustment parameters corresponding to different image topology operator vector groups, and the step S230 includes a substep S231 and a substep S232.
In sub-step S231, according to the image topology operator vector group of each face feature point, a matching target vector group is searched for among all the image topology operator vector groups stored in the image processing apparatus 10.
In sub-step S232, if the search is successful, the adjustment parameters corresponding to the found target vector group are taken as the adjustment parameters of the face feature point.
In this embodiment, taking the Laplacian operator as the form of the image topology operator as an example, the image processing apparatus 10 inputs the Laplacian operator vector group of each face feature point into the parameter comparison model described above, searches the model for a target vector group having the same vector values and vector directions as that face feature point's Laplacian operator vector group, and takes the adjustment parameters corresponding to the target vector group in the model as the adjustment parameters of the face feature point. It will be appreciated that the Laplacian operator is only one form of the image topology operator; the form is not limited to the Laplacian operator, and the related operations with other operator forms are similar.
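Sub-steps S231 and S232 amount to a lookup among the stored vector groups. A minimal sketch, with a numeric tolerance standing in for exact equality of vector values and directions:

```python
import numpy as np

def find_target_group(query, stored_groups, stored_params, tol=1e-6):
    """Search the stored image topology operator vector groups for one
    matching the query group and return its adjustment parameters;
    return None when no match is found (the unsuccessful-search case)."""
    for group, params in zip(stored_groups, stored_params):
        if np.allclose(query, group, atol=tol):
            return params
    return None
```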
And step S240, performing image adjustment on the face image to be detected based on the acquired adjustment parameters of all the face feature points to obtain a corresponding target adjustment image.
In this embodiment, after obtaining the adjustment parameters of each face feature point in the face image to be detected, the image processing apparatus 10 adjusts the image using the adjustment parameters of all face feature points according to the degree to which those parameters influence each pixel point in the image, obtaining a target adjustment image in which any beautification effect in the face image to be detected is reduced. The adjustment influence degree indicates how much the adjustment parameters at each face feature point participate in the image adjustment of the other pixel points.
Optionally, please refer to fig. 4, which is a flowchart illustrating the sub-steps included in step S240 shown in fig. 2. In this embodiment, the step S240 may include a substep S241 and a substep S242.
Sub-step S241, establishing a blending field over all pixel points in the face image to be detected, and, at the position corresponding to each face feature point in the blending field, uniformly propagating that face feature point's adjustment parameters to the other pixel points in the blending field.
Sub-step S242, performing pixel adjustment on each pixel point in the blending field according to the adjustment parameters that pixel point has received, so as to generate the target adjustment image.
In this embodiment, the blending field is a field that is source-free and irrotational. When each face feature point propagates its adjustment parameters from its corresponding position in the blending field, the parameters radiate uniformly from the face feature point to the surrounding pixel points, so that each pixel point in the blending field receives adjustment parameters from the radiated propagation of every face feature point. The image processing apparatus 10 then performs pixel adjustment and transformation on each pixel point according to the adjustment parameters that pixel point has received, obtaining the target adjustment image corresponding to the adjusted and transformed face image to be detected.
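The radiated propagation can be illustrated with an inverse-distance weighting sketch; the patent's exact field construction is not given here, so this is a simplified stand-in that also assumes scalar adjustment parameters for brevity:

```python
import numpy as np

def propagate_adjustments(feature_pts, feature_params, h, w):
    """Give every pixel an adjustment value as the inverse-distance
    weighted blend of the (scalar, hypothetical) adjustment parameters
    at the face feature points -- a simplified model of uniform
    radiated propagation through the blending field."""
    ys, xs = np.mgrid[0:h, 0:w]
    field = np.zeros((h, w))
    weight_sum = np.zeros((h, w))
    for (fy, fx), p in zip(feature_pts, feature_params):
        d = np.hypot(ys - fy, xs - fx) + 1e-6  # avoid division by zero
        wgt = 1.0 / d
        field += wgt * p
        weight_sum += wgt
    return field / weight_sum
```

Near each feature point the field approaches that point's own parameter, and values blend smoothly in between.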
Step S250, comparing the target adjustment image with the stored face identity image to obtain a target identity image matched with the target adjustment image in all face identity images, so as to complete face identity detection of the face image to be detected.
In this embodiment, after obtaining the target adjustment image with reduced beautification effect corresponding to the face image to be detected, the image processing apparatus 10 compares the target adjustment image with the face identity images stored in the image processing apparatus 10 to find, among all stored face identity images, the target identity image that matches the target adjustment image and indicates the specific identity of the person corresponding to the face image to be detected. This reduces the influence of beautification technology on face identity recognition and achieves face identity detection with high accuracy.
Optionally, please refer to fig. 5, which is a flowchart illustrating the sub-steps included in step S250 shown in fig. 2. In the embodiment of the present application, the step S250 may include a sub-step S251 and a sub-step S252.
In sub-step S251, an image confidence level between the target adjustment image and each stored face identity image is calculated.
In sub-step S252, the face identity image with the highest image confidence is selected as the target identity image to which the face image to be detected points.
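Sub-steps S251 and S252 can be sketched as follows. The patent does not fix how "image confidence" is computed, so cosine similarity between feature embeddings is assumed here purely for illustration; the function names are hypothetical.

```python
import numpy as np

def select_target_identity(target_vec, identity_vecs):
    """Pick the stored identity whose embedding is most similar.

    Sketch of sub-steps S251/S252: 'image confidence' is assumed to
    be cosine similarity between feature embeddings; the patent does
    not specify a particular metric.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [cos(target_vec, v) for v in identity_vecs]  # one confidence per identity
    best = int(np.argmax(scores))                          # highest confidence wins
    return best, scores[best]

# Example: the second stored identity is the closer match.
idx, conf = select_target_identity(
    np.array([1.0, 0.0]),
    [np.array([0.0, 1.0]), np.array([0.9, 0.1])],
)
```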
Fig. 6 is a schematic flow chart of another face identity detection method according to the embodiment of the present application. In this embodiment of the present application, the face identity detection method may further include step S209 before step S210.
Step S209, the corresponding relation between the image topology vector group and the adjustment parameters is configured and stored.
In this embodiment, taking the Laplace operator as the expression form of the image topology operator, the image processing apparatus 10 may configure the correspondence between Laplace operator vector groups and adjustment parameters by using samples to train, based on a convolutional neural network, a parameter calibration model between Laplace operator vector groups and adjustment parameters, and store the correspondence in the memory 11.
When training the parameter calibration model, the image processing device 10 records, for each set of beautification parameters, the Laplace operator vector group of the facial image feature points under those parameters. It compares the Laplace operator vector group obtained with the beautification parameters against the one obtained without them, and based on the comparison initially generates an adjustment parameter for those beautification parameters. The generated adjustment parameter is then applied as a vector transformation to the beautified Laplace operator vector group, the overall difference between the transformed vector group and the non-beautified vector group is calculated by the least square method, and the adjustment parameter is adjusted in reverse based on this overall difference. This iterates until the overall difference between the transformed beautified vector group and the non-beautified vector group is minimal, and the adjustment parameter at which the overall difference is minimal is taken as the adjustment parameter corresponding to that Laplace operator vector group.
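The reverse-adjustment loop above can be sketched in miniature. This is a deliberately simplified assumption: the vector transformation is reduced to a single scale factor applied to the beautified Laplacian vector group, and the factor is refined by gradient descent on the squared overall difference, standing in for the least-squares comparison the text describes; the function name and learning-rate value are hypothetical.

```python
import numpy as np

def fit_adjustment_param(v_beauty, v_natural, lr=0.1, steps=200):
    """Fit a scalar adjustment parameter by least squares.

    Minimal sketch of the training loop: the vector transform is
    assumed to be a single scale factor t applied to the beautified
    Laplacian vector group, and t is reverse-adjusted by gradient
    descent until the squared overall difference is minimal.
    """
    t = 1.0
    for _ in range(steps):
        diff = t * v_beauty - v_natural         # overall difference
        grad = 2.0 * np.dot(diff, v_beauty)     # gradient of squared error w.r.t. t
        t -= lr * grad / np.dot(v_beauty, v_beauty)
    return t

# Beautified vectors are twice the natural ones, so t converges to 0.5.
t = fit_adjustment_param(np.array([2.0, 4.0]), np.array([1.0, 2.0]))
```

In the patent the transform acts on whole vector groups and the mapping is learned via a convolutional-neural-network calibration model; the scalar case only illustrates the minimize-then-store principle.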
Fig. 7 is a block diagram of a face identity detection apparatus 100 according to an embodiment of the present application. In the embodiment of the present application, the face identity detection apparatus 100 is applied to the image processing device 10 shown in fig. 1, where the image processing device 10 stores a correspondence between an image topology operator vector group and adjustment parameters, and a face identity image for indicating a specific identity of a person. The face identity detection apparatus 100 includes a feature point extraction module 110, a vector group calculation module 120, a parameter acquisition module 130, an image adjustment module 140, and an identity comparison module 150.
The feature point extraction module 110 is configured to extract feature points of a face image to be detected, so as to obtain face feature points of the face image to be detected, and adjacent pixel points corresponding to each face feature point.
In this embodiment, the feature point extraction module 110 may perform step S210 shown in fig. 2, and the specific implementation may refer to the above detailed description of step S210.
The vector group calculation module 120 is configured to calculate an image topology operator vector group between each face feature point and a corresponding adjacent pixel point.
In this embodiment, the vector group calculation module 120 may perform step S220 shown in fig. 2, and the specific implementation may refer to the above detailed description of step S220.
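As an illustrative sketch of the computation this module performs: the patent names the Laplace operator as one possible form of the image topology operator, so one difference vector per adjacent pixel is assumed here; the exact vector construction and the function name are assumptions, not the patent's definition.

```python
import numpy as np

def laplacian_vector_group(img, pt):
    """Difference vectors between a feature point and its 4-neighbourhood.

    Illustrative guess at the 'image topology operator vector group':
    one Laplacian-style difference per adjacent pixel, capturing the
    local topology around the feature point in a grayscale image.
    """
    y, x = pt
    neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    centre = img[y, x].astype(float)
    return np.array([img[ny, nx].astype(float) - centre
                     for ny, nx in neighbours])

# Example on a small gradient image; centre pixel is 12.
img = np.arange(25, dtype=np.uint8).reshape(5, 5)
vg = laplacian_vector_group(img, (2, 2))
```

On a linear gradient the differences cancel, matching the intuition that the Laplacian responds to local curvature rather than smooth shading.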
The parameter obtaining module 130 is configured to obtain an adjustment parameter of each face feature point corresponding to the image topology operator vector group of the face feature point.
In this embodiment, the parameter obtaining module 130 may perform step S230 shown in fig. 2, and sub-steps S231 and S232 shown in fig. 3; for the specific implementation, refer to the above detailed descriptions of step S230, sub-step S231, and sub-step S232.
The image adjustment module 140 is configured to perform image adjustment on the face image to be detected based on the obtained adjustment parameters of all the face feature points, so as to obtain a corresponding target adjustment image.
In this embodiment, the image adjustment module 140 may perform the step S240 shown in fig. 2, and the sub-steps S241 and S242 shown in fig. 4, and the detailed descriptions of the step S240, the sub-steps S241 and S242 are referred to above.
The identity comparison module 150 is configured to perform image comparison on the target adjustment image and a stored face identity image, so as to obtain a target identity image matched with the target adjustment image in all face identity images, so as to complete face identity detection on the face image to be detected.
In this embodiment, the identity comparison module 150 may perform the step S250 shown in fig. 2, and the sub-steps S251 and S252 shown in fig. 5, and the detailed description of the steps S250, S251 and S252 is referred to above.
Fig. 8 is another block diagram of the face identity detection apparatus 100 according to the embodiment of the present application. In an embodiment of the present application, the face identity detection apparatus 100 may further include a relationship configuration module 160.
The relationship configuration module 160 is configured to store a correspondence between the image topology operator vector set and the adjustment parameter.
In this embodiment, the relationship configuration module 160 may perform step S209 shown in fig. 6, and the specific implementation may refer to the detailed description of step S209.
In summary, the face identity detection method and apparatus provided by the embodiments of the present application reduce the degree to which beautifying technology influences the face identity recognition result, and improve the accuracy of face identity recognition. First, the method extracts feature points from the face image to be detected, obtaining the face feature points and the adjacent pixel points corresponding to each face feature point. Next, the method calculates the image topology operator vector group between each face feature point and its corresponding adjacent pixel points, and obtains, according to the correspondence between image topology operator vector groups and adjustment parameters stored by the image processing equipment, the adjustment parameter of each face feature point for reducing the beautifying effect. The method then performs image adjustment on the face image to be detected based on the obtained adjustment parameters to produce a corresponding target adjustment image, reducing any beautifying effect that may be present in the face image to be detected. Finally, by comparing the target adjustment image with the face identity images stored by the image processing equipment, the method obtains the target identity image that matches the target adjustment image and indicates the specific identity of the person corresponding to the face image to be detected, thereby reducing the influence of beautifying technology on face identity recognition and realizing face identity detection with high accuracy.
The foregoing description covers only the preferred embodiments of the present application and is not intended to limit it; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A face identity detection method, characterized in that it is applied to an image processing device, where a correspondence between an image topology operator vector set and an adjustment parameter and a face identity image for indicating a specific identity of a person are stored, the method comprising:
extracting feature points of a face image to be detected to obtain face feature points of the face image to be detected and adjacent pixel points corresponding to each face feature point;
calculating an image topology operator vector group between each face feature point and a corresponding adjacent pixel point;
acquiring adjustment parameters of each face feature point corresponding to an image topology operator vector group of the face feature point, wherein the adjustment parameters are used for representing image processing parameters required by corresponding pixel points for reducing the beauty treatment effect;
performing image adjustment on the face image to be detected based on the acquired adjustment parameters of all the face feature points to obtain a corresponding target adjustment image;
and comparing the target adjustment image with the stored face identity image to obtain target identity images matched with the target adjustment image in all the face identity images so as to finish face identity detection of the face image to be detected.
2. The method according to claim 1, wherein the correspondence between the image topology operator vector group and the adjustment parameters includes adjustment parameters corresponding to different image topology operator vector groups, and the step of obtaining the adjustment parameters corresponding to the image topology operator vector group of each face feature point includes:
searching a corresponding matched target vector group in all image topology operator vector groups stored in the image processing equipment according to the image topology operator vector groups of each face feature point;
if the searching is successful, the adjusting parameters corresponding to the searched target vector group are used as the adjusting parameters of the face feature points.
3. The method according to claim 1, wherein the step of performing image adjustment on the face image to be detected based on the acquired adjustment parameters of all face feature points to obtain a corresponding target adjustment image includes:
establishing a blending field among all pixel points in the face image to be detected, and uniformly spreading adjustment parameters of corresponding face feature points to other pixel points in the blending field at positions corresponding to each face feature point in the blending field;
and carrying out pixel adjustment on each pixel point in the blending field according to the adjustment parameters correspondingly obtained by the pixel point, and correspondingly generating the target adjustment image.
4. The method of claim 1, wherein the step of comparing the target adjustment image with the stored face identity image to obtain a target identity image of all face identity images that matches the target adjustment image comprises:
calculating the image confidence coefficient between the target adjustment image and each stored face identity image;
and selecting the face identity image with the maximum image confidence as a target identity image pointed by the face image to be detected.
5. The method according to any one of claims 1-4, further comprising:
and carrying out configuration storage on the corresponding relation between the image topology operator vector group and the adjustment parameters.
6. A face identity detection apparatus, characterized in that it is applied to an image processing device, in which a correspondence between an image topology operator vector group and adjustment parameters, and a face identity image for indicating a specific identity of a person are stored, the apparatus comprising:
the feature point extraction module is used for extracting feature points of the face image to be detected to obtain face feature points of the face image to be detected and adjacent pixel points corresponding to each face feature point;
the vector group calculation module is used for calculating an image topology operator vector group between each face characteristic point and the corresponding adjacent pixel point;
the parameter acquisition module is used for acquiring adjustment parameters of each face feature point corresponding to the image topology operator vector group of the face feature point, wherein the adjustment parameters are used for representing image processing parameters required by the corresponding pixel point for reducing the beautifying processing effect;
the image adjustment module is used for carrying out image adjustment on the face image to be detected based on the acquired adjustment parameters of all the face feature points to obtain a corresponding target adjustment image;
and the identity comparison module is used for comparing the target adjustment image with the stored face identity image to obtain target identity images matched with the target adjustment image in all the face identity images so as to complete face identity detection of the face image to be detected.
7. The apparatus according to claim 6, wherein the correspondence between the image topology operator vector group and the adjustment parameters includes adjustment parameters corresponding to different image topology operator vector groups, and the parameter obtaining module is specifically configured to:
searching a corresponding matched target vector group in all image topology operator vector groups stored in the image processing equipment according to the image topology operator vector groups of each face feature point;
if the searching is successful, the adjusting parameters corresponding to the searched target vector group are used as the adjusting parameters of the face feature points.
8. The apparatus of claim 6, wherein the image adjustment module is specifically configured to:
establishing a blending field among all pixel points in the face image to be detected, and uniformly spreading adjustment parameters of corresponding face feature points to other pixel points in the blending field at positions corresponding to each face feature point in the blending field;
and carrying out pixel adjustment on each pixel point in the blending field according to the adjustment parameters correspondingly obtained by the pixel point, and correspondingly generating the target adjustment image.
9. The apparatus of claim 6, wherein the identity comparison module is specifically configured to:
calculating the image confidence coefficient between the target adjustment image and each stored face identity image;
and selecting the face identity image with the maximum image confidence as a target identity image pointed by the face image to be detected.
10. The apparatus according to any one of claims 6-9, characterized in that the apparatus further comprises:
and the relation configuration module is used for carrying out configuration storage on the corresponding relation between the image topology operator vector group and the adjustment parameters.
CN201811385951.XA 2018-11-20 2018-11-20 Face identity detection method and device Active CN111199176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811385951.XA CN111199176B (en) 2018-11-20 2018-11-20 Face identity detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811385951.XA CN111199176B (en) 2018-11-20 2018-11-20 Face identity detection method and device

Publications (2)

Publication Number Publication Date
CN111199176A CN111199176A (en) 2020-05-26
CN111199176B true CN111199176B (en) 2023-06-20

Family

ID=70747049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811385951.XA Active CN111199176B (en) 2018-11-20 2018-11-20 Face identity detection method and device

Country Status (1)

Country Link
CN (1) CN111199176B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797754A (en) * 2020-06-30 2020-10-20 上海掌门科技有限公司 Image detection method, device, electronic equipment and medium
CN112101296B (en) * 2020-10-14 2024-03-08 杭州海康威视数字技术股份有限公司 Face registration method, face verification method, device and system

Citations (1)

Publication number Priority date Publication date Assignee Title
CN106534661A (en) * 2015-09-15 2017-03-22 中国科学院沈阳自动化研究所 Automatic focus algorithm accumulated based on strongest edge gradient Laplasse operator

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8031961B2 (en) * 2007-05-29 2011-10-04 Hewlett-Packard Development Company, L.P. Face and skin sensitive image enhancement

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN106534661A (en) * 2015-09-15 2017-03-22 中国科学院沈阳自动化研究所 Automatic focus algorithm accumulated based on strongest edge gradient Laplasse operator

Non-Patent Citations (1)

Title
Xiong Yulong, "Research on a Fast Fusion Method for Geometric Models Based on Harmonic Fields and Its Applications," CNKI Master's Thesis Database, 2018. *

Also Published As

Publication number Publication date
CN111199176A (en) 2020-05-26

Similar Documents

Publication Publication Date Title
US11487995B2 (en) Method and apparatus for determining image quality
KR102299847B1 (en) Face verifying method and apparatus
CN109389069B (en) Gaze point determination method and apparatus, electronic device, and computer storage medium
CN108229296B (en) Face skin attribute identification method and device, electronic equipment and storage medium
US11403874B2 (en) Virtual avatar generation method and apparatus for generating virtual avatar including user selected face property, and storage medium
WO2018036462A1 (en) Image segmentation method, computer apparatus, and computer storage medium
CN113343826B (en) Training method of human face living body detection model, human face living body detection method and human face living body detection device
CN108961175B (en) Face brightness adjusting method and device, computer equipment and storage medium
CN107679466B (en) Information output method and device
CN108229301B (en) Eyelid line detection method and device and electronic equipment
KR20180053108A (en) Method and apparatus for extracting iris region
CN103473564B (en) A kind of obverse face detection method based on sensitizing range
CN107463865B (en) Face detection model training method, face detection method and device
EP3647992A1 (en) Face image processing method and apparatus, storage medium, and electronic device
CN110930296B (en) Image processing method, device, equipment and storage medium
CN111353506A (en) Adaptive gaze estimation method and apparatus
CN110648289B (en) Image noise adding processing method and device
CN110826372B (en) Face feature point detection method and device
CN112330527A (en) Image processing method, image processing apparatus, electronic device, and medium
CN111199176B (en) Face identity detection method and device
CN113302619B (en) System and method for evaluating target area and characteristic points
EP3695340B1 (en) Method and system for face detection
CN113269719A (en) Model training method, image processing method, device, equipment and storage medium
CN109523564B (en) Method and apparatus for processing image
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant