WO2019000462A1 - Face image processing method, apparatus, storage medium and electronic device - Google Patents

Face image processing method, apparatus, storage medium and electronic device

Info

Publication number
WO2019000462A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
face image
adjustment parameter
determining
Prior art date
Application number
PCT/CN2017/091352
Other languages
English (en)
French (fr)
Inventor
梁昆 (Liang Kun)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd. (广东欧珀移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. (广东欧珀移动通信有限公司)
Priority to EP17916284.7A (published as EP3647992A4)
Priority to CN201780092006.8A (published as CN110741377A)
Priority to PCT/CN2017/091352 (published as WO2019000462A1)
Publication of WO2019000462A1
Priority to US16/700,584 (published as US11163978B2)

Links

Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06F18/2413: Classification techniques relating to the classification model, based on distances to training or reference patterns
    • G06T11/60: 2D image generation; editing figures and text; combining figures or text
    • G06V10/42: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/165: Detection; localisation; normalisation using facial parts and geometric relationships
    • G06V40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V40/172: Classification, e.g. identification
    • G06T2207/30201: Indexing scheme for image analysis; subject of image: human being, face
    • H ELECTRICITY; H04N23/60: Control of cameras or camera modules comprising electronic image sensors

Definitions

  • the present invention relates to the field of face image processing technologies, and in particular, to a face image processing method, apparatus, storage medium, and electronic device.
  • Embodiments of the present invention provide a method, a device, a storage medium, and an electronic device for processing a face image, which can improve the speed of face image processing.
  • an embodiment of the present invention provides a method for processing a face image, including:
  • the face image is processed according to the target image adjustment parameter.
  • the embodiment of the present invention further provides a face image processing apparatus, including:
  • An identification module configured to identify a face image and obtain a recognition result
  • An acquiring module configured to acquire a corresponding image adjustment parameter set according to the recognition result
  • a determining module configured to determine a deflection angle of the face in the face image relative to the reference face in the reference face image
  • a selection module configured to select a target image adjustment parameter from the image adjustment parameter set according to the deflection angle
  • a processing module configured to process the face image according to the target image adjustment parameter.
  • an embodiment of the present invention further provides a storage medium, where the storage medium stores a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the above-described face image processing method.
  • an embodiment of the present invention further provides an electronic device, where the electronic device includes a processor and a memory, the processor is electrically connected to the memory, the memory is used to store instructions and data, and the processor is configured to perform the following steps:
  • the face image is processed according to the target image adjustment parameter.
  • FIG. 1 is a schematic flow chart of a face image processing method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a face image according to an embodiment of the present invention.
  • FIG. 3 is another schematic flowchart of a face image processing method according to an embodiment of the present invention.
  • FIG. 4 is another schematic diagram of a face image according to an embodiment of the present invention.
  • FIG. 5 is still another schematic flowchart of a face image processing method according to an embodiment of the present invention.
  • FIG. 6 is still another schematic diagram of a face image according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of comparison before and after face image processing according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a first structure of a face image processing apparatus according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a second structure of a face image processing apparatus according to an embodiment of the present invention.
  • FIG. 10 is a third schematic structural diagram of a face image processing apparatus according to an embodiment of the present invention.
  • FIG. 11 is a fourth schematic structural diagram of a face image processing apparatus according to an embodiment of the present invention.
  • FIG. 12 is a fifth structural diagram of a face image processing apparatus according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 14 is another schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • An embodiment of the present invention provides a method for processing a face image. As shown in FIG. 1 , the method for processing a face image may include the following steps:
  • the face image may be a face image that is collected by the electronic device through the camera lens, or may be a face image that is pre-stored in the electronic device.
  • the face image can also be obtained from the network, for example, an instant messaging tool (such as WeChat) receives the face image carried in the message sent by the friend user.
  • the face image is recognized, that is, the face image is subjected to face recognition.
  • face recognition is a biometric recognition technology based on human facial feature information for identification.
  • the face recognition system mainly includes four components: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition.
  • after the face image is acquired, the original image is often not directly usable because its acquisition is limited by various conditions and subject to random interference; pre-processing operations such as grayscale correction and noise filtering must therefore be performed in the early stage of image processing. Then, face features can be extracted by a knowledge-based representation method, an algebra-based feature method, or a statistical learning representation method. For example, information such as histogram features, color features, template features, and structural features can be extracted from the face image, and the useful information can be selected from it to implement face detection.
  • the face image may then be identified according to the extracted features: the feature data of the face image is searched against and matched with the feature templates stored in the database, with a similarity threshold set in advance; when the similarity exceeds the threshold, the matching result is output. The face features to be recognized are compared with the stored face feature templates, and the identity of the face is judged according to the degree of similarity to obtain the recognition result.
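The threshold-based matching step can be sketched as follows. This is a minimal illustration: the feature vectors, the `templates` dictionary, the cosine-similarity measure, and the threshold value are all assumptions, since the patent does not prescribe a specific similarity metric.

```python
import numpy as np

def match_face(feature, templates, threshold=0.8):
    """Compare an extracted face feature vector against stored templates.

    Returns the identity of the best-matching template if its
    similarity exceeds the threshold; otherwise returns None,
    i.e. recognition fails.
    """
    best_id, best_sim = None, -1.0
    for identity, template in templates.items():
        # Cosine similarity between the feature vectors.
        sim = float(np.dot(feature, template)
                    / (np.linalg.norm(feature) * np.linalg.norm(template)))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else None
```

When no template clears the threshold the function returns None, matching the failure path described later in the text, where the user selects the result manually.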
  • alternatively, the face image information may be analyzed and processed by a specified classification model, which may be a deep neural network trained by machine deep learning, such as a CNN (Convolutional Neural Network). A CNN is a multi-layer neural network consisting of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer. It supports feeding multidimensional image vectors directly into the network, which avoids reconstructing the data during feature extraction and classification and greatly reduces the complexity of image processing.
  • the target image information is input into the CNN, and the information is transformed layer by layer from the input layer to the output layer; the calculation performed by the network in this direction is the forward propagation process.
  • the CNN model needs to be trained in advance according to samples and classification information. For example, a large number of sample face images can be collected beforehand, information such as gender, age, and person identity can be manually labeled on each sample face image, and these sample face images are then input into the CNN for training.
  • the training process mainly includes two stages: a forward propagation phase and a backward propagation phase.
  • in the forward propagation phase, the sample pictures are input to the convolutional neural network to obtain a weight matrix. In the backward propagation phase, the difference between each actual output Oi and the ideal output Yi is calculated, and the weight matrix is adjusted by back-propagation so as to minimize the error, where Yi is obtained from the labeling information of the sample Xi. For example, if the gender of the person in sample image Xi is female, Yi can be set to 1; if male, Yi can be set to 0. Finally, the trained convolutional neural network is determined according to the adjusted weight matrix.
  • in turn, each picture can be analyzed with the trained convolutional neural network, and information such as the gender, age, and identity of the person in the picture can be obtained more accurately as the recognition result.
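The two training phases can be sketched with a toy model; a single logistic unit stands in for the CNN purely to show the forward/backward loop structure (the architecture, learning rate, epoch count, and data below are illustrative assumptions, not the patent's method):

```python
import numpy as np

def train_classifier(samples, labels, epochs=1000, lr=0.5):
    """Toy illustration of the two training phases: forward propagation
    computes an actual output O_i for each sample X_i, and backward
    propagation adjusts the weights to shrink the error O_i - Y_i.
    A real CNN would use convolutional layers; a single logistic unit
    is used here only to show the loop structure.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=samples.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Forward propagation phase: compute the actual outputs O_i.
        o = 1.0 / (1.0 + np.exp(-(samples @ w + b)))   # sigmoid
        # Backward propagation phase: gradient step that reduces the
        # error between O_i and the ideal labeled outputs Y_i.
        err = o - labels
        w -= lr * samples.T @ err / len(labels)
        b -= lr * err.mean()
    return w, b
```

After training, a sample is classified as 1 (e.g. "female" in the text's labeling convention) when its sigmoid output exceeds 0.5.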
  • the image adjustment parameter set includes a plurality of sets of image adjustment parameters
  • the image adjustment parameters are historical image adjustment parameters for the face in the face image.
  • the electronic device may record the processing habits of the user when processing different face images during a historical period of time, and analyze and learn the processing habits of the user during a historical period based on machine learning. The electronic device generates image adjustment parameters for processing different face images through self-analysis and learning processes to obtain different adjustment parameter sets.
  • the image adjustment parameters may include: brightness value, exposure value, contrast, sharpness, liquefaction value, pixel blur radius, and other parameter values, etc., which are not enumerated here.
  • if the face image cannot be recognized, the recognition fails, and the user can manually determine the recognition result.
  • in that case, the electronic device receives a result selection instruction triggered by the user through the display screen, obtains and displays a corresponding result list according to the instruction, and takes the target result selected from the list as the recognition result. The electronic device can then acquire the image adjustment parameter set corresponding to the recognition result.
  • the recognition result includes a person identification.
  • the person identity is used to distinguish the person to whom the face belongs in the face image stored in the electronic device.
  • the identity of the person is not limited to the identity card number, but may be a person's name, a nickname or a number, etc., as long as the face image of the same person can be categorized.
  • the step “acquiring the corresponding image adjustment parameter set according to the recognition result” may include the following processes:
  • the correspondence between the person identity and the parameter set needs to be established in advance. That is, the image adjustment parameters corresponding to face images of the same person are added to one set, and when the person identity of a face image is recognized, the corresponding set is obtained from the plurality of image adjustment parameter sets based on the person identity identifier.
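A sketch of that lookup, with a hypothetical `PARAMETER_SETS` mapping (the keys and parameter names are invented for illustration):

```python
# Hypothetical mapping, established in advance, from person identity
# identifier to the set of historical image adjustment parameters
# recorded for that person.
PARAMETER_SETS = {
    "person_001": [{"brightness": 1.2, "contrast": 1.1},
                   {"brightness": 0.9, "sharpness": 1.3}],
}

def get_parameter_set(person_id):
    """Return the image adjustment parameter set for a recognized
    person identity, or None if no correspondence was established."""
    return PARAMETER_SETS.get(person_id)
```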
  • the reference face image is used as a reference object and may be any image that contains the face appearing in the face image.
  • for example, image A is the front image of the face and is used as the reference face image; image B is the left-deflection image, in which the face is deflected 45° horizontally to the left relative to image A; and image C is the right-deflection image, in which the face is deflected 45° horizontally to the right relative to image A.
  • alternatively, if image B is used as the reference face image, the face in image A is deflected 45° horizontally to the right relative to image B, and the face in image C is deflected 90° horizontally relative to image B.
  • any one of image A, image B, and image C can be used as the reference face image.
  • the image adjustment parameter set includes a plurality of sets of image adjustment parameters
  • each set may be the image adjustment parameters required for face images at different face angles to achieve a preset picture effect.
  • the picture presented by the front image of a face differs from that of a side image, so the required image adjustment parameters also differ (for example, suppose the left cheek has a scar and the right cheek does not: when the face angle causes most of the left face, and hence the scar, to be shown, the scar in the image needs to be processed; when the image shows mostly the right face and the scar on the left is blocked, no scar processing is needed).
  • the orientation of the face in the face image can be determined according to the deflection angle of the face relative to the reference face in the reference face image, thereby determining the target image adjustment parameter corresponding to the selection. That is, the step "selecting the target image adjustment parameter from the image adjustment parameter set according to the deflection angle" may include:
  • determining, from a plurality of preset angle intervals, the preset angle interval within which the deflection angle falls, and using the preset image adjustment parameters corresponding to that interval as the target image adjustment parameters.
  • the plurality of preset angle intervals may be obtained by analyzing and processing the user behavior habits by the electronic device for a period of time.
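Interval-based selection might look like the following sketch; the interval boundaries and preset parameters are hypothetical values standing in for those learned from the user's habits:

```python
# Hypothetical preset angle intervals in degrees (negative = left
# deflection), each mapped to preset image adjustment parameters.
ANGLE_INTERVALS = [
    (-90, -15, {"brightness": 1.1, "blur_radius": 2}),   # left profile
    (-15,  15, {"brightness": 1.0, "blur_radius": 0}),   # near-frontal
    ( 15,  90, {"brightness": 1.1, "blur_radius": 3}),   # right profile
]

def select_target_parameters(deflection_angle):
    """Return the preset parameters of the interval that contains the
    deflection angle, or None if no interval matches."""
    for low, high, params in ANGLE_INTERVALS:
        if low <= deflection_angle < high:
            return params
    return None
```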
  • the manner of processing the face image according to the target image adjustment parameter may be various.
  • an area that needs to be adjusted may be determined in the face image according to the target image adjustment parameter, and the determined area is adjusted according to the acquired target image adjustment parameter.
  • the image features of the face image may be adjusted according to the target image adjustment parameters to process the face image. That is, the step "processing the face image according to the target image adjustment parameter" may include the following process:
  • the adjusted target layer is merged with the unselected initial layer in the layer set.
  • the plurality of image features can include color features, shape features, texture features, and spatial relationship features.
  • the face image may be parsed, and an initial layer corresponding to each image feature is generated according to the analysis result.
  • the color feature corresponds to a color layer that represents the color of the face image
  • the shape feature corresponds to a shape layer that represents the shape of the face image.
  • since not all image features need to be modified, some parameters in the target image adjustment parameters have empty values. Therefore, only the parameters in the target image adjustment parameters whose values are not empty are used.
  • the target layer to be modified may be selected from the layer set according to the parameter whose parameter value is not empty, to obtain the adjusted target layer. Finally, the adjusted target layer is superimposed with the remaining unadjusted initial layers to synthesize the processed face image.
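The select-adjust-merge flow can be sketched as follows, with layers simplified to flat lists of values and adjustment parameters to scale factors (both simplifications are illustrative; real layers would hold pixel data):

```python
def process_face_image(layers, target_params):
    """Adjust only the layers whose target image adjustment parameter
    is non-empty, then merge them with the unselected initial layers.

    layers        -- dict mapping image feature name to its layer data
    target_params -- dict mapping feature name to a scale factor;
                     a missing entry means the parameter is empty
    """
    result = {}
    for feature, layer in layers.items():
        factor = target_params.get(feature)
        if factor is None:
            # Parameter is empty: keep the initial layer unchanged.
            result[feature] = layer
        else:
            # Target layer: apply the adjustment before merging.
            result[feature] = [v * factor for v in layer]
    return result
```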
  • the processed face image in FIG. 7 has adjusted shape features, texture features, and color features relative to the image before processing.
  • specifically, the brightness value is enhanced, the sharpness is increased, the contrast is enhanced, the skin is partially smoothed (dermabrasion), the hairstyle is modified, and the like.
  • the step "determining the angle of deflection of the face in the face image relative to the reference face in the reference face image” may include the following sub-steps:
  • the displacement offset information can include an offset distance and an offset direction.
  • the first offset reference point X1 (pupil) is determined in the image A.
  • the currently acquired face image is image B, and a corresponding second offset reference point X2 (pupil) is determined in image B according to the first offset reference point X1.
  • the second offset reference point X2 is positionally decomposed by the mapping method, and the second offset reference point X2 is determined to correspond to the equivalent point X2' in the image A, as shown in FIG.
  • the X2' point has an offset distance d relative to the first offset reference point X1 in the horizontal direction, and the offset direction is horizontally to the left. The corresponding deflection angle algorithm can then be called, the obtained offset distance and offset direction are input, and the deflection angle is output through computation.
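One plausible deflection-angle algorithm from the offset distance and direction, assuming a simple spherical head model; the patent does not specify the exact computation, and `face_radius` is an invented parameter:

```python
import math

def deflection_angle(offset_distance, offset_direction, face_radius):
    """Rough yaw estimate from the horizontal pupil offset.

    Assumes the offset reference point moves on a sphere of the given
    radius as the head turns, so sin(angle) = offset / radius. This is
    an illustrative model, not the patent's specified algorithm.
    Negative angles denote leftward deflection.
    """
    ratio = max(-1.0, min(1.0, offset_distance / face_radius))
    angle = math.degrees(math.asin(ratio))
    return -angle if offset_direction == "left" else angle
```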
  • the step "determining the angle of deflection of the face in the face image relative to the reference face in the reference face image” may include the following sub-steps:
  • the feature points in the reference face image may be obtained to obtain a feature point set, and then any two representative target feature points are selected from the feature point set to calculate the first reference vector. That is, the step "determining the first reference vector in the face of the face image" may include the following process:
  • the first reference vector is determined according to the first feature point and the second feature point.
  • the angle between the first reference vector a and the second reference vector b can be directly calculated using a formula.
  • the angle ⁇ between the first reference vector a and the horizontal plane and the angle ⁇ between the second reference vector b and the horizontal plane can be separately calculated, and the angular difference between ⁇ and ⁇ is obtained as two vectors. The angle between.
  • the corresponding deflection angle algorithm can be called, and after the value of the angle is obtained, the deflection angle is output through the operation.
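Both computations reduce to elementary trigonometry; a sketch of the direct dot-product formula for 2-D reference vectors:

```python
import math

def angle_between(a, b):
    """Angle in degrees between two 2-D reference vectors a and b,
    computed from the dot-product formula
    cos(theta) = (a . b) / (|a| * |b|)."""
    dot = a[0] * b[0] + a[1] * b[1]
    norm_a = math.hypot(a[0], a[1])
    norm_b = math.hypot(b[0], b[1])
    # Clamp to [-1, 1] to guard against floating-point drift.
    cos_theta = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    return math.degrees(math.acos(cos_theta))
```

The alternative route in the text, subtracting each vector's angle to the horizontal (e.g. via `math.atan2`), yields the same magnitude for 2-D vectors.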
  • the method of the embodiment of the present invention is also applicable to a face image with a deflection angle in the vertical direction, and to a face image with angular deflection in both the horizontal and vertical directions.
  • the face image processing method obtains the recognition result by recognizing the face image, and then obtains a corresponding image adjustment parameter set according to the recognition result, and determines that the face in the face image is relative to Referring to the deflection angle of the reference face in the face image, the target image adjustment parameter is selected from the image adjustment parameter set according to the determined deflection angle, and the face image is processed according to the target image adjustment parameter.
  • the solution determines the target image adjustment parameters from the corresponding image adjustment parameter set according to the recognition result of the face image and the deflection angle of the face in the image, without the user manually determining how to adjust the image. This improves the speed and efficiency of image processing, reduces operating time, and reduces the power consumption of the electronic device.
  • the embodiment of the invention further provides a face image processing device, which can be integrated in a server.
  • the face image processing apparatus 300 includes an identification module 301, an acquisition module 302, a determination module 303, a selection module 304, and a processing module 305.
  • the identification module 301 is configured to identify the face image to obtain a recognition result.
  • the face image may be a face image that is collected by the electronic device through the camera lens, or may be a face image that is pre-stored in the electronic device.
  • the face image can also be obtained from the network, for example, an instant messaging tool (such as WeChat) receives the face image carried in the message sent by the friend user.
  • the face image is recognized, that is, the face image is subjected to face recognition. The recognition result can be used to determine the person to which the different face images belong.
  • the obtaining module 302 is configured to obtain a corresponding image adjustment parameter set according to the recognition result.
  • the image adjustment parameter set includes a plurality of sets of image adjustment parameters
  • the image adjustment parameters are historical image adjustment parameters for the face in the face image.
  • the electronic device may record the processing habits of the user when processing different face images during a historical period of time, and analyze and learn the processing habits of the user during a historical period based on machine learning. The electronic device generates image adjustment parameters for processing different face images through self-analysis and learning processes to obtain different adjustment parameter sets.
  • the image adjustment parameters may include: brightness value, exposure value, contrast, sharpness, liquefaction value, pixel blur radius, and other parameter values, etc., which are not enumerated here.
  • the recognition result includes a person identification.
  • the person identity is used to distinguish the person to whom the face belongs in the face image stored in the electronic device.
  • the identity of the person is not limited to the identity card number, but may be a person's name, a nickname or a number, etc., as long as the face image of the same person can be categorized.
  • the obtaining module 302 can be used to:
  • the correspondence between the person identity and the parameter set needs to be established in advance. That is, the image adjustment parameters corresponding to face images of the same person are added to one set, and when the person identity of a face image is recognized, the corresponding set is obtained from the plurality of image adjustment parameter sets based on the person identity identifier.
  • the determining module 303 is configured to determine a deflection angle of the face in the face image relative to the reference face in the reference face image.
  • the reference face image is used as a reference object and may be any image that contains the face appearing in the face image.
  • the selecting module 304 is configured to select a target image adjustment parameter from the image adjustment parameter set according to the deflection angle.
  • the image adjustment parameter set includes a plurality of sets of image adjustment parameters; each set may be the image adjustment parameters required for face images at different face angles to achieve a preset picture effect.
  • the picture presented by the front image of a face differs from that of a side image. Therefore, the orientation of the face in the face image can be determined according to the deflection angle of the face relative to the reference face in the reference face image, and the corresponding target image adjustment parameters can be selected accordingly.
  • the selection module 304 can be used to:
  • determining, from a plurality of preset angle intervals, the preset angle interval within which the deflection angle falls, and using the preset image adjustment parameters corresponding to that interval as the target image adjustment parameters.
  • the processing module 305 is configured to process the face image according to the target image adjustment parameter.
  • the manner of processing the face image according to the target image adjustment parameter may be various.
  • an area that needs to be adjusted may be determined in the face image according to the target image adjustment parameter, and the determined area is adjusted according to the acquired target image adjustment parameter.
  • the recognition result includes a person identity; the obtaining module 302 includes:
  • the identifier obtaining sub-module 3021 is configured to obtain, from the preset identifier set, a preset person identity identifier that matches the person identity identifier;
  • the set acquisition sub-module 3022 is configured to obtain a preset image adjustment parameter set corresponding to the preset person identity identifier, and obtain a corresponding image adjustment parameter set by using the preset image adjustment parameter set as the recognition result.
  • the determining module 303 includes: an image obtaining submodule 3031, a first determining submodule 3032, an information obtaining submodule 3033, and a second determining submodule 3034.
  • the image acquisition sub-module 3031 is configured to acquire a reference face image corresponding to the face image according to the recognition result
  • a first determining sub-module 3032 configured to determine a corresponding first offset reference point in a reference face of the reference face image, and determine a second offset corresponding to the first offset reference point in the face of the face image Shift reference point;
  • the information acquisition sub-module 3033 is configured to acquire position offset information of the second offset reference point relative to the first offset reference point
  • the second determining sub-module 3034 is configured to determine, according to the location offset information, a deflection angle of the face in the face image relative to the reference face in the reference face image.
  • the determining module 303 includes: an image obtaining submodule 3031 , a third determining submodule 3035 , an angle acquiring submodule 3036 , and a fourth determining submodule 3037 .
  • the image acquisition sub-module 3031 is configured to acquire a reference face image corresponding to the face image according to the recognition result;
  • the third determining sub-module 3035 is configured to determine a first reference vector in the face of the face image, and to determine, in the reference face of the reference face image, a second reference vector corresponding to the first reference vector;
  • the angle acquisition sub-module 3036 is configured to obtain an included angle between the first reference vector and the second reference vector;
  • the fourth determining sub-module 3037 is configured to determine, according to the included angle, a deflection angle of the face in the face image relative to the reference face in the reference face image.
  • the processing module 305 includes:
  • a feature acquisition sub-module 3051 configured to acquire a plurality of image features of the face image;
  • a generating sub-module 3052 configured to respectively generate initial layers corresponding to the plurality of image features, to obtain a layer set;
  • a selection sub-module 3053 configured to select a target layer from the layer set according to the target image adjustment parameter for adjustment;
  • a fusion sub-module 3054 configured to fuse the adjusted target layer with the unselected initial layers in the layer set.
  • the plurality of image features can include color features, shape features, texture features, and spatial relationship features.
  • the face image may be parsed, and an initial layer corresponding to each image feature is generated according to the analysis result.
  • the color feature corresponds to a color layer representing the color of the face image, and the shape feature corresponds to a shape layer representing the shape of the face image.
  • since not all image features need to be modified, some of the parameters included in the target image adjustment parameter have empty values; therefore, only the parameters whose values are not empty are used.
  • the target layers to be modified may be selected from the layer set according to the parameters whose values are not empty and adjusted, to obtain the adjusted target layers. Finally, the adjusted target layers are superimposed on and fused with the remaining unadjusted initial layers to synthesize the processed face image.
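The selective layer adjustment described above can be sketched roughly as follows. The layer names, the additive "adjustment", and the element-wise fusion are illustrative assumptions, not the patent's actual implementation; the key point shown is that only layers whose target parameter is non-empty are modified.

```python
# Sketch: adjust only layers whose target parameter is not empty (None),
# then fuse adjusted and untouched layers back together.
# Layer names and additive "fusion" are illustrative assumptions.

def process_layers(layers, target_params):
    adjusted = {}
    for name, layer in layers.items():
        delta = target_params.get(name)  # None means "leave this layer alone"
        if delta is not None:
            adjusted[name] = [v + delta for v in layer]  # hypothetical adjustment
        else:
            adjusted[name] = layer  # unselected initial layer, fused unchanged
    # Fuse: here simply sum the per-layer values element-wise.
    length = len(next(iter(adjusted.values())))
    return [sum(adjusted[name][i] for name in adjusted) for i in range(length)]

layers = {"color": [10, 20], "shape": [1, 2], "texture": [5, 5]}
params = {"color": 3, "shape": None, "texture": None}  # only color is adjusted
fused = process_layers(layers, params)
print(fused)  # [19, 30]: color layer raised by 3, others untouched
```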
  • the recognition result includes a person identity identifier, and the acquisition module 302 is configured to: obtain, from a preset identifier set, a preset person identity identifier matching the person identity identifier; obtain a preset image adjustment parameter set corresponding to the preset person identity identifier, and use it as the image adjustment parameter set corresponding to the recognition result.
  • the selection module is configured to: determine, from a plurality of preset angle intervals, a target angle interval into which the deflection angle falls; obtain, from the image adjustment parameter set, a preset image adjustment parameter corresponding to the target angle interval; and use the preset image adjustment parameter as the target image adjustment parameter.
  • the face image processing device recognizes the face image to obtain a recognition result, obtains a corresponding image adjustment parameter set according to the recognition result, determines a deflection angle of the face in the face image relative to the reference face in the reference face image, selects a target image adjustment parameter from the image adjustment parameter set according to the determined deflection angle, and processes the face image according to the target image adjustment parameter.
  • this solution can determine the target image adjustment parameter according to the recognition result of the face image and the deflection angle of the face in the image, without manually determining how to adjust the image, thereby improving the speed and efficiency of image processing and reducing operation time, while also lowering the power consumption of the electronic device.
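The end-to-end flow just summarized (recognize → fetch the identity's parameter set → compute deflection → pick parameters by angle interval → process) can be sketched as below. The identity keys, angle intervals, brightness parameter, and toy "processing" are all stand-in assumptions for illustration.

```python
# End-to-end sketch of the described flow: recognize -> fetch parameter set
# -> compute deflection -> pick parameters by angle interval -> process.
# All data structures and values are stand-ins, not the patent's real format.

PARAM_SETS = {  # person identity -> {angle interval (degrees): adjustment parameters}
    "alice": {(-90, -15): {"brightness": 5},
              (-15, 15): {"brightness": 2},
              (15, 90): {"brightness": 8}},
}

def select_params(param_set, deflection):
    # Pick the preset parameters whose angle interval contains the deflection.
    for (lo, hi), params in param_set.items():
        if lo <= deflection < hi:
            return params
    raise ValueError("no interval matches the deflection angle")

def process_face(identity, deflection, image):
    params = select_params(PARAM_SETS[identity], deflection)
    return [px + params["brightness"] for px in image]  # toy "processing"

print(process_face("alice", 30.0, [100, 110]))  # [108, 118]
```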
  • Embodiments of the present invention also provide a storage medium having a plurality of instructions stored therein, the instructions being adapted to be loaded by a processor to perform any of the above-described face image processing methods.
  • the embodiment of the invention further provides an electronic device, which may be a device such as a smart phone or a tablet computer.
  • the electronic device 400 includes a processor 401 and a memory 402.
  • the processor 401 is electrically connected to the memory 402.
  • the processor 401 is the control center of the electronic device 400; it connects the various parts of the entire electronic device 400 using various interfaces and lines, and executes the various functions of the electronic device and processes data by running or loading applications stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the electronic device 400 as a whole.
  • the processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more applications into the memory 402 according to the steps below, and runs the applications stored in the memory 402, thereby implementing various functions:
  • the acquired face image is processed according to the acquired target image adjustment parameter.
  • the processor 401 performs the following steps: obtaining a reference face image corresponding to the face image according to the recognition result; determining a corresponding first offset reference point in the reference face of the reference face image; determining, in the face of the face image, a second offset reference point corresponding to the first offset reference point; obtaining position offset information of the second offset reference point relative to the first offset reference point; and determining, according to the position offset information, the deflection angle of the face in the face image relative to the reference face in the reference face image.
  • the processor 401 performs the following steps: obtaining a reference face image corresponding to the face image according to the recognition result; determining a first reference vector in the face of the face image; determining, in the reference face of the reference face image, a second reference vector corresponding to the first reference vector; obtaining the included angle between the two vectors; and determining, according to the included angle, the deflection angle of the face in the face image relative to the reference face in the reference face image.
  • the processor 401 performs the steps of: determining a corresponding first feature point and second feature point in the face image; and determining the first reference vector according to the first feature point and the second feature point.
  • the processor 401 performs the following steps: obtaining a plurality of image features of the face image; respectively generating initial layers corresponding to the plurality of image features to obtain a layer set; selecting a target layer from the layer set according to the target image adjustment parameter for adjustment; and fusing the adjusted target layer with the unselected initial layers in the layer set.
  • the recognition result includes a person identity identifier, and the processor 401 performs the following steps: obtaining, from a preset identifier set, a preset person identity identifier matching the person identity identifier; obtaining a preset image adjustment parameter set corresponding to the preset person identity identifier, and using it as the image adjustment parameter set corresponding to the recognition result.
  • the processor 401 performs the following steps: determining, from a plurality of preset angle intervals, a target angle interval into which the deflection angle falls; obtaining, from the image adjustment parameter set, a preset image adjustment parameter corresponding to the target angle interval; and using the preset image adjustment parameter as the target image adjustment parameter.
  • Memory 402 can be used to store applications and data.
  • the application stored in the memory 402 contains instructions that are executable in the processor.
  • Applications can form various functional modules.
  • the processor 401 executes various functional applications and data processing by running an application stored in the memory 402.
  • the electronic device 400 further includes a radio frequency circuit 403, a display screen 404, a control circuit 405, an input unit 406, an audio circuit 407, a sensor 408, and a power source 409.
  • the processor 401 is electrically connected to the radio frequency circuit 403, the display screen 404, the control circuit 405, the input unit 406, the audio circuit 407, the sensor 408, and the power source 409, respectively.
  • the radio frequency circuit 403 is configured to transmit and receive radio frequency signals, so as to communicate with a server or other electronic devices through a wireless communication network.
  • the display screen 404 can be used to display information entered by the user or information provided to the user as well as various graphical user interfaces of the electronic device, which can be composed of images, text, icons, video, and any combination thereof.
  • the control circuit 405 is electrically connected to the display screen 404 for controlling the display screen 404 to display information.
  • the input unit 406 can be configured to receive input digits, character information, or user characteristic information (eg, fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function controls.
  • the audio circuit 407 can provide an audio interface between the user and the electronic device through a speaker and a microphone.
  • Sensor 408 is used to collect external environmental information.
  • Sensor 408 can include one or more of ambient brightness sensors, acceleration sensors, gyroscopes, and the like.
  • Power source 409 is used to power various components of electronic device 400.
  • the power supply 409 can be logically coupled to the processor 401 through a power management system to enable functions such as managing charging, discharging, and power management through the power management system.
  • the electronic device 400 may further include a camera, a Bluetooth module, and the like, and details are not described herein.
  • an embodiment of the present invention provides an electronic device, which recognizes a face image to obtain a recognition result, obtains a corresponding image adjustment parameter set according to the recognition result, determines a deflection angle of the face in the face image relative to the reference face in the reference face image, selects a target image adjustment parameter from the image adjustment parameter set according to the determined deflection angle, and processes the face image according to the target image adjustment parameter.
  • this solution can determine the target image adjustment parameter according to the recognition result of the face image and the deflection angle of the face in the image, without manually determining how to adjust the image, thereby improving the speed and efficiency of image processing and reducing operation time, while also lowering the power consumption of the electronic device.
  • the embodiment of the present invention further provides a storage medium, where the storage medium stores a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the face image processing method described in any of the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A face image processing method, comprising: recognizing a face image to obtain a recognition result; obtaining a corresponding image adjustment parameter set according to the recognition result; determining a deflection angle of the face in the face image relative to a reference face in a reference face image; selecting a target image adjustment parameter from the image adjustment parameter set according to the deflection angle; and processing the face image according to the target image adjustment parameter. The present invention also relates to a face image processing device, a storage medium, and an electronic device.

Description

Face image processing method and device, storage medium, and electronic device

Technical Field
The present invention relates to the technical field of face image processing, and in particular to a face image processing method, device, storage medium, and electronic device.
Background
With the development of the Internet and mobile communication networks, together with the rapid growth of the processing and storage capabilities of electronic devices, a vast number of applications have been rapidly spread and adopted, especially applications related to face image processing, whose image processing capabilities have become increasingly powerful. At present, many camera applications provide convenient photo beautification functions, and instant optimization of photo effects can be achieved with simple operations.
Technical Problem
Embodiments of the present invention provide a face image processing method, device, storage medium, and electronic device, which can increase the speed of face image processing.
Technical Solution
In a first aspect, an embodiment of the present invention provides a face image processing method, comprising:
recognizing a face image to obtain a recognition result;
obtaining a corresponding image adjustment parameter set according to the recognition result;
determining a deflection angle of the face in the face image relative to a reference face in a reference face image;
selecting a target image adjustment parameter from the image adjustment parameter set according to the deflection angle;
processing the face image according to the target image adjustment parameter.
In a second aspect, an embodiment of the present invention further provides a face image processing device, comprising:
a recognition module, configured to recognize a face image to obtain a recognition result;
an acquisition module, configured to obtain a corresponding image adjustment parameter set according to the recognition result;
a determining module, configured to determine a deflection angle of the face in the face image relative to a reference face in a reference face image;
a selection module, configured to select a target image adjustment parameter from the image adjustment parameter set according to the deflection angle;
a processing module, configured to process the face image according to the target image adjustment parameter.
In a third aspect, an embodiment of the present invention further provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the above face image processing method.
In a fourth aspect, an embodiment of the present invention further provides an electronic device comprising a processor and a memory, the processor being electrically connected to the memory, the memory being configured to store instructions and data, and the processor being configured to perform the following steps:
recognizing a face image to obtain a recognition result;
obtaining a corresponding image adjustment parameter set according to the recognition result;
determining a deflection angle of the face in the face image relative to a reference face in a reference face image;
selecting a target image adjustment parameter from the image adjustment parameter set according to the deflection angle;
processing the face image according to the target image adjustment parameter.
Beneficial Effects
Embodiments of the present invention provide a face image processing method, device, storage medium, and electronic device, which can increase the speed of face image processing.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a face image processing method provided by an embodiment of the present invention.
FIG. 2 is a schematic diagram of face images provided by an embodiment of the present invention.
FIG. 3 is another schematic flowchart of the face image processing method provided by an embodiment of the present invention.
FIG. 4 is another schematic diagram of face images provided by an embodiment of the present invention.
FIG. 5 is a further schematic flowchart of the face image processing method provided by an embodiment of the present invention.
FIG. 6 is a further schematic diagram of face images provided by an embodiment of the present invention.
FIG. 7 is a schematic comparison of a face image before and after processing provided by an embodiment of the present invention.
FIG. 8 is a first structural schematic diagram of a face image processing device provided by an embodiment of the present invention.
FIG. 9 is a second structural schematic diagram of the face image processing device provided by an embodiment of the present invention.
FIG. 10 is a third structural schematic diagram of the face image processing device provided by an embodiment of the present invention.
FIG. 11 is a fourth structural schematic diagram of the face image processing device provided by an embodiment of the present invention.
FIG. 12 is a fifth structural schematic diagram of the face image processing device provided by an embodiment of the present invention.
FIG. 13 is a structural schematic diagram of an electronic device provided by an embodiment of the present invention.
FIG. 14 is another structural schematic diagram of the electronic device provided by an embodiment of the present invention.
Best Mode for Carrying Out the Invention
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms "first", "second", "third", and the like (if present) in the specification, claims, and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that objects so described are interchangeable where appropriate. Furthermore, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process or method comprising a series of steps, or a device, electronic device, or system comprising a series of modules or units, is not necessarily limited to the steps, modules, or units explicitly listed, and may include steps, modules, or units not explicitly listed, or other steps, modules, or units inherent to such a process, method, device, electronic device, or system.
An embodiment of the present invention provides a face image processing method. As shown in FIG. 1, the face image processing method may include the following steps:
S110: recognize a face image to obtain a recognition result.
In the embodiments of the present invention, the face image may come from various sources. It may be captured in real time by the camera lens of the electronic device, or stored in the electronic device in advance. The face image may also be obtained from the network, for example, a face image carried in a message sent by a friend and received through an instant messaging tool (such as WeChat). Recognizing the face image means performing face recognition on the face image.
Face recognition is a biometric technology that identifies a person based on facial feature information. A face recognition system mainly comprises four parts: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition.
In some embodiments, after the face image is acquired, because the acquired original image is subject to various constraints and random interference, it often cannot be used directly, and preprocessing operations such as grayscale correction and noise filtering must be performed in the early stage of image processing. Then, features may be extracted from the face image by knowledge-based representation methods or by representation methods based on algebraic features or statistical learning. For example, histogram features, color features, template features, and structural features may be extracted from the face image, and useful information selected from them to achieve face detection.
For example, after the corresponding features are extracted, the face image may be recognized according to the extracted features: the extracted feature data of the face image is searched and matched against feature templates stored in a database; a threshold is set, and when the similarity exceeds the threshold, the matched result is output. The face features to be recognized are compared with the obtained face feature templates, and the identity information of the face is judged according to the degree of similarity, to obtain a recognition result.
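The threshold-based template matching just described can be sketched as below: compare an extracted feature vector against stored templates and report the best-matching identity only when the similarity exceeds the threshold. The cosine similarity measure, the feature vectors, and the threshold value are assumptions chosen for illustration.

```python
# Sketch of threshold-based template matching: find the most similar stored
# template; return None (recognition failure -> manual selection) when even
# the best similarity stays below the threshold.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match(features, templates, threshold=0.9):
    best_id, best_sim = None, -1.0
    for identity, tmpl in templates.items():
        sim = cosine(features, tmpl)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else None

templates = {"alice": [1.0, 0.0, 0.2], "bob": [0.1, 1.0, 0.9]}
print(match([0.9, 0.1, 0.2], templates))  # alice
```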
In some implementations, the face image information may be analyzed by a specified classification model, which may be a trained deep neural network based on machine deep learning, such as a CNN (Convolutional Neural Network). A CNN is a multi-layer neural network consisting of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer. It supports feeding a multi-dimensional image directly into the network, avoiding reconstruction of the data during feature extraction and classification and greatly reducing the complexity of image processing. When the target image information is input into the CNN, the information is transformed stage by stage from the input layer to the output layer. The computation performed by the CNN is essentially the dot product of the input (i.e., the face image) with the weight matrix of each layer to obtain the final output (such as gender, age, and person identity).
It is easy to understand that the CNN model needs to be trained in advance from samples and classification information. For example, a large number of sample face images may be collected in advance, each manually annotated with information such as the gender, age, and person identity corresponding to the face image, and then input into the CNN for training. The training process mainly includes two stages: a forward propagation stage and a backward propagation stage. In the forward propagation stage, each sample Xi (i.e., a sample picture) may be input into an n-layer convolutional neural network to obtain the actual output Oi, where Oi = Fn(…(F2(F1(Xi·W(1))·W(2))…)·W(n)), i is a positive integer, W(n) is the weight of the n-th layer, and F is an activation function (such as the sigmoid function or the hyperbolic tangent function). By feeding the sample picture into the convolutional neural network, the weight matrix can be obtained. Then, in the backward propagation stage, the difference between each actual output Oi and the ideal output Yi may be computed, and the weight matrix is adjusted by back propagation according to the method of minimizing the error, where Yi is obtained from the annotation of sample Xi: for example, if the annotated gender of sample picture Xi is female, Yi may be set to 1; if male, Yi may be set to 0. Finally, the trained convolutional neural network is determined according to the adjusted weight matrix, so that each picture can subsequently be analyzed by the trained convolutional neural network to obtain relatively accurate information such as the gender, age, and person identity of the person in the picture, to serve as the recognition result.
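The layered forward computation Oi = Fn(…(F2(F1(Xi·W(1))·W(2))…)·W(n)) can be sketched minimally as below, using the sigmoid activation mentioned in the text. The weight values and layer sizes are arbitrary assumptions; a real CNN would additionally use convolution and pooling layers rather than plain matrix-vector products.

```python
# Minimal sketch of the forward pass O = Fn(...F1(X W1)...Wn) with a sigmoid
# activation, treating each "layer" as a plain matrix-vector product.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(x, w):
    # w: list of weight rows, one row per output unit
    return [sigmoid(sum(xi * wi for xi, wi in zip(x, row))) for row in w]

def forward(x, weights):
    for w in weights:
        x = layer(x, w)
    return x

weights = [[[0.5, -0.2], [0.1, 0.3]],  # layer 1: 2 inputs -> 2 units
           [[1.0, -1.0]]]              # layer 2: 2 inputs -> 1 unit
out = forward([1.0, 2.0], weights)
print(0.0 < out[0] < 1.0)  # True: sigmoid output is a probability-like score
```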
S120: obtain a corresponding image adjustment parameter set according to the recognition result.
In the embodiments of the present invention, the image adjustment parameter set includes multiple groups of image adjustment parameters, which are historical image adjustment parameters for the face in the face image. In some embodiments, the electronic device may record the user's processing habits when processing different face images over a historical period, and analyze and learn these habits based on machine learning. Through this process of self-analysis and learning, the electronic device generates the image adjustment parameters for processing different face images, to obtain different adjustment parameter sets.
In some implementations, the image adjustment parameters may include: brightness value, exposure value, contrast, sharpness, liquify value, pixel blur radius, and other parameter values, which are not enumerated here.
In practical applications, recognition failure is unavoidable due to external factors or problems in the face image itself. For example, during face comparison, the face may differ from the face images stored in the system: a shaved beard, a changed hairstyle, newly worn glasses, or a changed expression may all cause the comparison to fail. When the face image cannot be recognized, the user may determine the recognition result manually. For example, the electronic device receives a result selection instruction triggered by the user through the display screen, obtains and displays a corresponding result list according to the result selection instruction, and selects a target result from the result list as the recognition result. The electronic device can then obtain the image adjustment parameter set corresponding to the recognition result.
In some embodiments, the recognition result includes a person identity identifier, which is used to distinguish the person to whom the face in the face images stored in the electronic device belongs. The person identity identifier is not limited to an ID card number; it may also be a person's name, a nickname, a serial number, or the like, as long as the face images of the same person can be grouped into one class. The step of "obtaining a corresponding image adjustment parameter set according to the recognition result" may then include:
obtaining, from a preset identifier set, a preset person identity identifier matching the person identity identifier;
obtaining a preset image adjustment parameter set corresponding to the preset person identity identifier, and using the preset image adjustment parameter set as the image adjustment parameter set corresponding to the recognition result.
In this embodiment, the correspondence between person identity identifiers and image adjustment parameter sets needs to be established in advance. That is, the image adjustment parameters corresponding to the face images of the same person are added to one set; when the person identity of the face image is recognized, that set is obtained from the many image adjustment parameter sets based on the person identity identifier.
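The identity-to-parameter-set correspondence can be sketched as a simple lookup, as below. The identifier strings and parameter values are invented for illustration; the patent does not prescribe a concrete storage format.

```python
# Sketch: face images of one person share one adjustment parameter set,
# fetched by a preset person identity identifier. Identifiers and values
# are illustrative only.

preset_sets = {
    "person_001": [{"brightness": 2, "contrast": 1}, {"sharpness": 3}],
    "person_002": [{"exposure": -1}],
}

def get_param_set(identity, sets):
    if identity not in sets:  # unrecognized: caller falls back to manual choice
        return None
    return sets[identity]

print(get_param_set("person_001", preset_sets)[0]["brightness"])  # 2
```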
S130: determine a deflection angle of the face in the face image relative to a reference face in a reference face image.
The reference face image serves as a reference and may be any image containing the face in the face image.
For example, referring to FIG. 2, if image A is the frontal image of the face in the face image and is taken as the reference face image, then image B is a left-deflected image of that face, deflected horizontally 45° to the left relative to the face in image A, and image C is a right-deflected image of that face, deflected horizontally 45° to the right relative to the face in image A. Similarly, if image B is taken as the reference face image, the face in image A is deflected horizontally 45° to the right relative to image B, and the face in image C is deflected horizontally 90° to the right relative to image B. Any one of images A, B, and C can serve as the reference face image.
S140: select a target image adjustment parameter from the image adjustment parameter set according to the deflection angle.
In practical applications, images captured from different angles necessarily present different pictures, so the required image adjustment parameters also differ.
In the embodiments of the present invention, the multiple groups of image adjustment parameters included in the image adjustment parameter set may be the image adjustment parameters required for face images at different face angles to achieve a preset picture effect. For example, the frontal image and the profile image of a face display different pictures, so the required image adjustment parameters also differ (for example, suppose the left cheek has a scar and the right cheek does not: when, because of the angle, the face image shows most of the left cheek and exposes the scar, the scar in the image needs to be processed; when the face image shows most of the right cheek and the scar on the left cheek is occluded, no scar processing is needed). Therefore, the orientation of the face in the face image can be determined according to the deflection angle of the face relative to the reference face in the reference face image, so as to determine the target image adjustment parameter to be selected. That is, the step of "selecting a target image adjustment parameter from the image adjustment parameter set according to the deflection angle" may include:
determining, from a plurality of preset angle intervals, a target angle interval into which the deflection angle falls;
obtaining, from the image adjustment parameter set, a preset image adjustment parameter corresponding to the target angle interval;
using the preset image adjustment parameter as the target image adjustment parameter.
The plurality of preset angle intervals may be obtained by the electronic device through deep learning and analysis of the user's behavior habits over a period of time.
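Mapping a deflection angle to its preset angle interval can be sketched with sorted boundary values, as below. The boundary angles and interval labels are invented for illustration; in the described method they would come from the learned preset intervals.

```python
# Sketch: find which preset angle interval a deflection angle falls into,
# using sorted interval edges. Boundaries and labels are illustrative.
import bisect

BOUNDARIES = [-45.0, -15.0, 15.0, 45.0]  # interval edges, in degrees
LABELS = ["far-left", "left", "frontal", "right", "far-right"]

def target_interval(deflection):
    # bisect_right returns how many boundaries lie at or below the angle,
    # which is exactly the index of the enclosing interval's label.
    return LABELS[bisect.bisect_right(BOUNDARIES, deflection)]

print(target_interval(0.0))    # frontal
print(target_interval(-30.0))  # left
print(target_interval(60.0))   # far-right
```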
S150: process the face image according to the target image adjustment parameter.
In the embodiments of the present invention, the face image may be processed according to the target image adjustment parameter in various ways. For example, an area that needs to be adjusted may be determined in the face image according to the target image adjustment parameter, and the determined area is then adjusted according to the obtained target image adjustment parameter.
In some embodiments, the image features of the face image may be adjusted according to the target image adjustment parameter, so as to process the face image. That is, the step of "processing the face image according to the target image adjustment parameter" may include:
obtaining a plurality of image features of the face image;
respectively generating initial layers corresponding to the plurality of image features, to obtain a layer set;
selecting a target layer from the layer set according to the target image adjustment parameter for adjustment;
fusing the adjusted target layer with the unselected initial layers in the layer set.
In some embodiments, the plurality of image features may include color features, shape features, texture features, and spatial relationship features. Specifically, the face image may be parsed, and an initial layer corresponding to each image feature is generated according to the parsing result; for example, the color feature corresponds to a color layer representing the color of the face image, and the shape feature corresponds to a shape layer representing the shape of the face image.
Since not all image features need to be modified, some of the parameters included in the target image adjustment parameter have empty values; therefore, only the parameters whose values are not empty are used. Accordingly, the target layers whose features need modification may be selected from the layer set according to the parameters whose values are not empty and adjusted, to obtain the adjusted target layers. Finally, the adjusted target layers are superimposed on and fused with the remaining unadjusted initial layers, to synthesize the processed face image.
Referring to FIG. 7, compared with the image before processing, the processed face image in FIG. 7 has had its shape, texture, and color features adjusted, specifically: enhanced brightness, increased sharpness, enhanced contrast, local skin smoothing, and hairstyle retouching.
In some embodiments, as shown in FIG. 3, the step of "determining the deflection angle of the face in the face image relative to the reference face in the reference face image" may include the following sub-steps:
S131: obtain a reference face image corresponding to the face image according to the recognition result;
S132: determine a corresponding first offset reference point in the reference face of the reference face image;
S133: determine, in the face of the face image, a second offset reference point corresponding to the first offset reference point;
S134: obtain position offset information of the second offset reference point relative to the first offset reference point;
S135: determine, according to the position offset information, the deflection angle of the face in the face image relative to the reference face in the reference face image.
In some embodiments, the position offset information may include an offset distance and an offset direction. Referring to FIG. 4, taking a frontal face image as the reference face image, image A is taken as the reference face image, and a first offset reference point X1 (a pupil) is determined in image A. Image B is taken as the currently acquired face image, and a corresponding second offset reference point X2 (the pupil) is determined in image B according to the first offset reference point X1. Specifically, the position of the second offset reference point X2 is decomposed graphically to determine its equivalent point X2' in image A. As shown in FIG. 4, there is no distance difference between X2' and the first reference point X1 in the vertical direction, while in the horizontal direction X2' has an offset distance d relative to the first reference point X1, with the offset direction being horizontally to the left. A corresponding deflection angle algorithm can then be invoked; after the obtained offset distance and offset direction are input, the deflection angle is output after computation.
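One possible deflection angle algorithm for the offset-based approach is sketched below, under the assumed geometric model that a landmark sitting at radius r from the head's vertical rotation axis shifts horizontally by d = r·sin(θ) when the head yaws by θ. Both this model and the pixel values are assumptions for illustration; the patent does not specify the algorithm.

```python
# Sketch: convert a measured horizontal offset d into a yaw angle, assuming
# a landmark at radius r from the rotation axis shifts by d = r * sin(theta).
import math

def deflection_from_offset(d, r):
    if abs(d) > r:
        raise ValueError("offset cannot exceed the landmark radius")
    return math.degrees(math.asin(d / r))

# pupil shifted 30 px left (negative) for an assumed landmark radius of 60 px:
angle = deflection_from_offset(-30.0, 60.0)
print(round(angle, 1))  # -30.0 degrees, i.e. a leftward yaw
```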
In some embodiments, as shown in FIG. 5, the step of "determining the deflection angle of the face in the face image relative to the reference face in the reference face image" may include the following sub-steps:
S131: obtain a reference face image corresponding to the face image according to the recognition result;
S136: determine a first reference vector in the face of the face image;
S137: determine, in the reference face of the reference face image, a second reference vector corresponding to the first reference vector;
S138: obtain the included angle between the first reference vector and the second reference vector;
S139: determine, according to the included angle, the deflection angle of the face in the face image relative to the reference face in the reference face image.
In this embodiment, the first reference vector may be determined in various ways. For example, the feature points in the face image may be obtained to form a feature point set, and then any two representative target feature points may be selected from the feature point set to compute the first reference vector. That is, the step of "determining a first reference vector in the face of the face image" may include:
determining a corresponding first feature point and second feature point in the face image;
determining the first reference vector according to the first feature point and the second feature point.
Referring to FIG. 6, taking a frontal face image as the reference face image, image A is taken as the reference face image, a first feature point Y1 (a pupil) and a second feature point Z1 (the nose) are determined in image A, and the first reference vector a from Y1 to Z1 is obtained. Image B is taken as the currently acquired face image; a corresponding feature point Y2 (the pupil) is determined in image B according to the first feature point Y1, a corresponding feature point Z2 (the nose) is determined in image B according to the second feature point Z1, and the second reference vector b from Y2 to Z2 is obtained.
In this embodiment, the included angle between the first reference vector and the second reference vector may be obtained in various ways. In some implementations, the angle between the first reference vector a and the second reference vector b may be computed directly by formula. In other implementations, still referring to FIG. 6, the angle α between the first reference vector a and the horizontal plane and the angle β between the second reference vector b and the horizontal plane may be computed separately; the difference between α and β is the included angle between the two vectors.
A corresponding deflection angle algorithm can then be invoked; after the value of the included angle is input, the deflection angle is output after computation.
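The second approach described above (computing each vector's angle to the horizontal and taking the difference) can be sketched as follows. The feature-point coordinates are invented for illustration.

```python
# Sketch: included angle between two reference vectors, via each vector's
# angle to the horizontal (atan2) and their difference.
import math

def vector(p_from, p_to):
    return (p_to[0] - p_from[0], p_to[1] - p_from[1])

def angle_to_horizontal(v):
    return math.degrees(math.atan2(v[1], v[0]))

def included_angle(a, b):
    # Normalize the difference into [0, 180] degrees.
    diff = abs(angle_to_horizontal(a) - angle_to_horizontal(b)) % 360.0
    return min(diff, 360.0 - diff)

a = vector((0, 0), (0, 2))       # pupil -> nose in the reference image (vertical)
b = vector((0, 0), (1, 1.732))   # same landmarks in the deflected image
print(round(included_angle(a, b)))  # 30
```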
Of course, the method of the embodiments of the present invention is also applicable to face images deflected in the vertical direction, and to face images deflected in both the horizontal and vertical directions simultaneously. The specific processing may refer to the above and is not repeated here.
As can be seen from the above, the face image processing method provided by the embodiments of the present invention recognizes a face image to obtain a recognition result, obtains a corresponding image adjustment parameter set according to the recognition result, determines the deflection angle of the face in the face image relative to the reference face in the reference face image, selects a target image adjustment parameter from the image adjustment parameter set according to the determined deflection angle, and processes the face image according to the target image adjustment parameter. This solution can determine, from the corresponding image adjustment parameter set, the target image adjustment parameter for processing the face image according to the recognition result of the face image combined with the deflection angle of the face in the image, without the user having to determine manually how to adjust the image, which improves the speed and efficiency of image processing, reduces operation time, and lowers the power consumption of the electronic device.
An embodiment of the present invention further provides a face image processing device, which may be integrated in a server. As shown in FIG. 8, the face image processing device 300 includes: a recognition module 301, an acquisition module 302, a determining module 303, a selection module 304, and a processing module 305.
The recognition module 301 is configured to recognize a face image to obtain a recognition result.
In the embodiments of the present invention, the face image may come from various sources: it may be captured in real time by the camera lens of the electronic device, stored in the electronic device in advance, or obtained from the network, for example, a face image carried in a message sent by a friend and received through an instant messaging tool (such as WeChat). Recognizing the face image means performing face recognition on it. The recognition result can be used to determine the person to whom different face images belong.
The acquisition module 302 is configured to obtain a corresponding image adjustment parameter set according to the recognition result.
In the embodiments of the present invention, the image adjustment parameter set includes multiple groups of image adjustment parameters, which are historical image adjustment parameters for the face in the face image. In some embodiments, the electronic device may record the user's processing habits when processing different face images over a historical period, and analyze and learn these habits based on machine learning. Through this process of self-analysis and learning, the electronic device generates the image adjustment parameters for processing different face images, to obtain different adjustment parameter sets.
In some implementations, the image adjustment parameters may include: brightness value, exposure value, contrast, sharpness, liquify value, pixel blur radius, and other parameter values, which are not enumerated here.
In some embodiments, the recognition result includes a person identity identifier, which is used to distinguish the person to whom the face in the face images stored in the electronic device belongs. The person identity identifier is not limited to an ID card number; it may also be a person's name, a nickname, a serial number, or the like, as long as the face images of the same person can be grouped into one class. The acquisition module 302 may then be configured to:
obtain, from a preset identifier set, a preset person identity identifier matching the person identity identifier;
obtain a preset image adjustment parameter set corresponding to the preset person identity identifier, and use the preset image adjustment parameter set as the image adjustment parameter set corresponding to the recognition result.
In this embodiment, the correspondence between person identity identifiers and image adjustment parameter sets needs to be established in advance. That is, the image adjustment parameters corresponding to the face images of the same person are added to one set; when the person identity of the face image is recognized, that set is obtained from the many image adjustment parameter sets based on the person identity identifier.
The determining module 303 is configured to determine a deflection angle of the face in the face image relative to a reference face in a reference face image.
The reference face image serves as a reference and may be any image containing the face in the face image.
The selection module 304 is configured to select a target image adjustment parameter from the image adjustment parameter set according to the deflection angle.
In the embodiments of the present invention, the multiple groups of image adjustment parameters included in the image adjustment parameter set may be the image adjustment parameters required for face images at different face angles to achieve a preset picture effect. For example, the frontal image and the profile image of a face display different pictures. Therefore, the orientation of the face in the face image can be determined according to the deflection angle of the face relative to the reference face in the reference face image, so as to determine the target image adjustment parameter to be selected.
In some embodiments, the selection module 304 may be configured to:
determine, from a plurality of preset angle intervals, a target angle interval into which the deflection angle falls;
obtain, from the image adjustment parameter set, a preset image adjustment parameter corresponding to the target angle interval;
use the preset image adjustment parameter as the target image adjustment parameter.
The processing module 305 is configured to process the face image according to the target image adjustment parameter.
In the embodiments of the present invention, the face image may be processed according to the target image adjustment parameter in various ways. For example, an area that needs to be adjusted may be determined in the face image according to the target image adjustment parameter, and the determined area is then adjusted according to the obtained target image adjustment parameter.
In some embodiments, referring to FIG. 9, the recognition result includes a person identity identifier, and the acquisition module 302 includes:
an identifier acquisition sub-module 3021, configured to obtain, from a preset identifier set, a preset person identity identifier matching the person identity identifier;
a set acquisition sub-module 3022, configured to obtain a preset image adjustment parameter set corresponding to the preset person identity identifier, and use the preset image adjustment parameter set as the image adjustment parameter set corresponding to the recognition result.
In some embodiments, as shown in FIG. 10, the determining module 303 includes: an image acquisition sub-module 3031, a first determining sub-module 3032, an information acquisition sub-module 3033, and a second determining sub-module 3034.
The image acquisition sub-module 3031 is configured to obtain a reference face image corresponding to the face image according to the recognition result;
the first determining sub-module 3032 is configured to determine a corresponding first offset reference point in the reference face of the reference face image, and to determine, in the face of the face image, a second offset reference point corresponding to the first offset reference point;
the information acquisition sub-module 3033 is configured to obtain position offset information of the second offset reference point relative to the first offset reference point;
the second determining sub-module 3034 is configured to determine, according to the position offset information, the deflection angle of the face in the face image relative to the reference face in the reference face image.
In some embodiments, as shown in FIG. 11, the determining module 303 includes: an image acquisition sub-module 3031, a third determining sub-module 3035, an included-angle acquisition sub-module 3036, and a fourth determining sub-module 3037.
The image acquisition sub-module 3031 is configured to obtain a reference face image corresponding to the face image according to the recognition result;
the third determining sub-module 3035 is configured to determine a first reference vector in the face of the face image, and to determine, in the reference face of the reference face image, a second reference vector corresponding to the first reference vector;
the included-angle acquisition sub-module 3036 is configured to obtain the included angle between the first reference vector and the second reference vector;
the fourth determining sub-module 3037 is configured to determine, according to the included angle, the deflection angle of the face in the face image relative to the reference face in the reference face image.
In some embodiments, referring to FIG. 12, the processing module 305 includes:
a feature acquisition sub-module 3051, configured to obtain a plurality of image features of the face image;
a generation sub-module 3052, configured to respectively generate initial layers corresponding to the plurality of image features, to obtain a layer set;
a selection sub-module 3053, configured to select a target layer from the layer set according to the target image adjustment parameter for adjustment;
a fusion sub-module 3054, configured to fuse the adjusted target layer with the unselected initial layers in the layer set.
In some embodiments, the plurality of image features may include color features, shape features, texture features, and spatial relationship features. Specifically, the face image may be parsed, and an initial layer corresponding to each image feature is generated according to the parsing result; for example, the color feature corresponds to a color layer representing the color of the face image, and the shape feature corresponds to a shape layer representing the shape of the face image.
Since not all image features need to be modified, some of the parameters included in the target image adjustment parameter have empty values; therefore, only the parameters whose values are not empty are used. Accordingly, the target layers whose features need modification may be selected from the layer set according to the parameters whose values are not empty and adjusted, to obtain the adjusted target layers. Finally, the adjusted target layers are superimposed on and fused with the remaining unadjusted initial layers, to synthesize the processed face image.
In some embodiments, the recognition result includes a person identity identifier, and the acquisition module 302 is configured to:
obtain, from a preset identifier set, a preset person identity identifier matching the person identity identifier;
obtain a preset image adjustment parameter set corresponding to the preset person identity identifier, and use the preset image adjustment parameter set as the image adjustment parameter set corresponding to the recognition result.
In some embodiments, the selection module is configured to:
determine, from a plurality of preset angle intervals, a target angle interval into which the deflection angle falls;
obtain, from the image adjustment parameter set, a preset image adjustment parameter corresponding to the target angle interval;
use the preset image adjustment parameter as the target image adjustment parameter.
As can be seen from the above, the face image processing device provided by the embodiments of the present invention recognizes a face image to obtain a recognition result, obtains a corresponding image adjustment parameter set according to the recognition result, determines the deflection angle of the face in the face image relative to the reference face in the reference face image, selects a target image adjustment parameter from the image adjustment parameter set according to the determined deflection angle, and processes the face image according to the target image adjustment parameter. This solution can determine the target image adjustment parameter for processing the face image according to the recognition result of the face image combined with the deflection angle of the face in the image, without the user having to determine manually how to adjust the image, which improves the speed and efficiency of image processing, reduces operation time, and lowers the power consumption of the electronic device.
An embodiment of the present invention further provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute any of the above face image processing methods.
An embodiment of the present invention further provides an electronic device, which may be a device such as a smartphone or a tablet computer. As shown in FIG. 13, the electronic device 400 includes a processor 401 and a memory 402, the processor 401 being electrically connected to the memory 402.
The processor 401 is the control center of the electronic device 400. It connects the various parts of the entire electronic device 400 using various interfaces and lines, and executes the various functions of the electronic device and processes data by running or loading applications stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the electronic device 400 as a whole.
In this embodiment, the processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more applications into the memory 402 according to the following steps, and runs the applications stored in the memory 402, thereby implementing various functions:
recognizing a face image to obtain a recognition result;
obtaining a corresponding image adjustment parameter set according to the recognition result;
determining a deflection angle of the face in the face image relative to a reference face in a reference face image;
selecting a target image adjustment parameter from the image adjustment parameter set according to the deflection angle;
processing the face image according to the target image adjustment parameter.
In some embodiments, the processor 401 performs the following steps:
obtaining a reference face image corresponding to the face image according to the recognition result;
determining a corresponding first offset reference point in the reference face of the reference face image;
determining, in the face of the face image, a second offset reference point corresponding to the first offset reference point;
obtaining position offset information of the second offset reference point relative to the first offset reference point;
determining, according to the position offset information, the deflection angle of the face in the face image relative to the reference face in the reference face image.
In some embodiments, the processor 401 performs the following steps:
obtaining a reference face image corresponding to the face image according to the recognition result;
determining a first reference vector in the face of the face image;
determining, in the reference face of the reference face image, a second reference vector corresponding to the first reference vector;
obtaining the included angle between the first reference vector and the second reference vector;
determining, according to the included angle, the deflection angle of the face in the face image relative to the reference face in the reference face image.
In some embodiments, the processor 401 performs the following steps: determining a corresponding first feature point and second feature point in the face image; determining the first reference vector according to the first feature point and the second feature point.
In some embodiments, the processor 401 performs the following steps:
obtaining a plurality of image features of the face image;
respectively generating initial layers corresponding to the plurality of image features, to obtain a layer set;
selecting a target layer from the layer set according to the target image adjustment parameter for adjustment;
fusing the adjusted target layer with the unselected initial layers in the layer set.
In some embodiments, the recognition result includes a person identity identifier, and the processor 401 performs the following steps:
obtaining, from a preset identifier set, a preset person identity identifier matching the person identity identifier;
obtaining a preset image adjustment parameter set corresponding to the preset person identity identifier, and using the preset image adjustment parameter set as the image adjustment parameter set corresponding to the recognition result.
In some embodiments, the processor 401 performs the following steps:
determining, from a plurality of preset angle intervals, a target angle interval into which the deflection angle falls;
obtaining, from the image adjustment parameter set, a preset image adjustment parameter corresponding to the target angle interval;
using the preset image adjustment parameter as the target image adjustment parameter.
The memory 402 may be used to store applications and data. The applications stored in the memory 402 contain instructions executable by the processor and may form various functional modules. The processor 401 executes various functional applications and data processing by running the applications stored in the memory 402.
In some embodiments, as shown in FIG. 14, the electronic device 400 further includes: a radio frequency circuit 403, a display screen 404, a control circuit 405, an input unit 406, an audio circuit 407, a sensor 408, and a power supply 409. The processor 401 is electrically connected to the radio frequency circuit 403, the display screen 404, the control circuit 405, the input unit 406, the audio circuit 407, the sensor 408, and the power supply 409, respectively.
The radio frequency circuit 403 is configured to transmit and receive radio frequency signals, so as to communicate with a server or other electronic devices through a wireless communication network.
The display screen 404 may be used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the electronic device, which may be composed of images, text, icons, video, and any combination thereof.
The control circuit 405 is electrically connected to the display screen 404 and is configured to control the display screen 404 to display information.
The input unit 406 may be configured to receive input digits, character information, or user characteristic information (such as fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The audio circuit 407 may provide an audio interface between the user and the electronic device through a speaker and a microphone.
The sensor 408 is configured to collect external environment information, and may include one or more of an ambient brightness sensor, an acceleration sensor, a gyroscope, and the like.
The power supply 409 is configured to supply power to the various components of the electronic device 400. In some embodiments, the power supply 409 may be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown in FIG. 14, the electronic device 400 may further include a camera, a Bluetooth module, and the like, which are not described here.
As can be seen from the above, an embodiment of the present invention provides an electronic device that recognizes a face image to obtain a recognition result, obtains a corresponding image adjustment parameter set according to the recognition result, determines the deflection angle of the face in the face image relative to the reference face in the reference face image, selects a target image adjustment parameter from the image adjustment parameter set according to the determined deflection angle, and processes the face image according to the target image adjustment parameter. This solution can determine the target image adjustment parameter for processing the face image according to the recognition result of the face image combined with the deflection angle of the face in the image, without the user having to determine manually how to adjust the image, which improves the speed and efficiency of image processing, reduces operation time, and lowers the power consumption of the electronic device.
An embodiment of the present invention further provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the face image processing method described in any of the above embodiments.
It should be noted that a person of ordinary skill in the art will understand that all or part of the steps in the various methods of the above embodiments may be completed by instructing the relevant hardware through a program, and the program may be stored in a computer-readable medium, which may include but is not limited to: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The face image processing method, device, storage medium, and electronic device provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, there will be changes in specific implementations and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (20)

  1. A face image processing method, wherein the face image processing method comprises:
    recognizing a face image to obtain a recognition result;
    obtaining a corresponding image adjustment parameter set according to the recognition result;
    determining a deflection angle of a face in the face image relative to a reference face in a reference face image;
    selecting a target image adjustment parameter from the image adjustment parameter set according to the deflection angle;
    processing the face image according to the target image adjustment parameter.
  2. The face image processing method according to claim 1, wherein the step of determining the deflection angle of the face in the face image relative to the reference face in the reference face image comprises:
    obtaining a reference face image corresponding to the face image according to the recognition result;
    determining a corresponding first offset reference point in the reference face of the reference face image;
    determining, in the face of the face image, a second offset reference point corresponding to the first offset reference point;
    obtaining position offset information of the second offset reference point relative to the first offset reference point;
    determining, according to the position offset information, the deflection angle of the face in the face image relative to the reference face in the reference face image.
  3. The face image processing method according to claim 1, wherein the step of determining the deflection angle of the face in the face image relative to the reference face in the reference face image comprises:
    obtaining a reference face image corresponding to the face image according to the recognition result;
    determining a first reference vector in the face of the face image;
    determining, in the reference face of the reference face image, a second reference vector corresponding to the first reference vector;
    obtaining an included angle between the first reference vector and the second reference vector;
    determining, according to the included angle, the deflection angle of the face in the face image relative to the reference face in the reference face image.
  4. The face image processing method according to claim 3, wherein the step of determining a first reference vector in the face of the face image comprises:
    determining a corresponding first feature point and second feature point in the face image;
    determining the first reference vector according to the first feature point and the second feature point.
  5. The face image processing method according to claim 1, wherein the step of processing the face image according to the target image adjustment parameter comprises:
    obtaining a plurality of image features of the face image;
    respectively generating initial layers corresponding to the plurality of image features, to obtain a layer set;
    selecting a target layer from the layer set according to the target image adjustment parameter for adjustment;
    fusing the adjusted target layer with the unselected initial layers in the layer set.
  6. The face image processing method according to claim 1, wherein the recognition result includes a person identity identifier;
    the step of obtaining a corresponding image adjustment parameter set according to the recognition result comprises:
    obtaining, from a preset identifier set, a preset person identity identifier matching the person identity identifier;
    obtaining a preset image adjustment parameter set corresponding to the preset person identity identifier, and using the preset image adjustment parameter set as the image adjustment parameter set corresponding to the recognition result.
  7. The face image processing method according to claim 1, wherein the step of selecting a target image adjustment parameter from the image adjustment parameter set according to the deflection angle comprises:
    determining, from a plurality of preset angle intervals, a target angle interval into which the deflection angle falls;
    obtaining, from the image adjustment parameter set, a preset image adjustment parameter corresponding to the target angle interval;
    using the preset image adjustment parameter as the target image adjustment parameter.
  8. A face image processing device, wherein the face image processing device comprises:
    a recognition module, configured to recognize a face image to obtain a recognition result;
    an acquisition module, configured to obtain a corresponding image adjustment parameter set according to the recognition result;
    a determining module, configured to determine a deflection angle of a face in the face image relative to a reference face in a reference face image;
    a selection module, configured to select a target image adjustment parameter from the image adjustment parameter set according to the deflection angle;
    a processing module, configured to process the face image according to the target image adjustment parameter.
  9. The face image processing device according to claim 8, wherein the determining module comprises:
    an image acquisition sub-module, configured to obtain a reference face image corresponding to the face image according to the recognition result;
    a first determining sub-module, configured to determine a corresponding first offset reference point in the reference face of the reference face image, and to determine, in the face of the face image, a second offset reference point corresponding to the first offset reference point;
    an information acquisition sub-module, configured to obtain position offset information of the second offset reference point relative to the first offset reference point;
    a second determining sub-module, configured to determine, according to the position offset information, the deflection angle of the face in the face image relative to the reference face in the reference face image.
  10. The face image processing device according to claim 8, wherein the determining module comprises:
    an image acquisition sub-module, configured to obtain a reference face image corresponding to the face image according to the recognition result;
    a third determining sub-module, configured to determine a first reference vector in the face of the face image, and to determine, in the reference face of the reference face image, a second reference vector corresponding to the first reference vector;
    an included-angle acquisition sub-module, configured to obtain the included angle between the first reference vector and the second reference vector;
    a fourth determining sub-module, configured to determine, according to the included angle, the deflection angle of the face in the face image relative to the reference face in the reference face image.
  11. The face image processing device according to claim 8, wherein the processing module comprises:
    a feature acquisition sub-module, configured to obtain a plurality of image features of the face image;
    a generation sub-module, configured to respectively generate initial layers corresponding to the plurality of image features, to obtain a layer set;
    a selection sub-module, configured to select a target layer from the layer set according to the target image adjustment parameter for adjustment;
    a fusion sub-module, configured to fuse the adjusted target layer with the unselected initial layers in the layer set.
  12. The face image processing device according to claim 8, wherein the recognition result includes a person identity identifier; the acquisition module comprises:
    an identifier acquisition sub-module, configured to obtain, from a preset identifier set, a preset person identity identifier matching the person identity identifier;
    a set acquisition sub-module, configured to obtain a preset image adjustment parameter set corresponding to the preset person identity identifier, and to use the preset image adjustment parameter set as the image adjustment parameter set corresponding to the recognition result.
  13. A storage medium, wherein the medium stores a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the face image processing method according to any one of claims 1 to 7.
  14. An electronic device, wherein the electronic device comprises a processor and a memory, the processor being electrically connected to the memory, the memory being configured to store instructions and data, and the processor being configured to perform the following steps:
    recognizing a face image to obtain a recognition result;
    obtaining a corresponding image adjustment parameter set according to the recognition result;
    determining a deflection angle of a face in the face image relative to a reference face in a reference face image;
    selecting a target image adjustment parameter from the image adjustment parameter set according to the deflection angle;
    processing the face image according to the target image adjustment parameter.
  15. The electronic device according to claim 14, wherein the step of determining the deflection angle of the face in the face image relative to the reference face in the reference face image comprises:
    obtaining a reference face image corresponding to the face image according to the recognition result;
    determining a corresponding first offset reference point in the reference face of the reference face image;
    determining, in the face of the face image, a second offset reference point corresponding to the first offset reference point;
    obtaining position offset information of the second offset reference point relative to the first offset reference point;
    determining, according to the position offset information, the deflection angle of the face in the face image relative to the reference face in the reference face image.
  16. The electronic device according to claim 14, wherein the step of determining the deflection angle of the face in the face image relative to the reference face in the reference face image comprises:
    obtaining a reference face image corresponding to the face image according to the recognition result;
    determining a first reference vector in the face of the face image;
    determining, in the reference face of the reference face image, a second reference vector corresponding to the first reference vector;
    obtaining an included angle between the first reference vector and the second reference vector;
    determining, according to the included angle, the deflection angle of the face in the face image relative to the reference face in the reference face image.
  17. The electronic device according to claim 16, wherein the step of determining a first reference vector in the face of the face image comprises:
    determining a corresponding first feature point and second feature point in the face image;
    determining the first reference vector according to the first feature point and the second feature point.
  18. The electronic device according to claim 14, wherein the step of processing the face image according to the target image adjustment parameter comprises:
    obtaining a plurality of image features of the face image;
    respectively generating initial layers corresponding to the plurality of image features, to obtain a layer set;
    selecting a target layer from the layer set according to the target image adjustment parameter for adjustment;
    fusing the adjusted target layer with the unselected initial layers in the layer set.
  19. The electronic device according to claim 14, wherein the recognition result includes a person identity identifier;
    the step of obtaining a corresponding image adjustment parameter set according to the recognition result comprises:
    obtaining, from a preset identifier set, a preset person identity identifier matching the person identity identifier;
    obtaining a preset image adjustment parameter set corresponding to the preset person identity identifier, and using the preset image adjustment parameter set as the image adjustment parameter set corresponding to the recognition result.
  20. The electronic device according to claim 14, wherein the step of selecting a target image adjustment parameter from the image adjustment parameter set according to the deflection angle comprises:
    determining, from a plurality of preset angle intervals, a target angle interval into which the deflection angle falls;
    obtaining, from the image adjustment parameter set, a preset image adjustment parameter corresponding to the target angle interval;
    using the preset image adjustment parameter as the target image adjustment parameter.
PCT/CN2017/091352 2017-06-30 2017-06-30 人脸图像处理方法、装置、存储介质及电子设备 WO2019000462A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP17916284.7A EP3647992A4 (en) 2017-06-30 2017-06-30 FACE IMAGE PROCESSING METHOD AND APPARATUS, INFORMATION MEDIUM, AND ELECTRONIC DEVICE
CN201780092006.8A CN110741377A (zh) 2017-06-30 2017-06-30 人脸图像处理方法、装置、存储介质及电子设备
PCT/CN2017/091352 WO2019000462A1 (zh) 2017-06-30 2017-06-30 人脸图像处理方法、装置、存储介质及电子设备
US16/700,584 US11163978B2 (en) 2017-06-30 2019-12-02 Method and device for face image processing, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/091352 WO2019000462A1 (zh) 2017-06-30 2017-06-30 人脸图像处理方法、装置、存储介质及电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/700,584 Continuation US11163978B2 (en) 2017-06-30 2019-12-02 Method and device for face image processing, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2019000462A1 true WO2019000462A1 (zh) 2019-01-03

Family

ID=64741898

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/091352 WO2019000462A1 (zh) 2017-06-30 2017-06-30 人脸图像处理方法、装置、存储介质及电子设备

Country Status (4)

Country Link
US (1) US11163978B2 (zh)
EP (1) EP3647992A4 (zh)
CN (1) CN110741377A (zh)
WO (1) WO2019000462A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652123A (zh) * 2020-06-01 2020-09-11 腾讯科技(深圳)有限公司 图像处理和图像合成方法、装置和存储介质
CN114339393A (zh) * 2021-11-17 2022-04-12 广州方硅信息技术有限公司 直播画面的显示处理方法、服务器、设备、***及介质
US11455831B2 (en) * 2017-07-25 2022-09-27 Arcsoft Corporation Limited Method and apparatus for face classification

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110741377A (zh) * 2017-06-30 2020-01-31 Oppo广东移动通信有限公司 人脸图像处理方法、装置、存储介质及电子设备
CN108229308A (zh) * 2017-11-23 2018-06-29 北京市商汤科技开发有限公司 目标对象识别方法、装置、存储介质和电子设备
CN110309691B (zh) * 2018-03-27 2022-12-27 腾讯科技(深圳)有限公司 一种人脸识别方法、装置、服务器及存储介质
CN113255399A (zh) * 2020-02-10 2021-08-13 北京地平线机器人技术研发有限公司 目标匹配方法和***、服务端、云端、存储介质、设备
KR102497805B1 (ko) 2020-07-31 2023-02-10 주식회사 펫타버스 인공지능 기반 반려동물 신원확인 시스템 및 방법
CN112085701B (zh) * 2020-08-05 2024-06-11 深圳市优必选科技股份有限公司 一种人脸模糊度检测方法、装置、终端设备及存储介质
CN112561787B (zh) * 2020-12-22 2024-03-22 维沃移动通信有限公司 图像处理方法、装置、电子设备及存储介质
CN113487745A (zh) * 2021-07-16 2021-10-08 思享智汇(海南)科技有限责任公司 一种增强现实的方法、装置及***
CN113657379A (zh) * 2021-08-09 2021-11-16 杭州华橙软件技术有限公司 图像处理方法、装置、计算机可读存储介质及处理器

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060280380A1 (en) * 2005-06-14 2006-12-14 Fuji Photo Film Co., Ltd. Apparatus, method, and program for image processing
CN103605965A (zh) * 2013-11-25 2014-02-26 苏州大学 一种多姿态人脸识别方法和装置
CN103793693A (zh) * 2014-02-08 2014-05-14 厦门美图网科技有限公司 一种人脸转向的检测方法及其应用该方法的脸型优化方法
CN105069007A (zh) * 2015-07-02 2015-11-18 广东欧珀移动通信有限公司 一种建立美颜数据库的方法及装置
CN105530435A (zh) * 2016-02-01 2016-04-27 深圳市金立通信设备有限公司 一种拍摄方法及移动终端

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7027618B2 (en) * 2001-09-28 2006-04-11 Koninklijke Philips Electronics N.V. Head motion estimation from four feature points
JP4606779B2 (ja) * 2004-06-07 2011-01-05 グローリー株式会社 画像認識装置、画像認識方法およびその方法をコンピュータに実行させるプログラム
US9690979B2 (en) * 2006-03-12 2017-06-27 Google Inc. Techniques for enabling or establishing the use of face recognition algorithms
JP5239625B2 (ja) * 2008-08-22 2013-07-17 セイコーエプソン株式会社 画像処理装置、画像処理方法および画像処理プログラム
CN101561710B (zh) * 2009-05-19 2011-02-09 重庆大学 一种基于人脸姿态估计的人机交互方法
US8854397B2 (en) * 2011-12-13 2014-10-07 Facebook, Inc. Photo selection for mobile devices
CN104182114A (zh) * 2013-05-22 2014-12-03 辉达公司 用于调整移动设备的画面显示方向的方法和***
US9286706B1 (en) * 2013-12-06 2016-03-15 Google Inc. Editing image regions based on previous user edits
CN103927719B (zh) 2014-04-04 2017-05-17 北京猎豹网络科技有限公司 图片处理方法及装置
JP6365671B2 (ja) * 2014-07-24 2018-08-01 富士通株式会社 顔認証装置、顔認証方法および顔認証プログラム
CN104537612A (zh) * 2014-08-05 2015-04-22 华南理工大学 一种自动的人脸图像皮肤美化方法
US10043089B2 (en) * 2015-03-11 2018-08-07 Bettina Jensen Personal identification method and apparatus for biometrical identification
JP6754619B2 (ja) * 2015-06-24 2020-09-16 三星電子株式会社Samsung Electronics Co.,Ltd. 顔認識方法及び装置
CN110741377A (zh) * 2017-06-30 2020-01-31 Oppo广东移动通信有限公司 人脸图像处理方法、装置、存储介质及电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3647992A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11455831B2 (en) * 2017-07-25 2022-09-27 Arcsoft Corporation Limited Method and apparatus for face classification
CN111652123A (zh) * 2020-06-01 2020-09-11 Tencent Technology (Shenzhen) Co., Ltd. Image processing and image synthesis method, apparatus, and storage medium
CN111652123B (zh) * 2020-06-01 2023-11-14 Tencent Technology (Shenzhen) Co., Ltd. Image processing and image synthesis method, apparatus, and storage medium
CN114339393A (zh) * 2021-11-17 2022-04-12 Guangzhou Fanggui Information Technology Co., Ltd. Display processing method for live-streaming images, server, device, system, and medium

Also Published As

Publication number Publication date
EP3647992A4 (en) 2020-07-08
US20200104568A1 (en) 2020-04-02
EP3647992A1 (en) 2020-05-06
CN110741377A (zh) 2020-01-31
US11163978B2 (en) 2021-11-02

Similar Documents

Publication Publication Date Title
WO2019000462A1 (zh) Face image processing method and device, storage medium, and electronic device
WO2018117428A1 (en) Method and apparatus for filtering video
WO2018117704A1 (en) Electronic apparatus and operation method thereof
WO2020105948A1 (en) Image processing apparatus and control method thereof
WO2019216499A1 (ko) Electronic device and control method thereof
WO2019103484A1 (ko) Multimodal emotion recognition device, method, and storage medium using artificial intelligence
WO2019143227A1 (en) Electronic device providing text-related image and method for operating the same
WO2019093819A1 (ko) Electronic device and operation method thereof
WO2015137666A1 (ko) Object recognition device and control method thereof
WO2020130747A1 (ko) Image processing device and method for style transfer
WO2019022472A1 (en) ELECTRONIC DEVICE AND ITS CONTROL METHOD
EP3669181A1 (en) Vision inspection management method and vision inspection system
WO2017131348A1 (en) Electronic apparatus and controlling method thereof
WO2020091519A1 (en) Electronic apparatus and controlling method thereof
WO2019045521A1 (ko) Electronic device and control method thereof
WO2022139155A1 (ко) Electronic device providing a content-based care service and control method thereof
WO2021230485A1 (ко) Method and apparatus for providing an image
WO2021025509A1 (en) Apparatus and method for displaying graphic elements according to object
EP3997559A1 (en) Device, method, and program for enhancing output content through iterative generation
WO2019190142A1 (en) Method and device for processing image
WO2021132798A1 (en) Method and apparatus for data anonymization
WO2019231068A1 (en) Electronic device and control method thereof
EP3707678A1 (en) Method and device for processing image
EP3738305A1 (en) Electronic device and control method thereof
WO2021206413A1 (en) Device, method, and computer program for performing actions on iot devices

Legal Events

Date Code Title Description
NENP Non-entry into the national phase (Ref country code: DE)
WWE WIPO information: entry into national phase (Ref document number: 2017916284; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2017916284; Country of ref document: EP; Effective date: 20200130)