WO2021128593A1 - Method, apparatus and system for face image processing - Google Patents

Method, apparatus and system for face image processing

Info

Publication number
WO2021128593A1
Authority
WO
WIPO (PCT)
Prior art keywords
region
face
original image
skin color
image
Prior art date
Application number
PCT/CN2020/078818
Other languages
English (en)
French (fr)
Inventor
孙文君
周凡贻
黄小寒
Original Assignee
上海传英信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海传英信息技术有限公司
Publication of WO2021128593A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • This application relates to the field of image processing technology, and in particular to a method, device and system for face image processing.
  • HDR: High-Dynamic Range, a high-dynamic-range image.
  • LDR: Low-Dynamic Range, a low-dynamic-range image.
  • This application provides a method, device, and system for face image processing, so as to display portrait details more clearly in various scenes, bring out the stereoscopic quality of dark-skinned portraits, and achieve consistent portrait photography across different environments, thereby improving the dynamic range of the face when photographing portraits, especially dark-skinned portraits.
  • In a first aspect, a face image processing method provided by an embodiment of the present application includes: obtaining an original image; selecting a first region of the original image; dividing skin color information of the original image according to the first region; and obtaining a corresponding target face image based on the skin color information.
  • In a possible design, before selecting the first region of the original image, the method further includes: selecting a face region from the original image, and if the brightness of the face region is detected to be not less than a preset target value, taking the corresponding original image as a standard image and storing the current exposure T.
  • In a possible design, selecting the first region of the original image includes: extracting features of the standard image, dividing the standard image into a plurality of sub-regions, and selecting the first region from the sub-regions, where the first region includes the forehead region of the face.
  • In a possible design, the method further includes: performing positive-exposure and negative-exposure processing on each sub-region according to the current exposure T to obtain a plurality of candidate sub-regions.
  • In a possible design, dividing skin color information of the original image according to the first region includes: matching features of the first region with features of the candidate sub-regions to obtain at least one corresponding candidate sub-region as a matching sub-region; and determining the skin color category corresponding to the first region according to the matching sub-region.
  • In a possible design, obtaining the corresponding target face image based on the skin color information includes: obtaining, according to the skin color category, the matching sub-region corresponding to each sub-region of the face region; and fusing all the matching sub-regions to obtain the target face image.
  • In a possible design, before determining the skin color category corresponding to the first region according to the matching sub-region, the method further includes: training on a training data set to obtain a one-to-one correspondence between matching sub-regions and the skin color categories of the first region.
  • In a possible design, after obtaining the corresponding target face image based on the skin color information, the method further includes: smoothing the target face image and displaying it on the display interface.
  • In a possible design, obtaining the original image includes: performing face detection on the preview interface; if no face region is detected, entering the photographing mode and adjusting the exposure according to a preset metering mode to obtain the original image.
  • In a second aspect, a face image processing apparatus provided by an embodiment of the present application includes: an acquisition module, used to obtain the original image; an input module, used to take the original image as the input of a skin-color face model, where the skin-color face model refers to dividing the original image into skin color categories according to the forehead region of the face and obtaining the corresponding target face image based on the skin color category; and an output module, configured to output the target face image through the skin-color face model.
  • In a third aspect, a face image processing system provided by an embodiment of the present application includes a memory and a processor, the memory storing executable instructions of the processor, wherein the processor is configured to execute, via the executable instructions, the face image processing method described in any one of the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the face image processing method described in any one of the first aspect is implemented.
  • In summary, the present application provides a method, device, and system for processing a face image. The method includes: obtaining an original image; selecting a first region of the original image; dividing skin color information of the original image according to the first region; and obtaining a corresponding target face image based on the skin color information. This not only displays portrait details more clearly in each scene, but also brings out the three-dimensional quality of dark-skinned portraits and keeps portrait photography consistent across different environments, thereby improving the dynamic range of the face when photographing portraits, especially dark-skinned portraits.
  • FIG. 1 is a schematic diagram of an application scenario of this application;
  • FIG. 2 is a flowchart of a face image processing method provided by Embodiment 1 of this application;
  • FIG. 3 is a schematic structural diagram of a face model provided by Embodiment 2 of this application;
  • FIG. 4 is a schematic structural diagram of a face image processing apparatus provided by Embodiment 3 of this application;
  • FIG. 5 is a schematic structural diagram of a face image processing apparatus provided by Embodiment 4 of this application;
  • FIG. 6 is a schematic structural diagram of a face image processing system provided by Embodiment 5 of this application.
  • HDR: High Dynamic Range.
  • HDR is an image post-processing technique, an image-mapping technique for brightness ranges that exceed what a display can reproduce. Mapping a wide brightness range onto the range that paper or a screen can represent is similar to a camera's exposure function, and the human eye has a similar capability: the camera's aperture controls how much light reaches the sensor, and after some processing of the recorded brightness, convincing photographs can be obtained.
  • FIG. 1 is a schematic diagram of an application scenario of this application.
  • At present, users use mobile phones, tablets, iPads and other smart devices to shoot scenes in order to obtain images with appropriate exposure.
  • As shown in FIG. 1, a user uses a mobile phone 11 to shoot a scene that includes a portrait. Because different scenes have different exposures and receive different amounts of light, the captured images alternate between bright and dark; in particular, the shooting of dark-skinned portraits cannot meet the high-dynamic-range requirements for portraits. It is therefore necessary to use the metering mode of this application for scene shooting.
  • In particular, when a dark-skinned portrait is captured, a target face image, that is, a high-dynamic face image 12, is output according to the skin-color face model, which improves the photographing experience for dark-skinned people.
  • FIG. 2 is a flowchart of a method for processing a face image provided in Embodiment 1 of this application. As shown in FIG. 2, the method for processing a face image in this embodiment may include:
  • S101: obtain the original image. Specifically, when taking a picture with a terminal device such as a mobile phone, face detection is performed on the preview interface (for example, the mobile phone screen). If no face is detected, the device enters the photographing mode and adjusts the exposure according to the preset metering mode to obtain the original image.
  • The original image includes face information.
  • The metering mode refers to the way the camera measures the reflectance of the subject; measuring the brightness of the light is the process by which the camera achieves correct exposure.
  • The advantage of through-the-lens metering is that it directly reflects the amount of light coming from the scene being viewed.
  • S102: select the first region of the original image. In an optional embodiment, before selecting the first region of the original image, the method further includes: selecting a face region from the original image, and if the brightness of the face region is detected to be not less than the preset target value, taking the corresponding original image as the standard image and storing the current exposure T.
  • For example, the face region is selected from the original image; if the brightness of the face region is detected to be not less than the preset target value AE, the original image is used as the standard image (see, for example, the picture displayed on the mobile phone 11 in FIG. 1), and the current exposure T corresponding to this original image is stored.
  • The current exposure indicates the exposure level of the image, that is, how much light was received: the more light received (the higher the exposure), the brighter the image; the lower the exposure, the darker the image.
  • The preset target value is not limited in this embodiment.
  • Further, obtaining the first region of the original image includes: extracting features of the standard image, dividing the standard image into a plurality of sub-regions, and selecting the first region from the sub-regions, where the first region includes the forehead region of the face. It also includes performing positive-exposure and negative-exposure processing on each sub-region according to the current exposure T to obtain multiple candidate sub-regions.
  • Specifically, the features of the standard image corresponding to the original image are extracted, and the standard image is divided into multiple sub-regions; the face region within it is divided into M sub-regions, where the sub-regions include the first region, for example the forehead region of the face.
  • Multi-frame exposure is then performed at the current exposure T: each sub-region of the standard image is given multi-frame positive and negative exposures with a small STEP, for example X positive exposures and X negative exposures, producing the corresponding candidate sub-regions.
  • For example, after a sub-region is given X positive and X negative exposures based on the current exposure T, 2X+1 candidate sub-regions corresponding to that sub-region are obtained.
  • In particular, for the exposure of the face region, after the forehead region of the face (that is, the first region) is given X positive and X negative exposures based on the current exposure T, 2X+1 candidate sub-regions corresponding to the forehead region are obtained.
  • To capture the brightness values of the different sub-regions of the face more comprehensively, the STEP used for multi-frame exposure should be small enough, that is, enough frames are required.
  • The 2X+1 candidate sub-regions collected for each sub-region are cached, so that M*(2X+1) candidate sub-images are obtained from the M sub-regions.
  • S103: divide skin color information of the original image according to the first region. In this embodiment, the features of the forehead region are matched against its corresponding candidate sub-regions to obtain at least one candidate sub-region, from which the matching sub-region is obtained, and the skin color category (for example, brown-black) is determined according to the matching sub-region; the matching sub-regions corresponding to the other (M-1) sub-regions of the face region are then obtained according to the brown-black skin color category, that is, a matching sub-region is obtained from the 2X+1 candidate sub-regions corresponding to each sub-region, and all matching sub-regions are fused to obtain the target face image.
  • S104: obtain, according to the skin color category, the matching sub-region corresponding to each sub-region of the face region; fuse all matching sub-regions to obtain the target face image.
  • Specifically, matching the features of the forehead region with the features of its candidate sub-regions may yield one or more successfully matched candidate sub-regions, one of which is taken as the matching sub-region; this matching sub-region determines the skin color category corresponding to the forehead region (that is, the first region) of the face, the matching sub-region corresponding to each sub-region of the face region is obtained according to that skin color category, and all matching sub-regions are then fused to obtain the target face image.
  • For example, the sub-regions may include the forehead region (from the eyebrows to the hairline), the cheek regions on either side of the face (from the eyes to the lower jaw), the eye regions bounded by the outer rim of the eyelids, and so on.
  • In an optional embodiment, after the corresponding target face image is obtained based on the skin color information, the method further includes: smoothing the target face image and displaying it on the display interface.
  • Specifically, after the matching sub-regions of each sub-region, especially those corresponding to the face region, are fused to obtain the target face image, noise or distortion in the target face image can be reduced by smoothing.
  • The final output is a high-dynamic face image that preserves all the details of the face while matching the brightness levels of the captured portrait, so that the portrait is both clear and three-dimensional; it is then shown on the display interface.
  • Because the exposure of the original image obtained in the prior art is not appropriate, the photographed face may appear alternately bright and dark.
  • In severe cases, even the forehead region of the captured face may appear alternately bright and dark.
  • In contrast, this embodiment effectively ensures that the obtained target image is clear, three-dimensional, and consistent, thereby improving the photographing experience for dark-skinned people, and it is suitable for processing face images in various scenes.
  • FIG. 3 is a schematic structural diagram of the face model provided by Embodiment 2 of this application. The model specifically includes a selection branch 21, a division branch 22, a matching branch 23, and a fusion branch 24. The selection branch is used to select the face region from the original image to obtain the standard image; the division branch is used to extract the features of the standard image, divide the standard image into multiple sub-regions, select the first region from the sub-regions, and obtain the multiple candidate sub-regions corresponding to each sub-region; the matching branch is used to match the features of the first region with the features of the candidate sub-regions to obtain at least one corresponding candidate sub-region as the matching sub-region, and to determine the skin color category corresponding to the first region according to the matching sub-region; the fusion branch is used to obtain, according to the skin color category, the matching sub-region corresponding to each sub-region of the face region, and to fuse all matching sub-regions to obtain the target face image.
  • In an optional embodiment, the original image is input into the skin-color face model, which can divide it into smaller sub-regions; likewise, the obtained standard image can be divided into the same number of smaller candidate sub-regions. The features of the forehead region are then matched with its corresponding candidate sub-regions to obtain at least one matching candidate sub-region and, from it, a corresponding matching sub-region, and the skin color category (for example, brown-black) is determined according to that matching sub-region. The matching sub-regions corresponding to the other sub-regions are then obtained according to the brown-black skin color category, that is, a matching sub-region is obtained from the multiple candidate sub-regions of each sub-region, and all matching sub-regions are fused to obtain the target face image. This effectively ensures that the target image is clear, three-dimensional, and consistent, thereby improving the photographing experience for dark-skinned people, and it is suitable for processing face images in various scenes.
  • In an optional embodiment, before determining the skin color category corresponding to the first region according to the matching sub-region, the method further includes: training on a training data set to obtain a one-to-one correspondence between matching sub-regions and the skin color categories of the first region.
  • Specifically, the training data set includes many face images of different skin colors.
  • The training data set can be set up and stored from an existing face database, or face images of different skin colors can be collected and downloaded from the Internet to form the training data set.
  • Based on analysis of a large number of dark-skinned portraits, N broad categories of dark-skinned people can be obtained, and the skin color categories are divided mainly with the forehead region of the portrait as the reference point, because the forehead region contains the richest information, such as H (hue), S (saturation), and Y (yellow) values.
  • This embodiment improves the dynamic range of the face when photographing portraits, especially dark-skinned portraits: it not only obtains clearer portrait details in various scenes, but also brings out the stereoscopic quality of dark-skinned portraits, and ensures consistent portrait photography across different environments.
  • FIG. 4 is a schematic structural diagram of a face image processing apparatus provided in Embodiment 3 of this application. As shown in FIG. 4, the face image processing apparatus in this embodiment may include:
  • The acquisition module 33 is used to obtain the original image;
  • the input module 34 is used to take the original image as the input of the skin-color face model;
  • wherein the skin-color face model refers to: dividing the original image into skin color categories according to the forehead region of the face, and obtaining the corresponding target face image based on the skin color category;
  • the output module 35 is used to output the target face image through the skin-color face model.
  • The face image processing apparatus of this embodiment can execute the technical solution of the method shown in FIG. 2.
  • For the specific implementation process and technical principles, refer to the related description of the method shown in FIG. 2, which will not be repeated here.
  • FIG. 5 is a schematic structural diagram of a face image processing apparatus provided by Embodiment 4 of this application.
  • On the basis of FIG. 4, before taking the original image as the input of the skin-color face model, the face image processing apparatus in this embodiment further includes:
  • the construction module 31, used to construct an initial face model, where the initial face model includes a selection branch, a division branch, a matching branch, and a fusion branch; the selection branch is used to select the face region from the original image to obtain the standard image; the division branch is used to extract the features of the standard image, divide the standard image into multiple sub-regions, select the first region from the sub-regions, and obtain the multiple candidate sub-regions corresponding to each sub-region; the matching branch is used to match the features of the first region with the features of the candidate sub-regions to obtain at least one corresponding candidate sub-region as the matching sub-region, and to determine the skin color category corresponding to the first region according to the matching sub-region; the fusion branch is used to obtain, according to the skin color category, the matching sub-region corresponding to each sub-region of the face region, and to fuse all matching sub-regions to obtain the target face image;
  • the obtaining module 32, used to train the initial face model on the training data set to obtain the skin-color face model.
  • FIG. 6 is a schematic structural diagram of a face image processing system provided in the fifth embodiment of this application.
  • The face image processing system 40 of this embodiment may include a processor 41 and a memory 42.
  • The memory 42 is used to store computer programs (such as application programs and functional modules that implement the above-mentioned face image processing method), computer instructions, etc.
  • The above-mentioned computer programs, computer instructions, etc. may be stored, in partitions, in one or more memories 42.
  • The above-mentioned computer programs, computer instructions, data, etc. can be called by the processor 41.
  • The processor 41 is configured to execute the computer program stored in the memory 42 to implement each step of the method involved in the foregoing embodiments.
  • The processor 41 and the memory 42 may be independent structures, or may be an integrated structure. When the processor 41 and the memory 42 are independent structures, the memory 42 and the processor 41 may be coupled and connected through the bus 43.
  • The server of this embodiment can execute the technical solution of the method shown in FIG. 2.
  • For the specific implementation process and technical principles, refer to the related description of the method shown in FIG. 2, which will not be repeated here.
  • The embodiments of the present application also provide a computer-readable storage medium.
  • The computer-readable storage medium stores computer-executable instructions.
  • When at least one processor of the user equipment executes the computer-executable instructions, the user equipment performs the various possible methods described above.
  • Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another.
  • The storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
  • An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and write information to the storage medium.
  • The storage medium may also be an integral part of the processor.
  • The processor and the storage medium may be located in an ASIC.
  • The ASIC may be located in the user equipment.
  • The processor and the storage medium may also exist as discrete components in the communication device.
  • A person of ordinary skill in the art can understand that all or part of the steps in the foregoing method embodiments can be implemented by hardware instructed by a program.
  • The aforementioned program can be stored in a computer-readable storage medium. When the program is executed, it performs the steps of the foregoing method embodiments; and the foregoing storage medium includes ROM, RAM, magnetic disks, optical disks, and other media that can store program code.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a method, apparatus, and system for face image processing. The method includes: obtaining an original image; selecting a first region of the original image; dividing skin color information of the original image according to the first region; and obtaining a corresponding target face image based on the skin color information. This not only displays portrait details more clearly in various scenes, but also brings out the three-dimensional quality of dark-skinned portraits and keeps portrait photography consistent across different environments, thereby improving the dynamic range of the face when photographing portraits, especially dark-skinned portraits.

Description

Method, apparatus and system for face image processing
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 26, 2019, with application number 201911370960.6 and entitled "Method, apparatus and system for face image processing", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing technology, and in particular to a method, apparatus and system for face image processing.
Background
HDR (High-Dynamic Range) is a technique that has emerged with the development of image processing technology to let cameras capture the character of a scene. Compared with an ordinary image, it provides a larger dynamic range and more image detail: from LDR (Low-Dynamic Range) images taken at different exposure times, the LDR image with the best detail at each exposure time is used to synthesize the final HDR image, which better reflects the visual experience of the real environment.
However, most metering modes of current HDR methods apply HDR to the whole picture scene and are rarely combined with portrait detection.
Even the HDR methods that do involve separate portrait detection merely process the portrait multiple times and take the sharpest pixels for synthesis. But there is no quantitative way to decide which pixels are the sharpest, and even if every pixel of the portrait is sharp, there is no guarantee that the synthesized face is free of problems: the three-dimensional quality of the portrait may disappear, the dynamic range of the portrait may actually decrease, and the portrait may look flat.
Summary of the Invention
This application provides a method, apparatus, and system for face image processing, so as to display portrait details more clearly in various scenes, bring out the stereoscopic quality of dark-skinned portraits, and achieve consistent portrait photography across different environments, thereby improving the dynamic range of the face when photographing portraits, especially dark-skinned portraits.
In a first aspect, an embodiment of this application provides a face image processing method, including:
obtaining an original image;
selecting a first region of the original image;
dividing skin color information of the original image according to the first region;
obtaining a corresponding target face image based on the skin color information.
In a possible design, before selecting the first region of the original image, the method further includes:
selecting a face region from the original image, and if the brightness of the face region is detected to be not less than a preset target value, taking the corresponding original image as a standard image and storing the current exposure T.
In a possible design, selecting the first region of the original image includes:
extracting features of the standard image, dividing the standard image into a plurality of sub-regions, and selecting the first region from the sub-regions, where the first region includes the forehead region of the face.
In a possible design, the method further includes:
performing positive-exposure and negative-exposure processing on each sub-region according to the current exposure T to obtain a plurality of candidate sub-regions.
In a possible design, dividing skin color information of the original image according to the first region includes:
matching features of the first region with features of the candidate sub-regions to obtain at least one corresponding candidate sub-region as a matching sub-region;
determining the skin color category corresponding to the first region according to the matching sub-region.
In a possible design, obtaining the corresponding target face image based on the skin color information includes:
obtaining, according to the skin color category, the matching sub-region corresponding to each sub-region of the face region;
fusing all the matching sub-regions to obtain the target face image.
In a possible design, before determining the skin color category corresponding to the first region according to the matching sub-region, the method further includes:
training on a training data set to obtain a one-to-one correspondence between matching sub-regions and the skin color categories of the first region.
In a possible design, after obtaining the corresponding target face image based on the skin color information, the method further includes:
smoothing the target face image and displaying it on the display interface.
In a possible design, obtaining the original image includes:
performing face detection on the preview interface; if no face region is detected, entering the photographing mode and adjusting the exposure according to a preset metering mode to obtain the original image.
In a second aspect, an embodiment of this application provides a face image processing apparatus, including:
an acquisition module, used to obtain the original image;
an input module, used to take the original image as the input of a skin-color face model, where the skin-color face model refers to: dividing the original image into skin color categories according to the forehead region of the face, and obtaining the corresponding target face image based on the skin color category;
an output module, used to output the target face image through the skin-color face model.
In a third aspect, an embodiment of this application provides a face image processing system, including a memory and a processor, the memory storing executable instructions of the processor, wherein the processor is configured to execute, via the executable instructions, the face image processing method described in any one of the first aspect.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the face image processing method described in any one of the first aspect is implemented.
This application provides a method, apparatus, and system for face image processing. The method includes: obtaining an original image; selecting a first region of the original image; dividing skin color information of the original image according to the first region; and obtaining a corresponding target face image based on the skin color information. This not only displays portrait details more clearly in various scenes, but also brings out the three-dimensional quality of dark-skinned portraits and keeps portrait photography consistent across different environments, thereby improving the dynamic range of the face when photographing portraits, especially dark-skinned portraits.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are some embodiments of this application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario of this application;
FIG. 2 is a flowchart of the face image processing method provided by Embodiment 1 of this application;
FIG. 3 is a schematic structural diagram of the face model provided by Embodiment 2 of this application;
FIG. 4 is a schematic structural diagram of the face image processing apparatus provided by Embodiment 3 of this application;
FIG. 5 is a schematic structural diagram of the face image processing apparatus provided by Embodiment 4 of this application;
FIG. 6 is a schematic structural diagram of the face image processing system provided by Embodiment 5 of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
The terms "first", "second", "third", "fourth", etc. (if any) in the specification, claims and accompanying drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this application described here can be implemented in orders other than those illustrated or described here. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to that process, method, product, or device.
The technical solution of this application and how it solves the above technical problems are described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of this application are described below with reference to the accompanying drawings.
HDR (High Dynamic Range) is an image post-processing technique, an image-mapping technique for brightness ranges that exceed what a display can reproduce. Mapping a wide brightness range onto the range that paper or a screen can represent is similar to a camera's exposure function, and the human eye has a similar capability: through the camera's aperture, the amount of light entering the sensor can be controlled, and after some processing of the brightness recorded by the sensor, convincing photographs can be obtained.
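As an editorial illustration only (not part of the claimed method), the following Python sketch shows one generic global tone-mapping operator, L/(1+L), which compresses a wide luminance range into a displayable range as described above; the choice of operator and the synthetic input are assumptions.

```python
import numpy as np

def tone_map_global(hdr_luminance: np.ndarray) -> np.ndarray:
    """Compress an HDR luminance array of arbitrary range into [0, 1)
    with the simple global operator L / (1 + L)."""
    scaled = hdr_luminance / (hdr_luminance.mean() + 1e-6)  # normalize by the scene "key"
    return scaled / (1.0 + scaled)

# Synthetic luminance spanning several orders of magnitude
hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(480, 640))
ldr = tone_map_global(hdr)
print(ldr.min(), ldr.max())  # values now fit a displayable range
```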
However, in most metering modes of the prior art, HDR is applied to the whole picture scene and is rarely combined with portrait detection; even the portrait HDR methods that do involve separate portrait detection merely process the portrait multiple times and take the sharpest pixels for synthesis.
FIG. 1 is a schematic diagram of an application scenario of this application. At present, users use mobile phones, tablets, iPads and other smart devices to shoot scenes in order to obtain images with appropriate exposure. As shown in FIG. 1, a user uses a mobile phone 11 to shoot a scene that includes a portrait. Because different scenes have different exposures and receive different amounts of light, the captured images alternate between bright and dark; in particular, the shooting of dark-skinned portraits cannot meet the high-dynamic-range requirements for portraits. It is therefore necessary to use the metering mode of this application for scene shooting, in particular to capture a dark-skinned portrait, and then to output a target face image, that is, a high-dynamic face image 12, according to the skin-color face model, which improves the photographing experience for dark-skinned people.
FIG. 2 is a flowchart of the face image processing method provided by Embodiment 1 of this application. As shown in FIG. 2, the face image processing method in this embodiment may include:
S101: obtain an original image.
Specifically, when taking a picture with a terminal device such as a mobile phone, face detection is performed on the preview interface (for example, the mobile phone screen). If no face is detected, the device enters the photographing mode and adjusts the exposure according to the preset metering mode to obtain the original image. The original image includes face information.
The metering mode refers to the way the camera measures the reflectance of the subject; measuring the brightness of the light is the process by which the camera achieves correct exposure. The advantage of through-the-lens metering is that it directly reflects the amount of light coming from the scene being viewed. There are four main metering modes: average metering, partial metering, spot metering, and center-weighted average metering. In general, the original image will have poor overall exposure consistency and will appear alternately bright and dark.
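For illustration, a minimal Python sketch of the S101 flow is given below: detect a face on the preview frame and, if none is found, meter the scene with a preset mode. Center-weighted average metering is used here purely as an example; the application does not fix which preset mode is used, and the helper names are hypothetical.

```python
import cv2
import numpy as np

def center_weighted_brightness(gray: np.ndarray) -> float:
    """Center-weighted average metering: weight pixels by a Gaussian centred
    on the frame and return the weighted mean brightness."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weights = np.exp(-(((ys - h / 2) / (0.5 * h)) ** 2 + ((xs - w / 2) / (0.5 * w)) ** 2))
    return float((gray * weights).sum() / weights.sum())

def acquire_original_image(preview_bgr: np.ndarray):
    """If no face is detected on the preview frame, fall back to the preset
    metering mode before capturing the original image (sketch of S101)."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        metered = center_weighted_brightness(gray)
        # ...the exposure would be adjusted toward a mid-grey target here...
        return preview_bgr, metered
    return preview_bgr, None
```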
S102: select a first region of the original image.
Specifically, in an optional embodiment, before selecting the first region of the original image, the method further includes: selecting a face region from the original image, and if the brightness of the face region is detected to be not less than the preset target value, taking the corresponding original image as the standard image and storing the current exposure T.
For example, the face region is selected from the original image by an existing method; if the brightness of the face region is detected to be not less than the preset target value AE, the original image is used as the standard image (see, for example, the picture displayed on the mobile phone 11 in FIG. 1), and the current exposure T corresponding to this original image is stored. The current exposure indicates the exposure level of the image, that is, how much light was received: the more light received (the higher the exposure), the brighter the image; the lower the exposure, the darker the image. The preset target value is not limited in this embodiment.
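A minimal sketch of this selection step follows, assuming a BT.601 luma approximation for "brightness" and an arbitrary AE target of 118 on a 0-255 scale; both are assumptions, since the application does not fix the brightness measure or the target value.

```python
import numpy as np

AE_TARGET = 118  # hypothetical preset target value on a 0-255 scale

def select_standard_image(original_bgr: np.ndarray, face_box, exposure_t: float):
    """Keep the frame as the standard image when the mean luma of the face
    region is not less than the preset target, and remember its exposure T."""
    x, y, w, h = face_box
    face = original_bgr[y:y + h, x:x + w]
    # ITU-R BT.601 luma approximation on a BGR image
    luma = 0.114 * face[..., 0] + 0.587 * face[..., 1] + 0.299 * face[..., 2]
    if luma.mean() >= AE_TARGET:
        return original_bgr.copy(), exposure_t  # standard image + stored T
    return None, None
```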
Obtaining the first region of the original image then includes: extracting features of the standard image, dividing the standard image into a plurality of sub-regions, and selecting the first region from the sub-regions, where the first region includes the forehead region of the face. It also includes performing positive-exposure and negative-exposure processing on each sub-region according to the current exposure T to obtain a plurality of candidate sub-regions.
Specifically, the features of the standard image corresponding to the original image are extracted, and the standard image is divided into multiple sub-regions; the face region within it is divided into M sub-regions, where the sub-regions include the first region, for example the forehead region of the face. Multi-frame exposure is then performed at the current exposure T: each sub-region of the standard image is given multi-frame positive and negative exposures with a small STEP, for example X positive exposures and X negative exposures, producing the corresponding candidate sub-regions. For example, after a sub-region is given X positive and X negative exposures based on the current exposure T, 2X+1 candidate sub-regions corresponding to that sub-region are obtained. This applies in particular to the exposure of the face region: after the forehead region of the face (that is, the first region) is given X positive and X negative exposures based on the current exposure T, 2X+1 candidate sub-regions corresponding to the forehead region are obtained. To capture the brightness values of the different sub-regions of the face more comprehensively, the STEP used for multi-frame exposure must be small enough, that is, enough frames are required. The 2X+1 candidate sub-regions collected for each sub-region are cached, so that M*(2X+1) candidate sub-images are obtained from the M sub-regions.
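The multi-frame exposure step can be pictured with the short sketch below. It produces 2X+1 candidate versions of each sub-region; a simple pixel-gain model (value * 2**EV) stands in for genuinely re-exposing the sensor, and the values of X and STEP are placeholders.

```python
import numpy as np

def exposure_bracket(subregion: np.ndarray, x_steps: int = 3, step_ev: float = 0.3):
    """Return 2X+1 candidates: the frame captured at the stored exposure T
    plus X brighter and X darker variants in small EV steps."""
    candidates = []
    for k in range(-x_steps, x_steps + 1):          # -X ... 0 ... +X
        gain = 2.0 ** (k * step_ev)
        candidates.append(
            np.clip(subregion.astype(np.float32) * gain, 0, 255).astype(np.uint8))
    return candidates                               # length == 2 * x_steps + 1

def bracket_all_subregions(subregions):
    """M sub-regions -> a cache of M * (2X+1) candidate sub-images."""
    return {idx: exposure_bracket(region) for idx, region in enumerate(subregions)}
```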
S103: divide skin color information of the original image according to the first region, that is, match the features of the first region with the features of the candidate sub-regions to obtain at least one corresponding candidate sub-region as the matching sub-region, and determine the skin color category corresponding to the first region according to the matching sub-region.
In this embodiment, the features of the forehead region are matched against its corresponding candidate sub-regions to obtain at least one candidate sub-region, from which the matching sub-region is obtained, and the skin color category (for example, brown-black) is determined according to the matching sub-region. The matching sub-regions corresponding to the other (M-1) sub-regions of the face region are then obtained according to the brown-black skin color category, that is, a matching sub-region is obtained from the 2X+1 candidate sub-regions corresponding to each sub-region, and all matching sub-regions are fused to obtain the target face image.
S104: obtain, according to the skin color category, the matching sub-region corresponding to each sub-region of the face region; fuse all matching sub-regions to obtain the target face image.
Specifically, matching the features of the forehead region with the features of its candidate sub-regions may yield one or more successfully matched candidate sub-regions, one of which is taken as the matching sub-region. This matching sub-region determines the skin color category corresponding to the forehead region (that is, the first region) of the face, the matching sub-region corresponding to each sub-region of the face region is obtained according to that skin color category, and all matching sub-regions are then fused to obtain the target face image. For example, the sub-regions may include the forehead region (from the eyebrows to the hairline), the cheek regions on either side of the face (from the eyes to the lower jaw), the eye regions bounded by the outer rim of the eyelids, and so on.
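Steps S103 and S104 can be sketched as below. The feature vector (per-channel mean and standard deviation), the nearest-neighbour matching rule, and the paste-back fusion are all simplifying assumptions; the application does not specify the feature or the fusion operator.

```python
import numpy as np

def region_features(region: np.ndarray) -> np.ndarray:
    """Toy feature vector: per-channel mean and standard deviation."""
    return np.concatenate([region.mean(axis=(0, 1)), region.std(axis=(0, 1))])

def best_match(reference: np.ndarray, candidates) -> np.ndarray:
    """Pick the candidate whose features are closest to the reference region."""
    ref = region_features(reference)
    dists = [np.linalg.norm(region_features(c) - ref) for c in candidates]
    return candidates[int(np.argmin(dists))]

def fuse_face(subregion_boxes, candidate_sets, skin_tone_reference, canvas_shape):
    """For every sub-region pick the candidate that matches the skin-tone
    reference derived from the forehead, then paste all matches back into
    one target face image."""
    target = np.zeros(canvas_shape, dtype=np.uint8)
    for idx, (x, y, w, h) in enumerate(subregion_boxes):
        target[y:y + h, x:x + w] = best_match(skin_tone_reference, candidate_sets[idx])
    return target
```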
In an optional embodiment, after the corresponding target face image is obtained based on the skin color information, the method further includes: smoothing the target face image and displaying it on the display interface.
Specifically, after the matching sub-regions of each sub-region, especially those corresponding to the face region, are fused to obtain the target face image, noise or distortion in the target face image can be reduced by smoothing. The final output is a high-dynamic face image that preserves all the details of the face while matching the brightness levels of the captured portrait, so that the portrait is both clear and three-dimensional; it is then shown on the display interface.
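A light smoothing pass, as mentioned above, could look like the following; the Gaussian kernel size is an assumption, and an edge-preserving filter would be an equally valid choice.

```python
import cv2

def smooth_target_face(target_face):
    """Suppress seams and noise left by fusing sub-regions that were exposed
    at different levels; cv2.bilateralFilter could be substituted to better
    preserve facial edges."""
    return cv2.GaussianBlur(target_face, (5, 5), 0)
```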
Because the exposure of the original image obtained in the prior art is not appropriate, the photographed face may appear alternately bright and dark; in severe cases, even the forehead region of the captured face may appear alternately bright and dark. This embodiment, in contrast, effectively ensures that the obtained target image is clear, three-dimensional, and consistent, thereby improving the photographing experience for dark-skinned people, and it is suitable for processing face images in various scenes.
With reference to the above example and FIG. 3, FIG. 3 is a schematic structural diagram of the face model provided by Embodiment 2 of this application. The model specifically includes a selection branch 21, a division branch 22, a matching branch 23, and a fusion branch 24. The selection branch is used to select the face region from the original image to obtain the standard image; the division branch is used to extract the features of the standard image, divide the standard image into multiple sub-regions, select the first region from the sub-regions, and obtain the multiple candidate sub-regions corresponding to each sub-region; the matching branch is used to match the features of the first region with the features of the candidate sub-regions to obtain at least one corresponding candidate sub-region as the matching sub-region, and to determine the skin color category corresponding to the first region according to the matching sub-region; the fusion branch is used to obtain, according to the skin color category, the matching sub-region corresponding to each sub-region of the face region, and to fuse all matching sub-regions to obtain the target face image.
In an optional embodiment, the original image is input into the skin-color face model, which can divide it into smaller sub-regions; likewise, the obtained standard image can be divided into the same number of smaller candidate sub-regions. The features of the forehead region are then matched with its corresponding candidate sub-regions to obtain at least one matching candidate sub-region and, from it, a corresponding matching sub-region, and the skin color category (for example, brown-black) is determined according to that matching sub-region. The matching sub-regions corresponding to the other sub-regions are then obtained according to the brown-black skin color category, that is, a matching sub-region is obtained from the multiple candidate sub-regions of each sub-region, and all matching sub-regions are fused to obtain the target face image. This effectively ensures that the target image is clear, three-dimensional, and consistent, thereby improving the photographing experience for dark-skinned people, and it is suitable for processing face images in various scenes.
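Purely as a structural aid, the four branches of the skin-color face model described above could be organised as the following skeleton; the class and method names are hypothetical and the bodies are placeholders, not the applicant's implementation.

```python
class SkinColorFaceModel:
    """Skeleton mirroring the selection, division, matching and fusion branches."""

    def select(self, original):
        """Select the face region and return the standard image plus exposure T."""
        ...

    def divide(self, standard):
        """Split the standard image into M sub-regions, pick the forehead as the
        first region, and bracket each sub-region into 2X+1 candidates."""
        ...

    def match(self, forehead, candidate_sets):
        """Match forehead features against its candidates and decide the skin color category."""
        ...

    def fuse(self, skin_color_category, candidate_sets):
        """Pick the matching candidate for every sub-region and merge them."""
        ...

    def __call__(self, original):
        standard, exposure_t = self.select(original)
        forehead, candidate_sets = self.divide(standard)
        category = self.match(forehead, candidate_sets)
        return self.fuse(category, candidate_sets)
```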
In an optional embodiment, before determining the skin color category corresponding to the first region according to the matching sub-region, the method further includes: training on a training data set to obtain a one-to-one correspondence between matching sub-regions and the skin color categories of the first region.
Specifically, the training data set includes many face images of different skin colors. The training data set can be set up and stored from an existing face database, or face images of different skin colors can be collected and downloaded from the Internet to form the training data set. Based on analysis of a large number of dark-skinned portraits, N broad categories of dark-skinned people can be obtained, and the skin color categories are divided mainly with the forehead region of the portrait as the reference point, because the forehead region contains the richest information, such as H (hue), S (saturation), and Y (yellow) values.
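As an illustration of using the forehead's H, S, and Y values as the reference for skin color categories, consider the sketch below; approximating Y via a CMYK-style conversion and classifying by nearest centroid are assumptions, and the centroids would come from the offline training described next.

```python
import cv2
import numpy as np

def forehead_hsy(forehead_bgr: np.ndarray) -> np.ndarray:
    """Mean hue (H), saturation (S) and yellow (Y) of the forehead region."""
    hsv = cv2.cvtColor(forehead_bgr, cv2.COLOR_BGR2HSV)
    b, g, r = [forehead_bgr[..., i].astype(np.float32) / 255.0 for i in range(3)]
    k = 1.0 - np.maximum.reduce([r, g, b])
    y = (1.0 - b - k) / (1.0 - k + 1e-6)  # CMYK-style yellow component
    return np.array([hsv[..., 0].mean(), hsv[..., 1].mean(), float(y.mean())])

def classify_skin_color(feature: np.ndarray, centroids: np.ndarray) -> int:
    """Nearest-centroid lookup into the N learned skin color categories."""
    return int(np.argmin(np.linalg.norm(centroids - feature, axis=1)))
```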
Through repeated training on this training data set, a one-to-one correspondence between matching sub-regions and the skin color categories of the forehead region of the face is finally obtained. The correspondence is highly stable and robust: it keeps the portrait bright and free of haze in backlit scenes, and even when the original image is shot outdoors on a bright sunny day with glare, the model can still improve the portrait.
This embodiment improves the dynamic range of the face when photographing portraits, especially dark-skinned portraits: it not only obtains clearer portrait details in various scenes, but also brings out the stereoscopic quality of dark-skinned portraits, and ensures consistent portrait photography across different environments.
FIG. 4 is a schematic structural diagram of the face image processing apparatus provided by Embodiment 3 of this application. As shown in FIG. 4, the face image processing apparatus in this embodiment may include:
an acquisition module 33, used to obtain the original image;
an input module 34, used to take the original image as the input of the skin-color face model, where the skin-color face model refers to: dividing the original image into skin color categories according to the forehead region of the face, and obtaining the corresponding target face image based on the skin color category;
an output module 35, used to output the target face image through the skin-color face model.
The face image processing apparatus of this embodiment can execute the technical solution of the method shown in FIG. 2; for the specific implementation process and technical principles, refer to the related description of the method shown in FIG. 2, which will not be repeated here.
FIG. 5 is a schematic structural diagram of the face image processing apparatus provided by Embodiment 4 of this application. As shown in FIG. 5, on the basis of FIG. 4, before taking the original image as the input of the skin-color face model, the face image processing apparatus in this embodiment further includes:
a construction module 31, used to construct an initial face model, where the initial face model includes a selection branch, a division branch, a matching branch, and a fusion branch; the selection branch is used to select the face region from the original image to obtain the standard image; the division branch is used to extract the features of the standard image, divide the standard image into multiple sub-regions, select the first region from the sub-regions, and obtain the multiple candidate sub-regions corresponding to each sub-region; the matching branch is used to match the features of the first region with the features of the candidate sub-regions to obtain at least one corresponding candidate sub-region as the matching sub-region, and to determine the skin color category corresponding to the first region according to the matching sub-region; the fusion branch is used to obtain, according to the skin color category, the matching sub-region corresponding to each sub-region of the face region, and to fuse all matching sub-regions to obtain the target face image;
an obtaining module 32, used to train the initial face model on the training data set to obtain the skin-color face model.
FIG. 6 is a schematic structural diagram of the face image processing system provided by Embodiment 5 of this application. As shown in FIG. 6, the face image processing system 40 of this embodiment may include a processor 41 and a memory 42.
The memory 42 is used to store computer programs (such as application programs and functional modules that implement the above face image processing method), computer instructions, and the like;
the above computer programs, computer instructions, and the like may be stored, in partitions, in one or more memories 42, and may be called by the processor 41.
The processor 41 is used to execute the computer program stored in the memory 42 to implement the steps of the method in the above embodiments.
For details, refer to the related description in the foregoing method embodiments.
The processor 41 and the memory 42 may be separate structures or an integrated structure. When the processor 41 and the memory 42 are separate structures, the memory 42 and the processor 41 may be coupled and connected through a bus 43.
The server of this embodiment can execute the technical solution of the method shown in FIG. 2; for the specific implementation process and technical principles, refer to the related description of the method shown in FIG. 2, which will not be repeated here.
In addition, an embodiment of this application also provides a computer-readable storage medium that stores computer-executable instructions; when at least one processor of a user equipment executes these instructions, the user equipment performs the various possible methods described above.
Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC, and the ASIC may be located in the user equipment. Of course, the processor and the storage medium may also exist as discrete components in a communication device.
Those of ordinary skill in the art can understand that all or part of the steps of the above method embodiments may be implemented by hardware instructed by a program. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes ROM, RAM, magnetic disks, optical disks, and other media that can store program code.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some or all of the technical features, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (10)

  1. A face image processing method, comprising:
    obtaining an original image;
    selecting a first region of the original image;
    dividing skin color information of the original image according to the first region;
    obtaining a corresponding target face image based on the skin color information.
  2. The method according to claim 1, wherein before selecting the first region of the original image, the method further comprises:
    selecting a face region from the original image, and if the brightness of the face region is detected to be not less than a preset target value, taking the corresponding original image as a standard image and storing the current exposure T.
  3. The method according to claim 2, wherein selecting the first region of the original image comprises:
    extracting features of the standard image, dividing the standard image into a plurality of sub-regions, and selecting the first region from the sub-regions, wherein the first region comprises the forehead region of the face.
  4. The method according to claim 3, wherein the method further comprises:
    performing positive-exposure and negative-exposure processing on each sub-region according to the current exposure T to obtain a plurality of candidate sub-regions.
  5. The method according to claim 4, wherein dividing skin color information of the original image according to the first region comprises:
    matching features of the first region with features of the candidate sub-regions to obtain at least one corresponding candidate sub-region as a matching sub-region;
    determining a skin color category corresponding to the first region according to the matching sub-region.
  6. The method according to claim 5, wherein obtaining the corresponding target face image based on the skin color information comprises:
    obtaining, according to the skin color category, the matching sub-region corresponding to each sub-region of the face region;
    fusing all the matching sub-regions to obtain the target face image.
  7. The method according to claim 5, wherein before determining the skin color category corresponding to the first region according to the matching sub-region, the method further comprises:
    training on a training data set to obtain a one-to-one correspondence between matching sub-regions and the skin color categories of the first region.
  8. The method according to any one of claims 1-7, wherein obtaining the original image comprises:
    performing face detection on a preview interface; if no face region is detected, entering a photographing mode and adjusting exposure according to a preset metering mode to obtain the original image.
  9. A face image processing apparatus, comprising:
    an acquisition module, used to obtain an original image;
    an input module, used to take the original image as the input of a skin-color face model, wherein the skin-color face model refers to: dividing the original image into skin color categories according to the forehead region of the face, and obtaining a corresponding target face image based on the skin color category;
    an output module, used to output the target face image through the skin-color face model.
  10. A computer-readable storage medium on which a computer program is stored, wherein when the program is executed by a processor, the face image processing method according to any one of claims 1-8 is implemented.
PCT/CN2020/078818 2019-12-26 2020-03-11 Method, apparatus and system for face image processing WO2021128593A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911370960.6 2019-12-26
CN201911370960.6A CN111127367A (zh) Method, apparatus and system for face image processing

Publications (1)

Publication Number Publication Date
WO2021128593A1 true WO2021128593A1 (zh) 2021-07-01

Family

ID=70503511

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/078818 WO2021128593A1 (zh) Method, apparatus and system for face image processing

Country Status (2)

Country Link
CN (1) CN111127367A (zh)
WO (1) WO2021128593A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870404B (zh) * 2021-09-23 2024-05-07 聚好看科技股份有限公司 Skin rendering method for a 3D model and display device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8306286B1 (en) * 2006-07-14 2012-11-06 Chatman Andrew S Method and apparatus for determining facial characteristics
CN108564558A (zh) * 2018-01-05 2018-09-21 广州广电运通金融电子股份有限公司 Wide dynamic image processing method, apparatus, device and storage medium
CN110248107A (zh) * 2019-06-13 2019-09-17 Oppo广东移动通信有限公司 Image processing method and apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792714A (zh) * 2021-11-16 2021-12-14 中国南方电网有限责任公司超高压输电公司广州局 Method, device and system for identifying personnel entering and leaving a converter station
CN113792714B (zh) * 2021-11-16 2022-05-17 中国南方电网有限责任公司超高压输电公司广州局 Method, device and system for identifying personnel entering and leaving a converter station
CN115965735A (zh) * 2022-12-22 2023-04-14 百度时代网络技术(北京)有限公司 Texture map generation method and apparatus
CN115965735B (zh) * 2022-12-22 2023-12-05 百度时代网络技术(北京)有限公司 Texture map generation method and apparatus

Also Published As

Publication number Publication date
CN111127367A (zh) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111402135B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
Ram Prabhakar et al. Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs
CN108335279B (zh) Image fusion and HDR imaging
CN105122302B (zh) Generation of ghost-free high dynamic range images
WO2021128593A1 (zh) Method, apparatus and system for face image processing
KR101662846B1 (ko) Apparatus and method for generating a bokeh effect in out-of-focus shooting
WO2018176925A1 (zh) HDR image generation method and apparatus
CN109844804B (zh) Image detection method, apparatus and terminal
CN108337445A (zh) Photographing method, related device and computer storage medium
CN107911625A (zh) Light metering method and apparatus, readable storage medium, and computer device
KR20170017911A (ko) Methods and systems for color processing of digital images
US20190318457A1 Image processing methods and apparatuses, computer readable storage media and electronic devices
US20170154437A1 Image processing apparatus for performing smoothing on human face area
WO2022261828A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN114979689B (zh) Multi-camera live broadcast directing method, device and medium
CN107682611B (zh) Focusing method and apparatus, computer-readable storage medium, and electronic device
Liba et al. Sky optimization: Semantically aware image processing of skies in low-light photography
US20240127403A1 Multi-frame image fusion method and system, electronic device, and storage medium
US20230033956A1 Estimating depth based on iris size
CN116055895B (zh) Image processing method and apparatus, chip system, and storage medium
WO2016202073A1 (zh) Image processing method and apparatus
CN109658360B (zh) Image processing method and apparatus, electronic device, and computer storage medium
CN109300186B (zh) Image processing method and apparatus, storage medium, and electronic device
CN114979487B (zh) Image processing method and apparatus, electronic device and storage medium
CN110766631A (zh) Face image retouching method and apparatus, electronic device, and computer-readable medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20905552

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20905552

Country of ref document: EP

Kind code of ref document: A1
