WO2022088976A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus (图像处理方法及装置)

Info

Publication number
WO2022088976A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
frequency
processed
pixel value
weighted
Prior art date
Application number
PCT/CN2021/116233
Other languages
English (en)
French (fr)
Inventor
秦文煜
陶建华
Original Assignee
北京达佳互联信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2022088976A1 publication Critical patent/WO2022088976A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present disclosure relates to the field of image technology, and in particular, to an image processing method and apparatus.
  • in order to improve the look of a captured image, the user usually applies beautification software to the image.
  • beautification generally includes "clarity" processing, but after such processing the image shows "white edges" at light-dark boundaries, and the image noise is amplified.
  • the present disclosure provides an image processing method and apparatus.
  • an image processing method comprising:
  • acquiring an image to be processed, where the image to be processed includes a face region; performing Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image corresponds to the region where the high-frequency information of the face region is located; performing, based on a weight image, weighted fusion processing on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed; and fusing the image to be processed with the third high-frequency image to obtain a first target image.
  • an image processing apparatus comprising:
  • an acquisition module configured to acquire an image to be processed, where the image to be processed includes a face region; a high-frequency processing module configured to perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image,
  • the first high-frequency image is an image corresponding to the area where the high-frequency information in the face area is located;
  • the weighting processing module is configured to perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on the weight image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed;
  • the fusion module is configured to perform fusion of the image to be processed and the third high-frequency image to obtain the first target image.
  • an electronic device comprising: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps:
  • acquiring an image to be processed, where the image to be processed includes a face region; performing Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image corresponds to the region where the high-frequency information of the face region is located; performing, based on a weight image, weighted fusion processing on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed; and fusing the image to be processed with the third high-frequency image to obtain a first target image.
  • a non-volatile storage medium; when instructions in the storage medium are executed by a processor of a server, the server is enabled to perform the following steps:
  • acquiring an image to be processed, where the image to be processed includes a face region; performing Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image corresponds to the region where the high-frequency information of the face region is located; performing, based on a weight image, weighted fusion processing on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed; and fusing the image to be processed with the third high-frequency image to obtain a first target image.
  • a computer program product that, when the instructions in the computer program product are executed by a processor of a server, enables the server to perform the following steps:
  • acquiring an image to be processed, where the image to be processed includes a face region; performing Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image corresponds to the region where the high-frequency information of the face region is located; performing, based on a weight image, weighted fusion processing on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed; and fusing the image to be processed with the third high-frequency image to obtain a first target image.
  • Gaussian blurring is performed on the first high-frequency image corresponding to the high-frequency region in the image to be processed to obtain the second high-frequency image;
  • based on the weight image, the first high-frequency image and the second high-frequency image are subjected to weighted fusion processing to obtain a third high-frequency image. By fusing the first high-frequency image, which carries the high-frequency information of the image to be processed, with its Gaussian-blurred counterpart, the sharpness of strong edges such as the facial contour and the contours of the facial features is appropriately weakened, which effectively suppresses the "white edge" effect. Finally, the image to be processed and the third high-frequency image are fused to obtain the first target image, which enhances the brightness of the high-frequency regions and makes the structure of the face region in the first target image more layered. Thus, while the definition of the image to be processed is improved, the "white edge" effect is suppressed and the background noise in the image is not amplified.
  • Fig. 1 is a schematic diagram of an image obtained by "clear" processing according to an exemplary embodiment.
  • FIG. 2 is a schematic diagram illustrating an application environment of an image processing method, apparatus, electronic device, and storage medium according to an exemplary embodiment.
  • Fig. 3 is a flowchart of an image processing method according to an exemplary embodiment.
  • Fig. 4 is a schematic diagram of a face mask image according to an exemplary embodiment.
  • Fig. 5 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • Fig. 6 is a block diagram of a server according to an exemplary embodiment.
  • FIG. 7 is a block diagram of an apparatus for data processing according to an exemplary embodiment.
  • beautification generally includes "clarity" processing, but after such processing the structure of the image is not improved and lacks a sense of hierarchy; in addition, "white edges" appear at light-dark boundaries and the image noise is amplified.
  • “Clear” processing can generally be achieved by image sharpening.
  • Image sharpening compensates the contours of an image, enhancing its edges and gray-level transitions to make it clearer. It is divided into two kinds: spatial-domain processing and frequency-domain processing. Image smoothing often blurs the borders and outlines in an image; to reduce this adverse effect, image sharpening technology is used to make the edges of the image clear again.
  • Sharpening reduces blur in the image by enhancing the high-frequency components, so it is also called high-pass filtering. However, while enhancing the edges of the image, sharpening also adds noise.
  • the principle of sharpening is: at a light-dark boundary, the dark side is made darker and the bright side brighter. The side effect is that the processed image shows "white edges" at these boundaries.
  • FIG. 1 is used as an example to illustrate the display effect of the image obtained by the “clear” process in the related art.
  • Fig. 1 is a schematic diagram of an image obtained by "clarity" processing according to an exemplary embodiment. As shown in Figure 1, taking a face image as an example, after the image undergoes "clarity" processing, a "white edge" appears at the junction between the face area 10 and the non-face area 20 (that is, the background area), which makes the processed image look less realistic and natural.
  • the present disclosure provides an image processing method, apparatus, electronic device and non-volatile storage medium.
  • the image processing method, apparatus, electronic device and non-volatile storage medium obtain the second high-frequency image by performing Gaussian blurring on the first high-frequency image corresponding to the high-frequency region in the image to be processed; then, based on the weight image, weighted fusion processing is performed on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image;
  • the weighted fusion with the second high-frequency image appropriately weakens the sharpness of strong edges such as the facial contour and the contours of the facial features, effectively suppressing the "white edge" effect; finally, the image to be processed and the third high-frequency image are fused to obtain
  • the first target image, which enhances the brightness of the high-frequency regions and makes the structure of the face region in the first target image more layered. Thus, while the definition of the image to be processed is improved, the "white edge" effect is suppressed and the background noise is not amplified.
  • the electronic device in the embodiment of the present disclosure may be, for example, a server.
  • the server 100 is communicatively connected with one or more clients 200 through the network 300 for data communication or interaction.
  • the server 100 may be a web server, a database server, or the like.
  • the client 200 may be, but not limited to, a personal computer (personal computer, PC), a smart phone, a tablet computer, a personal digital assistant (PDA), and the like.
  • the network 300 may be a wired or wireless network.
  • the image processing methods provided in the embodiments of the present disclosure can be applied to the client 200.
  • the embodiments of the present disclosure are described with the client 200 as the execution body. It can be understood that the described execution body does not constitute a limitation of the present disclosure.
  • Fig. 3 is a flowchart of an image processing method according to an exemplary embodiment. As shown in Figure 3, the image processing method may include the following steps:
  • S310 Acquire an image to be processed, where the image to be processed includes a face region.
  • S320 Perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image corresponds to the region where the high-frequency information of the face region is located.
  • S330 Perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on the weight image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed.
  • the second high-frequency image is obtained by performing Gaussian blurring on the first high-frequency image corresponding to the high-frequency region in the image to be processed. A weight image is generated in which the weights of contour-line pixels are larger and the weights of pixels outside the contour lines are smaller. Then, based on the weight image, weighted fusion processing is performed on the first high-frequency image and the second high-frequency image to obtain the third high-frequency image. Here, the sharpening weight of the first high-frequency image at strong edges such as the facial contour and the contours of the facial features is smaller, while the sharpening weight of the second high-frequency image, which reflects other high-frequency areas such as the forehead and the bridge of the nose, is larger; this effectively suppresses the "white edge" effect. Finally, the image to be processed and the third high-frequency image are fused to obtain the first target image, which improves the definition of the image to be processed.
  • the above steps may have the following specific implementation manners.
  • the client 200 may acquire an image to be processed, and the image to be processed includes a face region.
  • the facial area may be the facial area of a human face.
  • the facial area that the user wants to process may be further acquired.
  • the foreground area of the image to be processed can be obtained as the face area through techniques such as face key point recognition.
  • the client 200 can obtain the to-be-processed image in various public, legal and compliant ways, for example, it can obtain the to-be-processed image from a public data set, or obtain the to-be-processed image from the user after being authorized by the user.
  • the first high-frequency image in the image to be processed may be acquired in the following manner:
  • the first high-frequency image is extracted from the image to be processed based on an edge detection algorithm; or, the low-frequency image of the image to be processed is subtracted from the image to be processed to obtain the first high-frequency image.
  • the first high-frequency image includes high-frequency information of the image to be processed.
  • the first high-frequency image may include: eyebrows, eyes, lips, forehead, bridge of nose and other high-frequency information in the image to be processed that need to clearly display details.
  • the first high frequency image can be extracted from the image to be processed by an edge detection algorithm.
  • Edge detection is a fundamental problem in image processing and computer vision.
  • the purpose of edge detection is to identify points in digital images with obvious changes in brightness.
  • Significant changes in image properties often reflect significant events and changes in properties.
  • Notable changes in these image properties include: discontinuities in depth, discontinuities in surface orientation, changes in material properties, and changes in scene lighting. That is, the first high-frequency image is an image corresponding to the region where the high-frequency information in the face region is located.
  • the edge detection algorithm may include algorithms such as Sobel, Canny, Prewitt or Roberts.
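As a sketch of the edge-detection route, the snippet below approximates edge strength with a central-difference gradient magnitude. This is a simplified stand-in of my own, not the patent's method; a real implementation would typically use library operators such as Sobel or Canny.

```python
import numpy as np

def gradient_magnitude(img):
    """Approximate edge strength via central differences (a simple
    stand-in for the Sobel/Canny operators named in the text)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = gradient_magnitude(img)   # strong response around column 4
```

Thresholding `edges` would give a binary map of the high-frequency region.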
  • alternatively, the first high-frequency image may be obtained by subtracting the low-frequency image of the image to be processed from the image to be processed.
  • the low-frequency image can be obtained by performing Gaussian blurring on the image to be processed; the difference between the image to be processed and its low-frequency image then yields the first high-frequency image.
  • besides Gaussian blurring, other algorithms capable of implementing low-pass filtering may also be selected to obtain the low-frequency image, which is not specifically limited in this embodiment of the present disclosure.
  • high frequency information in the image to be processed can be quickly determined by extracting the first high frequency image from the image to be processed.
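The blur-and-subtract route above can be sketched in a few lines of NumPy. The separable Gaussian blur here (with zero padding at the borders) is a minimal stand-in for a library call such as OpenCV's `GaussianBlur`; sigma and the test image are illustrative assumptions.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur in plain NumPy (edge handling is simple
    zero padding, so values dip near the borders)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, k, mode="same")

S0 = np.full((20, 20), 0.5)    # flat gray test image in [0, 1]
G1 = S0 - gaussian_blur(S0)    # first high-frequency image (residual)
```

On a flat image the interior residual is (numerically) zero, since all the energy is low-frequency; real faces leave edges and fine detail in `G1`.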
  • G1 is the first high-frequency image
  • G2 is the second high-frequency image.
  • Gaussian blur processing, also called Gaussian smoothing, can smooth an image and suppress detail at different scales.
  • S330 may acquire the weighted image in the following manner:
  • Extract the key points in the image to be processed, where the key points include face key points and facial-feature key points; obtain the facial contour from the face key points and the facial-feature contours from the facial-feature key points; and generate, from these contours, a weight image representing the weight coefficients corresponding to the pixels in the image to be processed.
  • the key points in the image to be processed can be extracted by face key point detection, yielding the face key points and the facial-feature key points; connecting them gives the contour lines that represent the facial structure, that is, the facial contour and the facial-feature contours.
  • the width of these contour lines may be set to w pixels. Setting the pixel value on the contour lines to 1 and the pixel value of the other areas of the image to be processed to 0 yields a weight image that represents the weight coefficients corresponding to the pixels in the image to be processed.
  • the weight value of contour lines in the weighted image may be set to be greater than a preset threshold, and the weight value of non-contour lines in the weighted image may be set to be less than the preset threshold.
  • for example, the weight coefficient corresponding to the contour-line pixels in the image to be processed is 0.8, the weight coefficient corresponding to the non-contour pixels is 0.2, and the preset threshold is 0.5.
  • the color of the facial contour line and the facial features contour line is white, and the color of other pixels in the facial region of the image to be processed is black.
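A minimal sketch of building such a weight image, assuming a contour mask has already been rasterized from the detected landmarks (the mask below is synthetic; the 0.8/0.2/0.5 values come from the example above):

```python
import numpy as np

# Hypothetical contour mask: True on facial/feature contour pixels,
# e.g. rasterized lines of width w connecting detected landmarks.
contour = np.zeros((6, 6), dtype=bool)
contour[2, :] = True   # pretend row 2 is part of the jawline

# Weight image: 0.8 on contour pixels (above the 0.5 threshold),
# 0.2 elsewhere (below the threshold).
K = np.where(contour, 0.8, 0.2)
```

`K` then has the same shape as the image to be processed, one weight per pixel.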
  • the step of performing weighted fusion processing on the first high-frequency image and the second high-frequency image based on the weighted image to obtain the third high-frequency image can be implemented by the following formula (1):
  • G1 in the formula (1) is the first high-frequency image
  • G2 is the second high-frequency image
  • K is the weight image
  • G3 is the third high-frequency image.
  • weighted fusion processing is performed on the first high-frequency image and the second high-frequency image, which weakens the high-frequency information representing the facial contour and facial features in the first high-frequency image; that is, regions where the pixel value of the weight image is greater than 0 correspond to the regions of the first high-frequency image in which the high frequencies are weakened.
  • the weight of contour-line pixels is larger and the weight of pixels outside the contour lines is smaller; then, based on the weight image, the first high-frequency image and the second high-frequency image are weighted and fused to obtain the third high-frequency image. Since the sharpening weight of the first high-frequency image at strong edges such as the facial contour and the contours of the facial features is small, while the sharpening weight of the second high-frequency image, which reflects other high-frequency areas such as the forehead and the bridge of the nose, is relatively large, the "white edge" effect is effectively suppressed.
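Formula (1) is not reproduced in this text. One plausible reading consistent with the description (contour pixels, where K is large, take the blurred G2; other pixels keep the sharp G1) is sketched below; the exact form in the patent may differ.

```python
import numpy as np

def weighted_fusion(G1, G2, K):
    """Plausible form of formula (1): per-pixel blend of the sharp
    high-frequency image G1 and its blurred counterpart G2, weighted
    by the weight image K."""
    return (1.0 - K) * G1 + K * G2

# Toy example: constant images make the blend easy to check.
G1 = np.full((2, 2), 1.0)   # sharp high-frequency image
G2 = np.full((2, 2), 3.0)   # Gaussian-blurred high-frequency image
K = np.full((2, 2), 0.8)    # contour weight from the text's example
G3 = weighted_fusion(G1, G2, K)
```

With K = 0.8 on contours, G3 sits mostly at the blurred value, softening strong edges exactly where "white edges" would appear.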
  • S330 may specifically include the following steps:
  • Gaussian blurring is performed on the weighted image to obtain a Gaussian blurred weighted image; weighted fusion processing is performed on the first high frequency image and the second high frequency image based on the Gaussian blurred weighted image to obtain a third high frequency image.
  • Gaussian blurring is performed on the weighted image, and the obtained Gaussian-blurred weighted image has smoother edges, so that the subsequent third high-frequency image fused based on the Gaussian-blurred weighted image has a better display effect.
  • Unsharp masking (USM) is a common sharpening method: the image to be processed is first Gaussian-blurred; the blurred result, scaled by a coefficient, is subtracted from the image to be processed to obtain an edge image (i.e., the third high-frequency image); the edge image and the image to be processed are then linearly combined to obtain the sharpened image.
  • USM-based sharpening removes some small interfering details and noise, which makes the result more realistic and credible than sharpening directly with a convolution sharpening operator.
  • the above-mentioned steps of fusing the to-be-processed image and the third high-frequency image to obtain the first target image can be specifically implemented by the following formula (2):
  • S0 is the image to be processed
  • G3 is the third high-frequency image
  • R1 is the first target image
  • the remaining term in formula (2) is a preset coefficient
  • the first target image obtained by linearly combining the third high-frequency image and the to-be-processed image can remove some fine interfering details and noise, so that the face in the first target image has high definition.
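Formula (2) is not reproduced in this text; a plausible form of the linear combination it describes is sketched below. The coefficient name `c` and the clipping to [0, 1] are my assumptions.

```python
import numpy as np

def usm_combine(S0, G3, c=0.5):
    """Plausible form of formula (2): linearly combine the image to be
    processed S0 with the third high-frequency image G3, scaled by a
    hypothetical preset coefficient c. Pixel values assumed in [0, 1]."""
    return np.clip(S0 + c * G3, 0.0, 1.0)

S0 = np.array([[0.2, 0.8]])    # toy image
G3 = np.array([[0.5, -0.5]])   # toy high-frequency residual
R1 = usm_combine(S0, G3, c=1.0)
```

Larger `c` sharpens more aggressively; `c = 0` returns the original image unchanged.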
  • S340 may specifically include the following steps: perform brightness enhancement processing on the image to be processed to obtain a first image; reduce the second pixel value of the first pixels in the image to be processed to obtain first pixel values, where the second pixel value is less than a preset threshold; increase the fourth pixel value of the second pixels in the image to be processed to obtain third pixel values, where the fourth pixel value is greater than the preset threshold; obtain a second image according to the first pixel values and the third pixel values; perform weighted fusion processing on the first image and the second image to obtain a third image; and fuse the third image and the third high-frequency image to obtain the first target image.
  • the above-mentioned steps of performing brightness enhancement processing on the image to be processed to obtain the first image can be specifically implemented by the following formula (3):
  • S0 is the image to be processed
  • S1 is the first image
  • brightness enhancement processing is performed on the image to be processed to obtain a first image, so as to enhance the overall brightness of the image.
  • the above-mentioned steps of reducing the second pixel value of the first pixels in the image to be processed to obtain the first pixel values, increasing the fourth pixel value of the second pixels to obtain the third pixel values, and obtaining the second image according to the first pixel values and the third pixel values can be specifically implemented by the following formula (4):
  • S0 is the image to be processed
  • S2 is the second image.
  • the second pixel value (a pixel value of S0 less than 0.5) is below the preset threshold of 0.5; the fourth pixel value (a pixel value of S0 greater than 0.5) is above the preset threshold of 0.5.
  • the dark part of the image to be processed can be made darker and the bright part of the image to be processed brighter, so that the highlights and shadow parts of the image to be processed can be more prominent, and the facial structure of the obtained second image has more distinct layers.
  • the above-mentioned steps of performing weighted fusion processing on the first image and the second image to obtain the third image can be specifically implemented by the following formula (5):
  • S1 is the first image
  • S2 is the second image
  • S3 is the third image
  • the fusion coefficient is a preset value a, which can be a value preset by the user and can be adjusted according to actual needs.
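Formulas (3)-(5) are not reproduced in this text. The sketch below matches the described behavior under stated assumptions: gamma correction stands in for the brightness enhancement of formula (3), a smoothstep curve stands in for the darks-darker/brights-brighter mapping of formula (4), and the blend coefficient `a` follows the text. None of these exact functions are confirmed by the source.

```python
import numpy as np

def enhance(S0, a=0.5):
    """Hedged sketch of formulas (3)-(5). Gamma correction and
    smoothstep are stand-ins chosen to match the described behavior,
    not the patent's actual formulas. Pixels assumed in [0, 1]."""
    S1 = S0 ** 0.8                     # (3) brighten overall (gamma < 1)
    S2 = S0 * S0 * (3.0 - 2.0 * S0)    # (4) darks darker, brights brighter
    S3 = a * S1 + (1.0 - a) * S2       # (5) weighted fusion, preset a
    return S1, S2, S3

S0 = np.array([0.25, 0.5, 0.75])
S1, S2, S3 = enhance(S0)
```

The smoothstep stand-in maps 0.5 to itself, pushes values below 0.5 down, and pushes values above 0.5 up, matching the "highlights and shadows more prominent" description.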
  • the first target image can be obtained by fusing the third image and the third high-frequency image.
  • by fusing the third image, obtained after the brightness adjustment processing of the image to be processed, with the third high-frequency image, the layered sense of the structure of the first target image can be improved.
  • the above-mentioned steps of fusing the third image and the third high-frequency image to obtain the first target image may specifically include the following steps:
  • S4 is the fourth image, obtained by weighted fusion of S3 and S0 with a preset fusion coefficient b, which can be adjusted according to actual needs.
  • based on the face region mask map M, weighted fusion processing is performed on the fourth image and the image to be processed, so that the background is filtered out and the fifth image is obtained.
  • the pixel value of the face region in the fifth image is the pixel value of the fourth image
  • the pixel value of the fifth image outside the face region is the pixel value of the image to be processed.
  • the structural clarity of the face region can be improved without enhancing the background noise.
  • the fifth image and the third high-frequency image are fused to obtain the first target image, which can improve the clarity of the face and avoid enhancing the background noise.
  • the following processing may also be performed on the first target image:
  • the first target image and the image to be processed are fused based on the face mask image to obtain a second target image, where the face mask image marks the face region; the pixel values of the face region in the second target image are those of the first target image, and the pixel values outside the face region are those of the image to be processed.
  • the above-mentioned face mask image is generated according to the face area and non-face area in the image to be processed, and the face mask image is an image corresponding to the image to be processed and used to mark the face area.
  • the face region and the non-face region in the image to be processed can be determined with a face key point detection algorithm or a skin-color detection model; then, setting the mask value of the face region to 1 and the mask value of the region outside the face region to 0 yields the face mask image.
  • the region of interest (that is, the face region) is white, indicating that its pixels are all non-zero, and the non-interest region (that is, the non-face region) is black, indicating that its pixels are all 0; thus, once the image to be processed is ANDed with the face mask image, the result keeps only the region of interest of the image to be processed.
  • the step of fusing the first target image and the to-be-processed image based on the face mask image to obtain the second target image can be specifically implemented by the following formula (8):
  • R1 is the first target image
  • R2 is the second target image
  • M is the face mask image
  • S0 is the image to be processed.
  • the pixel values of the face region in the resulting second target image are those of the first target image, and the pixel values outside the face region are those of the image to be processed, so that the clarity of the face is improved without amplifying the background noise.
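Formula (8) is not reproduced in this text; the mask-driven blend it describes very likely takes the standard form below (my reading, not a quotation of the claim).

```python
import numpy as np

def mask_fusion(R1, S0, M):
    """Plausible form of formula (8): face pixels (M == 1) come from
    the first target image R1, background pixels (M == 0) from the
    original image S0."""
    return M * R1 + (1.0 - M) * S0

M = np.array([[1.0, 0.0]])     # toy mask: face pixel, background pixel
R1 = np.array([[0.9, 0.9]])    # sharpened first target image
S0 = np.array([[0.1, 0.1]])    # original image to be processed
R2 = mask_fusion(R1, S0, M)
```

With a hard 0/1 mask the background is passed through untouched, which is exactly why the sharpening never amplifies background noise.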
  • the face mask image is subjected to edge feathering processing to obtain a face mask image after edge feathering; based on the face mask image after edge feathering, the first target image and the image to be processed are fused to obtain a second target image.
  • the face mask image may be edge feathered using a guided filtering algorithm.
  • the guided filtering algorithm is an image filtering technique that filters a target image P (the input image) through a guide image G, so that the final output is broadly similar to the target image P while its texture resembles that of the guide image G.
  • the edge feathering of the face mask image by the guided filtering algorithm can smooth the boundary of the face mask image.
  • based on the edge-feathered face mask image, the first target image and the image to be processed are fused, so that the obtained second target image has smoother and more natural edges.
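As a rough illustration of feathering, the snippet below softens a hard mask edge with a box blur. This is only a simple stand-in: the text specifies a guided filter, which additionally snaps the softened mask to the guide image's edges, whereas a plain blur does not.

```python
import numpy as np

def feather(M, r=2):
    """Soften a binary mask's edges with a separable box blur - a
    simple stand-in for the guided-filter feathering in the text."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    out = np.apply_along_axis(np.convolve, 1, M, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, k, mode="same")

M = np.zeros((10, 10))
M[:, 5:] = 1.0        # hard mask: background left, face right
Mf = feather(M)       # edge ramps smoothly between 0 and 1
```

The feathered mask then drives the same `M * R1 + (1 - M) * S0` style blend, so the face/background transition fades gradually instead of cutting abruptly.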
  • the embodiment of the present disclosure obtains the second high-frequency image by performing Gaussian blurring on the first high-frequency image corresponding to the high-frequency region in the image to be processed; from the pixel values of the contour image, a weight image representing the weight coefficients of the pixels in the image to be processed is generated, with larger weights for contour-line pixels and smaller weights for pixels outside the contour lines.
  • the first high-frequency image and the second high-frequency image are then subjected to weighted fusion processing to obtain the third high-frequency image; since the sharpening weight of the first high-frequency image at strong edges such as the facial contour and the contours of the facial features is relatively small, and the sharpening weight of the second high-frequency image, which reflects other high-frequency areas such as the forehead and the bridge of the nose, is relatively large, the "white edge" effect is effectively suppressed.
  • finally, the first target image is obtained by fusing the image to be processed with the third high-frequency image, which enhances the brightness of the high-frequency regions and makes the structure of the face region in the first target image more layered. Thus, the "white edge" effect is suppressed while the sharpness of the image to be processed is improved, and the background noise in the image is not amplified.
  • the present disclosure also provides an image processing apparatus.
  • the specific description will be made with reference to FIG. 5 .
  • Fig. 5 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • the image processing apparatus 500 may include an acquisition module 510 , a high frequency processing module 520 , a weighting processing module 530 and a fusion module 540 .
  • the acquiring module 510 is configured to perform acquiring an image to be processed, where the image to be processed includes a face region.
  • the high-frequency processing module 520 is configured to perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face area is located.
  • the weighting processing module 530 is configured to perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on the weighted image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed.
  • the fusion module 540 is configured to perform fusion of the to-be-processed image and the third high-frequency image to obtain the first target image.
  • the fusion module 540 is further configured to perform fusion of the first target image and the to-be-processed image based on the face mask image to obtain the second target image, and the face mask image is used to mark the face region;
  • wherein the pixel value of the face area in the second target image is the pixel value of the first target image, and the pixel value of the second target image outside the face area is the pixel value of the image to be processed.
  • the fusion module 540 includes:
  • the brightness enhancement module is configured to perform brightness enhancement processing on the image to be processed to obtain the first image.
  • the brightness reduction module is configured to reduce the second pixel value of the first pixel in the image to be processed to obtain the first pixel value, where the second pixel value is smaller than the preset threshold.
  • the brightness enhancement module is further configured to perform increasing the fourth pixel value of the second pixel in the image to be processed to obtain a third pixel value, where the fourth pixel value is greater than a preset threshold.
  • a determination module configured to perform obtaining the second image according to the first pixel value and the third pixel value.
  • the fusion module 540 is further configured to perform weighted fusion processing on the first image and the second image to obtain a third image.
  • the fusion module 540 is further configured to perform fusion of the third image and the third high-frequency image to obtain the first target image.
  • the fusion module 540 is further configured to perform weighted fusion processing on the third image and the to-be-processed image to obtain a fourth image.
  • the fusion module 540 is further configured to perform weighted fusion processing on the fourth image and the to-be-processed image based on the face mask image to obtain a fifth image; wherein the pixel value of the face region in the fifth image is the pixel value of the fourth image, and the pixel value of the fifth image outside the face region is the pixel value of the image to be processed.
  • the fusion module 540 is further configured to perform fusion of the fifth image and the third high-frequency image to obtain the first target image.
  • the image processing apparatus 500 further includes: a first extraction module and a subtraction module.
  • the first extraction module is configured to extract the first high-frequency image from the to-be-processed image based on an edge detection algorithm.
  • the subtraction module is configured to subtract the low-frequency image of the image to be processed from the image to be processed to obtain the first high-frequency image.
  • the image processing apparatus 500 further includes: a second extraction module, a connection module, and a generation module.
  • the second extraction module is configured to perform extraction of key points in the image to be processed, and the key points include facial key points and facial feature key points.
  • the connection module is configured to obtain the facial contour line according to the facial key points, and obtain the facial features contour line according to the facial feature key points.
  • the generating module is configured to generate, according to the facial contour line and the facial features contour line, a weight image used to represent the weight coefficient corresponding to the pixel point in the image to be processed.
  • the fusion module 540 further includes a Gaussian blurring module.
  • the Gaussian blurring module is configured to perform Gaussian blurring on the weighted image to obtain a Gaussian blurred weighted image.
  • the fusion module 540 is further configured to perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on the Gaussian blurred weighted image to obtain a third high-frequency image.
  • the fusion module 540 further includes an edge feathering module.
  • the edge feathering module is configured to perform edge feathering processing on the face mask image to obtain a face mask image after edge feathering.
  • the fusion module 540 is further configured to perform fusion of the first target image and the to-be-processed image based on the face mask image after edge feathering to obtain a second target image.
  • the embodiment of the present disclosure obtains the second high-frequency image by performing Gaussian blurring on the first high-frequency image, which corresponds to the high-frequency region in the image to be processed;
  • a weight image representing the weight coefficients corresponding to the pixels in the image to be processed is then generated from the pixel values of the contour image containing the facial contour line and the facial-feature contour lines;
  • here, the weight corresponding to a pixel on a contour line is larger, and the weight corresponding to a pixel in the area outside the contour lines is smaller;
  • the first high-frequency image and the second high-frequency image are then subjected to weighted fusion processing based on the weight image to obtain a third high-frequency image;
  • the sharpening weight corresponding to the first high-frequency image, which reflects strong edges such as the facial contour and the facial-feature contours, is relatively small, while the sharpening weight corresponding to the second high-frequency image, which reflects other high-frequency areas such as the forehead and the bridge of the nose, is relatively large, which can effectively suppress the “white edge” effect;
  • finally, the image to be processed is fused with the third high-frequency image to obtain the first target image, which can enhance the brightness of the high-frequency areas and give the structure of the face area in the first target image a stronger sense of depth.
  • an electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps:
  • acquire an image to be processed, where the image to be processed includes a face region; perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face region is located; perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weighted image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain the first target image.
  • the electronic device may be, for example, a server.
  • Fig. 6 is a block diagram of a server according to an exemplary embodiment. Referring to Fig. 6, an embodiment of the present disclosure further provides a server, including a processor 610, a communication interface 620, a memory 630 and a communication bus 640, where the processor 610, the communication interface 620 and the memory 630 communicate with each other through the communication bus 640.
  • the memory 630 is used for storing instructions executable by the processor 610 .
  • acquire an image to be processed, where the image to be processed includes a face region; perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face region is located; perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weighted image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain the first target image.
  • the second high-frequency image is obtained by performing Gaussian blurring on the first high-frequency image, which corresponds to the high-frequency region in the image to be processed;
  • a weight image representing the weight coefficients corresponding to the pixels in the image to be processed is then generated from the pixel values of the contour image containing the facial contour line and the facial-feature contour lines;
  • here, the weight corresponding to a pixel on a contour line is larger, and the weight corresponding to a pixel in the area outside the contour lines is smaller;
  • the first high-frequency image and the second high-frequency image are weighted and fused based on the weight image to obtain the third high-frequency image;
  • the sharpening weight corresponding to the first high-frequency image, which reflects strong edges such as the facial contour and the facial-feature contours, is relatively small, while the sharpening weight corresponding to the second high-frequency image, which reflects other high-frequency areas such as the forehead and the bridge of the nose, is relatively large, which can effectively suppress the “white edge” effect;
  • finally, the image to be processed is fused with the third high-frequency image to obtain the first target image, which can improve the brightness of the high-frequency areas and give the structure of the face area in the first target image a stronger sense of depth. Thereby, while improving the definition of the image to be processed, the “white edge” effect can be suppressed, and the background noise in the image is not enhanced.
  • FIG. 7 is a block diagram of an apparatus for data processing according to an exemplary embodiment.
  • the device 700 may be provided as a server.
  • server 700 includes processing component 722, which further includes one or more processors, and a memory resource, represented by memory 732, for storing instructions executable by processing component 722, such as application programs.
  • An application program stored in memory 732 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 722 is configured to execute instructions to perform the following steps:
  • acquire an image to be processed, where the image to be processed includes a face region; perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face region is located; perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weighted image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain the first target image.
  • the device 700 may also include a power component 726 configured to perform power management of the device 700, a wired or wireless network interface 750 configured to connect the device 700 to a network, and an input output (I/O) interface 758.
  • Device 700 may operate based on an operating system stored in memory 732, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
  • a non-volatile storage medium is also provided.
  • the server can perform the following steps:
  • acquire an image to be processed, where the image to be processed includes a face region; perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face region is located; perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weighted image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain the first target image.
  • the non-volatile storage medium may be a non-transitory computer-readable storage medium; for example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a computer program product is also provided.
  • the server can perform the following steps:
  • acquire an image to be processed, where the image to be processed includes a face region; perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face region is located; perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weighted image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain the first target image.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

一种图像处理方法及装置,涉及图像技术领域。其中,该图像处理方法包括:获取待处理图像,待处理图像包括面部区域(S310);对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像(S320);基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像根据待处理图像中的面部轮廓线和五官轮廓线生成;将待处理图像和第三高频图像进行融合,得到第一目标图像(S340)。

Description

图像处理方法及装置
相关申请的交叉引用
本公开要求在2020年10月29日在中国提交的中国专利申请号No.202011182131.8的优先权,其全部内容通过引用并入本文。
技术领域
本公开涉及图像技术领域,尤其涉及一种图像处理方法及装置。
背景技术
随着移动电子设备的不断发展,很多用户喜欢通过移动电子设备上的摄像头进行拍照。
目前,在相关的技术中,为了使拍摄出的图像效果更佳,用户通常会用一些美颜软件对拍摄的图像进行美化处理。其中,美颜一般包括“清晰”处理,但是“清晰”处理之后,处理后的图像会出现图像明暗交界出现“白边”的情况、而且还会增加图像的噪声。
发明内容
本公开提供一种图像处理方法及装置。
本公开的技术方案如下:
根据本公开的一些实施例,提供一种图像处理方法,包括:
获取待处理图像,待处理图像包括面部区域;对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像;基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数;将待处理图像和第三高频图像进行融合,得到第一目标图像。
根据本公开的一些实施例,提供一种图像处理装置,包括:
获取模块,被配置为执行获取待处理图像,待处理图像包括面部区域;高频处理模块,被配置为执行对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像;加权处理模块,被配置为执行基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数;融合模块,被配置为执行将待处理图像和第三高频图像进行融合,得到第一目标图像。
根据本公开的一些实施例,提供一种电子设备,包括:处理器;用于存储处理器可执行指令的存储器;其中,处理器被配置为执行指令,以实现以下步骤:
获取待处理图像,待处理图像包括面部区域;对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像;基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数;将待处理图像和第三高频图像进行融合,得到第一目标图像。
根据本公开的一些实施例,提供一种非易失性存储介质,当存储介质中的指令由服务器的处理器执行时,使得服务器能够执行以下步骤:
获取待处理图像,待处理图像包括面部区域;对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像;基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数;将待处理图像和第三高频图像进行融合,得到第一目标图像。
根据本公开的一些实施例,提供一种计算机程序产品,当计算机程序产品中的指令由服务器的处理器执行时,使得服务器能够执行以下步骤:
获取待处理图像,待处理图像包括面部区域;对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像;基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数;将待处理图像和第三高频图像进行融合,得到第一目标图像。
在本公开实施例中,首先,对待处理图像中的高频区域对应的第一高频图像进行高斯模糊处理,得到第二高频图像;然后,基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像;这里,通过对用于表征待处理图像中高频信息的第一高频图像和高斯模糊得到的第二高频图像进行加权融合,能够适当弱化面部轮廓和五官轮廓等强边缘处的锐化程度,能够有效抑制“白边”效应;最后,将待处理图像和第三高频图像进行融合,得到第一目标图像,能够提升高频区域的亮度,使得到的第一目标图像中的面部区域的结构更有层次感。由此,能够在提高待处理图像的清晰度的同时,抑制“白边”效应,并且不会增强图像中的背景噪声。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限 制本公开。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理,并不构成对本公开的不当限定。
图1是根据一示例性实施例示出的经“清晰”处理所得到的图像示意图。
图2是根据一示例性实施例示出的图像处理方法、装置、电子设备及存储介质应用环境示意图。
图3是根据一示例性实施例示出的一种图像处理方法的流程图。
图4是根据一示例性实施例示出的一种面部掩码图像示意图。
图5是根据一示例性实施例示出的一种图像处理装置的框图。
图6是根据一示例性实施例示出的一种服务器的框图。
图7是根据一示例性实施例示出的用于数据处理的设备的框图。
具体实施方式
为了使本领域普通人员更好地理解本公开的技术方案,下面将结合附图,对本公开实施例中的技术方案进行清楚、完整地描述。
需要说明的是,本公开的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本公开的实施例能够以除了在这里图示或描述的那些以外的顺序实施。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。
另外,本公开的技术方案中,所涉及的用户个人信息的获取,存储和应用等,均符合相关法律法规的规定,且不违背公序良俗。
目前,用户通常会用一些美颜软件对拍摄的图像进行美化处理。其中,美颜一般包括“清晰”处理,但是“清晰”处理之后,处理后的图像的结构并没有得到优化,层次感不强。而且还会出现图像明暗交界出现“白边”的情况、而且还会增加图像的噪声。
“清晰”处理一般可以通过图像锐化来实现,图像锐化是补偿图像的轮廓,增强图像的边缘及灰度跳变的部分,使图像变得清晰,分为空间域处理和频域处理两类。图像平滑往往使图像中的边界、轮廓变得模糊,为了减少这类不利效果的影响,这就需要利用图像锐化技术,使图像的边缘变的清晰。
锐化就是通过增强高频分量来减少图像中的模糊,因此又称为高通滤波。锐化处理在增强图像边缘的同时增加了图像的噪声。“锐化”的原理就是:将明暗交界处,暗的这边调整得更暗、亮的那边调整得更亮。但是随之而来的就是,处理后的图像在明暗交界处,会出现“白边”。
下面,以图1为例说明相关技术中经“清晰”处理所得到的图像的显示效果。
图1是根据一示例性实施例示出的经“清晰”处理所得到的图像示意图。如图1所示,以人脸图像为例,对图像进行“清晰”处理后,得到的图像中,面部区域10和非面部区域20(即背景区域)之间的交界处,出现了“白边”,这会使得处理后的图像看起来不够真实自然。
本公开提供了一种图像处理方法、装置、电子设备及非易失性存储介质。该图像处理方法、装置、电子设备及非易失性存储介质,可通过对待处理图像中的高频区域对应的第一高频图像进行高斯模糊处理,得到第二高频图像;然后,基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像;这里,通过对用于表征待处理图像中高频信息的第一高频图像和高斯模糊得到的第二高频图像进行加权融合,能够适当弱化面部轮廓和五官轮廓等强边缘处的锐化程度,能够有效抑制“白边”效应;最后,将待处理图像和第三高频图像进行融合,得到第一目标图像,能够提升高频区域的亮度,使得到的第一目标图像中的面部区域的结构更有层次感。由此,能够在提高待处理图像的清晰度的同时,抑制“白边”效应,并且不会增强图像中的背景噪声。
如图2所示,是本公开说明书一个或多个实施例提供的图像处理方法、装置、电子设备及非易失性存储介质的应用环境示意图。需要说明的是,本公开实施例中的电子设备例如可以为服务器。如图2所示,服务器100通过网络300与一个或多个用户端200通信连接,以进行数据通信或交互。所述服务器100可以是网络服务器、数据库服务器等。所述用户端200可以是,但不限于个人电脑(personal computer,PC)、智能手机、平板电脑、个人数字助理(personal digital assistant,PDA)等。所述网络300可以是有线或无线网络。
下面将对本公开实施例提供的图像处理方法进行详细说明。
本公开实施例提供的图像处理方法可以应用于用户端200,为了便于描述,除特别说明外,本公开实施例均以用户端200为执行主体进行说明。可以理解的是,所述的执行主体并不构成对本公开的限定。
图3是根据一示例性实施例示出的一种图像处理方法的流程图。如图3所示,该图像处理方法可以包括以下步骤:
S310,获取待处理图像,待处理图像包括面部区域。
S320,对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像。
S330,基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数。
S340,将待处理图像和第三高频图像进行融合,得到第一目标图像。
上述各步骤的具体实现方式将在下文中进行详细描述。
在本公开实施例中,通过对待处理图像中的高频区域对应的第一高频图像进行高斯模糊处理,得到第二高频图像;然后,根据待处理图像中的面部轮廓线和五官轮廓线生成权重图像,这里,轮廓线像素点对应的权重较大,轮廓线以外的区域像素点对应的权重较小;接着,基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像;这里,用于体现面部轮廓和五官轮廓等强边缘处的第一高频图像对应的锐化权重较小,用于体现如额头鼻梁等其他高频区域的第二高频图像对应的锐化权重相对较大,能够有效抑制“白边”效应;最后,将待处理图像和第三高频图像进行融合,得到第一目标图像,由此,能够提升对待处理图像进行“清晰”处理后的图像的效果。
在一些实施例中,上述各个步骤可以有以下具体实现方式。
在一些实施例中,关于S310,用户端200可以获取待处理图像,待处理图像包括面部区域。其中,面部区域可以为人脸的面部区域,在一些实施例中,获取待处理图像时,可以进一步获取用户想要处理的面部区域。这里可以通过人脸关键点识别等技术,得到待处理图像的前景区域作为面部区域。
需要说明的是,用户端200可以通过各种公开、合法合规的方式获取待处理图像,例如可以从公开数据集处获取待处理图像,或者在经过用户授权后从用户处获取待处理图像。
在本公开一些实施例中,关于S320,可以通过以下方式获取待处理图像中的第一高频图像:
基于边缘检测算法从待处理图像中提取出第一高频图像;或者,将待处理图像与待处理图像的低频图像相减,得到第一高频图像。
其中,第一高频图像包括待处理图像的高频信息,例如,第一高频图像可以包括:待处理图像中的眉毛、眼睛、嘴唇、额头、鼻梁等需要清晰展示细节的高频信息。
在一些实施例中,可以通过基于边缘检测算法从待处理图像中提取出第一高频图像。
边缘检测是图像处理和计算机视觉中的基本问题,边缘检测的目的是标识数字图像中亮度变化明显的点。图像属性中的显著变化通常反映了属性的重要事件和变化。这些图像属性中的显著变化包括:深度上的不连续、表面方向不连续、物质属性变化和场景照明变化。即第一高频图像为面部区域中的高频信息所在区域对应的图像。
示例性地,边缘检测算法可以包括:sobel、canny、prewitt或roberts等算法。
在一些实施例中,还可以通过将待处理图像与待处理图像的低频图像相减,得到第一高频图像。
在一些实施例中,可以通过对待处理图像进行高斯模糊处理,得到待处理图像的低频图像,即高斯模糊后的待处理图像,将待处理图像的低频图像与待处理图像做差,得到第一高频图像。在该实施方式中,除了可以选择计算量较低的高斯模糊来得到非高频图像,还可以选择其他能够实现低通滤波的算法,本公开实施例对此不做具体限定。
这里,通过从待处理图像中提取出第一高频图像能够快速确定待处理图像中的高频信息。
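The subtraction variant described above ("image minus its low-frequency version") can be sketched in NumPy as follows. This is only an illustrative reading of the text, not the patent's reference implementation; the blur parameter `sigma`, the kernel radius, and the toy test images are assumptions.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur; a stand-in for the GaussFilter used in the text."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1, dtype=np.float64)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    k /= k.sum()  # normalized kernel: blurring a constant image leaves it unchanged
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

def first_high_frequency(s0, sigma=2.0):
    """G1 = S0 - lowpass(S0): keeps edges and fine detail, drops smooth shading."""
    return s0 - gaussian_blur(s0, sigma)

flat = np.full((32, 32), 0.5)          # a flat image has no high-frequency content
g1_flat = first_high_frequency(flat)

step = np.zeros((32, 32))
step[:, 16:] = 1.0                     # a hard vertical edge
g1_step = first_high_frequency(step)   # responds strongly near the edge
```

For the edge-detection variant mentioned in the same paragraph, an operator such as Sobel would take the place of the subtraction.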
在S320中,可以通过对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,即S320可以通过公式G2=GaussFilter(G1)所示的方式实现。这里的G1为第一高频图像,G2为第二高频图像。高斯模糊处理也叫高斯平滑处理,可以增强图像在不同比例大小下的图像效果。
在本公开一些实施例中,S330可以通过以下方式获取权重图像:
提取待处理图像中的关键点,关键点包括面部关键点和五官关键点;根据面部关键点得到面部轮廓线,根据五官关键点得到五官轮廓线;根据面部轮廓线和五官轮廓线,生成用于表示待处理图像中像素点对应的权重系数的权重图像。
在一些实施例中,可以通过人脸关键点检测来提取待处理图像中的关键点,得到面部关键点和五官关键点,将面部关键点和五官关键点连接起来,得到能够表示面部结构的轮廓线图,即面部轮廓线和五官轮廓。在一些实施例中,可以将这些轮廓线的宽度设置为w个像素,使轮廓线的像素值为1,待处理图像的其他区域的像素值为0,即可得到用于表示待处理图像中像素点对应的权重系数的权重图像。
在本公开一些实施例中,可以设置权重图像中轮廓线的权重值大于预设阈值,权重图像中非轮廓线的权重值小于预设阈值,比如,轮廓线的像素值为0.8,其他区域的像素值为0.2,预设阈值为0.5。相应地,待处理图像中轮廓线的像素点对应的权重系数为0.8,待处理图像中非轮廓线的像素点对应的权重系数为0.2。
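A minimal sketch of this weighting scheme, using the 0.8 (contour) and 0.2 (elsewhere) values quoted above; the toy contour mask itself is an assumption for illustration.

```python
import numpy as np

def weight_image(contour_mask, w_line=0.8, w_bg=0.2):
    """Build the weight image K from a boolean contour-line mask:
    contour pixels get the large weight, all other pixels the small one."""
    return np.where(contour_mask, w_line, w_bg)

mask = np.zeros((5, 5), dtype=bool)
mask[2, :] = True                 # a toy horizontal contour line, width 1 pixel
k = weight_image(mask)
```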
如图4所示,面部轮廓线和五官轮廓线的颜色为白色,待处理图像面部区域的其它像素点的颜色为黑色。
在一些实施例中,基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像的步骤,可以通过下述公式(1)来实现:
G3=G2*K+G1*(1-K)        (1)
其中,公式(1)中的G1为第一高频图像、G2为第二高频图像、K为权重图像、G3为第三高频图像。
结合上述方式确定出来的权重图像,对第一高频图像和第二高频图像进行加权融合处理,能够弱化第一高频图像中代表面部及五官轮廓处的高频信息,即将权重图像中像素值大于0的区域,对应到第一高频图像中的区域进行高频弱化。
这里,轮廓线像素点对应的权重较大,轮廓线以外的区域像素点对应的权重较小;接着,基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像;由于用于体现面部轮廓和五官轮廓等强边缘处的第一高频图像对应的锐化权重较小,用于体现如额头鼻梁等其他高频区域的第二高频图像对应的锐化权重相对较大,从而能够有效抑制“白边”效应。
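Formula (1) is a per-pixel linear blend. A sketch with assumed toy values for G1, G2 and K (the arrays below are illustrative, not from the patent):

```python
import numpy as np

def fuse_high_freq(g1, g2, k):
    """Formula (1): G3 = G2*K + G1*(1-K).
    Where K is large (on contour lines) the blurred G2 dominates, weakening
    sharpening at strong edges; elsewhere the sharp G1 dominates."""
    return g2 * k + g1 * (1.0 - k)

g1 = np.array([[1.0, 1.0], [1.0, 1.0]])   # strong high-frequency response
g2 = np.array([[0.2, 0.2], [0.2, 0.2]])   # its Gaussian-blurred version (toy values)
k  = np.array([[0.8, 0.2], [0.8, 0.2]])   # 0.8 on contour pixels, 0.2 elsewhere
g3 = fuse_high_freq(g1, g2, k)
```

The first column (a contour pixel, K = 0.8) ends up with a much weaker high-frequency value than the second column, which is exactly the white-edge suppression the text describes.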
在本公开一些实施例中,S330中,具体可以包括以下步骤:
对权重图像进行高斯模糊处理,得到高斯模糊后的权重图像;基于高斯模糊后的权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像。
其中,上述涉及到的对权重图像进行高斯模糊处理,得到高斯模糊后的权重图像的步骤,可以为:对权重图像进行高斯模糊,以平滑边缘,得到高斯模糊后的权重图像。即该步骤可以通过公式K’=GaussFilter(K)所示的方式实现。其中,K’为高斯模糊后的权重图像,K为权重图像。高斯模糊也叫高斯平滑,高斯平滑也用于计算机视觉算法中的预先处理阶段,以增强图像在不同比例大小下的图像效果,可以参见尺度空间表示以及尺度空间实现。
这里,对权重图像进行高斯模糊,得到的高斯模糊后的权重图像的边缘更加平滑,进而使后续基于高斯模糊后的权重图像融合后的第三高频图像的显示效果更好。
在一些实施例中,关于S340,可以理解的是,通过图像卷积处理实现锐化有一种常用的算法叫做虚边蒙片(Unsharp Mask,USM)算法方法,这种锐化的方法就是对待处理图像先做一个高斯模糊,然后用待处理图像减去一个系数乘以高斯模糊之后的图像,得到边缘图像(即第三高频图像),将边缘图像和待处理图像进行线性组合得到锐化后图像。基于USM锐化的方法可以去除一些细小的干扰细节和噪声,比一般直接使用卷积锐化算子得到的图像锐化结果更加真实可信。
在一些实施例中,上述涉及到的将待处理图像和第三高频图像进行融合,得到第一目标图像的步骤,具体可以通过下述公式(2)实现:
R1=S0+β*G3        (2)
其中,S0为待处理图像、G3为第三高频图像、R1为第一目标图像,β为预设系数。
这里,通过将第三高频图像和待处理图像进行线性组合得到的第一目标图像,可以去除一些细小的干扰细节和噪声,使第一目标图像中的面部清晰度高。
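Formula (2) can be sketched as below. The strength coefficient β is only described as a preset value, so the 0.5 used here, and the clipping back to [0, 1], are assumptions.

```python
import numpy as np

def usm_fuse(s0, g3, beta=0.5):
    """Formula (2): R1 = S0 + beta*G3 (USM-style sharpening).
    The result is clipped back to [0, 1]; the clip is an implementation
    assumption, not part of the formula in the text."""
    return np.clip(s0 + beta * g3, 0.0, 1.0)

s0 = np.array([[0.4, 0.6]])   # toy input pixels
g3 = np.array([[0.2, 0.8]])   # toy third-high-frequency values
r1 = usm_fuse(s0, g3, beta=0.5)
```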
在本公开一些实施例中,S340中,具体可以包括以下步骤:对待处理图像进行亮度增强处理,得到第一图像;降低待处理图像中的第一像素点的第二像素值,得到第一像素值,第二像素值小于预设阈值;增大待处理图像中的第二像素点的第四像素值,得到第三像素值,第四像素值大于预设阈值;根据第一像素值和第三像素值得到第二图像;对第一图像和第二图像进行加权融合处理,得到第三图像;将第三图像和第三高频图像进行融合,得到第一目标图像。
在一些实施例中,上述涉及到的对待处理图像进行亮度增强处理,得到第一图像的步骤,具体可以通过下述公式(3)实现:
S1=1.0-(1.0–S0)*(1.0–S0)        (3)
其中,S0为待处理图像,S1为第一图像。
举例:在像素值是0.2的情况下,S1=1-(1-0.2)*(1-0.2)=0.36>0.2;在像素值是0.8的情况下,S1=1-(1-0.8)*(1-0.8)=0.96>0.8。
这里,对待处理图像进行亮度增强处理,得到第一图像,以提升图像的整体亮度。
在一些实施例中,上述涉及到的降低待处理图像中的第一像素点的第二像素值,得到第一像素值,增大待处理图像中的第二像素点的第四像素值,得到第三像素值,根据第一像素值和第三像素值得到第二图像的步骤,具体可以通过下述公式(4)实现:
在S0<0.5的情况下,S2=2*S0*S0          (4)
在S0>0.5的情况下,S2=1-2*(1-S0)*(1-S0)
其中,S0为待处理图像,S2为第二图像。第二像素值(S0<0.5的像素值)小于预设阈值0.5;第四像素值(S0<0.5的像素值)大于预设阈值0.5。
举例:在像素值是0.2的情况下,S2=2*0.2*0.2=0.08<0.2;在像素值是0.8的情况下,S2=1-2*(1-0.8)*(1-0.8)=0.92>0.8。
这里,通过调节待处理图像的亮度,可以使待处理图像的暗部更暗,亮部更亮,使得待处理图像的高光和阴影部分更加凸显出来,得到的第二图像的面部结构的层次更加分明。
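Formulas (3) and (4) can be checked directly against the worked examples in the text (0.2 → 0.36 and 0.8 → 0.96 for S1; 0.2 → 0.08 and 0.8 → 0.92 for S2); the sketch below is illustrative only.

```python
import numpy as np

def brighten(s0):
    """Formula (3): S1 = 1 - (1 - S0)^2 lifts overall brightness."""
    return 1.0 - (1.0 - s0) * (1.0 - s0)

def contrast_curve(s0):
    """Formula (4): darken pixels below 0.5, brighten pixels above 0.5,
    so shadows and highlights both stand out more."""
    return np.where(s0 < 0.5,
                    2.0 * s0 * s0,
                    1.0 - 2.0 * (1.0 - s0) * (1.0 - s0))

s0 = np.array([0.2, 0.8])
s1 = brighten(s0)          # matches the worked examples: 0.36 and 0.96
s2 = contrast_curve(s0)    # matches the worked examples: 0.08 and 0.92
```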
在一些实施例中,上述涉及到的对第一图像和第二图像进行加权融合处理,得到第三图像的步骤,具体可以通过下述公式(5)实现:
S3=S1*a+S2*(1-a)          (5)
其中,S1为第一图像、S2为第二图像、S3为第三图像。
对于上述涉及到的公式(4),在原图像素值都小于0.5的情况下,则计算出来的S2会更加暗;在所有像素值大于0.5的情况下,S2会提亮,但是S2<S1。因此可以将S1和S2融合起来,达到暗图提亮,亮图不会更亮的目的。融合系数为预设值a,可以为用户预设的数值,可以根据实际需要来调节。
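Formula (5) is a plain weighted average. The fusion coefficient `a` is user-tunable, so the 0.5 below is an assumption; the S1/S2 values are taken from the worked examples for 0.2 and 0.8.

```python
import numpy as np

def fuse_brightness(s1, s2, a=0.5):
    """Formula (5): S3 = S1*a + S2*(1-a).
    Blends the globally brightened S1 with the contrast-stretched S2,
    so dark images are lifted while bright images are not over-brightened."""
    return s1 * a + s2 * (1.0 - a)

s1 = np.array([0.36, 0.96])   # S1 values from the worked examples
s2 = np.array([0.08, 0.92])   # S2 values from the worked examples
s3 = fuse_brightness(s1, s2, a=0.5)
```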
在一些实施例中,将第三图像和第三高频图像进行融合,即可得到第一目标图像。通过将对待处理图像进行亮度调节处理后得到的第三图像和第三高频图像进行融合,能够提升第一目标图像结构的层次感。
其中,上述涉及到的将第三图像和第三高频图像进行融合,得到第一目标图像的步骤中,具体可以包括以下步骤:
对第三图像和待处理图像进行加权融合处理,得到第四图像;基于面部掩码图像对第四图像和待处理图像进行加权融合处理,得到第五图像;其中,第五图像中的面部区域的像素值为第四图像的像素值,第五图像中的面部区域以外的像素值为待处理图像的像素值;将第五图像和第三高频图像进行融合,得到第一目标图像。
其中,上述涉及到的对第三图像和待处理图像进行加权融合处理,得到第四图像的步骤,具体可以通过下述公式(6)实现:
S4=S3*b+S0*(1-b)         (6)
其中,S4为第四图像。
这里,同样为了约束S3,可以将S3和S0进行加权融合,融合系数为预设值b,可以根据实际需要来调节。
其中,上述涉及到的基于面部掩码图像对第四图像和待处理图像进行加权融合处理,得到第五图像的步骤,具体可以通过下述公式(7)实现:
R3=S4*M+S0*(1-M)           (7)
其中,M为面部掩码图像,R3为第五图像。
这里,通过利用面部区域掩码图M,对第四图像和待处理图像进行加权融合处理,能够将背景过滤掉,得到第五图像。这样,第五图像中的面部区域的像素值为第四图像的像素值,第五图像中的面部区域以外的像素值为待处理图像的像素值。由此,能够在提升面部区域的结构清晰度的同时,不会增强背景噪声。最后再将第五图像和第三高频图像进行融合,以得到第一目标图像,能够在提升面部清晰度的同时,避免增强背景噪声。
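Formulas (6) and (7) can be sketched together. The blend factor `b` and the toy images are assumptions; the mask-fusion pattern is the one the text uses to keep the processed face while restoring the original background.

```python
import numpy as np

def constrain(s3, s0, b=0.5):
    """Formula (6): S4 = S3*b + S0*(1-b); b is a preset blend factor
    (the 0.5 here is an assumed value) that pulls S3 back toward S0."""
    return s3 * b + s0 * (1.0 - b)

def mask_fuse(fg, bg, m):
    """Formula (7): R3 = S4*M + S0*(1-M); keep the processed face region,
    restore the original background so its noise is not amplified."""
    return fg * m + bg * (1.0 - m)

s0 = np.array([[0.2, 0.6]])   # toy input: one face pixel, one background pixel
s3 = np.array([[0.4, 0.9]])   # toy brightness-adjusted image
m  = np.array([[1.0, 0.0]])   # 1 inside the face region, 0 outside
s4 = constrain(s3, s0, b=0.5)
r3 = mask_fuse(s4, s0, m)     # face pixel from S4, background pixel from S0
```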
在本公开一些实施例中,还可以对第一目标图像进行如下处理:
基于面部掩码图像对第一目标图像和待处理图像进行融合,得到第二目标图像,面部掩码图像用于标记面部区域;其中,第二目标图像中的面部区域的像素值为第一目标图像的像素值,第二目标图像中的面部区域以外的像素值为待处理图像的像素值。
其中,上述涉及到的面部掩码图像,是根据待处理图像中的面部区域和非面部区域生成的,面部掩码图像是与待处理图像对应的用于标记面部区域的图像。具体可以通过利用人脸关键点检测算法,或者肤色检测模型确定待处理图像中的面部区域和非面部区域,然后根据待处理图像中的面部区域和第二目标图像中的面部区域以外的区域来生成面部掩码图像,即将面部区域的掩码值确定为1,将面部区域以外的区域的掩码值确定为0,即可生成面部掩码图像。
由于通过上述方式得到的面部掩码图像中,感兴趣的区域(即面部区域)是白色的,表明感兴趣区域的像素都是非0,而非感兴趣区域(即非面部区域)都是黑色,表明那些区域的像素都是0,从而一旦待处理图像与面部掩码图像进行与运算后,得到的结果图即可只留下待处理图像中的感兴趣区域的图像。
其中,基于面部掩码图像对第一目标图像和待处理图像进行融合,得到第二目标图像的步骤,具体可以通过下述公式(8)实现:
R2=R1*M+S0*(1-M)         (8)
其中,R1为第一目标图像、R2为第二目标图像、M为面部掩码图像、S0为待处理图像。
这里,通过基于面部掩码图像对第一目标图像和待处理图像进行融合,得到的第二目标图像中的面部区域的像素值为第一目标图像的像素值,第二目标图像中的面部区域以外的像素值为待处理图像的像素值,从而能够在提升面部清晰度的同时,避免背景噪声的增强。
其中,在上述涉及到的基于面部掩码图像对第一目标图像和待处理图像进行融合,得到第二目标图像的步骤中,具体可以包括以下步骤:
对面部掩码图像进行边缘羽化处理,得到边缘羽化后的面部掩码图像;基于边缘羽化后的面部掩码图像,对第一目标图像和待处理图像进行融合,得到第二目标图像。
在本公开一些实施例中,可以使用导向滤波算法对面部掩码图像进行边缘羽化。其中,导向图为待处理图像S0。即可以通过公式M=GuidedFilter(S0,M)所示的方式对面部掩码图像进行边缘羽化。
其中,导向滤波算法是一种图像滤波技术,该技术通过一张引导图G,对目标图像P(输入图像)进行滤波处理,使得最后的输出图像大体上与目标图像P相似,但是纹理部分与引导图G相似。
这里,通过导向滤波算法对面部掩码图像进行边缘羽化,能够平滑面部掩码图像的边界。基于边缘羽化后的面部掩码图像,对第一目标图像和待处理图像进行融合,可以使得到的第二目标图像,边缘更加平滑自然。
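A plain-NumPy sketch of the guided filter described above (following He et al.'s box-filter formulation): the output follows the target overall but adopts the local structure of the guide, which is what feathers the hard 0/1 mask edge. The radius `r`, regularizer `eps`, and the toy guide image and mask are assumptions, not values from the patent.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1) x (2r+1) window, separable, edge-padded."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

def guided_filter(guide, target, r=4, eps=1e-2):
    """Guided filter: output = mean(a)*guide + mean(b), with a, b fitted
    per window so the output tracks `target` while inheriting the local
    edges of `guide`; used here to feather the face mask M with S0 as guide."""
    mean_g = box_filter(guide, r)
    mean_t = box_filter(target, r)
    cov_gt = box_filter(guide * target, r) - mean_g * mean_t
    var_g = box_filter(guide * guide, r) - mean_g * mean_g
    a = cov_gt / (var_g + eps)
    b = mean_t - a * mean_g
    return box_filter(a, r) * guide + box_filter(b, r)

guide = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))   # toy guide image (S0)
mask = (guide > 0.5).astype(np.float64)               # hard 0/1 face mask (M)
feathered = guided_filter(guide, mask)                # soft transition at the edge
```

Far from the mask boundary the output stays at 0 or 1, while pixels near the boundary take intermediate values, which is the feathering effect used before the final fusion.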
综上,本公开实施例通过对待处理图像中的高频区域对应的第一高频图像进行高斯模糊处理,得到第二高频图像;然后,根据待处理图像中包括面部轮廓线和五官轮廓线的轮廓图像对应的像素值,生成用于表示待处理图像中像素点对应的权重系数的权重图像,这里,显然轮廓线像素点对应的权重较大,轮廓线以外的区域像素点对应的权重较小;接着,基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像;这里,用于体现面部轮廓和五官轮廓等强边缘处的第一高频图像对应的锐化权重较小,用于体现如额头鼻梁等其他高频区域的第二高频图像对应的锐化权重相对较大,能够有效抑制“白边”效应;最后,将待处理图像和第三高频图像进行融合,得到第一目标图像,能够提升高频区域的亮度,使得到的第一目标图像中的面部区域的结构更有层次感。由此,能够在提高待处理图像的清晰度的同时,抑制“白边”效应,并且不会增强图像中的背景噪声。
基于上述图像处理方法,本公开还提供了一种图像处理装置。具体结合图5进行说明。
图5是根据一示例性实施例示出的一种图像处理装置的框图。参照图5,该图像处理装置500可以包括获取模块510、高频处理模块520、加权处理模块530和融合模块540。
获取模块510,被配置为执行获取待处理图像,待处理图像包括面部区域。
高频处理模块520,被配置为执行对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像。
加权处理模块530,被配置为执行基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数。
融合模块540,被配置为执行将待处理图像和第三高频图像进行融合,得到第一目标图像。
在本公开一些实施例中,融合模块540,还被配置为执行基于面部掩码图像对第一目标图像和待处理图像进行融合,得到第二目标图像,面部掩码图像用于标记面部区域;其中,第二目标图像中的面部区域的像素值为第一目标图像的像素值,第二目标图像中的面部区域以外的像素值为待处理图像的像素值。
在本公开一些实施例中,融合模块540包括:
亮度增强模块,被配置为执行对待处理图像进行亮度增强处理,得到第一图像。
亮度降低模块,被配置为执行降低待处理图像中的第一像素点的第二像素值,得到第一像素值,第二像素值小于预设阈值。
亮度增强模块,还被配置为执行增大待处理图像中的第二像素点的第四像素值,得到第三像素值,第四像素值大于预设阈值。
确定模块,被配置为执行根据第一像素值和第三像素值得到第二图像。
融合模块540,被进一步配置为执行对第一图像和第二图像进行加权融合处理,得到第三图像。
融合模块540,被进一步配置为执行将第三图像和第三高频图像进行融合,得到第一目标图像。
在本公开一些实施例中,融合模块540,被进一步配置为执行对第三图像和待处理图像进行加权融合处理,得到第四图像。
融合模块540,被进一步配置为执行基于面部掩码图像对第四图像和待处理图像进行加权融合处理,得到第五图像;其中,第五图像中的面部区域的像素值为第四图像的像素值,第五图像中的面部区域以外的像素值为待处理图像的像素值。
融合模块540,被进一步配置为执行将第五图像和第三高频图像进行融合,得到第一目标图像。
在本公开一些实施例中,该图像处理装置500还包括:第一提取模块和相减模块。
该第一提取模块,被配置为执行基于边缘检测算法从待处理图像中提取出第一高频图像。
该相减模块,被配置为执行将待处理图像与待处理图像的低频图像相减,得到第一高频图像。
在本公开一些实施例中,该图像处理装置500还包括:第二提取模块、连接模块和生成模块。
该第二提取模块,被配置为执行提取待处理图像中的关键点,关键点包括面部关键点和五官关键点。
该连接模块,被配置为执行根据面部关键点得到面部轮廓线,根据五官关键点得到五官轮廓线。
该生成模块,被配置为执行根据面部轮廓线和五官轮廓线,生成用于表示待处理图像中像素点对应的权重系数的权重图像。
在本公开一些实施例中,融合模块540,还包括高斯模糊模块。
该高斯模糊模块,被配置为执行对权重图像进行高斯模糊处理,得到高斯模糊后的权重图像。
融合模块540,被进一步配置为执行基于高斯模糊后的权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像。
在本公开一些实施例中,融合模块540,还包括边缘羽化模块。
该边缘羽化模块,被配置为执行对面部掩码图像进行边缘羽化处理,得到边缘羽化后的面部掩码图像。
融合模块540,被进一步配置为执行基于边缘羽化后的面部掩码图像,对第一目标图像和待处理图像进行融合,得到第二目标图像。
综上,本公开实施例通过对待处理图像中的高频区域对应的第一高频图像进行高斯模糊处理,得到第二高频图像;然后,根据待处理图像中包括面部轮廓线和五官轮廓线的轮廓图像对应的像素值,生成用于表示待处理图像中像素点对应的权重系数的权重图像,这里,显然轮廓线像素点对应的权重较大,轮廓线以外的区域像素点对应的权重较小;接着,基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像;这里,用于体现面部轮廓和五官轮廓等强边缘处的第一高频图像对应的锐化权重较小,用于体现如额头鼻梁等其他高频区域的第二高频图像对应的锐化权重相对较大,能够有效抑制“白边”效应;最后,将待处理图像和第三高频图像进行融合,得到第一目标图像,能够提升高频区域的亮度,使得到的第一目标图像中的面部区域的结构更有层次感。由此,能够在提高待处理图像的清晰度的同时,抑制了“白边”效应,并且不会增强图像中的背景噪声。
关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
在本公开一些实施例中,还提供了一种电子设备,包括:处理器;用于存储所述处理器可执行指令的存储器;其中,所述处理器被配置为执行所述指令,以实现以下步骤:
获取待处理图像,待处理图像包括面部区域;对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像;基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数;将待处理图像和第三高频图像进行融合,得到第一目标图像。
在一些实施例中,电子设备例如可以为服务器。
图6是根据一示例性实施例示出的一种服务器的框图。参照图6,本公开实施例还提供了一种服务器,包括处理器610、通信接口620、存储器630和通信总线640,其中,处理器610、通信接口620和存储器630通过通信总线640完成相互间的通信。
该存储器630,用于存放处理器610可执行的指令。
该处理器610,用于执行存储器630上所存放的指令时,实现如下步骤:
获取待处理图像,待处理图像包括面部区域;对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像;基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数;将待处理图像和第三高频图像进行融合,得到第一目标图像。
可见,应用本公开实施例,通过对待处理图像中的高频区域对应的第一高频图像进行高斯模糊处理,得到第二高频图像;然后,根据待处理图像中包括面部轮廓线和五官轮廓线的轮廓图像对应的像素值,生成用于表示待处理图像中像素点对应的权重系数的权重图像,这里,显然轮廓线像素点对应的权重较大,轮廓线以外的区域像素点对应的权重较小;接着,基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像;这里,用于体现面部轮廓和五官轮廓等强边缘处的第一高频图像对应的锐化权重较小,用于体现如额头鼻梁等其他高频区域的第二高频图像对应的锐化权重相对较大,能够有效抑制“白边”效应;最后,将待处理图像和第三高频图像进行融合,得到第一目标图像,能够提升高频区域的亮度,使得到的第一目标图像中的面部区域的结构更有层次感。由此,能够在提高待处理图像的清晰度的同时,抑制“白边”效应,并且不会增强图像中的背景噪声。
图7是根据一示例性实施例示出的用于数据处理的设备的框图。例如,该设备700可以被提供为一服务器。参照图7,服务器700包括处理组件722,其进一步包括一个或多个处理器,以及由存储器732所代表的存储器资源,用于存储可由处理组件722的执行的指令,例如应用程序。存储器732中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外,处理组件722被配置为执行指令,以执行以下步骤:
获取待处理图像,待处理图像包括面部区域;对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像;基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数;将待处理图像和第三高频图像进行融合,得到第一目标图像。
该设备700还可以包括一个电源组件726被配置为执行设备700的电源管理,一个有线或无线网络接口750被配置为将设备700连接到网络,和一个输入输出(I/O)接口758。设备700可以操作基于存储在存储器732的操作系统,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM或类似。
在本公开一些实施例中,还提供了一种非易失性存储介质,当该非易失性存储介质中的指令由服务器的处理器执行时,使得服务器能够执行以下步骤:
获取待处理图像,待处理图像包括面部区域;对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像;基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数;将待处理图像和第三高频图像进行融合,得到第一目标图像。
在本公开一些实施例中,该非易失性存储介质可以是非临时性计算机可读存储介质,示例性的,非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
在本公开一些实施例中,还提供了一种计算机程序产品,当计算机程序产品中的指令由服务器的处理器执行时,使得服务器能够执行以下步骤:
获取待处理图像,待处理图像包括面部区域;对待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,第一高频图像为面部区域中的高频信息所在区域对应的图像;基于权重图像对第一高频图像和第二高频图像进行加权融合处理,得到第三高频图像,权重图像用于表征待处理图像中像素点对应的权重系数;将待处理图像和第三高频图像进行融合,得到第一目标图像。
本公开所有实施例均可以单独被执行,也可以与其他实施例相结合被执行,均视为本公开要求的保护范围。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本公开的其它实施方案。本申请旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由下面的权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限制。

Claims (18)

  1. 一种图像处理方法,其中,包括:
    获取待处理图像,所述待处理图像包括面部区域;
    对所述待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,所述第一高频图像为所述面部区域中的高频信息所在区域对应的图像;
    基于权重图像对所述第一高频图像和所述第二高频图像进行加权融合处理,得到第三高频图像,所述权重图像用于表征所述待处理图像中像素点对应的权重系数;
    将所述待处理图像和所述第三高频图像进行融合,得到第一目标图像。
  2. 根据权利要求1所述的方法,其中,所述方法还包括:
    基于面部掩码图像对所述第一目标图像和所述待处理图像进行融合,得到第二目标图像,所述面部掩码图像用于标记所述面部区域;
    其中,所述第二目标图像中的所述面部区域的像素值为所述第一目标图像的像素值,所述第二目标图像中的所述面部区域以外的像素值为所述待处理图像的像素值。
  3. 根据权利要求1所述的方法,其中,所述将所述待处理图像和所述第三高频图像进行融合,得到第一目标图像,包括:
    对所述待处理图像进行亮度增强处理,得到第一图像;
    降低所述待处理图像中的第一像素点的第二像素值,得到第一像素值,所述第二像素值小于预设阈值;
    增大所述待处理图像中的第二像素点的第四像素值,得到第三像素值,所述第四像素值大于所述预设阈值;
    根据所述第一像素值和所述第三像素值得到第二图像;
    对所述第一图像和所述第二图像进行加权融合处理,得到第三图像;
    将所述第三图像和所述第三高频图像进行融合,得到所述第一目标图像。
  4. 根据权利要求3所述的方法,其中,所述将所述第三图像和所述第三高频图像进行融合,得到所述第一目标图像,包括:
    对所述第三图像和所述待处理图像进行加权融合处理,得到第四图像;
    基于面部掩码图像对所述第四图像和所述待处理图像进行加权融合处理,得到第五图像;其中,所述第五图像中的所述面部区域的像素值为所述第四图像的像素值,所述第五图像中的所述面部区域以外的像素值为所述待处理图像的像素值;
    将所述第五图像和所述第三高频图像进行融合,得到所述第一目标图像。
  5. 根据权利要求1所述的方法,其中,所述方法还包括:
    基于边缘检测算法从所述待处理图像中提取出所述第一高频图像;或者,
    将所述待处理图像与所述待处理图像的低频图像相减,得到所述第一高频图像。
  6. 根据权利要求1所述的方法,其中,所述方法还包括:
    提取所述待处理图像中的关键点,所述关键点包括面部关键点和五官关键点;
    根据所述面部关键点得到面部轮廓线,根据所述五官关键点得到五官轮廓线;
    根据所述面部轮廓线和所述五官轮廓线,生成用于表示所述待处理图像中像素点对应的权重系数的权重图像。
  7. 根据权利要求6所述的方法,其中,所述基于权重图像对所述第一高频图像和所述第二高频图像进行加权融合处理,得到第三高频图像,包括:
    对所述权重图像进行高斯模糊处理,得到高斯模糊后的权重图像;
    基于所述高斯模糊后的权重图像对所述第一高频图像和所述第二高频图像进行加权融合处理,得到所述第三高频图像。
  8. 根据权利要求2所述的方法,其中,所述基于面部掩码图像对所述第一目标图像和所述待处理图像进行融合,得到第二目标图像,包括:
    对所述面部掩码图像进行边缘羽化处理,得到边缘羽化后的面部掩码图像;
    基于所述边缘羽化后的面部掩码图像,对所述第一目标图像和所述待处理图像进行融合,得到所述第二目标图像。
  9. 一种图像处理装置,其中,包括:
    获取模块,被配置为执行获取待处理图像,所述待处理图像包括面部区域;
    高频处理模块,被配置为执行对所述待处理图像中的第一高频图像进行高斯模糊处理,得到第二高频图像,所述第一高频图像为所述面部区域中的高频信息所在区域对应的图像;
    加权处理模块,被配置为执行基于权重图像对所述第一高频图像和所述第二高频图像进行加权融合处理,得到第三高频图像,所述权重图像用于表征所述待处理图像中像素点对应的权重系数;
    融合模块,被配置为执行将所述待处理图像和所述第三高频图像进行融合,得到第一目标图像。
  10. 根据权利要求9所述的装置,其中,所述融合模块,还被配置为执行基于面部掩码图像对所述第一目标图像和所述待处理图像进行融合,得到第二目标图像,所述面部掩码图像用于标记所述面部区域;
    其中,所述第二目标图像中的所述面部区域的像素值为所述第一目标图像的像素值,所述第二目标图像中的所述面部区域以外的像素值为所述待处理图像的像素值。
  11. The apparatus according to claim 9, wherein the fusion module comprises:
    a brightness enhancement module configured to perform brightness enhancement processing on the image to be processed to obtain a first image;
    a brightness reduction module configured to decrease a second pixel value of a first pixel in the image to be processed to obtain a first pixel value, wherein the second pixel value is less than a preset threshold;
    the brightness enhancement module being further configured to increase a fourth pixel value of a second pixel in the image to be processed to obtain a third pixel value, wherein the fourth pixel value is greater than the preset threshold;
    a determination module configured to obtain a second image according to the first pixel value and the third pixel value;
    the fusion module being further configured to perform weighted fusion processing on the first image and the second image to obtain a third image;
    the fusion module being further configured to fuse the third image and the third high-frequency image to obtain the first target image.
  12. The apparatus according to claim 11, wherein the fusion module is further configured to perform weighted fusion processing on the third image and the image to be processed to obtain a fourth image;
    the fusion module is further configured to perform weighted fusion processing on the fourth image and the image to be processed based on a face mask image to obtain a fifth image, wherein pixel values of the face region in the fifth image are pixel values of the fourth image, and pixel values outside the face region in the fifth image are pixel values of the image to be processed;
    the fusion module is further configured to fuse the fifth image and the third high-frequency image to obtain the first target image.
  13. The apparatus according to claim 9, wherein the apparatus further comprises:
    a first extraction module configured to extract the first high-frequency image from the image to be processed based on an edge detection algorithm;
    a subtraction module configured to subtract a low-frequency image of the image to be processed from the image to be processed to obtain the first high-frequency image.
  14. The apparatus according to claim 9, wherein the apparatus further comprises:
    a second extraction module configured to extract key points in the image to be processed, wherein the key points comprise face key points and facial-feature key points;
    a connection module configured to obtain a face contour line according to the face key points and obtain facial-feature contour lines according to the facial-feature key points;
    a generation module configured to generate, according to the face contour line and the facial-feature contour lines, a weight image representing weight coefficients corresponding to pixels in the image to be processed.
  15. The apparatus according to claim 14, wherein the fusion module further comprises:
    a Gaussian blur module configured to perform Gaussian blur processing on the weight image to obtain a Gaussian-blurred weight image;
    the fusion module being further configured to perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on the Gaussian-blurred weight image to obtain the third high-frequency image.
  16. The apparatus according to claim 10, wherein the fusion module further comprises:
    an edge feathering module configured to perform edge feathering processing on the face mask image to obtain an edge-feathered face mask image;
    the fusion module being further configured to fuse the first target image and the image to be processed based on the edge-feathered face mask image to obtain the second target image.
  17. An electronic device, comprising:
    a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute the instructions to implement the following steps:
    acquiring an image to be processed, the image to be processed comprising a face region;
    performing Gaussian blur processing on a first high-frequency image in the image to be processed to obtain a second high-frequency image, wherein the first high-frequency image is an image corresponding to a region of the face region in which high-frequency information is located;
    performing weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weight image to obtain a third high-frequency image, wherein the weight image is used to represent weight coefficients corresponding to pixels in the image to be processed;
    fusing the image to be processed and the third high-frequency image to obtain a first target image.
  18. A non-volatile storage medium, wherein, when instructions in the storage medium are executed by a processor of a server, the server is enabled to perform the following steps:
    acquiring an image to be processed, the image to be processed comprising a face region;
    performing Gaussian blur processing on a first high-frequency image in the image to be processed to obtain a second high-frequency image, wherein the first high-frequency image is an image corresponding to a region of the face region in which high-frequency information is located;
    performing weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weight image to obtain a third high-frequency image, wherein the weight image is used to represent weight coefficients corresponding to pixels in the image to be processed;
    fusing the image to be processed and the third high-frequency image to obtain a first target image.
PCT/CN2021/116233 2020-10-29 2021-09-02 Image processing method and apparatus WO2022088976A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011182131.8 2020-10-29
CN202011182131.8A CN112258440B (zh) 2020-10-29 Image processing method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022088976A1 true WO2022088976A1 (zh) 2022-05-05

Family

ID=74267207

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116233 WO2022088976A1 (zh) 2020-10-29 2021-09-02 图像处理方法及装置

Country Status (2)

Country Link
CN (1) CN112258440B (zh)
WO (1) WO2022088976A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116527922A (zh) * 2023-07-03 2023-08-01 浙江大华技术股份有限公司 Image encoding method and related apparatus

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258440B (zh) * 2020-10-29 2024-01-02 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112918956A (zh) * 2021-02-20 2021-06-08 陆伟凤 Garbage classification *** based on image recognition technology
CN112862726B (zh) * 2021-03-12 2023-11-10 湖南国科微电子股份有限公司 Image processing method and apparatus, and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877125A (zh) * 2009-12-25 2010-11-03 北京航空航天大学 Image fusion processing method based on wavelet-domain statistical signals
CN104517265A (zh) * 2014-11-06 2015-04-15 福建天晴数码有限公司 Intelligent skin-smoothing method and apparatus
US20180302544A1 (en) * 2017-04-12 2018-10-18 Samsung Electronics Co., Ltd. Method and apparatus for generating hdr images
CN110580688A (zh) * 2019-08-07 2019-12-17 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112258440A (zh) * 2020-10-29 2021-01-22 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10445877B2 (en) * 2016-12-30 2019-10-15 International Business Machines Corporation Method and system for crop recognition and boundary delineation
CN107220990B (zh) * 2017-06-22 2020-09-08 成都品果科技有限公司 Hair segmentation method based on deep learning
CN107864337B (zh) * 2017-11-30 2020-03-06 Oppo广东移动通信有限公司 Sketch image processing method, apparatus, device, and computer-readable storage medium
CN109033945B (zh) * 2018-06-07 2021-04-06 西安理工大学 Human body contour extraction method based on deep learning
CN109409262A (zh) * 2018-10-11 2019-03-01 北京迈格威科技有限公司 Image processing method, image processing apparatus, and computer-readable storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116527922A (zh) * 2023-07-03 2023-08-01 浙江大华技术股份有限公司 Image encoding method and related apparatus
CN116527922B (zh) * 2023-07-03 2023-10-27 浙江大华技术股份有限公司 Image encoding method and related apparatus

Also Published As

Publication number Publication date
CN112258440B (zh) 2024-01-02
CN112258440A (zh) 2021-01-22

Similar Documents

Publication Publication Date Title
WO2022088976A1 (zh) Image processing method and apparatus
Li et al. Fast multi-scale structural patch decomposition for multi-exposure image fusion
Guo et al. LIME: Low-light image enhancement via illumination map estimation
Chen et al. Robust image and video dehazing with visual artifact suppression via gradient residual minimization
Bhat et al. Gradientshop: A gradient-domain optimization framework for image and video filtering
Tao et al. Adaptive and integrated neighborhood-dependent approach for nonlinear enhancement of color images
CN104252698B (zh) 一种基于半逆法的快速单幅图像去雾算法
US8965141B2 (en) Image filtering based on structural information
Kim et al. Low-light image enhancement based on maximal diffusion values
Gao et al. Detail preserved single image dehazing algorithm based on airlight refinement
CN107194869B (zh) 一种图像处理方法及终端、计算机存储介质、计算机设备
Ancuti et al. Image and video decolorization by fusion
Vazquez-Corral et al. A fast image dehazing method that does not introduce color artifacts
Lou et al. Integrating haze density features for fast nighttime image dehazing
Dai et al. Dual-purpose method for underwater and low-light image enhancement via image layer separation
CN107564085B (zh) 图像扭曲处理方法、装置、计算设备及计算机存储介质
Chen et al. A solution to the deficiencies of image enhancement
Wang et al. Single Underwater Image Enhancement Based on $L_P$-Norm Decomposition
Gu et al. A novel Retinex image enhancement approach via brightness channel prior and change of detail prior
Rahman et al. Efficient contrast adjustment and fusion method for underexposed images in industrial cyber-physical systems
Tung et al. ICEBIN: Image contrast enhancement based on induced norm and local patch approaches
US20220398704A1 (en) Intelligent Portrait Photography Enhancement System
Yao et al. A multi-expose fusion image dehazing based on scene depth information
WO2021135676A1 (zh) Photographing background blurring method, mobile terminal, and storage medium
Kumari et al. Real time image and video deweathering: The future prospects and possibilities

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884707

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21884707

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.08.2023)
