WO2022088976A1 - Image processing method and apparatus - Google Patents
Image processing method and apparatus
- Publication number
- WO2022088976A1 (PCT/CN2021/116233, CN2021116233W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- frequency
- processed
- pixel value
- weighted
- Prior art date
Classifications
- G — PHYSICS
  - G06 — COMPUTING; CALCULATING OR COUNTING
    - G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T5/00 — Image enhancement or restoration
        - G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
        - G06T5/73 — Deblurring; Sharpening
      - G06T2207/00 — Indexing scheme for image analysis or image enhancement
        - G06T2207/20 — Special algorithmic details
          - G06T2207/20172 — Image enhancement details
            - G06T2207/20192 — Edge enhancement; Edge preservation
          - G06T2207/20212 — Image combination
            - G06T2207/20221 — Image fusion; Image merging
Definitions
- the present disclosure relates to the field of image technology, and in particular, to an image processing method and apparatus.
- in order to improve the effect of a captured image, the user usually uses beauty software to beautify the captured image.
- beautification generally includes "clear" processing, but after "clear" processing, the processed image exhibits "white edges" at the junctions of light and dark, and the noise of the image is also increased.
- the present disclosure provides an image processing method and apparatus.
- an image processing method comprising:
- acquire an image to be processed, where the image to be processed includes a face region; perform Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the region where high-frequency information in the face region is located; based on a weight image, perform weighted fusion on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain a first target image.
- an image processing apparatus comprising:
- an acquisition module, configured to acquire an image to be processed, where the image to be processed includes a face region; a high-frequency processing module, configured to perform Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image,
- where the first high-frequency image is the image corresponding to the region where high-frequency information in the face region is located;
- a weighting processing module, configured to perform weighted fusion on the first high-frequency image and the second high-frequency image based on a weight image to obtain a third high-frequency image,
- where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed;
- a fusion module, configured to fuse the image to be processed with the third high-frequency image to obtain the first target image.
- an electronic device comprising: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps:
- acquire an image to be processed, where the image to be processed includes a face region; perform Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the region where high-frequency information in the face region is located; based on a weight image, perform weighted fusion on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain a first target image.
- a non-volatile storage medium which, when instructions in the storage medium are executed by a processor of a server, enables the server to perform the following steps:
- acquire an image to be processed, where the image to be processed includes a face region; perform Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the region where high-frequency information in the face region is located; based on a weight image, perform weighted fusion on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain a first target image.
- a computer program product that, when the instructions in the computer program product are executed by a processor of a server, enables the server to perform the following steps:
- acquire an image to be processed, where the image to be processed includes a face region; perform Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the region where high-frequency information in the face region is located; based on a weight image, perform weighted fusion on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain a first target image.
- Gaussian blurring is performed on the first high-frequency image, which corresponds to the high-frequency region in the image to be processed, to obtain the second high-frequency image;
- based on the weight image, the first and second high-frequency images are subjected to weighted fusion to obtain a third high-frequency image; by fusing the first high-frequency image, which represents the high-frequency information in the image to be processed, with the Gaussian-blurred second high-frequency image, the sharpness of strong edges such as facial contours and facial-feature contours can be appropriately weakened, which effectively suppresses the "white edge" effect; finally, the image to be processed and the third high-frequency image are fused to obtain the first target image, which enhances the brightness of the high-frequency regions,
- so that the structure of the face region in the first target image is more layered. Thereby, while the definition of the image to be processed is improved, the "white edge" effect is suppressed and the background noise in the image is not enhanced.
- Fig. 1 is a schematic diagram of an image obtained by "clear" processing according to an exemplary embodiment.
- Fig. 2 is a schematic diagram illustrating an application environment of an image processing method, apparatus, electronic device, and storage medium according to an exemplary embodiment.
- Fig. 3 is a flowchart of an image processing method according to an exemplary embodiment.
- Fig. 4 is a schematic diagram of a face mask image according to an exemplary embodiment.
- Fig. 5 is a block diagram of an image processing apparatus according to an exemplary embodiment.
- Fig. 6 is a block diagram of a server according to an exemplary embodiment.
- Fig. 7 is a block diagram of an apparatus for data processing according to an exemplary embodiment.
- beautification generally includes "clear" processing, but after "clear" processing, the structure of the processed image is not optimized and its sense of hierarchy is weak. In addition, a "white edge" appears at the junctions of light and dark, and the noise of the image is increased.
- “Clear” processing can generally be achieved by image sharpening.
- Image sharpening compensates the contours of an image, enhancing its edges and gray-scale transitions to make it clearer. It can be divided into two kinds: spatial-domain processing and frequency-domain processing. Image smoothing often blurs the borders and outlines in an image; to reduce this adverse effect, image sharpening technology is needed to make the edges of the image clear.
- Sharpening reduces the blur in an image by enhancing its high-frequency components, so it is also called high-pass filtering. However, sharpening adds noise to the image while enhancing its edges.
- the principle of "sharpening" is: at the junction of light and dark, the dark side is adjusted to be darker and the bright side to be brighter. What follows, however, is that the processed image exhibits "white edges" at these junctions.
- Fig. 1 is used as an example to illustrate the display effect of an image obtained by "clear" processing in the related art.
- Fig. 1 is a schematic diagram of an image obtained by "clear" processing according to an exemplary embodiment. As shown in Fig. 1, taking a face image as an example, after the image is "clear" processed, a "white edge" appears in the obtained image at the junction between the face region 10 and the non-face region 20 (that is, the background region), which makes the processed image look less realistic and natural.
- the present disclosure provides an image processing method, apparatus, electronic device and non-volatile storage medium.
- the image processing method, apparatus, electronic device and non-volatile storage medium obtain the second high-frequency image by performing Gaussian blurring on the first high-frequency image corresponding to the high-frequency region in the image to be processed; then, based on the weight image, weighted fusion is performed on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image;
- the weighted fusion with the second high-frequency image can appropriately weaken the sharpness of strong edges such as facial contours and facial-feature contours, and can effectively suppress the "white edge" effect; finally, the image to be processed and the third high-frequency image are fused to obtain
- the first target image, which enhances the brightness of the high-frequency regions, so that the structure of the face region in the first target image is more layered. Thereby, while the definition of the image to be processed is improved, the "white edge" effect can be suppressed and the background noise in the image is not enhanced.
- the electronic device in the embodiment of the present disclosure may be, for example, a server.
- the server 100 is communicatively connected with one or more clients 200 through the network 300 for data communication or interaction.
- the server 100 may be a web server, a database server, or the like.
- the client 200 may be, but not limited to, a personal computer (personal computer, PC), a smart phone, a tablet computer, a personal digital assistant (PDA), and the like.
- the network 300 may be a wired or wireless network.
- the image processing methods provided in the embodiments of the present disclosure can be applied to the client 200.
- the embodiments of the present disclosure are described with the client 200 as the execution body. It can be understood that the described execution body does not constitute a limitation of the present disclosure.
- Fig. 3 is a flowchart of an image processing method according to an exemplary embodiment. As shown in Figure 3, the image processing method may include the following steps:
- S310: Acquire an image to be processed, where the image to be processed includes a face region.
- S320: Perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the region where high-frequency information in the face region is located.
- S330: Perform weighted fusion on the first high-frequency image and the second high-frequency image based on the weight image to obtain a third high-frequency image, where the weight image represents the weight coefficients corresponding to the pixels in the image to be processed.
- S340: Fuse the image to be processed with the third high-frequency image to obtain the first target image.
- the second high-frequency image is obtained by performing Gaussian blurring on the first high-frequency image corresponding to the high-frequency region in the image to be processed; a weight image is generated in which the weights corresponding to the pixels of the contour lines are larger and the weights corresponding to the pixels outside the contour lines are smaller; then, based on the weight image, weighted fusion is performed on the first high-frequency image and the second high-frequency image to obtain the third high-frequency image; here, the sharpening weight of the first high-frequency image at strong edges such as facial contours and facial-feature contours is smaller, while the sharpening weight of the second high-frequency image, which reflects other high-frequency areas such as the forehead and the bridge of the nose, is relatively larger, which can effectively suppress the "white edge" effect; finally, the image to be processed and the third high-frequency image are fused to obtain the first target image, which can improve the definition of the image to be processed.
- the above steps may have the following specific implementation manners.
- the client 200 may acquire an image to be processed, and the image to be processed includes a face region.
- the facial area may be the facial area of a human face.
- the facial area that the user wants to process may be further acquired.
- the foreground area of the image to be processed can be obtained as the face area through techniques such as face key point recognition.
- the client 200 can obtain the to-be-processed image in various public, legal and compliant ways, for example, it can obtain the to-be-processed image from a public data set, or obtain the to-be-processed image from the user after being authorized by the user.
- the first high-frequency image in the image to be processed may be acquired in the following manner:
- the first high-frequency image is extracted from the image to be processed based on an edge detection algorithm; or, the low-frequency image of the image to be processed is subtracted from the image to be processed to obtain the first high-frequency image.
- the first high-frequency image includes high-frequency information of the image to be processed.
- the first high-frequency image may include: eyebrows, eyes, lips, forehead, bridge of nose and other high-frequency information in the image to be processed that need to clearly display details.
- the first high frequency image can be extracted from the image to be processed by an edge detection algorithm.
- Edge detection is a fundamental problem in image processing and computer vision.
- the purpose of edge detection is to identify points in digital images with obvious changes in brightness.
- Significant changes in image properties often reflect significant events and changes in properties.
- Notable changes in these image properties include: discontinuities in depth, discontinuities in surface orientation, changes in material properties, and changes in scene lighting. That is, the first high-frequency image is an image corresponding to the region where the high-frequency information in the face region is located.
- the edge detection algorithm may include algorithms such as Sobel, Canny, Prewitt or Roberts.
- the first high-frequency image may also be obtained by subtracting the low-frequency image of the image to be processed from the image to be processed.
- the low-frequency image of the image to be processed can be obtained by performing Gaussian blurring on the image to be processed, and the difference between the image to be processed and its low-frequency image gives the first high-frequency image.
- in addition to Gaussian blurring of the image to be processed, other algorithms capable of implementing low-pass filtering may also be selected, which is not specifically limited in this embodiment of the present disclosure.
- high frequency information in the image to be processed can be quickly determined by extracting the first high frequency image from the image to be processed.
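The second extraction route described above (the image minus its Gaussian-blurred low-frequency version) can be sketched as follows; this is a minimal illustration assuming grayscale images normalized to [0, 1], and the sigma/kernel-size choices are illustrative, not values from the disclosure:

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    # Separable Gaussian blur with reflect padding; img is a 2-D float array.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()  # normalize so flat regions are preserved
    pad = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def high_frequency_image(img, sigma=2.0):
    # First high-frequency image G1: image minus its low-frequency (blurred) version.
    return img - gaussian_blur(img, sigma)
```

A flat image yields a near-zero high-frequency image, while the response of a step edge is concentrated near the edge, which matches the intuition that G1 carries edge and detail information.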
- G1 is the first high-frequency image
- G2 is the second high-frequency image.
- Gaussian blur processing, also called Gaussian smoothing, can enhance the effect of the image at different scales.
- in S330, the weight image may be acquired in the following manner:
- extract the key points in the image to be processed, where the key points include face key points and facial-feature key points; obtain the facial contour according to the face key points, and obtain the facial-feature contours according to the facial-feature key points; and generate, according to the contours, a weight image representing the weight coefficients corresponding to the pixels in the image to be processed.
- the key points in the image to be processed can be extracted by face key-point detection to obtain the face key points and facial-feature key points, and these key points can be connected to obtain contour lines that represent the facial structure,
- that is, the facial contour and the facial-feature contours.
- the width of these contour lines may be set to w pixels, with the pixel value of the contour lines set to 1 and the pixel value of the other areas of the image to be processed set to 0, so as to obtain
- the weight image representing the weight coefficients corresponding to the pixels in the image to be processed.
- the weight value of the contour lines in the weight image may be set to be greater than a preset threshold, and the weight value of the non-contour areas may be set to be less than the preset threshold.
- for example, the pixel value of the contour lines is 0.8, the pixel value of the non-contour areas is 0.2, and the preset threshold is 0.5.
- that is, the weight coefficient corresponding to the pixels of the contour lines in the image to be processed is 0.8, and the weight coefficient corresponding to the pixels of the non-contour areas is 0.2.
- the color of the facial contour line and the facial-feature contour lines is white, and the color of the other pixels in the face region of the image to be processed is black.
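A minimal sketch of such a weight image, using the 0.8 / 0.2 coefficients from the example above; the helper name and the representation of the contour as a list of pixel coordinates are assumptions for illustration, not details from the disclosure:

```python
import numpy as np

def contour_weight_image(shape, contour_points, w=3, on=0.8, off=0.2):
    # Weight image K: pixels on (and within w//2 of) the contour lines get the
    # high coefficient `on`; all other pixels get the low coefficient `off`.
    K = np.full(shape, off, dtype=float)
    half = w // 2
    for r, c in contour_points:
        # Dilate each contour pixel to a w-by-w square, clipped to the image.
        r0, r1 = max(r - half, 0), min(r + half + 1, shape[0])
        c0, c1 = max(c - half, 0), min(c + half + 1, shape[1])
        K[r0:r1, c0:c1] = on
    return K
```

Pixels on the dilated contour then carry the large weight coefficient and all other pixels the small one, matching the description of the weight image above.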
- the step of performing weighted fusion processing on the first high-frequency image and the second high-frequency image based on the weighted image to obtain the third high-frequency image can be implemented by the following formula (1):
- G1 in the formula (1) is the first high-frequency image
- G2 is the second high-frequency image
- K is the weight image
- G3 is the third high-frequency image.
- weighted fusion is performed on the first high-frequency image and the second high-frequency image, which can weaken the high-frequency information representing the facial contour and facial features in the first high-frequency image; that is, the regions whose pixel values in the weight image are greater than 0 correspond to the regions of the first high-frequency image in which the high frequencies are weakened.
- the weight corresponding to the pixels of the contour lines is larger, and the weight corresponding to the pixels outside the contour lines is smaller; then, based on the weight image, the first high-frequency image and the second high-frequency image are weighted and fused to obtain the
- third high-frequency image. Since the sharpening weight of the first high-frequency image at strong edges such as facial contours and facial-feature contours is small, while the sharpening weight of the second high-frequency image, which reflects other high-frequency areas such as the forehead and the bridge of the nose, is relatively large, the "white edge" effect can be effectively suppressed.
- S330 may specifically include the following steps:
- Gaussian blurring is performed on the weighted image to obtain a Gaussian blurred weighted image; weighted fusion processing is performed on the first high frequency image and the second high frequency image based on the Gaussian blurred weighted image to obtain a third high frequency image.
- Gaussian blurring is performed on the weighted image, and the obtained Gaussian-blurred weighted image has smoother edges, so that the subsequent third high-frequency image fused based on the Gaussian-blurred weighted image has a better display effect.
- USM (Unsharp Mask) sharpening may also be used.
- in this sharpening method, the image to be processed is first subjected to Gaussian blurring; the Gaussian-blurred image is then subtracted from the image to be processed to obtain an edge image (i.e., the third high-frequency image), and the edge image, scaled by a coefficient, is linearly combined with the image to be processed to obtain the sharpened result.
- the USM-based sharpening method can remove some small interfering details and noise, and is more realistic and credible than image sharpening results obtained by directly applying a convolution sharpening operator.
- the above-mentioned steps of fusing the to-be-processed image and the third high-frequency image to obtain the first target image can be specifically implemented by the following formula (2):
- S0 is the image to be processed
- G3 is the third high-frequency image
- R1 is the first target image
- ⁇ is a preset coefficient
- the first target image obtained by linearly combining the third high-frequency image and the to-be-processed image can remove some fine interfering details and noise, so that the face in the first target image has high definition.
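Formula (2) itself is not reproduced here either; a USM-style linear combination consistent with the surrounding variable definitions would be R1 = S0 + λ·G3. The clipping back to the valid range is an added safeguard, not something stated in the text:

```python
import numpy as np

def usm_combine(S0, G3, lam=0.5):
    # Plausible form of formula (2): R1 = S0 + lam * G3, with lam a preset
    # sharpening coefficient; the result is clipped to [0, 1].
    return np.clip(S0 + lam * G3, 0.0, 1.0)
```

Larger λ gives stronger sharpening; λ = 0 returns the image to be processed unchanged.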
- S340 may specifically include the following steps: perform brightness enhancement processing on the image to be processed to obtain a first image; reduce the second pixel value of a first pixel in the image to be processed to obtain a first pixel value, where the second pixel value is less than a preset threshold; increase the fourth pixel value of a second pixel in the image to be processed to obtain a third pixel value, where the fourth pixel value is greater than the preset threshold; obtain a second image according to the first pixel value and the third pixel value; perform weighted fusion on the first image and the second image to obtain a third image; and fuse the third image with the third high-frequency image to obtain the first target image.
- the above-mentioned steps of performing brightness enhancement processing on the image to be processed to obtain the first image can be specifically implemented by the following formula (3):
- S0 is the image to be processed
- S1 is the first image
- brightness enhancement processing is performed on the image to be processed to obtain a first image, so as to enhance the overall brightness of the image.
- the above-mentioned steps of reducing the second pixel value of the first pixel in the image to be processed to obtain the first pixel value, increasing the fourth pixel value of the second pixel in the image to be processed to obtain
- the third pixel value, and obtaining the second image according to the first pixel value and the third pixel value can be specifically implemented by the following formula (4):
- S0 is the image to be processed
- S2 is the second image.
- the second pixel value (a pixel value of S0 less than 0.5) is less than the preset threshold 0.5; the fourth pixel value (a pixel value of S0 greater than 0.5) is greater than the preset threshold 0.5.
- in this way, the dark parts of the image to be processed are made darker and the bright parts brighter, so that the highlights and shadows of the image to be processed are more prominent, and the facial structure of the obtained second image has more distinct layers.
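The bodies of formulas (3) and (4) are missing from this text, so the sketch below substitutes standard stand-ins that merely match the described behavior: a global brightness lift for formula (3), and for formula (4) an S-curve that pushes values below the 0.5 threshold darker and values above it brighter. Both the exact curves and the `beta` parameter are assumptions:

```python
import numpy as np

def brighten(S0, beta=0.1):
    # Stand-in for formula (3): global brightness enhancement (beta assumed).
    return np.clip(S0 + beta, 0.0, 1.0)

def deepen_contrast(S0):
    # Stand-in for formula (4): pixels below the 0.5 threshold become darker,
    # pixels above it become brighter, making highlights and shadows more
    # prominent, as the surrounding text describes.
    return np.where(S0 < 0.5, 2.0 * S0 ** 2, 1.0 - 2.0 * (1.0 - S0) ** 2)
```

For example, a pixel at 0.25 maps to 0.125 (darker) and a pixel at 0.75 maps to 0.875 (brighter), so the curve deepens contrast symmetrically about the threshold.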
- the above-mentioned steps of performing weighted fusion processing on the first image and the second image to obtain the third image can be specifically implemented by the following formula (5):
- S1 is the first image
- S2 is the second image
- S3 is the third image
- the fusion coefficient is a preset value a, which can be a value preset by the user and can be adjusted according to actual needs.
- the first target image can be obtained by fusing the third image and the third high-frequency image.
- since the third image is obtained after performing brightness adjustment processing on the image to be processed, fusing the third image with the third high-frequency image can improve the layered sense of the structure of the first target image.
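Formula (5) is described as a weighted fusion of the brightened image S1 and the contrast-deepened image S2 with a preset coefficient a; a minimal sketch of that blend (the exact arrangement of the coefficient is an assumption):

```python
import numpy as np

def blend(S1, S2, a=0.5):
    # Plausible form of formula (5): S3 = a * S1 + (1 - a) * S2,
    # where a is the user-adjustable fusion coefficient.
    return a * S1 + (1.0 - a) * S2
```

Setting a closer to 1 favors the brightness-enhanced image, while a closer to 0 favors the contrast-deepened image.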
- the above-mentioned steps of fusing the third image and the third high-frequency image to obtain the first target image may specifically include the following steps:
- S4 is the fourth image.
- S3 and S0 can be weighted and fused, and the fusion coefficient is a preset value b, which can be adjusted according to actual needs.
- based on the face region mask map M, weighted fusion is performed on the fourth image and the image to be processed, so that the background can be filtered out to obtain a fifth image.
- the pixel value of the face region in the fifth image is the pixel value of the fourth image
- the pixel value of the fifth image outside the face region is the pixel value of the image to be processed.
- the structural clarity of the face region can be improved without enhancing the background noise.
- the fifth image and the third high-frequency image are fused to obtain the first target image, which can improve the clarity of the face and avoid enhancing the background noise.
- the following processing may also be performed on the first target image:
- the first target image and the image to be processed are fused based on the face mask image to obtain a second target image, where the face mask image is used to mark the face region; the pixel value of the face region in the second target image is the pixel value of the first target
- image, and the pixel value of the second target image outside the face region is the pixel value of the image to be processed.
- the above-mentioned face mask image is generated according to the face area and non-face area in the image to be processed, and the face mask image is an image corresponding to the image to be processed and used to mark the face area.
- the face region and non-face region in the image to be processed can be determined by using a face key-point detection algorithm or a skin-color detection model; then, according to the face region and the area other than the face region in the image to be processed,
- the face mask image is generated by setting the mask value of the face region to 1 and the mask value of the area other than the face region to 0.
- in the face mask image, the region of interest (that is, the face region) is white, indicating that its pixels are all non-zero, and the non-interesting region (that is, the non-face region) is black, indicating that its pixels are all 0, so that when the image to be processed is ANDed with the face mask image, the resulting map retains only the region of interest of the image to be processed.
- the step of fusing the first target image and the to-be-processed image based on the face mask image to obtain the second target image can be specifically implemented by the following formula (8):
- R1 is the first target image
- R2 is the second target image
- M is the face mask image
- S0 is the image to be processed.
- in this way, the pixel value of the face region in the obtained second target image is the pixel value of the first target image,
- and the pixel values of the second target image outside the face region are the pixel values of the image to be processed, so that the enhancement of the background noise can be avoided while the clarity of the face is improved.
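The body of formula (8) is not reproduced in this text; given the description of the result (face pixels from R1, background pixels from S0), a per-pixel mask blend R2 = M·R1 + (1 − M)·S0 is the natural reading:

```python
import numpy as np

def mask_fuse(R1, S0, M):
    # Plausible form of formula (8): face pixels (M == 1) come from the first
    # target image R1, background pixels (M == 0) come from the image to be
    # processed S0, so background noise is never amplified.
    return M * R1 + (1.0 - M) * S0
```

The same per-pixel blend also accepts a feathered (non-binary) mask, in which case boundary pixels become a smooth mixture of the two images.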
- the face mask image is subjected to edge feathering processing to obtain a face mask image after edge feathering; based on the face mask image after edge feathering, the first target image and the image to be processed are fused to obtain a second target image.
- the face mask image may be edge feathered using a guided filtering algorithm.
- the guided filtering algorithm is an image filtering technology that filters a target image P (the input image) through a guide image G, so that the final output image is generally similar to the target image P, while its texture is similar to that of
- the guide image G.
- the edge feathering of the face mask image by the guided filtering algorithm can smooth the boundary of the face mask image.
- the first target image and the to-be-processed image are fused, so that the obtained second target image has smoother and more natural edges.
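The disclosure feathers the mask with a guided filter; as a hedged illustration of the effect only (not of the guided-filter algorithm itself), a simple box blur of the binary mask produces the same kind of soft 0-to-1 transition at the face boundary:

```python
import numpy as np

def feather_mask(M, radius=2):
    # Box-blur stand-in for guided-filter edge feathering: the hard 0/1 edge
    # of the mask becomes a gradual ramp, so the subsequent mask fusion
    # transitions smoothly between face and background.
    pad = np.pad(M.astype(float), radius, mode="edge")
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

Deep inside the face region the feathered mask stays at 1, far outside it stays at 0, and only a narrow band around the boundary takes intermediate values, which is what makes the fused edges look smoother and more natural.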
- the embodiment of the present disclosure obtains the second high-frequency image by performing Gaussian blurring on the first high-frequency image, which corresponds to the high-frequency region in the image to be processed;
- a weight image representing the weight coefficients corresponding to the pixels in the image to be processed is generated according to the contour lines.
- in the weight image, pixels on the contour lines are assigned a larger weight, and pixels in the regions outside the contour lines are assigned a smaller weight.
- based on the weight image, the first high-frequency image and the second high-frequency image are subjected to weighted fusion processing to obtain a third high-frequency image;
- in this fusion, the sharpening weight corresponding to the first high-frequency image, which reflects the contour lines, is relatively small, while the sharpening weight corresponding to the second high-frequency image, which reflects other high-frequency areas such as the forehead and the bridge of the nose, is relatively large, which can effectively suppress the "white edge" effect;
- the first target image is obtained by fusing the image to be processed with the third high-frequency image, which can enhance the brightness of the high-frequency areas and make the structure of the face region in the obtained first target image more layered. Thereby, the "white edge" effect can be suppressed while the clarity of the image to be processed is improved, and the background noise in the image is not enhanced.
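The weighted fusion step summarized above can be sketched as H3 = W * H2 + (1 - W) * H1, under the stated reading that the weight image W is large on the contour lines (so those pixels take mostly the Gaussian-blurred second high-frequency image, suppressing the "white edge" effect) and small elsewhere (keeping the crisper first high-frequency image). The orientation of W is an assumption for illustration.

```python
# Weighted fusion of the two high-frequency images: where the weight is 1.0
# (a contour pixel) the blurred H2 dominates; where it is 0.0 the sharp H1
# passes through unchanged.

def weighted_fusion(h1, h2, w):
    """Return H3 = W * H2 + (1 - W) * H1, element-wise."""
    return [
        [wv * v2 + (1 - wv) * v1 for v1, v2, wv in zip(r1, r2, rw)]
        for r1, r2, rw in zip(h1, h2, w)
    ]

h1 = [[80, 80]]        # sharp (first) high-frequency image
h2 = [[20, 20]]        # Gaussian-blurred (second) high-frequency image
w = [[1.0, 0.0]]       # contour pixel vs. non-contour pixel
print(weighted_fusion(h1, h2, w))  # [[20.0, 80.0]]
```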
- the present disclosure also provides an image processing apparatus, which is described in detail below with reference to Fig. 5.
- Fig. 5 is a block diagram of an image processing apparatus according to an exemplary embodiment.
- the image processing apparatus 500 may include an acquisition module 510 , a high frequency processing module 520 , a weighting processing module 530 and a fusion module 540 .
- the acquisition module 510 is configured to acquire an image to be processed, where the image to be processed includes a face region.
- the high-frequency processing module 520 is configured to perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face region is located.
- the weighting processing module 530 is configured to perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on the weighted image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed.
- the fusion module 540 is configured to fuse the image to be processed and the third high-frequency image to obtain the first target image.
- the fusion module 540 is further configured to fuse the first target image and the image to be processed based on the face mask image to obtain the second target image, where the face mask image is used to mark the face region;
- the pixel values of the face region in the second target image are the pixel values of the first target image;
- the pixel values outside the face region in the second target image are the pixel values of the image to be processed.
- the fusion module 540 includes:
- the brightness enhancement module is configured to perform brightness enhancement processing on the image to be processed to obtain the first image.
- the brightness reduction module is configured to reduce the second pixel value of the first pixel in the image to be processed to obtain the first pixel value, where the second pixel value is smaller than a preset threshold.
- the brightness enhancement module is further configured to increase the fourth pixel value of the second pixel in the image to be processed to obtain a third pixel value, where the fourth pixel value is greater than the preset threshold.
- a determination module is configured to obtain the second image according to the first pixel value and the third pixel value.
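A hedged sketch of the second-image construction described above: pixel values below a preset threshold are reduced and values above it are increased, stretching contrast around the threshold. The threshold of 128 and the gains 0.8 / 1.2 are illustrative assumptions, not values from the disclosure.

```python
# Piecewise contrast stretch: darken pixels below the threshold, brighten
# pixels at or above it, clipping the result to the valid [0, 255] range.

THRESHOLD = 128

def contrast_stretch(image, low_gain=0.8, high_gain=1.2):
    return [
        [min(255.0, v * (low_gain if v < THRESHOLD else high_gain)) for v in row]
        for row in image
    ]

img = [[100, 200], [50, 150]]
print(contrast_stretch(img))  # [[80.0, 240.0], [40.0, 180.0]]
```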
- the fusion module 540 is further configured to perform weighted fusion processing on the first image and the second image to obtain a third image.
- the fusion module 540 is further configured to fuse the third image and the third high-frequency image to obtain the first target image.
- the fusion module 540 is further configured to perform weighted fusion processing on the third image and the image to be processed to obtain a fourth image.
- the fusion module 540 is further configured to perform weighted fusion processing on the fourth image and the image to be processed based on the face mask image to obtain a fifth image; where the pixel values of the face region in the fifth image are the pixel values of the fourth image, and the pixel values outside the face region in the fifth image are the pixel values of the image to be processed.
- the fusion module 540 is further configured to fuse the fifth image and the third high-frequency image to obtain the first target image.
- the image processing apparatus 500 further includes: a first extraction module and a subtraction module.
- the first extraction module is configured to extract the first high-frequency image from the image to be processed based on an edge detection algorithm.
- the subtraction module is configured to subtract the low-frequency image of the image to be processed from the image to be processed to obtain the first high-frequency image.
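The subtraction route described above (the other route being edge detection) can be sketched as: blur the image to get its low-frequency image, then subtract it from the original. A 3x3 box blur stands in for the blur here as an illustrative assumption.

```python
# First high-frequency image = image minus its low-frequency (blurred) copy.
# Flat regions cancel to ~0; detail and edges survive as high-frequency energy.

def box_blur(image):
    """Low-frequency image via a 3x3 box blur (stand-in for Gaussian blur)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def high_frequency(image):
    low = box_blur(image)
    return [[v - l for v, l in zip(row, low_row)]
            for row, low_row in zip(image, low)]

img = [[100, 100, 100],
       [100, 200, 100],
       [100, 100, 100]]
hf = high_frequency(img)
print(hf[1][1] > 0)  # True: the bright detail survives as high-frequency energy
```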
- the image processing apparatus 500 further includes: a second extraction module, a connection module, and a generation module.
- the second extraction module is configured to extract key points in the image to be processed, where the key points include facial key points and facial-feature key points.
- the connection module is configured to obtain the facial contour line according to the facial key points, and to obtain the facial-feature contour lines according to the facial-feature key points.
- the generation module is configured to generate, according to the facial contour line and the facial-feature contour lines, a weight image used to represent the weight coefficients corresponding to the pixels in the image to be processed.
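A hedged sketch of weight-image generation: pixels on the facial and facial-feature contour lines receive a larger weight coefficient than other pixels. Here the contour coordinates are given directly; in the disclosure they would come from the detected key points. The values 1.0 and 0.0 are illustrative assumptions.

```python
# Build a weight image from contour coordinates: contour pixels get the "on"
# weight, every other pixel gets the "off" weight.

def make_weight_image(shape, contour_points, on=1.0, off=0.0):
    h, w = shape
    img = [[off] * w for _ in range(h)]
    for y, x in contour_points:
        img[y][x] = on
    return img

contour = [(0, 1), (1, 0), (1, 2), (2, 1)]  # a tiny diamond "contour"
wimg = make_weight_image((3, 3), contour)
print(wimg[0][1], wimg[1][1])  # 1.0 0.0
```

Gaussian-blurring this weight image, as described later, softens the sharp 1.0/0.0 transition so the fusion weights ramp smoothly across the contour.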
- the fusion module 540 further includes a Gaussian blurring module.
- the Gaussian blurring module is configured to perform Gaussian blurring on the weighted image to obtain a Gaussian blurred weighted image.
- the fusion module 540 is further configured to perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on the Gaussian blurred weighted image to obtain a third high-frequency image.
- the fusion module 540 further includes an edge feathering module.
- the edge feathering module is configured to perform edge feathering processing on the face mask image to obtain a face mask image after edge feathering.
- the fusion module 540 is further configured to perform fusion of the first target image and the to-be-processed image based on the face mask image after edge feathering to obtain a second target image.
- the embodiment of the present disclosure obtains the second high-frequency image by performing Gaussian blurring on the first high-frequency image, which corresponds to the high-frequency region in the image to be processed;
- a weight image representing the weight coefficients corresponding to the pixels in the image to be processed is generated according to the contour lines.
- in the weight image, pixels on the contour lines are assigned a larger weight, and pixels in the regions outside the contour lines are assigned a smaller weight.
- based on the weight image, the first high-frequency image and the second high-frequency image are subjected to weighted fusion processing to obtain a third high-frequency image;
- in this fusion, the sharpening weight corresponding to the first high-frequency image, which reflects the contour lines, is relatively small, while the sharpening weight corresponding to the second high-frequency image, which reflects other high-frequency areas such as the forehead and the bridge of the nose, is relatively large, which can effectively suppress the "white edge" effect;
- the first target image is obtained by fusing the image to be processed with the third high-frequency image, which can enhance the brightness of the high-frequency areas and make the structure of the face region in the obtained first target image more layered.
- an electronic device is provided, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps:
- acquire an image to be processed, where the image to be processed includes a face area; perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face area is located; based on the weighted image, perform weighted fusion processing on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain the first target image.
- the electronic device may be, for example, a server.
- Fig. 6 is a block diagram of a server according to an exemplary embodiment. Referring to Fig. 6, an embodiment of the present disclosure further provides a server, including a processor 610, a communication interface 620, a memory 630 and a communication bus 640, where the processor 610, the communication interface 620 and the memory 630 communicate with each other through the communication bus 640.
- the memory 630 is used for storing instructions executable by the processor 610 .
- the processor 610 is configured to execute the instructions to implement the following steps: acquire an image to be processed, where the image to be processed includes a face area; perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face area is located; based on the weighted image, perform weighted fusion processing on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain the first target image.
- the second high-frequency image is obtained by performing Gaussian blurring on the first high-frequency image, which corresponds to the high-frequency region in the image to be processed;
- a weight image representing the weight coefficients corresponding to the pixels in the image to be processed is generated according to the contour lines.
- in the weight image, pixels on the contour lines are assigned a larger weight, and pixels in the regions outside the contour lines are assigned a smaller weight.
- the first high-frequency image and the second high-frequency image are weighted and fused based on the weight image to obtain the third high-frequency image;
- in this fusion, the sharpening weight corresponding to the first high-frequency image, which reflects the contour lines, is relatively small, while the sharpening weight corresponding to the second high-frequency image, which reflects other high-frequency areas such as the forehead and the bridge of the nose, is relatively large, which can effectively suppress the "white edge" effect;
- the image to be processed and the third high-frequency image are fused to obtain the first target image, which can improve the brightness of the high-frequency areas and make the structure of the face region in the obtained first target image more layered. Thereby, while the definition of the image to be processed is improved, the "white edge" effect can be suppressed, and the background noise in the image is not enhanced.
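The whole summarized pipeline can be sketched end to end on a tiny grayscale image, with a 3x3 box blur standing in for Gaussian blurring and a uniform weight of 0.5 standing in for the contour-based weight image; both stand-ins, and the clipping range, are illustrative assumptions.

```python
# End-to-end sharpening sketch: extract the first high-frequency image,
# blur it into the second, fuse the two into the third, then add the third
# back onto the image to be processed to get the first target image.

def box_blur(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def sharpen(img, weight=0.5):
    low = box_blur(img)
    h1 = [[v - l for v, l in zip(r, rl)]
          for r, rl in zip(img, low)]            # first high-frequency image
    h2 = box_blur(h1)                            # second high-frequency image
    h3 = [[weight * b + (1 - weight) * a for a, b in zip(ra, rb)]
          for ra, rb in zip(h1, h2)]             # third high-frequency image
    return [[max(0.0, min(255.0, v + d)) for v, d in zip(r, rd)]
            for r, rd in zip(img, h3)]           # first target image, clipped

img = [[100, 100, 100],
       [100, 200, 100],
       [100, 100, 100]]
out = sharpen(img)
print(out[1][1] > img[1][1])  # True: the bright detail is enhanced
```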
- FIG. 7 is a block diagram of an apparatus for data processing according to an exemplary embodiment.
- the device 700 may be provided as a server.
- server 700 includes processing component 722, which further includes one or more processors, and a memory resource, represented by memory 732, for storing instructions executable by processing component 722, such as application programs.
- An application program stored in memory 732 may include one or more modules, each corresponding to a set of instructions.
- the processing component 722 is configured to execute instructions to perform the following steps:
- acquire an image to be processed, where the image to be processed includes a face area; perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face area is located; based on the weighted image, perform weighted fusion processing on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain the first target image.
- the device 700 may also include a power component 726 configured to perform power management of the device 700, a wired or wireless network interface 750 configured to connect the device 700 to a network, and an input/output (I/O) interface 758.
- Device 700 may operate based on an operating system stored in memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
- a non-volatile storage medium is also provided; when the instructions in the storage medium are executed by a processor of a server, the server can perform the following steps:
- acquire an image to be processed, where the image to be processed includes a face area; perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face area is located; based on the weighted image, perform weighted fusion processing on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain the first target image.
- the non-volatile storage medium may be a non-transitory computer-readable storage medium; for example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
- a computer program product is also provided; when the computer program product is executed by a processor of a server, the server can perform the following steps:
- acquire an image to be processed, where the image to be processed includes a face area; perform Gaussian blurring on the first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the area where the high-frequency information in the face area is located; based on the weighted image, perform weighted fusion processing on the first high-frequency image and the second high-frequency image to obtain a third high-frequency image, where the weighted image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fuse the image to be processed with the third high-frequency image to obtain the first target image.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Claims (18)
- An image processing method, comprising: acquiring an image to be processed, where the image to be processed includes a facial region; performing Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the region in which the high-frequency information in the facial region is located; performing weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weight image to obtain a third high-frequency image, where the weight image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fusing the image to be processed and the third high-frequency image to obtain a first target image.
- The method according to claim 1, further comprising: fusing the first target image and the image to be processed based on a face mask image to obtain a second target image, where the face mask image is used to mark the facial region; wherein the pixel values of the facial region in the second target image are the pixel values of the first target image, and the pixel values outside the facial region in the second target image are the pixel values of the image to be processed.
- The method according to claim 1, wherein fusing the image to be processed and the third high-frequency image to obtain the first target image comprises: performing brightness enhancement processing on the image to be processed to obtain a first image; reducing a second pixel value of a first pixel in the image to be processed to obtain a first pixel value, where the second pixel value is smaller than a preset threshold; increasing a fourth pixel value of a second pixel in the image to be processed to obtain a third pixel value, where the fourth pixel value is greater than the preset threshold; obtaining a second image according to the first pixel value and the third pixel value; performing weighted fusion processing on the first image and the second image to obtain a third image; and fusing the third image and the third high-frequency image to obtain the first target image.
- The method according to claim 3, wherein fusing the third image and the third high-frequency image to obtain the first target image comprises: performing weighted fusion processing on the third image and the image to be processed to obtain a fourth image; performing weighted fusion processing on the fourth image and the image to be processed based on a face mask image to obtain a fifth image, wherein the pixel values of the facial region in the fifth image are the pixel values of the fourth image, and the pixel values outside the facial region in the fifth image are the pixel values of the image to be processed; and fusing the fifth image and the third high-frequency image to obtain the first target image.
- The method according to claim 1, further comprising: extracting the first high-frequency image from the image to be processed based on an edge detection algorithm; or subtracting a low-frequency image of the image to be processed from the image to be processed to obtain the first high-frequency image.
- The method according to claim 1, further comprising: extracting key points in the image to be processed, where the key points include facial key points and facial-feature key points; obtaining a facial contour line according to the facial key points, and obtaining facial-feature contour lines according to the facial-feature key points; and generating, according to the facial contour line and the facial-feature contour lines, a weight image used to represent the weight coefficients corresponding to the pixels in the image to be processed.
- The method according to claim 6, wherein performing weighted fusion processing on the first high-frequency image and the second high-frequency image based on the weight image to obtain the third high-frequency image comprises: performing Gaussian blurring on the weight image to obtain a Gaussian-blurred weight image; and performing weighted fusion processing on the first high-frequency image and the second high-frequency image based on the Gaussian-blurred weight image to obtain the third high-frequency image.
- The method according to claim 2, wherein fusing the first target image and the image to be processed based on the face mask image to obtain the second target image comprises: performing edge feathering processing on the face mask image to obtain an edge-feathered face mask image; and fusing the first target image and the image to be processed based on the edge-feathered face mask image to obtain the second target image.
- An image processing apparatus, comprising: an acquisition module configured to acquire an image to be processed, where the image to be processed includes a facial region; a high-frequency processing module configured to perform Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the region in which the high-frequency information in the facial region is located; a weighting processing module configured to perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weight image to obtain a third high-frequency image, where the weight image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and a fusion module configured to fuse the image to be processed and the third high-frequency image to obtain a first target image.
- The apparatus according to claim 9, wherein the fusion module is further configured to fuse the first target image and the image to be processed based on a face mask image to obtain a second target image, where the face mask image is used to mark the facial region; wherein the pixel values of the facial region in the second target image are the pixel values of the first target image, and the pixel values outside the facial region in the second target image are the pixel values of the image to be processed.
- The apparatus according to claim 9, wherein the fusion module comprises: a brightness enhancement module configured to perform brightness enhancement processing on the image to be processed to obtain a first image; a brightness reduction module configured to reduce a second pixel value of a first pixel in the image to be processed to obtain a first pixel value, where the second pixel value is smaller than a preset threshold; the brightness enhancement module being further configured to increase a fourth pixel value of a second pixel in the image to be processed to obtain a third pixel value, where the fourth pixel value is greater than the preset threshold; and a determination module configured to obtain a second image according to the first pixel value and the third pixel value; the fusion module being further configured to perform weighted fusion processing on the first image and the second image to obtain a third image, and to fuse the third image and the third high-frequency image to obtain the first target image.
- The apparatus according to claim 11, wherein the fusion module is further configured to perform weighted fusion processing on the third image and the image to be processed to obtain a fourth image; to perform weighted fusion processing on the fourth image and the image to be processed based on a face mask image to obtain a fifth image, wherein the pixel values of the facial region in the fifth image are the pixel values of the fourth image, and the pixel values outside the facial region in the fifth image are the pixel values of the image to be processed; and to fuse the fifth image and the third high-frequency image to obtain the first target image.
- The apparatus according to claim 9, further comprising: a first extraction module configured to extract the first high-frequency image from the image to be processed based on an edge detection algorithm; and a subtraction module configured to subtract a low-frequency image of the image to be processed from the image to be processed to obtain the first high-frequency image.
- The apparatus according to claim 9, further comprising: a second extraction module configured to extract key points in the image to be processed, where the key points include facial key points and facial-feature key points; a connection module configured to obtain a facial contour line according to the facial key points and to obtain facial-feature contour lines according to the facial-feature key points; and a generation module configured to generate, according to the facial contour line and the facial-feature contour lines, a weight image used to represent the weight coefficients corresponding to the pixels in the image to be processed.
- The apparatus according to claim 14, wherein the fusion module further comprises: a Gaussian blurring module configured to perform Gaussian blurring on the weight image to obtain a Gaussian-blurred weight image; the fusion module being further configured to perform weighted fusion processing on the first high-frequency image and the second high-frequency image based on the Gaussian-blurred weight image to obtain the third high-frequency image.
- The apparatus according to claim 10, wherein the fusion module further comprises: an edge feathering module configured to perform edge feathering processing on the face mask image to obtain an edge-feathered face mask image; the fusion module being further configured to fuse the first target image and the image to be processed based on the edge-feathered face mask image to obtain the second target image.
- An electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the following steps: acquiring an image to be processed, where the image to be processed includes a facial region; performing Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the region in which the high-frequency information in the facial region is located; performing weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weight image to obtain a third high-frequency image, where the weight image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fusing the image to be processed and the third high-frequency image to obtain a first target image.
- A non-volatile storage medium, wherein, when the instructions in the storage medium are executed by a processor of a server, the server is enabled to perform the following steps: acquiring an image to be processed, where the image to be processed includes a facial region; performing Gaussian blurring on a first high-frequency image in the image to be processed to obtain a second high-frequency image, where the first high-frequency image is the image corresponding to the region in which the high-frequency information in the facial region is located; performing weighted fusion processing on the first high-frequency image and the second high-frequency image based on a weight image to obtain a third high-frequency image, where the weight image is used to represent the weight coefficients corresponding to the pixels in the image to be processed; and fusing the image to be processed and the third high-frequency image to obtain a first target image.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011182131.8 | 2020-10-29 | ||
CN202011182131.8A CN112258440B (zh) | 2020-10-29 | 2020-10-29 | Image processing method, apparatus, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022088976A1 true WO2022088976A1 (zh) | 2022-05-05 |
Family
ID=74267207
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/116233 WO2022088976A1 (zh) | Image processing method and apparatus | 2020-10-29 | 2021-09-02 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112258440B (zh) |
WO (1) | WO2022088976A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116527922A (zh) * | 2023-07-03 | 2023-08-01 | Zhejiang Dahua Technology Co., Ltd. | Image encoding method and related apparatus |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112258440B (zh) * | 2020-10-29 | 2024-01-02 | Beijing Dajia Internet Information Technology Co., Ltd. | Image processing method, apparatus, electronic device and storage medium |
CN112918956A (zh) * | 2021-02-20 | 2021-06-08 | Lu Weifeng | Garbage classification system based on image recognition technology |
CN112862726B (zh) * | 2021-03-12 | 2023-11-10 | Hunan Goke Microelectronics Co., Ltd. | Image processing method, apparatus and computer-readable storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101877125A (zh) * | 2009-12-25 | 2010-11-03 | Beihang University | Image fusion processing method based on wavelet-domain statistical signals |
CN104517265A (zh) * | 2014-11-06 | 2015-04-15 | Fujian Tianqing Digital Co., Ltd. | Intelligent skin-smoothing method and apparatus |
US20180302544A1 (en) * | 2017-04-12 | 2018-10-18 | Samsung Electronics Co., Ltd. | Method and apparatus for generating hdr images |
CN110580688A (zh) * | 2019-08-07 | 2019-12-17 | Beijing Dajia Internet Information Technology Co., Ltd. | Image processing method, apparatus, electronic device and storage medium |
CN112258440A (zh) * | 2020-10-29 | 2021-01-22 | Beijing Dajia Internet Information Technology Co., Ltd. | Image processing method, apparatus, electronic device and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10445877B2 (en) * | 2016-12-30 | 2019-10-15 | International Business Machines Corporation | Method and system for crop recognition and boundary delineation |
CN107220990B (zh) * | 2017-06-22 | 2020-09-08 | Chengdu Pinguo Technology Co., Ltd. | Hair segmentation method based on deep learning |
CN107864337B (zh) * | 2017-11-30 | 2020-03-06 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Sketch image processing method, apparatus, device and computer-readable storage medium |
CN109033945B (zh) * | 2018-06-07 | 2021-04-06 | Xi'an University of Technology | Human body contour extraction method based on deep learning |
CN109409262A (zh) * | 2018-10-11 | 2019-03-01 | Beijing Megvii Technology Co., Ltd. | Image processing method, image processing apparatus, computer-readable storage medium |
- 2020-10-29: CN application CN202011182131.8A filed (granted as patent CN112258440B, active)
- 2021-09-02: WO application PCT/CN2021/116233 filed (published as WO2022088976A1, application filing)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116527922A (zh) * | 2023-07-03 | 2023-08-01 | Zhejiang Dahua Technology Co., Ltd. | Image encoding method and related apparatus |
CN116527922B (zh) * | 2023-07-03 | 2023-10-27 | Zhejiang Dahua Technology Co., Ltd. | Image encoding method and related apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN112258440B (zh) | 2024-01-02 |
CN112258440A (zh) | 2021-01-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21884707 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21884707 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.08.2023) |