CN109862338B - Image enhancement method and image enhancement device - Google Patents

Image enhancement method and image enhancement device

Info

Publication number
CN109862338B
CN109862338B
Authority
CN
China
Prior art keywords: pixel, sub, current, value, edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711235144.5A
Other languages
Chinese (zh)
Other versions
CN109862338A (en)
Inventor
刘楷
黄文聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Realtek Semiconductor Corp
Original Assignee
Realtek Semiconductor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Realtek Semiconductor Corp filed Critical Realtek Semiconductor Corp
Priority to CN201711235144.5A
Publication of CN109862338A
Application granted
Publication of CN109862338B
Legal status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an image enhancement method and an image enhancement device, which perform edge enhancement (i.e., sharpening) on a demosaiced image according to the regional characteristics of an input image while simultaneously suppressing luminance noise and color noise, so as to provide a sharp, clean image.

Description

Image enhancement method and image enhancement device
Technical Field
The present invention relates to an image enhancement method and an image enhancement apparatus, and more particularly, to an image enhancement method and an image enhancement apparatus for edge enhancement and noise suppression of an input image.
Background
Conventionally, a color filter array image (raw CFA image) is converted into a full-color image by demosaicing. The resulting full-color image has not been sharpened and does not match the natural scene as perceived by the human eye. Therefore, the blurred demosaiced image is first edge-enhanced to obtain a sharp image that is comfortable to view.
However, edge enhancement tends to amplify noise in the image. An effective processing method is therefore needed that achieves edge enhancement while avoiding its side effect of noise amplification.
Disclosure of Invention
The invention provides an image enhancement method and an image enhancement device, which perform edge enhancement (i.e., sharpening) on a demosaiced image according to the regional characteristics of an input image while simultaneously suppressing luminance noise and color noise, so as to provide a sharp, clean image.
The embodiment of the invention provides an image enhancement method which is suitable for an image enhancement device. The image enhancement method comprises the following steps: (A) sequentially obtaining each pixel of an input image in a YUV color space, wherein each pixel is provided with a Y pixel, a U pixel and a V pixel; (B) performing a low-pass filtering process on a current Y pixel and a plurality of adjacent Y pixels adjacent to the current Y pixel to generate a first low-pass pixel; (C) judging whether the current Y pixel is an edge pixel according to a gradient change sum, and if the current Y pixel is the edge pixel, performing low-pass filtering processing on the current Y pixel and a plurality of adjacent Y pixels with corresponding trends according to a trend of the edge pixel to generate a second low-pass pixel; (D) judging whether the current Y pixel is a thin-edge pixel or not, wherein if the current Y pixel is not the thin-edge pixel, calculating an edge response value between the current Y pixel and the adjacent Y pixels according to a first mask, and if the current Y pixel is the thin-edge pixel, calculating an edge response value between the current Y pixel and the adjacent Y pixels according to a second mask, wherein the edge response value calculated according to the second mask is higher than the edge response value calculated according to the first mask; (E) calculating an enhanced pixel corresponding to the gradient change sum and the edge response value according to an enhancement function; and (F) adding the second low-pass pixel to the enhancement pixel to generate an output Y pixel.
The embodiment of the invention provides an image enhancement device, which comprises an image acquisition device and an image processor and is used for executing the image enhancement method.
Drawings
Fig. 1 is a schematic diagram of an image enhancement apparatus according to an embodiment of the invention.
FIG. 2A is a diagram of a Y-channel image in YUV color space according to an embodiment of the present invention.
FIG. 2B is a diagram of a U-channel image in YUV color space according to an embodiment of the present invention.
Fig. 2C is a diagram illustrating a V-channel image in YUV color space according to an embodiment of the invention.
FIG. 3 is a flowchart illustrating an image enhancement method for enhancing Y pixels according to an embodiment of the present invention.
FIG. 4A is a detailed flowchart of an embodiment of the present invention for generating the first low-pass pixel.
Fig. 4B is a graph illustrating the relationship between the similarity between the current Y pixel and the neighboring Y pixel and the first weighting value according to an embodiment of the present invention.
Fig. 4C is a graph illustrating the relationship between the similarity between the current Y pixel and the neighboring Y pixel and the second weighting value according to an embodiment of the present invention.
Fig. 4D is a schematic diagram illustrating a first weight value and a second weight value in a Y-channel image according to an embodiment of the invention.
Fig. 5A is a schematic diagram illustrating an edge pixel oriented in a vertical direction according to an embodiment of the invention.
Fig. 5B is a schematic diagram of an edge pixel oriented in a horizontal direction according to an embodiment of the invention.
FIG. 5C is a schematic diagram of an edge pixel oriented in a positive diagonal direction according to an embodiment of the invention.
FIG. 5D is a schematic diagram of an edge pixel with a negative diagonal orientation according to an embodiment of the invention.
Fig. 5E is a detailed flowchart of determining whether the current Y pixel is an edge pixel according to an embodiment of the invention.
Fig. 5F is a graph showing the relationship between the similarity between the current Y pixel and the neighboring Y pixel and the third weighting value according to an embodiment of the present invention.
Fig. 6A is a detailed flowchart of determining whether a current Y pixel is a thin-edge pixel according to an embodiment of the present invention.
Fig. 6B is a schematic diagram of a 3 × 3 mask formed around the current Y pixel Y12 according to an embodiment of the invention.
FIG. 7A is a schematic diagram of a first mask according to an embodiment of the invention.
FIG. 7B is a schematic diagram of a first mask according to another embodiment of the present invention.
FIG. 8 is a graph of enhancement functions according to an embodiment of the present invention.
FIG. 9 is a flowchart illustrating an image enhancement method for enhancing U pixels according to an embodiment of the present invention.
FIG. 10 is a detailed flowchart of an embodiment of the present invention for generating a noise suppressed U pixel.
Fig. 11A is a schematic diagram of difference values between U pixels and U pixels of a de-extremum according to an embodiment of the invention.
Fig. 11B is a diagram illustrating a Y pixel difference value according to an embodiment of the invention.
FIG. 12 is a graph of intensity ratios for an embodiment of the present invention.
FIG. 13 is a graph of a current U pixel and an output U pixel according to one embodiment of the invention.
FIG. 14 is a flowchart illustrating an image enhancement method for enhancing V pixels according to an embodiment of the present invention.
FIG. 15 is a detailed flowchart of an embodiment of the present invention for generating noise suppressed V pixels.
100: electronic device
110: image acquisition device
120: image processor
Im: inputting image
P0-Pn: pixel
Y0 '-Yn': outputting Y pixels
U0 '-Un': output U pixel
V0 '-Vn': outputting V pixels
ImY: y-channel image
ImU: u channel image
ImV: v-channel image
Y0-Y24: y pixel
U0-U24: U pixel
V0-V24: v pixel
S305, S310, S320, S330, S340, S350, S360, S370, S380: steps
S410, S420, S430, S440: steps
Wn, W0-W9, W10-W19: first weight values
Wmax: maximum first weight value
Th0, Th5, Th7: first threshold values
Th1, Th6, Th8: second threshold values
Difn, Dif1, Dif10, Dif13: similarity
An, A0-A3: second weight values
Amax: maximum second weight value
Th3: third threshold value
Th4: fourth threshold value
S510, S520: steps
Gn: third weight value
Gmax: maximum third weight value
G10: third weight value
S610, S620: steps
MS1: first mask
MS2: first mask (another embodiment)
Gtotal: sum of gradient changes
f (En): enhancement function
Dr: edge response value
S910, S920, S930, S940, S950, S960: steps
S1010, S1020, S1030, S1040, S1050, S1060: steps
Ymean1, Ymean2: average values
Cmax: maximum intensity ratio
Cfinal: intensity ratio
Udiff: difference of U pixel
S1410, S1420, S1430, S1440, S1450, S1460: steps
S1510, S1520, S1530, S1540, S1550, S1560: steps
Val1, Val2, Val 3: gain parameter
coef: intensity ratio
YUmax: maximum pixel difference
Detailed Description
The embodiments of the invention provide an image enhancement method and an image enhancement device that mainly aim to sharpen image edges while suppressing luminance noise and color noise, so as to reduce the image distortion perceived by human vision. The image enhancement method and image enhancement device of the embodiments employ the following techniques: image edge detection, infinite impulse response (IIR) noise suppression, low-pass filtering along the edge direction, thin-edge detection, edge response intensity calculation, and color suppression at low chroma, thereby providing a sharp and clean image.
First, please refer to fig. 1, which shows a schematic diagram of an image enhancement apparatus according to an embodiment of the present invention. As shown in fig. 1, the image enhancement apparatus 100 receives an input image Im and obtains a plurality of pixels P0-Pn of the input image Im in a YUV color space. Each of the pixels P0-Pn has a Y pixel (representing luminance information), a U pixel (representing chrominance information), and a V pixel (representing chrominance information).
The image enhancement device 100 performs edge enhancement on the input image Im and simultaneously suppresses the luminance noise and the color noise of the input image Im to generate the adjusted output Y pixels Y0 '-Yn', the adjusted output U pixels U0 '-Un' and the adjusted output V pixels V0 '-Vn'. In this embodiment, the image enhancement apparatus 100 may be a smart phone, a video recorder, a tablet computer, a notebook computer or other electronic devices that need to perform image enhancement, which is not limited in the present invention.
For example, referring to figs. 2A-2C together, the input image Im corresponds to a Y-channel image ImY, a U-channel image ImU and a V-channel image ImV in the YUV color space. The Y-channel image ImY consists of the Y pixels Y0-Y24 of the pixels P0-P24, the U-channel image ImU of the U pixels U0-U24, and the V-channel image ImV of the V pixels V0-V24.
The image enhancement apparatus 100 includes an image capturing device 110 and an image processor 120. As shown in fig. 1-2C, the image capturing device 110 sequentially obtains pixels P0-P24 of the input image Im in YUV color space, and each of the pixels P0-P24 has a Y pixel, a U pixel and a V pixel. More specifically, the image capturing device 110 captures continuous images, and the input image Im is one of the continuous images. The input image Im of the present embodiment is composed of pixels P0-P24.
The image processor 120 is electrically connected to the image capturing device 110 and configured to perform the following steps to perform edge enhancement on the input image Im and simultaneously suppress the luminance noise and the color noise of the input image Im, so as to generate the adjusted output Y pixel Y0 '-Yn', the adjusted output U pixel U0 '-Un', and the adjusted output V pixel V0 '-Vn'.
For convenience of description, the pixel P12 and its neighboring pixels P0-P11 and P13-P24 (i.e., 5 × 5 mask formed by taking the pixel P12 as the center) in the input image Im are described below. Each of the pixels P0-P24 corresponds to a Y pixel, a U pixel and a V pixel. For example, pixel P12 corresponds to Y pixel Y12, U pixel U12, and V pixel V12. One of ordinary skill in the art can derive the adjustment to the other pixels P0-P11 and P13-Pn from the adjustment to the pixel P12.
Referring to figs. 1-3, fig. 3 is a flowchart illustrating an image enhancement method for enhancing Y pixels according to an embodiment of the invention. First, the image processor 120 receives the pixels P0-P24 of the input image Im in the YUV color space and sequentially takes each of the luminance-carrying Y pixels Y0-Y24 as the current Y pixel to be adjusted (step S305).
Then, the image processor 120 performs a low-pass filtering process on a current Y pixel Y12 and a plurality of neighboring Y pixels Y0-Y11 and Y13-Y24 neighboring the current Y pixel Y12 to generate a first low-pass pixel Lp1 (step S310). In the present embodiment, the image processor 120 performs the low-pass filtering process on the current Y pixel Y12 and the neighboring Y pixels Y0-Y11 and Y13-Y24 in an infinite impulse response (IIR) manner, so as to generate the first low-pass pixel Lp1.
Further, referring to fig. 4A, the image processor 120 first determines a first weighting value for each of the neighboring Y pixels in the upper row and the lower row of the current Y pixel according to the similarity between the current Y pixel and the neighboring Y pixels in the upper row and the lower row of the current Y pixel (step S410). In this embodiment, each similarity degree represents an absolute difference between a current Y pixel and a corresponding neighboring Y pixel, and each similarity degree corresponds to a certain first weight value.
For example, take the 5 × 5 mask formed around the Y pixel Y12. Referring to fig. 2A and fig. 4B together, fig. 4B shows the relationship between the similarity Difn between the current Y pixel and a neighboring Y pixel and the first weight value Wn. The maximum first weight value Wmax, the first threshold Th0 and the second threshold Th1 are defined by the user. Taking the neighboring Y pixel Y1 above the current Y pixel Y12 as an example, the similarity is Dif1 = |Y12 − Y1|; the image processor 120 then maps Dif1 to the first weight value W1 according to fig. 4B.
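Since fig. 4B is described only through its parameters (Wmax, Th0, Th1), the exact curve is not fixed by the text. Below is a minimal Python sketch assuming a piecewise-linear curve that holds Wmax for differences up to Th0 and falls to zero at Th1; the function name and the linear ramp are illustrative assumptions.

```python
def similarity_to_weight(dif, w_max, th0, th1):
    """Map a similarity Difn = |Y12 - Yn| to a first weight value Wn.

    Assumed reading of fig. 4B: full weight w_max up to th0, a linear
    ramp between th0 and th1, zero weight beyond th1.
    """
    if dif <= th0:
        return w_max
    if dif >= th1:
        return 0
    return w_max * (th1 - dif) // (th1 - th0)
```

The same shape, with its own parameters, can serve for the second weight values An of fig. 4C and the third weight values Gn of fig. 5F.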
Accordingly, the image processor 120 determines the first weighting values W0-W9 and W10-W19 of the neighboring Y pixels Y0-Y9 and Y15-Y24 according to the similarity among the current Y pixel Y12, the neighboring Y pixels Y0-Y9 in the upper row and the neighboring Y pixels Y15-Y24 in the lower row, as shown in FIG. 4D. The adjacent Y pixels in the upper row and the adjacent Y pixels in the lower row may also be designed according to the mask size or the actual condition, which is not limited by the invention.
Then, the image processor 120 performs a weighted average of the current Y pixel Y12 and the neighboring Y pixels Y0-Y9 and Y15-Y24 located above and below it according to the first weight values W0-W9 and W10-W19 to generate an edge protection low-pass pixel EPF (step S420). Continuing the example, EPF = ((128 − (W0 + W1 + … + W19)) × Y12 + W0 × Y0 + … + W9 × Y9 + W10 × Y15 + … + W19 × Y24) / 128. The calculation of the edge protection low-pass pixel EPF can also be designed according to the actual situation, and the invention is not limited thereto.
After obtaining the edge protection low-pass pixel EPF, the image processor 120 determines a second weight value of each of the adjacent Y pixels located to the left and right of the current Y pixel according to the similarity between the current Y pixel and the adjacent Y pixels located to the left and right of the current Y pixel (step S430). In this embodiment, each similarity degree represents an absolute difference between the current Y pixel and the corresponding neighboring Y pixel, and each similarity degree corresponds to a certain second weight value.
Continuing the example, refer to fig. 2A and fig. 4C; fig. 4C shows the relationship between the similarity Difn between the current Y pixel and a neighboring Y pixel and the second weight value An. The maximum second weight value Amax, the third threshold Th3 and the fourth threshold Th4 are defined by the user. Taking the neighboring Y pixel Y13 to the right of the current Y pixel as an example, the similarity is Dif13 = |Y12 − Y13|; the image processor 120 then maps Dif13 to the second weight value A2 in fig. 4C.
Accordingly, the image processor 120 determines the second weighting values A0-A3 of the neighboring Y pixels Y10-Y11 and Y13-Y14 according to the similarity among the current Y pixel Y12, the left neighboring Y pixel Y10-Y11 and the right neighboring Y pixel Y13-Y14, as shown in FIG. 4D. The left adjacent Y pixels and the right adjacent Y pixels can also be designed according to the mask size or actual conditions.
Then, the image processor 120 performs a weighted average of the edge protection low-pass pixel EPF and the neighboring Y pixels Y10-Y11 and Y13-Y14 located to the left and right of the current Y pixel according to the second weight values A0-A3 to generate a first low-pass pixel Lp1 (step S440). Continuing the example, the first low-pass pixel is computed in the same manner as the edge protection low-pass pixel EPF, that is, Lp1 = ((128 − (A0 + A1 + A2 + A3)) × EPF + A0 × Y10 + A1 × Y11 + A2 × Y13 + A3 × Y14) / 128. The calculation of the first low-pass pixel Lp1 may also be designed according to the actual situation, and the invention is not limited thereto.
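The two weighted averages above share one form: whatever weight (out of 128) is not assigned to the neighbors stays on the center sample. A sketch under that reading, with the helper name weighted_low_pass being an assumption for illustration:

```python
def weighted_low_pass(center, neighbors, weights):
    """Weighted average that keeps the residual weight (out of 128)
    on the center sample, matching the EPF and Lp1 formulas above."""
    total_w = sum(weights)
    acc = (128 - total_w) * center
    acc += sum(w * y for w, y in zip(weights, neighbors))
    return acc // 128

# Usage per the example above (Y0-Y24 are the 5x5 luma samples):
# epf = weighted_low_pass(Y12, [Y0, ..., Y9, Y15, ..., Y24], [W0, ..., W19])
# lp1 = weighted_low_pass(epf, [Y10, Y11, Y13, Y14], [A0, A1, A2, A3])
# lp2 (step S330) reuses the same form with the trend weights.
```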
As can be seen from the above, the higher the similarity between the current Y pixel and a neighboring Y pixel (i.e., the smaller the absolute difference), the less noise needs to be removed from the current Y pixel; conversely, the more noise needs to be removed. Accordingly, the image processor 120 can remove noise from the entire input image Im in this manner.
Returning to fig. 3, after generating the first low-pass pixel Lp1 (i.e., step S310), the image processor 120 further determines whether the current Y pixel is an edge pixel according to a gradient change sum (step S320). In the present embodiment, the edge trend has four patterns: the vertical direction, the horizontal direction, the positive diagonal direction and the negative diagonal direction, as shown in figs. 5A-5D. The image processor 120 sums the gradient changes of the current Y pixel Y12 in at least one edge direction.
For example, the image processor 120 calculates the gradient change Gv of the current Y pixel Y12 in the vertical direction and the gradient change Gh in the horizontal direction, and sums them to obtain the gradient change sum Gtotal. As another example, the image processor 120 calculates the gradient change Gv in the vertical direction, Gh in the horizontal direction, G+45 in the positive diagonal direction, and G−45 in the negative diagonal direction, and sums Gv, Gh, G+45 and G−45 to obtain Gtotal. The calculation of the gradient change sum is known to those skilled in the art and is not described here.
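The gradient operator itself is left to the implementer. One common choice, shown here purely as an assumption, takes absolute differences of opposing 3 × 3 neighbors of Y12 (labels follow fig. 2A):

```python
def gradient_change_sum(y6, y7, y8, y11, y13, y16, y17, y18):
    """Assumed gradient changes around Y12 from opposing neighbors."""
    gv = abs(y7 - y17)     # vertical: above vs. below Y12
    gh = abs(y11 - y13)    # horizontal: left vs. right of Y12
    gp45 = abs(y8 - y16)   # positive-diagonal neighbors
    gn45 = abs(y6 - y18)   # negative-diagonal neighbors
    return gv + gh + gp45 + gn45
```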
Referring to FIG. 5E, the image processor 120 determines whether the sum of the gradient changes corresponding to the current Y pixel is greater than or equal to an edge threshold (S510). If the sum of the gradient changes is greater than or equal to the edge threshold, the image processor 120 determines that the current Y pixel is an edge pixel. At this time, the image processor 120 determines the trend of the edge pixel according to the sum of the gradient changes representing at least one edge direction (S520).
For example, suppose the gradient change sum Gtotal equals the gradient change Gv in the vertical direction plus the gradient change Gh in the horizontal direction. When Gv is larger than Gh, the image processor 120 determines that the edge pixel is oriented in the vertical direction. Conversely, when Gv is smaller than or equal to Gh, the image processor 120 determines that the edge pixel is oriented in the horizontal direction. The image processor 120 may also determine the trend of the edge pixel by other methods, which the invention does not limit.
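For the two-direction case just described, the trend decision reduces to a comparison; a sketch:

```python
def edge_trend(gv, gh):
    """Trend rule quoted above: vertical when Gv > Gh, else horizontal."""
    return "vertical" if gv > gh else "horizontal"
```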
Otherwise, if the sum of the gradient changes is smaller than the edge threshold, the image processor 120 determines that the current Y pixel is not an edge pixel. At this time, the image processor 120 performs step S350.
As shown in step S320 of fig. 3 and steps S510-S520 of fig. 5E, if the image processor 120 determines that the current Y pixel Y12 is an edge pixel, the image processor 120 performs low-pass filtering on the current Y pixel Y12 and a plurality of neighboring Y pixels corresponding to a trend of the edge pixel according to that trend, to generate a second low-pass pixel Lp2 (step S330). In the present embodiment, the image processor 120 likewise performs the low-pass filtering process in an infinite impulse response (IIR) manner on the current Y pixel Y12 and the neighboring Y pixels corresponding to the trend, so as to generate the second low-pass pixel Lp2.
In step S330, the image processor 120 further determines a third weight value corresponding to each of the adjacent Y pixels of the trend of the current Y pixel and the edge pixel according to the similarity between the adjacent Y pixels. Similarly, in the present embodiment, the similarity degrees represent absolute differences between the current Y pixel and the corresponding neighboring Y pixels, and each similarity degree corresponds to a third weighting value.
For example, consider the 5 × 5 mask formed around the Y pixel Y12 with the edge pixel oriented in the horizontal direction. Referring to fig. 2A and fig. 5F together, fig. 5F shows the relationship between the similarity Difn between the current Y pixel and a neighboring Y pixel and the third weight value Gn. The maximum third weight value Gmax, the first threshold Th5 and the second threshold Th6 are defined by the user. Taking the neighboring Y pixel Y10 in the horizontal direction as an example, the similarity is Dif10 = |Y12 − Y10|; the image processor 120 then maps Dif10 to the third weight value G10 in fig. 5F.
Accordingly, the image processor 120 determines the third weight values G10-G11 and G13-G14 of the neighboring Y pixels Y10-Y11 and Y13-Y14 according to the similarities between the current Y pixel Y12 and the corresponding horizontally neighboring Y pixels Y10-Y11 and Y13-Y14.
Then, the image processor 120 performs a weighted average of the first low-pass pixel Lp1 and the neighboring Y pixels Y10-Y11 and Y13-Y14 corresponding to the trend according to the third weight values G10-G11 and G13-G14 to generate the second low-pass pixel Lp2. Continuing the example, Lp2 = ((128 − (G10 + G11 + G13 + G14)) × Lp1 + G10 × Y10 + G11 × Y11 + G13 × Y13 + G14 × Y14) / 128. The calculation of the second low-pass pixel Lp2 may also be designed according to the actual situation.
Referring back to fig. 3 and 6A, after step S330, the image processor 120 then determines whether the current Y pixel is a thin-edge pixel to enhance the thin edge of the input image Im (step S340). In the present embodiment, the image processor 120 determines whether the current Y pixel is simultaneously larger or simultaneously smaller than the neighboring Y pixels located at diagonal corners of the current Y pixel (step S610).
If not, it means that the current Y pixel is not a thin-edge pixel. At this time, the image processor 120 performs step S350. If so, it means that the current Y pixel is likely to be a thin-edge pixel. At this time, the image processor 120 determines whether an absolute difference between the neighboring Y pixels in the vertical direction of the current Y pixel and the neighboring Y pixels in the horizontal direction of the current Y pixel is greater than a thin-edge threshold (step S620).
If the absolute difference is less than or equal to the thin-edge threshold, it means that the current Y pixel is not enough to become a thin-edge pixel. At this time, the image processor 120 performs step S350. If the absolute difference is greater than the thin-edge threshold, it represents that the current Y pixel is a thin-edge pixel. At this time, the image processor 120 performs step S360.
For example, since the image processor 120 determines whether the current Y pixel is located at a thin edge of the input image Im, a 3 × 3 mask formed around the current Y pixel Y12 is described as an example. As shown in fig. 6B, the image processor 120 determines whether the current Y pixel Y12 is simultaneously larger or simultaneously smaller than the neighboring Y pixels Y6, Y8, Y16 and Y18 located diagonally from it. If not, the current Y pixel is not a thin-edge pixel. If so, the image processor 120 further determines whether the absolute difference between the neighboring Y pixels Y7, Y17 in the vertical direction and the neighboring Y pixels Y11, Y13 in the horizontal direction is greater than the thin-edge threshold.
If the absolute difference is less than or equal to the thin-edge threshold, it means that the current Y pixel is not enough to become a thin-edge pixel, and the image processor 120 performs step S350. Otherwise, it represents that the current Y pixel is a thin-edge pixel, and the image processor 120 performs step S360.
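A sketch of steps S610-S620 on the 3 × 3 window of fig. 6B. The text states an absolute difference between the vertical and horizontal neighbors without fixing the pairing, so summing each pair is an assumption here:

```python
def is_thin_edge(y6, y7, y8, y11, y12, y13, y16, y17, y18, thin_th):
    """Thin-edge test for the 3x3 window around Y12 (labels per fig. 6B)."""
    diagonals = (y6, y8, y16, y18)
    peak = all(y12 > d for d in diagonals)
    valley = all(y12 < d for d in diagonals)
    if not (peak or valley):
        return False  # step S610: not an extremum across the diagonals
    # Step S620, assumed pairing: summed vertical vs. summed horizontal.
    return abs((y7 + y17) - (y11 + y13)) > thin_th
```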
As mentioned above, when the current Y pixel is not an edge pixel or is not a thin-edge pixel, the image processor 120 calculates an edge response value between the current Y pixel and the neighboring Y pixels according to a first mask (step S350), such as the first mask MS1 of fig. 7A, which does not consider directionality. With MS1, the edge response value = −2×P0 − 3×P1 − 3×P2 − 3×P3 − 2×P4 − 3×P5 − 1×P6 + 4×P7 − 1×P8 − 3×P9 − 3×P10 + 4×P11 + 24×P12 + 4×P13 − 3×P14 − 3×P15 − 1×P16 + 4×P17 − 1×P18 − 3×P19 − 2×P20 − 3×P21 − 3×P22 − 3×P23 − 2×P24. In other embodiments, the first mask may also take directionality into account, such as the first mask MS2 shown in fig. 7B.
When the current Y pixel is a thin-edge pixel, the image processor 120 calculates the edge response value between the current Y pixel and the neighboring Y pixels according to a second mask (step S360). The coefficients of the first mask and the second mask can likewise be designed according to actual conditions. It is noted that the first mask is applied when the current Y pixel is neither an edge pixel nor a thin-edge pixel, while the second mask is applied when the current Y pixel is a thin-edge pixel. The coefficients in the second mask are stronger than those in the first mask, so the image processor 120 enhances thin-edge pixels to a greater degree. Thus, the edge response value calculated from the second mask is higher than that calculated from the first mask.
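The edge response written out above is an inner product of the 5 × 5 luma window with a mask. A numpy sketch, with MS1 transcribed row by row from that expansion (the second, stronger mask is not given numerically in the text, so only MS1 is shown):

```python
import numpy as np

# First mask MS1, transcribed from the expansion above (fig. 7A).
MS1 = np.array([
    [-2, -3, -3, -3, -2],
    [-3, -1,  4, -1, -3],
    [-3,  4, 24,  4, -3],
    [-3, -1,  4, -1, -3],
    [-2, -3, -3, -3, -2],
])

def edge_response(window, mask):
    """Edge response value Dr for a 5x5 luma window (P0-P24, row-major)."""
    return int(np.sum(window * mask))
```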
Returning to fig. 3, after obtaining the edge response value (i.e., steps S350 and S360), the image processor 120 calculates an enhanced pixel corresponding to the gradient change sum and the edge response value according to an enhancement function (step S370). In the present embodiment, the enhancement function f(En) is associated with the gradient change sum Gtotal and the edge response value Dr, as shown in fig. 8. Specifically, the gain parameters Val1, Val2 and Val3 set by the user divide the 2-dimensional space formed by the gradient change sum Gtotal and the edge response value Dr into nine regions R1-R9, each of which corresponds to a respective coefficient C1-C9.
In this embodiment, the enhanced pixel is the product of the region's coefficient and an enhancement amplitude (for example, enhanced pixel = coefficient C3 × 100), where the enhancement amplitude can be set by the user. When the input image Im requires higher sharpness, the user sets a higher enhancement amplitude; otherwise, the user sets a lower one.
As can be seen from the above, the higher the gradient change sum Gtotal, the higher the enhanced pixel; the lower Gtotal, the lower the enhanced pixel. Likewise, the larger the edge response value Dr, the larger the enhanced pixel; the smaller Dr, the smaller the enhanced pixel.
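Fig. 8 is described only as nine regions with coefficients C1-C9; exactly how the gain parameters Val1-Val3 carve the plane is not spelled out. A sketch under the assumption that each axis is split into three bands by two thresholds:

```python
import bisect

def enhancement_coefficient(g_total, dr, g_ths, dr_ths, coeffs):
    """Pick the coefficient of the region containing (Gtotal, Dr).

    g_ths and dr_ths are assumed two-element threshold lists that split
    each axis into three bands; coeffs is the 3x3 table of C1-C9.
    """
    row = bisect.bisect_right(g_ths, g_total)  # band on the Gtotal axis
    col = bisect.bisect_right(dr_ths, dr)      # band on the Dr axis
    return coeffs[row][col]

# enhanced_pixel = enhancement_coefficient(...) * amplitude, per the
# coefficient-times-amplitude rule above.
```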
After obtaining the enhanced pixel (step S370), the image processor 120 adds the second low-pass pixel to the enhanced pixel to generate an output Y pixel (step S380). Continuing the example, after obtaining the enhanced pixel of the current Y pixel Y12, the image processor 120 adds it to the second low-pass pixel Lp2 to generate the output Y pixel Y12'.
As can be seen from the above, the image enhancement apparatus 100 performs edge enhancement on each Y pixel Y0-Yn in the input image Im, and simultaneously suppresses the luminance noise of each Y pixel Y0-Yn, thereby generating the adjusted output Y pixels Y0 '-Yn'.
In the following embodiment, the image enhancement device 100 suppresses the color noise of each U pixel U0-Un in the input image Im to generate the adjusted output U pixel U0 '-Un'. For convenience of description, the pixel P12 and its neighboring pixels P0-P11 and P13-P24 (i.e., 5 × 5 mask formed by taking the pixel P12 as the center) in the input image Im are described below.
Referring to fig. 9 and fig. 2B, after step S305, the image processor 120 sequentially uses U pixels U0 to U24 of the pixels P0 to P24 related to chroma information as current U pixels to further adjust U pixels U0 to U24. Subsequently, the image processor 120 performs a noise suppression process on a current U pixel U12 and a plurality of neighboring U pixels U0-U11 and U13-U24 neighboring the current U pixel U12 to generate a noise suppressed U pixel Ulpf (step S910).
Further, referring to fig. 10 and 11A, the image processor 120 selects the neighboring U pixels ranked in the middle as a plurality of selected U pixels among the neighboring U pixels (step S1010).
For example, the image processor 120 sorts by pixel value the U pixels neighboring the current U pixel U12 in the vertical direction (U2, U22) together with those in the horizontal direction (U10, U14), and selects the two middle-ranked neighboring U pixels (e.g., U22, U10) as selected U pixels. Next, the image processor 120 sorts by pixel value the neighboring U pixels located diagonally from the current U pixel U12 (U0, U4, U20, U24), and likewise selects the two middle-ranked ones (e.g., U4, U20) as selected U pixels. Accordingly, the image processor 120 obtains intermediate values in the different directions.
Then, the image processor 120 averages the selected U pixels to generate a de-extremum U pixel (step S1020). Continuing the example and referring again to fig. 11A, the de-extremum U pixel Ufcr = (U22 + U10 + U4 + U20) / 4. Accordingly, the image processor 120 eliminates the extreme values among the U pixels neighboring the current U pixel U12 to suppress the color noise around it.
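A sketch of steps S1010-S1020 on the 5 × 5 U window (labels per fig. 11A), assuming "ranked in the middle" means dropping the minimum and maximum of each four-pixel group:

```python
def de_extremum_u(u2, u22, u10, u14, u0, u4, u20, u24):
    """De-extremum U pixel Ufcr: average of the two middle-ranked cross
    neighbors and the two middle-ranked diagonal neighbors of U12."""
    cross = sorted((u2, u22, u10, u14))[1:3]  # drop min and max
    diag = sorted((u0, u4, u20, u24))[1:3]
    return sum(cross + diag) // 4
```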
Next, the image processor 120 calculates an absolute difference between the current U pixel and the Nth preceding neighboring U pixel to generate a U pixel difference (step S1030). Continuing the example and referring to fig. 11A, the image processor 120 selects, for example, the 2nd preceding neighboring U pixel U10 counted from the current U pixel U12. Thus, the U pixel difference Udiff = |U12 − U10|.
Then, the image processor 120 calculates an absolute difference between the neighboring Y pixels located to the right of the current Y pixel and those located to its left to generate a Y pixel difference (step S1040). Continuing the example and referring to fig. 11B, the image processor 120 selects and averages, for example, the neighboring Y pixels Y0-Y1, Y5-Y6, Y10-Y11, Y15-Y16 and Y20-Y21 located to the left of the current Y pixel Y12, i.e., Ymean1 = (Y0 + Y1 + Y5 + Y6 + Y10 + Y11 + Y15 + Y16 + Y20 + Y21) / 10. The image processor 120 also selects and averages the neighboring Y pixels Y3-Y4, Y8-Y9, Y13-Y14, Y18-Y19 and Y23-Y24 located to the right of the current Y pixel Y12, i.e., Ymean2 = (Y3 + Y4 + Y8 + Y9 + Y13 + Y14 + Y18 + Y19 + Y23 + Y24) / 10. The image processor 120 then calculates the absolute difference |Ymean1 − Ymean2| between the two averages to produce the Y pixel difference Ydiff.
The image processor 120 selects the adjacent Y pixels on the right side and the adjacent Y pixels on the left side according to actual conditions, for example, the adjacent Y pixels on the left side are Y0, Y5, Y10, Y15, and Y20, and the adjacent Y pixels on the right side are Y4, Y9, Y14, Y19, and Y24.
Then, the image processor 120 selects the larger of the U pixel difference and the Y pixel difference and maps it to an intensity ratio (step S1050). Continuing the example, refer to fig. 12, which plots the intensity ratio coef against the maximum pixel difference YUmax. The maximum intensity ratio Cmax, the first threshold Th7 and the second threshold Th8 are defined by the user. Assuming the U pixel difference Udiff is greater than the Y pixel difference Ydiff, the image processor 120 selects Udiff and maps it to the intensity ratio Cfinal.
Finally, the image processor 120 blends the de-extremum U pixel with the Nth preceding neighboring U pixel according to the intensity ratio to generate a noise-suppressed U pixel Ulpf (step S1060). Continuing the example, the noise-suppressed U pixel Ulpf = (1 − Cfinal) × Ufcr + Cfinal × (Nth preceding neighboring U pixel). Of course, the noise-suppressed U pixel Ulpf can also be obtained by other calculations, which the invention does not limit.
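Steps S1050-S1060 combine a threshold ramp like fig. 12 with a blend. A sketch assuming the ratio rises linearly from 0 at Th7 to Cmax at Th8, and that Cmax ≤ 1 so the blend weights sum to one (both assumptions, since the figure gives only the three parameters):

```python
def intensity_ratio(yu_max, c_max, th7, th8):
    """Map max(Udiff, Ydiff) to Cfinal per an assumed fig. 12 ramp."""
    if yu_max <= th7:
        return 0.0
    if yu_max >= th8:
        return c_max
    return c_max * (yu_max - th7) / (th8 - th7)

def blend_chroma(u_fcr, u_prev_n, c_final):
    """Step S1060: Ulpf = (1 - Cfinal) * Ufcr + Cfinal * (Nth preceding U)."""
    return (1.0 - c_final) * u_fcr + c_final * u_prev_n
```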
Accordingly, the image processor 120 generates the noise-suppressed U pixel Ulpf through steps S1010-S1060, so as to suppress the color noise of the current U pixel. In low-light scenes, however, color noise cannot be completely suppressed. The image processor 120 therefore regards low-chroma pixels as pixels affected by noise and high-chroma pixels as pixels unaffected by noise.
Therefore, after step S910 in fig. 9, the image processor 120 determines whether the current U pixel is less than or equal to a low chroma pixel (step S920). If the current U pixel is less than or equal to the low chroma pixel, the image processor 120 lowers the current U pixel to generate an output U pixel (step S930).
If the current U pixel is larger than the low chroma pixel, the image processor 120 further determines whether the current U pixel is smaller than or equal to a high chroma pixel (step S940). At this time, if the current U pixel is less than or equal to the high chroma pixel, the image processor 120 adjusts the current U pixel according to a monotonically increasing function to generate an output U pixel (step S950). If the current U pixel is larger than the high chroma pixel, the image processor 120 uses the current U pixel as the output U pixel (i.e., maintains the current U pixel) (step S960).
Continuing the example and referring to fig. 13, fig. 13 shows a graph of the current U pixel versus the output U pixel according to an embodiment of the invention. In this example, the low-chroma pixel value is set to 128 and the high-chroma pixel value to 160. If the current U pixel U12 is less than or equal to the low-chroma pixel value, the image processor 120 reduces the current U pixel (to 0, treating it as noise) to generate the output U pixel U12'. If the current U pixel U12 is greater than the low-chroma pixel value and less than or equal to the high-chroma pixel value, the image processor 120 adjusts the current U pixel U12 according to a monotonically increasing function to generate the output U pixel U12'. If the current U pixel U12 is greater than the high-chroma pixel value, the image processor 120 takes the current U pixel U12 as the output U pixel U12'.
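A sketch of the fig. 13 mapping with the example's 128/160 set points. The shape of the monotonically increasing middle segment is not specified, so a linear ramp is assumed; the same mapping applies to V pixels in steps S1420-S1460:

```python
def output_chroma(u, low=128, high=160):
    """Output U (or V) pixel per steps S920-S960 (fig. 13 sketch)."""
    if u <= low:
        return 0                                 # low chroma: treated as noise
    if u <= high:
        return (u - low) * high // (high - low)  # assumed monotone linear ramp
    return u                                     # high chroma: kept as-is
```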
As can be seen from the above, the image enhancement device 100 suppresses the color noise of each U pixel U0-Un in the input image Im to generate the adjusted output U pixel U0 '-Un'.
In the following embodiment, the image enhancement device 100 suppresses the color noise of each V pixel V0-Vn in the input image Im to generate the adjusted output V pixels V0 '-Vn'. For convenience of description, the pixel P12 and its neighboring pixels P0-P11 and P13-P24 (i.e., 5 × 5 mask formed by taking the pixel P12 as the center) in the input image Im are described below.
Referring to FIG. 14 and FIG. 2C, after step S305, the image processor 120 sequentially uses the V pixels V0-V24 related to chroma information in the pixels P0-P24 as the current V pixels to further adjust the V pixels V0-V24. Subsequently, the image processor 120 performs a noise suppression process on a current V pixel V12 and a plurality of neighboring V pixels V0-V11 and V13-V24 neighboring the current V pixel V12 to generate a noise-suppressed V pixel Vlpf (step S1410).
More specifically, referring to fig. 15, the image processor 120 selects the neighboring V pixels ranked in the middle as selected V pixels among the neighboring V pixels (step S1510). Then, the image processor 120 averages the selected V pixels to generate a de-extremum V pixel (step S1520). Accordingly, the image processor 120 can eliminate the extremum of the neighboring V pixels to suppress the color noise around the current V pixel.
Next, the image processor 120 calculates an absolute difference between the current V pixel and the nth neighboring V pixel to generate a V pixel difference (step S1530). Then, the image processor 120 calculates absolute differences between the neighboring Y pixels located at the right side of the current Y pixel and the neighboring Y pixels located at the left side of the current Y pixel to generate a Y pixel difference (step S1540).
Then, the image processor 120 selects a larger value from the V pixel difference and the Y pixel difference, and maps the value to an intensity ratio (step S1550). Finally, the image processor 120 mixes (blending) the de-extremum V pixel and the previous Nth neighboring V pixel according to the intensity ratio to generate a noise-suppressed V pixel Vlpf (step S1560). The implementation of the above steps S1510-S1560 can be substantially derived from the steps S1010-S1060, and thus, the description thereof is omitted here.
Accordingly, the image processor 120 generates the noise-suppressed V pixel Vlpf through steps S1510-S1560, so as to suppress the color noise of the current V pixel. In low-light scenes, however, color noise cannot be completely suppressed. The image processor 120 therefore regards low-chroma pixels as pixels affected by noise and high-chroma pixels as pixels unaffected by noise.
Therefore, after step S1410 of fig. 14, the image processor 120 determines whether the current V pixel is less than or equal to a low chroma pixel (step S1420). If the current V pixel is less than or equal to the low chroma pixel, the image processor 120 lowers the current V pixel to generate an output V pixel (step S1430).
If the current V pixel is larger than the low chroma pixel, the image processor 120 further determines whether the current V pixel is smaller than or equal to a high chroma pixel (step S1440). If the current V pixel is less than or equal to the high chroma pixel, the image processor 120 adjusts the current V pixel according to a monotonically increasing function to generate an output V pixel (step S1450). If the current V pixel is larger than the high chroma pixel, the image processor 120 uses the current V pixel as the output V pixel (i.e., maintains the current V pixel) (step S1460). The above-mentioned embodiments of steps S1420-S1460 can be derived from steps S920-S960, and therefore are not described herein again.
As can be seen from the above, the image enhancement device 100 suppresses the color noise of each V pixel V0-Vn in the input image Im to generate the adjusted output V pixels V0 '-Vn'.
In summary, the image enhancement method and image enhancement device provided by the embodiments of the invention perform edge enhancement (i.e., sharpening) on the demosaiced image according to the regional characteristics of the input image while simultaneously suppressing luminance noise and color noise, thereby providing a sharp and clean image.

Claims (10)

1. An image enhancement method is suitable for an image enhancement device and comprises the following steps:
sequentially obtaining each pixel of an input image in a YUV color space, wherein each pixel is provided with a Y sub-pixel, a U sub-pixel and a V sub-pixel;
performing a low-pass filtering process on a current Y sub-pixel and a plurality of adjacent Y sub-pixels adjacent to the current Y sub-pixel to generate a first low-pass pixel having a first low-pass pixel value;
judging whether the current Y sub-pixel is an edge pixel according to a gradient change sum of the current Y sub-pixel, and if the current Y sub-pixel is the edge pixel, performing the low-pass filtering processing on the first low-pass pixel and a plurality of adjacent Y sub-pixels corresponding to the trend according to a trend of the edge pixel to generate a second low-pass pixel with a second low-pass pixel value, wherein the trend is a vertical direction, a horizontal direction, a positive diagonal direction or a negative diagonal direction;
determining whether the current Y sub-pixel is a thin-edge pixel, wherein if the current Y sub-pixel is not the thin-edge pixel, an edge response value between the current Y sub-pixel and the neighboring Y sub-pixel is calculated according to a first mask, and if the current Y sub-pixel is the thin-edge pixel, the edge response value between the current Y sub-pixel and the neighboring Y sub-pixel is calculated according to a second mask, wherein the edge response value calculated according to the second mask is higher than the edge response value calculated according to the first mask;
calculating an enhanced pixel value corresponding to the gradient change sum and the edge response value according to an enhancement function; and
adding the second low-pass pixel value to the enhanced pixel value to generate an output Y pixel value of the current Y sub-pixel.
2. The image enhancement method of claim 1, wherein the step of generating the first low-pass pixel having the first low-pass pixel value further comprises:
determining a first weight value of each of the neighboring Y sub-pixels located in the upper column and the lower column of the current Y sub-pixel according to the similarity between the current Y sub-pixel and the neighboring Y sub-pixels located in the upper column and the lower column of the current Y sub-pixel;
performing a weighted average of the pixel value of the current Y sub-pixel and the pixel values of the adjacent Y sub-pixels located in the upper row and the lower row of the current Y sub-pixel according to the first weight value to generate an edge-protected low-pass pixel value;
determining a second weight value of each first adjacent Y sub-pixel according to the similarity between the current Y sub-pixel and a first adjacent Y sub-pixel which is positioned at the left side and the right side of the current Y sub-pixel and is in the same line with the current Y sub-pixel; and
performing the weighted average on the edge-protected low-pass pixel value and the pixel values of the first adjacent Y sub-pixels according to the second weight values to generate the first low-pass pixel value.
3. The image enhancement method of claim 1, wherein the step of generating the second low-pass pixel having the second low-pass pixel value further comprises:
determining a third weight value corresponding to each adjacent Y sub-pixel of the trend according to the similarity between the current Y sub-pixel and a plurality of adjacent Y sub-pixels corresponding to the trend; and
performing a weighted average on the first low-pass pixel value and the pixel values of the adjacent Y sub-pixels corresponding to the trend according to the third weight values to generate the second low-pass pixel value.
4. The image enhancement method of claim 1, wherein the step of determining whether the current Y sub-pixel is the edge pixel further comprises:
judging whether the gradient change sum corresponding to the current Y sub-pixel is larger than or equal to an edge threshold value;
if the gradient change sum is larger than or equal to the edge threshold value, judging the current Y sub-pixel as the edge pixel, and determining the trend of the edge pixel according to the gradient change sum representing at least one edge direction; and
if the sum of the gradient changes is smaller than the edge threshold value, the current Y sub-pixel is judged not to be the edge pixel.
5. The image enhancement method of claim 1, wherein in the step of determining whether the current Y sub-pixel is the edge pixel, if the current Y sub-pixel is not the edge pixel, the edge response value between the current Y sub-pixel and the neighboring Y sub-pixel is calculated according to the first mask.
6. The image enhancement method of claim 1, wherein the step of determining whether the current Y sub-pixel is the thin-edge pixel further comprises:
judging whether the pixel value of the current Y sub-pixel is simultaneously larger or simultaneously smaller than the pixel values of the adjacent Y sub-pixels positioned at the diagonal angle of the current Y sub-pixel;
if not, the current Y sub-pixel is not the thin-edge pixel, and if so, whether an absolute difference value between the pixel value of the adjacent Y sub-pixel in the vertical direction of the current Y sub-pixel and the pixel value of the adjacent Y sub-pixel in the horizontal direction of the current Y sub-pixel is greater than a thin-edge threshold value is judged; and
if the absolute difference is less than or equal to the thin-edge threshold, the current Y sub-pixel is not the thin-edge pixel, and if the absolute difference is greater than the thin-edge threshold, the current Y sub-pixel is the thin-edge pixel.
7. The image enhancement method according to claim 1, wherein in the step of calculating the enhanced pixel value according to the enhancement function, the enhanced pixel value is higher if the sum of the gradient changes is higher, the enhanced pixel value is lower if the sum of the gradient changes is lower, the enhanced pixel value is larger if the edge response value is larger, and the enhanced pixel value is smaller if the edge response value is smaller.
8. The image enhancement method of claim 1, wherein after the step of sequentially obtaining the pixels of the input image in the YUV color space, further comprising:
performing noise suppression processing on a current U sub-pixel and a plurality of adjacent U sub-pixels adjacent to the current U sub-pixel to generate a pixel value of the noise suppressed U sub-pixel;
judging whether the pixel value of the current U sub-pixel is less than or equal to a low chroma pixel value;
if the pixel value of the current U sub-pixel is less than or equal to the low-chroma pixel value, reducing the pixel value of the noise suppression U sub-pixel to generate an output U pixel value of the current U sub-pixel, and if the pixel value of the current U sub-pixel is greater than the low-chroma pixel value, judging whether the pixel value of the current U sub-pixel is less than or equal to a high-chroma pixel value; and
if the pixel value of the current U sub-pixel is smaller than or equal to the high-chroma pixel value, the pixel value of the noise suppression U sub-pixel is adjusted according to a monotone increasing function to generate an output U pixel value of the current U sub-pixel, and if the pixel value of the current U sub-pixel is larger than the high-chroma pixel value, the pixel value of the noise suppression U sub-pixel is used as the output U pixel value of the current U sub-pixel.
9. The image enhancement method of claim 8, wherein the step of generating the pixel value of the noise-suppressed U sub-pixel further comprises:
sorting the adjacent U sub-pixels according to the U pixel values, and selecting a plurality of adjacent U sub-pixels sorted in the middle as a plurality of selected U sub-pixels;
averaging the pixel values of the selected U sub-pixels to generate a de-extremum U pixel value;
calculating an absolute difference between a pixel value of the current U sub-pixel and a pixel value of an Nth adjacent U sub-pixel from the previous position to generate a U pixel difference value, wherein each pixel in the input image is sorted from left to right and from top to bottom, and the Nth adjacent U sub-pixel from the previous position is the Nth adjacent U sub-pixel from the current U sub-pixel as a starting point in the input image;
calculating an absolute difference between pixel values of the neighboring Y sub-pixels located at a right side of the current Y sub-pixel and the neighboring Y sub-pixels located at a left side of the current Y sub-pixel to generate a Y pixel difference value;
selecting a larger value from the U pixel difference value and the Y pixel difference value, and mapping the value to an intensity ratio; and
mixing the de-extremum U pixel value with the pixel value of the Nth preceding adjacent U sub-pixel according to the intensity ratio to generate the pixel value of the noise suppression U sub-pixel.
10. An image enhancement device, comprising:
an image capturing device for sequentially capturing each pixel of an input image in a YUV color space, wherein each pixel has a Y sub-pixel, a U sub-pixel and a V sub-pixel;
an image processor electrically connected to the image capturing device for executing the following steps:
receiving each pixel of the input image in the YUV color space;
performing a low-pass filtering process on a current Y sub-pixel and a plurality of adjacent Y sub-pixels adjacent to the current Y sub-pixel to generate a first low-pass pixel having a first low-pass pixel value;
judging whether the current Y sub-pixel is an edge pixel according to a gradient change sum of the current Y sub-pixel, and if the current Y sub-pixel is the edge pixel, performing the low-pass filtering processing on the first low-pass pixel and a plurality of adjacent Y sub-pixels corresponding to the trend according to a trend of the edge pixel to generate a second low-pass pixel with a second low-pass pixel value, wherein the trend is a vertical direction, a horizontal direction, a positive diagonal direction or a negative diagonal direction;
determining whether the current Y sub-pixel is a thin-edge pixel, wherein if the current Y sub-pixel is not the thin-edge pixel, an edge response value between the current Y sub-pixel and the neighboring Y sub-pixel is calculated according to a first mask, and if the current Y sub-pixel is the thin-edge pixel, the edge response value between the current Y sub-pixel and the neighboring Y sub-pixel is calculated according to a second mask, wherein the edge response value calculated according to the second mask is higher than the edge response value calculated according to the first mask;
calculating an enhanced pixel value corresponding to the gradient change sum and the edge response value according to an enhancement function; and
adding the second low-pass pixel value to the enhanced pixel value to generate an output Y pixel value of the current Y sub-pixel.
CN201711235144.5A 2017-11-30 2017-11-30 Image enhancement method and image enhancement device Active CN109862338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711235144.5A CN109862338B (en) 2017-11-30 2017-11-30 Image enhancement method and image enhancement device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711235144.5A CN109862338B (en) 2017-11-30 2017-11-30 Image enhancement method and image enhancement device

Publications (2)

Publication Number Publication Date
CN109862338A CN109862338A (en) 2019-06-07
CN109862338B true CN109862338B (en) 2021-03-02

Family

ID=66888002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711235144.5A Active CN109862338B (en) 2017-11-30 2017-11-30 Image enhancement method and image enhancement device

Country Status (1)

Country Link
CN (1) CN109862338B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200813890A (en) * 2006-09-06 2008-03-16 Realtek Semiconductor Corp Method and apparatus for directional edge enhancement
CN103297658A (en) * 2012-03-05 2013-09-11 华晶科技股份有限公司 Image sharpening processing device and method
CN104104842A (en) * 2013-04-02 2014-10-15 珠海扬智电子科技有限公司 Image processing method and image processing device
CN106027851A (en) * 2015-03-30 2016-10-12 想象技术有限公司 Image filtering based on image gradients
CN106157253A (en) * 2015-04-17 2016-11-23 瑞昱半导体股份有限公司 Image processing apparatus and image processing method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4952796B2 (en) * 2007-12-25 2012-06-13 富士通株式会社 Image processing device
US9779491B2 (en) * 2014-08-15 2017-10-03 Nikon Corporation Algorithm and device for image processing


Also Published As

Publication number Publication date
CN109862338A (en) 2019-06-07

Similar Documents

Publication Publication Date Title
JP4693067B2 (en) Techniques for reducing color artifacts in digital images
US8115840B2 (en) Image enhancement in the mosaic domain
US9392241B2 (en) Image processing apparatus and image processing method
US7907791B2 (en) Processing of mosaic images
RU2519829C2 (en) Image processing device
TWI511559B (en) Image processing method
CN109389560B (en) Adaptive weighted filtering image noise reduction method and device and image processing equipment
US8238685B2 (en) Image noise reduction method and image processing apparatus using the same
TWI638336B (en) Image enhancement method and image enhancement apparatus
JP2010268426A (en) Image processing apparatus, image processing method, and program
JP2008060814A (en) Imaging apparatus
CN111429381B (en) Image edge enhancement method and device, storage medium and computer equipment
CN109862338B (en) Image enhancement method and image enhancement device
JP6056511B2 (en) Image processing apparatus, method, program, and imaging apparatus
JP4661775B2 (en) Image processing apparatus, image processing method, program for image processing method, and recording medium recording program for image processing method
US8305499B2 (en) Image processing circuit and method for image processing
JP4400160B2 (en) Image processing device
Kim et al. Adaptive lattice-aware image demosaicking using global and local information
JP5247632B2 (en) Image processing apparatus and method, and image display apparatus and method
JP3617455B2 (en) Image processing apparatus, image processing method, and recording medium
JP2006245916A (en) Circuit and method for processing color separation
JPH10294948A (en) Picture processing method and picture processing program recording medium
JP2001292452A (en) Image pickup device and processing method for color image pickup signal
WO2016051911A1 (en) Image processing device, imaging device, image processing method, and program
JP2017021467A (en) Image processing device, control method and control program of the same

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant