GB2468304A - Video image processing method that smooths the jagged diagonal edges created by epsilon filtering - Google Patents


Info

Publication number
GB2468304A
Authority
GB
United Kingdom
Prior art keywords
edge
filter
pixel
image
edge strength
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0903594A
Other versions
GB0903594D0 (en)
Inventor
Graham Jones
Marc Paul Servais
Matti Pentti Taavetti Juvonen
Kenji Maeda
Andrew Kay
Allan Evans
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sharp Corp
Original Assignee
Sharp Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Corp filed Critical Sharp Corp
Priority to GB0903594A priority Critical patent/GB2468304A/en
Publication of GB0903594D0 publication Critical patent/GB0903594D0/en
Priority to PCT/JP2010/053925 priority patent/WO2010101292A1/en
Publication of GB2468304A publication Critical patent/GB2468304A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T5/001
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing
    • G06T2207/20012 Locally adaptive
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

A method of processing a video image 500 to determine an edge strength 502 in a region associated with each image pixel to be processed, wherein for large edge strengths the pixel is filtered by an edge-preserving filter 504, which may be an epsilon filter, and for small edge strengths the edge-preserving filter is replaced by a smoothing filter 505, 506. For very large edge strengths 502, which may be within 20% of a maximum value, no filter may be applied. A directional filter may also be applied when the edge-preserving filter 504 is applied. The smoothing filter may comprise an image blurring filter 505 followed by an image sharpening filter 506. A weighted combination of the edge-preserving filter 504 and the smoothing filter 505, 506 may be applied in a blend mode 507 according to the size of the edge strength 502. The edge strength 502 may be determined as a function of variations in pixel values in a plurality of pixel blocks, which may be in a single pixel row, in the neighbourhood of the pixel to be processed. Epsilon edge-preserving filtering performs well at reducing mosquito noise near edges, but it tends to create jaggy edges, especially on diagonal edges. This image processing method is used to improve the quality of digital TV and video by smoothing the jagged diagonal edges.

Description

METHOD OF AND APPARATUS FOR PROCESSING A VIDEO IMAGE
The present invention relates to a method of and apparatus for processing a video image, for example for use with or in a display system, or a television, mobile phone, advertising hoarding, digital photograph display device, computer display, projector or other public or personal devices including such a display system. The method may be used in image processing for improving the perceived quality of digital TV or video without changing what is broadcast or otherwise distributed.
In this disclosure we consider that a display is a device capable of processing and showing static or moving images. Moving images are generally represented as video sequences of instantaneous frames or fields. The difference is that a frame represents an entire image at an instant, whereas a field represents only a portion of an image at an instant, such as every other line of an image.
The advantage of digital video transmission or storage is that techniques of lossy digital compression can be used to reduce the number of bits required to describe the images. However, as the number of bits is decreased the perceived quality of the images can be seriously degraded by the introduction of compression artefacts, which are perceived as noise. In addition, when scaling video content for a larger display size these artefacts tend to become more visible and annoying to the viewer. Current compression techniques such as MPEG2 and H.264 introduce in particular block noise (caused by the division of the scene into blocks, which may not have smooth joins when decompressed) and mosquito noise caused by not using enough bits to record all the frequencies present in an image, resulting in spatial ringing near strong image edges. Examples can be seen in figure 2 of the accompanying drawings, which shows a detail of a compressed image. Mosquito noise 21 and block noise 22 can be clearly seen.
In "An Efficient Approach to the Reduction of Mosquito Noise for the JPEG/JPEG2000 Decoded Image by Using Epsilon Filter" (2002 International Conference on Digital Printing Technologies) Shohdoji et aI present a method for reducing mosquito noise using an c (epsilon) filter. Patent US2005117807 (Shohdoji) by the same author proposes an optimisation to the method, reducing the need to calculate c at every pixel position.
WO2007072301 (Philips) and US7203234 (Sharp, SLA) try to reduce ringing using directional low pass filtering, using information from the decompression (decoder) unit.
Often, due to the modular design of display systems, such decompression information is not available to other parts of the system, so this method would not work in such a case.
US6996184 (Sony) uses more than one frame of data to reduce noise. This requires extra memory to record the history of the processing.
JP2007174403 (Toshiba) improves an image by emphasising edges, but being careful not to emphasize areas potentially containing compression block edges.
WO2004084123 (Qualcomm) reduces block noise by filtering specifically at positions on block edges.
The ε-filter is described in "An Efficient Approach to the Reduction of Mosquito Noise for the JPEG/JPEG2000 Decoded Image by Using Epsilon Filter" (2002 International Conference on Digital Printing Technologies) by Shohdoji et al. and also in patent application US2005117807. Here we describe it with slightly different notation. There are three main parameters M, N and escale. The luminance value Y(x,y) is either known or calculated for each pixel P(x,y) in an input image. Next the algorithm calculates a measure of variation V(x,y) of Y over rectangular N×N blocks of the image centred at each pixel location (x,y). Let |x| denote, as usual, the absolute value of a number x. Let A(x,y,N) denote the set of pixel positions (x',y') such that |x'-x| ≤ N/2 and |y'-y| ≤ N/2, that is roughly the rectangle of size N centred on (x,y). One measure of variation to be used for V suggested in US2005117807 is standard deviation:

(1) V(x,y) = ( Σ_{(x',y')∈A(x,y,N)} [ Y(x',y') - Σ_{(x'',y'')∈A(x,y,N)} Y(x'',y'') / |A(x,y,N)| ]² / |A(x,y,N)| )^{1/2}

Next E(x,y) is calculated for each pixel (x,y), where E(x,y) is the maximum of V(x',y') taken over a block A(x,y,M) of size M centred at (x,y):

(2) E(x,y) = max_{(x',y')∈A(x,y,M)} V(x',y')

Next E(x,y) is multiplied by a constant factor as an adjusting coefficient, escale, to obtain the epsilon matrix:

(3) ε(x,y) = E(x,y) · escale

Intuitively the value ε(x,y) represents the "size" of nearby edges, so that it has a larger value for pixels near to (within M of) strong edges.
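By way of illustration only, equations (1) to (3) might be computed as in the following Python/NumPy sketch; the function names, the use of SciPy's uniform_filter and maximum_filter, and the boundary handling are our assumptions, not part of the patent.

import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def variation_V(Y, N=3):
    # Equation (1): standard deviation of Y over an NxN block at each
    # pixel, computed via E[Y^2] - E[Y]^2 for efficiency.
    mean = uniform_filter(Y, size=N, mode="nearest")
    mean_sq = uniform_filter(Y * Y, size=N, mode="nearest")
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def edge_strength_E(V, M=3):
    # Equation (2): maximum of V over an MxM block centred at each pixel.
    return maximum_filter(V, size=M, mode="nearest")

def epsilon_field(E, escale=0.6):
    # Equation (3): scale E by the adjusting coefficient escale.
    return E * escale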
Finally the epsilon matrix is used as the filter strength for an ε-filter, which we will call eps, which is applied to each pixel of the original luminance image to obtain a new luminance image Y'(x,y) = eps(ε, M_ε, Y, x, y). To assist the definition of the ε-filter we first define (following Shohdoji)

T_ε[x] = 1, if |x| ≤ ε
T_ε[x] = 0, if |x| > ε

Then the ε-filter is given by

(4) eps(ε, M_ε, Y, x, y) = ( Σ_{(x',y')∈A(x,y,M_ε)} T_{ε(x,y)}[Y(x',y') - Y(x,y)] · Y(x',y') ) / ( Σ_{(x',y')∈A(x,y,M_ε)} T_{ε(x,y)}[Y(x',y') - Y(x,y)] )

(Note that US2005117807 has an obvious error here in paragraph 0016, equation (1), as it uses its equivalent of Y(x,y) as the multiplier in the numerator rather than Y(x',y').)
The parameters are the field of epsilon values, ε; a measure, M_ε, of the size of the area over which the filter operates for each pixel; the original image, Y; and the location of the pixel to be calculated, (x,y).
Intuitively this equation defines Y'(x,y) to be the average Y of those neighbours of (x,y) within M_ε pixels and with value within ε(x,y) of Y(x,y). Thus small variations of size less than ε tend to be smoothed out, without compromising the visually important edge. This is illustrated in Figure 1 of the accompanying drawings, which depicts a 1-D slice through a luminance image near to two edges 11 and 12. The result 14 shows a reduction in mosquito noise 13.
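For illustration, a direct and unoptimised sketch of the ε-filter of equation (4) might look as follows; the function name and the edge-of-image handling are assumptions.

import numpy as np

def eps_filter(eps, M_eps, Y, x, y):
    # Equation (4): average of those neighbours of (x, y) within the
    # M_eps window whose value lies within eps[x, y] of Y[x, y].
    h, w = Y.shape
    r = M_eps // 2
    x0, x1 = max(0, x - r), min(h, x + r + 1)
    y0, y1 = max(0, y - r), min(w, y + r + 1)
    block = Y[x0:x1, y0:y1]
    mask = np.abs(block - Y[x, y]) <= eps[x, y]  # T_eps of equation (4)
    return block[mask].mean()  # the mask always includes (x, y) itself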
For large values of M or N it becomes expensive in terms of computing power to calculate the result of an ε-filter. It is possible to compensate to some extent for small values of M and N by increasing escale. However, there are disadvantages to increasing escale, as we explain.
We observe that ε-filtering performs well at reducing noise near edges, especially when the correction parameter, escale, is large. However, it tends to create jaggy edges, by removing the softening intermediate values which allow diagonal edges to look smooth.
In addition, it does not perform at all well at reducing noise not close to strong image edges, for example block noise. Figure 3 of the accompanying drawings illustrates this. The image 31 before processing exhibits mosquito noise 32, which has been reduced 34 to a much lower level in the image 33 after processing. However the smooth diagonal 35 has been made more jaggy 36.
Figure 7 of the accompanying drawings explains in more detail why this is. In each graph 7a, 7b the horizontal axis represents a 1-dimensional section through an image, and the vertical axis represents brightness. On the left 7a the graph represents the original image, and on the right 7b is the same image processed by an ε-filter with a large value for escale. In each graph the section shows what happens at an edge, with dark values on the left of the edge and bright ones on the right. Since escale is large, the calculated values of ε are also large compared with the size of the edge. The effect is then that the pixels nearest the edge (filled in black) are drawn outward, nearer to their neighbours within ε. If the edge happens to be a diagonal one as diagrammed in 7c and 7d, then the effect is that the smooth transition from top to bottom of the edge in original image 7c has now become too sharp, resulting in jaggy edges as in processed image 7d. The pixels just on and below the main diagonal (such as those marked 71) in 7c correspond to the pixels marked with black circles in 7a.
According to first and second aspects of the invention, there are provided apparatuses as defined in the appended claims 1 and 6, respectively.
According to third and fourth aspects of the invention, there are provided methods as defined in the appended claims 18 and 19, respectively.
Embodiments of the invention are defined in the other appended claims.
The ε-filtering method may be modified by using E as a measure of edge strength, using filtering designed not to create jaggy edges when E is nearly maximal. We can determine the near-maximality from the fields V and E, which are calculated anyway during the ε-filter method. Optionally, when E is small we use a generic smoothing filter (low pass filter) instead of ε-filtering. For intermediate values (non-small and non-maximal) we perform ε-filtering as usual. Optionally we blend filtering between these three regions to prevent sudden changes in processing which might be perceptible.
It is not necessary to use special information from a compression decoder other than the succession of fields or frames of video data itself. This information includes items such as frame rate, motion vectors, amount of quantisation and more. Such information from the decoder can optionally be used to tune the parameters for better performance.
This method works well applied to each field or frame without necessarily requiring information from earlier fields or frames. However, information from earlier fields or frames can be used to tune the parameters for better performance.
Data from earlier frames can be used to adjust the filters and filter parameters adaptively, depending on the video content. For example, if the type of the video in the shot is known by analysis or otherwise to be one of cartoon, sport, interview, slow-moving scenery, computer generated, then different parameters can be selected.
While in the prior art, square (N×N or M×M) rectangles have generally been used as the domain over which the measure of variation is calculated, the domain over which selective averaging is performed, and the domain of the optional low-pass filter, the present invention is not restricted to square domains. Square domains are included in some embodiments of the invention, but other types of domain may also be used. In particular, a domain which contains only one pixel row (1×N or 1×M) may be advantageous when memory or processing resources are restricted.
Such techniques allow the suppression of mosquito noise without introducing more jaggy edges into the image, resulting in a more pleasing image.
Mosquito noise and block noise filters may be integrated with little extra cost, and without the introduction of more jaggy edges.
The implementation of these techniques may be cheap, as they make use of information, V and E, which is readily available.
The escale parameter (strength of filter) may be increased to much higher values than in the prior art without adding jagginess. This allows smaller values of M and N to be used, which in turn reduces the cost of implementation. It allows processing of video which has undergone a larger amount of compression and consequent degradation.
Such techniques may provide an image ready for enlarging and/or sharpening and/or other processing for display. In particular, sharpening tends to increase jagginess, so an advantage of the present technique is that jaggy edges are not increased before sharpening. With the prior art ε-filter, jaggy edges would be increased both by the ε-filter and by subsequent sharpening.
In the case of embodiments using filtering domains containing a single row, the integration of a system for the removal of noise with a very small requirement for computing and memory resources is possible.
The invention will be further described, by way of example, with reference to the accompanying drawings, in which:
Figure 1 shows the operation of the standard epsilon filter;
Figure 2 illustrates some types of compression noise;
Figure 3 illustrates processing with and without the new method;
Figure 4 illustrates processing with and without the new method;
Figure 5 illustrates an embodiment of the invention;
Figure 6 illustrates the principal components of the system;
Figure 7 illustrates the reason for jaggy edges after epsilon filtering;
Figure 8 illustrates the principle of operation of embodiments of the invention where filtering domains include only one row of video data; and
Figure 9 shows a number of different ways in which the invention may be incorporated into devices or products.
Figure 6 illustrates the principal components of the system. Video from a compressed source 61 (such as a digital satellite, cable or terrestrial broadcast receiver, or from a device such as a PVR, DVD player or Blu-ray disc player, or from an internet video service, or from a video conferencing service) is passed to a decompressor 62 to create an uncompressed video. An image processing unit 63 applies algorithms to reduce compression artefacts (and optionally applies other algorithms for other purposes, such as image scaling for the particular display unit.) The cleaned video is then displayed on a panel or other display unit (such as a CRT) 64.
Figure 5 illustrates a preferred embodiment of the invention, typically forming a part of the image processing unit 63. An incoming image 500 that has been previously decompressed is processed to create an output image 508 with fewer visible compression artefacts. The Y component of the input image is passed as a parameter 53 to several parts of the process. One part 501 computes V(x,y) according to equation (1) above. Alternatively other measures can be used, for example variance, or some other measure of edge strength such as a Canny edge filter. The result is passed to a unit 502 which computes E(x,y) according to equation (2) above. This result is passed to a unit 503 which computes ε(x,y) by multiplying E by the parameter escale 509, according to equation (3). The result is passed to the ε-filter 504, which also receives the image 500 as a parameter, and computes according to equation (4), sending the result 52 to the blend unit 507. The results V(x,y) 501 and E(x,y) 502 and escale 509 are also passed to a decision unit 51 which identifies a blend mode for each pixel position (x,y).
The Y input image is also passed 53 to a blur unit 505, and the blurred result passed to a sharpening unit 506, and the result of that 54 passed to the blend unit 507. The purpose of this blur is to remove block noise, so a small gaussian blur is a possible implementation. The sharpening unit is used to optionally recover details of texture which are over-smoothed by the blur. Units 505 and 506 may optionally be replaced by any filter designed to remove block noise.
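A possible sketch of this blur-then-sharpen path (units 505 and 506) follows; the gaussian sigma, the unsharp-mask form of the sharpener and the amount parameter are illustrative assumptions only.

from scipy.ndimage import gaussian_filter

def block_filter(Y, sigma=1.0, amount=0.5):
    # Unit 505: small gaussian blur to remove block noise.
    blurred = gaussian_filter(Y, sigma, mode="nearest")
    # Unit 506: unsharp mask, adding back a fraction of the detail
    # over-smoothed by the blur.
    return blurred + amount * (blurred - gaussian_filter(blurred, sigma, mode="nearest"))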
The blend unit 507 also receives the Y input image. Its purpose is to combine the three kinds of input image (original 53, block-filtered 54 and ε-filtered 52) using the result from the blend mode unit 51. The blend unit may combine the inputs by weighting the three inputs and adding them, and the weights (W_Y, W_b and W_ε respectively) may be received from the blend mode unit 51. The blend unit may optionally scale and sharpen the resulting image for best viewing. Finally the resulting image may be recombined with the unprocessed colour components of the original image to produce a colour image ready for display.
Calculation of the blend mode is important for this method. The basic idea is that when E(x,y) is small there are no hard edges nearby, and therefore the block filter runs. When E(x,y) is large, then there is a nearby hard edge, so the ε-filter runs. The exception is when V(x,y) is roughly maximal, that is close in value to its local peak value E(x,y); then to avoid jaggedness the ε-filter is not run (and so no filtering occurs).
One way to test for rough maximality of V(x,y) is to check if V(x,y) > E(x,y) - t1 and E(x,y) > t2 for suitable threshold parameters t1 and t2. Another method is to test if (E(x,y) - V(x,y)) < t'1 · E(x,y) for a different threshold parameter t'1.
Good results have been obtained using values M = 3, N = 3, escale = 0.6, t1 = 0.02, t2 = 0.1 (here assuming data values vary from 0.0 black to 1.0 white), though naturally values must be tuned for each particular display type or application, and expected viewing conditions.
To avoid sudden transitions between regions it may be advantageous to blend between the block and ε-filter regions, so that over a small range of values of E(x,y) a portion of each is used. One possible way to achieve this is to adjust the weighting factors linearly in this region. For example, given a threshold t3 and a gradient parameter m, calculate a blending parameter r(x,y) as follows:

r(x,y) = max(0, min(1, m · E(x,y) - t3))

For example, m = 9 and t3 = 0.2 are reasonable values.
Then use r(x,y) to control the blend weighting parameters as follows:

if V(x,y) > E(x,y) - t1 and E(x,y) > t2: W_Y(x,y) = 1; W_ε(x,y) = 0; W_b(x,y) = 0
else: W_Y(x,y) = 0; W_ε(x,y) = r(x,y); W_b(x,y) = 1 - r(x,y)

The blending operation 507 may be as simple as

Y'(x,y) = W_Y(x,y) · Y(x,y) + W_ε(x,y) · eps(ε, M_ε, Y, x, y) + W_b(x,y) · b(x,y)

where b(x,y) is the output 54 of the block filter 505, 506, and Y'(x,y) is the resulting monochrome image 508.
For later reference we give the name A to the region with W_Y(x,y) = 1, B to the region with W_ε(x,y) = 1, and C to the region with W_b(x,y) = 1. We call D the blend region, in which 0 < W_ε(x,y) < 1 and 0 < W_b(x,y) < 1.
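Putting the decision and the blend together, units 51 and 507 might be sketched as follows; the vectorised formulation and the default parameter values (taken from the example above) are our choices.

import numpy as np

def blend(Y, Y_eps, Y_block, V, E, t1=0.02, t2=0.1, m=9.0, t3=0.2):
    # Blending parameter r(x, y) = max(0, min(1, m*E - t3)).
    r = np.clip(m * E - t3, 0.0, 1.0)
    # Region A: V roughly maximal, so no filtering (W_Y = 1).
    near_max = (V > E - t1) & (E > t2)
    W_y = np.where(near_max, 1.0, 0.0)
    W_e = np.where(near_max, 0.0, r)        # epsilon-filter weight
    W_b = np.where(near_max, 0.0, 1.0 - r)  # block-filter weight
    return W_y * Y + W_e * Y_eps + W_b * Y_block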
This blending technique uses the value E(x,y) to determine the blend control parameter r(x,y). This may lead to overly sudden blending in the presence of very sharp features (such as thin horizontal or vertical lines). It may be preferable to provide a more gradual blend, which may be achieved by many methods including the following, used singly or in combination. For these variations we change the definition of r, and instead write r(x,y) = max(0, min(1, m · E'(x,y) - t3)), where the primed E indicates a different, smoother, edge weighting function as given by example here. We note that ε(x,y), as used in equation (4), may be calculated using either E or E', since there may be an efficiency advantage in not calculating both of them.
For the first example we define E' to be the result of applying a smoothing filter (such as a gaussian kernel in one or two dimensions) to E. For the second example we obtain E' from E by modifying its definition. Whereas E is defined in terms of the variance V (in equation 1), we may define E' in terms of V', where

V'(x,y) = ( Σ_{(x',y')∈A(x,y,N)} [ Y(x',y') · w(x'-x, y'-y) - Σ_{(x'',y'')∈A(x,y,N)} Y(x'',y'') · w(x''-x, y''-y) / |A(x,y,N)| ]² / |A(x,y,N)| )^{1/2}

and w(dx,dy) is a smoothing kernel, such as a gaussian.
For the third example we may obtain E' from E by modifying instead equation 2, so that

E'(x,y) = max_{(x',y')∈A(x,y,M)} V(x',y') · w'(x'-x, y'-y)

where again w'(dx,dy) is a smoothing kernel, such as a gaussian.
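As illustrative sketches only, the first and third variants of E' might be computed as follows; the gaussian widths and function names are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def E_prime_smoothed(E, sigma=1.0):
    # First example: apply a gaussian smoothing kernel directly to E.
    return gaussian_filter(E, sigma, mode="nearest")

def E_prime_weighted_max(V, M=3, sigma=1.0):
    # Third example: maximum over the MxM window of V weighted by a
    # gaussian w' of the offset from the window centre.
    r = M // 2
    h, w = V.shape
    out = np.zeros_like(V)
    padded = np.pad(V, r, mode="edge")
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            wgt = np.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
            shifted = padded[r + dx : r + dx + h, r + dy : r + dy + w]
            out = np.maximum(out, shifted * wgt)
    return out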
It will be well understood by those skilled in the art that the various computations and units described above are primarily used for describing the method, and that any implementation may combine or separate calculations to create a different architecture that achieves the same effect, namely the blending of the different types of processing.
Implementation may be in software or in dedicated hardware or some combination. It may work on whole frames or fields at a time, or may work in a streamed mode with a limited amount of buffering, as is common in video processing.
For efficiency it may be advantageous to calculate the blend mode 51 for any particular pixel early on in the processing order, and then only calculate those inputs to the blend unit 507 which have a non-zero weighting.
Near the borders of the image the algorithm is less likely to perform well. It may be advantageous to do no processing, or optionally to do other processing, within a few pixels of the borders.
Figure 3 illustrates the results of the processing. The image detail 31 without processing exhibits mosquito noise 32, which has been reduced 34 to a much lower level in the image after ε-filter processing 33. However the smooth diagonal 35 has been made more jaggy 36. The image 37 created with this embodiment shows the same reduction 38 in mosquito noise but the smooth diagonal 39 is not significantly jaggier than the original. Figure 4 shows the same thing at higher resolution, using shading to represent grey levels. The scale of grey levels 41 is given for reference, from lightest 45 to darkest 46 pixels. 42 is the original image, 43 the image processed by the algorithm in the prior art and 44 the image processed according to the present embodiment. Note that 43 and 44 differ only near and along the sharp diagonal edge.
The mosquito noise reduction well above or below the edge is identical.
In an alternative embodiment, the unprocessed image is not passed directly to the blend unit for use in blend region A. Instead a directional filter is applied, according to edge direction. The aim is to reduce existing jaggy edges in the image, not simply to avoid adding more jaggy edges. The edge direction information can be obtained from the image directly, or using the results V(x,y) and E(x,y) calculated elsewhere 501, 502.
In an alternative embodiment the choice of blending of the different filters is performed using morphological operations (that is, considering pixel adjacency). In this embodiment regions A and B are calculated as before. Region D is calculated to be those pixels which border within a set distance on pixels of region A or B but which lie in neither A nor B. Region C is then calculated to be those remaining pixels not in A, B or D. In region D the blending weights are chosen to give values between B and C. For example, W_Y(x,y) = 0; W_ε(x,y) = 1/2; W_b(x,y) = 1/2.
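A sketch of this morphological construction, assuming SciPy's binary_dilation as the adjacency operation, might be:

import numpy as np
from scipy.ndimage import binary_dilation

def morphological_regions(A, B, distance=2):
    # A, B: boolean masks for the no-filter and epsilon-filter regions.
    # D: pixels within `distance` of A or B but lying in neither.
    near_AB = binary_dilation(A | B, iterations=distance)
    D = near_AB & ~(A | B)
    # C: the remaining pixels, processed by the block filter.
    C = ~(A | B | D)
    return D, C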
In an alternative embodiment, the shapes of the filter regions A(x, y, M) need not be squares. Note that there are three distinct uses of A(x,y,...): for the variance calculation (1), for the maximisation of variance (2) and for the epsilon filter (4); each one could use a different filter shape. In one example, a rectangle with differing height and width is used. In another example, a region using a different metric, such as |x'-x| + |y'-y| < M, is used. The size or shape of the filter may be made to depend on the input image type, or on local properties of the image. For example, the parameter M may be chosen to vary at each pixel in accordance with the magnitude of V(x,y).
An example of an embodiment with non-square filter regions is one where all calculations are done using pixels from a single video row. In this case, as pixel data are supplied to the system serially row-by-row, the system need store only a few (probably less than 100 per colour channel) pixel values in order to perform the noise reduction function. This allows an implementation which runs at high speed and with low cost.
Figure 8 shows the principle of operation of this type of embodiment. An image 72 is shown, where element 53 is the pixel in the 5th row and 3rd column. Image data arrive serially as a stream of pixel values. The noise reduction system 74 need only take into account nearby pixels in the data stream in order to calculate the output pixel stream 76 if all filtering domains have one row. Other operations 75 may also be applied before or after (as shown) the noise reduction system. In particular, 75 may be a sharpening operation which again operates only on a single pixel row. For example, the noise reduction system 74 may be of the type shown in Figure 5.
Figure 8 shows operations on a single set of pixel data, as might be used to describe a monochrome image. Colour images may be processed in the same way, either by (a) processing the red, green and blue channels separately and independently, (b) calculating a luminance or luma value or some approximation to luminance and processing only that luminance before recombining it with colour information, or (c) by using information derived from one channel of image information to control the blend function for all three channels.
In an alternative embodiment not all the colour channels need be processed. For example, since the eye is not particularly sensitive to blue light it is relatively unimportant to process the blue channel. The advantage of this embodiment is that less processing power or processing hardware is required.
In an alternative embodiment escale may be varied according to the style of image.
For example, if it is known that an animated cartoon sequence is in progress it may be beneficial to increase escale to allow more reduction of mosquito noise. Here the flattening effect of the epsilon filter may be considered an advantage.
In an alternative embodiment extra information is taken from the video decoder, such as motion vectors or estimates of video quality (for example from the number of bits used for coding each block), to control the strength and shapes of the various filters.
In an alternative embodiment processing is not restricted to the Y (luma) channel. If the image is in RGB colour space, or some other colour space, each channel may be processed independently by the filters.
In an alternative embodiment, the information used in the blend mode decision 51 is calculated solely from one of the RGB channels, and the same blending is applied to all three colours (red, green and blue). Preferably the G channel is the one used in the blend mode decision, since the green brightness has the strongest influence on luminance.
In an alternative embodiment the image is already in, or converted to, a colour space with one luminance channel Y, and two colour channels which we will call C1 and C2. Y is processed as before. However, for processing C1 and C2 the system uses E(Y) as a substitute for E(C1) and E(C2) respectively. This is for efficiency, as it requires only one E calculation to serve for processing all three channels, as compared with the previous embodiment.
In an alternative embodiment the E or ε fields are calculated more approximately, perhaps on a coarser grid. For example, a single ε value might be used for a 3x3 or 5x5 region of pixels. This kind of optimisation is discussed in US2005117807 (Shohdoji et al.).
In an alternative embodiment the method of not filtering where E is maximal or nearly maximal is applied to a bilateral filter instead of an epsilon filter. The well-known bilateral filter is similar to an epsilon filter except that the weight of a pixel depends not only on the value of that pixel relative to a central pixel, but also on the distance of that pixel from the central pixel. Bilateral filtering may give rise to jaggy edges in the same way as epsilon filtering, so it is advantageous to use no filtering or a different filter at those pixels which lie close to edges. The method extends in an analogous way to other kinds of smoothing filters.
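For comparison, a minimal sketch of a bilateral filter evaluated at a single pixel follows; the gaussian spatial and range kernels with widths sigma_s and sigma_r are the usual textbook form, and the parameter values are assumptions.

import numpy as np

def bilateral(Y, x, y, radius=2, sigma_s=1.5, sigma_r=0.1):
    # Weight each neighbour both by distance from (x, y) (spatial
    # kernel) and by difference in value from Y[x, y] (range kernel).
    h, w = Y.shape
    x0, x1 = max(0, x - radius), min(h, x + radius + 1)
    y0, y1 = max(0, y - radius), min(w, y + radius + 1)
    xs, ys = np.mgrid[x0:x1, y0:y1]
    spatial = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma_s ** 2))
    rng = np.exp(-((Y[x0:x1, y0:y1] - Y[x, y]) ** 2) / (2 * sigma_r ** 2))
    wgt = spatial * rng
    return float((wgt * Y[x0:x1, y0:y1]).sum() / wgt.sum())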
Epsilon filters and bilateral filters are examples of edge-preserving filters. An edge-preserving filter is one which blurs features in an image but which tends to leave stronger edges unaffected. The present technique may be used with other edge-preserving filters, examples of which include: median filter and coring algorithms (both described by C. Poynton, "Digital Video and HDTV: Algorithms and Interfaces", Elsevier 2003, p. 331); a method based on "bandelets" described by E. Le Pennec and S. Mallat, "Sparse geometric image representations with bandelets", IEEE Transactions on Image Processing, vol. 14 (4), pp. 423-438 (April 2005); and methods based on partial differential equations such as those described in D. Tschumperlé, "Fast Anisotropic Smoothing of Multi-Valued Images using Curvature-Preserving PDEs", International Journal of Computer Vision, vol. 68, no. 1, June 2006, pp. 65-82.
The noise reduction method may be incorporated into a display system or an image manipulation system in many different ways. Some examples are shown in Figure 9 which describe, but are not intended to limit, the possibilities.
In Figure 9(a), the noise reduction system 78 is included (possibly together with other operations such as sharpening or scaling) in a separate device which may receive a video signal from a source 77 and feed video output to a display 79. Source 77 may for example be an optical disk player, a television tuner (analogue or digital), a hard disk recorder or a computer.
In Figure 9(b), the noise reduction system 78 is included in the source device 77 and processes video before it is sent to the display device 79.
In Figure 9(c), the noise reduction system 78 is included in the display device (which may be a monitor, a television or another device which includes a display such as a mobile phone or personal digital assistant or movie player).
Figures 9(d), 9(e) and 9(f) show more specific examples of how a noise reduction system may be included in a device which can be used for viewing television, such as a television set, mobile phone or personal media centre. Such devices typically include at least three processing stages. The TV signal entering the device at the tuner is first demodulated and decoded in a stage 80. The resulting signal is then adapted to the resolution and possibly to the frame rate of the display in a second stage 81. This stage may include scaling and may include frame-rate conversion. The final stage 82 is to adapt the video signal to the specific characteristics of the display device 83 (for instance, CRT or flat panel or projector). The stage 82 typically takes into account the voltage range and the luminance-data relationship of the display device 83, and includes such elements as look-up tables and timing controllers.
Figure 9(d) shows the noise reduction system 78 incorporated in the same hardware as the demodulating/decoding step. This may be advantageous if parameters derived from decoding are used to control the blending or filtering steps in the algorithm.
Figure 9(e) shows the noise reduction system 78 in the same hardware as the scaling step 81. This may have the advantage that when the signal displayed comes from a source other than the built-in tuner/decoder (for example an external feed from a DVD player), the noise reduction may still be applied.
Figure 9(f) shows the noise reduction system 78 incorporated into the part of the system 82 which adapts the signal to the display panel. Again, this may have the advantage that the noise reduction may be applied to data from a number of different sources inside or outside the television set. In this part of the system, memory and processing resources are typically limited, so the one-row version of the system described in Figure 8 may be used.

Claims (22)

  1. An apparatus for processing a video image, comprising first means for determining an edge strength in a region of the image associated with an image pixel to be processed, and second means for applying an edge-preserving filter at the pixel in response to a first edge strength and for substantially not applying the edge-preserving filter at the pixel in response to a second edge strength greater than the first edge strength.
  2. An apparatus as claimed in claim 1, in which the second means is arranged substantially not to apply the edge-preserving filter when the edge strength is at or adjacent a maximum value thereof.
  3. An apparatus as claimed in claim 2, in which the second means is arranged substantially not to apply the edge-preserving filter when the edge strength is within 20% of the maximum value.
  4. An apparatus as claimed in any one of the preceding claims, in which the second means is arranged to apply a directional filter at the pixel in response to the second edge strength.
  5. An apparatus as claimed in any one of the preceding claims, in which the second means is arranged substantially not to apply a smoothing filter at the pixel in response to the first edge strength and is arranged to apply the smoothing filter and substantially not to apply the edge-preserving filter at the pixel in response to a third edge strength less than the first edge strength.
  6. An apparatus for processing a video image, comprising first means for determining an edge strength in a region of the image associated with an image pixel to be processed, and second means for applying an edge-preserving filter and for substantially not applying a smoothing filter at the pixel in response to a first edge strength and for applying the smoothing filter and for substantially not applying the edge-preserving filter at the pixel in response to a third edge strength less than the first edge strength.
  7. An apparatus as claimed in claim 5 or 6, in which the smoothing filter comprises an image-blurring filter followed by an image-sharpening filter.
  8. An apparatus as claimed in any one of claims 5 to 7, in which the second means is arranged to apply a combination of the smoothing filter and the edge-preserving filter weighted according to the edge strength for edge strengths between the first and third edge strengths.
  9. An apparatus as claimed in any one of the preceding claims, in which the region contains the pixel to be processed.
  10. An apparatus as claimed in any one of the preceding claims, in which the first means is arranged to determine the edge strength as a function of variations in pixel values in a plurality of pixel blocks in the neighbourhood of the pixel to be processed.
  11. An apparatus as claimed in claim 10, in which the blocks comprise sets of contiguous pixels in a single pixel row.
  12. An apparatus as claimed in claim 10 or 11, in which the first means is arranged to determine the edge strength as a function of a maximum of the variations.
  13. An apparatus as claimed in any one of the preceding claims, in which the first and second means are arranged to repeat their operations for each of a plurality of pixels of the image.
  14. An apparatus as claimed in claim 13, in which the plurality of pixels comprises all image pixels excluding those pixels in a border region of the image.
  15. An apparatus as claimed in claim 13 or 14, in which the first and second means are arranged to repeat their operations for each image of an image sequence.
  16. An apparatus as claimed in any one of the preceding claims, in which the edge-preserving filter comprises an epsilon filter.
  17. A display including an apparatus as claimed in any one of the preceding claims.
  18. A method of processing a video image, comprising determining an edge strength in a region of the image associated with an image pixel to be processed, applying an edge-preserving filter at the pixel in response to a first edge strength and substantially not applying the edge-preserving filter at the pixel in response to a second edge strength greater than the first edge strength.
  19. A method of processing a video image, comprising determining an edge strength in a region of the image associated with an image pixel to be processed, applying an edge-preserving filter and substantially not applying a smoothing filter at the pixel in response to a first edge strength, and applying the smoothing filter and substantially not applying the edge-preserving filter at the pixel in response to a third edge strength.
  20. A program for programming a computer to perform a method as claimed in claim 18 or 19.
  21. A computer-readable medium containing a program as claimed in claim 20.
  22. A computer programmed by a program as claimed in claim 20.
GB0903594A 2009-03-03 2009-03-03 Video image processing method that smooths the jagged diagonal edges created by epsilon filtering Withdrawn GB2468304A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0903594A GB2468304A (en) 2009-03-03 2009-03-03 Video image processing method that smooths the jagged diagonal edges created by epsilon filtering
PCT/JP2010/053925 WO2010101292A1 (en) 2009-03-03 2010-03-03 Method of and apparatus for processing a video image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0903594A GB2468304A (en) 2009-03-03 2009-03-03 Video image processing method that smooths the jagged diagonal edges created by epsilon filtering

Publications (2)

Publication Number Publication Date
GB0903594D0 GB0903594D0 (en) 2009-04-08
GB2468304A true GB2468304A (en) 2010-09-08

Family

ID=40566037

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0903594A Withdrawn GB2468304A (en) 2009-03-03 2009-03-03 Video image processing method that smooths the jagged diagonal edges created by epsilon filtering

Country Status (2)

Country Link
GB (1) GB2468304A (en)
WO (1) WO2010101292A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103986922B (en) * 2013-02-07 2016-03-02 聚晶半导体股份有限公司 Image processing method
KR102592605B1 (en) * 2018-12-06 2023-10-24 삼성전자주식회사 Image signal processor, operation method of image signal processor, and electronic device including image signal processor

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5771318A (en) * 1996-06-27 1998-06-23 Siemens Corporate Research, Inc. Adaptive edge-preserving smoothing filter
EP1404120A1 (en) * 2001-06-20 2004-03-31 Sony Corporation Image processing method and device
US20050117807A1 (en) * 2003-12-01 2005-06-02 School Foundation Of Nippon Institute Of Technology Method and apparatus for reduction mosquito noise in decoded images
US20070076972A1 (en) * 2005-09-30 2007-04-05 Yi-Jen Chiu System and method of spatio-temporal edge-preserved filtering techniques to reduce ringing and mosquito noise of digital pictures
US20070110329A1 (en) * 2005-08-18 2007-05-17 Hoshin Moon Data processing apparatus, data processing method, and program
US20080152017A1 (en) * 2006-12-21 2008-06-26 Canon Kabushiki Kaisha Mpeg noise reduction
US20090245679A1 (en) * 2008-03-27 2009-10-01 Kazuyasu Ohwaki Image processing apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3485454B2 (en) * 1996-12-18 2004-01-13 富士通株式会社 Image gradation conversion device, image gradation changing method, medium recording program for executing the method, and infrared camera
US7324120B2 (en) * 2002-07-01 2008-01-29 Xerox Corporation Segmentation method and system for scanned documents
JP4259410B2 (en) * 2004-07-14 2009-04-30 パナソニック株式会社 Image processing device
JP4454424B2 (en) * 2004-07-26 2010-04-21 シャープ株式会社 Image quality correction apparatus and imaging apparatus


Also Published As

Publication number Publication date
WO2010101292A1 (en) 2010-09-10
GB0903594D0 (en) 2009-04-08

Similar Documents

Publication Publication Date Title
US6600517B1 (en) System and method for improving the sharpness of a video image
US7373013B2 (en) Directional video filters for locally adaptive spatial noise reduction
EP1698164B1 (en) Directional video filters for locally adaptive spatial noise reduction
US7095903B2 (en) Method and apparatus for visual lossless image syntactic encoding
US8155468B2 (en) Image processing method and apparatus
JP5514344B2 (en) Video processing device, video processing method, television receiver, program, and recording medium
US7400779B2 (en) Enhancing the quality of decoded quantized images
KR101009999B1 (en) Contour correcting method, image processing device and display device
US8750390B2 (en) Filtering and dithering as pre-processing before encoding
US8218082B2 (en) Content adaptive noise reduction filtering for image signals
KR20120018124A (en) Automatic adjustments for video post-processor based on estimated quality of internet video content
US7511769B2 (en) Interframe noise reduction for video
US8086067B2 (en) Noise cancellation
US8503814B2 (en) Method and apparatus for spectrum estimation
US20080123979A1 (en) Method and system for digital image contour removal (dcr)
US20150085943A1 (en) Video processing device, video processing method, television receiver, program, and recording medium
US20080122985A1 (en) System and method for processing videos and images to a determined quality level
US10922792B2 (en) Image adjustment method and associated image processing circuit
EP1506525B1 (en) System for and method of sharpness enhancement for coded digital video
JP4762352B1 (en) Image processing apparatus and image processing method
EP1933556A2 (en) TV user interface and processing for personal video players
KR20030005219A (en) Apparatus and method for providing a usefulness metric based on coding information for video enhancement
GB2468304A (en) Video image processing method that smooths the jagged diagonal edges created by epsilon filtering
Cho et al. Color transient improvement with transient detection and variable length nonlinear filtering
Lachine et al. Content adaptive enhancement of video images

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)