WO2013162441A1 - Deblocking filtering control - Google Patents

Deblocking filtering control

Info

Publication number
WO2013162441A1
WO2013162441A1 PCT/SE2013/050237
Authority
WO
WIPO (PCT)
Prior art keywords
parameter value
pixels
deblocking filtering
block boundary
beta
Prior art date
Application number
PCT/SE2013/050237
Other languages
French (fr)
Inventor
Andrey Norkin
Rickard Sjöberg
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Publication of WO2013162441A1 publication Critical patent/WO2013162441A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/14Coding unit complexity, e.g. amount of activity or edge presence estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the present embodiments generally relate to deblocking filtering and in particular to controlling deblocking filtering over a boundary between neighboring blocks of pixels in a picture.
  • Deblocking filters are used in video coding standards in order to combat blocking artifacts.
  • the blocking artifacts arise because the original video is split into blocks which are processed relatively independently.
  • the blocking artifacts can arise due to different intra prediction of blocks, quantization effects and motion compensation. Two particular variants of deblocking are described below.
  • In state-of-the-art video coding such as H.264 there is an adaptive deblocking filter/loop filter after prediction and residual reconstruction, but before storage of the reconstruction for later reference when encoding or decoding subsequent frames.
  • the deblocking filtering consists of several steps such as filter decisions, filtering operations, a clipping function and changes of pixel values. The decision to filter the border or not is made based on evaluating several conditions. Filter decisions depend on macro block (MB) type, motion vector (MV) difference between neighboring blocks, whether neighboring blocks have coded residuals and on the local structure of the current and/or neighboring blocks.
  • MB macro block
  • MV motion vector
  • the amount of filtering for a pixel depends on the position of that pixel relative to the block boundary and on the quantization parameter (QP) value used for residual coding.
  • QP quantization parameter
  • the filter decision is based on comparing three pixel differences with three thresholds.
  • the thresholds are adapted to the QP. If the following conditions are fulfilled, the filtering is done: abs(d-e) < thr1,
  • the filtering can be described with a delta value by which the filtering changes the current pixel value.
  • d' is here the pixel value at position d after filtering and e' is the pixel value after filtering at position e. More filtering is allowed for high QP than for low QP.
  • delta_clipped = max( -thr3, min( thr3, delta ) ), where thr3 is controlling the filter strength.
  • a larger value of thr3 means that the filtering is stronger, which in turn means that a stronger low-pass filtering effect will happen.
  • the filter strength can be increased if any of the following two conditions also holds: abs(b-d) < thr2 and abs(e-g) < thr2
  • the filter strength is adapted by clipping the delta less, e.g. to allow for more variation.
  • the second filtering mode (strong filtering) is applied for intra macroblock boundaries only, when the following condition is fulfilled: abs(d-e) < thr1/4.
  • the thresholds thr1, thr2 and thr3 are derived from table lookup using QP as index. Each slice can contain modifications of thr2 and thr3 using slice_beta_offset_div2 and of thr1 using slice_alpha_c0_offset_div2.
  • the slice parameters 2×slice_beta_offset_div2 and 2×slice_alpha_c0_offset_div2 are added to the current QP index before table lookup of thr2/thr3 and thr1, respectively.
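  • The weak filtering and clipping described above can be sketched as follows (a minimal sketch; the helper names and the sample values are illustrative, not taken from the standard, and thr3 stands in for the QP-derived clipping threshold):

```python
# Sketch of the delta computation and clipping described above, assuming
# pixels b, c, d, e, f, g across the boundary with d and e closest to it.

def clip_delta(delta, thr3):
    """delta_clipped = max(-thr3, min(thr3, delta))."""
    return max(-thr3, min(thr3, delta))

def weak_filter_pair(c, d, e, f, thr3):
    """Filter the boundary pixels d and e; returns (d', e')."""
    # H.264-style weak-filter delta (illustrative form)
    delta = ((e - d) * 4 + (c - f) + 4) >> 3
    delta = clip_delta(delta, thr3)
    # d' = d + delta and e' = e - delta, as in the description above
    return d + delta, e - delta
```

A larger thr3 permits a larger delta and hence a stronger low-pass effect, matching the description above.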
  • p0 to p3 and q0 to q3 represent pixel values across a vertical block boundary.
  • the deblocking filter works differently than H.264.
  • the filtering is performed if at least one of the blocks on the sides of the border is intra, or has non-zero coefficients, or the difference between the motion vector components of the blocks is greater than or equal to one integer pixel. For example, if one is filtering the border between two side-by-side blocks A and B, then the following condition should be satisfied for the block boundary to be filtered.
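  • The boundary-eligibility condition above can be sketched as follows (Block is a hypothetical record type; motion vectors are assumed to be in quarter-pel units, so one integer pixel corresponds to 4 units):

```python
from dataclasses import dataclass

@dataclass
class Block:
    is_intra: bool
    has_nonzero_coeffs: bool
    mv: tuple  # motion vector (x, y) in quarter-pel units

def boundary_eligible(a: Block, b: Block) -> bool:
    """True if the boundary between blocks a and b should be considered."""
    if a.is_intra or b.is_intra:
        return True                      # at least one block is intra coded
    if a.has_nonzero_coeffs or b.has_nonzero_coeffs:
        return True                      # non-zero residual coefficients
    # motion vector components differ by >= one integer pixel (4 quarter-pels)
    return any(abs(ca - cb) >= 4 for ca, cb in zip(a.mv, b.mv))
```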
  • the two filtering modes (weak and strong filtering) in the HEVC draft are as follows:
  • Weak filtering is performed based on the above conditions.
  • the actual filtering works by computing an offset ( Δ ) for each of the lines i that the weak filter has been selected for.
  • the following weak filtering procedure is applied for every line, where it has been chosen.
  • Δq = Clip3( -( tc >> 1 ), tc >> 1, ( ( ( q2 + q0 + 1 ) >> 1 ) - q1 - Δ ) >> 1 )
  • p0' = Clip3( p0 - 2×tc, p0 + 2×tc, ( p2 + 2×p1 + 2×p0 + 2×q0 + q1 + 4 ) >> 3 )
  • p1' = Clip3( p1 - 2×tc, p1 + 2×tc, ( p2 + p1 + p0 + q0 + 2 ) >> 2 )
  • p2' = Clip3( p2 - 2×tc, p2 + 2×tc, ( 2×p3 + 3×p2 + p1 + p0 + q0 + 4 ) >> 3 )
  • q0' = Clip3( q0 - 2×tc, q0 + 2×tc, ( p1 + 2×p0 + 2×q0 + 2×q1 + q2 + 4 ) >> 3 )
  • q1' = Clip3( q1 - 2×tc, q1 + 2×tc, ( p0 + q0 + q1 + q2 + 2 ) >> 2 )
  • q2' = Clip3( q2 - 2×tc, q2 + 2×tc, ( p0 + q0 + q1 + 3×q2 + 2×q3 + 4 ) >> 3 )
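  • A direct transcription of the strong filtering into code, for one line of pixels across the boundary, could look as follows (a sketch; the p0' term follows the same HEVC draft scheme as the other equations, and Clip3(lo, hi, x) clamps x to [lo, hi]):

```python
def clip3(lo, hi, x):
    """Clip3 as used in the equations above."""
    return max(lo, min(hi, x))

def strong_filter_line(p, q, tc):
    """p = [p0, p1, p2, p3], q = [q0, q1, q2, q3] for one pixel line.

    Returns the filtered (p, q) lists; three pixels per side are modified,
    p3 and q3 are left unchanged.
    """
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    np0 = clip3(p0 - 2*tc, p0 + 2*tc, (p2 + 2*p1 + 2*p0 + 2*q0 + q1 + 4) >> 3)
    np1 = clip3(p1 - 2*tc, p1 + 2*tc, (p2 + p1 + p0 + q0 + 2) >> 2)
    np2 = clip3(p2 - 2*tc, p2 + 2*tc, (2*p3 + 3*p2 + p1 + p0 + q0 + 4) >> 3)
    nq0 = clip3(q0 - 2*tc, q0 + 2*tc, (p1 + 2*p0 + 2*q0 + 2*q1 + q2 + 4) >> 3)
    nq1 = clip3(q1 - 2*tc, q1 + 2*tc, (p0 + q0 + q1 + q2 + 2) >> 2)
    nq2 = clip3(q2 - 2*tc, q2 + 2*tc, (p0 + q0 + q1 + 3*q2 + 2*q3 + 4) >> 3)
    return [np0, np1, np2, p3], [nq0, nq1, nq2, q3]
```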
  • the parameters beta_offset_div2 and pps_beta_offset_div2 are used in order to adjust the amount of deblocking filtering as in the following.
  • the number of samples from the block boundary modified by deblocking filtering depends on equations (4), (5), (7), (8).
  • equations (4), (5), (7), (8) use a comparison with the parameter β, divided by some factor, such as ( β >> 2 ) in (4), ( β >> 3 ) in (5) and ( ( β + ( β >> 1 ) ) >> 3 ) in (7) and (8), where the parameter β depends on the QP and is normally derived from a look-up table, such as Table 1.
  • an offset to the parameter β can be signaled, for instance, in the slice header or in a parameter set, such as Adaptation Parameter Set (APS) or Picture Parameter Set (PPS), as beta_offset_div2.
  • APS Adaptation Parameter Set
  • PPS Picture Parameter Set
  • significant changes of the parameter β are required in order to change the threshold values ( β >> 2 ), ( β >> 3 ), ( ( β + ( β >> 1 ) ) >> 3 ).
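  • This coupling can be illustrated as follows: every threshold derives from the single parameter β, so changing β (for example via beta_offset_div2) moves all of them at once (function and key names are illustrative):

```python
def beta_thresholds(beta):
    """All deblocking thresholds in the draft derive from the single beta."""
    return {
        "filter_on_off": beta,                       # eq. (3): d < beta
        "strong_weak":   beta >> 2,                  # eq. (4): beta / 4
        "line_check":    beta >> 3,                  # eq. (5): beta / 8
        "filter_length": (beta + (beta >> 1)) >> 3,  # eqs. (7), (8): 3*beta/16
    }

# Doubling beta to influence the mode/length thresholds also doubles the
# on/off threshold, so many more boundaries get filtered at all.
```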
  • a general objective is to provide an efficient deblocking filtering control.
  • a particular objective is to enable modifying threshold values used for determining deblocking filtering mode and/or length separately from modifications of a parameter that determines which parts of block boundaries are modified by deblocking filtering.
  • An aspect of the embodiments relates to a deblocking filtering control method performed in connection with video decoding.
  • the method comprises retrieving, based on encoded video data, a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value.
  • the method also comprises determining, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data, and determining, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • a related aspect of the embodiments defines a filtering control device comprising a determining unit configured to determine a beta parameter value from a first syntax element retrieved based on encoded video data and a length offset parameter value from a second syntax element retrieved based on the encoded video data.
  • the filtering control device also comprises a processing unit configured to i) determine, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data, and ii) determine, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • Another related aspect of the embodiments defines a decoder configured to decode encoded video data of a video sequence.
  • the decoder comprises a filtering control device according to above.
  • a further related aspect of the embodiments defines a user equipment comprising a decoder according to above.
  • Yet another related aspect of the embodiments defines a computer program. The computer program comprises code means which, when run on a computer, causes the computer to retrieve, based on encoded video data, a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value.
  • the code means also causes the computer to determine, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data.
  • the code means further causes the computer to determine, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • a further related aspect of the embodiments defines a computer program product comprising computer readable code means and a computer program according to above stored on the computer readable code means.
  • the method comprises determining a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence.
  • the method also comprises determining a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • a first syntax element representing the beta parameter value and a second syntax element representing the length offset parameter value are associated to an encoded representation of the picture.
  • a related aspect of the embodiments defines a filtering control device comprising a beta parameter determining unit configured to determine a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence.
  • a length offset determining unit is configured to determine a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • the filtering control device also comprises an associating unit configured to associate a first syntax element representing the beta parameter value and a second syntax element representing the length offset parameter value to an encoded representation of the picture.
  • Another related aspect of the embodiments defines an encoder configured to encode video data of a video sequence and comprising a filtering control device according to above.
  • a further aspect of the embodiments defines a user equipment comprising an encoder according to above.
  • Still another aspect of the embodiments defines a network device being or belonging to a network node in a communication network.
  • the network device comprises an encoder and/or a decoder according to above.
  • the present embodiments enable the flexibility of applying stronger/weaker filtering, and thereby modifying more/fewer pixels from the block boundaries, on the block boundaries chosen for deblocking filtering, without applying deblocking to most of the boundaries in the picture. Hence, the embodiments enable this adaptation of deblocking filtering mode/length while avoiding filtering on most of the block boundaries.
  • Fig. 1A schematically illustrates a method performed in a filtering control device according to an embodiment
  • Fig. 1B schematically illustrates a method performed in a transmitter according to an embodiment
  • Fig. 1C schematically illustrates a method performed in a receiver according to an embodiment
  • Fig. 2 is a schematic block diagram of an encoder according to an embodiment
  • Fig. 3 is a schematic block diagram of a decoder according to an embodiment
  • Fig. 4 is a schematic block diagram of a user equipment according to an embodiment
  • Fig. 5 is a schematic block diagram of a user equipment according to another embodiment
  • Fig. 6 is a schematic block diagram of a network device according to an embodiment
  • Fig. 7 is a schematic block diagram of a filtering control device according to an embodiment
  • Fig. 8 is a schematic block diagram of a computer according to an embodiment
  • Fig. 11 is a flow diagram illustrating a deblocking filtering control method according to an embodiment
  • Fig. 12 is a flow diagram illustrating an embodiment of the step of determining whether to apply deblocking filtering in Fig. 11;
  • Fig. 13 is a flow diagram illustrating an embodiment of the step of determining whether to apply weak or strong deblocking filtering and/or how many pixels to filter in Fig. 11 ;
  • Fig. 14 is a flow diagram illustrating an embodiment of the step of determining whether to apply weak or strong deblocking filtering in Fig. 13;
  • Fig. 15 is a flow diagram illustrating an embodiment of the step of determining how many pixels to filter in Fig. 13;
  • Fig. 16 is a schematic block diagram of a processing unit in Fig. 7 according to an embodiment
  • Fig. 17 is a flow diagram illustrating a deblocking filtering control method according to another embodiment
  • Fig. 18 is a flow diagram illustrating the step of associating the first and second syntax element in Fig. 17 according to an embodiment
  • Fig. 19 is a schematic block diagram of a filtering control device according to another embodiment
  • Fig. 20 schematically illustrates a video sequence of pictures
  • Fig. 21 schematically illustrates a data packet carrying encoded video data
  • Fig. 22 schematically illustrates an encoded representation of a picture.
  • the present embodiments generally relate to deblocking filtering and in particular to controlling deblocking filtering over a boundary of neighboring blocks of pixels in a picture.
  • the embodiments are based on the insight that prior art techniques use a single parameter beta ( β ) to determine whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture, see equation (3), to determine whether to apply weak or strong filtering, see equations (4) and (5), and to determine how many pixels to filter on each side of the block boundary, see equations (7) and (8).
  • beta
  • equations (4), (5), (7) and (8) use thresholds where the parameter ⁇ is divided by some number, e.g. four in equation (4), eight in equation (5) and 16/3 in equations (7), (8).
  • increasing the value of ⁇ will affect the filter decision in equation (3) determining whether to filter a block boundary or not.
  • the relevant threshold is simply the ⁇ value, i.e. not divided by any number. This means that an increase in the ⁇ value as required in order to affect the decision according to any of equations (4), (5), (7) and (8) will have a significant impact on the threshold used in equation (3).
  • the decision in equation (3) will be true for most block boundaries in a picture and filtering will therefore be applied on most block boundaries. This may lead to excessive blurriness.
  • if deblocking filtering is unintentionally applied to some block boundaries, since the decision in equation (3) will be true for most block boundaries, then structures present in the blocks of pixels could be removed or at least significantly suppressed. For instance, a clear edge between two pixel areas could be present close to the block boundary. It is then generally not preferred to apply deblocking filtering, since such deblocking filtering could remove or blur the clear edge, leading to visual artifacts. It is of course also computationally wasteful to apply deblocking filtering on most block boundaries if the deblocking filtering does not lead to any significant quality improvement or might even cause a deterioration in visual quality.
  • Fig. 11 is a flow diagram illustrating a deblocking filtering control method performed in connection with, such as during, video decoding. The method comprises retrieving, in step S30 and based on encoded video data, a first syntax element defining a beta ( β ) parameter value and a second syntax element defining a length offset parameter value.
  • Step S31 comprises determining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data.
  • the determination or decision taken in step S31 is performed based at least partly on the beta parameter value.
  • Step S32 comprises determining at least one of i) whether to apply weak deblocking filtering, sometimes also denoted normal deblocking filtering, or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • the determination or decision taken in step S32 is performed based at least partly on the length offset parameter value.
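  • Steps S30 to S32 as a whole can be sketched as follows (a minimal sketch under assumed names; the syntax element container, the metric inputs and the exact way the length offset is combined with beta are illustrative, not prescribed here):

```python
def control_boundary(syntax, d, mode_metric):
    """Sketch of steps S30-S32 for one block boundary."""
    beta = syntax["beta"]                    # S30: first syntax element
    length_offset = syntax["length_offset"]  # S30: second syntax element

    if not d < beta:                         # S31: apply deblocking at all?
        return None                          # boundary left unfiltered

    # S32: the mode/length decision also depends on the length offset, so it
    # can be tuned without changing the outcome of S31 above
    if mode_metric < ((beta + length_offset) >> 2):
        return "strong"
    return "weak"
```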
  • the decision of whether to apply weak or strong deblocking filtering can be regarded as a decision of which out of two deblocking filtering modes to use.
  • Weak deblocking filtering is a deblocking filtering mode that applies a weaker filtering and modification of pixel values as compared to a strong deblocking filtering mode.
  • Strong deblocking filtering is generally preferred if there are no structures in the pixel values when traveling along a pixel line across the block boundary, hence the pixel values are substantially the same on both sides of the block boundary or differ little. If there is any structure in the pixel values, such as an edge between elements, then weak deblocking filtering is generally preferred, since a strong deblocking filtering could remove or at least suppress such a structure.
  • method step S32 could be regarded as comprising determining how many pixels to filter on each side of the block boundary, or expressed alternatively, determining a length of a deblocking filtering. The reason being that in HEVC the strong deblocking filtering mode involves filtering three pixels in a pixel line on each side of the block boundary, whereas the weak deblocking filtering mode involves filtering either one or two pixels in the pixel line on each side of the block boundary.
  • Pixel value as used herein generally relates to a value that a pixel or sample in a block of a slice in a picture has.
  • a pixel value typically represents a color value according to some defined color format, such as RGB or, typically, luminance (luma) and chrominance (chroma).
  • the deblocking filtering as disclosed herein is in particular suitable in connection with filtering luma values.
  • the block boundary is a block boundary or block border between two neighboring or adjacent blocks of pixels in a slice 3 of a picture 2 in a video sequence 1 , see Fig. 20.
  • the picture 2 could comprise a single slice 3 or multiple, i.e. at least two, slices 3.
  • the block boundary could be a vertical block boundary as shown below for two neighboring blocks A and B positioned side by side in the picture 2.
  • the block boundary is a horizontal block boundary as shown below for two neighboring blocks A and B, where block A is positioned on top of block B in the picture 2.
  • Deblocking filtering is applied to a line of pixels, denoted pixel line herein, i.e. to a row of pixels for a vertical block boundary or to a column of pixels for a horizontal block boundary.
  • the deblocking filtering control method as disclosed in Fig. 11 hence adds a new parameter, i.e. the length offset parameter value, which is used in order to determine whether to apply weak or strong deblocking filtering and/or how many pixels to filter.
  • the decision of which deblocking filtering mode to use and/or the length of the deblocking filtering can be made independent of changing the value of the beta parameter used in step S31 to determine whether or not to apply deblocking filtering on the block boundary.
  • the embodiments thereby enable, for instance, going from applying weak deblocking filtering to strong deblocking filtering for a pixel line on a block boundary and/or going from filtering and modifying a single pixel on each side of the block boundary in a pixel line to filtering and modifying two pixels on each side of the block boundary in the pixel line without affecting the beta parameter value and thereby without affecting the number of block boundaries on which deblocking filtering is applied in a picture.
  • the first syntax element retrieved in step S30 and defining the beta parameter value could be any syntax element in the encoded video data, i.e. bitstream, or associated with encoded video data that enables determination of the beta parameter value.
  • the first syntax element comprises the quantization parameter (QP) value used for residual coding.
  • QP quantization parameter
  • a base QP parameter could be set at the picture level with a first delta QP parameter that can further change the base QP parameter value on the slice level.
  • two neighboring blocks of pixels in a slice in a picture can have different QP parameter values.
  • an average QP value of the two neighboring blocks is typically used to derive the beta parameter value from a look-up table, such as Table 1.
  • the first syntax element could then comprise the syntax elements defining the base QP parameter, the first delta QP parameter and the second delta QP parameter.
  • the syntax element defining the first delta QP parameter is typically signaled in a slice header in an encoded representation of a slice and with the optional second delta QP parameter signaled on coding unit (block of pixels) basis.
  • the syntax element defining the base QP parameter may also be retrieved from the slice header but is typically included in another header or data element or structure in or associated with an encoded representation of a picture. Examples of the latter include various parameter sets, such as Picture Parameter Set (PPS), Sequence Parameter Set (SPS), Video Parameter Set (VPS) and Adaptation Parameter Set (APS), and preferably PPS.
  • PPS Picture Parameter Set
  • SPS Sequence Parameter Set
  • VPS Video Parameter Set
  • APS Adaptation Parameter Set
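  • The QP-based derivation of the beta parameter value described above can be sketched as follows (BETA_TABLE is an illustrative stand-in for Table 1, and the function names and offset handling are assumptions):

```python
# Stand-in for Table 1: beta indexed by QP (illustrative values only)
BETA_TABLE = list(range(0, 128, 2))

def block_qp(base_qp, slice_delta_qp, block_delta_qp=0):
    """Base QP (e.g. from the PPS) refined by slice- and block-level deltas."""
    return base_qp + slice_delta_qp + block_delta_qp

def beta_for_boundary(qp_a, qp_b, beta_offset_div2=0):
    """Beta for the boundary between two blocks with QPs qp_a and qp_b."""
    avg_qp = (qp_a + qp_b + 1) >> 1          # average QP of the two blocks
    # doubled offset added to the QP index before the table lookup
    idx = max(0, min(len(BETA_TABLE) - 1, avg_qp + 2 * beta_offset_div2))
    return BETA_TABLE[idx]
```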
  • the second syntax element defining the length offset parameter value could be retrieved from a slice header in an encoded representation of a slice.
  • alternatively, the syntax element is retrieved from a data element or structure associated with the encoded representation, such as a PPS, SPS, VPS or APS.
  • the slice header preferably comprises a parameter set identifier directly or indirectly identifying the relevant parameter set.
  • a PPS can be identified by a PPS identifier in the slice header
  • an SPS can be identified by an SPS identifier in a PPS, which is identified by a PPS identifier in the slice header
  • a VPS can be identified by a VPS identifier in an SPS identified by an SPS identifier in a PPS, which is identified by a PPS identifier in the slice header.
  • the retrieval of syntax elements in step S30 could, for instance, be performed once per block boundary or once per slice in the picture.
  • the beta parameter value and/or the length offset parameter value could be reused for multiple block boundaries in the slice.
  • Retrieving the syntax elements in step S30 preferably comprises reading and decoding the syntax elements from the relevant data structure, such as slice header or parameter set.
  • the decoded values could be used directly as the beta parameter value and the length offset parameter value.
  • the beta parameter value and/or the length offset parameter value is calculated or otherwise determined based on decoded values. For instance, decoded values of the base QP parameter and the delta QP parameters are used to calculate a QP parameter value, which is used as table input in a look-up table to get the relevant beta parameter value.
  • the value of the length offset parameter is defined based on the size of the block of pixels. This means that the length offset parameter value is then linked to a particular type of block boundary.
  • the second syntax element retrieved in step S30 is a syntax element defining the size of the current block of pixels. The size is then used, for instance, in a look-up table to get the value of the length offset parameter to be used for the current block of pixels.
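  • This block-size-dependent variant can be sketched as follows (the table contents are purely illustrative, chosen only to show the lookup from block size to length offset parameter value):

```python
# Hypothetical mapping from block size to length offset parameter value
LENGTH_OFFSET_BY_SIZE = {4: 0, 8: 2, 16: 4, 32: 6}

def length_offset_for_block(block_size):
    """Look up the length offset to use for the current block of pixels."""
    return LENGTH_OFFSET_BY_SIZE.get(block_size, 0)
```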
  • the following steps S31 and S32 of Fig. 11 are preferably performed for each vertical and horizontal block boundary between two neighboring blocks present in the same slice in the picture.
  • step S31 is preferably performed for the first and fourth pixel line for a block boundary.
  • the determination of how many pixels to filter is preferably performed for each pixel line relative to a block boundary for which weak deblocking filtering has been applied.
  • step S32 involves determining how many pixels to filter based on the length offset value and this determination is made for each pixel line relative to the block boundary.
  • the decision could first be to determine, based at least partly on the length offset parameter value, whether to filter three or filter one/two pixels on each side of a block boundary in a given pixel line. This corresponds to selecting between strong and weak deblocking filtering. If weak deblocking filtering is selected for the first and/or fourth pixel line a further decision is made, based at least partly on the length offset parameter value, whether to filter one or two pixels on each side of the block boundary on the given pixel line. Strong deblocking filtering is generally not applicable to the second and third pixel lines.
  • step S32 could thereby involve determining, at least partly based on the length offset parameter value, whether to filter one or two pixels on each side of the block boundary.
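  • The per-line decision structure described above can be sketched as follows (the threshold forms mirror the ( β >> 2 ) and ( ( β + ( β >> 1 ) ) >> 3 ) comparisons discussed earlier, here applied to a length-offset-adjusted value; the metric names are illustrative):

```python
def pixels_to_filter(line_metric, side_metric, beta_length):
    """How many pixels to modify on each side of the boundary for one line."""
    if line_metric < (beta_length >> 2):     # strong/weak decision
        return 3                             # strong filtering: 3 pixels/side
    if side_metric < (beta_length + (beta_length >> 1)) >> 3:
        return 2                             # weak filtering, 2 pixels/side
    return 1                                 # weak filtering, 1 pixel/side
```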
  • Step S31 comprises determining whether or not to apply deblocking filtering on the block boundary based at least partly on the beta parameter value but preferably not based on the length offset parameter value.
  • the length offset parameter value is used in the decision or determination performed in step S32 but not in the decision or determination in step S31.
  • a particular embodiment of step S31 is illustrated in the flow diagram of Fig. 12. The method continues from step S30 in Fig. 11.
  • a next step S40 comprises calculating a variable d based on pixel values of pixels in a first block of pixels (block A above) and in a second, neighboring block of pixels (block B above).
  • in equation (9) and further herein, pA represents a pixel value of a pixel in pixel line number i in a first or current block of pixels (block A) at pixel position number A relative to the block boundary, and qA represents a pixel value of a pixel in pixel line number i in a second block of pixels (block B) at pixel position number A relative to the block boundary.
  • Equation (9) above basically corresponds to a combination of equation (2) and some of the equations referred to as (1).
  • a next step S41 comprises comparing the variable d with a threshold value, which in an embodiment corresponds to the beta parameter value, represented by β in Fig. 12. If the variable d is smaller than the beta parameter value, the method continues to step S42, which comprises determining to apply deblocking filtering on the block boundary. The method then continues to step S32 of Fig. 11. If the variable d instead is not smaller than the beta parameter value, the method continues from step S41 to step S43. This step S43 comprises determining not to apply deblocking filtering on the block boundary. In such a case, the method ends.
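  • Steps S40 to S43 can be sketched as follows (a sketch assuming the HEVC-style second-difference form for equation (9), computed on the first and fourth pixel lines; p[i][k] and q[i][k] denote the pixel at line i, position k from the boundary):

```python
def second_diff(row):
    """|p2 - 2*p1 + p0| for one pixel line (row[k] = pixel k from boundary)."""
    return abs(row[2] - 2 * row[1] + row[0])

def apply_deblocking(p, q, beta):
    """Steps S40-S43: compute d from lines 0 and 3 and compare it with beta."""
    # step S40: local-activity variable d from both sides of the boundary
    d = sum(second_diff(p[i]) + second_diff(q[i]) for i in (0, 3))
    # step S41: True -> step S42 (apply filtering), False -> step S43 (skip)
    return d < beta
```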
  • the method as shown in Fig. 12 involving steps S40 to S43 is preferably performed for each (vertical and horizontal) block boundary between neighboring blocks of pixels present in a same slice in a picture of a video sequence.
  • the decision of whether to apply deblocking filtering or not is preferably performed once as shown in Fig. 12 for each such block boundary.
  • step S32 of Fig. 1 1 comprises determining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value and the beta parameter value.
  • both the length offset parameter value and the beta parameter value are used in the decision on the deblocking filtering mode and/or the decision on the deblocking filtering length. This should be compared to step S31, in which, as discussed in the foregoing, the decision is made based on the beta parameter value but preferably not based on the length offset parameter value.
  • Fig. 13 is a flow diagram illustrating an embodiment of step S32 using both the beta parameter value and the length offset parameter value.
  • the method continues from step S31 in Fig. 11 and continues to step S50.
  • Step S50 comprises calculating a beta length parameter value based on the beta parameter value and the length offset parameter value.
  • BetaLength represents the beta length parameter value
  • Beta represents the beta parameter value
  • LengthOffset represents the length offset parameter value
  • Base represents a base parameter value.
  • a next step S51 comprises determining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the beta length parameter value calculated in step S50.
  • the base parameter value used in equation (10) is optional. Hence, in an embodiment the base parameter value is zero. In such a case, the beta length parameter is defined as Beta × LengthOffset.
  • the base parameter has a fixed value. The fixed value could advantageously be represented as a power of two, for example 2, 4, 8, etc. In such a case, the fixed value could be known to both the encoder and the decoder. No signaling of the base parameter value is thereby required.
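  • The calculation of step S50 and the derivation of the length-controlling thresholds can be sketched as below. The function names are illustrative; the parameter base_log2 encodes the assumption Base = 2^X, so that the extra division by Base reduces to an extra right shift:

```python
def beta_length(beta, length_offset, base_log2=2):
    """Equation (10): BetaLength = Beta * (Base + LengthOffset), Base = 2**base_log2."""
    base = 1 << base_log2
    return beta * (base + length_offset)

def strong_filter_thresholds(beta, length_offset, base_log2=2):
    """Thresholds of equations (4), (5), (7), (8) with BetaLength substituted.

    The additional shift by base_log2 divides by Base again, so with
    LengthOffset == 0 these reduce to the ordinary beta-based thresholds.
    """
    bl = beta_length(beta, length_offset, base_log2)
    t1 = bl >> (2 + base_log2)                 # replaces beta >> 2 in equation (4)
    t2 = bl >> (3 + base_log2)                 # replaces beta >> 3 in equation (5)
    ts = (bl + (bl >> 1)) >> (3 + base_log2)   # replaces (beta + (beta >> 1)) >> 3
    return t1, t2, ts
```

For example, with beta = 32 and LengthOffset = 0 the thresholds are (8, 4, 6), identical to β»2, β»3 and (β + (β»1))»3, while LengthOffset = 2 raises them to (12, 6, 9) without touching the on/off decision.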
  • step S30 of Fig. 11 preferably also comprises retrieving, based on the encoded video data, a third syntax element defining the base parameter value in addition to retrieving the previously mentioned first and second syntax elements.
  • step S32 comprises determining whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary based at least partly on the length offset parameter value, preferably at least partly based on the length offset parameter value and the beta parameter value, such as based on the beta length parameter value.
  • Fig. 14 is a flow diagram illustrating such a decision on the deblocking filtering mode performed based on the beta length parameter value.
  • the method continues, in an embodiment from step S50 in Fig. 13.
  • Step S61 comprises calculating a second threshold, T2, based on the beta length parameter value, preferably defined as BetaLength » 5.
  • the first threshold T1 could be regarded as corresponding to the threshold used in equation (4), i.e. β » 2.
  • the embodiment uses the beta length parameter value (BetaLength).
  • the base parameter value is defined as 2^X.
  • the second threshold T2 could be regarded as corresponding to the threshold used in equation (5), i.e. β » 3.
  • the embodiment uses the beta length parameter value (BetaLength).
  • Step S62 comprises calculating a first variable, V1, defined as 2×( | p2i − 2×p1i + p0i | + | q2i − 2×q1i + q0i | ).
  • the first and second variables correspond to the values used in equations (4) and (5).
  • Steps S60 to S63 can be performed serially in any order or at least partly in parallel.
  • the method then continues to step S64, which compares the first variable to the first threshold value, compares the second variable to the second threshold value and preferably also compares the value | p0i − q0i | to a threshold defined based on the parameter tc. If all comparisons are fulfilled, strong deblocking filtering is determined to be applied for the pixel line number i; otherwise the method continues to step S66.
  • step S66 comprises determining to apply weak deblocking filtering for the pixel line number i.
  • the decision whether to apply strong or weak deblocking filtering as shown in Fig. 14 is preferably a pixel line specific decision.
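  • The per-line strong/weak choice of Fig. 14 can be sketched as follows. Names are illustrative; the comparison of | p0 − q0 | is assumed to use the HEVC-style threshold ( 5×tc + 1 ) » 1, which the document only identifies as "defined based on the parameter tc":

```python
def use_strong_filter(p_i, q_i, t1, t2, tc):
    """Per-line choice between strong and weak filtering (Fig. 14 sketch).

    p_i and q_i are one pixel line on each side of the boundary,
    index 0 nearest the boundary. t1 and t2 are the BetaLength-derived
    thresholds (steps S60 and S61).
    """
    v1 = 2 * (abs(p_i[2] - 2 * p_i[1] + p_i[0]) +
              abs(q_i[2] - 2 * q_i[1] + q_i[0]))       # first variable, step S62
    v2 = abs(p_i[3] - p_i[0]) + abs(q_i[0] - q_i[3])   # second variable, step S63
    edge = abs(p_i[0] - q_i[0])                        # step across the boundary
    # Strong filtering only when the line is smooth on both sides and the
    # step over the boundary is small enough (assumed HEVC tc condition).
    return v1 < t1 and v2 < t2 and edge < ((5 * tc + 1) >> 1)
```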
  • p1i' = Clip3( p1i−2×tc, p1i+2×tc, ( p2i + p1i + p0i + q0i + 2 ) » 2 )
  • p2i' = Clip3( p2i−2×tc, p2i+2×tc, ( 2×p3i + 3×p2i + p1i + p0i + q0i + 4 ) » 3 )
  • q0i' = Clip3( q0i−2×tc, q0i+2×tc, ( p1i + 2×p0i + 2×q0i + 2×q1i + q2i + 4 ) » 3 )
  • q1i' = Clip3( q1i−2×tc, q1i+2×tc, ( p0i + q0i + q1i + q2i + 2 ) » 2 )
  • q2i' = Clip3( q2i−2×tc, q2i+2×tc, ( p0i + q0i + q1i + 3×q2i + 2×q3i + 4 ) » 3 )
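  • The strong filtering equations can be written directly as code. The p0i' equation, which belongs to the same strong filter but is not listed above, is included here under the assumption that it follows the HEVC definition:

```python
def clip3(lo, hi, x):
    """Clamp x into [lo, hi], matching the Clip3 operator."""
    return max(lo, min(hi, x))

def strong_filter_line(p, q, tc):
    """Strong filtering of one pixel line (sketch of the equations above).

    p and q each hold four pixels, index 0 nearest the block boundary.
    Three pixels per side are modified, each clipped to within +/- 2*tc
    of its original value.
    """
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    # p0' is assumed to follow the HEVC strong filter definition.
    np0 = clip3(p0 - 2 * tc, p0 + 2 * tc, (p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3)
    np1 = clip3(p1 - 2 * tc, p1 + 2 * tc, (p2 + p1 + p0 + q0 + 2) >> 2)
    np2 = clip3(p2 - 2 * tc, p2 + 2 * tc, (2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3)
    nq0 = clip3(q0 - 2 * tc, q0 + 2 * tc, (p1 + 2 * p0 + 2 * q0 + 2 * q1 + q2 + 4) >> 3)
    nq1 = clip3(q1 - 2 * tc, q1 + 2 * tc, (p0 + q0 + q1 + q2 + 2) >> 2)
    nq2 = clip3(q2 - 2 * tc, q2 + 2 * tc, (p0 + q0 + q1 + 3 * q2 + 2 * q3 + 4) >> 3)
    return [np0, np1, np2, p3], [nq0, nq1, nq2, q3]
```

A perfectly flat line passes through unchanged, since every averaging term then reproduces the input value.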
  • step S32 comprises determining how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value, preferably at least partly based on the length offset parameter value and the beta parameter value, such as based on the beta length parameter value.
  • Fig. 15 is a flow diagram illustrating such a decision on the deblocking filtering length performed based on the beta length parameter value.
  • the method continues, in an embodiment from step S50 in Fig. 13.
  • the decision on the deblocking filtering length is performed only if weak deblocking filtering has been selected or is pre-selected for a current pixel line.
  • the method could continue from step S50 for pixel lines for which weak deblocking filtering should be applied or, if the method as shown in Fig. 14 is used to select between weak and strong deblocking filtering, from step S66 in Fig. 14.
  • Step S70 comprises calculating a side threshold, Ts, based on the beta length parameter value, preferably defined as ( BetaLength + ( BetaLength » 1 ) ) » 5.
  • the side threshold Ts could be regarded as corresponding to the threshold used in equations (7) and (8), i.e. ( β + ( β » 1 ) ) » 3.
  • the embodiment uses the beta length parameter value (BetaLength).
  • the variable dp corresponds to the value used in equation (7) and calculated as defined in equation (1).
  • Steps S70 and S71 can be performed serially in any order or at least partly in parallel.
  • step S72 compares the variable dp to the side threshold. If the variable dp is smaller than the side threshold the method continues to step S73, which determines to filter and modify two pixels in the pixel line number i. If the variable dp, however, is not smaller than the side threshold the method instead continues from step S72 to step S74, which determines to filter and modify one pixel in the pixel line.
  • step S72 preferably also comprises comparing this variable dq to the side threshold. If the variable dq is smaller than the side threshold the method continues to step S73, where it is determined to filter and modify two pixels in the neighboring block of pixels on the pixel line number i. If the variable dq is not smaller than the side threshold the method instead continues to step S74 where one pixel is filtered and modified in the pixel line i in the neighboring block of pixels.
  • Step S73 then filters and modifies the values of the two pixels that are closest to the block boundary in the pixel line number i in the neighboring block of pixels.
  • the method as shown in Fig. 15 may comprise one additional, optional step that is preferably performed prior to step S70.
  • This optional step involves determining whether to apply any weak deblocking filtering at all to the pixel line number i. In a particular embodiment, this decision is based on a comparison of a delta value (Δ) and a threshold defined based on the parameter tc. Thus, weak deblocking filtering is then in this optional embodiment only applied to a pixel line number i if the absolute value of the delta value is smaller than the threshold, such as if |Δ| < 10×tc.
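  • A sketch of the weak-filtering length decision of Fig. 15, including the optional gate on the delta value. The 10×tc gate is an assumption borrowed from HEVC weak filtering; names are illustrative:

```python
def weak_filter_lengths(p_lines, q_lines, ts, delta, tc):
    """Decide how many pixels to modify per side in weak filtering (Fig. 15 sketch).

    dp and dq aggregate the per-side activities of pixel lines 0 and 3 as
    in equation (1); weak filtering is skipped entirely when |delta| is too
    large relative to tc. Returns (pixels on the p side, pixels on the q
    side), where 0 means no filtering at all.
    """
    if abs(delta) >= 10 * tc:  # optional gate performed before step S70 (assumed)
        return 0, 0
    dp = sum(abs(p_lines[i][2] - 2 * p_lines[i][1] + p_lines[i][0]) for i in (0, 3))
    dq = sum(abs(q_lines[i][2] - 2 * q_lines[i][1] + q_lines[i][0]) for i in (0, 3))
    n_p = 2 if dp < ts else 1  # steps S72-S74, p side
    n_q = 2 if dq < ts else 1  # steps S72-S74, q side
    return n_p, n_q
```

Because the side threshold ts is derived from BetaLength, a larger length offset makes the two-pixel branch more likely on each side independently.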
  • step S32 of Fig. 13 comprises determining whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and determining how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value, preferably based at least partly on the length offset parameter value and the beta parameter value, such as based on the beta length parameter value.
  • An implementation example of such an embodiment is basically a combination of Figs. 14 and 15.
  • an idea of the embodiments is to use a single parameter to adjust the "length" of the deblocking filtering in both weak (normal) and strong filtering modes while at the same time being able to adjust the length of the deblocking filtering independently of changing the parameter β that determines which block boundaries are processed by deblocking filtering.
  • Embodiments relate to introduction of a new parameter indicative of a length offset, which is used to determine a beta length.
  • the beta will then be replaced by the beta length in equations (4), (5), (7) and (8).
  • Fig. 1A schematically illustrates such an embodiment performed in a filtering control device.
  • the method generally starts in step S1 where the beta length parameter value is determined as disclosed herein.
  • the beta parameter in equations (4), (5), (7) and (8) is then replaced by this determined beta length parameter in step S2.
  • Fig. 1B illustrates a method performed in a transmitter, which involves, in step S10, sending signaling according to the embodiments, i.e. transmitting the syntax elements defining the beta parameter and the length offset parameter.
  • Fig. 1C illustrates a method performed in a receiver, which involves, in step S20, receiving signaling according to the embodiments, i.e. receiving the syntax elements defining the beta parameter and the length offset parameter.
  • the "length of deblocking filtering" can be adjusted.
  • the length of the deblocking filtering is here referred to as the number of pixels from the block boundary that can be modified by deblocking filtering.
  • the length of deblocking filtering can alternatively be adjusted by the same parameter for both the strong and the weak (normal) deblocking filter.
  • a parameter to control the thresholds is signaled in e.g. the slice header or, alternatively, in the APS header or in other parts of the bitstream, such as in another parameter set, for instance the PPS or SPS.
  • the thresholds that are used in the decisions of how many pixels are modified relative to the block boundary depend on the beta parameter. In order to increase these thresholds, i.e. to modify more pixels relative to the block boundary, the beta parameter value would otherwise have to be increased, which would also affect the decision of whether to apply deblocking filtering at all.
  • the two parts, i.e. the number of pixels to be modified and the signaling, are independent of each other and can be used independently or in combination.
  • the details of the particular implementation are given in the following detailed description and the embodiments.
  • the proposed embodiments allow adjusting the subjective quality of deblocking filtering, for instance, on a sequence and/or slice basis. This gives the content provider the possibility to adjust the deblocking filtering strength or length of deblocking filtering to a particular sequence.
  • the deblocking filtering strength can also vary on the frame or picture basis.
  • BetaLength = Beta × ( Base + LengthOffset ) (10) wherein Beta is the same as β and Base is preferably a fixed value that can be represented as a power of two, for example 2, 4, 8 etc. Base could be signaled in the bitstream or be fixed for the encoder and decoder. The values of the respective thresholds controlling the length of deblocking filtering are then obtained by using the value of BetaLength instead of Beta, see Fig. 1A, and additionally dividing it by the value of Base. In this way, the value of BetaLength is approximately equal to the value of Beta when LengthOffset is equal to 0. The operation required to obtain the thresholds is then basically multiplying the beta parameter value by the base parameter value and dividing by the same value when calculating the thresholds in equations (4), (5), (7) and (8).
  • a Base value which is greater than 1 may be used in order to have finer granularity of threshold values when sending integer values of LengthOffset.
  • the value of Base equal to 2 enables the granularity of the thresholds to be half of the Beta value
  • using Base value of 4 enables a quarter value granularity, etc.
  • the value ( Base + LengthOffset ) can also be clipped in the decoder in order to ensure that the value is in the range (MinValue, MaxValue), where MinValue can be, for example, 0 and MaxValue is a defined maximum value.
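  • The decoder-side clipping can be sketched as below. The cap max_value = 15 is an assumption, since the document only requires some defined MaxValue:

```python
def clipped_multiplier(base, length_offset, min_value=0, max_value=15):
    """Clip (Base + LengthOffset) into a legal range at the decoder.

    Prevents a corrupt or out-of-range LengthOffset from producing a
    negative or oversized BetaLength multiplier. The max_value cap is
    an assumed example, not a value taken from the document.
    """
    return max(min_value, min(max_value, base + length_offset))
```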
  • the value of BetaLength can be used for calculation of all the mentioned thresholds in equations (4), (5), (7) and (8) or for a subset of these thresholds. An example of the latter case could be to use BetaLength, for instance, for the thresholds related to application of the strong filter, i.e. equations (4) and (5). Alternatively, it can be used for the thresholds used in equations (7) and (8), related to choosing between filtering one or two pixels from the block boundary in weak (normal) filtering.
  • the presented threshold can also be used together with some other conditions, for example, for intra blocks only or with boundary strength equal to some particular value.
  • the filtering control method is then limited to be used for block boundaries of intra blocks only or for block boundaries with a boundary strength equal to or larger than the particular value.
  • BetaLength with some particular value can also be linked to conditions like the size of the block of pixels.
  • the particular value of BetaLength is preferably increasing for increasing block sizes. This can be achieved by defining the value of the length offset parameter to be dependent on the size of the block of pixels.
  • the value of LengthOffset can be either signaled in the bitstream or be hardcoded. In the former case, different values of LengthOffset are used for different boundary strength or when certain conditions are met. In the latter case, the second syntax element does not need to be signaled in the bitstream.
  • a deblocking length offset, LengthOffset, is preferably sent in the bitstream, see Figs. 1B and 1C.
  • LengthOffset can be sent in the slice header, the APS or in the other parts of the bitstream.
  • LengthOffset can alternatively be signaled in the SPS or PPS. If LengthOffset is signaled in PPS, the same PPS can be used for a particular sequence picture type. Sending LengthOffset in the SPS provides modifications of the offsets for the video sequence.
  • An alternative embodiment is to signal a value of LengthOffset using unsigned variable length code (VLC) values.
  • a value A representing ( Base + LengthOffset ) can be transmitted in the bitstream, e.g. in the slice header or APS.
  • a positive integer value for ( Base + LengthOffset ) can always be signaled.
  • the BetaLength is then equal to Beta × A instead of Beta × ( Base + LengthOffset ).
  • a value of the length offset parameter with some multiplier or divisor can be used, e.g. length_offset_div2.
  • the resulting threshold values should be multiplied by the respective bit depth scaling factor.
  • the beta value should be multiplied by the respective bit scaling factor.
  • an additional parameter is sent in the bitstream to control whether strong or weak filtering is done as well as controlling how many pixels on each side that should be filtered. This parameter is a complement to the existing parameter that, in the embodiments, is used to decide whether to filter a block boundary or not.
  • the decision whether to filter a block or not is based on the old parameter and the decision whether strong or weak filtering should be used is based on a new parameter.
  • the decision how many pixels to filter on each side of a block boundary in the weak (normal) filter operation is also based on this new parameter.
  • a decoder is, according to this embodiment, configured to perform the following steps.
  • the decoder receives video data and parses syntax elements that control the deblocking filtering process.
  • the syntax elements include one parameter A and one parameter B.
  • the decoder decodes the picture.
  • the decoder performs deblocking filter operations on the decoded picture.
  • the decision whether to use weak or strong filtering is based partly on A.
  • the decisions on how many pixels to filter on each side of a block boundary are based partly on A.
  • the decision whether to filter a block or not is based partly on B.
  • the decision whether to use weak or strong filtering is based partly on A and B, and the decisions on how many pixels to filter on each side of a block boundary are based partly on A and B.
  • the new parameter from the first embodiment is here called A
  • C = B × ( X + A )
  • X is a predetermined, fixed value.
  • the decision whether to use weak or strong filtering is based on C. How many pixels to filter on each side of a block boundary is also based on C.
  • the decoder receives video data and parses syntax elements that control the deblocking filtering process.
  • the syntax elements include one parameter A and one parameter B.
  • the decoder decodes the picture.
  • the decoder performs deblocking filter operations on the decoded picture.
  • the decision whether to use weak or strong filtering is based partly on C.
  • the decisions on how many pixels to filter on each side of a block boundary are based partly on C.
  • the decision whether to filter a block or not is based partly on B.
  • the parameter C from the second embodiment, or the parameter C/X, or C » log2(X), is used instead of β in equations (4), (5), (7) and (8).
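  • A sketch of the second-embodiment combination; the names are illustrative, and X is assumed to be a power of two so that the division C/X reduces to a right shift:

```python
def combined_parameter(a, b, x=4):
    """Second-embodiment combination C = B * (X + A), X a fixed power of two.

    A is the new length-controlling parameter and B the old beta-like
    parameter; C/X is approximately B when A == 0.
    """
    return b * (x + a)

def effective_beta(c, x=4):
    """C >> log2(X), the value used instead of beta in equations (4)-(8)."""
    return c >> (x.bit_length() - 1)  # log2 of a power-of-two X
```

With B = 32 and A = 0 the effective beta is 32 again, while A = 2 raises it to 48 without changing the on/off decision, which remains based on B alone.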
  • the decoder receives video data and parses syntax elements that control the deblocking filtering process.
  • the syntax elements include one parameter A and one parameter B.
  • the decoder decodes the picture.
  • the decoder performs deblocking filter operations on the decoded picture.
  • the syntax code below provides an example of deblocking parameter signaling in the slice header. if( deblocking_filter_control_present_flag ) {
  • the syntax code below provides an example of deblocking parameter signaling in a parameter set, here represented by an APS.
  • m_dfLenghtOffset represents the length offset parameter value and is preferably obtained from the syntax element df_length_offset in the syntax code above.
  • ilndexB is an index used in a look-up table (betatable_8x8) and BitdepthScale is derived from a bit depth as signaled in the PPS or SPS.
  • the previously mentioned side threshold is then preferably calculated as:
  • Equations (4) and (5) become, in this embodiment, equal to (12) and (13) respectively.
  • Equations (4) and (5) in this embodiment are equal to (15) and (16) respectively.
  • the parameters in the form described in one of the previous embodiments can be put into the adaptation parameter set.
  • the parameters are applied to the whole frame or picture rather than to one slice.
  • Fig. 17 is a flow diagram illustrating a deblocking filtering control method performed during video encoding.
  • the method starts in step S80 where a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence is determined.
  • a next step S81 comprises determining a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • a next step S82 associates a first syntax element representing or defining the beta parameter value determined in step S80 and a second syntax element representing or defining the length offset parameter value determined in step S81 to an encoded representation of a picture.
  • the length offset parameter value is determined in step S81 based on pixel values in the two neighboring blocks of pixels. In another embodiment, the length offset parameter is determined in step S81 based on pixel values in a current slice of the picture. Thus, in this embodiment a same length offset parameter value is used for all block boundaries between neighboring blocks of pixels in the slice. Therefore, the length offset parameter value is, in this embodiment, estimated based on, typically, all pixel values adjacent to block boundaries in the slice. For instance, it can be an average of these pixel values that is used to determine the length offset parameter value. For instance, if the pixels in the two blocks represent a rather smooth background or area, i.e. an area with small variations in pixel values, a comparatively large length offset parameter value can be selected so that stronger and longer deblocking filtering is applied.
  • the length offset parameter value is set so that V1 < T1 and V2 < T2 for the first and fourth pixel line in Fig. 14 and dp < Ts (and typically dq < Ts) for the second and third pixel line in Fig. 15.
  • the variables V1, V2, dp and dq are dictated by the pixel values in the blocks of pixels, whereas the value of the length offset parameter affects the values of the thresholds T1, T2 and Ts.
  • the length offset parameter is preferably set to a value so that at least one of the variables V1 and V2 is not smaller than its associated threshold T1 or T2, which are defined at least partly based on the length offset parameter value.
  • the length offset parameter is preferably set to a value so that dp (and dq) is preferably not smaller than Ts for the pixel line(s) which comprise the structure.
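  • An encoder-side sketch of choosing a length offset so that a given pixel line passes (or fails) the strong-filter threshold comparisons. The candidate range and the function name are illustrative assumptions, not taken from the document:

```python
def choose_length_offset(v1, v2, beta, candidates=range(0, 8), base_log2=2):
    """Pick the smallest LengthOffset for which a line with measured
    variables v1 and v2 would satisfy V1 < T1 and V2 < T2.

    Thresholds follow BetaLength = beta * (2**base_log2 + offset) with the
    compensating extra shift. Returns None if no candidate raises the
    thresholds enough, e.g. for a structure that must be preserved.
    """
    for off in candidates:
        bl = beta * ((1 << base_log2) + off)
        t1 = bl >> (2 + base_log2)  # replaces beta >> 2
        t2 = bl >> (3 + base_log2)  # replaces beta >> 3
        if v1 < t1 and v2 < t2:
            return off
    return None
```

For smooth content the search returns a small offset, whereas strongly textured lines exhaust the candidate range, matching the rule that the offset should not force strong filtering over real structures.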
  • the length offset parameter value is determined based on at least one encoding or slice parameter used for encoding the picture. Examples of such parameters based on which the length offset parameter value can be determined include the quantization parameter (QP) or the lambda parameter used in the rate-distortion optimization of the encoded picture.
  • the encoding of a slice in a picture of a video sequence generates an encoded representation 20 of the slice comprising a slice header 21 and slice data 22 as shown in Fig. 22.
  • the encoded representation 20 is output from the encoding process as a so-called Network Adaptation Layer (NAL) unit 11 as shown in Fig. 21.
  • the first part of the NAL unit 11 is a header that contains an indication of the type of data in the NAL unit 11.
  • the remaining part of the NAL unit 11 contains payload data in the form of the slice header 21 and slice data 22.
  • the NAL unit 11 may then be supplemented with headers 12 to form a data packet 10 that can be transmitted as a part of a bitstream from the encoder to the decoder.
  • For instance, Real-time Transport Protocol (RTP), User Datagram Protocol (UDP) and Internet Protocol (IP) headers 12 could be added to the NAL unit 11.
  • This form of packetization of NAL units 11 merely constitutes an example in connection with video transport. Other approaches of handling NAL units 11, such as file format, MPEG-2 transport streams, MPEG-2 program streams, etc. are possible.
  • In an embodiment of step S82 in Fig. 17, applicable to the generation of an encoded representation 20 of a picture comprising at least one slice header 21 and encoded video data represented by the slice data 22 in Fig. 22, the first syntax element and the second syntax element are inserted into a slice header 21 of the at least one slice header 21.
  • the encoded representation of the picture itself carries the syntax elements defining and enabling determination of the beta parameter value and the length offset parameter value.
  • Fig. 18 is a flow diagram illustrating another embodiment of step S82 in Fig. 17. The method continues from step S81 in Fig. 17 and continues to step S90.
  • Step S90 comprises inserting the first syntax element and the second syntax element into a parameter set associated with the video sequence.
  • the syntax elements could be inserted into an APS, PPS, SPS or VPS. It is generally preferred to include the syntax elements in the same parameter set but this is not necessary.
  • the first syntax element could be included in one of an APS, PPS, SPS or VPS with the second syntax element in another of the APS, PPS, SPS or VPS. It is in fact possible to distribute, for instance, the first syntax element, which could comprise multiple syntax element parameters, such as the previously mentioned base QP parameter and delta QP parameters, among multiple parameter sets.
  • a next step S91 comprises inserting a parameter set identifier into a slice header 21 of the at least one slice header 21 in the encoded representation 20 of the picture.
  • This parameter set identifier enables identification of the parameter set into which the first and second syntax elements were inserted in step S90.
  • the parameter set identifier could directly identify the relevant parameter set, such as an APS identifier or PPS identifier.
  • the parameter set identifier identifies a first parameter set, such as PPS, which in turn comprises a second parameter set identifier identifying a second parameter set, such as SPS, which comprises the first or second syntax elements or comprises a third parameter set identifier identifying a third parameter set, such as VPS, which comprises the first or second syntax elements.
  • step S91 optionally comprises inserting multiple parameter set identifiers into the slice header.
  • step S82 may be combined. Hence, it is possible to distribute the first and second parameter sets among a slice header and at least one parameter set.
  • Fig. 7 is a schematic block diagram of a filtering control device 100 according to an embodiment.
  • the filtering control device 100 comprises a determining unit 110, also referred to as determiner, determining means or module.
  • the determining unit 110 is configured to determine a beta parameter value from a first syntax element retrieved based on encoded video data and a length offset parameter value from a second syntax element retrieved based on the encoded video data.
  • the determined beta parameter value and the length offset parameter value are used by a connected processing unit 120, also referred to as a processor or processing means or module.
  • the processing unit 120 is configured to determine whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data.
  • the processing unit 120 performs this determination at least partly based on the beta parameter value from the determining unit 110.
  • the processing unit 120 is also configured to determine at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary, and to perform this determination based at least partly on the length offset parameter value from the determining unit 110.
  • the processing unit 120 is configured to determine whether or not to apply deblocking filtering on the block boundary based at least partly on the beta parameter value but not based on the length offset parameter value.
  • the processing unit 120 is configured to determine at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value and the beta parameter value.
  • Fig. 16 is a schematic block diagram illustrating optional units of the processing unit 120 of the filtering control device 100 in Fig. 7.
  • the processing unit 120 comprises a beta length calculator 121 , also referred to as beta length calculating unit, means or module.
  • the beta length calculator 121 is configured to calculate a beta length parameter value based on the beta parameter value and the length offset parameter value.
  • the beta length calculator 121 is configured to calculate the beta length parameter value based on, preferably equal to, Beta × ( Base + LengthOffset ).
  • the processing unit 120 is preferably configured to determine at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the beta length parameter value.
  • the processing unit 120 optionally comprises a d variable calculator 128, also referred to as a d variable calculating unit, means or module.
  • the d variable calculator 128 is configured to calculate a variable d for the block boundary between the two neighboring blocks of pixels based on pixel values in the first and fourth pixel lines in the two neighboring blocks of pixels.
  • the processing unit 120 is, in this embodiment, configured to determine to apply deblocking filtering on the block boundary if the variable d is smaller than the beta parameter value and otherwise determine not to apply deblocking filtering on the block boundary. In an embodiment, the processing unit 120 is configured to determine whether to apply strong or weak deblocking filtering for a pixel line number i crossing the block boundary between the two neighboring blocks of pixel. The processing unit 120 is configured to perform this determination based at least partly on the length offset parameter value, preferably based at least partly on the length offset parameter value and the beta parameter value and more preferably at least partly based on the beta length parameter value.
  • the processing unit 120 preferably comprises a first threshold calculator 124, a second threshold calculator 125, a first variable calculator 126 and a second variable calculator 127, which are also referred to as first threshold calculating unit, means or module, second threshold calculating unit, means or module, first variable calculating unit, means or module and second variable calculating unit, means or module.
  • the first threshold calculator 124 is configured to calculate a first threshold based on the beta length parameter value, preferably defined as BetaLength » (2+X), wherein the base parameter, Base, has a value of 2^X.
  • the first threshold is calculated as BetaLength » 4 by the first threshold calculator 124.
  • the second threshold calculator 125 is correspondingly configured to calculate a second threshold based on the beta length parameter value, preferably defined as BetaLength » (3+X), such as BetaLength » 5.
  • the first variable calculator 126 preferably calculates the first variable defined as 2×( | p2i − 2×p1i + p0i | + | q2i − 2×q1i + q0i | ).
  • the second variable calculator 127 is correspondingly configured to calculate a second variable for the current pixel line number i based on pixel values of pixels present in the first block and pixels present in the second block divided by the block boundary.
  • the second variable calculator 127 preferably calculates the second variable defined as | p3i − p0i | + | q0i − q3i |.
  • the processing unit 120 is, in this embodiment, preferably configured to determine to apply strong deblocking filtering for the pixel line number i if the first variable is smaller than the first threshold, the second variable is smaller than the second threshold and, optionally but preferably, the value | p0i − q0i | is smaller than a threshold defined based on the parameter tc.
  • the processing unit 120 optionally comprises a side threshold calculator 122 and a dp variable calculator 123, also referred to as side threshold calculating unit, means or module and dp variable calculating unit, means or module.
  • the side threshold calculator 122 is configured to calculate a side threshold based on the beta length parameter value. In a particular embodiment the side threshold calculator 122 is configured to calculate the side threshold as ( BetaLength + ( BetaLength » 1 ) ) » (3+X), preferably as ( BetaLength + ( BetaLength » 1 ) ) » 5.
  • the pixel line is a line for which the processing unit 120 has determined to apply weak deblocking filtering.
  • the dp variable calculator 123 is configured to calculate the variable dp based on pixel values of pixels present in the first (current) block of the two blocks of pixels divided by the block boundary.
  • the processing unit 120 is, in this embodiment, preferably configured to determine to filter and modify two pixels in the pixel line number i if the variable dp is smaller than the side threshold and otherwise determine to filter and modify one pixel in the pixel line.
  • the two pixels are preferably the two pixels in pixel line number i that are closest to the block boundary whereas in the latter case the one pixel is the pixel closest to the block boundary in the pixel line number i.
  • the dp variable calculator 123 is also configured to calculate a variable dq based on pixel values of pixels present in the pixel line number i in the second block of the two blocks of pixels divided by the block boundary.
  • the processing unit 120 is then preferably configured to determine to filter and modify two pixels in the pixel line number i in the second block if the variable dq is smaller than the side threshold and otherwise determine to filter and modify one pixel in the pixel line number i in the second block.
  • the processing unit 120 is configured to determine both i) whether to apply weak or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value, preferably based at least partly on the length offset parameter value and the beta parameter value, more preferably based at least partly on the beta length parameter value.
  • the processing unit 120 may contain all the units 121-128 as shown in Fig. 16.
  • the filtering control device 100 implements the functions of the previously disclosed embodiments, such as the first to seventh embodiment, or a combination thereof by the determining unit 110, which is configured to determine, in a particular embodiment, BetaLength.
  • This BetaLength is processed by the processing unit 120.
  • Fig. 19 is a schematic block diagram of a filtering control device 200 according to another embodiment.
  • This filtering control device 200 is in particular configured to be implemented within or connected to an encoder.
  • the filtering control device 200 comprises a beta parameter determining unit 210, also referred to as a beta parameter determiner or beta parameter determining means or module.
  • the beta parameter determining unit 210 is configured to determine a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence.
  • a length offset determining unit 220 also referred to as a length offset determiner or length offset determining means or module, is configured to determine a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • the filtering control device 200 also comprises an associating unit 230, also referred to as associator or associating means or module, configured to associate a first syntax element representing or defining the beta parameter value and a second syntax element representing or defining the length offset parameter value to an encoded representation of the picture.
  • the length offset determining unit 220 is configured to determine the length offset parameter value based on pixel values in the two blocks of pixels or based on pixel values close to block boundaries in the slice as previously disclosed herein. In another alternative or additional embodiment the length offset determining unit 220 is configured to determine the length offset parameter value based on at least one encoding or slice parameter used for encoding the picture as previously disclosed herein.
  • the associating unit 230 is, in an embodiment, configured to insert the first and second syntax elements into a slice header of the encoded representation of the picture. In another embodiment the associating unit 230 is configured to insert first and second syntax elements into a parameter set associated with the video sequence and insert a parameter set identifier enabling identification of the parameter set into a slice header of the encoded representation of the picture. The associating unit 230 may alternatively be configured to distribute the first and second syntax elements between different parameter sets or between a slice header and a parameter set.
  • the filtering control device 200 of Fig. 19 can, in an embodiment, be viewed as an implementation example of the filtering control unit 100 of Fig. 7. In such a case, the determining unit 110 is configured to perform the operations of the beta parameter determining unit 210 and the length offset determining unit 220, whereas the processing unit 120 is configured to perform the operations of the associating unit 230.
  • the filtering control device 100, 200 of Figs. 7, 19 with their respective units 110-120 (and optional units 121-128), 210-230 could be implemented in hardware.
  • there are various circuitry elements that can be used and combined to achieve the functions of the units 110-120, 210-230 of the filtering control device 100, 200. Such variants are encompassed by the embodiments.
  • Particular examples of hardware implementation of the filtering control device 100, 200 are implementation in digital signal processor (DSP) hardware and integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry.
  • the filtering control device 100, 200 described herein could alternatively be implemented e.g. by one or more of a processing unit 72 in a computer 70 and adequate software with suitable storage or memory therefor, a programmable logic device (PLD) or other electronic component(s) as shown in Fig. 8.
  • Fig. 8 schematically illustrates an embodiment of a computer 70 having a processing unit 72, such as a DSP (Digital Signal Processor) or CPU (Central Processing Unit).
  • the processing unit 72 can be a single unit or a plurality of units for performing different steps of the method described herein.
  • the computer 70 also comprises an input/output (I/O) unit 71 for receiving recorded or generated video frames or encoded video frames and outputting encoded video frames or decoded video data.
  • the I/O unit 71 has been illustrated as a single unit in Fig. 8 but can likewise be in the form of a separate input unit and a separate output unit.
  • the computer 70 comprises at least one computer program product 73 in the form of a nonvolatile memory, for instance an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory or a disk drive.
  • the computer program product 73 comprises a computer program 74, which comprises code means which when run on or executed by the computer 70, such as by the processing unit 72, causes the computer 70 to perform the steps of the method described in the foregoing in connection with Figs. 1A-1C, 11-15, 17-18.
  • the code means in the computer program 74 comprises a module 310 configured to implement embodiments as disclosed herein or combinations thereof. This module 310 essentially performs the steps of the flow diagrams in Figs. 1A-1C, 11-15, 17-18 when run on the processing unit 72.
  • when the module 310 is run on the processing unit 72, it corresponds to the units 110-120, 210-230 of Figs. 7 and 19.
  • the computer program 74 is a computer program 74 for deblocking filtering control and comprises code means which when run on the computer 70 causes the computer 70 to retrieve, based on encoded video data, a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value.
  • the code means also causes the computer 70 to determine, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data and determine, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
  • the computer program 74 is a computer program 74 for deblocking filtering control and comprises code means which when run on the computer 70 causes the computer 70 to determine a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence.
  • the code means also causes the computer 70 to determine a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary, and associate a first syntax element representing the beta parameter value and a second syntax element representing the length offset parameter value to an encoded representation of the picture.
  • An embodiment also relates to a computer program product 73 comprising computer readable code means and a computer program 74 as defined according to any of the embodiments above stored on the computer readable code means.
  • the filtering control device 200 of Fig. 19 is preferably implemented or arranged in an encoder configured to encode video data of a video sequence.
  • Fig. 2 is a schematic block diagram of an encoder 40 for encoding a block of pixels in a video frame or picture of a video sequence according to an embodiment.
  • a current block of pixels is predicted by performing a motion estimation by a motion estimator 50 from an already provided block of pixels in the same frame or in a previous frame.
  • the result of the motion estimation is a motion or displacement vector associated with the reference block, in the case of inter prediction.
  • the motion vector is utilized by a motion compensator 50 for outputting an inter prediction of the block of pixels.
  • An intra predictor 49 computes an intra prediction of the current block of pixels.
  • the outputs from the motion estimator/compensator 50 and the intra predictor 49 are input in a selector 51 that either selects intra prediction or inter prediction for the current block of pixels.
  • the output from the selector 51 is input to an error calculator in the form of an adder 41 that also receives the pixel values of the current block of pixels.
  • the adder 41 calculates and outputs a residual error as the difference in pixel values between the block of pixels and its prediction.
  • the error is transformed in a transformer 42, such as by a discrete cosine transform, and quantized by a quantizer 43 followed by coding in an encoder 44, such as by an entropy encoder.
  • the estimated motion vector is brought to the encoder 44 for generating the coded representation of the current block of pixels.
  • the transformed and quantized residual error for the current block of pixels is also provided to an inverse quantizer 45 and inverse transformer 46 to retrieve the original residual error.
  • This error is added by an adder 47 to the block prediction output from the motion compensator 50 or the intra predictor 49 to create a reference block of pixels that can be used in the prediction and coding of a next block of pixels.
  • This new reference block is first processed by a filtering control device 200 in order to control any filtering that is applied to the reference block to combat any artifacts.
  • the processed new reference block is then temporarily stored in a frame buffer 48, where it is available to the intra predictor 49 and the motion estimator/compensator 50.
  • the filtering control device 100 of Fig. 7 is preferably implemented or arranged in a decoder configured to decode encoded video data of a video sequence.
  • Fig. 3 is a corresponding schematic block diagram of a decoder 60 comprising a filtering control device 100 according to any of the embodiments or in combinations thereof.
  • the decoder 60 comprises a decoder 61, such as an entropy decoder, for decoding an encoded representation of a block of pixels to get a set of quantized and transformed residual errors. These residual errors are dequantized in an inverse quantizer 62 and inverse transformed by an inverse transformer 63 to get a set of residual errors.
  • the resulting decoded block of pixels output from the adder 64 is input to a filtering control device 100 in order to control any filter that is applied to combat any artifacts.
  • the filtered block of pixels is output from the decoder 60 and is furthermore preferably temporarily provided to a frame buffer 65 and can be used as a reference block of pixels for a subsequent block of pixels to be decoded.
  • the frame buffer 65 is thereby connected to the motion estimator/compensator 67.
  • the output from the adder 64 is preferably also input to the intra predictor 66 to be used as an unfiltered reference block of pixels.
  • the filtering control device 100, 200 controls filtering in the form of so called in-loop filtering.
  • the filtering control device 100 is arranged to perform so called post-processing filtering.
  • the filtering control device 100 operates on the output frames outside of the loop formed by the adder 64, the frame buffer 65, the intra predictor 66, the motion estimator/compensator 67 and the selector 68.
  • No filtering and filter control is then typically done at the encoder although, in principle, the encoder can still estimate the length offset parameter and signal it with some non-normative means, e.g. in a non-normative SEI message.
  • the embodiments can be used in an encoder 40 and/or a decoder 60 or completely outside the coding loop as a post filter.
  • the methods of the embodiments are performed in a filtering control device 100, 200 which can be located in an encoder 40 or a decoder 60 as schematically illustrated in Figs. 2 and 3.
  • Figs. 2 and 3 illustrate the example when the method is performed inside the coding loop.
  • the decoder 60 with a filtering control device 100 may be implemented in a user equipment or media terminal 80 as shown in Fig. 5.
  • Fig. 5 is a schematic block diagram of a user equipment or media terminal 80 housing a decoder 60 with a filtering control device.
  • the user equipment 80 can be any device having media decoding functions that operates on an encoded video stream of encoded video frames to thereby decode the video frames and make the video data available. Non-limiting examples of such devices include mobile telephones and other portable media players, tablets, desktops, notebooks, personal video recorders, multimedia players, video streaming servers, set-top boxes, TVs, computers, decoders, game consoles, etc.
  • the user equipment 80 comprises a memory 84 configured to store encoded video frames or pictures. These encoded video frames or pictures can have been generated by the user equipment 80 itself. Alternatively, the encoded video frames or pictures are generated by some other device and wirelessly transmitted or transmitted by wire to the user equipment 80.
  • the user equipment 80 then comprises a transceiver (transmitter and receiver) or input and output port 82 to achieve the data transfer.
  • the encoded video frames or pictures are brought from the memory 84 to a decoder 60, such as the decoder illustrated in Fig. 3.
  • the decoder 60 comprises a filtering control device 100 according to embodiments.
  • the decoder 60 then decodes the encoded video frames or pictures into decoded video frames or pictures.
  • the decoded video frames or pictures are provided to a media player 86 that is configured to render the decoded video frames into video data that is displayable on a display or screen 88 of or connected to the user equipment 80.
  • the user equipment 80 has been illustrated as comprising both the decoder 60 and the media player 86, with the decoder 60 implemented as a part of the media player 86.
  • Also distributed implementations, where the decoder 60 and the media player 86 are provided in two physically separated devices, are possible and within the scope of user equipment 80 as used herein.
  • the display 88 could also be provided as a separate device connected to the user equipment 80, where the actual data processing is taking place.
  • the encoder 40 with a filtering control device 200 may be implemented in a user equipment or media terminal 80 as shown in Fig. 4.
  • Fig. 4 illustrates another embodiment of a user equipment 80 that comprises an encoder 40, such as the encoder of Fig. 2, comprising a filtering control device according to the embodiments.
  • the encoder 40 is then configured to encode video frames or pictures received by the I/O unit 82 and/or generated by the user equipment 80 itself.
  • the user equipment 80 preferably comprises a media engine or recorder, such as in the form of or connected to a (video) camera.
  • the user equipment 80 may optionally also comprise a media player 86, such as a media player 86 with a decoder and filtering control device according to the embodiments, and a display 88.
  • the encoder 40 and/or decoder 60 may be implemented in a network device 30 being or belonging to a network node in a communication network 32 between a sending unit 34 and a receiving user equipment 36.
  • a network device 30 may be a device for converting video according to one video coding standard to another video coding standard, for example, if it has been established that the receiving user equipment 36 is only capable of or prefers another video coding standard than the one sent from the sending unit 34.
  • the network device 30 can be in the form of or comprised in a radio base station, a Node-B or any other network node in a communication network 32, such as a radio-based network.
  • a transmitter associated with the encoder is provided for signaling the parameters according to embodiments above. Accordingly a receiver is provided for receiving the signaled parameters. The received parameters are used by the decoder when decoding the bit stream. Thus the receiver and the transmitter respectively implement the methods shown in Figs. 1B and 1C.
  • the embodiments above apply to a decoder, an encoder and any element that operates on a bitstream, such as a network-node or a Media Aware Network Element.
  • the encoder may for example be located in a transmitter in a video camera in e.g. a mobile device.
  • the decoder may for example be located in a receiver in a video camera or any other device for displaying, decoding or transcoding a video stream.
  • the embodiments are not limited to HEVC but may be applied to any extension of HEVC such as a scalable extension or multiview extension or to a different video codec.
  • Slice header syntax fragments (cf. Appendices A-C): slice_sao_interleaving_flag u(1); slice_sample_adaptive_offset_flag u(1); slice_adaptive_loop_filter_flag u(1); alf_param( ) invoked if( slice_adaptive_loop_filter_flag && alf_coef_in_slice_flag ); num_entry_point_offsets ue(v).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A deblocking filtering control involves determining, based at least partly on a beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture. A length offset parameter value is used in order to determine whether to apply weak or strong deblocking filtering on the block boundary and/or how many pixels to filter on each side of the block boundary. The decision whether to apply deblocking filtering and the decision of which deblocking filtering mode and/or length can thereby be made independently of each other, using different threshold-defining parameter values.

Description

DEBLOCKING FILTERING CONTROL
TECHNICAL FIELD
The present embodiments generally relate to deblocking filtering and in particular to controlling deblocking filtering over a boundary between neighboring blocks of pixels in a picture.
BACKGROUND
Deblocking filters are used in video coding standards in order to combat blocking artifacts. The blocking artifacts arise because the original video is split into blocks which are processed relatively independently. The blocking artifacts can arise due to different intra prediction of blocks, quantization effects and motion compensation. Two particular variants of deblocking are described below.
These descriptions assume a vertical block boundary or border. The process is also done in the same way for horizontal block boundaries.
H.264 deblocking
In state of the art video coding such as H.264 there is an adaptive de-blocking filter/loop filter after prediction and residual reconstruction, but before storage of the reconstruction for later reference when encoding or decoding subsequent frames. The deblocking filtering consists of several steps such as filter decisions, filtering operations, a clipping function and changes of pixel values. The decision to filter the border or not is made based on evaluating several conditions. Filter decisions depend on macroblock (MB) type, motion vector (MV) difference between neighboring blocks, whether neighboring blocks have coded residuals and on the local structure of the current and/or neighboring blocks.
The amount of filtering for a pixel depends on the position of that pixel relative to the block boundary and on the quantization parameter (QP) value used for residual coding. Here below a to h represent pixel values across a vertical block boundary:

a b c d | e f g h
The filter decision is based on comparing three pixel differences with three thresholds. The thresholds are adapted to the QP. If the following conditions are fulfilled, the filtering is done:

abs(d-e) < thr1,
abs(c-d) < thr2, and
abs(e-f) < thr2

where thr1 and thr2 are functions of QP.
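As a non-normative illustration, the three-condition decision above can be sketched as follows. The function and parameter names are ours, not from the standard, and thr1 and thr2 are assumed to have already been looked up from the QP:

```c
#include <assert.h>
#include <stdlib.h>

/* H.264-style filter decision across the boundary ... c d | e f ...
 * Returns 1 (filter) only if all three difference conditions hold. */
int h264_filter_decision(int c, int d, int e, int f, int thr1, int thr2)
{
    return abs(d - e) < thr1 &&
           abs(c - d) < thr2 &&
           abs(e - f) < thr2;
}
```

A small boundary step survives the test and is filtered, while a large step (likely a real edge) is left untouched.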
There are two filtering modes in H.264. In the first filtering mode (normal filtering), filtering can be described with a delta value that the filtering changes the current pixel value with. The filtering for the pixel closest to the block boundary is: d' = d + delta and e' = e - delta, where delta has been clipped to a threshold ±thr3, i.e. to a value that is constrained by the QP. d' is here the pixel value at position d after filtering and e' is the pixel value after filtering at position e. More filtering is allowed for high QP than for low QP.
Clipping can be described as: delta_clipped = max(-thr3,min(thr3,delta)) where thr3 is controlling the filter strength. A larger value of thr3 means that the filtering is stronger, which in turns means that a stronger low-pass filtering effect will happen.
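The clipping step can be sketched as a small helper (a sketch with names of our choosing, not the reference code):

```c
#include <assert.h>

/* Clip3(A, B, x): constrain x to the interval [A, B]. */
int clip3(int a, int b, int x)
{
    return x < a ? a : (x > b ? b : x);
}

/* delta_clipped = max(-thr3, min(thr3, delta)) */
int clip_delta(int delta, int thr3)
{
    return clip3(-thr3, thr3, delta);
}
```

A larger thr3 lets more of the computed delta through, i.e. stronger low-pass filtering, which matches the description above.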
The filter strength can be increased if any of the following two conditions also holds: abs(b-d) < thr2 and abs(e-g) < thr2. The filter strength is adapted by clipping the delta less, e.g. to allow for more variation.
The second filtering mode (strong filtering) is applied for intra macroblock boundaries only, when the following condition is fulfilled: abs(d-e)<thr1/4.
The thresholds thr1, thr2 and thr3 are derived from table lookup using QP as index. Each slice can contain modifications of thr2 and thr3 using slice_beta_offset_div2 and of thr1 using slice_alpha_c0_offset_div2. The slice parameters 2×slice_beta_offset_div2 and 2×slice_alpha_c0_offset_div2 are added to the current QP index before table lookup of thr2/thr3 and thr1 respectively.
Deblocking in HEVC draft
Here below p0 to p3 and q0 to q3 represent pixel values across a vertical block boundary.
In the draft HEVC specification, the deblocking filter works differently than in H.264. The filtering is performed if at least one of the blocks on the side of the border is intra, or has non-zero coefficients, or the difference between the motion vector components of the blocks is greater than or equal to one integer pixel. For example, if one is filtering the border between the blocks A and B below, then the following condition should be satisfied for the block boundary to be filtered:

A                 B
p30 p20 p10 p00 | q00 q10 q20 q30
p31 p21 p11 p01 | q01 q11 q21 q31
p32 p22 p12 p02 | q02 q12 q22 q32
p33 p23 p13 p03 | q03 q13 q23 q33
dp0 = | p20 - 2×p10 + p00 | (1)
dp3 = | p23 - 2×p13 + p03 |
dq0 = | q20 - 2×q10 + q00 |
dq3 = | q23 - 2×q13 + q03 |
dpq0 = dp0 + dq0
dpq3 = dp3 + dq3
dp = dp0 + dp3
dq = dq0 + dq3

The variable d is derived as follows:

d = dpq0 + dpq3 (2)
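Equations (1) and (2) can be sketched as follows (helper names are ours; the arguments follow the p2i, p1i, p0i / q2i, q1i, q0i naming of the equations):

```c
#include <assert.h>
#include <stdlib.h>

/* Second difference |x2 - 2*x1 + x0| used in the dp/dq measures of (1). */
int second_diff(int x2, int x1, int x0)
{
    return abs(x2 - 2 * x1 + x0);
}

/* d = dpq0 + dpq3 of equation (2), assembled from lines 0 and 3 on both
 * sides of the block boundary. */
int boundary_activity(int p20, int p10, int p00, int q20, int q10, int q00,
                      int p23, int p13, int p03, int q23, int q13, int q03)
{
    int dpq0 = second_diff(p20, p10, p00) + second_diff(q20, q10, q00);
    int dpq3 = second_diff(p23, p13, p03) + second_diff(q23, q13, q03);
    return dpq0 + dpq3;
}
```

Flat pixel lines give d = 0, so smooth regions near the boundary pass the d < β test easily and get filtered, while textured regions do not.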
The deblocking filtering is performed on the block boundary for lines i = 0..3 if the following condition holds:

d < β (3)

where β depends on QP. In the draft HEVC specification, there is a table, see Table 1 below, for looking up the value of β using QP as the table index. β increases with increasing QP.
Table 1 - Derivation of threshold variables β and tc from input QP
If the condition in equation (3) above is fulfilled and filtering is done between blocks A and B, one of two types of filtering (weak or strong filtering) is performed. The choice between the strong and the weak filtering is done separately for each line depending on the following conditions. For lines i = 0, 3, strong filtering is performed if all the following conditions are true, otherwise weak filtering is performed:

2×dpqi < ( β » 2 ) (4)
and ( |p3i - p0i| + |q0i - q3i| ) < ( β » 3 ) (5)
and |p0i - q0i| < ( ( 5×tc + 1 ) » 1 ) (6)

where tc and β depend on QP as shown in Table 1. The tc value is derived using table index QP+2 when A or B has PredMode == MODE_INTRA.
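Conditions (4)-(6) can be sketched for one line as follows (a sketch; names are ours, and β and tc are assumed to have already been looked up from Table 1):

```c
#include <assert.h>
#include <stdlib.h>

/* Per-line strong/weak choice of the draft HEVC filter: returns 1 if strong
 * filtering is selected for line i, given its dpq measure (dpq0 or dpq3)
 * and the outermost/innermost pixels of that line. */
int use_strong_filter(int dpq_i, int p3, int p0, int q0, int q3, int beta, int tc)
{
    return (2 * dpq_i < (beta >> 2)) &&                      /* (4) */
           ((abs(p3 - p0) + abs(q0 - q3)) < (beta >> 3)) &&  /* (5) */
           (abs(p0 - q0) < ((5 * tc + 1) >> 1));             /* (6) */
}
```

Note that all three thresholds in (4) and (5) are fixed shifts of the same β, which is the coupling the later embodiments set out to relax.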
The two filtering modes (weak and strong filtering) in the HEVC draft are as follows:
Weak filtering
Weak filtering is performed based on the above conditions. The actual filtering works by computing an offset (Δ) for each of the lines i that the weak filter has been selected for. The following weak filtering procedure is applied for every line where it has been chosen. In the following algorithm, the variables p0..p2 and q0..q2 are assigned the following values (from row/column i): p0 = p0i, p1 = p1i, p2 = p2i, q0 = q0i, q1 = q1i, q2 = q2i. One can see that if the weak filtering is performed, one to two pixels are modified at each side of the block boundary:
Δ = ( 9×( q0 - p0 ) - 3×( q1 - p1 ) + 8 ) » 4

if( abs( Δ ) < 10×tc )
{
    Δ = Clip3( -tc, tc, Δ )
    p0' = Clip1Y( p0 + Δ )
    q0' = Clip1Y( q0 - Δ )
    if( dp < ( β + ( β » 1 ) ) » 3 ) (7)
    {
        Δp = Clip3( -( tc » 1 ), tc » 1, ( ( ( p2 + p0 + 1 ) » 1 ) - p1 + Δ ) » 1 )
        p1' = Clip1Y( p1 + Δp )
    }
    if( dq < ( β + ( β » 1 ) ) » 3 ) (8)
    {
        Δq = Clip3( -( tc » 1 ), tc » 1, ( ( ( q2 + q0 + 1 ) » 1 ) - q1 - Δ ) » 1 )
        q1' = Clip1Y( q1 + Δq )
    }
}

where Clip3 is defined as x' = Clip3( A, B, x ):

x' = A if x < A
x' = B if x > B
x' = x otherwise

and Clip1Y( x ) = Clip3( 0, ( 1 « BitDepthY ) - 1, x ), where BitDepthY is the bit depth, for example 8 or 10.

Strong filtering
Strong filtering mode is performed for a line i of pixels by the following set of operations, where p0 = p0i, p1 = p1i, p2 = p2i, q0 = q0i, q1 = q1i, q2 = q2i:

p0' = Clip3( p0 - 2×tc, p0 + 2×tc, ( p2 + 2×p1 + 2×p0 + 2×q0 + q1 + 4 ) » 3 )
p1' = Clip3( p1 - 2×tc, p1 + 2×tc, ( p2 + p1 + p0 + q0 + 2 ) » 2 )
p2' = Clip3( p2 - 2×tc, p2 + 2×tc, ( 2×p3 + 3×p2 + p1 + p0 + q0 + 4 ) » 3 )
q0' = Clip3( q0 - 2×tc, q0 + 2×tc, ( p1 + 2×p0 + 2×q0 + 2×q1 + q2 + 4 ) » 3 )
q1' = Clip3( q1 - 2×tc, q1 + 2×tc, ( p0 + q0 + q1 + q2 + 2 ) » 2 )
q2' = Clip3( q2 - 2×tc, q2 + 2×tc, ( p0 + q0 + q1 + 3×q2 + 2×q3 + 4 ) » 3 )
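The two boundary-pixel updates above can be sketched as follows (a sketch of p0' and q0' only; the remaining four values follow the listed formulas in the same way, and the function names are ours):

```c
#include <assert.h>

/* Clip3 helper: constrain x to [a, b]. */
int clip3(int a, int b, int x)
{
    return x < a ? a : (x > b ? b : x);
}

/* Strong-filtered value of the p-side boundary pixel p0 (tc bounds the change). */
int strong_p0(int p2, int p1, int p0, int q0, int q1, int tc)
{
    return clip3(p0 - 2 * tc, p0 + 2 * tc,
                 (p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3);
}

/* Strong-filtered value of the q-side boundary pixel q0. */
int strong_q0(int p1, int p0, int q0, int q1, int q2, int tc)
{
    return clip3(q0 - 2 * tc, q0 + 2 * tc,
                 (p1 + 2 * p0 + 2 * q0 + 2 * q1 + q2 + 4) >> 3);
}
```

A flat line is left unchanged, while a step edge (p-side at 20, q-side at 10) is pulled toward the average, smoothing the block boundary.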
One can see that if the strong filtering is performed, three pixels are modified at each side of the block boundary. The parameters beta_offset_div2 and pps_beta_offset_div2, see Appendix A, Appendix B and Appendix C, are used in order to adjust the amount of deblocking filtering as follows. The parameter beta (β) is extracted from the corresponding table as β = Clip3( 0, 51, qPL + ( beta_offset_div2 « 1 ) ), where the Clip3( a, b, c ) function is determined as Min( b, Max( a, c ) ) and qPL represents the QP value. Hence, if the deblocking filtering is applied, the number of samples from the block boundary modified by deblocking filtering depends on equations (4), (5), (7) and (8). These equations use a comparison with the parameter β divided by some factor, such as ( β » 2 ) in (4), ( β » 3 ) in (5) and ( β + ( β » 1 ) ) » 3 in (7) and (8), where the parameter β depends on the QP and is normally derived from a look-up table, such as Table 1.
As one can see from the slice header syntax in Appendix A, Appendix B and Appendix C, an offset to the parameter β can be signaled, for instance, in the slice header or in a parameter set, such as Adaptation Parameter Set (APS) or Picture Parameter Set (PPS), as beta_offset_div2. However, this means that significant changes of the parameter β are required in order to change the threshold values ( β » 2 ), ( β » 3 ) and ( β + ( β » 1 ) ) » 3.
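The coupling described above can be sketched as follows. The index derivation mirrors the Clip3 expression; the normative Table 1 values are not reproduced here, so only the index arithmetic and the derived thresholds are shown:

```c
#include <assert.h>

/* Clip3 helper: constrain x to [a, b]. */
int clip3(int a, int b, int x)
{
    return x < a ? a : (x > b ? b : x);
}

/* Table index for the β lookup: the QP plus twice the signaled offset,
 * clipped to the table range 0..51. */
int beta_index(int qp_l, int beta_offset_div2)
{
    return clip3(0, 51, qp_l + (beta_offset_div2 << 1));
}

/* All three decision thresholds are fixed shifts of the same β value, which
 * is why adjusting one of them via beta_offset_div2 drags the others along. */
int thr_strong(int beta) { return beta >> 2; }                  /* ( β » 2 ) in (4) */
int thr_flat(int beta)   { return beta >> 3; }                  /* ( β » 3 ) in (5) */
int thr_side(int beta)   { return (beta + (beta >> 1)) >> 3; }  /* in (7) and (8) */
```

This makes concrete why a single signaled β offset cannot move the mode/length thresholds without also moving the on/off threshold, which motivates the separate length offset parameter of the embodiments.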
SUMMARY

It may be desirable to modify the threshold values, which affect the number of pixels modified by deblocking filtering, without large modifications of the parameter β, which determines which parts of block boundaries are modified by deblocking filtering. A general objective is to provide an efficient deblocking filtering control.
A particular objective is to enable modifying threshold values used for determining deblocking filtering mode and/or length separately from modifications of a parameter that determines which parts of block boundaries are modified by deblocking filtering.
These and other objectives are met by embodiments disclosed herein.
An aspect of the embodiments relates to a deblocking filtering control method performed in connection with video decoding. The method comprises retrieving, based on encoded video data, a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value. The method also comprises determining, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data, and determining, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
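A minimal sketch of the decoder-side split this aspect describes: β alone gates the on/off decision (in the style of the d < β test), while a separate value derived from the length offset sets the mode/length threshold. The derivation of the combined value from β and the length offset below is an assumption for illustration only; the embodiments define the exact relation:

```c
#include <assert.h>

/* Whether to apply deblocking at all: governed by the β parameter value. */
int apply_deblocking(int d, int beta)
{
    return d < beta;
}

/* Mode/length threshold governed by the length offset parameter value.
 * beta_length is illustratively taken as β plus the offset (an assumption,
 * not the normative derivation), plugged into the (x + (x » 1)) » 3 pattern
 * of conditions (7) and (8). */
int length_threshold(int beta, int length_offset)
{
    int beta_length = beta + length_offset;
    return (beta_length + (beta_length >> 1)) >> 3;
}
```

The point of the split is that length_threshold can be raised or lowered through the second syntax element without perturbing the apply_deblocking decision.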
A related aspect of the embodiments defines a filtering control device comprising a determining unit configured to determine a beta parameter value from a first syntax element retrieved based on encoded video data and a length offset parameter value from a second syntax element retrieved based on the encoded video data. The filtering control device also comprises a processing unit configured to i) determine, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data, and ii) determine, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
Another related aspect of the embodiments defines a decoder configured to decode encoded video data of a video sequence. The decoder comprises a filtering control device according to above. A further related aspect of the embodiments defines a user equipment comprising a decoder according to above.
Yet another related aspect of the embodiments defines a computer program for deblocking filtering control. The computer program comprises code means which when run on a computer causes the computer to retrieve, based on encoded video data, a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value. The code means also causes the computer to determine, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data. The code means further causes the computer to determine, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary. A further related aspect of the embodiments defines a computer program product comprising computer readable code means and a computer program according to above stored on the computer readable code means.
Another aspect of the embodiments relates to a deblocking filtering control method performed during video encoding. The method comprises determining a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence. The method also comprises determining a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary. A first syntax element representing the beta parameter value and a second syntax element representing the length offset parameter value are associated to an encoded representation of the picture.
A related aspect of the embodiments defines a filtering control device comprising a beta parameter determining unit configured to determine a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence. A length offset determining unit is configured to determine a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary. The filtering control device also comprises an associating unit configured to associate a first syntax element representing the beta parameter value and a second syntax element representing the length offset parameter value to an encoded representation of the picture.
Another related aspect of the embodiments defines an encoder configured to encode video data of a video sequence and comprising a filtering control device according to above. A further aspect of the embodiments defines a user equipment comprising an encoder according to above.
Still another aspect of the embodiments defines a network device being or belonging to a network node in a communication network. The network device comprises an encoder and/or a decoder according to above.
The present embodiments enable the flexibility of applying stronger/weaker filtering, and thereby modifying more/fewer pixels from the block boundary, on the block boundaries chosen for deblocking filtering, without applying deblocking to most of the boundaries in the picture. Hence, the embodiments enable this adaptation of deblocking filtering mode/length while avoiding filtering on most of the block boundaries.
BRIEF DESCRIPTION OF THE DRAWINGS
The embodiments, together with further objects and advantages thereof, may best be understood by making reference to the following description taken together with the accompanying drawings, in which:
Fig. 1A schematically illustrates a method performed in a filtering control device according to an embodiment; Fig. 1B schematically illustrates a method performed in a transmitter according to an embodiment; Fig. 1C schematically illustrates a method performed in a receiver according to an embodiment; Fig. 2 is a schematic block diagram of an encoder according to an embodiment;
Fig. 3 is a schematic block diagram of a decoder according to an embodiment;
Fig. 4 is a schematic block diagram of a user equipment according to an embodiment; Fig. 5 is a schematic block diagram of a user equipment according to another embodiment;
Fig. 6 is a schematic block diagram of a network device according to an embodiment; Fig. 7 is a schematic block diagram of a filtering control device according to an embodiment;
Fig. 8 is a schematic block diagram of a computer according to an embodiment;
Fig. 9 is a diagram illustrating dependency of thresholds in equations (4), (5) and (7) on QP values (solid lines) and corresponding thresholds using LengthOffset = 1 and the parameter Base = 4 (dashed lines);
Fig. 10 is a diagram illustrating dependency of thresholds in equations (4), (5) and (7) on QP values (solid lines) and corresponding thresholds using LengthOffset = 2 and the parameter Base = 4 (dashed lines);
Fig. 11 is a flow diagram illustrating a deblocking filtering control method according to an embodiment;
Fig. 12 is a flow diagram illustrating an embodiment of the step of determining whether to apply deblocking filtering in Fig. 11 ;
Fig. 13 is a flow diagram illustrating an embodiment of the step of determining whether to apply weak or strong deblocking filtering and/or how many pixels to filter in Fig. 11 ; Fig. 14 is a flow diagram illustrating an embodiment of the step of determining whether to apply weak or strong deblocking filtering in Fig. 13;
Fig. 15 is a flow diagram illustrating an embodiment of the step of determining how many pixels to filter in Fig. 13;
Fig. 16 is a schematic block diagram of a processing unit in Fig. 7 according to an embodiment;
Fig. 17 is a flow diagram illustrating a deblocking filtering control method according to another embodiment; Fig. 18 is a flow diagram illustrating the step of associating the first and second syntax element in Fig. 17 according to an embodiment; Fig. 19 is a schematic block diagram of a filtering control device according to another embodiment;
Fig. 20 schematically illustrates a video sequence of pictures;
Fig. 21 schematically illustrates a data packet carrying encoded video data; and
Fig. 22 schematically illustrates an encoded representation of a picture.
DETAILED DESCRIPTION
Throughout the drawings, the same reference numbers are used for similar or corresponding elements.
The present embodiments generally relate to deblocking filtering and in particular to controlling deblocking filtering over a boundary of neighboring blocks of pixels in a picture.
The embodiments are based on the insight that prior art techniques use a single parameter beta (β) to determine whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture, see equation (3), to determine whether to apply weak or strong filtering, see equations (4) and (5), and to determine how many pixels to filter on each side of the block boundary, see equations (7) and (8). Hence, if it would be desirable to affect the filtering mode, such as go from weak to strong filtering, and/or affect the number of pixels to filter in the weak filtering mode, such as go from filtering and modifying one pixel to two pixels, then the value of this parameter β needs to be increased. The increase in the β value needs, though, to be significant since equations (4), (5), (7) and (8) use thresholds where the parameter β is divided by some number, e.g. four in equation (4), eight in equation (5) and 16/3 in equations (7) and (8). However, increasing the value of β will affect the filter decision in equation (3) determining whether to filter a block boundary or not. In this equation (3) the relevant threshold is simply the β value, i.e. not divided by any number. This means that an increase in the β value as required in order to affect the decision according to any of equations (4), (5), (7) and (8) will have a significant impact on the threshold used in equation (3). As a consequence, the decision in equation (3) will be true for most block boundaries in a picture and filtering will therefore be applied on most block boundaries. This may lead to excessive blurriness.
Thus, if deblocking filtering is unintentionally applied to some block boundaries, because the decision in equation (3) is true for most block boundaries, then structures present in the blocks of pixels could be removed or at least significantly suppressed. For instance, a clear edge between two pixel areas could be present close to the block boundary. It is then generally not preferred to apply deblocking filtering since such deblocking filtering could remove or blur the clear edge, leading to visual artifacts. It is of course also computationally wasteful to apply deblocking filtering on block boundaries if the deblocking filtering does not lead to any significant quality improvement or might even cause a deterioration in visual quality.
The embodiments solve the above described shortcomings of the prior art by using a parameter to adjust the length of the deblocking filtering, i.e. the number of pixels from the block boundary that can be modified by deblocking filtering, and/or affect the decision between weak (normal) and strong deblocking filtering independently of determining which block boundaries are processed by deblocking filtering. Fig. 11 is a flow diagram illustrating a deblocking filtering control method performed in connection with, such as during, video decoding. The method comprises retrieving, in step S30 and based on encoded video data, a first syntax element defining a beta (β) parameter value and a second syntax element defining a length offset parameter value. Step S31 comprises determining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data. The determination or decision taken in step S31 is performed based at least partly on the beta parameter value. Step S32 comprises determining at least one of i) whether to apply weak deblocking filtering, sometimes also denoted normal deblocking filtering, or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary. The determination or decision taken in step S32 is performed based at least partly on the length offset parameter value.
The decision of whether to apply weak or strong deblocking filtering can be regarded as a decision of which of two deblocking filtering modes to use. Weak deblocking filtering is a deblocking filtering mode that applies a weaker filtering and modification of pixel values as compared to a strong deblocking filtering mode. Strong deblocking filtering is generally preferred if there are no structures in the pixel values when traveling along a pixel line across the block boundary, hence the pixel values are substantially the same on both sides of the block boundary or differ little. If there is any structure in the pixel values, such as an edge between elements, then weak deblocking filtering is generally preferred since a strong deblocking filtering could remove or at least suppress such a structure.
If the deblocking filtering control method is applied to HEVC as discussed in the background then method step S32 could be regarded as comprising determining how many pixels to filter on each side of the block boundary, or expressed alternatively, determining a length of a deblocking filtering. The reason being that in HEVC the strong deblocking filtering mode involves filtering three pixels in a pixel line on each side of the block boundary, whereas the weak deblocking filtering mode involves filtering either one or two pixels in the pixel line on each side of the block boundary. Hence, the determination of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary could be regarded as being surplus since if it is determined to filter three pixels on each side of the block boundary then the strong deblocking filtering mode has been selected, whereas if it is determined to filter one or two pixels on each side of the block boundary then the weak deblocking filtering mode has been selected. Pixel value as used herein generally relates to a value that a pixel or sample in a block of a slice in a picture has. A pixel value typically represents a color value according to some defined color format, such as RGB or, typically, luminance (luma) and chrominance (chroma). The deblocking filtering as disclosed herein is in particular suitable in connection with filtering luma values. The block boundary is a block boundary or block border between two neighboring or adjacent blocks of pixels in a slice 3 of a picture 2 in a video sequence 1, see Fig. 20. The picture 2 could comprise a single slice 3 or multiple, i.e. at least two, slices 3. The block boundary could be a vertical block boundary as shown below for two neighboring blocks A and B positioned side by side in the picture 2.

        A       |       B
p3₀ p2₀ p1₀ p0₀ | q0₀ q1₀ q2₀ q3₀
p3₁ p2₁ p1₁ p0₁ | q0₁ q1₁ q2₁ q3₁
p3₂ p2₂ p1₂ p0₂ | q0₂ q1₂ q2₂ q3₂
p3₃ p2₃ p1₃ p0₃ | q0₃ q1₃ q2₃ q3₃

Alternatively, the block boundary is a horizontal block boundary as shown below for two neighboring blocks A and B, where block A is positioned on top of block B in the picture 2.

        A
p3₀ p3₁ p3₂ p3₃
p2₀ p2₁ p2₂ p2₃
p1₀ p1₁ p1₂ p1₃
p0₀ p0₁ p0₂ p0₃
---------------
q0₀ q0₁ q0₂ q0₃
q1₀ q1₁ q1₂ q1₃
q2₀ q2₁ q2₂ q2₃
q3₀ q3₁ q3₂ q3₃
        B
Deblocking filtering is applied to a line of pixels, denoted pixel line herein, i.e. to a row of pixels for a vertical block boundary or to a column of pixels for a horizontal block boundary.
The deblocking filtering control method as disclosed in Fig. 11 , hence, adds a new parameter, i.e. the length offset value, which is used in order to determine whether to apply weak or strong deblocking filtering and/or how many pixels to filter. This means that the decision of which deblocking filtering mode to use and/or the length of the deblocking filtering can be made independent of changing the value of the beta parameter used in step S31 to determine whether or not to apply deblocking filtering on the block boundary. The embodiments thereby enable, for instance, going from applying weak deblocking filtering to strong deblocking filtering for a pixel line on a block boundary and/or going from filtering and modifying a single pixel on each side of the block boundary in a pixel line to filtering and modifying two pixels on each side of the block boundary in the pixel line without affecting the beta parameter value and thereby without affecting the number of block boundaries on which deblocking filtering is applied in a picture.
The first syntax element retrieved in step S30 and defining the beta parameter value could be any syntax element in the encoded video data, i.e. bitstream, or associated with the encoded video data that enables determination of the beta parameter value. In a particular embodiment, the first syntax element comprises the quantization parameter (QP) value used for residual coding. Generally, a base QP parameter could be set at the picture level with a first delta QP parameter that can further change the base QP parameter value at the slice level. In addition, there may be a second delta QP parameter which can be sent at the block level. Hence, two neighboring blocks of pixels in a slice in a picture can have different QP parameter values. In such a case, an average QP value of the two neighboring blocks is typically used to derive the beta parameter value from a look-up table, such as Table 1. The first syntax element could then comprise the syntax elements defining the base QP parameter, the first delta QP parameter and the second delta QP parameter. The syntax element defining the first delta QP parameter is typically signaled in a slice header in an encoded representation of a slice, with the optional second delta QP parameter signaled on a coding unit (block of pixels) basis. The syntax element defining the base QP parameter may also be retrieved from the slice header but is typically included in another header or data element or structure in or associated with an encoded representation of a picture. Examples of the latter include various parameter sets, such as Picture Parameter Set (PPS), Sequence Parameter Set (SPS), Video Parameter Set (VPS) and Adaptation Parameter Set (APS), and preferably PPS.
The second syntax element defining the length offset parameter value could be retrieved from a slice header in an encoded representation of a slice. Alternatively, the syntax element is retrieved from a data element or structure associated with the encoded representation, such a PPS, SPS, VPS or APS. In the latter case, the slice header preferably comprises a parameter set identifier directly or indirectly identifying the relevant parameter set. For instance, a PPS can be identified by a PPS identifier in the slice header, an SPS can be identified by an SPS identifier in a PPS, which is identified by a PPS identifier in the slice header and a VPS can be identified by a VPS identifier in an SPS identified by an SPS identifier in a PPS, which is identified by a PPS identifier in the slice header.
Hence, the retrieval of syntax elements in step S30 could, for instance, be performed once per block boundary or once per slice in the picture. In the latter case, the beta parameter value and/or the length offset parameter value could be reused for multiple block boundaries in the slice. Retrieving the syntax elements in step S30 preferably comprises reading and decoding the syntax elements from the relevant data structure, such as slice header or parameter set. The decoded values could be used directly as the beta parameter value and the length offset parameter value. Alternatively, the beta parameter value and/or the length offset parameter value is calculated or otherwise determined based on decoded values. For instance, decoded values of the base QP parameter and the delta QP parameters are used to calculate a QP parameter value which is used as a table input in a look-up table to get the relevant beta parameter value.
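The derivation just described can be sketched as follows in Python. The table values below are hypothetical stand-ins for Table 1, which is not reproduced in this excerpt, and the rounded averaging of the two neighboring blocks' QP values is an assumption; only the overall flow (base QP plus slice- and block-level deltas, averaging, table look-up) is taken from the text.

```python
# Hypothetical stand-in for Table 1 (QP -> beta); real values come from
# the codec specification, not from this sketch.
BETA_TABLE = {qp: max(0, 2 * qp - 32) for qp in range(52)}

def derive_beta(base_qp, slice_delta_qp, block_delta_qp_a, block_delta_qp_b):
    """Combine the base QP with the slice- and block-level delta QP
    parameters for the two neighboring blocks, average the two resulting
    QP values (rounded average, an assumption here), and look the beta
    parameter value up in the table."""
    qp_a = base_qp + slice_delta_qp + block_delta_qp_a
    qp_b = base_qp + slice_delta_qp + block_delta_qp_b
    avg_qp = (qp_a + qp_b + 1) >> 1  # rounded average of the two blocks' QPs
    return BETA_TABLE[avg_qp]
```

With the stub table above, two blocks at QP 32 and QP 34 average to QP 33 and yield a beta value of 34; any real decoder would instead read the specified table.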
In a particular embodiment, the value of the length offset parameter is defined based on the size of the block of pixels. This means that the length offset parameter value is then linked to a particular type of block boundary. In such a case, the second syntax element retrieved in step S30 is a syntax element defining the size of the current block of pixels. The size is then used, for instance, in a look-up table to get the value of the length offset parameter to be used for the current block of pixels. The following steps S31 and S32 of Fig. 11 are preferably performed for each vertical and horizontal block boundary between two neighboring blocks present in the same slice in the picture. As is further discussed herein, the determination of whether to use strong or weak deblocking filtering is preferably available for the first (pixel line number i=0) and fourth pixel line (pixel line number i=3) in a block of 4x4 pixels. In such a case, step S32 is preferably performed for the first and fourth pixel line for a block boundary. In an embodiment, the determination of how many pixels to filter is preferably performed for each pixel line relative to a block boundary for which weak deblocking filtering has been applied.
In an alternative approach, step S32 involves determining how many pixels to filter based on the length offset value and this determination is made for each pixel line relative to the block boundary. In such a case, for the first and fourth line the decision could first be to determine, based at least partly on the length offset parameter value, whether to filter three or filter one/two pixels on each side of a block boundary in a given pixel line. This corresponds to selecting between strong and weak deblocking filtering. If weak deblocking filtering is selected for the first and/or fourth pixel line a further decision is made, based at least partly on the length offset parameter value, whether to filter one or two pixels on each side of the block boundary on the given pixel line. Strong deblocking filtering is generally not applicable to the second and third pixel lines. Hence, for these two pixel lines the decision in step S32 could thereby involve determining, at least partly based on the length offset parameter value, whether to filter one or two pixels on each side of the block boundary. Step S31 comprises determining whether or not to apply deblocking filtering on the block boundary based at least partly on the beta parameter value but preferably not based on the length offset parameter value. Hence, in an embodiment the length offset parameter value is used in the decision or determination performed in step S32 but not in the decision or determination in step S31. A particular embodiment of step S31 is illustrated in the flow diagram of Fig. 12. The method continues from step S30 in Fig. 11. A next step S40 comprises calculating a variable d based on pixel values of pixels in a first block of pixels (block A above) and in a second, neighboring block of pixels (block B above). In a particular embodiment, the variable d is calculated as: d = |p2₀ - 2×p1₀ + p0₀| + |p2₃ - 2×p1₃ + p0₃| + |q2₀ - 2×q1₀ + q0₀| + |q2₃ - 2×q1₃ + q0₃| (9)
In equation (9) and further herein, pXᵢ represents a pixel value of a pixel in pixel line number i in a first or current block of pixels (block A) at pixel position number X relative to the block boundary, and qXᵢ represents a pixel value of a pixel in pixel line number i in a second block of pixels (block B) at pixel position number X relative to the block boundary.
Equation (9) above basically corresponds to a combination of equation (2) and some of the equations referred to as (1).
A next step S41 comprises comparing the variable d with a threshold value, which in an embodiment corresponds to the beta parameter value, represented by β in Fig. 12. If the variable d is smaller than the beta parameter value the method continues to step S42, which comprises determining to apply deblocking filtering on the block boundary. The method then continues to step S32 of Fig. 11. If the variable d instead is not smaller than the beta parameter value the method continues from step S41 to S43. This step S43 comprises determining not to apply deblocking filtering on the block boundary. In such a case, the method ends.
The method as shown in Fig. 12 involving steps S40 to S43 is preferably performed for each (vertical and horizontal) block boundary between neighboring blocks of pixels present in a same slice in a picture of a video sequence. Hence, the decision of whether to apply deblocking filtering or not is preferably performed once as shown in Fig. 12 for each such block boundary.
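The decision of steps S40 to S43 can be sketched as follows. This is a non-normative Python illustration; the indexing convention p[x][i] (pixel position x from the boundary, pixel line i) and the function name are notational choices, not part of the embodiments.

```python
def apply_deblocking(p, q, beta):
    """Return True if deblocking filtering should be applied on this
    block boundary (steps S40-S43).

    Implements equation (9): the structure measure d is computed from
    second differences on pixel lines 0 and 3 only, then compared
    against the beta parameter value.
    """
    d = (abs(p[2][0] - 2 * p[1][0] + p[0][0])
         + abs(p[2][3] - 2 * p[1][3] + p[0][3])
         + abs(q[2][0] - 2 * q[1][0] + q[0][0])
         + abs(q[2][3] - 2 * q[1][3] + q[0][3]))
    return d < beta
```

Flat blocks give d = 0 and pass the check for any positive beta, while a pronounced second difference (texture or an edge parallel to the boundary) pushes d above beta and disables filtering on that boundary.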
In an embodiment, step S32 of Fig. 11 comprises determining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value and the beta parameter value. Hence, in this embodiment of step S32 both the length offset parameter value and the beta parameter value are used in the decision on the deblocking filtering mode and/or the decision on the deblocking filtering length. This should be compared to step S31, in which, as discussed in the foregoing, the decision is made based on the beta parameter value but preferably not based on the length offset parameter value.
Fig. 13 is a flow diagram illustrating an embodiment of step S32 using both the beta parameter value and the length offset parameter value. The method continues from step S31 in Fig. 11 and continues to step S50. Step S50 comprises calculating a beta length parameter value based on the beta parameter value and the length offset parameter value. In a particular embodiment the beta length parameter value is calculated based on, preferably equal to: BetaLength = Beta × ( Base + LengthOffset ) (10)
In equation (10), BetaLength represents the beta length parameter value, Beta represents the beta parameter value, LengthOffset represents the length offset parameter value and Base represents a base parameter value.
A next step S51 comprises determining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the beta length parameter value calculated in step S50. The base parameter value used in equation (10) is optional. Hence, in an embodiment the base parameter value is zero. In such a case, the beta length parameter is defined as Beta × LengthOffset. In another embodiment, the base parameter has a fixed value. The fixed value could advantageously be represented as a power of two, for example 2, 4, 8, etc. In such a case, the fixed value could be known to both the encoder and the decoder. No signaling of the base parameter value is thereby required.
In an alternative embodiment, the base parameter value could be signaled in the bitstream, i.e. be represented by a syntax element defining the base parameter value. In such a case, step S30 of Fig. 11 preferably also comprises retrieving, based on the encoded video data, a third syntax element defining the base parameter value in addition to retrieving the previously mentioned first and second syntax elements.
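Equation (10) can be written directly as a small helper. The default base value of four is one example value from the description (zero, another power of two, or a signaled value are equally possible), and the function name is illustrative.

```python
def beta_length(beta, length_offset, base=4):
    """Equation (10): BetaLength = Beta x (Base + LengthOffset).

    With base=4 and length_offset=0, BetaLength >> 4 equals beta >> 2,
    reproducing the prior-art threshold of equation (4); increasing
    length_offset then raises the mode/length thresholds without
    touching the on/off threshold of equation (3).
    """
    return beta * (base + length_offset)
```

For example, with beta = 8 the prior-art threshold β » 2 = 2 is recovered as beta_length(8, 0) » 4 = 32 » 4 = 2, and a length offset of 1 raises it to 40 » 4 = 2 (with the remainder shifted away) while equation (3) still compares d against 8.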
In an embodiment, step S32 comprises determining whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary based at least partly on the length offset parameter value, preferably at least partly based on the length offset parameter value and the beta parameter value, such as based on the beta length parameter value.
Fig. 14 is a flow diagram illustrating such a decision on the deblocking filtering mode performed based on the beta length parameter value. The method continues, in an embodiment, from step S50 in Fig. 13. A next step S60 comprises calculating a first threshold, T₁, based on the beta length parameter value, preferably defined as BetaLength » 4, where » represents a right shift operator defined as a » b = ⌊ a / 2ᵇ ⌋ and ⌊ x ⌋ is a floor function that outputs the largest integer not greater than x. Step S61 comprises calculating a second threshold, T₂, based on the beta length parameter value, preferably defined as BetaLength » 5.
The first threshold T₁ could be regarded as corresponding to the threshold used in equation (4), i.e. β » 2. This means that instead of using solely the beta parameter value (β) the embodiment uses the beta length parameter value (BetaLength). In addition, the beta length parameter value is divided by 2⁴ = 16, whereas in equation (4) the beta parameter value is divided by 2² = 4. This difference corresponds to using a base parameter value equal to four, i.e. the beta length parameter value is equal to BetaLength = Beta × ( 4 + LengthOffset ). In a general embodiment, if the base parameter value is defined as 2ˣ then the first threshold is preferably calculated as BetaLength » (2+X), wherein BetaLength = Beta × ( 2ˣ + LengthOffset ).
Correspondingly, the second threshold T₂ could be regarded as corresponding to the threshold used in equation (5), i.e. β » 3. This means that instead of using solely the beta parameter value (β) the embodiment uses the beta length parameter value (BetaLength). In addition, the beta length parameter value is divided by 2⁵ = 32, whereas in equation (5) the beta parameter value is divided by 2³ = 8. This difference corresponds to using a base parameter value equal to four, i.e. the beta length parameter value is equal to BetaLength = Beta × ( 4 + LengthOffset ). In a general embodiment, if the base parameter value is defined as 2ˣ then the second threshold is preferably calculated as BetaLength » (3+X), wherein BetaLength = Beta × ( 2ˣ + LengthOffset ). Step S62 comprises calculating a first variable, V₁, defined as 2×(|p2ᵢ - 2×p1ᵢ + p0ᵢ| + |q2ᵢ - 2×q1ᵢ + q0ᵢ|), whereas step S63 comprises calculating a second variable, V₂, defined as |p3ᵢ - p0ᵢ| + |q0ᵢ - q3ᵢ|. The first and second variables correspond to the values used in equations (4) and (5).
Steps S60 to S63 can be performed serially in any order or at least partly in parallel. The method then continues to step S64, which compares the first variable to the first threshold value, compares the second variable to the second threshold value and preferably also compares the value |p0ᵢ - q0ᵢ| to the threshold ( 5×tc + 1 ) » 1 (see equation (6)). If the first variable is smaller than the first threshold value, the second variable is smaller than the second threshold value and |p0ᵢ - q0ᵢ| is smaller than ( 5×tc + 1 ) » 1 the method continues to step S65, which comprises determining to apply strong deblocking filtering for pixel line number i. If any of the first and second variables is not smaller than its threshold and/or |p0ᵢ - q0ᵢ| is not smaller than ( 5×tc + 1 ) » 1 the method instead continues from step S64 to S66. This step S66 comprises determining to apply weak deblocking filtering for the pixel line number i.
The decision whether to apply strong or weak deblocking filtering as shown in Fig. 14 is preferably a pixel line specific decision. In an embodiment, the decision is made for pixel line number 0 and pixel line number 3, i.e. i=0,3. In another embodiment, the decision could be made for each pixel line in the block of pixels, such as i=0...3.
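The threshold and variable calculations of steps S60 to S66 can be sketched as follows, assuming a base parameter value of four (hence the shifts by 4 and 5) and the same p[x][i]/q[x][i] indexing convention as above; the tc-based check follows equation (6) as quoted in the text.

```python
def use_strong_filtering(p, q, i, beta_length, tc):
    """Strong/weak mode decision of Fig. 14 for pixel line i.

    Returns True for strong deblocking filtering (step S65) and False
    for weak deblocking filtering (step S66). beta_length is assumed to
    be Beta x (4 + LengthOffset), so T1 and T2 use shifts of 4 and 5
    where the prior art shifts beta by 2 and 3.
    """
    t1 = beta_length >> 4                       # step S60
    t2 = beta_length >> 5                       # step S61
    v1 = 2 * (abs(p[2][i] - 2 * p[1][i] + p[0][i])
              + abs(q[2][i] - 2 * q[1][i] + q[0][i]))   # step S62
    v2 = abs(p[3][i] - p[0][i]) + abs(q[0][i] - q[3][i])  # step S63
    gap = abs(p[0][i] - q[0][i])
    # Step S64: all three conditions must hold for strong filtering.
    return v1 < t1 and v2 < t2 and gap < ((5 * tc + 1) >> 1)
```

As the text notes, this decision is typically taken per boundary for pixel lines i=0 and i=3 only.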
Applying strong deblocking filtering is preferably performed as disclosed in the background section, i.e. as:
p0ᵢ' = Clip3( p0ᵢ - 2×tc, p0ᵢ + 2×tc, ( p2ᵢ + 2×p1ᵢ + 2×p0ᵢ + 2×q0ᵢ + q1ᵢ + 4 ) » 3 )
p1ᵢ' = Clip3( p1ᵢ - 2×tc, p1ᵢ + 2×tc, ( p2ᵢ + p1ᵢ + p0ᵢ + q0ᵢ + 2 ) » 2 )
p2ᵢ' = Clip3( p2ᵢ - 2×tc, p2ᵢ + 2×tc, ( 2×p3ᵢ + 3×p2ᵢ + p1ᵢ + p0ᵢ + q0ᵢ + 4 ) » 3 )
q0ᵢ' = Clip3( q0ᵢ - 2×tc, q0ᵢ + 2×tc, ( p1ᵢ + 2×p0ᵢ + 2×q0ᵢ + 2×q1ᵢ + q2ᵢ + 4 ) » 3 )
q1ᵢ' = Clip3( q1ᵢ - 2×tc, q1ᵢ + 2×tc, ( p0ᵢ + q0ᵢ + q1ᵢ + q2ᵢ + 2 ) » 2 )
q2ᵢ' = Clip3( q2ᵢ - 2×tc, q2ᵢ + 2×tc, ( p0ᵢ + q0ᵢ + q1ᵢ + 3×q2ᵢ + 2×q3ᵢ + 4 ) » 3 )
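The strong filtering equations above can be transcribed directly into Python; the equations use only Clip3, so no clipping to the valid sample range is needed here. Names and the tuple-based return are illustrative.

```python
def clip3(lo, hi, x):
    """Clip3(lo, hi, x): clamp x to the interval [lo, hi]."""
    return max(lo, min(hi, x))

def strong_filter_line(p, q, i, tc):
    """Strong deblocking of pixel line i per the equations above.

    Modifies three pixels on each side of the block boundary and
    returns the new values (p0', p1', p2', q0', q1', q2')."""
    p0n = clip3(p[0][i] - 2 * tc, p[0][i] + 2 * tc,
                (p[2][i] + 2 * p[1][i] + 2 * p[0][i] + 2 * q[0][i] + q[1][i] + 4) >> 3)
    p1n = clip3(p[1][i] - 2 * tc, p[1][i] + 2 * tc,
                (p[2][i] + p[1][i] + p[0][i] + q[0][i] + 2) >> 2)
    p2n = clip3(p[2][i] - 2 * tc, p[2][i] + 2 * tc,
                (2 * p[3][i] + 3 * p[2][i] + p[1][i] + p[0][i] + q[0][i] + 4) >> 3)
    q0n = clip3(q[0][i] - 2 * tc, q[0][i] + 2 * tc,
                (p[1][i] + 2 * p[0][i] + 2 * q[0][i] + 2 * q[1][i] + q[2][i] + 4) >> 3)
    q1n = clip3(q[1][i] - 2 * tc, q[1][i] + 2 * tc,
                (p[0][i] + q[0][i] + q[1][i] + q[2][i] + 2) >> 2)
    q2n = clip3(q[2][i] - 2 * tc, q[2][i] + 2 * tc,
                (p[0][i] + q[0][i] + q[1][i] + 3 * q[2][i] + 2 * q[3][i] + 4) >> 3)
    return p0n, p1n, p2n, q0n, q1n, q2n
```

A flat line is left unchanged, while a step edge of 8 to 16 is smoothed into a ramp across the six modified pixels.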
In an embodiment, step S32 comprises determining how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value, preferably at least partly based on the length offset parameter value and the beta parameter value, such as based on the beta length parameter value.
Fig. 15 is a flow diagram illustrating such a decision on the deblocking filtering length performed based on the beta length parameter value. The method continues, in an embodiment from step S50 in Fig. 13. In an alternative embodiment, the decision on the deblocking filtering length is performed only if weak deblocking filtering has been selected or is pre-selected for a current pixel line. In such a case, the method could continue from step S50 for pixel lines for which weak deblocking filtering should be applied or, if the method as shown in Fig. 14 is used to select between weak and strong deblocking filtering, from step S66 in Fig. 14.
Step S70 comprises calculating a side threshold, Ts, based on the beta length parameter value, preferably defined as ( BetaLength + ( BetaLength » 1 ) ) » 5.
The side threshold Ts could be regarded as corresponding to the thresholds used in equations (7) and (8), i.e. ( β + ( β » 1 ) ) » 3. This means that instead of using solely the beta parameter value (β) the embodiment uses the beta length parameter value (BetaLength). In addition, the sum ( BetaLength + ( BetaLength » 1 ) ) is divided by 2⁵ = 32, whereas in equations (7) and (8) the sum ( β + ( β » 1 ) ) is divided by 2³ = 8. This difference corresponds to using a base parameter value equal to four, i.e. the beta length parameter value is equal to BetaLength = Beta × ( 4 + LengthOffset ). In a general embodiment, if the base parameter value is defined as 2ˣ then the side threshold is preferably calculated as ( BetaLength + ( BetaLength » 1 ) ) » (3+X), wherein BetaLength = Beta × ( 2ˣ + LengthOffset ).
Step S71 comprises calculating a variable dp = |p2₀ - 2×p1₀ + p0₀| + |p2₃ - 2×p1₃ + p0₃|. The variable dp corresponds to the value used in equation (7) and calculated as defined in equation (1).
Steps S70 and S71 can be performed serially in any order or at least partly in parallel.
The method then continues to step S72, which compares the variable dp to the side threshold. If the variable dp is smaller than the side threshold the method continues to step S73, which determines to filter and modify two pixels in the pixel line number i. If the variable dp, however, is not smaller than the side threshold the method instead continues from step S72 to step S74, which determines to filter and modify one pixel in the pixel line. The filtering and modification of pixel(s) in steps S73 and S74 is preferably performed as disclosed in the background section. In other words, a delta parameter is preferably calculated as Δ = ( 9×( q0ᵢ - p0ᵢ ) - 3×( q1ᵢ - p1ᵢ ) + 8 ) » 4. The delta parameter is preferably clipped to get Δ = Clip3( -tc, tc, Δ ). In such a case, step S74 comprises filtering and modifying the value of the pixel that is closest to the block boundary in the pixel line number i, preferably as p0ᵢ' = Clip1Y( p0ᵢ + Δ ). Step S73 correspondingly comprises filtering and modifying the values of the two pixels that are closest to the block boundary in the pixel line number i, preferably as p0ᵢ' = Clip1Y( p0ᵢ + Δ ) and p1ᵢ' = Clip1Y( p1ᵢ + Δp ), wherein Δp = Clip3( -( tc » 1 ), tc » 1, ( ( ( p2ᵢ + p0ᵢ + 1 ) » 1 ) - p1ᵢ + Δ ) » 1 ). In a particular embodiment, step S71 additionally comprises calculating a variable dq = |q2₀ - 2×q1₀ + q0₀| + |q2₃ - 2×q1₃ + q0₃|. In such a case, step S72 preferably also comprises comparing this variable dq to the side threshold. If the variable dq is smaller than the side threshold the method continues to step S73, where it is determined to filter and modify two pixels in the neighboring block of pixels on the pixel line number i. If the variable dq is not smaller than the side threshold the method instead continues to step S74 where one pixel is filtered and modified in the pixel line i in the neighboring block of pixels.
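The side-threshold decision of steps S70 to S74 can be sketched as follows, again assuming a base parameter value of four (hence the shift by 5) and the p[x][i]/q[x][i] indexing convention used earlier; the function returns, per side, whether two pixels rather than one are to be filtered and modified.

```python
def side_decisions(p, q, beta_length):
    """Per-side length decision of Fig. 15.

    Computes the side threshold Ts (step S70) and the variables dp and
    dq over pixel lines 0 and 3 (step S71), then returns a pair of
    booleans: (filter two pixels on the p side, filter two pixels on
    the q side), per the comparisons of steps S72-S74."""
    ts = (beta_length + (beta_length >> 1)) >> 5
    dp = (abs(p[2][0] - 2 * p[1][0] + p[0][0])
          + abs(p[2][3] - 2 * p[1][3] + p[0][3]))
    dq = (abs(q[2][0] - 2 * q[1][0] + q[0][0])
          + abs(q[2][3] - 2 * q[1][3] + q[0][3]))
    return dp < ts, dq < ts
```

Note that the two sides are decided independently: a textured block on one side of the boundary can restrict filtering to one pixel there while the smooth side still gets two.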
In an embodiment, step S74 then comprises determining to calculate q0ᵢ' = Clip1_Y( q0ᵢ - Δ ), i.e. filtering the pixel that is closest to the block boundary in the pixel line number i in the neighboring block of pixels. Step S73 preferably comprises determining to calculate q0ᵢ' = Clip1_Y( q0ᵢ - Δ ) and q1ᵢ' = Clip1_Y( q1ᵢ + Δq ), wherein Δq = Clip3( -( tc >> 1 ), tc >> 1, ( ( ( q2ᵢ + q0ᵢ + 1 ) >> 1 ) - q1ᵢ - Δ ) >> 1 ). Step S73 then filters and modifies the values of the two pixels that are closest to the block boundary in the pixel line number i in the neighboring block of pixels. As mentioned in the foregoing, steps S70 to S74 of Fig. 15 are preferably performed for the pixel line(s) for which weak deblocking filtering has been selected, i.e. pixel lines i = 0, 3, such as in step S66 of Fig. 14, or for which weak deblocking filtering is pre-defined, i.e. pixel lines i = 1, 2.
The method as shown in Fig. 15 may comprise one additional, optional step that is preferably performed prior to step S70. This optional step involves determining whether to apply any weak deblocking filtering at all to the pixel line number i. In a particular embodiment, this decision is based on a comparison of a delta value (Δ) and a threshold defined based on the parameter tc. Thus, in this optional embodiment weak deblocking filtering is only applied to a pixel line number i if |Δ| < 10×tc. The delta value is preferably calculated as Δ = ( 9×( q0ᵢ - p0ᵢ ) - 3×( q1ᵢ - p1ᵢ ) + 8 ) >> 4.
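The weak filtering computations described above can be summarized in a short C sketch. This is illustrative code, not the HEVC reference implementation: 8-bit luma samples are assumed (so Clip1_Y clips to [0, 255]), and the function and parameter names are made up for this example. The flags filter_p2 and filter_q2 represent the outcome of the dp and dq side-threshold decisions, i.e. whether two pixels or one pixel are modified on each side.

```c
#include <stdlib.h>

/* Clip x into [lo, hi] (the HEVC Clip3 operation). */
static int clip3(int lo, int hi, int x) {
    return x < lo ? lo : (x > hi ? hi : x);
}

/* Clip to the 8-bit sample range (Clip1_Y for 8-bit content). */
static int clip1y(int x) { return clip3(0, 255, x); }

/* Weak (normal) deblocking of one pixel line across a block boundary.
 * p[k] = pk_i and q[k] = qk_i, with index 0 closest to the boundary.
 * Returns 0 when |delta| >= 10*tc and no filtering is applied at all. */
static int weak_filter_line(int p[3], int q[3], int tc,
                            int filter_p2, int filter_q2) {
    int delta = (9 * (q[0] - p[0]) - 3 * (q[1] - p[1]) + 8) >> 4;
    if (abs(delta) >= 10 * tc)
        return 0;                          /* skip weak filtering entirely */
    delta = clip3(-tc, tc, delta);
    int p0 = clip1y(p[0] + delta);         /* p0' = Clip1_Y( p0 + delta ) */
    int q0 = clip1y(q[0] - delta);         /* q0' = Clip1_Y( q0 - delta ) */
    if (filter_p2) {                       /* dp < side threshold: modify p1 too */
        int dp1 = clip3(-(tc >> 1), tc >> 1,
                        ((((p[2] + p[0] + 1) >> 1) - p[1] + delta) >> 1));
        p[1] = clip1y(p[1] + dp1);
    }
    if (filter_q2) {                       /* dq < side threshold: modify q1 too */
        int dq1 = clip3(-(tc >> 1), tc >> 1,
                        ((((q[2] + q[0] + 1) >> 1) - q[1] - delta) >> 1));
        q[1] = clip1y(q[1] + dq1);
    }
    p[0] = p0;
    q[0] = q0;
    return 1;
}
```

For example, a step edge p = {60, 62, 64}, q = {70, 68, 66} with tc = 4 gives Δ = 5, clipped to 4, so the line is smoothed towards the ramp 64/66 on both sides of the boundary.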
In a particular embodiment, step S32 of Fig. 13 comprises determining whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and determining how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value, preferably based at least partly on the length offset parameter value and the beta parameter value, such as based on the beta length parameter value. An implementation example of such an embodiment is basically a combination of Figs. 14 and 15.
According to the embodiments it may be desirable to modify the threshold values in equations (4), (5), (7) and (8), which affect the number of pixels modified by deblocking filtering, without large modifications of parameter β, which determines which parts of block boundaries are modified by deblocking filtering.
In brief, an idea of the embodiments is to use a single parameter to adjust the "length" of the deblocking filtering in both weak (normal) and strong filtering modes, while at the same time being able to adjust the length of the deblocking filtering independently of the parameter β that determines which block boundaries are processed by deblocking filtering.
Embodiments relate to the introduction of a new parameter indicative of a length offset, which is used to determine a beta length. The beta parameter is then replaced by the beta length in equations (4), (5), (7) and (8). Fig. 1A schematically illustrates such an embodiment performed in a filtering control device. The method starts in step S1, where the beta length parameter value is determined as disclosed herein. The beta parameter in equations (4), (5), (7) and (8) is then replaced by this determined beta length parameter in step S2.
Other embodiments relate to signaling of the parameters as shown in the flow diagrams of Figs. 1B and 1C. Fig. 1B illustrates a method performed in a transmitter, which involves, in step S10, sending signaling according to the embodiments, i.e. transmitting the syntax elements defining the beta parameter and the length offset parameter. Fig. 1C illustrates a method performed in a receiver, which involves, in step S20, receiving signaling according to the embodiments, i.e. receiving the syntax elements defining the beta parameter and the length offset parameter.
According to embodiments, the "length of deblocking filtering" can be adjusted. The length of the deblocking filtering here refers to the number of pixels from the block boundary that can be modified by deblocking filtering. The length of deblocking filtering can alternatively be adjusted by the same parameter for both the strong and the weak (normal) deblocking filter. Further, a parameter to control the thresholds is signaled in, e.g., the slice header or, alternatively, in the APS header or in other parts of the bitstream, such as in another parameter set, e.g. the PPS or SPS. Currently, the thresholds that are used in the decisions on how many pixels are modified relative to the block boundary depend on the beta parameter. In order to increase these thresholds, i.e. increase the values of these thresholds for a certain QP, one would need to increase the beta parameter. This, however, would change the decisions on which block boundaries are processed by deblocking and which are not. With the embodiments, it is instead possible to make these two decisions, i.e. the number of pixels to be modified by deblocking and whether deblocking should be applied or not at a block boundary, separately. It is possible to modify on average more pixels from the block boundary without choosing more block boundaries to be processed by deblocking.
The two parts, i.e. the number of pixels to be modified and the signaling, are independent of each other and can be used independently or in combination. The details of the particular implementation are given in the following detailed description and the embodiments.
The proposed embodiments allow adjusting the subjective quality of deblocking filtering, for instance, on a sequence and/or slice basis. This gives the content provider the possibility to adjust the deblocking filtering strength or length of deblocking filtering to a particular sequence. The deblocking filtering strength can also vary on a frame or picture basis.
It is possible to change the "length" of the deblocking filtering, i.e. the number of modified pixels from the block boundary, without filtering more or less block boundaries.
It is proposed, in an embodiment, to introduce a threshold parameter for the deblocking length adjustment that would possibly change the value of the parameters ( β >> 2 ), ( β >> 3 ) and ( ( β + ( β >> 1 ) ) >> 3 ) in equations (4), (5), (7), (8). In order to achieve that, it is suggested to derive a beta length parameter as follows.
BetaLength = Beta × ( Base + LengthOffset ) (10)

wherein Beta is the same as β and Base is preferably a fixed value that can be represented as a power of two, for example 2, 4, 8, etc. Base could be signaled in the bitstream or be fixed for the encoder and decoder. The values of the respective thresholds controlling the length of deblocking filtering are then obtained by using the value of BetaLength instead of Beta, see Fig. 1A, and additionally dividing it by the value of Base. In this way, the value of BetaLength is approximately equal to the value of Beta when LengthOffset is equal to 0. The operation required to obtain the values of BetaLength is thus basically multiplying the beta parameter value by the base parameter value and then dividing by the same parameter when calculating the thresholds in equations (4), (5), (7), (8).
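Equation (10) and the removal of the extra factor Base when forming a threshold can be sketched as follows. This is illustrative code under the assumption that Base is a power of two, 2^base_log2, so that the division by Base folds into the right shift; the function names are not taken from any reference implementation.

```c
/* Equation (10): BetaLength = Beta * ( Base + LengthOffset ). */
static int beta_length(int beta, int base_log2, int length_offset) {
    return beta * ((1 << base_log2) + length_offset);
}

/* The ( beta >> 2 ) threshold of equation (4), with beta replaced by
 * BetaLength and the result divided by Base:
 * threshold = BetaLength >> (2 + log2(Base)). */
static int eq4_threshold(int beta, int base_log2, int length_offset) {
    return beta_length(beta, base_log2, length_offset) >> (2 + base_log2);
}
```

With beta = 32, Base = 4 and LengthOffset = 0 the threshold is 128 >> 4 = 8, i.e. identical to the original beta >> 2, while LengthOffset = 1 raises it to 160 >> 4 = 10 without touching beta itself.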
A Base value greater than 1 may be used in order to have finer granularity of the threshold values when sending integer values of LengthOffset. In this way, a Base value of 2 enables a threshold granularity of half the Beta value, a Base value of 4 enables a quarter-value granularity, etc. Fig. 9 illustrates the dependency of the thresholds in equations (4), (5) and (7) on QP values (solid lines). The dashed lines correspond to the values of the thresholds using LengthOffset = 1 and Base = 4. Fig. 10 corresponds to the results in Fig. 9 but with the difference that the dashed lines correspond to the values of the thresholds using LengthOffset = 2 and Base = 4.
The value ( Base + LengthOffset ) can also be clipped in the decoder in order to ensure that the value is in the range (MinValue, MaxValue), where MinValue can be, for example, 0 and MaxValue is a defined maximum value. The value of BetaLength can be used for the calculation of all the mentioned thresholds in equations (4), (5), (7) and (8) or for a subset of these thresholds. An example of the latter case could be to use BetaLength, for instance, for the thresholds related to the application of the strong filter, i.e. equations (4) and (5). Alternatively, it can be used for the thresholds used in equations (7), (8), related to choosing between filtering one or two pixels from the block boundary in weak (normal) filtering.
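The decoder-side clamping of ( Base + LengthOffset ) can be sketched as below. MinValue = 0 is the example given in the text; MaxValue = 16 is purely an illustrative choice here, not a value mandated by any embodiment.

```c
/* Clamp ( Base + LengthOffset ) to a legal range before it is used to
 * derive BetaLength.  The limits are assumed values for illustration. */
static int clamp_base_plus_offset(int base, int length_offset) {
    const int MinValue = 0, MaxValue = 16;   /* assumed limits */
    int v = base + length_offset;
    if (v < MinValue) v = MinValue;
    if (v > MaxValue) v = MaxValue;
    return v;
}
```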
The presented threshold can also be used together with some other conditions, for example, for intra blocks only or with a boundary strength equal to some particular value. In such a case, the filtering control method is limited to block boundaries of intra blocks only or to block boundaries with a boundary strength equal to or larger than the particular value.
Use of BetaLength with some particular value can also be linked to conditions like the size of the block of pixels. In such a case, the particular value of BetaLength is preferably increased for increasing block sizes. This can be achieved by defining the value of the length offset parameter to be dependent on the size of the block of pixels. In case the usage of BetaLength values is linked to a particular type of block boundary, as described above, the value of LengthOffset can either be signaled in the bitstream or be hardcoded. In the former case, different values of LengthOffset are used for different boundary strengths or when certain conditions are met. In the latter case, the second syntax element does not need to be signaled in the bitstream.
For the HEVC deblocking, a deblocking length offset (LengthOffset) is preferably sent in the bitstream, see Figs. 1B and 1C. LengthOffset can be sent in the slice header, the APS or in other parts of the bitstream. LengthOffset can alternatively be signaled in the SPS or PPS. If LengthOffset is signaled in the PPS, the same PPS can be used for pictures of a particular type in the sequence. Sending LengthOffset in the SPS provides modifications of the offsets for the whole video sequence.
It is possible to send LengthOffset values using signed variable length code (VLC) values, for example, with Exponential-Golomb codes. The syntax for these values is provided in the following embodiments. The obtained VLC codes can also be further compressed by arithmetic codes, thus improving compression.
An alternative embodiment is to signal the value of LengthOffset using unsigned VLC values. In this case, a value A representing ( Base + LengthOffset ) can be transmitted in the bitstream, e.g. in the slice header or APS. Then, a positive integer value for ( Base + LengthOffset ) can always be signaled. BetaLength is then equal to Beta × A instead of Beta × ( Base + LengthOffset ). A value of the length offset parameter with some multiplier or divisor can also be used, e.g. length_offset_div2.
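The difference between signed (se(v)) and unsigned (ue(v)) signaling comes down to the standard Exp-Golomb value-to-code-number mapping used by H.264/HEVC, which can be sketched as follows. This is the conventional mapping from those standards, not text taken from the embodiments, and the function names are illustrative.

```c
/* se(v): map signed values 0, 1, -1, 2, -2, ... to code numbers
 * 0, 1, 2, 3, 4, ...  ue(v) transmits the value as the code number directly. */
static unsigned se_to_codenum(int v) {
    return v > 0 ? 2u * (unsigned)v - 1u : 2u * (unsigned)(-v);
}

/* Inverse mapping: recover the signed value from the code number. */
static int codenum_to_se(unsigned k) {
    return (k & 1u) ? (int)((k + 1u) / 2u) : -(int)(k / 2u);
}

/* Length in bits of the Exp-Golomb codeword for code number k:
 * 2*floor(log2(k + 1)) + 1. */
static int eg_bits(unsigned k) {
    int n = 0;
    for (unsigned m = k + 1u; m > 1u; m >>= 1) n++;
    return 2 * n + 1;
}
```

Signaling the always-positive value A = ( Base + LengthOffset ) with ue(v), as in the alternative embodiment above, avoids the sign mapping entirely.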
Seven particular embodiments are disclosed here below. These embodiments are for exemplary purposes, and are not intended to be limiting. The embodiments disclosed can either be combined in any permissible combination or used separately.
Some changes to the respective clause for the deblocking filtering in the HEVC slice header syntax are included in some of the embodiments below. In case of using another bit depth than 8 bits per pixel component, the resulting threshold values should be multiplied by the respective bit depth scaling factor. Alternatively, the beta value should be multiplied by the respective bit depth scaling factor. When the described syntax elements (in bold typeset) are not present in the bitstream, their value shall be inferred to be equal to 0. In a first embodiment, an additional parameter is sent in the bitstream to control whether strong or weak filtering is done as well as controlling how many pixels on each side should be filtered. This parameter is a complement to the existing parameter that, in the embodiments, is used to decide whether to filter a block boundary or not. The decision whether to filter a block or not is based on the old parameter, and the decision whether strong or weak filtering should be used is based on the new parameter. The decision on how many pixels to filter on each side of a block boundary in the weak (normal) filter operation is also based on this new parameter.
A decoder is, according to this embodiment, configured to perform the following steps.
1. The decoder receives video data and parses syntax elements that control the deblocking filtering process. The syntax elements include one parameter A and one parameter B.
2. The decoder decodes the picture.
3. The decoder performs deblocking filter operations on the decoded picture. The decision whether to use weak or strong filtering is based partly on A. The decisions on how many pixels to filter on each side of a block boundary are based partly on A. The decision whether to filter a block or not is based partly on B.
Alternatively, the decision whether to use weak or strong filtering is based partly on A and B, and the decisions on how many pixels to filter on each side of a block boundary are based partly on A and B.
In a second embodiment the new parameter from the first embodiment, here called A, is used together with the existing parameter, here called B, to form a new parameter C, where C = B × ( X + A ) and X is a predetermined, fixed value. Alternatively, the value of X is sent in the bitstream. The decision whether to use weak or strong filtering is based on C. How many pixels to filter on each side of a block boundary is also based on C.

1. The decoder receives video data and parses syntax elements that control the deblocking filtering process. The syntax elements include one parameter A and one parameter B.

2. The decoder decodes the picture.

3. The decoder performs deblocking filter operations on the decoded picture. The decoder computes C = B × ( X + A ), where X is a predetermined fixed value. Alternatively, the value of X is sent in the bitstream.

4. The decision whether to use weak or strong filtering is based partly on C. The decisions on how many pixels to filter on each side of a block boundary are based partly on C. The decision whether to filter a block or not is based partly on B.
In a third embodiment the parameter C from the second embodiment, or the parameter C/X, or C >> log2(X), is used instead of β in equations (4), (5), (7) and (8).
1. The decoder receives video data and parses syntax elements that control the deblocking filtering process. The syntax elements include one parameter A and one parameter B.

2. The decoder decodes the picture.
3. The decoder performs deblocking filter operations on the decoded picture. The decoder computes C = B × ( X + A ), where X is a predetermined fixed value. Alternatively, the value of X is sent in the bitstream.
4. The decision whether to use weak or strong filtering is done according to equations (4), (5), (6), where C, C/X or C >> log2(X) is used instead of β.
5. The decisions on how many pixels to filter on each side of a block boundary are done according to equations (7) and (8), where C, C/X or C >> log2(X) is used instead of β.
6. The decision whether to filter a block or not is based partly on B.
In a fourth embodiment, the syntax code below provides an example of deblocking parameter signaling in the slice header.

if( deblocking_filter_control_present_flag ) {
    if( deblocking_filter_in_aps_enabled_flag )
        inherit_dbl_params_from_aps_flag        u(1)
    if( !inherit_dbl_params_from_aps_flag ) {
        disable_deblocking_filter_flag          u(1)
        if( !disable_deblocking_filter_flag ) {
            beta_offset_div2                    se(v)
            tc_offset_div2                      se(v)
            df_length_offset                    se(v)
        }
    }
}
The syntax code below provides an example of deblocking parameter signaling in a parameter set, here represented by an APS.
[The APS syntax table is shown as a figure in the original document and is not reproduced here.]
In this fourth embodiment the parameters beta and beta length can be calculated as defined below:

Int iBeta = betatable_8x8[ iIndexB ] * iBitdepthScale;
Int iBetaLength = iBeta * ( 4 + m_dfLengthOffset ); (11)

m_dfLengthOffset represents the length offset parameter value and is preferably obtained from the syntax element df_length_offset in the syntax code above. iIndexB is an index used in a look-up table (betatable_8x8) and iBitdepthScale is derived from a bit depth as signaled in the PPS or SPS. The previously mentioned side threshold is then preferably calculated as:

Int iSideThreshold = ( iBetaLength + ( iBetaLength >> 1 ) ) >> 5;

Here, the value applied as the right shift in the calculation of iSideThreshold has been increased from 3 in the original code (see equations (7) and (8)) to 5, which corresponds to the division by the value of the Base parameter, which is equal to 4 in equation (11). One can notice that equation (11) is basically an instance of equation (10) described earlier with the base parameter equal to 4.
Equations (4) and (5) become, in this embodiment, equal to (12) and (13), respectively:

2×dpqᵢ < ( BetaLength >> 4 ) (12)

( |p3ᵢ - p0ᵢ| + |q0ᵢ - q3ᵢ| ) < ( BetaLength >> 5 ) (13)
One should notice that some other value of the shift in the equations above can be used, as well as another particular formula for determining the threshold value, while still being covered by the embodiments. In a fifth embodiment, an example with df_length_offset signaling using unsigned VLC is provided below for the slice header:

if( deblocking_filter_control_present_flag ) {
    if( deblocking_filter_in_aps_enabled_flag )
        inherit_dbl_params_from_aps_flag        u(1)
    if( !inherit_dbl_params_from_aps_flag ) {
        disable_deblocking_filter_flag          u(1)
        if( !disable_deblocking_filter_flag ) {
            beta_offset_div2                    se(v)
            tc_offset_div2                      se(v)
            df_length_offset                    ue(v)
        }
    }
}

and for the APS:

aps_deblocking_filter_flag                      u(1)
if( aps_deblocking_filter_flag ) {
    disable_deblocking_filter_flag              u(1)
    if( !disable_deblocking_filter_flag ) {
        beta_offset_div2                        se(v)
        tc_offset_div2                          se(v)
        df_length_offset                        ue(v)
    }
}
In this fifth embodiment the parameters beta and beta length can be calculated as defined below:

Int iBeta = betatable_8x8[ iIndexB ] * iBitdepthScale;
Int iBetaLength = iBeta * m_dfLengthOffset; (14)

The previously mentioned side threshold is then preferably calculated as:

Int iSideThreshold = ( iBetaLength + ( iBetaLength >> 1 ) ) >> 5;

Equations (4) and (5) in this embodiment are equal to (15) and (16), respectively:

2×dpqᵢ < ( BetaLength >> 4 ) (15)

( |p3ᵢ - p0ᵢ| + |q0ᵢ - q3ᵢ| ) < ( BetaLength >> 5 ) (16)
One should notice that some other value of the shift in the equations above can be used, as well as another particular formula for determining the threshold value, while still being covered by the embodiments.
In a sixth embodiment, the parameters in the form described in one of the previous embodiments can be put into the adaptation parameter set. In this case, the parameters are applied to the whole frame or picture rather than to one slice.
In a seventh embodiment, the described parameters can alternatively be sent in the SPS or PPS. In this case, the same set of parameters can be reused for several frames or pictures, e.g. frames or pictures of the same type, in the case of the PPS, or kept constant for the entire sequence, in the case of the SPS.

Fig. 17 is a flow diagram illustrating a deblocking filtering control method performed during video encoding. The method starts in step S80, where a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence is determined. A next step S81 comprises determining a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
A next step S82 associates a first syntax element representing or defining the beta parameter value determined in step S80 and a second syntax element representing or defining the length offset parameter value determined in step S81 with an encoded representation of the picture.
In an embodiment, the length offset parameter value is determined in step S81 based on pixel values in the two neighboring blocks of pixels. In another embodiment, the length offset parameter is determined in step S81 based on pixel values in a current slice of the picture. Thus, in this embodiment the same length offset parameter value is used for all block boundaries between neighboring blocks of pixels in the slice. Therefore, the length offset parameter value is, in this embodiment, estimated based on, typically, all pixel values adjacent to block boundaries in the slice. For instance, it can be an average of these pixel values that is used to determine the length offset parameter value. For instance, if the pixels in the two blocks represent a rather smooth background or area, i.e. having substantially the same pixel values or at least pixel values that differ less than a defined threshold, then strong deblocking filtering could be applied to the available pixel lines, typically the first and fourth pixel lines, whereas if weak deblocking filtering is applied to pixel lines, in this case typically the second and third pixel lines, two pixel values could be filtered and modified on each side of the block boundary. This means that the length offset parameter value is set so that V1 < T1 and V2 < T2 for the first and fourth pixel lines in Fig. 14 and dp < Ts (and typically dq < Ts) for the second and third pixel lines in Fig. 15. The variables V1, V2, dp and dq are dictated by the pixel values in the blocks of pixels, whereas the value of the length offset parameter affects the values of the thresholds T1, T2 and Ts. Correspondingly, if there is a structure in at least one of the two neighboring blocks, such as an edge passing over one or more of the pixel lines, then it is generally preferred to use weak deblocking filtering and to modify only a single pixel value on a pixel line, if any deblocking filtering is applied at all.
Hence, in such a case, the length offset parameter is preferably set to a value such that at least one of the variables V1 and V2 is not smaller than its associated threshold T1 or T2, which are defined at least partly based on the length offset parameter value. Correspondingly, the length offset parameter is preferably set to a value such that dp (and dq) is not smaller than Ts for the pixel line(s) comprising the structure. In another embodiment of step S81 the length offset parameter value is determined based on at least one encoding or slice parameter used for encoding the picture. Examples of such parameters based on which the length offset parameter value can be determined include the quantization parameter (QP) or the lambda parameter used in the rate-distortion optimization of the encoded picture. Generally, the encoding of a slice in a picture of a video sequence generates an encoded representation 20 of the slice comprising a slice header 21 and slice data 22 as shown in Fig. 22. The encoded representation 20 is output from the encoding process as a so-called Network Adaptation Layer (NAL) unit 11 as shown in Fig. 21. The first part of the NAL unit 11 is a header that contains an indication of the type of data in the NAL unit 11. The remaining part of the NAL unit 11 contains payload data in the form of the slice header 21 and slice data 22.
The NAL unit 11 may then be added with headers 12 to form a data packet 10 that can be transmitted as a part of a bitstream from the encoder to the decoder. For instance, Real-time Transport Protocol (RTP), User Datagram Protocol (UDP) and Internet Protocol (IP) headers 12 could be added to the NAL unit 11. This form of packetization of NAL units 11 merely constitutes an example in connection with video transport. Other approaches of handling NAL units 11, such as file formats, MPEG-2 transport streams, MPEG-2 program streams, etc., are possible.
In an embodiment of step S82 in Fig. 17 applicable to the generation of an encoded representation 20 of a picture comprising at least one slice header 21 and encoded video data represented by the slice data 22 in Fig. 22, the first syntax element and the second syntax element are inserted into a slice header 21 of the at least one slice header 21. Thus, in this embodiment the encoded representation of the picture itself carries the syntax elements defining and enabling determination of the beta parameter value and the length offset parameter value.
Fig. 18 is a flow diagram illustrating another embodiment of step S82 in Fig. 17. The method continues from step S81 in Fig. 17 and then continues to step S90. Step S90 comprises inserting the first syntax element and the second syntax element into a parameter set associated with the video sequence. For instance, the syntax elements could be inserted into an APS, PPS, SPS or VPS. It is generally preferred to include the syntax elements in the same parameter set, but this is not necessary. For instance, the first syntax element could be included in one of an APS, PPS, SPS or VPS with the second syntax element in another of the APS, PPS, SPS or VPS. It is in fact possible to distribute, for instance, the first syntax element, which could comprise multiple syntax element parameters, such as the previously mentioned base QP parameter and delta QP parameters, among multiple parameter sets.
A next step S91 comprises inserting a parameter set identifier into a slice header 21 of the at least one slice header 21 in the encoded representation 20 of the picture. This parameter set identifier enables identification of the parameter set into which the first and second syntax elements were inserted in step S90. The parameter set identifier could directly identify the relevant parameter set, such as an APS identifier or PPS identifier. Alternatively, the parameter set identifier identifies a first parameter set, such as PPS, which in turn comprises a second parameter set identifier identifying a second parameter set, such as SPS, which comprises the first or second syntax elements or comprises a third parameter set identifier identifying a third parameter set, such as VPS, which comprises the first or second syntax elements.
If the first and second syntax elements are distributed among multiple parameter sets, step S91 optionally comprises inserting multiple parameter set identifiers into the slice header.
The two embodiments of step S82 described above may be combined. Hence, it is possible to distribute the first and second syntax elements among a slice header and at least one parameter set.
Fig. 7 is a schematic block diagram of a filtering control device 100 according to an embodiment. The filtering control device 100 comprises a determining unit 110, also referred to as a determiner, determining means or module. The determining unit 110 is configured to determine a beta parameter value from a first syntax element retrieved based on encoded video data and a length offset parameter value from a second syntax element retrieved based on the encoded video data. The determined beta parameter value and the length offset parameter value are used by a connected processing unit 120, also referred to as a processor or processing means or module. The processing unit 120 is configured to determine whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data. The processing unit 120 performs this determination at least partly based on the beta parameter value from the determining unit 110. The processing unit 120 is also configured to determine at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary, and to perform this determination at least partly based on the length offset parameter value from the determining unit 110. In an embodiment the processing unit 120 is configured to determine whether or not to apply deblocking filtering on the block boundary based at least partly on the beta parameter value but not based on the length offset parameter value.
In an embodiment the processing unit 120 is configured to determine at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value and the beta parameter value.
Fig. 16 is a schematic block diagram illustrating optional units of the processing unit 120 of the filtering control device 100 in Fig. 7.
In an embodiment the processing unit 120 comprises a beta length calculator 121, also referred to as a beta length calculating unit, means or module. The beta length calculator 121 is configured to calculate a beta length parameter value based on the beta parameter value and the length offset parameter value. In an embodiment the beta length calculator 121 is configured to calculate the beta length parameter value based on, preferably equal to, Beta × ( Base + LengthOffset ). In such a case, the processing unit 120 is preferably configured to determine at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the beta length parameter value.
The processing unit 120 optionally comprises a d variable calculator 128, also referred to as a d variable calculating unit, means or module. The d variable calculator 128 is configured to calculate a variable d for the block boundary between the two neighboring blocks of pixels based on pixel values in the first and fourth pixel lines in the two neighboring blocks of pixels. The d variable calculator 128 preferably calculates the variable d = |p2₀ - 2×p1₀ + p0₀| + |p2₃ - 2×p1₃ + p0₃| + |q2₀ - 2×q1₀ + q0₀| + |q2₃ - 2×q1₃ + q0₃|. The processing unit 120 is, in this embodiment, configured to determine to apply deblocking filtering on the block boundary if the variable d is smaller than the beta parameter value and otherwise determine not to apply deblocking filtering on the block boundary. In an embodiment, the processing unit 120 is configured to determine whether to apply strong or weak deblocking filtering for a pixel line number i crossing the block boundary between the two neighboring blocks of pixels. The processing unit 120 is configured to perform this determination based at least partly on the length offset parameter value, preferably based at least partly on the length offset parameter value and the beta parameter value, and more preferably at least partly based on the beta length parameter value.
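The d variable computation and the on/off decision for the block boundary can be sketched as follows. This is illustrative code, not the reference implementation: p[line][k] holds pixel pk of pixel line `line` in the current block, with k = 0 closest to the boundary, and q likewise for the neighboring block.

```c
#include <stdlib.h>

/* Variable d for a block boundary, formed from the second differences of
 * pixel lines 0 and 3 on each side, per the formula above. */
static int calc_d(int p[4][3], int q[4][3]) {
    int d = 0;
    int lines[2] = {0, 3};                           /* only lines 0 and 3 contribute */
    for (int n = 0; n < 2; n++) {
        int i = lines[n];
        d += abs(p[i][2] - 2 * p[i][1] + p[i][0]);   /* |p2_i - 2*p1_i + p0_i| */
        d += abs(q[i][2] - 2 * q[i][1] + q[i][0]);   /* |q2_i - 2*q1_i + q0_i| */
    }
    return d;
}

/* Deblocking is applied on the boundary only when d < beta. */
static int apply_deblocking(int p[4][3], int q[4][3], int beta) {
    return calc_d(p, q) < beta;
}
```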
In such an embodiment the processing unit 120 preferably comprises a first threshold calculator 124, a second threshold calculator 125, a first variable calculator 126 and a second variable calculator 127, which are also referred to as first threshold calculating unit, means or module, second threshold calculating unit, means or module, first variable calculating unit, means or module and second variable calculating unit, means or module.
The first threshold calculator 124 is configured to calculate a first threshold based on the beta length parameter value, preferably defined as BetaLength >> (2+X), wherein the base parameter, Base, has a value of 2^X. In a particular embodiment the first threshold is calculated as BetaLength >> 4 by the first threshold calculator 124. The second threshold calculator 125 is correspondingly configured to calculate a second threshold based on the beta length parameter value, preferably defined as BetaLength >> (3+X), such as BetaLength >> 5.
The first variable calculator 126 is configured to calculate a first variable for the current pixel line number i based on pixel values of pixels present in the first block and pixels present in the second block divided by the block boundary. In a particular embodiment, i = 0, 3. The first variable calculator 126 preferably calculates the first variable defined as 2×( |p2ᵢ - 2×p1ᵢ + p0ᵢ| + |q2ᵢ - 2×q1ᵢ + q0ᵢ| ). The second variable calculator 127 is correspondingly configured to calculate a second variable for the current pixel line number i based on pixel values of pixels present in the first block and pixels present in the second block divided by the block boundary. The second variable calculator 127 preferably calculates the second variable defined as |p3ᵢ - p0ᵢ| + |q0ᵢ - q3ᵢ|. The processing unit 120 is, in this embodiment, preferably configured to determine to apply strong deblocking filtering for the pixel line number i if the first variable is smaller than the first threshold, the second variable is smaller than the second threshold and, optionally but preferably, |p0ᵢ - q0ᵢ| is smaller than ( 5×tc + 1 ) >> 1. If not all of these criteria are met for the pixel line number i, the processing unit 120 is preferably configured to determine to apply weak deblocking filtering for the pixel line number i. The processing unit 120 optionally comprises a side threshold calculator 122 and a dp variable calculator 123, also referred to as a side threshold calculating unit, means or module and a dp variable calculating unit, means or module. The side threshold calculator 122 is configured to calculate a side threshold based on the beta length parameter value. In a particular embodiment the side threshold calculator 122 is configured to calculate the side threshold as ( BetaLength + ( BetaLength >> 1 ) ) >> (3+X), preferably as ( BetaLength + ( BetaLength >> 1 ) ) >> 5.
The dp variable calculator 123 is configured to calculate a variable dp for a current pixel line number i, preferably i=0...3. In a particular embodiment, the pixel line is a line for which the processing unit 120 has determined to apply weak deblocking filtering. The dp variable calculator 123 is configured to calculate the variable dp based on pixel values of pixels present in the first (current) block of the two blocks of pixels divided by the block boundary. The dp variable calculator 123 preferably calculates the variable dp = |p20 - 2xp10 + p00| + |p23 - 2xp13 + p03|.
The processing unit 120 is, in this embodiment, preferably configured to determine to filter and modify two pixels in the pixel line number i if the variable dp is smaller than the side threshold and otherwise determine to filter and modify one pixel in the pixel line. In the former case, the two pixels are preferably the two pixels in pixel line number i that are closest to the block boundary whereas in the latter case the one pixel is the pixel closest to the block boundary in the pixel line number i.
In an optional embodiment the dp variable calculator 123 is also configured to calculate a variable dq based on pixel values of pixels present in the pixel line number i in the second block of the two blocks of pixels divided by the block boundary. The dp variable calculator 123 preferably calculates the variable dq = |q20 - 2xq10 + q00| + |q23 - 2xq13 + q03|. The processing unit 120 is then preferably configured to determine to filter and modify two pixels in the pixel line number i in the second block if the variable dq is smaller than the side threshold and otherwise determine to filter and modify one pixel in the pixel line number i in the second block. In an embodiment, the processing unit 120 is configured to determine both i) whether to apply weak or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary based at least partly on the length offset parameter value, preferably based at least partly on the length offset parameter value and the beta parameter value, more preferably based at least partly on the beta length parameter value. In such a case, the processing unit 120 may contain all the units 121-128 as shown in Fig. 16.
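A sketch of the one-pixel-versus-two-pixels decision on each side of the boundary, combining the side threshold with the dp and dq variables defined above (p[j][i] and q[j][i] are hypothetical accessors for the pixel at position j from the block boundary in line i; not a normative implementation):

```python
def pixels_to_modify_per_side(p, q, beta_length, x=2):
    """Sketch of the one-vs-two pixel decision for weak filtering.

    p[j][i] / q[j][i] are hypothetical pixel accessors. The side
    threshold and the dp/dq variables follow the formulas in the text;
    returns (pixels to modify on the p side, pixels on the q side).
    """
    # Side threshold: ( BetaLength + ( BetaLength >> 1 ) ) >> (3+X)
    side_thr = (beta_length + (beta_length >> 1)) >> (3 + x)
    # dp = |p20 - 2xp10 + p00| + |p23 - 2xp13 + p03|
    dp = (abs(p[2][0] - 2 * p[1][0] + p[0][0])
          + abs(p[2][3] - 2 * p[1][3] + p[0][3]))
    # dq = |q20 - 2xq10 + q00| + |q23 - 2xq13 + q03|
    dq = (abs(q[2][0] - 2 * q[1][0] + q[0][0])
          + abs(q[2][3] - 2 * q[1][3] + q[0][3]))
    p_side = 2 if dp < side_thr else 1
    q_side = 2 if dq < side_thr else 1
    return p_side, q_side
```

In either case the modified pixels are those closest to the block boundary on the respective side.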
Accordingly, as illustrated in Fig. 7, the filtering control device 100 implements the functions of the previously disclosed embodiments, such as the first to seventh embodiment, or a combination thereof by the determining unit 110, which is configured to determine, in a particular embodiment, BetaLength. This BetaLength is processed by the processing unit 120.
Fig. 19 is a schematic block diagram of a filtering control device 200 according to another embodiment. This filtering control device 200 is in particular configured to be implemented within or connected to an encoder. The filtering control device 200 comprises a beta parameter determining unit 210, also referred to as a beta parameter determiner or beta parameter determining means or module. The beta parameter determining unit 210 is configured to determine a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence. A length offset determining unit 220, also referred to as a length offset determiner or length offset determining means or module, is configured to determine a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary. The filtering control device 200 also comprises an associating unit 230, also referred to as associator or associating means or module, configured to associate a first syntax element representing or defining the beta parameter value and a second syntax element representing or defining the length offset parameter value to an encoded representation of the picture.
In an embodiment, the length offset determining unit 220 is configured to determine the length offset parameter value based on pixel values in the two blocks of pixels or based on pixel values close to block boundaries in the slice as previously disclosed herein. In another alternative or additional embodiment the length offset determining unit 220 is configured to determine the length offset parameter value based on at least one encoding or slice parameter used for encoding the picture as previously disclosed herein.
The associating unit 230 is, in an embodiment, configured to insert the first and second syntax elements into a slice header of the encoded representation of the picture. In another embodiment the associating unit 230 is configured to insert the first and second syntax elements into a parameter set associated with the video sequence and insert a parameter set identifier enabling identification of the parameter set into a slice header of the encoded representation of the picture. The associating unit 230 may alternatively be configured to distribute the first and second syntax elements between different parameter sets or between a slice header and a parameter set. The filtering control device 200 of Fig. 19 can, in an embodiment, be viewed as an implementation example of the filtering control device 100 of Fig. 7. In such a case, the determining unit 110 is configured to perform the operations of the beta parameter determining unit 210 and the length offset determining unit 220, whereas the processing unit 120 is configured to perform the operations of the associating unit 230.
The filtering control device 100, 200 of Figs. 7, 19 with their including units 110-120 (and optional units 121-128), 210-230 could be implemented in hardware. There are numerous variants of circuitry elements that can be used and combined to achieve the functions of the units 110-120, 210-230 of the filtering control device 100, 200. Such variants are encompassed by the embodiments. Particular examples of hardware implementation of the filtering control device 100, 200 are implementation in digital signal processor (DSP) hardware and integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry.
The filtering control device 100, 200 described herein could alternatively be implemented e.g. by one or more of a processing unit 72 in a computer 70 and adequate software with suitable storage or memory therefor, a programmable logic device (PLD) or other electronic component(s) as shown in Fig. 8.
Fig. 8 schematically illustrates an embodiment of a computer 70 having a processing unit 72, such as a DSP (Digital Signal Processor) or CPU (Central Processing Unit). The processing unit 72 can be a single unit or a plurality of units for performing different steps of the method described herein. The computer 70 also comprises an input/output (I/O) unit 71 for receiving recorded or generated video frames or encoded video frames and outputting encoded video frames or decoded video data. The I/O unit 71 has been illustrated as a single unit in Fig. 8 but can likewise be in the form of a separate input unit and a separate output unit. Furthermore, the computer 70 comprises at least one computer program product 73 in the form of a nonvolatile memory, for instance an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory or a disk drive. The computer program product 73 comprises a computer program 74, which comprises code means which when run on or executed by the computer 70, such as by the processing unit 72, causes the computer 70 to perform the steps of the method described in the foregoing in connection with Figs. 1A-1C, 11-15, 17-18. Hence, in an embodiment the code means in the computer program 74 comprises a module 310 configured to implement embodiments as disclosed herein or combinations thereof. This module 310 essentially performs the steps of the flow diagrams in Figs. 1A-1C, 11-15, 17-18 when run on the processing unit 72. Thus, when the module 310 is run on the processing unit 72 it corresponds to the corresponding units 110-120, 210-230 of Figs. 7, 19.
In an embodiment the computer program 74 is a computer program 74 for deblocking filtering control and comprises code means which when run on the computer 70 causes the computer 70 to retrieve, based on encoded video data, a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value. The code means also causes the computer 70 to determine, based at least partly on the beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture decoded based on the encoded video data and determine, based at least partly on the length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary.
In another embodiment the computer program 74 is a computer program 74 for deblocking filtering control and comprises code means which when run on the computer 70 causes the computer 70 to determine a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture of a video sequence. The code means also causes the computer 70 to determine a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on the block boundary and ii) how many pixels to filter on each side of the block boundary, and associate a first syntax element representing the beta parameter value and a second syntax element representing the length offset parameter value to an encoded representation of the picture.
An embodiment also relates to a computer program product 73 comprising computer readable code means and a computer program 74 as defined according to any of the embodiments above stored on the computer readable code means.
The filtering control device 200 of Fig. 19 is preferably implemented or arranged in an encoder configured to encode video data of a video sequence. Fig. 2 is a schematic block diagram of an encoder 40 for encoding a block of pixels in a video frame or picture of a video sequence according to an embodiment.
A current block of pixels is predicted by performing a motion estimation by a motion estimator 50 from an already provided block of pixels in the same frame or in a previous frame. The result of the motion estimation is a motion or displacement vector associated with the reference block, in the case of inter prediction. The motion vector is utilized by a motion compensator 50 for outputting an inter prediction of the block of pixels. An intra predictor 49 computes an intra prediction of the current block of pixels. The outputs from the motion estimator/compensator 50 and the intra predictor 49 are input in a selector 51 that either selects intra prediction or inter prediction for the current block of pixels. The output from the selector 51 is input to an error calculator in the form of an adder 41 that also receives the pixel values of the current block of pixels. The adder 41 calculates and outputs a residual error as the difference in pixel values between the block of pixels and its prediction.
The error is transformed in a transformer 42, such as by a discrete cosine transform, and quantized by a quantizer 43 followed by coding in an encoder 44, such as by an entropy encoder. In inter coding, the estimated motion vector is also brought to the encoder 44 for generating the coded representation of the current block of pixels.
The transformed and quantized residual error for the current block of pixels is also provided to an inverse quantizer 45 and inverse transformer 46 to retrieve the original residual error. This error is added by an adder 47 to the block prediction output from the motion compensator 50 or the intra predictor 49 to create a reference block of pixels that can be used in the prediction and coding of a next block of pixels. This new reference block is first processed by a filtering control device 200 in order to control any filtering that is applied to the reference block to combat any artifact. The processed new reference block is then temporarily stored in a frame buffer 48, where it is available to the intra predictor 49 and the motion estimator/compensator 50.
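The reference-block path just described (inverse quantization, inverse transform, reconstruction in the adder 47, and filtering control before the frame buffer 48) can be sketched as follows. All callables are hypothetical stand-ins for the corresponding blocks of Fig. 2; the point is only the order of operations, not a real codec implementation.

```python
def encoder_reconstruction_step(quantized_residual, prediction,
                                inverse_quantize, inverse_transform,
                                filtering_control):
    """Sketch of the encoder's reference-block reconstruction path.

    quantized_residual and prediction are per-pixel sample lists;
    inverse_quantize / inverse_transform / filtering_control are
    hypothetical stand-ins for units 45, 46 and the filtering control
    device 200. Returns the processed reference block destined for
    the frame buffer 48.
    """
    # Inverse quantizer 45 and inverse transformer 46 retrieve the residual.
    residual = inverse_transform(inverse_quantize(quantized_residual))
    # Adder 47: residual + prediction -> reconstructed reference block.
    reconstructed = [r + pr for r, pr in zip(residual, prediction)]
    # Filtering control processes the block before it enters the buffer.
    return filtering_control(reconstructed)
```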
The filtering control device 100 of Fig. 7 is preferably implemented or arranged in a decoder configured to decode encoded video data of a video sequence. Fig. 3 is a corresponding schematic block diagram of a decoder 60 comprising a filtering control device 100 according to any of the embodiments or in combinations thereof. The decoder 60 comprises a decoder 61, such as an entropy decoder, for decoding an encoded representation of a block of pixels to get a set of quantized and transformed residual errors. These residual errors are dequantized in an inverse quantizer 62 and inverse transformed by an inverse transformer 63 to get a set of residual errors.
These residual errors are added in an adder 64 to the pixel values of a reference block of pixels. The reference block is determined by a motion estimator/compensator 67 or intra predictor 66, depending on whether inter or intra prediction is performed. A selector 68 is thereby interconnected to the adder 64 and the motion estimator/compensator 67 and the intra predictor 66. The resulting decoded block of pixels output from the adder 64 is input to a filtering control device 100 in order to control any filter that is applied to combat any artifacts. The filtered block of pixels is output from the decoder 60 and is furthermore preferably temporarily provided to a frame buffer 65 and can be used as a reference block of pixels for a subsequent block of pixels to be decoded. The frame buffer 65 is thereby connected to the motion estimator/compensator 67 to make the stored blocks of pixels available to the motion estimator/compensator 67.
The output from the adder 64 is preferably also input to the intra predictor 66 to be used as an unfiltered reference block of pixels.
In the embodiments disclosed in Figs. 2 and 3 the filtering control device 100, 200 controls filtering in the form of so-called in-loop filtering. In an alternative implementation at the decoder 60 the filtering control device 100 is arranged to perform so-called post-processing filtering. In such a case, the filtering control device 100 operates on the output frames outside of the loop formed by the adder 64, the frame buffer 65, the intra predictor 66, the motion estimator/compensator 67 and the selector 68.
No filtering and filter control is then typically done at the encoder although, in principle, the encoder can still estimate the length offset parameter and signal it with some non-normative means, e.g. in a non-normative SEI message.
Hence, the embodiments can be used in an encoder 40 and/or a decoder 60 or completely outside the coding loop as a post filter. The methods of the embodiments are performed in a filtering control device 100, 200 which can be located in an encoder 40 or a decoder 60 as schematically illustrated in Figs. 2 and 3. Figs. 2 and 3 illustrate the example when the method is performed inside the coding loop. The decoder 60 with a filtering control device 100 according to an embodiment may be implemented in a user equipment or media terminal 80 as shown in Fig. 5.
Fig. 5 is a schematic block diagram of a user equipment or media terminal 80 housing a decoder 60 with a filtering control device. The user equipment 80 can be any device having media decoding functions that operates on an encoded video stream of encoded video frames to thereby decode the video frames and make the video data available. Non-limiting examples of such devices include mobile telephones and other portable media players, tablets, desktops, notebooks, personal video recorders, multimedia players, video streaming servers, set-top boxes, TVs, computers, decoders, game consoles, etc. The user equipment 80 comprises a memory 84 configured to store encoded video frames or pictures. These encoded video frames or pictures can have been generated by the user equipment 80 itself. Alternatively, the encoded video frames or pictures are generated by some other device and wirelessly transmitted or transmitted by wire to the user equipment 80. The user equipment 80 then comprises a transceiver (transmitter and receiver) or input and output port 82 to achieve the data transfer.
The encoded video frames or pictures are brought from the memory 84 to a decoder 60, such as the decoder illustrated in Fig. 3. The decoder 60 comprises a filtering control device 100 according to embodiments. The decoder 60 then decodes the encoded video frames or pictures into decoded video frames or pictures. The decoded video frames or pictures are provided to a media player 86 that is configured to render the decoded video frames into video data that is displayable on a display or screen 88 of or connected to the user equipment 80.
In Fig. 5, the user equipment 80 has been illustrated as comprising both the decoder 60 and the media player 86, with the decoder 60 implemented as a part of the media player 86. This should, however, merely be seen as an illustrative but non-limiting example of an implementation embodiment for the user equipment 80. Distributed implementations, where the decoder 60 and the media player 86 are provided in two physically separated devices, are also possible and within the scope of user equipment 80 as used herein. The display 88 could also be provided as a separate device connected to the user equipment 80, where the actual data processing is taking place.
The encoder 40 with a filtering control device 200 according to an embodiment may be implemented in a user equipment or media terminal 80 as shown in Fig. 4. Fig. 4 illustrates another embodiment of a user equipment 80 that comprises an encoder 40, such as the encoder of Fig. 2, comprising a filtering control device according to the embodiments. The encoder 40 is then configured to encode video frames or pictures received by the I/O unit 82 and/or generated by the user equipment 80 itself. In the latter case, the user equipment 80 preferably comprises a media engine or recorder, such as in the form of or connected to a (video) camera. The user equipment 80 may optionally also comprise a media player 86, such as a media player 86 with a decoder and filtering control device according to the embodiments, and a display 88.
As illustrated in Fig. 6, the encoder 40 and/or decoder 60, such as illustrated in Figs. 2 and 3, may be implemented in a network device 30 being or belonging to a network node in a communication network 32 between a sending unit 34 and a receiving user equipment 36. Such a network device 30 may be a device for converting video according to one video coding standard to another video coding standard, for example, if it has been established that the receiving user equipment 36 is only capable of or prefers another video coding standard than the one sent from the sending unit 34. The network device 30 can be in the form of or comprised in a radio base station, a Node-B or any other network node in a communication network 32, such as a radio-based network.
A transmitter associated with the encoder is provided for signaling the parameters according to embodiments above. Accordingly a receiver is provided for receiving the signaled parameters. The received parameters are used by the decoder when decoding the bit stream. Thus the receiver and the transmitter respectively implement the methods shown in Figs. 1B and 1C.
The embodiments above apply to a decoder, an encoder and any element that operates on a bitstream, such as a network node or a Media Aware Network Element. The encoder may for example be located in a transmitter in a video camera in e.g. a mobile device. The decoder may for example be located in a receiver in a video camera or any other device for displaying, decoding or transcoding a video stream.
The embodiments are not limited to HEVC but may be applied to any extension of HEVC such as a scalable extension or multiview extension or to a different video codec.
The embodiments described above are to be understood as a few illustrative examples of the present invention. It will be understood by those skilled in the art that various modifications, combinations and changes may be made to the embodiments without departing from the scope of the present invention. In particular, different part solutions in the different embodiments can be combined in other configurations, where technically possible. The scope of the present invention is, however, defined by the appended claims.
APPENDIX A - SLICE HEADER SYNTAX
slice_header( ) { Descriptor first_slice_in_pic_flag u(1) if( first_slice_in_pic_flag = = 0 )
slice_address u(v) slice_type ue(v) entropy_slice_flag u(1) if( !entropy_slice_flag ) {
pic_parameter_set_id ue(v) if( output_flag_present_flag )
pic_output_flag u(1) if( separate_colour_plane_flag = = 1 )
colour_plane_id u(2) if( IdrPicFlag ) {
idr_pic_id ue(v) no_output_of_prior_pics_flag u(1)
} else {
pic_order_cnt_lsb u(v) short_term_ref_pic_set_sps_flag u(1) if( !short_term_ref_pic_set_sps_flag )
short_term_ref_pic_set( num_short_term_ref_pic_sets )
else
short_term_ref_pic_set_idx u(v) if( long_term_ref_pics_present_flag ) {
num_long_term_pics ue(v) for( i = 0; i < num_long_term_pics; i++ ) {
delta_poc_lsb_lt[ i ] ue(v) delta_poc_msb_present_flag[ i ] u(1) if( delta_poc_msb_present_flag[ i ] )
delta_poc_msb_cycle_lt_minus1[ i ] ue(v) used_by_curr_pic_lt_flag[ i ] u(1)
} }
}
if( sample_adaptive_offset_enabled_flag ) {
slice_sao_interleaving_flag u(1) slice_sample_adaptive_offset_flag u(1) if( slice_sao_interleaving_flag &&
slice_sample_adaptive_offset_flag ) {
sao_cb_enable_flag u(1) sao_cr_enable_flag u(1)
}
}
if( scaling_list_enable_flag ||
deblocking_filter_in_aps_enabled_flag ||
( sample_adaptive_offset_enabled_flag && !slice_sao_interleaving_flag ) ||
adaptive_loop_filter_enabled_flag )
aps_id ue(v) if( slice_type = = P || slice_type = = B ) {
num_ref_idx_active_override_flag u(1) if( num_ref_idx_active_override_flag ) {
num_ref_idx_l0_active_minus1 ue(v) if( slice_type = = B )
num_ref_idx_l1_active_minus1 ue(v)
}
}
if( lists_modification_present_flag ) {
ref_pic_list_modification( )
ref_pic_list_combination( )
}
if( slice_type = = B )
mvd_l1_zero_flag u(1)
}
if( cabac_init_present_flag && slice_type != I )
cabac_init_flag u(1) if( !entropy_slice_flag ) {
slice_qp_delta se(v) if( deblocking_filter_control_present_flag ) {
if( deblocking_filter_in_aps_enabled_flag )
inherit_dbl_params_from_aps_flag u(1) if( !inherit_dbl_params_from_aps_flag ) {
disable_deblocking_filter_flag u(1) if( !disable_deblocking_filter_flag ) {
beta_offset_div2 se(v) tc_offset_div2 se(v)
}
}
}
if( slice_type = = B )
collocated_from_l0_flag u(1) if( slice_type != I &&
( ( collocated_from_l0_flag && num_ref_idx_l0_active_minus1 > 0 ) ||
( !collocated_from_l0_flag && num_ref_idx_l1_active_minus1 > 0 ) ) )
collocated_ref_idx ue(v) if( ( weighted_pred_flag && slice_type = = P ) ||
( weighted_bipred_idc = = 1 && slice_type = = B ) )
if( deblocking_filter_control_present_flag ) {
if( deblocking_filter_in_aps_enabled_flag )
inherit_dbl_params_from_aps_flag u(1) if( !inherit_dbl_params_from_aps_flag ) {
disable_deblocking_filter_flag u(1) pred_weight_table( )
}
if( slice_type = = P || slice_type = = B )
five_minus_max_num_merge_cand ue(v) if( adaptive_loop_filter_enabled_flag ) {
slice_adaptive_loop_filter_flag u(1) if( slice_adaptive_loop_filter_flag && alf_coef_in_slice_flag ) alf_param( )
if( slice_adaptive_loop_filter_flag && !alf_coef_in_slice_flag )
alf_cu_control_param( )
}
if( seq_loop_filter_across_slices_enabled_flag &&
( slice_adaptive_loop_filter_flag || slice_sample_adaptive_offset_flag ||
!disable_deblocking_filter_flag ) )
slice_loop_filter_across_slices_enabled_flag u(1) if( tiles_or_entropy_coding_sync_idc > 0 ) {
num_entry_point_offsets ue(v) if( num_entry_point_offsets > 0 ) {
offset_len_minus1 ue(v) for( i = 0; i < num_entry_point_offsets; i++ )
entry_point_offset[ i ] u(v)
}
}
APPENDIX B - ADAPTATION PARAMETER SET RBSP SYNTAX aps_rbsp( ) { Descriptor aps_id ue(v) aps_scaling_list_data_present_flag u(1) if( aps_scaling_list_data_present_flag )
scaling_list_param( )
aps_deblocking_filter_flag u(1) if( aps_deblocking_filter_flag ) {
disable_deblocking_filter_flag u(1) if( !disable_deblocking_filter_flag ) {
beta_offset_div2 se(v) tc_offset_div2 se(v)
}
}
aps_sao_interleaving_flag u(1) if( !aps_sao_interleaving_flag ) {
aps_sample_adaptive_offset_flag u(1) if( aps_sample_adaptive_offset_flag )
aps_sao_param( )
}
aps_adaptive_loop_filter_flag u(1) if( aps_adaptive_loop_filter_flag )
alf_param( )
aps_extension_flag u(1) if( aps_extension_flag )
while( more_rbsp_data( ) )
aps_extension_data_flag u(1) rbsp_trailing_bits( )
} APPENDIX C - PICTURE PARAMETER SET RBSP SYNTAX pic_parameter_set_rbsp( ) { Descriptor pps_pic_parameter_set_id ue(v) pps_seq_parameter_set_id ue(v) dependent_slice_segments_enabled_flag u(1) output_flag_present_flag u(1) num_extra_slice_header_bits u(3) sign_data_hiding_flag u(1) cabac_init_present_flag u(1) num_ref_idx_l0_default_active_minus1 ue(v) num_ref_idx_l1_default_active_minus1 ue(v) init_qp_minus26 se(v) constrained_intra_pred_flag u(1) transform_skip_enabled_flag u(1) cu_qp_delta_enabled_flag u(1) if( cu_qp_delta_enabled_flag )
diff_cu_qp_delta_depth ue(v) pps_cb_qp_offset se(v) pps_cr_qp_offset se(v) pps_slice_chroma_qp_offsets_present_flag u(1) weighted_pred_flag u(1) weighted_bipred_flag u(1) transquant_bypass_enabled_flag u(1) tiles_enabled_flag u(1) entropy_coding_sync_enabled_flag u(1) if( tiles_enabled_flag ) {
num_tile_columns_minus1 ue(v) num_tile_rows_minus1 ue(v) uniform_spacing_flag u(1) if( !uniform_spacing_flag ) {
for( i = 0; i < num_tile_columns_minus1; i++ )
column_width_minus1[ i ] ue(v) for( i = 0; i < num_tile_rows_minus1; i++ ) row_height_minus1[ i ] ue(v)
}
loop_filter_across_tiles_enabled_flag u(1)
}
loop_filter_across_slices_enabled_flag u(1) deblocking_filter_control_present_flag u(1) if( deblocking_filter_control_present_flag ) {
deblocking_filter_override_enabled_flag u(1) pps_deblocking_filter_disabled_flag u(1) if( !pps_deblocking_filter_disabled_flag ) {
pps_beta_offset_div2 se(v) pps_tc_offset_div2 se(v)
}
}
pps_scaling_list_data_present_flag u(1) if( pps_scaling_list_data_present_flag )
scaling_list_data( )
lists_modification_present_flag u(1) log2_parallel_merge_level_minus2 ue(v) slice_segment_header_extension_present_flag u(1) pps_extension_flag u(1) if( pps_extension_flag )
while( more_rbsp_data( ) )
pps_extension_data_flag u(1) rbsp_trailing_bits( )

Claims

1. A deblocking filtering control method performed in connection with video decoding, said method comprising:
retrieving (S30), based on encoded video data (20), a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value;
determining (S31), based at least partly on said beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture (2) decoded based on said encoded video data (20); and
determining (S32), based at least partly on said length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary.
2. The method according to claim 1, wherein determining (S31) whether or not to apply deblocking filtering comprises determining (S31), based at least partly on said beta parameter value but not based on said length offset parameter value, whether or not to apply deblocking filtering on said block boundary.
3. The method according to claim 1 or 2, wherein determining (S32) at least one of i) and ii) comprises determining (S32), based at least partly on said length offset parameter value and said beta parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary.
4. The method according to claim 3, wherein determining (S32) at least one of i) and ii) comprises: calculating (S50) a beta length parameter value based on Beta × ( Base + LengthOffset ), wherein Beta represents said beta parameter value, LengthOffset represents said length offset parameter value and Base represents a base parameter value; and
determining (S51), based at least partly on said beta length parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary.
5. The method according to claim 4, further comprising:
calculating (S70) a side threshold defined as ( BetaLength + ( BetaLength » 1 ) ) » 5, wherein BetaLength represents said beta length parameter value and » represents a right shift operator defined as a » b = ⌊ a / 2^b ⌋ and ⌊ x ⌋ is a largest integer not greater than x; and calculating (S71) a variable dp = |p20 - 2xp10 + p00| + |p23 - 2xp13 + p03|, wherein pAi represents a pixel value of a pixel in a pixel line number i in a block of pixels of said two blocks of pixels at a pixel position number A relative to said block boundary, wherein determining (S32) at least one of i) and ii) comprises determining (S73) to filter and modify two pixels in said pixel line if said variable dp is smaller than said side threshold and otherwise determining (S74) to filter and modify one pixel in said pixel line.
6. The method according to claim 4 or 5, further comprising:
calculating (S60) a first threshold defined as BetaLength » 4, wherein BetaLength represents said beta length parameter value, » represents a right shift operator defined as a » b = ⌊ a / 2^b ⌋ and ⌊ x ⌋ is a largest integer not greater than x;
calculating (S61) a second threshold defined as BetaLength » 5;
calculating (S62) a first variable defined as 2x(|p2i - 2xp1i + p0i| + |q2i - 2xq1i + q0i|), wherein pAi represents a pixel value of a pixel in a pixel line number i in a first block of pixels of said two blocks of pixels at a pixel position number A relative to said block boundary and qAi represents a pixel value of a pixel in a pixel line number i in a second block of pixels of said two blocks of pixels at a pixel position number A relative to said block boundary; and
calculating (S63) a second variable defined as |p3i - p0i| + |q0i - q3i|, wherein determining (S32) at least one of i) and ii) comprises determining (S65) to apply strong deblocking filtering for said pixel line number i if said first variable is smaller than said first threshold, said second variable is smaller than said second threshold and |p0i - q0i| is smaller than (5xtc + 1) » 1 and otherwise determining (S66) to apply weak deblocking filtering for said pixel line number i, wherein tc depends on a quantization parameter value used for residual coding of said first block of pixels.
7. The method according to any of the claims 4 to 6, further comprising calculating (S40) a variable d = |p20 - 2xp10 + p00| + |p23 - 2xp13 + p03| + |q20 - 2xq10 + q00| + |q23 - 2xq13 + q03|, wherein pAi represents a pixel value of a pixel in a pixel line number i in a first block of pixels of said two blocks of pixels at a pixel position number A relative to said block boundary and qAi represents a pixel value of a pixel in a pixel line number i in a second block of pixels of said two blocks of pixels at a pixel position number A relative to said block boundary, wherein determining (S31) whether or not to apply deblocking filtering comprises determining (S42) to apply deblocking filtering on said block boundary if said variable d is smaller than said beta parameter value and otherwise determining (S43) not to apply deblocking filtering on said block boundary.
8. The method according to any of the claims 1 to 7, wherein determining (S32) at least one of i) and ii) comprises determining (S32), based at least partly on said length offset parameter value, i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary.
9. A filtering control device (100) comprising:
a determining unit (110) configured to determine a beta parameter value from a first syntax element retrieved based on encoded video data (20) and a length offset parameter value from a second syntax element retrieved based on said encoded video data (20); and
a processing unit (120) configured to i) determine, based at least partly on said beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture (2) decoded based on said encoded video data (20), and ii) determine, based at least partly on said length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary.
10. The device according to claim 9, wherein said processing unit (120) is configured to determine, based at least partly on said beta parameter value but not based on said length offset parameter value, whether or not to apply deblocking filtering on said block boundary.
11. The device according to claim 9 or 10, wherein said processing unit (120) is configured to determine, based at least partly on said length offset parameter value and said beta parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary.
12. The device according to claim 11, wherein said processing unit (120) comprises a beta length calculator (121) configured to calculate a beta length parameter value based on Beta × ( Base + LengthOffset ), wherein Beta represents said beta parameter value, LengthOffset represents said length offset parameter value and Base represents a base parameter value, and wherein said processing unit (120) is configured to determine, based at least partly on said beta length parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary.
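The beta length computation of claim 12 is a single multiply. The sketch below assumes an illustrative base parameter value of 2, which the claim itself leaves open.

```python
def beta_length(beta, length_offset, base=2):
    """BetaLength = Beta x (Base + LengthOffset), per claim 12.

    base=2 is an illustrative default; the claim does not fix the base
    parameter value.
    """
    return beta * (base + length_offset)
```

With a beta parameter value of 8 and a length offset of 1, the assumed base of 2 gives a beta length of 24; a length offset of 0 reproduces 2 × beta.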
13. The device according to claim 12, wherein said processing unit (120) comprises: a side threshold calculator (122) configured to calculate a side threshold defined as ( BetaLength + ( BetaLength » 1 ) ) » 5, wherein BetaLength represents said beta length parameter value and » represents a right shift operator defined as a » b = ⌊ a / 2^b ⌋ and ⌊ x ⌋ is the largest integer not greater than x; and
a dp variable calculator (123) configured to calculate a variable dp = |p20 - 2×p10 + p00| + |p23 - 2×p13 + p03|, wherein pAi represents a pixel value of a pixel in a pixel line number i in a block of pixels of said two blocks of pixels at a pixel position number A relative to said block boundary, wherein said processing unit (120) is configured to determine to filter and modify two pixels in said pixel line if said variable dp is smaller than said side threshold and otherwise determine to filter and modify one pixel in said pixel line.
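Claim 13's choice between modifying one or two pixels on a side could be sketched as below. The dp computation from pixel lines 0 and 3 follows the claim; the function name and the list-based pixel layout are illustrative assumptions.

```python
def pixels_to_filter(p_lines, beta_length):
    """Sketch of claim 13: how many pixels to modify on one side.

    p_lines[i][A] holds pAi. side_threshold and dp are as in claim 13:
      side_threshold = (BetaLength + (BetaLength >> 1)) >> 5
      dp = |p2_0 - 2*p1_0 + p0_0| + |p2_3 - 2*p1_3 + p0_3|
    """
    side_threshold = (beta_length + (beta_length >> 1)) >> 5
    dp = (abs(p_lines[0][2] - 2 * p_lines[0][1] + p_lines[0][0])
          + abs(p_lines[3][2] - 2 * p_lines[3][1] + p_lines[3][0]))
    return 2 if dp < side_threshold else 1
```

With a beta length of 64 the side threshold is (64 + 32) » 5 = 3, so a flat side (dp = 0) gets two pixels modified while a textured side gets one.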
14. The device according to claim 12 or 13, wherein said processing unit (120) comprises:
a first threshold calculator (124) configured to calculate a first threshold defined as BetaLength » 4, wherein BetaLength represents said beta length parameter value and » represents a right shift operator defined as a » b = ⌊ a / 2^b ⌋ and ⌊ x ⌋ is the largest integer not greater than x;
a second threshold calculator (125) configured to calculate a second threshold defined as BetaLength » 5;
a first variable calculator (126) configured to calculate a first variable defined as 2×(|p2i - 2×p1i + p0i| + |q2i - 2×q1i + q0i|), wherein pAi represents a pixel value of a pixel in a pixel line number i in a first block of pixels of said two blocks of pixels at a pixel position number A relative to said block boundary and qAi represents a pixel value of a pixel in a pixel line number i in a second block of pixels of said two blocks of pixels at a pixel position number A relative to said block boundary; and
a second variable calculator (127) configured to calculate a second variable defined as |p3i - p0i| + |q0i - q3i|, wherein said processing unit (120) is configured to determine to apply strong deblocking filtering for said pixel line number i if said first variable is smaller than said first threshold, said second variable is smaller than said second threshold and |p0i - q0i| is smaller than (5×tc + 1) » 1 and otherwise determine to apply weak deblocking filtering for said pixel line number i, wherein tc depends on a quantization parameter value used for residual coding of said first block of pixels.
15. The device according to any of the claims 12 to 14, wherein said processing unit (120) comprises a d variable calculator (128) configured to calculate a variable d = |p20 - 2×p10 + p00| + |p23 - 2×p13 + p03| + |q20 - 2×q10 + q00| + |q23 - 2×q13 + q03|, wherein pAi represents a pixel value of a pixel in a pixel line number i in a first block of pixels of said two blocks of pixels at a pixel position number A relative to said block boundary and qAi represents a pixel value of a pixel in a pixel line number i in a second block of pixels of said two blocks of pixels at a pixel position number A relative to said block boundary, wherein said processing unit (120) is configured to determine to apply deblocking filtering on said block boundary if said variable d is smaller than said beta parameter value and otherwise determine not to apply deblocking filtering on said block boundary.
16. The device according to any of the claims 9 to 15, wherein said processing unit (120) is configured to determine, based at least partly on said length offset parameter value, i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary.
17. A decoder (60) configured to decode encoded video data (20) of a video sequence (1), said decoder (60) comprising a filtering control device (100) according to any of the claims 9 to 16.
18. A user equipment (80) comprising a decoder (60) according to claim 17.
19. A computer program (74) for deblocking filtering control, said computer program (74) comprising code means which, when run on a computer (70), causes said computer (70) to:
retrieve, based on encoded video data (20), a first syntax element defining a beta parameter value and a second syntax element defining a length offset parameter value;
determine, based at least partly on said beta parameter value, whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture (2) decoded based on said encoded video data (20); and
determine, based at least partly on said length offset parameter value, at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary.
20. A computer program product (73) comprising computer readable code means and a computer program (74) according to claim 19 stored on said computer readable code means.
21. A deblocking filtering control method performed during video encoding, said method comprising:
determining (S80) a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture (2) of a video sequence (1);
determining (S81) a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary; and
associating (S82) a first syntax element representing said beta parameter value and a second syntax element representing said length offset parameter value to an encoded representation (20) of said picture (2).
22. The method according to claim 21, wherein determining (S81) said length offset parameter value comprises determining (S81) said length offset parameter value based on pixel values in said two blocks of pixels.
23. The method according to claim 21 or 22, wherein determining (S81) said length offset parameter value comprises determining (S81) said length offset parameter value based on at least one encoding parameter used for encoding said picture (2).
24. The method according to any of the claims 21 to 23, wherein said encoded representation (20) of said picture (2) comprises at least one slice header (21) and encoded video data (22) and associating (S82) said first syntax element and said second syntax element comprises inserting (S82) said first syntax element and said second syntax element into a slice header (21) of said at least one slice header (21).
25. The method according to any of the claims 21 to 23, wherein said encoded representation (20) of said picture (2) comprises at least one slice header (21) and encoded video data (22), and associating (S82) said first syntax element and said second syntax element comprises:
inserting (S90) said first syntax element and said second syntax element into a parameter set associated with said video sequence (1); and
inserting (S91) a parameter set identifier enabling identification of said parameter set into a slice header (21) of said at least one slice header (21).
26. A filtering control device (200) comprising:
a beta parameter determining unit (210) configured to determine a beta parameter value defining whether or not to apply deblocking filtering on a block boundary between two blocks of pixels in a picture (2) of a video sequence (1);
a length offset determining unit (220) configured to determine a length offset parameter value defining at least one of i) whether to apply weak deblocking filtering or strong deblocking filtering on said block boundary and ii) how many pixels to filter on each side of said block boundary; and
an associating unit (230) configured to associate a first syntax element representing said beta parameter value and a second syntax element representing said length offset parameter value to an encoded representation (20) of said picture (2).
27. The device according to claim 26, wherein said length offset determining unit (220) is configured to determine said length offset parameter value based on pixel values in said two blocks of pixels.
28. The device according to claim 26 or 27, wherein said length offset determining unit (220) is configured to determine said length offset parameter value based on at least one encoding parameter used for encoding said picture (2).
29. The device according to any of the claims 26 to 28, wherein said encoded representation (20) of said picture (2) comprises at least one slice header (21) and encoded video data (22) and said associating unit (230) is configured to insert said first syntax element and said second syntax element into a slice header (21) of said at least one slice header (21).
30. The device according to any of the claims 26 to 28, wherein said encoded representation (20) of said picture (2) comprises at least one slice header (21) and encoded video data (22) and said associating unit (230) is configured to i) insert said first syntax element and said second syntax element into a parameter set associated with said video sequence (1), and ii) insert a parameter set identifier enabling identification of said parameter set into a slice header (21) of said at least one slice header (21).
31. An encoder (40) configured to encode video data of a video sequence, said encoder (40) comprising a filtering control device (200) according to any of the claims 26 to 30.
32. A user equipment (80) comprising an encoder (40) according to claim 31.
33. A network device (30) being or belonging to a network node in a communication network (32), said network device (30) comprising an encoder (40) according to claim 31 and/or a decoder (60) according to claim 17.
PCT/SE2013/050237 2012-04-25 2013-03-14 Deblocking filtering control WO2013162441A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261638157P 2012-04-25 2012-04-25
US61/638,157 2012-04-25

Publications (1)

Publication Number Publication Date
WO2013162441A1 true WO2013162441A1 (en) 2013-10-31

Family

ID=48045655

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2013/050237 WO2013162441A1 (en) 2012-04-25 2013-03-14 Deblocking filtering control

Country Status (1)

Country Link
WO (1) WO2013162441A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018132043A1 (en) * 2017-01-10 2018-07-19 Telefonaktiebolaget Lm Ericsson (Publ) Deblocking filtering control
WO2019144732A1 (en) 2018-01-29 2019-08-01 Mediatek Inc. Length-adaptive deblocking filtering in video coding
KR20190121377A (en) * 2017-04-06 2019-10-25 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 Coding device, decoding device, coding method and decoding method
CN110495179A (en) * 2017-04-06 2019-11-22 松下电器(美国)知识产权公司 Code device, decoding apparatus, coding method and coding/decoding method
JPWO2018186433A1 (en) * 2017-04-06 2020-02-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Encoding device, decoding device, encoding method and decoding method
JPWO2018186430A1 (en) * 2017-04-06 2020-02-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Encoding device, decoding device, encoding method and decoding method
CN111131821A (en) * 2018-10-31 2020-05-08 北京字节跳动网络技术有限公司 Deblocking filtering under dependent quantization
JP2021073802A (en) * 2017-04-06 2021-05-13 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Decoding device and decoding method
CN114025159A (en) * 2017-04-06 2022-02-08 松下电器(美国)知识产权公司 Encoding device and decoding device
EP3991308A4 (en) * 2020-03-27 2022-08-31 Tencent America LLC High level control for deblocking operations

Citations (1)

Publication number Priority date Publication date Assignee Title
US20110194614A1 (en) * 2010-02-05 2011-08-11 Andrey Norkin De-Blocking Filtering Control

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
US20110194614A1 (en) * 2010-02-05 2011-08-11 Andrey Norkin De-Blocking Filtering Control

Non-Patent Citations (10)

Title
AN J ET AL: "CE12 Subtest 1: Improved Deblocking Filter", 20110309, no. JCTVC-E079, 9 March 2011 (2011-03-09), XP030008585, ISSN: 0000-0007 *
ANDREY NORKIN ET AL: "CE12.1: Ericsson deblocking filter", 96. MPEG MEETING; 21-3-2011 - 25-3-2011; GENEVA; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. m19803, 19 March 2011 (2011-03-19), XP030048370 *
CHUN-LUNG HSU ET AL: "A Fast-Deblocking Boundary-strength Based Architecture Design of Deblocking Filter in H.264/AVC Applications", JOURNAL OF SIGNAL PROCESSING SYSTEMS ; FOR SIGNAL, IMAGE, AND VIDEO TECHNOLOGY (FORMERLY THE JOURNAL OF VLSI SIGNAL PROCESSING SYSTEMS FOR SIGNAL, IMAGE, AND VIDEO TECHNOLOGY), SPRINGER US, BOSTON, vol. 52, no. 3, 20 November 2007 (2007-11-20), pages 211 - 229, XP019616669, ISSN: 1939-8115 *
KOTRA A ET AL: "Deblocking simplification and rounding optimization", 7. JCT-VC MEETING; 98. MPEG MEETING; 21-11-2011 - 30-11-2011; GENEVA; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-G639, 8 November 2011 (2011-11-08), XP030110623 *
NORKIN A ET AL: "AHG6: On deblocking filter and parameters signaling", 103. MPEG MEETING; 21-1-2013 - 25-1-2013; GENEVA; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. m27569, 18 January 2013 (2013-01-18), XP030056136 *
NORKIN A ET AL: "CE10.4: deblocking parameters signaling", 8. JCT-VC MEETING; 99. MPEG MEETING; 1-2-2012 - 10-2-2012; SAN JOSE; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-H0574, 22 January 2012 (2012-01-22), XP030111601 *
NORKIN A ET AL: "CE12: Ericsson's and MediaTek's deblocking filter", 6. JCT-VC MEETING; 97. MPEG MEETING; 14-7-2011 - 22-7-2011; TORINO; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-F118, 9 August 2011 (2011-08-09), XP030009141 *
NORKIN A ET AL: "Deblocking filter length adjustment", 9. JCT-VC MEETING; 100. MPEG MEETING; 27-4-2012 - 7-5-2012; GENEVA; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-I0542, 27 April 2012 (2012-04-27), XP030112305 *
WIEGAND T ET AL: "WD3: Working Draft 3 of High-Efficiency Video Coding", 20110329, no. JCTVC-E603, 29 March 2011 (2011-03-29), XP030009014, ISSN: 0000-0003 *
YANG J ET AL: "CE12: SK Telecom/SKKU Deblocking Filter", 6. JCT-VC MEETING; 97. MPEG MEETING; 14-7-2011 - 22-7-2011; TORINO; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-F258, 18 July 2011 (2011-07-18), XP030009281 *

Cited By (49)

Publication number Priority date Publication date Assignee Title
US10869063B2 (en) 2017-01-10 2020-12-15 Telefonaktiebolaget Lm Ericsson (Publ) Deblocking filtering control
WO2018132043A1 (en) * 2017-01-10 2018-07-19 Telefonaktiebolaget Lm Ericsson (Publ) Deblocking filtering control
CN114449264A (en) * 2017-04-06 2022-05-06 松下电器(美国)知识产权公司 Encoding method, decoding method, and transmission method
JP7364768B2 (en) 2017-04-06 2023-10-18 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device and decoding device
JPWO2018186433A1 (en) * 2017-04-06 2020-02-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Encoding device, decoding device, encoding method and decoding method
JPWO2018186430A1 (en) * 2017-04-06 2020-02-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Encoding device, decoding device, encoding method and decoding method
JPWO2018186429A1 (en) * 2017-04-06 2020-02-06 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Encoding device, decoding device, encoding method and decoding method
EP3609185A4 (en) * 2017-04-06 2020-03-18 Panasonic Intellectual Property Corporation of America Encoding device, decoding device, encoding method, and decoding method
JP7408862B2 (en) 2017-04-06 2024-01-05 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device and decoding device
KR20190121377A (en) * 2017-04-06 2019-10-25 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 Coding device, decoding device, coding method and decoding method
JP2021073802A (en) * 2017-04-06 2021-05-13 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Decoding device and decoding method
KR102296015B1 (en) 2017-04-06 2021-09-01 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 Encoding apparatus, decoding apparatus, encoding method and decoding method
KR20210110735A (en) * 2017-04-06 2021-09-08 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 Encoding device, decoding device, encoding method, and decoding method
JP2021153336A (en) * 2017-04-06 2021-09-30 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Encoder and decoder
JP2021158680A (en) * 2017-04-06 2021-10-07 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Encoding device and decoding device
US11172198B2 (en) 2017-04-06 2021-11-09 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
CN114025159B (en) * 2017-04-06 2023-12-12 松下电器(美国)知识产权公司 Encoding device and decoding device
CN114025159A (en) * 2017-04-06 2022-02-08 松下电器(美国)知识产权公司 Encoding device and decoding device
CN114040202A (en) * 2017-04-06 2022-02-11 松下电器(美国)知识产权公司 Encoding method and decoding method
JP7044913B2 (en) 2017-04-06 2022-03-30 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Decoding device and decoding method
KR102382809B1 (en) 2017-04-06 2022-04-08 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 Encoding device, decoding device, encoding method, and decoding method
KR20220045076A (en) * 2017-04-06 2022-04-12 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 Encoding device, decoding device, encoding method, and decoding method
CN110495179B (en) * 2017-04-06 2022-04-15 松下电器(美国)知识产权公司 Encoding device, decoding device, encoding method, and decoding method
CN114449263A (en) * 2017-04-06 2022-05-06 松下电器(美国)知识产权公司 Encoding device, decoding device, and storage medium
JP7442607B2 (en) 2017-04-06 2024-03-04 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding method and decoding method
CN110495179A (en) * 2017-04-06 2019-11-22 松下电器(美国)知识产权公司 Code device, decoding apparatus, coding method and coding/decoding method
KR102469589B1 (en) 2017-04-06 2022-11-22 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 Encoding device, decoding device, encoding method, and decoding method
JP2022082602A (en) * 2017-04-06 2022-06-02 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device and encoding method
CN114040202B (en) * 2017-04-06 2023-12-12 松下电器(美国)知识产权公司 Encoding method and decoding method
CN114500999A (en) * 2017-04-06 2022-05-13 松下电器(美国)知识产权公司 Encoding device, decoding device, and storage medium
CN114466184A (en) * 2017-04-06 2022-05-10 松下电器(美国)知识产权公司 Encoding method, decoding method, and transmission method
KR20220158104A (en) * 2017-04-06 2022-11-29 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 Encoding device, decoding device, encoding method, and decoding method
JP7192041B2 (en) 2017-04-06 2022-12-19 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device and decoding device
JP7192040B2 (en) 2017-04-06 2022-12-19 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device and decoding device
US11563940B2 (en) 2017-04-06 2023-01-24 Panasonic Intellectual Property Corporation Of America Encoder, decoder, and related non-transitory computer readable medium
JP7237215B2 (en) 2017-04-06 2023-03-10 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device and encoding method
US11778180B2 (en) 2017-04-06 2023-10-03 Panasonic Intellectual Property Corporation Of America Encoder, decoder, and related non-transitory computer readable medium
CN114500999B (en) * 2017-04-06 2023-05-16 松下电器(美国)知识产权公司 Encoding device, decoding device, and storage medium
CN114449264B (en) * 2017-04-06 2023-05-16 松下电器(美国)知识产权公司 Encoding method, decoding method, and transmitting method
CN114466184B (en) * 2017-04-06 2023-05-16 松下电器(美国)知识产权公司 Encoding method, decoding method, and transmitting method
CN114449263B (en) * 2017-04-06 2023-05-16 松下电器(美国)知识产权公司 Encoding device, decoding device, and storage medium
KR102555783B1 (en) 2017-04-06 2023-07-14 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 Encoding device, decoding device, encoding method, and decoding method
EP3738307A4 (en) * 2018-01-29 2021-11-10 MediaTek Inc Length-adaptive deblocking filtering in video coding
WO2019144732A1 (en) 2018-01-29 2019-08-01 Mediatek Inc. Length-adaptive deblocking filtering in video coding
CN111131821B (en) * 2018-10-31 2023-05-09 北京字节跳动网络技术有限公司 Deblocking filtering under dependency quantization
CN111131821A (en) * 2018-10-31 2020-05-08 北京字节跳动网络技术有限公司 Deblocking filtering under dependent quantization
JP2022548914A (en) * 2020-03-27 2022-11-22 テンセント・アメリカ・エルエルシー Advanced control of deblocking operations
EP3991308A4 (en) * 2020-03-27 2022-08-31 Tencent America LLC High level control for deblocking operations
US11973990B2 (en) 2020-03-27 2024-04-30 Tencent America LLC Signaling for modified deblocking filter operations

Similar Documents

Publication Publication Date Title
US10951917B2 (en) Method and apparatus for performing intra-prediction using adaptive filter
WO2013162441A1 (en) Deblocking filtering control
KR102130480B1 (en) Method and device for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image
EP2938075A1 (en) Deblocking filtering
KR20130139341A (en) Deblocking filtering control
WO2014055020A1 (en) Hierarchical deblocking parameter adaptation
EP2870752A1 (en) Restricted intra deblocking filtering for video coding
EP2870758B1 (en) Controlling deblocking filtering
EP2870759B1 (en) Strong deblocking filtering decisions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13714031

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13714031

Country of ref document: EP

Kind code of ref document: A1