KR102014177B1 - Video encoding/decoding method using error-resilient in-loop filter and signaling method relating to the same - Google Patents


Info

Publication number
KR102014177B1
Authority
KR
South Korea
Prior art keywords
sample
reference samples
samples
coordinate
Prior art date
Application number
KR1020120047758A
Other languages
Korean (ko)
Other versions
KR20120125193A (en)
Inventor
이진호
김휘용
임성창
최진수
김진웅
Original Assignee
한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020120039422A (published as KR20120125160A)
Application filed by 한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority to KR1020120047758A
Priority to PCT/KR2012/003540 (WO2012150849A2)
Publication of KR20120125193A
Application granted
Publication of KR102014177B1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a video encoding/decoding method and, more particularly, to a method of performing in-loop filtering that is robust against errors, and a signaling method therefor. According to an embodiment of the present invention, an image decoding method includes determining whether a Constrained Intra Prediction (CIP) mode is used, determining the decoding parameters of the blocks located on both sides of a boundary to be filtered, and determining whether to apply an in-loop filter to the samples located on both sides of the filtering target boundary based on whether the CIP mode is used and on the decoding parameters of those blocks.

Description

VIDEO ENCODING/DECODING METHOD USING ERROR-RESILIENT IN-LOOP FILTER AND SIGNALING METHOD RELATING TO THE SAME

The present invention relates to a video encoding / decoding method, and more particularly, to a method of performing in-loop filtering which is robust against error, and a signaling method therefor.

Recently, as broadcasting systems supporting HD (High Definition) resolution have spread in Korea and worldwide, many users have become accustomed to high-resolution, high-quality video, and many organizations are accelerating the development of next-generation video equipment. In addition, as interest grows in Ultra High Definition (UHD), which supports four times the resolution of HDTV, compression technology for even higher-resolution, higher-quality video is required.

For image compression, an inter prediction technique that predicts pixel values in the current picture from preceding and/or following pictures, an intra prediction technique that predicts pixel values from pixel information within the current picture, and an entropy coding technique that assigns short codes to frequently occurring symbols and long codes to rarely occurring symbols may be used.

An object of the present invention is to provide an image encoding/decoding method using in-loop filtering that is robust against errors, and an apparatus therefor.

Another object of the present invention is to provide a method of performing in-loop filtering based on whether the constrained intra prediction (CIP) mode is used and on the coding parameters of the blocks located on both sides of a filtering target boundary.

According to an embodiment of the present invention, an image encoding method is provided. The image encoding method includes determining whether a Constrained Intra Prediction (CIP) mode is used, determining the encoding parameters of the blocks located on both sides of a filtering target boundary, and determining whether to apply an in-loop filter to the samples located on both sides of the filtering target boundary based on whether the CIP mode is used and on those encoding parameters.

[2] The method of [1], wherein the in-loop filter may be a deblocking filter.

[3] In [2], when the encoding parameters of the blocks located on both sides of the filtering target boundary indicate that both blocks are encoded in the inter prediction mode, the deblocking filter may not be applied to the samples located on both sides of the boundary.

[4] In [2], when the encoding parameters of the blocks located on both sides of the filtering target boundary indicate that the two blocks are encoded in the intra prediction mode and the inter prediction mode, respectively, the deblocking filter is not applied to the samples belonging to the intra-coded block, while it may be applied to the samples belonging to the inter-coded block.

[5] In [2], when the encoding parameters of the blocks located on both sides of the filtering target boundary indicate that the two blocks are encoded in the intra prediction mode and the inter prediction mode, respectively, the deblocking filter may not be applied to the samples on either side of the boundary.

According to another embodiment of the present invention, an image decoding method is provided. The image decoding method includes determining whether a Constrained Intra Prediction (CIP) mode is used, determining the decoding parameters of the blocks located on both sides of a filtering target boundary, and determining whether to apply an in-loop filter to the samples located on both sides of the filtering target boundary based on whether the CIP mode is used and on those decoding parameters.

[7] The method of [6], wherein the in-loop filter may be a deblocking filter.

[8] In [7], when the decoding parameters of the blocks located on both sides of the filtering target boundary indicate that both blocks are decoded in the intra prediction mode, the deblocking filter may be applied to the samples located on both sides of the boundary.

[9] In [7], when the decoding parameters of the blocks located on both sides of the filtering target boundary indicate that both blocks are decoded in the inter prediction mode, the deblocking filter may not be applied to the samples located on both sides of the boundary.

[10] In [7], when the decoding parameters of the blocks located on both sides of the filtering target boundary indicate that both blocks are decoded in the inter prediction mode, the deblocking filter may be applied to the samples located on both sides of the boundary.

[11] In [7], when the decoding parameters of the blocks located on both sides of the filtering target boundary indicate that the two blocks are decoded in the intra prediction mode and the inter prediction mode, respectively, the deblocking filter is not applied to the samples belonging to the intra-decoded block, while it may be applied to the samples belonging to the inter-decoded block.

[12] In [7], when the decoding parameters of the blocks located on both sides of the filtering target boundary indicate that the two blocks are decoded in the intra prediction mode and the inter prediction mode, respectively, the deblocking filter may not be applied to the samples on either side of the boundary.

According to another embodiment of the present invention, an image decoding method is provided. The image decoding method includes parsing an indicator indicating whether a constrained deblocking filter is used, determining the decoding parameters of the blocks located on both sides of a filtering target boundary, and determining whether to apply a deblocking filter to the samples located on both sides of the boundary based on the indicator and on those decoding parameters.

[14] In [13], when the decoding parameters of the blocks located on both sides of the filtering target boundary indicate that the two blocks are decoded in the intra prediction mode and the inter prediction mode, respectively, the deblocking filter is not applied to the samples belonging to the intra-decoded block, while it may be applied to the samples belonging to the inter-decoded block.

[15] In [13], when the decoding parameters of the blocks located on both sides of the filtering target boundary indicate that the two blocks are decoded in the intra prediction mode and the inter prediction mode, respectively, the deblocking filter may not be applied to the samples on either side of the boundary.

According to the present invention, in-loop filtering that is robust against error can be performed.

According to the present invention, even when the region encoded in the inter prediction mode cannot be normally restored, the block encoded in the intra prediction mode can be decoded normally.

According to the present invention, the reconstruction result of a region encoded in the intra prediction mode can be kept identical in the encoder and the decoder.

1 is a block diagram illustrating an example of a structure of an image encoder.
2 is a block diagram illustrating an example of a structure of an image decoder.
3 is a flowchart illustrating an in-loop filtering method (encoding) according to an embodiment of the present invention.
4 is a conceptual diagram illustrating an example in which limited intra prediction is performed.
5 is a flowchart illustrating a method of performing in-loop filtering on a block coded in a limited intra prediction mode.
6 illustrates an example in which blocks located at both sides of a filtering target boundary are encoded in an intra prediction mode.
7 shows an example in which blocks located at both sides of a filtering target boundary are encoded in an inter prediction mode.
8 and 9 illustrate an example in which blocks located at both sides of a filtering target boundary are encoded in an intra prediction mode and an inter prediction mode, respectively.
FIG. 10 is a flowchart illustrating a method of performing deblocking filtering based on whether a CIP mode and a PCM mode are used, and encoding parameters of blocks located at both sides of a filtering target boundary.
11 is a flowchart illustrating an in-loop filtering method (decoding) according to an embodiment of the present invention.
12 is a flowchart illustrating an image encoding method according to an embodiment of the present invention.
13 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention are described in detail with reference to the drawings. However, when a detailed description of a known configuration or function could obscure the gist of the present invention, that description is omitted.

When a component is described as being "connected" or "coupled" to another component, it may be directly connected or coupled to that component, or other components may be present in between. In addition, when the present invention is described as "including" a specific component, this does not exclude components other than that component; additional components may be included within the scope of the embodiments or the technical spirit of the present invention.

Terms such as "first" and "second" may be used to describe various components, but the components are not limited by these terms; the terms serve only to distinguish one component from another. Thus, a first component may be named a second component, and likewise a second component may be named a first component.

In addition, the components shown in the embodiments of the present invention are depicted independently to indicate that they perform different characteristic functions; this does not mean that each component must be implemented as separate hardware or software. That is, the components are separated for convenience of description: multiple components may be combined into one, or one component may be divided into several, and such combined or divided embodiments remain within the scope of the present invention as long as they do not depart from its essence.

In addition, some components may be optional components for improving performance rather than essential components performing the essential functions of the present invention. The present invention may be implemented with only the essential components, excluding the optional ones, and such a structure is also included in the scope of the present invention.

1 is a block diagram illustrating an example of a structure of an image encoder.

The image encoder 100 includes a motion predictor 111, a motion compensator 112, an intra predictor 120, a switch 115, a subtractor 125, a transformer 130, a quantizer 140, an entropy encoder 150, an inverse quantizer 160, an inverse transformer 170, an adder 175, a filter unit 180, and a reference picture buffer 190.

The image encoder 100 encodes an input image in the intra prediction mode or the inter prediction mode and outputs a bitstream. Intra prediction means intra-picture prediction, and inter prediction means inter-picture prediction. The image encoder 100 switches between the intra prediction mode and the inter prediction mode via the switch 115. The image encoder 100 generates a prediction block for an input block of the input image and then encodes the residual between the input block and the prediction block.

In the intra prediction mode, the intra prediction unit 120 generates a prediction block by performing spatial prediction using pixel values of blocks already encoded around the encoding target block.

In the inter prediction mode, the motion predictor 111 derives a motion vector by searching the reference picture stored in the reference picture buffer 190 for the region that best matches the input block. The motion compensator 112 performs motion compensation using the motion vector to generate a prediction block. Here, the motion vector is a two-dimensional vector used for inter prediction and represents the offset between the current encoding/decoding target block and the reference block.

The subtractor 125 generates a residual block from the difference between the input block and the prediction block, the transformer 130 transforms the residual block to output transform coefficients, and the quantizer 140 quantizes the transform coefficients to output quantized coefficients.

The entropy encoder 150 outputs a bitstream by performing entropy encoding based on the information obtained in the encoding/quantization process. Entropy encoding reduces the size of the bit string by representing frequently occurring symbols with fewer bits, and is therefore expected to improve the compression performance of the image. The entropy encoder 150 may use encoding methods such as exponential Golomb coding and context-adaptive binary arithmetic coding (CABAC).

Meanwhile, the coded picture needs to be decoded and stored so that it can be used as a reference picture for inter prediction. Accordingly, the inverse quantizer 160 inverse-quantizes the quantized coefficients, and the inverse transformer 170 inverse-transforms them to output a reconstructed residual block. The adder 175 adds the reconstructed residual block to the prediction block to generate a reconstructed block.
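The reconstruction loop just described (residual → quantization → inverse quantization → addition) can be illustrated with a toy one-dimensional sketch. This is not the codec's actual process: the transform is omitted and the uniform quantization step size is an arbitrary illustrative choice.

```python
def encode_block(input_block, pred_block, step=8):
    # Residual block: difference between the input block and the prediction block.
    residual = [x - p for x, p in zip(input_block, pred_block)]
    # Uniform scalar quantization (the transform stage is omitted in this sketch).
    return [round(r / step) for r in residual]

def reconstruct_block(levels, pred_block, step=8):
    # Inverse quantization yields the reconstructed residual block...
    recon_residual = [l * step for l in levels]
    # ...which the adder combines with the prediction block.
    return [p + r for p, r in zip(pred_block, recon_residual)]

inp = [100, 104, 97, 90]
pred = [98, 98, 98, 98]
levels = encode_block(inp, pred)
recon = reconstruct_block(levels, pred)
```

The reconstructed block differs from the input only by the quantization error, which is why the encoder must store this same reconstruction (not the original input) as its reference.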

The filter unit 180, which may also be called an adaptive in-loop filter, performs at least one of deblocking filtering, sample adaptive offset (SAO) compensation, and adaptive loop filtering (ALF) on the reconstructed block. Deblocking filtering removes block distortion at inter-block boundaries, and SAO compensation adds an appropriate offset to pixel values to compensate for coding errors. ALF performs filtering based on values obtained by comparing the reconstructed image with the original image.

2 is a block diagram illustrating an example of a structure of an image decoder.

The image decoder 200 includes an entropy decoder 210, an inverse quantizer 220, an inverse transformer 230, an intra predictor 240, a motion compensator 250, an adder 255, a filter unit 260, and a reference picture buffer 270.

The image decoder 200 decodes a bitstream in the intra prediction mode or the inter prediction mode and outputs a reconstructed image. The image decoder 200 switches between the intra prediction mode and the inter prediction mode via a switch. The image decoder 200 obtains a residual block from the bitstream, generates a prediction block, and adds the two to generate a reconstructed block.

The entropy decoder 210 performs entropy decoding based on a probability distribution; this process is the inverse of the entropy encoding process described above. That is, the entropy decoder 210 reconstructs symbols, including the quantized coefficients, from a bitstream in which frequently occurring symbols are represented with few bits.

The inverse quantizer 220 inverse-quantizes the quantized coefficients, and the inverse transformer 230 inverse-transforms the inverse-quantized coefficients to generate a residual block.

In the intra prediction mode, the intra prediction unit 240 generates a prediction block by performing spatial prediction using pixel values of blocks already decoded around the decoding target block.

In the inter prediction mode, the motion compensator 250 generates a prediction block by performing motion compensation using a motion vector and a reference picture stored in the reference picture buffer 270.

The adder 255 adds the prediction block to the residual block, and the filter unit 260 outputs a reconstructed image after performing at least one of deblocking filtering, SAO compensation, and ALF on the result.

Hereinafter, a block means a unit of encoding/decoding. In the encoding/decoding process, an image is divided into blocks of a predetermined size and encoded/decoded. A block may therefore be called a macroblock (MB), coding unit (CU), prediction unit (PU), transform unit (TU), or the like, and a single block may be further divided into smaller blocks.

Here, a prediction unit is a basic unit for performing prediction and/or motion compensation. A prediction unit may be divided into a plurality of partitions, each of which is called a prediction unit partition. When a prediction unit is divided into a plurality of partitions, each prediction unit partition may be the basic unit for performing prediction and/or motion compensation. Hereinafter, in the embodiments of the present invention, a prediction unit may also mean a prediction unit partition.

Also, in this specification, "using a sample" may mean using the sample's information, for example its pixel value. However, for convenience of description, note that the expressions "use sample information" and "use a pixel value" may simply be written as "use a sample".

Meanwhile, when a bitstream is transmitted over an error-prone network channel, errors are likely to occur in the reconstructed picture. Constrained Intra Prediction (CIP) is a prediction method that generates the prediction block of the current block by referring only to neighboring samples belonging to blocks coded in the intra prediction mode, and not to neighboring samples belonging to blocks coded in any other mode. Here, the current block is the encoding or decoding target block. When the CIP mode is used, generation of the intra prediction block is unaffected even if a reference picture is lost and blocks encoded in the inter prediction mode are not reconstructed normally. Therefore, even when a region coded in the inter prediction mode cannot be restored, blocks coded in the intra prediction mode can still be decoded correctly.

However, since conventional in-loop filtering is performed regardless of whether the CIP mode is used, if an error occurs in the compressed video bitstream, the resulting error in the reconstructed video can propagate to other regions during the in-loop filtering process.

To solve this problem, the present invention determines whether to perform in-loop filtering based on whether the CIP mode is used and on the encoding parameters of the blocks located on both sides of the filtering target boundary.
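As a rough, non-normative sketch of this decision, the per-side deblocking rule described in the embodiments can be written as follows. The mode labels and the function itself are illustrative (not the patent's syntax), and this sketch follows the embodiment in which, at a mixed intra/inter boundary, only the inter-coded side is filtered; an alternative embodiment skips both sides.

```python
def apply_deblocking(cip_enabled, mode_p, mode_q):
    """Decide whether the deblocking filter is applied to the samples of
    blocks P and Q on either side of a filtering target boundary.
    Modes are 'intra' or 'inter'; returns (filter_p, filter_q)."""
    if not cip_enabled:
        # Without CIP, the ordinary deblocking decision applies to both sides.
        return True, True
    if mode_p == 'inter' and mode_q == 'inter':
        # Both blocks inter-coded: the deblocking filter is not applied.
        return False, False
    if mode_p != mode_q:
        # Intra/inter boundary: filter only the inter-coded side, so the
        # intra-coded reconstruction is not contaminated by samples that may
        # be corrupted when a reference picture is lost.
        return mode_p == 'inter', mode_q == 'inter'
    # Both blocks intra-coded: filtering uses only error-free samples.
    return True, True
```

This keeps the intra-coded reconstruction bit-exact between encoder and decoder even when inter-coded regions cannot be restored.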

3 is a flowchart illustrating an in-loop filtering method (encoding) according to an embodiment of the present invention.

The image encoder determines whether to use the CIP mode (S310).

Information for determining whether the CIP mode is used may be signaled between the encoder and the decoder, for example through a sequence parameter set (SPS), a picture parameter set (PPS), or a slice header.

Here, signaling the information means that the encoder inserts an indicator, such as a flag, indicating whether the CIP mode is used into the bitstream, and that the decoder parses the indicator from the bitstream. The indicator may be inserted into the bitstream through an entropy encoding process such as arithmetic coding in the encoder, and extracted through the corresponding entropy decoding process in the decoder.

For example, constrained_intra_pred_flag may be transmitted as shown in Table 1 to indicate whether the encoding target sequence uses the CIP mode. When the flag value is '0', it may be determined that the CIP mode is not used; when the flag value is '1', that the CIP mode is used.

[Table 1: sequence parameter set (SPS) syntax including constrained_intra_pred_flag]

For example, constrained_intra_pred_flag may be transmitted as shown in Table 2 to indicate whether the encoding target picture uses the CIP mode. As when signaling through the SPS, a flag value of '0' indicates that the CIP mode is not used, and a value of '1' that it is used.

[Table 2: picture parameter set (PPS) syntax including constrained_intra_pred_flag]

Meanwhile, special-purpose encoders and decoders used in error-prone network environments may always use the CIP mode without separately signaling information for determining whether it is used.
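The write/parse round trip for such an indicator can be sketched as below. The one-byte container and field position are purely hypothetical: real SPS/PPS syntax packs constrained_intra_pred_flag among many other entropy-coded fields.

```python
def write_cip_flag(use_cip):
    # Hypothetical parameter-set byte carrying a 1-bit CIP indicator in
    # its least significant bit (illustrative packing, not real syntax).
    return bytes([1 if use_cip else 0])

def parse_cip_flag(payload):
    # constrained_intra_pred_flag: 1 => CIP mode used, 0 => not used.
    return payload[0] & 1 == 1
```

A decoder built for an error-prone channel could skip the parse step entirely and assume the flag is always set, as the paragraph above notes.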

4 is a conceptual diagram illustrating an example in which limited intra prediction is performed.

Hereinafter, for convenience of description, [x, y] coordinates are defined with the top-left sample of the encoding target block 400 as the origin ([0, 0]), with coordinate values increasing toward the lower right. In addition, p[a, b] denotes the pixel value of the sample at position [a, b]. For example, the pixel value of the top-left sample of the encoding target block 400 is p[0, 0].

When the size of the encoding target block 400 is 8x8, a prediction block for the encoding target block is generated using the reference samples located above and to the left of the block 400, that is, the above reference samples p[-1 ... 15, -1] and the left reference samples p[-1, 0 ... 15].

However, when the CIP mode is used, neighboring samples belonging to blocks not encoded in the intra prediction mode cannot be used for intra prediction, as described above, so a reference sample substitution process is required. For example, in FIG. 4, the samples at positions [x = -1, y = 4 ... 11], [x = -1 ... 5, y = -1], and [x = 8 ... 15, y = -1] must be replaced with other samples that can be used for constrained intra prediction.

Examples of constructing the reference samples by substituting neighboring samples that cannot be used for intra prediction are as follows.

Example 1) Neighboring samples that cannot be used for intra prediction are replaced with the value of one of the neighboring samples that can be used for intra prediction.

For example, in FIG. 4, if the sample at position [x = -1, y = 15] cannot be used for intra prediction, the samples are searched sequentially from [x = -1, y = 15] to [x = -1, y = -1], and then from [x = 0, y = -1] to [x = 15, y = -1]. The search ends as soon as a sample usable for intra prediction is found, and its pixel value is assigned to p[x = -1, y = 15].

For example, in FIG. 4, if a sample at position [x = -1, y = 4 ... 11] or [x = -1, y = -1] cannot be used for intra prediction, the pixel value p[x, y + 1] of the sample immediately below it is assigned to its pixel value p[x, y], as shown in Equations 1 and 2 below.

[Equation 1] p[x, y] = p[x, y + 1], for [x = -1, y = 4 ... 11]

[Equation 2] p[x, y] = p[x, y + 1], for [x = -1, y = -1]

For example, in FIG. 4, if a sample at position [x = 0 ... 5, y = -1] or [x = 8 ... 15, y = -1] cannot be used for intra prediction, the pixel value p[x - 1, y] of the sample to its left is assigned to its pixel value p[x, y], as shown in Equations 3 and 4.

[Equation 3] p[x, y] = p[x - 1, y], for [x = 0 ... 5, y = -1]

[Equation 4] p[x, y] = p[x - 1, y], for [x = 8 ... 15, y = -1]
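Example 1 can be sketched over a one-dimensional list of reference samples arranged in the scan order described above (left column from bottom to top, then the top row from left to right). The `None` marker for unusable samples and the helper function are assumptions for illustration only.

```python
def substitute_example1(ref):
    """ref: reference samples in scan order; None marks a sample that
    cannot be used for intra prediction. Each unusable sample takes the
    value of the nearest preceding usable sample in scan order (the
    copy-from-below / copy-from-left rule of Equations 1-4); a leading
    run of unusable samples is filled from the first usable sample,
    mirroring the sequential search of Example 1."""
    out = list(ref)
    # Seed for any unusable samples at the start of the scan.
    last = next((v for v in out if v is not None), None)
    for i, v in enumerate(out):
        if v is None:
            out[i] = last
        else:
            last = v
    return out
```

Because already-substituted samples carry the last usable value forward, chaining p[x, y] = p[x, y + 1] along the scan gives the same result as this single pass.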

Example 2) Reference sample(s) that cannot be used for intra prediction are replaced with the average of the usable reference samples located on both sides of the unusable sample(s).

For example, in FIG. 4, neighboring samples belonging to blocks coded in the inter prediction mode are replaced as shown in Equations 5 to 7.

[Equation 5]

[Equation 6]

[Equation 7]
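Example 2 can be sketched the same way, again over a one-dimensional scan of the reference samples with `None` marking unusable positions (an assumed representation). The rounded average used here is one plausible convention; the patent's exact formulas are in Equations 5 to 7.

```python
def substitute_example2(ref):
    """Replace each contiguous run of unusable samples (None) with the
    average of the usable reference samples bordering the run; a run at
    either end of the array falls back to its single usable neighbour."""
    out = list(ref)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:
                j += 1  # j is the first usable index after the run
            left = out[i - 1] if i > 0 else None
            right = out[j] if j < n else None
            if left is not None and right is not None:
                fill = (left + right + 1) // 2  # rounded average (assumed rounding)
            else:
                fill = left if left is not None else right
            for k in range(i, j):
                out[k] = fill
            i = j
        else:
            i += 1
    return out
```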

Example 3) Reference sample(s) that cannot be used for intra prediction are replaced with values linearly interpolated from the usable reference samples located on both sides of the unusable sample(s).

For example, in FIG. 4, neighboring samples belonging to blocks coded in the inter prediction mode are replaced as shown in Equations 8 to 10.

[Equation 8]

[Equation 9]

[Equation 10]
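Example 3 differs from Example 2 only in that each sample in an unusable run gets a distance-weighted value instead of one shared average. The following sketch uses the same assumed `None` representation and an assumed integer-rounding convention; the patent's exact formulas are in Equations 8 to 10.

```python
def substitute_example3(ref):
    """Replace each contiguous run of unusable samples (None) with values
    linearly interpolated between the usable reference samples bordering
    the run; a run at either end is padded from its single neighbour."""
    out = list(ref)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:
                j += 1  # j is the first usable index after the run
            left = out[i - 1] if i > 0 else None
            right = out[j] if j < n else None
            for k in range(i, j):
                if left is not None and right is not None:
                    d = j - (i - 1)   # distance between the two usable endpoints
                    w = k - (i - 1)   # distance from the left endpoint
                    out[k] = (left * (d - w) + right * w + d // 2) // d
                else:
                    out[k] = left if left is not None else right
            i = j
        else:
            i += 1
    return out
```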

Meanwhile, when the encoding target block is encoded in the intra prediction mode, filtering may be performed on the reference samples and the prediction samples to improve coding performance. Here, a prediction sample is the predicted value of a sample belonging to the encoding target block.

For example, a 3-tap low-pass filter with filter coefficients [1 2 1] or a 2-tap average filter with filter coefficients [1 1] may be applied to the reference samples.
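The 3-tap [1 2 1] low-pass filter can be sketched as follows. Keeping the first and last samples unfiltered and rounding with a +2 offset are common conventions assumed here, not details taken from the patent.

```python
def smooth_121(samples):
    """Apply a 3-tap [1 2 1]/4 low-pass filter to a list of reference
    samples; the end samples, which lack one neighbour, are unchanged."""
    out = list(samples)
    for i in range(1, len(samples) - 1):
        # (left + 2*center + right + 2) / 4 with rounding, in integer arithmetic.
        out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1] + 2) >> 2
    return out
```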

For example, in certain intra prediction modes such as the Intra_Vertical, Intra_Horizontal, and Intra_DC prediction modes, the prediction values of the samples on the boundary of the encoding target block may be derived based on the reference samples adjacent to those samples.

However, when the CIP mode is used, neighboring samples not encoded in the intra prediction mode are replaced with neighboring samples that are, so the pixel values of the reference samples are likely to be similar or identical to one another. Therefore, unlike when the CIP mode is not used, in the constrained intra prediction mode it may be desirable not to filter the reference samples, both to reduce computational complexity and to improve coding performance. Similarly, since prediction samples derived from reference samples with similar or identical pixel values are themselves likely to be similar or identical, it may be desirable not to filter the prediction samples either.

Accordingly, whether to perform filtering on the reference samples and/or the prediction samples may be adaptively determined based on [1] whether the CIP mode is used, [2] the availability of the block to which the reference sample belongs, [3] the pixel values of the reference samples, and/or [4] the encoding parameters of the block to which the reference sample belongs.

Example 1) Whether to perform filtering on a reference sample and / or a prediction sample is determined based on whether the CIP mode is used.

For example, when a filter would otherwise be applied to reference samples in an environment in which the CIP mode is used, filtering may not be performed on the reference samples in inter slices (P or B slices).

For example, when a filter would otherwise be applied to prediction samples in an environment in which the CIP mode is used, filtering may not be performed on the prediction samples in inter slices (P or B slices).

Example 2) Whether to perform filtering on a reference sample and/or a prediction sample is adaptively determined based on the availability of the block to which the reference sample belongs.

For example, when a filter is applied to reference samples, filtering may not be performed on a reference sample that corresponds to a sample of an unavailable block and has been replaced with a sample usable for intra prediction.

For example, when a filter is applied to prediction samples, filtering may not be performed on a prediction sample that is derived from a reference sample that corresponds to a sample of an unavailable block and has been replaced with a sample usable for intra prediction.

Example 3) Whether to perform filtering on the reference samples and/or the prediction samples is determined based on the pixel values of the reference samples. In this case, the similarity of the pixel values, measured from the mean, variance, or sample values of the reference samples, may be used.

For example, when a filter is applied to reference samples, filtering may not be performed on reference samples whose pixel values are similar or identical to each other.

For example, when applying a filter on prediction samples, filtering may not be performed on prediction samples derived based on reference samples whose pixel values are similar or identical to each other.
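A minimal sketch of the similarity test in Example 3, assuming a variance-based measure (the text only says the mean, variance, or sample values may be used); the function name and the threshold are illustrative assumptions:

```python
def should_filter_reference_samples(samples, var_threshold=0.0):
    """Decide whether reference-sample filtering is worthwhile.

    Returns False when the reference samples are (near-)identical,
    measured here by the variance of their pixel values, since
    filtering equal-valued samples cannot change them.
    """
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((s - mean) ** 2 for s in samples) / n
    return variance > var_threshold
```

The same test can gate the prediction-sample filter, since prediction samples derived from near-identical reference samples are themselves near-identical.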

Example 4) Whether to perform filtering on a reference sample and/or a prediction sample is determined based on an encoding parameter of the block to which the reference sample belongs. In this case, at least one of encoding parameters such as an intra prediction mode, a Most Probable Mode (MPM) flag, an inter prediction mode, a motion vector, a reference picture index, a quantization parameter, a coded block flag, and a coding mode indicating whether the block is encoded in the intra prediction mode or the inter prediction mode may be used.

For example, when a filter is applied to reference samples, filtering may not be performed on a reference sample that corresponds to a sample of a block not coded in the intra prediction mode and has been replaced with a sample of a block coded in the intra prediction mode.

For example, when a filter is applied to prediction samples, filtering may not be performed on a prediction sample derived from a reference sample that corresponds to a sample of a block not coded in the intra prediction mode and has been replaced with a sample of a block coded in the intra prediction mode.

Meanwhile, in a specific intra mode such as the Intra_Vertical, Intra_Horizontal, or Intra_DC prediction mode, as described above, the filter may be applied only to the prediction samples corresponding to the boundary of the encoding target block. Even in this case, whether to perform filtering on the prediction samples corresponding to the boundary of the encoding target block may be adaptively determined based on [1] whether the CIP mode is used, [2] the availability of the block to which the reference sample belongs, [3] the pixel values of the reference samples, and [4] the encoding parameters of the block to which the reference sample belongs. Referring back to FIG. 4, filtering may not be performed on the prediction samples 410 and 420 at positions [0 ... 5, 0] and [0, 4 ... 7].

FIG. 5 is a flowchart illustrating a method of performing in-loop filtering on a block coded in the CIP mode.

In the following description, for convenience, in-loop filtering is described using deblocking filtering as an example. However, the present invention is applicable not only to deblocking filtering but also to other in-loop filtering, such as Sample Adaptive Offset (SAO) compensation or the Adaptive Loop Filter (ALF).

The encoder determines a filtering target boundary (S510). In general, a boundary of a division unit of the image is determined as the filtering target boundary. For example, a boundary of a coding unit (CU), a boundary of a prediction unit (PU), or a boundary of a transform unit (TU) may be determined as the filtering target boundary. The filtering target boundary may be determined per coding unit (CU), per largest coding unit (LCU), per slice, or per picture.

The encoder determines whether to perform filtering and the type of filter based on at least one of the pixel values neighboring the filtering target boundary determined through step S510 and the filtering strength (S520). For example, based on the neighboring pixel values, it is determined whether the discontinuity at the filtering target boundary is a blocking artifact caused by transform and quantization or an actual edge present in the picture, and accordingly whether filtering is performed. The filter strength may indicate, for example, the tap size (the number of input samples) of a low-pass filter and the coefficients of the low-pass filter.

The encoder performs filtering based on the filtering target boundary determined through step S510 and on whether to perform filtering and the filter type determined through step S520. In this case, a low-pass filter may be used, according to the amount of change of the pixel values neighboring the filtering target boundary, to smooth the boundary between blocks, or a Wiener filter may be applied to minimize distortion relative to the original image. In addition, a one-dimensional filter or a two- or more-dimensional filter may be applied according to the filtering target boundary. For example, a two- or more-dimensional filter whose filter coefficients are structured in a shape such as a square, circle, or rectangle, or with horizontal, vertical, or diagonal symmetry, may be applied. In addition, various filters may be applied based on whether filtering is performed and the filtering strength determined through step S520.

Referring to FIG. 3 again, the image encoder determines encoding parameters of blocks located at both sides of the filtering boundary (S320).

The image encoder may determine whether to perform filtering and the type of filter based on the encoding parameters of the blocks located at both sides of the filtering target boundary. In this case, at least one of encoding parameters such as an intra prediction mode, an inter prediction mode, a motion vector, a reference picture index, a quantization parameter, a coded block flag, and a coding mode indicating whether the block is encoded in the intra prediction mode or the inter prediction mode may be used.

For example, the image encoder may determine whether the blocks located at both sides of the filtering target boundary are encoded in the intra prediction mode or the inter prediction mode. If a block is encoded in the intra prediction mode, the block may be said to be encoded in intra mode, or to be intra coded. Similarly, if a block is encoded in the inter prediction mode, the block may be said to be encoded in inter mode, or to be inter coded.

For example, when the blocks located at both sides of the filtering target boundary are encoded in the inter prediction mode, the image encoder may determine, e.g., from a coded block flag (CBF) or from the use of skip mode, whether transform coefficients for the residual signal are present, and the processing of the deblocking filter may differ accordingly.

For example, when blocks located on both sides of the filtering target boundary are encoded in a pulse coded modulation (PCM) mode, the image encoder may determine that the corresponding block is encoded in an intra prediction mode.

In addition, the image encoder may determine whether to perform filtering and the type of filter based on the pixel values neighboring the filtering target boundary together with the encoding parameters of the blocks located at both sides of the boundary. In this case, a difference, gradient, variance, or average of the pixel values neighboring the filtering target boundary may be used.

The image encoder determines whether to apply the in-loop filter based on whether the CIP mode is used and on the encoding parameters of the blocks located at both sides of the filtering target boundary.

If the CIP mode is not used, the image encoder performs filtering on samples located at both sides of the boundary to be filtered (S330).

If the CIP mode is used and the encoding parameters of the blocks located at both sides of the filtering target boundary show that those blocks are all encoded in the intra prediction mode, the image encoder performs filtering on the samples located at both sides of the filtering target boundary (S331).

FIG. 6 illustrates an example in which the blocks located at both sides of a filtering target boundary are both encoded in the intra prediction mode. If both blocks are intra coded, the samples belonging to the two blocks are not affected by any block coded in the inter prediction mode, in which an error may occur. Therefore, when constrained_intra_pred_flag is 1 and the blocks located at both sides of the filtering target boundary are both encoded in the intra prediction mode, deblocking filtering may be performed on the filtering target boundary.

When the CIP mode is used and the encoding parameters of the blocks located at both sides of the filtering target boundary show that those blocks are all encoded in the inter prediction mode, the image encoder may not apply the deblocking filter to the samples located at both sides of the filtering target boundary (S333 [1]).

FIG. 7 shows an example in which the blocks located at both sides of a filtering target boundary are both encoded in the inter prediction mode. When both blocks are inter coded (merge mode, skip mode, or PU_2Nx2N / PU_2NxN / PU_Nx2N / PU_NxN mode), an error may occur in the samples belonging to either block. Therefore, when constrained_intra_pred_flag is 1 and the encoding parameters of the blocks located at both sides of the filtering target boundary all indicate the inter prediction mode, the image encoder may not perform filtering on the samples located at both sides of the filtering target boundary.

In addition, when constrained_intra_pred_flag is 1 and the encoding parameters of the blocks located at both sides of the filtering target boundary all indicate the inter prediction mode, the image encoder may set the filtering strength to '0' so that no filtering is performed. Equation 11 shows an example of setting the filtering strength to '0'.

Figure 112012035958085-pat00013

Here, bS denotes the filtering strength, filterDir denotes the application direction (vertical/horizontal) of the one-dimensional filter, and xEk and yEj denote the position of the filtering target boundary.

Alternatively, when constrained_intra_pred_flag is 1 and the encoding parameters of the blocks located at both sides of the filtering target boundary all indicate the inter prediction mode, an error may occur in the samples belonging to either block, but performing deblocking filtering on those samples may not cause an additional problem, since both sides may already contain the error. Accordingly, the image encoder may perform filtering on the samples located at both sides of the filtering target boundary (S333 [2]).
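The boundary-strength decision described above (Equation 11 is rendered only as an image) can be sketched as follows. The concrete non-zero bS values are illustrative assumptions borrowed from common deblocking designs, not quoted from the text; the mixed intra/inter case under CIP is handled separately at the sample level, as described below.

```python
def boundary_strength(cip_enabled, p_is_intra, q_is_intra):
    """Illustrative boundary-strength (bS) decision for one
    filtering target boundary.

    * CIP enabled and both sides inter -> bS = 0, i.e. no filtering,
      as in Equation 11
    * at least one side intra -> a strong bS (the value 2 here is an
      assumption mirroring typical deblocking designs)
    * otherwise -> a default inter bS of 1 (assumption)
    """
    if cip_enabled and not p_is_intra and not q_is_intra:
        return 0  # Equation 11: suppress filtering entirely
    if p_is_intra or q_is_intra:
        return 2  # intra boundaries get the strongest filter
    return 1
```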

When the CIP mode is used and the encoding parameters of the blocks located at both sides of the filtering target boundary indicate that the two blocks are encoded in the intra prediction mode and the inter prediction mode, respectively, the image encoder may perform filtering only on the samples belonging to the block coded in the inter prediction mode among the samples located at both sides of the filtering target boundary (S332 [1]).

FIGS. 8 and 9 illustrate examples in which the blocks located at both sides of a filtering target boundary are encoded in the intra prediction mode and the inter prediction mode, respectively. When a block encoded in the intra prediction mode and a block encoded in the inter prediction mode are located at the two sides of the filtering target boundary, an error occurring in the reconstructed samples of the inter-coded block can affect the reconstructed samples of the intra-coded block. Therefore, by performing filtering only on the samples belonging to the block encoded in the inter prediction mode, and not on the samples belonging to the block encoded in the intra prediction mode, the propagation of errors can be prevented.

When filtering is performed only on the samples belonging to the block encoded in the inter prediction mode among the samples to be filtered, the following deblocking filtering process may be performed.

<Process 1 for Deblocking Filtering on Luma Samples>

- input

(1) sample value: pi, qi (i = 0 ... 3)

(2) Whether the filter is applied to each of p1 and q1 samples: dEp1, dEq1

(3) threshold for applying the filter: tc

- Output

(1) Number of samples filtered: nDp, nDq

(2) Filtered sample values: pi ', qj' (i = 0 ... nDp -1, j = 0 ... nDq-1)

If [1] pi is not a sample of an I_PCM block or pcm_loop_filter_disable_flag is 0, [2] pi belongs to a block encoded in the inter prediction mode, [3] qi belongs to a block encoded in the intra prediction mode, and [4] dE is 2, then nDp is 3 and a strong filter is applied to pi as follows.

Figure 112012035958085-pat00014

Figure 112012035958085-pat00015

Figure 112012035958085-pat00016

Here, Clip3(a, b, c) denotes clipping c to the range [a, b].

If [1] pi is not a sample of an I_PCM block or pcm_loop_filter_disable_flag is 0, [2] pi belongs to a block encoded in the inter prediction mode, [3] qi belongs to a block encoded in the intra prediction mode, and [4] dE is not 2, then nDp is 1 and a weak filter is applied to pi.

Figure 112012035958085-pat00017

If abs (Δ) is less than tc * 10, the following steps apply.

(1) p0 'is obtained as follows.

Figure 112012035958085-pat00018

Figure 112012035958085-pat00019

Here, Clip1Y (x) is defined as in Equation (18).

Figure 112012035958085-pat00020

Here, BitDepthY represents the bit depth of the luminance component.

(2) If dEp1 is 1, p1' is obtained as follows.

Figure 112012035958085-pat00021

Figure 112012035958085-pat00022

(3) nDp becomes dEp1 + 1.

If [1] qi is not a sample of an I_PCM block or pcm_loop_filter_disable_flag is 0, [2] qi belongs to a block encoded in the inter prediction mode, [3] pi belongs to a block encoded in the intra prediction mode, and [4] dE is 2, then nDq is 3 and a strong filter is applied to qi as follows.

Figure 112012035958085-pat00023

Figure 112012035958085-pat00024

Figure 112012035958085-pat00025

Here, Clip3(a, b, c) denotes clipping c to the range [a, b].

If [1] qi is not a sample of an I_PCM block or pcm_loop_filter_disable_flag is 0, [2] qi belongs to a block encoded in the inter prediction mode, [3] pi belongs to a block encoded in the intra prediction mode, and [4] dE is not 2, then nDq is 1 and a weak filter is applied to qi.

Figure 112012035958085-pat00026

If abs (Δ) is less than tc * 10, the following steps apply.

(1) q0 'is obtained as follows.

Figure 112012035958085-pat00027

Figure 112012035958085-pat00028

Here, Clip1Y (x) is defined as in Equation 27.

Figure 112012035958085-pat00029

Here, BitDepthY represents the bit depth of the luminance component.

(2) If dEq1 is 1, q1' is obtained as follows.

Figure 112012035958085-pat00030

Figure 112012035958085-pat00031

(3) nDq becomes dEq1 + 1.
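The clipping helpers and the weak-filter update used in the process above can be sketched as follows. Since Equations 12 through 22 are reproduced only as images, the concrete delta formula here follows the HEVC draft weak filter and is offered as an illustration under that assumption, not as a verbatim reproduction of the patent's equations.

```python
def clip3(a, b, c):
    """Clip c to the range [a, b] (Clip3 in the text)."""
    return max(a, min(b, c))

def clip1_y(x, bit_depth_y=8):
    """Clip a luma sample to its valid range (Clip1Y, Equation 18),
    where bit_depth_y is the bit depth of the luminance component."""
    return clip3(0, (1 << bit_depth_y) - 1, x)

def weak_filter_p0(p1, p0, q0, q1, tc, bit_depth_y=8):
    """Weak-filter update of the boundary sample p0.

    The delta formula matches the HEVC draft weak filter; the
    abs(delta) < tc * 10 test corresponds to the condition in the
    text, and the correction is limited to +/- tc before clipping
    the result to the valid sample range.
    """
    delta = (9 * (q0 - p0) - 3 * (q1 - p1) + 8) >> 4
    if abs(delta) >= tc * 10:      # boundary looks like a real edge
        return p0                  # leave the sample unfiltered
    delta = clip3(-tc, tc, delta)  # limit the correction to +/- tc
    return clip1_y(p0 + delta, bit_depth_y)
```

In the one-sided variant above, this update would be applied to p0 only when the p side is inter coded and the q side is intra coded.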

Meanwhile, deblocking filtering may be performed on luma samples as follows.

<Process 2 for Deblocking Filtering on Luma Samples>

- input

(1) sample value: pi, qi (i = 0 ... 3)

(2) Whether the filter is applied to each of p1 and q1 samples: dEp1, dEq1

(3) threshold for applying the filter: tc

- Output

(1) Number of samples filtered: nDp, nDq

(2) Filtered sample values: pi ', qj' (i = 0 ... nDp -1, j = 0 ... nDq-1)

If dE is 2, nDp and nDq are 3 and a strong filter is applied.

Figure 112012035958085-pat00032

Figure 112012035958085-pat00033

Figure 112012035958085-pat00034

Figure 112012035958085-pat00035

Figure 112012035958085-pat00036

Figure 112012035958085-pat00037

Here, Clip3(a, b, c) denotes clipping c to the range [a, b].

If dE is not 2, nDp and nDq are set to 1 and a weak filter is applied.

Figure 112012035958085-pat00038

If abs (Δ) is less than tc * 10, the following steps apply.

(1) p0 'and q0' are obtained as follows.

Figure 112012035958085-pat00039

Figure 112012035958085-pat00040

Figure 112012035958085-pat00041

Here, Clip1Y (x) is defined as in Equation 40.

Figure 112012035958085-pat00042

Here, BitDepthY represents the bit depth of the luminance component.

(2) If dEp1 is 1, p1' is obtained as follows.

Figure 112012035958085-pat00043

Figure 112012035958085-pat00044

(3) If dEq1 is 1, q1' is obtained as follows.

Figure 112012035958085-pat00045

Figure 112012035958085-pat00046

(4) nDp becomes dEp1 + 1, and nDq becomes dEq1 + 1.

If at least one of the following two conditions is satisfied, pi' (i = 0 ... nDp-1) is replaced by the input sample pi.

(1) pi is a sample of the I_PCM block and pcm_loop_filter_disable_flag is 1.

(2) pi belongs to a block coded in the intra prediction mode, and qi belongs to a block coded in the inter prediction mode.

qj' (j = 0 ... nDq-1) is replaced by the input sample qj if at least one of the following two conditions is satisfied.

(1) qj is a sample of the I_PCM block and pcm_loop_filter_disable_flag is 1.

(2) qj belongs to a block coded in the intra prediction mode, and pj belongs to a block coded in the inter prediction mode.

Meanwhile, deblocking filtering may be performed on chroma samples as follows.

<Process 1 for Deblocking Filtering on Chroma Samples>

- input

(1) sample value: pi, qi (i = 0, 1)

(2) threshold for applying a filter: tc

- Output

(1) Filtered sample values: p0 ', q0'

If [1] pi is not a sample of an I_PCM block or pcm_loop_filter_disable_flag is 0, [2] pi belongs to a block encoded in the inter prediction mode, and [3] qi belongs to a block encoded in the intra prediction mode, apply the following filter:

Figure 112012035958085-pat00047

Figure 112012035958085-pat00048

If [1] qi is not a sample of an I_PCM block or pcm_loop_filter_disable_flag is 0, [2] qi belongs to a block encoded in the inter prediction mode, and [3] pi belongs to a block encoded in the intra prediction mode, apply the following filter:

Figure 112012035958085-pat00049

Figure 112012035958085-pat00050

Meanwhile, deblocking filtering may be performed on chroma samples as follows.

<Process 2 for Deblocking Filtering on Chroma Samples>

- input

(1) sample value: pi, qi (i = 0, 1)

(2) threshold for applying a filter: tc

- Output

(1) Filtered sample values: p0 ', q0'

- Apply the following filter:

Figure 112012035958085-pat00051

Figure 112012035958085-pat00052

Figure 112012035958085-pat00053

If at least one of the following two conditions is satisfied, p0' is replaced by the input sample p0.

(1) p0 is a sample of the I_PCM block and pcm_loop_filter_disable_flag is 1.

(2) p0 belongs to a block coded in the intra prediction mode, and q0 belongs to a block coded in the inter prediction mode.

q0' is replaced by the input sample q0 when at least one of the following two conditions is satisfied.

(1) q0 is a sample of the I_PCM block and pcm_loop_filter_disable_flag is 1.

(2) q0 belongs to a block coded in the intra prediction mode, and p0 belongs to a block coded in the inter prediction mode.
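The chroma filtering and the restore step described above can be sketched together as follows. The delta formula follows the HEVC draft chroma filter and stands in for the image-only equations above; the function name and flag parameters are illustrative assumptions.

```python
def chroma_filter_with_cip_restore(p1, p0, q0, q1, tc,
                                   p_side_intra, q_side_intra,
                                   cip_enabled=True, bit_depth_c=8):
    """Chroma deblocking of the p0/q0 pair followed by the restore
    step: under CIP, a filtered sample on the intra-coded side is
    reverted to its input value when the opposite side is inter
    coded, so inter-side errors cannot leak into intra samples.
    """
    clip3 = lambda a, b, c: max(a, min(b, c))
    clip1_c = lambda x: clip3(0, (1 << bit_depth_c) - 1, x)

    # HEVC-draft chroma delta, limited to +/- tc
    delta = clip3(-tc, tc, (((q0 - p0) << 2) + p1 - q1 + 4) >> 3)
    p0_f = clip1_c(p0 + delta)
    q0_f = clip1_c(q0 - delta)

    # restore conditions (2) above, one per side
    if cip_enabled and p_side_intra and not q_side_intra:
        p0_f = p0  # revert the intra-side sample
    if cip_enabled and q_side_intra and not p_side_intra:
        q0_f = q0
    return p0_f, q0_f
```

The I_PCM restore condition (1) would be handled the same way, reverting a filtered sample whenever it belongs to an I_PCM block with pcm_loop_filter_disable_flag equal to 1.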

On the other hand, when the CIP mode is used and the encoding parameters of the blocks located at both sides of the filtering target boundary indicate that the two blocks are encoded in the intra prediction mode and the inter prediction mode, respectively, the image encoder may not perform filtering on the samples located at both sides of the filtering target boundary (S332 [2]).

For example, when one of the blocks located at both sides of the filtering target boundary is encoded in the intra prediction mode (PU_2Nx2N or PU_NxN), the filtering strength may not be determined and filtering may not be performed.

In addition, when the CIP mode is used and the encoding parameters of the blocks located at both sides of the filtering target boundary indicate that the two blocks are encoded in the intra prediction mode and the inter prediction mode, respectively, the image encoder may set the filtering strength to '0' so that no filtering is performed. Equation 52 shows an example of setting the filtering strength to '0'.

Figure 112012035958085-pat00054

Here, bS denotes the filtering strength, filterDir denotes the application direction (vertical/horizontal) of the one-dimensional filter, and xEk and yEj denote the position of the filtering target boundary.

Meanwhile, if the filtering target block is an I_PCM block and pcm_loop_filter_disable_flag is '1', deblocking filtering is not performed. Here, an I_PCM block is a block encoded in PCM mode using uncompressed original samples, and pcm_loop_filter_disable_flag is a flag indicating whether the loop filter process is performed on the reconstructed pixels of an I_PCM block. Accordingly, whether to perform filtering may be determined taking into account the case where the filtering target block is an I_PCM block.

FIG. 10 is a flowchart illustrating a method of performing deblocking filtering based on whether the CIP mode and the PCM mode are used and on the encoding parameters of the blocks located at both sides of a filtering target boundary.

When the CIP mode is used, that is, when constrained_intra_pred_flag is '1', the coding modes of the blocks P and Q located on both sides of the filtering boundary are determined.

When the blocks located at both sides of the filtering target boundary are encoded in the intra prediction mode and the inter prediction mode, respectively, the image encoder does not perform filtering on the samples belonging to the block encoded in the intra prediction mode among the samples located at both sides of the filtering target boundary.

When the blocks located at both sides of the filtering target boundary are both encoded in the intra prediction mode, or both encoded in the inter prediction mode, the image encoder checks pcm_loop_filter_disable_flag.

When pcm_loop_filter_disable_flag is '1', it is determined whether each of the blocks P and Q located at both sides of the filtering target boundary is an I_PCM block, and filtering is not performed on blocks that are I_PCM blocks.

When the CIP mode is not used, that is, when constrained_intra_pred_flag is '0', the image encoder determines pcm_loop_filter_disable_flag.

When pcm_loop_filter_disable_flag is '1', it is determined whether each of the blocks P and Q located at both sides of the filtering target boundary is an I_PCM block, and filtering is not performed on blocks that are I_PCM blocks.
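The FIG. 10 flow can be sketched as a per-boundary decision returning which side(s) may be filtered. This sketch adopts the S332[1] variant for the mixed intra/inter case under CIP (filter only the inter-coded side); the text also allows the S332[2] variant of filtering neither side. The function name and parameters are illustrative.

```python
def deblock_sides(constrained_intra_pred_flag, pcm_loop_filter_disable_flag,
                  p_intra, q_intra, p_is_ipcm=False, q_is_ipcm=False):
    """Return a (filter_p, filter_q) pair of booleans indicating
    which sides of a filtering target boundary may be filtered."""
    filter_p = filter_q = True
    if constrained_intra_pred_flag and p_intra != q_intra:
        # mixed boundary under CIP: do not filter the intra-coded
        # side, so inter-side errors cannot propagate into it
        filter_p = not p_intra
        filter_q = not q_intra
    if pcm_loop_filter_disable_flag:
        # I_PCM blocks bypass the loop filter entirely
        if p_is_ipcm:
            filter_p = False
        if q_is_ipcm:
            filter_q = False
    return filter_p, filter_q
```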

FIG. 11 is a flowchart illustrating an in-loop filtering method (at the decoder) according to an embodiment of the present invention.

The decoder may determine whether to perform in-loop filtering in the same manner as the encoder. That is, the image decoder determines whether the CIP mode is used (S1110).

When the CIP mode is used, the image decoder determines decoding parameters of blocks located at both sides of the filtering target boundary of the decoding target block (S1120).

The image decoder determines whether to apply the in-loop filter based on the use of the CIP mode and the decoding parameters of blocks located at both sides of the filtering target boundary.

For example, when the CIP mode is not used, the image decoder performs filtering on samples located at both sides of the boundary to be filtered (S1130).

For example, when the CIP mode is used and the decoding parameters of the blocks located at both sides of the filtering target boundary indicate that those blocks are all decoded in the intra prediction mode, the image decoder performs filtering on the samples located at both sides of the filtering target boundary (S1131).

For example, when the CIP mode is used and the decoding parameters of the blocks located at both sides of the filtering target boundary indicate that those blocks are all decoded in the inter prediction mode, the image decoder may not apply the deblocking filter to the samples located at both sides of the filtering target boundary (S1133 [1]).

For example, when the CIP mode is used and the decoding parameters of the blocks located at both sides of the filtering target boundary indicate that those blocks are all decoded in the inter prediction mode, the image decoder may alternatively perform filtering on the samples located at both sides of the filtering target boundary (S1133 [2]).

For example, when the CIP mode is used and the decoding parameters of the blocks located at both sides of the filtering target boundary indicate that the two blocks are decoded in the intra prediction mode and the inter prediction mode, respectively, the image decoder may perform filtering only on the samples belonging to the block decoded in the inter prediction mode among the samples located at both sides of the filtering target boundary (S1132 [1]).

For example, when the CIP mode is used and the decoding parameters of the blocks located at both sides of the filtering target boundary indicate that the two blocks are decoded in the intra prediction mode and the inter prediction mode, respectively, the image decoder may not perform filtering on the samples located at both sides of the filtering target boundary (S1132 [2]).

FIG. 12 is a flowchart illustrating an image encoding method according to an embodiment of the present invention.

The image encoder signals information for determining whether to use a limited in-loop filter (S1210). Here, the limited in-loop filter refers to an in-loop filter applied based on whether the CIP mode is used and on the encoding parameters of the blocks located at both sides of the filtering target boundary. Signaling the information for determining whether to use the limited in-loop filter means that the encoder inserts into the bitstream an indicator, such as a flag, indicating whether the limited in-loop filter is used, and that the decoder parses it.

When the limited in-loop filter is used, [1] filtering may be performed only on the samples belonging to a block encoded in the inter prediction mode among the samples located at both sides of the filtering target boundary, [2] filtering may not be performed on the samples located at both sides of the filtering target boundary, or [3] filtering may be performed on the samples located at both sides of the filtering target boundary.

For example, when the encoding parameters of the blocks located at both sides of the filtering target boundary indicate that the two blocks are encoded in the intra prediction mode and the inter prediction mode, respectively, filtering may be performed only on the samples belonging to the block coded in the inter prediction mode among the samples located at both sides of the filtering target boundary.

For example, if the encoding parameters of the blocks located at both sides of the filtering target boundary show that those blocks are all encoded in the inter prediction mode, filtering may not be performed on the samples located at both sides of the filtering target boundary.

For example, if the encoding parameters of the blocks located at both sides of the filtering target boundary show that those blocks are all encoded in the intra prediction mode, filtering may be performed on the samples located at both sides of the filtering target boundary.

The encoder may signal information indicating that at least one of the above examples applies, that is, information for determining whether to use the limited in-loop filter. The signaling may be performed per coding unit (CU), prediction unit (PU), transform unit (TU), largest coding unit (LCU), slice, or picture.

For example, when information indicating whether to use a limited in-loop filter is signaled through the PPS, constrained_in_loop_filter_flag may be transmitted as shown in Table 3.

Figure 112012035958085-pat00055

In this case, when the constrained_in_loop_filter_flag is '1', the limited in-loop filter may be used, and when the constrained_in_loop_filter_flag is '0', the limited in-loop filter may not be used.

Meanwhile, the image encoder may determine whether to signal information for determining whether to use the limited in-loop filter based on whether the CIP mode is used.

For example, if the CIP mode is used (constrained_intra_pred_flag = 1), the information for determining whether to use the limited in-loop filter may be signaled, and if the CIP mode is not used (constrained_intra_pred_flag = 0), the information for determining whether to use the limited in-loop filter may not be signaled.

In addition, the image encoder may determine whether to signal information for determining whether to use a limited in-loop filter based on encoding parameters of blocks located at both sides of the filtering target boundary.

For example, when one or more blocks among blocks located on both sides of the filtering target boundary are encoded in an inter prediction mode, information for determining whether to use a limited in-loop filter may be signaled.

Table 4 shows an example of determining whether to signal information for determining whether to use a limited in-loop filter based on coding parameters of blocks located at both sides of a filtering target boundary.

Figure 112012035958085-pat00056

Alternatively, the image encoder may signal the information for determining whether to use the limited in-loop filter by default.

The image encoder determines the encoding parameters of the blocks located at both sides of the filtering target boundary (S1220). In this case, the image encoder may determine these encoding parameters in the same manner as in step S320 of FIG. 3, the step of determining the encoding parameters of the blocks located at both sides of the filtering target boundary.

The image encoder determines whether to apply the in-loop filter based on the information for determining whether to use the constrained in-loop filter and the encoding parameters of the blocks located at both sides of the filtering target boundary (S1230).

FIG. 13 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.

The image decoder may determine whether to use the constrained in-loop filter in a manner corresponding to that of the image encoder. That is, the image decoder parses the information for determining whether to use the constrained in-loop filter (S1310).

The image decoder determines the decoding parameters of the blocks located at both sides of the filtering target boundary (S1320).

The image decoder determines whether to apply the in-loop filter based on the information for determining whether to use the constrained in-loop filter and the decoding parameters of the blocks located at both sides of the filtering target boundary (S1330).
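The decoding flow S1310–S1330 can be sketched as below. The concrete decision rule inside `decide_filtering` is an assumption for illustration: when the constrained in-loop filter is enabled, filtering is skipped across a boundary between an intra-coded block and an inter-coded block, so that possibly erroneous inter-predicted samples do not propagate into error-free intra regions.

```python
# Illustrative sketch of steps S1310-S1330. The skip rule is an assumed
# example of a constrained in-loop filter policy, not the patent's
# normative definition.

def decide_filtering(constrained_flag, mode_p, mode_q):
    # S1330: skip the in-loop filter when the constrained filter is on
    # and exactly one side of the boundary is intra-coded.
    if constrained_flag and {mode_p, mode_q} == {"intra", "inter"}:
        return False
    return True

def decode_boundary(parsed_flag, mode_p, mode_q):
    constrained_flag = parsed_flag                 # S1310: parse the flag
    modes = (mode_p, mode_q)                       # S1320: decoding parameters
    return decide_filtering(constrained_flag, *modes)  # S1330: decide
```

With this rule, normal deblocking behavior is preserved everywhere except at the mixed intra/inter boundaries that the constrained filter is designed to protect.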

The present invention is not limited to the order of the steps described above; some steps may occur in a different order from, or in parallel with, other steps. In addition, those of ordinary skill in the art will appreciate that the steps shown in the flowcharts are not exclusive, and that other steps may be included or some steps may be deleted.

In addition, the above-described embodiments include examples of various aspects. Although not all possible combinations of the various aspects can be described, those of ordinary skill in the art will recognize that other combinations are possible. Accordingly, the invention is intended to embrace all replacements, modifications, and variations that fall within the scope of the following claims.

Claims (45)

Claims 1 to 15. (Deleted)
16. An image decoding method comprising:
constructing reference samples from neighboring samples of a current block; and
predicting the current block based on the reference samples to generate a prediction block,
wherein constructing the reference samples comprises:
deriving a replacement sample value by selecting two reference samples located at both sides of at least one replacement target reference sample included in the reference samples; and
replacing the sample value of the replacement target reference sample with the replacement sample value,
wherein, when the replacement target reference sample is a sample located to the left of the current block,
the x-coordinates of the two reference samples are the same as the x-coordinate of the replacement target reference sample, and
one of the two reference samples is a sample at a fixed position having a y-coordinate smaller than the y-coordinate of the replacement target reference sample, and the other of the two reference samples is a sample at a fixed position having a y-coordinate larger than the y-coordinate of the replacement target reference sample.
Claims 17 to 22. (Deleted)
23. The method of claim 16, wherein deriving the replacement sample value using the two reference samples comprises performing linear interpolation using the two reference samples, or a weighted average of the two reference samples, to derive the replacement sample value.
24. The method of claim 23, wherein, when the weighted average is performed to derive the replacement sample value, a weight applied to each of the two reference samples is determined based on a position of the replacement target reference sample.
25. The method of claim 24, wherein, when the weighted average is performed to derive the replacement sample value, a larger weight is applied to the reference sample closer to the replacement target reference sample among the two reference samples.
Claims 26 to 28. (Deleted)
29. An image encoding method comprising:
constructing reference samples from neighboring samples of a current block; and
predicting the current block based on the reference samples to generate a prediction block,
wherein constructing the reference samples comprises:
deriving a replacement sample value by selecting two reference samples located at both sides of at least one replacement target reference sample included in the reference samples; and
replacing the sample value of the replacement target reference sample with the replacement sample value,
wherein, when the replacement target reference sample is a sample located to the left of the current block,
the x-coordinates of the two reference samples are the same as the x-coordinate of the replacement target reference sample, and
one of the two reference samples is a sample at a fixed position having a y-coordinate smaller than the y-coordinate of the replacement target reference sample, and the other of the two reference samples is a sample at a fixed position having a y-coordinate larger than the y-coordinate of the replacement target reference sample.
Claims 30 to 35. (Deleted)
36. The method of claim 29, wherein deriving the replacement sample value using the two reference samples comprises performing linear interpolation using the two reference samples, or a weighted average of the two reference samples, to derive the replacement sample value.
37. The method of claim 36, wherein, when the weighted average is performed to derive the replacement sample value, a weight applied to each of the two reference samples is determined based on a position of the replacement target reference sample.
38. The method of claim 37, wherein, when the weighted average is performed to derive the replacement sample value, a larger weight is applied to the reference sample closer to the replacement target reference sample among the two reference samples.
Claims 39 to 41. (Deleted)
42. An image decoding device comprising an intra prediction unit,
wherein the intra prediction unit constructs reference samples from neighboring samples of a current block and predicts the current block based on the reference samples to generate a prediction block,
wherein constructing the reference samples comprises:
deriving a replacement sample value by selecting two reference samples located at both sides of at least one replacement target reference sample included in the reference samples; and
replacing the sample value of the replacement target reference sample with the replacement sample value,
wherein, when the replacement target reference sample is a sample located to the left of the current block,
the x-coordinates of the two reference samples are the same as the x-coordinate of the replacement target reference sample, and
one of the two reference samples is a sample at a fixed position having a y-coordinate smaller than the y-coordinate of the replacement target reference sample, and the other of the two reference samples is a sample at a fixed position having a y-coordinate larger than the y-coordinate of the replacement target reference sample.
43. An image encoding device comprising an intra prediction unit,
wherein the intra prediction unit constructs reference samples from neighboring samples of a current block and predicts the current block based on the reference samples to generate a prediction block,
wherein constructing the reference samples comprises:
deriving a replacement sample value by selecting two reference samples located at both sides of at least one replacement target reference sample included in the reference samples; and
replacing the sample value of the replacement target reference sample with the replacement sample value,
wherein, when the replacement target reference sample is a sample located to the left of the current block,
the x-coordinates of the two reference samples are the same as the x-coordinate of the replacement target reference sample, and
one of the two reference samples is a sample at a fixed position having a y-coordinate smaller than the y-coordinate of the replacement target reference sample, and the other of the two reference samples is a sample at a fixed position having a y-coordinate larger than the y-coordinate of the replacement target reference sample.
44. A computer-readable recording medium storing a bitstream generated by an image encoding method,
the image encoding method comprising:
constructing reference samples from neighboring samples of a current block; and
predicting the current block based on the reference samples to generate a prediction block,
wherein constructing the reference samples comprises:
deriving a replacement sample value by selecting two reference samples located at both sides of at least one replacement target reference sample included in the reference samples; and
replacing the sample value of the replacement target reference sample with the replacement sample value,
wherein, when the replacement target reference sample is a sample located to the left of the current block,
the x-coordinates of the two reference samples are the same as the x-coordinate of the replacement target reference sample, and
one of the two reference samples is a sample at a fixed position having a y-coordinate smaller than the y-coordinate of the replacement target reference sample, and the other of the two reference samples is a sample at a fixed position having a y-coordinate larger than the y-coordinate of the replacement target reference sample.
45. A computer-readable recording medium storing a bitstream received by an image decoding apparatus and used to reconstruct a current block included in an image,
wherein the bitstream includes prediction information of the current block,
the prediction information is decoded by the image decoding apparatus and used to generate a prediction block of the current block,
wherein, to generate the prediction block of the current block,
reference samples are constructed from neighboring samples of the current block, and the prediction block is generated by predicting the current block based on the reference samples,
wherein the reference samples are constructed by selecting two reference samples located at both sides of at least one replacement target reference sample included in the reference samples to derive a replacement sample value, and replacing the sample value of the replacement target reference sample with the replacement sample value,
wherein, when the replacement target reference sample is a sample located to the left of the current block,
the x-coordinates of the two reference samples are the same as the x-coordinate of the replacement target reference sample, and
one of the two reference samples is a sample at a fixed position having a y-coordinate smaller than the y-coordinate of the replacement target reference sample, and the other of the two reference samples is a sample at a fixed position having a y-coordinate larger than the y-coordinate of the replacement target reference sample.
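The reference sample replacement described in the claims can be sketched as below. The two selected samples share the x-coordinate of the replacement target; one sits at a fixed position above it (smaller y) and one at a fixed position below it (larger y), and the replacement value is a distance-weighted average in which the nearer fixed sample receives the larger weight. The particular fixed positions used here (the top and bottom of the left reference column) are an assumption for illustration.

```python
# Sketch of replacing an unavailable left-column reference sample by a
# linear interpolation between two fixed-position samples in the same
# column. The closer fixed sample contributes the larger weight.

def replace_left_reference_sample(left_col, target_y):
    """left_col: reference sample values along the left boundary, indexed
    by y-coordinate. Returns the replacement value for position target_y."""
    y_above, y_below = 0, len(left_col) - 1      # assumed fixed positions
    d_above = target_y - y_above                 # distance to upper sample
    d_below = y_below - target_y                 # distance to lower sample
    total = d_above + d_below
    # Inverse-distance weights: the nearer sample gets the larger weight.
    w_above = d_below / total
    w_below = d_above / total
    return w_above * left_col[y_above] + w_below * left_col[y_below]
```

With `left_col = [0, None, None, 30]`, the replacement at `target_y = 1` weights the upper sample twice as heavily as the lower one, reproducing plain linear interpolation between the two fixed positions.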
KR1020120047758A 2011-05-04 2012-05-04 Video encoding/decoding method using error-resilient in-loop filter and signaling method relating to the same KR102014177B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020120047758A KR102014177B1 (en) 2011-05-04 2012-05-04 Video encoding/decoding method using error-resilient in-loop filter and signaling method relating to the same
PCT/KR2012/003540 WO2012150849A2 (en) 2011-05-04 2012-05-04 Video encoding/decoding method using error resilient loop filter, and signaling method thereof

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
KR20110042694 2011-05-04
KR1020110042694 2011-05-04
KR1020110052610 2011-06-01
KR20110052610 2011-06-01
KR20110065713 2011-07-01
KR1020110065713 2011-07-01
KR1020120039422 2012-04-16
KR1020120039422A KR20120125160A (en) 2011-05-04 2012-04-16 Method and apparatus for video encoding and decoding using error resilient filtering
KR1020120047758A KR102014177B1 (en) 2011-05-04 2012-05-04 Video encoding/decoding method using error-resilient in-loop filter and signaling method relating to the same

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020190064413A Division KR102112264B1 (en) 2011-05-04 2019-05-31 Video encoding/decoding method using error-resilient in-loop filter and signaling method relating to the same

Publications (2)

Publication Number Publication Date
KR20120125193A KR20120125193A (en) 2012-11-14
KR102014177B1 true KR102014177B1 (en) 2019-10-21

Family

ID=47108162

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120047758A KR102014177B1 (en) 2011-05-04 2012-05-04 Video encoding/decoding method using error-resilient in-loop filter and signaling method relating to the same

Country Status (2)

Country Link
KR (1) KR102014177B1 (en)
WO (1) WO2012150849A2 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107172424B (en) * 2011-03-30 2020-04-14 LG Electronics Inc. Loop filtering method and apparatus thereof
US9596461B2 (en) * 2012-11-26 2017-03-14 Qualcomm Incorporated Loop filtering across constrained intra block boundaries in video coding
US9510021B2 (en) 2013-05-24 2016-11-29 Electronics And Telecommunications Research Institute Method and apparatus for filtering pixel blocks
AU2013228045A1 (en) * 2013-09-13 2015-04-02 Canon Kabushiki Kaisha Method, apparatus and system for encoding and decoding video data
US10979714B2 (en) 2016-09-05 2021-04-13 Lg Electronics, Inc. Image coding/decoding method and apparatus therefor
CN116366844A (en) 2018-03-09 2023-06-30 韩国电子通信研究院 Image encoding/decoding method and apparatus using sample filtering
KR20240018689A (en) 2018-07-30 2024-02-13 삼성전자주식회사 Method and apparatus for image encoding, and method and apparatus for image decoding
CN109982075B (en) * 2019-03-21 2022-11-08 南京威翔科技有限公司 Intra-frame prediction universal angle method based on FPGA
EP3989586A4 (en) * 2019-06-21 2023-07-19 Samsung Electronics Co., Ltd. Video encoding method and device for performing post-reconstruction filtering in constrained prediction mode, and video decoding method and device
MX2022000963A (en) * 2019-07-21 2022-03-22 Lg Electronics Inc Image encoding/decoding method and apparatus for performing deblocking filtering according to whether palette mode is applied, and method for transmitting bitstream.
CN113411584A (en) * 2020-03-17 2021-09-17 北京三星通信技术研究有限公司 Video coding and decoding method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4821723B2 (en) * 2007-07-13 2011-11-24 富士通株式会社 Moving picture coding apparatus and program
KR101749269B1 (en) * 2009-06-30 2017-06-22 삼성전자주식회사 Apparaus and method for video encoding and decoding apparatus using adaptive in loop filter
JP5353532B2 (en) * 2009-07-29 2013-11-27 ソニー株式会社 Image processing apparatus and image processing method
JP5347849B2 (en) * 2009-09-01 2013-11-20 ソニー株式会社 Image encoding apparatus, image receiving apparatus, image encoding method, and image receiving method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yunfei Zheng "CE13: Mode Dependent Hybrid Intra Smoothing", JCTVC-D282, 21 January 2011.*
Jung Je-chang (trans.), H.264/AVC Video Compression Standard, Hongrung Science Publishing, published 2007-04-05*

Also Published As

Publication number Publication date
WO2012150849A3 (en) 2013-03-21
KR20120125193A (en) 2012-11-14
WO2012150849A2 (en) 2012-11-08

Similar Documents

Publication Publication Date Title
KR102014177B1 (en) Video encoding/decoding method using error-resilient in-loop filter and signaling method relating to the same
KR101962183B1 (en) Method for encoding/decoding an intra prediction mode and apparatus for the same
KR101937213B1 (en) Method for coding/decoding of intra prediction mode and computer readable redording meduim thereof
CA2997097C (en) Method and device for processing video signal
KR102229157B1 (en) Image information encoding and decoding method
KR101920105B1 (en) Method for providing compensation offsets for a set of reconstructed samples of an image
US9161046B2 (en) Determining quantization parameters for deblocking filtering for video coding
US9363533B2 (en) Method and apparatus for video-encoding/decoding using filter information prediction
KR102539354B1 (en) Method for processing image based on intra prediction mode and apparatus therefor
AU2012336598A1 (en) Method and apparatus for encoding image, and method and apparatus for decoding image
KR20230098714A (en) Image processing method and apparatus therefor
US12003737B2 (en) Coding of transform coefficients in video coding
JP2024506213A (en) Encoding/decoding method, apparatus and device thereof
USRE49308E1 (en) Method and apparatus for video-encoding/decoding using filter information prediction
KR102115822B1 (en) Video encoding/decoding method using error-resilient in-loop filter and signaling method relating to the same
KR101475286B1 (en) Method and apparatus for intra prediction, and apparatus for processing picture
KR102410326B1 (en) Method and apparatus for encoding/decoding a video signal
KR20130070215A (en) Method and apparatus for seletcing the adaptive depth information and processing deblocking filtering
KR20130107611A (en) Methods of encoding and decoding using bottom-up prediction mode decision and apparatuses for using the same

Legal Events

Date Code Title Description
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant