KR20130070195A - Method and apparatus for context-based adaptive sao direction selection in video codec - Google Patents


Info

Publication number
KR20130070195A
Authority
KR
South Korea
Prior art keywords
block
sao
filter
additional
image
Prior art date
Application number
KR1020110137406A
Other languages
Korean (ko)
Inventor
유은경
조현호
남정학
심동규
Original Assignee
Kwangwoon University Industry-Academic Collaboration Foundation (광운대학교 산학협력단)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kwangwoon University Industry-Academic Collaboration Foundation (광운대학교 산학협력단)
Priority to KR1020110137406A
Publication of KR20130070195A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a method and apparatus for context-based inference of the SAO filter direction in a video decoder. The method includes: determining, in the encoder/decoder, characteristics of an area or a block in order to choose between a basic direction and an additional direction for the SAO; and selecting one of the basic direction and the additional direction according to the characteristics of the area or the block.

Description

METHODS AND APPARATUS FOR CONTEXT-BASED ADAPTIVE SAO DIRECTION SELECTION IN VIDEO CODEC

The present invention relates to an image processing technique, and more particularly, to a method and apparatus for adaptively inferring an added edge-offset direction of the SAO (Sample Adaptive Offset), an in-loop filter used in video coding, based on context information when encoding/decoding a video.

Recently, as broadcast services with high-definition (HD) resolution (1280x720 or 1920x1080) have expanded not only in Korea but worldwide, many users have become accustomed to high-resolution, high-quality video services. Based on these trends, video standardization organizations have noted the need for compression technology for Ultra High Definition (UHD) video, which has more than four times the resolution of HDTV, alongside continued work on HDTV. Accordingly, the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG) have jointly formed the Joint Collaborative Team on Video Coding (JCT-VC) and are working on a new-generation standard, High Efficiency Video Coding (HEVC). HEVC aims to provide the same image quality as existing coding schemes while achieving higher gains in terms of bandwidth and storage through compression efficiency greater than that of H.264/Advanced Video Coding (AVC), the most recently standardized video compression coding standard. The objective of JCT-VC is to encode not only HD video but also UHD video with twice the compression efficiency of H.264/AVC.

An object of the present invention is to provide a video coding method and apparatus that subdivide the directions of the SAO edge offset and adaptively infer and filter along an added direction based on context information, thereby improving encoding/decoding efficiency.

In a video decoding method according to an embodiment of the present invention for solving the above problems, the Sample Adaptive Offset (SAO) included in the in-loop filter is performed independently per unit (region) to reduce the quantization error of the reconstructed image; filtering is performed by applying the SAO and the Adaptive Loop Filter (ALF) included in the in-loop filter to the reconstructed image.

SAO is a method of reducing quantization error by determining a plurality of pixel sets in an area or a block and sending an offset for each pixel set. The pixel sets may be determined either by the edge direction within the region or block or according to the luminance value. When dividing pixel sets by edge direction, subdividing the edge direction according to the characteristics of an area or a block allows more directions to be considered. When the filter direction is chosen from among a basic direction and an additional direction within one group, encoding efficiency can be improved if the encoder and decoder adaptively infer the added direction using the same method.
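As an illustration of grouping pixels into sets by luminance value (the second classification style mentioned above), here is a minimal sketch; the 32-band equal-width split is an assumed common convention, not something this text specifies:

```python
def band_index(pixel, bit_depth=8, num_bands=32):
    """Map a luminance value to one of num_bands equal-width bands.

    Sketch of band-style pixel-set classification; the 32-band split
    is an assumed convention, not mandated by the text above.
    """
    band_width = (1 << bit_depth) // num_bands  # 256 // 32 = 8 for 8-bit video
    return min(pixel // band_width, num_bands - 1)
```

Each band would then receive its own signaled offset.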

According to an aspect of the present invention, there is provided a video decoding apparatus comprising: a filter selection parameter extractor configured to adaptively determine the SAO direction on a block or region basis based on context information; and a filter direction determiner configured to determine which filtering to apply according to the characteristics of the current block or region.

According to an embodiment of the present invention, when performing SAO, a filtering method is provided that can improve both subjective and objective picture quality by adaptively selecting among more finely subdivided edge directions according to the characteristics of the image being decoded. Selecting adaptively means determining the filter direction by examining the characteristics of the area or block to be decoded, in order to pick one direction out of a group comprising one basic direction and one or more additional directions. For example, when extracting filter selection parameters to determine the filter direction for SAO edge offsets, parameter values computed by in-loop filter processes other than SAO may be used to characterize the current block or region, and the subdivided edge direction can then be considered adaptively. As another example, features may be identified using the edge direction, variance, or Laplacian of the decoded region or block, again allowing the subdivided edge direction to be considered adaptively. Considering the edge direction adaptively means that the filter direction is selected according to the image characteristics at the decoder without sending a selection bit to the decoder. The proposed method can improve coding efficiency, thereby providing better image quality at the same bit rate.

1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.
2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
3 is a block diagram illustrating a detailed configuration of an in-loop filter according to an embodiment of an image encoding apparatus to which the present invention is applied.
4 is a block diagram illustrating a detailed configuration of an in-loop filter according to an embodiment of an image decoding apparatus to which the present invention is applied.
5 is one embodiment of the existing and additional directions of the present invention.
FIG. 6 is an embodiment of setting a group with respect to the additional direction and the existing direction shown in FIG. 5.
FIG. 7 is an embodiment of setting a group with respect to the additional direction and the existing direction shown in FIG. 5.
8 is additional syntax information indicating whether the present invention is used.
FIG. 9 is a detailed block diagram of SAO illustrating an embodiment of context-based SAO filter direction inference of SAO 405 in FIG. 4.
10 is a flowchart illustrating a context-based SAO filter direction inference method.
11 is an embodiment for adaptively determining direction information for filtering in context-based SAO filter direction inference.
FIG. 12 illustrates an embodiment of using AIF as a parameter for directional information for adaptive filtering in context-based SAO filter direction inference.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the following description of the embodiments of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present disclosure rather unclear.

An area (region) is defined as the basic unit for which the SAO parameters are sent. In HEVC, a region refers to a bundle of LCUs obtained by quad-tree partitioning of the entire image.

A block refers to all basic units used in video coding, and may be an LCU, a CU, a PU, or the like.

1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the image encoding apparatus 100 may include a motion estimator 101, a motion compensator 102, a reference image buffer 103, a filter unit 104, an intra predictor 105, a transform unit 106, a quantization unit 107, an inverse transform unit 108, an inverse quantization unit 109, and an entropy encoding unit 110.

The image encoding apparatus 100 encodes an input image in an intra mode or an inter mode and generates a residual block. Intra mode corresponds to intra-picture prediction and inter mode to inter-picture prediction. The image encoding apparatus 100 generates a prediction block for an input block of the input image and then encodes the difference between the input block and the prediction block.

In the intra mode, the intra predictor 105 generates a prediction block by performing spatial prediction using pixel values of already-encoded blocks around the current block. In the inter mode, the motion estimator 101 finds, during the motion estimation process, the motion vector pointing to the region of the reference picture stored in the reference picture buffer 103 that best matches the input block. The motion compensator 102 generates a prediction block by performing motion compensation using this motion vector.

The transform unit 106 performs a transform on the residual block and outputs a transform coefficient. The quantization unit 107 quantizes the input transform coefficient according to the quantization parameter and outputs a quantized coefficient. The entropy encoding unit 110 entropy codes the input quantized coefficients according to a probability distribution and outputs a bit stream.

Since HEVC performs inter-picture prediction coding, the currently encoded image needs to be decoded and stored for use as a reference image. Accordingly, the quantized coefficients are inversely quantized by the inverse quantizer 108 and inversely transformed by the inverse transformer 109. The inversely quantized, inversely transformed coefficients are added to the prediction block, and a reconstruction block is generated.

The reconstruction block passes through the filter unit 104, which may apply at least one of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstruction block or the reconstructed image. The filter unit 104 may be referred to as an adaptive in-loop filter. The deblocking filter removes block distortion occurring at the boundaries between blocks. The SAO adds an appropriate offset value to pixel values to compensate for quantization error. The ALF performs filtering based on a comparison between the reconstructed image and the original image, and may be applied only in a high-efficiency configuration. The reconstruction block that has passed through the filter unit 104 is stored in the reference image buffer 103.
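The filter chain just described (deblocking, then SAO, then optionally ALF) can be sketched as follows; the three per-filter functions are hypothetical pass-through stubs standing in for the real filters, each of which would need its own signaled parameters:

```python
def deblocking_filter(img):
    # Placeholder: the real filter smooths block-boundary discontinuities.
    return img

def sample_adaptive_offset(img):
    # Placeholder: the real SAO adds a signaled offset per pixel set.
    return img

def adaptive_loop_filter(img):
    # Placeholder: the real ALF applies signaled filter coefficients.
    return img

def in_loop_filter(recon, use_deblock=True, use_sao=True, use_alf=False):
    """Apply at least one of the three in-loop filters, in the usual order."""
    if use_deblock:
        recon = deblocking_filter(recon)
    if use_sao:
        recon = sample_adaptive_offset(recon)
    if use_alf:
        recon = adaptive_loop_filter(recon)
    return recon
```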

2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.

Referring to FIG. 2, the image decoding apparatus 200 may include an entropy decoder 201, an inverse quantizer 202, an inverse transformer 203, a filter unit 204, an intra predictor 205, a motion compensator 206, and a reference picture buffer 207.

The image decoding apparatus 200 receives the bitstream output from the encoder and performs decoding in an intra mode or an inter mode to obtain a residual block. After the residual block and the prediction block are added, the encoded image is reconstructed and output. In the intra mode the switch is set to intra, and in the inter mode it is set to inter. The image decoding apparatus 200 obtains the residual block from the input bitstream, generates a prediction block, and then adds the two to generate a reconstruction block.

The entropy decoder 201 entropy decodes an input bitstream according to a probability distribution according to a syntax, and outputs a syntax value and a quantized coefficient necessary for decoding. The quantized coefficients are inversely quantized by the inverse quantizer 202 and inversely transformed by the inverse transformer 203, and a residual block is generated as a result of the inverse quantization / inverse transformation of the quantized coefficients.

In the intra mode, the intra predictor 205 generates a prediction block by performing spatial prediction using pixel values of blocks that are already encoded around the current block. In the inter mode, the motion compensator 206 generates a predictive block by performing motion compensation using the motion vector and the reference picture stored in the reference picture buffer 207.

After the residual block and the prediction block are added, the summed image becomes the input of the filter unit 204. The filter unit 204 may apply at least one of the deblocking filter, SAO, and ALF to the reconstructed block or picture and outputs the reconstructed image. The reconstructed picture may be stored in the reference picture buffer 207 and used for inter prediction.

FIG. 3 is an example of subdividing the filter unit 104 of FIG. 1. In the present invention, the filter unit 104 includes the SAO 304 as an essential part, and an ALF 303 or a deblocking filter 305 may be added.

The adaptive loop filter (ALF) 303 is a method of obtaining a filter coefficient and a filter shape that make the input image most similar to the original, and performing filtering on a pixel basis using the filter coefficient.

Sample Adaptive Offset (SAO) 304 is a filtering method that reduces the error introduced by quantization by adding an offset to a set of pixels with similar characteristics, and is a method currently proposed for HEVC standardization. The basic unit of SAO execution is a region, obtained by quad-tree partitioning of the entire image. The minimum unit of partitioning is the LCU (Largest Coding Unit), and a region is usually a bundle of LCUs.

The basic method of SAO determines the pixel set for the area divided by the SAO segmentation information, and adds an offset to the determined pixel set. There are one or more pixel sets depending on the filter method.

SAO has two main filter methods, edge offset and band offset. For the edge offset, the pixel sets are determined according to the direction; for the band offset, the histogram of luminance values is divided uniformly and each pixel is assigned to the set of its corresponding band. Among the resulting pixel sets, those that can increase encoding efficiency are chosen. Since the encoder 300 knows both the original image and the image reconstructed after the prediction, transformation, and quantization processes, it determines for each region the most suitable pixel sets among the candidates so as to minimize the error between the two images, finds the offset for each pixel set, and outputs the values in the bitstream.
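The encoder-side decision above — choosing, per pixel set, the offset that minimizes the error against the original — can be sketched as the rounded mean of the per-sample error over each set. This is a simplification of the actual rate-distortion search, for illustration only:

```python
def best_offsets(original, recon, categories, num_sets=4):
    """Per pixel set, the offset minimizing squared error is the mean of
    (original - reconstructed) over that set; rounded here for signaling.

    categories[i] is the pixel-set index of sample i, or None if the
    sample belongs to no set. A sketch, not the exact reference-encoder
    search.
    """
    sums = [0] * num_sets
    counts = [0] * num_sets
    for o, r, c in zip(original, recon, categories):
        if c is not None:
            sums[c] += o - r
            counts[c] += 1
    return [round(s / n) if n else 0 for s, n in zip(sums, counts)]
```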

The deblocking filter 305 removes the blocking phenomenon, i.e., the visible discontinuities between neighboring blocks that appear when quantization is performed in the frequency domain. The filtering method determines a threshold based on parameter information such as the QP value, compares pixel brightness differences between two blocks at PU, TU, and CU boundaries of 8x8 or larger with the threshold to decide whether to filter, and performs the filtering accordingly.

FIG. 4 is an example of subdividing the filter unit 204 of FIG. 2. In the present invention, the filter unit 204 includes the SAO 402 as an essential part, and an ALF 401 or a deblocking filter 403 may be added.

The adaptive loop filter (ALF) 406 is a method of finding filter coefficients and filter shapes sent from an encoder and performing filtering on a pixel-by-pixel basis.

Sample Adaptive Offset (SAO) 405 is a filtering method used to reduce the error introduced by quantization in the encoder, and is a method added during HEVC standardization. The basic unit of SAO execution is a region, obtained by quad-tree partitioning of the entire image. The decoder derives the pixel sets for a region and adds the offsets sent by the encoder to each set. There may be one or more pixel sets.

SAO has two main filter methods, edge offset and band offset. For the edge offset, the pixel sets are determined according to the direction; for the band offset, the histogram of luminance values is divided uniformly and each pixel is assigned to the set of its corresponding band. Since the decoder's SAO 405 has no information about the original image, it identifies the pixel sets using the SAO syntax information included in the bitstream and minimizes the quantization error by adding the offset value sent from the encoder to each identified pixel set.

The deblocking filter 404 removes the blocking phenomenon, i.e., the visible discontinuities between neighboring blocks that appear when quantization is performed in the frequency domain. The filtering method determines a threshold based on parameter information such as the QP value, decides whether to filter by comparing the difference between two blocks at PU, TU, and CU boundaries of 8x8 or larger with the threshold, and performs the filtering accordingly. The deblocking filter 404 has no additional syntax other than the on/off syntax, and performs filtering using only the reconstructed image and parameter values.

5 is a detailed block diagram of the SAO decoding apparatus of the present invention. The entropy decoder 501 receives a bitstream, parses it, and outputs a required syntax value. The pixel reconstruction unit 502 reconstructs the encoded image through inverse quantization, inverse transformation, and prediction using the syntax information output from the entropy decoding unit 501, and outputs the reconstructed image. The outputted reconstructed image becomes an input of the SAO execution unit 505. If there is an in-loop filter before SAO, the image output from the pixel reconstruction unit 502 is filtered, and then the filtered image is input to the SAO execution unit 505.

The context-based filter selection parameter extractor 503 obtains, for the image reconstructed by the pixel reconstructor 502, parameter information for determining the characteristics of the area or block that is the SAO basic unit, or finds the edge direction. Examples of parameters that can determine the characteristics of the image include the index information for ALF filter selection and the block partition information of CUs, PUs, and TUs. Examples of methods for determining edge directionality include edge detection, variance, and the difference between neighboring pixel values.
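Two of the characteristic measures named above — variance and neighboring-pixel differences — might be computed as follows; the gradient-to-angle mapping is an illustrative choice, not a method the text mandates:

```python
import math

def block_variance(block):
    """Variance of a flat list of pixel values (one characteristic measure)."""
    n = len(block)
    mean = sum(block) / n
    return sum((p - mean) ** 2 for p in block) / n

def dominant_gradient_angle(block, width):
    """Angle (degrees) of the dominant gradient, from summed absolute
    horizontal and vertical neighbor differences. Illustrative only."""
    gx = gy = 0
    height = len(block) // width
    for y in range(height):
        for x in range(width):
            if x + 1 < width:
                gx += abs(block[y * width + x + 1] - block[y * width + x])
            if y + 1 < height:
                gy += abs(block[(y + 1) * width + x] - block[y * width + x])
    return math.degrees(math.atan2(gy, gx))
```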

The filter direction determiner 504 determines the SAO filter direction within the group for the current block or region, according to the type and value of the parameters extracted by the context-based filter selection parameter extractor 503 and the syntax parsed by the entropy decoder 501 for units such as the entire image, a slice, an LCU, or a region. When the method proposed by the present invention is not used, the basic direction signaled through the entropy decoding unit 501 is taken as the filter direction. When the proposed method is used, the filter direction is chosen between the basic direction and the additional direction based on the image characteristics provided by the context-based filter selection parameter extractor 503.

The SAO performer 505 composes a pixel set using the filter direction determined by the filter direction determiner 504 and performs SAO filtering on each pixel set. The image output unit 506 outputs the reconstructed image in which all of the SAO is performed. If there is an in-loop filter after SAO, the filtered image is output after this process.

6 is an overall flowchart of the decoder of the present invention. Step 601 corresponds to the entropy decoding unit 501 of FIG. 5 and obtains syntax values through entropy decoding of the bitstream. Step 602 determines, from the decoded syntax, whether to use the context-based SAO filter direction inference method or the existing SAO method. When the proposed adaptive direction prediction method is used, steps 603 and 604 are additionally performed. Step 603 extracts parameters according to the image characteristics of the current region or block, performing the same function as the context-based filter selection parameter extractor 503 of FIG. 5. Step 604 uses the parameter information from step 603 to determine whether to configure the pixel sets along the basic direction or the additional direction in the current region or block; this corresponds to the filter direction determiner 504 of FIG. 5. Step 605 then adds the determined offset to each pixel set and serves as the SAO performing unit 505 of FIG. 5. When the adaptive direction prediction of step 602 is not used, only step 605 is performed: the pixel sets are constructed for the basic direction and the offset sent by the encoder is added to the included pixels.

7 is an embodiment of the basic direction 701 and the additional direction 702 for the edge offset in the SAO of the present invention. The basic directions 701 are 0, 90, 135, and 45 degrees; angles other than these are possible, and the position and number of reference pixels may vary. The additional directions 702 are 112.5, 67.5, 157.5, and 22.5 degrees; these may likewise be changed to other angles, and the number of reference pixels may vary.

Filtering by the existing edge offset in SAO works as follows. First, the encoder determines the direction that minimizes the error for a region or block and writes the determined direction information into the bitstream. The decoder parses the determined direction from the bitstream. Four types of pixel sets may be configured using the relationship between the current pixel and its two neighboring pixels along the determined filter direction. The pixel sets are determined by add/subtract comparisons between pixels of the decoded input image before SAO, so both the encoder and the decoder can derive them in the same way. The encoder, which knows the offset values, adds the corresponding offset to the pixels in each pixel set. The decoder parses the offset values from the bitstream and adds the corresponding offset to the pixels in each pixel set.
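The four pixel sets formed from the current pixel and its two neighbors along the filter direction can be sketched as the usual edge-offset categories; the valley/corner/peak naming follows common SAO descriptions and is an interpretive gloss on the text:

```python
def eo_category(left, cur, right):
    """Classify a pixel against its two neighbors along the filter
    direction into one of four edge-offset sets (1-4), or 0 when the
    pixel belongs to no set and receives no offset."""
    if cur < left and cur < right:
        return 1  # local valley
    if (cur < left and cur == right) or (cur == left and cur < right):
        return 2  # concave corner
    if (cur > left and cur == right) or (cur == left and cur > right):
        return 3  # convex corner
    if cur > left and cur > right:
        return 4  # local peak
    return 0      # flat or monotone
```

Because the comparison uses only already-reconstructed pixels, the encoder and decoder reach the same classification without extra signaling.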

The method and apparatus proposed in the present invention provide additional directions besides the basic direction 701 and place one or more additional directions together with one basic direction in one group. The encoder determines the group that minimizes the error and signals information about the base direction of the group. The decoder decodes the signaled base direction and selects the corresponding group. The encoder/decoder must then determine which direction in the group to use for filtering. In this process, the direction used for filtering the current region is found adaptively from the characteristics of the region or block, using the same method in the encoder and the decoder. For example, the edge direction, the variance of the area or block, or the Laplacian may be used to determine the image characteristics of an area or block. As another example, a parameter used in a module other than SAO may be used; here, a module may refer to 401, 402, 403, 404, 406, 407, 408, 409, and the like in FIG. 4. The parameter information used in such a module may be, for example, the partition information of a coding unit (CU), the partition information of a prediction unit (PU), the ALF filter selection index, the quantization parameter, or intra mode information.

FIG. 8 illustrates a method of grouping additional directions 702 with a basic direction 701 according to the embodiment of FIG. 7. For the basic directions 0 degrees 801, 90 degrees 802, 135 degrees 803, and 45 degrees 804, the adjacent additional directions 702 are assigned to each group. For example, the additional directions 157.5 degrees and 22.5 degrees 805, which are adjacent to 0 degrees 801 on both sides, are assigned to the group for 0 degrees 801. In the same way, additional directions 806 are assigned for 802, 807 for 803, and 808 for 804, and groups 809, 810, 811, and 812 are set, each consisting of one basic direction and its two adjacent additional directions.
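The grouping of FIG. 8 — each basic direction paired with its two adjacent additional directions — can be written as a small lookup table (angles in degrees, per the embodiment above; other groupings are possible):

```python
# Group table for the FIG. 8 embodiment: base direction -> its two
# adjacent additional directions, in degrees.
GROUPS = {
    0: (157.5, 22.5),
    90: (67.5, 112.5),
    135: (112.5, 157.5),
    45: (22.5, 67.5),
}

def candidate_directions(base):
    """All directions the decoder may infer for a signaled base direction."""
    return (base,) + GROUPS[base]
```

Only the base direction is signaled; the decoder infers which member of the tuple to use from the image characteristics.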

A method of adaptively selecting the filter direction in the decoder, among a basic direction and its additional directions within one group, is as follows. When performing SAO, the information for determining the pixel set is transmitted only for the basic direction; the filter direction is then determined based on the characteristics of the area or block to be decoded, to pick one direction out of the group that contains the basic direction and its additional directions.

The process of determining the characteristics of the region or block for selecting the filter direction uses parameter information representing the characteristics of the current coding region or block, obtained from the decoded image before in-loop filtering or from the image after a previous filtering stage. As examples of parameters, a parameter value calculated in another in-loop filter process may be used, or partition information of a block such as a CU, PU, or TU.

Since the encoder also determines the group for the filter direction so as to minimize the error, selecting the filter direction within the group follows the same process as in the decoder.

FIG. 9 illustrates another method of grouping an additional direction 702 with a basic direction 701 in the embodiment of FIG. 7. The additional direction 702 can be chosen arbitrarily for the basic directions 0 degrees 901, 90 degrees 902, 135 degrees 903, and 45 degrees 904. For example, 22.5 degrees 905, one of the directions adjacent to the basic direction 0 degrees 901, is set as its additional direction. In the same way, 112.5 degrees 906, 157.5 degrees 907, and 67.5 degrees 908 may be the additional directions for the basic directions 90 degrees 902, 135 degrees 903, and 45 degrees 904. There may be one additional direction, as shown in FIG. 9, or several.

Once the basic directions 901, 902, 903, 904 and the additional directions 905, 906, 907, 908 are determined, the groups are 909, 910, 911, and 912. The decoder determines the filtering direction based on the base direction index sent by the encoder together with the parameter information on the edges and the characteristics of the region or block.

10 shows the syntax information needed to indicate whether the context-based SAO filter direction inference method is used. The sequence parameter set (SPS) 1001 is the unit for sending syntax information about the entire input image, and the slice headers 1002 and 1003 are units for sending syntax information commonly used within one slice.

In 1001, the SAO_adaptation_enabled_flag syntax indicates whether the additional directions are used for the entire input image. In 1002, sample_adaptive_offset_flag is a syntax element indicating whether SAO is used in the slice: if its value is 1, SAO is performed on the current slice; if 0, it is not. SAO_adaptation_flag, a syntax element proposed by the present invention, is signaled when sample_adaptive_offset_flag is 1 and indicates whether the additional directions are used in the slice. 1003 transmits information about SAO; in the present invention, the additional syntax element SAO_adaptation_CU_flag indicates whether the additional directions are used on a per-CU basis.
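The three flag levels above could be combined at the decoder roughly as follows; the flag names come from the text, while the dict-based containers and the parsing order are assumptions for illustration:

```python
def additional_direction_enabled(sps, slice_header, cu):
    """Decide whether context-based direction inference applies, from the
    SPS, slice-header, and per-CU flags described above (flag names from
    the text; container layout is an illustrative assumption)."""
    if not sps.get("SAO_adaptation_enabled_flag"):
        return False
    if not slice_header.get("sample_adaptive_offset_flag"):
        return False  # SAO itself is off for this slice
    if not slice_header.get("SAO_adaptation_flag"):
        return False  # additional directions are off for this slice
    return bool(cu.get("SAO_adaptation_CU_flag", False))
```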

The syntax information 1001, 1002, and 1003 may all be considered or partially considered to determine whether to use the context-based SAO filter direction inference method.

11 is an embodiment of the present invention. In FIG. 11, the basic direction is 135 degrees 1102, the additional direction is 67.5 degrees 1103, and the region or block currently being decoded is 1101. Block 1101 contains an edge in the 40.5-degree direction. Based on this information, the direction perpendicular to the edge is selected so as to reduce the ringing around the edge: the pixel sets are formed for pixels located at the edge boundary along the direction perpendicular to the edge. Therefore, of the two candidate directions, 135 degrees 1102, which is closer to the perpendicular of the 40.5-degree edge in 1101, is selected as the filter direction.
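The selection in FIG. 11 — take the candidate direction closest to the perpendicular of the detected edge — might be sketched like this, with directions treated as undirected (modulo 180 degrees):

```python
def pick_filter_direction(edge_angle, candidates):
    """Choose the candidate filter direction closest to the perpendicular
    of the detected edge; directions are undirected, so compare mod 180."""
    target = (edge_angle + 90) % 180

    def angular_dist(a, b):
        d = abs(a - b) % 180
        return min(d, 180 - d)

    return min(candidates, key=lambda c: angular_dist(c, target))
```

For the 40.5-degree edge of block 1101, the perpendicular is 130.5 degrees, so 135 is chosen over 67.5.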

12 is an embodiment of the present invention. In FIG. 12, the parameter information 1202 of the ALF is used to predict the directionality of the current block 1201. The ALF computes an activity metric as the means of choosing one of a plurality of filter sets. This value ranges from 0 to 15; values near 0 indicate low variance and values near 15 high variance. Furthermore, the values 0, 3, 6, 9, and 12 indicate no directionality, 1, 4, 7, 10, and 13 indicate a horizontal edge, and 2, 5, 8, 11, and 14 a vertical edge. In the current block 1201, the activity metric is calculated in 4x4 units and its values are shown. By observing the change of the activity metric across these units, the block can be characterized as containing an upward edge of about 30 degrees.
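The activity-metric mapping given above (0, 3, 6, 9, 12 → no direction; 1, 4, 7, 10, 13 → horizontal; 2, 5, 8, 11, 14 → vertical) reduces to the value modulo 3; this observation is ours, the mapping itself is from the text:

```python
def alf_directionality(activity_metric):
    """Directionality implied by an ALF activity metric, per the mapping
    above: the residue mod 3 selects the class."""
    return ("none", "horizontal", "vertical")[activity_metric % 3]
```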

13 is an embodiment of the present invention, showing a method of selecting one of the two directions of the group 1303 signaled for the current block 1301. The parameter information considered here is the CU partition information 1302. In FIG. 13, a solid line indicates an LCU boundary and the CU partition information 1302 is indicated by dotted lines. A large CU occurs where the image within the CU is flat or prediction works well; a small CU occurs where the image is complicated or prediction works poorly. In general, edges are hard to predict and the image around them is complicated, so heavy CU partitioning suggests that edges exist in that area. The numbers shown in 1301 of FIG. 13 are CU partition depths: the larger the number, the deeper the partitioning. From the CU partition information 1302 it can be determined that edges exist in the 30-45 degree direction, and using this information 22.5 degrees 1305, one of the two directions in group 1303, is selected.
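The heuristic of FIG. 13 — deep CU partitioning hints at edges — can be sketched as flagging cells whose partition depth reaches a threshold; the threshold value here is a hypothetical tuning parameter, not one the text specifies:

```python
def likely_edge_cells(depth_map, threshold=2):
    """Coordinates of cells whose CU-partition depth is at least the
    threshold; heavily split areas are assumed to contain edges.
    depth_map is a 2-D list of depths; threshold is a hypothetical
    tuning value."""
    return [(y, x)
            for y, row in enumerate(depth_map)
            for x, depth in enumerate(row)
            if depth >= threshold]
```

The spatial arrangement of the flagged cells could then be matched against the candidate directions of the signaled group.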

Claims (9)

1. A method in which, in addition to the basic direction signaled for SAO (Sample Adaptive Offset), an additional direction is considered without additional signaling.
2. The method of claim 1, wherein the basic unit for the additional direction may be the entire input image, a frame unit, a slice unit, an area unit, a block unit, and the like.
3. The method of claim 1, wherein, when determining a group of the basic direction and its additional directions, the number of additional directions may be one or more for one basic direction, and the number may be fixed or adaptively changed.
4. The method of claim 1, wherein the direction used in the current region or block, among the basic direction and the corresponding additional direction, may be found without additional signaling by using information on image characteristics in the decoder.
5. The method of claim 4, wherein the basic direction and the corresponding additional direction may be determined through the partitioning information of the block.
6. The method of claim 4, wherein the basic direction and the corresponding additional direction may be determined using parameter information of another in-loop filter.
7. The method of claim 4, wherein the basic direction and the corresponding additional direction may be determined through the variance, edge direction, and Laplacian of a block or region in the input image.
8. The method of claim 1, wherein the use of the method may be signaled in units of an SPS, a PPS, a slice header, and slice data.
9. The method of claim 8, wherein the signaling may be performed partially.
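The decoder-side image features named in claim 7 (variance, edge direction, Laplacian) could be computed per block along the following lines. This is an illustrative sketch only; the claim does not fix the exact formulas, and the gradient and Laplacian measures below are assumptions.

```python
import numpy as np

def block_direction_features(block):
    """Illustrative decoder-side features for inferring the SAO
    direction without signaling: variance, horizontal/vertical
    gradient magnitudes (a crude edge-direction cue), and 4-neighbour
    Laplacian energy. Formulas are assumptions for illustration."""
    b = block.astype(np.float64)
    variance = float(b.var())
    grad_h = float(np.abs(np.diff(b, axis=1)).mean())  # horizontal change
    grad_v = float(np.abs(np.diff(b, axis=0)).mean())  # vertical change
    # 4-neighbour Laplacian energy over the block interior.
    lap = float(np.abs(4 * b[1:-1, 1:-1] - b[:-2, 1:-1] - b[2:, 1:-1]
                       - b[1:-1, :-2] - b[1:-1, 2:]).mean())
    return {"variance": variance, "grad_h": grad_h,
            "grad_v": grad_v, "laplacian": lap}
```

A flat block yields zero for all features, while a linear horizontal ramp yields nonzero variance and horizontal gradient but zero Laplacian energy, distinguishing smooth gradients from true edges.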

KR1020110137406A 2011-12-19 2011-12-19 Method and apparatus for context-based adaptive sao direction selection in video codec KR20130070195A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110137406A KR20130070195A (en) 2011-12-19 2011-12-19 Method and apparatus for context-based adaptive sao direction selection in video codec

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020110137406A KR20130070195A (en) 2011-12-19 2011-12-19 Method and apparatus for context-based adaptive sao direction selection in video codec

Publications (1)

Publication Number Publication Date
KR20130070195A true KR20130070195A (en) 2013-06-27

Family

ID=48865057

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110137406A KR20130070195A (en) 2011-12-19 2011-12-19 Method and apparatus for context-based adaptive sao direction selection in video codec

Country Status (1)

Country Link
KR (1) KR20130070195A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016072750A1 (en) * 2014-11-04 2016-05-12 삼성전자 주식회사 Video encoding method and apparatus therefor, and video decoding method and apparatus therefor, in which edge type offset is applied
US10356418B2 (en) 2014-11-04 2019-07-16 Samsung Electronics Co., Ltd. Video encoding method and apparatus therefor, and video decoding method and apparatus therefor, in which edge type offset is applied

Similar Documents

Publication Publication Date Title
JP7077385B2 (en) A coding method that encodes information for performing sample adaptive offset processing
US9667997B2 (en) Method and apparatus for intra transform skip mode
AU2013248857B2 (en) Method and apparatus for loop filtering across slice or tile boundaries
JP2020017986A (en) Video decoding method, video encoding method, and recording medium
KR101752612B1 (en) Method of sample adaptive offset processing for video coding
US10880546B2 (en) Method and apparatus for deriving intra prediction mode for chroma component
KR101857755B1 (en) Methods of decoding using skip mode and apparatuses for using the same
US11659174B2 (en) Image encoding method/device, image decoding method/device and recording medium having bitstream stored therein
CN109644273B (en) Apparatus and method for video encoding
WO2022035687A1 (en) Chroma coding enhancement in cross-component sample adaptive offset
US11109024B2 (en) Decoder side intra mode derivation tool line memory harmonization with deblocking filter
EP3804314B1 (en) Method and apparatus for video encoding and decoding with partially shared luma and chroma coding trees
US11991378B2 (en) Method and device for video coding using various transform techniques
US20220368901A1 (en) Image encoding method/device, image decoding method/device and recording medium having bitstream stored therein
CN116389737B (en) Coding and decoding of transform coefficients in video coding and decoding
KR20130070195A (en) Method and apparatus for context-based adaptive sao direction selection in video codec
WO2023246901A1 (en) Methods and apparatus for implicit sub-block transform coding
CN114830641A (en) Image encoding method and image decoding method
CN114830643A (en) Image encoding method and image decoding method
CN114830645A (en) Image encoding method and image decoding method
CN114830650A (en) Image encoding method and image decoding method
KR20130070215A (en) Method and apparatus for seletcing the adaptive depth information and processing deblocking filtering

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination