WO2019135658A1 - Image processing method, and image encoding and decoding method using same - Google Patents

Image processing method, and image encoding and decoding method using same

Info

Publication number
WO2019135658A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
prediction
block
padding
intra
Prior art date
Application number
PCT/KR2019/000238
Other languages
English (en)
Korean (ko)
Inventor
임화섭
임정윤
김재곤
박도현
윤용욱
Original Assignee
가온미디어 주식회사
한국항공대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 가온미디어 주식회사, 한국항공대학교산학협력단 filed Critical 가온미디어 주식회사
Publication of WO2019135658A1 publication Critical patent/WO2019135658A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Definitions

  • the present invention relates to image encoding and decoding.
  • a picture is divided into a plurality of blocks of a predetermined size and coded block by block. Inter prediction and intra prediction techniques, which remove redundancy between pictures and within a picture respectively, are used to increase compression efficiency.
  • intra prediction and inter prediction are used to generate a residual signal.
  • the residual signal is used because coding the residual requires far less data than coding the original signal, so the data compression rate is higher.
  • the intraprediction method predicts the data of the current block by using the pixels around the current block.
  • the difference between the actual value and the predicted value is called a residual signal block.
  • the intra prediction method was extended from the nine prediction modes used in the existing H.264/AVC to 35 prediction modes, and more finely subdivided prediction is performed.
  • the current block is compared with the blocks in the neighboring pictures to find the closest block.
  • the position information (Vx, Vy) of the found block is referred to as a motion vector.
  • the difference between the intra-block pixel values of the current block and the prediction block predicted by the motion vector is called a residual-signal block (motion-compensated residual block).
  • conventionally, the nearest available sample is padded into the region in which no previously decoded reference sample exists, which hinders directional prediction.
  • according to an embodiment, an image processing method includes: dividing a picture of an image into a plurality of coding units, which are the basic units for performing inter prediction or intra prediction; identifying, for an intra-predicted current block among the divided coding units, the reference samples used for intra prediction; performing a padding process, in which the variation amount of a neighboring block of the current block is reflected, on those reference samples for which no previously decoded information exists; and performing intra prediction decoding based on the padded reference samples.
  • an image decoding method according to an embodiment includes: receiving an encoded bitstream; performing inverse quantization and inverse transform on the input bitstream to obtain a residual block; performing inter prediction or intra prediction to obtain a prediction block; and reconstructing an image by summing the obtained residual block and prediction block. The coding unit, which is the basic unit in which the inter prediction or intra prediction is performed, is a block divided from a coding tree unit using a binary tree structure. Obtaining the prediction block may include: identifying, for an intra-predicted current block among the divided coding units, the reference samples used for intra prediction; performing a padding process, in which the variation amount of a neighboring block of the current block is reflected, on those reference samples for which no previously decoded information exists; and performing intra prediction decoding based on the padded reference samples to obtain the prediction block.
  • the above-described methods may be embodied as a computer-readable recording medium on which a program for execution by a computer is recorded.
  • according to embodiments of the present invention, reference samples to which the variation amount of the current block or its neighboring blocks is applied are constructed for the region in which no previously decoded reference sample exists, which can improve prediction accuracy and coding efficiency for high-resolution images.
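  • to make the padding idea concrete, the sketch below contrasts conventional nearest-sample padding with padding that extrapolates using a variation amount observed in the decoded neighbors. The function name and the linear-gradient variation measure are illustrative assumptions, not the exact claimed procedure.

```python
def pad_reference_samples(samples):
    """Pad missing reference samples (None) in a 1-D reference row.

    Conventional padding repeats the nearest decoded sample; this
    variation-reflecting variant extrapolates with the average change
    (gradient) observed over the decoded part.  Simplified sketch only;
    the patent's actual variation computation may differ.
    """
    decoded = [s for s in samples if s is not None]
    if not decoded:
        raise ValueError("no decoded reference sample available")
    # average per-sample variation over the decoded region
    if len(decoded) > 1:
        delta = (decoded[-1] - decoded[0]) / (len(decoded) - 1)
    else:
        delta = 0.0
    out = list(samples)
    last = None
    for i, s in enumerate(out):
        if s is None:
            # extrapolate from the last known sample using the gradient
            out[i] = (last if last is not None else decoded[0]) + delta
        last = out[i]
    return out
```

With a decoded run 10, 12, 14 the gradient is +2 per sample, so the two missing positions are filled with 16 and 18 instead of the repeated 14 that nearest-sample padding would produce.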
  • FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.
  • FIGS. 2 to 5 are diagrams for explaining a first embodiment of a method for dividing and processing an image into blocks.
  • FIG. 6 is a block diagram for explaining an embodiment of a method of performing inter-prediction in the image encoding apparatus.
  • FIG. 7 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
  • FIG. 8 is a block diagram for explaining an embodiment of a method of performing inter-prediction in an image decoding apparatus.
  • FIG. 9 is a diagram for explaining a second embodiment of a method of dividing and processing an image into blocks.
  • FIG. 10 is a diagram showing an embodiment of a syntax structure used for dividing and processing an image into blocks.
  • FIG. 11 is a diagram for explaining a third embodiment of a method of dividing and processing an image into blocks.
  • FIG. 12 is a diagram for explaining an embodiment of a method of dividing a coding unit into a binary tree structure to construct a conversion unit.
  • FIG. 13 is a diagram for explaining a fourth embodiment of a method of dividing and processing an image into blocks.
  • FIGS. 14 to 16 are diagrams for explaining still another embodiment of a method of dividing and processing an image into blocks.
  • FIGS. 17 and 18 are diagrams for explaining embodiments of a method of performing a rate distortion optimization (RDO) to determine a division structure of a conversion unit.
  • FIG. 19 is a diagram for explaining a reference sample configuration to which a variation amount is applied according to an embodiment of the present invention.
  • FIGS. 20 to 21 are flowcharts for explaining an intra prediction process according to an embodiment of the present invention.
  • FIG. 22 is a flowchart for explaining a single variation amount reflecting sample generation process according to an embodiment of the present invention.
  • FIGS. 23 to 24 are exemplary diagrams of a single variation amount reflecting sample according to an embodiment of the present invention.
  • FIG. 25 is a flowchart illustrating a selective single variation amount reflecting sample padding process in accordance with an embodiment of the present invention.
  • FIG. 26 is a flowchart for explaining a multiple variation amount reflecting sample generation process according to an embodiment of the present invention.
  • FIGS. 27 to 28 are exemplary diagrams of a multiple variation amount reflecting sample according to an embodiment of the present invention.
  • FIG. 29 is a flowchart illustrating a selective single and multiple variation amount reflecting sample padding process in accordance with an embodiment of the present invention.
  • FIGS. 30 and 31 are views for explaining a reference sample non-existence case to which the variation-reflected sample padding process according to the embodiment of the present invention is applicable.
  • first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
  • first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.
  • the constituent units are listed separately for convenience of explanation; at least two constituent units may be combined into one constituent unit, or one constituent unit may be divided into a plurality of constituent units, each performing a function.
  • the integrated embodiments and separate embodiments of the components are also included within the scope of the present invention, unless they depart from the essence of the present invention.
  • the components are not essential components to perform essential functions in the present invention, but may be optional components only to improve performance.
  • the present invention can also be implemented including only the components essential for realizing its essence, excluding the optional components used merely for performance improvement, and such a structure is likewise included in the scope of the present invention.
  • FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an exemplary embodiment of the present invention.
  • the image encoding apparatus 10 includes a picture division unit 110, a transform unit 120, a quantization unit 130, an entropy encoding unit 140, an intra prediction unit 150, an inter prediction unit 160, an inverse quantization unit 135, an inverse transform unit 125, a post-processing unit 170, a picture storage unit 180, a subtracting unit 190, and an adding unit 195.
  • a picture dividing unit 110 analyzes an input video signal, divides a picture into coding units to determine a prediction mode, and determines a size of a prediction unit for each coding unit.
  • the picture division unit 110 also sends the prediction unit to be encoded to the intra prediction unit 150 or the inter prediction unit 160 according to a prediction mode (or a prediction method). Further, the picture division unit 110 sends the prediction unit to be encoded to the subtraction unit 190.
  • a picture of an image is composed of a plurality of slices, and a slice can be divided into a plurality of coding tree units (CTU), which is a basic unit for dividing a picture.
  • CTU coding tree units
  • the coding tree unit may be divided into one or two or more coding units (CUs), which are basic units for performing inter prediction or intra prediction.
  • CUs coding units
  • the coding unit (CU) can be divided into one or more prediction units (PU), which are basic units on which prediction is performed.
  • PU prediction units
  • the coding apparatus 10 determines either inter prediction or intra prediction as the prediction method for each of the divided coding units (CUs), but a prediction block can be generated differently for each prediction unit (PU).
  • the coding unit CU may be divided into one or two or more Transform Units (TUs), which are basic units for performing a conversion on a residual block.
  • TUs Transform Units
  • the picture dividing unit 110 may deliver the image data to the subtracting unit 190 in a block unit (for example, a prediction unit (PU) or a conversion unit (TU)) which is divided as described above.
  • a block unit for example, a prediction unit (PU) or a conversion unit (TU)
  • a coding tree unit (CTU) having a maximum size of 256x256 pixels may be divided into a quad tree structure and divided into four coding units (CUs) having a square shape.
  • Each of the four coding units (CUs) having the square shape can be divided into quad tree structures.
  • the depth of the coding unit (CU) divided into the quad tree structure can have an integer value from 0 to 3.
  • the coding unit (CU) may be divided into one or more prediction units (PU) according to the prediction mode.
  • the prediction unit (PU) can have the 2Nx2N size shown in FIG. 3(a) or the NxN size shown in FIG. 3(b).
  • alternatively, the prediction unit (PU) can have any of the sizes 2Nx2N, 2NxN, Nx2N, NxN, 2NxnU, 2NxnD, nLx2N, and nRx2N shown in FIGS. 4(a) to 4(h).
  • a coding unit may be divided into a quad tree structure and divided into four transformation units (TUs) having a square shape.
  • Each of the four transformation units TU having the square shape can be divided into quad tree structures.
  • the depth of the transform unit (TU) divided into the quad tree structure can have an integer value from 0 to 3.
  • the prediction unit (PU) and the conversion unit (TU) divided from the coding unit (CU) may have a division structure independent of each other.
  • the transform unit (TU) divided from the coding unit (CU) cannot be larger than the size of the prediction unit (PU).
  • the conversion unit TU divided as described above can have a maximum size of 64x64 pixels.
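  • the quad-tree division of a CTU described above can be sketched with a small recursion; the split decision is supplied by the caller (in a real encoder it would come from a rate-distortion optimization), so the names and decision callback here are illustrative assumptions.

```python
def quadtree_split(x, y, size, depth, max_depth, should_split):
    """Recursively divide a square block into four equal sub-blocks.

    should_split(x, y, size, depth) is a caller-supplied decision
    (an RDO cost comparison in a real encoder); leaf blocks are
    returned as (x, y, size) tuples.
    """
    if depth == max_depth or not should_split(x, y, size, depth):
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):          # visit the four quadrants in raster order
        for dx in (0, half):
            leaves += quadtree_split(x + dx, y + dy, half,
                                     depth + 1, max_depth, should_split)
    return leaves
```

Splitting a 256x256 CTU once, for example, yields four 128x128 coding units, matching the depth-0 to depth-1 step of the structure described above.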
  • the transform unit 120 transforms the residual block, which is the residual signal between the original block of the input prediction unit (PU) and the prediction block generated by the intra prediction unit 150 or the inter prediction unit 160, using the transform unit (TU) as the basic unit.
  • different transform matrices can be determined according to the intra or inter prediction mode; since the residual signal of intra prediction has directionality according to the intra prediction mode, the transform matrix can be adaptively determined according to the intra prediction mode.
  • the transform unit can be transformed by two (horizontal and vertical) one-dimensional transform matrices; in the case of inter prediction, for example, one predetermined transform matrix can be determined.
  • in the case of intra prediction, when the intra prediction mode is horizontal, the residual block is likely to have vertical directionality, so a DCT-based integer matrix is applied in the vertical direction and a DST- or KLT-based integer matrix in the horizontal direction.
  • when the intra prediction mode is vertical, a DST-based or KLT-based integer matrix can be applied in the vertical direction and a DCT-based integer matrix in the horizontal direction.
  • in the DC mode, a DCT-based integer matrix can be applied in both directions.
  • the transformation matrix may be adaptively determined based on the size of the transformation unit TU.
  • the quantization unit 130 determines a quantization step size for quantizing the coefficients of the residual block transformed by the transform matrix, and the quantization step size can be determined for each quantization unit larger than a predetermined size.
  • the size of the quantization unit may be 8x8 or 16x16, and the quantization unit 130 quantizes the coefficients of the transform block using the quantization matrix determined according to the quantization step size and the prediction mode.
  • the quantization unit 130 can use the quantization step size of the quantization unit adjacent to the current quantization unit as a quantization step size predictor of the current quantization unit.
  • the quantization unit 130 searches the left quantization unit, the upper quantization unit, and the upper-left quantization unit of the current quantization unit in order, and can generate the quantization step size predictor of the current quantization unit using one or two valid quantization step sizes.
  • the quantization unit 130 may determine the first valid quantization step size found in the above order as the quantization step size predictor, or may determine the average of the first two valid quantization step sizes as the predictor; if only one quantization step size is valid, that value is determined as the predictor.
  • the quantization unit 130 transmits the difference value between the quantization step size of the current quantization unit and the quantization step size predictor to the entropy encoding unit 140.
  • the quantization step sizes of the quantization units adjacent to the current coding unit and the quantization unit immediately before the coding order in the maximum coding unit can be candidates.
  • the order may be changed, and the upper left side quantization unit may be omitted.
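  • the neighbor search described above can be sketched as follows. This is one possible reading of the scheme (the averaged-pair variant), with `None` marking an unavailable quantization unit; the function name and default value are illustrative assumptions.

```python
def qstep_predictor(left, above, above_left, default=0):
    """Predict the current unit's quantization step size from neighbors.

    Searches left -> above -> above-left, averages the first two valid
    (non-None) step sizes, falls back to the single valid one, or to a
    default when no neighbor is valid.
    """
    valid = [q for q in (left, above, above_left) if q is not None][:2]
    if not valid:
        return default
    return sum(valid) / len(valid)
```

Only the difference between the actual step size and this predictor would then be entropy-coded, as the surrounding text describes.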
  • the quantized transform block is transferred to the inverse quantization unit 135 and the scanning unit 131.
  • the scanning unit 131 scans the coefficients of the quantized transform block and converts them into one-dimensional quantization coefficients; since the coefficient distribution of the transform block after quantization may depend on the intra prediction mode, the scanning order can be determined accordingly.
  • the coefficient scanning method may be determined depending on the size of the transform unit, and the scan pattern may be changed according to the directional intra prediction mode; in this case, the quantization coefficients may be scanned in the reverse direction.
  • the same scan pattern may be applied to the quantization coefficients in each sub-set, and a scan pattern between the sub-sets may be zigzag scan or diagonal scan.
  • scanning preferably proceeds in the forward direction from the main subset, which contains the DC coefficient, to the remaining subsets, but the reverse direction is also possible.
  • the scan pattern between subsets can be set in the same manner as the scan pattern of the quantized coefficients within a subset, and can be determined according to the intra prediction mode.
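  • one common pattern, the up-right diagonal scan of an n x n sub-set, can be generated as in the sketch below. This is a generic illustration of coefficient scanning; the mode-dependent patterns themselves are not reproduced here.

```python
def diagonal_scan_order(n):
    """Return (row, col) positions of an n x n block in up-right
    diagonal order: each anti-diagonal (row + col = s) is traversed
    from bottom-left to top-right."""
    order = []
    for s in range(2 * n - 1):
        # rows on this anti-diagonal, largest (bottom) first
        for r in range(min(s, n - 1), max(0, s - n + 1) - 1, -1):
            order.append((r, s - r))
    return order
```

For a 4x4 sub-set this visits (0,0), (1,0), (0,1), (2,0), ... and ends at (3,3); reversing the list gives the reverse-direction scan mentioned above.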
  • the encoding apparatus 10 may transmit, to the decoding apparatus 20, information indicating the position of the last non-zero quantization coefficient in the transform unit (TU) and the position of the last non-zero quantization coefficient in each subset.
  • the inverse quantization unit 135 dequantizes the quantized coefficients as described above.
  • the inverse transform unit 125 performs an inverse transform on a transform unit (TU) basis and can restore the inversely quantized transform coefficients into a residual block in the spatial domain.
  • the adder 195 may generate a reconstruction block by summing the residual block reconstructed by the inverse transformer 125 and the prediction block received from the intra prediction unit 150 or the inter prediction unit 160.
  • the post-processing unit 170 performs a deblocking filtering process to remove blocking artifacts in the reconstructed picture, a sample adaptive offset (SAO) application process, and an adaptive loop filtering (ALF) process to compensate the difference between the original image and the reconstruction on a coding unit basis.
  • the deblocking filtering process can be applied to a boundary of a prediction unit (PU) or a conversion unit (TU) having a size larger than a predetermined size.
  • the deblocking filtering process may include determining the boundary to be filtered, determining the boundary filtering strength to be applied to the boundary, determining whether to apply a deblocking filter, and, if the deblocking filter is determined to be applied, selecting the filter to be applied to the boundary.
  • whether the deblocking filter is applied may be determined by i) whether the boundary filtering strength is greater than 0 and ii) whether a value representing the variation of the pixel values at the boundary of the two blocks (the P block and the Q block) is less than a first reference value determined by the quantization parameter.
  • at least two filters are preferably available; if the absolute value of the difference between the two pixels located at the block boundary is greater than or equal to the second reference value, the filter performing the relatively weaker filtering is selected.
  • the second reference value is determined by the quantization parameter and the boundary filtering strength.
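  • the two-part application test above (a strength check plus a boundary-variation check against the quantization-parameter-derived first reference value) can be sketched as below; the function name, argument layout, and the exact variation measure are illustrative assumptions.

```python
def apply_deblocking(bs, p_pixels, q_pixels, beta):
    """Decide whether to apply the deblocking filter at a block edge.

    bs       -- boundary filtering strength
    p_pixels -- pixels on the P side, ending at the boundary
    q_pixels -- pixels on the Q side, starting at the boundary
    beta     -- first reference value derived from the QP

    Simplified sketch: filter only when bs > 0 and the variation
    across the boundary is below beta.
    """
    if bs <= 0:
        return False
    variation = abs(p_pixels[-1] - q_pixels[0])
    return variation < beta
```

A large jump across the boundary (variation >= beta) is treated as a real image edge and left unfiltered, which is the intent of the second condition above.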
  • the sample adaptive offset (SAO) process is intended to reduce the distortion between the pixels of the deblocked image and the source pixels, and whether to apply the sample adaptive offset (SAO) process can be determined on a picture or slice basis.
  • the picture or slice may be divided into a plurality of offset regions, and an offset type may be determined for each of the offset regions.
  • the offset type may include a predetermined number (for example, four) of edge offset types and two band offset types.
  • the edge type to which each pixel belongs is determined and the corresponding offset is applied.
  • the edge type can be determined based on the distribution of the two pixel values adjacent to the current pixel.
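  • as an illustration of classifying a pixel from its two neighbors along the chosen edge direction, the sketch below uses the usual SAO-style categories (local minimum, local maximum, and the two half-edge cases); the category names are assumptions, since the text does not spell them out.

```python
def edge_category(left, cur, right):
    """Classify `cur` against its two neighbors along an edge direction.

    Returns one of: 'valley', 'peak', 'concave', 'convex', 'none'.
    """
    if cur < left and cur < right:
        return 'valley'        # local minimum
    if cur > left and cur > right:
        return 'peak'          # local maximum
    if (cur < left and cur == right) or (cur == left and cur < right):
        return 'concave'       # half valley
    if (cur > left and cur == right) or (cur == left and cur > right):
        return 'convex'        # half peak
    return 'none'              # flat or monotonic: no offset applied
```

Each category would then receive its own signalled offset, pulling valleys up and peaks down to reduce the distortion relative to the source pixels.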
  • the adaptive loop filtering (ALF) process may perform filtering on the basis of a value obtained by comparing a reconstructed image and an original image through a deblocking filtering process or an adaptive offset applying process.
  • the picture storage unit 180 receives the post-processed image data from the post-processing unit 170 and restores and stores the picture in units of pictures.
  • the picture may be a frame-based image or a field-based image.
  • the inter-prediction unit 160 may perform motion estimation using at least one reference picture stored in the picture storage unit 180, and may determine a reference picture index and a motion vector indicating a reference picture.
  • the prediction block corresponding to the prediction unit to be coded can be extracted, according to the determined reference picture index and motion vector, from the reference picture used for motion estimation among the plurality of reference pictures stored in the picture storage unit 180.
  • the intraprediction unit 150 can perform intraprediction encoding using the reconstructed pixel values in the picture including the current prediction unit.
  • the intra prediction unit 150 receives the current prediction unit to be predictively encoded and can perform intra prediction by selecting one of a predetermined number of intra prediction modes according to the size of the current block.
  • the intraprediction unit 150 may adaptively filter the reference pixels to generate an intra prediction block, and may generate reference pixels using available reference pixels when the reference pixels are not available.
  • the entropy encoding unit 140 may entropy encode quantization coefficients quantized by the quantization unit 130, intra prediction information received from the intra prediction unit 150, motion information received from the inter prediction unit 160, and the like .
  • the inter-prediction encoder shown in FIG. 6 includes a motion information determination unit 161, a motion information encoding mode determination unit 162, a motion information encoding unit 163, a prediction block generating unit 164, a residual block generating unit 165, a residual block coding unit 166, and a multiplexer 167.
  • the motion information determination unit 161 determines the motion information of the current block; the motion information includes a reference picture index and a motion vector, and the reference picture index indicates any one of the previously coded and reconstructed pictures.
  • the motion information may include a reference picture index indicating one of the reference pictures of list 0 (L0) and a reference picture index indicating one of the reference pictures of list 1 (L1).
  • in addition, when the current block is bi-directionally predictive-coded, the motion information may include an index indicating one or two pictures among the reference pictures of the combined list (LC) generated by combining list 0 and list 1.
  • the motion vector indicates the position of a prediction block in a picture indicated by each reference picture index, and the motion vector may be a pixel unit (integer unit) or a sub-pixel unit.
  • the motion vector may have a resolution of 1/2, 1/4, 1/8, or 1/16 pixel, and if the motion vector is not in integer units, the prediction block can be generated from pixels in integer units.
  • the motion information encoding mode determination unit 162 may determine the encoding mode for the motion information of the current block to be one of a skip mode, a merge mode, and an AMVP mode.
  • the skip mode is applied when a skip candidate having motion information the same as the motion information of the current block exists and the residual signal is 0.
  • the skip mode can be applied when the current block, which is a prediction unit (PU), has the same size as the coding unit (CU).
  • the merge mode is applied when there is a merge candidate having the same motion information as the motion information of the current block.
  • the merge mode is applied when the current block differs in size from the coding unit (CU), or when the sizes are the same but a residual signal exists.
  • the merge candidate and the skip candidate can be the same.
  • the AMVP mode is applied when the skip mode and the merge mode are not applied, and the AMVP candidate having the motion vector most similar to the motion vector of the current block can be selected as the AMVP predictor.
  • the motion information encoding unit 163 can encode the motion information according to the mode determined by the motion information encoding mode deciding unit 162.
  • the motion information encoding unit 163 performs a merge motion vector encoding process when the motion information encoding mode is the skip mode or the merge mode, and performs the AMVP encoding process when the motion information encoding mode is the AMVP mode.
  • the prediction block generator 164 generates the prediction block of the current block by copying the block corresponding to the position indicated by the motion vector in the picture indicated by the reference picture index.
  • the prediction block generator 164 can generate the pixels of the prediction block from the integer unit pixels in the picture indicated by the reference picture index.
  • a predictive pixel is generated using an 8-tap interpolation filter for a luminance pixel, and a predictive pixel can be generated using a 4-tap interpolation filter for a chrominance pixel.
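  • as an example of such interpolation, the sketch below applies an 8-tap FIR filter at a half-sample position. The tap values are those of HEVC's half-sample luma filter and are used purely for illustration; the patent itself does not list coefficients.

```python
# HEVC half-sample luma filter taps (sum = 64), example values only
HALF_PEL_TAPS = (-1, 4, -11, 40, 40, -11, 4, -1)

def interpolate_half_pel(row, i):
    """Interpolate the half-pel sample between row[i] and row[i+1]
    with an 8-tap FIR filter (needs 3 pixels of context on the left
    and 4 on the right)."""
    if i < 3 or i + 5 > len(row):
        raise IndexError("not enough context for 8-tap interpolation")
    window = row[i - 3:i + 5]
    acc = sum(t * p for t, p in zip(HALF_PEL_TAPS, window))
    # normalize by 64 with rounding, then clip to the 8-bit range
    return max(0, min(255, (acc + 32) >> 6))
```

On a flat area the filter reproduces the pixel value, and on a linear ramp it lands on the midpoint between the two neighboring integer pixels, which is the behavior expected of a half-pel interpolator.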
  • the residual block generating unit 165 generates a residual block using the current block and its prediction block; if the current block is 2Nx2N, a residual block of 2Nx2N size can be generated using the current block and the corresponding 2Nx2N prediction block.
  • when the size of the current block used for prediction is 2NxN or Nx2N, a prediction block is obtained for each of the two 2NxN blocks constituting the 2Nx2N block, and a 2Nx2N final prediction block can be generated from the two 2NxN prediction blocks.
  • a residual block of 2Nx2N size may then be generated using the 2Nx2N prediction block, and overlap smoothing is applied to the pixels of the boundary portion to resolve the discontinuity at the boundary between the two 2NxN prediction blocks.
  • the residual block coding unit 166 divides the residual block into one or more transform units (TUs), and each transform unit (TU) can be transform-coded, quantized, and entropy-encoded.
  • the residual block coding unit 166 may transform the residual block generated by the inter prediction method using an integer-based transform matrix, and the transform matrix may be an integer-based DCT matrix.
  • the residual block coding unit 166 uses a quantization matrix to quantize the coefficients of the residual block transformed by the transformation matrix, and the quantization matrix can be determined by the quantization parameter.
  • the quantization parameter is determined for each coding unit (CU) of a predetermined size or more; when the current coding unit (CU) is smaller than the predetermined size, the quantization parameter of the first coding unit (CU) in coding order within the predetermined size is used, and the quantization parameters of the remaining coding units (CUs), being the same as that parameter, are not encoded.
  • coefficients of the transform block may be quantized using a quantization matrix determined according to the quantization parameter and the prediction mode.
  • the quantization parameter determined for each coding unit (CU) larger than the predetermined size can be predictively encoded using the quantization parameter of the coding unit (CU) adjacent to the current coding unit (CU).
  • a quantization parameter predictor of the current coding unit (CU) can be generated by searching in the order of the left coding unit (CU) and the upper coding unit (CU) of the current coding unit (CU) and using one or two valid quantization parameters.
  • the first valid quantization parameter retrieved in the above order can be determined as the quantization parameter predictor; alternatively, the first valid quantization parameter found by searching in the order of the left coding unit (CU) and the immediately preceding coding unit (CU) in coding order can be determined as the quantization parameter predictor.
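The neighbour search just described can be sketched as follows. The fallback to the immediately preceding coding unit in coding order follows the alternative search order in the text; `None` marking an invalid neighbour (e.g., outside the slice) is an illustrative assumption.

```python
def qp_predictor(left_qp, above_qp, prev_qp):
    """Predict the QP of the current CU from its neighbours.

    Search order: left CU, then upper CU; the first valid QP becomes the
    predictor, falling back to the previous CU in coding order.  A sketch
    of the rule described in the text, not a normative implementation.
    """
    for qp in (left_qp, above_qp):
        if qp is not None:          # None marks an unavailable neighbour
            return qp
    return prev_qp

print(qp_predictor(None, 30, 28))   # → 30  (left invalid, upper used)
print(qp_predictor(None, None, 28)) # → 28  (fallback to previous CU)
```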
  • the coefficients of the quantized transform block are scanned and converted into one-dimensional quantization coefficients, and the scanning method can be set differently according to the entropy encoding mode.
  • inter-prediction-encoded quantized coefficients can be scanned in a predetermined manner (e.g., a zigzag or diagonal raster scan), and a different scanning manner may be used when the coefficients are encoded by CAVLC.
  • the scanning method may be determined according to the intra-prediction mode in the case of intra coding, and the coefficient scanning method may also be determined differently depending on the size of the conversion unit.
  • the scan pattern may vary according to the directional intra prediction mode, and the scan order of the quantization coefficients may be scanned in the reverse direction.
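The zigzag coefficient scan mentioned above converts the 2-D quantized block into a 1-D sequence, grouping low-frequency coefficients near the front. A minimal sketch (the block contents are arbitrary example values):

```python
def zigzag_scan(block):
    """Flatten an NxN coefficient block in zigzag order.

    Each anti-diagonal is visited in alternating direction, so
    low-frequency coefficients (near the DC term) come first.
    """
    n = len(block)
    out = []
    for s in range(2 * n - 1):                     # anti-diagonal index
        coords = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            coords.reverse()                       # alternate direction
        out.extend(block[i][j] for i, j in coords)
    return out

# Block constructed so the zigzag order reads 1..9:
block = [[1, 2, 6],
         [3, 5, 7],
         [4, 8, 9]]
print(zigzag_scan(block))  # → [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Reversing this list and traversing the coordinates in the opposite order gives the inverse scan used at the decoder.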
  • the multiplexer 167 multiplexes the motion information encoded by the motion information encoding unit 163 and the residual signals encoded by the residual block encoding unit 166.
  • the motion information may be changed according to the encoding mode.
  • the motion information may include only an index indicating a predictor.
  • the motion information may include a reference picture index, a difference motion vector, and an AMVP index of the current block.
  • the intra prediction unit 150 receives the prediction mode information and the size of the prediction unit (PU) from the picture division unit 110, and reads reference pixels from the picture storage unit 180 to determine the intra prediction mode of the prediction unit (PU).
  • the intra prediction unit 150 determines whether a reference pixel is generated by examining whether or not a reference pixel that is not available exists, and the reference pixels can be used to determine an intra prediction mode of the current block.
  • if the current block is located at the upper boundary of the current picture, pixels adjacent to the upper side of the current block are not defined; if the current block is located at the left boundary of the current picture, pixels adjacent to the left side of the current block are not defined; in these cases it can be determined that the pixels are not usable pixels.
  • if the current block is located at a slice boundary and the pixels adjacent to the upper side or the left side of the slice are not encoded and reconstructed, it can be determined that they are not usable pixels.
  • the intra prediction mode of the current block may be determined using only available pixels.
  • a reference pixel at an unavailable position may be generated using the available reference pixels of the current block. For example, when the pixels of the upper block are not available, the left pixels may be used to generate the upper reference pixels, and vice versa.
  • a reference pixel can be generated by copying the available reference pixel at the position closest in a predetermined direction from the reference pixel at the unavailable position; when there is no reference pixel available in the predetermined direction, a reference pixel can be generated by copying an available reference pixel at the closest position in the opposite direction.
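The nearest-available-sample substitution just described can be sketched as follows, with the left and upper reference samples flattened into one 1-D list and `None` marking unavailable positions (a simplification for illustration; real codecs scan the left column bottom-up and then the top row):

```python
def pad_reference_samples(ref):
    """Fill unavailable reference samples (None) by copying the nearest
    available sample: forward pass copies from the closest available
    sample in the scan direction, backward pass handles a leading run
    that had nothing before it (the 'opposite direction' fallback).
    """
    out = list(ref)
    last = None
    for i, v in enumerate(out):        # forward pass
        if v is None:
            out[i] = last
        else:
            last = v
    nxt = None
    for i in range(len(out) - 1, -1, -1):   # backward pass
        if out[i] is None:
            out[i] = nxt
        else:
            nxt = out[i]
    return out

print(pad_reference_samples([None, None, 50, None, 60]))
# → [50, 50, 50, 50, 60]
```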
  • the reference pixel may be determined as an unavailable reference pixel according to the encoding mode of the block to which the pixels belong. For example, when the block to which the reference pixels belong has been inter-coded and reconstructed, the pixels can be determined as unavailable pixels.
  • in this case, usable reference pixels can be generated using pixels belonging to reconstructed intra-coded blocks adjacent to the current block, and information indicating that the encoding apparatus 10 determines the available reference pixels according to the encoding mode can be transmitted to the decoding apparatus 20.
  • the intra prediction unit 150 determines the intra prediction mode of the current block using the reference pixels, and the number of intra prediction modes that can be accepted in the current block may vary according to the size of the block.
  • if the current block size is 8x8, 16x16, or 32x32, there may be 34 intra prediction modes; if the current block size is 4x4, 17 intra prediction modes may exist.
  • the 34 or 17 intra prediction modes may be composed of at least one non-directional mode and a plurality of directional modes.
  • the one or more non-directional modes may be a DC mode and / or a planar mode.
  • when both the DC mode and the planar mode are included in the non-directional modes, there may be 35 intra-prediction modes regardless of the size of the current block.
  • in this case, two non-directional modes (DC mode and planar mode) and 33 directional modes may be included.
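As a concrete illustration of a non-directional mode, the following sketch generates a DC prediction block by averaging the upper and left reference samples. Real codecs additionally filter the boundary prediction pixels (as described later in this text); that step is omitted here, and the sample values are arbitrary.

```python
def dc_predict(top, left, size):
    """DC intra prediction: every prediction pixel is the rounded mean
    of the reconstructed reference samples above and to the left."""
    total = sum(top[:size]) + sum(left[:size])
    dc = (total + size) // (2 * size)      # integer rounding
    return [[dc] * size for _ in range(size)]

pred = dc_predict([100, 102, 98, 100], [101, 99, 100, 100], 4)
print(pred[0][0])  # → 100
```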
  • a prediction block of the current block may be generated using at least one pixel value located on the bottom-right side of the current block (or a predicted value of that pixel value, hereinafter referred to as a first reference value) and reference pixels.
  • the configuration of an image decoding apparatus can be derived from the configuration of the image encoding apparatus 10 described with reference to FIGS. 1 to 6; for example, the image can be decoded by reversing the processes of the image encoding method described with reference to FIGS. 1 to 6.
  • FIG. 7 is a block diagram of a moving picture decoding apparatus according to an embodiment of the present invention.
  • the decoding apparatus 20 includes an entropy decoding unit 210, an inverse quantization/inverse transform unit 220, an adder 270, a deblocking filter 250, a picture storage unit 260, an intra prediction unit 230, a motion compensation prediction unit 240, and an intra/inter changeover switch 280.
  • the entropy decoding unit 210 receives and decodes the bitstream encoded by the image encoding apparatus 10, separates it into an intra prediction mode index, motion information, a quantization coefficient sequence, and the like, and outputs the decoded motion information to the motion compensation prediction unit 240.
  • the entropy decoding unit 210 transfers the intra prediction mode index to the intra prediction unit 230 and the inverse quantization / inverse transformation unit 220, and transmits the inverse quantization coefficient sequence to the inverse quantization / inverse transformation unit 220.
  • the inverse quantization/inverse transform unit 220 transforms the quantized coefficient sequence into a two-dimensional array of inverse quantization coefficients, and can select one of a plurality of scanning patterns for the transformation; for example, the scanning pattern may be selected based on the prediction mode of the current block (i.e., intra prediction or inter prediction) and the intra prediction mode.
  • the inverse quantization / inverse transform unit 220 applies quantization matrices selected from a plurality of quantization matrices to the inverse quantization coefficients of the two-dimensional array to restore the quantization coefficients.
  • the quantization matrix may be selected according to the size of the current block to be restored and, for blocks of the same size, based on at least one of the prediction mode and the intra prediction mode of the current block.
  • the inverse quantization / inverse transform unit 220 inversely transforms the reconstructed quantized coefficients to reconstruct residual blocks, and the inverse transform process may be performed using a transform unit (TU) as a basic unit.
  • the adder 270 combines the residual block reconstructed by the inverse quantization / inverse transform unit 220 and the prediction block generated by the intra prediction unit 230 or the motion compensation prediction unit 240 to reconstruct the image block.
  • the deblocking filter 250 may perform deblocking filter processing on the reconstructed image generated by the adder 270 to reduce deblocking artifacts due to video loss due to the quantization process.
  • the picture storage unit 260 is a frame memory for storing the local decoded picture subjected to the deblocking filter process by the deblocking filter 250.
  • the intraprediction unit 230 restores the intra prediction mode of the current block based on the intra prediction mode index received from the entropy decoding unit 210 and generates a prediction block according to the reconstructed intra prediction mode.
  • the motion compensation prediction unit 240 generates a prediction block for the current block from the picture stored in the picture storage unit 260 based on the motion vector information; when motion compensation with decimal (fractional) precision is applied, the prediction block can be generated by applying a selected interpolation filter.
  • the intra / inter changeover switch 280 may provide the adder 270 with a prediction block generated in one of the intra prediction unit 230 and the motion compensation prediction unit 240 based on the encoding mode.
  • FIG. 8 is a block diagram of a configuration for performing inter-prediction in the image decoding apparatus 20.
  • the inter-prediction decoder includes a demultiplexer 241, a motion information encoding mode determiner 242, a merge mode motion information decoding unit 243, an AMVP mode motion information decoding unit 244, a prediction block generating unit 245, a residual block decoding unit 246, and a restoration block generating unit 247.
  • the demultiplexer 241 demultiplexes the encoded motion information and the encoded residual signals of the current block from the received bitstream, transmits the demultiplexed motion information to the motion information encoding mode determination unit 242, and transmits the demultiplexed residual signal to the residual block decoding unit 246.
  • the motion information encoding mode determination unit 242 determines the motion information encoding mode of the current block. If the skip_flag of the received bitstream has a value of 1, it can be determined that the motion information of the current block is encoded in the skip encoding mode.
  • if the skip_flag of the received bitstream has a value of 0 and the motion information received from the demultiplexer 241 includes only a merge index, the motion information encoding mode determiner 242 can determine that the motion information of the current block is encoded in the merge mode.
  • the motion information encoding mode determination unit 242 determines that the skip_flag of the received bitstream has a value of 0, and the motion information received from the demultiplexer 241 includes the reference picture index, the differential motion vector, and the AMVP index , It can be determined that the motion information encoding mode of the current block is coded in the AMVP mode.
  • the merge mode motion information decoding unit 243 is activated when the motion information encoding mode determining unit 242 determines the motion information encoding mode of the current block as the skip or merge mode, and the AMVP mode motion information decoding unit 244 is activated when the motion information encoding mode determination unit 242 determines that the motion information encoding mode of the current block is the AMVP mode.
  • the prediction block generation unit 245 generates a prediction block of the current block using the motion information reconstructed by the merge mode motion information decoding unit 243 or the AMVP mode motion information decoding unit 244.
  • a block corresponding to a position indicated by a motion vector in a picture indicated by a reference picture index may be copied to generate a prediction block of the current block.
  • pixels of the prediction block are generated from the pixels in the integer unit in the picture indicated by the reference picture index.
  • an 8-tap interpolation filter is used for the luminance pixel, and a prediction pixel can be generated using a 4-tap interpolation filter for the chrominance pixel.
  • the residual block decoding unit 246 entropy-decodes the residual signal, inversely scans the entropy-decoded coefficients to generate a two-dimensional quantized coefficient block, and the inverse scanning method can be changed according to the entropy decoding method.
  • the inverse scanning method can be applied in a zigzag reverse scan method.
  • the inverse scanning method may be determined differently depending on the size of the prediction block.
  • the residual block decoding unit 246 may dequantize the generated coefficient block using an inverse quantization matrix and restore the quantization parameter to derive the quantization matrix.
  • the quantization step size can be restored for each coding unit of a predetermined size or more.
  • the residual block decoding unit 246 inversely transforms the dequantized coefficient block to recover the residual block.
  • the reconstruction block generation unit 247 adds the prediction blocks generated by the prediction block generation unit 245 and the residual blocks generated by the residual block decoding unit 246 to generate reconstruction blocks.
  • the intra-prediction mode of the current block is decoded from the received bitstream.
  • the entropy decoding unit 210 restores the first intra-prediction mode index of the current block by referring to one of a plurality of intra-prediction mode tables.
  • one table, selected according to the distribution of intra-prediction modes of a plurality of blocks adjacent to the current block, may be applied as a table shared by the encoding apparatus 10 and the decoding apparatus 20 among the intra-prediction mode tables.
  • for example, if the intra prediction modes of the blocks adjacent to the current block are the same, the first intra prediction mode table is applied to restore the first intra prediction mode index of the current block; otherwise, the second intra prediction mode table may be applied to restore the first intra prediction mode index of the current block.
  • as another example, when the intra prediction modes of the upper block and the left block of the current block are both directional intra prediction modes, if the direction of the intra prediction mode of the upper block and the direction of the intra prediction mode of the left block are within a predetermined range of each other, the first intra prediction mode table is applied to restore the first intra prediction mode index of the current block; otherwise, the second intra prediction mode table is applied to restore the first intra prediction mode index of the current block.
  • the entropy decoding unit 210 transmits the first intra-prediction mode index of the restored current block to the intra-prediction unit 230.
  • the intra prediction unit 230, receiving the first intra prediction mode index, can determine the maximum possible mode (most probable mode) of the current block as the intra prediction mode of the current block when the index has the minimum value (i.e., when the index is 0).
  • otherwise, the intra predictor 230 compares the index indicated by the maximum possible mode of the current block with the first intra prediction mode index; if the first intra prediction mode index is not smaller than the index indicated by the maximum possible mode of the current block, the intra prediction mode corresponding to a second intra prediction mode index, obtained by adding 1 to the first intra prediction mode index, is determined as the intra prediction mode of the current block.
  • the intra prediction mode corresponding to the first intra prediction mode index may be determined as the intra prediction mode of the current block.
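The index-to-mode mapping described in the preceding paragraphs can be sketched as follows. The function name is an illustrative assumption; the decision rules follow the text (index 0 selects the maximum possible mode, indices at or above the maximum possible mode's own index are shifted up by one).

```python
def decode_intra_mode(first_index, mpm):
    """Recover the intra prediction mode from the parsed first intra
    prediction mode index and the block's maximum possible mode (mpm),
    following the rule described in the text."""
    if first_index == 0:
        return mpm                 # minimum index selects the MPM itself
    if first_index >= mpm:
        return first_index + 1     # skip over the MPM's own position
    return first_index

print(decode_intra_mode(0, 5))  # → 5 (the maximum possible mode)
print(decode_intra_mode(5, 3))  # → 6 (shifted past the MPM's index)
print(decode_intra_mode(2, 5))  # → 2 (used as-is)
```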
  • the intra prediction mode that is acceptable for the current block may be composed of at least one non-directional mode and a plurality of directional modes.
  • the one or more non-directional modes may be a DC mode and / or a planar mode.
  • either the DC mode or the planar mode may be adaptively included in the allowable intra prediction mode set.
  • information specifying the non-directional mode included in the allowable intra prediction mode set may be included in the picture header or slice header.
  • the intra predictor 230 reads the reference pixels stored in the picture storage unit 260, and determines whether there is a reference pixel that is not available.
  • the determination may be made according to the presence or absence of the reference pixels used to generate the intra prediction block by applying the decoded intra prediction mode of the current block.
  • the intra-prediction unit 230 may generate reference pixels at positions that are not available by using previously reconstructed available reference pixels.
  • the definition of a non-available reference pixel and the method of generating a reference pixel may be the same as those of the intra prediction unit 150 shown in FIG. 1, but only the reference pixels used to generate the intra prediction block according to the decoded intra prediction mode of the current block may be selectively restored.
  • the intra-prediction unit 230 determines whether to apply a filter to the reference pixels in order to generate a prediction block; that is, whether to apply filtering to the reference pixels to generate the intra-prediction block of the current block may be determined based on the decoded intra prediction mode and the size of the current prediction block.
  • the number of prediction modes for which the reference pixels are filtered can be increased as the size of the block increases; however, if the block is larger than a predetermined size, the reference pixels may not be filtered.
  • the intra predictor 230 filters the reference pixels using a filter.
  • At least two or more filters may be adaptively applied according to the difference in level difference between the reference pixels.
  • the filter coefficient of the filter is preferably symmetrical.
  • the above two or more filters may be adaptively applied according to the size of the current block; when a filter is applied, a filter having a narrow bandwidth may be applied to a block having a small size, and a filter having a wide bandwidth to a block having a large size.
  • the reference pixel can be adaptively filtered based on the intra-prediction mode of the current block and the size of the prediction block.
  • the intra-prediction unit 230 generates a prediction block using the reference pixels or the filtered reference pixels according to the reconstructed intra-prediction mode; since the generation of the prediction block is the same as the operation in the encoding apparatus 10, a detailed description thereof will be omitted.
  • the intraprediction unit 230 may determine whether to filter the generated prediction block, and the filtering may be determined based on the information included in the slice header or the encoding unit header or according to the intra prediction mode of the current block.
  • the intra prediction unit 230 may generate a new pixel by filtering pixels at a specific position of the prediction block generated using the available reference pixels adjacent to the current block .
  • a prediction pixel in contact with reference pixels among prediction pixels may be filtered using a reference pixel in contact with the prediction pixel.
  • the prediction pixels are filtered using one or two reference pixels according to the positions of the prediction pixels, and the filtering of the prediction pixels in the DC mode can be applied to the prediction blocks of all sizes.
  • the prediction pixels adjacent to the left reference pixel among the prediction pixels of the prediction block can be changed using reference pixels other than the upper pixel used to generate the prediction block.
  • the prediction pixels adjacent to the upper reference pixel among the generated prediction pixels may be changed using reference pixels other than the left pixel used to generate the prediction block.
  • the current block can be restored using the predicted block of the current block restored in this manner and the residual block of the decoded current block.
  • FIG. 9 is a view for explaining a second embodiment of a method of dividing and processing an image into blocks.
  • a coding tree unit (CTU) having a maximum size of 256x256 pixels is divided into a quad tree structure and divided into four coding units (CUs) having a square shape.
  • At least one of the coding units divided into the quad tree structure may be divided into two coding units (CUs) having a rectangular shape by being divided into a binary tree structure.
  • At least one of the coding units divided into the quad tree structure may be divided into four coding units (CUs) having a quad tree structure.
  • At least one of the coding units subdivided in the binary tree structure may be divided again in a binary tree structure into two coding units (CUs) having a square or rectangular shape.
  • At least one of the coding units re-divided into the quad-tree structure may be divided in a quad-tree structure or a binary-tree structure into coding units (CUs) having a square or rectangular shape.
  • Coding blocks (CBs) divided into a binary tree structure as described above can be used for prediction and conversion without being further divided. That is, the size of the prediction unit PU and the conversion unit TU belonging to the coding block CB as shown in FIG. 9 may be the same as the size of the coding block CB.
  • the coding unit divided into the quad tree structure can be divided into one or more prediction units (PUs) using the method as described with reference to FIGS.
  • the coding unit divided into the quad tree structure as described above may be divided into one or more conversion units (TUs) using the method described with reference to FIG. 5, and the divided conversion unit (TU) can have a maximum size of 64x64 pixels.
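The mixed quad-tree/binary-tree partitioning described above can be illustrated with a toy partitioner. The block sizes and the fixed one-level split pattern are arbitrary choices for demonstration, not the rate-distortion-driven decision an encoder would make.

```python
def quad_split(x, y, w, h):
    # Quad-tree split of a square block into four square sub-blocks.
    hw, hh = w // 2, h // 2
    return [(x, y, hw, hh), (x + hw, y, hw, hh),
            (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]

def binary_split(x, y, w, h, horizontal=True):
    # Binary-tree split into two rectangular sub-blocks.
    if horizontal:
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]

# A 16x16 CU quad-split into four 8x8 CUs, each then binary-split into 8x4:
leaves = [b for cu in quad_split(0, 0, 16, 16)
            for b in binary_split(*cu, horizontal=True)]
print(len(leaves))  # → 8
```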
  • FIG. 10 shows an embodiment of a syntax structure used for dividing and processing an image into blocks.
  • the division of a coding unit (CU) as described with reference to FIG. 9 is expressed using split_cu_flag, and the depth of a coding unit (CU) divided using a binary tree can be represented using binary_depth.
  • whether or not the coding unit CU is divided into a binary tree structure may be represented by a separate binary_split_flag.
  • the coding unit can be divided, in a binary tree structure, into conversion units (TUs), which are the basic units for transforming the residual block.
  • At least one of the rectangular coding blocks CB0 and CB1, divided in a binary tree structure and having a size of Nx2N or 2NxN, may be divided again in a binary tree structure into square conversion units TU0 and TU1 having a size of NxN.
  • the block-based image encoding method can perform prediction, transformation, quantization, and entropy encoding steps.
  • a prediction signal is generated by referring to a current encoding block and an existing encoded image or a surrounding image, thereby calculating a difference signal with the current block.
  • the differential signal is input to perform conversion using various conversion functions.
  • the converted signal is classified into DC coefficients and AC coefficients, and energy compaction is performed to enhance the coding efficiency.
  • quantization is performed by inputting transform coefficients, and then entropy encoding is performed on the quantized signal, so that the image can be encoded.
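The quantization step, which is where the image-quality distortion mentioned below arises, can be sketched as a uniform quantizer. The coefficient values and the quantization step are hypothetical; real codecs use scaled integer arithmetic and quantization matrices rather than floating-point division.

```python
def quantize(coeffs, qstep):
    # Uniform quantization of transform coefficients: the lossy step.
    return [round(c / qstep) for c in coeffs]

def dequantize(levels, qstep):
    # Inverse quantization at the decoder; the rounding error is not
    # recoverable, which is the source of quantization distortion.
    return [lv * qstep for lv in levels]

coeffs = [104.0, -33.0, 7.0, 2.0]   # hypothetical DC + AC coefficients
levels = quantize(coeffs, 10)
print(levels)                        # → [10, -3, 1, 0]
print(dequantize(levels, 10))        # → [100, -30, 10, 0]
```

Comparing `coeffs` with the dequantized output shows the reconstruction error introduced by the quantizer.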
  • the image decoding method proceeds in the reverse order of the encoding process as described above, and the image quality distortion phenomenon may occur in the quantization step.
  • a cost measurement method such as SAD (Sum of Absolute Difference) or MSE (Mean Square Error) may be used to measure the resulting distortion.
  • effective encoding can be performed by selectively determining the size or shape of the conversion unit (TU) based on the distribution of the various difference signals and performing the conversion.
  • the DC value generally represents the average value of the input signal; therefore, when a difference signal as shown in FIG. 12A is received as the input of the conversion process, the DC value can be effectively represented by dividing the coding block CBx into two conversion units (TUs).
  • a square coding unit CU0 having a size of 2Nx2N may be divided in a binary tree structure into rectangular transformation units TU0 and TU1 having a size of Nx2N or 2NxN.
  • the step of dividing the coding unit (CU) into the binary tree structure as described above may be repeatedly performed twice or more to divide it into a plurality of conversion units (TU).
  • a rectangular coding block CB1 having a size of Nx2N is divided in a binary tree structure, a block having the divided size NxN is divided again in a binary tree structure into blocks of size N/2xN or NxN/2, and these are further divided in binary tree structures to form square transform units (TU1, TU2, TU4, TU5) having a size of N/2xN/2.
  • a square coding block CB0 having a size of 2Nx2N is divided in a binary tree structure, a block having the divided size Nx2N is divided again in a binary tree structure, and a square block having a size of NxN may then be divided in a binary tree structure into rectangular transform units TU1 and TU2 having a size of N/2xN.
  • a rectangular coding block CB0 having a size of 2NxN is divided in a binary tree structure, and a block having the divided size NxN is divided again in a quad tree structure to obtain four square transform units (TU1, TU2, TU3, TU4) having a size of N/2xN/2.
  • the picture dividing unit 110 included in the image coding apparatus 10 performs Rate Distortion Optimization (RDO) in accordance with a predetermined order, and the division structures of the coding unit (CU), the prediction unit (PU), and the conversion unit (TU) can be determined.
  • the picture division unit 110 can determine the optimal block division structure in terms of bit rate and distortion while performing Rate Distortion Optimization-Quantization (RDO-Q).
  • for example, when the coding unit (CU) has the form of Nx2N or 2NxN pixel size, RDO may be performed over the candidate conversion unit (TU) division structures, such as the NxN pixel size shown in (b) and the 2NxN pixel size shown in (d), to determine the optimal division structure of the conversion unit (TU).
  • the picture division unit 110 may use a cost measure such as the Sum of Absolute Difference (SAD) or the Mean Square Error (MSE), making it possible to maintain proper efficiency while reducing the complexity.
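The SAD and MSE cost measures referred to above can be written directly; the sample blocks below are arbitrary illustrative values.

```python
def sad(block_a, block_b):
    # Sum of Absolute Differences: a cheap distortion measure for RDO.
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def mse(block_a, block_b):
    # Mean Square Error: a quadratic distortion measure.
    n = len(block_a)
    return sum((a - b) ** 2 for a, b in zip(block_a, block_b)) / n

cur  = [10, 12, 14, 16]
pred = [11, 12, 13, 18]
print(sad(cur, pred))  # → 4
print(mse(cur, pred))  # → 1.5
```

SAD avoids the multiplications MSE requires, which is why it is the usual choice when complexity must be reduced.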
  • FIG. 19 is a diagram for explaining a reference sample configuration to which a variation is applied according to an embodiment of the present invention.
  • conventionally, a value of the outermost pixel A4 of the current block is padded to construct the reference sample in the region where no decoded reference sample exists.
  • this hinders the formation of directionality and reduces the accuracy of prediction blocks, resulting in a reduction in encoding / decoding performance.
  • the encoding apparatus 10 and the decoding apparatus 20 according to the embodiment of the present invention calculate the amount of change (variation) of the current or neighboring blocks, corresponding to the region in which no previously encoded/decoded reference sample exists, and apply variation-based padding to construct the reference sample, so that prediction accuracy and coding efficiency can be improved. Accordingly, as shown in FIG. 19B, the variation-reflected sample padding makes it possible to construct a reference sample of a directionally preserved form, thereby improving encoding/decoding efficiency.
  • FIGS. 20 to 21 are flowcharts for explaining an intra prediction process according to an embodiment of the present invention.
  • the decoding apparatus 20 performs entropy decoding (S101), performs inverse quantization/inverse transform processing (S103), and performs a prediction information generating process on the inverse-transformed block information (S105).
  • the decoding apparatus 20 performs correction processing of the prediction information (S107) and performs intra prediction coding using the corrected prediction information (S109).
  • the operation method of the decoding apparatus 20 according to the embodiment of the present invention for generating and correcting prediction information can be processed in the intra prediction unit 230, and more specifically, can be described with reference to FIG. 21 .
  • the intra-prediction unit 230 enters a prediction information generation process (S201), and identifies the presence of previously decoded reference samples among the neighboring blocks of the current block in order to perform intra-prediction (S203).
  • the intra predictor 230 performs padding of the reference sample based on the variation amount of the surrounding sample, corresponding to the area in which the reference sample does not exist (S205).
  • the intra prediction unit 230 generates intra prediction information using the padded reference sample (S207).
  • the padding of the reference sample based on the variation amount of the surrounding sample may be processed with the single variation reflection sample padding and the multiple variation reflection sample padding, and may be selectively processed according to the case. This will be described later in more detail.
  • FIG. 22 is a flowchart for explaining a single variation-reflected sample generation process according to an embodiment of the present invention.
  • FIGS. 23 to 24 are exemplary views of single variation-reflected samples according to an embodiment of the present invention.
  • the intra predictor 230 identifies a reference sample for which no previously decoded information exists (S301), and performs single variation-reflected sample generation by applying a single-axis variation amount of the previously decoded block (S303).
  • the single variation sample padding process may include padding by reflecting a single variation amount of the x-axis or the y-axis for the reference sample area in which no previously decoded information exists.
  • the intra predictor 230 may first determine whether to pad a single variation-reflected reference sample.
  • a reference sample that does not exist can be padded using already existing samples.
  • When previously decoded reference samples exist on only one of the upper and left sides of the current block (only the upper or only the left samples exist), the samples on the existing side can be used to pad the missing reference samples.
  • NA# indicates a sample that does not exist, and A# indicates an existing sample used for padding.
  • Single variation-reflected generation of reference samples corresponding to samples (NAd, NLd) for which no previously decoded information exists can be exemplified as follows.
  • k means the number of existing samples.
  • n means the number of samples over which the variation is reflected, where n can be a constant adjusted appropriately according to the size of the current block or the like.
  • As shown in the figure, padding of reference samples up to the height of the current block may be required in order to use samples for which no previously decoded information exists as a reference block.
  • In the intra prediction unit 230, the number of samples to be padded may be relatively small while the number of samples reflecting the variation increases, which can result in poor correlation. Therefore, the size of n is preferably adjusted according to the height of the current block, as shown in the figure.
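As one hedged illustration of the single variation sample padding described above, the average change over the last n decoded samples can be extrapolated into the missing region. This is a sketch under assumptions; the function and parameter names are illustrative, not the patent's notation:

```python
def pad_single_gradient(ref, k, n, bit_depth=8):
    """Pad the missing tail of a 1-D reference sample line by
    extrapolating the average per-sample variation (illustrative
    sketch of single variation-reflected padding).

    ref : reference line, only ref[:k] holds decoded samples
    k   : number of existing samples (k in the text)
    n   : number of samples over which the variation is measured
    """
    n = min(n, k - 1)                          # cannot span more deltas than exist
    delta = (ref[k - 1] - ref[k - 1 - n]) / n  # mean change per sample
    out = list(ref[:k])
    for _ in range(len(ref) - k):
        # clip to the valid sample range for the given bit depth
        out.append(max(0, min((1 << bit_depth) - 1, round(out[-1] + delta))))
    return out
```

For a linearly increasing line [10, 20, 30, 40] with k = 4 and n = 3, the missing tail continues the slope: 50, 60, with clipping preventing overflow near the top of the sample range.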
  • The intra prediction unit 230 can determine whether to apply the increasing or decreasing variation to the nonexistent samples by determining the linearity of the existing samples.
  • the intra predictor 230 can use a condition function as shown in Equation (2) below.
  • Equation (2), similar to the condition that determines whether a strong filter is used, determines the existing samples to be linear by comparing the sum of the first and last pixels of the existing samples against twice the intermediate pixel.
  • TH is a threshold value; 0.9 can be given as an example.
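A minimal sketch of the linearity check in the spirit of Equation (2) follows, patterned on the HEVC strong intra-smoothing decision. The threshold used here is an assumption, not the patent's exact TH:

```python
def is_linear(samples, bit_depth=8):
    """Declare a reference line linear when first + last stays close
    to twice the middle pixel, as in the HEVC strong intra-smoothing
    check (illustrative threshold, not the patent's Equation (2))."""
    first = samples[0]
    mid = samples[len(samples) // 2]
    last = samples[-1]
    thresh = 1 << (bit_depth - 5)   # 8 for 8-bit samples
    return abs(first + last - 2 * mid) < thresh
```

A perfectly linear ramp passes the test, while a line with a sharp bump at the middle fails it, steering the padding away from variation reflection in that case.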
  • The single variation-reflected sample padding according to the embodiment of the present invention can be used as a candidate when searching for an optimal prediction block in the encoding apparatus 10.
  • An optimal prediction block can be found by performing the intra prediction according to the present invention in parallel with the existing techniques.
  • The encoding apparatus 10 can signal a 1-bit reference sample padding mode flag (padding flag) corresponding to the selected reference sample padding method to the decoding apparatus 20.
  • FIG. 25 is a flowchart illustrating a selective single variation-reflected sample padding process according to an embodiment of the present invention.
  • The intra prediction unit 230 first identifies reference samples for which no previously decoded information exists (S401), and determines the linearity of the neighboring blocks corresponding to the identified block (S403).
  • The intra prediction unit 230 checks the padding flag signaled from the encoding apparatus 10 (S405), and if the padding flag is 1, performs the single variation-reflected sample padding described above (S407).
  • Otherwise, the intra prediction unit 230 performs padding by copying the closest sample (S409).
  • The intra prediction unit 230 then performs intra prediction decoding using the padded reference samples (S411).
  • Such selective padding processing may be performed in parallel with padding of the conventional method, and in some cases, processing of the padding flag may be omitted.
  • The encoding apparatus 10 can perform intra prediction with both the existing padding method and the variation-reflected padding, select the more efficient method through RD-cost comparison, and perform the 1-bit signaling only when a padding flag is required.
  • In the decoding apparatus 20, decoding can be performed without additional bits in the nonlinear case by performing the linearity determination before checking the padding flag; accordingly, the selective padding process can be handled efficiently.
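The decoder-side flow just described can be sketched as follows, with the linearity test and gradient padding inlined in simplified form; all names and the concrete threshold are assumptions for illustration only:

```python
def pad_reference(ref, k, padding_flag, n=4, bit_depth=8):
    """Sketch of a FIG.-25-style decoder flow: the signalled padding
    flag is consulted only when the decoded samples are linear; in
    the non-linear case the closest sample is replicated without
    reading any extra bit."""
    decoded = list(ref[:k])
    first, mid, last = decoded[0], decoded[len(decoded) // 2], decoded[-1]
    linear = abs(first + last - 2 * mid) < (1 << (bit_depth - 5))
    if linear and padding_flag == 1:
        m = min(n, k - 1)
        delta = (decoded[-1] - decoded[-1 - m]) / m   # mean change per sample
        for _ in range(len(ref) - k):
            decoded.append(max(0, min((1 << bit_depth) - 1,
                                      round(decoded[-1] + delta))))
        return decoded
    return decoded + [decoded[-1]] * (len(ref) - k)   # nearest-sample copy
```

With a linear line and flag 1, the slope is extrapolated; with flag 0 (or a non-linear line), the last decoded sample is simply repeated.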
  • FIG. 26 is a flowchart for explaining a multiple variation-reflected sample generation process according to an embodiment of the present invention.
  • FIGS. 27 to 28 are illustrations of multiple variation-reflected samples according to an embodiment of the present invention.
  • In the multiple variation-reflected sample generation process, the sample padding reflects both the x-axis variation and the y-axis variation for reference samples in which no previously decoded information exists: the intra prediction unit 230 identifies such reference samples (S501) and generates multiple variation-reflected samples by applying the x-axis and y-axis variations derived from samples such as the right-above (RA) sample (S503).
  • NA and NL can be included in the reference sample array (one-dimensional array).
  • NA# denotes a sample that does not exist.
  • A# denotes a sample to be used for padding.
  • FIG. 27 shows an example for the upper reference samples; samples NL# that are not present on the left side can be handled in the same way.
  • The multiple variation sample padding can be applied when the upper and left samples are only partially present.
  • the intra predictor 230 may generate a RB (Right bottom) sample using RA (Right Above) and LB (Left Bottom) to construct a reference sample in which the previously decoded information does not exist.
  • The multiple variation sample padding can also be applied when the upper and left samples do not both exist (only the upper or only the left samples exist).
  • The intra prediction unit 230 may obtain RA and LB by copying from adjacent samples in which decoded information already exists (such as A4 or L2), or by performing the single variation sample padding described above.
  • Alternatively, the upper samples can be formed by padding the reference samples for which no information exists from the already encoded/decoded block using the single variation sample padding described above.
  • the intra predictor 230 may generate an arbitrary RB sample for the multiple change amount sample padding.
  • The RB sample can be obtained as the distance-proportional sum of the right-above sample RA and the left-bottom sample LB, and RA and LB themselves can be obtained by copying from adjacent samples or generated through the variation-reflected padding method.
  • the distance proportional sum can be calculated as shown in Equation (3) below.
  • I (x, y) may represent a pixel value at the (x, y) position.
  • the intra prediction unit 230 may generate a sample reflecting the reference axis variation amount between the RB sample and the upper sample RA.
  • NA and NL can be generated by a distance-proportional weighted sum, as shown in Equation 4 below.
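A hedged sketch of this multiple variation-reflected padding for the top reference line follows. The bottom-right sample RB is formed as a distance-proportional sum of RA and LB, and the missing extension of the top line as a distance-proportional weighted sum between RA and RB; the concrete weights are illustrative stand-ins for Equations (3) and (4), which are not reproduced in the text:

```python
def pad_top_line_bilinear(above, left):
    """Extend the top reference line using a hypothetical RB sample.

    above: decoded top reference line (its last sample is RA)
    left:  decoded left reference line (its last sample is LB)
    """
    w, h = len(above), len(left)
    ra, lb = above[-1], left[-1]
    rb = (w * ra + h * lb) / (w + h)   # distance-proportional sum (cf. Eq. (3))
    padded = list(above)
    for i in range(1, w + 1):          # extend the top line by w samples
        t = i / w                      # normalized distance from RA
        padded.append(int((1 - t) * ra + t * rb + 0.5))  # weighted sum (cf. Eq. (4))
    return padded
```

The padded samples drift smoothly from RA toward the synthesized RB value, rather than flatly copying RA as nearest-sample padding would.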
  • The variation-reflected padding may be disadvantageous when the variation is irregular, neither consistently increasing nor decreasing, because it is driven by the increase or decrease of the existing samples. Therefore, whether to pad the nonexistent samples this way can be decided by judging the linearity of the existing samples.
  • the linearity determination may be determined based on a strong filter determination condition or a threshold comparison of Pearson correlation coefficients, as in the case of the single variation reflection sample described above.
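The Pearson-correlation form of the linearity determination mentioned above can be sketched as follows, correlating sample values with their positions along the line; TH = 0.9 is the value exemplified earlier, and the 1-D formulation is an assumption:

```python
def pearson_linear(samples, th=0.9):
    """Linearity by thresholding the Pearson correlation between the
    sample values and their positions on the reference line."""
    n = len(samples)
    xs = range(n)
    mx = (n - 1) / 2                   # mean of positions 0..n-1
    my = sum(samples) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, samples))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in samples) ** 0.5
    if sx == 0 or sy == 0:
        return True                    # constant line is trivially linear
    return abs(cov / (sx * sy)) >= th
```

A monotone ramp gives |r| = 1 and passes; an oscillating line gives |r| near 0 and fails, so the variation-reflected padding would be skipped.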
  • the multiple change amount reflection sample padding according to the embodiment of the present invention can also be used as a candidate for finding an optimal prediction block in the encoding apparatus 10.
  • an optimal prediction block can be found by concurrently performing the intra-picture prediction according to the present invention and the existing technology.
  • the encoding apparatus 10 can signal the reference sample padding mode flag (Padding Flag) corresponding to the selected reference sample padding method to the decoding apparatus 20.
  • a padding mode flag of at least 1 bit and at most 2 bits can be used.
  • FIG. 29 is a flowchart illustrating a selective single and multiple variation-reflected sample padding process according to an embodiment of the present invention.
  • the intra predictor 230 first identifies a reference sample in which there is no previously decoded information (S601), and determines the linearity of neighboring blocks corresponding to the identified block (S603).
  • The intra prediction unit 230 checks the padding flag signaled from the encoding apparatus 10 (S605), and if the padding flag indicates mode 1, performs the single variation-reflected sample padding described above (S608).
  • The intra prediction unit 230 also determines whether the padding mode is 2 (S606); if so, it may perform the multiple variation-reflected sample padding (S607).
  • Otherwise, the intra prediction unit 230 performs padding by copying the closest sample (S609).
  • The intra prediction unit 230 then performs intra prediction decoding using the padded reference samples (S211).
  • Such selective padding processing may be performed in parallel with padding of the conventional method, and in some cases, processing of the padding flag may be omitted.
  • The encoding apparatus 10 can perform intra prediction with both the existing padding method and the variation-reflected padding, select the more efficient method through RD-cost comparison, and perform 1-bit or 2-bit signaling when a padding flag is required.
  • In the decoding apparatus 20, decoding can be performed without additional bits in the nonlinear case by performing the linearity determination before checking the padding flag; accordingly, the selective padding process can be handled efficiently.
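One way the 1- to 2-bit padding mode could be binarized is a truncated-unary code; this is purely a hypothetical parse, since the text does not specify the actual binarization:

```python
def parse_padding_mode(bits):
    """Hypothetical truncated-unary parse of the padding mode:
    '0' -> conventional nearest-sample padding (1 bit),
    '10' -> single variation-reflected padding (2 bits),
    '11' -> multiple variation-reflected padding (2 bits)."""
    if bits[0] == 0:
        return "nearest"
    return "single" if bits[1] == 0 else "multiple"
```

Under this assumed code the most common choice costs a single bit, matching the "at least 1 bit and at most 2 bits" description.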
  • the information signaled from the coding apparatus 10 to the decoding apparatus 20 can be exemplified as follows.
  • The variation-reflected padding (single or multiple) is performed only in the linear case by performing the linearity determination first; when the existing padding method is excluded from the comparison, the padding mode can be signaled with 1 bit.
  • FIGS. 30 and 31 are views for explaining a reference sample non-existence case to which the variation-reflected sample padding process according to the embodiment of the present invention is applicable.
  • The reference sample non-existence cases to which the variation-reflected sample padding process according to the embodiment of the present invention is applicable include any case in which at least one previously decoded reference sample is not present. Accordingly, the process can be applied whether the previously decoded reference samples are entirely absent or only partially present, and the intra prediction unit 230 can determine the condition for this in advance so that efficient processing is possible.
  • The method according to the present invention may be implemented as a program for execution on a computer and stored in a computer-readable recording medium.
  • Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a floppy disk, an optical data storage device, and the like.
  • The computer-readable recording medium may also be distributed over networked computer systems so that computer-readable code is stored and executed in a distributed manner. Functional programs, codes, and code segments for implementing the above method can be easily inferred by programmers in the technical field to which the present invention belongs.

Abstract

The present invention relates to an image processing method comprising: a step of partitioning a frame of an image into a plurality of coding units constituting the basic unit on which inter prediction or intra prediction is performed; and a step of selectively constructing a prediction mode list for deriving, from an intra prediction direction of a neighboring block adjacent to a block to be decoded, a prediction direction of the block to be decoded, for an intra-predicted unit among the partitioned coding units, wherein the step of partitioning a frame of an image into a plurality of coding units comprises a step of partitioning the frame or the partitioned coding units into a binary-tree structure.
PCT/KR2019/000238 2018-01-08 2019-01-08 Procédé de traitement d'image, et procédé de codage et de décodage d'image l'utilisant WO2019135658A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0002462 2018-01-08
KR1020180002462A KR102520405B1 (ko) 2018-01-08 2018-01-08 영상 처리 방법, 그를 이용한 영상 복호화 및 부호화 방법

Publications (1)

Publication Number Publication Date
WO2019135658A1 true WO2019135658A1 (fr) 2019-07-11

Family

ID=67144235

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/000238 WO2019135658A1 (fr) 2018-01-08 2019-01-08 Procédé de traitement d'image, et procédé de codage et de décodage d'image l'utilisant

Country Status (2)

Country Link
KR (1) KR102520405B1 (fr)
WO (1) WO2019135658A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024025371A1 (fr) * 2022-07-27 2024-02-01 엘지전자 주식회사 Procédé et dispositif de codage/décodage d'image, et support d'enregistrement dans lequel est stocké un flux binaire
WO2024025370A1 (fr) * 2022-07-28 2024-02-01 현대자동차주식회사 Procédé de codage/décodage d'image, dispositif, et support d'enregistrement dans lequel est stocké un flux binaire

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130004548A (ko) * 2011-07-01 2013-01-11 삼성전자주식회사 단일화된 참조가능성 확인 과정을 통해 인트라 예측을 수반하는 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치
KR20140008503A (ko) * 2012-07-10 2014-01-21 한국전자통신연구원 영상 부호화/복호화 방법 및 장치
KR101673025B1 (ko) * 2010-07-22 2016-11-04 에스케이 텔레콤주식회사 전 방위 기반의 인트라 예측 부호화/복호화 장치 및 방법
KR101722284B1 (ko) * 2011-03-25 2017-04-18 삼성전자주식회사 방송/통신 시스템에서 제어 정보를 부호화하는 방법 및 그 제어 정보를 송수신하는 장치 및 방법
KR20170125154A (ko) * 2016-05-03 2017-11-14 광운대학교 산학협력단 곡선 화면 내 예측을 사용하는 비디오 복호화 방법 및 장치


Also Published As

Publication number Publication date
KR20190084560A (ko) 2019-07-17
KR102520405B1 (ko) 2023-04-10

Similar Documents

Publication Publication Date Title
WO2018070809A1 (fr) Procédé de traitement d'image, et procédé de codage et de décodage d'image associé
WO2017204427A1 (fr) Procédé de traitement d'image, et procédé d'encodage et de décodage d'image utilisant celui-ci
WO2017204532A1 (fr) Procédé de codage/décodage d'images et support d'enregistrement correspondant
WO2018026118A1 (fr) Procédé de codage/décodage d'images
WO2018030599A1 (fr) Procédé de traitement d'image fondé sur un mode de prédiction intra et dispositif associé
WO2017018664A1 (fr) Procédé de traitement d'image basé sur un mode d'intra prédiction et appareil s'y rapportant
WO2019182385A1 (fr) Dispositif et procédé de codage/décodage d'image, et support d'enregistrement contenant un flux binaire
WO2019022568A1 (fr) Procédé de traitement d'image, et procédé et dispositif de codage/décodage d'image en utilisant celui-ci
WO2019172705A1 (fr) Procédé et appareil de codage/décodage d'image utilisant un filtrage d'échantillon
WO2018097692A2 (fr) Procédé et appareil de codage/décodage d'image et support d'enregistrement contenant en mémoire un train de bits
WO2012005520A2 (fr) Procédé et appareil d'encodage vidéo au moyen d'une fusion de blocs, et procédé et appareil de décodage vidéo au moyen d'une fusion de blocs
WO2018135885A1 (fr) Procédé de décodage et de codage d'image pour fournir un traitement de transformation
WO2018047995A1 (fr) Procédé de traitement d'image basé sur un mode d'intraprédiction et appareil associé
WO2011071328A2 (fr) Procédé et appareil d'encodage vidéo, et procédé et appareil de décodage vidéo
WO2019066524A1 (fr) Procédé et appareil de codage/ décodage d'image et support d'enregistrement pour stocker un train de bits
WO2019050292A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2018124333A1 (fr) Procédé de traitement d'image basé sur un mode de prédiction intra et appareil s'y rapportant
WO2019190201A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2019182295A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018066958A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2021107532A1 (fr) Procédé et appareil de codage/décodage d'image, et support d'enregistrement sur lequel est stocké un flux binaire
WO2018056701A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2019050291A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2019194647A1 (fr) Procédé de filtrage adaptatif de boucle basé sur des informations de filtre et procédé de codage et de décodage d'image l'utilisant
WO2019194653A1 (fr) Procédé de traitement d'image de fourniture de processus de mode de fusion complexe d'informations de mouvement, procédé de codage et de décodage d'image l'utilisant, et appareil associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19736027

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19736027

Country of ref document: EP

Kind code of ref document: A1