WO2013030902A1 - Moving image encoding method, moving image decoding method, moving image encoding apparatus and moving image decoding apparatus - Google Patents


Info

Publication number
WO2013030902A1
WO2013030902A1 (PCT/JP2011/069300)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
offset
filter
classes
filter coefficient
Prior art date
Application number
PCT/JP2011/069300
Other languages
French (fr)
Japanese (ja)
Inventor
Takayuki Ito
Takashi Watanabe
Tomoo Yamakage
Original Assignee
Kabushiki Kaisha Toshiba (Toshiba Corporation)
Priority date
Filing date
Publication date
Application filed by Kabushiki Kaisha Toshiba (Toshiba Corporation)
Priority to PCT/JP2011/069300
Publication of WO2013030902A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Definitions

  • Embodiments relate to encoding and decoding of moving images.
  • ALF (Adaptive Loop Filter)
  • Regarding ALF, a technique is also known in which one or more filter coefficient sets are prepared per slice and switched in units of pixels or pixel blocks when applied to the decoded image.
  • According to such switching, the loop filter process can easily adapt to the local structure of the decoded image; that is, the image quality of the decoded image can be further improved.
  • On the other hand, the code amount increases because of the overhead of the filter coefficient sets, and when hardware implementation is assumed, there is also a concern about increased power consumption.
  • Specifically, the total number of filter coefficient values included in each filter coefficient set is, for example, 21 at maximum, and the total number of filter coefficient sets held in a slice is, for example, 16 at maximum. It is difficult to hold all of this information in the loop filter processing unit, so every time the filter coefficient set is switched in pixel units or pixel block units, the newly selected filter coefficient set is expected to be read.
  • SAO (pixel adaptive offset)
  • In SAO, the encoding side sets a plurality of offset values and transmits information indicating these offset values to the decoding side; the decoding side then switches among the offset values in units of pixels and applies (adds) them to the decoded image.
  • A combination of ALF and pixel adaptive offset is also conceivable.
  • In a simple combination, the encoder sets the plurality of offset values according to SAO and then sets the filter coefficient set according to ALF. With such a combination, the application of ALF is not considered when the offset values are set, so the plurality of offset values and the filter coefficient set are not always set to optimum values.
  • One purpose of the embodiments is to improve encoding efficiency.
  • Another purpose of the embodiments is to reduce power consumption when implemented in hardware.
  • The moving image encoding method includes setting, for each first unit including one or more pixels in a decoded image, any one of a plurality of offset classes based on a first index indicating an image feature of the first unit.
  • the moving image encoding method includes setting a filter coefficient set including a plurality of filter coefficient values and an offset value corresponding to each of the plurality of offset classes based on the input image and the decoded image.
  • the moving picture encoding method includes encoding information indicating a filter coefficient set and information indicating an offset value corresponding to each of a plurality of offset classes to generate encoded data.
  • FIG. 1 is a block diagram illustrating a moving image encoding apparatus according to a first embodiment.
  • A block diagram illustrating the filtering unit of FIG. 1, and a flowchart illustrating an operation related to the ALF processing unit in FIG. 1.
  • 1 is a block diagram illustrating a moving image decoding apparatus according to a first embodiment.
  • FIG. 10 is a block diagram illustrating a moving image encoding apparatus according to a fifth embodiment.
  • A block diagram illustrating the moving picture decoding apparatus according to the fifth embodiment.
  • FIG. 18 is a block diagram illustrating a filtering unit in FIG. 17.
  • FIG. 18 is a flowchart illustrating an operation related to the ALF processing unit of FIG.
  • A block diagram illustrating the moving picture encoding apparatus according to an eighth embodiment, and a flowchart illustrating an operation of the moving image encoding apparatus in FIG. 1.
  • A flowchart illustrating an operation of the video decoding device in FIG. 5.
  • the moving picture coding apparatus includes a moving picture coding unit 100 and a coding control unit 110.
  • The video encoding unit 100 includes a predicted image generation unit 101, a subtraction unit 102, a transform and quantization unit 103, an inverse quantization and inverse transform unit 104, an adder 105, an offset class setting unit 106, a filter coefficient set and offset value setting unit 107, a filtering unit 108, and an entropy encoding unit 109.
  • the offset class setting unit 106, the filter coefficient set / offset value setting unit 107, and the filtering unit 108 may be referred to as an ALF processing unit.
  • the encoding control unit 110 controls the operation of each unit of the moving image encoding unit 100.
  • the predicted image generation unit 101 performs a prediction process on the input image 11 in units of pixel blocks, for example, and generates a predicted image.
  • the input image 11 includes a plurality of pixel signals and is input from the outside of the moving image encoding unit 100.
  • the predicted image generation unit 101 may perform a prediction process on the input image 11 based on an ALF processed image 17 described later.
  • the prediction process may be a general process such as a temporal direction prediction process using motion compensation, a spatial direction prediction process using encoded pixels in the screen, and the like. Therefore, the detailed description of the prediction process is omitted.
  • the predicted image generation unit 101 outputs the predicted image to the subtraction unit 102 and the addition unit 105.
  • the subtraction unit 102 acquires the input image 11 from the outside of the moving image encoding unit 100 and inputs the predicted image from the predicted image generation unit 101.
  • the subtraction unit 102 subtracts the prediction image from the input image 11 to obtain a prediction error image.
  • the subtraction unit 102 outputs the prediction error image to the transform and quantization unit 103.
  • the transform / quantization unit 103 receives the prediction error image from the subtraction unit 102.
  • the transform and quantization unit 103 performs transform processing on the prediction error image to generate transform coefficients. Further, the transform and quantization unit 103 quantizes the transform coefficient to generate a quantized transform coefficient.
  • the transform and quantization unit 103 outputs the quantized transform coefficient to the inverse quantization and inverse transform unit 104 and the entropy encoding unit 109.
  • The transform process is typically an orthogonal transform such as the Discrete Cosine Transform (DCT), but it is not limited to DCT and may be a wavelet transform, independent component analysis, or the like.
  • the quantization process is performed based on the quantization parameter set by the encoding control unit 110.
  • the inverse quantization and inverse transform unit 104 inputs the quantized transform coefficient from the transform and quantization unit 103.
  • the inverse quantization and inverse transform unit 104 dequantizes the quantized transform coefficient and decodes the transform coefficient. Further, the inverse quantization and inverse transform unit 104 performs an inverse transform process on the transform coefficient to decode the prediction error image.
  • the inverse quantization and inverse transform unit 104 outputs the prediction error image to the addition unit 105.
  • the inverse quantization and inverse transform unit 104 performs an inverse process of the transform and quantization unit 103. That is, the inverse quantization is performed based on the quantization parameter set by the encoding control unit 110. Further, the inverse transformation process is determined by the transformation process performed by the transformation and quantization unit 103.
  • the inverse transform process includes inverse DCT (Inverse DCT; IDCT), inverse wavelet transform, and the like.
  • the addition unit 105 inputs a prediction image from the prediction image generation unit 101 and inputs a prediction error image from the inverse quantization and inverse conversion unit 104.
  • the adding unit 105 adds the prediction error image to the prediction image to generate a (local) decoded image 12.
  • the addition unit 105 outputs the decoded image 12 to the offset class setting unit 106, the filter coefficient set / offset value setting unit 107, and the filtering unit 108.
  • the offset class setting unit 106 receives the decoded image 12 from the adding unit 105 and sets an offset class for each first unit based on the first index.
  • the offset class setting unit 106 generates offset class information 13 indicating an offset class corresponding to each first unit. Details of the offset class setting unit 106 will be described later.
  • the offset class setting unit 106 outputs the offset class information 13 to the filter coefficient set / offset value setting unit 107 and the filtering unit 108.
  • The filter coefficient set and offset value setting unit 107 acquires the input image 11 from outside the moving image encoding unit 100, receives the decoded image 12 from the addition unit 105, and receives the offset class information 13 from the offset class setting unit 106.
  • the filter coefficient set and offset value setting unit 107 sets a filter coefficient set and an offset value corresponding to each offset class based on the input image 11, the decoded image 12, and the offset class information 13. Details of the filter coefficient set and offset value setting unit 107 will be described later.
  • the filter coefficient set and offset value setting unit 107 outputs the filter coefficient set information 14 indicating the set filter coefficient set to the filtering unit 108 and the entropy encoding unit 109.
  • the filter coefficient set and offset value setting unit 107 outputs the offset information 15 indicating the offset value corresponding to each set offset class to the filtering unit 108 and the entropy coding unit 109.
  • The filtering unit 108 receives the decoded image 12 from the addition unit 105, the offset class information 13 from the offset class setting unit 106, and the filter coefficient set information 14 and the offset information 15 from the filter coefficient set and offset value setting unit 107.
  • the filtering unit 108 performs a filtering process on the decoded image 12 based on the offset class information 13, the filter coefficient set information 14, and the offset information 15 to generate an ALF processed image 17.
  • the filtering unit 108 outputs the ALF processed image 17 to the predicted image generation unit 101. Details of the filtering unit 108 will be described later.
  • the ALF processed image 17 may be stored in a storage unit (not shown) (for example, a buffer) accessible by the predicted image generation unit 101.
  • the ALF processed image 17 is read as a reference image by the predicted image generation unit 101 as necessary, and is used for the prediction process.
  • The entropy encoding unit 109 receives the quantized transform coefficients from the transform and quantization unit 103, the filter coefficient set information 14 and the offset information 15 from the filter coefficient set and offset value setting unit 107, and the encoding parameters from the encoding control unit 110.
  • the encoding parameter may include, for example, mode information, motion information, encoded block division information, quantization parameter, and the like.
  • The entropy encoding unit 109 performs entropy encoding (for example, Huffman encoding, arithmetic encoding, etc.) on the quantized transform coefficients, the filter coefficient set information 14, the offset information 15, and the encoding parameters, and generates encoded data 18.
  • the entropy encoding unit 109 outputs the encoded data 18 to the outside of the moving image encoding unit 100 (for example, communication system, storage system, etc.).
  • the encoded data 18 is decoded by an image decoding device described later.
  • the encoding control unit 110 performs encoding block division control, generated code amount feedback control, quantization control, mode control, and the like for the moving image encoding unit 100.
  • the encoding control unit 110 outputs the encoding parameters to the entropy encoding unit 109.
  • the moving picture encoding unit 100 operates as shown in FIG. 22, for example. Specifically, the subtraction unit 102 subtracts the prediction image from the input image 11 to generate a prediction error image (step S2201).
  • the transform and quantization unit 103 performs transform and quantization on the prediction error image generated in step S2201, and generates a quantized transform coefficient (step S2202).
  • the inverse quantization and inverse transform unit 104 performs inverse quantization and inverse transform on the quantized transform coefficient generated in step S2202, and decodes the prediction error image (step S2203).
  • the adding unit 105 adds the prediction error image decoded in step S2203 to the prediction image to generate a (local) decoded image 12 (step S2204).
  • the ALF processing unit performs ALF processing (step S2205). Although details of the ALF processing will be described later, filter coefficient set information 14, offset information 15 and an ALF processed image 17 are generated based on the input image 11 and the decoded image 12 in step S2205.
  • the entropy encoding unit 109 entropy encodes the quantized transform coefficient generated in step S2202, the filter coefficient set information 14 and offset information 15 generated in step S2205, and the encoding parameter (step S2206). These series of processes are repeated until encoding of the input image 11 is completed.
  • The operation illustrated in FIG. 22 corresponds to so-called hybrid coding, which includes prediction processing and transform processing.
  • However, the moving image encoding apparatus does not necessarily need to perform hybrid encoding; for example, hybrid coding may be replaced with DPCM (Differential Pulse Code Modulation), in which case unnecessary processing may be omitted while prediction based on neighboring pixels is still performed.
  • the offset class setting unit 106 sets the offset class for each first unit of the decoded image 12 based on the first index.
  • the first unit may be a single pixel or a region including a plurality of pixels (for example, a pixel block).
  • the first unit is assumed to be one pixel.
  • the first unit may be appropriately extended to a region including a plurality of pixels.
  • the first index is a value indicating the image feature of the first unit.
  • the activity of the image of the first unit may be used as the first index.
  • the offset class setting unit 106 may calculate the first index k (x, y) of the pixel specified by the position (x, y) by the following formula (1).
  • S dec (x, y) represents the pixel value at the position (x, y) in the decoded image 12.
  • The first index k(x, y) represents the activity at the position (x, y). Using the above formula (1), the activity may also be calculated for each pixel within a certain range around the pixel of interest, for example an N × N pixel region around the pixel of interest (N being an integer of 2 or more), and the sum of these activities may be used as the first index k(x, y).
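As a concrete illustration, the activity-based first index can be sketched in Python. The Laplacian-style measure below is an assumption standing in for formula (1), whose exact form is not reproduced in this text; the function names and window size n are likewise illustrative, and border pixels are not handled for brevity.

```python
def activity(img, x, y):
    # Laplacian-style activity at interior position (x, y): sum of the
    # absolute horizontal and vertical second differences (assumed form).
    c = img[y][x]
    horiz = abs(2 * c - img[y][x - 1] - img[y][x + 1])
    vert = abs(2 * c - img[y - 1][x] - img[y + 1][x])
    return horiz + vert

def first_index(img, x, y, n=3):
    # First index k(x, y): sum of activities over an n x n window
    # centred on the pixel of interest (N >= 2 per the text).
    half = n // 2
    total = 0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            total += activity(img, x + dx, y + dy)
    return total
```

A flat region yields index 0, while an isolated bright pixel yields a large index, so the index does track local image structure.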
  • the offset class setting unit 106 can calculate the first index based on a comparison between the target pixel and surrounding pixels instead of the activity.
  • For example, the first index may be the rank of the pixel of interest when the pixel of interest and its surrounding pixels are sorted in descending order of pixel value.
  • the offset class setting unit 106 may calculate the first index k (x, y) by the following mathematical formula (2).
  • the function sign ( ⁇ ) returns 1 if ⁇ is a positive value, returns 0 if ⁇ is 0, and returns -1 if ⁇ is a negative value.
  • According to the above formula (2), the first index k(x, y) is 8 if the pixel value of the target pixel is larger than those of all four surrounding pixels, 4 if the target pixel and all four surrounding pixels have the same value, and 0 if the pixel value of the target pixel is smaller than those of all four surrounding pixels. The above formula (2) may also be modified.
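The behaviour described for formula (2) can be sketched as follows. The `+4` normalisation is an assumption, chosen so that the three stated values (8, 4, and 0) come out as described; the exact form of formula (2) is not reproduced in this text.

```python
def sign(v):
    # Returns 1 for positive, 0 for zero, -1 for negative, as described.
    return (v > 0) - (v < 0)

def rank_index(img, x, y):
    # First index per the description of formula (2): 8 when the pixel of
    # interest exceeds all four neighbours, 4 when all five values are
    # equal, 0 when it is below all of them.
    c = img[y][x]
    neighbours = [img[y][x - 1], img[y][x + 1], img[y - 1][x], img[y + 1][x]]
    return 4 + sum(sign(c - q) for q in neighbours)
```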
  • the pixel value S dec (x, y) of the target pixel may be used as the first index k (x, y).
  • the scan order of the target pixel that is, the position of the target pixel within the picture, the slice, or the pixel block may be used as the first index k (x, y).
  • the scan order may be an order based on raster scan, zigzag scan, Hilbert scan, or the like.
  • the first unit is not limited to one pixel and may be a region including a plurality of pixels.
  • In this case, the offset class setting unit 106 may calculate the first index for all or some of the pixels included in the first unit by the methods described above, and use the sum, average value, minimum value, or maximum value of these values as the first index of the first unit.
  • the scan order of the first unit may be used as the first index.
  • the scan order may be an order based on raster scan, zigzag scan, Hilbert scan, or the like.
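For illustration, the scan-order indices mentioned above can be computed as follows. A JPEG-style zigzag is assumed, since the exact scan definitions are not given here, and the Hilbert scan is omitted for brevity.

```python
def raster_order(x, y, width):
    # Raster-scan order of position (x, y) in a region of the given width.
    return y * width + x

def zigzag_order(x, y, width, height):
    # Zigzag-scan order of (x, y): positions are visited diagonal by
    # diagonal, alternating direction (JPEG-style zigzag, assumed here).
    seq = sorted(
        ((xx, yy) for yy in range(height) for xx in range(width)),
        key=lambda p: (p[0] + p[1], p[1] if (p[0] + p[1]) % 2 else -p[1]),
    )
    return seq.index((x, y))
```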
  • The offset class setting unit 106 may set the offset class based on the first index for each first unit, for example according to the following mathematical formula (3): offset_idx(x, y) = floor(k(x, y) / a).
  • In formula (3), offset_idx(x, y) represents the offset class of the first unit to which the pixel specified by the position (x, y) belongs, k(x, y) represents the first index of that first unit, and a represents a real number of 1 or more.
  • the offset class setting unit 106 may prepare a reference table that holds the correspondence between the first index and the offset class, as shown in FIG. According to Equation (3) above, the range of the first index corresponding to an arbitrary offset class is constant. On the other hand, according to the reference table, the range of the first index corresponding to a certain offset class can be narrowed, or the range of the first index corresponding to another offset class can be expanded.
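Both variants described above can be sketched in Python. The uniform `floor(k / a)` form is an assumption for formula (3), consistent with the statement that each offset class covers a constant range of the first index; the table contents are hypothetical.

```python
def offset_class_uniform(k, a):
    # Formula (3), reconstructed as uniform binning: each offset class
    # covers an equal-width range 'a' (a real number >= 1) of first-index
    # values. The exact form of formula (3) is an assumption.
    return int(k // a)

def offset_class_table(k, table):
    # Table-based variant: 'table' lists the inclusive upper bound of the
    # first-index range for each offset class, allowing a narrow range for
    # one class and a wide range for another.
    for cls, upper in enumerate(table):
        if k <= upper:
            return cls
    return len(table)  # everything above the last bound
```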
  • the offset class setting unit 106 may fix the type of the first index to any one or may switch between them. For example, the offset class setting unit 106 may switch the type of the first index in slice units or other units. In this case, the encoding control unit 110 may select the optimum first index type for each slice. Information indicating the type of the selected first index is entropy encoded by the entropy encoding unit 109 and output as a part of the encoded data 18. Note that the optimum first index type may be one that minimizes the encoding cost represented by the following formula (4), for example.
  • In formula (4), Cost = D + λ·R, where Cost represents the coding cost, D represents the sum of squared residuals (in the first unit), R represents the code amount, and λ is a weighting coefficient.
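The cost comparison the encoding control unit is described as performing can be sketched as follows, assuming the common Lagrangian form Cost = D + lambda * R for formula (4); the lambda weighting and the candidate labels are assumptions.

```python
def coding_cost(d, r, lam):
    # Coding cost per formula (4), assumed Lagrangian form: D + lambda * R.
    return d + lam * r

def best_choice(candidates, lam):
    # Pick the candidate (label, D, R) with minimum coding cost, as the
    # encoding control unit is described as doing per slice.
    return min(candidates, key=lambda c: coding_cost(c[1], c[2], lam))[0]
```

For example, a choice with a lower distortion but a higher rate wins only when lambda is small, i.e. when rate is weighted lightly.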
  • The filter coefficient set and offset value setting unit 107 sets a filter coefficient set and an offset value corresponding to each offset class based on the input image 11, the decoded image 12, and the offset class information 13.
  • For example, the filter coefficient set and offset value setting unit 107 sets the filter coefficient set and the offset value corresponding to each offset class by solving the Wiener-Hopf equation, so that the sum of squared errors between the ALF processed image 17 and the input image 11 is minimized in units of slices. This sum of squared errors can be calculated by the following equation (5).
  • E represents the square error sum in the slice.
  • S flt (x, y) represents the pixel value at the position (x, y) of the ALF processed image 17.
  • S org (x, y) represents a pixel value at the same position (x, y) of the input image 11.
  • X and Y represent a set of positions (x, y) in the slice.
  • h i, j represents a filter coefficient.
  • the filter coefficient set that minimizes the square error sum and the offset value of each offset class can be obtained by solving an equation in which the square error sum is partially differentiated by each filter coefficient and each offset value. That is, the filter coefficient set and offset value setting unit 107 only has to solve the following formula (6).
  • the filter coefficient set and offset value setting unit 107 can set the optimum filter coefficient set that minimizes the square error sum in the slice and the offset value of each offset class.
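The joint optimisation can be illustrated on a deliberately tiny analogue of formula (6): a single "filter coefficient" h and a single offset o, for which the normal equations are 2 x 2 and solvable in closed form. This is only a sketch; the actual design solves the corresponding system jointly for all filter taps and all offset classes.

```python
def fit_gain_and_offset(dec, org):
    # Jointly fit h and o minimising sum((org - (h * dec + o))**2),
    # by setting the partial derivatives with respect to h and o to zero
    # (the 2x2 normal equations) and solving by Cramer's rule.
    n = len(dec)
    sd = sum(dec)
    sdd = sum(d * d for d in dec)
    so = sum(org)
    sdo = sum(d * s for d, s in zip(dec, org))
    det = sdd * n - sd * sd
    h = (sdo * n - sd * so) / det
    o = (sdd * so - sd * sdo) / det
    return h, o
```

When the input image really is an affine function of the decoded image, the fit recovers the gain and offset exactly, which is the sense in which the solution of formula (6) is optimal.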
  • The filter coefficient set and offset value setting unit 107 can use floating-point arithmetic in the process of deriving the filter coefficient values and the offset values. However, if these values are set in a floating-point format, the amount of computation for the filter processing increases. Furthermore, since the result then depends on the floating-point model, the ALF processing result on the decoding side may not match the ALF processed image 17 depending on the processing environment. Therefore, the filter coefficient set and offset value setting unit 107 may quantize the filter coefficient values and the offset values before setting them, both to speed up the ALF process and to make the ALF processing result exactly reproducible. For example, the filter coefficient set and offset value setting unit 107 may quantize the floating-point filter coefficient values and offset values according to the following equation (7).
  • In equation (7), D represents a real value, typically but not necessarily a power of 2; the filter coefficient values and the offset values are set after being quantized in this way.
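The quantisation described for equation (7) can be sketched as scale-and-round with a scale D; the exact rounding rule in equation (7) is an assumption. With both coefficients and offsets quantised, the filter arithmetic becomes integer-only, so the encoder and decoder produce identical results regardless of floating-point environment.

```python
D = 256  # quantisation scale, typically a power of 2 (assumed value)

def quantize(value):
    # Equation (7) reconstructed as q = round(value * D) (assumed form).
    return int(round(value * D))

def apply_quantized(pixel, coeff_q, offset_q):
    # Integer-only filtering with quantised values; dividing by D at the
    # end restores the intended magnitude on both encoder and decoder.
    return (pixel * coeff_q + offset_q) // D
```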
  • the filtering unit 108 performs the filtering process on the decoded image 12 based on the offset class information 13, the filter coefficient set information 14, and the offset information 15 to generate an ALF processed image 17. More specifically, the filtering unit 108 includes an offset selection unit 201 and a filter processing unit 202, as shown in FIG.
  • the offset selection unit 201 receives the offset class information 13 from the offset class setting unit 106 and the offset information 15 from the filter coefficient set / offset value setting unit 107.
  • the offset selection unit 201 specifies an offset class based on the offset class information 13 for each first unit, and selects an offset value 16 corresponding to the offset class based on the offset information 15.
  • the offset selection unit 201 outputs the selected offset value 16 to the filter processing unit 202.
  • the filter processing unit 202 receives the decoded image 12 from the addition unit 105, receives the filter coefficient set information 14 from the filter coefficient set / offset value setting unit 107, and receives the offset value 16 from the offset selection unit 201.
  • the filter processing unit 202 performs a filter operation based on the filter coefficient set information 14 on each pixel in the decoded image 12 and performs an offset operation based on the offset value 16 to generate an ALF processed image 17. That is, the filter processing unit 202 generates a pixel value at the position (x, y) in the ALF processed image 17 according to the following mathematical formula (8).
  • the filter processing unit 202 may generate the pixel value at the position (x, y) in the ALF processed image 17 according to the following mathematical formula (9).
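Putting the pieces together, one output pixel in the spirit of formula (8) can be sketched as follows: a convolution with the filter coefficient set, followed by addition of the offset selected by the pixel's offset class. The 5-tap cross-shaped filter is assumed purely for illustration, since the actual filter shape is signalled by filter_type_idx.

```python
def alf_pixel(dec, x, y, coeffs, offset_class, offsets):
    # One ALF output pixel: filter operation plus class-selected offset.
    # taps: centre, left, right, up, down (assumed cross shape).
    taps = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    acc = sum(c * dec[y + dy][x + dx] for c, (dx, dy) in zip(coeffs, taps))
    return acc + offsets[offset_class[y][x]]
```

With an identity filter (centre coefficient 1, all others 0) this reduces to the pure offset operation, i.e. SAO-like behaviour falls out as a special case.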
  • The offset class setting unit 106 sets an offset class based on the first index for each first unit (for example, pixel or pixel block) of the decoded image 12 (step S301).
  • The filter coefficient set and offset value setting unit 107 sets a filter coefficient set and an offset value corresponding to each offset class based on the input image 11, the decoded image 12, and the offset classes set in step S301 (step S302).
  • the filtering unit 108 performs a filtering process on the decoded image 12 based on the filter coefficient set and the offset value set in step S302 (step S303).
  • The entropy encoding unit 109 entropy encodes, in addition to the quantized transform coefficients and the encoding parameters, the filter coefficient set information 14 indicating the filter coefficient set set in step S302 and the offset information 15 indicating the offset values set in step S302 (step S304).
  • the filter coefficient set information 14 and the offset information 15 are described according to the syntax structure shown in FIG. 4, for example.
  • the syntax in FIG. 4 is described in units of slices, for example.
  • filter_type_idx is an index indicating the filter shape or tap length of the adaptive loop filter used in the target slice.
  • NumOfFilterCoeff represents the total number of filter coefficient values included in the filter coefficient set, and is determined by filter_type_idx.
  • The filter coefficient values included in the filter coefficient set are described one by one as filter_coeff[i]. NumOfOffset represents the total number of offset classes, which is also the total number of offset values that can be set in the target slice.
  • the offset value corresponding to the offset class specified by the variable i is described as offset_value [i].
  • the above syntax elements are described in units of slices, entropy-coded on the encoding side, and transmitted to the decoding side as part of the encoded data 18.
  • filter_coeff [i] represents a filter coefficient value
  • offset_value [i] represents an offset value.
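The ordering of the slice-level syntax elements can be illustrated with a toy writer/reader pair. Entropy coding is replaced by a plain Python list purely for illustration, and the table mapping filter_type_idx to NumOfFilterCoeff, like the fixed NumOfOffset, is hypothetical; the point is only that both counts are known without being transmitted, so only the values themselves follow filter_type_idx.

```python
NUM_COEFFS_FOR_TYPE = {0: 5, 1: 9}  # hypothetical filter shapes
NUM_OFFSET_CLASSES = 4              # hypothetical NumOfOffset

def write_slice_syntax(filter_type_idx, filter_coeffs, offset_values):
    # filter_type_idx first; NumOfFilterCoeff is implied by it and
    # NumOfOffset is known, so only the values themselves follow.
    assert len(filter_coeffs) == NUM_COEFFS_FOR_TYPE[filter_type_idx]
    assert len(offset_values) == NUM_OFFSET_CLASSES
    return [filter_type_idx, *filter_coeffs, *offset_values]

def read_slice_syntax(stream):
    filter_type_idx = stream[0]
    n = NUM_COEFFS_FOR_TYPE[filter_type_idx]
    coeffs = stream[1:1 + n]
    offsets = stream[1 + n:1 + n + NUM_OFFSET_CLASSES]
    return filter_type_idx, coeffs, offsets
```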
  • the filter coefficient difference may be encoded instead of the filter coefficient value in the target slice, or the offset difference may be encoded instead of the offset value in the target slice.
  • Since the filter coefficient values and offset values are held for each slice of frames encoded in the past, the differences can be calculated with respect to any one of those slices. The slice used for this purpose is referred to below as the reference slice. To allow this operation to be selected, information indicating whether or not to perform the difference calculation relative to the reference slice may be included in the syntax elements.
  • Reference frames are typically listed in a reference list. If the filter coefficient values and offset values of each slice in each reference frame are held in this reference list in addition to the reference frames themselves, the difference calculation can be performed easily.
  • The reference slice may be fixed, for example to the slice closest to the target slice in the most recently encoded reference frame on the reference list. Alternatively, the reference slice may be determined by selecting an arbitrary slice in the reference list, or by selecting an arbitrary frame in the reference list and taking the slice closest in position to the target slice within the designated frame.
  • Alternatively, the filter coefficient values and offset values of the reference slice may be used directly, which reduces the information amount of the filter coefficient set information 14 and the offset information 15. Information indicating whether or not to directly use the filter coefficient values and offset values of the reference slice may be included in the syntax elements.
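The difference calculation relative to a reference slice can be sketched as a simple roundtrip; the list-based representation is illustrative only, and in the actual scheme the differences, not the values, would be entropy coded.

```python
def encode_with_reference(values, ref_values):
    # Transmit differences from the reference slice's values instead of
    # the values themselves: cheaper when the two slices are similar.
    return [v - r for v, r in zip(values, ref_values)]

def decode_with_reference(diffs, ref_values):
    # The decoder adds the received differences back onto the reference
    # slice's values, which it already holds.
    return [d + r for d, r in zip(diffs, ref_values)]
```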
  • offset values may not be set for some offset classes. If no offset value is set, the overhead of the offset information 15 can be reduced.
  • An offset class for which no offset value is set may be selected by the encoding control unit 110. For example, the encoding control unit 110 calculates, by the above equation (4), the encoding cost when no offset value is set for each offset class in the target slice, and does not set an offset value for an offset class if that cost is below the cost when an offset value is set for it. However, to support such an operation, information indicating the offset classes for which no offset value is set needs to be included in the syntax elements. Alternatively, the offset classes for which no offset value is set may be determined in advance; in this case, since the decoding side can know these offset classes, such information need not be included in the syntax elements.
  • the filter operation and the offset operation do not necessarily have to be applied to all slices.
  • the filter operation means a convolution operation using a filter coefficient set, for example.
  • the offset calculation means addition using, for example, an offset value.
  • For example, the encoding control unit 110 calculates, according to the above equation (4), the encoding cost when the filter operation and the offset operation are not applied to the target slice, and does not apply them if this cost is lower than the cost when they are applied.
  • Information indicating whether the filter operation and the offset operation are applied to the target slice (that is, whether the filter process is applied) may be included in the syntax elements.
  • Also, the encoding control unit 110 may calculate, using the above equation (4), the encoding cost when the offset operation is applied to the target slice but the filter operation is not, and omit the filter operation if this cost is lower than the cost when both the filter operation and the offset operation are applied. However, to support such an operation, information indicating whether or not the filter operation is applied to the target slice needs to be included in the syntax elements.
  • the encoding control unit 110 calculates the encoding cost when each mode is selected for the target slice by the above equation (4), and selects a mode that minimizes the encoding cost.
  • information indicating which mode is selected for the target slice needs to be included in the syntax element.
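The cost-based mode selection described above can be sketched as follows. The cost form (distortion plus a Lagrange multiplier times rate, a common reading of rate-distortion costs such as equation (4)) and the mode names are assumptions for illustration only.

```python
def select_mode(modes, lmbda):
    """Pick the mode with the minimum encoding cost, cost = D + lmbda * R."""
    def cost(name):
        distortion, rate_bits = modes[name]
        return distortion + lmbda * rate_bits
    return min(modes, key=cost)

modes = {
    "filter+offset": (100.0, 50),  # (distortion, rate in bits)
    "offset_only":   (120.0, 20),
    "none":          (200.0, 1),
}
best = select_mode(modes, lmbda=1.0)
```

The index of the selected mode is what would be encoded as the syntax element indicating which mode was chosen for the target slice.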
  • the moving picture decoding apparatus includes a moving picture decoding unit 500 and a decoding control unit 507.
  • the moving picture decoding unit 500 includes an entropy decoding unit 501, an inverse quantization and inverse transformation unit 502, an addition unit 503, an offset class setting unit 504, a filtering unit 505, and a predicted image generation unit 506.
  • the decoding control unit 507 controls the operation of each unit of the moving image decoding unit 500.
  • the entropy decoding unit 501 inputs the encoded data 21 from outside the moving image decoding unit 500 (for example, a communication system or a storage system).
  • the encoded data 21 is the same as or similar to the encoded data 18 described above.
  • the entropy decoding unit 501 performs entropy decoding on the encoded data 21 to generate quantized transform coefficients, encoding parameters 22, filter coefficient set information 23, and offset information 24.
  • the entropy decoding unit 501 outputs the quantized transform coefficient to the inverse quantization and inverse transform unit 502, outputs the encoding parameter 22 to the decoding control unit 507, and outputs the filter coefficient set information 23 and the offset information 24 to the filtering unit 505.
  • the inverse quantization and inverse transform unit 502 inputs the quantized transform coefficient from the entropy decoding unit 501.
  • the inverse quantization and inverse transform unit 502 dequantizes the quantized transform coefficient and decodes the transform coefficient. Further, the inverse quantization and inverse transform unit 502 performs an inverse transform process on the transform coefficient to decode the prediction error image.
  • the inverse quantization and inverse transform unit 502 outputs the prediction error image to the addition unit 503.
  • the inverse quantization and inverse transform unit 502 performs the same or similar processing as the inverse quantization and inverse transform unit 104 described above. That is, the inverse quantization is performed based on the quantization parameter set by the decoding control unit 507. Further, the inverse transform process is determined by the transform process performed on the encoding side; for example, the inverse transform process is an IDCT, an inverse wavelet transform, or the like.
  • the addition unit 503 receives a prediction image from the prediction image generation unit 506 and inputs a prediction error image from the inverse quantization and inverse conversion unit 502. The adding unit 503 adds the prediction error image to the prediction image to generate a decoded image 25. The adding unit 503 outputs the decoded image 25 to the offset class setting unit 504 and the filtering unit 505.
  • the offset class setting unit 504 inputs the decoded image 25 from the adding unit 503, and sets an offset class based on the first index for each first unit.
  • the offset class setting unit 504 generates offset class information 26 indicating the offset class corresponding to each first unit.
  • the offset class setting unit 504 performs the same or similar processing as the offset class setting unit 106.
  • the offset class setting unit 504 outputs the offset class information 26 to the filtering unit 505.
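The first index used for this classification is defined elsewhere in the description; as a purely illustrative stand-in, the sketch below classifies each first unit (here a small pixel block) by a simple activity measure. The index choice, block size, and class count are hypothetical.

```python
def offset_class_for_unit(block, num_classes=4, max_activity=64):
    """Map a pixel block to an offset class via mean absolute horizontal
    difference (a hypothetical first index for illustration)."""
    diffs = [abs(row[i + 1] - row[i])
             for row in block for i in range(len(row) - 1)]
    activity = sum(diffs) / max(len(diffs), 1)
    # quantize the activity into one of num_classes classes
    return min(int(activity * num_classes / max_activity), num_classes - 1)

flat = [[5, 5], [5, 5]]     # low activity -> low class
busy = [[0, 64], [64, 0]]   # high activity -> high class
```

Because the same rule runs on the decoded image at both ends, the class map itself never needs to be signaled.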
  • the filtering unit 505 receives the decoded image 25 from the adding unit 503, receives the offset class information 26 from the offset class setting unit 504, and receives the filter coefficient set information 23 and the offset information 24 from the entropy decoding unit 501.
  • the filtering unit 505 performs filter processing on the decoded image 25 based on the filter coefficient set information 23, the offset information 24, and the offset class information 26 to generate an ALF processed image 27. That is, the filtering unit 505 performs the same or similar processing as the filtering unit 108 described above.
  • the ALF processed image 27 is the same as or similar to the ALF processed image 17 described above.
  • the filtering unit 505 outputs the ALF processed image 27 to the predicted image generation unit 506 and also supplies the ALF processed image 27 to the outside (for example, a display system) as an output image.
  • the ALF processed image 27 may be stored in a storage unit (not shown) (for example, a buffer) that can be accessed by the predicted image generation unit 506.
  • the ALF processed image 27 is read as a reference image by the predicted image generation unit 506 as necessary, and is used for the prediction process.
  • the predicted image generation unit 506 performs a prediction process on the output image in units of pixel blocks or different units, and generates a predicted image.
  • the predicted image generation unit 506 may perform output image prediction processing based on the ALF processed image 27 described above. That is, the predicted image generation unit 506 performs the same or similar processing as that of the predicted image generation unit 101 described above.
  • the predicted image generation unit 506 outputs the predicted image to the adding unit 503.
  • the decoding control unit 507 receives the encoding parameter 22 from the entropy decoding unit 501. Based on the encoding parameter 22, the decoding control unit 507 performs encoding block division control, quantization control, mode control, and the like.
  • the moving picture decoding unit 500 operates as shown in FIG. 23, for example.
  • the entropy decoding unit 501 performs entropy decoding on the encoded data 21, and generates quantized transform coefficients, encoding parameters 22, filter coefficient set information 23, and offset information 24 (step S2301).
  • the inverse quantization and inverse transform unit 502 performs inverse quantization and inverse transform on the quantized transform coefficient generated in step S2301, and decodes a prediction error image (step S2302).
  • the adding unit 503 adds the prediction error image decoded in step S2302 to the prediction image to generate a decoded image 25 (step S2303).
  • the offset class setting unit 504 and the filtering unit 505 perform ALF processing on the decoded image 25 based on the filter coefficient set information 23 and the offset information 24 obtained in step S2301 (step S2304).
  • the ALF process on the decoding side is different from the ALF process on the encoding side in that a process for setting a filter coefficient set and an offset value corresponding to each offset class is unnecessary.
  • as a result of the ALF processing, an ALF processed image 27 is generated. This series of processes is repeated until the output image is completely decoded.
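Step S2303 above (the adding unit 503) reduces to adding the prediction error image to the predicted image and clipping to the valid sample range. The bit depth below is an assumed parameter for the sketch.

```python
def reconstruct(pred, err, bit_depth=8):
    """Decoded image = predicted image + prediction error image, clipped to
    the valid sample range (step S2303)."""
    hi = (1 << bit_depth) - 1
    return [[min(max(p + e, 0), hi) for p, e in zip(pr, er)]
            for pr, er in zip(pred, err)]

decoded = reconstruct([[100, 250], [0, 128]],
                      [[-5, 20], [-3, 7]])
```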
  • as described above, the video encoding device and the video decoding device set an offset class based on the first index for each first unit in the target slice and set the offset value corresponding to each offset class. Therefore, according to this video encoding device and video decoding device, it is possible to switch the offset value for each first unit while fixing the filter coefficient set in the target slice. That is, filter processing adapted to the local structure in the target slice can be performed by switching the offset value, without switching the filter coefficient set in the target slice. Consequently, the encoding efficiency can be improved. Moreover, in a hardware implementation, reducing the number of times switching occurs reduces the power consumption.
  • the filter coefficient set and offset value setting unit 107 calculates a filter coefficient set and offset value that minimizes the square error sum, but the operation may be modified. Specifically, a plurality of filter coefficient set candidates are prepared in advance for the filter coefficient set, and a plurality of offset candidates are prepared in advance for the offset value. Here, one certain offset candidate holds an offset value (that is, a set of offset values) corresponding to each offset class. Then, the filter coefficient set and offset value setting unit 107 may select one filter coefficient set candidate and offset candidate that minimize the square error sum from the plurality of filter coefficient set candidates and the plurality of offset candidates.
  • the filter coefficient set information 14 and the offset information 15 are indexes that designate one filter coefficient set candidate and one offset candidate, respectively.
  • the filter coefficient set information 14 and the offset information 15 are encoded for each filter coefficient set switching unit (for example, one or a plurality of slices, pixel blocks, etc.).
  • the information indicating the filter coefficient set candidate and the offset candidate corresponding to each index may be determined in advance between the encoding side and the decoding side, or the encoding side is larger than the switching unit of the filter coefficient set. You may encode for every unit (for example, a sequence, several pictures, several slices, etc.).
  • the filter coefficient set and offset value setting unit 107 may select a filter coefficient set from a plurality of filter coefficient set candidates and set an offset value corresponding to each offset class by calculation.
  • the filter coefficient set and offset value setting unit 107 may set the filter coefficient set by calculation and select a set of offset values corresponding to each offset class from a plurality of offset candidates.
  • a set of filter coefficients and a set of offset values corresponding to each offset class may be set, and a plurality of sets may be prepared and used as candidates.
  • since the filter coefficient set information 14 and the offset information 15 each serve as an index, the overhead can be reduced. Therefore, as long as the square error sum in the filter coefficient set switching unit does not increase excessively when a filter coefficient set candidate or an offset candidate prepared in advance is used, the overhead reduction dominates and the coding efficiency improves.
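The candidate-based variant above can be sketched as an exhaustive search over the prepared candidates. The square-error function and the candidate tables below are placeholders; only the two winning indexes would be signaled.

```python
def select_candidate_indexes(sse, coeff_cands, offset_cands):
    """Return the pair of indexes minimizing the square error sum; only these
    two indexes are signaled as the filter coefficient set information and the
    offset information."""
    pairs = ((ci, oi)
             for ci in range(len(coeff_cands))
             for oi in range(len(offset_cands)))
    return min(pairs, key=lambda p: sse(coeff_cands[p[0]], offset_cands[p[1]]))

# toy example: the "square error sum" is just a known score per pair
scores = {("c0", "o0"): 9.0, ("c0", "o1"): 4.0,
          ("c1", "o0"): 7.0, ("c1", "o1"): 6.0}
best = select_candidate_indexes(lambda c, o: scores[(c, o)],
                                ["c0", "c1"], ["o0", "o1"])
```

A mixed scheme, as the text notes, fixes one side of the search (computing the filter coefficients or the offsets directly) and searches candidates only for the other.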
  • the ALF process is not combined with the SAO process.
  • the same or similar encoding distortion improvement (image quality improvement) effect as the SAO processing can be obtained.
  • a combination of ALF processing and SAO processing is also possible, and this will be described in the second embodiment.
  • the filter coefficient set and the offset value corresponding to each offset class are switched in units of slices.
  • this switching unit may be changed to a picture unit, a frame unit, or a field unit.
  • an area obtained by dividing a picture by a method different from a slice (which can be named a loop filter slice) may be a switching unit.
  • a filter coefficient set and an offset value corresponding to each offset class are set for each switching unit, and filter coefficient set information 14 and offset information 15 indicating these are signaled.
  • the first embodiment may be combined with SAO processing.
  • the second embodiment relates to a combination of the first embodiment and SAO processing.
  • the moving image encoding apparatus includes a moving image encoding unit 600 and an encoding control unit 610.
  • the moving image encoding unit 600 includes a predicted image generation unit 601, a subtraction unit 602, a transform and quantization unit 603, an inverse quantization and inverse transform unit 604, an adder 605, an offset class setting unit 606, A filter coefficient set / offset value setting unit 607, a filtering unit 608, an entropy encoding unit 609, a pixel adaptive offset setting unit 611, and a pixel adaptive offset processing unit 612 are included.
  • the offset class setting unit 606, the filter coefficient set / offset value setting unit 607, and the filtering unit 608 may be referred to as an ALF processing unit.
  • the encoding control unit 610 controls the operation of each unit of the moving image encoding unit 600.
  • since the predicted image generation unit 601, the subtraction unit 602, the transform and quantization unit 603, the inverse quantization and inverse transform unit 604, and the encoding control unit 610 are the same as or similar to the predicted image generation unit 101, the subtraction unit 102, the transform and quantization unit 103, the inverse quantization and inverse transform unit 104, and the encoding control unit 110, respectively, description thereof is omitted.
  • the addition unit 605 is different from the addition unit 105 in the output destination of the decoded image 12. Specifically, the adding unit 605 outputs the decoded image 12 to the pixel adaptive offset setting unit 611 and the pixel adaptive offset processing unit 612. Adder 605 is the same as or similar to adder 105 in other respects.
  • the offset class setting unit 606 is different from the offset class setting unit 106 in that the SAO processed image 31 is input from the pixel adaptive offset processing unit 612 instead of the decoded image 12.
  • the offset class setting unit 606 is the same as or similar to the offset class setting unit 106 in other points.
  • the filter coefficient set and offset value setting unit 607 is different from the filter coefficient set and offset value setting unit 107 in that the SAO processed image 31 is input from the pixel adaptive offset processing unit 612 instead of the decoded image 12.
  • the filter coefficient set and offset value setting unit 607 is the same as or similar to the filter coefficient set and offset value setting unit 107 in other points.
  • the filtering unit 608 is different from the filtering unit 108 in that the SAO processed image 31 is input from the pixel adaptive offset processing unit 612 instead of the decoded image 12.
  • the filtering unit 608 is the same as or similar to the filtering unit 108 in other points.
  • the entropy encoding unit 609 is different from the entropy encoding unit 109 in that it generates the encoded data 18 by entropy encoding the pixel adaptive offset information 19 in addition to the quantized transform coefficients, the encoding parameters, the filter coefficient set information 14, and the offset information 15.
  • the entropy encoding unit 609 receives the pixel adaptive offset information 19 from the pixel adaptive offset setting unit 611.
  • the entropy encoding unit 609 is the same as or similar to the entropy encoding unit 109 in other points.
  • the pixel adaptive offset setting unit 611 acquires the input image 11 from the outside of the moving image encoding unit 600 and inputs the decoded image 12 from the addition unit 605.
  • the pixel adaptive offset setting unit 611 sets a parameter (for example, an offset value of each pixel) used in the SAO process based on the input image 11 and the decoded image 12.
  • the pixel adaptive offset setting unit 611 generates pixel adaptive offset information 19 indicating the set parameters, and outputs this to the entropy encoding unit 609 and the pixel adaptive offset processing unit 612.
  • the algorithm for the pixel adaptive offset setting unit 611 to set parameters is not particularly limited.
  • the pixel adaptive offset processing unit 612 receives the decoded image 12 from the adding unit 605 and the pixel adaptive offset information 19 from the pixel adaptive offset setting unit 611. The pixel adaptive offset processing unit 612 performs SAO processing on the decoded image 12 based on the pixel adaptive offset information 19 to generate the SAO processed image 31. The pixel adaptive offset processing unit 612 outputs the SAO processed image 31 to the offset class setting unit 606, the filter coefficient set and offset value setting unit 607, and the filtering unit 608.
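One common form of SAO is a band offset, sketched below: pixels are grouped into intensity bands and each band receives its own signaled offset. The band count and offset values here are illustrative, not the parameters this embodiment actually signals in the pixel adaptive offset information 19.

```python
def sao_band_offset(img, offsets, bit_depth=8):
    """Add a per-band offset to every pixel; the band is derived from the
    top bits of the pixel intensity."""
    n = len(offsets)
    hi = (1 << bit_depth) - 1
    return [[min(max(p + offsets[(p * n) >> bit_depth], 0), hi) for p in row]
            for row in img]

sao_img = sao_band_offset([[10, 250], [100, 200]], offsets=[2, 0, 0, -2])
```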
  • the video decoding device includes a video decoding unit 700 and a decoding control unit 707.
  • the video decoding unit 700 includes an entropy decoding unit 701, an inverse quantization and inverse transformation unit 702, an addition unit 703, an offset class setting unit 704, a filtering unit 705, a predicted image generation unit 706, and a pixel adaptive offset processing unit 708.
  • the decoding control unit 707 controls the operation of each unit of the moving image decoding unit 700.
  • since the inverse quantization and inverse transform unit 702, the predicted image generation unit 706, and the decoding control unit 707 are the same as or similar to the inverse quantization and inverse transform unit 502, the predicted image generation unit 506, and the decoding control unit 507, respectively, description thereof is omitted.
  • the entropy decoding unit 701 is different from the entropy decoding unit 501 in that it performs entropy decoding on the encoded data 21 to generate the pixel adaptive offset information 28 in addition to the quantized transform coefficients, the encoding parameters 22, the filter coefficient set information 23, and the offset information 24.
  • the entropy decoding unit 701 outputs the pixel adaptive offset information 28 to the pixel adaptive offset processing unit 708.
  • the entropy decoding unit 701 is the same as or similar to the entropy decoding unit 501 in other points.
  • the addition unit 703 is different from the addition unit 503 in the output destination of the decoded image 25. Specifically, the adding unit 703 outputs the decoded image 25 to the pixel adaptive offset processing unit 708. Adder 703 is the same as or similar to adder 503 in other respects.
  • the offset class setting unit 704 is different from the offset class setting unit 504 in that the SAO processed image 29 is input from the pixel adaptive offset processing unit 708 instead of the decoded image 25.
  • the offset class setting unit 704 is the same as or similar to the offset class setting unit 504 in other respects.
  • the filtering unit 705 is different from the filtering unit 505 in that the SAO processed image 29 is input from the pixel adaptive offset processing unit 708 instead of the decoded image 25.
  • the filtering unit 705 is the same as or similar to the filtering unit 505 in other respects.
  • the pixel adaptive offset processing unit 708 performs the same or similar processing as the pixel adaptive offset processing unit 612. In other words, the pixel adaptive offset processing unit 708 receives the decoded image 25 from the adding unit 703 and receives the pixel adaptive offset information 28 from the entropy decoding unit 701. The pixel adaptive offset processing unit 708 performs SAO processing on the decoded image 25 based on the pixel adaptive offset information 28 to generate a SAO processed image 29. The pixel adaptive offset processing unit 708 outputs the SAO processed image 29 to the offset class setting unit 704 and the filtering unit 705.
  • the moving picture encoding apparatus and moving picture decoding apparatus combine SAO processing with the first embodiment. Therefore, according to the moving image encoding device and the moving image decoding device, it is possible to obtain the image quality improvement effect by the SAO process and the same or similar effect as the first embodiment. Note that the order of the SAO process and the ALF process may be switched.
  • the first embodiment may be combined with deblocking filtering.
  • the third embodiment relates to a combination of the first embodiment and deblocking filter processing.
  • the moving image encoding apparatus includes a moving image encoding unit 800 and an encoding control unit 810.
  • the video encoding unit 800 includes a predicted image generation unit 801, a subtraction unit 802, a transform and quantization unit 803, an inverse quantization and inverse transform unit 804, an adder 805, an offset class setting unit 806, A filter coefficient set and offset value setting unit 807, a filtering unit 808, an entropy encoding unit 809, and a deblocking filter processing unit 811 are included.
  • the offset class setting unit 806, the filter coefficient set / offset value setting unit 807, and the filtering unit 808 may be referred to as an ALF processing unit.
  • the encoding control unit 810 controls the operation of each unit of the moving image encoding unit 800.
  • since the predicted image generation unit 801, the subtraction unit 802, the transform and quantization unit 803, the inverse quantization and inverse transform unit 804, the entropy encoding unit 809, and the encoding control unit 810 are the same as or similar to the predicted image generation unit 101, the subtraction unit 102, the transform and quantization unit 103, the inverse quantization and inverse transform unit 104, the entropy encoding unit 109, and the encoding control unit 110, respectively, description thereof is omitted.
  • the addition unit 805 is different from the addition unit 105 in the output destination of the decoded image 12. Specifically, the adding unit 805 outputs the decoded image 12 to the deblocking filter processing unit 811. Adder 805 is the same as or similar to adder 105 in other respects.
  • the offset class setting unit 806 is different from the offset class setting unit 106 in that the deblocking filter processed image 32 is input from the deblocking filter processing unit 811 instead of the decoded image 12.
  • the offset class setting unit 806 is the same as or similar to the offset class setting unit 106 in other points.
  • the filter coefficient set and offset value setting unit 807 is different from the filter coefficient set and offset value setting unit 107 in that the deblocking filter processed image 32 is input from the deblocking filter processing unit 811 instead of the decoded image 12.
  • the filter coefficient set and offset value setting unit 807 is the same as or similar to the filter coefficient set and offset value setting unit 107 in other points.
  • the filtering unit 808 is different from the filtering unit 108 in that the deblocking filter processed image 32 is input from the deblocking filter processing unit 811 instead of the decoded image 12.
  • the filtering unit 808 is the same as or similar to the filtering unit 108 in other points.
  • the deblocking filter processing unit 811 inputs the decoded image 12 from the adding unit 805.
  • the deblocking filter processing unit 811 performs deblocking filter processing on the decoded image 12 to obtain a deblocking filter processed image 32.
  • the deblocking filter processing can be expected to have an image quality improvement effect such as suppressing block distortion included in the decoded image 12.
  • the deblocking filter processing unit 811 outputs the deblocking filter processing image 32 to the offset class setting unit 806, the filter coefficient set / offset value setting unit 807, and the filtering unit 808.
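A minimal sketch of deblocking across one vertical block boundary follows. The two-sample smoothing and the strength clipping are simplified placeholders for the actual deblocking filter, which typically also tests whether the discontinuity is a true edge before filtering.

```python
def deblock_vertical_edge(img, x, strength=1):
    """Smooth the two samples adjacent to a vertical block boundary at column x
    to suppress block distortion (modifies img in place)."""
    for row in img:
        p, q = row[x - 1], row[x]
        delta = (q - p) // 4
        delta = min(max(delta, -strength), strength)  # clip the correction
        row[x - 1] = p + delta
        row[x] = q - delta
    return img

blocky = [[0, 0, 8, 8],
          [0, 0, 8, 8]]
smoothed = deblock_vertical_edge(blocky, x=2)
```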
  • the video decoding device includes a video decoding unit 900 and a decoding control unit 907.
  • the video decoding unit 900 includes an entropy decoding unit 901, an inverse quantization and inverse transformation unit 902, an addition unit 903, an offset class setting unit 904, a filtering unit 905, a predicted image generation unit 906, and a deblocking filter processing unit 908.
  • the decoding control unit 907 controls the operation of each unit of the moving image decoding unit 900.
  • the entropy decoding unit 901, the inverse quantization and inverse transformation unit 902, the predicted image generation unit 906, and the decoding control unit 907 are an entropy decoding unit 501, an inverse quantization and inverse transformation unit 502, a predicted image generation unit 506, and a decoding control unit 507. Since these are the same as or similar to each other, their description is omitted.
  • the addition unit 903 is different from the addition unit 503 in the output destination of the decoded image 25. Specifically, the adding unit 903 outputs the decoded image 25 to the deblocking filter processing unit 908. Adder 903 is the same as or similar to adder 503 in other respects.
  • the offset class setting unit 904 is different from the offset class setting unit 504 in that the deblocking filter processed image 41 is input from the deblocking filter processing unit 908 instead of the decoded image 25.
  • the offset class setting unit 904 is the same as or similar to the offset class setting unit 504 in other respects.
  • the filtering unit 905 is different from the filtering unit 505 in that the deblocking filter processed image 41 is input from the deblocking filter processing unit 908 instead of the decoded image 25.
  • the filtering unit 905 is the same as or similar to the filtering unit 505 in other respects.
  • the deblocking filter processing unit 908 inputs the decoded image 25 from the addition unit 903.
  • the deblocking filter processing unit 908 performs deblocking filter processing on the decoded image 25 to obtain a deblocking filter processed image 41. That is, the deblocking filter processing unit 908 performs the same or similar processing as the deblocking filter processing unit 811.
  • the deblocking filter processing unit 908 outputs the deblocking filter processing image 41 to the offset class setting unit 904 and the filtering unit 905.
  • the video encoding device and the video decoding device according to the third embodiment combine the deblocking filter processing with the first embodiment. Therefore, according to these video encoding device and video decoding device, it is possible to obtain the image quality improvement effect by the deblocking filter process and the same or similar effect as the first embodiment. Note that the order of the deblocking filter process and the ALF process may be changed.
  • the first embodiment may be combined with deblocking filtering and SAO processing.
  • the fourth embodiment relates to a combination of the first embodiment with deblocking filter processing and SAO processing.
  • the moving picture coding apparatus includes a moving picture coding unit 1000 and a coding control unit 1010.
  • the video encoding unit 1000 includes a predicted image generation unit 1001, a subtraction unit 1002, a transform and quantization unit 1003, an inverse quantization and inverse transform unit 1004, an adder 1005, an offset class setting unit 1006, A filter coefficient set / offset value setting unit 1007, a filtering unit 1008, an entropy encoding unit 1009, a pixel adaptive offset setting unit 1011, a pixel adaptive offset processing unit 1012, and a deblocking filter processing unit 1013 are included.
  • the offset class setting unit 1006, the filter coefficient set / offset value setting unit 1007, and the filtering unit 1008 may be referred to as an ALF processing unit.
  • the encoding control unit 1010 controls the operation of each unit of the moving image encoding unit 1000.
  • since the predicted image generation unit 1001, the subtraction unit 1002, the transform and quantization unit 1003, the inverse quantization and inverse transform unit 1004, the offset class setting unit 1006, the filter coefficient set and offset value setting unit 1007, the filtering unit 1008, the entropy encoding unit 1009, and the encoding control unit 1010 are the same as or similar to the predicted image generation unit 601, the subtraction unit 602, the transform and quantization unit 603, the inverse quantization and inverse transform unit 604, the offset class setting unit 606, the filter coefficient set and offset value setting unit 607, the filtering unit 608, the entropy encoding unit 609, and the encoding control unit 610, respectively, description thereof is omitted.
  • the addition unit 1005 is different from the addition unit 605 in the output destination of the decoded image 12. Specifically, the adding unit 1005 outputs the decoded image 12 to the deblocking filter processing unit 1013. Adder 1005 is the same as or similar to adder 605 in other respects.
  • the pixel adaptive offset setting unit 1011 is different from the pixel adaptive offset setting unit 611 in that the deblocking filter processed image 32 is input from the deblocking filter processing unit 1013 instead of the decoded image 12.
  • the pixel adaptive offset setting unit 1011 is the same as or similar to the pixel adaptive offset setting unit 611 in other respects.
  • the pixel adaptive offset processing unit 1012 is different from the pixel adaptive offset processing unit 612 in that the deblocking filter processed image 32 is input from the deblocking filter processing unit 1013 instead of the decoded image 12.
  • the pixel adaptive offset processing unit 1012 is the same as or similar to the pixel adaptive offset processing unit 612 in other respects.
  • the deblocking filter processing unit 1013 is different from the deblocking filter processing unit 811 in the output destination of the deblocking filter processing image 32. Specifically, the deblocking filter processing unit 1013 outputs the deblocking filter processing image 32 to the pixel adaptive offset setting unit 1011 and the pixel adaptive offset processing unit 1012. The deblocking filter processing unit 1013 is the same as or similar to the deblocking filter processing unit 811 in other respects.
  • the video decoding device includes a video decoding unit 1100 and a decoding control unit 1107.
  • the video decoding unit 1100 includes an entropy decoding unit 1101, an inverse quantization and inverse transformation unit 1102, an addition unit 1103, an offset class setting unit 1104, a filtering unit 1105, a predicted image generation unit 1106, a deblocking filter processing unit 1108, and a pixel adaptive offset processing unit 1109.
  • the decoding control unit 1107 controls the operation of each unit of the moving image decoding unit 1100.
  • since the entropy decoding unit 1101, the inverse quantization and inverse transformation unit 1102, the offset class setting unit 1104, the filtering unit 1105, the predicted image generation unit 1106, and the decoding control unit 1107 are the same as or similar to the entropy decoding unit 701, the inverse quantization and inverse transform unit 702, the offset class setting unit 704, the filtering unit 705, the predicted image generation unit 706, and the decoding control unit 707, respectively, description thereof is omitted.
  • the addition unit 1103 is different from the addition unit 703 in the output destination of the decoded image 25. Specifically, the adding unit 1103 outputs the decoded image 25 to the deblocking filter processing unit 1108. Adder 1103 is the same as or similar to adder 703 in other respects.
  • the deblocking filter processing unit 1108 is different from the deblocking filter processing unit 908 in the output destination of the deblocking filter processing image 41. Specifically, the deblocking filter processing unit 1108 outputs the deblocking filter processing image 41 to the pixel adaptive offset processing unit 1109.
  • the deblocking filter processing unit 1108 is the same as or similar to the deblocking filter processing unit 908 in other points.
  • the pixel adaptive offset processing unit 1109 is different from the pixel adaptive offset processing unit 708 in that the deblocking filter processed image 41 is input from the deblocking filter processing unit 1108 instead of the decoded image 25.
  • the pixel adaptive offset processing unit 1109 is the same as or similar to the pixel adaptive offset processing unit 708 in other respects.
  • the video encoding device and video decoding device combine the deblocking filter processing and the SAO processing with the first embodiment. Therefore, according to these video encoding device and video decoding device, it is possible to obtain the image quality improvement effect by the deblocking filter process and the SAO process and the same or similar effect as the first embodiment. Note that the order of the deblocking filter process, the SAO process, and the ALF process may be changed.
  • the filter coefficient set and the offset value of each offset class set in the first to fourth embodiments are not limited to the ALF process and may be used for the post filter process.
  • the fifth embodiment relates to post filter processing.
  • the moving picture coding apparatus includes a moving picture coding unit 1200 and a coding control unit 1209.
  • The moving image coding unit 1200 includes a predicted image generation unit 1201, a subtraction unit 1202, a transform and quantization unit 1203, an inverse quantization and inverse transform unit 1204, an addition unit 1205, an offset class setting unit 1206, a filter coefficient set and offset value setting unit 1207, and an entropy encoding unit 1208.
  • the encoding control unit 1209 controls the operation of each unit of the moving image encoding unit 1200.
  • The subtraction unit 1202, the transform and quantization unit 1203, the inverse quantization and inverse transform unit 1204, the entropy encoding unit 1208, and the encoding control unit 1209 are the same as or similar to the subtraction unit 102, the transform and quantization unit 103, the inverse quantization and inverse transform unit 104, the entropy encoding unit 109, and the encoding control unit 110, respectively, so their description is omitted.
  • The addition unit 1205 differs from the addition unit 105 in the output destination of the decoded image 12. Specifically, since the moving image encoding unit 1200 includes no component corresponding to the filtering unit 108, the addition unit 1205 outputs the decoded image 12 to the predicted image generation unit 1201, the offset class setting unit 1206, and the filter coefficient set and offset value setting unit 1207.
  • the addition unit 1205 is the same as or similar to the addition unit 105 in other points.
  • the decoded image 12 may be stored in a storage unit (not shown) (for example, a buffer) that can be accessed by the predicted image generation unit 1201.
  • the decoded image 12 is read as a reference image by the predicted image generation unit 1201 as necessary, and is used for prediction processing.
  • The offset class setting unit 1206 differs from the offset class setting unit 106 in the output destination of the offset class information 13. Specifically, since the moving image encoding unit 1200 includes no component corresponding to the filtering unit 108, the offset class setting unit 1206 outputs the offset class information 13 to the filter coefficient set and offset value setting unit 1207.
  • the offset class setting unit 1206 is the same as or similar to the offset class setting unit 106 in other points.
  • the filter coefficient set and offset value setting unit 1207 is different from the filter coefficient set and offset value setting unit 107 in the output destination of the filter coefficient set information 14 and the offset information 15. Specifically, since a component corresponding to the filtering unit 108 is not included in the moving image encoding unit 1200, the filter coefficient set and offset value setting unit 1207 performs entropy encoding on the filter coefficient set information 14 and the offset information 15. The data is output to the unit 1208.
  • the filter coefficient set and offset value setting unit 1207 is the same as or similar to the filter coefficient set and offset value setting unit 107 in other points.
  • the predicted image generation unit 1201 is different from the predicted image generation unit 101 in that the input image 11 is predicted based on the decoded image 12 instead of the ALF processed image 17.
  • the prediction image generation unit 1201 is the same as or similar to the prediction image generation unit 101 in other points.
  • Video decoding device: For simplification, a case will be described in which the present embodiment is applied to the video decoding device according to the first embodiment. Note that the present embodiment may be applied to a video decoding device according to another embodiment.
  • the moving picture decoding apparatus includes a moving picture decoding unit 1300 and a decoding control unit 1307.
  • the moving picture decoding unit 1300 includes an entropy decoding unit 1301, an inverse quantization and inverse transformation unit 1302, an addition unit 1303, an offset class setting unit 1304, a filtering unit 1305, and a predicted image generation unit 1306.
  • the decoding control unit 1307 controls the operation of each unit of the moving image decoding unit 1300.
  • The entropy decoding unit 1301, the inverse quantization and inverse transformation unit 1302, the offset class setting unit 1304, and the decoding control unit 1307 are the same as or similar to the entropy decoding unit 501, the inverse quantization and inverse transformation unit 502, the offset class setting unit 504, and the decoding control unit 507, respectively, so their description is omitted.
  • The addition unit 1303 differs from the addition unit 503 in the output destination of the decoded image 25. Specifically, the addition unit 1303 outputs the decoded image 25 not only to the offset class setting unit 1304 and the filtering unit 1305 but also to the predicted image generation unit 1306. In other respects, the addition unit 1303 is the same as or similar to the addition unit 503.
  • the decoded image 25 may be stored in a storage unit (not shown) (for example, a buffer) that is accessible by the predicted image generation unit 1306.
  • the decoded image 25 is read as a reference image by the predicted image generation unit 1306 as necessary, and is used for the prediction process.
  • the predicted image generation unit 1306 is different from the predicted image generation unit 506 in that the output image is predicted based on the decoded image 25 instead of the ALF processed image 27.
  • The predicted image generation unit 1306 is the same as or similar to the predicted image generation unit 506 in other respects.
  • The filtering unit 1305 receives the filter coefficient set information 23 and the offset information 24 from the entropy decoding unit 1301, the decoded image 25 from the addition unit 1303, and the offset class information 26 from the offset class setting unit 1304.
  • The filtering unit 1305 performs a filtering process on the decoded image 25 based on the filter coefficient set information 23, the offset information 24, and the offset class information 26 to obtain a post-filter processed image 43.
  • the filtering unit 1305 gives the post-filter processed image 43 to the outside (for example, a display system) as an output image.
  • the filtering unit 1305 performs the same or similar processing as the filtering unit 505, but differs in that no corresponding processing is performed on the encoding side.
  • the video encoding device and video decoding device according to the fifth embodiment apply the first embodiment to the post-filter processing. Therefore, according to these moving image encoding device and moving image decoding device, the same or similar effect as that of the first embodiment can be obtained even when post filter processing is performed instead of ALF processing.
  • this embodiment is applicable to any of the first to fourth embodiments as described above. That is, this embodiment may be combined with SAO processing, deblocking filter processing, and the like.
  • the sixth embodiment relates to a technique in which the total number of offset classes (or the total number of switchable offset values) is variable for each slice, for example, in the first to fifth embodiments.
  • the coding control unit 110 performs coding block division control, generated code amount feedback control, quantization control, mode control, and the like on the moving image coding unit 100. Furthermore, in this embodiment, the encoding control unit 110 controls the total number of offset classes set in the target slice.
  • the encoding control unit 110 controls, for example, whether or not adjacent offset classes are merged, and generates offset merge information.
  • the offset merge information is set for each pair of adjacent offset classes, for example.
  • the encoding control unit 110 may generate offset merge information by selecting a combination that minimizes the encoding cost based on the equation (4) from a plurality of combinations to be merged.
  • the offset merge information may be a 1-bit flag indicating whether or not to merge for each adjacent offset class.
  • the offset merge information may be an index specifying one combination.
  • When offset classes are merged, the total number of offset classes set in the target slice decreases.
  • In the example of FIG. 14, offset classes 1, 2, and 3 are merged and offset classes 4 and 5 are merged, so the total number of offset classes decreases from 5 to 2. Therefore, the total number of offset values that can be switched in the target slice also decreases from 5 to 2.
  • the encoding control unit 110 outputs the offset merge information to the entropy encoding unit 109. Furthermore, the encoding control unit 110 controls the offset class setting unit 106 based on the offset merge information.
  • the offset class setting unit 106 sets an offset class based on the first index for each first unit of the decoded image 12 as described above. Further, in the present embodiment, the offset class setting unit 106 is controlled by the encoding control unit 110 and performs a merge process on the set offset class. Specifically, the offset class setting unit 106 performs a merge process on the offset class so as to match the offset merge information. The offset class setting unit 106 generates offset class information 13 indicating the offset class after merge processing corresponding to each first unit. The offset class setting unit 106 outputs the offset class information 13 to the filter coefficient set / offset value setting unit 107 and the filtering unit 108.
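As a minimal sketch of the merge process, assuming (hypothetically) that the offset merge information is a list of 1-bit flags, one per pair of adjacent offset classes, the offset class setting unit's merge can be expressed as a mapping from original offset classes to merged offset classes:

```python
def merge_offset_classes(num_classes, merge_flags):
    """Map each original offset class to a merged class index.

    merge_flags[i] == 1 means adjacent classes i and i+1 are merged
    (a hypothetical encoding of the offset merge information).
    """
    assert len(merge_flags) == num_classes - 1
    mapping = []
    merged = 0
    for i in range(num_classes):
        mapping.append(merged)
        # start a new merged class unless class i is merged with class i+1
        if i < num_classes - 1 and merge_flags[i] == 0:
            merged += 1
    return mapping

# The FIG. 14 example: classes 1-3 merged and classes 4-5 merged,
# so 5 original classes collapse to 2 merged classes.
print(merge_offset_classes(5, [1, 1, 0, 1]))  # -> [0, 0, 0, 1, 1]
```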
  • the entropy encoding unit 109 entropy-encodes the quantized transform coefficient, the filter coefficient set information 14, the offset information 15, and the encoding parameter as described above, and generates encoded data 18. Further, in the present embodiment, the entropy encoding unit 109 receives the offset merge information from the encoding control unit 110, entropy encodes it, and multiplexes it into the encoded data 18.
  • the filter coefficient set information 14, the offset merge information, and the offset information 15 are described according to the syntax structure shown in FIG. 15, for example.
  • the syntax in FIG. 15 is described in units of slices, for example.
  • filter_type_idx, NumOfFilterCoeff and filter_coeff [i] are the same as or similar to those in FIG.
  • MaxNumOfOffset represents the total number of offset classes before merge processing, and corresponds to NumOfOffset in FIG.
  • offset_merge_flag [i] is a 1-bit flag indicating whether or not to merge for each adjacent offset class, and corresponds to offset merge information.
  • NumOfOffset represents the total number of offset classes after merge processing (that is, the total number of offset values that can be switched within the target slice).
  • the offset value corresponding to the offset class after merge processing specified by the variable i is described as offset_value [i].
  • the above syntax elements are described in units of slices, entropy-coded on the encoding side, and transmitted to the decoding side as part of the encoded data 18.
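Under the assumption that each syntax element is read in the order given above, the FIG. 15 slice-level syntax can be sketched as a pseudo-parser. The reader functions `read_ue`, `read_flag`, and `read_se` are hypothetical stand-ins for the entropy-decoding primitives, and deriving NumOfOffset from MaxNumOfOffset minus the number of set merge flags is an assumption consistent with the description:

```python
def read_slice_filter_syntax(read_ue, read_flag, read_se):
    """Hypothetical parser for the FIG. 15 slice-level syntax.

    read_ue / read_flag / read_se stand in for entropy-decoding
    primitives; the element order follows the text's description.
    """
    syntax = {}
    syntax["filter_type_idx"] = read_ue()
    num_coeff = read_ue()                            # NumOfFilterCoeff
    syntax["filter_coeff"] = [read_se() for _ in range(num_coeff)]
    max_offsets = read_ue()                          # MaxNumOfOffset
    merge_flags = [read_flag() for _ in range(max_offsets - 1)]
    syntax["offset_merge_flag"] = merge_flags
    # NumOfOffset: total offset classes remaining after merge processing
    num_offsets = max_offsets - sum(merge_flags)
    syntax["offset_value"] = [read_se() for _ in range(num_offsets)]
    return syntax
```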
  • Video decoding device: For simplification, the case where the present embodiment is applied to the video decoding device in FIG. 5 will be described. Note that the present embodiment may be applied to a video decoding device according to another embodiment.
  • the entropy decoding unit 501 entropy-decodes the encoded data 21 as described above to obtain quantized transform coefficients, encoding parameters 22, filter coefficient set information 23, and offset information 24. Furthermore, in the present embodiment, the entropy decoding unit 501 performs entropy decoding on the encoded data 21 to obtain offset merge information. The entropy decoding unit 501 outputs the offset merge information to the decoding control unit 507.
  • the decoding control unit 507 performs coding block division control, quantization control, mode control, and the like based on the coding parameter 22. Further, the decoding control unit 507 receives the offset merge information from the entropy decoding unit 501 and controls the offset class setting unit 504 based on this information.
  • the offset class setting unit 504 sets an offset class based on the first index for each first unit of the decoded image 25 as described above. Furthermore, in the present embodiment, the offset class setting unit 504 is controlled by the decoding control unit 507 and performs a merge process on the set offset class. Specifically, the offset class setting unit 504 performs a merge process on the offset class so as to match the offset merge information. The offset class setting unit 504 outputs the offset class information 26 indicating the offset class after the merge processing corresponding to each first unit to the filtering unit 505.
  • As described above, the moving picture encoding apparatus and the moving picture decoding apparatus according to this embodiment make the total number of offset classes variable for each slice. Therefore, by signaling control information on the total number of offset classes (for example, the offset merge information), these devices can reduce the overhead due to the control information related to the offset values while suppressing encoding distortion, and can thereby improve encoding efficiency.
  • The encoding control unit 110 may select between a mode in which the total number of offset classes is reduced to one (hereinafter referred to as the single mode for convenience) and a mode in which the total number of offset classes remains plural (hereinafter referred to as the multiple mode for convenience), for example in units of slices. For example, the encoding control unit 110 may select the mode that minimizes the encoding cost based on Equation (4). In the example of FIG. 14, when the single mode is applied, all the offset classes are merged and the total number of offset classes becomes one. On the other hand, when the multiple mode is applied, the total number of offset classes remains 5.
  • the total number of offset classes for each slice (that is, the total number of offset values) can be expressed by a 1-bit flag. Therefore, in this modification, for example, a 1-bit flag can be used as control information for the total number of offset classes instead of the offset merge information described above. That is, the overhead due to the control information of the total number of offset classes can be reduced.
  • filter_type_idx, NumOfFilterCoeff and filter_coeff [i] are the same as or similar to those in FIG.
  • multi_offset_flag is a 1-bit flag indicating which of the single mode and the multiple mode is applied, and serves as the control information of the total number of offset classes in this modification. For example, if multi_offset_flag is set to 1, the multiple mode is applied, and if multi_offset_flag is set to 0, the single mode is applied.
  • NumOfOffset represents the total number of offset classes that can be switched in the target slice, but this value varies depending on the value of multi_offset_flag. That is, if the single mode is applied, NumOfOffset is equal to 1, and if the multiple mode is applied, NumOfOffset is equal to the total (plural) number of offset classes when no merging is performed.
  • the offset value corresponding to the offset class specified by the variable i is described as offset_value [i].
  • an offset value corresponding to a single offset class is described as offset_value [0].
  • the above syntax elements are described in units of slices, entropy-coded on the encoding side, and transmitted to the decoding side as part of the encoded data 18.
  • In another modification, a mode in which the total number of offset classes is 0 (hereinafter referred to as the zero mode for convenience) may be prepared instead of the single mode.
  • the encoding control unit 110 may select a mode that minimizes the encoding cost based on Equation (4).
  • the total number of offset classes for each slice (that is, the total number of offset values) can be expressed by a 1-bit flag. Therefore, in this modification, for example, a 1-bit flag can be used as control information for the total number of offset classes instead of the offset merge information described above. That is, the overhead due to the control information of the total number of offset classes can be reduced.
  • When the zero mode is applied, no offset class is set, so no offset value is set. Therefore, the offset information 15 is not signaled.
  • The aforementioned multi_offset_flag may be used as a 1-bit flag indicating which of the zero mode and the multiple mode is applied. For example, if multi_offset_flag is set to 1, the multiple mode is applied, and if multi_offset_flag is set to 0, the zero mode is applied.
  • NumOfOffset represents the total number of offset classes that can be switched in the target slice, but this value varies depending on the value of multi_offset_flag. That is, if the zero mode is applied, NumOfOffset is equal to 0, and if the multiple mode is applied, NumOfOffset is equal to the total (plural) number of offset classes when no merging is performed.
  • the offset value corresponding to the offset class specified by the variable i when multiple modes are applied is described as offset_value [i].
  • the offset value is not described when the zero mode is applied.
  • the above syntax elements are described in units of slices, entropy-coded on the encoding side, and transmitted to the decoding side as part of the encoded data 18.
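A minimal sketch of how NumOfOffset follows from multi_offset_flag in the two modifications above; the helper function, its parameters, and the unmerged total of 5 (taken from the FIG. 14 example) are illustrative assumptions:

```python
def num_offset_classes(multi_offset_flag, unmerged_total=5, zero_mode=False):
    """Derive NumOfOffset from the 1-bit multi_offset_flag.

    multi_offset_flag == 1 selects the multiple mode, keeping all
    unmerged offset classes. With multi_offset_flag == 0, the first
    modification applies the single mode (one class), and the second
    modification (zero_mode=True) applies the zero mode (zero classes,
    so no offset information is signaled). unmerged_total = 5 follows
    the FIG. 14 example and is an assumption.
    """
    if multi_offset_flag == 1:
        return unmerged_total
    return 0 if zero_mode else 1
```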
  • In the above description, the total number of offset classes is switched in units of slices.
  • the total number of offset classes may be switched by the switching unit of the filter coefficient set.
  • control information of the total number of offset classes is signaled for each switching unit of the filter coefficient set.
  • the total number of offset classes may be switched in a unit larger than the unit for switching the filter coefficient set.
  • The total number of offset classes may also be implicitly controlled based on various conditions, such as the slice type of the target slice (for example, I slice or P slice), the value of the base QP used in the target slice, and whether or not the target slice is referenced in the encoding/decoding process. If the total number of offset classes is implicitly controlled based on such conditions, the overhead due to control information on the total number of offset classes can be reduced.
  • For example, when the base QP used in the target slice exceeds a threshold value, the total number of offset classes may be controlled to be 0 (that is, the above-described zero mode is applied).
  • This threshold value may be prepared in advance between the encoding side and the decoding side, or may be signaled in sequence units, multiple picture units, picture units, or slice units.
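One way to realize the implicit control described above is a rule that both the encoder and the decoder can evaluate from information they already share, so no control information needs to be signaled. The specific rule and the threshold value below are illustrative assumptions; the text only states that the slice type, the base QP, and the reference status may be used:

```python
def implicit_num_offset_classes(base_qp, qp_threshold=32, default_total=5):
    """Hypothetical implicit control of the total number of offset classes.

    Encoder and decoder both evaluate this rule, so nothing is signaled.
    qp_threshold = 32 and default_total = 5 are illustrative assumptions;
    slice type or reference status could be folded into the condition in
    the same way.
    """
    if base_qp > qp_threshold:
        return 0  # zero mode: offset information is not signaled
    return default_total
```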
  • this embodiment can be applied to any of the first to fifth embodiments as described above. That is, this embodiment may be combined with SAO processing, deblocking filter processing, post filter processing, and the like.
  • the seventh embodiment relates to a technique for controlling the quantization accuracy for quantizing the offset value in the first to sixth embodiments.
  • In the embodiments described above, the filter coefficient value and the offset value are quantized with the same quantization accuracy.
  • the filter coefficient value and the offset value may be quantized with different quantization accuracy.
  • D1 and D2 are different values, and D1 > D2 in the following description. That is, the quantization accuracy of the offset value is coarser than the quantization accuracy of the filter coefficient value.
  • In the filter processing, the filter coefficient value is multiplied by the pixel value of the decoded image 12, and the offset value is added to the filter calculation result. Therefore, if the pixel values of the decoded image 12 are larger than 1, the influence of the quantization error of a filter coefficient value on the ALF processed image 17 is larger than the influence of the quantization error of an offset value on the ALF processed image 17.
  • Therefore, by quantizing the offset value coarsely, the overhead of the offset information 15 can be reduced while suppressing encoding distortion.
  • As described above, the moving picture coding apparatus and the moving picture decoding apparatus according to this embodiment quantize the filter coefficient set and the offset value with different quantization accuracies. Specifically, these devices set the quantization accuracy of the offset value to be coarser than the quantization accuracy of the filter coefficients. Therefore, according to these devices, the overhead due to the offset information can be reduced while suppressing encoding distortion.
  • the quantization accuracy of the offset value may be switched in units of slices, for example.
  • the encoding control unit 110 may select the quantization accuracy of the offset value so as to minimize the encoding cost based on Equation (4).
  • the encoding control unit 110 controls the filter coefficient set / offset value setting unit 107 and the filtering unit 108 based on the quantization accuracy of the offset value. Further, the encoding control unit 110 outputs information indicating the quantization accuracy of the offset value to the entropy encoding unit 109.
  • the entropy encoding unit 109 receives information indicating the quantization accuracy of the offset value from the encoding control unit 110, entropy encodes this, and multiplexes the encoded data 18.
  • the entropy decoding unit 501 entropy-decodes the encoded data 21 to generate information indicating the quantization accuracy of the offset value, and outputs this to the decoding control unit 507.
  • the decoding control unit 507 inputs information indicating the quantization accuracy of the offset value from the entropy decoding unit 501 and controls the filtering unit 505 based on this information.
  • In the above description, switching the quantization accuracy of the offset value in units of slices has been exemplified.
  • the quantization accuracy of the offset value may be switched by the switching unit of the filter coefficient set.
  • information indicating the quantization accuracy of the offset value is signaled for each filter coefficient set switching unit.
  • the quantization accuracy of the offset value may be switched in a unit larger than the unit for switching the filter coefficient set.
  • The quantization accuracy of the offset value may also be implicitly controlled based on various conditions, such as the slice type of the target slice (for example, I slice or P slice), the value of the base QP used in the target slice, and whether or not the target slice is referenced in the encoding/decoding process. If the quantization accuracy of the offset value is implicitly controlled based on such conditions, the overhead due to control information on the quantization accuracy of the offset value can be reduced.
  • For example, when the base QP used in the target slice exceeds a threshold value, the quantization accuracy of the offset value may be controlled to be coarse (that is, the quantization width of the offset value is increased).
  • This threshold value may be prepared in advance between the encoding side and the decoding side, or may be signaled in sequence units, multiple picture units, picture units, or slice units.
  • this embodiment is applicable to any of the first to sixth embodiments as described above. That is, the present embodiment may be combined with SAO processing, deblocking filter processing, post filter processing, control of the total number of offset classes, and the like.
  • the eighth embodiment relates to a technique that enables switching of a plurality of filter coefficient sets in a target slice, for example, in the first to seventh embodiments.
  • The moving image encoding apparatus includes a moving image encoding unit 1700 and an encoding control unit 1711.
  • the moving image encoding unit 1700 includes a predicted image generation unit 1701, a subtraction unit 1702, a transform and quantization unit 1703, an inverse quantization and inverse transform unit 1704, an adder 1705, a filter class setting unit 1706, An offset class setting unit 1707, a filter coefficient set and offset value setting unit 1708, a filtering unit 1709, and an entropy encoding unit 1710 are included.
  • the filter class setting unit 1706, the offset class setting unit 1707, the filter coefficient set and offset value setting unit 1708, and the filtering unit 1709 may be referred to as an ALF processing unit.
  • the encoding control unit 1711 controls the operation of each unit of the moving image encoding unit 1700.
  • The predicted image generation unit 1701, the subtraction unit 1702, the transform and quantization unit 1703, the inverse quantization and inverse transform unit 1704, the offset class setting unit 1707, the entropy encoding unit 1710, and the encoding control unit 1711 are the same as or similar to the predicted image generation unit 101, the subtraction unit 102, the transform and quantization unit 103, the inverse quantization and inverse transform unit 104, the offset class setting unit 106, the entropy encoding unit 109, and the encoding control unit 110, respectively, so their description is omitted.
  • The addition unit 1705 differs from the addition unit 105 in the output destination of the decoded image 12. Specifically, the addition unit 1705 outputs the decoded image 12 to the filter class setting unit 1706, the offset class setting unit 1707, the filter coefficient set and offset value setting unit 1708, and the filtering unit 1709. In other respects, the addition unit 1705 is the same as or similar to the addition unit 105.
  • the filter class setting unit 1706 receives the decoded image 12 from the addition unit 1705, and sets a filter class based on the second index for each second unit.
  • the filter class setting unit 1706 generates filter class information 33 indicating a filter class corresponding to each second unit. Details of the filter class setting unit 1706 will be described later.
  • the filter class setting unit 1706 outputs the filter class information 33 to the filter coefficient set / offset value setting unit 1708 and the filtering unit 1709.
  • The filter coefficient set and offset value setting unit 1708 acquires the input image 11 from the outside of the moving image encoding unit 1700, receives the decoded image 12 from the addition unit 1705, receives the filter class information 33 from the filter class setting unit 1706, and receives the offset class information 13 from the offset class setting unit 1707.
  • Based on the input image 11, the decoded image 12, the offset class information 13, and the filter class information 33, the filter coefficient set and offset value setting unit 1708 sets a filter coefficient set corresponding to each filter class and an offset value corresponding to each combination of a filter class and an offset class. Details of the filter coefficient set and offset value setting unit 1708 will be described later.
  • the filter coefficient set and offset value setting unit 1708 outputs the filter coefficient set information 14 indicating the filter coefficient set corresponding to each set filter class to the filtering unit 1709 and the entropy encoding unit 1710.
  • the filter coefficient set and offset value setting unit 1708 outputs the offset information 15 indicating the offset value corresponding to each combination of the set filter class and offset class to the filtering unit 1709 and the entropy encoding unit 1710.
  • The filtering unit 1709 receives the decoded image 12 from the addition unit 1705, the filter class information 33 from the filter class setting unit 1706, the offset class information 13 from the offset class setting unit 1707, and the filter coefficient set information 14 and the offset information 15 from the filter coefficient set and offset value setting unit 1708.
  • the filtering unit 1709 performs a filtering process on the decoded image 12 based on the offset class information 13, the filter coefficient set information 14, the offset information 15, and the filter class information 33 to generate an ALF processed image 17. Details of the filtering unit 1709 will be described later.
  • Hereinafter, the filter class setting unit 1706, the offset class setting unit 1707, the filter coefficient set and offset value setting unit 1708, and the filtering unit 1709 will be described.
  • the filter class setting unit 1706 sets the filter class based on the second index for each second unit of the decoded image 12 as described above.
  • The second unit may be larger than or the same size as the first unit.
  • the second unit may be a pixel block.
  • the second index indicates an image feature for each second unit.
  • The second index may be different from the first index, or may be the same.
  • the second index may be one or a combination of image activity, texture direction, pixel block position information, and the like.
  • When the second unit is the same size as the first unit and the second index is the same as the first index, there is a risk that the offset class combined with a given filter class becomes fixed. As a result, it becomes impossible to switch a plurality of offset values for a given filter coefficient set. Therefore, for example, the total number of offset classes may be controlled based on the sixth embodiment. If the total number of offset classes is controlled, it is possible to switch between multiple offset values for a given filter coefficient set. Alternatively, the total number of filter classes may be controlled by applying the sixth embodiment to filter classes instead of offset classes; for example, whether or not adjacent filter classes are merged may be controlled, and filter merge information may be signaled. If the total number of filter classes is controlled, a plurality of offset values can likewise be switched for a given filter coefficient set.
  • the filter class setting unit 1706 may fix the type of the second index to any one, or may switch between them.
  • the filter class setting unit 1706 may switch the type of the second index in units of slices or other units.
  • the encoding control unit 1711 may select the optimum second index type for each slice.
  • Information indicating the type of the selected second index is entropy encoded by the entropy encoding unit 1710 and output as a part of the encoded data 18.
  • the optimum type of the second index may be one that minimizes the encoding cost represented by the above formula (4), for example.
  • Similarly, the type of the first index can be switched.
  • the type of the first index is not limited to a pixel area unit such as a slice, and may be switched on a filter class basis.
  • the offset class setting unit 1707 may switch the type of the first index for each filter class.
  • the encoding control unit 1711 may select the optimum first index type for each filter class.
  • Information indicating the type of the selected first index is entropy encoded by the entropy encoding unit 1710 and output as a part of the encoded data.
  • the optimum first index type may be one that minimizes the encoding cost represented by the above formula (4), for example.
  • Based on the input image 11, the decoded image 12, the offset class information 13, and the filter class information 33, the filter coefficient set and offset value setting unit 1708 sets a filter coefficient set corresponding to each filter class and an offset value corresponding to each combination of a filter class and an offset class.
  • the filter coefficient set and offset value setting unit 1708 sets these by solving the Wiener-Hopf equation described above.
  • the filter coefficient set and offset value setting unit 1708 solves the Wiener-Hopf equation for each filter class.
  • the filter coefficient value and the offset value may be quantized according to Equation (7) or Equation (10). In the following description, it is assumed that the filter coefficient value and the offset value are set by quantization.
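The derivation sketched in the bullets above can be illustrated as follows. This is a hedged, minimal sketch: for a linear filter, the Wiener-Hopf equation reduces to the normal equations R c = p accumulated from (decoded-pixel, input-pixel) sample pairs. Equations (7) and (10) are not reproduced in this excerpt, so the quantization helper below uses a generic fixed-point shift; the sample-gathering format is likewise an assumption for illustration.

```python
def solve_wiener_hopf(samples):
    """Solve the normal equations R c = p for filter coefficients c.

    samples: list of (observation, target) pairs, where observation is the
    vector of decoded-pixel values under the filter taps and target is the
    corresponding input-image pixel. A real encoder would accumulate R and
    p separately for each filter class.
    """
    n = len(samples[0][0])
    R = [[0.0] * n for _ in range(n)]
    p = [0.0] * n
    for obs, target in samples:
        for i in range(n):
            p[i] += obs[i] * target
            for j in range(n):
                R[i][j] += obs[i] * obs[j]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(R[r][col]))
        R[col], R[piv] = R[piv], R[col]
        p[col], p[piv] = p[piv], p[col]
        for r in range(col + 1, n):
            f = R[r][col] / R[col][col]
            for c in range(col, n):
                R[r][c] -= f * R[col][c]
            p[r] -= f * p[col]
    coeff = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        s = sum(R[i][j] * coeff[j] for j in range(i + 1, n))
        coeff[i] = (p[i] - s) / R[i][i]
    return coeff


def quantize(value, shift=8):
    """Generic fixed-point quantization with rounding (hypothetical shift)."""
    return int(value * (1 << shift) + (0.5 if value >= 0 else -0.5))
```

For samples generated by target = 3·a − b, the solver returns coefficients close to (3, −1).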
  • the filtering unit 1709 performs a filtering process on the decoded image 12 based on the offset class information 13, the filter coefficient set information 14, the offset information 15, and the filter class information 33, and generates an ALF processed image 17.
  • the filtering unit 1709 includes a filter coefficient set selection unit 1801, an offset selection unit 1802, and a filter processing unit 1803, as shown in FIG.
  • the filter coefficient set selection unit 1801 receives the filter class information 33 from the filter class setting unit 1706 and the filter coefficient information 14 from the filter coefficient set / offset value setting unit 1708.
  • the filter coefficient set selection unit 1801 specifies a filter class based on the filter class information 33 for each second unit, and selects a filter coefficient set 34 corresponding to the filter class based on the filter coefficient set information 14.
  • the filter coefficient set selection unit 1801 outputs the selected filter coefficient set 34 to the filter processing unit 1803.
  • the offset selection unit 1802 receives the filter class information 33 from the filter class setting unit 1706, the offset class information 13 from the offset class setting unit 1707, and the offset information 15 from the filter coefficient set / offset value setting unit 1708.
  • for each first unit, the offset selection unit 1802 identifies the filter class and the offset class based on the filter class information 33 and the offset class information 13, and selects, based on the offset information 15, the offset value 16 corresponding to the combination of the filter class and the offset class.
  • the offset selection unit 1802 outputs the selected offset value 16 to the filter processing unit 1803.
  • the filter processing unit 1803 receives the decoded image 12 from the addition unit 1705, receives the filter coefficient set 34 from the filter coefficient set selection unit 1801, and receives the offset value 16 from the offset selection unit 1802.
  • the filter processing unit 1803 performs a filter operation based on the filter coefficient set 34 on each pixel in the decoded image 12 and performs an offset operation based on the offset value 16 to generate the ALF processed image 17. That is, the filter processing unit 1803 generates a pixel value at the position (x, y) in the ALF processed image 17 according to the following mathematical formula (13).
  • filter_idx (x, y) represents the filter class of the second unit to which the pixel specified by the position (x, y) belongs.
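The filter operation and offset operation described above can be sketched as follows. Equation (13) itself is not reproduced in this excerpt, so this is a hedged illustration: the tap layout, the 2-D array representation of filter_idx and offset_idx, and the border clamping are assumptions.

```python
def alf_process(decoded, taps, coeff_sets, offsets, filter_idx, offset_idx):
    """Apply the filter operation and the offset operation per pixel.

    decoded:     2-D list of pixel values (the decoded image 12)
    taps:        list of (dx, dy) tap positions (hypothetical layout)
    coeff_sets:  coeff_sets[f] is the filter coefficient set of filter class f
    offsets:     offsets[f][o] is the offset value for (filter class f,
                 offset class o)
    filter_idx:  filter_idx[y][x] = filter class of the second unit at (x, y)
    offset_idx:  offset_idx[y][x] = offset class of the first unit at (x, y)
    """
    h, w = len(decoded), len(decoded[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            f = filter_idx[y][x]
            acc = 0
            for c, (dx, dy) in zip(coeff_sets[f], taps):
                # clamp tap positions at the image border (an assumption)
                yy = min(max(y + dy, 0), h - 1)
                xx = min(max(x + dx, 0), w - 1)
                acc += c * decoded[yy][xx]
            out[y][x] = acc + offsets[f][offset_idx[y][x]]
    return out
```

With a single center tap, a filter class 0 that passes the pixel through with a +5 offset, and a filter class 1 that doubles the pixel with a 0 offset, the top row of a 2×2 image becomes decoded + 5 and the bottom row is doubled.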
  • the filter class setting unit 1706 sets the filter class based on the second index for each second unit (for example, pixel block) of the decoded image 12 (step S1901).
  • the offset class setting unit 1707 sets an offset class based on the first index for each first unit (for example, pixel or pixel block) of the decoded image 12 (step S1902).
  • based on the input image 11, the decoded image 12, the filter classes set in step S1901, and the offset classes set in step S1902, the filter coefficient set and offset value setting unit 1708 sets a filter coefficient set corresponding to each filter class and an offset value corresponding to each combination of the filter class and the offset class (step S1903).
  • the filtering unit 1709 performs a filtering process on the decoded image 12 based on the filter coefficient set and the offset value set in step S1903 (step S1904).
  • in addition to the quantized transform coefficients and the encoding parameters, the entropy encoding unit 1710 entropy-encodes the filter coefficient set information 14 indicating the filter coefficient set set in step S1903 and the offset information 15 indicating the offset values set in step S1903 (step S1905).
  • the filter coefficient set information 14 and the offset information 15 are described according to the syntax structure shown in FIG. 20, for example.
  • the syntax in FIG. 20 is described in units of slices, for example.
  • filter_type_idx is the same as or similar to that in FIG.
  • NumOfFilterSets represents the total number of filter classes that can be switched in the target slice.
  • NumOfFilterCoeff represents the total number of filter coefficient values included in the filter coefficient set.
  • the value of NumOfFilterCoeff may be common regardless of the filter class, but may be different for each filter class.
  • the filter coefficient values included in the filter coefficient set of the filter class specified by the variable i are described one by one as filter_coeff[i][j].
  • NumOfOffset represents the total number of offset classes that can be switched within one filter class.
  • the value of NumOfOffset may be common regardless of the filter class, but may be different for each filter class.
  • the offset value corresponding to the combination of the filter class specified by the variable i and the offset class specified by the variable j is described as offset_value[i][j].
  • the above syntax elements are described in units of slices, entropy-coded on the encoding side, and transmitted to the decoding side as part of the encoded data 18.
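The order in which these per-slice syntax elements might be emitted can be sketched as below. FIG. 20 is not reproduced in this excerpt, so the exact element ordering is an assumption based on the descriptions above; NumOfFilterCoeff and NumOfOffset are taken per filter class here, which the text permits.

```python
def describe_slice_alf_syntax(filter_type_idx, filter_coeff, offset_value):
    """Emit the ALF syntax elements of one slice in order (a sketch).

    filter_coeff[i][j] : j-th coefficient of the filter class i
    offset_value[i][j] : offset for (filter class i, offset class j)
    Returns the flat list of (name, value) elements that would be
    entropy-coded and placed in the encoded data 18.
    """
    elements = [("filter_type_idx", filter_type_idx),
                ("NumOfFilterSets", len(filter_coeff))]
    for i, coeffs in enumerate(filter_coeff):
        elements.append(("NumOfFilterCoeff", len(coeffs)))
        for j, c in enumerate(coeffs):
            elements.append((f"filter_coeff[{i}][{j}]", c))
    for i, offsets in enumerate(offset_value):
        elements.append(("NumOfOffset", len(offsets)))
        for j, o in enumerate(offsets):
            elements.append((f"offset_value[{i}][{j}]", o))
    return elements
```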
  • the set filter coefficient values and offset values may be encoded as they are, or the difference values obtained by the difference calculation described above may be encoded.
  • the difference calculation for the filter coefficient values and the offset values may be performed with reference to a different filter class in the target slice.
  • information indicating that the filter coefficient value and the offset value set in one filter class in the reference slice are directly used may be encoded.
  • offset values may not be set for some offset classes.
  • the filter operation and the offset operation need not be applied to some or all of the filter classes in the target slice. Alternatively, only the offset operation, without the filter operation, may be applied to some or all of the filter classes in the target slice.
  • that is, for each filter class, any of the following may be selected: (1) a mode in which the filter operation and the offset operation are applied, (2) a mode in which the filter operation is not applied and the offset operation is applied, and (3) a mode in which neither the filter operation nor the offset operation is applied.
  • Whether or not to apply the filter operation and the offset operation for each filter class may be determined so as to minimize the coding cost based on the above equation (4), for example.
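Per-filter-class mode selection by coding cost can be sketched as below. Equation (4) is not reproduced in this excerpt, so a generic Lagrangian cost D + λ·R is assumed as a stand-in; the mode names and dictionary interface are illustrative only.

```python
MODES = ("filter_and_offset", "offset_only", "none")

def select_mode(distortion, rate, lam):
    """Pick, for one filter class, the mode minimizing an assumed
    cost D + lambda * R.

    distortion, rate: dicts mapping each mode name to its measured
    distortion and its side-information bit count for this filter class.
    """
    return min(MODES, key=lambda m: distortion[m] + lam * rate[m])
```

For example, a mode whose filter coefficients cost many bits can lose to an offset-only mode with slightly higher distortion but far cheaper side information.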
  • information indicating which mode has been selected for a part or all of the filter classes in the target slice needs to be included in the syntax element.
  • the video decoding device includes a video decoding unit 2100 and a decoding control unit 2108.
  • the video decoding unit 2100 includes an entropy decoding unit 2101, an inverse quantization and inverse transformation unit 2102, an addition unit 2103, a filter class setting unit 2104, an offset class setting unit 2105, a filtering unit 2106, and a predicted image generation unit 2107.
  • the decoding control unit 2108 controls the operation of each unit of the moving image decoding unit 2100.
  • the entropy decoding unit 2101, the inverse quantization and inverse transformation unit 2102, the offset class setting unit 2105, the predicted image generation unit 2107, and the decoding control unit 2108 are the same as or similar to the entropy decoding unit 501, the inverse quantization and inverse transformation unit 502, the offset class setting unit 504, the predicted image generation unit 506, and the decoding control unit 507, respectively, and thus descriptions thereof are omitted.
  • the addition unit 2103 differs from the addition unit 503 in the output destination of the decoded image 25. Specifically, the addition unit 2103 outputs the decoded image 25 to the filter class setting unit 2104, the offset class setting unit 2105, and the filtering unit 2106. In other respects, the addition unit 2103 is the same as or similar to the addition unit 503.
  • the filter class setting unit 2104 receives the decoded image 25 from the adding unit 2103 and sets the filter class based on the second index for each second unit.
  • the filter class setting unit 2104 generates filter class information 42 indicating a filter class corresponding to each second unit. Basically, the filter class setting unit 2104 performs the same or similar processing as the filter class setting unit 1706.
  • the filter class setting unit 2104 outputs the filter class information 42 to the filtering unit 2106.
  • the filtering unit 2106 receives the filter coefficient set information 23 and the offset information 24 from the entropy decoding unit 2101, the decoded image 25 from the addition unit 2103, the offset class information 26 from the offset class setting unit 2105, and the filter class information 42 from the filter class setting unit 2104.
  • the filtering unit 2106 performs filter processing on the decoded image 25 based on the filter coefficient set information 23, the offset information 24, the offset class information 26, and the filter class information 42 to generate an ALF processed image 27. That is, the filtering unit 2106 performs the same or similar processing as the filtering unit 1709 described above.
  • the moving image encoding device and the moving image decoding device described above can switch among a plurality of filter coefficient sets within the target slice, for example, and can switch among a plurality of offset values for each filter coefficient set. Therefore, according to the moving image encoding device and the moving image decoding device, filter processing adapted to the local structure within the target slice can be performed by switching the filter coefficient set and the offset value, so that the encoding efficiency can be improved.
  • this embodiment is applicable to any of the first to seventh embodiments as described above. That is, this embodiment may be combined with SAO processing, deblocking filter processing, post filter processing, control of the total number of offset classes or the total number of filter classes, control of quantization accuracy of offset values, and the like. Further, regarding the sixth embodiment, the total number of offset classes may be controlled for each filter class. In this case, the control information of the total number of offset classes is signaled in units of filter classes. Furthermore, regarding the seventh embodiment, the quantization accuracy of the offset value may be controlled for each filter class. In this case, information indicating the quantization accuracy of the offset value is signaled for each filter class.
  • the processing of each of the above embodiments can be realized by using a general-purpose computer as basic hardware.
  • the program for realizing the processing of each of the above embodiments may be provided by being stored in a computer-readable storage medium.
  • the program is stored in the storage medium as an installable file or an executable file. Examples of the storage medium include a magnetic disk, an optical disk (CD-ROM, CD-R, DVD, etc.), a magneto-optical disk (MO, etc.), and a semiconductor memory.
  • the storage medium may be any as long as it can store the program and can be read by the computer.
  • the program for realizing the processing of each of the above embodiments may be stored on a computer (server) connected to a network such as the Internet and downloaded to the computer (client) via the network.
  • Filter coefficient set and offset value setting unit 108 505, 608, 705, 808, 05, 1008, 1105, 1305, 1709, 2106 ... Filtering unit 109, 609, 809, 1009, 1208, 1710 ... Entropy encoding unit 110, 610, 810, 1010, 1209, 1711 ... Encoding Control unit 201, 1802 ... Offset selection unit 202, 1803 ... Filter processing unit 500, 700, 900, 1100, 1300, 2100 ... Moving picture decoding unit 501, 701, 901, 1101, 1301, 2101 .. Entropy decoding unit 507, 707, 907, 1107, 1307, 2108 ... decoding control unit 611, 111 ... pixel adaptive offset setting unit 612, 708, 1012, 1109 ... pixel adaptive offset processing unit 811 908, 1013, 1108 ... Deblocking filter processing unit 1706,2104 ... filter class setting section 1801 ... filter coefficient set selection section

Abstract

According to an embodiment of the invention, a moving image encoding method comprises setting, for each of a plurality of first units that include one or more pixels in a decoded image, one of a plurality of offset classes on the basis of a first index that indicates an image characteristic of the first unit. The method comprises setting, on the basis of an input image and the decoded image, a filter coefficient set, which includes a plurality of filter coefficient values, and offset values corresponding to the respective ones of the plurality of offset classes. The method comprises encoding both information indicating the filter coefficient set and information indicating the offset values corresponding to the respective ones of the plurality of offset classes, thereby generating encoded data.

Description

Moving picture coding method, moving picture decoding method, moving picture coding apparatus, and moving picture decoding apparatus
Embodiments relate to encoding and decoding of moving images.
A technique called the adaptive loop filter (ALF) is known for encoding and decoding moving images. In ALF, the encoding side sets a filter coefficient set and transmits information indicating the filter coefficient set to the decoding side. The decoding side then performs loop filter processing on the decoded image using the transmitted information indicating the filter coefficient set.
Further, as one type of ALF, a technique is also known in which one or more filter coefficient sets are prepared in a slice and switched in units of pixels or pixel blocks when applied to the decoded image. In such ALF, increasing the total number of filter coefficient sets prepared in a slice makes it easier for the loop filter processing to adapt to the local structure of the decoded image; that is, the image quality improvement effect on the decoded image can be enhanced. On the other hand, the code amount increases due to the overhead of the filter coefficient sets. Moreover, when a hardware implementation is assumed, there is a concern about increased power consumption. Specifically, the total number of filter coefficient values included in each filter coefficient set is, for example, at most 21, and the total number of filter coefficient sets held in a slice is, for example, at most 16. It is difficult for the loop filter processing unit to hold all of this information. Therefore, every time the filter coefficient set is switched in units of pixels or pixel blocks, the switched-to filter coefficient set is expected to be read in.
A technique called sample adaptive offset (SAO) is also known for encoding and decoding moving images. In SAO, the encoding side sets a plurality of offset values and transmits information indicating the plurality of offset values to the decoding side. The decoding side then switches among the plurality of offset values in units of pixels and applies (adds) them to the decoded image. Applying SAO and ALF in combination is also conceivable, but simply combining the two makes it difficult to obtain optimum values for both. For example, assume that the encoding side sets a plurality of offset values according to SAO and then sets a filter coefficient set according to ALF. With such a combination, the application of ALF is not taken into account when the plurality of offset values are set. Therefore, the plurality of offset values and the filter coefficient set are not necessarily set to optimum values.
One object of the embodiments is to improve encoding efficiency. Another object of the embodiments is to reduce power consumption in hardware implementations.
According to an embodiment, a moving image encoding method includes setting, for each first unit including one or more pixels in a decoded image, one of a plurality of offset classes based on a first index indicating an image feature of the first unit. The method includes setting, based on an input image and the decoded image, a filter coefficient set including a plurality of filter coefficient values and an offset value corresponding to each of the plurality of offset classes. The method includes encoding information indicating the filter coefficient set and information indicating the offset value corresponding to each of the plurality of offset classes to generate encoded data.
FIG. 1 is a block diagram illustrating a moving picture encoding apparatus according to the first embodiment.
FIG. 2 is a block diagram illustrating the filtering unit of FIG. 1.
FIG. 3 is a flowchart illustrating operations related to the ALF processing unit of FIG. 1.
FIG. 4 is a diagram illustrating a syntax structure for filter coefficient set information and offset information.
FIG. 5 is a block diagram illustrating a moving picture decoding apparatus according to the first embodiment.
FIG. 6 is a block diagram illustrating a moving picture encoding apparatus according to the second embodiment.
FIG. 7 is a block diagram illustrating a moving picture decoding apparatus according to the second embodiment.
FIG. 8 is a block diagram illustrating a moving picture encoding apparatus according to the third embodiment.
FIG. 9 is a block diagram illustrating a moving picture decoding apparatus according to the third embodiment.
FIG. 10 is a block diagram illustrating a moving picture encoding apparatus according to the fourth embodiment.
FIG. 11 is a block diagram illustrating a moving picture decoding apparatus according to the fourth embodiment.
FIG. 12 is a block diagram illustrating a moving picture encoding apparatus according to the fifth embodiment.
FIG. 13 is a block diagram illustrating a moving picture decoding apparatus according to the fifth embodiment.
FIG. 14 is a diagram illustrating merge processing according to the sixth embodiment.
FIG. 15 is a diagram illustrating a syntax structure for filter coefficient set information, offset merge information, and offset information according to the sixth embodiment.
FIG. 16 is a diagram illustrating a syntax structure for filter coefficient set information, control information on the total number of offset classes, and offset information according to a modification of the sixth embodiment.
FIG. 17 is a block diagram illustrating a moving picture encoding apparatus according to the eighth embodiment.
FIG. 18 is a block diagram illustrating the filtering unit of FIG. 17.
FIG. 19 is a flowchart illustrating operations related to the ALF processing unit of FIG. 17.
FIG. 20 is a diagram illustrating a syntax structure for filter coefficient set information and offset information according to the eighth embodiment.
FIG. 21 is a block diagram illustrating a moving picture encoding apparatus according to the eighth embodiment.
FIG. 22 is a flowchart illustrating the operation of the moving picture encoding apparatus of FIG. 1.
FIG. 23 is a flowchart illustrating the operation of the moving picture decoding apparatus of FIG. 5.
FIG. 24 is a diagram illustrating a reference table holding correspondences between the first index and offset classes.
Hereinafter, embodiments will be described with reference to the drawings. In the following, elements that are the same as or similar to elements already described are given the same or similar reference numerals, and redundant descriptions are basically omitted.
(First embodiment)
(Moving picture encoding device)
As shown in FIG. 1, the moving picture encoding apparatus according to the first embodiment includes a moving picture encoding unit 100 and an encoding control unit 110. The moving picture encoding unit 100 includes a predicted image generation unit 101, a subtraction unit 102, a transform and quantization unit 103, an inverse quantization and inverse transform unit 104, an addition unit 105, an offset class setting unit 106, a filter coefficient set and offset value setting unit 107, a filtering unit 108, and an entropy encoding unit 109. The offset class setting unit 106, the filter coefficient set and offset value setting unit 107, and the filtering unit 108 may be referred to as an ALF processing unit. The encoding control unit 110 controls the operation of each unit of the moving picture encoding unit 100.
The predicted image generation unit 101 performs prediction processing on the input image 11, for example in units of pixel blocks, and generates a predicted image. The input image 11 includes a plurality of pixel signals and is input from outside the moving picture encoding unit 100. The predicted image generation unit 101 may perform the prediction processing on the input image 11 based on an ALF processed image 17 described later. The prediction processing may be a general one such as temporal prediction using motion compensation or spatial prediction using encoded pixels in the picture, so a detailed description of the prediction processing is omitted. The predicted image generation unit 101 outputs the predicted image to the subtraction unit 102 and the addition unit 105.
The subtraction unit 102 obtains the input image 11 from outside the moving picture encoding unit 100 and receives the predicted image from the predicted image generation unit 101. The subtraction unit 102 subtracts the predicted image from the input image 11 to obtain a prediction error image, and outputs the prediction error image to the transform and quantization unit 103.
The transform and quantization unit 103 receives the prediction error image from the subtraction unit 102, performs transform processing on the prediction error image to generate transform coefficients, and quantizes the transform coefficients to generate quantized transform coefficients. The transform and quantization unit 103 outputs the quantized transform coefficients to the inverse quantization and inverse transform unit 104 and the entropy encoding unit 109. The transform processing is typically an orthogonal transform such as the discrete cosine transform (DCT), but is not limited to DCT and may be a wavelet transform, independent component analysis, or the like. The quantization processing is performed based on a quantization parameter set by the encoding control unit 110.
The inverse quantization and inverse transform unit 104 receives the quantized transform coefficients from the transform and quantization unit 103, dequantizes them to recover the transform coefficients, performs inverse transform processing on the transform coefficients to recover the prediction error image, and outputs the prediction error image to the addition unit 105. Basically, the inverse quantization and inverse transform unit 104 performs the inverse of the processing of the transform and quantization unit 103. That is, the inverse quantization is performed based on the quantization parameter set by the encoding control unit 110, and the inverse transform processing is determined by the transform processing performed by the transform and quantization unit 103, for example the inverse DCT (IDCT) or an inverse wavelet transform.
The addition unit 105 receives the predicted image from the predicted image generation unit 101 and the prediction error image from the inverse quantization and inverse transform unit 104. The addition unit 105 adds the prediction error image to the predicted image to generate a (local) decoded image 12, and outputs the decoded image 12 to the offset class setting unit 106, the filter coefficient set and offset value setting unit 107, and the filtering unit 108.
The offset class setting unit 106 receives the decoded image 12 from the addition unit 105 and sets an offset class for each first unit based on a first index. The offset class setting unit 106 generates offset class information 13 indicating the offset class corresponding to each first unit. Details of the offset class setting unit 106 will be described later. The offset class setting unit 106 outputs the offset class information 13 to the filter coefficient set and offset value setting unit 107 and the filtering unit 108.
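Although the concrete first index is described later in the patent (FIG. 24 shows a reference table from the first index to offset classes), the structure of the offset class setting can be sketched with a hypothetical index. Here a quantized horizontal gradient magnitude stands in for the first index, purely for illustration.

```python
def set_offset_classes(decoded, num_classes):
    """Assign an offset class to each first unit (here, each pixel).

    The first index below (horizontal gradient magnitude, quantized into
    num_classes bins for 8-bit pixels) is purely illustrative; the patent
    defines its own indices and a reference table (FIG. 24).
    """
    h, w = len(decoded), len(decoded[0])
    classes = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            left = decoded[y][max(x - 1, 0)]
            right = decoded[y][min(x + 1, w - 1)]
            grad = abs(right - left)  # hypothetical first index
            classes[y][x] = min(grad * num_classes // 256, num_classes - 1)
    return classes
```

The resulting per-pixel class map plays the role of the offset class information 13.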
The filter coefficient set and offset value setting unit 107 obtains the input image 11 from outside the moving picture encoding unit 100, receives the decoded image 12 from the addition unit 105, and receives the offset class information 13 from the offset class setting unit 106. Based on the input image 11, the decoded image 12, and the offset class information 13, the filter coefficient set and offset value setting unit 107 sets a filter coefficient set and an offset value corresponding to each offset class. Details of the filter coefficient set and offset value setting unit 107 will be described later.
The filter coefficient set and offset value setting unit 107 outputs filter coefficient set information 14 indicating the set filter coefficient set to the filtering unit 108 and the entropy encoding unit 109. The filter coefficient set and offset value setting unit 107 also outputs offset information 15 indicating the offset value corresponding to each set offset class to the filtering unit 108 and the entropy encoding unit 109.
The filtering unit 108 receives the decoded image 12 from the addition unit 105, the offset class information 13 from the offset class setting unit 106, and the filter coefficient set information 14 and the offset information 15 from the filter coefficient set and offset value setting unit 107. The filtering unit 108 performs filter processing on the decoded image 12 based on the offset class information 13, the filter coefficient set information 14, and the offset information 15 to generate an ALF processed image 17, and outputs the ALF processed image 17 to the predicted image generation unit 101. Details of the filtering unit 108 will be described later.
 Note that the ALF processed image 17 may be stored in a storage unit (not shown), for example a buffer, accessible by the predicted image generation unit 101. The ALF processed image 17 is read as a reference image by the predicted image generation unit 101 as necessary and used for prediction processing.
 The entropy encoding unit 109 receives the quantized transform coefficients from the transform and quantization unit 103, the filter coefficient set information 14 and offset information 15 from the filter coefficient set and offset value setting unit 107, and the encoding parameters from the encoding control unit 110. The encoding parameters may include, for example, mode information, motion information, coding block division information, and quantization parameters. The entropy encoding unit 109 entropy-encodes (for example, by Huffman coding or arithmetic coding) the quantized transform coefficients, the filter coefficient set information 14, the offset information 15, and the encoding parameters to generate encoded data 18. The entropy encoding unit 109 outputs the encoded data 18 to the outside of the moving image encoding unit 100 (for example, to a communication system or storage system). The encoded data 18 is decoded by an image decoding apparatus described later.
 The encoding control unit 110 performs coding block division control, feedback control of the generated code amount, quantization control, mode control, and the like for the moving image encoding unit 100, and outputs the encoding parameters to the entropy encoding unit 109.
 The moving image encoding unit 100 operates, for example, as shown in FIG. 22. Specifically, the subtraction unit 102 subtracts the predicted image from the input image 11 to generate a prediction error image (step S2201). The transform and quantization unit 103 transforms and quantizes the prediction error image generated in step S2201 to generate quantized transform coefficients (step S2202). The inverse quantization and inverse transform unit 104 inversely quantizes and inversely transforms the quantized transform coefficients generated in step S2202 to decode the prediction error image (step S2203). The addition unit 105 adds the prediction error image decoded in step S2203 to the predicted image to generate a (local) decoded image 12 (step S2204). Next, the ALF processing unit performs ALF processing (step S2205). Although details of the ALF processing will be described later, in step S2205 the filter coefficient set information 14, the offset information 15, and the ALF processed image 17 are generated based on the input image 11 and the decoded image 12. The entropy encoding unit 109 entropy-encodes the quantized transform coefficients generated in step S2202, the filter coefficient set information 14 and offset information 15 generated in step S2205, and the encoding parameters (step S2206). This series of processes is repeated until encoding of the input image 11 is completed.
 The operation illustrated in FIG. 22 corresponds to so-called hybrid coding, which includes prediction processing and transform processing. However, the moving image encoding apparatus does not necessarily need to perform hybrid coding. For example, when hybrid coding is replaced with DPCM (Differential Pulse Code Modulation), prediction processing based on neighboring pixels is performed while unnecessary processing may be omitted.
 Hereinafter, details of the ALF processing unit of FIG. 1, that is, the offset class setting unit 106, the filter coefficient set and offset value setting unit 107, and the filtering unit 108, will be described.
 As described above, the offset class setting unit 106 sets an offset class for each first unit of the decoded image 12 based on a first index. Here, the first unit may be a single pixel or a region including a plurality of pixels (for example, a pixel block). In the following description, for simplicity, the first unit is assumed to be one pixel; however, as will be described later, it may be extended as appropriate to a region including a plurality of pixels. The first index is a value indicating an image feature of the first unit.
 For example, the activity of the image of the first unit may be used as the first index. For example, the offset class setting unit 106 may calculate the first index k(x, y) of the pixel specified by the position (x, y) by the following formula (1):

  k(x, y) = |2·S_dec(x, y) − S_dec(x−1, y) − S_dec(x+1, y)| + |2·S_dec(x, y) − S_dec(x, y−1) − S_dec(x, y+1)|   (1)

 In the above formula (1), S_dec(x, y) represents the pixel value at the position (x, y) in the decoded image 12. According to formula (1), the first index k(x, y) represents the activity at the position (x, y). Note that formula (1) may also be applied to each pixel within a certain range around the pixel of interest, for example the pixels in an N×N block (N being an integer of 2 or more) around the pixel of interest, and the sum of these activities may be used as the first index k(x, y).
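The block-summed activity described above can be sketched as follows. This is a minimal sketch: the Laplacian-style per-pixel activity measure and the function names are illustrative assumptions, not taken verbatim from the patent.

```python
def activity(s, x, y):
    """Laplacian-style activity at (x, y); s is a 2-D list of pixel values.
    The exact form of the activity measure is assumed for illustration."""
    return (abs(2 * s[y][x] - s[y][x - 1] - s[y][x + 1])
            + abs(2 * s[y][x] - s[y - 1][x] - s[y + 1][x]))

def block_activity(s, x, y, n):
    """Sum of per-pixel activities over an n-by-n block anchored at (x, y)."""
    return sum(activity(s, x + i, y + j) for j in range(n) for i in range(n))
```

A flat region yields zero activity, while an isolated bright pixel yields a large value, so the index separates smooth areas from detailed ones.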
 Instead of the activity, the offset class setting unit 106 can also calculate the first index based on a comparison between the pixel of interest and its surrounding pixels. For example, the first index may be a value that increases with the rank of the pixel of interest when the pixel of interest and the surrounding pixels are ranked in descending order of pixel value. Specifically, the offset class setting unit 106 may calculate the first index k(x, y) by the following formula (2):

  k(x, y) = Σ_{(i,j) ∈ {(1,0), (−1,0), (0,1), (0,−1)}} [ sign( S_dec(x, y) − S_dec(x+i, y+j) ) + 1 ]   (2)

 In the above formula (2), the function sign(α) returns 1 if α is positive, 0 if α is 0, and −1 if α is negative. According to formula (2), the first index k(x, y) is 8 if the pixel value of the pixel of interest is larger than all four surrounding pixels, 4 if it is equal to all four surrounding pixels, and 0 if it is smaller than all four surrounding pixels. Formula (2) may also be modified.
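The neighbor-comparison index of formula (2) can be sketched as follows; the function names are illustrative, but the behavior matches the description: 8 for a local maximum, 4 for a flat area, 0 for a local minimum.

```python
def sign(a):
    # sign(α): 1 for positive, 0 for zero, -1 for negative
    return (a > 0) - (a < 0)

def rank_index(s, x, y):
    """First index per formula (2): compare the pixel of interest with its
    four neighbors and accumulate sign(difference) + 1 per neighbor."""
    neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return sum(sign(s[y][x] - s[y + j][x + i]) + 1 for i, j in neighbors)
```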
 Alternatively, the pixel value S_dec(x, y) of the pixel of interest may be used as the first index k(x, y). The scan order of the pixel of interest (that is, the position of the pixel of interest) within a picture, a slice, or a pixel block may also be used as the first index k(x, y). The scan order may be based on a raster scan, a zigzag scan, a Hilbert scan, or the like.
 As described above, the first unit is not limited to one pixel and may be a region including a plurality of pixels. When the first unit is a region including a plurality of pixels, the offset class setting unit 106 may calculate the first index for each pixel in the region by the methods described above and then derive the first index of the first unit from these values. For example, the offset class setting unit 106 may calculate the first index for all or some of the pixels included in the first unit and use their sum, average, minimum, or maximum as the first index of that first unit. Alternatively, if the first unit is obtained by dividing a picture, slice, or pixel block in the horizontal and vertical directions, the scan order of the first unit may be used as the first index. The scan order may be based on a raster scan, a zigzag scan, a Hilbert scan, or the like.
 The offset class setting unit 106 may set the offset class for each first unit based on the first index, for example, according to the following formula (3):

  offset_idx(x, y) = floor( k(x, y) / δ )   (3)

 In the above formula (3), offset_idx(x, y) represents the offset class of the first unit to which the pixel specified by the position (x, y) belongs, k(x, y) represents the first index of that first unit, and δ represents a real number of 1 or more.
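The class assignment of formula (3) can be sketched as follows, assuming the constant-width mapping offset_idx = floor(k/δ):

```python
import math

def offset_class(k, delta):
    """Offset class per formula (3): constant-width ranges of the first
    index map to consecutive classes; delta is a real number >= 1."""
    return math.floor(k / delta)
```

Larger δ merges more index values into each class, reducing the total number of classes.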
 Alternatively, as shown in FIG. 24, the offset class setting unit 106 may prepare a reference table that holds the correspondence between the first index and the offset class. According to formula (3), the range of the first index corresponding to any offset class is constant. With a reference table, by contrast, the range of the first index corresponding to one offset class can be narrowed while the range corresponding to another offset class is widened.
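A table-driven assignment with non-uniform ranges can be sketched as follows; the boundary values are hypothetical and only illustrate how a reference table differs from the constant-width mapping of formula (3):

```python
import bisect

# Hypothetical class boundaries over the first index: class 0 covers k in
# [0, 2), class 1 covers [2, 8), class 2 covers [8, inf). Unlike formula (3),
# the ranges need not all have the same width.
BOUNDARIES = [2, 8]

def offset_class_from_table(k):
    """Table-driven offset class lookup with non-uniform index ranges."""
    return bisect.bisect_right(BOUNDARIES, k)
```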
 The total number of offset classes can be increased or decreased by adjusting δ in formula (3) or by changing the reference table. As the total number of offset classes increases, the total number of offset values that can be set increases, making it easier to improve image quality. Conversely, as the total number of offset classes decreases, the total number of offset values that can be set decreases, which reduces the overhead of the offset information 15 described later.
 As described above, various first indices are conceivable. The offset class setting unit 106 may fix the type of the first index to any one of them, or may switch between them. For example, the offset class setting unit 106 may switch the type of the first index per slice or per some other unit. In this case, the encoding control unit 110 may select the optimum type of the first index for each slice. Information indicating the selected type of the first index is entropy-encoded by the entropy encoding unit 109 and output as part of the encoded data 18. The optimum type of the first index may be, for example, the one that minimizes the encoding cost given by the following formula (4):

  Cost = D + λ·R   (4)

 In the above formula (4), Cost represents the encoding cost, D represents the sum of squared residuals (within the first unit), R represents the code amount, and λ represents a Lagrange multiplier.
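The cost-based selection can be sketched as follows, assuming the usual Lagrangian form Cost = D + λ·R; the λ value and the candidate figures are illustrative assumptions:

```python
LAMBDA = 0.5  # assumed Lagrange multiplier

def rd_cost(distortion, rate):
    """Encoding cost in the form of formula (4): Cost = D + lambda * R."""
    return distortion + LAMBDA * rate

def select_best(candidates):
    """candidates: list of (name, distortion, rate); returns the name of the
    candidate with the minimum encoding cost."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2]))[0]
```

The same minimization applies wherever the description refers to formula (4), for example when choosing the first index type per slice or deciding whether to apply the filter and offset operations.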
 As described above, the filter coefficient set and offset value setting unit 107 sets a filter coefficient set and an offset value corresponding to each offset class based on the input image 11, the decoded image 12, and the offset class information 13. For example, the filter coefficient set and offset value setting unit 107 sets the filter coefficient set and the offset value corresponding to each offset class by solving the Wiener-Hopf equations. According to the Wiener-Hopf equations, the filter coefficient set and the offset value corresponding to each offset class are set so that the sum of squared errors between the ALF processed image 17 and the input image 11 is minimized, for example per slice. This sum of squared errors can be calculated by the following formula (5):

  E = Σ_{(x,y) ∈ (X,Y)} ( S_flt(x, y) − S_org(x, y) )²,  S_flt(x, y) = Σ_{i,j} h_{i,j}·S_dec(x+i, y+j) + δ_k   (5)

 In the above formula (5), E represents the sum of squared errors within the slice, S_flt(x, y) represents the pixel value at the position (x, y) of the ALF processed image 17, S_org(x, y) represents the pixel value at the same position (x, y) of the input image 11, X and Y represent the set of positions (x, y) within the slice, h_{i,j} represents a filter coefficient, and δ_k represents the offset value corresponding to offset class k.
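For illustration, the sum of squared errors of formula (5) can be evaluated as follows for a simplified one-tap filter; the flattened-slice representation and all names are assumptions:

```python
def sq_error_sum(dec, org, classes, h, offsets):
    """Sum of squared errors per formula (5) for an illustrative 1-tap
    filter: S_flt = h * S_dec + offset of the pixel's class. dec, org, and
    classes are equal-length 1-D lists (a flattened slice)."""
    return sum((h * d + offsets[c] - o) ** 2
               for d, o, c in zip(dec, org, classes))
```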
 The filter coefficient set that minimizes the sum of squared errors and the offset value of each offset class are obtained by solving the equations that set to zero the partial derivatives of the sum of squared errors with respect to each filter coefficient and each offset value. That is, the filter coefficient set and offset value setting unit 107 may solve the following formula (6):

  ∂E/∂h_{i,j} = 0 for all (i, j),  ∂E/∂δ_k = 0 for all k   (6)

 By solving formula (6), the filter coefficient set and offset value setting unit 107 can set the optimum filter coefficient set and the offset value of each offset class that minimize the sum of squared errors within the slice.
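A minimal sketch of solving formula (6) for the simplest case of one filter coefficient h and one offset d: setting the two partial derivatives to zero reduces to ordinary linear regression. The names are illustrative.

```python
def solve_one_tap(dec, org):
    """Least-squares solution of formula (6) for one coefficient h and one
    offset d: minimize sum((h * x + d - y) ** 2) over the sample pairs."""
    n = len(dec)
    sx, sy = sum(dec), sum(org)
    sxx = sum(x * x for x in dec)
    sxy = sum(x * y for x, y in zip(dec, org))
    h = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    d = (sy - h * sx) / n
    return h, d
```

With a full filter coefficient set and multiple offset classes, the same normal equations become a larger linear system, but the principle is unchanged.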
 In the process of deriving the filter coefficient values and offset values, the filter coefficient set and offset value setting unit 107 can use floating-point arithmetic. However, if these filter coefficient values and offset values are set in floating-point format, the amount of computation for the filter processing increases. Furthermore, since the ALF processing result then depends on the floating-point model, the ALF processing result on the decoding side may not match the ALF processed image 17, depending on the processing environment. Therefore, from the viewpoints of speeding up the ALF processing and making the ALF processing result exact, the filter coefficient set and offset value setting unit 107 may quantize the filter coefficient values and offset values before setting them. For example, the filter coefficient set and offset value setting unit 107 may quantize the floating-point filter coefficient values and offset values according to the following formula (7):

  h_{i,j} ← round( h_{i,j}·D ),  δ_k ← round( δ_k·D )   (7)

 In the above formula (7), D represents a real value. D is typically a power of 2, but is not limited to this. In the following description, the filter coefficient values and offset values are assumed to be set after such quantization.
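The quantization in the form of formula (7) can be sketched as follows; D = 64 is an illustrative power of two:

```python
def quantize(value, d=1 << 6):
    """Fixed-point quantization per the form of formula (7): scale a
    floating-point coefficient or offset by D and round to an integer."""
    return round(value * d)
```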
 As described above, the filtering unit 108 filters the decoded image 12 based on the offset class information 13, the filter coefficient set information 14, and the offset information 15 to generate the ALF processed image 17. More specifically, as shown in FIG. 2, the filtering unit 108 includes an offset selection unit 201 and a filter processing unit 202.
 The offset selection unit 201 receives the offset class information 13 from the offset class setting unit 106 and the offset information 15 from the filter coefficient set and offset value setting unit 107. For each first unit, the offset selection unit 201 specifies the offset class based on the offset class information 13 and selects the offset value 16 corresponding to that offset class based on the offset information 15. The offset selection unit 201 outputs the selected offset value 16 to the filter processing unit 202.
 The filter processing unit 202 receives the decoded image 12 from the addition unit 105, the filter coefficient set information 14 from the filter coefficient set and offset value setting unit 107, and the offset value 16 from the offset selection unit 201. The filter processing unit 202 performs a filter operation based on the filter coefficient set information 14 and an offset operation based on the offset value 16 on each pixel in the decoded image 12 to generate the ALF processed image 17. That is, the filter processing unit 202 generates the pixel value at the position (x, y) in the ALF processed image 17 according to the following formula (8):

  S_flt(x, y) = ( Σ_{i,j} h_{i,j}·S_dec(x+i, y+j) + δ_k ) / D   (8)

 If D is equal to 2^n, the division in formula (8) is equivalent to a bit shift operation. Replacing the division with a bit shift simplifies the configuration of the filter processing unit 202 and speeds up the filter processing. Accordingly, the filter processing unit 202 may generate the pixel value at the position (x, y) in the ALF processed image 17 according to the following formula (9):

  S_flt(x, y) = ( Σ_{i,j} h_{i,j}·S_dec(x+i, y+j) + δ_k ) >> n   (9)
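The fixed-point filter and offset operations in the form of formulas (8) and (9) can be sketched as follows for a single output pixel; the tap count, shift amount, and names are illustrative assumptions:

```python
N = 6       # assumed shift amount, so D = 2**N = 64
D = 1 << N

def filter_pixel(window, coeffs, offset):
    """One output pixel: convolution with fixed-point coefficients, plus the
    class offset, scaled back by D. For a non-negative accumulator the right
    shift by N equals integer division by D."""
    acc = sum(h * s for h, s in zip(coeffs, window)) + offset
    return acc >> N  # same as acc // D for acc >= 0
```

With coefficients that sum to D (unity DC gain), a flat input passes through unchanged, and the offset shifts the result by offset/D.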
 As described above, the operation related to the ALF processing unit of FIG. 1 is, for example, as shown in FIG. 3. That is, the offset class setting unit 106 sets an offset class for each first unit (for example, each pixel or pixel block) of the decoded image 12 based on the first index (step S301). The filter coefficient set and offset value setting unit 107 sets a filter coefficient set and an offset value corresponding to each offset class based on the input image 11, the decoded image 12, and the offset classes set in step S301 (step S302). The filtering unit 108 filters the decoded image 12 based on the filter coefficient set and offset values set in step S302 (step S303). Further, in addition to the quantized transform coefficients and the encoding parameters, the entropy encoding unit 109 entropy-encodes the filter coefficient set information 14 indicating the filter coefficient set set in step S302 and the offset information 15 indicating the offset values set in step S302 (step S304).
 The filter coefficient set information 14 and the offset information 15 are described, for example, according to the syntax structure shown in FIG. 4. The syntax of FIG. 4 is described, for example, per slice. In FIG. 4, filter_type_idx is an index indicating the filter shape or tap length of the adaptive loop filter used in the target slice. NumOfFilterCoeff represents the total number of filter coefficient values included in the filter coefficient set and is determined by filter_type_idx. The filter coefficient values included in the filter coefficient set are described one by one as filter_coeff[i]. NumOfOffset represents the total number of offset classes, which is also the total number of offset values that can be set within the target slice. The offset value corresponding to the offset class identified by the variable i is described as offset_value[i]. These syntax elements are described per slice, entropy-encoded on the encoding side, and transmitted to the decoding side as part of the encoded data 18.
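For illustration, the per-slice ordering of the FIG. 4 syntax elements can be sketched as a flat serialization; the filter_type_idx-to-tap-count table and the use of plain integers instead of entropy-coded bins are assumptions:

```python
# Hypothetical mapping from filter_type_idx to NumOfFilterCoeff.
TAP_COUNT = {0: 5, 1: 7, 2: 9}

def write_alf_syntax(filter_type_idx, filter_coeff, offset_value):
    """Emit the elements in the described order: filter_type_idx, the
    NumOfFilterCoeff coefficients, NumOfOffset, then the offset values."""
    assert len(filter_coeff) == TAP_COUNT[filter_type_idx]
    return ([filter_type_idx] + list(filter_coeff)
            + [len(offset_value)] + list(offset_value))

def read_alf_syntax(elems):
    """Parse the same ordering back into its three components."""
    it = iter(elems)
    filter_type_idx = next(it)
    coeffs = [next(it) for _ in range(TAP_COUNT[filter_type_idx])]
    offsets = [next(it) for _ in range(next(it))]
    return filter_type_idx, coeffs, offsets
```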
 According to the syntax structure of FIG. 4, filter_coeff[i] represents a filter coefficient value and offset_value[i] represents an offset value. However, rather than encoding the filter coefficient values and offset values of the target slice as they are, it may be more efficient to encode their differences from the filter coefficient values and offset values of an already encoded slice. In such a case, filter coefficient differences may be encoded instead of the filter coefficient values of the target slice, and offset differences may be encoded instead of the offset values of the target slice. For example, if the filter coefficient values and offset values are retained for each slice of previously encoded frames, the differences can be calculated with respect to any one of those slices. The slice used as the basis is hereinafter referred to as the reference slice. To make it selectable whether to apply this operation, information indicating whether differences from the reference slice are calculated may be included in the syntax elements.
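The differential coding against a reference slice can be sketched as follows; in practice the deltas, rather than the raw values, would then be entropy-encoded, and small deltas are typically cheaper to code:

```python
def encode_deltas(current, reference):
    """Differences of the current slice's values from the reference slice's."""
    return [c - r for c, r in zip(current, reference)]

def decode_deltas(deltas, reference):
    """Reconstruct the current slice's values from deltas and the reference."""
    return [d + r for d, r in zip(deltas, reference)]
```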
 When calculating differences from a reference slice, it is desirable, from the viewpoint of error resilience and the like, to select the reference slice from among the reference frames used for inter-picture prediction. Reference frames are generally listed in a reference list. For example, if the filter coefficient values and offset values of each slice in each reference frame are retained in this reference list in addition to the reference frames themselves, the difference calculation can be performed easily. The reference slice may be fixed to the slice closest in position to the target slice within the most recently encoded reference frame in the reference list. Alternatively, the reference slice may be determined by selecting one of the slices in the reference list, or by selecting one of the frames in the reference list and taking the slice closest in position to the target slice within that designated frame. When the reference slice is determined by selecting a slice in the reference list, information identifying the selected slice needs to be included in the syntax elements. When the reference slice is determined by selecting a reference frame in the reference list, information identifying the selected reference frame needs to be included in the syntax elements.
 Furthermore, directly reusing the filter coefficient values and offset values of the reference slice is also conceivable. Specifying the filter coefficient values and offset values in this way reduces the amount of information in the filter coefficient set information 14 and the offset information 15. To make it selectable whether to apply this operation, information indicating whether the filter coefficient values and offset values of the reference slice are used directly may be included in the syntax elements.
 Note that offset values need not be set for some offset classes. If no offset value is set, the overhead of the offset information 15 can be reduced. The offset classes for which no offset value is set may be selected by the encoding control unit 110. For example, for each offset class in the target slice, the encoding control unit 110 calculates, by formula (4), the encoding cost when no offset value is set for that offset class; if this is below the encoding cost when an offset value is set for that offset class, no offset value is set for it. However, to support this operation, information indicating the offset classes for which no offset value is set needs to be included in the syntax elements. Alternatively, the offset classes for which no offset value is set may be predetermined. In this case, since the decoding side can know which offset classes have no offset value, information indicating those offset classes need not be included in the syntax elements.
 Moreover, the filter operation and the offset operation need not be applied in every slice. Here, the filter operation means, for example, a convolution using the filter coefficient set, and the offset operation means, for example, an addition using an offset value. For example, the encoding control unit 110 calculates, by formula (4), the encoding cost when the filter operation and the offset operation are not applied to the target slice; if this is below the encoding cost when they are applied, the filter operation and the offset operation are not applied. However, to support this operation, information indicating whether the filter operation and the offset operation are applied to the target slice (that is, whether the filter processing is applied) needs to be included in the syntax elements.
 Alternatively, of the filter operation and the offset operation, only the offset operation may be applied to the target slice. For example, the encoding control unit 110 calculates, by formula (4), the encoding cost when the offset operation is applied to the target slice but the filter operation is not; if this is below the encoding cost when both operations are applied, the filter operation is omitted. However, to support this operation, information indicating whether the filter operation is applied to the target slice needs to be included in the syntax elements.
Further, one of the following may be selected for the target slice: (1) a mode in which the filter operation and the offset operation are applied, (2) a mode in which the offset operation is applied but the filter operation is not, and (3) a mode in which neither the filter operation nor the offset operation is applied. For example, the encoding control unit 110 calculates, according to the above equation (4), the encoding cost of each mode for the target slice and selects the mode that minimizes this cost. To support such an operation, however, information indicating which mode is selected for the target slice needs to be included in the syntax elements.
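The three-way mode decision described above can be sketched as follows. Equation (4) is not reproduced in this passage, so the numeric costs and the name `select_slice_mode` are hypothetical stand-ins for the rate-distortion cost evaluated by the encoding control unit 110.

```python
def select_slice_mode(costs):
    """Pick, among (1) filter and offset, (2) offset only, (3) neither,
    the mode that minimizes the coding cost.

    `costs` maps a mode name to the coding cost computed per equation (4)."""
    return min(costs, key=costs.get)

# Example: the offset-only mode happens to be cheapest for this slice.
mode = select_slice_mode({
    "filter_and_offset": 120.5,
    "offset_only": 118.2,
    "none": 125.0,
})
```

The selected mode would then be signaled as a syntax element, as noted above.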
(Video decoding device)
As shown in FIG. 5, the moving picture decoding apparatus according to the first embodiment includes a moving picture decoding unit 500 and a decoding control unit 507. The moving picture decoding unit 500 includes an entropy decoding unit 501, an inverse quantization and inverse transformation unit 502, an addition unit 503, an offset class setting unit 504, a filtering unit 505, and a predicted image generation unit 506. The decoding control unit 507 controls the operation of each unit of the moving picture decoding unit 500.
The entropy decoding unit 501 receives the encoded data 21 from outside the moving picture decoding unit 500 (for example, from a communication system or a storage system). The encoded data 21 is the same as or similar to the encoded data 18 described above. The entropy decoding unit 501 entropy-decodes the encoded data 21 to generate quantized transform coefficients, encoding parameters 22, filter coefficient set information 23 and offset information 24. The entropy decoding unit 501 outputs the quantized transform coefficients to the inverse quantization and inverse transformation unit 502, outputs the encoding parameters 22 to the decoding control unit 507, and outputs the filter coefficient set information 23 and the offset information 24 to the filtering unit 505.
The inverse quantization and inverse transformation unit 502 receives the quantized transform coefficients from the entropy decoding unit 501, dequantizes them to decode the transform coefficients, and then applies an inverse transform process to the transform coefficients to decode the prediction error image. The inverse quantization and inverse transformation unit 502 outputs the prediction error image to the addition unit 503. Basically, the inverse quantization and inverse transformation unit 502 performs the same or similar processing as the inverse quantization and inverse transformation unit 104 described above. That is, the inverse quantization is performed based on the quantization parameter set by the decoding control unit 507, and the inverse transform process is determined by the transform process performed on the encoding side; for example, the inverse transform process is an IDCT, an inverse wavelet transform or the like.
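A minimal sketch of the dequantization and inverse transform steps, assuming a floating-point one-dimensional IDCT purely for illustration; actual codecs use fast integer two-dimensional transforms applied separably, and the helper names below are not taken from the specification.

```python
import math

def dequantize(qcoeffs, qstep):
    # Inverse quantization: scale each quantized coefficient by the
    # quantization step derived from the quantization parameter.
    return [c * qstep for c in qcoeffs]

def idct_1d(coeffs):
    # Minimal 1-D inverse DCT (DCT-III), the inverse of an orthonormal
    # DCT-II performed on the encoding side.
    n = len(coeffs)
    out = []
    for x in range(n):
        s = coeffs[0] / math.sqrt(n)
        for k in range(1, n):
            s += coeffs[k] * math.sqrt(2.0 / n) * math.cos(
                math.pi * (2 * x + 1) * k / (2 * n))
        out.append(s)
    return out

# Decode a prediction error row from a few quantized coefficients.
residual = idct_1d(dequantize([3, -1, 0, 0], qstep=2.0))
```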
The addition unit 503 receives the predicted image from the predicted image generation unit 506 and the prediction error image from the inverse quantization and inverse transformation unit 502. The addition unit 503 adds the prediction error image to the predicted image to generate the decoded image 25, and outputs the decoded image 25 to the offset class setting unit 504 and the filtering unit 505.
The offset class setting unit 504 receives the decoded image 25 from the addition unit 503 and sets an offset class for each first unit based on the first index. The offset class setting unit 504 generates offset class information 26 indicating the offset class corresponding to each first unit. Basically, the offset class setting unit 504 performs the same or similar processing as the offset class setting unit 106. The offset class setting unit 504 outputs the offset class information 26 to the filtering unit 505.
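One plausible sketch of per-unit offset class assignment. This passage does not define the first index, so classifying by thresholding an activity-like per-unit value is only an assumption, and all names below are hypothetical.

```python
def set_offset_classes(index_values, thresholds):
    """Assign each first unit to an offset class by thresholding its
    per-unit index value (e.g. a local-activity measure, assumed here).
    With k thresholds, units fall into k + 1 offset classes."""
    classes = []
    for v in index_values:
        c = 0
        for t in thresholds:
            if v >= t:
                c += 1
        classes.append(c)
    return classes

# Units with index below 10 -> class 0, 10..19 -> class 1, 20 or more -> class 2.
offset_class_info = set_offset_classes([3, 12, 25, 7], thresholds=[10, 20])
```

Because the decoder derives the same classes from the decoded image, the class of each unit need not itself be signaled.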
The filtering unit 505 receives the decoded image 25 from the addition unit 503, the offset class information 26 from the offset class setting unit 504, and the filter coefficient set information 23 and the offset information 24 from the entropy decoding unit 501. The filtering unit 505 applies a filter process to the decoded image 25 based on the filter coefficient set information 23, the offset information 24 and the offset class information 26 to generate an ALF processed image 27. That is, the filtering unit 505 performs the same or similar processing as the filtering unit 108 described above, and the ALF processed image 27 is the same as or similar to the ALF processed image 17 described above. The filtering unit 505 outputs the ALF processed image 27 to the predicted image generation unit 506 and also supplies it to the outside (for example, a display system) as an output image.
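The filter process of the filtering unit — a convolution with the slice-level filter coefficient set followed by addition of the offset value selected by each unit's offset class — might be sketched as follows. A one-tap filter support is used for brevity instead of a real two-dimensional (e.g. diamond-shaped) support, and the function names are illustrative rather than from the specification.

```python
def alf_filter_pixel(window, coeffs, offset):
    # Filter operation: convolve the decoded-image window with the filter
    # coefficient set; offset operation: add the per-class offset value.
    acc = sum(w * c for w, c in zip(window, coeffs))
    return acc + offset

def alf_process(pixels, classes, coeffs, offsets):
    """One slice-level filter coefficient set `coeffs` is shared by all
    units, while the added offset switches per unit via `classes`."""
    out = []
    for p, c in zip(pixels, classes):
        out.append(alf_filter_pixel([p], coeffs, offsets[c]))
    return out

# Offset class 0 adds +2, class 1 adds -3, while the filter stays fixed.
filtered = alf_process([100, 102, 98], classes=[0, 1, 0],
                       coeffs=[1.0], offsets=[2, -3])
```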
Note that the ALF processed image 27 may be stored in a storage unit (not shown, for example a buffer) that is accessible to the predicted image generation unit 506. The ALF processed image 27 is read out as a reference image by the predicted image generation unit 506 as necessary and used for the prediction process.
The predicted image generation unit 506 performs a prediction process on the output image in units of pixel blocks or in different units to generate a predicted image. The predicted image generation unit 506 may perform the prediction process on the output image based on the ALF processed image 27 described above. That is, the predicted image generation unit 506 performs the same or similar processing as the predicted image generation unit 101 described above. The predicted image generation unit 506 outputs the predicted image to the addition unit 503.
The decoding control unit 507 receives the encoding parameters 22 from the entropy decoding unit 501 and, based on the encoding parameters 22, performs division control of the encoded blocks, quantization control, mode control and the like.
The moving picture decoding unit 500 operates, for example, as shown in FIG. 23. Specifically, the entropy decoding unit 501 entropy-decodes the encoded data 21 to generate the quantized transform coefficients, the encoding parameters 22, the filter coefficient set information 23 and the offset information 24 (step S2301). The inverse quantization and inverse transformation unit 502 dequantizes and inverse-transforms the quantized transform coefficients generated in step S2301 to decode the prediction error image (step S2302). The addition unit 503 adds the prediction error image decoded in step S2302 to the predicted image to generate the decoded image 25 (step S2303). Next, the offset class setting unit 504 and the filtering unit 505 apply the ALF process to the decoded image 25 based on the filter coefficient set information 23 and the offset information 24 obtained in step S2301 (step S2304). Note that the ALF process on the decoding side differs from the ALF process on the encoding side in that no process for setting the filter coefficient set and the offset values corresponding to the respective offset classes is required. As a result of step S2304, the ALF processed image 27 is generated. This series of processes is repeated until decoding of the output image is completed.
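The decoding flow of steps S2301 to S2304 can be outlined as a simple pipeline; the three stage callables below are placeholders standing in for the units of FIG. 5, not actual interfaces from the specification.

```python
def decode_stream(coded_units, entropy_decode, reconstruct, alf):
    """Outline of FIG. 23: entropy decoding (S2301), inverse quantization
    and inverse transform plus prediction addition (S2302-S2303), then the
    ALF process driven by the decoded filter coefficient set and offset
    information (S2304)."""
    outputs = []
    for unit in coded_units:
        syntax = entropy_decode(unit)                                      # S2301
        decoded = reconstruct(syntax["coeffs"])                            # S2302-S2303
        outputs.append(alf(decoded, syntax["filter"], syntax["offsets"]))  # S2304
    return outputs

# Toy stand-ins: halve the "coefficients", then apply a 1-tap filter + offset.
out = decode_stream(
    [{"coeffs": [4], "filter": 1.0, "offsets": 2}],
    entropy_decode=lambda u: u,
    reconstruct=lambda coeffs: [c // 2 for c in coeffs],
    alf=lambda img, f, o: [p * f + o for p in img],
)
```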
As described above, the moving picture encoding apparatus and the moving picture decoding apparatus according to the first embodiment set an offset class for, for example, each first unit in the target slice based on the first index, and set an offset value corresponding to each offset class. These apparatuses can therefore switch the offset value for each first unit while keeping the filter coefficient set fixed within the target slice. That is, a filter process adapted to the local structure within the target slice is possible through the switching of offset values, without causing any switching of the filter coefficient set within the target slice. These apparatuses thus make it possible to improve the encoding efficiency and, in a hardware implementation, to reduce power consumption by reducing the number of switching occurrences.
Various modifications of this embodiment can be envisaged; some of them are described below.
The filter coefficient set and offset value setting unit 107 calculates the filter coefficient set and the offset values that minimize the squared error sum, but this operation may be modified. Specifically, a plurality of filter coefficient set candidates are prepared in advance for the filter coefficient set, and a plurality of offset candidates are prepared in advance for the offset values. Here, one offset candidate holds an offset value corresponding to each offset class (that is, a set of offset values). The filter coefficient set and offset value setting unit 107 then selects, from the plurality of filter coefficient set candidates and the plurality of offset candidates, the one filter coefficient set candidate and the one offset candidate that minimize the squared error sum. With this operation, the filter coefficient set information 14 and the offset information 15 become indexes designating one filter coefficient set candidate and one offset candidate, respectively. The filter coefficient set information 14 and the offset information 15 are encoded for each switching unit of the filter coefficient set (for example, one or more slices, a pixel block, or the like). Note that the information indicating the filter coefficient set candidates and offset candidates corresponding to the respective indexes may be determined in advance between the encoding side and the decoding side, or the encoding side may encode it for each unit larger than the switching unit of the filter coefficient set (for example, a sequence, a plurality of pictures, a plurality of slices, or the like).
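This candidate-selection variant might look as follows, under the simplifying assumption of a one-tap filter model; only the two winning indexes would then be signaled as the filter coefficient set information 14 and the offset information 15. All names are illustrative.

```python
def select_candidates(decoded, target, classes, filter_candidates, offset_candidates):
    """Choose the pair of candidate indexes that minimizes the squared
    error sum between the filtered decoded image and the input (target)
    image. Each offset candidate holds one offset value per offset class."""
    best = None
    for fi, f in enumerate(filter_candidates):
        for oi, offs in enumerate(offset_candidates):
            sse = sum((p * f + offs[c] - t) ** 2
                      for p, c, t in zip(decoded, classes, target))
            if best is None or sse < best[0]:
                best = (sse, fi, oi)
    return best[1], best[2]

# The first filter candidate with per-class offsets [2, -1] matches exactly.
fi, oi = select_candidates([10, 20], [12, 19], [0, 1],
                           filter_candidates=[1.0, 2.0],
                           offset_candidates=[[2, -1], [0, 0]])
```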
Alternatively, only one of the filter coefficient set candidates and the offset candidates may be used. That is, the filter coefficient set and offset value setting unit 107 may select the filter coefficient set from a plurality of filter coefficient set candidates and set the offset value corresponding to each offset class by calculation. Conversely, the filter coefficient set and offset value setting unit 107 may set the filter coefficient set by calculation and select the set of offset values corresponding to the respective offset classes from a plurality of offset candidates.
Further, a filter coefficient set and the set of offset values corresponding to the respective offset classes may be combined into one set, and a plurality of such combined sets may be prepared and used as candidates.
According to these modifications, one or both of the filter coefficient set information 14 and the offset information 15 become indexes, so the overhead can be reduced. Therefore, as long as using a filter coefficient set candidate or offset candidate prepared in advance does not excessively increase the squared error sum within the switching unit of the filter coefficient set, the overhead reduction contributes more strongly and the encoding efficiency improves.
In this embodiment, SAO processing is not combined with the ALF processing. However, since this embodiment switches among a plurality of offset values for each first unit, an encoding-distortion improvement (image quality improvement) effect that is the same as or similar to that of SAO processing can be obtained. Furthermore, according to this embodiment, no delay attributable to SAO processing occurs. Note that a combination of ALF processing and SAO processing is also possible, as described in the second embodiment.
It is also assumed in this embodiment that the filter coefficient set and the offset values corresponding to the respective offset classes are switched on a slice basis. However, this switching unit may be changed to a picture, frame or field basis. Alternatively, a region obtained by dividing a picture by a method different from slices (which may be called a loop filter slice) may serve as the switching unit. In any case, a filter coefficient set and the offset values corresponding to the respective offset classes are set for each switching unit, and the filter coefficient set information 14 and the offset information 15 indicating them are signaled.
(Second Embodiment)
The first embodiment may be combined with SAO processing. The second embodiment relates to a combination of the first embodiment and SAO processing.
(Moving picture encoding device)
As shown in FIG. 6, the moving picture encoding apparatus according to the second embodiment includes a moving picture encoding unit 600 and an encoding control unit 610. The moving picture encoding unit 600 includes a predicted image generation unit 601, a subtraction unit 602, a transform and quantization unit 603, an inverse quantization and inverse transformation unit 604, an addition unit 605, an offset class setting unit 606, a filter coefficient set and offset value setting unit 607, a filtering unit 608, an entropy encoding unit 609, a pixel adaptive offset setting unit 611, and a pixel adaptive offset processing unit 612. The offset class setting unit 606, the filter coefficient set and offset value setting unit 607 and the filtering unit 608 may be referred to as an ALF processing unit. The encoding control unit 610 controls the operation of each unit of the moving picture encoding unit 600.
The predicted image generation unit 601, the subtraction unit 602, the transform and quantization unit 603, the inverse quantization and inverse transformation unit 604 and the encoding control unit 610 are the same as or similar to the predicted image generation unit 101, the subtraction unit 102, the transform and quantization unit 103, the inverse quantization and inverse transformation unit 104 and the encoding control unit 110, respectively, and thus descriptions thereof are omitted.
The addition unit 605 differs from the addition unit 105 in the output destination of the decoded image 12. Specifically, the addition unit 605 outputs the decoded image 12 to the pixel adaptive offset setting unit 611 and the pixel adaptive offset processing unit 612. In other respects, the addition unit 605 is the same as or similar to the addition unit 105.
The offset class setting unit 606 differs from the offset class setting unit 106 in that it receives the SAO processed image 31 from the pixel adaptive offset processing unit 612 instead of the decoded image 12. In other respects, the offset class setting unit 606 is the same as or similar to the offset class setting unit 106.
The filter coefficient set and offset value setting unit 607 differs from the filter coefficient set and offset value setting unit 107 in that it receives the SAO processed image 31 from the pixel adaptive offset processing unit 612 instead of the decoded image 12. In other respects, the filter coefficient set and offset value setting unit 607 is the same as or similar to the filter coefficient set and offset value setting unit 107.
The filtering unit 608 differs from the filtering unit 108 in that it receives the SAO processed image 31 from the pixel adaptive offset processing unit 612 instead of the decoded image 12. In other respects, the filtering unit 608 is the same as or similar to the filtering unit 108.
The entropy encoding unit 609 differs from the entropy encoding unit 109 in that it entropy-encodes the pixel adaptive offset information 19, in addition to the quantized transform coefficients, the encoding parameters, the filter coefficient set information 14 and the offset information 15, to generate the encoded data 18. The entropy encoding unit 609 receives the pixel adaptive offset information 19 from the pixel adaptive offset setting unit 611. In other respects, the entropy encoding unit 609 is the same as or similar to the entropy encoding unit 109.
The pixel adaptive offset setting unit 611 acquires the input image 11 from outside the moving picture encoding unit 600 and receives the decoded image 12 from the addition unit 605. Based on the input image 11 and the decoded image 12, the pixel adaptive offset setting unit 611 sets the parameters used in the SAO process (for example, the offset value of each pixel). The pixel adaptive offset setting unit 611 generates pixel adaptive offset information 19 indicating the set parameters and outputs it to the entropy encoding unit 609 and the pixel adaptive offset processing unit 612. Note that the algorithm by which the pixel adaptive offset setting unit 611 sets the parameters is not particularly restricted.
The pixel adaptive offset processing unit 612 receives the decoded image 12 from the addition unit 605 and the pixel adaptive offset information 19 from the pixel adaptive offset setting unit 611. The pixel adaptive offset processing unit 612 applies the SAO process based on the pixel adaptive offset information 19 to the decoded image 12 to generate the SAO processed image 31, and outputs the SAO processed image 31 to the offset class setting unit 606, the filter coefficient set and offset value setting unit 607 and the filtering unit 608.
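Since the specification leaves the SAO parameter-setting algorithm unrestricted, the following band-offset-style sketch is just one illustrative possibility for what the pixel adaptive offset processing unit might apply; the function name, band layout and sample values are all assumptions.

```python
def sao_band_offset(pixels, band_offsets, num_bands=32, max_val=255):
    """Band-offset style SAO: classify each pixel by its intensity band
    and add that band's signaled offset. Bands without a signaled offset
    are left unchanged."""
    band_size = (max_val + 1) // num_bands
    out = []
    for p in pixels:
        band = min(p // band_size, num_bands - 1)
        out.append(p + band_offsets.get(band, 0))
    return out

# Only bands 0 and 12 carry offsets; all other pixels pass through.
sao_image = sao_band_offset([5, 100, 250], band_offsets={0: 3, 12: -2})
```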
(Video decoding device)
As shown in FIG. 7, the moving picture decoding apparatus according to the second embodiment includes a moving picture decoding unit 700 and a decoding control unit 707. The moving picture decoding unit 700 includes an entropy decoding unit 701, an inverse quantization and inverse transformation unit 702, an addition unit 703, an offset class setting unit 704, a filtering unit 705, a predicted image generation unit 706, and a pixel adaptive offset processing unit 708. The decoding control unit 707 controls the operation of each unit of the moving picture decoding unit 700.
The inverse quantization and inverse transformation unit 702, the predicted image generation unit 706 and the decoding control unit 707 are the same as or similar to the inverse quantization and inverse transformation unit 502, the predicted image generation unit 506 and the decoding control unit 507, respectively, and thus descriptions thereof are omitted.
The entropy decoding unit 701 differs from the entropy decoding unit 501 in that it entropy-decodes the encoded data 21 to generate the pixel adaptive offset information 28 in addition to the quantized transform coefficients, the encoding parameters 22, the filter coefficient set information 23 and the offset information 24. The entropy decoding unit 701 outputs the pixel adaptive offset information 28 to the pixel adaptive offset processing unit 708. In other respects, the entropy decoding unit 701 is the same as or similar to the entropy decoding unit 501.
The addition unit 703 differs from the addition unit 503 in the output destination of the decoded image 25. Specifically, the addition unit 703 outputs the decoded image 25 to the pixel adaptive offset processing unit 708. In other respects, the addition unit 703 is the same as or similar to the addition unit 503.
The offset class setting unit 704 differs from the offset class setting unit 504 in that it receives the SAO processed image 29 from the pixel adaptive offset processing unit 708 instead of the decoded image 25. In other respects, the offset class setting unit 704 is the same as or similar to the offset class setting unit 504.
The filtering unit 705 differs from the filtering unit 505 in that it receives the SAO processed image 29 from the pixel adaptive offset processing unit 708 instead of the decoded image 25. In other respects, the filtering unit 705 is the same as or similar to the filtering unit 505.
The pixel adaptive offset processing unit 708 performs the same or similar processing as the pixel adaptive offset processing unit 612. That is, the pixel adaptive offset processing unit 708 receives the decoded image 25 from the addition unit 703 and the pixel adaptive offset information 28 from the entropy decoding unit 701, applies the SAO process based on the pixel adaptive offset information 28 to the decoded image 25 to generate the SAO processed image 29, and outputs the SAO processed image 29 to the offset class setting unit 704 and the filtering unit 705.
As described above, the moving picture encoding apparatus and the moving picture decoding apparatus according to the second embodiment combine SAO processing with the first embodiment. These apparatuses can therefore obtain the image quality improvement effect of the SAO process together with effects that are the same as or similar to those of the first embodiment. Note that the order of the SAO process and the ALF process may be reversed.
(Third embodiment)
The first embodiment may be combined with deblocking filter processing. The third embodiment relates to a combination of the first embodiment and deblocking filter processing.
(Moving picture encoding device)
As shown in FIG. 8, the moving picture encoding apparatus according to the third embodiment includes a moving picture encoding unit 800 and an encoding control unit 810. The moving picture encoding unit 800 includes a predicted image generation unit 801, a subtraction unit 802, a transform and quantization unit 803, an inverse quantization and inverse transformation unit 804, an addition unit 805, an offset class setting unit 806, a filter coefficient set and offset value setting unit 807, a filtering unit 808, an entropy encoding unit 809, and a deblocking filter processing unit 811. The offset class setting unit 806, the filter coefficient set and offset value setting unit 807 and the filtering unit 808 may be referred to as an ALF processing unit. The encoding control unit 810 controls the operation of each unit of the moving picture encoding unit 800.
The predicted image generation unit 801, the subtraction unit 802, the transform and quantization unit 803, the inverse quantization and inverse transformation unit 804, the entropy encoding unit 809 and the encoding control unit 810 are the same as or similar to the predicted image generation unit 101, the subtraction unit 102, the transform and quantization unit 103, the inverse quantization and inverse transformation unit 104, the entropy encoding unit 109 and the encoding control unit 110, respectively, and thus descriptions thereof are omitted.
The addition unit 805 differs from the addition unit 105 in the output destination of the decoded image 12. Specifically, the addition unit 805 outputs the decoded image 12 to the deblocking filter processing unit 811. In other respects, the addition unit 805 is the same as or similar to the addition unit 105.
 オフセットクラス設定部806は、復号画像12の代わりにデブロッキングフィルタ処理画像32をデブロッキングフィルタ処理部811から入力する点において、オフセットクラス設定部106とは異なる。オフセットクラス設定部806は、その他の点において、オフセットクラス設定部106と同一または類似である。 The offset class setting unit 806 is different from the offset class setting unit 106 in that the deblocking filter processed image 32 is input from the deblocking filter processing unit 811 instead of the decoded image 12. The offset class setting unit 806 is the same as or similar to the offset class setting unit 106 in other points.
 フィルタ係数セット及びオフセット値設定部807は、復号画像12の代わりにデブロッキングフィルタ処理画像32をデブロッキングフィルタ処理部811から入力する点において、フィルタ係数セット及びオフセット値設定部107とは異なる。フィルタ係数セット及びオフセット値設定部807は、その他の点において、フィルタ係数セット及びオフセット値設定部107と同一または類似である。 The filter coefficient set and offset value setting unit 807 is different from the filter coefficient set and offset value setting unit 107 in that the deblocking filter processed image 32 is input from the deblocking filter processing unit 811 instead of the decoded image 12. The filter coefficient set and offset value setting unit 807 is the same as or similar to the filter coefficient set and offset value setting unit 107 in other points.
 The filtering unit 808 differs from the filtering unit 108 in that it receives the deblocking-filtered image 32 from the deblocking filter processing unit 811 instead of the decoded image 12. In other respects, the filtering unit 808 is the same as or similar to the filtering unit 108.
 The deblocking filter processing unit 811 receives the decoded image 12 from the addition unit 805, performs deblocking filter processing on the decoded image 12, and obtains the deblocking-filtered image 32. The deblocking filter processing can be expected to improve image quality, for example by suppressing block distortion contained in the decoded image 12. The deblocking filter processing unit 811 outputs the deblocking-filtered image 32 to the offset class setting unit 806, the filter coefficient set and offset value setting unit 807, and the filtering unit 808.
 (Video decoding device)
 As illustrated in FIG. 9, the video decoding device according to the third embodiment includes a video decoding unit 900 and a decoding control unit 907. The video decoding unit 900 includes an entropy decoding unit 901, an inverse quantization and inverse transform unit 902, an addition unit 903, an offset class setting unit 904, a filtering unit 905, a predicted image generation unit 906, and a deblocking filter processing unit 908. The decoding control unit 907 controls the operation of each unit of the video decoding unit 900.
 The entropy decoding unit 901, the inverse quantization and inverse transform unit 902, the predicted image generation unit 906, and the decoding control unit 907 are the same as or similar to the entropy decoding unit 501, the inverse quantization and inverse transform unit 502, the predicted image generation unit 506, and the decoding control unit 507, respectively, and thus their description is omitted.
 The addition unit 903 differs from the addition unit 503 in the output destination of the decoded image 25. Specifically, the addition unit 903 outputs the decoded image 25 to the deblocking filter processing unit 908. In other respects, the addition unit 903 is the same as or similar to the addition unit 503.
 The offset class setting unit 904 differs from the offset class setting unit 504 in that it receives the deblocking-filtered image 41 from the deblocking filter processing unit 908 instead of the decoded image 25. In other respects, the offset class setting unit 904 is the same as or similar to the offset class setting unit 504.
 The filtering unit 905 differs from the filtering unit 505 in that it receives the deblocking-filtered image 41 from the deblocking filter processing unit 908 instead of the decoded image 25. In other respects, the filtering unit 905 is the same as or similar to the filtering unit 505.
 The deblocking filter processing unit 908 receives the decoded image 25 from the addition unit 903, performs deblocking filter processing on the decoded image 25, and obtains the deblocking-filtered image 41. That is, the deblocking filter processing unit 908 performs processing that is the same as or similar to that of the deblocking filter processing unit 811. The deblocking filter processing unit 908 outputs the deblocking-filtered image 41 to the offset class setting unit 904 and the filtering unit 905.
 As described above, the video encoding device and the video decoding device according to the third embodiment combine deblocking filter processing with the first embodiment. These devices therefore provide both the image quality improvement of the deblocking filter processing and effects that are the same as or similar to those of the first embodiment. Note that the order of the deblocking filter processing and the ALF processing may be reversed.
 (Fourth embodiment)
 The first embodiment may be combined with deblocking filter processing and SAO processing. The fourth embodiment relates to such a combination of the first embodiment with deblocking filter processing and SAO processing.
 (Video encoding device)
 As shown in FIG. 10, the video encoding device according to the fourth embodiment includes a video encoding unit 1000 and an encoding control unit 1010. The video encoding unit 1000 includes a predicted image generation unit 1001, a subtraction unit 1002, a transform and quantization unit 1003, an inverse quantization and inverse transform unit 1004, an addition unit 1005, an offset class setting unit 1006, a filter coefficient set and offset value setting unit 1007, a filtering unit 1008, an entropy encoding unit 1009, a pixel adaptive offset setting unit 1011, a pixel adaptive offset processing unit 1012, and a deblocking filter processing unit 1013. The offset class setting unit 1006, the filter coefficient set and offset value setting unit 1007, and the filtering unit 1008 may collectively be referred to as an ALF processing unit. The encoding control unit 1010 controls the operation of each unit of the video encoding unit 1000.
 The predicted image generation unit 1001, the subtraction unit 1002, the transform and quantization unit 1003, the inverse quantization and inverse transform unit 1004, the offset class setting unit 1006, the filter coefficient set and offset value setting unit 1007, the filtering unit 1008, the entropy encoding unit 1009, and the encoding control unit 1010 are the same as or similar to the predicted image generation unit 601, the subtraction unit 602, the transform and quantization unit 603, the inverse quantization and inverse transform unit 604, the offset class setting unit 606, the filter coefficient set and offset value setting unit 607, the filtering unit 608, the entropy encoding unit 609, and the encoding control unit 610, respectively, and thus their description is omitted.
 The addition unit 1005 differs from the addition unit 605 in the output destination of the decoded image 12. Specifically, the addition unit 1005 outputs the decoded image 12 to the deblocking filter processing unit 1013. In other respects, the addition unit 1005 is the same as or similar to the addition unit 605.
 The pixel adaptive offset setting unit 1011 differs from the pixel adaptive offset setting unit 611 in that it receives the deblocking-filtered image 32 from the deblocking filter processing unit 1013 instead of the decoded image 12. In other respects, the pixel adaptive offset setting unit 1011 is the same as or similar to the pixel adaptive offset setting unit 611.
 The pixel adaptive offset processing unit 1012 differs from the pixel adaptive offset processing unit 612 in that it receives the deblocking-filtered image 32 from the deblocking filter processing unit 1013 instead of the decoded image 12. In other respects, the pixel adaptive offset processing unit 1012 is the same as or similar to the pixel adaptive offset processing unit 612.
 The deblocking filter processing unit 1013 differs from the deblocking filter processing unit 811 in the output destination of the deblocking-filtered image 32. Specifically, the deblocking filter processing unit 1013 outputs the deblocking-filtered image 32 to the pixel adaptive offset setting unit 1011 and the pixel adaptive offset processing unit 1012. In other respects, the deblocking filter processing unit 1013 is the same as or similar to the deblocking filter processing unit 811.
 (Video decoding device)
 As illustrated in FIG. 11, the video decoding device according to the fourth embodiment includes a video decoding unit 1100 and a decoding control unit 1107. The video decoding unit 1100 includes an entropy decoding unit 1101, an inverse quantization and inverse transform unit 1102, an addition unit 1103, an offset class setting unit 1104, a filtering unit 1105, a predicted image generation unit 1106, a deblocking filter processing unit 1108, and a pixel adaptive offset processing unit 1109. The decoding control unit 1107 controls the operation of each unit of the video decoding unit 1100.
 The entropy decoding unit 1101, the inverse quantization and inverse transform unit 1102, the offset class setting unit 1104, the filtering unit 1105, the predicted image generation unit 1106, and the decoding control unit 1107 are the same as or similar to the entropy decoding unit 701, the inverse quantization and inverse transform unit 702, the offset class setting unit 704, the filtering unit 705, the predicted image generation unit 706, and the decoding control unit 707, respectively, and thus their description is omitted.
 The addition unit 1103 differs from the addition unit 703 in the output destination of the decoded image 25. Specifically, the addition unit 1103 outputs the decoded image 25 to the deblocking filter processing unit 1108. In other respects, the addition unit 1103 is the same as or similar to the addition unit 703.
 The deblocking filter processing unit 1108 differs from the deblocking filter processing unit 908 in the output destination of the deblocking-filtered image 41. Specifically, the deblocking filter processing unit 1108 outputs the deblocking-filtered image 41 to the pixel adaptive offset processing unit 1109. In other respects, the deblocking filter processing unit 1108 is the same as or similar to the deblocking filter processing unit 908.
 The pixel adaptive offset processing unit 1109 differs from the pixel adaptive offset processing unit 708 in that it receives the deblocking-filtered image 41 from the deblocking filter processing unit 1108 instead of the decoded image 25. In other respects, the pixel adaptive offset processing unit 1109 is the same as or similar to the pixel adaptive offset processing unit 708.
 As described above, the video encoding device and the video decoding device according to the fourth embodiment combine deblocking filter processing and SAO processing with the first embodiment. These devices therefore provide both the image quality improvement of the deblocking filter processing and the SAO processing and effects that are the same as or similar to those of the first embodiment. Note that the order of the deblocking filter processing, the SAO processing, and the ALF processing may be changed.
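 The wiring of these stages can be sketched as follows. This is a minimal sketch, not the embodiment's implementation: the three stage functions are hypothetical identity placeholders, since this section specifies only the order in which the decoded image passes through the deblocking, SAO, and ALF stages, not the internals of each stage.

```python
# Hypothetical sketch of the fourth embodiment's in-loop filter chain.
# The stage bodies are identity placeholders (an assumption); the
# embodiments specify only the wiring:
# decoded image -> deblocking -> SAO -> ALF.

def deblocking_filter(image):
    # Placeholder: would suppress block distortion at block boundaries.
    return image

def sao_process(image, sao_params):
    # Placeholder: pixel adaptive offset (SAO) processing.
    return image

def alf_process(image, filter_coeffs, offsets, offset_classes):
    # Placeholder: adaptive loop filter with per-class offsets.
    return image

def loop_filter_chain(decoded_image, sao_params, filter_coeffs,
                      offsets, offset_classes):
    img = deblocking_filter(decoded_image)          # units 1013 / 1108
    img = sao_process(img, sao_params)              # units 1012 / 1109
    img = alf_process(img, filter_coeffs, offsets,  # ALF processing units
                      offset_classes)
    return img
```

As the closing sentence above notes, the order of the three calls may be changed; only the chaining of one stage's output into the next is essential here.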
 (Fifth embodiment)
 The filter coefficient set and the offset value of each offset class set in the first to fourth embodiments are not limited to ALF processing and may also be used for post-filter processing. The fifth embodiment relates to such post-filter processing.
 (Video encoding device)
 For simplicity, the case in which the present embodiment is applied to the video encoding device according to the first embodiment is described. Note that the present embodiment may also be applied to the video encoding devices according to the other embodiments.
 As shown in FIG. 12, the video encoding device according to the fifth embodiment includes a video encoding unit 1200 and an encoding control unit 1209. The video encoding unit 1200 includes a predicted image generation unit 1201, a subtraction unit 1202, a transform and quantization unit 1203, an inverse quantization and inverse transform unit 1204, an addition unit 1205, an offset class setting unit 1206, a filter coefficient set and offset value setting unit 1207, and an entropy encoding unit 1208. The encoding control unit 1209 controls the operation of each unit of the video encoding unit 1200.
 The subtraction unit 1202, the transform and quantization unit 1203, the inverse quantization and inverse transform unit 1204, the entropy encoding unit 1208, and the encoding control unit 1209 are the same as or similar to the subtraction unit 102, the transform and quantization unit 103, the inverse quantization and inverse transform unit 104, the entropy encoding unit 109, and the encoding control unit 110, respectively, and thus their description is omitted.
 The addition unit 1205 differs from the addition unit 105 in the output destination of the decoded image 12. Specifically, since the video encoding unit 1200 contains no component corresponding to the filtering unit 108, the addition unit 1205 outputs the decoded image 12 to the predicted image generation unit 1201, the offset class setting unit 1206, and the filter coefficient set and offset value setting unit 1207. In other respects, the addition unit 1205 is the same as or similar to the addition unit 105.
 Note that the decoded image 12 may be stored in a storage unit (not shown), such as a buffer, that is accessible by the predicted image generation unit 1201. The decoded image 12 is read out as a reference image by the predicted image generation unit 1201 as necessary and used for prediction processing.
 The offset class setting unit 1206 differs from the offset class setting unit 106 in the output destination of the offset class information 13. Specifically, since the video encoding unit 1200 contains no component corresponding to the filtering unit 108, the offset class setting unit 1206 outputs the offset class information 13 to the filter coefficient set and offset value setting unit 1207. In other respects, the offset class setting unit 1206 is the same as or similar to the offset class setting unit 106.
 The filter coefficient set and offset value setting unit 1207 differs from the filter coefficient set and offset value setting unit 107 in the output destination of the filter coefficient set information 14 and the offset information 15. Specifically, since the video encoding unit 1200 contains no component corresponding to the filtering unit 108, the filter coefficient set and offset value setting unit 1207 outputs the filter coefficient set information 14 and the offset information 15 to the entropy encoding unit 1208. In other respects, the filter coefficient set and offset value setting unit 1207 is the same as or similar to the filter coefficient set and offset value setting unit 107.
 The predicted image generation unit 1201 differs from the predicted image generation unit 101 in that it performs prediction processing on the input image 11 based on the decoded image 12 rather than on the ALF-processed image 17. In other respects, the predicted image generation unit 1201 is the same as or similar to the predicted image generation unit 101.
 (Video decoding device)
 For simplicity, the case in which the present embodiment is applied to the video decoding device according to the first embodiment is described. Note that the present embodiment may also be applied to the video decoding devices according to the other embodiments.
 As illustrated in FIG. 13, the video decoding device according to the fifth embodiment includes a video decoding unit 1300 and a decoding control unit 1307. The video decoding unit 1300 includes an entropy decoding unit 1301, an inverse quantization and inverse transform unit 1302, an addition unit 1303, an offset class setting unit 1304, a filtering unit 1305, and a predicted image generation unit 1306. The decoding control unit 1307 controls the operation of each unit of the video decoding unit 1300.
 The entropy decoding unit 1301, the inverse quantization and inverse transform unit 1302, the offset class setting unit 1304, and the decoding control unit 1307 are the same as or similar to the entropy decoding unit 501, the inverse quantization and inverse transform unit 502, the offset class setting unit 504, and the decoding control unit 507, respectively, and thus their description is omitted.
 The addition unit 1303 differs from the addition unit 503 in the output destination of the decoded image 25. Specifically, the addition unit 1303 outputs the decoded image 25 not only to the offset class setting unit 1304 and the filtering unit 1305 but also to the predicted image generation unit 1306. In other respects, the addition unit 1303 is the same as or similar to the addition unit 503.
 Note that the decoded image 25 may be stored in a storage unit (not shown), such as a buffer, that is accessible by the predicted image generation unit 1306. The decoded image 25 is read out as a reference image by the predicted image generation unit 1306 as necessary and used for prediction processing.
 The predicted image generation unit 1306 differs from the predicted image generation unit 506 in that it performs prediction processing for the output image based on the decoded image 25 rather than on the ALF-processed image 27. In other respects, the predicted image generation unit 1306 is the same as or similar to the predicted image generation unit 506.
 The filtering unit 1305 receives the filter coefficient set information 23 and the offset information 24 from the entropy decoding unit 1301, the decoded image 25 from the addition unit 1303, and the offset class information 26 from the offset class setting unit 1304. The filtering unit 1305 filters the decoded image 25 based on the filter coefficient set information 23, the offset information 24, and the offset class information 26 to obtain a post-filtered image 43. The filtering unit 1305 supplies the post-filtered image 43 to the outside (for example, a display system) as an output image. Note that the filtering unit 1305 basically performs processing that is the same as or similar to that of the filtering unit 505, but differs in that no corresponding processing is performed on the encoding side.
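 The post-filter step performed by the filtering unit 1305 can be sketched roughly as follows. This is an illustrative assumption rather than the embodiment's actual filter: a 1-D 3-tap filter with edge clamping stands in for the 2-D filter shape (which is selected by filter_type_idx), and each pixel's offset class simply indexes the signaled table of offset values.

```python
# Hypothetical sketch of the filtering done by filtering unit 1305:
# each pixel is filtered with the signaled coefficient set, then the
# offset value of that pixel's offset class is added. The 1-D filter
# and edge clamping are assumptions made for illustration.

def post_filter(decoded, filter_coeffs, offset_values, offset_classes):
    n = len(decoded)
    half = len(filter_coeffs) // 2
    out = []
    for i in range(n):
        # Convolve around pixel i, clamping indices at the edges.
        acc = 0
        for k, c in enumerate(filter_coeffs):
            j = min(max(i + k - half, 0), n - 1)
            acc += c * decoded[j]
        # Add the offset assigned to this pixel's offset class.
        out.append(acc + offset_values[offset_classes[i]])
    return out
```

For example, with coefficients [0.25, 0.5, 0.25], offset values [2, -1], and per-pixel classes [0, 1, 0], a flat signal [10, 10, 10] filters to 10 everywhere and the class offsets then yield [12.0, 9.0, 12.0].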
 As described above, the video encoding device and the video decoding device according to the fifth embodiment apply the first embodiment to post-filter processing. These devices can therefore obtain effects that are the same as or similar to those of the first embodiment even when post-filter processing is performed instead of ALF processing.
 Note that, as described above, the present embodiment is applicable to any of the first to fourth embodiments. That is, the present embodiment may be combined with SAO processing, deblocking filter processing, and the like.
 (Sixth embodiment)
 The sixth embodiment relates to a technique for making the total number of offset classes (or, equivalently, the total number of switchable offset values) in the first to fifth embodiments variable, for example on a per-slice basis.
 (Video encoding device)
 For simplicity, the case in which the present embodiment is applied to the video encoding device of FIG. 1 is described. Note that the present embodiment may also be applied to the video encoding devices according to the other embodiments.
 As described above, the encoding control unit 110 performs, on the video encoding unit 100, division control of encoding blocks, feedback control of the generated code amount, quantization control, mode control, and the like. In the present embodiment, the encoding control unit 110 further controls the total number of offset classes set within the target slice.
 The encoding control unit 110 controls, for example, whether adjacent offset classes are merged with each other, and generates offset merge information. The offset merge information is set, for example, for each pair of adjacent offset classes. The encoding control unit 110 may generate the offset merge information by selecting, from among a plurality of merge/no-merge combinations, the combination that minimizes the encoding cost based on Equation (4) above.
 The offset merge information may be a one-bit flag indicating, for each pair of adjacent offset classes, whether the pair is merged. Alternatively, when a plurality of merge/no-merge patterns are prepared in advance, the offset merge information may be an index designating one of the combinations.
 Merging offset classes reduces the total number of offset classes set within the target slice. In the example of FIG. 14, offset classes 1, 2, and 3 are merged and offset classes 4 and 5 are merged, so the total number of offset classes decreases from 5 to 2. The total number of offset values switchable within the target slice therefore also decreases from 5 to 2. The encoding control unit 110 outputs the offset merge information to the entropy encoding unit 109. Furthermore, the encoding control unit 110 controls the offset class setting unit 106 based on the offset merge information.
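 One way to derive the per-class mapping implied by such pairwise merge flags can be sketched as follows. The flag convention (flag i set means class i+1 joins the merged class of class i) and the zero-based class indexing are assumptions made for illustration; the embodiment specifies only that adjacent classes may be merged.

```python
# Hypothetical sketch: map each original offset class to its merged
# class given one merge flag per adjacent pair. With 5 initial classes
# and flags [1, 1, 0, 1] (the FIG. 14 example: classes 1-3 merge and
# classes 4-5 merge), the class count drops from 5 to 2.

def build_merge_map(num_classes, merge_flags):
    assert len(merge_flags) == num_classes - 1
    merged = [0] * num_classes
    for i in range(1, num_classes):
        # Start a new merged class only where the pair is NOT merged.
        merged[i] = merged[i - 1] + (0 if merge_flags[i - 1] else 1)
    return merged  # per-class index into the reduced offset table

mapping = build_merge_map(5, [1, 1, 0, 1])   # [0, 0, 0, 1, 1]
num_merged = max(mapping) + 1                # 2 merged classes remain
```

Only `num_merged` offset values then need to be signaled for the slice, which is the overhead reduction this embodiment targets.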
 As described above, the offset class setting unit 106 sets an offset class for each first unit of the decoded image 12 based on the first index. In the present embodiment, the offset class setting unit 106 is further controlled by the encoding control unit 110 to perform merge processing on the set offset classes. Specifically, the offset class setting unit 106 merges the offset classes so as to match the offset merge information. The offset class setting unit 106 generates offset class information 13 indicating the post-merge offset class corresponding to each first unit, and outputs the offset class information 13 to the filter coefficient set and offset value setting unit 107 and the filtering unit 108.
 As described above, the entropy encoding unit 109 entropy-encodes the quantized transform coefficients, the filter coefficient set information 14, the offset information 15, and the encoding parameters to generate the encoded data 18. In the present embodiment, the entropy encoding unit 109 further receives the offset merge information from the encoding control unit 110, entropy-encodes it, and multiplexes it into the encoded data 18.
 In the present embodiment, the filter coefficient set information 14, the offset merge information, and the offset information 15 are described according to, for example, the syntax structure shown in FIG. 15. The syntax of FIG. 15 is described, for example, in units of slices. In FIG. 15, filter_type_idx, NumOfFilterCoeff, and filter_coeff[i] are the same as or similar to those in FIG. 4, and their description is therefore omitted. MaxNumOfOffset represents the total number of offset classes before merge processing and corresponds to NumOfOffset in FIG. 4. offset_merge_flag[i] is a one-bit flag indicating, for each pair of adjacent offset classes, whether the pair is merged, and corresponds to the offset merge information. NumOfOffset represents the total number of offset classes after merge processing (that is, the total number of offset values switchable within the target slice). The offset value corresponding to the post-merge offset class identified by the variable i is described as offset_value[i]. These syntax elements are described in units of slices, entropy-encoded on the encoding side, and transmitted to the decoding side as part of the encoded data 18.
 (Video decoding device)
 For simplicity, the case in which the present embodiment is applied to the video decoding device of FIG. 5 is described. Note that the present embodiment may also be applied to the video decoding devices according to the other embodiments.
 As described above, the entropy decoding unit 501 entropy-decodes the encoded data 21 to obtain the quantized transform coefficients, the encoding parameters 22, the filter coefficient set information 23, and the offset information 24. In the present embodiment, the entropy decoding unit 501 further entropy-decodes the encoded data 21 to obtain the offset merge information, and outputs the offset merge information to the decoding control unit 507.
 復号制御部507は、前述の通り、符号化パラメータ22に基づいて、符号化ブロックの分割制御、量子化制御及びモード制御などを行う。更に、復号制御部507は、エントロピー復号部501からオフセットマージ情報を入力し、これに基づいてオフセットクラス設定部504を制御する。 As described above, the decoding control unit 507 performs coding block division control, quantization control, mode control, and the like based on the coding parameter 22. Further, the decoding control unit 507 receives the offset merge information from the entropy decoding unit 501 and controls the offset class setting unit 504 based on this information.
 オフセットクラス設定部504は、前述の通り、復号画像25の第1の単位毎に第1の指標に基づいてオフセットクラスを設定する。更に、本実施形態において、オフセットクラス設定部504は、復号制御部507によって制御され、設定したオフセットクラスに対してマージ処理を行う。具体的には、オフセットクラス設定部504は、オフセットマージ情報に合致するようにオフセットクラスに対してマージ処理を行う。オフセットクラス設定部504は、第1の単位毎に対応するマージ処理後のオフセットクラスを示すオフセットクラス情報26をフィルタリング部505へと出力する。 The offset class setting unit 504 sets an offset class based on the first index for each first unit of the decoded image 25 as described above. Furthermore, in the present embodiment, the offset class setting unit 504 is controlled by the decoding control unit 507 and performs a merge process on the set offset class. Specifically, the offset class setting unit 504 performs a merge process on the offset class so as to match the offset merge information. The offset class setting unit 504 outputs the offset class information 26 indicating the offset class after the merge processing corresponding to each first unit to the filtering unit 505.
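The merge processing performed by the offset class setting unit 504 can be sketched, non-normatively, as a mapping from each pre-merge offset class to its post-merge class index. Reading offset_merge_flag[i] as "merge class i with class i+1" is an assumption based on the description above.

```python
def merged_class_map(max_num_offset, merge_flags):
    """Map pre-merge offset classes 0..max_num_offset-1 to post-merge indices.

    merge_flags[i] == 1 is taken to mean that adjacent classes i and i+1
    are merged (an illustrative reading of offset_merge_flag).
    """
    mapping = []
    cls = 0
    for i in range(max_num_offset):
        mapping.append(cls)
        # start a new post-merge class unless class i is merged with class i+1
        if i < max_num_offset - 1 and not merge_flags[i]:
            cls += 1
    return mapping
```

With five classes, flags [1, 1, 1, 1] collapse everything into a single class, while [1, 0, 0, 1] leaves three, consistent with the FIG. 14 discussion.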
　以上説明したように、第6の実施形態に係る動画像符号化装置及び動画像復号装置は、例えばスライス毎にオフセットクラスの総数を可変とする。従って、これら動画像符号化装置及び動画像復号装置によれば、オフセット情報の総数の制御情報(例えばオフセットマージ情報)をシグナリングすることでオフセット情報によるオーバーヘッドを削減できるので、符号化歪を抑えながらオフセット値に関する制御情報によるオーバーヘッドを削減し、符号化効率を向上させることができる。 As described above, the moving image encoding apparatus and moving image decoding apparatus according to the sixth embodiment make the total number of offset classes variable, for example, for each slice. Therefore, according to this moving image encoding apparatus and this moving image decoding apparatus, the overhead due to the offset information can be reduced by signaling control information on the total number of offset information items (for example, the offset merge information), so that the overhead due to the control information related to the offset values can be reduced while suppressing coding distortion, and the coding efficiency can be improved.
 本実施形態は様々な変形例が想定される。以下に、係る変形例の一部が記載される。 
 符号化制御部110は、オフセットクラスの総数を1つにするモード(以降、便宜的に単数モードとも称される)と、オフセットクラスの総数を複数のままとするモード(以降、便宜的に複数モードとも称される)とのどちらか一方を例えばスライス単位で選択してもよい。例えば、符号化制御部110は、上記数式(4)に基づく符号化コストを最小化するモードを選択してもよい。図14の例によれば、単数モードが適用されると、全てのオフセットクラスがマージされ、オフセットクラスの総数は1つとなる。他方、複数モードが適用されると、オフセットクラスの総数は5のままとなる。この変形例によれば、スライス毎のオフセットクラスの総数(即ち、オフセット値の総数)が1ビットフラグで表現できる。従って、この変形例において、前述のオフセットマージ情報に代えて例えば1ビットフラグがオフセットクラスの総数の制御情報として利用できる。即ち、オフセットクラスの総数の制御情報によるオーバーヘッドを削減できる。
Various modifications of this embodiment can be envisaged. Some of such modifications are described below.
The encoding control unit 110 may select, for example in units of slices, either a mode in which the total number of offset classes is reduced to one (hereinafter also referred to as the singular mode for convenience) or a mode in which the total number of offset classes remains plural (hereinafter also referred to as the plural mode for convenience). For example, the encoding control unit 110 may select the mode that minimizes the coding cost based on Equation (4) above. According to the example of FIG. 14, when the singular mode is applied, all the offset classes are merged and the total number of offset classes becomes one. On the other hand, when the plural mode is applied, the total number of offset classes remains five. According to this modification, the total number of offset classes for each slice (that is, the total number of offset values) can be expressed by a 1-bit flag. Therefore, in this modification, for example, a 1-bit flag can be used as the control information on the total number of offset classes instead of the offset merge information described above. That is, the overhead due to the control information on the total number of offset classes can be reduced.
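The per-slice choice between the singular and plural modes can be sketched as a cost comparison. Treating the coding cost of Equation (4) as a Lagrangian cost D + λR is an assumption, and the distortion and rate inputs are placeholders for values the encoder would measure.

```python
def rd_cost(distortion, rate_bits, lam):
    # Equation (4) is assumed here to be a Lagrangian cost D + lambda * R
    return distortion + lam * rate_bits

def select_offset_mode(dist_single, rate_single, dist_multi, rate_multi, lam):
    """Pick the mode (and its cost) with the smaller coding cost for a slice."""
    cost_single = rd_cost(dist_single, rate_single, lam)
    cost_multi = rd_cost(dist_multi, rate_multi, lam)
    if cost_single <= cost_multi:
        return "singular", cost_single
    return "plural", cost_multi
```

The singular mode trades a larger distortion for the much smaller rate of signaling a single offset value, so it wins whenever its cost is not exceeded by that of the plural mode.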
　この変形例において、フィルタ係数セット情報14、オフセットクラスの総数の制御情報及びオフセット情報15は、例えば図16に示されるシンタクス構造に従って記述される。図16のシンタクスは、例えばスライス単位で記述される。図16において、filter_type_idx,NumOfFilterCoeff及びfilter_coeff[i]は、図4と同一または類似であるので説明が省略される。multi_offset_flagは、単数モード及び複数モードのいずれが適用されるかを示す1ビットフラグであり、この変形例においてオフセットクラスの総数の制御情報である。例えば、multi_offset_flagに1が設定されれば複数モードが適用され、multi_offset_flagに0が設定されれば単数モードが適用される。NumOfOffsetは、対象スライス内で切り替え可能なオフセットクラスの総数を表すが、この値はmulti_offset_flagの値に応じて異なる。即ち、単数モードが適用されるならばNumOfOffsetは1に等しく、複数モードが適用されるならばNumOfOffsetはマージされない場合のオフセットクラスの総数(即ち、複数)に等しい。変数iによって特定されるオフセットクラスに対応するオフセット値は、offset_value[i]として記述される。特に単数モードが適用される場合には、唯一のオフセットクラスに対応するオフセット値がoffset_value[0]として記述される。以上のシンタクス要素がスライス単位で記述され、符号化側においてエントロピー符号化され、符号化データ18の一部として復号側に伝送される。 In this modification, the filter coefficient set information 14, the control information on the total number of offset classes, and the offset information 15 are described according to, for example, the syntax structure shown in FIG. 16. The syntax of FIG. 16 is described in units of slices, for example. In FIG. 16, filter_type_idx, NumOfFilterCoeff, and filter_coeff[i] are the same as or similar to those in FIG. 4, and thus descriptions thereof are omitted. multi_offset_flag is a 1-bit flag indicating which of the singular mode and the plural mode is applied, and serves as the control information on the total number of offset classes in this modification. For example, if multi_offset_flag is set to 1, the plural mode is applied, and if multi_offset_flag is set to 0, the singular mode is applied. NumOfOffset represents the total number of offset classes that can be switched within the target slice, and this value differs depending on the value of multi_offset_flag. That is, if the singular mode is applied, NumOfOffset is equal to 1, and if the plural mode is applied, NumOfOffset is equal to the total number (that is, a plural number) of offset classes in the unmerged case. The offset value corresponding to the offset class specified by the variable i is described as offset_value[i]. In particular, when the singular mode is applied, the offset value corresponding to the sole offset class is described as offset_value[0]. The above syntax elements are described in units of slices, entropy-encoded on the encoding side, and transmitted to the decoding side as part of the encoded data 18.
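A minimal sketch of decoding the FIG. 16 syntax, under the same assumed `read` primitive as above: multi_offset_flag selects between reading one offset value and reading the full unmerged set.

```python
def parse_multi_offset_syntax(read, num_classes_unmerged):
    """Illustrative parser for the FIG. 16 slice-level syntax.

    `num_classes_unmerged` is the total number of offset classes when no
    merging is applied; `read` is an assumed entropy-decoding primitive.
    """
    multi_offset_flag = read("multi_offset_flag")  # 1: plural mode, 0: singular mode
    num_offset = num_classes_unmerged if multi_offset_flag else 1
    offsets = [read("offset_value") for _ in range(num_offset)]
    return multi_offset_flag, offsets
```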
　或いは、上記変形例において、オフセットクラスの総数を0にするモード(以降、便宜的に零モードとも称される)が単数モードに代えて用意されてもよい。例えば、符号化制御部110は、上記数式(4)に基づく符号化コストを最小化するモードを選択してもよい。図14の例によれば、零モードが適用されると、オフセットクラスの総数は0となる。この変形例によれば、スライス毎のオフセットクラスの総数(即ち、オフセット値の総数)が1ビットフラグで表現できる。従って、この変形例において、前述のオフセットマージ情報に代えて例えば1ビットフラグがオフセットクラスの総数の制御情報として利用できる。即ち、オフセットクラスの総数の制御情報によるオーバーヘッドを削減できる。尚、零モードが適用される場合には、オフセットクラスが設定されないので当然にオフセット値も設定されない。従って、オフセット情報15はシグナリングされない。 Alternatively, in the above modification, a mode in which the total number of offset classes is set to zero (hereinafter also referred to as the zero mode for convenience) may be prepared instead of the singular mode. For example, the encoding control unit 110 may select the mode that minimizes the coding cost based on Equation (4) above. According to the example of FIG. 14, when the zero mode is applied, the total number of offset classes becomes zero. According to this modification, the total number of offset classes for each slice (that is, the total number of offset values) can be expressed by a 1-bit flag. Therefore, in this modification, for example, a 1-bit flag can be used as the control information on the total number of offset classes instead of the offset merge information described above. That is, the overhead due to the control information on the total number of offset classes can be reduced. Note that when the zero mode is applied, no offset class is set, and naturally no offset value is set either. Therefore, the offset information 15 is not signaled.
　この変形例は、図16のシンタクス構造を利用できる。具体的には、前述のmulti_offset_flagが、零モード及び複数モードのいずれが適用されるかを示す1ビットフラグとして利用されてよい。例えば、multi_offset_flagに1が設定されれば複数モードが適用され、multi_offset_flagに0が設定されれば零モードが適用される。NumOfOffsetは、対象スライス内で切り替え可能なオフセットクラスの総数を表すが、この値はmulti_offset_flagの値に応じて異なる。即ち、零モードが適用されるならばNumOfOffsetは0に等しく、複数モードが適用されるならばNumOfOffsetはマージされない場合のオフセットクラスの総数(即ち、複数)に等しい。複数モードが適用される場合に変数iによって特定されるオフセットクラスに対応するオフセット値は、offset_value[i]として記述される。零モードが適用される場合にオフセット値は記述されない。以上のシンタクス要素がスライス単位で記述され、符号化側においてエントロピー符号化され、符号化データ18の一部として復号側に伝送される。 This modification can use the syntax structure of FIG. 16. Specifically, the aforementioned multi_offset_flag may be used as a 1-bit flag indicating which of the zero mode and the plural mode is applied. For example, if multi_offset_flag is set to 1, the plural mode is applied, and if multi_offset_flag is set to 0, the zero mode is applied. NumOfOffset represents the total number of offset classes that can be switched within the target slice, and this value differs depending on the value of multi_offset_flag. That is, if the zero mode is applied, NumOfOffset is equal to 0, and if the plural mode is applied, NumOfOffset is equal to the total number (that is, a plural number) of offset classes in the unmerged case. When the plural mode is applied, the offset value corresponding to the offset class specified by the variable i is described as offset_value[i]. When the zero mode is applied, no offset value is described. The above syntax elements are described in units of slices, entropy-encoded on the encoding side, and transmitted to the decoding side as part of the encoded data 18.
　本実施形態において、スライス単位でオフセットクラスの総数を切り替えることが前提とされている。しかしながら、例えばフィルタ係数セットの切り替え単位がスライスとは異なる場合には、このフィルタ係数セットの切り替え単位でオフセットクラスの総数が切り替えられてもよい。この場合には、フィルタ係数セットの切り替え単位毎にオフセットクラスの総数の制御情報がシグナリングされる。或いは、フィルタ係数セットの切り替え単位よりも大きな単位でオフセットクラスの総数が切り替えられてもよい。オフセットクラスの総数をより大きな単位で切り替えることにより、当該オフセットクラスの総数の制御情報によるオーバーヘッドを削減できる。 In this embodiment, it is assumed that the total number of offset classes is switched in units of slices. However, when the switching unit of the filter coefficient sets is different from a slice, for example, the total number of offset classes may be switched in units of this filter coefficient set switching unit. In this case, the control information on the total number of offset classes is signaled for each filter coefficient set switching unit. Alternatively, the total number of offset classes may be switched in a unit larger than the filter coefficient set switching unit. By switching the total number of offset classes in a larger unit, the overhead due to the control information on the total number of offset classes can be reduced.
　また、オフセットクラスの総数は、種々の条件に基づいて暗黙に制御されてよい。具体的には、対象スライスのスライスタイプ(例えば、Iスライス、Pスライスなど)、対象スライス内で使用されるベースQPの値、対象スライスが符号化/復号処理において参照されるか否かなどに応じて、オフセットクラスの総数が暗黙に制御されてよい。係る条件に基づいてオフセットクラスの総数を暗黙に制御すれば、オフセットクラスの総数の制御情報によるオーバーヘッドを削減できる。 Also, the total number of offset classes may be implicitly controlled based on various conditions. Specifically, the total number of offset classes may be implicitly controlled according to, for example, the slice type of the target slice (e.g., I slice, P slice), the value of the base QP used in the target slice, and whether or not the target slice is referenced in the encoding/decoding process. If the total number of offset classes is implicitly controlled based on such conditions, the overhead due to the control information on the total number of offset classes can be reduced.
　例えば、ベースQPが大きくなるほどオフセットクラスの総数の制御情報をシグナリングするための符号量の影響が大きくなり、符号化効率が悪化するおそれがある。従って、ベースQPが閾値を超えた場合に、オフセットクラスを0にする(即ち、前述の零モードを適用する)ように制御されてよい。この閾値は、符号化側と復号側との間で予め用意されてもよいし、シーケンス単位、複数ピクチャ単位、ピクチャ単位またはスライス単位でシグナリングされてもよい。 For example, as the base QP becomes larger, the influence of the code amount for signaling the control information on the total number of offset classes becomes larger, and the coding efficiency may deteriorate. Therefore, when the base QP exceeds a threshold, control may be performed so that the total number of offset classes is set to zero (that is, the zero mode described above is applied). This threshold may be prepared in advance between the encoding side and the decoding side, or may be signaled in units of a sequence, a plurality of pictures, a picture, or a slice.
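The implicit control by the base QP can be sketched as follows; the threshold comparison and the mode names are illustrative, and whether the threshold is pre-agreed or signaled is left open, as in the text.

```python
def offset_mode_for_slice(base_qp, qp_threshold, signaled_mode):
    """Force the zero mode when the base QP exceeds the threshold;
    otherwise keep the explicitly signaled mode (illustrative)."""
    if base_qp > qp_threshold:
        return "zero"  # no offset classes; offset information is not signaled
    return signaled_mode
```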
 尚、本実施形態は、前述の通り、第1乃至第5の実施形態のいずれにも適用可能である。即ち、本実施形態は、SAO処理、デブロッキングフィルタ処理、ポストフィルタ処理などと組み合わせられてもよい。 Note that this embodiment can be applied to any of the first to fifth embodiments as described above. That is, this embodiment may be combined with SAO processing, deblocking filter processing, post filter processing, and the like.
 (第7の実施形態) 
 第7の実施形態は、前述の第1乃至第6の実施形態において、オフセット値を量子化するための量子化精度を制御する技法に関する。
(Seventh embodiment)
The seventh embodiment relates to a technique for controlling the quantization accuracy for quantizing the offset value in the first to sixth embodiments.
 上記数式(7)によれば、フィルタ係数値及びオフセット値は同一の量子化精度で量子化される。しかしながら、下記数式(10)に示されるように、フィルタ係数値及びオフセット値は異なる量子化精度で量子化されてもよい。
Figure JPOXMLDOC01-appb-M000010
According to Equation (7) above, the filter coefficient value and the offset value are quantized with the same quantization accuracy. However, as shown in the following formula (10), the filter coefficient value and the offset value may be quantized with different quantization accuracy.

　尚、D1及びD2は相異なる値であり、以降の説明においてD1>D2であるとする。即ち、オフセット値の量子化精度は、フィルタ係数値の量子化精度よりも粗い。ここで、フィルタ係数値は復号画像12の画素値と乗算され、オフセット値はフィルタ演算結果に加算される。故に、復号画像12の画素値が1よりも大きければ、フィルタ係数値の量子化誤差がALF処理画像17に与える影響は、オフセット値の量子化誤差がALF処理画像17に与える影響に比べて大きい。オフセット値の量子化精度をフィルタ係数値の量子化精度よりも粗くすることによって、符号化歪を抑えつつオフセット情報15のオーバーヘッドを削減することができる。

Note that D1 and D2 are different values, and it is assumed in the following description that D1 > D2. That is, the quantization accuracy of the offset value is coarser than that of the filter coefficient values. Here, each filter coefficient value is multiplied by a pixel value of the decoded image 12, and the offset value is added to the result of the filter operation. Therefore, if the pixel values of the decoded image 12 are larger than 1, the influence of the quantization error of the filter coefficient values on the ALF processed image 17 is larger than that of the quantization error of the offset value. By making the quantization accuracy of the offset value coarser than that of the filter coefficient values, the overhead of the offset information 15 can be reduced while suppressing coding distortion.
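The two-precision quantization of Equation (10) can be sketched as follows. The rounding rule, and the convention that a precision D corresponds to multiplying by D before rounding, are assumptions for illustration (the equation itself is only available as an image).

```python
def quantize_alf_params(coeffs, offset, d1, d2):
    """Quantize filter coefficients with precision D1 and the offset with the
    coarser precision D2 (D1 > D2), in the spirit of Equation (10)."""
    q_coeffs = [int(round(c * d1)) for c in coeffs]
    q_offset = int(round(offset * d2))
    return q_coeffs, q_offset
```

With D1 = 256 and D2 = 16, the offset is represented with far fewer bits than the coefficients, which is exactly the overhead reduction the paragraph above describes.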
 上記数式(10)に従ってフィルタ係数セット及びオフセット値が量子化されているならば、フィルタ処理を表す上記数式(8)は下記数式(11)に置き換えられる。 
Figure JPOXMLDOC01-appb-M000011
If the filter coefficient set and the offset value are quantized according to the equation (10), the equation (8) representing the filter processing is replaced with the following equation (11).

　更に、D1が2^n1に等しく、D2が2^n2に等しいならば、上記数式(11)における除算はビットシフト演算と等価である。従って、上記数式(11)は下記数式(12)によって置き換えることができる。
Figure JPOXMLDOC01-appb-M000012

Further, if D1 is equal to 2^n1 and D2 is equal to 2^n2, the divisions in Equation (11) above are equivalent to bit-shift operations. Therefore, Equation (11) above can be replaced with Equation (12) below.

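When D1 = 2^n1 and D2 = 2^n2, the divisions become right shifts. The following integer-arithmetic sketch illustrates that idea; the rounding-offset terms and the exact formula are assumptions, since the patent's Equation (12) itself is only available as an image.

```python
def alf_filter_pixel(pixels, q_coeffs, q_offset, n1, n2):
    """Filter one pixel with quantized coefficients (precision 2**n1) and a
    quantized offset (precision 2**n2), replacing divisions by bit shifts."""
    acc = sum(c * p for c, p in zip(q_coeffs, pixels))
    filtered = (acc + (1 << (n1 - 1))) >> n1          # divide by 2**n1 with rounding
    offset = (q_offset + (1 << (n2 - 1))) >> n2 if n2 > 0 else q_offset
    return filtered + offset
```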
 以上説明したように、本実施形態に係る動画像符号化装置及び動画像復号装置は、フィルタ係数セット及びオフセット値を相異なる量子化精度で量子化する。具体的には、これら動画像符号化装置及び動画像復号装置は、オフセット値の量子化精度をフィルタ係数の量子化精度よりも粗く定める。従って、これら動画像符号化装置及び動画像復号装置によれば、符号化歪を抑えつつオフセット情報によるオーバーヘッドを削減することができる。

As described above, the moving image encoding apparatus and the moving image decoding apparatus according to the present embodiment quantize the filter coefficient set and the offset value with mutually different quantization accuracies. Specifically, these moving image encoding and decoding apparatuses set the quantization accuracy of the offset value to be coarser than that of the filter coefficients. Therefore, according to these apparatuses, the overhead due to the offset information can be reduced while suppressing coding distortion.
 本実施形態は様々な変形例が想定される。以下に、係る変形例の一部が記載される。 
 本実施形態において、オフセット値の量子化精度は例えばスライス単位で切り替えられてよい。具体的には、符号化制御部110が上記数式(4)に基づく符号化コストを最小化するようにオフセット値の量子化精度を選択してもよい。符号化制御部110は、オフセット値の量子化精度に基づいてフィルタ係数セット及びオフセット値設定部107及びフィルタリング部108を制御する。更に、符号化制御部110は、オフセット値の量子化精度を示す情報をエントロピー符号化部109へと出力する。エントロピー符号化部109は、符号化制御部110からオフセット値の量子化精度を示す情報を入力し、これをエントロピー符号化して符号化データ18に多重化する。エントロピー復号部501は、符号化データ21をエントロピー復号してオフセット値の量子化精度を示す情報を生成して、これを復号制御部507へと出力する。復号制御部507は、エントロピー復号部501からオフセット値の量子化精度を示す情報を入力し、これに基づいてフィルタリング部505を制御する。
Various modifications of this embodiment can be envisaged. Some of such modifications are described below.
In the present embodiment, the quantization accuracy of the offset value may be switched, for example, in units of slices. Specifically, the encoding control unit 110 may select the quantization accuracy of the offset value so as to minimize the coding cost based on Equation (4) above. The encoding control unit 110 controls the filter coefficient set and offset value setting unit 107 and the filtering unit 108 based on the quantization accuracy of the offset value. Further, the encoding control unit 110 outputs information indicating the quantization accuracy of the offset value to the entropy encoding unit 109. The entropy encoding unit 109 receives the information indicating the quantization accuracy of the offset value from the encoding control unit 110, entropy-encodes it, and multiplexes it into the encoded data 18. The entropy decoding unit 501 entropy-decodes the encoded data 21 to obtain the information indicating the quantization accuracy of the offset value, and outputs it to the decoding control unit 507. The decoding control unit 507 receives the information indicating the quantization accuracy of the offset value from the entropy decoding unit 501 and controls the filtering unit 505 based on it.
　上記変形例において、スライス単位でオフセット値の量子化精度を切り替えることが例示されている。しかしながら、例えばフィルタ係数セットの切り替え単位がスライスとは異なる場合には、このフィルタ係数セットの切り替え単位でオフセット値の量子化精度が切り替えられてもよい。この場合には、フィルタ係数セットの切り替え単位毎にオフセット値の量子化精度を示す情報がシグナリングされる。或いは、フィルタ係数セットの切り替え単位よりも大きな単位でオフセット値の量子化精度が切り替えられてもよい。オフセット値の量子化精度をより大きな単位で切り替えることにより、当該オフセット値の量子化精度を示す情報によるオーバーヘッドを削減できる。 In the above modification, switching the quantization accuracy of the offset value in units of slices is exemplified. However, when the switching unit of the filter coefficient sets is different from a slice, for example, the quantization accuracy of the offset value may be switched in units of this filter coefficient set switching unit. In this case, the information indicating the quantization accuracy of the offset value is signaled for each filter coefficient set switching unit. Alternatively, the quantization accuracy of the offset value may be switched in a unit larger than the filter coefficient set switching unit. By switching the quantization accuracy of the offset value in a larger unit, the overhead due to the information indicating the quantization accuracy of the offset value can be reduced.
　また、オフセット値の量子化精度は、種々の条件に基づいて暗黙に制御されてよい。具体的には、対象スライスのスライスタイプ(例えば、Iスライス、Pスライスなど)、対象スライス内で使用されるベースQPの値、対象スライスが符号化/復号処理において参照されるか否かなどに応じて、オフセット値の量子化精度が暗黙に制御されてよい。係る条件に基づいてオフセット値の量子化精度を暗黙に制御すれば、オフセット値の量子化精度の制御情報によるオーバーヘッドを削減できる。 Also, the quantization accuracy of the offset value may be implicitly controlled based on various conditions. Specifically, the quantization accuracy of the offset value may be implicitly controlled according to, for example, the slice type of the target slice (e.g., I slice, P slice), the value of the base QP used in the target slice, and whether or not the target slice is referenced in the encoding/decoding process. If the quantization accuracy of the offset value is implicitly controlled based on such conditions, the overhead due to the control information on the quantization accuracy of the offset value can be reduced.
　例えば、ベースQPが大きくなるほどオフセット情報15をシグナリングするための符号量の影響が大きくなり、符号化効率が悪化するおそれがある。従って、ベースQPが閾値を超えた場合に、オフセット値の量子化精度を粗くする(即ち、前述のオフセット値の量子化幅を大きくする)ように制御されてよい。この閾値は、符号化側と復号側との間で予め用意されてもよいし、シーケンス単位、複数ピクチャ単位、ピクチャ単位またはスライス単位でシグナリングされてもよい。 For example, as the base QP becomes larger, the influence of the code amount for signaling the offset information 15 becomes larger, and the coding efficiency may deteriorate. Therefore, when the base QP exceeds a threshold, control may be performed so that the quantization accuracy of the offset value is made coarser (that is, the quantization width of the offset value is increased). This threshold may be prepared in advance between the encoding side and the decoding side, or may be signaled in units of a sequence, a plurality of pictures, a picture, or a slice.
 尚、本実施形態は、前述の通り、第1乃至第6の実施形態のいずれにも適用可能である。即ち、本実施形態は、SAO処理、デブロッキングフィルタ処理、ポストフィルタ処理、オフセットクラスの総数の制御などと組み合わせられてもよい。 Note that this embodiment is applicable to any of the first to sixth embodiments as described above. That is, the present embodiment may be combined with SAO processing, deblocking filter processing, post filter processing, control of the total number of offset classes, and the like.
 (第8の実施形態) 
 第8の実施形態は、第1乃至第7の実施形態において、例えば対象スライス内で複数のフィルタ係数セットを切り替え可能とする技法に関する。
(Eighth embodiment)
The eighth embodiment relates to a technique that enables switching of a plurality of filter coefficient sets in a target slice, for example, in the first to seventh embodiments.
 (動画像符号化装置) 
　図17に示されるように、第8の実施形態に係る動画像符号化装置は、動画像符号化部1700と、符号化制御部1711とを含む。動画像符号化部1700は、予測画像生成部1701と、減算部1702と、変換及び量子化部1703と、逆量子化及び逆変換部1704と、加算部1705と、フィルタクラス設定部1706と、オフセットクラス設定部1707と、フィルタ係数セット及びオフセット値設定部1708と、フィルタリング部1709と、エントロピー符号化部1710とを含む。尚、フィルタクラス設定部1706、オフセットクラス設定部1707、フィルタ係数セット及びオフセット値設定部1708及びフィルタリング部1709は、ALF処理部と称されてよい。符号化制御部1711は、動画像符号化部1700の各部の動作を制御する。
(Moving picture encoding device)
As illustrated in FIG. 17, the moving image encoding apparatus according to the eighth embodiment includes a moving image encoding unit 1700 and an encoding control unit 1711. The moving image encoding unit 1700 includes a predicted image generation unit 1701, a subtraction unit 1702, a transform and quantization unit 1703, an inverse quantization and inverse transform unit 1704, an addition unit 1705, a filter class setting unit 1706, an offset class setting unit 1707, a filter coefficient set and offset value setting unit 1708, a filtering unit 1709, and an entropy encoding unit 1710. The filter class setting unit 1706, the offset class setting unit 1707, the filter coefficient set and offset value setting unit 1708, and the filtering unit 1709 may be referred to as an ALF processing unit. The encoding control unit 1711 controls the operation of each unit of the moving image encoding unit 1700.
　予測画像生成部1701、減算部1702、変換及び量子化部1703、逆量子化及び逆変換部1704、オフセットクラス設定部1707、エントロピー符号化部1710及び符号化制御部1711は、予測画像生成部101、減算部102、変換及び量子化部103、逆量子化及び逆変換部104、オフセットクラス設定部106、エントロピー符号化部109及び符号化制御部110と同一または類似であるので、これらの説明は省略される。 The predicted image generation unit 1701, the subtraction unit 1702, the transform and quantization unit 1703, the inverse quantization and inverse transform unit 1704, the offset class setting unit 1707, the entropy encoding unit 1710, and the encoding control unit 1711 are the same as or similar to the predicted image generation unit 101, the subtraction unit 102, the transform and quantization unit 103, the inverse quantization and inverse transform unit 104, the offset class setting unit 106, the entropy encoding unit 109, and the encoding control unit 110, respectively, and thus descriptions thereof are omitted.
 加算部1705は、復号画像12の出力先において、加算部105とは異なる。具体的には、加算部1705は、復号画像12をフィルタクラス設定部1706、オフセットクラス設定部1707、フィルタ係数セット及びオフセット値設定部1708及びフィルタリング部1709へと出力する。加算部1705は、その他の点において、加算部105と同一または類似である。 The addition unit 1705 is different from the addition unit 105 in the output destination of the decoded image 12. Specifically, the addition unit 1705 outputs the decoded image 12 to the filter class setting unit 1706, the offset class setting unit 1707, the filter coefficient set / offset value setting unit 1708, and the filtering unit 1709. Adder 1705 is the same as or similar to adder 105 in other respects.
 フィルタクラス設定部1706は、加算部1705から復号画像12を入力し、第2の単位毎に第2の指標に基づいてフィルタクラスを設定する。フィルタクラス設定部1706は、第2の単位毎に対応するフィルタクラスを示すフィルタクラス情報33を生成する。尚、フィルタクラス設定部1706の詳細は後述される。フィルタクラス設定部1706は、フィルタクラス情報33をフィルタ係数セット及びオフセット値設定部1708及びフィルタリング部1709へと出力する。 The filter class setting unit 1706 receives the decoded image 12 from the addition unit 1705, and sets a filter class based on the second index for each second unit. The filter class setting unit 1706 generates filter class information 33 indicating a filter class corresponding to each second unit. Details of the filter class setting unit 1706 will be described later. The filter class setting unit 1706 outputs the filter class information 33 to the filter coefficient set / offset value setting unit 1708 and the filtering unit 1709.
　フィルタ係数セット及びオフセット値設定部1708は、動画像符号化部1700の外部から入力画像11を取得し、加算部1705から復号画像12を入力し、フィルタクラス設定部1706からフィルタクラス情報33を入力し、オフセットクラス設定部1707からオフセットクラス情報13を入力する。フィルタ係数セット及びオフセット値設定部1708は、入力画像11と、復号画像12と、オフセットクラス情報13と、フィルタクラス情報33とに基づいて、各フィルタクラスに対応するフィルタ係数セットと、フィルタクラス及びオフセットクラスの各組み合わせに対応するオフセット値とを設定する。尚、フィルタ係数セット及びオフセット値設定部1708の詳細は後述される。 The filter coefficient set and offset value setting unit 1708 acquires the input image 11 from outside the moving image encoding unit 1700, receives the decoded image 12 from the addition unit 1705, receives the filter class information 33 from the filter class setting unit 1706, and receives the offset class information 13 from the offset class setting unit 1707. Based on the input image 11, the decoded image 12, the offset class information 13, and the filter class information 33, the filter coefficient set and offset value setting unit 1708 sets a filter coefficient set corresponding to each filter class and an offset value corresponding to each combination of a filter class and an offset class. Details of the filter coefficient set and offset value setting unit 1708 will be described later.
 フィルタ係数セット及びオフセット値設定部1708は、設定した各フィルタクラスに対応するフィルタ係数セットを示すフィルタ係数セット情報14をフィルタリング部1709及びエントロピー符号化部1710へと出力する。フィルタ係数セット及びオフセット値設定部1708は、設定したフィルタクラス及びオフセットクラスの各組み合わせに対応するオフセット値を示すオフセット情報15をフィルタリング部1709及びエントロピー符号化部1710へと出力する。 The filter coefficient set and offset value setting unit 1708 outputs the filter coefficient set information 14 indicating the filter coefficient set corresponding to each set filter class to the filtering unit 1709 and the entropy encoding unit 1710. The filter coefficient set and offset value setting unit 1708 outputs the offset information 15 indicating the offset value corresponding to each combination of the set filter class and offset class to the filtering unit 1709 and the entropy encoding unit 1710.
　フィルタリング部1709は、加算部1705から復号画像12を入力し、フィルタクラス設定部1706からフィルタクラス情報33を入力し、オフセットクラス設定部1707からオフセットクラス情報13を入力し、フィルタ係数及びオフセット値設定部1708からフィルタ係数セット情報14及びオフセット情報15を入力する。フィルタリング部1709は、オフセットクラス情報13と、フィルタ係数セット情報14と、オフセット情報15と、フィルタクラス情報33とに基づいて復号画像12にフィルタ処理を行い、ALF処理画像17を生成する。尚、フィルタリング部1709の詳細は後述される。 The filtering unit 1709 receives the decoded image 12 from the addition unit 1705, the filter class information 33 from the filter class setting unit 1706, the offset class information 13 from the offset class setting unit 1707, and the filter coefficient set information 14 and the offset information 15 from the filter coefficient set and offset value setting unit 1708. The filtering unit 1709 performs filter processing on the decoded image 12 based on the offset class information 13, the filter coefficient set information 14, the offset information 15, and the filter class information 33 to generate the ALF processed image 17. Details of the filtering unit 1709 will be described later.
 以下、ALF処理部、即ち、フィルタクラス設定部1706、オフセットクラス設定部1707、フィルタ係数セット及びオフセット値設定部1708及びフィルタリング部1709の詳細が説明される。 Hereinafter, details of the ALF processing unit, that is, the filter class setting unit 1706, the offset class setting unit 1707, the filter coefficient set and offset value setting unit 1708, and the filtering unit 1709 will be described.
　フィルタクラス設定部1706は、前述の通り、復号画像12の第2の単位毎に第2の指標に基づいてフィルタクラスを設定する。ここで、第2の単位は第1の単位より大きくてもよいし、同じ大きさであってもよい。例えば、第2の単位は画素ブロックであってよい。第2の指標は、第2の単位毎の画像特徴を示す。第2の指標は、第1の指標と異なっていてもよいし、同じであってもよい。例えば、第2の指標は、画像のアクティビティ、テクスチャの方向、画素ブロックの位置情報などのうちの1つ或いは複数の組み合わせであってもよい。 The filter class setting unit 1706 sets a filter class based on the second index for each second unit of the decoded image 12 as described above. Here, the second unit may be larger than the first unit or may have the same size. For example, the second unit may be a pixel block. The second index indicates an image feature for each second unit. The second index may be different from or the same as the first index. For example, the second index may be one of, or a combination of, the image activity, the texture direction, the position information of the pixel block, and the like.
　但し、第2の単位が第1の単位と同じ大きさであって、かつ、第2の指標が第1の指標と同じである場合に、所与のフィルタクラスについて組み合わせられるオフセットクラスが固定されるおそれがある。この結果、所与のフィルタ係数セットについて複数のオフセット値を切り替えることが不可能となる。従って、例えば第6の実施形態に基づいてオフセットクラスの総数が制御されてよい。オフセットクラスの総数が制御されれば、所与のフィルタ係数セットについて複数のオフセット値を切り替えることが可能となる。或いは、第6の実施形態をオフセットクラスの代わりにフィルタクラスに対して適用することによって、フィルタクラスの総数が制御されてもよい。例えば、隣接するフィルタクラス同士でマージする/しないを制御し、フィルタマージ情報をシグナリングすればよい。フィルタクラスの総数が制御されれば、所与のフィルタ係数セットについて複数のオフセット値を切り替えることが可能となる。 However, when the second unit has the same size as the first unit and the second index is the same as the first index, there is a risk that the offset class combined with a given filter class is fixed. As a result, it becomes impossible to switch among a plurality of offset values for a given filter coefficient set. Therefore, the total number of offset classes may be controlled based on, for example, the sixth embodiment. If the total number of offset classes is controlled, it becomes possible to switch among a plurality of offset values for a given filter coefficient set. Alternatively, the total number of filter classes may be controlled by applying the sixth embodiment to the filter classes instead of the offset classes. For example, whether or not adjacent filter classes are merged may be controlled, and filter merge information may be signaled. If the total number of filter classes is controlled, it becomes possible to switch among a plurality of offset values for a given filter coefficient set.
　尚、前述の通り、種々の第2の指標が想定される。フィルタクラス設定部1706は、第2の指標の種別をいずれか1つに固定してもよいし、これらを切り替えてもよい。例えば、フィルタクラス設定部1706は、スライス単位または他の単位で、第2の指標の種別を切り替えてもよい。この場合に、符号化制御部1711は、スライス毎に最適な第2の指標の種別を選択してもよい。選択された第2の指標の種別を示す情報は、エントロピー符号化部1710によってエントロピー符号化され、符号化データ18の一部として出力される。尚、最適な第2の指標の種別は、例えば上記数式(4)に示される符号化コストを最小化するものであってよい。 As described above, various second indices are assumed. The filter class setting unit 1706 may fix the type of the second index to any one of them, or may switch among them. For example, the filter class setting unit 1706 may switch the type of the second index in units of slices or in other units. In this case, the encoding control unit 1711 may select the optimum type of the second index for each slice. Information indicating the selected type of the second index is entropy-encoded by the entropy encoding unit 1710 and output as a part of the encoded data 18. Note that the optimum type of the second index may be, for example, the one that minimizes the coding cost expressed by Equation (4) above.
 また、前述の通り、第1の指標の種別の切り替えも可能である。但し、本実施形態において、第1の指標の種別はスライスなどの画素領域単位に限られずフィルタクラス単位で切り替えられてもよい。例えば、オフセットクラス設定部1707は、フィルタクラス単位で、第1の指標の種別を切り替えてもよい。この場合に、符号化制御部1711は、フィルタクラス毎に最適な第1の指標の種別を選択してもよい。選択された第1の指標の種別を示す情報は、エントロピー符号化部1710によってエントロピー符号化され、符号化データの一部として出力される。尚、最適な第1の指標の種別は、例えば上記数式(4)に示される符号化コストを最小化するものであってよい。 Also, as described above, the type of the first index can be switched. However, in the present embodiment, the type of the first index is not limited to a pixel area unit such as a slice, and may be switched on a filter class basis. For example, the offset class setting unit 1707 may switch the type of the first index for each filter class. In this case, the encoding control unit 1711 may select the optimum first index type for each filter class. Information indicating the type of the selected first index is entropy encoded by the entropy encoding unit 1710 and output as a part of the encoded data. Note that the optimum first index type may be one that minimizes the encoding cost represented by the above formula (4), for example.
　フィルタ係数セット及びオフセット値設定部1708は、前述の通り、入力画像11と、復号画像12と、オフセットクラス情報13と、フィルタクラス情報33とに基づいて、各フィルタクラスに対応するフィルタ係数セットと、フィルタクラス及びオフセットクラスの各組み合わせに対応するオフセット値とを設定する。例えば、フィルタ係数セット及びオフセット値設定部1708は、前述のWiener-Hopf方程式を解くことにより、これらを設定する。但し、本実施形態において、基本的に複数のフィルタクラスが用意されるので、フィルタ係数セット及びオフセット値設定部1708はフィルタクラス毎にWiener-Hopf方程式を解く。フィルタ係数値及びオフセット値は、上記数式(7)または上記数式(10)に従って量子化してもよい。以降の説明では、フィルタ係数値及びオフセット値は量子化して設定されているとする。 As described above, the filter coefficient set and offset value setting unit 1708 sets, based on the input image 11, the decoded image 12, the offset class information 13, and the filter class information 33, a filter coefficient set corresponding to each filter class and an offset value corresponding to each combination of a filter class and an offset class. For example, the filter coefficient set and offset value setting unit 1708 sets these by solving the Wiener-Hopf equations described above. However, since a plurality of filter classes are basically prepared in this embodiment, the filter coefficient set and offset value setting unit 1708 solves the Wiener-Hopf equations for each filter class. The filter coefficient values and the offset values may be quantized according to Equation (7) or Equation (10) above. In the following description, it is assumed that the filter coefficient values and the offset values are quantized when set.
 As described above, the filtering unit 1709 filters the decoded image 12 based on the offset class information 13, the filter coefficient set information 14, the offset information 15, and the filter class information 33 to generate the ALF processed image 17. More specifically, as shown in FIG. 18, the filtering unit 1709 includes a filter coefficient set selection unit 1801, an offset selection unit 1802, and a filter processing unit 1803.
 The filter coefficient set selection unit 1801 receives the filter class information 33 from the filter class setting unit 1706 and the filter coefficient set information 14 from the filter coefficient set and offset value setting unit 1708. For each second unit, the filter coefficient set selection unit 1801 identifies the filter class based on the filter class information 33 and selects the filter coefficient set 34 corresponding to that filter class based on the filter coefficient set information 14. The filter coefficient set selection unit 1801 outputs the selected filter coefficient set 34 to the filter processing unit 1803.
 The offset selection unit 1802 receives the filter class information 33 from the filter class setting unit 1706, the offset class information 13 from the offset class setting unit 1707, and the offset information 15 from the filter coefficient set and offset value setting unit 1708. For each first unit, the offset selection unit 1802 identifies the filter class and the offset class based on the filter class information 33 and the offset class information 13, and selects the offset value 16 corresponding to that combination of filter class and offset class based on the offset information 15. The offset selection unit 1802 outputs the selected offset value 16 to the filter processing unit 1803.
 The filter processing unit 1803 receives the decoded image 12 from the addition unit 1705, the filter coefficient set 34 from the filter coefficient set selection unit 1801, and the offset value 16 from the offset selection unit 1802. The filter processing unit 1803 applies, to each pixel in the decoded image 12, a filter operation based on the filter coefficient set 34 and an offset operation based on the offset value 16, thereby generating the ALF processed image 17. That is, the filter processing unit 1803 generates the pixel value at position (x, y) in the ALF processed image 17 according to the following formula (13).

Figure JPOXMLDOC01-appb-M000013

 Here, filter_idx(x, y) represents the filter class of the second unit to which the pixel specified by position (x, y) belongs.
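Since the image of formula (13) is not reproduced here, the following sketch assumes the usual form of a class-switched ALF: the coefficient set of the pixel's filter class is applied to its neighborhood, and the offset of the (filter class, offset class) combination is added. The toy image, the identity filter, and the threshold-based first index are all hypothetical.

```python
# Illustrative sketch of the filtering of formula (13) (assumed form): for each
# pixel, convolve the filter coefficient set of its second-unit filter class
# with the neighborhood, then add the offset selected by the combination of the
# filter class and the first-unit offset class.

def alf_pixel(decoded, x, y, filter_idx, offset_idx, coeff_sets, offsets):
    """Filtered value at (x, y).
    coeff_sets[f] is a dict {(dx, dy): coefficient} for filter class f;
    offsets[f][o] is the offset for filter class f and offset class o."""
    f = filter_idx(x, y)
    acc = sum(c * decoded[y + dy][x + dx]
              for (dx, dy), c in coeff_sets[f].items())
    return acc + offsets[f][offset_idx(x, y)]

# 4x4 toy image, one filter class (identity filter), two offset classes set by
# a hypothetical first index: pixel value below / not below 8.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
coeff_sets = {0: {(0, 0): 1.0}}        # identity "filter" for filter class 0
offsets = {0: {0: +2.0, 1: -2.0}}      # per (filter class, offset class)
fidx = lambda x, y: 0
oidx = lambda x, y: 0 if img[y][x] < 8 else 1

out = [[alf_pixel(img, x, y, fidx, oidx, coeff_sets, offsets)
        for x in range(4)] for y in range(4)]
print(out[0], out[3])  # [3.0, 4.0, 5.0, 6.0] [11.0, 12.0, 13.0, 14.0]
```

With more than one filter class, `coeff_sets` and `offsets` would hold one entry per class, matching the class-switched operation the text describes.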
 As described above, the operation related to the ALF processing unit in FIG. 17 is, for example, as shown in FIG. 19. That is, the filter class setting unit 1706 sets a filter class based on the second index for each second unit (for example, pixel block) of the decoded image 12 (step S1901). The offset class setting unit 1707 sets an offset class based on the first index for each first unit (for example, pixel or pixel block) of the decoded image 12 (step S1902). Based on the input image 11, the decoded image 12, the filter classes set in step S1901, and the offset classes set in step S1902, the filter coefficient set and offset value setting unit 1708 sets a filter coefficient set corresponding to each filter class and an offset value corresponding to each combination of filter class and offset class (step S1903). The filtering unit 1709 filters the decoded image 12 based on the filter coefficient sets and offset values set in step S1903 (step S1904). Further, in addition to the quantized transform coefficients and the encoding parameters, the entropy encoding unit 1710 entropy encodes the filter coefficient set information 14 indicating the filter coefficient sets set in step S1903 and the offset information 15 indicating the offset values set in step S1903 (step S1905).
 The filter coefficient set information 14 and the offset information 15 are described, for example, according to the syntax structure shown in FIG. 20. The syntax of FIG. 20 is described, for example, on a slice basis. In FIG. 20, filter_type_idx is the same as or similar to that in FIG. 4, so its description is omitted. NumOfFilterSets represents the total number of filter classes that can be switched within the target slice. NumOfFilterCoeff represents the total number of filter coefficient values included in a filter coefficient set. The value of NumOfFilterCoeff may be common to all filter classes, or may differ for each filter class. The filter coefficient values included in the filter coefficient set of the filter class identified by the variable i are described one by one as filter_coeff[i][j]. NumOfOffset represents the total number of offset classes that can be switched within one filter class. The value of NumOfOffset may be common to all filter classes, or may differ for each filter class. The offset value corresponding to the combination of the filter class identified by the variable i and the offset class identified by the variable j is described as offset_value[i][j]. The above syntax elements are described on a slice basis, entropy encoded on the encoding side, and transmitted to the decoding side as part of the encoded data 18.
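The slice-level syntax of FIG. 20 can be sketched as a flat write/read pair as follows. Plain integer tokens stand in for entropy-coded elements, per-class NumOfFilterCoeff and NumOfOffset are assumed (as the text permits), and the exact loop order in FIG. 20 may differ from this sketch.

```python
# Illustrative sketch of the slice-level syntax of FIG. 20, written as a flat
# token list instead of an entropy-coded bitstream.

def write_alf_syntax(filter_type_idx, filter_coeff, offset_value):
    """filter_coeff[i][j]: j-th coefficient of filter class i;
    offset_value[i][j]: offset for filter class i and offset class j."""
    tokens = [filter_type_idx, len(filter_coeff)]      # NumOfFilterSets
    for i, coeffs in enumerate(filter_coeff):
        tokens.append(len(coeffs))                     # NumOfFilterCoeff (class i)
        tokens.extend(coeffs)                          # filter_coeff[i][j]
        tokens.append(len(offset_value[i]))            # NumOfOffset (class i)
        tokens.extend(offset_value[i])                 # offset_value[i][j]
    return tokens

def read_alf_syntax(tokens):
    it = iter(tokens)
    filter_type_idx, num_sets = next(it), next(it)
    filter_coeff, offset_value = [], []
    for _ in range(num_sets):
        ncoeff = next(it)
        filter_coeff.append([next(it) for _ in range(ncoeff)])
        noffset = next(it)
        offset_value.append([next(it) for _ in range(noffset)])
    return filter_type_idx, filter_coeff, offset_value

coeffs = [[1, 2, 1], [0, 4, 0, 4]]   # two filter classes, per-class coeff counts
offs = [[3, -3], [5, 0, -5]]         # per-class offset-class counts
stream = write_alf_syntax(7, coeffs, offs)
print(read_alf_syntax(stream) == (7, coeffs, offs))  # True
```

The round trip mirrors the encoder writing these elements per slice and the decoder recovering them from the encoded data 18.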
 As described in the first embodiment, regarding the encoding of the filter coefficient set information 14 and the offset information 15, the set filter coefficient values and offset values may be encoded as they are, or the difference values obtained by the difference calculation described above may be encoded. Furthermore, in the present embodiment, a plurality of filter classes are prepared within the target slice, so the difference calculation of the filter coefficient values and offset values may be performed with reference to a different filter class within the target slice. Alternatively, information indicating that the filter coefficient values and offset values set for one filter class in a reference slice are used directly may be encoded.
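The difference calculation described above can be sketched as follows; the coefficient values and the choice of reference class are hypothetical.

```python
# Illustrative sketch of the differential coding mentioned above: instead of
# coding a filter class's coefficient (or offset) values directly, code their
# differences from a reference filter class.  A "direct use" flag that reuses
# the reference values as-is is also shown.

def encode_diff(values, reference):
    return [v - r for v, r in zip(values, reference)]

def decode_diff(deltas, reference):
    return [d + r for d, r in zip(deltas, reference)]

ref_coeffs = [1, -5, 20, -5, 1]   # coefficients of the reference filter class
cur_coeffs = [1, -4, 18, -4, 1]   # coefficients of the current filter class

deltas = encode_diff(cur_coeffs, ref_coeffs)   # small values: cheaper to code
print(deltas)                                  # [0, 1, -2, 1, 0]

# "Direct use" mode: signal one flag instead of any difference values.
use_direct = (cur_coeffs == ref_coeffs)
print(use_direct)                              # False
```

The same encode/decode pair applies equally to offset values, with the reference taken from another filter class in the target slice or from a reference slice, as the text allows.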
 Also, as described in the first embodiment, offset values need not be set for some offset classes. The filter operation and the offset operation need not be applied to some or all of the filter classes in the target slice. Alternatively, for some or all of the filter classes in the target slice, only the offset operation out of the filter operation and the offset operation may be applied. For some or all of the filter classes in the target slice, any one of (1) a mode in which both the filter operation and the offset operation are applied, (2) a mode in which the filter operation is not applied and the offset operation is applied, and (3) a mode in which neither the filter operation nor the offset operation is applied may be selected. Whether to apply the filter operation and the offset operation for each filter class may be determined, for example, so as to minimize the coding cost based on the above formula (4). However, to support such operation, information indicating which mode has been selected for some or all of the filter classes in the target slice needs to be included in the syntax elements.
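The three-mode selection described above can be sketched as follows; the distortion/rate figures and the Lagrange multiplier are hypothetical, with only the cost criterion of formula (4) taken from the text.

```python
# Illustrative sketch of the per-filter-class mode decision: for each filter
# class, pick mode (1) filter + offset, (2) offset only, or (3) neither,
# whichever minimizes J = D + lambda * R (cf. formula (4)).  The chosen mode
# index would then be signaled per filter class in the syntax.

MODES = ("filter+offset", "offset_only", "none")

def pick_mode(dr_per_mode, lam=10.0):
    """dr_per_mode: {mode: (distortion, rate)}. Returns the cheapest mode."""
    return min(dr_per_mode,
               key=lambda m: dr_per_mode[m][0] + lam * dr_per_mode[m][1])

# Made-up distortion/rate measurements for two filter classes.
per_class = {
    0: {"filter+offset": (50.0, 12), "offset_only": (90.0, 3), "none": (140.0, 0)},
    1: {"filter+offset": (60.0, 12), "offset_only": (70.0, 3), "none": (75.0, 0)},
}
chosen = {c: pick_mode(dr) for c, dr in per_class.items()}
print(chosen)  # {0: 'offset_only', 1: 'none'}
```

Note how class 1 ends up with no filtering at all: the small distortion gain of the other modes does not pay for their rate, which is exactly the trade-off the mode signaling enables.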
 (Moving image decoding apparatus)
 As shown in FIG. 21, the moving image decoding apparatus according to the eighth embodiment includes a moving image decoding unit 2100 and a decoding control unit 2108. The moving image decoding unit 2100 includes an entropy decoding unit 2101, an inverse quantization and inverse transform unit 2102, an addition unit 2103, a filter class setting unit 2104, an offset class setting unit 2105, a filtering unit 2106, and a predicted image generation unit 2107. The decoding control unit 2108 controls the operation of each unit of the moving image decoding unit 2100.
 The entropy decoding unit 2101, the inverse quantization and inverse transform unit 2102, the offset class setting unit 2105, the predicted image generation unit 2107, and the decoding control unit 2108 are the same as or similar to the entropy decoding unit 501, the inverse quantization and inverse transform unit 502, the offset class setting unit 504, the predicted image generation unit 506, and the decoding control unit 507, respectively, so their descriptions are omitted.
 The addition unit 2103 differs from the addition unit 503 in the output destination of the decoded image 25. Specifically, the addition unit 2103 outputs the decoded image 25 to the filter class setting unit 2104, the offset class setting unit 2105, and the filtering unit 2106. In other respects, the addition unit 2103 is the same as or similar to the addition unit 503.
 The filter class setting unit 2104 receives the decoded image 25 from the addition unit 2103 and sets a filter class based on the second index for each second unit. The filter class setting unit 2104 generates filter class information 42 indicating the filter class corresponding to each second unit. Basically, the filter class setting unit 2104 performs the same or similar processing as the filter class setting unit 1706. The filter class setting unit 2104 outputs the filter class information 42 to the filtering unit 2106.
 The filtering unit 2106 receives the filter coefficient set information 23 and the offset information 24 from the entropy decoding unit 2101, the decoded image 25 from the addition unit 2103, and the offset class information 26 from the offset class setting unit 2105. The filtering unit 2106 filters the decoded image 25 based on the filter coefficient set information 23, the offset information 24, the offset class information 26, and the filter class information 42 to generate the ALF processed image 27. That is, the filtering unit 2106 performs the same or similar processing as the filtering unit 1709 described above.
 As described above, the moving image encoding apparatus and the moving image decoding apparatus according to the eighth embodiment can switch among a plurality of filter coefficient sets within, for example, the target slice, and can further switch among a plurality of offset values for each filter coefficient set. Therefore, according to this moving image encoding apparatus and moving image decoding apparatus, filter processing adapted to the local structure within the target slice is possible by switching the filter coefficient set and the offset value, so the encoding efficiency can be improved.
 Note that, as described above, the present embodiment is applicable to any of the first to seventh embodiments. That is, the present embodiment may be combined with SAO processing, deblocking filter processing, post filter processing, control of the total number of offset classes or the total number of filter classes, control of the quantization precision of offset values, and so on. Regarding the sixth embodiment, the total number of offset classes may be controlled for each filter class. In this case, the control information on the total number of offset classes is signaled per filter class. Furthermore, regarding the seventh embodiment, the quantization precision of the offset values may be controlled for each filter class. In this case, information indicating the quantization precision of the offset values is signaled per filter class.
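The per-filter-class control of the total number of offset classes mentioned above, in the adjacent-pair merging form described elsewhere for the control information, can be sketched as follows; the flag layout is a hypothetical example of such signaling.

```python
# Illustrative sketch of controlling the total number of offset classes for one
# filter class by merging adjacent offset-class pairs: a list of merge flags is
# signaled per filter class, and merged classes share a single offset value.

def build_class_map(num_classes, merge_flags):
    """merge_flags[k] == 1 merges original offset class k+1 into the reduced
    class of original class k.  Expects len(merge_flags) == num_classes - 1.
    Returns (map from original class to reduced class, reduced total)."""
    mapping, cur = [0], 0
    for flag in merge_flags:
        if not flag:
            cur += 1
        mapping.append(cur)
    return mapping, cur + 1

# Four original offset classes; merge the pairs (0,1) and (2,3).
mapping, total = build_class_map(4, [1, 0, 1])
print(mapping, total)  # [0, 0, 1, 1] 2
```

With no flags set, the mapping is the identity and all offset classes remain distinct; each filter class can carry its own flag list, matching the per-filter-class signaling the text describes.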
 The processing of each of the above embodiments can be realized by using a general-purpose computer as basic hardware. A program realizing the processing of each of the above embodiments may be provided stored on a computer-readable storage medium. The program is stored on the storage medium as a file in an installable or executable format. The storage medium may be a magnetic disk, an optical disc (CD-ROM, CD-R, DVD, etc.), a magneto-optical disc (MO, etc.), a semiconductor memory, or the like; any storage medium may be used as long as it can store the program and is readable by a computer. Alternatively, the program realizing the processing of each of the above embodiments may be stored on a computer (server) connected to a network such as the Internet and downloaded to a computer (client) via the network.
 While several embodiments of the present invention have been described, these embodiments are presented as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. These embodiments and their modifications are included in the scope and spirit of the invention, and are included in the invention described in the claims and their equivalents.
DESCRIPTION OF SYMBOLS
 11 ... input image
 12, 25 ... decoded image
 13, 26 ... offset class information
 14, 23 ... filter coefficient set information
 15, 24 ... offset information
 16 ... offset value
 17, 27 ... ALF processed image
 18, 21 ... encoded data
 19, 28 ... pixel adaptive offset information
 22 ... encoding parameter
 29, 31 ... SAO processed image
 32, 41 ... deblocking filtered image
 33, 42 ... filter class information
 34 ... filter coefficient set
 43 ... post-filtered image
 100, 600, 800, 1000, 1200, 1700 ... moving image encoding unit
 101, 506, 601, 706, 801, 906, 1001, 1106, 1201, 1306, 1701, 2107 ... predicted image generation unit
 102, 602, 802, 1002, 1202, 1702 ... subtraction unit
 103, 603, 803, 1003, 1203, 1703 ... transform and quantization unit
 104, 502, 604, 702, 804, 902, 1004, 1102, 1204, 1302, 1704, 2102 ... inverse quantization and inverse transform unit
 105, 503, 605, 703, 805, 903, 1005, 1103, 1205, 1303, 1705, 2103 ... addition unit
 106, 504, 606, 704, 806, 904, 1006, 1104, 1206, 1304, 1707, 2105 ... offset class setting unit
 107, 607, 807, 1007, 1207, 1708 ... filter coefficient set and offset value setting unit
 108, 505, 608, 705, 808, 905, 1008, 1105, 1305, 1709, 2106 ... filtering unit
 109, 609, 809, 1009, 1208, 1710 ... entropy encoding unit
 110, 610, 810, 1010, 1209, 1711 ... encoding control unit
 201, 1802 ... offset selection unit
 202, 1803 ... filter processing unit
 500, 700, 900, 1100, 1300, 2100 ... moving image decoding unit
 501, 701, 901, 1101, 1301, 2101 ... entropy decoding unit
 507, 707, 907, 1107, 1307, 2108 ... decoding control unit
 611, 111 ... pixel adaptive offset setting unit
 612, 708, 1012, 1109 ... pixel adaptive offset processing unit
 811, 908, 1013, 1108 ... deblocking filter processing unit
 1706, 2104 ... filter class setting unit
 1801 ... filter coefficient set selection unit

Claims (20)

  1.  A moving image encoding method comprising:
     setting, for each first unit including one or more pixels in a decoded image, any one of a plurality of offset classes based on a first index indicating an image feature of the first unit;
     setting, based on an input image and the decoded image, a filter coefficient set including a plurality of filter coefficient values and an offset value corresponding to each of the plurality of offset classes; and
     encoding information indicating the filter coefficient set and information indicating the offset value corresponding to each of the plurality of offset classes, to generate encoded data.
  2.  The moving image encoding method according to claim 1, further comprising setting, for each second unit that includes a plurality of pixels in the decoded image and is larger than the first unit, any one of a plurality of filter classes based on a second index indicating an image feature of the second unit, wherein
     the filter coefficient set is set for each of the plurality of filter classes,
     the offset value is set for each combination of the plurality of filter classes and the plurality of offset classes, and
     information indicating the filter coefficient set corresponding to each of the plurality of filter classes and information indicating the offset value corresponding to each combination of the plurality of filter classes and the plurality of offset classes are encoded to generate the encoded data.
  3.  The moving image encoding method according to claim 1, further comprising setting, for each second unit that includes one or more pixels in the decoded image and has the same size as the first unit, any one of a plurality of filter classes based on a second index that reflects an image feature of the second unit and differs from the first index, wherein
     the filter coefficient set is set for each of the plurality of filter classes,
     the offset value is set for each combination of the plurality of filter classes and the plurality of offset classes, and
     information indicating the filter coefficient set corresponding to each of the plurality of filter classes and information indicating the offset value corresponding to each combination of the plurality of filter classes and the plurality of offset classes are encoded to generate the encoded data.
  4.  The moving image encoding method according to claim 2, wherein the second index differs from the first index.
  5.  The moving image encoding method according to claim 2, further comprising controlling the total number of the offset classes for each of the plurality of filter classes, wherein
     information for controlling the total number of the offset classes is further encoded to generate the encoded data.
  6.  The moving image encoding method according to claim 5, wherein the information for controlling the total number of the offset classes indicates whether the total number of the offset classes is plural or one.
  7.  The moving image encoding method according to claim 5, wherein the information for controlling the total number of the offset classes indicates whether the total number of the offset classes is plural or zero.
  8.  The moving image encoding method according to claim 5, wherein
     the total number of the offset classes is controlled by whether or not each adjacent pair of the plurality of offset classes is merged, and
     the information for controlling the total number of the offset classes indicates whether or not each adjacent pair of the plurality of offset classes is merged.
  9.  The moving image encoding method according to claim 2, further comprising switching, for each of the plurality of filter classes, between a first mode in which both the filter coefficient set and the offset value are set and a second mode in which the filter coefficient set is not set and the offset value is set, wherein
     information indicating which of the first mode and the second mode is applied to each of the plurality of filter classes is further encoded to generate the encoded data.
  10.  A moving image decoding method comprising:
     decoding encoded data to generate information indicating a filter coefficient set including a plurality of filter coefficient values and information indicating an offset value corresponding to each of a plurality of offset classes;
     setting, for each first unit including one or more pixels in a decoded image, any one of the plurality of offset classes based on a first index indicating an image feature of the first unit; and
     filtering a target pixel in the decoded image based on the filter coefficient set and the offset value corresponding to the offset class set for the first unit to which the target pixel belongs.
  11.  The moving image decoding method according to claim 10, further comprising setting, for each second unit that includes a plurality of pixels in the decoded image and is larger than the first unit, any one of a plurality of filter classes based on a second index indicating an image feature of the second unit, wherein
     the encoded data is decoded to generate information indicating the filter coefficient set corresponding to each of the plurality of filter classes and information indicating the offset value corresponding to each combination of the plurality of filter classes and the plurality of offset classes, and
     the target pixel is filtered based on the filter coefficient set corresponding to the filter class set for the second unit to which the target pixel belongs and the offset value corresponding to the combination of the filter class set for the second unit to which the target pixel belongs and the offset class set for the first unit to which the target pixel belongs.
  12.  The moving image decoding method according to claim 10, further comprising setting, for each second unit that includes one or more pixels in the decoded image and has the same size as the first unit, any one of a plurality of filter classes based on a second index that reflects an image feature of the second unit and differs from the first index, wherein
     the encoded data is decoded to generate information indicating the filter coefficient set corresponding to each of the plurality of filter classes and information indicating the offset value corresponding to each combination of the plurality of filter classes and the plurality of offset classes, and
     the target pixel is filtered based on the filter coefficient set corresponding to the filter class set for the second unit to which the target pixel belongs and the offset value corresponding to the combination of the filter class set for the second unit to which the target pixel belongs and the offset class set for the first unit to which the target pixel belongs.
  13.  The moving image decoding method according to claim 11, wherein the second index differs from the first index.
  14.  The moving image decoding method according to claim 11, further comprising controlling the total number of the offset classes for each of the plurality of filter classes, wherein
     the encoded data is decoded to further generate information for controlling the total number of the offset classes.
  15.  The moving image decoding method according to claim 14, wherein the information for controlling the total number of the offset classes indicates whether the total number of the offset classes is plural or one.
  16.  The moving image decoding method according to claim 14, wherein the information for controlling the total number of the offset classes indicates whether the total number of the offset classes is plural or zero.
  17.  The moving image decoding method according to claim 14, wherein
     the total number of the offset classes is controlled by whether or not each adjacent pair of the plurality of offset classes is merged, and
     the information for controlling the total number of the offset classes indicates whether or not each adjacent pair of the plurality of offset classes is merged.
  18.  The encoded data is decoded, and information is further generated indicating which of a first mode, in which both the filter coefficient set and the offset value are set, and a second mode, in which the filter coefficient set is not set and the offset value is set, is applied to each of the plurality of filter classes,
     when the first mode is applied to the filter class set in the second unit to which the target pixel belongs, filter processing is performed on the target pixel based on the filter coefficient set corresponding to the filter class set in the second unit to which the target pixel belongs, and the offset value corresponding to the combination of the filter class set in the second unit to which the target pixel belongs and the offset class set in the first unit to which the target pixel belongs, and
     when the second mode is applied to the filter class set in the second unit to which the target pixel belongs, filter processing is performed on the target pixel based on the offset value corresponding to the combination of the filter class set in the second unit to which the target pixel belongs and the offset class set in the first unit to which the target pixel belongs,
     The moving image decoding method according to claim 11.
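A minimal sketch of the two modes in claim 18, assuming (as an illustration only) that the second mode simply adds the offset to the decoded pixel value while the first mode performs a weighted sum over the pixel's window:

```python
def filter_pixel(window, center_value, coeffs, offset, first_mode):
    """Mode 1: weighted sum over the window plus the offset.
    Mode 2: offset-only correction of the decoded pixel value (assumed form)."""
    if first_mode:
        return sum(c * p for c, p in zip(coeffs, window)) + offset
    return center_value + offset
```

With a symmetric 3-tap set and an offset of 2, both modes happen to yield 22 for a locally linear window, but they diverge wherever the neighborhood is not smooth.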
  19.  A moving image encoding apparatus comprising:
     a first setting unit that sets, for each first unit including one or more pixels in a decoded image, any one of a plurality of offset classes based on a first index indicating an image feature of the first unit;
     a second setting unit that sets, based on an input image and the decoded image, a filter coefficient set including a plurality of filter coefficient values and an offset value corresponding to each of the plurality of offset classes; and
     an encoding unit that encodes information indicating the filter coefficient set and information indicating the offset value corresponding to each of the plurality of offset classes to generate encoded data.
  20.  A moving image decoding apparatus comprising:
     a decoding unit that decodes encoded data to generate information indicating a filter coefficient set including a plurality of filter coefficient values and information indicating an offset value corresponding to each of a plurality of offset classes;
     a setting unit that sets, for each first unit including one or more pixels in a decoded image, any one of the plurality of offset classes based on a first index indicating an image feature of the first unit; and
     a filtering unit that performs filter processing on a target pixel in the decoded image based on the filter coefficient set and the offset value corresponding to the offset class set in the first unit to which the target pixel belongs.
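The decoder-side flow of claim 20 (classify each first unit by an image-feature index, then filter with a shared coefficient set plus the per-class offset) might be sketched as below. The activity measure and the one-tap "filter" are hypothetical stand-ins for the unspecified first index and filter shape:

```python
def classify_unit(pixels, num_classes):
    """Hypothetical first index: mean absolute deviation, clipped to a class id."""
    mean = sum(pixels) / len(pixels)
    activity = sum(abs(p - mean) for p in pixels) / len(pixels)
    return min(int(activity), num_classes - 1)

def filter_decoded_image(units, coeff, offsets):
    """Apply a one-tap filter plus the offset of each unit's class."""
    result = []
    for unit in units:
        k = classify_unit(unit, len(offsets))
        result.append([coeff * p + offsets[k] for p in unit])
    return result
```

A flat unit falls into class 0 and receives that class's offset, while a high-activity unit falls into the top class; only the offsets differ, the coefficient set is shared.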
PCT/JP2011/069300 2011-08-26 2011-08-26 Moving image encoding method, moving image decoding method, moving image encoding apparatus and moving image decoding apparatus WO2013030902A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2011/069300 WO2013030902A1 (en) 2011-08-26 2011-08-26 Moving image encoding method, moving image decoding method, moving image encoding apparatus and moving image decoding apparatus


Publications (1)

Publication Number Publication Date
WO2013030902A1 true WO2013030902A1 (en) 2013-03-07

Family

ID=47755458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/069300 WO2013030902A1 (en) 2011-08-26 2011-08-26 Moving image encoding method, moving image decoding method, moving image encoding apparatus and moving image decoding apparatus

Country Status (1)

Country Link
WO (1) WO2013030902A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017195532A1 (en) * 2016-05-13 2017-11-16 シャープ株式会社 Image decoding device and image encoding device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011083713A1 (en) * 2010-01-06 2011-07-14 ソニー株式会社 Device and method for processing image
WO2011089865A1 (en) * 2010-01-21 2011-07-28 パナソニック株式会社 Image encoding method, image decoding method, device therefor, program, and integrated circuit


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHIH-MING FU ET AL.: "CE13: Sample Adaptive Offset with LCU-Independent Decoding", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-E049, 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH, pages 1 - 6 *
CHIH-MING FU ET AL.: "CE8 Subtest3: Picture Quadtree Adaptive Offset", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-D122, 4TH MEETING, January 2011 (2011-01-01), DAEGU, KR, pages 1 - 10 *
CHIH-MING FU ET AL.: "Sample Adaptive Offset with LCU-based Syntax", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-F056, 6TH MEETING, July 2011 (2011-07-01), TORINO, IT, pages 1 - 6 *
I.S.CHONG ET AL.: "CE8 Subtest 2: Block based adaptive loop filter (ALF)", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-E323, 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH, pages 1 - 4 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 11871586
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 11871586
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP