WO2014087861A1 - Image processing device, image processing method, and program - Google Patents
- Publication number
- WO2014087861A1 (application PCT/JP2013/081596, JP2013081596W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- image
- processing
- filter
- units
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/86—Using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/124—Quantisation
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
- H04N19/174—Using adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
- H04N19/176—Using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/182—Using adaptive coding characterised by the coding unit, the unit being a pixel
- H04N19/436—Characterised by implementation details or hardware specially adapted for video compression or decompression, using parallelised computational arrangements
Definitions
- The present technology relates to an image processing apparatus, an image processing method, and a program, and in particular to an image processing apparatus, an image processing method, and a program capable of performing filter processing on a decoded image in parallel in processing units unrelated to the parallel encoding processing units.
- In the HEVC (High Efficiency Video Coding) scheme, a slice or a tile can be used as a parallel encoding processing unit, that is, an encoding processing unit that can be decoded in parallel.
- The present technology has been made in view of such a situation, and makes it possible to perform filter processing on a decoded image in parallel in processing units unrelated to the parallel encoding processing units.
- The image processing apparatus according to the first aspect of the present technology includes a decoding unit that decodes encoded data and generates an image, and a filter processing unit that performs filter processing on the image generated by the decoding unit in parallel in processing units unrelated to a slice.
- The image processing method and program of the first aspect of the present technology correspond to the image processing apparatus of the first aspect of the present technology.
- In the first aspect of the present technology, encoded data is decoded to generate an image, and the image is subjected to filter processing in parallel in processing units unrelated to a slice.
- The image processing apparatus according to the second aspect of the present technology includes a decoding unit that decodes encoded data and generates an image, and a filter processing unit that performs filter processing on the image generated by the decoding unit in parallel in processing units unrelated to a tile.
- In the second aspect of the present technology, encoded data is decoded to generate an image, and the image is subjected to filter processing in parallel in processing units unrelated to a tile.
- FIG. 1 is a block diagram showing a configuration example of a first embodiment of an encoding device as an image processing device to which the present technology is applied.
- The encoding device 11 of FIG. 1 includes an A/D conversion unit 31, a screen rearrangement buffer 32, a calculation unit 33, an orthogonal transform unit 34, a quantization unit 35, a lossless encoding unit 36, an accumulation buffer 37, an inverse quantization unit 38, an inverse orthogonal transform unit 39, an addition unit 40, a deblocking filter 41, an adaptive offset filter 42, an adaptive loop filter 43, a frame memory 44, a switch 45, an intra prediction unit 46, a motion prediction/compensation unit 47, a predicted image selection unit 48, and a rate control unit 49.
- The encoding device 11 encodes an image by a scheme conforming to the HEVC scheme.
- the A / D conversion unit 31 of the encoding device 11 performs A / D conversion on an image in units of frames input from the outside as an input signal, and outputs the image to the screen rearrangement buffer 32 for storage.
- The screen rearrangement buffer 32 rearranges the stored frame-unit images from display order into encoding order according to the GOP structure, and outputs them to the calculation unit 33, the intra prediction unit 46, and the motion prediction/compensation unit 47.
- The calculation unit 33 performs encoding by computing the difference between the predicted image supplied from the predicted image selection unit 48 and the image to be encoded output from the screen rearrangement buffer 32. Specifically, the calculation unit 33 subtracts the predicted image supplied from the predicted image selection unit 48 from the image to be encoded output from the screen rearrangement buffer 32. The calculation unit 33 outputs the resulting image to the orthogonal transform unit 34 as residual information. When no predicted image is supplied from the predicted image selection unit 48, the calculation unit 33 outputs the image read from the screen rearrangement buffer 32 to the orthogonal transform unit 34 as the residual information as it is.
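As a minimal illustration of the subtraction just described (not the patent's implementation; the pixel values and the `residual` helper are hypothetical), the calculation unit's behavior can be sketched as:

```python
# Hypothetical sketch of the calculation unit 33: subtract the predicted
# image from the image to be encoded, producing residual information.
# Images are modeled as flat lists of pixel values for simplicity.

def residual(image, predicted=None):
    # With no predicted image supplied, the input image is passed
    # through unchanged as the residual information.
    if predicted is None:
        return list(image)
    return [p - q for p, q in zip(image, predicted)]

block = [120, 130, 125, 128]
prediction = [118, 131, 124, 130]
print(residual(block, prediction))  # per-pixel difference
print(residual(block))              # no prediction: pass-through
```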
- the orthogonal transformation unit 34 orthogonally transforms the residual information from the calculation unit 33, and supplies the generated orthogonal transformation coefficient to the quantization unit 35.
- the quantization unit 35 quantizes the orthogonal transformation coefficient supplied from the orthogonal transformation unit 34, and supplies the resultant coefficient to the lossless encoding unit 36.
- the lossless encoding unit 36 acquires information indicating the optimal intra prediction mode (hereinafter referred to as intra prediction mode information) from the intra prediction unit 46. Also, the lossless encoding unit 36 acquires, from the motion prediction / compensation unit 47, information indicating the optimal inter prediction mode (hereinafter referred to as inter prediction mode information), a motion vector, information for specifying a reference image, and the like.
- the lossless encoding unit 36 acquires offset filter information related to the offset filter from the adaptive offset filter 42 and acquires filter coefficients from the adaptive loop filter 43.
- the lossless encoding unit 36 performs lossless encoding such as arithmetic coding (for example, CABAC (Context-Adaptive Binary Arithmetic Coding) or the like) on the quantized coefficients supplied from the quantization unit 35.
- The lossless encoding unit 36 losslessly encodes, as encoding information related to the encoding, the intra prediction mode information or the inter prediction mode information, the motion vector, the information for identifying the reference image, the offset filter information, and the filter coefficients. The lossless encoding unit 36 supplies the losslessly encoded encoding information and coefficients (syntax) to the accumulation buffer 37 as encoded data, where they are stored.
- The losslessly encoded encoding information may be used as header information (slice header) of the losslessly encoded coefficients.
- the accumulation buffer 37 temporarily stores the encoded data (bit stream) supplied from the lossless encoding unit 36. Further, the accumulation buffer 37 transmits the stored encoded data.
- the quantized coefficients output from the quantization unit 35 are also input to the inverse quantization unit 38.
- The inverse quantization unit 38 inversely quantizes the coefficients quantized by the quantization unit 35 in parallel in predetermined processing units, and supplies the resulting orthogonal transform coefficients to the inverse orthogonal transform unit 39.
- The inverse orthogonal transform unit 39 performs inverse orthogonal transform on the orthogonal transform coefficients supplied from the inverse quantization unit 38 in parallel in predetermined processing units, and supplies the resulting residual information to the addition unit 40.
- The addition unit 40 functions as a decoding unit, and performs local decoding by carrying out, in parallel in predetermined processing units, addition processing that adds the predicted image supplied from the motion prediction/compensation unit 47 and the residual information supplied from the inverse orthogonal transform unit 39.
- the addition unit 40 supplies the resulting locally decoded image to the frame memory 44.
- the adding unit 40 performs decoding locally by performing addition processing of adding the predicted image supplied from the intra prediction unit 46 and the residual information on a PU (Prediction Unit) basis.
- the addition unit 40 supplies the resulting locally decoded PU-based image to the frame memory 44.
- The addition unit 40 supplies the fully decoded image, in units of pictures, to the deblocking filter 41.
- The deblocking filter 41 performs deblocking filter processing, which removes block distortion, on the image supplied from the addition unit 40 in parallel in predetermined processing units, and supplies the resulting image to the adaptive offset filter 42.
- The adaptive offset filter 42 performs, in parallel in predetermined processing units, adaptive offset filter (SAO: Sample Adaptive Offset) processing, which mainly removes ringing, on each LCU (Largest Coding Unit) of the image after the deblocking filter processing by the deblocking filter 41.
- the adaptive offset filter 42 supplies, to the lossless encoding unit 36, offset filter information which is information on adaptive offset filtering of each LCU.
- the adaptive loop filter 43 is configured by, for example, a two-dimensional Wiener filter.
- the adaptive loop filter 43 performs adaptive loop filter (ALF (Adaptive Loop Filter)) processing for each LCU on the image after adaptive offset filter processing supplied from the adaptive offset filter 42 in parallel in a predetermined processing unit.
- the adaptive loop filter 43 supplies the filter coefficient used in the adaptive loop filter processing of each LCU to the lossless encoding unit 36.
- the frame memory 44 accumulates the image supplied from the adaptive loop filter 43 and the image supplied from the adding unit 40.
- the image supplied from the adaptive loop filter 43 accumulated in the frame memory 44 is output to the motion prediction / compensation unit 47 via the switch 45 as a reference image. Further, the image supplied from the adding unit 40 accumulated in the frame memory 44 is output to the intra prediction unit 46 via the switch 45 as a reference image.
- the intra prediction unit 46 uses the reference image read from the frame memory 44 via the switch 45 and performs the intra prediction process of all candidate intra prediction modes in PU units.
- The intra prediction unit 46 calculates cost function values (described in detail later) for all candidate intra prediction modes, based on the image read from the screen rearrangement buffer 32 and the predicted images generated as a result of the intra prediction processing, for each PU. Then, the intra prediction unit 46 determines, for each PU, the intra prediction mode with the minimum cost function value as the optimal intra prediction mode.
- the intra prediction unit 46 supplies the predicted image generated in the optimal intra prediction mode and the corresponding cost function value to the predicted image selection unit 48 for each PU.
- The cost function value is also referred to as the RD (Rate Distortion) cost. It is calculated based on either the High Complexity mode or the Low Complexity mode, as defined in the JM (Joint Model), the reference software in the H.264/AVC scheme. The H.264/AVC reference software is published at http://iphome.hhi.de/suehring/tml/index.htm.
- Specifically, when the High Complexity mode is adopted as the method of calculating the cost function value, processing up to decoding is tentatively performed for all candidate prediction modes, and the cost function value represented by the following equation (1) is calculated for each prediction mode.

  Cost(Mode) = D + λ · R   (1)

- Here, D is the difference (distortion) between the original image and the decoded image, R is the generated code amount including up to the coefficients of the orthogonal transform, and λ is the Lagrange undetermined multiplier given as a function of the quantization parameter QP.
- On the other hand, when the Low Complexity mode is adopted, the cost function value represented by the following equation (2) is calculated for each candidate prediction mode.

  Cost(Mode) = D + QPtoQuant(QP) · Header_Bit   (2)

- Here, D is the difference (distortion) between the original image and the predicted image, Header_Bit is the code amount of the encoding information, and QPtoQuant is a function given as a function of the quantization parameter QP.
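The two cost function values described above can be sketched as follows. This is an illustrative example only: the candidate mode names, distortion values, rates, and the value of λ are hypothetical placeholders, not values taken from the patent or the JM software.

```python
# Illustrative sketch of the JM-style High/Low Complexity mode decision.

def high_complexity_cost(distortion, rate, lam):
    # Equation (1): Cost(Mode) = D + lambda * R
    return distortion + lam * rate

def low_complexity_cost(distortion, header_bit, qp_to_quant):
    # Equation (2): Cost(Mode) = D + QPtoQuant(QP) * Header_Bit
    return distortion + qp_to_quant * header_bit

def best_mode(candidates, cost_fn):
    # Pick the prediction mode whose cost function value is minimum.
    return min(candidates, key=cost_fn)

# Hypothetical candidates: (mode name, distortion D, rate R in bits).
modes = [("intra_dc", 1200.0, 300),
         ("intra_planar", 900.0, 450),
         ("angular_10", 950.0, 350)]
lam = 4.0  # Lagrange multiplier; a function of QP in a real encoder.
winner = best_mode(modes, lambda m: high_complexity_cost(m[1], m[2], lam))
print(winner[0])
```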
- The intra prediction unit 46 supplies the intra prediction mode information of each PU to the lossless encoding unit 36. In addition, the intra prediction unit 46 performs intra prediction processing in the optimal intra prediction mode, on a PU basis, for each PU for which the selection of the predicted image generated in the optimal intra prediction mode has been notified by the predicted image selection unit 48. The intra prediction unit 46 supplies the resulting predicted image of each PU to the addition unit 40.
- The motion prediction/compensation unit 47 performs motion prediction/compensation processing for all candidate inter prediction modes. Specifically, the motion prediction/compensation unit 47 detects, for each PU, motion vectors in all candidate inter prediction modes based on the image supplied from the screen rearrangement buffer 32 and the reference image read from the frame memory 44 via the switch 45. Then, the motion prediction/compensation unit 47 performs compensation processing on the reference image based on the motion vector for each PU, and generates a predicted image.
- At this time, the motion prediction/compensation unit 47 calculates cost function values for all candidate inter prediction modes based on the image supplied from the screen rearrangement buffer 32 and the predicted image, for each PU.
- Then, the motion prediction/compensation unit 47 determines, for each PU, the inter prediction mode that minimizes the cost function value as the optimal inter prediction mode.
- the motion prediction / compensation unit 47 supplies the cost function value of the optimal inter prediction mode and the corresponding prediction image to the prediction image selection unit 48 for each PU.
- When notified by the predicted image selection unit 48 of the selection of the predicted image generated in the optimal inter prediction mode, the motion prediction/compensation unit 47 outputs the inter prediction mode information, the corresponding motion vector, the information for specifying the reference image, and the like to the lossless encoding unit 36. Also, for each PU for which the selection of the predicted image generated in the optimal inter prediction mode has been notified by the predicted image selection unit 48, the motion prediction/compensation unit 47 performs, in parallel in predetermined processing units, compensation processing of the optimal inter prediction mode on the reference image specified by the reference image specification information, based on the corresponding motion vector. The motion prediction/compensation unit 47 supplies the resulting predicted image in units of pictures to the addition unit 40.
- The predicted image selection unit 48 determines, as the optimal prediction mode, whichever of the optimal intra prediction mode and the optimal inter prediction mode has the smaller corresponding cost function value. Then, the predicted image selection unit 48 supplies the predicted image of the optimal prediction mode to the calculation unit 33. Further, the predicted image selection unit 48 notifies the intra prediction unit 46 or the motion prediction/compensation unit 47 of the selection of the predicted image in the optimal prediction mode.
- the rate control unit 49 controls the rate of the quantization operation of the quantization unit 35 based on the encoded data accumulated in the accumulation buffer 37 so that overflow or underflow does not occur.
- Note that the encoding device 11 may be configured such that the adaptive loop filter 43 is not provided.
- FIG. 2 is a diagram for explaining an LCU which is the largest coding unit in the HEVC scheme.
- An LCU (Largest Coding Unit) 61 of a fixed size, set by the SPS (Sequence Parameter Set), is defined as the largest coding unit. In the example of FIG. 2, the picture is composed of 8 × 8 LCUs 61.
- the LCU can be further divided recursively by quadtree division to form a coding unit CU 62.
- the CU 62 is divided into PUs, which are units of intra prediction or inter prediction, or divided into Transform Units (TUs), which are units of orthogonal transformation.
- the boundary of the LCU 61 is referred to as an LCU boundary.
- FIG. 3 is a diagram illustrating an example of parallel processing units in inverse quantization, inverse orthogonal transformation, addition processing, and compensation processing.
- Inverse quantization, inverse orthogonal transformation, addition processing, and compensation processing can be processed independently in units of LCU. Therefore, the encoding apparatus 11 performs dequantization, inverse orthogonal transformation, addition processing, and compensation processing in parallel in units of Recon Pseudo Slices composed of one or more LCUs 61 regardless of the setting of slices and tiles.
- In the example of FIG. 3, the picture is composed of 8 × 8 LCUs 61, and each Recon Pseudo Slice unit is composed of one row of LCUs 61. Therefore, the picture is composed of eight Recon Pseudo Slice units.
- However, the Recon Pseudo Slice unit is not limited to this, and may be composed of, for example, one or more columns of LCUs 61. That is, the picture may be divided into Recon Pseudo Slices by the vertically extending LCU boundaries 64 instead of by the horizontally extending LCU boundaries 63.
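The row- or column-wise partitioning of the LCU grid into Recon Pseudo Slices can be sketched as follows. The function name and the coordinate representation are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of partitioning a picture's LCU grid into
# Recon Pseudo Slice units, independent of slice/tile settings.
# Each pseudo slice is one row (or one column) of LCUs.

def recon_pseudo_slices(lcu_rows, lcu_cols, by="row"):
    # Return each pseudo slice as a list of (row, col) LCU coordinates.
    if by == "row":
        return [[(r, c) for c in range(lcu_cols)] for r in range(lcu_rows)]
    return [[(r, c) for r in range(lcu_rows)] for c in range(lcu_cols)]

slices = recon_pseudo_slices(8, 8, by="row")
print(len(slices))     # an 8x8-LCU picture yields 8 row pseudo slices
print(len(slices[0]))  # each pseudo slice spans 8 LCUs
```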
- FIG. 4 is a block diagram showing a configuration example of the deblocking filter 41 of FIG.
- the deblocking filter 41 of FIG. 4 includes a buffer 80, a dividing unit 81, processing units 82-1 to 82-n, and an output unit 83.
- The buffer 80 of the deblocking filter 41 holds, in units of pictures, the fully decoded image supplied from the addition unit 40 of FIG. 1.
- The buffer 80 also updates the held decoded image with the images, in the predetermined processing units, after the deblocking filter processing supplied from the processing units 82-1 to 82-n.
- The dividing unit 81 divides the picture-unit image held in the buffer 80 into n × m (n is an integer of 2 or more, m is an integer of 1 or more) predetermined processing units.
- The dividing unit 81 supplies m of the n × m divided images in the predetermined processing units to each of the processing units 82-1 to 82-n.
- The processing units 82-1 to 82-n each perform deblocking filter processing on the images in the predetermined processing units supplied from the dividing unit 81, and supply the resulting images to the buffer 80.
- the output unit 83 supplies the image after deblocking filter processing in units of pictures held in the buffer 80 to the adaptive offset filter 42 in FIG. 1.
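The divide → filter-in-parallel → merge flow of the buffer 80, dividing unit 81, processing units 82-1 to 82-n, and output unit 83 can be sketched as below. The per-unit filter here is a placeholder (a simple clamp), not the HEVC deblocking filter itself, and all names are illustrative.

```python
# Sketch of the deblocking filter 41 pipeline: divide the picture into
# processing units, filter them in parallel, and reassemble the result.
from concurrent.futures import ThreadPoolExecutor

def deblock_unit(unit):
    # Placeholder per-unit filter: clamp pixel values to [0, 255].
    return [min(255, max(0, p)) for p in unit]

def deblock_picture(picture, n_workers=4):
    # dividing unit: `picture` is already a list of per-unit chunks;
    # processing units: filter the chunks in parallel;
    # output unit: reassemble the filtered picture in order.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(deblock_unit, picture))

picture = [[300, 10, -5], [128, 64, 255]]  # two processing units
print(deblock_picture(picture))
```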
- <Example of Parallel Processing Units of Deblocking Filter Processing>
- FIGS. 5 to 8 are diagrams for explaining parallel processing units of the deblocking filter processing on the luminance component (luma) of an image.
- Circles in FIG. 5 represent pixels.
- In the deblocking filter processing of the HEVC scheme, horizontal deblocking filter processing is first performed on the pixels aligned in the horizontal direction over the entire picture, and then vertical deblocking filter processing is performed on the pixels aligned in the vertical direction over the entire picture.
- In the horizontal deblocking filter processing, at each boundary occurring every 8 pixels in the rightward direction, the pixel values of up to 3 pixels on each of the left and right of the boundary are rewritten using the pixel values of up to 4 pixels on each of the left and right of the boundary (for example, the pixels represented by the circles labeled 0 to 7 in FIG. 5).
- Similarly, in the vertical deblocking filter processing, at each boundary occurring every 8 pixels downward from the horizontally extending LCU boundary 63, the pixel values of up to 3 pixels above and below the boundary are rewritten using the pixel values of up to 4 pixels above and below the boundary (for example, the pixels represented by the circles labeled a to h in FIG. 5).
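The read and write extents at one boundary of the 8-pixel grid just described can be illustrated with a small sketch; the helper function is hypothetical. Because at most 4 pixels on each side are read and at most 3 are rewritten, partitions whose edges lie 4 pixels away from any filtered boundary do not overlap, which is what makes the independent pseudo-slice processing described next possible.

```python
# Sketch of the pixel extents involved at one deblocking boundary:
# the filter reads up to 4 pixels on each side of an 8-pixel-grid
# boundary and rewrites up to 3 on each side. Positions are pixel
# indices along one row or column; the boundary lies between
# index (boundary - 1) and index boundary.

def deblock_extents(boundary):
    read = list(range(boundary - 4, boundary + 4))   # pixels examined
    write = list(range(boundary - 3, boundary + 3))  # pixels rewritten
    return read, write

read, write = deblock_extents(8)  # boundary at pixel 8
print(read)   # pixels 4..11 are read
print(write)  # pixels 5..10 may be rewritten
```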
- Therefore, the horizontally extending boundary (De-blocking Pseudo boundary 91) of the minimum unit DBK Pseudo Slice Min, the smallest unit DBK Pseudo Slice for which deblocking filtering can be processed independently without using other units DBK Pseudo Slice, runs in the horizontal direction.
- A unit DBK Pseudo Slice serving as a parallel processing unit of the deblocking filter processing for the luminance component of an image (hereinafter referred to as a parallel processing unit DBK Pseudo Slice) can be a unit whose boundaries are De-blocking Pseudo boundaries 91 at every multiple of 8 pixels.
- That is, the parallel processing unit DBK Pseudo Slice of the deblocking filter processing for the luminance component of an image can have, as its boundary, a De-blocking Pseudo boundary 91 located 4 pixels above an LCU boundary 63.
- However, the upper boundary of the uppermost parallel processing unit DBK Pseudo Slice and the lower boundary of the lowermost parallel processing unit DBK Pseudo Slice are LCU boundaries 63.
- For example, when a picture is composed of 8 × 8 LCUs 61, the picture is composed of eight parallel processing units DBK Pseudo Slice.
- the encoding apparatus 11 performs deblocking filter processing in parallel using the parallel processing unit DBK Pseudo Slice regardless of whether a slice or tile is set.
- In the above, the horizontally extending De-blocking Pseudo boundary 91 of the minimum unit DBK Pseudo Slice Min is set as the boundary of the parallel processing unit DBK Pseudo Slice; however, as shown in FIG. 6, the vertically extending De-blocking Pseudo boundary 101 of the minimum unit DBK Pseudo Slice Min can also be used as the boundary of the parallel processing unit DBK Pseudo Slice.
- The De-blocking Pseudo boundary 101 is located 4 pixels to the right of the vertically extending LCU boundary 64 and at every 8 pixels from that position. Therefore, the parallel processing unit DBK Pseudo Slice can be a unit having De-blocking Pseudo boundaries 101 at every multiple of 8 pixels.
- the horizontally extending border De-blocking Pseudo boundary of the smallest unit DBK Pseudo Slice Min of the color component is identical to the border De-blocking Pseudo boundary 91 of the luminance component shown in FIG. .
- the vertically extending boundary De-blocking Pseudo boundary of the smallest unit DBK Pseudo Slice Min of the color component is located at a position two pixels to the right of LCU boundary 64 extending in the vertical direction and four pixels each from that position. is there.
- the parallel processing unit DBK Pseudo Slice aligned in the horizontal direction of the deblocking filter processing for the color component of the image is a unit having the boundary De-blocking Pseudo boundary for each pixel which is a multiple of four.
- the horizontally extending boundary De-blocking Pseudo boundary of the smallest unit DBK Pseudo Slice Min of the color component is located 2 pixels above the horizontally extending LCU boundary 63, and at every 4 pixels above that position.
- the vertically extending boundary De-blocking Pseudo boundary of the smallest unit DBK Pseudo Slice Min of the color component is located two pixels to the right of the vertically extending LCU boundary 64, and at every four pixels from that position.
- the parallel processing unit DBK Pseudo Slice of the deblocking filter processing for the color component of the image is a unit whose boundary De-blocking Pseudo boundary falls at every multiple of four pixels.
- the horizontal and vertical boundaries De-blocking Pseudo boundary of the smallest unit DBK Pseudo Slice Min of the color component are identical to the boundary De-blocking Pseudo boundary 91 of the luminance component in FIG. 5 and the boundary De-blocking Pseudo boundary 101 of the luminance component in FIG. 8, respectively.
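- the boundary spacings above can be illustrated with a short sketch (an informal aid, not part of the described apparatus; the halving of the chroma offsets is as stated above, and the picture widths used are arbitrary):

```python
def pseudo_boundary_positions(extent, offset, step):
    """Candidate De-blocking Pseudo boundary positions along one axis:
    the first candidate lies `offset` pixels past the LCU boundary at 0,
    and further candidates repeat every `step` pixels."""
    return list(range(offset, extent, step))

# Luminance: 4 pixels right of the vertical LCU boundary, then every
# 8 pixels, so every candidate falls on 4 plus a multiple of eight.
luma = pseudo_boundary_positions(64, offset=4, step=8)

# Color component: 2 pixels right of the LCU boundary, then every 4
# pixels, i.e. half the luminance spacing.
chroma = pseudo_boundary_positions(32, offset=2, step=4)

print(luma)    # [4, 12, 20, 28, 36, 44, 52, 60]
print(chroma)  # [2, 6, 10, 14, 18, 22, 26, 30]
```

Doubling each chroma position lands exactly on a luminance position, which is why the color-component pseudo slices can be aligned with the luminance ones.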
- FIG. 9 is a block diagram showing a configuration example of the adaptive offset filter 42 of FIG.
- the adaptive offset filter 42 shown in FIG. 9 includes a buffer 110, a dividing unit 111, a buffer 112, processing units 113-1 to 113-n, and an output unit 114.
- the buffer 110 of the adaptive offset filter 42 holds the image after the deblocking filter processing in units of pictures supplied from the deblocking filter 41 of FIG. 1.
- the buffer 110 updates the image after the deblocking filter processing to the image after the adaptive offset filter processing supplied from the processing units 113-1 to 113-n.
- the buffer 110 holds offset filter information of each LCU supplied from the processing units 113-1 to 113-n in association with the image after adaptive offset filter processing.
- the dividing unit 111 divides the image after deblocking filter processing in units of pictures held in the buffer 110 into n × m predetermined processing units.
- the dividing unit 111 supplies m of the n × m divided processing-unit images to each of the processing units 113-1 to 113-n. Further, the dividing unit 111 supplies the pixel values of the pixels at the boundaries of the predetermined processing units among the divided n × m processing-unit images to the buffer 112, which holds them.
- the buffer 112 functions as a holding unit, and holds the pixel value supplied from the dividing unit 111.
- the processing units 113-1 to 113-n perform adaptive offset filter processing for each LCU on the processing-unit images supplied from the dividing unit 111, using the pixel values held in the buffer 112. Then, each of the processing units 113-1 to 113-n supplies to the buffer 110 the image after adaptive offset filter processing of each LCU, together with offset filter information representing the type of the corresponding adaptive offset filter processing and the offset used in it.
- the output unit 114 supplies the image after the adaptive offset filter processing in units of pictures held in the buffer 110 to the adaptive loop filter 43 of FIG. 1 and supplies the offset filter information of each LCU to the lossless encoding unit 36.
- Circles in FIG. 10 represent pixels.
- a unit having, as its boundary, the boundary of an arbitrary pixel in the vertical direction is taken as the parallel processing unit SAO Pseudo Slice of the adaptive offset filter processing.
- the picture is divided into three parallel processing units SAO Pseudo Slice.
- the division unit 111 causes the buffer 112 to hold the pixel value of the pixel at the boundary of the parallel processing unit SAO Pseudo Slice.
- the pixel values of the pixels represented by the circles labeled A to C and the like in the lowermost row of the uppermost parallel processing unit SAO Pseudo Slice are held in the buffer 112.
- likewise, the pixel values of the pixels represented by the circles labeled X to Z and the like in the topmost row of the lowermost parallel processing unit SAO Pseudo Slice, and by the circles labeled U to W and the like in the lowermost row of the central parallel processing unit SAO Pseudo Slice, are held in the buffer 112.
- the held topmost-row pixels of a parallel processing unit SAO Pseudo Slice are used, when necessary, in the adaptive offset filter processing of the lowermost-row pixels of the parallel processing unit SAO Pseudo Slice above it. Likewise, the held lowermost-row pixels of a parallel processing unit SAO Pseudo Slice are used, when necessary, in the adaptive offset filter processing of the topmost-row pixels of the parallel processing unit SAO Pseudo Slice below it.
- if those boundary pixel values were not held in the buffer 112, the processing units 113-1 to 113-n would need to read them from the buffer 110.
- however, since the processing units 113-1 to 113-n perform the adaptive offset filter processing asynchronously, those pixel values may already have been updated in the buffer 110 to the values after adaptive offset filter processing, and the adaptive offset filter processing might then not be performed accurately.
- the boundary of the parallel processing unit SAO Pseudo Slice may be an LCU boundary 63 extending in the horizontal direction.
- when the picture is composed of 8×8 LCUs 61, the picture is composed of eight parallel processing units SAO Pseudo Slice.
- the boundary of the parallel processing unit SAO Pseudo Slice may be a boundary De-blocking Pseudo boundary 91 extending in the horizontal direction.
- the boundary of the parallel processing unit SAO Pseudo Slice can be a horizontal boundary of any pixel.
- the boundary of the parallel processing unit SAO Pseudo Slice may be an LCU boundary 64 extending in the vertical direction, or may be a boundary De-blocking Pseudo boundary 101 extending in the vertical direction.
- the parallel processing unit SAO Pseudo Slice can be made identical to the parallel processing unit DBK Pseudo Slice.
- FIG. 15 is a block diagram showing a configuration example of the adaptive loop filter 43 of FIG.
- the adaptive loop filter 43 of FIG. 15 includes a buffer 120, a dividing unit 121, processing units 122-1 to 122-n, and an output unit 123.
- the buffer 120 of the adaptive loop filter 43 holds the image after adaptive offset filter processing in units of pictures supplied from the adaptive offset filter 42 of FIG. 1.
- the buffer 120 updates the image after the adaptive offset filter processing to the image after the adaptive loop filter processing supplied from the processing units 122-1 to 122-n. Further, the buffer 120 holds the filter coefficients of the LCUs supplied from the processing units 122-1 to 122-n in association with the image after the adaptive loop filter processing.
- the dividing unit 121 divides the image after the adaptive offset filter processing in units of pictures held in the buffer 120 into n × m predetermined processing units.
- the dividing unit 121 supplies m of the n × m divided processing-unit images to each of the processing units 122-1 to 122-n.
- the processing units 122-1 to 122-n each calculate, for the processing-unit images supplied from the dividing unit 121, the filter coefficients used in the adaptive loop filter processing for each LCU, and perform the adaptive loop filter processing using those filter coefficients. Then, the processing units 122-1 to 122-n each supply the image after the adaptive loop filter processing of each LCU and the corresponding filter coefficients to the buffer 120.
- the processing unit of the adaptive loop filter process is not limited to the LCU, but by matching it with the processing unit of the other filter processes, processing can be performed efficiently.
- the output unit 123 supplies the image after the adaptive loop filter processing in units of pictures held in the buffer 120 to the frame memory 44 in FIG. 1, and supplies the filter coefficient of each LCU to the lossless encoding unit 36.
- <Example of parallel processing unit of adaptive loop filter processing> FIGS. 16 to 19 are diagrams for explaining parallel processing units of the adaptive loop filter processing.
- Circles in FIG. 16 represent pixels.
- since the adaptive loop filter processing of a pixel is performed using pixels within four pixels in the horizontal and vertical directions centered on that pixel, the horizontally extending boundary ALF Pseudo boundary 131 of the smallest unit ALF Pseudo Slice Min, i.e. the smallest unit ALF Pseudo Slice that can perform adaptive loop filter processing independently without using other units ALF Pseudo Slice, is located 4 pixels above the horizontally extending LCU boundary 63.
- a unit ALF Pseudo Slice serving as a parallel processing unit of the adaptive loop filter processing (hereinafter referred to as a parallel processing unit ALF Pseudo Slice) has as its boundary the boundary ALF Pseudo boundary 131, which is four pixels above the LCU boundary 63.
- when the picture is composed of 8×8 LCUs 61, the picture is composed of eight parallel processing units ALF Pseudo Slice.
- whether or not a slice or a tile is set, the parallel processing unit ALF Pseudo Slice is set regardless of the slice or the tile.
- the horizontally extending boundary ALF Pseudo boundary 131 of the smallest unit ALF Pseudo Slice Min is located four pixels above the horizontally extending LCU boundary 63, while the horizontally extending boundary De-blocking Pseudo boundary 91 of the smallest unit DBK Pseudo Slice Min is located 4 pixels above the horizontally extending LCU boundary 63 and at every 8 pixels above that position. Therefore, as shown in FIG. 18, the parallel processing unit DBK Pseudo Slice can be made identical to the parallel processing unit ALF Pseudo Slice.
- the parallel processing unit SAO Pseudo Slice of adaptive offset filtering can be made a unit having the boundary in the vertical direction of any pixel as the boundary. Therefore, as shown in FIG. 19, the parallel processing unit SAO Pseudo Slice can be made the same as the parallel processing unit ALF Pseudo Slice.
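- the alignment claims above can be checked with a small arithmetic sketch (illustrative only; a 64-pixel LCU height and a picture of 8×8 LCUs are assumed):

```python
LCU = 64          # assumed LCU height in pixels
HEIGHT = 8 * LCU  # a picture of 8x8 LCUs

def alf_boundaries():
    """Boundary ALF Pseudo boundary 131: four pixels above each
    horizontally extending LCU boundary 63."""
    return {b - 4 for b in range(LCU, HEIGHT + 1, LCU)}

def dbk_boundaries():
    """Boundary De-blocking Pseudo boundary 91: 4 pixels above each
    horizontal LCU boundary, and every 8 pixels above that position."""
    out = set()
    for b in range(LCU, HEIGHT + 1, LCU):
        out.update(range(b - 4, b - LCU, -8))
    return out

# Every ALF boundary is also a deblocking boundary, so the parallel
# processing units DBK Pseudo Slice and ALF Pseudo Slice can coincide.
print(alf_boundaries() <= dbk_boundaries())  # True
```

Since the SAO pseudo-slice boundary may be placed at any pixel row, it can trivially be placed on these same positions, which is why all three filters can share one parallel processing unit.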
- in step S31 of FIG. 20, the A / D conversion unit 31 of the encoding device 11 performs A / D conversion on the image in frame units input as an input signal from the outside, and outputs the image to the screen rearrangement buffer 32 for storage.
- step S32 the screen rearrangement buffer 32 rearranges the images of the frames of the stored display order into the order for encoding in accordance with the GOP structure.
- the screen rearrangement buffer 32 supplies the image in frame units after the rearrangement to the calculation unit 33, the intra prediction unit 46, and the motion prediction / compensation unit 47.
- the processing of the subsequent steps S33 to S37 is performed in PU units.
- step S33 the intra prediction unit 46 performs intra prediction processing in all candidate intra prediction modes.
- the intra prediction unit 46 calculates cost function values for all candidate intra prediction modes based on the image read from the screen rearrangement buffer 32 and the predicted images generated as a result of the intra prediction process. Then, the intra prediction unit 46 determines the intra prediction mode with the smallest cost function value as the optimal intra prediction mode. The intra prediction unit 46 supplies the predicted image generated in the optimal intra prediction mode and the corresponding cost function value to the predicted image selection unit 48.
- the motion prediction / compensation unit 47 performs motion prediction / compensation processing for all candidate inter prediction modes. Also, the motion prediction / compensation unit 47 calculates cost function values for all candidate inter prediction modes based on the image supplied from the screen rearrangement buffer 32 and the prediction image, and the cost function value The smallest inter prediction mode is determined as the optimum inter prediction mode. Then, the motion prediction / compensation unit 47 supplies the cost function value of the optimal inter prediction mode and the corresponding prediction image to the prediction image selection unit 48.
- step S34 the predicted image selection unit 48 selects one of the optimal intra prediction mode and the optimal inter prediction mode based on the cost function values supplied from the intra prediction unit 46 and the motion prediction / compensation unit 47 in the process of step S33. The one with the smallest cost function value is determined as the optimal prediction mode. Then, the prediction image selection unit 48 supplies the prediction image of the optimal prediction mode to the calculation unit 33.
- step S35 the prediction image selection unit 48 determines whether the optimum prediction mode is the optimum inter prediction mode. If it is determined in step S35 that the optimal prediction mode is the optimal inter prediction mode, the predicted image selection unit 48 notifies the motion prediction / compensation unit 47 of selection of a predicted image generated in the optimal inter prediction mode.
- step S36 the motion prediction / compensation unit 47 supplies the inter prediction mode information, the motion vector, and the information specifying the reference image to the lossless encoding unit 36.
- if it is determined in step S35 that the optimal prediction mode is not the optimal inter prediction mode, that is, if the optimal prediction mode is the optimal intra prediction mode, the predicted image selection unit 48 notifies the intra prediction unit 46 of the selection of the predicted image generated in the optimal intra prediction mode. Then, in step S37, the intra prediction unit 46 supplies the intra prediction mode information to the lossless encoding unit 36, and advances the process to step S38.
- step S38 the computing unit 33 performs encoding by subtracting the predicted image supplied from the predicted image selecting unit 48 from the image supplied from the screen rearrangement buffer 32.
- the calculation unit 33 outputs the image obtained as a result to the orthogonal transformation unit 34 as residual information.
- step S39 the orthogonal transformation unit 34 performs orthogonal transformation on the residual information from the calculation unit 33, and supplies the orthogonal transformation coefficient obtained as a result to the quantization unit 35.
- step S40 the quantization unit 35 quantizes the coefficients supplied from the orthogonal transformation unit 34, and supplies the resulting coefficients to the lossless encoding unit 36 and the inverse quantization unit 38.
- in step S41 of FIG. 21, the inverse quantization unit 38 performs inverse quantization parallel processing in which inverse quantization is performed on the quantized coefficients supplied from the quantization unit 35 in parallel in units of Recon Pseudo Slice.
- the details of the inverse quantization parallel processing will be described with reference to FIG. 22 described later.
- step S42 the inverse orthogonal transformation unit 39 performs inverse orthogonal transformation parallel processing for performing inverse orthogonal transformation in parallel on the orthogonal transformation coefficient supplied from the inverse quantization unit 38 in Recon Pseudo Slice units.
- the details of the inverse orthogonal transformation parallel processing will be described with reference to FIG. 23 described later.
- in step S43, the motion prediction / compensation unit 47 performs inter prediction parallel processing in which compensation processing in the optimal inter prediction mode is performed, in parallel in Recon Pseudo Slice units, for the PUs notified from the predicted image selection unit 48 of the selection of the predicted image generated in the optimal inter prediction mode. The details of the inter prediction parallel processing will be described with reference to FIG. 24 described later.
- in step S44, the addition unit 40 performs addition parallel processing of adding the residual information supplied from the inverse orthogonal transformation unit 39 and the predicted image supplied from the motion prediction / compensation unit 47, in parallel in Recon Pseudo Slice units.
- the details of the addition parallel processing will be described with reference to FIG. 25 described later.
- step S45 the encoding device 11 performs the intra prediction process of the optimal intra prediction mode of the PU to which the selection of the predicted image generated in the optimal intra prediction mode is notified from the predicted image selection unit 48.
- the details of the intra prediction process will be described with reference to FIG. 26 described later.
- in step S46, the deblocking filter 41 performs deblocking filter parallel processing in which deblocking filter processing is performed in parallel on the decoded image supplied from the adding unit 40, using the m parallel processing units DBK Pseudo Slice.
- This deblocking filter parallel processing will be described with reference to FIG. 27 described later.
- in step S47, the adaptive offset filter 42 performs adaptive offset filter parallel processing in which adaptive offset filter processing is performed for each LCU, in parallel using the m parallel processing units SAO Pseudo Slice, on the image supplied from the deblocking filter 41.
- the details of the adaptive offset filter parallel processing will be described with reference to FIG. 28 described later.
- in step S48, the adaptive loop filter 43 performs adaptive loop filter parallel processing in which adaptive loop filter processing is performed for each LCU, in parallel using the m parallel processing units ALF Pseudo Slice, on the image supplied from the adaptive offset filter 42. Details of the adaptive loop filter parallel processing will be described with reference to FIG. 29 described later.
- step S49 the frame memory 44 stores the image supplied from the adaptive loop filter 43. This image is output to the intra prediction unit 46 via the switch 45 as a reference image.
- in step S50, the lossless encoding unit 36 losslessly encodes, as the coding information, the intra prediction mode information or the inter prediction mode information, the motion vector, the information specifying the reference image, the offset filter information, and the filter coefficients.
- step S51 the lossless encoding unit 36 losslessly encodes the quantized coefficient supplied from the quantization unit 35. Then, the lossless encoding unit 36 generates encoded data from the encoded information losslessly encoded in the process of step S50 and the losslessly encoded coefficient, and supplies the encoded data to the accumulation buffer 37.
- step S52 the accumulation buffer 37 temporarily accumulates the encoded data supplied from the lossless encoding unit 36.
- step S53 the rate control unit 49 controls the rate of the quantization operation of the quantization unit 35 based on the encoded data accumulated in the accumulation buffer 37 so that an overflow or an underflow does not occur.
- step S54 the accumulation buffer 37 transmits the stored encoded data.
- in step S33, in order to simplify the description, the intra prediction process and the motion prediction / compensation process are always performed, but in actuality, only one of them may be performed depending on the picture type and the like.
- FIG. 22 is a flowchart for explaining the details of the inverse quantization parallel processing in step S41 of FIG.
- step S71 in FIG. 22 the inverse quantization unit 38 divides the quantized coefficients supplied from the quantization unit 35 into n (n is an integer of 2 or more) Recon Pseudo Slices.
- step S72 the inverse quantization unit 38 sets the count value i to zero.
- step S73 the inverse quantization unit 38 determines whether the count value i is smaller than n. If it is determined in step S73 that the count value i is smaller than n, in step S74, the inverse quantization process for the i-th Recon Pseudo Slice of the divided Recon Pseudo Slices is started.
- step S75 the inverse quantization unit 38 increments the count value i by one. Then, the process returns to step S73, and the processes of steps S73 to S75 are repeated until the count value i becomes n or more, that is, the inverse quantization process for all divided Recon Pseudo Slices is started.
- if it is determined in step S73 that the count value i is not smaller than n, the inverse quantization unit 38 determines in step S76 whether all the n inverse quantization processes started in step S74 have been completed. If it is determined that they have not all been completed, it waits until they are all completed.
- if it is determined in step S76 that all the n inverse quantization processes started in step S74 have been completed, the inverse quantization unit 38 supplies the orthogonal transformation coefficients obtained as a result of the inverse quantization processes to the inverse orthogonal transformation unit 39. Then, the process returns to step S41 in FIG. 21 and proceeds to step S42.
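- the dispatch pattern of steps S71 to S76 — start one process per Recon Pseudo Slice while incrementing a count, then wait for all n to finish — can be sketched with threads (a schematic model only; the per-slice operation below is a placeholder, since real inverse quantization depends on the quantization parameter):

```python
import threading

def parallel_over_slices(slices, process):
    """Steps S71-S76 as a dispatch loop: start processing of slice i for
    i = 0..n-1, incrementing a count (S73-S75), then wait until all n
    started processes have completed (S76)."""
    results = [None] * len(slices)
    threads = []
    for i, s in enumerate(slices):  # S72-S75: launch one worker per slice
        t = threading.Thread(
            target=lambda i=i, s=s: results.__setitem__(i, process(s)))
        t.start()
        threads.append(t)
    for t in threads:               # S76: wait for all n processes to end
        t.join()
    return results

# Placeholder dequantization: scale each coefficient by a fixed step.
dequant = lambda coeffs: [c * 2 for c in coeffs]

print(parallel_over_slices([[1, 2], [3], [4, 5]], dequant))
# [[2, 4], [6], [8, 10]]
```

The same dispatch-then-join shape covers the inverse orthogonal transformation, inter prediction, and addition parallel processing described next, with only `process` changing.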
- FIG. 23 is a flow chart for explaining the details of the inverse orthogonal transformation parallel processing of step S42 of FIG.
- steps S91 to S96 in FIG. 23 are the same as the processes in steps S71 to S76 in FIG. 22 except that the inverse quantization process is replaced with the inverse orthogonal transformation process, and thus the description thereof is omitted.
- the residual information obtained as a result of the inverse orthogonal transformation process is supplied to the addition unit 40.
- FIG. 24 is a flowchart for describing the details of the inter prediction parallel processing in step S43 of FIG.
- this processing is the same as the processing of steps S71 to S76 in FIG. 22 except that the inverse quantization process is replaced with the compensation process of the optimal inter prediction mode, performed in Recon Pseudo Slice units, for the PUs notified from the predicted image selection unit 48 of the selection of the predicted image generated in the optimal inter prediction mode; the description is therefore omitted.
- the predicted image obtained as a result of the compensation process is supplied to the adding unit 40.
- FIG. 25 is a flowchart for describing the details of the addition parallel processing of step S44 of FIG.
- this processing is the same as the processing of steps S71 to S76 in FIG. 22 except that the inverse quantization process is replaced with the addition process of adding, in Recon Pseudo Slice units, the predicted image of each PU supplied from the motion prediction / compensation unit 47 and the residual information of that PU supplied from the inverse orthogonal transform unit 39; the description is therefore omitted.
- the decoded image obtained as a result of the addition process is supplied to the frame memory 44.
- FIG. 26 is a flowchart for describing the details of the intra prediction process in step S45 of FIG.
- step S140 in FIG. 26 the intra prediction unit 46 sets the count value i to zero.
- step S141 the intra prediction unit 46 determines whether the count value i is smaller than the total number of LCUs of the picture. If it is determined in step S141 that the count value i is smaller than the total number of LCUs of the picture, the process proceeds to step S142.
- step S142 the intra prediction unit 46 sets the count value j to zero.
- in step S143, the intra prediction unit 46 determines whether the count value j is smaller than the total number of PUs in the i-th LCU. If it is determined in step S143 that the count value j is smaller than the total number of PUs in the i-th LCU, the intra prediction unit 46 determines, in step S144, whether the selection of the predicted image of the optimal intra prediction mode has been notified from the predicted image selection unit 48 for the j-th PU of the i-th LCU in the picture.
- step S144 If it is determined in step S144 that the selection of the prediction image of the optimal intra prediction mode has been notified to the j-th PU, the process proceeds to step S145.
- step S145 the intra prediction unit 46 performs the intra prediction process in the optimal intra prediction mode on the j-th PU using the reference image supplied from the frame memory 44 via the switch 45.
- the intra prediction unit 46 supplies the predicted image of the j-th PU obtained as a result to the addition unit 40.
- in step S146, the adding unit 40 adds the predicted image of the j-th PU supplied from the intra prediction unit 46 and the residual information of that PU supplied from the inverse orthogonal transform unit 39, and supplies the decoded image in PU units obtained as the addition result to the frame memory 44.
- in step S147, the frame memory 44 stores the decoded image in PU units supplied from the adding unit 40. This image is output to the motion prediction / compensation unit 47 via the switch 45 as a reference image.
- after the process of step S147, or when it is determined in step S144 that the selection of the predicted image of the optimal intra prediction mode has not been notified for the j-th PU, the intra prediction unit 46 increments the count value j by one in step S148. Then, the process returns to step S143, and the processes of steps S143 to S148 are repeated until the count value j becomes equal to or more than the total number of PUs in the i-th LCU, that is, until the processes of steps S144 to S148 have been performed on all PUs in the i-th LCU.
- if it is determined in step S143 that the count value j is not smaller than the total number of PUs in the i-th LCU, the process proceeds to step S149.
- in step S149, the intra prediction unit 46 increments the count value i by one. Then, the process returns to step S141, and the processes of steps S141 to S149 are repeated until the count value i becomes equal to or more than the total number of LCUs of the picture, that is, until the processes of steps S142 to S149 have been performed on all LCUs of the picture.
- if it is determined in step S141 that the count value i is not smaller than the total number of LCUs of the picture, the adding unit 40 supplies the decoded images of all the LCUs constituting the picture to the deblocking filter 41, and the process returns to step S45 of FIG. 21. Then, the process proceeds to step S46.
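- the nested count loops of steps S140 to S149 amount to visiting every PU of every LCU and running intra prediction only for the PUs whose optimal-intra-mode predicted image was selected; a schematic sketch (the per-PU selection flag and `predict` function are hypothetical stand-ins for the notification from the predicted image selection unit 48 and the step-S145 prediction):

```python
def intra_predict_selected(picture, is_intra_selected, predict):
    """Walk the LCUs (count i, steps S141/S149) and the PUs inside each
    LCU (count j, steps S143/S148); run the intra prediction of step
    S145 only for PUs that pass the step-S144 selection check."""
    predicted = []
    for i, lcu in enumerate(picture):    # i < total LCUs in the picture
        for j, pu in enumerate(lcu):     # j < total PUs in the i-th LCU
            if is_intra_selected(i, j):  # S144: selection notified?
                predicted.append(((i, j), predict(pu)))
    return predicted

# Toy picture: 2 LCUs with 2 PUs each; intra is selected for even PUs.
picture = [["pu00", "pu01"], ["pu10", "pu11"]]
out = intra_predict_selected(picture,
                             lambda i, j: j % 2 == 0,
                             lambda pu: pu.upper())
print(out)  # [((0, 0), 'PU00'), ((1, 0), 'PU10')]
```

Unlike the filter stages, this walk is sequential per PU because each intra-predicted PU is added and stored to the frame memory before later PUs may reference it.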
- FIG. 27 is a flow chart for explaining the details of the deblocking filter parallel processing of step S46 of FIG.
- step S150 of FIG. 27 the buffer 80 holds the decoded image supplied from the adding unit 40 of FIG.
- in step S151, the dividing unit 81 divides the image in units of pictures held in the buffer 80 into unit DBK Pseudo Slices at the boundaries De-blocking Pseudo boundary.
- step S152 the dividing unit 81 determines the number m of unit DBK Pseudo Slices to be allocated to each of the n processing units 82-1 to 82-n.
- step S153 the dividing unit 81 sets the count value i to zero.
- step S154 the dividing unit 81 determines whether the count value i is smaller than n.
- if it is determined in step S154 that the count value i is smaller than n, the dividing unit 81 supplies the i-th set of m unit DBK Pseudo Slices to the processing unit 82-i. Then, in step S155, the processing unit 82-i starts the deblocking filter processing on the i-th set of m unit DBK Pseudo Slices.
- the unit DBK Pseudo Slice after deblocking filtering is supplied to the buffer 80 and held.
- step S156 the division unit 81 increments the count value i by 1 and returns the process to step S154. Then, the processes of steps S154 to S156 are repeated until the count value i becomes n or more, that is, until the deblocking filtering process is started in all the processing units 82-1 to 82-n.
- if it is determined in step S154 that the count value i is not smaller than n, that is, if the deblocking filter processing has been started in all the processing units 82-1 to 82-n, the process proceeds to step S157.
- step S157 the output unit 83 determines whether the n deblocking filter processes by the processing units 82-1 to 82-n have been completed.
- step S157 If it is determined in step S157 that the n deblocking filter processes by the processing units 82-1 to 82-n are not completed, the output unit 83 waits until the n deblocking filter processes are completed.
- if it is determined in step S157 that the n deblocking filter processes have ended, the output unit 83 outputs, in step S158, the image in picture units after the deblocking filter processing held in the buffer 80 to the adaptive offset filter 42. Then, the process returns to step S46 of FIG. 21 and proceeds to step S47.
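- steps S151 and S152 — splitting the picture at the boundaries De-blocking Pseudo boundary and deciding how many unit DBK Pseudo Slices m each of the n processing units receives — can be sketched as follows (the even split with the remainder handed to the last processing unit is an assumption for illustration; the patent only states that m is determined):

```python
def allocate_units(units, n):
    """Determine m, the number of unit DBK Pseudo Slices per processing
    unit, and build one batch per processing unit 82-1 .. 82-n.
    Assumption: remainder units go to the last processing unit."""
    m = max(1, len(units) // n)
    batches = [units[i * m:(i + 1) * m] for i in range(n)]
    batches[-1].extend(units[n * m:])  # leftovers for an uneven split
    return m, batches

# e.g. eight unit DBK Pseudo Slices (an 8x8-LCU picture) over n = 3
# processing units.
units = [f"DBK_Pseudo_Slice_{k}" for k in range(8)]
m, batches = allocate_units(units, n=3)
print(m)        # 2
print(batches)  # the last processing unit receives the two leftovers
```

The same allocation step recurs in the adaptive offset filter parallel processing (step S172) and the adaptive loop filter parallel processing, only with the unit type changed.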
- FIG. 28 is a flowchart for describing the details of the adaptive offset filter parallel processing in step S47 of FIG.
- although FIG. 28 describes the case where the boundary of the parallel processing unit SAO Pseudo Slice is the horizontally extending LCU boundary 63, the same applies to the case where the boundary is other than the LCU boundary 63.
- step S170 of FIG. 28 the buffer 110 holds the image after the deblocking filter processing supplied from the deblocking filter 41 of FIG.
- step S171 the division unit 111 divides the image in units of pictures held in the buffer 110 into units of SAO Pseudo Slice at LCU boundary 63.
- step S172 the division unit 111 determines the number m of unit SAO Pseudo Slices to be allocated to each of the n processing units 113-1 to 113-n.
- step S173 the division unit 111 supplies the pixel values after the deblocking filter processing of the pixels in the top row and the bottom row of the unit SAO Pseudo Slice to the buffer 112 and holds the pixel values.
- step S174 the division unit 111 sets the count value i to zero.
- step S175 the division unit 111 determines whether the count value i is smaller than n.
- if it is determined in step S175 that the count value i is smaller than n, the dividing unit 111 supplies the i-th set of m unit SAO Pseudo Slices to the processing unit 113-i. Then, in step S176, the processing unit 113-i starts the adaptive offset filter processing for each LCU on the i-th set of m unit SAO Pseudo Slices.
- the unit SAO Pseudo Slice after the adaptive offset filtering and the offset filter information of each LCU are supplied to the buffer 110 and held.
- step S177 the division unit 111 increments the count value i by 1 and returns the process to step S175. Then, the process of steps S175 to S177 is repeated until the count value i becomes n or more, that is, the adaptive offset filter process is started in all the processing units 113-1 to 113-n.
- step S175 when it is determined in step S175 that the count value i is not smaller than n, that is, when the offset filter process is started in the processing units 113-1 to 113-n, the process proceeds to step S178.
- step S178 the output unit 114 determines whether the n adaptive offset filter processes by the processing units 113-1 to 113-n have ended.
- step S178 If it is determined in step S178 that the n adaptive offset filter processes by the processing units 113-1 to 113-n have not been completed, the output unit 114 waits until the n adaptive offset filter processes are completed.
- step S178 If it is determined in step S178 that the n adaptive offset filter processes have ended, the process proceeds to step S179.
- step S179 the output unit 114 outputs the image of the picture unit after the adaptive offset filter processing held in the buffer 110 to the adaptive loop filter 43, and the offset filter information of each corresponding LCU to the lossless encoding unit 36. Output. Then, the process returns to step S47 of FIG. 21 and proceeds to step S48.
- FIG. 29 is a flowchart for describing the details of the adaptive loop filter parallel processing in step S48 in FIG.
- the processing in steps S190 to S198 in FIG. 29 is the same as the processing in steps S150 to S158 in FIG. 27, except that the boundary De-blocking Pseudo boundary is replaced with the boundary ALF Pseudo boundary, the unit DBK Pseudo Slice with the unit ALF Pseudo Slice, and the deblocking filter processing with the adaptive loop filter processing, and that the filter coefficients are output to the lossless encoding unit 36; the description is therefore omitted.
- As described above, the encoding device 11 can perform deblocking filter processing, adaptive offset filter processing, and adaptive loop filter processing on the decoded image in parallel in predetermined processing units. The encoding device 11 can also perform inverse quantization, inverse orthogonal transformation, addition processing, and compensation processing in parallel in Recon Pseudo Slice units. Therefore, the local decoding performed during encoding can be done at high speed regardless of the presence or absence of slice and tile settings. As a result, encoding can be performed at high speed.
- FIG. 30 is a block diagram showing a configuration example of a first embodiment of a decoding device as an image processing device to which the present technology is applied, which decodes a coded stream transmitted from the coding device 11 of FIG.
- The decoding device 160 of FIG. 30 includes an accumulation buffer 161, a lossless decoding unit 162, an inverse quantization unit 163, an inverse orthogonal transformation unit 164, an addition unit 165, a deblocking filter 166, an adaptive offset filter 167, an adaptive loop filter 168, a screen rearrangement buffer 169, a D/A conversion unit 170, a frame memory 171, a switch 172, an intra prediction unit 173, a motion compensation unit 174, and a switch 175.
- the accumulation buffer 161 of the decoding device 160 receives and stores the encoded data transmitted from the encoding device 11 of FIG.
- the accumulation buffer 161 supplies the encoded data that has been accumulated to the lossless decoding unit 162.
- the lossless decoding unit 162 performs lossless decoding such as variable-length decoding or arithmetic decoding on the encoded data from the accumulation buffer 161 to obtain quantized coefficients and encoding information.
- the lossless decoding unit 162 supplies the quantized coefficient to the inverse quantization unit 163.
- The lossless decoding unit 162 supplies intra prediction mode information and the like as coding information to the intra prediction unit 173, and supplies a motion vector, inter prediction mode information, information for specifying a reference image, and the like to the motion compensation unit 174.
- the lossless decoding unit 162 supplies intra prediction mode information or inter prediction mode information as coding information to the switch 175.
- the lossless decoding unit 162 supplies offset filter information as coding information to the adaptive offset filter 167, and supplies filter coefficients to the adaptive loop filter 168.
- Processing similar to that of the motion prediction/compensation unit 47 is performed to decode the image.
- The inverse quantization unit 163 performs inverse quantization on the quantized coefficients from the lossless decoding unit 162 in parallel in units of Recon Pseudo Slice, and supplies the resulting orthogonal transformation coefficients to the inverse orthogonal transformation unit 164.
- the inverse orthogonal transformation unit 164 performs inverse orthogonal transformation on the orthogonal transformation coefficients from the inverse quantization unit 163 in parallel in units of Recon Pseudo Slice.
- the inverse orthogonal transformation unit 164 supplies the residual information obtained as a result of the inverse orthogonal transformation to the addition unit 165.
- The addition unit 165 functions as a decoding unit, and performs local decoding by adding, in units of Recon Pseudo Slice, the residual information supplied from the inverse orthogonal transformation unit 164 as the image to be decoded and the predicted image supplied from the motion compensation unit 174 via the switch 175. The addition unit 165 then supplies the locally decoded image to the frame memory 171.
- the adding unit 165 locally decodes by adding the PU predicted image supplied from the intra prediction unit 173 via the switch 175 and the residual information of the PU. Then, the addition unit 165 supplies the locally decoded image to the frame memory 171. Further, the adding unit 165 supplies the completely decoded picture unit image to the deblocking filter 166.
- The deblocking filter 166 performs deblocking filter processing in parallel on the image supplied from the addition unit 165 using m parallel processing units DBK Pseudo Slice, and supplies the resulting image to the adaptive offset filter 167.
- The adaptive offset filter 167 performs adaptive offset filter processing in parallel, using m parallel processing units SAO Pseudo Slice, on the image of each LCU after the deblocking filter processing by the deblocking filter 166, based on the offset filter information of each LCU supplied from the lossless decoding unit 162.
- the adaptive offset filter 167 supplies the image after the adaptive offset filter processing to the adaptive loop filter 168.
- The adaptive loop filter 168 performs adaptive loop filter processing in parallel, using m parallel processing units ALF Pseudo Slice, on the image of each LCU supplied from the adaptive offset filter 167, using the filter coefficients of each LCU supplied from the lossless decoding unit 162.
- the adaptive loop filter 168 supplies the resulting image to the frame memory 171 and the screen rearrangement buffer 169.
- the screen rearrangement buffer 169 stores the image supplied from the adaptive loop filter 168 in frame units.
- The screen rearrangement buffer 169 rearranges the stored frame-unit images, which are in encoding order, into the original display order, and supplies the rearranged images to the D/A conversion unit 170.
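- The rearrangement from encoding order back to display order can be sketched in a few lines of Python; the `display_order` index list is an assumed representation of the reordering information, not a structure defined by the specification:

```python
def reorder_to_display(frames, display_order):
    """Hypothetical sketch of the screen rearrangement buffer: frames are
    stored in encoding (decoding) order, and display_order[i] gives the
    stored index of the i-th frame to display."""
    return [frames[idx] for idx in display_order]
```

For example, an I-P-B sequence decoded as I, P, B may be displayed as I, B, P.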
- the D / A conversion unit 170 D / A converts the image in units of frames supplied from the screen rearrangement buffer 169 and outputs it as an output signal.
- the frame memory 171 accumulates the image supplied from the adaptive loop filter 168 and the image supplied from the adding unit 165.
- the image supplied from the adaptive loop filter 168 and stored in the frame memory 171 is read as a reference image and supplied to the motion compensation unit 174 via the switch 172.
- the image supplied from the adding unit 165 and stored in the frame memory 171 is read as a reference image, and is supplied to the intra prediction unit 173 via the switch 172.
- The intra prediction unit 173 uses the reference image read from the frame memory 171 via the switch 172 to perform, in PU units, intra prediction processing in the optimal intra prediction mode indicated by the intra prediction mode information supplied from the lossless decoding unit 162.
- the intra prediction unit 173 supplies the prediction image of PU unit generated as a result to the switch 175.
- the motion compensation unit 174 reads a reference image specified by the information specifying the reference image supplied from the lossless decoding unit 162 from the frame memory 171 via the switch 172 in parallel in Recon Pseudo Slice units.
- The motion compensation unit 174 uses the motion vector and the reference image supplied from the lossless decoding unit 162 to perform, in parallel in units of Recon Pseudo Slice, motion compensation processing in the optimal inter prediction mode indicated by the inter prediction mode information supplied from the lossless decoding unit 162.
- the motion compensation unit 174 supplies the predicted image in units of pictures generated as a result to the switch 175.
- the switch 175 supplies the prediction image in PU units supplied from the intra prediction unit 173 to the addition unit 165.
- the switch 175 supplies the predicted image in units of pictures supplied from the motion compensation unit 174 to the addition unit 165.
- FIG. 31 is a flowchart for describing the decoding process of the decoding device 160 of FIG. This decoding process is performed on a frame basis.
- step S231 of FIG. 31 the accumulation buffer 161 of the decoding device 160 receives and accumulates encoded data in units of frames transmitted from the encoding device 11 of FIG.
- the accumulation buffer 161 supplies the encoded data that has been accumulated to the lossless decoding unit 162.
- the lossless decoding unit 162 losslessly decodes the encoded data from the accumulation buffer 161 to obtain quantized coefficients and encoding information.
- The lossless decoding unit 162 supplies the quantized coefficients to the inverse quantization unit 163. Further, the lossless decoding unit 162 supplies intra prediction mode information and the like as coding information to the intra prediction unit 173, and supplies a motion vector, inter prediction mode information, information for specifying a reference image, and the like to the motion compensation unit 174.
- the lossless decoding unit 162 supplies intra prediction mode information or inter prediction mode information as coding information to the switch 175.
- the lossless decoding unit 162 supplies offset filter information as coding information to the adaptive offset filter 167, and supplies filter coefficients to the adaptive loop filter 168.
- step S233 the inverse quantization unit 163 performs the same inverse quantization parallel processing as the inverse quantization parallel processing in FIG. 22 on the quantized coefficients from the lossless decoding unit 162.
- the orthogonal transformation coefficient obtained as a result of the inverse quantization parallel processing is supplied to the inverse orthogonal transformation unit 164.
- step S234 the inverse orthogonal transformation unit 164 performs inverse orthogonal transformation parallel processing similar to the inverse orthogonal transformation parallel processing in FIG. 23 on the orthogonal transformation coefficients from the inverse quantization unit 163. Residual information obtained as a result of inverse orthogonal transformation parallel processing is supplied to the addition unit 165.
- In step S235, the motion compensation unit 174 performs inter prediction parallel processing similar to the inter prediction parallel processing of FIG. In this inter prediction parallel processing, compensation processing in the optimal inter prediction mode is performed not on the PUs notified of the selection of the predicted image generated in the optimal inter prediction mode, but on the PUs corresponding to the inter prediction mode information supplied from the lossless decoding unit 162.
- In step S236, the addition unit 165 performs addition parallel processing similar to the addition parallel processing of FIG. 25 on the residual information supplied from the inverse orthogonal transformation unit 164 and the predicted image supplied from the motion compensation unit 174 via the switch 175. The image obtained as a result of the addition parallel processing is supplied to the frame memory 171.
- In step S237, the intra prediction unit 173 performs an intra prediction process similar to the intra prediction process of FIG. Note that in this intra prediction process, intra prediction processing in the optimal intra prediction mode is performed not on the PUs notified of the selection of the predicted image generated in the optimal intra prediction mode, but on the PUs corresponding to the intra prediction mode information supplied from the lossless decoding unit 162.
- In step S238, the deblocking filter 166 performs deblocking filter parallel processing similar to that of FIG. 27 on the image supplied from the addition unit 165.
- the picture unit image obtained as a result of the deblocking filter parallel processing is supplied to the adaptive offset filter 167.
- In step S239, the adaptive offset filter 167 performs adaptive offset filter parallel processing similar to the adaptive offset filter parallel processing of FIG. 28 on the image supplied from the deblocking filter 166, based on the offset filter information of each LCU supplied from the lossless decoding unit 162. The image in picture units obtained as a result of the adaptive offset filter parallel processing is supplied to the adaptive loop filter 168.
- step S240 the adaptive loop filter 168 uses the filter coefficients supplied from the lossless decoding unit 162 for the image supplied from the adaptive offset filter 167 to perform an adaptive loop similar to the adaptive loop filter parallel processing in FIG. Perform filter parallel processing.
- the picture unit image obtained as a result of the adaptive loop filter process is supplied to the frame memory 171 and the screen rearrangement buffer 169.
- step S241 the frame memory 171 accumulates the image supplied from the adaptive loop filter 168.
- the image supplied from the adaptive loop filter 168 and stored in the frame memory 171 is read as a reference image and supplied to the motion compensation unit 174 via the switch 172.
- the image supplied from the adding unit 165 and stored in the frame memory 171 is read as a reference image, and is supplied to the intra prediction unit 173 via the switch 172.
- In step S242, the screen rearrangement buffer 169 stores the image supplied from the adaptive loop filter 168 in frame units, rearranges the stored frame-unit images, which are in encoding order, into the original display order, and supplies them to the D/A conversion unit 170.
- step S243 the D / A conversion unit 170 D / A converts the frame unit image supplied from the screen rearrangement buffer 169, and outputs it as an output signal. Then, the process ends.
- the decoding device 160 can perform deblocking filter processing, adaptive offset processing, and adaptive loop filter processing in parallel in predetermined processing units on the decoded image.
- the decoding device 160 can perform inverse quantization, inverse orthogonal transformation, addition processing, and compensation processing in parallel in units of Recon Pseudo Slice. Therefore, decoding can be performed at high speed regardless of the presence or absence of slice and tile settings.
- FIG. 32 is a block diagram showing a configuration example of a second embodiment of an encoding device as an image processing device to which the present technology is applied.
- The configuration of the coding device 190 in FIG. 32 differs from that of the encoding device 11 of FIG. 1 in that an inverse quantization unit 191, an inverse orthogonal transformation unit 192, an addition unit 193, and a motion prediction/compensation unit 194 are provided instead of the inverse quantization unit 38, the inverse orthogonal transformation unit 39, the addition unit 40, and the motion prediction/compensation unit 47, and in that a filter processing unit 195 is provided instead of the deblocking filter 41, the adaptive offset filter 42, and the adaptive loop filter 43.
- The coding device 190 collectively performs inverse quantization, inverse orthogonal transformation, addition processing, and compensation processing in Recon Pseudo Slice units, and collectively performs deblocking filter processing, adaptive offset filter processing, and adaptive loop filter processing in predetermined processing units.
- The inverse quantization unit 191 of the encoding device 190 performs inverse quantization on the coefficients quantized by the quantization unit 35 in parallel in units of Recon Pseudo Slice, and supplies the resulting orthogonal transformation coefficients in Recon Pseudo Slice units to the inverse orthogonal transformation unit 192.
- The inverse orthogonal transformation unit 192 performs inverse orthogonal transformation in parallel on the orthogonal transformation coefficients in Recon Pseudo Slice units supplied from the inverse quantization unit 191, and supplies the residual information in Recon Pseudo Slice units obtained as a result to the addition unit 193.
- The addition unit 193 functions as a decoding unit, and performs addition processing in parallel in Recon Pseudo Slice units, adding the predicted image in Recon Pseudo Slice units supplied from the motion prediction/compensation unit 194 and the residual information in Recon Pseudo Slice units supplied from the inverse orthogonal transformation unit 192.
- the addition unit 193 supplies the image in units of pictures obtained as a result of the addition process to the frame memory 44.
- The addition unit 193 performs, on a per-PU basis, addition processing of adding the predicted image in PU units supplied from the intra prediction unit 46 and the residual information, thereby performing local decoding. The addition unit 193 supplies the resulting locally decoded PU-based image to the frame memory 44. Further, the addition unit 193 supplies the completely decoded image in picture units to the filter processing unit 195.
- the filter processing unit 195 performs deblocking filter processing, adaptive offset filter processing, and adaptive loop filter processing on the decoded image supplied from the adding unit 193 in parallel in m common processing units.
- The common processing unit is a unit at which an integral multiple of the smallest unit DBK Pseudo Slice Min and an integral multiple of the smallest unit ALF Pseudo Slice Min coincide; for example, it is the smallest unit ALF Pseudo Slice Min.
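- Read this way, the common processing unit is the smallest size that is simultaneously an integral multiple of both minimum units, i.e. their least common multiple. A hypothetical Python sketch, assuming the unit sizes are measured as counts of some common granule such as LCU rows:

```python
from math import gcd

def common_processing_unit(dbk_min, alf_min):
    """Hypothetical sketch: smallest size that is an integral multiple of
    both the smallest unit DBK Pseudo Slice Min and the smallest unit
    ALF Pseudo Slice Min, i.e. their least common multiple."""
    return dbk_min * alf_min // gcd(dbk_min, alf_min)
```

When one minimum unit already divides the other (e.g. DBK min of 2 rows, ALF min of 4 rows), the common unit is simply the larger one, matching the example in the text.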
- the filter processing unit 195 supplies the image obtained as a result of the adaptive loop filter processing to the frame memory 44. Also, the filter processing unit 195 supplies the offset filter information and the filter coefficient of each LCU to the lossless encoding unit 36.
- The motion prediction/compensation unit 194 performs motion prediction/compensation processing for all candidate inter prediction modes, as in the motion prediction/compensation unit 47 of FIG. 1, and determines the optimal inter prediction mode. Then, similarly to the motion prediction/compensation unit 47, the motion prediction/compensation unit 194 supplies the cost function value of the optimal inter prediction mode and the corresponding predicted image to the predicted image selection unit 48.
- When the motion prediction/compensation unit 194 is notified by the predicted image selection unit 48 of the selection of the predicted image generated in the optimal inter prediction mode, it outputs the inter prediction mode information, the corresponding motion vector, the information for specifying the reference image, and the like to the lossless encoding unit 36.
- In Recon Pseudo Slice units, the motion prediction/compensation unit 194 performs compensation processing in the optimal inter prediction mode, based on the corresponding motion vector, on the reference image specified by the information specifying the reference image, for the PUs notified of the selection of the predicted image generated in the optimal inter prediction mode from the predicted image selection unit 48.
- the motion prediction / compensation unit 194 supplies the predicted image in Recon Pseudo Slice units obtained as a result to the addition unit 193.
- FIG. 33 is a block diagram showing a configuration example of the filter processing unit 195 of FIG.
- the filter processing unit 195 in FIG. 33 includes a buffer 210, a division unit 211, processing units 212-1 to 212-n, a buffer 213, and an output unit 214.
- the buffer 210 of the filter processing unit 195 holds the completely decoded image supplied from the addition unit 193 of FIG. 32 in units of pictures. Also, the buffer 210 updates the decoded image to the image after adaptive loop filter processing supplied from the processing units 212-1 to 212-n. Also, the buffer 210 holds the offset filter information and the filter coefficient of each LCU supplied from the processing units 212-1 to 212-n in association with the image after the adaptive loop filter processing.
- the dividing unit 211 divides the image held in the buffer 210 into n ⁇ m common processing units.
- The dividing unit 211 supplies m of the divided common processing units to each of the processing units 212-1 to 212-n.
- the processing units 212-1 to 212-n each perform deblocking filter processing on the image of the common processing unit supplied from the dividing unit 211.
- Each of the processing units 212-1 to 212-n supplies the pixel value of the pixel at the boundary of the common processing unit in the image of the common processing unit after the deblocking filter processing to the buffer 213 and holds it.
- the processing units 212-1 to 212-n perform adaptive offset filter processing on the image of the common processing unit after the deblocking filter processing using the pixel values stored in the buffer 213, respectively.
- the processing units 212-1 to 212-n each perform adaptive loop filter processing on the image of the common processing unit after the adaptive offset filter processing.
- the processing units 212-1 to 212-n respectively supply the image after the adaptive loop filter processing of each LCU, the offset filter information, and the filter coefficient to the buffer 210.
- the buffer 213 holds pixel values supplied from the processing units 212-1 to 212-n.
- the output unit 214 supplies the image of the picture unit held in the buffer 210 to the frame memory 44 of FIG. 32, and supplies the offset filter information and the filter coefficient of each LCU to the lossless encoding unit 36.
- steps S261 to S270 in FIG. 34 are the same as the processes in steps S31 to S40 in FIG. This encoding process is performed, for example, on a frame basis.
- In step S271 of FIG. 35, the encoding device 190 performs inter-parallel processing in which inverse quantization, inverse orthogonal transformation, addition processing, and compensation processing are collectively performed in parallel in Recon Pseudo Slice units. The details of this inter-parallel processing will be described with reference to FIG. 36 described later.
- step S272 the intra prediction unit 46 performs the intra prediction process of FIG.
- step S273 the encoding device 190 performs filter parallel processing which collectively performs deblocking filter processing, adaptive offset filter processing, and adaptive loop filter processing in parallel in m common parallel processing units. The details of the filter parallel processing will be described with reference to FIG. 37 described later.
- steps S274 to S279 are the same as the processes of steps S49 to S54 of FIG.
- FIG. 36 is a flowchart for describing the details of the inter-parallel processing in step S271 of FIG.
- step S301 in FIG. 36 the inverse quantization unit 191 divides the coefficient supplied from the quantization unit 35 into units Recon Pseudo Slice.
- step S302 the inverse quantization unit 191 sets the count value i to zero.
- step S303 it is determined whether the count value i is smaller than the number n.
- If it is determined in step S303 that the count value i is smaller than the number n, the inverse quantization unit 191 starts inverse quantization processing on the i-th unit Recon Pseudo Slice in step S304. After the inverse quantization processing ends, the inverse orthogonal transformation unit 192 starts inverse orthogonal transformation processing on the i-th unit Recon Pseudo Slice. After the inverse orthogonal transformation processing ends, the motion prediction/compensation unit 194 starts inter prediction processing in the i-th unit Recon Pseudo Slice for the PUs for which the selection of the predicted image generated in the optimal inter prediction mode has been notified from the predicted image selection unit 48. After the inter prediction processing ends, the addition unit 193 starts addition processing for the i-th unit Recon Pseudo Slice.
- step S305 the inverse quantization unit 191 increments the count value i by 1 and returns the process to step S303. Then, the process of steps S303 to S305 is repeated until the count value i becomes n or more.
- step S303 If it is determined in step S303 that the count value i is not smaller than n, that is, if the process of step S304 of all n units Recon Pseudo Slice is started, the process proceeds to step S306.
- step S306 the encoding apparatus 190 determines whether the process of step S304 of all n units Recon Pseudo Slice is completed. If it is determined that the process is not completed, the encoding apparatus 190 waits until the process is completed.
- If it is determined in step S306 that the process of step S304 has been completed for all n units Recon Pseudo Slice, the addition unit 193 supplies the locally decoded image in picture units obtained as a result of the addition processing to the frame memory 44. The process then returns to step S271 of FIG. 35 and proceeds to step S272.
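- The inter-parallel processing of FIG. 36 chains the four stages in order within each unit Recon Pseudo Slice, while running the n chains concurrently. A minimal Python sketch of this structure (the stage functions `dequantize` and `inverse_transform` and the list-based data shapes are assumptions for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def inter_parallel(coeff_units, predictions, n, dequantize, inverse_transform):
    """Hypothetical sketch of FIG. 36: for the i-th unit Recon Pseudo Slice,
    inverse quantization, inverse orthogonal transform, and addition of the
    compensated prediction are chained in order (step S304); the n chains
    run in parallel, and the caller waits for all of them (step S306)."""
    def process_unit(coeffs, pred):
        residual = inverse_transform(dequantize(coeffs))  # first two stages
        return [r + p for r, p in zip(residual, pred)]    # addition processing
    with ThreadPoolExecutor(max_workers=n) as pool:
        # pool.map starts one chained pipeline per unit and returns only
        # after every chain has finished, mirroring steps S303 to S306.
        return list(pool.map(process_unit, coeff_units, predictions))
```

The point of the chaining is that a later stage for unit i never waits on any other unit j, only on the preceding stage of unit i itself.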
- FIG. 37 is a flowchart for describing the details of the filter parallel processing in step S273 of FIG.
- step S320 in FIG. 37 the buffer 210 of the filter processing unit 195 holds the decoded image in units of pictures supplied from the addition unit 193 in FIG.
- In step S321, the division unit 211 divides the image in picture units held in the buffer 210 into common processing units. For example, if the common processing unit is the smallest unit ALF Pseudo Slice, the filter processing unit 195 divides the picture-unit image at the boundary ALF Pseudo boundary.
- step S322 the dividing unit 211 determines the number m of common processing units to be allocated to each of the n processing units 212-1 to 212-n.
- step S323 the dividing unit 211 sets the count values i, j, and k to zero.
- step S324 the dividing unit 211 determines whether the count value i is smaller than n. If it is determined in step S324 that the count value i is smaller than n, the division unit 211 supplies the image of the i-th m common processing units to the processing unit 212-i, and the process proceeds to step S325.
- In step S325, the processing unit 212-i performs deblocking filter processing on the i-th m common processing units, and starts processing to store the pixel values of the top and bottom rows of the common processing units after the deblocking filter processing in the buffer 213.
- step S326 the dividing unit 211 increments the count value i by 1, and returns the process to step S324. Then, the processes of steps S324 to S326 are repeated until the count value i becomes n or more.
- If it is determined in step S324 that the count value i is not smaller than n, that is, if the process of step S325 has been started for all common processing units in the picture, the process proceeds to step S327.
- step S327 the dividing unit 211 determines whether the count value j is smaller than n. If it is determined in step S327 that the count value j is smaller than n, the process proceeds to step S328.
- In step S328, the processing unit 212-j determines whether the deblocking filter processing on all of the j-th m common processing units and the common processing units above and below those m common processing units has ended.
- If it is determined in step S328 that this deblocking filter processing has not ended, the processing unit 212-j waits until it ends.
- step S328 If it is determined in step S328 that the deblocking filter processing on all the j-th m common processing units and the upper and lower common processing units of the m common processing units is completed, the process proceeds to step S329.
- step S329 the processing unit 212-j starts adaptive offset filter processing for the j-th m common processing units using the pixel values held in the buffer 213.
- step S330 the processing unit 212-j increments the count value j by 1, and returns the process to step S327. Then, the process of steps S327 to S330 is repeated until the count value j becomes n or more.
- step S327 If it is determined in step S327 that the count value j is not smaller than n, that is, if the process of step S329 for all common processing units in the picture is started, the process proceeds to step S331.
- step S331 it is determined whether the count value k is smaller than n. If it is determined in step S331 that the count value k is smaller than n, the process proceeds to step S332.
- In step S332, the processing unit 212-k determines whether the adaptive offset filter processing for all of the k-th m common processing units has ended. If it is determined that the processing has not ended, the processing unit 212-k waits until it ends.
- step S332 If it is determined in step S332 that the adaptive offset filter processing for all the k-th m common processing units has ended, the process proceeds to step S333.
- step S333 the processing unit 212-k starts adaptive loop filter processing for the k-th m common processing units.
- step S334 the processing unit 212-k increments the count value k by 1 and advances the process to step S331. Then, the process of steps S331 to S334 is repeated until the count value k becomes n or more.
- step S331 If it is determined in step S331 that the count value k is not smaller than n, that is, if the process of step S333 for all common processing units in the picture is started, the process proceeds to step S335.
- In step S335, the output unit 214 determines whether the adaptive loop filter processing by the n processing units 212-1 to 212-n has ended. If it is determined that the adaptive loop filter processing has not ended, the output unit 214 waits until it ends.
- If it is determined in step S335 that the adaptive loop filter processing by the n processing units 212-1 to 212-n has ended, the output unit 214 supplies the image in picture units after the adaptive loop filter processing held in the buffer 210 to the frame memory 44. The process then returns to step S273 of FIG. 35 and proceeds to step S274.
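- The filter parallel processing of FIG. 37 runs as three staged parallel passes: deblocking per unit, then adaptive offset filtering that may start for a unit only once that unit and its upper/lower neighbours are deblocked (because it reads deblocked neighbour pixels at the unit boundary), then adaptive loop filtering per unit. A hypothetical Python sketch under simplified assumptions (the filter functions and 1-D unit representation are placeholders, not from the specification):

```python
from concurrent.futures import ThreadPoolExecutor

def filter_parallel(units, deblock, sao, alf, n):
    """Hypothetical sketch of FIG. 37: deblocking per common processing
    unit (S324-S326), boundary rows kept in a shared buffer (the role of
    buffer 213), adaptive offset filtering once neighbours are deblocked
    (S327-S330), then adaptive loop filtering (S331-S334)."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        deblocked = list(pool.map(deblock, units))
        # pool.map returns only after every unit is deblocked, which
        # satisfies the neighbour dependency of step S328 for all j at once.
        boundary = {j: (u[0], u[-1]) for j, u in enumerate(deblocked)}  # buffer 213
        saoed = list(pool.map(lambda j: sao(deblocked[j], boundary),
                              range(len(units))))
        return list(pool.map(alf, saoed))
```

In the actual flowchart the wait is finer-grained (unit j proceeds as soon as its own neighbours are done); this sketch uses a simpler all-units barrier, which is correct but slightly more conservative.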
- the encoding apparatus 190 can collectively perform deblocking filter processing, adaptive offset processing, and adaptive loop filter processing on the decoded image in m common parallel processing units in parallel.
- the encoding apparatus 190 can collectively perform inverse quantization, inverse orthogonal transformation, addition processing, and compensation processing in parallel in Recon Pseudo Slice units.
- As a result, the processing required to divide data into parallel processing units can be reduced. Further, each subsequent process can begin without waiting for the preceding process to finish for the entire picture. Therefore, coding can be performed at higher speed.
- FIG. 38 is a block diagram showing a configuration example of a second embodiment of a decoding device as an image processing device to which the present technology is applied, which decodes the coded stream transmitted from the coding device 190 of FIG.
- The configuration of the decoding device 230 in FIG. 38 differs from that of the decoding device 160 in FIG. 30 in that an inverse quantization unit 231, an inverse orthogonal transformation unit 232, an addition unit 233, and a motion compensation unit 234 are provided instead of the inverse quantization unit 163, the inverse orthogonal transformation unit 164, the addition unit 165, and the motion compensation unit 174, and in that a filter processing unit 235 is provided instead of the deblocking filter 166, the adaptive offset filter 167, and the adaptive loop filter 168.
- The decoding device 230 collectively performs inverse quantization, inverse orthogonal transformation, addition processing, and compensation processing in units of Recon Pseudo Slice, and collectively performs deblocking filter processing, adaptive offset filter processing, and adaptive loop filter processing in m common processing units.
- The inverse quantization unit 231 of the decoding device 230 performs inverse quantization on the quantized coefficients from the lossless decoding unit 162 in parallel in Recon Pseudo Slice units, and supplies the resulting orthogonal transform coefficients in Recon Pseudo Slice units to the inverse orthogonal transformation unit 232.
- the inverse orthogonal transformation unit 232 performs inverse orthogonal transformation on the orthogonal transformation coefficients in Recon Pseudo Slice units from the inverse quantization unit 231 in parallel in Recon Pseudo Slice units.
- the inverse orthogonal transformation unit 232 supplies, to the addition unit 233, residual information in Recon Pseudo Slice units obtained as a result of the inverse orthogonal transformation.
- The addition unit 233 functions as a decoding unit: it performs local decoding by adding, in Recon Pseudo Slice units, the residual information in Recon Pseudo Slice units supplied as the image to be decoded from the inverse orthogonal transformation unit 232 and the predicted image in Recon Pseudo Slice units supplied from the motion compensation unit 234 via the switch 175. The addition unit 233 then supplies the locally decoded picture-unit image to the frame memory 171.
- Like the addition unit 165, the addition unit 233 also performs local decoding by adding the predicted image in PU units supplied from the intra prediction unit 173 via the switch 175 and the residual information of the PU, and supplies the locally decoded picture-unit image to the frame memory 171. In addition, the addition unit 233 supplies the completely decoded picture-unit image to the filter processing unit 235.
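The reconstruction path just described, inverse quantization, inverse orthogonal transformation, and then addition with the prediction followed by clipping, applied independently per Recon Pseudo Slice, can be sketched as below. The 1-D sample "units", the scalar quantization step, and the identity stand-in for the inverse transform are assumptions for illustration only.

```python
# Illustrative per-unit reconstruction: dequantize, inverse-transform,
# add the prediction, and clip to the 8-bit sample range.
def inverse_quantize(coeffs, qstep):
    return [c * qstep for c in coeffs]


def inverse_transform(coeffs):
    # Identity stands in for the inverse orthogonal transformation.
    return list(coeffs)


def reconstruct_unit(quantized, prediction, qstep):
    residual = inverse_transform(inverse_quantize(quantized, qstep))
    # Add the prediction sample-by-sample and clip to [0, 255].
    return [min(255, max(0, r + p)) for r, p in zip(residual, prediction)]


def reconstruct_picture(units, predictions, qstep):
    # Each (unit, prediction) pair is independent, so in the scheme above
    # each call could be dispatched to its own parallel worker.
    return [reconstruct_unit(u, p, qstep) for u, p in zip(units, predictions)]
```

The key property exploited by the Recon Pseudo Slice scheme is that `reconstruct_unit` has no dependency between units, so the calls can run concurrently.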
- The motion compensation unit 234 reads out, in parallel in Recon Pseudo Slice units, the reference image specified by the reference image specifying information supplied from the lossless decoding unit 162, from the frame memory 171 via the switch 172.
- Using the motion vector supplied from the lossless decoding unit 162 and the reference image, the motion compensation unit 234 performs, in Recon Pseudo Slice units, the motion compensation processing of the optimal inter prediction mode indicated by the inter prediction mode information supplied from the lossless decoding unit 162.
- The motion compensation unit 234 supplies the predicted image in Recon Pseudo Slice units generated as a result to the switch 175.
- The filter processing unit 235 is configured in the same manner as the filter processing unit 195 of FIG. 33.
- The filter processing unit 235 performs, on the image supplied from the addition unit 233, deblocking filter processing, adaptive offset filter processing using the offset filter information supplied from the lossless decoding unit 162, and adaptive loop filter processing using the filter coefficients, in parallel in m common processing units.
- the filter processing unit 235 supplies the picture unit image obtained as a result to the frame memory 171 and the screen rearrangement buffer 169.
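The three-stage in-loop filter cascade that the filter processing unit applies to each processing unit, deblocking, then adaptive offset filtering with signalled offsets, then adaptive loop filtering with signalled coefficients, can be sketched as follows. All function names and the drastically simplified per-sample operations are assumptions for illustration; they are not the codec's actual filters.

```python
# Toy three-stage cascade applied to one processing unit of samples.
def deblock(samples):
    # Smooth each interior sample with its neighbours (toy deblocking).
    out = list(samples)
    for i in range(1, len(samples) - 1):
        out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1]) // 4
    return out


def adaptive_offset(samples, offset):
    # Stand-in for SAO: add a signalled offset to each sample.
    return [s + offset for s in samples]


def adaptive_loop_filter(samples, gain):
    # Stand-in for ALF: apply a signalled coefficient to each sample.
    return [int(s * gain) for s in samples]


def filter_unit(samples, offset, gain):
    # Deblocking -> adaptive offset -> adaptive loop filter, per unit.
    return adaptive_loop_filter(adaptive_offset(deblock(samples), offset), gain)
```

Because each processing unit runs the whole cascade locally, m such `filter_unit` calls can proceed in parallel, which is what allows the three filters to be performed "collectively" per unit rather than picture-wide stage by stage.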
- FIG. 39 is a flowchart for describing the decoding process of the decoding device 230 of FIG.
- The processes of steps S351 and S352 of FIG. 39 are the same as the processes of steps S231 and S232 of FIG. 31.
- step S353 the decoding device 230 performs inter-parallel processing similar to the inter-parallel processing in FIG.
- In step S354, the intra prediction unit 173 performs intra prediction processing in the same manner as the processing in step S237 of FIG. 31.
- step S355 the filter processing unit 235 performs filter parallel processing similar to the filter parallel processing of FIG.
- The processes of steps S356 to S358 are the same as the processes of steps S241 to S243 of FIG. 31.
- As described above, the decoding device 230 can collectively perform deblocking filter processing, adaptive offset filter processing, and adaptive loop filter processing on the decoded image in parallel in predetermined processing units. The decoding device 230 can also perform inverse quantization, inverse orthogonal transformation, addition processing, and compensation processing in parallel in Recon Pseudo Slice units. Therefore, compared with the decoding device 160, the processing that must be divided into parallel processing units can be reduced. Furthermore, each subsequent process can start without waiting for the preceding process to finish for the entire picture, so decoding can be performed at higher speed.
- the above-described series of processes may be performed by hardware or software.
- a program that configures the software is installed on a computer.
- Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
- FIG. 40 is a block diagram showing an example of a hardware configuration of a computer that executes the series of processes described above according to a program.
- a central processing unit (CPU) 601, a read only memory (ROM) 602, and a random access memory (RAM) 603 are mutually connected by a bus 604.
- an input / output interface 605 is connected to the bus 604.
- An input unit 606, an output unit 607, a storage unit 608, a communication unit 609, and a drive 610 are connected to the input / output interface 605.
- the input unit 606 includes a keyboard, a mouse, a microphone, and the like.
- the output unit 607 includes a display, a speaker, and the like.
- the storage unit 608 is formed of a hard disk, a non-volatile memory, or the like.
- the communication unit 609 is formed of a network interface or the like.
- the drive 610 drives removable media 611 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer configured as described above, the CPU 601 loads the program stored in the storage unit 608 into the RAM 603 via the input/output interface 605 and the bus 604 and executes it, whereby the series of processes described above is performed.
- the program executed by the computer (CPU 601) can be provided by being recorded on, for example, a removable medium 611 as a package medium or the like. Also, the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the storage unit 608 via the input / output interface 605 by attaching the removable media 611 to the drive 610.
- the program can be received by the communication unit 609 via a wired or wireless transmission medium and installed in the storage unit 608.
- the program can be installed in advance in the ROM 602 or the storage unit 608.
- The program executed by the computer may be a program in which processing is performed in chronological order according to the order described in this specification, or may be a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
- the present technology can have a cloud computing configuration in which one function is shared and processed by a plurality of devices via a network.
- each step described in the above-described flowchart can be executed by one device or in a shared manner by a plurality of devices.
- the plurality of processes included in one step can be executed by being shared by a plurality of devices in addition to being executed by one device.
- Note that the inverse quantization unit 38, the inverse orthogonal transformation unit 39, the addition unit 40, and the motion prediction/compensation unit 47, and the inverse quantization unit 163, the inverse orthogonal transformation unit 164, the addition unit 165, and the motion compensation unit 174 of the first embodiment may also be provided in the second embodiment. Conversely, in place of the deblocking filter 41, the adaptive offset filter 42, and the adaptive loop filter 43, and the deblocking filter 166, the adaptive offset filter 167, and the adaptive loop filter 168 of the first embodiment, the filter processing unit 195 and the filter processing unit 235 of the second embodiment may be provided.
- the present technology can also have the following configurations.
- (1) An image processing apparatus including: a decoding unit that decodes encoded data and generates an image; and a filter processing unit that performs filter processing in parallel, in processing units unrelated to slices, on the image generated by the decoding unit.
- (2) The image processing apparatus according to (1), wherein the filter processing is deblocking filter processing, and the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
- (3) The image processing apparatus according to (2), wherein the pixels in the horizontal or vertical direction of the processing unit include, for each boundary of an LCU (Largest Coding Unit), four pixels centered on the boundary.
- (4) The image processing apparatus according to (2) or (3), wherein, when the image is a YUV420 luminance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8, and when the image is a YUV420 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 4.
- (5) The image processing apparatus according to (2) or (3), wherein, when the image is a YUV422 chrominance image, the number of pixels in the horizontal direction of the processing unit is a multiple of 4 and the number of pixels in the vertical direction is a multiple of 8.
- (6) The image processing apparatus according to (2) or (3), wherein, when the image is a YUV444 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
- (7) The image processing apparatus according to (1), wherein the filter processing unit includes: a holding unit that holds pixel values of pixels at boundaries of the processing units of the image; and a processing section that performs adaptive offset filter processing on the image in parallel in the processing units, using the pixel values held by the holding unit.
- (8) The image processing apparatus according to (7), wherein the processing unit is an LCU (Largest Coding Unit) unit.
- (9) The image processing apparatus according to (1), wherein the filter processing is deblocking filter processing and adaptive offset filter processing, and the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
- (10) The image processing apparatus according to (9), wherein, when the image is a YUV420 luminance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8, and when the image is a YUV420 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 4.
- (11) The image processing apparatus according to (9), wherein, when the image is a YUV422 chrominance image, the number of pixels in the horizontal direction of the processing unit is a multiple of 4 and the number of pixels in the vertical direction is a multiple of 8.
- (12) The image processing apparatus according to (9), wherein, when the image is a YUV444 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
- (13) An image processing method performed by an image processing apparatus, including: a decoding step of decoding encoded data and generating an image; and a filter processing step of performing filter processing in parallel, in processing units unrelated to slices, on the image generated by the processing of the decoding step.
- (14) A program for causing a computer to function as: a decoding unit that decodes encoded data and generates an image; and a filter processing unit that performs filter processing in parallel, in processing units unrelated to slices, on the image generated by the decoding unit.
- (15) An image processing apparatus including: a decoding unit that decodes encoded data and generates an image; and a filter processing unit that performs filter processing in parallel, in processing units unrelated to tiles, on the image generated by the decoding unit.
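The processing-unit size constraints enumerated above (multiples of 8 for luminance and YUV444 chrominance, multiples of 4 for YUV420 chrominance, and a 4-horizontal/8-vertical mix for YUV422 chrominance) can be collected into a small validity check. This is a hedged sketch: the function names and the `(horizontal, vertical)` return convention are illustrative, not part of the patent.

```python
# Illustrative check of the per-chroma-format processing-unit constraints.
def required_multiples(chroma_format, is_luma):
    """Return the (horizontal, vertical) multiples a processing unit must satisfy."""
    if is_luma or chroma_format == "444":
        return (8, 8)  # luminance, and YUV444 chrominance: multiples of 8
    if chroma_format == "420":
        return (4, 4)  # YUV420 chrominance: multiples of 4
    if chroma_format == "422":
        return (4, 8)  # YUV422 chrominance: 4 horizontally, 8 vertically
    raise ValueError("unsupported chroma format: " + chroma_format)


def unit_size_is_valid(width, height, chroma_format, is_luma):
    h_mult, v_mult = required_multiples(chroma_format, is_luma)
    return width % h_mult == 0 and height % v_mult == 0
```

For example, a 12x16 unit is invalid for YUV420 luminance (12 is not a multiple of 8) but valid for YUV420 chrominance (both dimensions are multiples of 4).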
- 11 encoding device, 40 addition unit, 41 deblocking filter, 42 adaptive offset filter, 43 adaptive loop filter, 112 buffer, 113-1 to 113-n processing units, 160 decoding device, 165 addition unit, 190 encoding device, 193 addition unit, 195 filter processing unit, 230 decoding device, 233 addition unit
Description
<Configuration Example of First Embodiment of Encoding Device>
FIG. 1 is a block diagram showing a configuration example of the first embodiment of an encoding device as an image processing device to which the present technology is applied.
FIG. 2 is a diagram illustrating the LCU, the largest coding unit in the HEVC scheme.
FIG. 3 is a diagram showing examples of parallel processing units in inverse quantization, inverse orthogonal transformation, addition processing, and compensation processing.
FIG. 4 is a block diagram showing a configuration example of the deblocking filter 41 of FIG. 1.
FIGS. 5 to 8 are diagrams illustrating parallel processing units of deblocking filter processing for the luminance component (luma) of an image.
FIG. 9 is a block diagram showing a configuration example of the adaptive offset filter 42 of FIG. 1.
FIGS. 10 to 14 are diagrams illustrating parallel processing units of adaptive offset filter processing.
FIG. 15 is a block diagram showing a configuration example of the adaptive loop filter 43 of FIG. 1.
FIGS. 16 to 19 are diagrams illustrating parallel processing units of adaptive loop filter processing.
FIGS. 20 and 21 are flowcharts illustrating the encoding process of the encoding device 11 of FIG. 1. This encoding process is performed, for example, in frame units.
FIG. 30 is a block diagram showing a configuration example of the first embodiment of a decoding device as an image processing device to which the present technology is applied, which decodes the encoded stream transmitted from the encoding device 11 of FIG. 1.
FIG. 31 is a flowchart illustrating the decoding process of the decoding device 160 of FIG. 30. This decoding process is performed in frame units.
<Configuration Example of Second Embodiment of Encoding Device>
FIG. 32 is a block diagram showing a configuration example of the second embodiment of an encoding device as an image processing device to which the present technology is applied.
FIG. 33 is a block diagram showing a configuration example of the filter processing unit 195 of FIG. 32.
FIGS. 34 and 35 are flowcharts illustrating the encoding process of the encoding device 190 of FIG. 32.
FIG. 38 is a block diagram showing a configuration example of the second embodiment of a decoding device as an image processing device to which the present technology is applied, which decodes the encoded stream transmitted from the encoding device 190 of FIG. 32.
FIG. 39 is a flowchart illustrating the decoding process of the decoding device 230 of FIG. 38.
<Description of Computer to Which Present Technology Is Applied>
The above-described series of processes can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed on a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
(1)
An image processing apparatus including:
a decoding unit that decodes encoded data and generates an image; and
a filter processing unit that performs filter processing in parallel, in processing units unrelated to slices, on the image generated by the decoding unit.
(2)
The image processing apparatus according to (1), wherein the filter processing is deblocking filter processing, and the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
(3)
The image processing apparatus according to (2), wherein the pixels in the horizontal or vertical direction of the processing unit include, for each boundary of an LCU (Largest Coding Unit), four pixels centered on the boundary.
(4)
The image processing apparatus according to (2) or (3), wherein, when the image is a YUV420 luminance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8, and when the image is a YUV420 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 4.
(5)
The image processing apparatus according to (2) or (3), wherein, when the image is a YUV422 chrominance image, the number of pixels in the horizontal direction of the processing unit is a multiple of 4 and the number of pixels in the vertical direction is a multiple of 8.
(6)
The image processing apparatus according to (2) or (3), wherein, when the image is a YUV444 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
(7)
The image processing apparatus according to (1), wherein the filter processing unit includes:
a holding unit that holds pixel values of pixels at boundaries of the processing units of the image; and
a processing section that performs adaptive offset filter processing on the image in parallel in the processing units, using the pixel values held by the holding unit.
(8)
The image processing apparatus according to (7), wherein the processing unit is an LCU (Largest Coding Unit) unit.
(9)
The image processing apparatus according to (1), wherein the filter processing is deblocking filter processing and adaptive offset filter processing, and the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
(10)
The image processing apparatus according to (9), wherein, when the image is a YUV420 luminance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8, and when the image is a YUV420 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 4.
(11)
The image processing apparatus according to (9), wherein, when the image is a YUV422 chrominance image, the number of pixels in the horizontal direction of the processing unit is a multiple of 4 and the number of pixels in the vertical direction is a multiple of 8.
(12)
The image processing apparatus according to (9), wherein, when the image is a YUV444 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
(13)
An image processing method performed by an image processing apparatus, including:
a decoding step of decoding encoded data and generating an image; and
a filter processing step of performing filter processing in parallel, in processing units unrelated to slices, on the image generated by the processing of the decoding step.
(14)
A program for causing a computer to function as:
a decoding unit that decodes encoded data and generates an image; and
a filter processing unit that performs filter processing in parallel, in processing units unrelated to slices, on the image generated by the decoding unit.
(15)
An image processing apparatus including:
a decoding unit that decodes encoded data and generates an image; and
a filter processing unit that performs filter processing in parallel, in processing units unrelated to tiles, on the image generated by the decoding unit.
Claims (15)
- An image processing apparatus including: a decoding unit that decodes encoded data and generates an image; and a filter processing unit that performs filter processing in parallel, in processing units unrelated to slices, on the image generated by the decoding unit.
- The image processing apparatus according to claim 1, wherein the filter processing is deblocking filter processing, and the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
- The image processing apparatus according to claim 2, wherein the pixels in the horizontal or vertical direction of the processing unit include, for each boundary of an LCU (Largest Coding Unit), four pixels centered on the boundary.
- The image processing apparatus according to claim 2, wherein, when the image is a YUV420 luminance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8, and when the image is a YUV420 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 4.
- The image processing apparatus according to claim 2, wherein, when the image is a YUV422 chrominance image, the number of pixels in the horizontal direction of the processing unit is a multiple of 4 and the number of pixels in the vertical direction is a multiple of 8.
- The image processing apparatus according to claim 2, wherein, when the image is a YUV444 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
- The image processing apparatus according to claim 1, wherein the filter processing unit includes: a holding unit that holds pixel values of pixels at boundaries of the processing units of the image; and a processing section that performs adaptive offset filter processing on the image in parallel in the processing units, using the pixel values held by the holding unit.
- The image processing apparatus according to claim 7, wherein the processing unit is an LCU (Largest Coding Unit) unit.
- The image processing apparatus according to claim 1, wherein the filter processing is deblocking filter processing and adaptive offset filter processing, and the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
- The image processing apparatus according to claim 9, wherein, when the image is a YUV420 luminance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8, and when the image is a YUV420 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 4.
- The image processing apparatus according to claim 9, wherein, when the image is a YUV422 chrominance image, the number of pixels in the horizontal direction of the processing unit is a multiple of 4 and the number of pixels in the vertical direction is a multiple of 8.
- The image processing apparatus according to claim 9, wherein, when the image is a YUV444 chrominance image, the number of pixels in the horizontal or vertical direction of the processing unit is a multiple of 8.
- An image processing method performed by an image processing apparatus, the method including: a decoding step of decoding encoded data and generating an image; and a filter processing step of performing filter processing in parallel, in processing units unrelated to slices, on the image generated by the processing of the decoding step.
- A program for causing a computer to function as: a decoding unit that decodes encoded data and generates an image; and a filter processing unit that performs filter processing in parallel, in processing units unrelated to slices, on the image generated by the decoding unit.
- An image processing apparatus including: a decoding unit that decodes encoded data and generates an image; and a filter processing unit that performs filter processing in parallel, in processing units unrelated to tiles, on the image generated by the decoding unit.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201380062562.2A CN104823446B (zh) | 2012-12-06 | 2013-11-25 | 图像处理装置、图像处理方法 |
JP2014551037A JP6327153B2 (ja) | 2012-12-06 | 2013-11-25 | 画像処理装置、画像処理方法、およびプログラム |
US14/647,692 US20150312569A1 (en) | 2012-12-06 | 2013-11-25 | Image processing apparatus, image processing method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-267400 | 2012-12-06 | ||
JP2012267400 | 2012-12-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014087861A1 true WO2014087861A1 (ja) | 2014-06-12 |
Family
ID=50883284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/081596 WO2014087861A1 (ja) | 2012-12-06 | 2013-11-25 | Image processing device, image processing method, and program |
Country Status (4)
Country | Link |
---|---|
US (1) | US20150312569A1 (ja) |
JP (1) | JP6327153B2 (ja) |
CN (1) | CN104823446B (ja) |
WO (1) | WO2014087861A1 (ja) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107040778A (zh) * | 2016-02-04 | 2017-08-11 | 联发科技股份有限公司 | 环路滤波方法以及环路滤波装置 |
US10609417B2 (en) * | 2016-05-23 | 2020-03-31 | Mediatek Inc. | High efficiency adaptive loop filter processing for video coding |
JP7351207B2 (ja) | 2019-12-16 | 2023-09-27 | 富士電機機器制御株式会社 | 盤内機器診断装置及びサーバ |
CN112822489B (zh) * | 2020-12-30 | 2023-05-16 | 北京博雅慧视智能技术研究院有限公司 | 一种样本自适应偏移补偿滤波的硬件实现方法及装置 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3792837B2 (ja) * | 1997-06-11 | 2006-07-05 | 日本放送協会 | デブロッキングフィルタ |
CN101360240B (zh) * | 2001-09-14 | 2012-12-05 | 株式会社Ntt都科摩 | 编码方法、译码方法、编码装置、译码装置、图象处理*** |
US7362810B2 (en) * | 2003-05-13 | 2008-04-22 | Sigmatel, Inc. | Post-filter for deblocking and deringing of video data |
JP5522893B2 (ja) * | 2007-10-02 | 2014-06-18 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP4900721B2 (ja) * | 2008-03-12 | 2012-03-21 | 株式会社メガチップス | 画像処理装置 |
JP5430379B2 (ja) * | 2009-02-03 | 2014-02-26 | キヤノン株式会社 | 撮像装置及びその制御方法及びプログラム |
KR101118091B1 (ko) * | 2009-06-04 | 2012-03-09 | 주식회사 코아로직 | 비디오 데이터 처리 장치 및 방법 |
JP5253312B2 (ja) * | 2009-07-16 | 2013-07-31 | ルネサスエレクトロニクス株式会社 | 動画像処理装置およびその動作方法 |
JP5183664B2 (ja) * | 2009-10-29 | 2013-04-17 | 財團法人工業技術研究院 | ビデオ圧縮のためのデブロッキング装置及び方法 |
JP2011141823A (ja) * | 2010-01-08 | 2011-07-21 | Renesas Electronics Corp | データ処理装置および並列演算装置 |
US20120134425A1 (en) * | 2010-11-29 | 2012-05-31 | Faouzi Kossentini | Method and System for Adaptive Interpolation in Digital Video Coding |
MX355896B (es) * | 2010-12-07 | 2018-05-04 | Sony Corp | Dispositivo de procesamiento de imagenes y metodo de procesamiento de imagenes. |
US9060174B2 (en) * | 2010-12-28 | 2015-06-16 | Fish Dive, Inc. | Method and system for selectively breaking prediction in video coding |
CN106851306B (zh) * | 2011-01-12 | 2020-08-04 | 太阳专利托管公司 | 动态图像解码方法和动态图像解码装置 |
TW201246943A (en) * | 2011-01-26 | 2012-11-16 | Panasonic Corp | Video image encoding method, video image encoding device, video image decoding method, video image decoding device, and video image encoding and decoding device |
US9325999B2 (en) * | 2011-03-10 | 2016-04-26 | Sharp Kabushiki Kaisha | Video decoder for slices |
CN103503456B (zh) * | 2011-05-10 | 2017-03-22 | 联发科技股份有限公司 | 用于重建视频的环内处理方法及其装置 |
EP2767089A4 (en) * | 2011-10-14 | 2016-03-09 | Mediatek Inc | METHOD AND APPARATUS FOR LOOP FILTERING |
US20130114682A1 (en) * | 2011-11-07 | 2013-05-09 | Sharp Laboratories Of America, Inc. | Video decoder with enhanced sample adaptive offset |
2013
- 2013-11-25 JP: application JP2014551037A granted as patent JP6327153B2 (not active; Expired - Fee Related)
- 2013-11-25 CN: application CN201380062562.2A granted as patent CN104823446B (not active; Expired - Fee Related)
- 2013-11-25 WO: PCT application PCT/JP2013/081596 published as WO2014087861A1 (active; Application Filing)
- 2013-11-25 US: application US14/647,692 published as US20150312569A1 (not active; Abandoned)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007060487A (ja) * | 2005-08-26 | 2007-03-08 | Sony Corp | 画像処理装置および画像処理方法、記録媒体、並びに、プログラム |
WO2011122659A1 (ja) * | 2010-03-30 | 2011-10-06 | シャープ株式会社 | 符号化装置および復号装置 |
WO2012035730A1 (ja) * | 2010-09-16 | 2012-03-22 | パナソニック株式会社 | 画像復号装置、画像符号化装置、それらの方法、プログラム、集積回路およびトランスコード装置 |
JP2012114637A (ja) * | 2010-11-24 | 2012-06-14 | Fujitsu Ltd | 動画像符号化装置 |
WO2012128191A1 (ja) * | 2011-03-24 | 2012-09-27 | ソニー株式会社 | 画像処理装置および方法 |
JP2012213128A (ja) * | 2011-03-24 | 2012-11-01 | Sony Corp | 画像処理装置および方法 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016036134A (ja) * | 2014-07-31 | 2016-03-17 | 富士通株式会社 | 画像処理方法及び装置 |
KR20160047375A (ko) * | 2014-10-22 | 2016-05-02 | 삼성전자주식회사 | 실시간으로 인-루프 필터링을 수행할 수 있는 애플리케이션 프로세서, 이의 작동 방법, 및 이를 포함하는 시스템 |
CN105554505A (zh) * | 2014-10-22 | 2016-05-04 | 三星电子株式会社 | 应用处理器及其方法以及包括该应用处理器的*** |
CN105554505B (zh) * | 2014-10-22 | 2020-06-12 | 三星电子株式会社 | 应用处理器及其方法以及包括该应用处理器的*** |
KR102299573B1 (ko) | 2014-10-22 | 2021-09-07 | 삼성전자주식회사 | 실시간으로 인-루프 필터링을 수행할 수 있는 애플리케이션 프로세서, 이의 작동 방법, 및 이를 포함하는 시스템 |
US11924415B2 (en) | 2021-05-11 | 2024-03-05 | Tencent America LLC | Method and apparatus for boundary handling in video coding |
JP7509916B2 (ja) | 2021-05-11 | 2024-07-02 | テンセント・アメリカ・エルエルシー | ビデオ符号化における境界処理のための方法、装置及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
CN104823446B (zh) | 2019-09-10 |
JPWO2014087861A1 (ja) | 2017-01-05 |
US20150312569A1 (en) | 2015-10-29 |
CN104823446A (zh) | 2015-08-05 |
JP6327153B2 (ja) | 2018-05-23 |
Legal Events

- 121 — Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13861179; Country of ref document: EP; Kind code of ref document: A1)
- ENP — Entry into the national phase (Ref document number: 2014551037; Country of ref document: JP; Kind code of ref document: A)
- WWE — Wipo information: entry into national phase (Ref document number: 14647692; Country of ref document: US)
- NENP — Non-entry into the national phase (Ref country code: DE)
- 122 — Ep: pct application non-entry in european phase (Ref document number: 13861179; Country of ref document: EP; Kind code of ref document: A1)