WO2013076897A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
WO2013076897A1
WO2013076897A1 (PCT/JP2012/006327)
Authority
WO
WIPO (PCT)
Prior art keywords
image
reference image
sub
unit
block
Prior art date
Application number
PCT/JP2012/006327
Other languages
French (fr)
Japanese (ja)
Inventor
Ken Tanaka (田中 健)
Hiroshi Amano (博史 天野)
Kenji Oga (健司 大賀)
Original Assignee
Panasonic Corporation (パナソニック株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Corporation
Publication of WO2013076897A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/583: Motion compensation with overlapping blocks
    • H04N 19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/423: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements

Definitions

  • the present invention relates to an image processing apparatus that performs motion compensation processing using a motion vector corresponding to a block in an image.
  • As techniques related to an image processing apparatus that performs motion compensation processing using a motion vector corresponding to a block in an image, there are the technique described in Patent Document 1, the technique described in Non-Patent Document 1, and the technique described in Non-Patent Document 2.
  • the conventional technology may require a large memory capacity for image processing.
  • processing delay may increase.
  • the present invention provides an image processing apparatus capable of suppressing processing delay while reducing the memory capacity necessary for image processing.
  • An image processing apparatus according to the present invention is an image processing apparatus that performs motion compensation processing using a motion vector corresponding to a block in an image, and includes: a division unit that divides the block into a plurality of sub-blocks;
  • a calculation unit that calculates, using the motion vector corresponding to the block, an area for acquiring a first reference image corresponding to a first sub-block included in the plurality of sub-blocks;
  • an acquisition unit that acquires the first reference image from the area, excluding at least a part of an already acquired portion; and a generation unit that generates, from the first reference image, a predicted image corresponding to the first sub-block.
  • the image processing apparatus of the present invention can suppress processing delay while reducing the memory capacity necessary for image processing.
  • FIG. 1A is a diagram illustrating a prediction unit according to the related art.
  • FIG. 1B is a diagram illustrating a reference image size corresponding to a prediction unit according to the related art.
  • FIG. 1C is a diagram illustrating a configuration for performing motion compensation according to the related art.
  • FIG. 2A is a time chart illustrating a first example of motion compensation operation according to the related art.
  • FIG. 2B is a time chart illustrating a second example of the motion compensation operation according to the related art.
  • FIG. 3A is a diagram illustrating division of a macroblock according to the related art.
  • FIG. 3B is a diagram illustrating a reference image size corresponding to a sub-block according to the related art.
  • FIG. 4 is a diagram illustrating a problem of the related art.
  • FIG. 5 is a diagram illustrating a configuration of the image decoding apparatus according to Embodiment 1.
  • FIG. 6 is a diagram illustrating a configuration related to the motion compensation unit according to the first embodiment.
  • FIG. 7A is a diagram showing a sequence according to the first embodiment.
  • FIG. 7B is a diagram showing a picture according to Embodiment 1.
  • FIG. 7C is a diagram showing an encoded stream according to Embodiment 1.
  • FIG. 8A is a diagram illustrating a configuration example of a coding unit according to Embodiment 1.
  • FIG. 8B is a diagram illustrating a configuration example of coding unit data according to Embodiment 1.
  • FIG. 9 is a diagram illustrating the size of the prediction unit according to the first embodiment.
  • FIG. 10 is a flowchart showing the operation of the image decoding apparatus according to Embodiment 1.
  • FIG. 11 is a flowchart showing a process of decoding the coding unit according to Embodiment 1.
  • FIG. 12 is a diagram illustrating a motion compensation operation according to the first embodiment.
  • FIG. 13A is a diagram illustrating a prediction unit according to Embodiment 1.
  • FIG. 13B is a diagram illustrating division of the prediction unit according to Embodiment 1.
  • FIG. 14 is a flowchart showing the motion compensation operation according to the first embodiment.
  • FIG. 15A is a diagram illustrating a region of a reference image according to Embodiment 1.
  • FIG. 15B is a diagram showing a reference image acquisition area according to Embodiment 1.
  • FIG. 15C is a diagram showing a plurality of acquisition regions corresponding to the prediction unit according to Embodiment 1.
  • FIG. 16A is a time chart illustrating a first example of motion compensation operation according to Embodiment 1.
  • FIG. 16B is a time chart illustrating a second example of the motion compensation operation according to Embodiment 1.
  • FIG. 16C is a time chart illustrating a third example of the motion compensation operation according to Embodiment 1.
  • FIG. 17A is a diagram illustrating a reference image acquisition region according to Embodiment 2.
  • FIG. 17B is a diagram showing a plurality of acquisition regions corresponding to the prediction unit according to Embodiment 2.
  • FIG. 18A is a diagram illustrating a reference image acquisition area according to Embodiment 3.
  • FIG. 18B is a diagram showing a plurality of acquisition regions corresponding to the prediction unit according to Embodiment 3.
  • FIG. 19A is a diagram illustrating a configuration of an image processing device according to the fourth embodiment.
  • FIG. 19B is a diagram illustrating an operation of the image processing apparatus according to the fourth embodiment.
  • FIG. 20 is an overall configuration diagram of a content supply system that implements a content distribution service.
  • FIG. 21 is an overall configuration diagram of a digital broadcasting system.
  • FIG. 22 is a block diagram illustrating a configuration example of a television.
  • FIG. 23 is a block diagram illustrating a configuration example of an information reproducing / recording unit that reads and writes information from and on a recording medium that is an optical disk.
  • FIG. 24 is a diagram illustrating a structure example of a recording medium that is an optical disk.
  • FIG. 25 is a configuration diagram illustrating a configuration example of an integrated circuit that implements image decoding processing.
  • FIG. 26 is a configuration diagram illustrating a configuration example of an integrated circuit that implements image decoding processing and image encoding processing.
  • An image encoding apparatus that encodes an image divides each picture constituting the image into a plurality of macroblocks (sometimes abbreviated as MB), each composed of 16 × 16 pixels. The image encoding apparatus encodes each macroblock in raster scan order, generating an encoded stream by encoding and compressing the image. The image decoding apparatus decodes this encoded stream for each macroblock in raster scan order, and reproduces each picture of the original image.
  • ITU-T H.264 is one of the conventional image coding standards (for details of the H.264 standard, see, for example, Non-Patent Document 1).
  • To decode an image encoded according to the H.264 standard, the image decoding apparatus first reads the encoded stream. Then, after decoding various header information, the image decoding apparatus performs variable length decoding. The image decoding apparatus performs inverse frequency transform by inversely quantizing the coefficient information obtained by variable length decoding. Thereby, a difference image is generated.
  • the image decoding apparatus performs in-plane prediction or motion compensation according to the macroblock type obtained by variable length decoding.
  • motion compensation is performed for a maximum of 16 × 16 pixels.
  • the image decoding apparatus generates a predicted image.
  • the image decoding apparatus performs a reconstruction process by adding the difference image to the predicted image.
  • the image decoding apparatus obtains the decoded image by performing deblocking filter processing on the reconstructed image.
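The decoding flow just described (variable length decoding, inverse quantization, inverse frequency transform, prediction by macroblock type, reconstruction, deblocking) can be sketched as follows. This is an illustrative toy model, not the H.264 algorithm itself: every function is a simplified stand-in operating on flat pixel lists, and the scale factor and fill values are arbitrary.

```python
# Toy sketch of the per-macroblock decoding flow described above.
# Inverse quantization is modeled as a scale, the inverse frequency
# transform as identity, and prediction as a constant block.

def inverse_quantize(coeffs, scale):
    # Coefficient information from variable length decoding is scaled back.
    return [c * scale for c in coeffs]

def inverse_transform(coeffs):
    # Placeholder for the inverse frequency transform; yields the difference image.
    return list(coeffs)

def predict(mb_type, size):
    # In-plane prediction or motion compensation, selected by macroblock type.
    fill = 128 if mb_type == "intra" else 100
    return [fill] * size

def decode_macroblock(coeffs, mb_type, scale=2):
    diff = inverse_transform(inverse_quantize(coeffs, scale))
    pred = predict(mb_type, len(coeffs))
    # Reconstruction: the difference image is added to the predicted image.
    return [d + p for d, p in zip(diff, pred)]

reconstructed = decode_macroblock([1, -1, 0, 3], "intra")
```

A deblocking filter stage would follow the reconstruction step; it is omitted here because it operates across block boundaries rather than within one macroblock.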
  • As described above, the image encoding apparatus according to the H.264 standard encodes an image in units of macroblocks of 16 × 16 pixels. However, 16 × 16 pixels is not necessarily optimal as a unit of encoding. In general, the higher the resolution of an image, the higher the correlation between adjacent blocks. Therefore, when the resolution of the image is high, the image encoding apparatus can improve compression efficiency by enlarging the encoding unit.
  • In the technique described in Non-Patent Document 2, the size of the encoding unit block, which is fixed in the conventional H.264 standard, is made variable. The image encoding apparatus according to this technique can encode an image with blocks larger than the conventional 16 × 16 pixels, and can appropriately encode an ultra-high-definition image.
  • an encoding unit (CU: Coding Unit) is defined as an encoding data unit.
  • This encoding unit is defined as the basic block of encoding: a data unit in which switching between in-plane prediction (intra prediction) and inter prediction (motion compensation) is possible, like a macroblock in the conventional coding standard.
  • the size of this encoding unit is any of 8 × 8 pixels, 16 × 16 pixels, 32 × 32 pixels, and 64 × 64 pixels.
  • the largest coding unit is referred to as a maximum coding unit (LCU).
  • FIG. 1A is a diagram illustrating a prediction unit (PU: Prediction Unit) according to Non-Patent Document 2.
  • the motion compensation process is performed on a prediction unit having a maximum size of 64 × 64 pixels.
  • FIG. 1B is a diagram showing a reference image size corresponding to the prediction unit of FIG. 1A.
  • In the motion compensation process, the 8-tap filter operation shown in Non-Patent Document 2 is performed on a 71 × 71 pixel reference image, obtained by adding 7 pixels around the prediction unit, and a predicted image is generated.
  • the size of the reference image corresponds to the size in which 3 pixels on the top, 4 pixels on the bottom, 3 pixels on the left, and 4 pixels on the right are added to the prediction unit.
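The reference-image sizes above follow directly from the filter length: an N-tap interpolation filter extends the block by N − 1 pixels in each dimension, split as N/2 − 1 pixels on the top/left and N/2 pixels on the bottom/right. A minimal sketch of this accounting (the split convention is inferred from the sizes quoted in the text):

```python
# Reference-region size for an N-tap interpolation filter: the block is
# extended by N-1 pixels per dimension, split as N/2-1 on top/left and
# N/2 on bottom/right (3 and 4 for the 8-tap filter, 2 and 3 for the
# 6-tap filter of H.264).

def ref_region(block_w, block_h, taps):
    lead = taps // 2 - 1   # pixels added on the top / left
    trail = taps // 2      # pixels added on the bottom / right
    return (block_w + lead + trail, block_h + lead + trail)

# 64x64 prediction unit with the 8-tap filter -> 71x71 reference image.
assert ref_region(64, 64, 8) == (71, 71)
# 16x16 macroblock with H.264's 6-tap filter -> 21x21 reference image.
assert ref_region(16, 16, 6) == (21, 21)
# 4x4 sub-block with the 6-tap filter -> 9x9 (Patent Document 1).
assert ref_region(4, 4, 6) == (9, 9)
```

The same function also reproduces the 23 × 23 pixel window quoted later for a 16 × 16 sub-block under the 8-tap filter.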
  • FIG. 1C is a diagram showing a configuration for performing motion compensation on the prediction unit of FIG. 1A.
  • a DMA (Direct Memory Access) control unit 1002 reads a reference image at a position indicated by a motion vector of an encoded stream among a plurality of reference pictures stored in the frame memory 1003. Then, the DMA control unit 1002 transfers the reference image to the reference image storage unit 1001 of the motion compensation unit 1000.
  • the reference image storage unit 1001 is included in the motion compensation unit 1000.
  • the motion compensation unit 1000 reads a reference image from the reference image storage unit 1001 and performs a motion compensation filter operation. Then, the motion compensation unit 1000 generates a predicted image and outputs the generated predicted image.
  • FIG. 2A is a time chart showing a first example of motion compensation operation executed in the configuration of FIG. 1C.
  • the DMA control unit 1002 transfers a reference image of 71 × 71 pixels (5041 pixels) corresponding to the prediction unit of 64 × 64 pixels, and then the motion compensation unit 1000 performs a motion compensation process.
  • FIG. 2B is a time chart showing a second example of the motion compensation operation executed in the configuration of FIG. 1C.
  • This second example shows the motion compensation operation according to the H.264 standard. In the H.264 standard, a 6-tap filter is used. Therefore, after the DMA control unit 1002 transfers a reference image of 21 × 21 pixels (441 pixels) corresponding to a macroblock of at most 16 × 16 pixels, the motion compensation unit 1000 performs a motion compensation process.
  • In the next-generation image coding standard shown in Non-Patent Document 2, a reference image with 10 times or more the data amount is transferred compared to the H.264 standard. Therefore, the capacity of the reference image storage unit 1001 for storing the reference image must be 10 times or more larger. In addition, since the time required for the transfer is long, the processing delay increases.
  • As an image decoding apparatus conforming to the H.264 standard, there is the image decoding apparatus described in Patent Document 1.
  • FIG. 3A is a diagram illustrating division of a macroblock according to Patent Document 1.
  • the image decoding apparatus according to Patent Document 1 divides a 16 × 16 pixel macroblock into 4 × 4 pixel sub-blocks, and performs motion compensation processing on the 4 × 4 pixel sub-blocks.
  • FIG. 3B is a diagram illustrating the size of a reference image corresponding to a 4 × 4 pixel sub-block.
  • the DMA control unit 1002 transfers a 9 × 9 pixel (81 pixel) reference image from the frame memory 1003 to the reference image storage unit 1001 of the motion compensation unit 1000 for a 4 × 4 pixel sub-block.
  • the size of the reference image corresponds to a size in which 2 pixels on the top, 3 pixels on the bottom, 2 pixels on the left, and 3 pixels on the right are added to the sub-block.
  • the motion compensation unit 1000 reads a reference image from the reference image storage unit 1001 and performs a motion compensation filter operation. Then, the motion compensation unit 1000 generates and outputs a predicted image. By repeating these processes 16 times, a prediction image corresponding to a 16 × 16 pixel macroblock is generated.
  • FIG. 4 is a time chart when the image decoding apparatus of Patent Document 1 performs motion compensation on the 64 × 64 pixel prediction unit proposed in the next-generation image coding standard disclosed in Non-Patent Document 2.
  • Instead of transferring the 5041-pixel reference image of FIG. 2A at once, the image decoding apparatus of Patent Document 1 transfers a reference image for each 16 × 16 pixel sub-block obtained by dividing the 64 × 64 pixel prediction unit into 16 parts. In that case, the image decoding apparatus transfers a reference image of 23 × 23 pixels (529 pixels) per sub-block in order to apply the 8-tap filter.
  • the image decoding apparatus of Patent Document 1 can reduce the necessary capacity of the reference image storage unit 1001 of the motion compensation unit 1000 from 5041 pixels to 529 pixels by this division processing.
  • the processing delay is not improved only by the division processing.
  • Transferring 529 pixels 16 times results in 8464 pixels. That is, the transfer amount, which would otherwise be 5041 pixels, increases by about 67%.
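The transfer-amount figures above can be checked with a few lines of arithmetic:

```python
# Transfer-amount comparison from the text: dividing a 64x64 prediction
# unit into sixteen 16x16 sub-blocks and fetching a 23x23 reference
# image for each transfers far more pixels than one 71x71 fetch.

whole = 71 * 71            # 5041 pixels in a single transfer
per_sub = 23 * 23          # 529 pixels per 16x16 sub-block
divided = 16 * per_sub     # 8464 pixels in total

increase = (divided - whole) / whole
assert whole == 5041 and per_sub == 529 and divided == 8464
# increase evaluates to roughly 0.68, i.e. an increase of about
# two-thirds of the original transfer amount.
```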
  • Non-Patent Document 2 describes a technique that uses a size of 64 × 64 pixels, larger than in the conventional standard, as the size of the prediction unit, that is, the size of motion compensation. Thereby, encoding efficiency improves.
  • the motion compensation for such a large size increases the required memory capacity.
  • the delay of the motion compensation process increases.
  • In Patent Document 1, when motion compensation is performed by dividing a macroblock or a prediction unit into small sizes, the required memory capacity becomes small. However, the delay of the motion compensation operation is not improved, and the transfer amount increases. The increased transfer amount may further increase the delay of the motion compensation process.
  • In view of this, an image processing apparatus according to an aspect of the present invention is an image processing apparatus that performs motion compensation processing using a motion vector corresponding to a block in an image, and includes: a division unit that divides the block into a plurality of sub-blocks; a calculation unit that calculates, using the motion vector corresponding to the block, an area for acquiring a first reference image corresponding to a first sub-block included in the plurality of sub-blocks; an acquisition unit that acquires the first reference image from the calculated area, excluding at least a part of an already acquired portion; and a generation unit that generates, from the first reference image, a predicted image corresponding to the first sub-block.
  • Thereby, the image processing apparatus does not redundantly acquire at least a part of the already acquired portions. This suppresses the increase in transfer amount, reduces the required memory capacity, and suppresses processing delay.
  • For example, the acquisition unit may acquire the first reference image, which partially overlaps a second reference image corresponding to a second sub-block included in the plurality of sub-blocks, excluding at least a part of the overlapping portion.
  • the image processing apparatus acquires a reference image having an overlapping portion, excluding at least a part of the overlapping portion.
  • the overlapping part may have already been acquired.
  • the image processing apparatus can reduce processing waste by not acquiring at least a part of the overlapping parts.
  • Further, after acquiring the second reference image corresponding to the second sub-block included in the plurality of sub-blocks, the acquisition unit may acquire the first reference image excluding at least a part of the portion included in the acquired second reference image.
  • the acquisition unit may acquire the first reference image while the generation unit generates a prediction image corresponding to a second sub-block included in the plurality of sub-blocks.
  • the image processing apparatus can simultaneously perform the reference image acquisition process and the predicted image generation process. Accordingly, processing delay is further reduced.
  • Further, after acquiring the second reference image corresponding to the second sub-block horizontally adjacent to the first sub-block, the acquisition unit may acquire the first reference image excluding at least a part of the portion included in the acquired second reference image.
  • the image processing apparatus does not acquire at least a part of the overlapping parts in the horizontal direction.
  • By limiting the exclusion condition to horizontally overlapping portions, the image processing apparatus can acquire the reference image while excluding at least a part of the already acquired overlapping portion, without complicated data management.
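For horizontally adjacent 16 × 16 sub-blocks that share one motion vector, each 23-pixel-wide reference window overlaps the previous one by 23 − 16 = 7 columns, so only the new columns need to be fetched. A sketch of that column accounting (coordinates and function names are illustrative, not taken from the patent):

```python
# For horizontally adjacent 16x16 sub-blocks sharing one motion vector,
# each 23-column reference window overlaps the previous window by
# 23 - 16 = 7 columns. Excluding the overlap, only the new columns are
# fetched.

SUB = 16          # sub-block width
REF = SUB + 7     # 23-column reference window (8-tap filter)

def fetch_columns(sub_index):
    """Columns [start, end) to fetch for sub-block `sub_index` in a row."""
    start = sub_index * SUB
    if sub_index == 0:
        return (start, start + REF)      # full 23-column window
    return (start + 7, start + REF)      # skip the 7 overlapping columns

cols = [fetch_columns(i) for i in range(4)]
fetched = sum(end - start for start, end in cols)
# Four sub-blocks in a row together cover 71 columns, the same width as
# a single fetch for the whole 64-pixel-wide prediction unit row.
assert fetched == 71
```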
  • Further, after acquiring, immediately before acquiring the first reference image, the second reference image corresponding to the second sub-block included in the plurality of sub-blocks, the acquisition unit may acquire the first reference image excluding at least a part of the portion included in the second reference image acquired immediately before.
  • the image processing apparatus does not acquire at least a part of the overlapping part acquired immediately before.
  • By limiting the exclusion condition to the overlapping portion acquired immediately before, the image processing apparatus can acquire the reference image while excluding at least a part of the already acquired overlapping portion, without complicated data management.
  • Further, after acquiring, immediately before acquiring the first reference image, the second reference image corresponding to the second sub-block horizontally adjacent to the first sub-block, the acquisition unit may acquire the first reference image excluding at least a part of the portion included in the second reference image acquired immediately before.
  • the image processing apparatus does not acquire at least a part of the overlapping parts in the horizontal direction acquired immediately before.
  • By limiting the exclusion condition to the horizontally overlapping portion acquired immediately before, the image processing apparatus can acquire the reference image while excluding at least a part of the already acquired overlapping portion, without complicated data management.
  • the division unit may divide the block into the plurality of sub-blocks having the same size.
  • the calculation unit may calculate the area for acquiring the first reference image, excluding at least a part of the already acquired part.
  • the area for acquiring the reference image is appropriately calculated excluding at least a part of the overlapping part. Therefore, the reference image is appropriately acquired except for at least a part of the overlapping parts.
  • Further, the acquisition unit may acquire the first reference image, which is larger than the first sub-block, excluding at least a part of the already acquired portion, and the generation unit may generate, from the first reference image, the predicted image having a higher resolution than the first sub-block.
  • the image processing apparatus can generate a high-resolution predicted image for each sub-block while suppressing an increase in the transfer amount.
  • expressions such as 64 × 64 pixels and 32 × 32 pixels mean sizes of 64 pixels × 64 pixels and 32 pixels × 32 pixels, respectively.
  • each of these expressions may mean data corresponding to the size.
  • expressions such as blocks, data units, and coding units (CUs) each mean a grouped area. Each of them may mean an image area. Alternatively, they may each mean a data area in the encoded stream.
  • a pixel means a data unit in an image or data included in the data unit.
  • the image may be any of a plurality of pictures, a single picture, a part of a picture, etc. constituting a still image or a moving image.
  • the image decoding apparatus decodes an encoded stream.
  • the size of the prediction unit constituting the encoded stream is variable and has a size of 64 × 64 pixels at the maximum.
  • the image decoding apparatus divides the prediction unit into a plurality of sub-blocks each having a data unit of 16 × 16 pixels. Then, the image decoding apparatus transfers the reference images of the plurality of sub-blocks from the frame memory to the motion compensation unit, excluding the already transferred part.
  • the image decoding apparatus divides and processes the prediction unit. This reduces the required memory capacity. Furthermore, the image decoding apparatus reduces processing delay by executing reference image transfer processing and motion compensation processing by pipeline processing. Further, the image decoding apparatus can perform the motion compensation process without increasing the transfer amount by transferring the reference image excluding the already transferred part.
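The transfer saving described above can be quantified for the 64 × 64 prediction unit divided into 16 × 16 sub-blocks. The sketch below assumes, for illustration, that only the horizontally overlapping columns are excluded; it models the pixel accounting only, not the DMA transfers themselves:

```python
# Total transfer for a 64x64 prediction unit divided into sixteen 16x16
# sub-blocks (8-tap filter, 23x23 window per sub-block), with and
# without excluding the horizontally overlapping columns.

REF_H = 23                  # window height per sub-block row
naive = 16 * 23 * 23        # every 23x23 window fetched in full: 8464 px

# With horizontal exclusion: per row of four sub-blocks, one full
# 23-column window plus three 16-column remainders.
row_cols = 23 + 3 * 16      # = 71 columns per sub-block row
excluded = 4 * REF_H * row_cols

assert naive == 8464
assert excluded == 6532
# 6532 < 8464: excluding already-fetched columns removes a large part
# of the transfer overhead caused by dividing the prediction unit,
# while each individual transfer stays small enough for a 529-pixel
# reference image buffer.
```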
  • FIG. 5 is a configuration diagram of the image decoding apparatus according to the present embodiment.
  • the image decoding apparatus according to the present embodiment includes a control unit 501, a frame memory 502, a reconstructed image memory 509, a variable length decoding unit 503, an inverse quantization unit 504, an inverse frequency transform unit 505, a motion compensation unit 506, an in-plane prediction unit 507, a reconstruction unit 508, a deblocking filter unit 510, and a motion vector calculation unit 511.
  • the control unit 501 controls the entire image decoding apparatus.
  • the frame memory 502 is a memory for storing the decoded image data.
  • the reconstructed image memory 509 is a memory for storing a part of the generated reconstructed image.
  • the variable length decoding unit 503 reads the encoded stream and decodes the variable length code.
  • the inverse quantization unit 504 performs inverse quantization.
  • the inverse frequency conversion unit 505 performs inverse frequency conversion.
  • the motion vector calculation unit 511 calculates a motion vector based on the predicted motion vector, the difference motion vector, and the like, and outputs the motion vector to the motion compensation unit 506.
  • the motion compensation unit 506 reads a reference image from the frame memory 502, performs motion compensation, and generates a predicted image.
  • the in-plane prediction unit 507 reads a reference image from the reconstructed image memory 509, performs in-plane prediction (also referred to as intra prediction), and generates a predicted image.
  • the reconstruction unit 508 generates a reconstructed image by adding the difference image and the predicted image, and stores a part of the reconstructed image in the reconstructed image memory 509.
  • the deblocking filter unit 510 removes block noise from the reconstructed image and improves the quality of the reconstructed image.
  • FIG. 6 is a configuration diagram around the motion compensation unit 506 according to the present embodiment.
  • the same components as those in FIG. 5 are assigned the same reference numerals, and description thereof is omitted.
  • FIG. 6 shows a DMA control unit 512, a reference image storage unit 513, and a predicted image storage unit 514 in addition to the components shown in FIG. These may be included in the motion compensation unit 506.
  • the DMA control unit 512 transfers the reference image from the frame memory 502 to the reference image storage unit 513 based on the motion vector calculated by the motion vector calculation unit 511.
  • the reference image storage unit 513 stores the reference image transferred by the DMA control unit 512.
  • the predicted image storage unit 514 stores the predicted image generated by the motion compensation unit 506.
  • the motion compensation unit 506 performs motion compensation based on the motion vector to generate a predicted image. Thereafter, the motion compensation unit 506 stores the predicted image in the predicted image storage unit 514.
  • the reconstruction unit 508 executes reconstruction processing using the predicted image stored in the predicted image storage unit 514.
  • the encoded stream that is decoded by the image decoding apparatus according to the present embodiment includes an encoding unit (CU: Coding Unit), a transform unit (TU: Transform Unit), and a prediction unit (PU: Prediction Unit).
  • the encoding unit is a data unit set with a size of 64 × 64 pixels to 8 × 8 pixels and capable of switching between in-plane prediction and inter prediction.
  • the transform unit is set to a size of 64 × 64 pixels to 4 × 4 pixels in the area inside the encoding unit.
  • the prediction unit is set to a size of 64 × 64 pixels to 4 × 4 pixels in an area inside the encoding unit, and has a prediction mode for in-plane prediction or a motion vector for inter prediction.
  • FIGS. 7A and 7B show the hierarchical configuration of images decoded by the image decoding apparatus according to the present embodiment.
  • a group of a plurality of pictures is called a sequence.
  • each picture is divided into slices, and each slice is further divided into coding units. Note that a picture may not be divided into slices.
  • the size of the maximum coding unit (LCU) is 64 × 64 pixels.
  • FIG. 7C is a diagram showing an encoded stream according to the present embodiment.
  • the data shown in FIGS. 7A and 7B are hierarchically encoded, whereby the encoded stream shown in FIG. 7C is obtained.
  • the encoded stream shown in FIG. 7C includes a sequence header that controls the sequence, a picture header that controls a picture, a slice header that controls a slice, and coding unit layer data (CU layer data).
  • the sequence header is also called SPS (Sequence Parameter Set), and the picture header is also called PPS (Picture Parameter Set).
  • FIG. 8A is a diagram showing a configuration example of the coding unit and coding unit layer data according to the present embodiment.
  • Coding unit layer data corresponding to the coding unit includes a CU partition flag and CU data (coding unit data).
  • When this CU partition flag is “1”, it indicates that the encoding unit is divided into four; when it is “0”, it indicates that the encoding unit is not divided into four.
  • Here, the 64 × 64 pixel encoding unit is not divided. That is, the CU partition flag is “0”.
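The quadtree semantics of the CU partition flag can be sketched as a small recursive parser. This is a hypothetical illustration of the flag's meaning, not code from any standard: a “1” splits the current unit into four half-size units, a “0” makes it a leaf coding unit, and 8 × 8 units (the minimum CU size in this embodiment) cannot split further.

```python
# Hypothetical quadtree parser driven by the CU partition flag.
# Sizes range from 64x64 (the LCU) down to 8x8, where no further
# split is possible and no flag is consumed.

def parse_cu(flags, x=0, y=0, size=64):
    """Consume partition flags (an iterator of 0/1) and return leaf CUs
    as (x, y, size) tuples in raster order."""
    if size > 8 and next(flags) == 1:
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += parse_cu(flags, x + dx, y + dy, half)
        return leaves
    return [(x, y, size)]

# One split of the 64x64 LCU, then only the top-left 32x32 split again:
flags = iter([1, 1, 0, 0, 0, 0, 0, 0, 0])
leaves = parse_cu(flags)
assert leaves[0] == (0, 0, 16) and len(leaves) == 7
```

With the flag stream `[0]`, the parser returns the undivided 64 × 64 unit, matching the example in the text where the CU partition flag is “0”.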
  • FIG. 8B is a diagram showing a configuration example of CU data according to the present embodiment.
  • the CU data includes a CU type, a motion vector or an in-plane prediction mode, and a coefficient.
  • the size of the prediction unit is determined by the CU type.
  • FIG. 9 is a diagram showing examples of selectable prediction unit sizes: 64 × 64 pixels, 32 × 64 pixels, 64 × 32 pixels, 32 × 32 pixels, 16 × 32 pixels, 32 × 16 pixels, 16 × 16 pixels, 16 × 8 pixels, 8 × 16 pixels, 8 × 8 pixels, 8 × 4 pixels, 4 × 8 pixels, and 4 × 4 pixels. The size of the prediction unit can be selected from sizes of 4 × 4 pixels or more. The prediction unit may be rectangular.
  • a motion vector or an in-plane prediction mode is designated for each prediction unit. Since only motion vectors are used in the present embodiment, only the motion vectors are shown in FIG. 8B. In addition, as shown in FIG. 9, a 16 × 64 pixel prediction unit and a 48 × 64 pixel prediction unit, obtained by dividing a square at a ratio of 1:3, may be selected.
  • FIG. 10 is a flowchart showing the decoding operation of one sequence included in the encoded stream.
  • the operation of the image decoding apparatus shown in FIG. 5 will be described using the flowchart shown in FIG.
  • the image decoding apparatus first decodes the sequence header (S901).
  • the variable length decoding unit 503 decodes the encoded stream based on the control of the control unit 501.
  • the image decoding apparatus similarly decodes the picture header (S902) and decodes the slice header (S903).
  • the image decoding apparatus decodes the encoding unit (S904).
  • the decoding of the encoding unit will be described in detail later.
  • the image decoding apparatus determines whether the decoded encoding unit is the last encoding unit of the slice (S905). If the decoded encoding unit is not the end of the slice (No in S905), the image decoding apparatus decodes the next encoding unit again (S904).
  • the image decoding apparatus determines whether or not the slice including the decoded encoding unit is the last slice of the picture (S906). If the slice is not the end of the picture (No in S906), the image decoding apparatus decodes the slice header again (S903).
  • the image decoding apparatus determines whether or not the picture including the decoded encoding unit is the last picture in the sequence (S907). If the picture is not at the end of the sequence (No in S907), the image decoding apparatus decodes the picture header again (S902). After decoding all the pictures in the sequence, the image decoding apparatus ends a series of decoding operations.
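The loop structure of S901 to S907 can be sketched as follows; the nested-list stream representation and the function name are assumptions for illustration only:

```python
# A minimal sketch of the nested decoding loops S901-S907 described above;
# the stream representation (nested lists) and the function name are assumed.

def decode_sequence(sequence):
    """sequence = list of pictures; picture = list of slices;
    slice = list of coding units. Returns the decode order of coding units."""
    decoded = []
    # S901: decode the sequence header (not modeled here).
    for picture in sequence:          # S902: decode the picture header
        for slice_ in picture:        # S903: decode the slice header
            for cu in slice_:         # S904: decode each coding unit
                decoded.append(cu)
            # S905: last coding unit of the slice reached
        # S906: last slice of the picture reached
    # S907: last picture of the sequence reached
    return decoded

stream = [[["cu0", "cu1"], ["cu2"]], [["cu3"]]]  # 2 pictures, 3 slices
assert decode_sequence(stream) == ["cu0", "cu1", "cu2", "cu3"]
```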
  • FIG. 11 is a flowchart showing the decoding operation of one encoding unit. The operation of decoding (S904) of the encoding unit of FIG. 10 will be described using the flowchart shown in FIG.
  • variable length decoding unit 503 performs variable length decoding on the processing target encoding unit included in the input encoded stream (S1001).
  • in the variable length decoding process (S1001), the variable length decoding unit 503 outputs coding information such as the coding unit type, the in-plane prediction (intra prediction) mode, motion vector information, and the quantization parameter. The variable length decoding unit 503 also outputs coefficient information corresponding to each pixel data.
  • Encoding information is output to the control unit 501, and then input to each processing unit.
  • the coefficient information is output to the next inverse quantization unit 504.
  • the inverse quantization unit 504 performs an inverse quantization process (S1002).
  • the inverse frequency transform unit 505 performs inverse frequency transform to generate a difference image (S1003).
  • control unit 501 determines whether inter prediction or in-plane prediction is used for the processing target encoding unit (S1004).
  • the control unit 501 activates the motion vector calculation unit 511.
  • the motion vector calculation unit 511 calculates a motion vector (S1009). Then, the motion vector calculation unit 511 transfers the reference image indicated by the motion vector from the frame memory 502. Next, the control unit 501 activates the motion compensation unit 506. Then, the motion compensation unit 506 generates a predicted image with 1/2 pixel accuracy or 1/4 pixel accuracy (S1005).
  • the control unit 501 activates the in-plane prediction unit 507.
  • the in-plane prediction unit 507 performs in-plane prediction processing and generates a predicted image (S1006).
  • the reconstruction unit 508 adds the predicted image output by the motion compensation unit 506 or the in-plane prediction unit 507 and the difference image output by the inverse frequency transform unit 505 to generate a reconstructed image (S1007).
  • the generated reconstructed image is input to the deblock filter unit 510.
  • the portion used in the in-plane prediction is stored in the reconstructed image memory 509.
  • the deblock filter unit 510 performs deblock filter processing for reducing block noise on the obtained reconstructed image.
  • the deblock filter unit 510 stores the result in the frame memory 502 (S1008).
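The per-coding-unit flow S1001 to S1008 (with S1009 and S1005 for the inter case) can be sketched end to end as follows. Every processing step is a deliberately trivial stand-in, e.g. the inverse frequency transform is the identity and no sub-pixel filtering is performed; only the control flow follows the text:

```python
# Toy sketch of the coding-unit decoding flow S1001-S1008. All steps are
# trivial stand-ins; only the branching and ordering follow the description.

def decode_coding_unit(cu, frame_memory, reference_frames):
    # S1001: variable-length decoding yields coding info and coefficients.
    info, coeffs = cu["info"], cu["coeffs"]
    # S1002: inverse quantization (stand-in: multiply by the quantization step).
    coeffs = [c * info["qp"] for c in coeffs]
    # S1003: inverse frequency transform (stand-in: identity) -> difference image.
    diff = coeffs
    # S1004: inter prediction or in-plane prediction?
    if info["inter"]:
        # S1009 + S1005: fetch the reference pixels indicated by the motion
        # vector and use them as the predicted image (no sub-pel filtering).
        mv = info["mv"]
        pred = [reference_frames[0][i + mv] for i in range(len(diff))]
    else:
        # S1006: in-plane prediction (stand-in: predict a constant value).
        pred = [info["dc"]] * len(diff)
    # S1007: reconstruction = predicted image + difference image.
    recon = [d + p for d, p in zip(diff, pred)]
    # S1008: deblock filter (stand-in: identity), then store in frame memory.
    frame_memory.append(recon)
    return recon

ref = list(range(16))  # a previously decoded "picture" of 16 pixels
cu = {"info": {"inter": True, "mv": 2, "qp": 1}, "coeffs": [1, -1, 0, 0]}
out = decode_coding_unit(cu, [], [ref])
# prediction = ref[2..5] = [2, 3, 4, 5]; difference = [1, -1, 0, 0]
assert out == [3, 2, 4, 5]
```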
  • FIG. 12 is an explanatory diagram showing an outline of the motion compensation process. As shown in FIG. 12, the motion compensation process generates a predicted image by extracting a part of a previously decoded picture, indicated by the motion vector v(vx, vy) decoded from the encoded stream, and applying a filter operation to it.
  • the reference image extracted from the reference picture is 71 × 71 pixels.
  • the reference image is a 71 × 71 pixel rectangle whose upper left is (x + vx − 3, y + vy − 3).
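The margin arithmetic above can be written as a small helper; the signature is an assumption, and the margins (3 pixels before the upper-left corner, 7 pixels of total extension per dimension) are taken from the text:

```python
# Sketch of the reference-region arithmetic: for the interpolation filter
# margins stated in the text, a w x h prediction unit at (x, y) with motion
# vector (vx, vy) needs a (w + 7) x (h + 7) reference rectangle whose upper
# left is shifted by -3 in each direction.

def reference_region(x, y, vx, vy, w, h, margin_left=3, margin_total=7):
    """Return (left, top, width, height) of the reference image to fetch."""
    return (x + vx - margin_left, y + vy - margin_left,
            w + margin_total, h + margin_total)

# A 64x64 prediction unit needs a 71x71 reference image whose upper left
# is (x + vx - 3, y + vy - 3), as stated above.
assert reference_region(0, 0, 10, 20, 64, 64) == (7, 17, 71, 71)
```

The same helper gives the 23 × 23 rectangles used below for 16 × 16 sub-blocks.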
  • FIG. 13A is a diagram showing a prediction unit and a motion vector according to the present embodiment.
  • the prediction unit of 64 × 64 pixels shown in FIG. 13A has one motion vector v.
  • FIG. 13B is a diagram showing division of the prediction unit shown in FIG. 13A.
  • the prediction unit of 64 × 64 pixels is divided into 16 sub-blocks BK0 to BK15 of 16 × 16 pixels.
  • the one motion vector v for the 64 × 64 pixel prediction unit shown in FIG. 13A is the same for every pixel of this prediction unit. That is, as shown in FIG. 13B, even when the prediction unit is divided into 16 sub-blocks, the motion vectors of the respective sub-blocks are all the same motion vector v. Thus, a 64 × 64 pixel prediction unit is processed as 16 sub-blocks with the same motion vector v.
  • the reference image of the sub-block BK0 is a rectangle of 23 × 23 pixels whose upper left is (x + vx − 3, y + vy − 3).
  • the reference image of the sub-block BK1 is a rectangle of 23 × 23 pixels whose upper left is (x + vx + 13, y + vy − 3).
  • two reference images corresponding to two adjacent sub-blocks overlap each other.
  • an example of the reference image of the sub-block BK0 and the reference image of the sub-block BK1 is shown, but the reference image of the sub-block BK0 and the reference image of the sub-block BK4 also overlap each other in the same manner.
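Taking x = y = vx = vy = 0 for simplicity, the overlaps described above can be checked with a small rectangle-intersection sketch (the helper and its signature are assumptions):

```python
# Quick check of the overlap claim: with the coordinates in the text and
# x = y = vx = vy = 0, the 23x23 reference rectangles of horizontally and
# vertically adjacent sub-blocks share a 7-pixel-wide strip.

def intersect(a, b):
    """a, b = (left, top, width, height); return overlap (width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    h = max(0, min(ay + ah, by + bh) - max(ay, by))
    return w, h

bk0 = (-3, -3, 23, 23)   # upper left (x + vx - 3, y + vy - 3)
bk1 = (13, -3, 23, 23)   # upper left (x + vx + 13, y + vy - 3)
bk4 = (-3, 13, 23, 23)   # BK4 sits directly below BK0
assert intersect(bk0, bk1) == (7, 23)   # horizontal neighbours: 7 x 23 overlap
assert intersect(bk0, bk4) == (23, 7)   # vertical neighbours: 23 x 7 overlap
```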
  • the image decoding apparatus is characterized in that, when a prediction unit is divided, overlapping portions are not transferred redundantly.
  • FIG. 14 is a flowchart showing operations related to the motion compensation unit 506 shown in FIG. 6. The operations of the motion vector calculation unit 511 and the motion compensation unit 506 illustrated in FIG. 6 will be described with reference to FIG. 14.
  • the motion vector calculation unit 511 calculates the motion vector of the prediction unit by a method defined by the standard (S1100). Next, the motion vector calculation unit 511 determines whether the prediction unit is larger than 16 × 16 pixels (S1101). When the prediction unit is not larger than 16 × 16 pixels (No in S1101), the motion vector calculation unit 511 and the motion compensation unit 506 perform normal operations.
  • the motion vector calculation unit 511 calculates the position and size for acquiring the reference image from the motion vector and the position (coordinates) and size of the prediction unit to be predicted (S1102).
  • the motion vector calculation unit 511 sets the obtained position and size in the DMA control unit 512.
  • the DMA control unit 512 transfers the reference image from the frame memory 502 to the reference image storage unit 513 (S1103).
  • the motion compensation unit 506 performs motion compensation using the reference image transferred to the reference image storage unit 513, and writes the result in the predicted image storage unit 514 (S1104).
  • the motion vector calculation unit 511 divides the prediction unit into a plurality of sub-blocks each having 16 × 16 pixels (S1105).
  • the motion vector calculation unit 511 calculates the position and size for acquiring the reference image for the sub-block obtained by the division (S1106). At this time, the motion vector calculation unit 511 calculates the position and size so as not to transfer the already transferred part.
  • FIG. 15A is a diagram illustrating a reference image area corresponding to a sub-block.
  • the reference image of the sub-block BK1 is 23 × 23 pixels from the position (x + vx + 13, y + vy − 3). However, some of these pixels have already been transferred.
  • FIG. 15B is a diagram illustrating a reference image acquisition area corresponding to a sub-block. As shown in FIG. 15B, when the reference image of the sub-block BK1 is transferred, the motion vector calculation unit 511 controls the transfer so that 16 × 23 pixels are transferred from the position (x + vx + 20, y + vy − 3).
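For sub-blocks processed left to right, the "skip what was already transferred" calculation of S1106 can be sketched as follows; the function and the fetched_right bookkeeping are assumptions, not the motion vector calculation unit's actual interface:

```python
# Sketch of the "transfer only the new columns" calculation for sub-blocks
# processed left to right. fetched_right is the rightmost already-transferred
# column (exclusive); the name and signature are assumptions.

def acquisition_region(left, top, w, h, fetched_right=None):
    """Clip the reference region so already-transferred columns are skipped."""
    if fetched_right is not None and fetched_right > left:
        w -= fetched_right - left
        left = fetched_right
    return left, top, w, h

# BK0: nothing fetched yet -> the full 23x23 region from (x+vx-3, y+vy-3).
assert acquisition_region(-3, -3, 23, 23) == (-3, -3, 23, 23)
# BK1: columns up to x+vx+20 are already present, so only 16x23 pixels
# from (x+vx+20, y+vy-3) are transferred, matching FIG. 15B.
assert acquisition_region(13, -3, 23, 23, fetched_right=20) == (20, -3, 16, 23)
```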
  • operations related to the sub-block BK0 and the sub-block BK1 are shown.
  • the operations related to sub-block BK1 and sub-block BK2 are the same as the operations related to sub-block BK0 and sub-block BK1.
  • the operations related to the subblock BK0 and the subblock BK4 are the same as the operations related to the subblock BK0 and the subblock BK1 except that two reference images overlap in the vertical direction.
  • the motion vector calculation unit 511 sets the obtained position and size in the DMA control unit 512.
  • the DMA control unit 512 transfers the reference image from the frame memory 502 to the reference image storage unit 513 (S1107).
  • the motion compensation unit 506 performs motion compensation using the reference image stored in the reference image storage unit 513, and writes the result in the predicted image storage unit 514 (S1108).
  • the motion vector calculation unit 511 determines whether there is an unprocessed sub-block (S1109). When there is an unprocessed sub-block (Yes in S1109), the motion vector calculation unit 511 calculates a position and a size for acquiring a reference image corresponding to the sub-block (S1106). When there is no unprocessed sub-block (No in S1109), the motion vector calculation unit 511 and the motion compensation unit 506 end the process.
  • FIG. 15C is a diagram showing a plurality of acquisition areas corresponding to a plurality of sub-blocks. As shown in FIG. 15C, the reference image corresponding to each sub-block is not transferred redundantly. For this reason, the transfer amount of the reference image does not increase.
  • FIG. 16A is a time chart showing a first example of motion compensation operation.
  • FIG. 16A shows an example in which the prediction unit is not divided.
  • FIG. 16B is a time chart showing a second example of motion compensation operation.
  • FIG. 16B shows an example of dividing a prediction unit.
  • reference image data transfer is performed in fine data units.
  • motion compensation processing is performed in units of fine data. Therefore, the necessary capacity of the reference image storage unit 513 for holding the reference image is reduced.
  • the total amount of data transfer is the same in the case of FIG. 16A and the case of FIG. 16B. Therefore, the transfer amount does not increase, and the memory bandwidth necessary for transfer does not increase.
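The claim that the total transfer amount is unchanged can be checked arithmetically: with every overlap excluded, the 16 per-sub-block acquisition areas of FIG. 15C tile the 71 × 71 reference region exactly once. A sketch:

```python
# Arithmetic check: dividing the prediction unit does not change the total
# transfer amount when all overlaps are excluded. The first column and row
# of sub-blocks carry the 7-pixel filter margin (23 wide/tall); the others
# contribute 16 pixels each.

col_widths = [23] + [16] * 3
row_heights = [23] + [16] * 3

divided_total = sum(w * h for w in col_widths for h in row_heights)
undivided_total = 71 * 71

assert divided_total == undivided_total == 5041
```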
  • FIG. 16C is a time chart showing a third example of the motion compensation operation.
  • FIG. 16C shows a modification in the case of dividing the prediction unit.
  • the image decoding apparatus can also reduce the processing delay by executing the motion compensation process and the transfer process of the reference image in parallel, as pipeline processing in fine data units.
  • the image decoding apparatus calculates the position and size for acquiring the reference image so as not to overlap the already transferred portion, and transfers the reference image. Specifically, as shown in FIG. 15C, the image decoding apparatus divides the prediction unit into a plurality of sub-blocks, and executes a motion compensation process for each sub-block.
  • the image decoding apparatus divides a 64 × 64 pixel prediction unit into 16 × 16 pixels, but may divide it into 8 × 8 pixels or 32 × 32 pixels. Further, the image decoding apparatus may divide a non-square prediction unit, such as 64 × 32 pixels, into 16 × 16 pixels or another size.
  • data necessary as a reference image is acquired in units of pixels.
  • the image decoding apparatus does not necessarily acquire necessary data in units of one pixel, and may acquire necessary data in units of four pixels, eight pixels, or even larger data units.
  • the motion vector calculation unit 511 calculates the position and size so that the areas for acquiring the reference images do not overlap.
  • the DMA control unit 512 may control the transfer so that the already transferred part is not transferred.
  • the reference image storage unit 513 may control the transfer so that the portion stored in the reference image storage unit 513 is not transferred.
  • all of the parts that have already been transferred may be excluded from the transfer, or some of the parts that have already been transferred may be excluded from the transfer.
  • each processing unit may be realized by a circuit using dedicated hardware, or may be realized by a program executed by a processor.
  • the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514 are shown as a memory or a storage unit. However, these may be any configuration such as a flip-flop or a register as long as it is a storage element capable of storing data. Furthermore, a part of the memory area of the processor or a part of the cache memory may be used as the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514.
  • an image decoding device is shown.
  • the present invention is not limited to decoding; an image coding apparatus, which executes processing in the procedure reverse to decoding, can similarly divide a prediction unit and perform motion compensation processing.
  • the present invention is not limited to encoding or decoding, and the image processing apparatus may divide the prediction unit and perform the motion compensation process.
  • the image decoding apparatus decodes an encoded stream.
  • the size of the prediction unit constituting the encoded stream is variable and is 64 × 64 pixels at the maximum.
  • the image decoding apparatus divides the prediction unit into a plurality of sub-blocks each having a data unit of 16 × 16 pixels. Then, the image decoding apparatus transfers the reference images of the plurality of sub-blocks from the frame memory to the motion compensation unit, excluding overlapping portions in the horizontal direction.
  • FIG. 5 is a configuration diagram of the image decoding apparatus according to the present embodiment.
  • FIG. 6 is a configuration diagram around the motion compensation unit 506 included in the image decoding apparatus according to the present embodiment. Since the configuration of the image decoding apparatus according to the present embodiment is the same as that of Embodiment 1, description thereof is omitted.
  • FIG. 17A is a diagram showing a reference image acquisition area according to the present embodiment.
  • similarly to Embodiment 1, the image decoding apparatus according to the present embodiment transfers the reference images of a plurality of sub-blocks arranged in the horizontal direction, such as sub-blocks BK0 to BK3, excluding the overlapping portions.
  • when transferring the reference image of the sub-block BK0, the image decoding apparatus according to the present embodiment transfers 23 × 23 pixels whose upper left is (x + vx − 3, y + vy − 3). Further, when transferring the reference image of the sub-block BK1, the image decoding apparatus transfers 16 × 23 pixels whose upper left is (x + vx + 20, y + vy − 3).
  • the reference image of the sub-block BK2 and the reference image of the sub-block BK3 are transferred in the same manner.
  • the image decoding apparatus according to the present embodiment transfers 23 × 23 pixels whose upper left is (x + vx − 3, y + vy + 13) when transferring the reference image of the sub-block BK4. As a result, the transfer is performed in duplicate for the hatched portion in FIG. 17A.
  • FIG. 17B is a diagram showing a plurality of acquisition areas corresponding to the prediction unit according to the present embodiment. In the entire prediction unit of 64 ⁇ 64 pixels, the transfer is performed in duplicate for the hatched portion in FIG. 17B.
  • when the prediction unit is not divided, the transfer amount of the reference image is 5041 pixels (71 × 71 pixels).
  • in the present embodiment, the transfer amount of the reference image is 6532 pixels (71 × 23 × 4 pixels). Therefore, the transfer amount of the reference image according to the present embodiment is larger than when the prediction unit is not divided.
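The figure of 6532 pixels can be checked directly: overlaps are excluded only horizontally, so each of the four sub-block rows fetches a full 71 × 23 stripe, and vertically adjacent stripes re-fetch a 7-row band. A sketch:

```python
# Arithmetic check for this embodiment: horizontal-only overlap exclusion
# means each of the four sub-block rows transfers a full 71 x 23 stripe.

per_row = (23 + 16 * 3) * 23          # one row of sub-blocks: 71 x 23 pixels
divided_total = per_row * 4           # four sub-block rows

assert per_row == 71 * 23
assert divided_total == 6532          # vs. 5041 (71 x 71) without division
assert divided_total - 71 * 71 == 3 * 71 * 7  # three re-fetched 71 x 7 bands
```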
  • the image decoding apparatus calculates the position and size for acquiring the reference image so as not to overlap the already transferred horizontal reference image. Then, the image decoding apparatus transfers the reference image except for the overlapping portion in the horizontal direction. Thereby, the memory capacity required for storing the reference image is further reduced. Further, the circuit configuration of the motion compensation unit 506 is simplified.
  • the image decoding apparatus divides a 64 × 64 pixel prediction unit into 16 × 16 pixels, but may divide it into 8 × 8 pixels or 32 × 32 pixels. Further, the image decoding apparatus may divide a non-square prediction unit, such as 64 × 32 pixels, into 16 × 16 pixels or another size.
  • data necessary as a reference image is acquired in units of pixels.
  • the image decoding apparatus does not necessarily acquire necessary data in units of one pixel, and may acquire necessary data in units of four pixels, eight pixels, or even larger data units.
  • the motion vector calculation unit 511 calculates the position and size so that the areas for acquiring the reference images do not overlap.
  • the DMA control unit 512 may control the transfer so that the already transferred part is not transferred.
  • the reference image storage unit 513 may control the transfer so that the portion stored in the reference image storage unit 513 is not transferred.
  • each processing unit may be realized by a circuit using dedicated hardware, or may be realized by a program executed by a processor.
  • the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514 are shown as a memory or a storage unit. However, these may be any configuration such as a flip-flop or a register as long as it is a storage element capable of storing data. Furthermore, a part of the memory area of the processor or a part of the cache memory may be used as the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514.
  • an image decoding device is shown.
  • the present invention is not limited to decoding; an image coding apparatus, which executes processing in the procedure reverse to decoding, can similarly divide a prediction unit and perform motion compensation processing.
  • the present invention is not limited to encoding or decoding, and the image processing apparatus may divide the prediction unit and perform the motion compensation process.
  • the image decoding apparatus decodes an encoded stream.
  • the size of the prediction unit constituting the encoded stream is variable and is 64 × 64 pixels at the maximum.
  • the image decoding apparatus divides the prediction unit into a plurality of sub-blocks each of which is a data unit of 16 × 16 pixels. Then, the image decoding apparatus transfers the reference images of the plurality of sub-blocks from the frame memory to the motion compensation unit, excluding the horizontally overlapping portion transferred immediately before.
  • FIG. 5 is a configuration diagram of the image decoding apparatus according to the present embodiment.
  • FIG. 6 is a configuration diagram around the motion compensation unit 506 included in the image decoding apparatus according to the present embodiment. Since the configuration of the image decoding apparatus according to the present embodiment is the same as that of Embodiment 1, description thereof is omitted.
  • FIG. 18A is a diagram showing a reference image acquisition area according to the present embodiment.
  • the image decoding apparatus according to the present embodiment transfers a reference image, excluding overlapping portions, for a plurality of subblocks arranged in the horizontal direction, such as subblock BK0 and subblock BK1 in FIG. 18A.
  • when transferring the reference image of the sub-block BK0, the image decoding apparatus according to the present embodiment transfers 23 × 23 pixels whose upper left is (x + vx − 3, y + vy − 3). Further, when transferring the reference image of the sub-block BK1, the image decoding apparatus transfers 16 × 23 pixels whose upper left is (x + vx + 20, y + vy − 3).
  • the image decoding apparatus acquires a reference image of the sub-block BK2 below the sub-block BK0.
  • the image decoding apparatus transfers 23 × 23 pixels whose upper left is (x + vx − 3, y + vy + 13). As a result, the transfer is duplicated for the shaded portion in FIG. 18A.
  • the image decoding apparatus transfers the reference image corresponding to the sub-block BK3, on the right side of the sub-block BK2, excluding the overlapping portion already transferred when the reference image corresponding to the sub-block BK2 was transferred.
  • the image decoding apparatus transfers a reference image corresponding to the upper right sub-block BK4 of the sub-block BK3.
  • at this time, the image decoding apparatus transfers again, in an overlapping manner, the portion already transferred when the reference image corresponding to the sub-block BK1 was transferred. That is, the image decoding apparatus transfers a reference image of 23 × 23 pixels whose upper left is (x + vx + 29, y + vy − 3).
  • FIG. 18B is a diagram showing a plurality of acquisition areas corresponding to the prediction unit according to the present embodiment. In the entire prediction unit of 64 ⁇ 64 pixels, the transfer is performed in duplicate for the hatched portion in FIG. 18B.
  • when the prediction unit is not divided, the transfer amount of the reference image is 5041 pixels (71 × 71 pixels). In the present embodiment, the transfer amount of the reference image is 7176 pixels (39 × 23 × 8 pixels). Therefore, the transfer amount of the reference image according to the present embodiment is larger than when the prediction unit is not divided. Further, the transfer amount of the reference image according to the present embodiment is larger than the transfer amount of the reference image according to the second embodiment.
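The figure of 7176 pixels is consistent with processing the 16 sub-blocks as 8 pairs of horizontally adjacent sub-blocks: within a pair, the second transfer excludes only the overlap with the transfer made immediately before (23 + 16 = 39 columns by 23 rows per pair), while all other overlaps are re-fetched. A sketch:

```python
# Arithmetic check for this embodiment: only the overlap with the region
# transferred immediately before (a horizontal neighbour) is excluded, so
# each pair of horizontally adjacent sub-blocks costs 39 x 23 pixels.

per_pair = (23 + 16) * 23        # first sub-block 23x23, its neighbour 16x23
divided_total = per_pair * 8     # 16 sub-blocks processed as 8 pairs

assert divided_total == 39 * 23 * 8 == 7176
# Larger than both the undivided transfer (5041) and Embodiment 2 (6532):
assert 5041 < 6532 < divided_total
```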
  • the circuit configuration of the motion compensation unit 506 is simplified. Further, the reference image used for the sub-block BK0 does not need to be retained after being used for the sub-block BK1. Therefore, data management is simplified and the necessary capacity of the reference image storage unit 513 is reduced.
  • the image decoding apparatus calculates the position and size for acquiring the reference image so as not to overlap with the horizontal overlapping portion transferred immediately before. Then, the image decoding apparatus transfers the reference image except for the overlapped portion in the horizontal direction transferred immediately before. Thereby, the memory capacity required for storing the reference image is further reduced. Further, the circuit configuration of the motion compensation unit 506 is simplified.
  • the image decoding apparatus divides a 64 × 64 pixel prediction unit into 16 × 16 pixels, but may divide it into 8 × 8 pixels or 32 × 32 pixels. Further, the image decoding apparatus may divide a non-square prediction unit, such as 64 × 32 pixels, into 16 × 16 pixels or another size.
  • data necessary as a reference image is acquired in units of pixels.
  • the image decoding apparatus does not necessarily acquire necessary data in units of one pixel, and may acquire necessary data in units of four pixels, eight pixels, or even larger data units.
  • the motion vector calculation unit 511 calculates the position and size so that the areas for acquiring the reference images do not overlap.
  • the DMA control unit 512 may control the transfer so that the already transferred part is not transferred.
  • the reference image storage unit 513 may control the transfer so that the portion stored in the reference image storage unit 513 is not transferred.
  • each processing unit may be realized by a circuit using dedicated hardware, or may be realized by a program executed by a processor.
  • the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514 are shown as a memory or a storage unit. However, these may be any configuration such as a flip-flop or a register as long as it is a storage element capable of storing data. Furthermore, a part of the memory area of the processor or a part of the cache memory may be used as the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514.
  • an image decoding device is shown.
  • the present invention is not limited to decoding; an image coding apparatus, which executes processing in the procedure reverse to decoding, can similarly divide a prediction unit and perform motion compensation processing.
  • the present invention is not limited to encoding or decoding, and the image processing apparatus may divide the prediction unit and perform the motion compensation process.
  • FIG. 19A is a diagram illustrating a configuration of an image processing device according to the fourth embodiment.
  • the image processing apparatus 100 in FIG. 19A performs a motion compensation process using a motion vector corresponding to a block in an image.
  • the image processing apparatus 100 includes a dividing unit 101, a calculating unit 102, an acquiring unit 103, and a generating unit 104.
  • the division unit 101 and the calculation unit 102 correspond to the motion vector calculation unit 511 of the first embodiment.
  • the acquisition unit 103 corresponds to the DMA control unit 512 of the first embodiment.
  • the generation unit 104 corresponds to the motion compensation unit 506.
  • FIG. 19B is a diagram showing an operation of the image processing apparatus 100 of FIG. 19A.
  • the dividing unit 101 divides a block into a plurality of sub-blocks (S101).
  • the calculation unit 102 calculates an area for acquiring the first reference image (S102).
  • the calculation unit 102 uses a motion vector corresponding to the block.
  • the first reference image is a reference image corresponding to the first sub block included in the plurality of sub blocks.
  • the acquisition unit 103 acquires the first reference image from the calculated area, excluding at least a part of the already acquired parts (S103). For example, there is a case where a portion included in the first reference image has already been acquired in order to generate a predicted image corresponding to the second sub-block.
  • the acquisition unit 103 acquires the first reference image by excluding at least a part of such a part.
  • the generation unit 104 generates a predicted image corresponding to the first sub-block from the first reference image (S104).
  • the image processing apparatus 100 does not acquire at least a part of the already acquired parts in duplicate. Therefore, an increase in transfer amount is suppressed. Therefore, the required memory capacity is reduced. Further, processing delay is suppressed.
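Steps S101 to S104 can be sketched on a 1-D "image" for brevity; the division, calculation, acquisition and generation units become plain code, and every name and the 7-pixel filter margin are assumptions for illustration:

```python
# Minimal 1-D sketch of steps S101-S104 of the image processing apparatus 100.
# All names and the margin value are assumptions; prediction is a plain copy.

def process_block(block_pos, block_len, mv, reference, sub_len, margin=7):
    sub_starts = range(block_pos, block_pos + block_len, sub_len)  # S101: divide
    fetched_end = None
    storage = {}       # stand-in for the reference image storage
    predictions = []
    for s in sub_starts:
        # S102: calculate the acquisition area from the motion vector.
        start, end = s + mv, s + mv + sub_len + margin
        # S103: acquire the reference image, excluding the part already acquired.
        lo = start if fetched_end is None else max(start, fetched_end)
        for i in range(lo, end):
            storage[i] = reference[i]
        fetched_end = end
        # S104: generate the predicted image (here: a copy, no filtering).
        predictions.append([storage[i] for i in range(start, start + sub_len)])
    return predictions

ref = list(range(100))
preds = process_block(0, 32, 5, ref, 16)
# The second sub-block's region overlaps the first by 7 samples; those
# samples are read from storage rather than fetched again.
assert preds == [list(range(5, 21)), list(range(21, 37))]
```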
  • the acquisition unit 103 may acquire the first reference image that partially overlaps the second reference image, excluding at least a part of the portion where the first reference image and the second reference image overlap.
  • the second reference image is a reference image corresponding to the second sub-block included in the plurality of divided sub-blocks.
  • the image processing apparatus 100 acquires a reference image having an overlapping portion.
  • the overlapping part may have already been acquired.
  • the image processing apparatus 100 can reduce processing waste by not acquiring at least a part of the overlapping parts.
  • the acquisition unit 103 may acquire the first reference image by excluding at least a part of the portion included in the already acquired second reference image. As a result, at least some of the portions acquired when acquiring the reference images of other sub-blocks are not acquired redundantly. Therefore, an increase in transfer amount is suppressed. Also, the required memory capacity is reduced.
  • the acquisition unit 103 may acquire the first reference image while the generation unit 104 is generating the predicted image corresponding to the second sub-block.
  • the image processing apparatus 100 can simultaneously perform the reference image acquisition process and the predicted image generation process. Accordingly, processing delay is further reduced.
  • the above-described second reference image may be limited to a reference image corresponding to a second sub-block that is adjacent to the first sub-block in the horizontal direction.
  • the image processing apparatus 100 does not acquire at least a part of the overlapping parts in the horizontal direction.
  • by limiting the exclusion condition to the overlapping portions in the horizontal direction, the image processing apparatus 100 can acquire the reference image, excluding at least a part of the already acquired overlapping portions, without using complicated data management.
  • the acquisition unit 103 may acquire the second reference image immediately before acquiring the first reference image, and then acquire the first reference image excluding at least a part of the portion included in the acquired second reference image. Thereby, the image processing apparatus 100 does not acquire at least a part of the overlapping portion acquired immediately before.
  • by limiting the exclusion condition to the overlapping portion acquired immediately before, the image processing apparatus 100 can acquire the reference image, excluding at least a part of the already acquired overlapping portion, without using complicated data management.
  • the second reference image described above, that is, the second reference image acquired immediately before the first reference image, may be limited to the reference image corresponding to a second sub-block adjacent to the first sub-block in the horizontal direction. Thereby, the image processing apparatus 100 does not acquire at least a part of the horizontally overlapping portion acquired immediately before. By limiting the exclusion condition to the horizontally overlapping portion acquired immediately before, the image processing apparatus 100 can acquire the reference image, excluding at least a part of the already acquired overlapping portion, without using complicated data management.
  • the acquisition unit 103 may acquire the first reference image by excluding at least part of the overlapping portion in a predetermined direction, not only in the horizontal direction.
  • the acquisition unit 103 may also acquire the first reference image by excluding at least part of the overlapping portions in the direction along the processing order. More specifically, when acquiring the plurality of reference images corresponding to the plurality of sub-blocks in order along the vertical direction, the acquisition unit 103 may acquire the first reference image by excluding at least part of the overlapping portions in the vertical direction.
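The overlap exclusion described in the bullets above can be sketched as follows. This is an illustrative example only, not the patent's implementation: the 8-tap filter length, the quarter-pel motion vectors, and all function names are assumptions introduced for the sketch.

```python
# Sketch: fetching reference regions for two horizontally adjacent
# sub-blocks that share one motion vector. Because sub-pixel interpolation
# needs a margin of (TAPS - 1) pixels, the two regions overlap, and the
# second fetch can skip the columns that were already transferred.

TAPS = 8  # assumed interpolation filter length

def reference_region(sub_x, sub_y, sub_w, sub_h, mv_x, mv_y):
    """Region (x, y, w, h) needed to interpolate one sub-block."""
    x = sub_x + (mv_x >> 2) - (TAPS // 2 - 1)   # assuming 1/4-pel vectors
    y = sub_y + (mv_y >> 2) - (TAPS // 2 - 1)
    return (x, y, sub_w + TAPS - 1, sub_h + TAPS - 1)

def exclude_horizontal_overlap(region, prev_region):
    """Drop the columns of `region` already covered by `prev_region`
    (the region fetched immediately before, to its left)."""
    x, y, w, h = region
    px, _, pw, _ = prev_region
    prev_right = px + pw
    if px <= x < prev_right:          # the regions overlap horizontally
        skipped = prev_right - x
        return (prev_right, y, w - skipped, h)
    return region

r0 = reference_region(0, 0, 8, 8, 0, 0)       # (-3, -3, 15, 15)
r1 = reference_region(8, 0, 8, 8, 0, 0)       # (5, -3, 15, 15)
trimmed = exclude_horizontal_overlap(r1, r0)  # → (12, -3, 8, 15)
```

With the exclusion, the second sub-block needs only 8 new columns instead of 15, which is the transfer-amount reduction the bullets describe.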
  • the dividing unit 101 may divide the block into a plurality of sub-blocks having the same size. As a result, the motion compensation process is executed for sub-blocks of the same size. Therefore, the motion compensation process is simplified.
  • the calculation unit 102 may calculate the area for acquiring the first reference image so as to exclude at least a part of the already acquired portion. Thereby, the area to be acquired is reduced.
  • the acquisition unit 103 may acquire a first reference image larger than the first sub-block, excluding at least part of the already acquired portions. The generation unit 104 may then generate a predicted image with a higher resolution than the first sub-block. As a result, the image processing apparatus 100 can generate a high-resolution predicted image for each sub-block while suppressing an increase in the transfer amount.
  • the image processing apparatus has been described above based on the embodiment.
  • the present invention is not limited to this embodiment. Forms obtained by applying various modifications conceived by those skilled in the art to this embodiment, and forms constructed by combining components of different embodiments, may also be included within the scope of one or more aspects, as long as they do not depart from the gist of the present invention.
  • another processing unit may execute a process executed by a specific processing unit.
  • the order in which the processes are executed may be changed, or a plurality of processes may be executed in parallel.
  • the above concept can be realized not only as an image processing apparatus but also as a method whose steps correspond to the processing units constituting the image processing apparatus. For example, these steps are executed by a computer, and the concept can be realized as a program that causes a computer to execute the steps included in the method. Further, the concept can be realized as a computer-readable recording medium, such as a CD-ROM, on which the program is recorded.
  • the image processing device and the image processing method can be applied to an image encoding device, an image decoding device, an image encoding method, and an image decoding method.
  • each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • the software that realizes the image processing apparatus according to each of the above embodiments is the following program.
  • this program causes a computer to execute an image processing method for performing motion compensation processing using a motion vector corresponding to a block in an image, the method including: a dividing step of dividing the block into a plurality of sub-blocks; a calculating step of calculating, using the motion vector corresponding to the block, a region for acquiring a first reference image corresponding to a first sub-block included in the plurality of sub-blocks; an acquiring step of acquiring, from the calculated region, the first reference image excluding at least a part of the already acquired portion; and a generating step of generating, from the first reference image, a predicted image corresponding to the first sub-block.
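The dividing, calculating, and acquiring steps listed above can be sketched end to end by counting how many reference pixels are actually transferred. This is a toy illustration, not the claimed implementation: the 2x2 split, the 2-pixel interpolation margin, and all names are assumptions; the generating step is omitted because the point of the sketch is how the exclusion reduces the transfer amount.

```python
# Sketch of divide / calculate / acquire on one 8x8 block with a single
# motion vector, excluding pixels already fetched for earlier sub-blocks.

MARGIN = 2  # assumed margin needed for sub-pixel interpolation

def divide(block, n=2):
    """Dividing step: split an (x, y, w, h) block into n*n equal sub-blocks."""
    x, y, w, h = block
    sw, sh = w // n, h // n
    return [(x + i * sw, y + j * sh, sw, sh)
            for j in range(n) for i in range(n)]

def calculate(sub, mv):
    """Calculating step: region of the reference image for one sub-block."""
    x, y, w, h = sub
    return (x + mv[0], y + mv[1], w + MARGIN, h + MARGIN)

def acquire(region, fetched):
    """Acquiring step: count newly fetched pixels, excluding pixels that
    were already acquired for an earlier sub-block."""
    x, y, w, h = region
    new = 0
    for r in range(y, y + h):
        for c in range(x, x + w):
            if (r, c) not in fetched:
                fetched.add((r, c))
                new += 1
    return new

fetched = set()
mv = (1, 1)
subs = divide((0, 0, 8, 8))
transferred = sum(acquire(calculate(sub, mv), fetched) for sub in subs)
naive = sum(6 * 6 for _ in subs)
# transferred == 100, while a naive per-sub-block fetch moves naive == 144
```

The exclusion brings the transfer down from 144 to 100 pixels in this toy case, which is the effect the claimed method aims at.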
  • the plurality of components included in the image processing apparatus may be realized as an LSI (Large Scale Integration) that is an integrated circuit. These components may be individually made into one chip, or may be made into one chip so as to include a part or all of them. For example, the components other than the memory may be integrated into one chip. Although referred to here as an LSI, it may be referred to as an IC (Integrated Circuit), a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
  • the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
  • an FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may be used.
  • the storage medium may be any medium that can record a program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, and a semiconductor memory.
  • FIG. 20 is a diagram showing an overall configuration of a content supply system ex100 that realizes a content distribution service.
  • the area in which communication service is provided is divided into cells of a desired size, and base stations ex106 to ex110, which are fixed wireless stations, are installed in the respective cells.
  • devices such as a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, and a game machine ex115 are connected to one another via the telephone network ex104 and the base stations ex106 to ex110. Each device is connected to the Internet ex101 via the Internet service provider ex102.
  • each device may be directly connected to the telephone network ex104 without going through the base stations ex106 to ex110, which are fixed wireless stations.
  • the devices may also be directly connected to each other via short-range wireless communication or the like.
  • the camera ex113 is a device capable of shooting moving images, such as a digital video camera.
  • the camera ex116 is a device capable of shooting still images and moving images, such as a digital camera.
  • the mobile phone ex114 may be a GSM (registered trademark) (Global System for Mobile Communications) phone, a CDMA (Code Division Multiple Access) phone, a W-CDMA (Wideband-Code Division Multiple Access) phone, an HSPA (High Speed Packet Access) phone, a PHS (Personal Handyphone System) phone, or the like.
  • the camera ex113 and the like can be connected to the streaming server ex103 through the base station ex109 and the telephone network ex104, which enables live distribution and the like.
  • in live distribution, content (for example, video of a live music performance) captured by the user with the camera ex113 is encoded as described in the above embodiments and transmitted to the streaming server ex103.
  • the streaming server ex103 streams the transmitted content data to clients upon request.
  • examples of clients include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, and the game machine ex115, each of which can decode the encoded data. Each device that receives the distributed data decodes and reproduces it.
  • the encoding of the captured data may be performed by the camera ex113, by the streaming server ex103 that performs the data transmission processing, or shared between them.
  • similarly, the decoding of the distributed data may be performed by the client, by the streaming server ex103, or shared between them.
  • still image and/or moving image data captured by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111.
  • the encoding in this case may be performed by any of the camera ex116, the computer ex111, and the streaming server ex103, or shared among them.
  • these encoding and decoding processes are generally executed in the computer ex111 or in an LSI (Large Scale Integration) ex500 included in each device.
  • the LSI ex500 may be configured as a single chip or a plurality of chips.
  • image encoding or image decoding software may be incorporated into some type of recording medium (such as a CD-ROM, a flexible disk, or a hard disk) readable by the computer ex111 or the like, and the encoding or decoding processing may be performed using that software.
  • moving image data acquired by a camera of the mobile phone ex114 may be transmitted. The moving image data in this case is data encoded by the LSI ex500 included in the mobile phone ex114.
  • the streaming server ex103 may be a plurality of servers or a plurality of computers, and may process, record, and distribute data in a distributed manner.
  • the encoded data can be received and reproduced by the client.
  • the information transmitted by the user can be received, decoded, and reproduced by the client in real time, so that even a user who has no special rights or facilities can realize personal broadcasting.
  • At least one of the image encoding device and the image decoding device of each of the above embodiments can be incorporated in the digital broadcasting system ex200.
  • a bitstream of video information is transmitted to a communication or broadcasting satellite ex202 via radio waves.
  • This bit stream is an encoded bit stream encoded by the image encoding method described in the above embodiments.
  • the broadcasting satellite ex202 transmits a radio wave for broadcasting, and the home antenna ex204 capable of receiving the satellite broadcast receives the radio wave.
  • the received bit stream is decoded and reproduced by a device such as the television (receiver) ex300 or the set top box (STB) ex217.
  • the image decoding apparatus described in the above embodiments can be mounted in the playback apparatus ex212 that reads and decodes a bitstream recorded on recording media ex214 such as a CD or a DVD.
  • the reproduced video signal is displayed on the monitor ex213.
  • it is also possible to mount the image decoding apparatus described in the above embodiments in the reader/recorder ex218 that reads and decodes an encoded bitstream recorded on a recording medium ex215 such as a DVD or BD, or that encodes a video signal and writes it on the recording medium ex215.
  • in this case, the reproduced video signal is displayed on the monitor ex219, and the video signal can be reproduced by another device or system using the recording medium ex215 on which the encoded bitstream is recorded.
  • an image decoding device may be mounted in a set-top box ex217 connected to a cable ex203 for cable television or an antenna ex204 for satellite / terrestrial broadcasting and displayed on the monitor ex219 of the television. At this time, the image decoding apparatus may be incorporated in the television instead of the set top box.
  • FIG. 22 is a diagram illustrating a television (receiver) ex300 that uses the image decoding method described in each of the above embodiments.
  • the television ex300 includes a tuner ex301 that acquires or outputs a bitstream of video information via the antenna ex204 or the cable ex203 that receives broadcasts, a modulation/demodulation unit ex302 that demodulates the received encoded data or modulates encoded data to be transmitted to the outside, and a multiplexing/demultiplexing unit ex303 that demultiplexes the demodulated video data and audio data or multiplexes encoded video data and audio data.
  • the television ex300 also includes a signal processing unit ex306 that has an audio signal processing unit ex304 and a video signal processing unit ex305, which decode the audio data and the video data, respectively, or encode the respective information, and an output unit ex309 that outputs the decoded audio signal and video signal.
  • the television ex300 includes an interface unit ex317 including an operation input unit ex312 that receives an input of a user operation.
  • the television ex300 includes a control unit ex310 that controls each unit in an integrated manner, and a power supply circuit unit ex311 that supplies power to each unit.
  • in addition to the operation input unit ex312, the interface unit ex317 may include a bridge ex313 connected to an external device such as the reader/recorder ex218, a slot for attaching a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and the like.
  • the recording medium ex216 can electrically record information using a nonvolatile/volatile semiconductor memory element that it contains.
  • the units of the television ex300 are connected to one another via a synchronous bus.
  • the television ex300 receives a user operation from the remote controller ex220 or the like and, under control of the control unit ex310 having a CPU or the like, the multiplexing/demultiplexing unit ex303 demultiplexes the video data and audio data demodulated by the modulation/demodulation unit ex302. Furthermore, in the television ex300, the separated audio data is decoded by the audio signal processing unit ex304, and the separated video data is decoded by the video signal processing unit ex305 using the decoding method described in the above embodiments. The decoded audio signal and video signal are output to the outside from the output unit ex309.
  • when output, these signals may be temporarily stored in the buffers ex318, ex319, and the like so that the audio signal and the video signal are reproduced in synchronization.
  • the television ex300 may read an encoded bitstream not from a broadcast or the like but from recording media ex215 and ex216 such as a magnetic/optical disk or an SD card.
  • the television ex300 encodes an audio signal and a video signal and transmits them to the outside or writes them on a recording medium.
  • the television ex300 receives a user operation from the remote controller ex220 or the like and, under control of the control unit ex310, encodes an audio signal with the audio signal processing unit ex304 and encodes a video signal with the video signal processing unit ex305, using the encoding method described in the above embodiments.
  • the encoded audio signal and video signal are multiplexed by the multiplexing / demultiplexing unit ex303 and output to the outside. When multiplexing, these signals may be temporarily stored in the buffers ex320 and ex321 so that the audio signal and the video signal are synchronized.
  • the buffers ex318 to ex321 may be provided as shown in the figure, or one or more buffers may be shared. Further, in addition to the illustrated example, data may be stored in a buffer, for example between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303, as a cushion that absorbs system overflow and underflow.
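The role of such a cushioning buffer between two processing units can be sketched as a bounded FIFO; this is an illustrative example only, and the class and its names are assumptions made for the sketch, not part of the described apparatus.

```python
# A bounded FIFO placed between a producer unit and a consumer unit so
# that short-term rate mismatches neither overflow (producer too fast)
# nor underflow (consumer too fast) the pipeline.
from collections import deque

class BoundedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def push(self, item):
        """Producer side; returns False when the buffer would overflow."""
        if len(self.queue) >= self.capacity:
            return False
        self.queue.append(item)
        return True

    def pop(self):
        """Consumer side; returns None when the buffer underflows."""
        return self.queue.popleft() if self.queue else None

buf = BoundedBuffer(2)
assert buf.push("a") and buf.push("b")
assert not buf.push("c")          # full: the producer must wait
assert buf.pop() == "a"
```

A real system would block or apply back-pressure instead of returning False/None, but the overflow/underflow conditions the buffer absorbs are the same.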
  • in addition to acquiring audio data and video data from broadcasts and recording media, the television ex300 may have a configuration that accepts AV input from a microphone and a camera, and may perform encoding processing on the data acquired from them.
  • the television ex300 has been described here as being capable of the above-described encoding, multiplexing, and external output, but it may instead be capable only of the above-described reception, decoding, and external output, without performing these processes.
  • the decoding or encoding process may be performed by either the television ex300 or the reader/recorder ex218, or the television ex300 and the reader/recorder ex218 may share the processing between them.
  • FIG. 23 shows a configuration of the information reproducing / recording unit ex400 when data is read from or written to the optical disk.
  • the information reproducing / recording unit ex400 includes elements ex401 to ex407 described below.
  • the optical head ex401 writes information by irradiating a laser spot on the recording surface of the recording medium ex215 that is an optical disk, and reads information by detecting reflected light from the recording surface of the recording medium ex215.
  • the modulation recording unit ex402 electrically drives a semiconductor laser built in the optical head ex401 and modulates the laser beam according to the recording data.
  • the reproduction/demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the reflected light from the recording surface, separates and demodulates the signal components recorded on the recording medium ex215, and reproduces the necessary information.
  • the buffer ex404 temporarily holds information to be recorded on the recording medium ex215 and information reproduced from the recording medium ex215.
  • the disk motor ex405 rotates the recording medium ex215.
  • the servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs tracking processing of the laser spot.
  • the system control unit ex407 controls the entire information reproduction / recording unit ex400.
  • the system control unit ex407 uses various kinds of information held in the buffer ex404, generates and adds new information as necessary, and records and reproduces information through the optical head ex401 while operating the modulation recording unit ex402, the reproduction/demodulation unit ex403, and the servo control unit ex406 in a coordinated manner.
  • the system control unit ex407 includes, for example, a microprocessor, and executes these processes by executing a read / write program.
  • the optical head ex401 has been described as irradiating a laser spot, but it may be configured to perform higher-density recording using near-field light.
  • FIG. 24 shows a schematic diagram of a recording medium ex215 that is an optical disk.
  • on the recording surface of the recording medium ex215, guide grooves (grooves) are formed in a spiral shape, and address information indicating the absolute position on the disc is recorded in advance on the information track ex230 through changes in the shape of the grooves.
  • this address information includes information for identifying the position of recording blocks ex231, which are the units in which data is recorded; a recording and reproducing apparatus can identify a recording block by reproducing the information track ex230 and reading the address information.
  • the recording medium ex215 includes a data recording area ex233, an inner peripheral area ex232, and an outer peripheral area ex234.
  • the area used for recording user data is the data recording area ex233; the inner circumference area ex232 and the outer circumference area ex234, arranged on the inner and outer circumferences of the data recording area ex233, are used for specific purposes other than recording user data.
  • the information reproducing / recording unit ex400 reads / writes encoded audio data, video data, or encoded data obtained by multiplexing these data with respect to the data recording area ex233 of the recording medium ex215.
  • an optical disk such as a single-layer DVD or BD has been described above as an example.
  • the present invention is not limited to these; an optical disk that has a multilayer structure and can record on layers other than the surface may be used. An optical disk having a structure for multidimensional recording and reproduction may also be used, such as one that records information using light of various different wavelengths at the same location on the disc, or that records layers of different information from various angles.
  • the car ex210 having the antenna ex205 can receive data from the satellite ex202 and the like, and the moving image can be reproduced on a display device such as the car navigation ex211 that the car ex210 has.
  • the configuration of the car navigation ex211 may be, for example, a configuration in which a GPS receiving unit is added in the configuration illustrated in FIG. 22, and the same may be considered for the computer ex111, the mobile phone ex114, and the like.
  • three implementation forms can be considered for a terminal such as the mobile phone ex114: a transmitting and receiving terminal having both an encoder and a decoder, a transmitting terminal having only an encoder, and a receiving terminal having only a decoder.
  • the image encoding method or the image decoding method described in each of the above embodiments can be used in any of the above-described devices and systems, whereby the effects described in the above embodiments can be obtained.
  • the image decoding apparatus shown in the first embodiment is realized as an LSI that is typically a semiconductor integrated circuit.
  • the realized form is shown in FIG.
  • the frame memory 502 is realized on the DRAM, and other circuits and memories are configured on the LSI.
  • a bit stream buffer for storing the encoded stream may be realized on the DRAM.
  • although referred to here as an LSI, it may be referred to as an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
  • the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
  • an FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may be used.
  • a drawing device corresponding to various uses can be configured.
  • the present invention can be used as information drawing means in cellular phones, televisions, digital video recorders, digital video cameras, car navigation systems, and the like.
  • as the display, a flat display such as a liquid crystal display, a PDP (plasma display panel), or an organic EL display, or a projection display typified by a projector, can be combined in addition to a cathode-ray tube (CRT).
  • the LSI in the present embodiment cooperates with a DRAM (Dynamic Random Access Memory) including a bit stream buffer for storing an encoded stream and a frame memory for storing an image, thereby performing an encoding process or a decoding process. May be performed.
  • the LSI in the present embodiment may be linked with other storage devices such as eDRAM (embedded DRAM), SRAM (Static Random Access Memory), or hard disk instead of DRAM.
  • FIG. 26 shows a configuration of an LSI ex500 that is made into one chip.
  • the LSI ex500 includes elements ex502 to ex509 described below, and each element is connected via a bus ex510.
  • the power supply circuit unit ex505 supplies power to each unit when the power is turned on, starting up each unit into an operable state.
  • the LSI ex500 receives an AV signal input from the microphone ex117, the camera ex113, and the like via the AV I/O ex509.
  • the input AV signal is temporarily stored in an external memory ex511 such as SDRAM.
  • the accumulated data is divided into portions as appropriate according to the processing amount and processing speed, and sent to the signal processing unit ex507.
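Dividing the accumulated data into several transfers can be sketched as simple chunking; the chunk size and function name here are assumptions made for this illustration, not part of the described LSI.

```python
# Split buffered data into successive transfers no larger than what the
# signal processing unit is assumed to accept at once.

def split_into_transfers(data, max_chunk):
    """Divide `data` into successive chunks of at most `max_chunk` bytes."""
    return [data[i:i + max_chunk] for i in range(0, len(data), max_chunk)]

chunks = split_into_transfers(bytes(range(10)), 4)
# → three transfers of lengths 4, 4, 2
```

In practice the chunk size would be chosen from the processing amount and processing speed the text mentions, rather than fixed.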
  • the signal processing unit ex507 performs encoding of an audio signal and / or encoding of a video signal.
  • the encoding process of the video signal is the encoding process described in the above embodiment.
  • the signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data depending on the case, and outputs the result to the outside from the stream I/O ex504.
  • the output bit stream is transmitted to the base station ex107 or written to the recording medium ex215.
  • when decoding, under control of the microcomputer ex502, the LSI ex500 temporarily stores, in the memory ex511 or the like, encoded data obtained from the base station ex107 via the stream I/O ex504 or encoded data obtained by reading from the recording medium ex215.
  • the accumulated data is divided into portions as appropriate according to the processing amount and processing speed and sent to the signal processing unit ex507, where the audio data and/or the video data are decoded.
  • the decoding process of the video signal is the decoding process described in the above embodiments.
  • each signal may be temporarily stored in the memory ex511 or the like so that the decoded audio signal and the decoded video signal can be reproduced in synchronization.
  • the decoded output signal is output from the AV I/O ex509 to the monitor ex219 or the like, passing through the memory ex511 or the like as appropriate.
  • when the memory ex511 is accessed, the memory controller ex503 is used.
  • the memory ex511 has been described as being external to the LSI ex500, but it may be included inside the LSI ex500.
  • the LSI ex500 may be made into one chip or a plurality of chips.
  • although referred to here as an LSI, it may be referred to as an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
  • the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
  • an FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may be used.
  • the present invention can be used for various purposes.
  • it can be used for high-resolution information display devices such as televisions, digital video recorders, car navigation systems, mobile phones, digital cameras, and digital video cameras, or imaging devices, and has high utility value.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An image processing device (100), which executes motion compensation processing using a motion vector corresponding to a block within an image, is provided with: a dividing unit (101) that divides the block into a plurality of sub-blocks; a calculation unit (102) that calculates an area for acquiring a first reference image corresponding to a first sub-block contained within the plurality of sub-blocks, using the motion vector corresponding to the block; an acquisition unit (103) that acquires the first reference image, excluding at least some of the portions thereof already acquired, from the calculated area; and a generating unit (104) that generates, from the first reference image, a prediction image corresponding to the first sub-block.

Description

Image processing apparatus and image processing method
 The present invention relates to an image processing apparatus that performs motion compensation processing using a motion vector corresponding to a block in an image.
 As techniques related to an image processing apparatus that performs motion compensation processing using a motion vector corresponding to a block in an image, there are the technique described in Patent Document 1, the technique described in Non-Patent Document 1, and the technique described in Non-Patent Document 2.
JP 2006-311526 A
 However, the conventional techniques may require a large memory capacity for image processing. In addition, the processing delay may increase.
 Therefore, the present invention provides an image processing apparatus capable of suppressing processing delay while reducing the memory capacity necessary for image processing.
 An image processing apparatus according to an aspect of the present invention is an image processing apparatus that performs motion compensation processing using a motion vector corresponding to a block in an image, and includes: a dividing unit that divides the block into a plurality of sub-blocks; a calculation unit that calculates, using the motion vector corresponding to the block, a region for acquiring a first reference image corresponding to a first sub-block included in the plurality of sub-blocks; an acquisition unit that acquires, from the calculated region, the first reference image excluding at least a part of the already acquired portion; and a generation unit that generates, from the first reference image, a predicted image corresponding to the first sub-block.
 Note that these comprehensive or specific aspects may be realized by a system, a method, an integrated circuit, a computer program, or a non-transitory recording medium such as a computer-readable CD-ROM, or by any combination of a system, an apparatus, a method, an integrated circuit, a computer program, and a recording medium.
 The image processing apparatus of the present invention can suppress processing delay while reducing the memory capacity necessary for image processing.
FIG. 1A is a diagram illustrating a prediction unit according to the related art.
FIG. 1B is a diagram illustrating a reference image size corresponding to a prediction unit according to the related art.
FIG. 1C is a diagram illustrating a configuration for performing motion compensation according to the related art.
FIG. 2A is a time chart illustrating a first example of a motion compensation operation according to the related art.
FIG. 2B is a time chart illustrating a second example of a motion compensation operation according to the related art.
FIG. 3A is a diagram illustrating division of a macroblock according to the related art.
FIG. 3B is a diagram illustrating a reference image size corresponding to a sub-block according to the related art.
FIG. 4 is a diagram illustrating a problem of the related art.
FIG. 5 is a diagram illustrating a configuration of the image decoding apparatus according to Embodiment 1.
FIG. 6 is a diagram illustrating a configuration related to the motion compensation unit according to Embodiment 1.
FIG. 7A is a diagram showing a sequence according to Embodiment 1.
FIG. 7B is a diagram showing a picture according to Embodiment 1.
FIG. 7C is a diagram showing an encoded stream according to Embodiment 1.
FIG. 8A is a diagram illustrating a configuration example of a coding unit according to Embodiment 1.
FIG. 8B is a diagram illustrating a configuration example of coding unit data according to Embodiment 1.
FIG. 9 is a diagram illustrating the size of the prediction unit according to Embodiment 1.
FIG. 10 is a flowchart showing the operation of the image decoding apparatus according to Embodiment 1.
FIG. 11 is a flowchart showing a process of decoding the coding unit according to Embodiment 1.
FIG. 12 is a diagram illustrating a motion compensation operation according to Embodiment 1.
FIG. 13A is a diagram illustrating a prediction unit according to Embodiment 1.
FIG. 13B is a diagram illustrating division of the prediction unit according to Embodiment 1.
FIG. 14 is a flowchart showing the motion compensation operation according to Embodiment 1.
FIG. 15A is a diagram illustrating a region of a reference image according to Embodiment 1.
FIG. 15B is a diagram showing a reference image acquisition area according to Embodiment 1.
FIG. 15C is a diagram showing a plurality of acquisition regions corresponding to the prediction unit according to Embodiment 1.
FIG. 16A is a time chart illustrating a first example of the motion compensation operation according to Embodiment 1.
FIG. 16B is a time chart illustrating a second example of the motion compensation operation according to Embodiment 1.
FIG. 16C is a time chart illustrating a third example of the motion compensation operation according to Embodiment 1.
FIG. 17A is a diagram illustrating a reference image acquisition region according to Embodiment 2.
FIG. 17B is a diagram showing a plurality of acquisition regions corresponding to the prediction unit according to Embodiment 2.
FIG. 18A is a diagram illustrating a reference image acquisition area according to Embodiment 3.
FIG. 18B is a diagram showing a plurality of acquisition regions corresponding to the prediction unit according to Embodiment 3.
FIG. 19A is a diagram illustrating a configuration of an image processing device according to Embodiment 4.
FIG. 19B is a diagram illustrating an operation of the image processing apparatus according to Embodiment 4.
FIG. 20 is an overall configuration diagram of a content supply system that implements a content distribution service.
FIG. 21 is an overall configuration diagram of a digital broadcasting system.
FIG. 22 is a block diagram illustrating a configuration example of a television.
FIG. 23 is a block diagram illustrating a configuration example of an information reproducing/recording unit that reads and writes information from and to a recording medium that is an optical disk.
FIG. 24 is a diagram illustrating a structure example of a recording medium that is an optical disk.
FIG. 25 is a configuration diagram illustrating a configuration example of an integrated circuit that implements image decoding processing.
FIG. 26 is a configuration diagram illustrating a configuration example of an integrated circuit that implements image decoding processing and image encoding processing.
(Underlying Knowledge Forming the Basis of the Present Invention)
The inventors have found that the following problems arise in an image processing apparatus, described in the Background Art section, that performs motion compensation using a motion vector corresponding to a block in an image.
An image encoding apparatus that encodes an image divides each picture constituting the image into a plurality of macroblocks (Macroblock, also abbreviated as MB), each composed of 16x16 pixels, and encodes the macroblocks in raster-scan order. By encoding and compressing the image, the apparatus generates an encoded stream. An image decoding apparatus decodes this encoded stream macroblock by macroblock in raster-scan order, reproducing each picture of the original image.
The ITU-T H.264 standard is one of the conventional image coding methods (see, for example, Non-Patent Document 1). To decode an image encoded under the H.264 standard, an image decoding apparatus first reads the encoded stream. After decoding various header information, it performs variable-length decoding. The apparatus then inversely quantizes the coefficient information obtained by the variable-length decoding and applies an inverse frequency transform, thereby generating a difference image.
Next, the image decoding apparatus performs in-plane prediction or motion compensation according to the macroblock type obtained by the variable-length decoding; here, motion compensation is performed on at most 16x16 pixels. The apparatus thereby generates a predicted image. It then performs reconstruction by adding the difference image to the predicted image, and decodes the decoding-target image by applying a deblocking filter to the reconstructed image.
As described above, an image encoding apparatus conforming to the H.264 standard encodes an image in units of macroblocks of 16x16 pixels. However, 16x16 pixels is not necessarily the optimal coding unit. In general, the higher the resolution of an image, the higher the correlation between adjacent blocks. Therefore, for a high-resolution image, the image encoding apparatus can achieve better compression efficiency by enlarging the coding unit.
In recent years, ultra-high-definition displays such as 4K2K (3840x2160 pixels) have been developed, and image resolutions are expected to keep increasing. As resolutions rise, it is becoming difficult for an image encoding apparatus conforming to the H.264 standard to encode high-resolution images efficiently.
Meanwhile, among the techniques proposed for next-generation image coding standards, there is a technique that solves this problem (Non-Patent Document 2). In this technique, the size of the coding unit block, which corresponds to the macroblock of the conventional H.264 standard, is variable. An image encoding apparatus using this technique can encode an image in blocks larger than the conventional 16x16 pixels, and can therefore encode ultra-high-definition images appropriately.
Specifically, Non-Patent Document 2 defines a coding unit (CU: Coding Unit) as the data unit of coding. Like the macroblock in conventional coding standards, the coding unit is a data unit in which it is possible to switch between intra prediction, which performs in-plane prediction, and inter prediction, which performs motion compensation, and it is defined as the most basic block of coding.
The size of a coding unit is one of 8x8 pixels, 16x16 pixels, 32x32 pixels and 64x64 pixels. The largest coding unit is called the LCU (Largest Coding Unit).
FIG. 1A is a diagram illustrating a prediction unit (PU: Prediction Unit) according to Non-Patent Document 2. Motion compensation is performed on a prediction unit of at most 64x64 pixels.
FIG. 1B is a diagram illustrating the reference image size corresponding to the prediction unit of FIG. 1A. When motion compensation is performed on a 64x64-pixel prediction unit, the 8-tap filter operation described in Non-Patent Document 2 is applied to a 71x71-pixel reference image, obtained by adding 7 pixels around the prediction unit; a predicted image is thereby generated. Specifically, the reference image size corresponds to the prediction unit extended by 3 pixels above, 4 pixels below, 3 pixels to the left and 4 pixels to the right.
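As a supplementary illustration (not part of the cited documents), the relation between block size, filter tap count and reference image size can be sketched as follows; the function name is ours.

```python
# Sketch: reference-image size needed for fractional-pel motion compensation.
# An N-tap interpolation filter needs N-1 extra reference pixels per axis
# (split as 3 before / 4 after for the 8-tap case described in the text).

def reference_size(block_w, block_h, taps):
    """Width and height of the reference region for a (taps)-tap filter."""
    margin = taps - 1
    return block_w + margin, block_h + margin

w, h = reference_size(64, 64, taps=8)   # 64x64 prediction unit, 8-tap filter
print(w, h, w * h)                      # 71 71 5041
```

The same function reproduces the H.264 case quoted later in the text: a 16x16 macroblock with a 6-tap filter needs a 21x21-pixel reference image.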
FIG. 1C is a diagram illustrating a configuration for performing motion compensation on the prediction unit of FIG. 1A. In the motion compensation process, a DMA (Direct Memory Access) control unit 1002 reads, from among the reference pictures stored in a frame memory 1003, the reference image at the position indicated by the motion vector in the encoded stream, and transfers it to the reference image storage unit 1001 of the motion compensation unit 1000. Typically, the reference image storage unit 1001 is included in the motion compensation unit 1000.
The motion compensation unit 1000 reads the reference image from the reference image storage unit 1001 and performs the motion compensation filter operation. It then generates a predicted image and outputs the generated predicted image.
FIG. 2A is a time chart showing a first example of the motion compensation operation executed with the configuration of FIG. 1C. When the prediction unit is 64x64 pixels, the DMA control unit 1002 transfers the 71x71-pixel (5041-pixel) reference image corresponding to the 64x64-pixel prediction unit, after which the motion compensation unit 1000 executes the motion compensation process.
FIG. 2B is a time chart showing a second example of the motion compensation operation executed with the configuration of FIG. 1C, namely the motion compensation operation under the conventional H.264 standard. The H.264 standard uses a 6-tap filter. Accordingly, the DMA control unit 1002 transfers a 21x21-pixel (441-pixel) reference image corresponding to a macroblock of at most 16x16 pixels, after which the motion compensation unit 1000 executes the motion compensation process.
As the difference between FIG. 2A and FIG. 2B shows, the next-generation image coding standard of Non-Patent Document 2 transfers a reference image whose data amount is ten times or more that of the H.264 standard. Consequently, the capacity of the reference image storage unit 1001 must be ten times or more larger, and because the transfer takes longer, the processing delay increases.
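To make the "ten times or more" figure concrete, the two per-block transfer sizes quoted above can be compared directly; this is a sketch using only numbers stated in the text.

```python
# Per-block reference transfer sizes quoted in the text.
next_gen = 71 * 71   # 64x64 PU + 7-pixel margin (8-tap filter): 5041 pixels
h264 = 21 * 21       # 16x16 MB + 5-pixel margin (6-tap filter): 441 pixels
print(next_gen, h264, next_gen / h264)   # 5041 441, a ratio of about 11.4x
```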
Meanwhile, as an image decoding apparatus conforming to the H.264 standard, there is the image decoding apparatus described in Patent Document 1.
FIG. 3A is a diagram illustrating the macroblock division according to Patent Document 1. The image decoding apparatus of Patent Document 1 divides a 16x16-pixel macroblock into 4x4-pixel sub-blocks and performs motion compensation on each 4x4-pixel sub-block.
FIG. 3B is a diagram illustrating the reference image size corresponding to a 4x4-pixel sub-block. For each 4x4-pixel sub-block, the DMA control unit 1002 transfers a 9x9-pixel (81-pixel) reference image from the frame memory 1003 to the reference image storage unit 1001 of the motion compensation unit 1000. Specifically, the reference image size corresponds to the sub-block extended by 2 pixels above, 3 pixels below, 2 pixels to the left and 3 pixels to the right.
The motion compensation unit 1000 reads the reference image from the reference image storage unit 1001, performs the motion compensation filter operation, and generates and outputs a predicted image. By repeating this process 16 times, a predicted image corresponding to the 16x16-pixel macroblock is generated.
In the image decoding apparatus of Patent Document 1, the motion compensation size is constant, which simplifies the motion compensation configuration.
However, in the image decoding apparatus of Patent Document 1, the data transfer amount becomes large, as explained concretely below.
FIG. 4 is a time chart for the case where the image decoding apparatus of Patent Document 1 performs motion compensation on the 64x64-pixel prediction unit proposed in the next-generation image coding standard of Non-Patent Document 2.
In the example of FIG. 4, instead of transferring the 5041-pixel reference image of FIG. 2A, the image decoding apparatus of Patent Document 1 transfers the reference image corresponding to each of the 16x16-pixel sub-blocks obtained by dividing the 64x64-pixel prediction unit into 16 parts. Because the 8-tap filter must be applied, the apparatus transfers a 23x23-pixel (529-pixel) reference image for each sub-block.
With this division, the image decoding apparatus of Patent Document 1 can reduce the required capacity of the reference image storage unit 1001 of the motion compensation unit 1000 from 5041 pixels to 529 pixels. However, the division alone does not improve the processing delay. Moreover, transferring 529 pixels 16 times transfers 8464 pixels in total; that is, a transfer amount that should be 5041 pixels increases by about 67%.
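The roughly 67% overhead follows directly from the figures above; a minimal check, using only values stated in the text:

```python
# Transfer overhead when a 64x64 PU is split into sixteen 16x16 sub-blocks
# and each sub-block's 8-tap reference region is fetched independently.
whole = 71 * 71          # single fetch for the whole PU: 5041 pixels
per_sub = 23 * 23        # 16x16 sub-block + 7-pixel margin: 529 pixels
split = 16 * per_sub     # sixteen independent fetches: 8464 pixels
overhead = (split - whole) / whole
print(split, overhead)   # 8464, about 0.679 (the "about 67%" increase)
```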
As described above, Non-Patent Document 2 describes a technique that uses a prediction unit size, that is, a motion compensation size, of 64x64 pixels, larger than in conventional standards. This improves coding efficiency. However, motion compensation on such a large size increases the required memory capacity, and the larger transfer amount increases the delay of the motion compensation process.
On the other hand, when motion compensation is performed by dividing a macroblock or a prediction unit into small sizes as in Patent Document 1, the required memory capacity becomes small, but the delay of the motion compensation operation is not improved and the transfer amount increases. The increased transfer amount may further increase the delay of the motion compensation process.
To solve these problems, an image processing apparatus according to one aspect of the present invention is an image processing apparatus that performs motion compensation using a motion vector corresponding to a block in an image, and includes: a division unit that divides the block into a plurality of sub-blocks; a calculation unit that calculates, using the motion vector corresponding to the block, a region for acquiring a first reference image corresponding to a first sub-block included in the plurality of sub-blocks; an acquisition unit that acquires the first reference image from the calculated region, excluding at least part of any portion already acquired; and a generation unit that generates, from the first reference image, a predicted image corresponding to the first sub-block.
With this configuration, the image processing apparatus does not redundantly acquire at least part of what has already been acquired. Therefore, an increase in the transfer amount is suppressed, the required memory capacity is reduced, and the processing delay is suppressed.
For example, the acquisition unit may acquire the first reference image, which partially overlaps a second reference image corresponding to a second sub-block included in the plurality of sub-blocks, excluding at least part of the portion where the first reference image and the second reference image overlap.
In this way, the image processing apparatus acquires a reference image that has an overlapping portion while excluding at least part of that overlapping portion. The overlapping portion may already have been acquired; by not acquiring at least part of it again, the apparatus can reduce wasted processing.
Also, for example, the acquisition unit may, after acquiring a second reference image corresponding to a second sub-block included in the plurality of sub-blocks, acquire the first reference image excluding at least part of the portion included in the acquired second reference image.
In this way, of the reference image for a sub-block, at least part of the portion that was acquired when acquiring the reference image of another sub-block is not acquired again. Therefore, an increase in the transfer amount is suppressed and the required memory capacity is reduced.
Also, for example, the acquisition unit may acquire the first reference image while the generation unit is generating a predicted image corresponding to a second sub-block included in the plurality of sub-blocks.
In this way, the image processing apparatus can perform the reference image acquisition process and the predicted image generation process simultaneously, further reducing the processing delay.
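The benefit of overlapping the two processes can be sketched as a two-stage pipeline. The per-sub-block times below are hypothetical illustration values, not figures from the text.

```python
# Sketch: total latency of fetch + motion compensation over n sub-blocks,
# with and without overlapping the fetch of the next sub-block's reference
# image and the filtering of the current one.

def total_time(n, t_fetch, t_mc, pipelined):
    if not pipelined:
        return n * (t_fetch + t_mc)   # fetch, then filter, repeated n times
    # Two-stage pipeline: the slower stage dominates after the first item.
    return n * max(t_fetch, t_mc) + min(t_fetch, t_mc)

print(total_time(16, 5, 4, pipelined=False))   # 144
print(total_time(16, 5, 4, pipelined=True))    # 84
```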
Also, for example, the acquisition unit may, after acquiring a second reference image corresponding to a second sub-block horizontally adjacent to the first sub-block, acquire the first reference image excluding at least part of the portion included in the acquired second reference image.
In this way, the image processing apparatus does not redundantly acquire at least part of the horizontally overlapping portion. By restricting the exclusion condition to horizontal overlap, the apparatus can acquire the reference image while excluding at least part of the already acquired overlapping portion without complicated data management.
Also, for example, the acquisition unit may, after acquiring a second reference image corresponding to a second sub-block included in the plurality of sub-blocks immediately before acquiring the first reference image, acquire the first reference image excluding at least part of the portion included in that immediately preceding second reference image.
In this way, the image processing apparatus does not redundantly acquire at least part of the overlapping portion acquired immediately before. By restricting the exclusion condition to the immediately preceding overlapping portion, the apparatus can acquire the reference image while excluding at least part of the already acquired overlapping portion without complicated data management.
Also, for example, the acquisition unit may, after acquiring the second reference image corresponding to the second sub-block horizontally adjacent to the first sub-block immediately before acquiring the first reference image, acquire the first reference image excluding at least part of the portion included in that immediately preceding second reference image.
In this way, the image processing apparatus does not redundantly acquire at least part of the horizontally overlapping portion acquired immediately before. By restricting the exclusion condition to the horizontally overlapping portion acquired immediately before, the apparatus can acquire the reference image while excluding at least part of the already acquired overlapping portion without complicated data management.
Also, for example, the division unit may divide the block into a plurality of sub-blocks of equal size.
In this way, motion compensation is executed on sub-blocks of the same size, which simplifies the motion compensation process.
Also, for example, the calculation unit may calculate the region for acquiring the first reference image excluding at least part of the already acquired portion.
In this way, the region for acquiring the reference image is appropriately calculated with at least part of the overlap excluded, so the reference image is appropriately acquired excluding at least part of the overlapping portion.
Also, for example, the acquisition unit may acquire the first reference image, which is larger than the first sub-block, excluding at least part of the already acquired portion, and the generation unit may generate the predicted image at a higher resolution than the first sub-block.
In this way, the image processing apparatus can generate a high-resolution predicted image for each sub-block while suppressing an increase in the transfer amount.
Note that these general or specific aspects may be implemented as a system, a method, an integrated circuit, a computer program, or a non-transitory recording medium such as a computer-readable CD-ROM, or as any combination of a system, an apparatus, a method, an integrated circuit, a computer program and a recording medium.
Embodiments are described in detail below with reference to the drawings. Each of the embodiments described below shows a general or specific example. The numerical values, shapes, materials, structural elements, the arrangement and connection of the structural elements, steps, the order of steps and the like shown in the following embodiments are examples and are not intended to limit the present invention. Among the structural elements in the following embodiments, those not recited in the independent claims indicating the most generic concept are described as optional structural elements.
Expressions such as 64x64 pixels and 32x32 pixels mean sizes of 64 pixels by 64 pixels and 32 pixels by 32 pixels, respectively. Alternatively, each of these expressions may mean the data corresponding to that size.
In the following, expressions such as block, data unit and coding unit (CU) each denote a contiguous region. Each of them may denote an image region, or a data region in the encoded stream. A pixel denotes a data unit in an image, or the data included in that data unit.
An image may be any of a plurality of pictures constituting a still image or a moving image, a single picture, a part of a picture, and so on.
(Embodiment 1)
(1-1. Overview)
First, an overview of the image decoding apparatus according to the present embodiment is given. The image decoding apparatus according to the present embodiment decodes an encoded stream. The size of the prediction units constituting the encoded stream is variable, up to a maximum of 64x64 pixels.
When the size of a prediction unit is larger than 16x16 pixels, the image decoding apparatus divides it into a plurality of sub-blocks, each a 16x16-pixel data unit. The apparatus then transfers the reference image of each sub-block from the frame memory to the motion compensation unit, excluding the portion already transferred.
Thus, when the prediction unit is large, the image decoding apparatus divides it for processing, which reduces the required memory capacity. Furthermore, the apparatus reduces the processing delay by executing the reference image transfer and the motion compensation as pipelined processes. In addition, by transferring each reference image excluding the portion already transferred, the apparatus can perform motion compensation without increasing the transfer amount.
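A minimal sketch of the overview above, for one row of sub-blocks: the 16x16 division, the 23x23 reference region and the 3/4-pixel margin split come from the text, while the function itself and its names are our illustration, not the embodiment's implementation.

```python
# Sketch: fetch regions for one row of 16x16 sub-blocks of a 64x64 PU,
# excluding the columns already fetched for the previous sub-block
# (8-tap filter: 3-pixel margin before, 4-pixel margin after).

MARGIN_LO, MARGIN_HI = 3, 4

def fetch_region(sub_x, sub_y, size, prev_right_edge):
    """(x0, y0, w, h) to fetch for one sub-block, minus already-fetched columns."""
    x0, y0 = sub_x - MARGIN_LO, sub_y - MARGIN_LO
    w = h = size + MARGIN_LO + MARGIN_HI
    if prev_right_edge is not None and x0 < prev_right_edge:
        skip = prev_right_edge - x0   # columns shared with the previous fetch
        x0, w = x0 + skip, w - skip
    return x0, y0, w, h

total, edge = 0, None
for i in range(4):                    # four 16x16 sub-blocks in one row
    x0, y0, w, h = fetch_region(16 * i, 0, 16, edge)
    edge = x0 + w
    total += w * h
print(total)   # 23*23 + 3*(16*23) = 1633 pixels, versus 4*529 = 2116 without exclusion
```

The first sub-block fetches the full 23x23 region; each subsequent horizontally adjacent sub-block skips the 7 columns it shares with the previous fetch, so the per-row transfer drops from 2116 to 1633 pixels.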
(1-2. Configuration)
Next, the configuration of the image decoding apparatus according to the present embodiment is described.
FIG. 5 is a configuration diagram of the image decoding apparatus according to the present embodiment. The image decoding apparatus according to the present embodiment includes a control unit 501, a frame memory 502, a reconstructed image memory 509, a variable-length decoding unit 503, an inverse quantization unit 504, an inverse frequency transform unit 505, a motion compensation unit 506, an in-plane prediction unit 507, a reconstruction unit 508, a deblocking filter unit 510 and a motion vector calculation unit 511.
The control unit 501 controls the entire image decoding apparatus. The frame memory 502 is a memory for storing decoded image data. The reconstructed image memory 509 is a memory for storing part of the generated reconstructed images. The variable-length decoding unit 503 reads the encoded stream and decodes the variable-length codes. The inverse quantization unit 504 performs inverse quantization. The inverse frequency transform unit 505 performs the inverse frequency transform.
The motion vector calculation unit 511 calculates a motion vector based on a predicted motion vector, a difference motion vector and the like, and outputs the motion vector to the motion compensation unit 506. The motion compensation unit 506 reads a reference image from the frame memory 502, performs motion compensation, and generates a predicted image. The in-plane prediction unit 507 reads a reference image from the reconstructed image memory 509, performs in-plane prediction (also called intra prediction), and generates a predicted image.
The reconstruction unit 508 adds the difference image and the predicted image to generate a reconstructed image, and stores part of it in the reconstructed image memory 509. The deblocking filter unit 510 removes block noise from the reconstructed image to improve its image quality.
FIG. 6 is a configuration diagram of the periphery of the motion compensation unit 506 according to the present embodiment. Structural elements identical to those in FIG. 5 are given the same reference numerals and their description is omitted. In addition to the structural elements shown in FIG. 5, FIG. 6 shows a DMA control unit 512, a reference image storage unit 513 and a predicted image storage unit 514, which may be included in the motion compensation unit 506.
 The DMA control unit 512 transfers a reference image from the frame memory 502 to the reference image storage unit 513 based on the motion vector calculated by the motion vector calculation unit 511. The reference image storage unit 513 stores the reference image transferred by the DMA control unit 512, and the predicted image storage unit 514 stores the predicted image generated by the motion compensation unit 506.
 The motion compensation unit 506 performs motion compensation based on the motion vector to generate a predicted image, and then stores the predicted image in the predicted image storage unit 514. The reconstruction unit 508 executes the reconstruction process using the predicted image stored in the predicted image storage unit 514.
 This concludes the description of the configuration of the image decoding apparatus according to this embodiment.
  (1-3. Operation)
 Next, the operation of the image decoding apparatus according to this embodiment will be described. The encoded stream decoded by this image decoding apparatus is composed of coding units (CU), transform units (TU, also called frequency transform units), and prediction units (PU).
 A coding unit is a data unit set to a size from 64x64 pixels down to 8x8 pixels, within which switching between intra prediction and inter prediction is possible. A transform unit is set inside a coding unit to a size from 64x64 pixels down to 4x4 pixels. A prediction unit is set inside a coding unit to a size from 64x64 pixels down to 4x4 pixels, and carries either a prediction mode for intra prediction or a motion vector for inter prediction. The structure of the encoded stream is described below with reference to FIGS. 7A to 9.
 FIGS. 7A and 7B show the hierarchical structure of images decoded by the image decoding apparatus according to this embodiment. As shown in FIG. 7A, a group of pictures is called a sequence. As shown in FIG. 7B, each picture is divided into slices, and each slice is further divided into coding units. Note that a picture may also not be divided into slices.
 In this embodiment, the size of the largest coding unit (LCU) is 64x64 pixels.
 FIG. 7C is a diagram showing an encoded stream according to this embodiment. The encoded stream shown in FIG. 7C is obtained by hierarchically encoding the data shown in FIGS. 7A and 7B.
 The encoded stream shown in FIG. 7C is composed of a sequence header that controls the sequence, a picture header that controls a picture, a slice header that controls a slice, and coding unit layer data (CU layer data). In the H.264 standard, the sequence header is also called the SPS (Sequence Parameter Set), and the picture header is also called the PPS (Picture Parameter Set).
 FIG. 8A is a diagram showing a configuration example of a coding unit and its coding unit layer data according to this embodiment. The coding unit layer data corresponding to a coding unit consists of a CU split flag and CU data (coding unit data). A CU split flag of "1" indicates that the coding unit is split into four; "0" indicates that it is not. In FIG. 8A, the 64x64-pixel coding unit is not split, that is, the CU split flag is "0".
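The recursive splitting implied by the CU split flag can be sketched as follows. This is an illustrative sketch, not the embodiment's implementation: `read_flag` is a hypothetical bitstream reader returning 0 or 1, and the 8x8-pixel minimum coding unit size is taken from the description above.

```python
def parse_cu(x, y, size, read_flag):
    """Recursively parse a coding unit quadtree driven by CU split flags.

    read_flag() is a hypothetical stream reader returning 0 or 1.
    Returns the list of (x, y, size) leaf coding units in raster order.
    """
    MIN_CU = 8  # smallest coding unit is 8x8 pixels
    if size > MIN_CU and read_flag() == 1:
        half = size // 2
        leaves = []
        for dy in (0, half):          # top row of quadrants, then bottom row
            for dx in (0, half):
                leaves += parse_cu(x + dx, y + dy, half, read_flag)
        return leaves
    return [(x, y, size)]
```

For example, a 64x64 LCU whose first flag is "1" and whose four children carry "0" parses into four 32x32 coding units.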
 FIG. 8B is a diagram showing a configuration example of the CU data according to this embodiment. The CU data includes a CU type, a motion vector or intra prediction mode, and coefficients. The CU type determines the size of the prediction unit.
 FIG. 9 is a diagram showing examples of selectable prediction unit sizes. Specifically, prediction units of 64x64, 32x64, 64x32, 32x32, 16x32, 32x16, 16x16, 16x8, 8x16, 8x8, 8x4, 4x8, and 4x4 pixels are shown. The prediction unit size can be selected from sizes of 4x4 pixels and above, and a prediction unit may be rectangular.
 A motion vector or an intra prediction mode is specified for each prediction unit. Since only motion vectors are used in this embodiment, FIG. 8B shows only a motion vector. In addition, as shown in FIG. 9, 16x64-pixel and 48x64-pixel prediction units, obtained by splitting a square at a ratio of 1:3, may also be selected.
 FIG. 10 is a flowchart showing the decoding operation for one sequence included in the encoded stream. The operation of the image decoding apparatus shown in FIG. 5 is described with reference to this flowchart. As shown in FIG. 10, the image decoding apparatus first decodes the sequence header (S901). In doing so, the variable length decoding unit 503 decodes the encoded stream under the control of the control unit 501. The image decoding apparatus then similarly decodes the picture header (S902) and the slice header (S903).
 Next, the image decoding apparatus decodes a coding unit (S904); the decoding of a coding unit is described in detail later. After decoding the coding unit, the image decoding apparatus determines whether the decoded coding unit is the last coding unit of the slice (S905). If it is not (No in S905), the image decoding apparatus decodes the next coding unit (S904).
 The image decoding apparatus further determines whether the slice containing the decoded coding unit is the last slice of the picture (S906). If it is not (No in S906), the image decoding apparatus decodes the next slice header (S903).
 The image decoding apparatus further determines whether the picture containing the decoded coding unit is the last picture of the sequence (S907). If it is not (No in S907), the image decoding apparatus decodes the next picture header (S902). After all pictures of the sequence have been decoded, the image decoding apparatus ends the series of decoding operations.
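The control flow of steps S901 to S907 amounts to three nested loops. A minimal sketch, driving a toy stream object with fixed counts; all names here are illustrative and not from the standard:

```python
class ToyStream:
    """Stand-in for the bitstream; element counts are fixed for illustration."""
    def __init__(self, pictures, slices, cus):
        self.pictures, self.slices, self.cus = pictures, slices, cus
        self.decoded = []  # log of decoded coding units

def decode_sequence(s):
    # S901: decode sequence header (omitted in this sketch)
    for pic in range(s.pictures):      # S902 picture header / S907 last-picture check
        for sl in range(s.slices):     # S903 slice header / S906 last-slice check
            for cu in range(s.cus):    # S904 decode CU / S905 last-CU check
                s.decoded.append((pic, sl, cu))
```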
 FIG. 11 is a flowchart showing the decoding operation for one coding unit. The decoding of a coding unit (S904 in FIG. 10) is described with reference to this flowchart.
 First, the variable length decoding unit 503 performs variable length decoding on the coding unit to be processed in the input encoded stream (S1001).
 In the variable length decoding process (S1001), the variable length decoding unit 503 outputs coding information such as the coding unit type, the intra prediction mode, motion vector information, and quantization parameters. It also outputs coefficient information corresponding to each pixel data.
 The coding information is output to the control unit 501 and then input to each processing unit. The coefficient information is output to the inverse quantization unit 504, which next performs the inverse quantization process (S1002). The inverse frequency transform unit 505 then performs inverse frequency transform to generate a difference image (S1003).
 Next, the control unit 501 determines whether inter prediction or intra prediction is used for the coding unit to be processed (S1004).
 If inter prediction is used (Yes in S1004), the control unit 501 activates the motion vector calculation unit 511, which calculates a motion vector (S1009) and transfers the reference image indicated by the motion vector from the frame memory 502. The control unit 501 then activates the motion compensation unit 506, which generates a predicted image with 1/2-pixel or 1/4-pixel accuracy (S1005).
 If inter prediction is not used (No in S1004), that is, if intra prediction is used, the control unit 501 activates the intra prediction unit 507, which performs intra prediction to generate a predicted image (S1006).
 The reconstruction unit 508 generates a reconstructed image by adding the predicted image output by the motion compensation unit 506 or the intra prediction unit 507 and the difference image output by the inverse frequency transform unit 505 (S1007).
 The generated reconstructed image is input to the deblocking filter unit 510; at the same time, the portion used for intra prediction is stored in the reconstructed image memory 509. Finally, the deblocking filter unit 510 performs deblocking filter processing on the reconstructed image to reduce block noise, and stores the result in the frame memory 502 (S1008). With this, the image decoding apparatus ends the decoding operation for the coding unit.
 Next, the operation of the motion vector calculation unit 511 and the motion compensation unit 506 is described in detail.
 FIG. 12 is an explanatory diagram outlining the motion compensation process. As shown in FIG. 12, motion compensation extracts the part of a previously decoded picture indicated by the motion vector v(vx, vy) decoded from the encoded stream, and applies a filter operation to it to generate a predicted image.
 For example, if the prediction unit to be predicted is 64x64 pixels and an 8-tap filter is used, 7 pixels are added to the 64x64 pixels in each of the vertical and horizontal directions: 3 pixels on the left, 4 on the right, 3 on the top, and 4 on the bottom. The reference image extracted from the reference picture is therefore 71x71 pixels. If the top-left coordinate of the prediction unit is (x, y), the reference image is a 71x71-pixel rectangle whose top-left corner is (x+vx-3, y+vy-3).
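This window arithmetic can be written down directly. A minimal sketch; the function and parameter names are ours, not from the embodiment:

```python
def reference_region(pu_x, pu_y, pu_w, pu_h, vx, vy, taps=8):
    """Top-left corner and size of the reference window an n-tap
    interpolation filter needs: taps//2 - 1 extra pixels on the
    left/top and taps//2 on the right/bottom."""
    margin_l = taps // 2 - 1          # 3 for an 8-tap filter
    margin_r = taps // 2              # 4 for an 8-tap filter
    x0 = pu_x + vx - margin_l
    y0 = pu_y + vy - margin_l
    w = pu_w + margin_l + margin_r    # pu_w + taps - 1
    h = pu_h + margin_l + margin_r
    return x0, y0, w, h
```

For a 64x64 prediction unit this yields a 71x71 window at (x+vx-3, y+vy-3), and for a 16x16 sub-block a 23x23 window, matching the text.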
 FIG. 13A is a diagram showing a prediction unit and its motion vector according to this embodiment. The 64x64-pixel prediction unit shown in FIG. 13A has one motion vector v.
 FIG. 13B is a diagram showing the division of the prediction unit shown in FIG. 13A. In the example of FIG. 13B, the 64x64-pixel prediction unit is divided into sixteen 16x16-pixel sub-blocks BK0 to BK15.
 The single motion vector v of the 64x64-pixel prediction unit shown in FIG. 13A is the same for every pixel of the prediction unit. Accordingly, even when the prediction unit is divided into 16 sub-blocks as in FIG. 13B, every sub-block has the same motion vector v. The 64x64-pixel prediction unit can therefore be processed as 16 sub-blocks sharing the motion vector v.
 At this time, the reference image of sub-block BK0 is a 23x23-pixel rectangle whose top-left corner is (x+vx-3, y+vy-3), and the reference image of sub-block BK1 is a 23x23-pixel rectangle whose top-left corner is (x+vx+13, y+vy-3).
 The two reference images of two adjacent sub-blocks therefore overlap each other. Although the example here shows the reference images of sub-blocks BK0 and BK1, the reference images of sub-blocks BK0 and BK4 likewise overlap each other.
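The size of that overlap follows from the window arithmetic: BK0's window covers columns x+vx-3 through x+vx+19, while BK1's window starts at column x+vx+13, so the two share taps-1 = 7 columns. A small sketch, with names of our own choosing:

```python
def horizontal_overlap(x_left, width_left, x_right):
    """Number of columns shared by two horizontally adjacent reference windows."""
    return max(0, x_left + width_left - x_right)

# BK0: 23 columns starting at offset -3; BK1: window starts at offset +13
overlap = horizontal_overlap(-3, 23, 13)  # 7 columns, i.e. taps - 1
```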
 Thus, when one prediction unit is divided into a plurality of sub-blocks, the reference image of one sub-block overlaps the reference images of other sub-blocks. The image decoding apparatus of this embodiment is characterized in that, when a prediction unit is divided, these overlapping portions are not transferred redundantly.
 FIG. 14 is a flowchart showing the operation related to the motion compensation unit 506 shown in FIG. 6. The operation of the motion vector calculation unit 511 and the motion compensation unit 506 shown in FIG. 6 is described with reference to FIG. 14.
 First, the motion vector calculation unit 511 calculates the motion vector of the prediction unit by the method defined in the standard (S1100). Next, it determines whether the prediction unit is larger than 16x16 pixels (S1101). If the prediction unit is not larger than 16x16 pixels (No in S1101), the motion vector calculation unit 511 and the motion compensation unit 506 operate normally.
 Specifically, the motion vector calculation unit 511 calculates the position and size for acquiring the reference image from the motion vector and from the position (coordinates) and size of the prediction unit to be predicted (S1102).
 The motion vector calculation unit 511 sets the obtained position and size in the DMA control unit 512, which transfers the reference image from the frame memory 502 to the reference image storage unit 513 (S1103). The motion compensation unit 506 then performs the motion compensation operation using the transferred reference image and writes the result to the predicted image storage unit 514 (S1104).
 If the prediction unit is larger than 16x16 pixels (Yes in S1101), the motion vector calculation unit 511 divides it into a plurality of 16x16-pixel sub-blocks (S1105) and calculates, for each sub-block obtained by the division, the position and size for acquiring the reference image (S1106). In doing so, the motion vector calculation unit 511 calculates the position and size so that already transferred portions are not transferred again.
 FIG. 15A is a diagram showing the reference image region corresponding to a sub-block. As shown in FIG. 15A, the reference image of sub-block BK1 is 23x23 pixels starting at position (x+vx+13, y+vy-3); however, part of this region has already been transferred.
 FIG. 15B is a diagram showing the acquisition region of the reference image corresponding to the sub-block. As shown in FIG. 15B, when transferring the reference image of sub-block BK1, the motion vector calculation unit 511 controls the transfer so that 16x23 pixels are transferred starting at position (x+vx+20, y+vy-3).
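Generalizing the BK0/BK1 example, the trimmed fetch rectangle for each sub-block can be sketched as follows. Coordinates are relative to (x+vx, y+vy); the names are ours, and the same trimming is assumed per axis, as in this embodiment:

```python
def fetch_rect(col, row, sub=16, taps=8):
    """Fetch rectangle for the sub-block in column `col`, row `row`
    of a split prediction unit.  The first sub-block on each axis
    fetches the full sub+taps-1 extent; later ones skip the taps-1
    pixels already transferred by their left/upper neighbour."""
    def axis(i):
        if i == 0:
            return -(taps // 2 - 1), sub + taps - 1  # e.g. (-3, 23)
        return i * sub + taps // 2, sub              # e.g. (20, 16) for BK1
    x0, w = axis(col)
    y0, h = axis(row)
    return x0, y0, w, h
```

Summing the fetched areas over all 16 sub-blocks of a 64x64 prediction unit gives (23 + 3x16)^2 = 71x71 = 5041 pixels, the same as transferring the undivided window once.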
 Here, the operation for sub-blocks BK0 and BK1 has been shown. The operation for sub-blocks BK1 and BK2 is the same as that for sub-blocks BK0 and BK1, and the operation for sub-blocks BK0 and BK4 is also the same, except that the two reference images overlap in the vertical direction.
 Next, the motion vector calculation unit 511 sets the obtained position and size in the DMA control unit 512, which transfers the reference image from the frame memory 502 to the reference image storage unit 513 (S1107). The motion compensation unit 506 then performs the motion compensation operation using the reference image stored in the reference image storage unit 513 and writes the result to the predicted image storage unit 514 (S1108).
 Next, the motion vector calculation unit 511 determines whether there is an unprocessed sub-block (S1109). If there is (Yes in S1109), the motion vector calculation unit 511 calculates the position and size for acquiring the reference image corresponding to that sub-block (S1106). If there is none (No in S1109), the motion vector calculation unit 511 and the motion compensation unit 506 end the process.
 FIG. 15C is a diagram showing the plurality of acquisition regions corresponding to the plurality of sub-blocks. As shown in FIG. 15C, the reference image corresponding to each sub-block is never transferred redundantly, so the transfer amount of the reference image does not increase.
 FIG. 16A is a time chart showing a first example of the motion compensation operation, for the case where the prediction unit is not divided.
 FIG. 16B is a time chart showing a second example of the motion compensation operation, for the case where the prediction unit is divided. In the example of FIG. 16B, the reference image data is transferred in small data units, and motion compensation is performed in those small data units. The capacity required of the reference image storage unit 513 for holding the reference image is therefore reduced. Moreover, the total amount of data transferred is the same in FIG. 16A and FIG. 16B, so neither the transfer amount nor the memory bandwidth required for the transfer increases.
 FIG. 16C is a time chart showing a third example of the motion compensation operation, a variation of the divided case. As in the example of FIG. 16C, the image decoding apparatus can also reduce the processing delay by executing, in small data units, the motion compensation process on the reference image and the transfer process of the reference image in parallel as pipeline processing.
  (1-4. Effects)
 As described above, the image decoding apparatus calculates the position and size for acquiring a reference image so as not to overlap already transferred portions, and transfers the reference image accordingly. Specifically, as shown in FIG. 15C, the image decoding apparatus divides the prediction unit into a plurality of sub-blocks and executes the motion compensation process for each sub-block.
 This reduces the memory capacity required to store the reference image and suppresses any increase in the transfer amount of the reference image. Furthermore, executing the transfer process and the motion compensation process at the same time reduces the processing delay.
  (1-5. Supplement)
 In this embodiment, the image decoding apparatus divides a 64x64-pixel prediction unit into 16x16-pixel sub-blocks, but it may instead divide it into 8x8-pixel or 32x32-pixel sub-blocks. The image decoding apparatus may also divide a non-square prediction unit, such as 64x32 pixels, into 16x16-pixel sub-blocks or sub-blocks of another size.
 The larger the division size, the simpler the processing, but the larger the memory capacity required to store the reference image and the larger the processing delay. The smaller the division size, the more complex the processing, but the smaller the required memory capacity and the smaller the processing delay. In conventional standards such as H.264, the largest data unit for motion compensation is 16x16 pixels. Therefore, 16x16 pixels is the most appropriate division size from the viewpoints of the memory size for storing reference images and the processing delay.
 In this embodiment, the data needed as a reference image is acquired in units of one pixel. However, the image decoding apparatus need not acquire the data in units of one pixel; it may acquire the necessary data in units of 4 pixels, 8 pixels, or even larger data units.
 Also, in this embodiment, the motion vector calculation unit 511 calculates the position and size so that the acquisition regions of the reference images do not overlap. Alternatively, the DMA control unit 512 may control the transfer so that already transferred portions are not transferred again, or the reference image storage unit 513 may control the transfer so that portions already stored in it are not transferred again.
 Furthermore, either all of the already transferred portion or only part of it may be excluded from the transfer.
 Part or all of the configuration of each processing unit according to this embodiment may be realized as a dedicated hardware circuit or as a program executed by a processor.
 In this embodiment, the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514 are shown as memories or storage units. However, they may have any configuration, such as flip-flops or registers, as long as they are storage elements capable of storing data. Furthermore, part of the memory area of a processor or part of a cache memory may be used as the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514.
 In this embodiment, an image decoding apparatus is shown. However, the invention is not limited to decoding: an image encoding apparatus that executes the decoding process in the reverse order can likewise divide a prediction unit and perform motion compensation. Nor is it limited to encoding or decoding: an image processing apparatus may divide a prediction unit and perform motion compensation.
 (Embodiment 2)
  (2-1. Overview)
 First, an overview of the image decoding apparatus according to this embodiment is given. The image decoding apparatus according to this embodiment decodes an encoded stream. The size of the prediction units constituting the encoded stream is variable, up to a maximum of 64x64 pixels.
 If the size of a prediction unit is larger than 16x16 pixels, the image decoding apparatus divides it into a plurality of sub-blocks, each a 16x16-pixel data unit. The image decoding apparatus then transfers the reference image of each sub-block from the frame memory to the motion compensation unit, excluding only the horizontally overlapping portions.
 The excluded range is thereby limited to the horizontally overlapping portions, so the configuration is simpler than in Embodiment 1.
 This concludes the overview of the image decoding apparatus according to this embodiment.
  (2-2. Configuration)
 FIG. 5 is a configuration diagram of the image decoding apparatus according to this embodiment, and FIG. 6 is a configuration diagram of the periphery of the motion compensation unit 506 included in it. Since the configuration of the image decoding apparatus according to this embodiment is entirely the same as that of Embodiment 1, its description is omitted.
  (2-3. Operation)
 In this embodiment, as in Embodiment 1, the encoded stream structure shown in FIGS. 7A to 9 is used. The operation flow of the image decoding apparatus according to this embodiment is the same as that of Embodiment 1 shown in FIGS. 10 and 11, and its description is omitted.
 This embodiment differs from Embodiment 1 in the operation of calculating the region for acquiring the reference image (S1106 in FIG. 14).
 FIG. 17A is a diagram showing the reference image acquisition regions according to this embodiment. For a plurality of sub-blocks arranged in the horizontal direction, such as sub-blocks BK0 to BK3 of Embodiment 1, the image decoding apparatus according to this embodiment transfers the reference images excluding the overlapping portions.
 For example, when transferring the reference image of sub-block BK0, the image decoding apparatus according to the present embodiment transfers 23x23 pixels whose upper left is (x+vx-3, y+vy-3). When transferring the reference image of sub-block BK1, the image decoding apparatus transfers 16x23 pixels whose upper left is (x+vx+20, y+vy-3).
 The reference image of sub-block BK2 and the reference image of sub-block BK3 are transferred in the same manner. Unlike in Embodiment 1, when transferring the reference image of sub-block BK4, the image decoding apparatus according to the present embodiment transfers 23x23 pixels whose upper left is (x+vx-3, y+vy+13). As a result, the transfer is performed redundantly for the hatched portion in FIG. 17A.
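The two transfer regions quoted above can be sketched as follows, assuming the margins implied by the 23x23 window (3 extra pixels on the left/top and 4 on the right/bottom of a 16x16 sub-block). All names are illustrative, not taken from the patent.

```python
MARGIN_LT = 3   # extra pixels needed left/top of the sub-block
MARGIN_RB = 4   # extra pixels needed right/bottom of the sub-block
SUB = 16        # sub-block size in pixels

def full_window(ox, oy, vx, vy):
    """Full reference window (left, top, width, height) for a sub-block at (ox, oy)."""
    size = SUB + MARGIN_LT + MARGIN_RB          # 23
    return (ox + vx - MARGIN_LT, oy + vy - MARGIN_LT, size, size)

def window_after_left_neighbour(ox, oy, vx, vy):
    """Window with the columns shared with the left neighbour's window removed."""
    left, top, w, h = full_window(ox, oy, vx, vy)
    shared = MARGIN_LT + MARGIN_RB              # 7 columns already transferred
    return (left + shared, top, w - shared, h)  # a 16x23 region remains
```

With the prediction unit at (x, y) = (0, 0) and motion vector (0, 0), `full_window(0, 0, 0, 0)` gives (-3, -3, 23, 23) for BK0 and `window_after_left_neighbour(16, 0, 0, 0)` gives (20, -3, 16, 23) for BK1, matching the coordinates in the text.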
 FIG. 17B is a diagram showing a plurality of acquisition regions corresponding to the prediction unit according to the present embodiment. For the entire 64x64-pixel prediction unit, the transfer is performed redundantly for the hatched portion in FIG. 17B.
 When the prediction unit is not divided, the transfer amount of the reference image is 5041 pixels (71x71 pixels). In the present embodiment, the transfer amount of the reference image is 6532 pixels (71x23x4 pixels). Therefore, the transfer amount of the reference image according to the present embodiment is larger than when the prediction unit is not divided.
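The quoted transfer amounts can be checked with simple arithmetic: an undivided unit needs one 71x71 window, while in the present embodiment each row of four sub-blocks transfers one 23-row strip of 23 + 16 + 16 + 16 = 71 columns.

```python
# Transfer-amount check for a 64x64 prediction unit with a 7-pixel filter margin.
undivided = 71 * 71              # one 71x71 window for the whole unit
per_row = (23 + 16 * 3) * 23     # one 71x23 strip per row of four sub-blocks
embodiment2 = per_row * 4        # four rows of sub-blocks

print(undivided, embodiment2)    # 5041 6532
```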
 In the present embodiment, however, vertical overlap is not excluded. Therefore, the circuit configuration of the motion compensation unit 506 is simplified. In addition, the reference image used for sub-block BK0 does not need to be retained after being used for sub-block BK1. Therefore, data management is simplified, and the required capacity of the reference image storage unit 513 is reduced.
 (2-4. Effect)
 As described above, the image decoding apparatus according to the present embodiment calculates the position and size for acquiring a reference image so as not to overlap the already transferred reference image in the horizontal direction. The image decoding apparatus then transfers the reference image excluding the horizontally overlapping portion. As a result, the memory capacity required for storing reference images is further reduced. In addition, the circuit configuration of the motion compensation unit 506 is simplified.
 (2-5. Supplement)
 In the present embodiment, the image decoding apparatus divides a 64x64-pixel prediction unit into 16x16-pixel sub-blocks, but it may instead divide it into 8x8 pixels or 32x32 pixels. The image decoding apparatus may also divide a non-square prediction unit, such as 64x32 pixels, into 16x16 pixels or another size.
 The larger the division size, the simpler the processing, but the larger the memory capacity required to store reference images and the larger the processing delay. The smaller the division size, the more complex the processing, but the smaller the memory capacity required to store reference images and the smaller the processing delay. In the conventional H.264 standard and the like, the maximum data unit for motion compensation is 16x16 pixels. Therefore, 16x16 pixels is the most appropriate division size from the viewpoint of both the size of the memory for storing reference images and the processing delay.
 In the present embodiment, the data needed as a reference image is acquired in units of one pixel. However, the image decoding apparatus does not necessarily need to acquire the data in units of one pixel, and may acquire it in units of 4 pixels, 8 pixels, or an even larger data unit.
 Also, in the present embodiment, the motion vector calculation unit 511 calculates the position and size so that the regions for acquiring reference images do not overlap. However, the DMA control unit 512 may instead control the transfer so that an already transferred portion is not transferred again. Alternatively, the reference image storage unit 513 may control the transfer so that a portion already stored in the reference image storage unit 513 is not transferred.
 Part or all of the configuration of each processing unit according to the present embodiment may be realized by a dedicated hardware circuit, or by a program executed by a processor.
 In the present embodiment, the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514 are shown as a memory or a storage unit. However, they may have any configuration, such as flip-flops or registers, as long as they are storage elements capable of storing data. Furthermore, a part of a processor's memory area or a part of a cache memory may be used as the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514.
 The present embodiment shows an image decoding apparatus. However, the technique is not limited to decoding: an image encoding apparatus that executes the decoding process in the reverse procedure can likewise divide a prediction unit and perform the motion compensation process. Furthermore, without being limited to encoding or decoding, an image processing apparatus may divide a prediction unit and perform the motion compensation process.
 (Embodiment 3)
 (3-1. Overview)
 First, an overview of the image decoding apparatus according to the present embodiment will be described. The image decoding apparatus according to the present embodiment decodes an encoded stream. The size of the prediction units constituting the encoded stream is variable, with a maximum size of 64x64 pixels.
 When the size of a prediction unit is larger than 16x16 pixels, the image decoding apparatus divides it into a plurality of sub-blocks, each of which is a 16x16-pixel data unit. The image decoding apparatus then transfers the reference image of each of the sub-blocks from the frame memory to the motion compensation unit, excluding the portion that horizontally overlaps the reference image transferred immediately before.
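The division step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: a prediction unit larger than 16x16 is cut into 16x16 sub-block origins in raster order, and a unit no larger than 16x16 is left whole.

```python
def split_prediction_unit(pu_w, pu_h, sub=16):
    """Return sub-block origins in raster order; small units stay whole."""
    if pu_w <= sub and pu_h <= sub:
        return [(0, 0)]
    return [(x, y) for y in range(0, pu_h, sub) for x in range(0, pu_w, sub)]
```

For example, `split_prediction_unit(64, 64)` yields 16 origins, and `split_prediction_unit(64, 32)` yields 8, covering the non-square case mentioned in the supplements.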
 This limits the exclusion range to only the horizontally overlapping portion transferred immediately before. Therefore, the configuration is simplified compared to Embodiments 1 and 2. Furthermore, the capacity required for storing reference images is reduced.
 This concludes the overview of the image decoding apparatus according to the present embodiment.
 (3-2. Configuration)
 FIG. 5 is a configuration diagram of the image decoding apparatus according to the present embodiment. FIG. 6 is a configuration diagram of the periphery of the motion compensation unit 506 included in the image decoding apparatus according to the present embodiment. Since the configuration of the image decoding apparatus according to the present embodiment is entirely the same as in Embodiment 1, its description is omitted.
 (3-3. Operation)
 In the present embodiment, as in Embodiment 1, the encoded stream structure shown in FIGS. 7A to 9 is used. The operation flow of the image decoding apparatus according to the present embodiment is the same as the operation flow of Embodiment 1 shown in FIGS. 10 and 11, and its description is therefore omitted.
 The present embodiment differs from Embodiment 1 in the operation of calculating the region for acquiring a reference image (S1106 in FIG. 14).
 FIG. 18A is a diagram showing reference image acquisition regions according to the present embodiment. For a plurality of sub-blocks arranged in the horizontal direction, such as sub-block BK0 and sub-block BK1 in FIG. 18A, the image decoding apparatus according to the present embodiment transfers the reference images excluding the overlapping portion.
 For example, when transferring the reference image of sub-block BK0, the image decoding apparatus according to the present embodiment transfers 23x23 pixels whose upper left is (x+vx-3, y+vy-3). When transferring the reference image of sub-block BK1, the image decoding apparatus transfers 16x23 pixels whose upper left is (x+vx+20, y+vy-3).
 Next, unlike in Embodiments 1 and 2, the image decoding apparatus according to the present embodiment acquires the reference image of sub-block BK2, located below sub-block BK0. When acquiring the reference image of sub-block BK2, the image decoding apparatus transfers 23x23 pixels whose upper left is (x+vx-3, y+vy+13). As a result, the transfer is performed redundantly for the hatched portion in FIG. 18A.
 Although omitted from FIG. 18A, the image decoding apparatus transfers the reference image corresponding to sub-block BK3, to the right of sub-block BK2, excluding the overlapping portion that was transferred when the reference image corresponding to sub-block BK2 was transferred.
 Next, the image decoding apparatus transfers the reference image corresponding to sub-block BK4, above and to the right of sub-block BK3. Here, the image decoding apparatus redundantly transfers the overlapping portion that was transferred when the reference image corresponding to sub-block BK1 was transferred. That is, the image decoding apparatus transfers a 23x23-pixel reference image whose upper left is (x+vx+29, y+vy-3).
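The "immediately previous only" rule can be sketched as follows: a window is trimmed only when the transfer performed just before it was the same row's left-hand neighbour; otherwise, as for BK4 whose predecessor BK3 lies in the row below, the full window is transferred. The function name and the (left, top, width, height) tuple layout are assumptions for illustration.

```python
def trim_against_previous(cur, prev):
    """Trim `cur` only if `prev`, the immediately preceding transfer,
    is horizontally adjacent in the same row. Regions are (l, t, w, h)."""
    left, top, w, h = cur
    if prev is not None:
        pl, pt, pw, ph = prev
        if pt == top and pl < left < pl + pw:   # same row, prev covers our left edge
            cut = pl + pw - left                # columns already transferred
            left, w = left + cut, w - cut
    return (left, top, w, h)
```

BK1's full window (13, -3, 23, 23) trimmed against BK0's (-3, -3, 23, 23) becomes (20, -3, 16, 23), while BK4's window (29, -3, 23, 23) is left unchanged because BK3's window sits in the lower row.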
 FIG. 18B is a diagram showing a plurality of acquisition regions corresponding to the prediction unit according to the present embodiment. For the entire 64x64-pixel prediction unit, the transfer is performed redundantly for the hatched portion in FIG. 18B.
 When the prediction unit is not divided, the transfer amount of the reference image is 5041 pixels (71x71 pixels). In the present embodiment, the transfer amount of the reference image is 7176 pixels (39x23x8 pixels). Therefore, the transfer amount of the reference image according to the present embodiment is larger than when the prediction unit is not divided. It is also larger than the transfer amount of the reference image according to Embodiment 2.
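The quoted 7176-pixel figure can be checked as follows: sub-blocks are fetched in (full, trimmed) pairs of 23 + 16 = 39 columns by 23 rows, and a 64x64 unit with 16 sub-blocks contains eight such pairs.

```python
pair = (23 + 16) * 23    # one full 23x23 window plus one trimmed 16x23 window
embodiment3 = pair * 8   # 16 sub-blocks processed as 8 pairs

print(embodiment3)       # 7176
```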
 In the present embodiment, however, only the portion that overlaps in the horizontal direction and was transferred immediately before in decoding order is excluded. Therefore, the circuit configuration of the motion compensation unit 506 is simplified. In addition, the reference image used for sub-block BK0 does not need to be retained after being used for sub-block BK1. Therefore, data management is simplified, and the required capacity of the reference image storage unit 513 is reduced.
 (3-4. Effect)
 As described above, the image decoding apparatus according to the present embodiment calculates the position and size for acquiring a reference image so as not to overlap the horizontally overlapping portion transferred immediately before. The image decoding apparatus then transfers the reference image excluding the horizontally overlapping portion transferred immediately before. As a result, the memory capacity required for storing reference images is further reduced. In addition, the circuit configuration of the motion compensation unit 506 is simplified.
 (3-5. Supplement)
 In the present embodiment, the image decoding apparatus divides a 64x64-pixel prediction unit into 16x16-pixel sub-blocks, but it may instead divide it into 8x8 pixels or 32x32 pixels. The image decoding apparatus may also divide a non-square prediction unit, such as 64x32 pixels, into 16x16 pixels or another size.
 The larger the division size, the simpler the processing, but the larger the memory capacity required to store reference images and the larger the processing delay. The smaller the division size, the more complex the processing, but the smaller the memory capacity required to store reference images and the smaller the processing delay. In the conventional H.264 standard and the like, the maximum data unit for motion compensation is 16x16 pixels. Therefore, 16x16 pixels is the most appropriate division size from the viewpoint of both the size of the memory for storing reference images and the processing delay.
 In the present embodiment, the data needed as a reference image is acquired in units of one pixel. However, the image decoding apparatus does not necessarily need to acquire the data in units of one pixel, and may acquire it in units of 4 pixels, 8 pixels, or an even larger data unit.
 Also, in the present embodiment, the motion vector calculation unit 511 calculates the position and size so that the regions for acquiring reference images do not overlap. However, the DMA control unit 512 may instead control the transfer so that an already transferred portion is not transferred again. Alternatively, the reference image storage unit 513 may control the transfer so that a portion already stored in the reference image storage unit 513 is not transferred.
 Part or all of the configuration of each processing unit according to the present embodiment may be realized by a dedicated hardware circuit, or by a program executed by a processor.
 In the present embodiment, the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514 are shown as a memory or a storage unit. However, they may have any configuration, such as flip-flops or registers, as long as they are storage elements capable of storing data. Furthermore, a part of a processor's memory area or a part of a cache memory may be used as the frame memory 502, the reference image storage unit 513, and the predicted image storage unit 514.
 The present embodiment shows an image decoding apparatus. However, the technique is not limited to decoding: an image encoding apparatus that executes the decoding process in the reverse procedure can likewise divide a prediction unit and perform the motion compensation process. Furthermore, without being limited to encoding or decoding, an image processing apparatus may divide a prediction unit and perform the motion compensation process.
 (Embodiment 4)
 FIG. 19A is a diagram showing the configuration of the image processing apparatus according to Embodiment 4. The image processing apparatus 100 in FIG. 19A performs a motion compensation process using a motion vector corresponding to a block in an image. The image processing apparatus 100 includes a dividing unit 101, a calculation unit 102, an acquisition unit 103, and a generation unit 104. For example, the dividing unit 101 and the calculation unit 102 correspond to the motion vector calculation unit 511 of Embodiment 1, the acquisition unit 103 corresponds to the DMA control unit 512 of Embodiment 1, and the generation unit 104 corresponds to the motion compensation unit 506.
 FIG. 19B is a diagram showing the operation of the image processing apparatus 100 in FIG. 19A. First, the dividing unit 101 divides a block into a plurality of sub-blocks (S101). Next, the calculation unit 102 calculates a region for acquiring a first reference image (S102). At this time, the calculation unit 102 uses the motion vector corresponding to the block. The first reference image is the reference image corresponding to a first sub-block included in the plurality of sub-blocks.
 Next, the acquisition unit 103 acquires the first reference image from the calculated region, excluding at least a part of the already acquired portions (S103). For example, a portion included in the first reference image may already have been acquired in order to generate the predicted image corresponding to a second sub-block. The acquisition unit 103 acquires the first reference image excluding at least a part of such a portion. Next, the generation unit 104 generates the predicted image corresponding to the first sub-block from the first reference image (S104).
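The S101-S104 loop can be sketched as follows, using Embodiment 2's horizontal-overlap exclusion as the concrete acquisition policy. All names are illustrative and the prediction step is left as a comment; the sketch simply counts the pixels that would be transferred.

```python
def motion_compensate(block_w, block_h, mv, sub=16, m_lt=3, m_rb=4):
    """Sketch of S101-S104; returns the total reference pixels transferred."""
    vx, vy = mv
    win = sub + m_lt + m_rb                        # 23-pixel window per 16-pixel sub-block
    transferred = 0
    for oy in range(0, block_h, sub):              # S101: divide into sub-blocks
        for ox in range(0, block_w, sub):
            w = win if ox == 0 else sub            # S103: drop columns shared with
            region = (ox + vx - m_lt + (win - w),  #       the left neighbour's window
                      oy + vy - m_lt, w, win)      # S102: region from the block's MV
            transferred += region[2] * region[3]
            # S104: generate the 16x16 predicted image from the window (omitted)
    return transferred
```

For a 64x64 block with motion vector (0, 0), `motion_compensate(64, 64, (0, 0))` returns 6532, the Embodiment 2 transfer amount, versus 71 * 71 = 5041 for an undivided transfer.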
 As a result, the image processing apparatus 100 does not redundantly acquire at least a part of the already acquired portions. Therefore, an increase in the transfer amount is suppressed, the required memory capacity is reduced, and the processing delay is suppressed.
 The acquisition unit 103 may acquire a first reference image that partially overlaps a second reference image, excluding at least a part of the portion where the first reference image and the second reference image overlap. Here, the second reference image is the reference image corresponding to a second sub-block included in the plurality of sub-blocks after division. In this way, the image processing apparatus 100 acquires a reference image having an overlapping portion. That overlapping portion may already have been acquired. By not redundantly acquiring at least a part of the overlapping portion, the image processing apparatus 100 can reduce wasted processing.
 After acquiring the second reference image, the acquisition unit 103 may acquire the first reference image excluding at least a part of the portion included in the acquired second reference image. As a result, at least a part of the portion acquired when acquiring the reference image of another sub-block is not redundantly acquired. Therefore, an increase in the transfer amount is suppressed, and the required memory capacity is reduced.
 The acquisition unit 103 may also acquire the first reference image while the generation unit 104 is generating the predicted image corresponding to the second sub-block. This allows the image processing apparatus 100 to perform the reference image acquisition process and the predicted image generation process simultaneously, further reducing the processing delay.
 The above-described second reference image may also be limited to the reference image corresponding to a second sub-block that is horizontally adjacent to the first sub-block. In this way, the image processing apparatus 100 does not redundantly acquire at least a part of the horizontally overlapping portion. By limiting the exclusion condition to horizontally overlapping portions, the image processing apparatus 100 can acquire a reference image excluding at least a part of the already acquired overlapping portions without complicated data management.
 The acquisition unit 103 may also acquire the second reference image immediately before acquiring the first reference image, and then acquire the first reference image excluding at least a part of the portion included in the acquired second reference image. In this way, the image processing apparatus 100 does not redundantly acquire at least a part of the overlapping portion acquired immediately before. By limiting the exclusion condition to the overlapping portion acquired immediately before, the image processing apparatus 100 can acquire a reference image excluding at least a part of the already acquired overlapping portions without complicated data management.
 The above-described second reference image, that is, the second reference image acquired immediately before the first reference image, may be limited to the reference image corresponding to a second sub-block that is horizontally adjacent to the first sub-block. In this way, the image processing apparatus 100 does not redundantly acquire at least a part of the horizontally overlapping portion acquired immediately before. By limiting the exclusion condition to the horizontally overlapping portion acquired immediately before, the image processing apparatus 100 can acquire a reference image excluding at least a part of the already acquired overlapping portions without complicated data management.
 The acquisition unit 103 is not limited to horizontally overlapping portions; it may acquire the first reference image excluding at least a part of an overlapping portion in a predetermined direction. For example, the acquisition unit 103 may acquire the first reference image excluding at least a part of an overlapping portion in the direction along the processing order. More specifically, when acquiring the plurality of reference images corresponding to the plurality of sub-blocks in order along the vertical direction, the acquisition unit 103 may acquire the first reference image excluding at least a part of the vertically overlapping portion.
 The dividing unit 101 may also divide the block into a plurality of sub-blocks of equal size. In this way, the motion compensation process is executed on sub-blocks of the same size, and the motion compensation process is therefore simplified.
 The calculation unit 102 may also calculate the region for acquiring the first reference image excluding at least a part of the already acquired portions. In this way, the region for acquiring the reference image is appropriately calculated excluding at least a part of the overlapping portions, and the reference image is therefore appropriately acquired excluding at least a part of the overlapping portions.
 The acquisition unit 103 may also acquire a first reference image larger than the first sub-block, excluding at least a part of the already acquired portions. The generation unit 104 may then generate a predicted image with a higher resolution than the first sub-block. This allows the image processing apparatus 100 to generate a high-resolution predicted image for each sub-block while suppressing an increase in the transfer amount.
 The image processing apparatus according to one or more aspects has been described above based on the embodiments, but the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceivable by those skilled in the art to the present embodiments, and forms constructed by combining components of different embodiments, may also be included within the scope of one or more aspects without departing from the gist of the present invention.
 For example, a process executed by a particular processing unit may be executed by another processing unit. The order in which processes are executed may be changed, and a plurality of processes may be executed in parallel.
 The above concept can be realized not only as an image processing apparatus but also as a method whose steps are the processing means constituting the image processing apparatus. For example, those steps are executed by a computer. The above concept can then be realized as a program for causing a computer to execute the steps included in those methods. Furthermore, the above concept can be realized as a computer-readable recording medium, such as a CD-ROM, on which the program is recorded.
 The image processing apparatus and the image processing method are also applicable to an image encoding apparatus, an image decoding apparatus, an image encoding method, and an image decoding method.
 In each of the above embodiments, each component may be configured by dedicated hardware or realized by executing a software program suitable for that component. Each component may be realized by a program execution unit, such as a CPU or a processor, reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. Here, the software that realizes the image processing apparatus of each of the above embodiments is the following program.
 That is, this program causes a computer to execute an image processing method for performing a motion compensation process using a motion vector corresponding to a block in an image, the method including: a dividing step of dividing the block into a plurality of sub-blocks; a calculating step of calculating, using the motion vector corresponding to the block, a region for acquiring a first reference image corresponding to a first sub-block included in the plurality of sub-blocks; an acquiring step of acquiring the first reference image from the calculated region, excluding at least a part of the already acquired portions; and a generating step of generating, from the first reference image, a predicted image corresponding to the first sub-block.
 The plurality of components included in the image processing device may be realized as an LSI (Large Scale Integration) circuit. These components may each be integrated on a separate chip, or some or all of them may be integrated on a single chip. For example, the components other than the memory may be integrated on one chip. Although the term LSI is used here, the circuit may also be called an IC (Integrated Circuit), system LSI, super LSI, or ultra LSI depending on the degree of integration.
 The method of circuit integration is not limited to LSI; a dedicated circuit or a general-purpose processor may also be used. An FPGA (Field Programmable Gate Array) that can be programmed, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
 Furthermore, if circuit-integration technology that replaces LSI emerges through advances in semiconductor technology or another derived technology, that technology may of course be used to integrate the components included in the image processing device.
 (Embodiment 5)
 By recording a program for realizing the configuration of the image encoding method and the image decoding method described in each of the above embodiments on a storage medium, the processing described in each of the above embodiments can easily be carried out on an independent computer system. The storage medium may be any medium capable of recording the program, such as a magnetic disk, an optical disc, a magneto-optical disk, an IC card, or a semiconductor memory.
 Application examples of the image encoding method and the image decoding method described in the above embodiments, and a system using them, are described here.
 FIG. 20 shows the overall configuration of a content supply system ex100 that realizes a content distribution service. The area in which the communication service is provided is divided into cells of a desired size, and base stations ex106 to ex110, which are fixed wireless stations, are installed in the respective cells.
 In the content supply system ex100, devices such as a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, and a game machine ex115 are interconnected via a telephone network ex104 and the base stations ex106 to ex110. Each device is also connected to the Internet ex101 via an Internet service provider ex102.
 However, the content supply system ex100 is not limited to the configuration shown in FIG. 20, and any combination of these elements may be connected. Each device may be connected directly to the telephone network ex104 without going through the base stations ex106 to ex110, which are fixed wireless stations. The devices may also be connected directly to one another via short-range wireless communication or the like.
 The camera ex113 is a device capable of capturing moving images, such as a digital video camera, and the camera ex116 is a device capable of capturing still images and moving images, such as a digital camera. The mobile phone ex114 may be any of a GSM (registered trademark) (Global System for Mobile Communications) phone, a CDMA (Code Division Multiple Access) phone, a W-CDMA (Wideband-Code Division Multiple Access) phone, an LTE (Long Term Evolution) phone, an HSPA (High Speed Packet Access) phone, a PHS (Personal Handyphone System) phone, or the like.
 In the content supply system ex100, the camera ex113 and other devices are connected to a streaming server ex103 via the base station ex109 and the telephone network ex104, which enables live distribution and the like. In live distribution, content captured by a user with the camera ex113 (for example, video of a live music performance) is encoded as described in the above embodiments and transmitted to the streaming server ex103. The streaming server ex103 then streams the transmitted content data to clients that request it. The clients include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, the game machine ex115, and the like, each capable of decoding the encoded data. Each device that receives the distributed data decodes and reproduces it.
 The encoding of the captured data may be performed by the camera ex113, by the streaming server ex103 that transmits the data, or shared between them. Similarly, the decoding of the distributed data may be performed by the client, by the streaming server ex103, or shared between them. Not only the camera ex113 but also the camera ex116 may be used: still image and/or moving image data captured by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111. In this case, the encoding may be performed by any of the camera ex116, the computer ex111, and the streaming server ex103, or shared among them.
 These encoding and decoding processes are generally executed by the computer ex111 and by an LSI (Large Scale Integration) circuit ex500 included in each device. The LSI ex500 may consist of a single chip or multiple chips. Alternatively, software for image encoding or image decoding may be stored on some recording medium readable by the computer ex111 or the like (such as a CD-ROM, a flexible disk, or a hard disk), and the encoding or decoding may be performed using that software. Furthermore, when the mobile phone ex114 is equipped with a camera, moving image data captured by that camera may be transmitted; that data is encoded by the LSI ex500 included in the mobile phone ex114.
 The streaming server ex103 may also be composed of multiple servers or multiple computers that process, record, and distribute data in a distributed manner.
 As described above, in the content supply system ex100, clients can receive and reproduce the encoded data. Clients can thus receive, decode, and reproduce information transmitted by users in real time, so even a user without any special rights or equipment can realize personal broadcasting.
 The content supply system ex100 is not the only example: as shown in FIG. 21, at least one of the image encoding device and the image decoding device of the above embodiments can also be incorporated into a digital broadcasting system ex200. Specifically, at a broadcasting station ex201, a bitstream of video information is transmitted by radio waves to a communication or broadcasting satellite ex202. This bitstream is an encoded bitstream produced by the image encoding method described in the above embodiments. On receiving it, the broadcasting satellite ex202 transmits broadcast radio waves, which are received by a home antenna ex204 capable of receiving satellite broadcasts. The received bitstream is decoded and reproduced by a device such as a television (receiver) ex300 or a set-top box (STB) ex217.
 The image decoding device described in the above embodiments can also be implemented in a playback device ex212 that reads and decodes a bitstream recorded on a recording medium ex214 such as a CD or DVD. In this case, the reproduced video signal is displayed on a monitor ex213.
 The image decoding device or the image encoding device described in the above embodiments can also be implemented in a reader/recorder ex218 that reads and decodes an encoded bitstream recorded on a recording medium ex215 such as a DVD or BD, or that encodes a video signal and writes it to the recording medium ex215. In this case, the reproduced video signal is displayed on a monitor ex219, and other devices and systems can reproduce the video signal from the recording medium ex215 on which the encoded bitstream is recorded. Alternatively, an image decoding device may be implemented in a set-top box ex217 connected to a cable ex203 for cable television or to the antenna ex204 for satellite/terrestrial broadcasting, with its output displayed on the television monitor ex219. The image decoding device may also be incorporated into the television rather than the set-top box.
 FIG. 22 shows a television (receiver) ex300 that uses the image decoding method described in the above embodiments. The television ex300 includes a tuner ex301 that acquires or outputs a bitstream of video information via the antenna ex204 or the cable ex203 that receives the broadcast; a modulation/demodulation unit ex302 that demodulates the received encoded data, or modulates encoded data for transmission to the outside; and a multiplexing/demultiplexing unit ex303 that demultiplexes the demodulated video and audio data, or multiplexes encoded video and audio data.
 The television ex300 also includes a signal processing unit ex306 having an audio signal processing unit ex304 and a video signal processing unit ex305 that decode audio data and video data, respectively, or encode the respective information; and an output unit ex309 having a speaker ex307 that outputs the decoded audio signal and a display unit ex308, such as a display, that shows the decoded video signal. The television ex300 further includes an interface unit ex317 having an operation input unit ex312 and the like that accepts user operations. The television ex300 additionally includes a control unit ex310 that controls each unit in an integrated manner, and a power supply circuit unit ex311 that supplies power to each unit.
 Besides the operation input unit ex312, the interface unit ex317 may include a bridge ex313 connected to external devices such as the reader/recorder ex218, a slot unit ex314 for attaching a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and the like. The recording medium ex216 can record information electrically using a nonvolatile/volatile semiconductor memory element.
 The units of the television ex300 are connected to one another via a synchronous bus.
 First, a configuration in which the television ex300 decodes and reproduces data acquired from the outside via the antenna ex204 or the like will be described. The television ex300 receives a user operation from a remote controller ex220 or the like and, under the control of the control unit ex310 having a CPU and the like, demultiplexes the video and audio data demodulated by the modulation/demodulation unit ex302 in the multiplexing/demultiplexing unit ex303. The television ex300 then decodes the separated audio data in the audio signal processing unit ex304, and decodes the separated video data in the video signal processing unit ex305 using the decoding method described in the above embodiments. The decoded audio and video signals are each output from the output unit ex309 to the outside. When they are output, these signals may be temporarily stored in buffers ex318, ex319, and the like so that the audio and video signals are reproduced in synchronization. The television ex300 may also read an encoded bitstream not from a broadcast or the like but from recording media ex215 and ex216 such as a magnetic/optical disc or an SD card.
 Next, a configuration in which the television ex300 encodes audio and video signals and transmits them to the outside or writes them to a recording medium or the like will be described. The television ex300 receives a user operation from the remote controller ex220 or the like and, under the control of the control unit ex310, encodes the audio signal in the audio signal processing unit ex304 and encodes the video signal in the video signal processing unit ex305 using the encoding method described in the above embodiments. The encoded audio and video signals are multiplexed by the multiplexing/demultiplexing unit ex303 and output to the outside. When they are multiplexed, these signals may be temporarily stored in buffers ex320, ex321, and the like so that the audio and video signals are synchronized.
 A plurality of buffers ex318 to ex321 may be provided as illustrated, or one or more buffers may be shared. Furthermore, beyond what is illustrated, data may also be buffered, for example, between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303, as a cushion that prevents overflow and underflow in the system.
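The overflow/underflow cushioning described above can be sketched in software as a bounded queue between two processing stages; the stage names, the buffer size, and the use of threads are illustrative assumptions, not the hardware buffers ex318 to ex321 themselves:

```python
import queue
import threading

def producer(buf, items):
    """Upstream stage: put() blocks when the buffer is full, preventing overflow."""
    for it in items:
        buf.put(it)
    buf.put(None)  # sentinel marking end of stream

def consumer(buf, out):
    """Downstream stage: get() blocks when the buffer is empty, preventing underflow."""
    while True:
        it = buf.get()
        if it is None:
            break
        out.append(it)

buf = queue.Queue(maxsize=4)  # small bound to exercise back-pressure
out = []
t = threading.Thread(target=producer, args=(buf, list(range(10))))
t.start()
consumer(buf, out)
t.join()
```

The bounded capacity absorbs rate mismatches between the two stages while the blocking put/get calls keep either side from running ahead of the other.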
 In addition to acquiring audio and video data from broadcasts, recording media, and the like, the television ex300 may have a configuration that accepts AV input from a microphone and a camera, and may encode the data acquired from them. Although the television ex300 has been described here as being capable of the above encoding, multiplexing, and external output, it may instead be incapable of these processes and capable only of the above reception, decoding, and external output.
 When the reader/recorder ex218 reads an encoded bitstream from a recording medium or writes one to it, the decoding or encoding may be performed by either the television ex300 or the reader/recorder ex218, or the television ex300 and the reader/recorder ex218 may share the processing.
 As an example, FIG. 23 shows the configuration of an information reproducing/recording unit ex400 used when data is read from or written to an optical disc. The information reproducing/recording unit ex400 includes elements ex401 to ex407 described below.
 The optical head ex401 writes information by irradiating a laser spot onto the recording surface of the recording medium ex215, which is an optical disc, and reads information by detecting light reflected from the recording surface of the recording medium ex215. The modulation recording unit ex402 electrically drives a semiconductor laser built into the optical head ex401 and modulates the laser light according to the data to be recorded. The reproduction demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the light reflected from the recording surface, separates and demodulates the signal components recorded on the recording medium ex215, and reproduces the necessary information. The buffer ex404 temporarily holds the information to be recorded on the recording medium ex215 and the information reproduced from it. The disk motor ex405 rotates the recording medium ex215. The servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs laser-spot tracking.
 The system control unit ex407 controls the information reproducing/recording unit ex400 as a whole. The reading and writing processes described above are realized by the system control unit ex407 using the various information held in the buffer ex404, generating and adding new information as necessary, and recording and reproducing information through the optical head ex401 while operating the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406 in a coordinated manner. The system control unit ex407 is composed of, for example, a microprocessor, and executes these processes by running a reading/writing program.
 Although the optical head ex401 has been described above as irradiating a laser spot, it may instead be configured to perform higher-density recording using near-field light.
 FIG. 24 is a schematic diagram of the recording medium ex215, which is an optical disc. A guide groove is formed in a spiral on the recording surface of the recording medium ex215, and address information indicating the absolute position on the disc is recorded in advance on an information track ex230 through changes in the shape of the groove. This address information includes information for identifying the positions of recording blocks ex231, which are the units in which data is recorded; a recording and reproducing device can identify a recording block by reproducing the information track ex230 and reading the address information. The recording medium ex215 also includes a data recording area ex233, an inner circumference area ex232, and an outer circumference area ex234. The area used for recording user data is the data recording area ex233; the inner circumference area ex232 and the outer circumference area ex234, located respectively inside and outside the data recording area ex233, are used for specific purposes other than recording user data.
 The information reproducing/recording unit ex400 reads and writes encoded audio data, encoded video data, or encoded data in which these are multiplexed, to and from the data recording area ex233 of such a recording medium ex215.
 Although a single-layer optical disc such as a DVD or BD has been described above as an example, the disc is not limited to these and may be an optical disc with a multilayer structure that can be recorded on at depths other than the surface. The disc may also be an optical disc structured for multidimensional recording/reproduction, such as recording information at the same location on the disc using light of various different wavelengths, or recording different layers of information from various angles.
 In the digital broadcasting system ex200, a car ex210 having an antenna ex205 can also receive data from the satellite ex202 or the like and reproduce moving images on a display device such as a car navigation system ex211 in the car ex210. A possible configuration of the car navigation system ex211 is, for example, the configuration shown in FIG. 22 with a GPS receiving unit added, and the same applies to the computer ex111, the mobile phone ex114, and the like. Moreover, like the television ex300, terminals such as the mobile phone ex114 can be implemented in three forms: a transmitting/receiving terminal having both an encoder and a decoder, a transmitting terminal having only an encoder, and a receiving terminal having only a decoder.
 As described above, the image encoding method or the image decoding method shown in the above embodiments can be used in any of the devices or systems described above, and doing so yields the effects described in the above embodiments.
 The present invention is not limited to the above embodiments, and various changes and modifications can be made without departing from the scope of the present invention.
 (Embodiment 6)
 In this embodiment, the image decoding device described in Embodiment 1 is realized as an LSI, typically a semiconductor integrated circuit. FIG. 25 shows the realized form. The frame memory 502 is implemented on DRAM, and the other circuits and memories are configured on the LSI. A bitstream buffer for storing the encoded stream may also be implemented on the DRAM.
 These may each be integrated on a separate chip, or some or all of them may be integrated on a single chip. Although the term LSI is used here, the circuit may also be called an IC, system LSI, super LSI, or ultra LSI depending on the degree of integration.
 The method of circuit integration is not limited to LSI; a dedicated circuit or a general-purpose processor may also be used. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
 Furthermore, if circuit-integration technology that replaces LSI emerges through advances in semiconductor technology or another derived technology, the functional blocks may of course be integrated using that technology. The application of biotechnology is one possibility.
 In addition, by combining a semiconductor chip on which the image decoding device of this embodiment is integrated with a display for rendering images, rendering equipment for various uses can be configured. The present invention can be used as information rendering means in mobile phones, televisions, digital video recorders, digital video cameras, car navigation systems, and the like. As the display, a cathode-ray tube (CRT) may be used, as may flat displays such as liquid crystal displays, PDPs (plasma display panels), and organic EL displays, and projection displays typified by projectors.
 The LSI of this embodiment may perform encoding or decoding in cooperation with a DRAM (Dynamic Random Access Memory) that includes a bitstream buffer for storing the encoded stream, a frame memory for storing images, and the like. The LSI of this embodiment may instead cooperate not with a DRAM but with another storage device such as an eDRAM (embedded DRAM), an SRAM (Static Random Access Memory), or a hard disk.
 (Embodiment 7)
 The image encoding device, image decoding device, image encoding method, and image decoding method described in the above embodiments are typically realized as an LSI, which is an integrated circuit. As an example, FIG. 26 shows the configuration of a single-chip LSI ex500. The LSI ex500 includes elements ex502 to ex509 described below, and the elements are connected via a bus ex510. When the power is on, the power supply circuit unit ex505 starts the LSI in an operable state by supplying power to each unit.
 For example, when performing encoding, the LSI ex500 receives an AV signal input from the microphone ex117, the camera ex113, and the like through an AV I/O ex509. The input AV signal is temporarily stored in an external memory ex511 such as an SDRAM. The accumulated data is sent to the signal processing unit ex507, divided into multiple portions as appropriate according to the processing amount and processing speed. The signal processing unit ex507 encodes the audio signal and/or the video signal. Here, the encoding of the video signal is the encoding described in the above embodiments. The signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data as the case requires, and outputs the result to the outside from a stream I/O ex504. The output bitstream is transmitted toward the base station ex107 or written to the recording medium ex215.
 Also, for example, when decoding is performed, the LSI ex500, under the control of the microcomputer ex502, temporarily stores in the memory ex511 or the like the coded data obtained from the base station ex107 through the stream I/O ex504, or the coded data read from the recording medium ex215. Under the control of the microcomputer ex502, the stored data is sent to the signal processing unit ex507, divided into several transfers as appropriate according to the processing amount and processing speed, and the signal processing unit ex507 decodes the audio data and/or the video data; the video decoding performed here is the decoding described in the above embodiments. In some cases, each decoded signal may be temporarily stored in the memory ex511 or the like so that the decoded audio signal and the decoded video signal can be reproduced in synchronization. The decoded output signal is output from the AV I/O ex509 to the monitor ex219 or the like, passing through the memory ex511 as appropriate. The memory ex511 is accessed via the memory controller ex503.
 In the above description, the memory ex511 is described as a component external to the LSI ex500, but it may instead be included inside the LSI ex500. The LSI ex500 may also be implemented as a single chip or as a plurality of chips.
 Although referred to here as an LSI, it may also be called an IC, a system LSI, a super LSI, or an ultra LSI, depending on the degree of integration.
 The method of circuit integration is not limited to an LSI; a dedicated circuit or a general-purpose processor may be used instead. An FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
 Furthermore, if circuit-integration technology that replaces the LSI emerges from advances in semiconductor technology or another derived technology, the functional blocks may of course be integrated using that technology. Application of biotechnology is one possibility.
 The present invention can be used for various purposes. For example, it is applicable, with high utility value, to high-resolution information display devices and imaging devices such as televisions, digital video recorders, car navigation systems, mobile phones, digital cameras, and digital video cameras.
DESCRIPTION OF SYMBOLS
  100 Image processing apparatus
  101 Dividing unit
  102 Calculation unit
  103 Acquisition unit
  104 Generation unit
  501 Control unit
  502, 1003 Frame memory
  503 Variable-length decoding unit
  504 Inverse quantization unit
  505 Inverse frequency transform unit
  506, 1000 Motion compensation unit
  507 Intra prediction unit
  508 Reconstruction unit
  509 Reconstructed image memory
  510 Deblocking filter unit
  511 Motion vector calculation unit
  512, 1002 DMA control unit
  513, 1001 Reference image storage unit
  514 Predicted image storage unit
  ex100 Content supply system
  ex101 Internet
  ex102 Internet service provider
  ex103 Streaming server
  ex104 Telephone network
  ex106, ex107, ex108, ex109, ex110 Base station
  ex111 Computer
  ex112 PDA (Personal Digital Assistant)
  ex113, ex116 Camera
  ex114 Mobile phone
  ex115 Game machine
  ex117 Microphone
  ex200 Digital broadcasting system
  ex201 Broadcast station
  ex202 Broadcast satellite (satellite)
  ex203 Cable
  ex204, ex205 Antenna
  ex210 Car
  ex211 Car navigation system
  ex212 Playback device
  ex213, ex219 Monitor
  ex214, ex215, ex216 Recording medium
  ex217 Set-top box (STB)
  ex218 Reader/recorder
  ex220 Remote controller
  ex230 Information track
  ex231 Recording block
  ex232 Inner circumference area
  ex233 Data recording area
  ex234 Outer circumference area
  ex300 Television (receiver)
  ex301 Tuner
  ex302 Modulation/demodulation unit
  ex303 Multiplexing/demultiplexing unit
  ex304 Audio signal processing unit
  ex305 Video signal processing unit
  ex306, ex507 Signal processing unit
  ex307 Speaker
  ex308 Display unit
  ex309 Output unit
  ex310 Control unit
  ex311, ex505 Power supply circuit unit
  ex312 Operation input unit
  ex313 Bridge
  ex314 Slot unit
  ex315 Driver
  ex316 Modem
  ex317 Interface unit
  ex318, ex319, ex320, ex321, ex404 Buffer
  ex400 Information reproducing/recording unit
  ex401 Optical head
  ex402 Modulation recording unit
  ex403 Reproduction demodulation unit
  ex405 Disk motor
  ex406 Servo control unit
  ex407 System control unit
  ex500 LSI
  ex502 Microcomputer
  ex503 Memory controller
  ex504 Stream I/O
  ex509 AV I/O
  ex510 Bus
  ex511 Memory

Claims (13)

  1.  An image processing apparatus that performs motion compensation using a motion vector corresponding to a block in an image, the apparatus comprising:
     a dividing unit configured to divide the block into a plurality of sub-blocks;
     a calculation unit configured to calculate, using the motion vector corresponding to the block, a region for acquiring a first reference image corresponding to a first sub-block included in the plurality of sub-blocks;
     an acquisition unit configured to acquire, from the calculated region, the first reference image excluding at least a part of a portion that has already been acquired; and
     a generation unit configured to generate, from the first reference image, a predicted image corresponding to the first sub-block.
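The acquisition scheme of claim 1 can be sketched as follows. This is an illustrative model only, not the patented implementation: the interpolation margin around each sub-block, the left-to-right fetch order, the single motion vector shared by all sub-blocks of the block, and the `(x, y, w, h)` tuple format are all assumptions made for the example.

```python
def ref_region(sub_x, sub_y, sub_w, sub_h, mv_x, mv_y, margin=2):
    """Reference-image region needed for one sub-block.

    `margin` models the extra rows/columns required by a
    fractional-pel interpolation filter (an assumed value).
    Returns (x, y, width, height) in reference-frame coordinates.
    """
    x0 = sub_x + mv_x - margin
    y0 = sub_y + mv_y - margin
    return (x0, y0, sub_w + 2 * margin, sub_h + 2 * margin)


def fetch_regions(block_x, block_y, block_w, block_h, mv, sub_w, sub_h, margin=2):
    """Split a block into sub-blocks and list the region each transfer
    must fetch, excluding columns already fetched for the horizontally
    preceding sub-block (the overlap exclusion of claim 1)."""
    transfers = []
    for sy in range(block_y, block_y + block_h, sub_h):
        prev_right = None  # right edge already present in the reference buffer
        for sx in range(block_x, block_x + block_w, sub_w):
            x0, y0, w, h = ref_region(sx, sy, sub_w, sub_h, *mv, margin)
            if prev_right is not None and x0 < prev_right:
                skip = prev_right - x0      # overlap with the previous fetch
                x0, w = prev_right, w - skip
            transfers.append((x0, y0, w, h))
            prev_right = x0 + w
    return transfers
```

For a 16x16 block split into 8x8 sub-blocks with a margin of 2, the first fetch in each row is 12 columns wide, while each subsequent fetch shrinks to 8 columns because the 4-column overlap with the previous region is skipped.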
  2.  The image processing apparatus according to claim 1, wherein the acquisition unit acquires the first reference image, which partially overlaps a second reference image corresponding to a second sub-block included in the plurality of sub-blocks, excluding at least a part of the portion where the first reference image and the second reference image overlap.
  3.  The image processing apparatus according to claim 1 or 2, wherein the acquisition unit acquires a second reference image corresponding to a second sub-block included in the plurality of sub-blocks, and then acquires the first reference image excluding at least a part of the portion included in the acquired second reference image.
  4.  The image processing apparatus according to any one of claims 1 to 3, wherein the acquisition unit acquires the first reference image while the generation unit is generating a predicted image corresponding to a second sub-block included in the plurality of sub-blocks.
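The overlap of acquisition and generation in claim 4 is a classic fetch/compute pipeline. A minimal sketch, assuming abstract `fetch` and `generate` callbacks and sequential execution (in hardware the fetch of sub-block i+1 would run concurrently with the generation for sub-block i):

```python
def pipelined_mc(sub_blocks, fetch, generate):
    """Model of claim 4's pipelining: while the prediction for the
    current sub-block is generated, the reference image for the next
    sub-block is already being fetched (two-buffer scheme)."""
    if not sub_blocks:
        return []
    out = []
    cur = fetch(sub_blocks[0])           # prime the pipeline
    for nxt in sub_blocks[1:]:
        pending = fetch(nxt)             # would overlap with generate() in hardware
        out.append(generate(cur))
        cur = pending
    out.append(generate(cur))            # drain the pipeline
    return out
```

With real DMA hardware the two buffers would alternate roles each iteration, hiding the memory latency behind the prediction computation.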
  5.  The image processing apparatus according to any one of claims 1 to 4, wherein the acquisition unit acquires a second reference image corresponding to a second sub-block horizontally adjacent to the first sub-block, and then acquires the first reference image excluding at least a part of the portion included in the acquired second reference image.
  6.  The image processing apparatus according to any one of claims 1 to 5, wherein the acquisition unit acquires a second reference image corresponding to a second sub-block included in the plurality of sub-blocks immediately before acquiring the first reference image, and then acquires the first reference image excluding at least a part of the portion included in the immediately previously acquired second reference image.
  7.  The image processing apparatus according to claim 6, wherein the acquisition unit acquires the second reference image corresponding to the second sub-block horizontally adjacent to the first sub-block immediately before acquiring the first reference image, and then acquires the first reference image excluding at least a part of the portion included in the immediately previously acquired second reference image.
  8.  The image processing apparatus according to any one of claims 1 to 7, wherein the dividing unit divides the block into the plurality of sub-blocks, the sub-blocks being equal in size to one another.
  9.  The image processing apparatus according to any one of claims 1 to 8, wherein the calculation unit calculates the region for acquiring the first reference image, excluding at least a part of the already acquired portion.
  10.  The image processing apparatus according to any one of claims 1 to 9, wherein the acquisition unit acquires the first reference image, which is larger than the first sub-block, excluding at least a part of the already acquired portion, and the generation unit generates the predicted image at a higher resolution than the first sub-block.
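A reference image larger than the sub-block, as in claim 10, arises naturally in fractional-pel motion compensation: a T-tap interpolation filter needs T-1 extra samples per dimension around the sub-block. A minimal sketch of that size arithmetic (using the 6-tap half-pel filter length of H.264 as a representative default is an assumption, not something the claim specifies):

```python
def ref_size(sub_w, sub_h, taps=6):
    """Reference-patch size needed to interpolate a sub_w x sub_h
    prediction at fractional-pel positions with a `taps`-tap filter."""
    extra = taps - 1
    return (sub_w + extra, sub_h + extra)
```

For example, an 8x8 sub-block interpolated with a 6-tap filter needs a 13x13 reference patch, which is why adjacent sub-blocks' reference regions overlap and the exclusion of claim 1 saves memory bandwidth.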
  11.  An image processing method for performing motion compensation using a motion vector corresponding to a block in an image, the method comprising:
     a dividing step of dividing the block into a plurality of sub-blocks;
     a calculation step of calculating, using the motion vector corresponding to the block, a region for acquiring a first reference image corresponding to a first sub-block included in the plurality of sub-blocks;
     an acquisition step of acquiring, from the calculated region, the first reference image excluding at least a part of a portion that has already been acquired; and
     a generation step of generating, from the first reference image, a predicted image corresponding to the first sub-block.
  12.  A program for causing a computer to execute the steps included in the image processing method according to claim 11.
  13.  An integrated circuit that performs motion compensation using a motion vector corresponding to a block in an image, the integrated circuit comprising:
     a dividing unit configured to divide the block into a plurality of sub-blocks;
     a calculation unit configured to calculate, using the motion vector corresponding to the block, a region for acquiring a first reference image corresponding to a first sub-block included in the plurality of sub-blocks;
     an acquisition unit configured to acquire, from the calculated region, the first reference image excluding at least a part of a portion that has already been acquired; and
     a generation unit configured to generate, from the first reference image, a predicted image corresponding to the first sub-block.
PCT/JP2012/006327 2011-11-24 2012-10-03 Image processing device and image processing method WO2013076897A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-256835 2011-11-24
JP2011256835 2011-11-24

Publications (1)

Publication Number Publication Date
WO2013076897A1 true WO2013076897A1 (en) 2013-05-30

Family

ID=48469369

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/006327 WO2013076897A1 (en) 2011-11-24 2012-10-03 Image processing device and image processing method

Country Status (1)

Country Link
WO (1) WO2013076897A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10215457A (en) * 1997-01-30 1998-08-11 Toshiba Corp Moving image decoding method and device
JP2006311526A (en) * 2005-03-31 2006-11-09 Matsushita Electric Ind Co Ltd Video decoding device, video decoding method, video decoding program, and video decoding integrated circuit
JP2008271292A (en) * 2007-04-23 2008-11-06 Nec Electronics Corp Motion compensating apparatus


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015145504A1 (en) * 2014-03-25 2015-10-01 株式会社ソシオネクスト Image decoding device, image decoding method, and integrated circuit
CN106134192A (en) * 2014-03-25 2016-11-16 株式会社索思未来 Picture decoding apparatus, picture decoding method and integrated circuit
JPWO2015145504A1 (en) * 2014-03-25 2017-04-13 株式会社ソシオネクスト Image decoding apparatus, image decoding method, and integrated circuit
US10306255B2 (en) 2014-03-25 2019-05-28 Socionext Inc. Image decoding device, image decoding method, and integrated circuit
CN106134192B (en) * 2014-03-25 2020-08-11 株式会社索思未来 Image decoding device, image decoding method, and integrated circuit
CN111885379A (en) * 2015-03-23 2020-11-03 Lg 电子株式会社 Method and apparatus for processing image based on intra prediction mode
CN111885379B (en) * 2015-03-23 2023-10-27 Lg 电子株式会社 Method and apparatus for processing image based on intra prediction mode

Similar Documents

Publication Publication Date Title
JP5518069B2 (en) Image decoding apparatus, image encoding apparatus, image decoding method, image encoding method, program, and integrated circuit
JP6390883B2 (en) Image processing device
WO2012035728A1 (en) Image decoding device and image encoding device, methods therefor, programs thereof, integrated circuit, and transcoding device
JP5805281B2 (en) Encoding / decoding device
WO2011161949A1 (en) Image decoding apparatus, image decoding method, integrated circuit, and program
WO2012046435A1 (en) Image processing device, image coding method and image processing method
WO2010109904A1 (en) Coding method, error detection method, decoding method, coding device, error detection device, and decoding device
WO2011048764A1 (en) Decoding apparatus, decoding method, program and integrated circuit
JP5999515B2 (en) Image processing apparatus and image processing method
WO2013108330A1 (en) Image decoding device, image encoding device, image decoding method, and image encoding method
JP6260921B2 (en) Image processing apparatus and image processing method
JP5546044B2 (en) Image decoding apparatus, image encoding apparatus, image decoding circuit, and image decoding method
JP5468604B2 (en) Image decoding apparatus, integrated circuit, image decoding method, and image decoding system
WO2013076897A1 (en) Image processing device and image processing method
WO2011129052A1 (en) Image decoding apparatus, image encoding apparatus, image decoding method and image encoding method
JP2011182132A (en) Image encoding apparatus, image decoding apparatus, image encoding method, image decoding method, integrated circuit, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12851245

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12851245

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP