WO2012014471A1 - Image decoding device, image decoding method, image encoding device, and image encoding method - Google Patents
- Publication number
- WO2012014471A1 (PCT/JP2011/004259)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- stream
- slice
- unit
- divided
- decoding
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/129—Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/156—Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/167—Position within a video image, e.g. region of interest [ROI]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/174—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
Definitions
- the present invention relates to an image decoding apparatus and an image decoding method for decoding an encoded stream in which image data is encoded, and an image encoding apparatus and an image encoding method for encoding image data into an encoded stream.
- More particularly, the present invention relates to an image decoding apparatus and image decoding method that execute decoding in parallel, and to an image encoding apparatus and image encoding method that execute encoding in parallel.
- An image encoding apparatus that encodes a moving image divides each picture constituting the moving image into macro blocks, and encodes each macro block. Then, the image encoding device generates an encoded stream indicating the encoded moving image.
- FIG. 44 is a diagram showing a configuration of a picture to be encoded.
- the picture is divided into 16 ⁇ 16 pixel macroblocks and encoded.
- a slice is composed of a plurality of macroblocks included in the picture, and a picture is composed of the plurality of slices.
- A structural unit consisting of one row of macroblocks arranged horizontally from the left end to the right end of a picture is called a macroblock line (MB line).
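As an illustrative aside (not part of the patent disclosure), the MB-line structure of a picture can be computed from its pixel dimensions; the function name below is hypothetical:

```python
MB_SIZE = 16  # an H.264/AVC macroblock is 16x16 pixels

def mb_line_layout(width_px, height_px):
    """Return (macroblocks per MB line, number of MB lines) for a picture,
    rounding partial macroblocks up as the standard requires."""
    mbs_per_line = (width_px + MB_SIZE - 1) // MB_SIZE
    num_mb_lines = (height_px + MB_SIZE - 1) // MB_SIZE
    return mbs_per_line, num_mb_lines

# An HD picture (1920x1080) has 120 macroblocks per MB line and 68 MB lines
# (1080 / 16 = 67.5, rounded up to 68).
print(mb_line_layout(1920, 1080))   # (120, 68)
print(mb_line_layout(3840, 2160))   # (240, 135)
```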
- FIG. 45 is a diagram showing a configuration of an encoded stream.
- the encoded stream is hierarchized and includes a header and a plurality of pictures arranged in the encoding order as shown in FIG.
- the header described above includes, for example, a sequence parameter set (SPS) that is referred to for decoding a sequence of a plurality of pictures.
- The encoded picture includes a header and a plurality of slices as shown in (b) of FIG. 45, and each slice in turn includes a header and a plurality of macroblocks (MB) as shown in FIG. 45.
- the header at the head of the picture shown in (b) of FIG. 45 includes, for example, a picture parameter set (PPS) that is referred to for decoding the picture.
- FIG. 46 is a diagram showing a configuration of a conventional image decoding apparatus.
- the image decoding apparatus 1300 includes a memory 1310 and a decoding engine 1320.
- the memory 1310 includes a stream buffer 1311 having an area for storing an encoded stream, and a frame memory 1312 having an area for storing decoded image data output from the decoding engine 1320.
- As the image decoding apparatus 1300 sequentially acquires encoded image data, such as macroblocks and pictures included in the encoded stream, from the head of the stream, it stores the encoded image data in the stream buffer 1311.
- The decoding engine 1320 sequentially reads the encoded image data from the stream buffer 1311 in decoding order, decodes it, and stores the decoded image data generated by the decoding in the frame memory 1312. When decoding, the decoding engine 1320 refers to decoded image data already stored in the frame memory 1312.
- the decoded image data stored in the frame memory 1312 is output and displayed on the display device in the display order.
- FIG. 47 is a diagram showing a configuration of the decode engine 1320.
- the decoding engine 1320 includes an entropy decoding unit 1321, an inverse transformation unit 1322, an adder 1323, a deblocking filter 1324, a motion compensation unit 1325, a weighted prediction unit 1326, an in-screen prediction unit 1327, and a switch 1328.
- the entropy decoding unit 1321 entropy-decodes the encoded image data to generate quantized data indicating a quantized value, and outputs the quantized data to the inverse transform unit 1322.
- the inverse transform unit 1322 transforms the quantized data into difference image data by performing inverse quantization and inverse orthogonal transform on the quantized data.
- The adder 1323 generates decoded image data by adding the difference image data output from the inverse transform unit 1322 and the predicted image data output, via the switch 1328, from the weighted prediction unit 1326 or the intra-screen prediction unit 1327.
- the deblocking filter 1324 removes the coding distortion included in the decoded image data generated by the adder 1323 and stores the decoded image data from which the coding distortion has been removed in the frame memory 1312.
- the motion compensation unit 1325 reads the decoded image data stored in the frame memory 1312 and performs motion compensation to generate predicted image data, and outputs the predicted image data to the weighted prediction unit 1326.
- the weighted prediction unit 1326 weights the predicted image data output from the motion compensation unit 1325 and outputs the weighted data to the switch 1328.
- The intra-screen prediction unit 1327 performs intra-screen prediction. That is, it generates predicted image data by performing intra-screen prediction using the decoded image data generated by the adder 1323, and outputs the predicted image data to the switch 1328.
- the switch 1328 outputs the predicted image data output from the intra-screen prediction unit 1327 to the adder 1323 when the difference image data output from the inverse transform unit 1322 is generated by the intra-screen prediction. In addition, when the difference image data output from the inverse transform unit 1322 is generated by inter-screen prediction, the switch 1328 outputs the predicted image data output from the weighted prediction unit 1326 to the adder 1323.
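As an illustrative aside (not part of the patent disclosure), the per-block dataflow of the decoding engine 1320 can be sketched with trivial stand-in arithmetic; only the choice between intra and inter prediction at the switch, and the adder, mirror the structure of FIG. 47:

```python
# Toy sketch of the decoding dataflow: choose a prediction source, add the
# residual, clip to the 8-bit sample range. Real H.264/AVC entropy decoding,
# inverse transform, motion compensation, and deblocking are deliberately
# omitted; the inputs here are plain lists of sample values.

def decode_block(residual, intra_pred, weighted_inter_pred, is_intra):
    # switch 1328: select the prediction source routed to the adder
    prediction = intra_pred if is_intra else weighted_inter_pred
    # adder 1323: reconstruct, then clip samples to 0..255
    # (the real engine would also apply the deblocking filter afterwards)
    return [max(0, min(255, r + p)) for r, p in zip(residual, prediction)]

print(decode_block([10, -5], [100, 100], [0, 0], True))   # [110, 95]
print(decode_block([300, -300], [0, 0], [10, 10], False)) # [255, 0]
```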
- HD (High Definition)
- FIG. 48 is an explanatory diagram for explaining HD and 4k2k.
- the HD encoded stream is distributed by terrestrial digital broadcasting or BS digital broadcasting, and a picture with a resolution of “1920 ⁇ 1080 pixels” is decoded and displayed at a frame rate of 30 frames / second.
- The 4k2k encoded stream is scheduled to be distributed on a trial basis in advanced BS digital broadcasting from 2011; a picture with a resolution of "3840 × 2160 pixels" is decoded and displayed at a frame rate of 60 frames/second.
- 4k2k has twice the resolution of HD both vertically and horizontally, and twice the frame rate.
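A back-of-the-envelope calculation (not from the patent text) confirms the resulting eightfold increase in decoded pixel throughput:

```python
# Pixels per second that must be decoded for HD versus 4k2k.

def pixel_rate(width, height, fps):
    return width * height * fps

hd  = pixel_rate(1920, 1080, 30)   #  62,208,000 pixels/s
uhd = pixel_rate(3840, 2160, 60)   # 497,664,000 pixels/s
print(uhd // hd)  # 8 -- double width x double height x double frame rate
```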
- the processing load on the decoding engine of the image decoding device increases.
- To decode such a stream, the decoding engine 1320 of the image decoding apparatus 1300 shown in FIG. 46 would require an operating frequency of 1 GHz or higher, which is difficult to achieve in practice. Therefore, parallel decoding processing has been studied.
- FIG. 49 is a block diagram showing an example of the configuration of an image decoding apparatus that executes parallel decoding processing.
- the image decoding apparatus 1400 includes a memory 1310 and a decoder 1420.
- Each of the N decoding engines 1421 (the first decoding engine 1421 to the N-th decoding engine 1421) extracts its portion to be processed from the encoded stream stored in the stream buffer 1311, decodes the extracted portion, and outputs the result to the frame memory 1312.
- FIG. 50A and FIG. 50B are explanatory diagrams for explaining an example of parallel processing of decoding.
- the image decoding apparatus 1400 obtains an encoded stream composed of four area encoded streams and stores it in the stream buffer 1311.
- Each of these four area encoded streams is an independent stream, and as shown in FIG. 50A, is a stream indicating a moving image of one area of one screen divided into four.
- the image decoding apparatus 1400 acquires an encoded stream including a picture composed of four slices and stores it in the stream buffer 1311. As shown in FIG. 50B, the four slices are generated by dividing the picture into four equal parts in the vertical direction.
- this image decoding apparatus separates a picture of an encoded stream generated by MPEG-2 for each slice, and executes decoding processing for each slice in parallel.
- However, this image decoding apparatus cannot always execute parallel decoding processing appropriately. That is, because it divides a picture into slices and decodes the plurality of slices in parallel, parallel decoding cannot be executed appropriately on an encoded stream in which slice sizes and positions are set arbitrarily, such as an H.264/AVC encoded stream. In other words, the load becomes unbalanced among the plurality of decoding engines provided in the image decoding apparatus, and decoding that makes effective use of parallel processing cannot be executed. For example, when one picture consists of a single slice, the picture cannot be divided, and a single decoding engine must decode the entire picture.
- To address this, an image decoding apparatus that decodes a picture in parallel in units of MB lines has been proposed (see, for example, Patent Document 1).
- FIG. 51 is an explanatory diagram for explaining a decoding process performed by the image decoding apparatus disclosed in Patent Document 1.
- For example, the first decoding engine of the image decoding apparatus decodes the 0th MB line of the picture, the second decoding engine decodes the 1st MB line, and the third decoding engine decodes the 2nd MB line.
- each decoding engine sequentially decodes macroblocks from the left end to the right end of the MB line.
- When decoding a macroblock, there is a dependency between the decoding target macroblock and the macroblocks to its left, upper left, upper, and upper right. That is, each decoding engine needs the information obtained by decoding those four neighboring macroblocks when decoding the target macroblock. Therefore, each decoding engine starts decoding the target macroblock only after the decoding of those macroblocks is complete.
- If any of the left, upper-left, upper, or upper-right macroblocks does not exist, each decoding engine starts decoding the target macroblock after the remaining neighboring macroblocks have been decoded.
- As a result, the image decoding apparatus decodes in parallel the macroblocks located at knight's-move ("keima") positions relative to one another.
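As an illustrative sketch (not from the patent), the knight's-move schedule follows from the four-neighbour dependency if each macroblock is assumed to take one time step: MB (x, y) can start at step x + 2y, and every dependency then finishes strictly earlier:

```python
# Wavefront schedule implied by the dependency on the left, upper-left,
# upper, and upper-right neighbours. At each step the in-flight macroblocks
# sit two columns apart per row, like knight moves.

def start_step(x, y):
    return x + 2 * y

def deps(x, y, w, h):
    """Neighbouring macroblocks that must be decoded before MB (x, y)."""
    cand = [(x - 1, y), (x - 1, y - 1), (x, y - 1), (x + 1, y - 1)]
    return [(a, b) for a, b in cand if 0 <= a < w and 0 <= b < h]

# Every dependency finishes strictly before its dependent starts
# (assuming one time step per macroblock).
ok = all(start_step(a, b) < start_step(x, y)
         for y in range(4) for x in range(6)
         for a, b in deps(x, y, 6, 4))
print(ok)  # True
```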
- However, when decoding is parallelized in units of MB lines, a slice included in an H.264/AVC picture may be divided across engines.
- each decoding engine needs to have a function for properly recognizing a part of the divided slice as a slice, and the configuration of the image decoding apparatus becomes complicated.
- In view of this, an image decoding apparatus that appropriately executes parallel decoding processing with a simple configuration has been proposed (see, for example, Patent Document 2).
- FIG. 52 is a block diagram showing a configuration of the image decoding apparatus disclosed in Patent Document 2.
- the image decoding apparatus 1100 of Patent Document 2 includes a memory 1150 having a stream buffer 1151, a divided stream buffer 1152, and a frame memory 1153, and a decoder 1110 having a stream dividing unit 1130 and N decoding engines 1120.
- The stream division unit 1130 divides each encoded picture included in the encoded stream accumulated in the stream buffer 1151 into a plurality of macroblock lines, and allocates each of the plurality of macroblock lines to one of N divided streams to be generated (N is an integer of 2 or more).
- the N decoding engines 1120 obtain the N divided streams from the stream dividing unit 1130 via the divided stream buffer 1152, and decode each of the N divided streams in parallel.
- When generating the N divided streams, if a slice included in an encoded picture is divided into a plurality of slice portions allocated to a plurality of divided streams, the stream division unit 1130 reconstructs, for each divided stream, the slice portion group consisting of at least one slice portion allocated to that divided stream as a new slice.
- the encoded picture is divided into a plurality of macroblock lines, and each of the plurality of macroblock lines is allocated to the N decoding engines 1120 as a part of the divided stream and decoded.
- the load of the decoding processing on the N decoding engines 1120 can be equalized, and parallel decoding processing can be executed appropriately.
- Even when an H.264/AVC encoded picture consists of a single slice, the encoded picture is divided into a plurality of macroblock lines, so the decoding of that one slice is not borne by a single decoding engine 1120 but shared equally among the N decoding engines 1120.
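As an illustrative sketch (not from the patent), allocating MB line k to divided stream k mod N spreads even a single-slice picture almost evenly over the N decoding engines:

```python
# Round-robin allocation of MB lines to N divided streams.

def allocate_mb_lines(num_mb_lines, n_engines):
    streams = [[] for _ in range(n_engines)]
    for line in range(num_mb_lines):
        streams[line % n_engines].append(line)
    return streams

streams = allocate_mb_lines(68, 4)   # the 68 MB lines of an HD picture
print([len(s) for s in streams])     # [17, 17, 17, 17]
```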
- a slice extending over the plurality of macroblock lines may be divided into a plurality of slice portions, and these slice portions may be assigned to different divided streams.
- one divided stream does not include the entire slice of the encoded picture, but includes a slice portion group configured by collecting one or more slice portions that are fragments of the slice.
- such a slice portion group may not include a header indicating its head or end information indicating its end.
- Therefore, the decoding engine 1120 that decodes a divided stream including such a slice portion group would normally need special processing to recognize the slice portion group.
- However, since the slice portion group is reconstructed as a new slice, it can easily be recognized as a slice and decoded appropriately without any special processing. That is, in the image decoding apparatus 1100 of Patent Document 2, there is no need to provide each of the N decoding engines 1120 with a function or configuration for special processing, so a conventional decoding engine can be used to decode a divided stream, and the overall configuration of the image decoding apparatus can be simplified.
- the image decoding apparatus of Patent Document 3 performs variable length decoding processing on a plurality of pictures or slices included in an encoded stream in parallel, and stores intermediate data obtained by the variable length decoding processing in an intermediate data buffer. Then, a picture is extracted from the intermediate data stored in the intermediate data buffer, and the picture is decoded in parallel in units of MB lines using a plurality of image decoding processing units.
- Patent Document 1: JP 2006-129284 A
- Patent Document 2: International Publication No. 2010/041472
- Patent Document 3: JP 2008-67026 A
- However, the image decoding apparatus 1100 of Patent Document 2 has the problem that the decoding processing speed cannot be increased sufficiently. That is, when the data amount of the encoded stream is large, the processing speed could be raised by increasing the number of decoding engines 1120 and the degree of parallelism; however, since the number of stream division units 1130 cannot be increased, the processing speed cannot be raised.
- Conceivably, the image decoding apparatus 1100 of Patent Document 2 could generate the N divided streams in parallel for a plurality of pictures or slices included in the encoded stream, as in the image decoding apparatus of Patent Document 3.
- In that case, however, the stream division unit 1130 would require processing and a configuration for making the N decoding engines 1120 recognize the divided streams to be decoded in parallel. Furthermore, the N decoding engines 1120 would require processing and a configuration for finding the divided streams to be decoded in parallel. Since each component of the image decoding apparatus 1100 of Patent Document 2 would thus require additional processing and configuration changes, the overall configuration of the image decoding apparatus becomes complicated.
- the present invention has been made in view of such a problem, and an object thereof is to provide an image decoding apparatus and an image decoding method that appropriately execute parallel decoding processing with a simple configuration. Another object of the present invention is to provide an image encoding apparatus and an image encoding method corresponding to these apparatuses and methods.
- In order to achieve the above object, an image decoding apparatus according to one aspect of the present invention is an image decoding apparatus that decodes an encoded stream in which image data is encoded, and includes: a first division control unit that designates a processing target region included in the encoded stream; M stream division units (M is an integer of 2 or more), each of which, every time a processing target region is designated by the first division control unit, executes a stream division process of generating N divided streams (N is an integer of 2 or more) from the processing target region; a second division control unit that selects a part of at least one divided stream from the M × N divided streams generated by the M stream division units; and N decoding units that, each time a part of at least one divided stream is selected by the second division control unit, decode in parallel the parts of the N divided streams including the selected part.
- Each of the M stream division units executes the stream division process by dividing the processing target region into a plurality of structural units and allocating each of the plurality of structural units to one of the N divided streams to be generated. When a slice included in the processing target region is thereby divided into a plurality of slice portions allocated to a plurality of divided streams, each stream division unit reconstructs, for each divided stream, the slice portion group consisting of at least one slice portion allocated to that divided stream as a new slice.
- stream division processing is executed in parallel on M processing target areas (for example, slices or pictures). Therefore, when the data amount of the encoded stream is large, the processing speed can be increased by increasing the number of decoding units and the number of parallel processes, and the number of stream division units is also increased. Can be increased and the processing speed can be increased.
- M processing target areas are designated for M stream division units. That is, the stream division processing for dividing the processing target area into a plurality of structural units (for example, macroblock lines) is distributed to each of the M stream division units.
- Here, the order relationship of the plurality of processing target regions included in the encoded stream is not maintained among the M × N divided streams generated by the M stream division units, so the M × N divided streams cannot be decoded as they are. Therefore, in the image decoding apparatus according to this aspect of the present invention, for each designated processing target region, a part of at least one divided stream is selected from the M × N divided streams generated by the M stream division units, based on the arrangement of the processing target regions, that is, the decoding order of the processing target regions in the encoded stream. For example, if the parts of the N divided streams corresponding to a processing target region are stored in the same buffer (divided stream buffer), that buffer is selected. The parts of the N selected divided streams are then decoded in parallel. Therefore, the M × N divided streams can be decoded in the correct order. Furthermore, in the image decoding apparatus according to this aspect, the designation of processing target regions and the selection of parts of the divided streams are performed centrally by components distinct from the M stream division units and the N decoding units.
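A toy model (with purely illustrative names, not from the patent) of this two-level scheme: regions are handed round-robin to M dividers, each divider emits N parts, and the selection step then visits the parts region by region in the original decoding order, regardless of which divider produced them:

```python
M, N = 2, 3
regions = [f"region{i}" for i in range(6)]

# first division control: region i is handed to divider i mod M
buffers = {}
for i, r in enumerate(regions):
    divider = i % M
    buffers[i] = [f"{r}/part{j}@div{divider}" for j in range(N)]

# second division control: select the N parts per region in decoding order
schedule = [buffers[i] for i in sorted(buffers)]
print([parts[0].split("/")[0] for parts in schedule])
# ['region0', 'region1', 'region2', 'region3', 'region4', 'region5']
```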
- Preferably, each of the M stream division units analyzes each piece of first header information included in the encoded stream, regardless of the designated processing target region, and generates the N divided streams based on the analysis result.
- Since each stream division unit analyzes each piece of first header information, such as the SPS, PPS, and slice headers, included in the encoded stream, the N divided streams can be generated appropriately based on the reference relationship between each processing target region included in the encoded stream and each piece of first header information.
- Preferably, one of the M stream division units generates N divided streams including second header information included in the encoded stream, while all the other stream division units of the M stream division units generate N divided streams that do not include the second header information.
- Since the second header information such as the SPS or PPS is included in only one divided stream and not in the other N − 1 divided streams, duplicate processing of the second header information by the N decoding units is prevented, and degradation of the decoding units' processing performance due to duplicate decoding of the second header information can be avoided.
- Preferably, the second division control unit further generates selection information indicating the selected parts of the divided streams and outputs the selection information to each of the N decoding units, and the N decoding units decode in parallel the parts of the N divided streams indicated by the selection information output from the second division control unit.
- the N decoding units can decode a part of each of the N divided streams in parallel with the correct order relationship using the selection information.
- Preferably, the second division control unit outputs, to each of the N decoding units, selection information including the data size of the selected part of each divided stream, and the N decoding units identify the parts of the N divided streams based on the data size included in the selection information output from the second division control unit, and decode those parts in parallel.
- the second division control unit outputs the selection information including the number of data configuration units constituting each of the N divided streams or the bit amount as the size.
- the N decoding units can appropriately recognize and decode portions to be decoded in parallel from each of the N divided streams. Further, when the selection information indicates the number of data constituent units (for example, H.264 / AVC NAL unit), the contents indicated by the selection information can be simplified.
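As an illustrative sketch (not from the patent), selection information given as a count of data constituent units lets each decoding unit cut the correct part from its own divided stream:

```python
# Selection information expressed as a count of data constituent units
# (e.g. NAL units): each decoding unit pops exactly that many units from
# its own divided stream for the current processing-target region.

def take_units(stream, count):
    part, rest = stream[:count], stream[count:]
    return part, rest

stream = ["nal0", "nal1", "nal2", "nal3", "nal4"]
selection_info = 2          # "decode the next 2 NAL units of this stream"
part, stream = take_units(stream, selection_info)
print(part)    # ['nal0', 'nal1']
print(stream)  # ['nal2', 'nal3', 'nal4']
```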
- Preferably, the first division control unit further determines, for each stream division unit, whether that stream division unit has completed the stream division processing for one processing target region, and when it determines that the processing has been completed, preferentially designates a new processing target region for the stream division unit that has finished.
- Thus, when a stream division unit completes the stream division processing for one processing target region, the stream division processing for a new processing target region is allocated to that stream division unit, so the processing amounts of the M stream division units can be equalized.
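As an illustrative sketch (not from the patent), this "assign the next region to whichever divider finished first" policy is a greedy list schedule; here `heapq` tracks when each divider becomes free:

```python
import heapq

def assign_regions(region_costs, m_dividers):
    """Greedily assign each region (given its illustrative processing cost)
    to the divider that becomes free earliest."""
    # (time at which the divider becomes free, divider id)
    free_at = [(0, d) for d in range(m_dividers)]
    heapq.heapify(free_at)
    assignment = []
    for cost in region_costs:
        t, d = heapq.heappop(free_at)       # earliest-finished divider
        assignment.append(d)
        heapq.heappush(free_at, (t + cost, d))
    return assignment

# One expensive region does not stall the pipeline: divider 0 takes it,
# divider 1 absorbs the cheap regions in the meantime.
print(assign_regions([5, 1, 1, 1, 1, 1], 2))  # [0, 1, 1, 1, 1, 1]
```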
- Preferably, the N decoding units include first and second decoding units. The first decoding unit decodes a first slice portion included in the divided stream assigned to the first decoding unit among the N divided streams, and the second decoding unit decodes a second slice portion included in the divided stream assigned to the second decoding unit among the N divided streams. When decoding the second slice portion, the second decoding unit obtains, from the first decoding unit, adjacency information generated by decoding the first slice portion and decodes the second slice portion using the adjacency information, or decodes the second slice portion without using the adjacency information.
- Thus, even when the first slice portion and the second slice portion, which are included in different divided streams but adjacent to each other, are decoded individually by the first and second decoding units, the adjacency information is passed from the first decoding unit to the second decoding unit, so the second slice portion can be decoded appropriately.
- the first division control unit designates a slice, a picture, or a picture group including a plurality of pictures included in the encoded stream as the processing target area.
- When a slice is designated as the processing target region, the stream division processing is distributed to the M stream division units at the finest granularity, which makes it easy to equalize the processing amounts of the M stream division units.
- When a picture or a picture group is designated as the processing target region, the stream division processing is distributed to the M stream division units at a relatively coarse granularity, which reduces the load of designating processing target regions by the first division control unit and of selecting the N divided streams by the second division control unit.
- Preferably, the n-th decoding unit (n is an integer from 1 to N) decodes the n-th divided stream among the N divided streams generated by each stream division unit. In other words, each decoding unit preferably decodes the parts of the divided streams sequentially selected from M predetermined divided streams among the M × N divided streams. As a result, each of the N decoding units decodes the parts of the divided streams sequentially selected by the second division control unit, decoding M of the M × N divided streams. Thus, each of the N decoding units decodes only the divided streams assigned to it.
- the stream dividing unit may perform the stream dividing process only on the designated processing target area by skipping the processing target area not designated by the first division control unit. Thereby, the stream dividing unit can process only the designated processing target area without performing complicated processing.
- the header information may be header information of a picture layer or higher.
- the stream division units analyze the header information of the picture layer or higher. Therefore, each stream division unit can appropriately generate divided streams.
- An image encoding apparatus according to an aspect of the present invention is an apparatus that generates an encoded stream by encoding image data, and includes: N (N is an integer of 2 or more) encoding units that generate N divided streams by encoding, in parallel, a plurality of constituent units included in each picture of the image data; a first combining control unit that designates processing target areas constituting the encoded stream; and M (M is an integer of 2 or more) stream combining units that execute, in parallel for the M processing target areas designated by the first combining control unit, a combining process of generating a combined coding region, which is the processing target area, by combining the partial regions corresponding to that processing target area included in each of the N divided streams. Multiplexing is then performed on the M combined coding regions generated by the M stream combining units, based on the arrangement, in the encoded stream, of the M processing target areas designated by the first combining control unit.
- Each of the M stream combining units, when performing the combining process in the case where the partial region is composed of a plurality of encoded constituent units, divides the partial region into the plurality of encoded constituent units and recombines them to generate the combined coding region. When performing the recombination, if a slice included in the image data has been divided into a plurality of slice portions, encoded, and assigned to the N divided streams, a slice portion group including the plurality of encoded slice portions is reconstructed as a new slice in the combined coding region.
- the picture is divided into structural units such as a plurality of macroblock lines, and each of the plurality of macroblock lines is allocated and encoded by N encoding units.
- the burden of the encoding process can be equalized, and the parallel processing of encoding can be executed appropriately.
- even when a picture encoded by H.264/AVC is composed of one slice, the picture is divided into a plurality of macroblock lines, so that the encoding of that one slice is not borne by a single encoding unit but can be shared equally among the N encoding units.
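- A minimal sketch of the load equalization described above, assuming a simple round-robin of macroblock lines over the N encoding units (the function and counts are illustrative, not taken from the specification):

```python
def mb_lines_per_encoder(num_mb_lines, n_encoders):
    """Round-robin MB line k to encoder k mod N and count the lines
    each encoder receives; the counts differ by at most one line,
    so the encoding burden is (nearly) equalized."""
    counts = [0] * n_encoders
    for k in range(num_mb_lines):
        counts[k % n_encoders] += 1
    return counts
```

For example, a 68-MB-line picture split over 4 encoders gives each encoder 17 lines, regardless of how the picture is partitioned into slices.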
- the combining process (stream combining process) is distributed to the M stream combining units in units of processing target areas by the first combining control unit's designation of processing target areas, and can therefore be executed in parallel by the M stream combining units.
- a slice straddling the plurality of macroblock lines is divided into a plurality of slice portions, and these slice portions are sequentially assigned to the divided streams. That is, slice portions that are fragments of a slice are distributed across the divided streams. The plurality of slice portions dispersed in this way therefore lose the continuity they had in the image data. Consequently, when a plurality of continuous macroblock lines have a dependency based on a predetermined code word, the distributed slice portions cannot maintain that dependency, and an encoded stream conforming to the encoding method cannot be generated as it is.
- a slice portion group, that is, a set of the plurality of distributed slice portions, is reconstructed as a new slice, so that the combined coding region (for example, a slice or a picture) including the slice portion group can be formatted in conformity with the coding method.
- in the image encoding device, for each designated processing target area, combined coding regions to be multiplexed are sequentially selected from the M combined coding regions generated by the M stream combining units, based on the arrangement of the processing target areas, that is, the encoding order of the processing target areas in the encoded stream. The M combined coding regions are then multiplexed in the selected order, so they can be multiplexed in the correct order relationship. Furthermore, in the image encoding device according to an aspect of the present invention, the designation of the processing target regions and the selection of the combined coding regions to be multiplexed are performed separately from the combining and encoding processes by the M stream combining units and the N encoding units.
- each time the second combination control unit selects a joint coding region to be multiplexed, it generates selection information indicating that joint coding region and outputs the selection information to the multiplexing unit.
- when the multiplexing unit acquires the selection information from the second combination control unit, it multiplexes the joint coding region indicated by the selection information into the encoded stream.
- the multiplexing unit can multiplex the M joint coding regions in the correct order using the selection information.
- the second combination control unit outputs, to the multiplexing unit, selection information including the data size of the selected joint coding region, and the multiplexing unit multiplexes the joint coding region of the size included in the selection information into the encoded stream.
- even when joint coding regions are sequentially generated by the stream combining units through repeated designation of processing target regions by the first combining control unit, the multiplexing unit can appropriately identify, among the plurality of generated joint coding regions, the joint coding region to be multiplexed, and multiplex it.
- the second combination control unit outputs the selection information including the number of data configuration units constituting the combined coding area or the bit amount as the size.
- the selection information indicates, as the size, the number of data constituent units (for example, H.264/AVC NAL units), so the contents indicated by the selection information can be simplified.
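- To illustrate how size-bearing selection information could drive the multiplexing unit, here is a hedged sketch in which selection information is modeled as (region id, NAL-unit count) pairs; this data model is an assumption for illustration, not the specification's actual format.

```python
def multiplex(selection_infos, combined_regions):
    """Append joint coding regions to the output stream in the order
    given by the selection information; the NAL-unit count serves as
    the size that delimits each region."""
    out = []
    for region_id, nal_count in selection_infos:
        nal_units = combined_regions[region_id]
        # The size in the selection information tells the multiplexer
        # how many data constituent units belong to this region.
        assert len(nal_units) == nal_count
        out.extend(nal_units)
    return out
```

The order of the selection-information entries, chosen by the combining control unit, fully determines the output order, which is how the correct order relationship is preserved.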
- the first combining control unit further determines, for each stream combining unit, whether or not the combining process executed by that stream combining unit has ended, and preferentially designates a new processing target area for a stream combining unit that has completed its combining process.
- the N encoding units include first and second encoding units. The first encoding unit encodes a first constituent unit assigned to it among the N constituent units, and the second encoding unit encodes a second constituent unit assigned to it among the N constituent units. Before the second constituent unit is encoded by the second encoding unit, the first encoding unit starts encoding the first constituent unit. The second encoding unit acquires, from the first encoding unit, adjacent information generated by the first encoding unit's encoding of the first constituent unit, and encodes the second constituent unit using the adjacent information; otherwise, it encodes the second constituent unit without using the adjacent information.
- the processing target region is divided into a plurality of structural units such as a macroblock line, and each of the first and second structural units is encoded in parallel by the first and second encoding units.
- the adjacent information can be appropriately used to encode the second structural unit.
- the stream combining unit may skip, in each of the N divided streams, the partial areas corresponding to processing target areas not designated by the first combining control unit, and perform the combining process only on the partial areas corresponding to the designated processing target areas.
- the stream combining unit can process only the partial region corresponding to the designated processing target region without performing complicated processing.
- the processing target area may be a slice. Accordingly, since the processing target area, that is, the combining process is distributed to the M stream combining units with the minimum granularity, it is easy to equalize the processing amount of the M stream combining units.
- the processing target area may be a picture or a picture group composed of a plurality of pictures.
- in this case, since the processing target area, that is, the combining process is distributed to the M stream combining units at a relatively large granularity, it is possible to reduce the load of each process of designating the processing target regions and selecting the joint coding regions by the first and second combining control units.
- the present invention can be realized not only as such an image decoding device and an image encoding device, but also as a method and program thereof, a storage medium storing the program, and an integrated circuit.
- the image decoding apparatus and the image encoding apparatus according to the present invention have an operational effect that parallel processing of decoding and encoding can be appropriately executed with a simple configuration.
- FIG. 1 is a block diagram showing a configuration of an image decoding apparatus according to Embodiment 1 of the present invention.
- FIG. 2A is a diagram showing a decoding order when a picture according to Embodiment 1 of the present invention is not composed of MBAFF.
- FIG. 2B is a diagram showing a decoding order in the case where the picture according to Embodiment 1 of the present invention is composed of MBAFF.
- FIG. 3 is an explanatory diagram for explaining slice header insertion processing according to Embodiment 1 of the present invention.
- FIG. 4 is an explanatory diagram for explaining the MB address information update processing according to Embodiment 1 of the present invention.
- FIG. 5 is an explanatory diagram for explaining slice termination processing according to Embodiment 1 of the present invention.
- FIG. 6A is a diagram showing an encoded stream according to Embodiment 1 of the present invention.
- FIG. 6B is an explanatory diagram showing a specific example of slice distribution processing by the stream division control unit according to Embodiment 1 of the present invention.
- FIG. 7 is a diagram showing a state of the divided stream buffer when slice distribution and stream division processing are performed by the stream division control unit according to Embodiment 1 of the present invention.
- FIG. 8 is a diagram illustrating an example of a format of selection information when slice distribution and stream division processing are performed by the stream division control unit according to Embodiment 1 of the present invention.
- FIG. 9 is a flowchart showing the overall operation of the image decoding apparatus according to Embodiment 1 of the present invention.
- FIG. 10 is a block diagram showing the configuration of the stream division unit in Embodiment 1 of the present invention.
- FIG. 11 is an explanatory diagram for explaining operations of the slice header insertion unit and the slice data processing unit according to Embodiment 1 of the present invention.
- FIG. 12 is a block diagram showing the configuration of the slice header insertion unit in Embodiment 1 of the present invention.
- FIG. 13 is a diagram showing MB lines and slice headers allocated to the first to fourth areas of the divided stream buffer according to Embodiment 1 of the present invention.
- FIG. 14A is a diagram showing a position where slice end information is set in Embodiment 1 of the present invention.
- FIG. 14B is a diagram showing a position where slice end information is set in Embodiment 1 of the present invention.
- FIG. 15 is a flowchart showing the operation of the dividing point detection unit according to Embodiment 1 of the present invention.
- FIG. 16A is an explanatory diagram for explaining the MB skip run information correction processing according to Embodiment 1 of the present invention.
- FIG. 16B is an explanatory diagram for explaining the MB skip run information correction processing according to Embodiment 1 of the present invention.
- FIG. 17 is a block diagram showing a configuration of a skip run correction unit according to Embodiment 1 of the present invention.
- FIG. 18 is a flowchart showing the MB skip run information correction operation by the skip run correction unit according to the first embodiment of the present invention.
- FIG. 19A is an explanatory diagram for explaining the correction process for the QP change amount according to Embodiment 1 of the present invention.
- FIG. 19B is an explanatory diagram for explaining a QP variation correction process according to Embodiment 1 of the present invention.
- FIG. 20 is an explanatory diagram for explaining the accumulation of QP variation in the first embodiment of the present invention.
- FIG. 21 is a flowchart showing a QP variation correction process by the QP delta correction unit according to the first embodiment of the present invention.
- FIG. 22A is an explanatory diagram for describing high-resolution decoding according to Embodiment 1 of the present invention.
- FIG. 22B is an explanatory diagram for describing high-speed decoding according to Embodiment 1 of the present invention.
- FIG. 22C is an explanatory diagram for explaining multi-channel decoding in Embodiment 1 of the present invention.
- FIG. 23 is a block diagram showing the configuration of the image decoding apparatus according to Embodiment 2 of the present invention.
- FIG. 24 is a block diagram showing the configuration of the stream division unit in Embodiment 2 of the present invention.
- FIG. 25 is an explanatory diagram for explaining MB skip run information correction processing and QP change amount insertion processing in Embodiment 2 of the present invention.
- FIG. 26 is a block diagram showing a configuration of a skip run correction unit according to Embodiment 2 of the present invention.
- FIG. 27 is a flowchart showing the MB skip run information correction operation by the skip run correction unit according to the second embodiment of the present invention.
- FIG. 28 is a flowchart showing processing for inserting the accumulated QP variation by the QP delta insertion unit according to Embodiment 2 of the present invention.
- FIG. 29 is a block diagram showing the configuration of the image coding apparatus according to Embodiment 3 of the present invention.
- FIG. 30A is a diagram showing an encoding order when pictures in Embodiment 3 of the present invention are not configured with MBAFF.
- FIG. 30B is a diagram showing an encoding order in the case where a picture according to Embodiment 3 of the present invention is composed of MBAFF.
- FIG. 31 is an explanatory diagram for explaining slice header insertion processing and slice end processing according to Embodiment 3 of the present invention.
- FIG. 32A is a diagram showing a divided stream according to Embodiment 3 of the present invention.
- FIG. 32B is an explanatory diagram showing a specific example of slice distribution processing by the stream combination control unit according to Embodiment 3 of the present invention.
- FIG. 33 is a diagram showing the state of the partial stream buffer when slice distribution and stream division processing are performed by the stream combination control unit according to Embodiment 3 of the present invention.
- FIG. 34 is a diagram illustrating an example of a format of selection information when slice distribution and stream division processing are performed by the stream combination control unit according to Embodiment 3 of the present invention.
- FIG. 35 is a block diagram showing the configuration of the stream combining unit in Embodiment 3 of the present invention.
- FIG. 36 is an explanatory diagram for explaining MB skip run information correction processing according to Embodiment 3 of the present invention.
- FIG. 37A is an explanatory diagram for describing the correction process of the QP variation amount according to Embodiment 3 of the present invention.
- FIG. 37B is an explanatory diagram for explaining the QP change amount correction processing according to Embodiment 3 of the present invention.
- FIG. 38A is a block diagram showing a configuration of an image encoding device including only one stream combining unit according to Embodiment 3 of the present invention.
- FIG. 38B is a flowchart showing an operation of the image coding apparatus including only one stream combining unit according to Embodiment 3 of the present invention.
- FIG. 39 is a diagram illustrating an application example of the image decoding device and the image coding device according to the present invention.
- FIG. 40 is a block diagram showing the minimum configuration of the image decoding apparatus according to the present invention.
- FIG. 41 is a flowchart showing an image decoding method by the image decoding apparatus according to the present invention.
- FIG. 42 is a block diagram showing a minimum configuration of an image encoding device according to the present invention.
- FIG. 43 is a flowchart showing an image coding method by the image coding apparatus according to the present invention.
- FIG. 44 is a diagram illustrating a configuration of a picture to be encoded.
- FIG. 45 is a diagram illustrating a configuration of an encoded stream.
- FIG. 46 is a diagram showing a configuration of a conventional image decoding apparatus.
- FIG. 47 is a diagram showing a configuration of a conventional decoding engine.
- FIG. 48 is an explanatory diagram for explaining HD and 4k2k.
- FIG. 49 is a block diagram showing a configuration of an image decoding apparatus that executes conventional decoding parallel processing.
- FIG. 50A is an explanatory diagram for explaining an example of conventional decoding parallel processing.
- FIG. 50B is an explanatory diagram for explaining an example of conventional decoding parallel processing.
- FIG. 51 is an explanatory diagram for explaining a decoding process performed by a conventional image decoding apparatus.
- FIG. 52 is a block diagram showing a configuration of an image decoding apparatus including a conventional stream dividing unit.
- FIG. 1 is a block diagram showing a configuration of an image decoding apparatus according to Embodiment 1 of the present invention.
- the image decoding apparatus 100 is an apparatus that appropriately executes parallel decoding processing with a simple configuration, and includes a decoder 110 and a memory 150.
- the memory 150 has an area for storing data input to the decoder 110, data generated intermediately by the decoder 110, and data finally generated and output by the decoder 110.
- the memory 150 includes a stream buffer 151, M divided stream buffers (first divided stream buffer to Mth divided stream buffer) 152, and a frame memory 153.
- the stream buffer 151 stores the encoded stream generated and transmitted by the image encoding device.
- M ⁇ N divided streams generated by the decoder 110 are stored as the above-described intermediately generated data.
- Each divided stream buffer 152 has an area allocated to each of the N decoding engines 120.
- the N divided decoded image data generated by the N decoding engines (decoding units) 120 are stored as data that is finally generated and output.
- the decoded image data is stored in the frame memory 153, read into the display device, and displayed as a moving image.
- the decoder 110 reads out and decodes the encoded stream stored in the stream buffer 151 of the memory 150 to generate decoded image data, and stores the decoded image data in the frame memory 153 of the memory 150.
- the decoder 110 includes M stream division units (first stream division unit to Mth stream division unit) 130, a stream division control unit 140, and N decoding engines (first decoding engine to Nth decoding engine) 120.
- the decoding engine 120 in this embodiment has a processing capability capable of decoding two channels of HD images (1920 ⁇ 1088 pixels, 60i).
- the stream division control unit 140 acquires mode information described later and, in order to equalize the amount of stream division processing among the M stream division units 130 according to the mode information, notifies each stream division unit 130 of distribution control information for distributing the stream division processing in predetermined units.
- the stream division process will be described later.
- the stream division control unit 140 will be described assuming that the stream division processing in the M stream division units 130 is distributed in units of slices.
- based on the notification of the distribution control information, the stream division control unit 140 causes, for each processing target area (slice) included in the encoded stream, any one of the M stream division units 130 to execute the stream division processing on that processing target area.
- the distribution control information indicates a slice number for identifying a slice to be subjected to stream division processing.
- the stream division control unit 140 transmits the distribution control information to any one of the M stream division units 130, thereby designating, to that stream division unit 130, the processing target area to be subjected to the stream division processing.
- based on the result of distributing the stream division processing to the M stream division units 130 in slice units, the stream division control unit 140 notifies the N decoding engines 120 of selection information indicating from which of the M divided stream buffers 152 each divided stream should be acquired.
- Each of the M stream division units 130 acquires the mode information and the distribution control information and, in accordance with them, extracts the slice to be processed (the processing target area) from the encoded stream and divides that slice into N divided streams (first divided stream to Nth divided stream) in parallel with the other stream division units. That is, the stream division unit 130 in this embodiment divides each slice distributed to it by the stream division control unit 140 into one or more MB lines. Then, the stream division unit 130 generates the N divided streams by assigning each of the plurality of MB lines to any one of the N divided streams to be generated.
- the above-described stream division process is a process of dividing a slice (processing target area) into a plurality of MB lines and assigning each MB line to any one of the N divided streams.
- the M stream division units 130 generate N × M divided streams by executing the stream division processing in parallel.
- the MB line is a structural unit having one column composed of a plurality of macroblocks arranged in the horizontal direction from the left end to the right end of the picture.
- the stream division unit 130 divides the processing target area (slice) into a plurality of MB lines; however, when the picture is configured with MBAFF, two MB lines are handled as one structural unit (hereinafter referred to as an MB line pair), and the processing target area (slice) is divided into a plurality of MB line pairs.
- in other words, when the picture is configured with MBAFF, the stream division unit 130 divides the processing target area (slice) into a plurality of MB line pairs and assigns the two MB lines belonging to each MB line pair to the same divided stream.
- hereinafter, the description proceeds on the assumption that the picture is not configured with MBAFF. However, where different processing is required depending on whether or not the picture is configured with MBAFF, the processing unique to MBAFF will also be described.
- otherwise, by replacing "MB line" with "MB line pair", the description can be read as a description of the case where the picture is composed of MBAFF.
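- The division rule described above, round-robin assignment of MB lines in which the two lines of an MBAFF pair must land in the same divided stream, can be sketched as follows. This is a simplified model that ignores headers and entropy-coding dependencies; the function name and interface are assumptions for illustration (an even number of MB lines is assumed in the MBAFF case).

```python
def split_into_divided_streams(num_mb_lines, n_streams, mbaff):
    """Assign MB line indices to N divided streams in round-robin
    order; under MBAFF, each MB line pair (2 lines) stays together
    in the same divided stream."""
    streams = [[] for _ in range(n_streams)]
    step = 2 if mbaff else 1  # one MB line, or one MB line pair
    for unit, start in enumerate(range(0, num_mb_lines, step)):
        streams[unit % n_streams].extend(range(start, start + step))
    return streams
```

Swapping the MB line for the MB line pair changes only the step size, which mirrors the "replace MB line with MB line pair" reading rule stated above.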
- Each of the M stream division units 130 stores the N divided streams generated by its division into the one divided stream buffer 152 associated with that stream division unit 130 among the M divided stream buffers 152. That is, the first stream division unit 130 stores its N divided streams in the first divided stream buffer 152, the second stream division unit 130 stores its N divided streams in the second divided stream buffer 152, and the Mth stream division unit 130 stores its N divided streams in the Mth divided stream buffer 152.
- Each of the M stream division units 130 handles a slice as the predetermined unit, and when dividing the slice into a plurality of MB lines, if a header exists in the encoded stream immediately before an MB line or between two macroblocks belonging to an MB line, the stream division unit attaches the header to that MB line and assigns it with the MB line to one of the divided streams.
- a slice arranged over a plurality of MB lines included in the picture is thus divided MB line by MB line by the stream division unit 130. Further, when dividing the stream into N divided streams, the stream division unit 130 removes the dependency, in the variable-length decoding process, between macroblocks that straddle different divided streams.
- the N decoding engines 120 acquire the mode information and the selection information, read the divided streams to be processed by each of them from the M divided stream buffers 152 in accordance with the mode information and the selection information, and generate N divided decoded image data by decoding the read divided streams in parallel.
- For example, the first decoding engine 120 reads the first divided stream from the area allocated to the first decoding engine 120 in the first divided stream buffer 152; the second decoding engine 120 reads the second divided stream from the area allocated to the second decoding engine 120 in the first divided stream buffer 152; the third decoding engine 120 reads the third divided stream from the area allocated to the third decoding engine 120 in the first divided stream buffer 152; and the fourth decoding engine 120 reads the fourth divided stream from the area allocated to the fourth decoding engine 120 in the first divided stream buffer 152. The first to fourth decoding engines 120 then decode the first to fourth divided streams in parallel.
- Similarly, the first decoding engine 120 reads the first divided stream from the area allocated to the first decoding engine 120 in the Mth divided stream buffer 152; the second decoding engine 120 reads the second divided stream from the area allocated to the second decoding engine 120 in the Mth divided stream buffer 152; the third decoding engine 120 reads the third divided stream from the area allocated to the third decoding engine 120 in the Mth divided stream buffer 152; and the fourth decoding engine 120 reads the fourth divided stream from the area allocated to the fourth decoding engine 120 in the Mth divided stream buffer 152. The first to fourth decoding engines 120 then decode the first to fourth divided streams in parallel.
- when decoding a macroblock encoded by intra-frame prediction contained in a divided stream, each of the N decoding engines 120 acquires, as adjacent MB information, information on the decoded macroblocks located at the upper left, above, and upper right of the decoding target macroblock from the decoding engine 120 that decoded them.
- the decoding engine 120 that has acquired the adjacent MB information decodes the decoding target macroblock using the adjacent MB information. Further, even when performing, for example, deblocking filtering or motion vector prediction processing, the decoding engine 120 similarly obtains, as adjacent MB information, the information of the decoded macroblocks at the upper left, above, and upper right of the macroblock to be processed, and performs the processing described above.
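- The neighbor positions that must be fetched as adjacent MB information can be sketched as follows; the boundary rules are standard macroblock geometry, but the function itself is an illustrative assumption, not a component of the apparatus.

```python
def adjacent_mb_positions(x, y, width_mbs):
    """Return the upper-left, upper, and upper-right neighbours of
    macroblock (x, y) that lie inside the picture; these are the
    decoded macroblocks whose information is fetched as adjacent
    MB information from the engine that decoded the line above."""
    candidates = [(x - 1, y - 1), (x, y - 1), (x + 1, y - 1)]
    return [(cx, cy) for cx, cy in candidates
            if 0 <= cx < width_mbs and cy >= 0]
```

Note that all three neighbours lie on the MB line above, which is decoded by a different engine under the MB-line distribution; the left neighbour on the same line needs no inter-engine transfer.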
- FIG. 2A is a diagram showing a decoding order when a picture is not composed of MBAFF.
- the first decoding engine 120 decodes the 0th MB line, the second decoding engine 120 decodes the 1st MB line, the third decoding engine 120 decodes the 2nd MB line, and the fourth decoding engine 120 decodes the 3rd MB line.
- the kth (k is an integer of 0 or more) MB line indicates the MB line at the kth position from the upper end of the picture; for example, the 0th MB line is the MB line at the upper end of the picture.
- First, the first decoding engine 120 starts decoding the 0th MB line. When the decoding of the two leftmost macroblocks of the 0th MB line is completed, the second decoding engine 120 starts decoding the leftmost macroblock of the 1st MB line. When the decoding of the two leftmost macroblocks of the 1st MB line is completed, the third decoding engine 120 starts decoding the leftmost macroblock of the 2nd MB line. Similarly, when the decoding of the two leftmost macroblocks of the 2nd MB line is completed, the fourth decoding engine 120 starts decoding the leftmost macroblock of the 3rd MB line.
- the (k + 1) th MB line is decoded from the leftmost macroblock to the rightmost macroblock with a delay of two macroblocks compared to the kMB line.
- FIG. 2B is a diagram showing a decoding order when a picture is composed of MBAFF.
- an MB line pair is, as described above, a structural unit having two columns (MB lines) each composed of a plurality of macroblocks arranged in the horizontal direction from the left end to the right end of the picture.
- the MB line pair is adaptively frame/field encoded for each pair of two vertically adjacent macroblocks (macroblock pair).
- in each macroblock pair, the upper macroblock is decoded first, and then the lower macroblock is decoded.
- the first decoding engine 120 decodes the 0th MB line pair, the second decoding engine 120 decodes the 1st MB line pair, the third decoding engine 120 decodes the 2nd MB line pair, and the fourth decoding engine 120 decodes the 3rd MB line pair.
- the kth (k is an integer of 0 or more) MB line pair indicates the kth structural unit, counted from the upper end of the picture, composed of two MB lines.
- for example, the 0th MB line pair is the unit at the upper end of the picture composed of the first two MB lines.
- First, the first decoding engine 120 starts decoding the 0th MB line pair. When the decoding of the two leftmost macroblock pairs of the 0th MB line pair is completed, the second decoding engine 120 starts decoding the upper-left macroblock of the 1st MB line pair. When the decoding of the two leftmost macroblock pairs of the 1st MB line pair is completed, the third decoding engine 120 starts decoding the upper-left macroblock of the 2nd MB line pair. Similarly, when the decoding of the two leftmost macroblock pairs of the 2nd MB line pair is completed, the fourth decoding engine 120 starts decoding the upper-left macroblock of the 3rd MB line pair.
- That is, the (k+1)-th MB line pair is decoded from the leftmost macroblock pair to the rightmost macroblock pair with a delay of two macroblock pairs compared to the k-th MB line pair.
- the (k + 1) th MB line or the (k + 1) th MB line pair is the kMB line or the kMB line in each of the case where the picture is not composed of MBAFF and the case where the picture is composed of MBAFF.
- Decoding should be delayed by at least two macroblocks or two macroblock pairs compared to the pair. That is, the decoding may be delayed by 3 macroblocks or 3 macroblock pairs.
- the (k + 1) th MB line or the (k + 1) th MB line pair is decoded with a delay of 2 macroblocks or 2 macroblock pairs compared to the kMB line or the kMB line pair.
- the time required for decoding a picture can be minimized, and when decoding is delayed by 3 macroblocks or 3 macroblock pairs or more, the time required for decoding pictures according to the amount of delay Becomes longer.
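The staggered decoding order described above can be modeled as follows. This is a hypothetical sketch (the function names and the one-macroblock-per-step timing model are ours, not part of the embodiment); it assumes one decoding engine per MB line and one macroblock decoded per time step.

```python
def earliest_decode_step(row: int, col: int, delay: int = 2) -> int:
    """Earliest time step at which the macroblock at (row, col) can start
    decoding, when each MB line has its own engine and line row+1 is
    staggered `delay` macroblocks behind line row."""
    return row * delay + col

def picture_decode_time(mb_rows: int, mb_cols: int, delay: int = 2) -> int:
    """Total steps to decode the picture: the bottom MB line starts after
    (mb_rows - 1) * delay steps and then decodes mb_cols macroblocks."""
    return (mb_rows - 1) * delay + mb_cols
```

With the minimum delay of 2, a picture of 68 MB rows and 120 MB columns finishes in (68 - 1) * 2 + 120 = 254 steps, while a delay of 3 raises this to 321 steps, illustrating why the two-macroblock delay minimizes the decoding time.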
- A feature of the image decoding apparatus 100 is that it reconstructs, as one new slice, a slice portion group consisting of one or more portions (slice portions) of a slice generated by the division performed by the stream dividing unit 130.
- the slice reconstruction includes slice header insertion processing, slice termination processing, MB address information update processing, skip run correction processing, and QP delta setting processing.
- the QP delta setting process includes a QP delta correction process and a QP delta insertion process. In the present embodiment, a case where the QP delta setting process is a QP delta correction process will be described.
- FIG. 3 is an explanatory diagram for explaining slice header insertion processing.
- The M stream dividing units 130 divide the picture p1 shown in FIG. 3.
- the entire M stream dividing units 130 are referred to as a stream dividing unit group 130a.
- the processing target area that is a unit of distribution by the stream division control unit 140 is described as a slice. Therefore, when the picture p1 is composed of a plurality of slices, each stream dividing unit 130 divides the slice included in the picture p1. As a result, the stream dividing unit group 130a that is the entirety of the M stream dividing units 130 divides the picture p1 as described below. Note that when the processing target area that is a unit of distribution by the stream division control unit 140 is a picture, one stream division unit 130 divides the picture p1 as described below.
- the picture p1 is composed of a slice A, a slice B, and a slice C, and is composed of MB lines L1 to L12.
- the slice A is arranged across the MB lines L1 to L7, and has a slice header ha and a plurality of macroblocks mba arranged continuously from the slice header ha.
- the slice B is arranged across the MB lines L7 to L8, and has a slice header hb and a plurality of macroblocks mbb arranged continuously from the slice header hb.
- the slice C is arranged across the MB lines L9 to L12, and has a slice header hc and a plurality of macroblocks mbc arranged continuously from the slice header hc.
- the slice header includes auxiliary information necessary for decoding the slice having the slice header.
- The stream dividing unit group 130a divides the picture p1 described above for each MB line, as shown in FIG. 3. The stream dividing unit group 130a then assigns each of the MB lines L1 to L12, in order from the top, to a part of one of the first to fourth divided streams. For example, the stream dividing unit group 130a assigns the MB line L1 to a part of the first divided stream, the MB line L2 to a part of the second divided stream, the MB line L3 to a part of the third divided stream, and the MB line L4 to a part of the fourth divided stream.
- After assigning an MB line to the fourth divided stream, the stream dividing unit group 130a returns to the first divided stream and repeats the assignment. That is, the stream dividing unit group 130a assigns the MB line L5 to a part of the first divided stream, the MB line L6 to a part of the second divided stream, the MB line L7 to a part of the third divided stream, and the MB line L8 to a part of the fourth divided stream.
- As a result, the first divided stream includes consecutive MB lines L1, L5, and L9,
- the second divided stream includes consecutive MB lines L2, L6, and L10,
- the third divided stream includes consecutive MB lines L3, L7, and L11, and
- the fourth divided stream includes consecutive MB lines L4, L8, and L12.
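The cyclic MB-line assignment just described can be sketched as follows. This is a hypothetical helper (the real stream dividing units operate on bit-level stream data, not line labels):

```python
def assign_mb_lines(num_lines: int, n_streams: int = 4):
    """Round-robin assignment of MB lines to divided streams:
    L1 -> stream 1, L2 -> stream 2, ..., L5 -> stream 1 again."""
    streams = {s: [] for s in range(1, n_streams + 1)}
    for line in range(1, num_lines + 1):
        streams[1 + (line - 1) % n_streams].append(f"L{line}")
    return streams
```

For the 12 MB lines of picture p1, `assign_mb_lines(12)` reproduces the grouping above: stream 1 receives L1, L5, and L9; stream 2 receives L2, L6, and L10; and so on.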
- a slice portion group (slice A in the first divided stream) is configured from MB lines L1 and L5 which are slice portions of the slice A.
- a slice portion group (slice B in the second divided stream) is configured from MB lines L2 and L6 which are slice portions of slice B.
- the first divided stream includes MB lines L1, L5, and L9 that are continuous as described above.
- the MB lines L1 and L5 should be recognized as the slice A
- the MB line L9 should be recognized as the slice C.
- Accordingly, the slice header ha of the slice A needs to be arranged at the head of the MB line L1, and the slice header hc of the slice C needs to be arranged at the head of the MB line L9.
- In the example shown in FIG. 3, the slice headers ha and hc are already arranged at the heads of the MB lines L1 and L9, so the stream dividing unit group 130a only needs to assign the MB lines L1, L5, and L9, together with the slice headers ha and hc, to the first divided stream.
- the second divided stream includes MB lines L2, L6, and L10 that are continuous as described above.
- MB lines L2 and L6 should be recognized as slice A
- MB line L10 should be recognized as slice C.
- Accordingly, the slice header ha of the slice A needs to be arranged at the head of the MB line L2, which is the head of the slice A in the second divided stream, and
- the slice header hc of the slice C needs to be arranged at the head of the MB line L10, which is the head of the slice C in the second divided stream.
- Therefore, the stream dividing unit group 130a in the present embodiment duplicates the slice headers ha, hb, and hc as necessary, thereby generating duplicate slice headers ha′, hb′, and hc′, and inserts them into the divided streams.
- Specifically, the stream dividing unit group 130a generates three duplicate slice headers ha′ by duplicating the slice header ha and inserts them immediately before the MB lines L2, L3, and L4. Furthermore, the stream dividing unit group 130a generates one duplicate slice header hb′ by duplicating the slice header hb and inserts it immediately before the MB line L8. Further, the stream dividing unit group 130a generates three duplicate slice headers hc′ by duplicating the slice header hc and inserts them immediately before the MB lines L10, L11, and L12.
- As a result, in the second divided stream, the duplicate slice header ha′, which is a copy of the slice header ha of the slice A, is arranged immediately before the MB line L2 that is the head of the slice A, and the duplicate slice header hc′, which is a copy of the slice header hc of the slice C, is arranged immediately before the MB line L10 that is the head of the slice C.
- the stream division unit group 130a updates the MB address information included in the duplicate slice header according to the insertion position.
- The slice header of each slice constituting a picture included in the encoded stream contains MB address information "first_mb_in_slice" for specifying the address, within the picture, of the first macroblock of the slice. Therefore, a duplicate slice header generated by duplicating such a slice header initially contains the same MB address information as the original slice header. As a result, if such a duplicate slice header is inserted at a position different from that of the original slice header in the picture, the address specified by the MB address information of the duplicate slice header points to an incorrect address.
- That is, the address specified by the MB address information of the duplicate slice header does not indicate the address, within the picture, of the first macroblock of the slice that follows the duplicate slice header in the divided stream, but instead indicates the address of the first macroblock of the original slice.
- the slice header ha of the slice A of the picture p1 includes MB address information indicating the address in the picture p1 of the first macroblock of the slice A (first macroblock of the MB line L1).
- The duplicate slice header ha′ generated by duplicating such a slice header ha therefore initially contains MB address information specifying the address, within the picture p1, of the first macroblock of the MB line L1.
- Consequently, the address specified by the MB address information of the duplicate slice header ha′ does not indicate the address, within the picture p1, of the first macroblock of the slice A that follows the duplicate slice header ha′ in the second divided stream (the first macroblock of the MB line L2), but instead indicates the address of the first macroblock of the MB line L1.
- the stream division unit group 130a in the present embodiment updates the MB address information included in the duplicate slice header as described above.
- FIG. 4 is an explanatory diagram for explaining the update processing of MB address information.
- the stream segmentation unit 130 first acquires “pic_width_in_mbs_minus1”, which is information related to the number of macroblocks in the horizontal direction of a picture, from an SPS (sequence parameter set) included in the encoded stream.
- the stream segmentation unit 130 uses the MB address information “first_mb_in_slice” included in the copy source slice header to calculate the address of the first macroblock of the slice having the copy source slice header.
- Next, the stream dividing unit 130 calculates, from the calculated address of the head macroblock, a value "mbposv" indicating in which MB line of the picture the head macroblock is located.
- the value “mbposv” is an integer of 0 or more.
- the stream division unit 130 updates the MB address information initially included in the duplicate slice header to the MB address information calculated as described above. Thereby, the address specified by the MB address information of the duplicate slice header correctly points to the address in the picture of the first macroblock of the slice having the duplicate slice header in the divided stream.
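As a sketch, the update can be expressed as below, assuming non-MBAFF raster-order macroblock addressing. Only "first_mb_in_slice", "pic_width_in_mbs_minus1", and "mbposv" come from the description above; the function names are ours.

```python
def mbposv(first_mb_in_slice: int, pic_width_in_mbs_minus1: int) -> int:
    """MB line (row) of the picture containing the slice's head macroblock."""
    return first_mb_in_slice // (pic_width_in_mbs_minus1 + 1)

def updated_first_mb_in_slice(target_mb_line: int,
                              pic_width_in_mbs_minus1: int) -> int:
    """MB address written into a duplicate slice header inserted immediately
    before the MB line target_mb_line: the line's leftmost macroblock."""
    return target_mb_line * (pic_width_in_mbs_minus1 + 1)
```

For a 1920-pixel-wide picture (120 macroblocks per line, so pic_width_in_mbs_minus1 = 119), a duplicate header inserted before an MB line in row 1 would carry first_mb_in_slice = 120.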
- At the end of each slice of the encoded stream, slice end information indicating the end of the slice is set. Therefore, as shown in FIG. 3, when a picture is simply divided into a plurality of MB lines and those MB lines are respectively assigned to parts of the first to fourth divided streams, the decoding engine 120 may be unable to properly recognize the end of a slice included in a divided stream.
- the stream segmentation unit 130 also executes a slice end process in the same manner as the slice header insertion process.
- FIG. 5 is an explanatory diagram for explaining slice termination processing.
- the slice C included in the picture p1 of the encoded stream includes a slice header hc, MB lines L9 to L12, and slice end information ec.
- Stream division unit group 130a divides picture p1 into MB lines.
- the MB line L9 is assigned to the first divided stream together with the slice header hc by one stream dividing unit 130, and the MB line L10 is assigned to the second divided stream.
- MB line L11 is assigned to the third divided stream, and MB line L12 is assigned to the fourth divided stream.
- The stream dividing unit 130 duplicates the slice header hc to generate three duplicate slice headers hc′ by the slice header insertion processing described above, and inserts the three duplicate slice headers hc′ immediately before the MB lines L10, L11, and L12 of the second to fourth divided streams, respectively.
- the stream dividing unit 130 updates the MB address information included in the duplicate slice header hc ′ according to the position of the duplicate slice header hc ′ to be inserted by the above-described update processing of the MB address information.
- Next, the stream dividing unit 130 generates slice end information ec′ indicating the end of the slice C (MB line L9) in the first divided stream, the end of the slice C (MB line L10) in the second divided stream, the end of the slice C (MB line L11) in the third divided stream, and the end of the slice C (MB line L12) in the fourth divided stream.
- The stream dividing unit 130 sets the generated slice end information ec′ immediately after the MB lines L9, L10, L11, and L12 of the first to fourth divided streams. Note that when dividing the encoded stream for each MB line, the stream dividing unit 130 discards the slice end information ec originally included in the encoded stream. However, if the slice end information ec′ and the slice end information ec are the same information, the MB line L12 may simply be assigned to the fourth divided stream together with the original slice end information ec′ (= ec).
- each decoding engine 120 can appropriately recognize the end of the slice included in the divided stream.
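Putting the header insertion and slice termination together, each slice portion group of a divided stream is rebuilt roughly as follows. This is a schematic sketch only: real streams carry bit-level NAL units, and the string labels (hc, hc′, ec′) here are purely illustrative.

```python
def rebuild_slice_portion_group(slice_header: str, mb_lines: list,
                                has_original_header: bool) -> list:
    """Reconstruct a slice portion group as one new slice: prepend a duplicate
    slice header when the group does not already begin with the original one,
    and append newly generated slice end information."""
    # duplicate headers are marked with a prime, as in the figures
    parts = [slice_header if has_original_header else slice_header + "'"]
    parts.extend(mb_lines)
    parts.append("ec'")  # slice end information set after the last MB line
    return parts
```

For example, the portion of slice C in the second divided stream (MB line L10) becomes a duplicate header hc′, the line itself, and the slice end information ec′.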
- As described above, the image decoding apparatus 100 includes the M stream dividing units 130 and the N decoding engines 120, and is configured to perform both the stream division processing and the decoding processing of the encoded stream in parallel. By enabling parallel processing throughout the system in this way, the overall decoding performance is improved.
- Each of the M stream dividing units 130 divides a slice of an encoded picture included in the encoded stream into one or more MB lines as a predetermined unit (processing target region). However, since the length of a slice is not constant, the amount of processing varies from slice to slice.
- the processing target is an encoded stream, so the processing amount depends on the code amount for each slice.
- the encoded stream is variable-length encoded, and the code amount varies depending on the data content.
- Furthermore, slices in the H.264/AVC format include types called I slices, P slices, B slices, and so on. An I slice, which is coded using only intra-picture prediction, tends to have a large code amount, whereas P slices and B slices, which can use inter-picture prediction in addition to intra-picture prediction, tend to have a small code amount.
- Thus, the code amount of the encoded slices included in the encoded stream is not constant and can vary greatly. Therefore, simply allocating the input encoded slices to the M stream dividing units 130 in order does not equalize the processing amount of each stream dividing unit 130, and a sufficient performance improvement from parallelization cannot be obtained.
- the stream division control unit 140 distributes each slice to each of the stream division units 130 so that the processing amount of each stream division unit 130 becomes equal.
- FIGS. 6A and 6B are explanatory diagrams showing a specific example of the slice distribution processing by the stream division control unit 140.
- In the present embodiment, in order to simplify the description, the case of M = 2 is described below.
- FIG. 6A is a diagram illustrating an example of an H.264/AVC format encoded stream.
- the encoded stream in this example is composed of SPS (sequence parameter set), PPS (picture parameter set), and slice data (slice) constituting a picture.
- Picture 0 is composed of slice 0 only.
- Picture 1 is composed of slice 1 and slice 2.
- Picture 2 is composed of slice 3 and slice 4.
- the arrows in FIG. 6A indicate the reference relationship between slice data and PPS, and the reference relationship between PPS and SPS.
- slice 0 is decoded using header information included in PPS0
- slice 1 and slice 2 are decoded using the header information included in PPS1.
- the SPS including the stream sequence information is referred to by PPS 0 to 2. That is, the SPS shown in FIG. 6A is used for decoding all slices of slice 0 to slice 4.
- a plurality of slices may refer to the same header information (SPS, PPS). Therefore, when stream division processing is distributed to each of the M stream division units 130, header information such as SPS and PPS needs to be equally decoded and analyzed by all the stream division units 130.
- FIG. 6B is a diagram showing a series of slice distribution processing by the stream division control unit 140.
- each stream division unit 130 holds the number of the slice to be processed.
- First, the stream division control unit 140 transmits the distribution control information, thereby instructing the first stream dividing unit 130 to decode slice 0 and the second stream dividing unit 130 to decode slice 1.
- When notified of the decoding of slice 0 by the stream division control unit 140, the first stream dividing unit 130 compares the slice number of the notified slice with the value of SN1, which it holds as the number of the slice to be processed next. At this point both are equal to 0, so the first stream dividing unit 130 performs the stream division processing on the first input slice.
- The first stream dividing unit 130 first decodes and analyzes the SPS to extract various parameters necessary for the stream division processing, and since the slice number notified by the distribution control information matches the value of SN1, outputs the input SPS as it is to the first divided stream buffer 152.
- the first stream segmentation unit 130 decodes and analyzes the PPS0, extracts various parameters necessary for the stream segmentation process, and similarly inputs the input PPS0 to the first segment stream buffer 152 as it is. Output.
- the first stream segmentation unit 130 performs stream segmentation processing on slice 0 and outputs the generated N segment streams to the first segment stream buffer 152.
- Meanwhile, when notified of the decoding of slice 1 by the stream division control unit 140 via the distribution control information, the second stream dividing unit 130 compares the slice number of the notified slice with the value of SN2, which it holds as the number of the slice to be processed next. At this point the notified slice number is 1 and SN2 is 0, a difference of 1, so the second stream dividing unit 130 skips processing for one input slice and performs the stream division processing on the second input slice. That is, the second stream dividing unit 130 skips the stream division processing for as many slices as the difference.
- the second stream segmentation unit 130 first decodes and analyzes the SPS, and extracts various parameters necessary for the stream segmentation process.
- the second stream segmentation unit 130 does not output the SPS to the second segment stream buffer 152 since the slice number of the slice notified of decoding does not match the SN2 value.
- the second stream segmentation unit 130 decodes and analyzes PPS0 and extracts various parameters necessary for stream segmentation processing, but similarly does not output PPS0 to the second segment stream buffer 152.
- the second stream segmentation unit 130 skips the stream segmentation process for the input slice 0. Therefore, the second stream segmentation unit 130 does not output the result of the stream segmentation process for slice 0 to the second segment stream buffer 152.
- Note that skipping the stream division processing is performed by searching for the start code that accompanies the encoded data.
- The reason why the second stream dividing unit 130 does not output the SPS and PPS0 to the second divided stream buffer 152 is so that the N decoding engines 120 do not receive the same header information in duplicate. That is, since the SPS and PPS0 are output to the first divided stream buffer 152 by the first stream dividing unit 130, their output from the second stream dividing unit 130 is suppressed.
- Next, the second stream dividing unit 130 decodes and analyzes PPS1 to extract various parameters necessary for the stream division processing, and since the slice number of the notified slice matches the value of SN2, outputs the input PPS1 as it is to the second divided stream buffer 152.
- the second stream segmentation unit 130 performs stream segmentation processing on slice 1 and outputs the generated N segment streams to the second segment stream buffer 152.
- When the stream division processing for slice 1 is completed, the second stream dividing unit 130 notifies the stream division control unit 140 of the completion of the processing and of information on the divided streams output to the second divided stream buffer 152. Specifically, the second stream dividing unit 130 notifies the number of NAL units constituting each of PPS1 and slice 1 actually output to the second divided stream buffer 152.
- A NAL unit is a structural unit of an H.264/AVC stream; the SPS, the PPS, slices, and the like are each carried in NAL units.
- the number of MB lines included in the slice may be less than N, which is the number of divided streams. For this reason, the number of NAL units corresponding to the slices processed by the stream dividing unit 130 differs for each of the N divided streams to be generated, and the value is 0 or 1. Therefore, the stream segmentation unit 130 notifies the stream segmentation control unit 140 of the number of NAL units output for each of N segment streams.
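Because all slice portions assigned to one divided stream are reconstructed into a single slice (one NAL unit), each divided stream receives either 0 or 1 slice NAL units per slice. A hypothetical helper illustrating this, assuming the round-robin MB-line assignment described earlier:

```python
def slice_nal_counts(num_mb_lines: int, n_streams: int,
                     first_stream: int = 0) -> list:
    """Per-divided-stream count of slice NAL units for one slice whose MB
    lines are handed out round-robin starting at first_stream. Streams that
    receive no MB line of the slice get 0 NAL units for it."""
    counts = [0] * n_streams
    for i in range(num_mb_lines):
        counts[(first_stream + i) % n_streams] = 1  # at most one per stream
    return counts
```

For a slice of only 2 MB lines with N = 4 divided streams, two streams receive one NAL unit each and the other two receive none.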
- the second stream segmentation unit 130 notifies the stream segmentation control unit 140 that PPS1 and slice 1 have processed a total of two NAL units.
- the stream division control unit 140 that has received the process completion notification from the second stream division unit 130 notifies the second stream division unit 130 of decoding of slice 2.
- When notified of the decoding of slice 2 by the stream division control unit 140, the second stream dividing unit 130 compares the slice number of the notified slice with the value of SN2 held as the number of the slice to be processed next. At this point both are equal to 2, so the second stream dividing unit 130 performs the stream division processing on the next input slice.
- That is, since the slice number of the notified slice matches the value of SN2, the second stream dividing unit 130 performs the stream division processing on slice 2 and outputs the generated N divided streams to the second divided stream buffer 152.
- When the stream division processing for slice 0 is completed, the first stream dividing unit 130 notifies the stream division control unit 140 of the completion of the processing and, as information on the divided streams output to the first divided stream buffer 152, the number of NAL units, "3", constituting the SPS, PPS0, and slice 0.
- the stream division control unit 140 that has received the process completion notification from the first stream division unit 130 notifies the first stream division unit 130 of the decoding of slice 3 based on the distribution control information.
- When notified of the decoding of slice 3 by the stream division control unit 140, the first stream dividing unit 130 compares the slice number of the notified slice with the value of SN1 held as the number of the slice to be processed next. At this point the notified slice number is 3 and SN1 is 1, a difference of 2, so the first stream dividing unit 130 skips processing for two input slices and performs the stream division processing on the third input slice.
- The first stream dividing unit 130 first decodes and analyzes PPS1 to extract various parameters necessary for the stream division processing, but since the slice number of the notified slice does not match the value of SN1, it does not output PPS1 to the first divided stream buffer 152.
- the first stream segmentation unit 130 skips processing of the input slice 1. Therefore, the first stream segmentation unit 130 does not output the result of the stream segmentation process for slice 1 to the first segment stream buffer 152.
- Next, the first stream dividing unit 130 decodes and analyzes PPS2 to extract various parameters necessary for the stream division processing, and since the slice number of the notified slice matches the value of SN1, outputs the input PPS2 as it is to the first divided stream buffer 152.
- the first stream segmentation unit 130 performs stream segmentation processing on slice 3 and outputs the generated N segment streams to the first segment stream buffer 152.
- At this point, the stream division processing for slice 2 by the second stream dividing unit 130 has ended, so the second stream dividing unit 130 notifies the stream division control unit 140 of the completion of the processing and, as information on the divided streams output to the second divided stream buffer 152, the number of NAL units, "1", constituting slice 2.
- the stream division control unit 140 that has received the process completion notification from the second stream division unit 130 notifies the second stream division unit 130 of the decoding of the slice 4 based on the distribution control information.
- When notified of the decoding of slice 4 by the stream division control unit 140, the second stream dividing unit 130 compares the slice number of the notified slice with the value of SN2 held as the number of the slice to be processed next. At this point the notified slice number is 4 and SN2 is 3, a difference of 1, so the second stream dividing unit 130 skips processing for one input slice and performs the stream division processing on the second input slice.
- the second stream segmentation unit 130 first decodes and analyzes PPS2, and extracts various parameters necessary for the stream segmentation process.
- However, since the slice number of the notified slice does not match the value of SN2, the second stream dividing unit 130 does not output PPS2 to the second divided stream buffer 152.
- the second stream dividing unit 130 skips the processing of the input slice 3. Therefore, the second stream segmentation unit 130 does not output the result of the stream segmentation process for slice 3 to the second segment stream buffer 152.
- Then, the second stream dividing unit 130 performs the stream division processing on slice 4 and outputs the generated N divided streams to the second divided stream buffer 152.
- When the stream division processing for slice 3 by the first stream dividing unit 130 is completed, the first stream dividing unit 130 notifies the stream division control unit 140 of the completion of the processing and, as information on the divided streams output to the first divided stream buffer 152, the number of NAL units, "2", constituting PPS2 and slice 3.
- Similarly, when the stream division processing for slice 4 by the second stream dividing unit 130 ends, the second stream dividing unit 130 notifies the stream division control unit 140 of the completion of the processing and, as information on the divided streams output to the second divided stream buffer 152, the number of NAL units, "1", constituting slice 4.
- the stream division control unit 140 sequentially allocates slice decoding processing (stream division processing) to the stream division unit 130 that has completed the processing. Thereby, the processing amount of each stream dividing unit 130 becomes equal.
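The completion-driven distribution above behaves like a greedy scheduler that always hands the next slice to the unit that frees up first. A minimal model (the per-slice processing costs below are invented for illustration, and unit index 0 stands for the first stream dividing unit):

```python
import heapq

def distribute_slices(slice_costs, m: int = 2):
    """Assign each slice, in input order, to the stream dividing unit that
    becomes free earliest, mirroring the completion notifications of FIG. 6B.
    Returns the unit index chosen for each slice."""
    units = [(0.0, u) for u in range(m)]  # (time the unit becomes free, unit id)
    heapq.heapify(units)
    assignment = []
    for cost in slice_costs:
        free_at, u = heapq.heappop(units)  # earliest-free unit gets the slice
        assignment.append(u)
        heapq.heappush(units, (free_at + cost, u))
    return assignment
```

With hypothetical costs [3, 2, 2, 2, 2] (slice 0 being a heavier I slice), the result is [0, 1, 1, 0, 1]: unit 0 handles slices 0 and 3 while unit 1 handles slices 1, 2, and 4, matching the walkthrough above.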
- FIG. 7 is a diagram illustrating a state of the divided stream buffer 152 when the slice distribution and the stream division processing illustrated in FIG. 6B are performed. Note that slices 0 to 4 shown in FIG. 7 each indicate data of a part of the slice.
- The stream division control unit 140 notifies the N decoding engines 120 of selection information indicating from which of the M divided stream buffers 152 each divided stream should be acquired, so that the N decoding engines 120 obtain the divided streams in the same slice order as the encoded stream before division.
- FIG. 8 is a diagram illustrating an example of a format of selection information when the slice distribution and the stream division processing illustrated in FIG. 6B are performed.
- the selection information includes divided stream buffer information and NAL unit number information for each slice, and is generated each time slice distribution processing is performed by the stream division control unit 140.
- the segment stream buffer information indicates to which of the first stream segmentation unit 130 and the second stream segmentation unit 130 the stream segmentation control unit 140 allocated the slice. That is, the divided stream buffer information indicates the divided stream buffer in which the divided stream output from the stream dividing unit 130 is stored for the slice as a result of the stream dividing unit 130 performing the stream dividing process on the slice.
- The NAL unit number information indicates the number of NAL units output when the stream dividing unit 130 performed the stream division processing on the target slice; it is notified from the stream dividing unit 130 to the stream division control unit 140 when the processing is completed.
- The NAL unit number information may differ for each of the N divided streams. Therefore, selection information of different contents, corresponding to the N divided streams, is notified to each of the N decoding engines 120. That is, the first decoding engine is notified of selection information corresponding to the first divided stream, the second decoding engine is notified of selection information corresponding to the second divided stream, the third decoding engine is notified of selection information corresponding to the third divided stream, and the fourth decoding engine is notified of selection information corresponding to the fourth divided stream.
- Here, it is assumed that the number of MB lines included in each slice is sufficiently larger than N, so that the number of NAL units corresponding to one slice is 1 for every divided stream and the selection information notified to each of the N decoding engines 120 is therefore the same.
- the selection information generated by the stream division control unit 140 is notified to the N decoding engines 120 and stored in, for example, a FIFO (first-in first-out) memory in the decoding engine 120.
- the selection information stored in the FIFO is read by the decode engine 120 in the notified order and used for stream acquisition processing from the divided stream buffer 152.
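Consumption of the selection information can be sketched with a simple FIFO. The tuple layout is hypothetical; the entries mirror the example of FIG. 8 (buffer number, NAL unit count per slice):

```python
from collections import deque

# (divided stream buffer number, number of NAL units to read) per slice
selection_fifo = deque([
    (1, 3),  # slice 0: first buffer, 3 NAL units (SPS, PPS0, slice 0)
    (2, 2),  # slice 1: second buffer, 2 NAL units (PPS1, slice 1)
    (2, 1),  # slice 2: second buffer, 1 NAL unit (slice 2)
    (1, 2),  # slice 3: first buffer, 2 NAL units (PPS2, slice 3)
    (2, 1),  # slice 4: second buffer, 1 NAL unit (slice 4)
])

def next_read(fifo):
    """Pop the oldest selection entry: which divided stream buffer to read
    and how many NAL units to consume from it."""
    return fifo.popleft()
```

Reading the FIFO in notified order reproduces the acquisition sequence described next: 3 NAL units from the first buffer, then 2 from the second, and so on.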
- the decode engine 120 acquires three NAL units (SPS, PPS0, slice 0) from the first divided stream buffer 152 according to the selection information of slice 0.
- the decode engine 120 acquires two NAL units (PPS1, slice 1) from the second divided stream buffer 152 according to the selection information of slice 1.
- By using the selection information notified from the stream division control unit 140 in this way, each decoding engine 120 can acquire the divided streams (slices of the divided streams) from the M divided stream buffers 152 in the same slice order as the encoded stream input to the decoder 110.
- slice distribution processing described with reference to FIGS. 6A to 8 is an example of the processing operation of the image decoding apparatus 100 of the present invention, and the present invention is not limited to the processing operation described here.
- the stream division control unit 140 designates the slice number when notifying the stream division unit 130 of the slice to be decoded, but it may instead designate the number of slices whose processing is to be skipped.
- in this case, the stream division control unit 140 stores the number of slices allocated to each of the M stream division units 130, and calculates the number of slices to skip based on those stored values.
- when the stream division unit 130 notifies the stream division control unit 140 of the completion of processing, it may notify the number of bits of the divided stream output to the divided stream buffer 152 instead of the number of NAL units output to the divided stream buffer 152. That is, it is only necessary that the decoding engine 120 be notified of information from which it can determine the size of the divided stream to be acquired from the divided stream buffer 152.
- the divided stream buffer information directly specifies the number of the divided stream buffer 152, but may be information different from the number.
- for example, the divided stream buffer information may be information indicating whether the divided stream buffer 152 that stores the divided stream corresponding to the slice to be processed is the same as the divided stream buffer 152 that stores the divided stream corresponding to the immediately preceding slice. That is, the divided stream buffer information may be any information that allows a divided stream to be appropriately acquired from the plurality of divided stream buffers 152.
- the selection information includes the NAL unit number information. However, as described above, the selection information may include information indicating the number of bits of the divided stream instead of the NAL unit number information.
- FIG. 9 is a flowchart showing the overall operation of the image decoding apparatus 100 according to the present embodiment.
- the image decoding apparatus 100 acquires an encoded stream (step S10), specifies an encoded picture to be processed from the encoded stream, and distributes the slices so that the parallel processing of stream division becomes even (step S12).
- the image decoding apparatus 100 extracts one MB line by dividing the picture to be processed (step S14). If there is a slice header immediately before the MB line or between two macroblocks belonging to the MB line, the MB line is extracted together with the slice header.
- before assigning one MB line extracted by the division in step S14 to any of the first to Nth divided streams to be generated, the image decoding apparatus 100 determines whether slice reconstruction processing is necessary, that is, whether a slice header needs to be inserted immediately before the MB line, whether slice end information needs to be set immediately after the already allocated MB line, whether the MB skip run information needs to be corrected, and whether the QP change amount needs to be set (step S16).
- if the image decoding apparatus 100 determines in step S16 that the slice reconstruction processing is necessary (Yes in step S16), it executes the slice reconstruction processing (step S18). That is, the image decoding apparatus 100 executes at least one of the above-described slice header insertion process, slice end process, skip run correction process, and QP delta setting process. The image decoding apparatus 100 also executes the MB address information update process when executing the slice header insertion process. Note that the stream division processing in steps S14 to S20 is performed in parallel in units of slices.
- the image decoding apparatus 100 then allocates the MB line to one of the first to Nth divided streams to be generated (step S20).
- by repeating step S20, MB lines are sequentially assigned to the first to Nth divided streams, and the first to Nth divided streams are generated.
- the image decoding apparatus 100 decodes the MB lines assigned to the first to Nth divided streams in parallel (step S22). If no MB line is assigned to one of the first to Nth divided streams, the image decoding apparatus 100 decodes the divided streams excluding those to which no MB line is assigned.
- the image decoding apparatus 100 determines whether or not all MB lines included in the picture have been allocated (step S24). When it determines that not all of them have been allocated (No in step S24), it repeats the processing from step S14. On the other hand, when it determines that all MB lines have been allocated (Yes in step S24), the image decoding apparatus 100 further determines whether all the pictures included in the encoded stream have been divided (step S26). Here, when the image decoding apparatus 100 determines that not all the pictures have been divided (No in step S26), it repeats the processing from step S12, and when it determines that all the pictures have been divided (Yes in step S26), it ends the decoding process.
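- the division loop of steps S12 to S24 can be sketched as follows (a simplified illustration assuming round-robin allocation of MB lines to the N divided streams; the slice reconstruction of steps S16 to S18 is omitted, and the function name is hypothetical):

```python
def divide_picture(mb_lines, n):
    """Assign each extracted MB line to one of n divided streams (steps S14/S20)."""
    divided_streams = [[] for _ in range(n)]
    for i, mb_line in enumerate(mb_lines):      # step S14: extract one MB line
        divided_streams[i % n].append(mb_line)  # step S20: allocate round-robin
    return divided_streams                      # step S24: all MB lines allocated

# A picture with 6 MB lines split across N = 4 divided streams; the last
# two streams receive fewer lines, matching the uneven case in step S22.
streams = divide_picture(["L0", "L1", "L2", "L3", "L4", "L5"], 4)
print(streams)  # [['L0', 'L4'], ['L1', 'L5'], ['L2'], ['L3']]
```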
- processing operation shown in the flowchart of FIG. 9 is an example of the processing operation of the image decoding apparatus 100 of the present invention, and the present invention is not limited to the processing operation shown in this flowchart.
- the stream dividing unit 130 of the image decoding apparatus 100 performs the slice header insertion process in the slice reconstruction process in step S18, but the slice header insertion process may be omitted; instead, the duplicate slice header may be passed directly to the decoding engine 120 that requires it.
- similarly, the stream segmentation unit 130 performs the MB address information update process in the slice reconstruction process in step S18, but the update process may be omitted. In this case, the decoding engine 120 updates the MB address information of the duplicate slice header included in the divided stream.
- furthermore, the stream segmentation unit 130 performs the slice termination process in the slice reconstruction process in step S18, but the slice termination process may be performed later. In this case, when the next new MB line is assigned from the stream dividing unit 130 to one of the divided streams, the slice termination process may be performed on the MB line already allocated to that divided stream.
- FIG. 10 is a block diagram showing the configuration of the stream segmentation unit 130.
- the stream segmentation unit 130 includes a process management unit 130m, a selector Sct, a start code detection unit 131, an EPB removal unit 132a, an EPB insertion unit 132b, a slice header insertion unit 133, and slice data processing units 134a and 134b.
- the process management unit 130m acquires the mode information and the distribution control information, and controls the other components included in the stream dividing unit 130 according to this information. That is, so that the stream division processing shown in FIGS. 6A to 8 is performed, the process management unit 130m holds, for example, the slice number (SN1 or SN2) to be processed next, and controls the selector Sct based on that number. As a result, the process management unit 130m causes the divided stream generated from the SPS, the PPS, or the slice to be processed to be output, or stops the output.
- the start code detection unit 131 reads the encoded stream from the stream buffer 151 and detects the start code for each NAL unit.
- the EPB removal unit 132a removes the EPB (emulation prevention byte) from the encoded stream, and outputs the encoded stream from which the EPB has been removed to the slice data processing units 134a and 134b. Further, the EPB removal unit 132a acquires information on layers higher than the slice, such as the SPS (sequence parameter set) and the PPS (picture parameter set) included in the encoded stream, and outputs the information to the EPB insertion unit 132b so that the information is inserted into each of the four divided streams.
- the EPB insertion unit 132b inserts the EPB removed by the EPB removal unit 132a into the divided stream generated by dividing the encoded stream.
- the slice header insertion unit 133 executes the above-described slice header insertion process and MB address information update process. Note that the slice header insertion unit 133 sends a slice header processing content notification M1, indicating whether or not the slice header insertion process is to be performed, to the slice data processing units 134a and 134b at a predetermined timing, and executes the slice header insertion process when the termination processing completion notification M2 is received from the slice data processing units 134a and 134b. Then, through the slice header insertion process, the slice header insertion unit 133 outputs the slice header located immediately before the MB line, as well as the duplicate slice header with updated MB address information, to the EPB insertion unit 132b.
- the slice data processing units 134a and 134b generate four divided streams by dividing the encoded stream from which the EPB has been removed, and output the four divided streams. Note that the divided streams output from the slice data processing units 134a and 134b do not include the slice header and the duplicate slice header immediately before or within the MB line described above.
- the slice data processing unit 134a performs processing according to CAVLD (Context-Adaptive Variable Length Decoding), and divides an encoded stream generated by CAVLC (Context-Adaptive Variable Length Coding) into four divided streams.
- the slice data processing unit 134b performs processing according to CABAD (Context-Adaptive Binary Arithmetic Decoding), and divides an encoded stream generated by CABAC (Context-Adaptive Binary Arithmetic Coding) into four divided streams.
- the slice data processing unit 134a includes a slice data layer decoding unit 135a, a macroblock layer decoding unit 136a, a skip run correction unit 137a, a QP delta correction unit 138a, and a division point detection unit 139a.
- the slice data layer decoding unit 135a performs variable length decoding on the encoded data of the slice data layer included in the encoded stream.
- the macroblock layer decoding unit 136a performs variable length decoding on the encoded data of the macroblock layer included in the encoded stream.
- through this decoding, the dependency relationship between adjacent macroblocks is removed. Note that the slice data layer decoding unit 135a and the macroblock layer decoding unit 136a may decode only the information that depends on macroblocks adjacent to the processing target macroblock (specifically, nC (the number of non-zero coefficients) in CAVLC, etc.).
- the skip run correction unit 137a corrects the MB skip run information “mb_skip_run” decoded by the slice data layer decoding unit 135a, re-encodes the corrected MB skip run information, and outputs the re-encoded MB skip run information. That is, when the MB skip run information indicates the number of blocks that are skipped continuously across at least two consecutive slice portions in the encoded stream, the skip run correction unit 137a divides that number and sets, in the divided streams to which the at least two consecutive slice portions are respectively allocated, MB skip run information corrected so as to indicate the number of such blocks in each slice portion.
- conversely, the skip run correction unit 137a may convert a plurality of pieces of MB skip run information into one piece of MB skip run information indicating the sum of the numbers indicated by each of them.
- the MB skip run information is an example of a first codeword indicating the number of consecutive blocks when a specific type of block is continuous in a slice included in the encoded picture. Specifically, the MB skip run information indicates the number of macro blocks that are skipped continuously.
- the MB skip run information decoded by the slice data layer decoding unit 135a indicates the number of continuously skipped macroblocks included in the set.
- when the MB lines are allocated to different divided streams, the number of macroblocks that are skipped continuously in each divided stream changes. That is, the dependency relationship between MB lines due to the MB skip run information is broken.
- the skip run modification unit 137a specifies, for each MB line that includes a part of the above-described set, the number of macroblocks that are continuously skipped and that constitute the part included in the MB line. Then, the skip run correction unit 137a corrects the MB skip run information so that the number indicated by the MB skip run information becomes the number specified for the MB line for each MB line.
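- for illustration, this splitting of one skip run into per-MB-line counts can be sketched as follows (a simplified model using raster-order macroblock addresses; the function name and parameters are hypothetical):

```python
def split_skip_run(start_mb_addr, run_length, mbs_per_line):
    """Split one mb_skip_run that may cross MB-line boundaries into
    (mb_line_index, corrected mb_skip_run) pairs, one per MB line."""
    runs = []
    addr = start_mb_addr
    remaining = run_length
    while remaining > 0:
        line = addr // mbs_per_line
        room = mbs_per_line - (addr % mbs_per_line)  # skipped MBs that fit in this line
        part = min(room, remaining)
        runs.append((line, part))
        addr += part
        remaining -= part
    return runs

# A run of 7 skipped MBs starting at address 5, with 8 MBs per line, spans
# two MB lines: 3 skips finish line 0 and the remaining 4 start line 1.
print(split_skip_run(5, 7, 8))  # [(0, 3), (1, 4)]
```

Each pair can then be re-encoded into the divided stream that receives the corresponding MB line; summing the per-line counts recovers the original run, which is the inverse conversion mentioned above.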
- the QP delta correction unit 138a corrects, for each macroblock, the QP change amount “mb_qp_delta” of the macroblock decoded by the macroblock layer decoding unit 136a, re-encodes the corrected QP change amount, and outputs the re-encoded QP change amount. That is, when the QP change amount indicates a change amount between blocks straddling two MB lines, the QP delta correction unit 138a calculates the change amount of the coding coefficient based on the new context of the blocks in each divided stream. Then, the QP delta correction unit 138a corrects the QP change amount to the calculated change amount.
- the QP change amount is an example of a second codeword indicating the change amount of the coding coefficient between consecutive blocks in the slice included in the coded picture.
- the QP change amount is included in a macroblock (target macroblock) and indicates the difference between the QP value of the target macroblock and the QP value of the macroblock located immediately before the target macroblock.
- therefore, when two mutually consecutive macroblocks are allocated to different divided streams, the decoding engine 120 that decodes the divided stream including the latter macroblock (target macroblock) cannot derive the QP value of the target macroblock from the QP change amount of the target macroblock. That is, the dependency relationship between MB lines due to the QP change amount is broken.
- the QP delta correction unit 138a recalculates the QP change amount of the macroblock (target macroblock) based on the context of the new macroblock in the divided stream.
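- a simplified sketch of this recalculation follows (the names are hypothetical, and for illustration it assumes that the first macroblock of each divided stream predicts from the slice QP, which the embodiment does not specify):

```python
def recompute_qp_deltas(slice_qp, deltas, assignment, n):
    """Recover absolute QPs from the original mb_qp_delta chain, then
    re-derive each delta against the macroblock that now precedes it
    inside its own divided stream."""
    qps, qp = [], slice_qp
    for d in deltas:                 # original chain: each QP = previous QP + delta
        qp += d
        qps.append(qp)
    new_deltas = [0] * len(deltas)
    prev_qp = [slice_qp] * n         # assumption: each stream restarts from the slice QP
    for i, stream in enumerate(assignment):  # assignment[i] = divided stream of MB i
        new_deltas[i] = qps[i] - prev_qp[stream]
        prev_qp[stream] = qps[i]
    return new_deltas

# Four macroblocks alternating between two divided streams: the third
# macroblock's delta becomes 2 (30 - 28) in its new per-stream context.
print(recompute_qp_deltas(26, [2, -1, 3, 0], [0, 1, 0, 1], 2))  # [2, 1, 2, 3]
```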
- the division point detection unit 139a divides the encoded stream into four divided streams. That is, the division point detection unit 139a divides a picture or a slice into a plurality of MB lines, and assigns each MB line to one of the four divided streams. When there is a slice header immediately before the MB line or between two macroblocks belonging to the MB line, the division point detection unit 139a assigns only the MB line to the divided stream without assigning the slice header. Further, the division point detection unit 139a includes the MB skip run information acquired from the skip run correction unit 137a and the QP change amount acquired from the QP delta correction unit 138a in each of the divided streams.
- when the division point detection unit 139a detects the end of a slice of a divided stream and receives the slice header processing content notification M1 from the slice header insertion unit 133, it executes the above-described slice termination process according to the content indicated by the slice header processing content notification M1. Further, when the slice termination process is completed, the division point detection unit 139a sends a termination processing completion notification M2 to the slice header insertion unit 133.
- the slice data processing unit 134b includes a slice data layer decoding unit 135b, a macroblock layer decoding unit 136b, a QP delta correction unit 138b, and a division point detection unit 139b.
- the slice data layer decoding unit 135b performs variable length decoding (arithmetic decoding) on the encoded data of the slice data layer included in the encoded stream.
- the macroblock layer decoding unit 136b performs variable length decoding (arithmetic decoding) on the encoded data of the macroblock layer included in the encoded stream.
- similarly to the QP delta correction unit 138a described above, the QP delta correction unit 138b corrects, for each macroblock, the QP change amount “mb_qp_delta” of the macroblock decoded by the macroblock layer decoding unit 136b, re-encodes the corrected QP change amount, and outputs the re-encoded QP change amount.
- the division point detection unit 139b divides the encoded stream into four divided streams in the same manner as the division point detection unit 139a. At this time, the division point detection unit 139b includes the QP change amount acquired from the QP delta correction unit 138b in each of the divided streams. Furthermore, when the division point detection unit 139b detects the end of a slice of a divided stream and receives the slice header processing content notification M1 from the slice header insertion unit 133, it executes the above-described slice termination process according to the content indicated by the slice header processing content notification M1. In addition, when the slice termination process is completed, the division point detection unit 139b sends a termination processing completion notification M2 to the slice header insertion unit 133.
- next, the slice header insertion unit 133 and the slice data processing units 134a and 134b will be described in detail. Note that when functions and processing operations common to the slice data processing units 134a and 134b are described, they are collectively referred to as the slice data processing unit 134 without distinction.
- FIG. 11 is an explanatory diagram for explaining the operations of the slice header insertion unit 133 and the slice data processing unit 134.
- assume that slice A and slice B included in the picture are distributed to the stream dividing unit 130 as the slices to be stream-divided.
- the slice data processing unit 134 divides the picture including slice A and slice B into MB lines, and stores each MB line, sequentially from the first MB line, in the four areas (first area df1 to fourth area df4) of the divided stream buffer 152 via the EPB insertion unit 132b.
- at this time, the slice data processing unit 134 cyclically changes the MB line storage destination in the order of the first area df1, the second area df2, the third area df3, the fourth area df4, and then the first area df1 again.
- that is, the slice data processing unit 134 stores the MB line La1 of slice A in the first area df1 of the divided stream buffer 152, stores the next MB line La2 of slice A in the second area df2, and stores the next MB line La3 of slice A in the third area df3 of the divided stream buffer 152. Further, the slice data processing unit 134 stores the MB line Lb1 of slice B, which follows slice A, in the fourth area df4 of the divided stream buffer 152.
- as a result, an MB line is stored in each of the four areas df1 to df4 of the divided stream buffer 152, reaching the state immediately before the next MB line of slice B is stored again in the first area df1.
- note that, when storing the MB line La3 in the third area df3, even if slice end information ea exists immediately after the MB line La3 in the encoded stream, the slice data processing unit 134 does not store the slice end information ea, and stores only the MB line La3 in the third area df3. Then, when the slice data processing unit 134 subsequently stores an MB line belonging to a new slice in the third area df3, it stores slice end information ea′ corresponding to the slice end information ea in the third area df3.
- the slice header insertion unit 133 stores the slice header hb of the slice B in the fourth area df4 in advance.
- another MB line of the slice A exists before the MB line La1 of the slice A. Therefore, the duplicate slice header of slice A is not inserted immediately before the MB lines La1, La2, La3 in the first area df1, the second area df2, and the third area df3.
- the division point detection units 139a and 139b of the slice data processing unit 134 determine whether or not all macroblocks in the 1 MB line have been output each time a macroblock is output. As a result, when the division point detection units 139a and 139b determine that all macroblocks have been output, the division point detection units 139a and 139b detect MB line boundaries (ends of MB lines). Each time the division point detection units 139a and 139b detect the MB line boundary, the division point detection units 139a and 139b interrupt the macroblock output process and notify the slice header insertion unit 133 that the MB line boundary has been detected.
- when the slice header insertion unit 133 receives, from the division point detection units 139a and 139b of the slice data processing unit 134, the notification that an MB line boundary has been detected, it sends a slice header processing content notification M1 to the slice data processing unit 134, as shown in FIG. 11. This slice header processing content notification M1 indicates whether or not a slice header is scheduled to be output and stored in the divided stream buffer 152 immediately before the next MB line is stored in the divided stream buffer 152 from the slice data processing unit 134.
- the slice header processing content notification M1 indicating “output” is a notification that prompts the slice data processing unit 134 to perform the slice end processing.
- for example, the slice header insertion unit 133 determines that the duplicate slice header hb′ should be output to the divided stream buffer 152 and stored there immediately before the next MB line Lb2 is stored in the divided stream buffer 152 from the slice data processing unit 134. At this time, the slice header insertion unit 133 outputs a slice header processing content notification M1 indicating “output” to the slice data processing unit 134.
- when the slice data processing unit 134 acquires the slice header processing content notification M1, if the notification indicates “output”, the slice data processing unit 134 generates slice end information, stores it in the divided stream buffer 152, and outputs a termination processing completion notification M2 to the slice header insertion unit 133. On the other hand, if the notification indicates “non-output”, the slice data processing unit 134 sends the termination processing completion notification M2 to the slice header insertion unit 133 without storing the slice end information in the divided stream buffer 152.
- for example, when the slice data processing unit 134 obtains a slice header processing content notification M1 indicating “output”, it generates slice end information ea′ as illustrated in FIG. 11 and stores it in the first area df1. When the storage is completed, the slice data processing unit 134 outputs a termination processing completion notification M2 to the slice header insertion unit 133.
- when the slice header insertion unit 133 acquires the termination processing completion notification M2 from the slice data processing unit 134, if the slice header processing content notification M1 output immediately before indicates “output”, the slice header insertion unit 133 outputs the slice header to the divided stream buffer 152 via the EPB insertion unit 132b to store it there, and then outputs a slice header processing completion notification M3 to the slice data processing unit 134. On the other hand, if the slice header processing content notification M1 output immediately before indicates “non-output”, the slice header insertion unit 133 outputs the slice header processing completion notification M3 to the slice data processing unit 134 without storing the slice header in the divided stream buffer 152.
- for example, when the slice header insertion unit 133 acquires the termination processing completion notification M2 from the slice data processing unit 134, it generates the duplicate slice header hb′ and stores it in the first area df1 of the divided stream buffer 152. Thereafter, the slice header insertion unit 133 outputs a slice header processing completion notification M3 to the slice data processing unit 134.
- upon receiving the slice header processing completion notification M3 from the slice header insertion unit 133, the division point detection units 139a and 139b of the slice data processing unit 134 resume the interrupted macroblock output processing, and output the next MB line to store it in the divided stream buffer 152.
- the slice data processing unit 134 outputs the next MB line Lb2 and stores it in the first area df1 of the divided stream buffer 152.
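- the M1/M2/M3 exchange at an MB line boundary can be modeled as the following sequential sketch (hypothetical names; the real units operate as concurrent hardware, which is abstracted away here):

```python
def mb_line_boundary(header_scheduled, buffer):
    """One handshake round between the slice header insertion unit 133 and
    the slice data processing unit 134 at a detected MB line boundary."""
    # Slice header insertion unit: M1 tells whether a header will be output.
    m1 = "output" if header_scheduled else "non-output"
    # Slice data processing unit: set slice end information only when M1
    # says "output", then reply with M2 in either case.
    if m1 == "output":
        buffer.append("slice_end")
    m2 = "termination_done"
    # Slice header insertion unit: on M2, store the (duplicate) slice header
    # if one was scheduled, then release the data path with M3.
    if m2 == "termination_done" and m1 == "output":
        buffer.append("slice_header")
    return "resume"  # M3: macroblock output resumes

buf = ["MB_line"]
mb_line_boundary(True, buf)
print(buf)  # ['MB_line', 'slice_end', 'slice_header']
```

This ordering matters: the slice end information must land in the divided stream buffer before the new slice header, and the next MB line is output only after M3.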
- FIG. 12 is a block diagram showing the configuration of the slice header insertion unit 133.
- when functions and processing operations common to the division point detection units 139a and 139b are described with reference to FIG. 12, they are collectively referred to as the division point detection unit 139 without distinction.
- the slice header insertion unit 133 includes a NAL type determination unit 133a, a header insertion counter 133b, a header address update unit 133c, and a header buffer 133d.
- the NAL type determination unit 133a determines whether the type of the NAL unit is a slice. If the NAL type determination unit 133a determines that the slice is a slice, the NAL type determination unit 133a notifies the header buffer 133d and the header insertion counter 133b that the type of the NAL unit is a slice.
- upon receipt of the notification from the NAL type determination unit 133a, the header buffer 133d extracts and stores the slice header from the NAL unit if the slice header is included in the NAL unit corresponding to the notification. Furthermore, if a new NAL unit includes a new slice header, the header buffer 133d replaces the already stored slice header with the new slice header. That is, the header buffer 133d always holds the latest slice header.
- the header insertion counter 133b counts the number of MB line boundaries (MB line ends) detected in the encoded stream by the division point detection unit 139, in order to specify the timing for generating and inserting a duplicate slice header. Specifically, the header insertion counter 133b counts values from 0 to 4 (the total number of decoding engines 120). When receiving a notification from the NAL type determination unit 133a, the header insertion counter 133b resets the count value to 0 if a slice header is included in the NAL unit corresponding to the notification. Further, the header insertion counter 133b counts up the count value by 1 each time the boundary of an MB line (the end of an MB line) is detected. When an MB line boundary is detected after the count value has reached 4, the header insertion counter 133b keeps the count value at 4 without counting up.
- that is, when the boundary of an MB line is detected, the header insertion counter 133b updates or holds the count value (resetting it to 0 if a slice header is included in the NAL unit), and outputs a slice header processing content notification M1 indicating “output” or “non-output” to the division point detection unit 139. Specifically, when the count value immediately after an MB line boundary is detected is 0 to 3, the header insertion counter 133b outputs a slice header processing content notification M1 indicating “output”; when the count value is 4, it outputs a slice header processing content notification M1 indicating “non-output”.
- the header insertion counter 133b outputs a slice header processing content notification M1 indicating “output” not only when the boundary of the MB line is detected but also when the count value becomes zero.
- when the header insertion counter 133b outputs the slice header processing content notification M1 to the division point detection unit 139 and then receives the termination processing completion notification M2 from the division point detection unit 139, if the output slice header processing content notification M1 indicates “output”, the slice header stored in the header buffer 133d is output from the header buffer 133d. Thereafter, the header insertion counter 133b outputs a slice header processing completion notification M3 to the division point detection unit 139.
- at this time, the slice header insertion unit 133 selects the storage destination area of the divided stream buffer 152 according to the value indicated by the MB address information included in the slice header, and stores the slice header in the selected storage destination area.
- on the other hand, when the output slice header processing content notification M1 indicates “non-output”, the header insertion counter 133b does not output the slice header stored in the header buffer 133d, but keeps it stored. Thereafter, as described above, the header insertion counter 133b outputs a slice header processing completion notification M3 to the division point detection unit 139.
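- one plausible reading of the counter's behavior can be sketched as follows (hypothetical class; it assumes the M1 decision uses the count value after the count-up at each boundary, and that the original slice header is output at count 0 when a slice header is detected, matching the example with slices A to C below):

```python
class HeaderInsertionCounter:
    N = 4  # total number of decoding engines 120

    def __init__(self):
        self.count = self.N  # no pending slice header yet
        self.outputs = []    # trace of header output decisions

    def on_slice_header(self):
        # A new slice header resets the count to 0; the original slice
        # header is output immediately (count value 0).
        self.count = 0
        self.outputs.append("output")

    def on_mb_line_boundary(self):
        # Count up, saturating at N, then decide M1 from the new value:
        # 1 to 3 -> a duplicate slice header is still needed,
        # 4 -> every decoding engine already has a header for this slice.
        self.count = min(self.count + 1, self.N)
        m1 = "output" if self.count < self.N else "non-output"
        self.outputs.append(m1)
        return m1

# One slice header followed by four MB line boundaries: the original header
# plus three duplicates cover all four decoding engines, then "non-output".
c = HeaderInsertionCounter()
c.on_slice_header()
for _ in range(4):
    c.on_mb_line_boundary()
print(c.outputs)  # ['output', 'output', 'output', 'output', 'non-output']
```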
- FIG. 13 is a diagram showing MB lines and slice headers allocated to the first area df1 to the fourth area df4 of the divided stream buffer 152.
- the stream division unit 130 reads the slices A to C of the encoded stream stored in the stream buffer 151 in the order of slice A, slice B, and slice C.
- the header buffer 133d of the slice header insertion unit 133 extracts and stores the slice header ha from the head of the slice A.
- at this time, the header insertion counter 133b resets the count value to zero. Since the count value is 0, the header buffer 133d outputs the stored slice header ha, thereby storing the slice header ha in the first area df1 of the divided stream buffer 152.
- when the slice header ha is output from the header buffer 133d, the slice data processing unit 134 outputs the first MB line following the slice header ha of slice A in the encoded stream, thereby storing the first MB line in the first area df1 of the divided stream buffer 152. As a result, the slice header ha and the first MB line belonging to slice A are stored in the first area df1 in that order.
- when the first MB line is output from the slice data processing unit 134, the header insertion counter 133b described above increments the count value to 1. Since the count value is 1 at the end of the first MB line, the header buffer 133d outputs the stored slice header ha as the duplicate slice header ha′, thereby storing the duplicate slice header ha′ in the second area df2 of the divided stream buffer 152. Note that the MB address information of the duplicate slice header ha′ is updated by the header address update unit 133c.
- when the duplicate slice header ha′ is output from the header buffer 133d, the slice data processing unit 134 outputs the second MB line following the first MB line in the encoded stream, thereby storing the second MB line in the second area df2 of the divided stream buffer 152.
- the second MB line includes a plurality of macroblocks belonging to slice A, a slice header hb of slice B, and a plurality of macroblocks belonging to slice B. Therefore, the division point detection unit 139 of the slice data processing unit 134 first stores all macroblocks belonging to the slice A included in the second MB line in the second area df2. When the storage ends, the division point detection unit 139 temporarily stops the macroblock output processing, and waits until the slice header processing content notification M1 is received from the slice header insertion unit 133.
- Upon detecting the slice header hb of the slice B, the slice header insertion unit 133 resets the count value to 0, and sends a slice header processing content notification M1 indicating “output” to the division point detection unit 139.
- Upon receiving this slice header processing content notification M1, the division point detection unit 139 performs slice termination processing on the end of the slice A in the second area df2, and sends a termination processing completion notification M2 to the slice header insertion unit 133.
- the slice header insertion unit 133 that has received this termination processing completion notification M2 stores the slice header hb of the slice B in the second area df2, and sends a slice header processing completion notification M3 to the division point detection unit 139.
- The dividing point detection unit 139 then resumes the interrupted output processing and stores the plurality of macroblocks belonging to the next slice B included in the second MB line in the second area df2.
- Further, the header buffer 133d of the slice header insertion unit 133 extracts and stores the slice header hc from the head of the slice C following the second MB line in the encoded stream.
- At this time, the header insertion counter 133b resets the count value to 0. Therefore, since the count value is 0 at the end of the second MB line, the header buffer 133d outputs the stored slice header hc, thereby storing the slice header hc in the third area df3 of the divided stream buffer 152.
- When the slice header hc is output from the header buffer 133d, the slice data processing unit 134 outputs the third MB line following the slice header hc of the slice C in the encoded stream, thereby storing the third MB line in the third area df3 of the divided stream buffer 152. As a result, the data is stored in the third area df3 in the order of the slice header hc and then the third MB line belonging to the slice C.
- When the third MB line is output from the slice data processing unit 134, the header insertion counter 133b described above increments the count value to 1. Therefore, since the count value is 1 at the end of the third MB line, the header buffer 133d outputs the stored slice header hc as the duplicate slice header hc′, thereby storing the duplicate slice header hc′ in the fourth area df4 of the divided stream buffer 152.
- Note that the MB address information of the duplicate slice header hc′ is updated by the header address update unit 133c.
- data is sequentially stored in the first area df1 to the fourth area df4 of the divided stream buffer 152.
- the first divided stream to the fourth divided stream are stored in the first area df1 to the fourth area df4, respectively.
- FIGS. 14A and 14B are diagrams illustrating positions where slice end information is set.
- In FIG. 14A, the picture includes a slice A and a slice B, and the top macroblock of the slice B following the slice A is at the left end of an MB line.
- In this case, immediately before the slice header hb of the slice B is output from the slice header insertion unit 133, the division point detection unit 139 of the slice data processing unit 134 sets the slice end information ea′ of the slice A at the end of the MB line of the slice A that is 4 MB lines before the first MB line of the slice B.
- Further, immediately before the duplicate slice header hb′ of the slice B is output from the slice header insertion unit 133, the division point detection unit 139 of the slice data processing unit 134 sets the slice end information ea′ of the slice A at the end of the MB line of the slice A that is 3 MB lines before the first MB line of the slice B.
- In this way, the slice end information ea′ is set at the end of each MB line that is 1 to 4 MB lines before the first MB line of the slice B.
- In FIG. 14B, the picture includes a slice A and a slice B, and the top macroblock of the slice B following the slice A is at a position other than the left end of an MB line.
- In this case, immediately before the duplicate slice header hb′ of the slice B is output from the slice header insertion unit 133, the division point detection unit 139 of the slice data processing unit 134 sets the slice end information ea′ of the slice A at the end of the MB line of the slice A that is 3 MB lines before the MB line including the slice header hb of the slice B.
- In this way, the slice end information ea′ is set at the boundary of the slices within the MB line and at the end of each MB line that is 1 to 3 MB lines before that MB line.
- FIG. 15 is a flowchart showing the operation of the dividing point detection unit 139.
- The division point detection unit 139 specifies and outputs data to be processed (for example, a macroblock) from the head side of the encoded stream, and stores it in the divided stream buffer 152 (step S100).
- At this time, the division point detection unit 139 manages the address (MB address value) of the output macroblock. That is, if the output macroblock is the first macroblock of a slice included in the encoded stream, the division point detection unit 139 updates the MB address value of the output macroblock to the value indicated by the MB address information included in the slice header of that slice. The division point detection unit 139 then increments the MB address value each time a macroblock subsequent to the head macroblock is output.
- the MB address value is an integer of 0 or more.
- Here, W = “pic_width_in_mbs_minus1” + 1, which indicates the number of macroblocks in the horizontal direction of the picture.
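The address bookkeeping above can be sketched as follows. The value of W and the helper names are assumptions chosen for illustration, not values or names taken from the patent.

```python
# Sketch of the MB address bookkeeping: the address starts from the value
# carried in the slice header's MB address information and is incremented
# per macroblock; W ("pic_width_in_mbs_minus1" + 1) then gives the MB line
# of a macroblock and whether an MB line boundary has been reached.

W = 12  # assumed number of macroblocks per MB line

def mb_position(mb_addr):
    """Return (mb_line, column) of a macroblock from its MB address value."""
    return mb_addr // W, mb_addr % W

def is_line_end(mb_addr):
    """True if this macroblock is the last one of its MB line."""
    return mb_addr % W == W - 1

# mb_position(25) -> (2, 1); is_line_end(23) -> True
```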
- Next, the division point detection unit 139 determines whether there is data to be processed next in the encoded stream, that is, whether or not the output process should be terminated.
- Then, when the slice header processing content notification M1 received from the slice header insertion unit 133 indicates “output” (Yes in step S106), slice end processing is executed (step S108). That is, when the encoded stream is decoded by CABAD, the division point detection unit 139 sets “1” in “end_of_slice_flag” as slice end information. Further, when the encoded stream is decoded by CAVLD, the division point detection unit 139 adds “rbsp_slice_trailing_bits” as slice end information.
- After determining that “output” is not indicated in step S106 (No in step S106), or after executing the slice termination process in step S108, the division point detection unit 139 sends a termination processing completion notification M2 to the slice header insertion unit 133 (step S110). Thereafter, the dividing point detection unit 139 determines whether or not a slice header processing completion notification M3 has been received from the slice header insertion unit 133 (step S112). If the division point detection unit 139 determines that the slice header processing completion notification M3 has not been received (No in step S112), it waits until the slice header processing completion notification M3 is received.
- When the division point detection unit 139 determines in step S112 that the slice header processing completion notification M3 has been received (Yes in step S112), it determines whether there is data to be processed next in the encoded stream, that is, whether or not the output process should be terminated (step S114).
- When the division point detection unit 139 determines in step S114 that the process should end (Yes in step S114), the process ends; when it determines that the process should not end (No in step S114), it again outputs the next data to be processed and stores it in the divided stream buffer 152 (step S100).
- the skip run modification unit 137a modifies “mb_skip_run” that is the MB skip run information as described above.
- the MB skip run information is a codeword included in an encoded stream when CAVLC is used as an encoding method, and indicates the number of skip macroblocks (hereinafter also referred to as “length”).
- the length of MB skip run information means the number of consecutive skip macroblocks indicated by the MB skip run information.
- FIG. 16A and FIG. 16B are explanatory diagrams for explaining the MB skip run information correction processing.
- the MB lines included in the first divided stream are L1 and L5, and there are two skip macroblocks continuous at the end of the MB line L5.
- The MB lines included in the second divided stream are L2 and L6, and within this divided stream there are nine consecutive skip macroblocks spanning from the end of L2 to the head of L6.
- The MB lines included in the third divided stream are L3 and L7, and there are three consecutive skip macroblocks at the head of L3.
- the MB lines included in the fourth divided stream are L4 and L8, and there are no skip macroblocks.
- The MB skip run information originally included in the encoded stream has lengths 8 and 6, whereas it is necessary to output MB skip run information of length 2 for the first divided stream, MB skip run information of length 9 for the second divided stream, and MB skip run information of length 3 for the third divided stream. That is, when MB skip run information indicating a number of consecutive skip macroblocks straddles a plurality of MB lines, those MB lines have a dependency relationship with each other, and the MB skip run information must be modified so that this dependency is replaced by a new dependency that follows the order of the MB lines in each divided stream.
- Therefore, the skip run correcting unit 137a first divides the MB skip run information at the MB line boundary.
- Dividing the MB skip run information means dividing the number of skip macroblocks continuous across a plurality of MB lines, and generating MB skip run information indicating the number of skip macroblocks for each MB line.
- Specifically, the skip run modification unit 137a divides the MB skip run information corresponding to the set of eight skip macroblocks existing across the MB lines L2 to L3 into MB skip run information corresponding to the set of five skip macroblocks included in the MB line L2 and MB skip run information corresponding to the set of three skip macroblocks included in the MB line L3. Similarly, the skip run modification unit 137a divides the MB skip run information corresponding to the set of six skip macroblocks existing across the MB lines L5 to L6 into MB skip run information corresponding to the set of two skip macroblocks included in the MB line L5 and MB skip run information corresponding to the set of four skip macroblocks included in the MB line L6.
- the skip run modification unit 137a recombines a plurality of MB skip run information corresponding to a set of skip macroblocks continuous in each divided stream among the divided MB skip run information.
- Here, the recombination of a plurality of MB skip run information means converting the plurality of MB skip run information into one MB skip run information indicating the sum of the numbers indicated by each of them.
- For example, in the second divided stream, the set of five skip macroblocks at the end of the MB line L2 and the set of four skip macroblocks at the head of the MB line L6 are consecutive; the skip run modification unit 137a therefore combines the two MB skip run information corresponding to these two sets and converts them into one MB skip run information of length 9.
- the skip run modification unit 137a encodes the MB skip run information obtained in this manner again, and outputs the encoded MB skip run information.
- In this way, the skip run modification unit 137a divides the input MB skip run information at the MB line boundary and then recombines it as necessary, so that MB skip run information of an appropriate length can be output for each divided stream.
- The skip run modification unit 137a recombines the MB skip run information that is consecutive within each divided stream, rather than leaving it divided, because the H.264/AVC standard does not allow a plurality of MB skip run information to exist consecutively. That is, since the H.264/AVC standard does not allow the number of consecutive skip macroblocks to be expressed using a plurality of MB skip run information, the skip run modification unit 137a combines the plurality of MB skip run information into one. Because the skip run correction unit 137a corrects the MB skip run information into a format compliant with the H.264/AVC standard in this way, each divided stream is also generated in a format compliant with the H.264/AVC standard. As a result, the subsequent decoding engine 120 can decode the divided stream without requiring special processing.
- Further, the skip run correction unit 137a performs a process of recombining a plurality of MB skip run information indicating the number of skip macroblocks forming one continuous run in each divided stream. That is, the skip run modification unit 137a combines the MB skip run information having a length of 5 and the MB skip run information having a length of 3 in the second divided stream, and converts them into MB skip run information having a length of 8.
- the skip run modification unit 137a re-encodes the MB skip run information obtained in this way, and outputs the encoded MB skip run information.
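The division at MB line boundaries and the per-stream recombination described above can be sketched as follows. The picture width W, the four-way round-robin mapping of MB lines to divided streams, and all function names are assumptions made for this illustration, and the per-stream totals stand in for the unit's streaming accumulate-and-add pipeline.

```python
# Illustrative sketch (not the patent's implementation): splitting one
# mb_skip_run value at MB line boundaries, then recombining the pieces
# that become consecutive within a divided stream.

W = 12            # assumed macroblocks per MB line ("pic_width_in_mbs_minus1" + 1)
NUM_STREAMS = 4   # assumed number of divided streams (MB lines assigned round-robin)

def split_at_line_boundaries(start_addr, run_len):
    """Divide a skip run into (mb_line, count) pieces, one piece per MB line."""
    pieces = []
    addr, remaining = start_addr, run_len
    while remaining > 0:
        line = addr // W
        room = W - (addr % W)      # macroblocks left on this MB line
        n = min(room, remaining)
        pieces.append((line, n))
        addr += n
        remaining -= n
    return pieces

def recombine(pieces):
    """Sum the pieces that land in the same divided stream."""
    totals = {}                     # stream id -> combined run length
    for line, n in pieces:
        stream = line % NUM_STREAMS  # assumed round-robin line-to-stream mapping
        totals[stream] = totals.get(stream, 0) + n
    return totals

# A run of 8 skip macroblocks starting 5 macroblocks before the end of the
# second MB line splits into 5 + 3:
pieces = split_at_line_boundaries(1 * W + 7, 8)  # -> [(1, 5), (2, 3)]
```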
- FIG. 17 is a block diagram showing the configuration of the skip run correction unit 137a.
- the skip run correcting unit 137a includes a skip run extracting unit 160, a skip run dividing unit 161, a skip run accumulating / holding unit 162, an adding unit 163, and a skip run encoding unit 164.
- the skip run extraction unit 160 detects and extracts MB skip run information from the stream output from the slice data layer decoding unit.
- the extracted MB skip run information is output to the skip run division unit 161, and other information is output to the division point detection unit 139a as it is.
- The skip run division unit 161 determines, from the length of the input MB skip run information and the position information of the macroblock where the MB skip run information exists, whether or not the MB skip run information indicates a number of skip macroblocks continuing across a plurality of MB lines.
- If it does, the skip run division unit 161 divides the MB skip run information using the MB line boundary as a division point.
- Of the divided information, the MB skip run information indicating the number of skip macroblocks beyond the MB line boundary is output to the adding unit 163, and the MB skip run information indicating the number of skip macroblocks that do not cross the MB line boundary is output to the skip run accumulation / holding unit 162.
- Note that the MB skip run information may indicate a number of skip macroblocks continuing across three or more MB lines. In that case, since two or more MB line boundaries exist between the consecutive skip macroblocks, the skip run dividing unit 161 repeatedly performs division using each MB line boundary as a division point. At that time, among the divided MB skip run information, the MB skip run information indicating the number of skip macroblocks beyond the last MB line boundary is output to the adding unit 163, and the other MB skip run information is output to the skip run accumulation / holding unit 162.
- the skip run accumulating / holding unit 162 receives the divided MB skip run information output from the skip run dividing unit 161, and holds the value for each divided stream as preceding MB skip run information. That is, when the skip run accumulating / holding unit 162 receives MB skip run information included in the first divided stream, the skip run accumulating / holding unit 162 holds the preceding MB skip run information of the first divided stream. Also, the skip run accumulation / holding unit 162 holds the MB skip run information included in the second divided stream as the preceding MB skip run information of the second divided stream.
- Similarly, when the skip run accumulation / holding unit 162 receives the MB skip run information included in the third divided stream, it holds it as the preceding MB skip run information of the third divided stream. Further, when the MB skip run information included in the fourth divided stream is received, the skip run accumulation / holding unit 162 holds it as the preceding MB skip run information of the fourth divided stream.
- If preceding MB skip run information is already held for the corresponding divided stream, the skip run accumulating / holding unit 162 accumulates the MB skip run information received from the skip run dividing unit 161 onto it, and holds the resulting value for each divided stream as the new preceding MB skip run information. That is, the skip run accumulating / holding unit 162 adds, for each divided stream, the MB skip run information received from the skip run splitting unit 161 to the held preceding MB skip run information.
- The adding unit 163 receives the MB skip run information from the skip run dividing unit 161, and reads out the preceding MB skip run information held in the skip run accumulating / holding unit 162 for the divided stream that includes the MB skip run information. Then, the adding unit 163 adds the value of the MB skip run information received from the skip run dividing unit 161 to the value of the preceding MB skip run information read from the skip run accumulating / holding unit 162, and outputs the result to the skip run encoding unit 164 as corrected MB skip run information. By this processing, the MB skip run information is recombined.
- The skip run encoding unit 164 re-encodes the corrected MB skip run information output from the adding unit 163 and outputs it to the division point detection unit 139a, thereby re-embedding the corrected MB skip run information in the stream.
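As an illustration of the re-encoding step: in H.264/AVC CAVLC, “mb_skip_run” is coded as an unsigned Exp-Golomb codeword ue(v), so re-embedding a corrected run length amounts to re-emitting such a codeword. A minimal sketch (the function name and the bit-string output format are illustrative only, not the patent's circuit):

```python
# Unsigned Exp-Golomb ue(v) coding: (N - 1) leading zeros followed by the
# N-bit binary representation of (value + 1), where N = bit length of value + 1.

def ue_encode(value):
    """Encode a non-negative integer as a ue(v) Exp-Golomb bit string."""
    code = value + 1
    num_bits = code.bit_length()
    return "0" * (num_bits - 1) + format(code, "b")

# Corrected run lengths from the example above:
# ue_encode(2) -> "011", ue_encode(9) -> "0001010", ue_encode(3) -> "00100"
```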
- FIG. 18 is a flowchart showing the MB skip run information correction operation by the skip run correction unit 137a.
- First, the skip run correction unit 137a determines whether or not the stream being processed has reached the end of the slice (step S200). This is because MB skip run information never indicates a number of skip macroblocks continuing across a slice boundary, so when the end of the slice is reached, all the preceding MB skip run information held inside the skip run modification unit 137a must be output. If it is determined here that the end of the slice has been reached (Yes in step S200), the process proceeds to step S224; details of that processing will be described later.
- the skip run modification unit 137a checks whether or not the MB skip run information “mb_skip_run” has been acquired (step S202). If the MB skip run information has not been acquired yet (No in step S202), the skip run correction unit 137a returns to the beginning of the process and reads the stream again.
- If the MB skip run information has been acquired (Yes in step S202), the skip run correction unit 137a calculates the position of the macroblock in the picture from the address information of the macroblock including the corresponding MB skip run information (step S204).
- the skip run modification unit 137a specifies the position of the first skip macro block among the consecutive skip macro blocks indicated by the acquired MB skip run information.
- After calculating the position of the macroblock in step S204, the skip run correction unit 137a checks, from the position information of the macroblock and the length of the MB skip run information, whether or not the consecutive skip macroblocks reach an MB line boundary, thereby determining whether or not the acquired MB skip run information needs to be divided (step S206).
- Here, the skip run correction unit 137a determines whether the consecutive skip macroblocks reach the MB line boundary because the stream is divided in units of MB lines; when a different division boundary is used, the skip run correction unit 137a may similarly determine, from the macroblock position information and the length of the MB skip run information, whether the consecutive skip macroblocks reach that division boundary.
- If it is determined that the consecutive skip macroblocks reach the MB line boundary (Yes in step S206), the skip run modification unit 137a proceeds to step S216 in order to divide the MB skip run information; details will be described later.
- On the other hand, if it is determined that they do not reach the MB line boundary (No in step S206), the skip run modification unit 137a determines whether or not the corresponding MB skip run information is located at the head of an MB line (step S208). That is, the skip run modification unit 137a determines whether or not the corresponding MB skip run information needs to be combined with the preceding MB skip run information.
- When it is determined that the corresponding MB skip run information is located at the head of the MB line (Yes in step S208), the skip run correction unit 137a recombines the MB skip run information by adding the preceding MB skip run information “prev_mb_skip_run” to the MB skip run information “mb_skip_run” (step S210). Note that this recombination of MB skip run information must be performed independently for each divided stream. That is, the preceding MB skip run information is held in the skip run modification unit 137a for each divided stream, and the preceding MB skip run information corresponding to the MB line containing the corresponding MB skip run information is the one that is added.
- When it is determined in step S208 that the corresponding MB skip run information is not located at the head of the MB line (No in step S208), or after the MB skip run information is combined in step S210, the skip run correction unit 137a re-encodes the MB skip run information (step S212). This is done so that the divided stream has a format compliant with the H.264/AVC standard.
- the skip run modification unit 137a outputs the re-encoded MB skip run information to the division point detection unit 139a and ends the process (step S214).
- On the other hand, if it is determined in step S206 that the consecutive skip macroblocks reach the MB line boundary (Yes in step S206), the skip run correction unit 137a divides the corresponding MB skip run information into a first half and a second half at the division point (step S216). Note that if the consecutive skip macroblocks reach the MB line boundary but do not cross it, the length of the second half may be zero.
- Next, the skip run modification unit 137a holds the first half of the divided MB skip run information as the preceding MB skip run information “prev_mb_skip_run”. At this time, if preceding MB skip run information is already held in the skip run correction unit 137a, the length of the first half of the newly generated MB skip run information is added to the length of the held preceding MB skip run information, and the sum is held (step S218). As described above, the preceding MB skip run information is held independently for each divided stream, as the preceding MB skip run information of the divided stream corresponding to the MB line in which it is contained.
- Next, the skip run correcting unit 137a takes the second half of the divided MB skip run information as new MB skip run information (step S220), and determines whether or not its length is 0 (step S222).
- If it is determined in step S222 that the length of the new MB skip run information is 0 (Yes in step S222), the skip run correction unit 137a determines that there is no more MB skip run information to be processed, and ends the process.
- If it is determined in step S222 that the length of the new MB skip run information is not 0 (No in step S222), the skip run correction unit 137a determines that MB skip run information to be processed still remains, returns to step S204, and again performs the division, recombination, and output of the MB skip run information. Because the skip run correction unit 137a repeatedly divides and recombines in this way, MB skip run information straddling 3 or more MB lines is also correctly divided and recombined.
- On the other hand, if it is determined in step S200 that the end of the slice has been reached (Yes in step S200), then in order to output the preceding MB skip run information held in the skip run modification unit 137a, the preceding MB skip run information “prev_mb_skip_run” is substituted as it is for the MB skip run information “mb_skip_run” (step S224).
- Then, in step S212, the skip run correction unit 137a re-encodes the replaced MB skip run information and outputs it to the division point detection unit 139a, ending the process.
- the preceding MB skip run information is output for all the divided streams.
- In the skip run modification unit 137a, the processing described above is repeated until the end of the stream.
- the skip run correction unit 137a appropriately divides and recombines MB skip run information.
- the QP delta correction units 138a and 138b will be described in detail.
- When describing functions and processing operations common to the QP delta correction units 138a and 138b, they are collectively referred to as the QP delta correction unit 138 without distinction.
- the QP delta correction unit 138 corrects the QP variation “mb_qp_delta” that exists in principle for each macroblock.
- The QP change amount is a codeword included in the encoded stream in order to decode the QP value, which is the quantization parameter of a macroblock, and indicates the difference value between the QP value of the target macroblock and that of the macroblock processed immediately before.
- the decoding of the QP value is performed by the following equation (1).
- QPY = ((QPY,PREV + mb_qp_delta + 52 + 2 * QpBdOffsetY) % (52 + QpBdOffsetY)) - QpBdOffsetY   (1)
- Here, QPY indicates the luminance QP value of the processing target macroblock, and QPY,PREV indicates the luminance QP value of the immediately preceding macroblock.
- When the bit depth of the pixel is 8, QpBdOffsetY is 0, so Equation (1) reduces to the following Equation (2), and the QP value is decoded using the QP variation “mb_qp_delta” so as to fall in the range of 0 to 51.
- QPY = (QPY,PREV + mb_qp_delta + 52) % 52   (2)
- “mb_qp_delta” is a value that can take the range of -26 to +25.
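A minimal sketch of the QP decoding of Equation (1); the function name is an assumption for illustration. QpBdOffsetY equals 6 × bit_depth_luma_minus8, so it is 0 for 8-bit video, which gives the simplified Equation (2).

```python
# Sketch of QP decoding per Equation (1): the previous QP plus mb_qp_delta,
# wrapped into the valid QP range by the modulo term.

def decode_qp(qp_prev, mb_qp_delta, qp_bd_offset_y=0):
    """Recover a macroblock's luma QP from the previous QP and mb_qp_delta."""
    return ((qp_prev + mb_qp_delta + 52 + 2 * qp_bd_offset_y)
            % (52 + qp_bd_offset_y)) - qp_bd_offset_y

# With 8-bit video the result always lands in 0..51:
# decode_qp(26, 3) -> 29, decode_qp(1, -3) -> 50
```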
- As described above, the decoding of the QP value, which is a quantization parameter, has a dependency relationship between macroblocks that are consecutive in processing order, but if a slice boundary lies in between, the dependency relationship is canceled there. That is, the QP value is initialized with the slice QP value at the head of a slice, and for the first macroblock of a slice, the difference value between the QP value of that macroblock and the slice QP value is encoded as the QP change amount.
- FIG. 19A and FIG. 19B are explanatory diagrams for explaining the correction process of the QP change amount.
- In FIG. 19A, the macroblock originally processed immediately before macroblock C is macroblock B. Therefore, for macroblock C, the difference value between the QP value of macroblock B and the QP value of macroblock C is encoded as the QP change amount.
- However, if the decoding engine 120 decodes the second divided stream as it is, the QP change amount, which is the difference value between the QP value of macroblock B and that of macroblock C, is applied to the QP value of macroblock A, so the QP value of macroblock C cannot be decoded correctly. That is, the dependency between MB lines created by a QP change amount indicating the change between macroblocks straddling two MB lines is broken.
- Therefore, the QP delta correction unit 138 corrects the QP change amount so as to compensate for the change in the macroblock order caused by dividing the stream. That is, the QP change amount is corrected so that the dependency becomes a new dependency that follows the order of the MB lines in each divided stream.
- As a correction method, it is conceivable to decode the QP values once and then recalculate the QP change amounts based on the new macroblock order after stream division. However, this method requires the two processes of QP value decoding and QP change amount calculation, and the amount of processing in the QP delta correction unit 138 increases.
- Therefore, the QP delta correction unit 138 directly derives the corrected QP change amount without decoding the QP values, by accumulating, for each divided stream, the QP change amounts of the macroblocks that are not allocated to the target divided stream.
- FIG. 20 is an explanatory diagram for explaining the accumulation of QP variation.
- the horizontal axis represents the QP value
- QP1 to QP4 represent the QP values of consecutive macroblocks.
- Between the QP values, “mb_qp_delta” representing their difference value is shown.
- The two-digit number attached to the end of “mb_qp_delta” indicates, by its upper digit, the number of the QP value corresponding to the preceding macroblock and, by its lower digit, the number of the QP value corresponding to the subsequent macroblock.
- “mb_qp_delta12” represents a difference value between QP1 and QP2.
- Note that “mb_qp_delta” represents a difference value of QP values on an axis in which 0, the minimum QP value, and 51, the maximum QP value, are connected continuously.
- QP2 is obtained from QP1 and “mb_qp_delta12” using equation (2).
- QP3 is obtained from QP2 and “mb_qp_delta23”.
- QP4 is obtained from QP3 and “mb_qp_delta34”.
- the total amount of QP change represented by “mb_qp_delta12”, “mb_qp_delta23”, and “mb_qp_delta34” is equal to “mb_qp_delta14” indicating the difference value between QP1 and QP4. That is, it can be seen that in order to obtain the QP variation between non-adjacent macroblocks, all the QP variations “mb_qp_delta” between them may be accumulated.
- the accumulation is performed according to the following formula (3).
- acc_mb_qp_delta = (acc_mb_qp_delta + mb_qp_delta + 52) % 52   (3)
- acc_mb_qp_delta indicates accumulated “mb_qp_delta”.
- By accumulating “mb_qp_delta” in this way, the total amount of QP change can be obtained on the axis shown in FIG. 20, in which the minimum value 0 and the maximum value 51 are connected continuously.
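The accumulation of Equation (3) can be sketched as follows; the function name is an assumption for illustration.

```python
# Sketch of the accumulation of Equation (3): successive mb_qp_delta values
# are folded into a single value on the cyclic 0..51 axis, without ever
# decoding a QP value.

def accumulate(deltas):
    """Accumulate mb_qp_delta values per Equation (3)."""
    acc = 0
    for d in deltas:
        acc = (acc + d + 52) % 52
    return acc

# The three deltas between QP1 and QP4 collapse to their plain sum when
# that sum stays inside 0..51:
# accumulate([5, -3, 4]) -> 6
```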
- In the divided stream containing macroblock C, the macroblock located immediately before macroblock C is macroblock A. Therefore, macroblock C needs to contain, as its QP change amount, the difference value between the QP value of macroblock A and the QP value of macroblock C.
- the QP delta correction unit 138 accumulates the QP change amounts of all the macroblocks included in the MB lines L3 to L5 and the QP change amount of the macroblock C. In this way, by accumulating the QP change amounts of all the macroblocks between the macroblock A and the macroblock C, the modified QP change which is the difference value between the QP value of the macroblock A and the QP value of the macroblock C The amount can be determined.
- Note that the amount of QP change obtained here is derived from Equation (3), and is therefore a value in the range of 0 to 51. Since the original QP variation “mb_qp_delta” is a value in the range of -26 to +25, the QP delta correction unit 138 modifies the QP variation so as to fall within the range of -26 to +25 according to the following equation (4).
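Equation (4) is referenced but not reproduced in this excerpt, so the mapping below is an assumed wrap with the stated effect: an accumulated value in 0..51 is folded back into the signed range -26..+25.

```python
# Assumed form of the Equation (4) wrap: values 0..25 are kept as-is,
# values 26..51 wrap around to -26..-1.

def to_signed_delta(acc):
    """Map an accumulated value in 0..51 to an mb_qp_delta in -26..+25."""
    return acc if acc <= 25 else acc - 52

# to_signed_delta(6) -> 6, to_signed_delta(50) -> -2, to_signed_delta(26) -> -26
```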
- The same processing is performed on the first macroblock of every MB line. For example, the QP change amounts of all the macroblocks in the MB lines L4 to L6 are accumulated and reflected, and the corrected QP change amount of the corresponding macroblock is derived.
- the QP delta correction unit 138 re-encodes the modified QP variation obtained in this way, and outputs the encoded modified QP variation to the dividing point detection unit 139.
- the QP delta correction unit 138a performs encoding using the CAVLC method
- the QP delta correction unit 138b performs encoding using the CABAC method.
- As described above, the QP delta correction unit 138 can set an appropriate QP change amount for each divided stream by correcting the input QP change amount so as to match the order of the macroblocks in the divided stream. As a result, the subsequent decoding engine 120 can decode the divided stream without requiring special processing.
- FIG. 19B shows an example in which a slice is divided at the boundary between MB lines L4 and L5.
- MB lines L1 to L4 are included in slice A
- MB lines L5 to L8 are included in slice B.
- In the second divided stream, macroblock A and macroblock C are consecutive as in FIG. 19A, but the slices containing these macroblocks differ, so there is no dependency between macroblock A and macroblock C.
- the QP change amount of the macroblock C indicates a difference value between the QP value of the macroblock C and the slice QP value of the slice B. Need to be.
- the QP delta correction unit 138 accumulates the QP change amounts of the macroblocks included in the slice B between the macroblock A and the macroblock C, whereby the difference value from the slice QP value of the slice B can be obtained.
- whether or not the slice is actually divided cannot be determined until the head of slice B arrives.
- therefore, the QP delta correction unit 138 accumulates the QP change amounts of all the macroblocks included between the macroblock A and the macroblock C, and at the start of processing the macroblock at the head of the slice B, the accumulated QP change amount "acc_mb_qp_delta" is reset to zero. In this way, the QP change amount is accumulated only for the macroblocks included in the slice B, and the modified QP change amount of the macroblock C can be obtained correctly.
- the modified QP variation obtained in this way is encoded again, and the encoded modified QP variation is output to the dividing point detection unit 139.
- FIG. 21 is a flowchart showing the QP change correction process performed by the QP delta correction unit 138.
- the QP delta correction unit 138 calculates the position of the macro block in the picture from the address information of the processing target macro block (step S300).
- the QP delta correction unit 138 determines whether or not the processing target macroblock is the head of the slice (step S302).
- when it is determined that the processing target macroblock is the head of the slice (Yes in step S302), the accumulated QP change amounts "acc_mb_qp_delta" corresponding to all the divided streams are reset to 0 (step S304).
- in this way, whether or not the processing target macroblock is the first macroblock of the slice determines whether the accumulated QP change amounts are reset.
- when it is determined in step S302 that the processing target macroblock is not the head of the slice (No in step S302), or after the accumulated QP change amounts are reset in step S304, the QP delta correction unit 138 determines the divided-stream output destination of the processing target macroblock from the macroblock position information obtained in step S300 (step S306).
- when it is determined in step S306 that the output destination of the processing target macroblock is the first divided stream, the QP delta correction unit 138 accumulates the QP change amount of the processing target macroblock into the accumulated QP change amounts corresponding to the second, third, and fourth divided streams according to equation (3) (step S308).
- when it is determined in step S306 that the output destination of the processing target macroblock is the second divided stream, the QP delta correction unit 138 accumulates the QP change amount of the processing target macroblock into the accumulated QP change amounts corresponding to the first, third, and fourth divided streams according to equation (3) (step S310).
- if it is determined in step S306 that the output destination of the processing target macroblock is the third divided stream, the QP delta correction unit 138 accumulates the QP change amount of the processing target macroblock into the accumulated QP change amounts corresponding to the first, second, and fourth divided streams according to equation (3) (step S312).
- if it is determined in step S306 that the output destination of the processing target macroblock is the fourth divided stream, the QP delta correction unit 138 accumulates the QP change amount of the processing target macroblock into the accumulated QP change amounts corresponding to the first, second, and third divided streams according to equation (3) (step S314).
- in this way, the QP change amount is accumulated into the accumulated QP change amounts corresponding to the divided streams other than the divided stream that is the output destination of the processing target macroblock. This means that, in each divided stream, the QP change amounts of the macroblocks included in the 3 MB lines not assigned to that divided stream are accumulated. Consequently, the QP delta correction unit 138 can correct the difference value between the QP values of the macroblocks before and after the 3 MB lines that were not assigned to the target divided stream.
- next, the QP delta correction unit 138 determines from the macroblock position information obtained in step S300 whether the processing target macroblock is located at the head of an MB line (step S316).
- when the division unit is not an MB line (for example, when it is an MB line pair), the QP delta correction unit 138 may similarly determine from the macroblock position information whether the processing target macroblock is located at the head of the division unit.
- when the processing target macroblock is located at the head of an MB line (Yes in step S316), the QP delta correction unit 138 accumulates the QP change amount of the processing target macroblock into the accumulated QP change amount corresponding to the divided stream including the processing target macroblock according to equation (3).
- the QP delta correction unit 138 then corrects the obtained accumulated QP change amount so that it falls within the range of −26 to +25 according to equation (4), and replaces the QP change amount of the processing target macroblock with it.
- the QP delta correction unit 138 resets the accumulated QP change amount corresponding to the divided stream including the processing target macroblock to 0 (step S318).
- the accumulated QP variation is reflected in the QP variation of the macroblock at the head of the MB line, so that the QP variation is corrected.
- the QP delta correction unit 138 performs re-encoding processing of the QP change amount of the processing target macroblock (step S320).
- this re-encoding is performed because the format of the divided stream conforms to the H.264/AVC standard.
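For reference, in the CAVLC case H.264/AVC carries "mb_qp_delta" as a signed Exp-Golomb code (se(v)). The following sketch shows the generic se(v) bit-string construction from the standard; it illustrates what re-encoding a corrected delta amounts to, and is not code taken from this specification:

```python
def encode_ue(code_num):
    # Unsigned Exp-Golomb ue(v): leading zeros followed by the binary of
    # code_num + 1 (the number of leading zeros equals len(bits) - 1).
    bits = bin(code_num + 1)[2:]
    return "0" * (len(bits) - 1) + bits

def encode_se(value):
    # Signed Exp-Golomb se(v) mapping used for mb_qp_delta in CAVLC:
    # v > 0 maps to codeNum 2v - 1, v <= 0 maps to codeNum -2v.
    code_num = 2 * value - 1 if value > 0 else -2 * value
    return encode_ue(code_num)
```

For example, a corrected delta of 0 encodes as "1", +1 as "010", and −1 as "011", matching the standard se(v) code table.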
- the QP delta correction unit 138 outputs the re-encoded QP change amount to the division point detection unit 139 and ends the process (step S322).
- in this way, the QP delta correction unit 138 corrects the QP change amount for the macroblock at the head of each MB line, and re-encodes and outputs the QP change amount as-is for the macroblocks other than the head of the MB line.
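The flow of FIG. 21 can be sketched as follows. This is an illustrative reconstruction assuming four divided streams with MB lines assigned round-robin, equation (3) wrapping the sum into 0..51, and equation (4) folding it into −26..+25; all helper names are hypothetical:

```python
N_STREAMS = 4  # assumption: four divided streams, MB lines assigned round-robin

def correct_qp_deltas(macroblocks, mbs_per_line):
    """macroblocks: list of dicts with 'qp_delta' and optional 'slice_head'."""
    acc = [0] * N_STREAMS                        # acc_mb_qp_delta per stream
    out = []
    for addr, mb in enumerate(macroblocks):
        line = addr // mbs_per_line
        stream = line % N_STREAMS                # S306: output destination
        if mb.get("slice_head"):                 # S302/S304: reset at slice head
            acc = [0] * N_STREAMS
        # S308-S314: accumulate into every stream EXCEPT the destination,
        # i.e. the streams for which this MB line is a skipped line (eq. (3)).
        for s in range(N_STREAMS):
            if s != stream:
                acc[s] = (acc[s] + mb["qp_delta"] + 52) % 52
        qp_delta = mb["qp_delta"]
        if addr % mbs_per_line == 0:             # S316: head of an MB line
            total = (acc[stream] + qp_delta + 52) % 52
            qp_delta = total - 52 if total > 25 else total   # eq. (4)
            acc[stream] = 0                      # S318: reset own accumulator
        out.append(qp_delta)                     # S320/S322: re-encode & output
    return out
```

With two macroblocks per line, deltas of +3, +1 on one line and +2 at the head of the next line become +3, +1, +6: the head-of-line macroblock absorbs the +4 accumulated over the line its stream did not receive.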
- as described above, the encoded picture is divided into a plurality of MB lines (configuration units), and each of the plurality of MB lines is assigned, as a part of a divided stream, to one of the N decoding engines 120 and decoded. The burden of the decoding processing by the N decoding engines 120 can therefore be equalized, and the parallel decoding processing can be appropriately executed. For example, even when an H.264/AVC encoded picture is composed of one slice, since the encoded picture is divided into a plurality of MB lines, the N decoding engines 120 can share the burden equally, without one decoding engine 120 bearing the decoding of that one slice.
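The MB-line distribution described above can be sketched as a simple round-robin assignment (illustrative only; N = 4 and sequential line numbering are assumptions):

```python
def assign_mb_lines(num_mb_lines, n_engines=4):
    """Assign each MB line of a picture to one of n_engines divided streams."""
    streams = [[] for _ in range(n_engines)]
    for line in range(num_mb_lines):
        streams[line % n_engines].append(line)   # round-robin by MB line
    return streams

# A 4k2k picture (2160 rows / 16 rows per macroblock) has 135 MB lines; with
# four engines each receives 33 or 34 lines, so the load is nearly equal even
# when the whole picture is a single slice.
```

This is why a one-slice picture no longer forces a single engine to decode everything: the unit of distribution is the MB line, not the slice.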
- further, a slice straddling the plurality of MB lines is divided into a plurality of slice portions (for example, each of the MB lines L1 to L6 shown in FIG. 3), and each slice portion is assigned to a different divided stream. That is, one divided stream does not include an entire slice of the encoded picture; instead, it includes a slice portion group made up of one or more slice portions that are fragments of the slice (for example, the MB lines L2 and L6 included in the second divided stream shown in FIG. 3).
- such a slice portion group (MB lines L2, L6) may not include a slice header indicating its head or slice end information indicating its end.
- a plurality of MB lines may have a dependency relationship.
- for example, in H.264/AVC, a plurality of MB lines may have a dependency relationship through the MB skip run information "mb_skip_run" and the QP change amount "mb_qp_delta". If such an encoded stream is simply divided into a plurality of MB lines that are assigned to different divided streams, the dependency relationship between MB lines cannot be maintained correctly.
- the stream dividing unit 130 reconfigures the slice portion group as a new slice.
- thereby, the decoding engine 120 that decodes the divided stream including the slice portion group can easily recognize the slice portion group and decode it appropriately, without requiring special processing. That is, in the present embodiment, it is not necessary to provide a function or configuration for performing special processing in each of the N decoding engines 120, so the overall configuration of the image decoding device 100 can be simplified.
- the decoding process can be speeded up as compared with the image decoding apparatus of Patent Document 1.
- the image decoding apparatus of Patent Document 1 does not parallelize the variable length decoding of the encoded stream and the deblocking filter process.
- the image decoding apparatus disclosed in Patent Document 1 does not properly divide the encoded stream.
- on the other hand, in the image decoding apparatus according to the present embodiment, each of the decoding engines 120 can perform variable length decoding and deblocking filtering in parallel, like the decoding engine 1421 shown in FIG. As a result, the image decoding apparatus according to the present embodiment can speed up the decoding process.
- the capacity required for the intermediate data buffer can be reduced as compared with the image decoding apparatus disclosed in Patent Document 3.
- in the image decoding apparatus disclosed in Patent Document 3, the variable length code included in the encoded stream is decoded in parallel in units of pictures using a plurality of variable length decoding processing units, the decoded data is stored in an intermediate data buffer, and the subsequent image decoding processing unit performs decoding processing in parallel on the decoded data in units of MB lines.
- the image decoding apparatus disclosed in Patent Document 3 stores the variable length code in the intermediate data buffer in a decoded state, so that the capacity required for the intermediate data buffer increases.
- the intermediate data buffer requires a capacity capable of storing a plurality of pictures, and the size thereof is enormous.
- in contrast, in the present embodiment, the capacity of the divided stream buffer 152 can be kept small.
- such an image decoding apparatus 100 performs any one of high-resolution decoding, high-speed decoding, and multi-channel decoding, according to the mode information input to the M stream dividing units 130.
- FIG. 22A is an explanatory diagram for explaining high-resolution decoding.
- FIG. 22B is an explanatory diagram for describing high-speed decoding.
- FIG. 22C is an explanatory diagram for describing multi-channel decoding.
- in high-resolution decoding, the 4k2k encoded stream is divided into four divided streams in the manner described above, and each of the four divided streams is decoded by a respective decoding engine 120.
- since each of the four decoding engines 120 has a processing capability capable of decoding two channels of HD images (1920×1088 pixels, 60i), the image decoding apparatus 100 can process a 4k2k image (3840×2160 pixels, 60p) in real time.
- in high-speed decoding, the HD encoded stream is divided into four divided streams in the manner described above, and each of the four divided streams is decoded by a respective decoding engine 120. Since each of the four decoding engines 120 has a processing capability capable of decoding two channels of HD images (1920×1088 pixels, 60i), the image decoding apparatus 100 can decode an HD image at 8× speed (4 × 2).
- in multi-channel decoding, a plurality of encoded streams are input, and each decoding engine 120 decodes one of the plurality of encoded streams without stream division.
- in this case, the M stream division units 130 do not duplicate and insert the various NAL units such as SPS, PPS, and slices, and only transfer the encoded streams (channels) to the respective areas of the divided stream buffer 152.
- since each of the four decoding engines 120 has a processing capability capable of decoding two channels of HD images (1920×1088 pixels, 60i), the image decoding apparatus 100 can decode up to eight channels, that is, eight HD encoded streams, simultaneously.
- the clock frequency of the decoding engine 120 can be lowered to reduce power consumption.
- for example, the first decoding engine 120 and the second decoding engine 120 execute the decoding of the channels, and the remaining third decoding engine 120 and fourth decoding engine 120 are stopped. Alternatively, all of the first decoding engine 120 to the fourth decoding engine 120 are used with their clock frequencies halved.
- as described above, the image decoding apparatus 100 can switch the decoding process to any one of high-resolution decoding, high-speed decoding, and multi-channel decoding according to the mode information, so that its versatility can be improved.
- high-resolution decoding and high-speed decoding in the image decoding apparatus 100 are processes that divide an encoded stream into four divided streams and decode them in parallel, respectively. That is, only the resolution and frame rate (4k2k or HD) of the encoded stream to be decoded are different between the high resolution decoding and the high speed decoding.
- that is, the image decoding apparatus 100 switches the decoding process between high-resolution decoding or high-speed decoding and multi-channel decoding according to the mode information, and further switches between high-resolution decoding and high-speed decoding according to the resolution and frame rate of the encoded stream.
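The selection logic described above amounts to a two-level dispatch; a minimal sketch follows (the mode and resolution labels are illustrative placeholders, not syntax from this specification):

```python
def select_decoding_process(mode, resolution):
    """mode: 'split' (divide into four streams) or 'multi_channel';
    resolution: e.g. '4k2k' or 'HD'. All labels are illustrative."""
    if mode == "multi_channel":
        # One encoded stream per engine; no stream division is performed.
        return "multi-channel decoding"
    # Stream division is shared by the remaining two processes; only the
    # resolution/frame rate of the input stream distinguishes them.
    return "high-resolution decoding" if resolution == "4k2k" else "high-speed decoding"
```

This mirrors the point made above: high-resolution and high-speed decoding use the same division machinery, differing only in the input stream.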
- in Embodiment 1, the image decoding apparatus 100 corrected the MB skip run information and the QP change amount so as to change the dependency relationship between consecutive MB lines in the encoded stream before division to match the context of the MB lines in each divided stream.
- the encoded stream may include a macroblock that does not include the QP change amount.
- a macroblock that does not include a QP change amount corresponds to a macroblock that is not quantized.
- macroblocks that do not include the QP change amount are, for example, (1) a skip macroblock, (2) an uncompressed macroblock (I_PCM), or (3) a macroblock whose intra prediction mode is not Intra16×16 and whose "coded_block_pattern" is 0 (no non-zero coefficient is included).
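The three cases above can be expressed as a predicate; the field names below are hypothetical placeholders standing in for the corresponding H.264/AVC syntax elements:

```python
def has_qp_delta(mb):
    """True if the macroblock carries mb_qp_delta, per the three cases above."""
    if mb.get("skip"):                       # (1) skip macroblock
        return False
    if mb.get("pcm"):                        # (2) uncompressed macroblock (I_PCM)
        return False
    # (3) prediction mode other than Intra16x16 with coded_block_pattern == 0
    if mb.get("intra_mode") != "Intra16x16" and mb.get("coded_block_pattern") == 0:
        return False
    return True
```

When such a macroblock sits at the head of an MB line, there is no mb_qp_delta field available to rewrite, which is exactly the situation Embodiment 2 addresses by insertion rather than correction.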
- when such a macroblock that does not include the QP change amount is present at the head of an MB line, the image decoding apparatus 100 according to Embodiment 1 cannot correct the QP change amount, because there is no QP change amount to correct.
- therefore, one feature of the image decoding apparatus 200 according to the present embodiment is that, when generating a plurality of divided streams from an encoded stream that includes a macroblock without a QP change amount at the head of an MB line, it inserts a QP change amount into the divided stream containing that macroblock. Thereby, the image decoding apparatus 200 can appropriately set the QP change amount based on the new context in each divided stream.
- FIG. 23 is a block diagram showing the configuration of the image decoding apparatus according to Embodiment 2 of the present invention.
- the same components as those in FIG. 1 are denoted by the same reference numerals, and description thereof is omitted.
- the image decoding apparatus 200 includes a decoder 210 and a memory 150.
- the decoder 210 reads out and decodes the encoded stream stored in the stream buffer 151 of the memory 150 to generate decoded image data, and stores the decoded image data in the frame memory 153 of the memory 150.
- the decoder 210 also includes a stream division control unit 140, M stream division units (first stream division unit to Mth stream division unit) 230, and N decoding engines (first decoding engine to Nth decoding engine) 220.
- the M stream splitting units 230 execute processing equivalent to that of the M stream splitting units 130 except for the MB skip run information correction processing and the QP change amount correction processing. Details of the stream dividing unit 230 will be described later with reference to FIG. 24.
- the N decoding engines 220 correspond to N decoding units that decode each of the N divided streams in parallel.
- the N decoding engines 220 perform processing equivalent to that of the N decoding engines 120 except that the QP value is calculated using the accumulated QP change amount inserted in the divided stream by the stream dividing unit 230.
- FIG. 24 is a block diagram showing the configuration of the stream segmentation unit 230.
- the stream dividing unit 230 differs from the stream dividing unit 130 of Embodiment 1 in that it includes a skip run correcting unit 237a and QP delta inserting units 238a and 238b in place of the skip run correcting unit 137a and the QP delta correcting units 138a and 138b, while retaining the process management unit 130m and the selector Sct.
- the process management unit 130m acquires mode information and distribution control information, and controls the other components included in the stream dividing unit 230 according to that information. That is, the process management unit 130m controls, for example, the selector Sct so that the stream division processing illustrated in FIGS. 6A to 8 is performed, and outputs, or stops outputting, the divided stream generated from the SPS, the PPS, or the slice to be processed.
- when the MB skip run information indicates the number of macroblocks continuing across at least two MB lines assigned to different divided streams, the skip run modification unit 237a divides the MB skip run information so that it indicates the number of skipped macroblocks for each MB line. However, unlike in Embodiment 1, the skip run modification unit 237a does not combine a plurality of pieces of MB skip run information into one piece of MB skip run information in each divided stream.
- when the QP change amount indicates a change amount between macroblocks straddling two MB lines, the QP delta insertion units 238a and 238b calculate a QP change amount based on the new context in each divided stream. The QP delta insertion units 238a and 238b then output the calculated QP change amount to the dividing point detection unit 139 as a new QP change amount, so that the new QP change amount is inserted (set) into each divided stream. That is, the QP delta insertion units 238a and 238b do not correct the QP change amount included in each macroblock.
- the division point detection unit 139a includes the MB skip run information acquired from the skip run correction unit 237a and the accumulated QP change amount acquired from the QP delta insertion unit 238a in each of the divided streams.
- the division point detection unit 139b includes the accumulated QP change amount acquired from the QP delta insertion unit 238b in each of the divided streams.
- FIG. 25 is an explanatory diagram for explaining MB skip run information correction processing and QP change amount insertion processing.
- in the picture shown in FIG. 25, five consecutive skip macroblocks exist at the end of MB line L2, three at the beginning of MB line L3, two at the end of MB line L5, and four at the beginning of MB line L6.
- the skip macroblock does not include the QP change amount. Therefore, in the macroblock C, a difference value between the QP value of the macroblock B and the QP value of the macroblock C is encoded as a QP change amount.
- if the decoding engine 220 decodes the second divided stream as it is, the QP change amount, which is the difference value between the QP value of the macroblock B and the QP value of the macroblock C, is reflected in the QP value of the macroblock A, and as a result the QP value of the macroblock C cannot be correctly decoded.
- therefore, the QP delta insertion unit 238 outputs, to the division point detection unit 139, a new QP change amount to be inserted into the MB line so as to correct the change in the macroblock context caused by dividing the stream. That is, when the QP change amount indicates the difference value of the QP values between blocks straddling two MB lines and the two MB lines have a dependency relationship, the QP delta insertion unit 238 outputs to the dividing point detection unit 139 a new QP change amount to be inserted at the head of the MB line so as to break that dependency.
- in the present embodiment, the QP change amount is inserted at the head of the MB line, but it is not always necessary to insert it there; the QP change amount may be inserted at another place, such as in the first macroblock of the MB line.
- as methods of calculating the new QP change amount, a method of restoring the QP values of all the macroblocks, and a method of accumulating the QP change amounts of the macroblocks not assigned to the target divided stream, can be considered. In the present embodiment, the latter calculation method is described.
- specifically, the QP delta insertion unit 238 calculates a cumulative QP change amount by accumulating the QP change amounts of all the macroblocks in the MB lines L3 to L5 that include a QP change amount. Then, the QP delta insertion unit 238 encodes the calculated accumulated QP change amount again, and outputs the encoded accumulated QP change amount to the division point detection unit 139 as a new QP change amount. As a result, as shown in FIG. 25, the accumulated QP change amount is inserted at the head of the MB line L6. That is, the accumulated QP change amount and the QP change amount originally included in the macroblock C are both set in the second divided stream. Note that the detailed calculation method of the accumulated QP change amount is the same as that in Embodiment 1, and a description thereof is omitted.
- in order to insert an accumulated QP change amount at the head of each MB line, the QP delta insertion unit 238 also encodes the accumulated QP change amounts for the other MB lines and outputs them to the dividing point detection unit 139.
- for example, as the accumulated QP change amount to be inserted at the head of the MB line L7, the QP delta insertion unit 238 calculates a cumulative QP change amount by accumulating the QP change amounts of all the macroblocks in the MB lines L4 to L6 that include a QP change amount. Then, the QP delta insertion unit 238 encodes the calculated cumulative QP change amount and outputs the encoded cumulative QP change amount to the division point detection unit 139.
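The calculation of the inserted value can be sketched as follows, assuming the same 0..51 wrap of equation (3) and −26..+25 fold of equation (4) as in Embodiment 1 (names are illustrative):

```python
def cumulative_qp_for_line_head(qp_deltas_of_skipped_lines):
    """Sum the mb_qp_delta values of the intervening MB lines (those handled
    by the other divided streams) and fold the result into -26..+25, giving
    the value to insert at the head of this stream's next MB line."""
    acc = 0
    for delta in qp_deltas_of_skipped_lines:
        acc = (acc + delta + 52) % 52        # wrap as in equation (3)
    return acc - 52 if acc > 25 else acc     # fold as in equation (4)
```

Skip and I_PCM macroblocks simply contribute nothing to the input list, which is why the method works even when the head macroblock of the line carries no mb_qp_delta of its own.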
- in the skip run modification unit 237a, since the accumulated QP change amount is inserted at the head of each MB line, skip macroblocks do not continue across MB lines. That is, unlike the skip run modification unit 137a in Embodiment 1, the skip run modification unit 237a does not perform the MB skip run information combining process.
- for example, the skip run modification unit 237a divides the MB skip run information corresponding to the eight skip macroblocks that exist across the MB lines L2 to L3 into MB skip run information corresponding to the five skip macroblocks included in the MB line L2 and MB skip run information corresponding to the three skip macroblocks included in the MB line L3.
- similarly, the skip run correction unit 237a divides the MB skip run information corresponding to the six skip macroblocks existing across the MB lines L5 to L6 into MB skip run information corresponding to the two skip macroblocks included in the MB line L5 and MB skip run information corresponding to the four skip macroblocks included in the MB line L6.
- however, the skip run modification unit 237a does not recombine the MB skip run information corresponding to the five consecutive skip macroblocks included in the MB line L2 with the MB skip run information corresponding to the four consecutive skip macroblocks included in the MB line L6.
- the skip run correction unit 237a encodes the MB skip run information obtained in this way again, and outputs the encoded MB skip run information, similarly to the skip run correction unit 137a.
- FIG. 26 is a block diagram showing a configuration of the skip run correction unit 237a.
- the same components as those in FIG. 17 are denoted by the same reference numerals, and detailed description thereof is omitted.
- the skip run modification unit 237a includes a skip run extraction unit 160, a skip run division unit 161, and a skip run encoding unit 164. That is, the skip run correcting unit 237a has the same configuration as that of the skip run correcting unit 137a in Embodiment 1 except for the skip run accumulating / holding unit 162 and the adding unit 163. Note that the skip run division unit 161 outputs the divided MB skip run information to the skip run encoding unit 164.
- FIG. 27 is a flowchart showing the MB skip run information correction operation by the skip run correction unit 237a. Note that, in FIG. 27, steps that perform the same processing as in FIG. 18 are denoted by the same reference numerals, and detailed description thereof is omitted.
- the skip run correction unit 237a checks whether or not the MB skip run information “mb_skip_run” has been acquired (step S202). That is, the skip run correction unit 237a does not determine whether the stream being processed has reached the end of the slice. This is because, as will be described later, the skip run correction unit 237a does not hold the preceding MB skip run information internally, and therefore the output process of the preceding MB skip run information when the slice end is reached is unnecessary.
- if the MB skip run information has not been acquired (No in step S202), the skip run correction unit 237a returns to the beginning of the process and reads the stream again.
- when the MB skip run information has been acquired (Yes in step S202), the skip run correction unit 237a calculates the position of the macroblock in the picture from the address information of the macroblock including the MB skip run information, as in the first embodiment (step S204).
- the skip run modification unit 237a determines whether it is necessary to divide the acquired MB skip run information as in the first embodiment (step S206).
- if it is determined that the consecutive skip macroblocks reach an MB line boundary (Yes in step S206), the skip run modification unit 237a proceeds to step S216 to divide the MB skip run information; the details will be described later.
- on the other hand, if it is determined that the consecutive skip macroblocks do not reach an MB line boundary (No in step S206), the skip run modification unit 237a performs re-encoding processing of the MB skip run information (step S212). The processing of steps S208 and S210 shown in FIG. 18 is not executed here because the skip run correction unit 237a in the present embodiment does not recombine MB skip run information.
- the skip run modification unit 237a outputs the re-encoded MB skip run information to the division point detection unit 139a and ends the process (step S214).
- when it is determined in step S206 that the consecutive skip macroblocks reach an MB line boundary (Yes in step S206), the skip run correction unit 237a divides the MB skip run information into a first half and a second half using the MB line boundary as the division point, as in the first embodiment (step S216).
- next, the skip run modification unit 237a re-encodes the first half of the divided MB skip run information as MB skip run information (step S250), and then outputs the re-encoded MB skip run information to the division point detection unit 139a (step S252). That is, in the present embodiment, since it is not necessary to recombine MB skip run information, the preceding MB skip run information is not held internally as in the first embodiment.
- next, the skip run correcting unit 237a sets the latter half of the divided MB skip run information as new MB skip run information (step S220), and determines whether or not its length is 0, as in the first embodiment (step S222).
- if it is determined in step S222 that the length of the new MB skip run information is 0 (Yes in step S222), the skip run correction unit 237a determines that there is no more MB skip run information to be processed, and ends the process.
- if it is determined in step S222 that the length of the new MB skip run information is not 0 (No in step S222), the skip run correction unit 237a determines that MB skip run information to be processed still remains, returns to step S204, and divides and outputs the MB skip run information again.
- the skip run modification unit 237a repeats the processing described above until the end of the stream.
- the skip run correction unit 237a appropriately divides the MB skip run information.
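The division loop of FIG. 27 amounts to splitting a run at every MB line boundary it crosses; a minimal sketch follows (positions and lengths are counted in macroblocks; names are illustrative):

```python
def split_skip_run(start_addr, run_length, mbs_per_line):
    """Split mb_skip_run at each MB line boundary it crosses (steps S216-S222).

    start_addr is the address of the first skipped macroblock; returns the
    per-line run lengths."""
    runs = []
    while run_length > 0:
        room = mbs_per_line - (start_addr % mbs_per_line)  # MBs left on this line
        part = min(run_length, room)
        runs.append(part)                                  # one run per MB line
        start_addr += part
        run_length -= part
    return runs
```

Matching the FIG. 25 example: eight skips beginning five macroblocks before a line boundary split into runs of 5 and 3, and six skips beginning two macroblocks before a boundary split into runs of 2 and 4.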
- FIG. 28 is a flowchart showing the process of inserting the accumulated QP variation by the QP delta insertion unit 238.
- steps that perform the same processing as in FIG. 21 are denoted by the same reference numerals, and detailed description thereof is omitted.
- the QP delta insertion unit 238 executes the processing from step S300 to step S316 as in the first embodiment.
- when the processing target macroblock is located at the head of an MB line (Yes in step S316), the QP delta insertion unit 238 encodes the accumulated QP change amount corresponding to the divided stream to which the processing target macroblock is allocated, and outputs it to the dividing point detection unit 139 (step S352). That is, the QP delta insertion unit 238 outputs the accumulated QP change amount before outputting the MB information.
- the QP delta insertion unit 238 resets the accumulated QP change amount corresponding to the divided stream including the processing target macroblock to 0 (step S354). Then, the QP delta insertion unit 238 outputs the MB information to the division point detection unit 139 and ends the process (step S356). That is, if the QP change amount is included in the MB information, the QP delta insertion unit 238 outputs the QP change amount without modification.
- on the other hand, when the processing target macroblock is not located at the head of an MB line (No in step S316), the QP delta insertion unit 238 outputs the MB information to the division point detection unit 139 and ends the process (step S350). That is, also in this case, when a QP change amount is included in the MB information, the QP delta insertion unit 238 outputs the MB information without correcting the QP change amount.
- as described above, the QP delta insertion unit 238 outputs to the division point detection unit 139 both the new QP change amount (accumulated QP change amount) accumulated for insertion at the head of the MB line and the QP change amounts originally included in the encoded stream.
- with the image decoding apparatus 200 according to the present embodiment, similarly to the image decoding apparatus 100 according to the first embodiment, the burden of the decoding processing performed by the N decoding engines 220 can be equalized, and the parallel decoding processing can be appropriately executed.
- in addition, since the image decoding apparatus 200 inserts a new QP change amount into the MB line, a correct QP value can be obtained when the decoding processes are performed in parallel.
- such an image decoding apparatus 200, like the image decoding apparatus 100 according to the first embodiment, performs any one of the high-resolution decoding shown in FIG. 22A, the high-speed decoding shown in FIG. 22B, and the multi-channel decoding shown in FIG. 22C, according to the mode information input to the M stream division units 230. As a result, the image decoding apparatus 200 can switch the decoding process among high-resolution decoding, high-speed decoding, and multi-channel decoding according to the mode information, so that its versatility can be improved.
- FIG. 29 is a block diagram showing a configuration of an image encoding device according to Embodiment 3 of the present invention.
- the image encoding apparatus 300 is an apparatus that appropriately executes parallel processing of encoding with a simple configuration, and includes an encoder 310 and a memory 360.
- the memory 360 has an area for storing original image data input to the encoder 310 and data generated intermediately by the encoder 310.
- the memory 360 includes a frame memory 361, a divided stream buffer 362, and M partial stream buffers (first partial stream buffer to Mth partial stream buffer) 363.
- the frame memory 361 stores original image data of a picture to be encoded and N divided local decoded image data generated by N encoding engines (encoding units) 320.
- in the divided stream buffer 362, the N divided streams generated by the encoder 310 are stored as the intermediately generated data described above.
- the divided stream buffer 362 has an area allocated to each of the N encoding engines 320.
- Each of the M partial stream buffers 363 stores a partial stream (joined coding area) generated by the encoder 310.
- the encoder 310 generates and outputs an encoded stream by reading and encoding the original image data stored in the frame memory 361 of the memory 360.
- the encoder 310 includes N encoding engines (first encoding engine to Nth encoding engine) 320, M stream combining units (first stream combining unit to Mth stream combining unit) 330, a stream combination control unit 340, and a multiplexing unit 350.
- the encoding engine 320 in the present embodiment has a processing capability capable of encoding an HD image (1920 ⁇ 1088 pixels, 60i) for two channels.
- the N encoding engines 320 acquire the mode information, read the original image data of the picture to be encoded for each MB line or MB line pair according to the mode information, and perform encoding in parallel. That is, like the image decoding apparatus 100 according to the first embodiment, the image encoding device 300 according to the present embodiment divides a picture into a plurality of MB lines or MB line pairs and assigns each of them to one of the N encoding engines 320, thereby parallelizing the encoding processing.
- when encoding a macroblock by intra prediction, each of the N encoding engines 320 acquires, as adjacent MB information, the information of the already encoded and locally decoded macroblocks at the upper left, top, and upper right of the encoding target macroblock.
- the encoding engine 320 that acquired the adjacent MB information encodes the encoding target macroblock using the adjacent MB information.
- similarly, when performing the deblocking filter processing, the motion vector prediction processing, and the variable-length encoding processing, each encoding engine 320 acquires, as adjacent MB information, the information of the encoded and locally decoded macroblocks at the upper left, top, and upper right of the macroblock to be processed, and uses that information in those processes.
- FIG. 30A is a diagram illustrating an encoding sequence when a picture is encoded without using MBAFF.
- the first encoding engine 320 encodes the 0th MB line
- the second encoding engine 320 encodes the first MB line
- the third encoding engine 320 encodes the second MB line.
- the fourth encoding engine 320 encodes the third MB line.
- the k-th (k is an integer equal to or greater than 0) MB line indicates the k-th MB line from the upper end of the picture.
- for example, the 0th MB line is the uppermost MB line of the picture.
- the first encoding engine 320 starts encoding the 0th MB line.
- the second encoding engine 320 starts encoding the macroblock at the left end of the first MB line.
- the third encoding engine 320 starts encoding the macroblock at the left end of the second MB line.
- the fourth encoding engine 320 starts encoding the macroblock at the left end of the third MB line.
- that is, the (k+1)th MB line is encoded from the leftmost macroblock to the rightmost macroblock with a delay of two macroblocks relative to the kth MB line.
- FIG. 30B is a diagram illustrating an encoding sequence when a picture is encoded using MBAFF.
- the first encoding engine 320 encodes the 0th MB line pair, the second encoding engine 320 encodes the 1st MB line pair, the third encoding engine 320 encodes the 2nd MB line pair, and the fourth encoding engine 320 encodes the 3rd MB line pair.
- the kth (k is an integer equal to or greater than 0) MB line pair indicates the kth structural unit, counted from the top of the picture, that consists of two MB lines. For example, the 0th MB line pair is the unit composed of the two uppermost MB lines of the picture.
- the first encoding engine 320 starts encoding of the 0th MB line pair.
- the second encoding engine 320 starts encoding the macroblock at the upper left end of the first MB line pair.
- the third encoding engine 320 starts encoding the macroblock at the upper left end of the second MB line pair.
- the fourth encoding engine 320 starts encoding the macroblock at the upper left end of the third MB line pair.
- that is, the (k+1)th MB line pair is encoded from the leftmost macroblock pair to the rightmost macroblock pair with a delay of two macroblock pairs relative to the kth MB line pair.
- note that the (k+1)th MB line or (k+1)th MB line pair need only be encoded with a delay of at least two macroblocks or two macroblock pairs relative to the kth MB line or kth MB line pair; the encoding may, for example, be delayed by three macroblocks or three macroblock pairs. However, when the (k+1)th MB line or MB line pair is encoded with a delay of exactly two macroblocks or two macroblock pairs relative to the kth MB line or MB line pair, the time required to encode the picture is minimized; when the encoding is delayed by three or more macroblocks or macroblock pairs, the encoding time becomes longer according to the amount of delay.
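- The effect of the delay on the picture encoding time can be sketched with an idealized timing model. This is an assumption-laden illustration, not the patented scheduler: it assumes one engine per MB line and one macroblock encoded per time slot.

```python
def picture_encode_slots(mb_cols: int, mb_rows: int, delay: int = 2) -> int:
    """Idealized wavefront model: row k starts `delay` macroblocks after
    row k-1 and then advances one macroblock per slot, so the last row
    finishes at (mb_rows - 1) * delay + mb_cols slots."""
    return (mb_rows - 1) * delay + mb_cols

# A delay of 2 is the minimum that keeps the upper-right neighbour ready:
# by the time row k+1 reaches column j, row k has finished column j+1.
```

For an HD picture (120 x 68 macroblocks), a delay of 2 gives 254 slots while a delay of 3 gives 321, illustrating that the encoding time grows with the amount of delay.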
- when CAVLC is used as the entropy coding method, the N encoding engines 320 perform encoding processing that includes the variable-length encoding processing. When CABAC is used, however, the arithmetic encoding processing has dependencies that span a plurality of MB lines, so the N encoding engines 320 cannot perform it in parallel; in that case, the arithmetic encoding processing not performed by the N encoding engines 320 is performed by the M stream combining units 330 described later.
- in that case, each encoding engine 320 assigns a temporary start code to each slice so that the subsequent stream combining unit 330 can correctly recognize the slice, and inserts EPBs (emulation prevention bytes).
- the data encoded by the N encoding engines 320 is stored in the divided stream buffer 362 as N divided streams.
- the stream combination control unit 340 acquires the mode information and, in order to equalize the processing load of the stream combining among the M stream combining units 330, notifies each stream combining unit 330 of distribution control information that distributes the stream combining processing in predetermined units according to the mode information.
- in the present embodiment, the stream combination control unit 340 will be described assuming that it distributes the stream combining processing among the M stream combining units 330 in units of slices. That is, for each slice to be included in the encoded stream, the stream combination control unit 340 causes one of the M stream combining units 330 to execute the stream combining processing by notifying it of the distribution control information.
- the distribution control information indicates a slice number for identifying a slice that is a target of stream combination processing.
- in addition to distributing the stream combining processing to the M stream combining units 330 in units of slices, the stream combination control unit 340 notifies the multiplexing unit 350 of selection information indicating from which of the M partial stream buffers 363 a partial stream should be acquired.
- each of the M stream combining units 330 acquires the mode information and the distribution control information and, in accordance with them, extracts from the divided stream buffer 362 the portions of the N divided streams (first divided stream to Nth divided stream) that belong to the slice to be processed, and combines them to reconstruct the slice as a predetermined unit. That is, for each slice distributed by the stream combination control unit 340, the stream combining unit 330 according to the present embodiment divides the portions of the N divided streams constituting that slice into a plurality of MB lines, sequentially assigns each of those MB lines to the slice to be generated, and thereby combines the divided streams into one reconstructed slice.
- in other words, the above stream combining processing reconstructs the slices (partial regions), corresponding to a single original slice of the original image data, that are included in each of the N divided streams generated by the N encoding engines 320 into one slice (joined coding region). In this stream combining processing, when such a slice (partial region) consists of a plurality of encoded MB lines (units), the slice is divided into those encoded MB lines, and the single slice (joined coding region) described above is generated by sequentially assigning the MB lines to the slice to be generated.
- the M stream combining units 330 execute the stream combining processes in parallel. Accordingly, each of the M stream combination units 330 generates the reconstructed slice as a partial stream.
- each of the M stream combination units 330 searches for the start code from each of the N divided streams stored in the divided stream buffer 362, and extracts the slice notified by the distribution control information.
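- The start-code search itself can be sketched as a scan for the 3-byte Annex B prefix 0x000001. This is a sketch under the assumption that the divided streams use byte-aligned Annex B start codes; the 4-byte variant with an extra leading zero byte is not handled here.

```python
def find_start_codes(stream: bytes) -> list[int]:
    """Return the byte offset of every 3-byte start code prefix 00 00 01.
    EPBs guarantee this prefix cannot occur inside a NAL unit payload."""
    offsets, i = [], 0
    while True:
        i = stream.find(b"\x00\x00\x01", i)
        if i < 0:
            return offsets
        offsets.append(i)
        i += 3
```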
- when CABAC is used, the M stream combining units 330 reconstruct the slices while performing the arithmetic encoding on the N divided streams.
- the M stream combination units 330 store the slices thus reconfigured in the M corresponding partial stream buffers 363 as M partial streams (first partial stream to Mth partial stream). That is, the first stream combination unit 330 stores the first partial stream in the first partial stream buffer 363, the second stream combination unit 330 stores the second partial stream in the second partial stream buffer 363, and the Mth The stream combining unit 330 stores the Mth partial stream in the Mth partial stream buffer 363.
- each of the M stream combination units 330 outputs header information such as SPS, PPS, and slice header along with the slice when generating a partial stream by handling the slice as a predetermined unit.
- the multiplexing unit 350 acquires mode information and selection information, reads a partial stream to be processed from one of M partial stream buffers 363 according to the mode information and selection information, and sequentially reads the read partial streams. By outputting, M partial streams are multiplexed, and as a result, one encoded stream is generated and output.
- for example, when notified by the selection information from the stream combination control unit 340 to acquire a partial stream from the first partial stream buffer 363, the multiplexing unit 350 reads the first partial stream from the first partial stream buffer 363. Similarly, when notified to acquire a partial stream from the Mth partial stream buffer 363, the multiplexing unit 350 reads the Mth partial stream from the Mth partial stream buffer 363. The multiplexing unit 350 then multiplexes the partial streams read from the M partial stream buffers 363 and outputs the result as an encoded stream.
- a feature of the image encoding device 300 according to the present embodiment is that the stream combining unit 330 divides each slice included in the N divided streams into a plurality of MB lines and combines those MB lines into one to reconstruct a new slice.
- the slice reconstruction includes slice header insertion processing, slice end processing, skip run correction processing, and QP delta correction processing.
- FIG. 31 is an explanatory diagram for explaining slice header insertion processing and slice termination processing.
- the N encoding engines 320 encode different MB lines included in this slice in parallel. As a result, each of the N encoding engines 320 encodes data including one or more MB lines as a single slice. That is, the first encoding engine 320 encodes the data consisting of MB line 0 and MB line 4 as a single slice, the second encoding engine 320 encodes the data consisting of MB line 1 and MB line 5 as a single slice, The third encoding engine 320 encodes data consisting of the MB line 2 as a single slice, and the fourth encoding engine 320 encodes data consisting of the MB line 3 as a single slice.
- the slices included in each of the divided streams generated by the N encoding engines 320 include data of one or more MB lines and slice end information. That is, the first divided stream includes the data of MB line 0 and MB line 4 and slice end information ec1; the second divided stream includes the data of MB line 1 and MB line 5 and slice end information ec2; the third divided stream includes the data of MB line 2 and slice end information ec3; and the fourth divided stream includes the data of MB line 3 and slice end information ec4.
- the stream combining unit 330 combines the slices of the first divided stream to the fourth divided stream to reconstruct a new slice, thereby generating a partial stream (joined coding region).
- the stream combining unit 330 assigns an appropriate slice header to the new slice and assigns it to the partial stream.
- the stream combining unit 330 extracts MB line 0 data from the first divided stream and assigns it to the partial stream.
- the stream combining unit 330 extracts the data of MB line 1 from the second divided stream and assigns it to the partial stream.
- the stream combining unit 330 extracts MB line 2 data from the third divided stream.
- the data of MB line 2 is accompanied by slice end information ec3, but since the new slice to be reconstructed continues through MB line 5, the stream combining unit 330 removes slice end information ec3 and assigns MB line 2 to the partial stream on the assumption that the slice continues after MB line 2.
- similarly, the stream combining unit 330 extracts the data of MB line 3 and slice end information ec4 from the fourth divided stream, removes slice end information ec4, and assigns MB line 3 to the partial stream on the assumption that the slice continues after MB line 3. The stream combining unit 330 then extracts the data of MB line 4 and slice end information ec1 from the first divided stream, removes slice end information ec1, and assigns MB line 4 to the partial stream on the assumption that the slice continues after MB line 4.
- the stream combining unit 330 extracts the data of the MB line 5 and the slice end information ec2 from the second divided stream.
- then, the stream combining unit 330 assigns MB line 5 to the partial stream, removes slice end information ec2, generates slice end information ecc appropriate for the new reconstructed slice, and assigns it to the partial stream.
- appropriate slice end information ecc is reassigned to the new reconstructed slice because the bit position of the end of the new slice reconstructed by combining the MB lines may differ from the bit position of the end of the slice in the original divided stream.
- the stream combination unit 330 adjusts the end of the slice to the byte boundary by giving appropriate slice end information to the new slice to be reconfigured.
- in this way, by performing appropriate slice header insertion and slice termination processing and combining the MB line data extracted from the divided streams, the stream combining unit 330 can reconstruct a slice conforming to the format of the encoded stream that the image encoding device 300 outputs.
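- The byte-boundary adjustment at the slice end can be sketched at the bit level. This sketch follows the H.264/AVC trailing-bits rule (a stop bit '1' followed by zero bits up to the byte boundary); the bit-list representation and function name are purely illustrative.

```python
def close_slice(payload_bits: list[int]) -> bytes:
    """Append the rbsp stop bit '1', pad with '0' bits to the next byte
    boundary, and pack the result MSB-first into bytes."""
    bits = payload_bits + [1]           # rbsp_stop_one_bit
    bits += [0] * (-len(bits) % 8)      # rbsp_alignment_zero_bit padding
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)
```

For example, a slice whose payload ends mid-byte with the bits 1, 0, 1 is closed as the single byte 0xB0 (1011 0000).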
- as described above, the image encoding device 300 includes the N encoding engines 320 and the M stream combining units 330, and parallelizes the encoding processing of the moving image data (original image data). By enabling parallel processing throughout the entire system in this way, the performance of the encoding processing as a whole is improved.
- here, the M stream combining units 330 reconstruct new slices, as predetermined units, by combining the slices of the divided streams generated by the N encoding engines 320; since the code amount of a slice is not constant, the amount of processing varies from slice to slice.
- the processing target is an encoded stream, so the processing amount depends on the code amount for each slice.
- the encoded stream is variable-length encoded, and the code amount varies depending on the data content.
- the slices in the H.264/AVC format include types called I slices, P slices, B slices, and the like. The code amount tends to be large in I slices, which use only intra-frame coding, and small in P slices and B slices, which use inter-frame coding in addition to intra-frame coding.
- as described above, the code amount of an encoded slice included in the encoded stream is not constant and can vary greatly. Therefore, simply distributing the slices of the divided streams generated by the N encoding engines 320 to the M stream combining units 330 in order does not equalize the processing amounts of the stream combining units 330, and a sufficient performance improvement cannot be obtained.
- the stream combination control unit 340 distributes each slice to each of the stream combination units 330 so that the processing amount of each stream combination unit 330 becomes equal.
- FIGS. 32A and 32B are explanatory diagrams showing a specific example of the slice allocation processing by the stream combination control unit 340. In the present embodiment, to simplify the description, the case of M = 2 will be described below.
- FIG. 32A shows an example of N divided streams generated by N encoding engines 320.
- the N divided streams (first divided stream to fourth divided stream) in this example are composed of slice data (slices) constituting a picture.
- Picture 0 is composed of slice 0 only.
- Picture 1 is composed of slice 1 and slice 2.
- Picture 2 is composed of slice 3 and slice 4.
- FIG. 32B is a diagram showing a series of slice distribution processing by the stream combination control unit 340.
- each stream combination unit 330 holds the number of the slice to be processed.
- the stream combination control unit 340 notifies each stream combining unit 330 of slice reconstruction by transmitting distribution control information according to the processing status of each stream combining unit 330, and, if necessary, instructs each stream combining unit 330 to add header information such as SPS or PPS.
- the stream combining unit 330 notified of the slice reconstruction reads the divided streams containing the target slice from the divided stream buffer 362, divides them into MB lines, recombines the divided MB lines, and reconstructs a new slice by adding a slice header and the like.
- the series of processes in the stream combination unit 330 is referred to as a stream combination process.
- first, the stream combination control unit 340 notifies the first stream combining unit 330 of the stream combining of slice 0 by transmitting the distribution control information, and instructs it to add the SPS that should exist at the head of the stream and the PPS0 that should exist at the head of picture 0.
- next, the stream combination control unit 340 notifies the second stream combining unit 330 of the stream combining of slice 1 by transmitting the distribution control information, and instructs it to add the PPS1 that should exist at the head of picture 1.
- the distribution control information includes, for example, a slice number identifying the slice targeted for stream combining and information indicating whether an SPS or PPS should be added.
- when notified, the first stream combining unit 330 compares the slice number of the slice notified for stream combining with the value of SN1, which it holds as the number of the slice to be processed next. At this timing, both are equal to 0, so the first stream combining unit 330 performs the stream combining processing on the first input slice.
- the first stream combining unit 330 first generates and assigns SPS and PPS0. Next, the first stream combination unit 330 performs stream combination processing on slice 0 and outputs a new slice 0 of the generated partial stream to the first partial stream buffer 363.
- when the second stream combining unit 330 is notified of the stream combining of slice 1 by the distribution control information from the stream combination control unit 340, it compares the slice number of the notified slice with the value of SN2, which it holds as the number of the slice to be processed next. At this timing, the notified slice number indicates 1, SN2 indicates 0, and the difference is 1. The second stream combining unit 330 therefore skips the processing of one input slice and performs the stream combining processing on the second input slice. That is, the second stream combining unit 330 skips the stream combining processing for as many slices as the difference.
- when the slice number of the notified slice matches the value of SN2, the second stream combining unit 330 generates and assigns PPS1, performs the stream combining processing on slice 1, and outputs the new slice 1 of the generated partial stream to the second partial stream buffer 363.
- when the stream combining processing is completed, the second stream combining unit 330 notifies the stream combination control unit 340 of the completion of the processing and of information on the partial stream output to the second partial stream buffer 363. Specifically, it notifies the number of NAL units actually output to the partial stream buffer 363, which constitute PPS1 and slice 1; that is, the second stream combining unit 330 notifies the stream combination control unit 340 that a total of two NAL units, PPS1 and slice 1, have been processed.
- upon receiving the processing completion notification from the second stream combining unit 330, the stream combination control unit 340 notifies the second stream combining unit 330 of the stream combining of slice 2.
- the second stream combining unit 330 compares the slice number of the notified slice with the value of SN2, which it holds as the number of the slice to be processed next. At this timing the two match, so the second stream combining unit 330 performs the stream combining processing on the first input slice, namely slice 2.
- meanwhile, when the first stream combining unit 330 completes the stream combining processing for slice 0, it notifies the stream combination control unit 340 of the completion of the processing and of the information on the partial stream output to the first partial stream buffer 363, namely the number of NAL units, "3", constituting the SPS, PPS0, and slice 0.
- upon receiving the processing completion notification from the first stream combining unit 330, the stream combination control unit 340 notifies the first stream combining unit 330 of the stream combining of slice 3 using the distribution control information, and instructs it to add the PPS2 that should exist at the head of picture 2.
- the first stream combining unit 330 compares the slice number of the notified slice with the value of SN1, which it holds as the number of the slice to be processed next. At this timing, the notified slice number indicates 3, SN1 indicates 1, and the difference is 2. The first stream combining unit 330 therefore skips the processing of two input slices and performs the stream combining processing on the third input slice.
- the first stream combining unit 330 first skips the stream combining process for the input slice 1.
- next, slice 2 is input to the first stream combining unit 330, but since the slice number of the notified slice still does not match the value of SN1, the first stream combining unit 330 also skips the stream combining processing for slice 2.
- when the slice number of the notified slice matches the value of SN1, the first stream combining unit 330 generates and assigns PPS2, performs the stream combining processing on slice 3, and outputs the new slice 3 of the generated partial stream to the first partial stream buffer 363.
- when the stream combining processing for slice 2 by the second stream combining unit 330 ends, the second stream combining unit 330 notifies the stream combination control unit 340 of the completion of the processing and of the information on the partial stream output to the second partial stream buffer 363, namely the number of NAL units, "1", constituting slice 2.
- upon receiving the processing completion notification from the second stream combining unit 330, the stream combination control unit 340 notifies the second stream combining unit 330 of the stream combining of slice 4.
- the second stream combining unit 330 compares the slice number of the notified slice with the value of SN2, which it holds as the number of the slice to be processed next. At this timing, the notified slice number indicates 4, SN2 indicates 3, and the difference is 1. The second stream combining unit 330 therefore skips the processing of one input slice and performs the stream combining processing on the second input slice.
- the second stream combining unit 330 first skips processing of the input slice 3.
- next, the second stream combining unit 330 performs the stream combining processing on slice 4 and outputs the new slice 4 of the generated partial stream to the second partial stream buffer 363.
- when the stream combining processing for slice 3 by the first stream combining unit 330 ends, the first stream combining unit 330 notifies the stream combination control unit 340 of the completion of the processing and of the information on the partial stream output to the first partial stream buffer 363, namely the number of NAL units, "2", constituting PPS2 and slice 3.
- when the stream combining processing for slice 4 by the second stream combining unit 330 ends, the second stream combining unit 330 notifies the stream combination control unit 340 of the completion of the processing and of the information on the partial stream output to the second partial stream buffer 363, namely the number of NAL units, "1", constituting slice 4.
- in this way, the stream combination control unit 340 sequentially distributes the stream combining processing of slices to whichever stream combining unit 330 has completed its processing, so that the processing amounts of the stream combining units 330 are equalized.
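- The distribution and skip bookkeeping described above can be sketched as follows. This is an idealized model, not the patented control logic: all slices are assumed to take equally long, so the completion order degenerates to round-robin; in the trace of FIG. 32B the order differs because real slice durations vary.

```python
def distribute_slices(num_slices: int, num_combiners: int):
    """Model of the control unit: each slice goes to the combiner that
    frees up first (round-robin under the equal-duration assumption).
    Each combiner holds the number of the slice it expects next (SN1,
    SN2, ...) and skips input slices until the notified number matches."""
    next_expected = [0] * num_combiners
    assignment, skips = [], []
    for s in range(num_slices):
        c = s % num_combiners
        assignment.append(c)
        skips.append(s - next_expected[c])  # input slices skipped first
        next_expected[c] = s + 1
    return assignment, skips
```

With 5 slices and M = 2, this model yields the assignment [0, 1, 0, 1, 0] and skip counts [0, 1, 1, 1, 1].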
- FIG. 33 is a diagram illustrating a state of the partial stream buffer 363 when the slice allocation and the stream combination process illustrated in FIG. 32B are performed.
- as shown in FIG. 33, the first partial stream buffer 363 stores the partial streams corresponding to slice 0 and slice 3, and the second partial stream buffer 363 stores the partial streams corresponding to slice 1, slice 2, and slice 4.
- the storage order of each slice is not constant.
- therefore, so that the multiplexing unit 350 can acquire the partial streams in the same slice order as the divided streams before the stream combining processing, the stream combination control unit 340 notifies the multiplexing unit 350 of selection information indicating from which of the M partial stream buffers 363 a partial stream should be acquired.
- FIG. 34 is a diagram illustrating an example of a format of selection information when the slice allocation and the stream combination process illustrated in FIG. 32B are performed.
- the selection information includes partial stream buffer information and NAL unit number information for each slice, and is generated each time slice distribution processing by the stream combination control unit 340 is performed.
- the partial stream buffer information indicates to which of the first stream combination unit 330 and the second stream combination unit 330 the stream combination control unit 340 allocated the slice. That is, the partial stream buffer information indicates a partial stream buffer in which a new slice (partial stream including a new slice) generated by the stream combining process by the stream combining unit 330 is stored.
- the NAL unit number information indicates the number of NAL units output when the stream combining unit 330 processes the target slice; it is notified to the stream combination control unit 340 when the processing of the stream combining unit 330 is completed.
- the selection information generated by the stream combination control unit 340 is notified to the multiplexing unit 350 and stored in, for example, a FIFO (first-in first-out) memory in the multiplexing unit 350.
- the selection information stored in the FIFO is read by the multiplexing unit 350 in the notified order, and is used for stream acquisition processing from the partial stream buffer 363.
- the multiplexing unit 350 acquires three NAL units (SPS, PPS0, slice 0) from the first partial stream buffer 363 according to the selection information of slice 0.
- the multiplexing unit 350 acquires two NAL units (PPS1, slice 1) from the second partial stream buffer 363 according to the selection information of slice 1.
- the multiplexing unit 350 acquires one NAL unit (slice 2) from the second partial stream buffer 363 according to the selection information of slice 2.
- the multiplexing unit 350 acquires two NAL units (PPS2, slice 3) from the first partial stream buffer 363 according to the selection information of the slice 3.
- the multiplexing unit 350 acquires one NAL unit (slice 4) from the second partial stream buffer 363 according to the selection information of the slice 4.
- as described above, by using the selection information notified from the stream combination control unit 340, the multiplexing unit 350 can acquire the partial streams (the slices of the partial streams) from the M partial stream buffers 363 in the same slice order as the divided streams generated by the N encoding engines 320.
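- The selection-information-driven multiplexing can be sketched as follows, using the per-slice (buffer index, NAL unit count) pairs of FIG. 34; the deques are illustrative stand-ins for the partial stream buffers and the selection-information FIFO.

```python
from collections import deque

def multiplex(partial_buffers, selection_info):
    """Drain NAL units from the partial stream buffers in the order given
    by the selection information: (buffer_index, nal_unit_count) pairs."""
    out = []
    for buf_idx, nal_count in selection_info:
        for _ in range(nal_count):
            out.append(partial_buffers[buf_idx].popleft())
    return out

# Buffer contents after the stream combining of FIG. 32B.
buffers = [
    deque(["SPS", "PPS0", "slice0", "PPS2", "slice3"]),  # 1st partial stream buffer
    deque(["PPS1", "slice1", "slice2", "slice4"]),       # 2nd partial stream buffer
]
selection = [(0, 3), (1, 2), (1, 1), (0, 2), (1, 1)]     # per-slice selection info
```

Here `multiplex(buffers, selection)` restores the original order SPS, PPS0, slice0, PPS1, slice1, slice2, PPS2, slice3, slice4.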
- slice allocation processing described with reference to FIGS. 32A to 34 is an example of the processing operation of the image encoding device 300 of the present invention, and the present invention is not limited to the processing operation described here.
- for example, the stream combination control unit 340 specifies a slice number when notifying a stream combining unit 330 of the stream combining of a slice, but it may instead specify the number of slices whose processing should be skipped. In that case, the stream combination control unit 340 stores the number of slices allocated to each of the M stream combining units 330 and calculates, based on it, the number of slices whose processing should be skipped.
- also, when the stream combining unit 330 notifies the stream combination control unit 340 of the completion of the processing, it notifies the number of NAL units output to the partial stream buffer 363, but it may instead notify any information, such as the number of bits of the partial stream, that allows the multiplexing unit 350 to determine the size of the partial stream to be acquired from the partial stream buffer 363.
- similarly, the selection information includes the NAL unit number information, but as described above, it may instead include information indicating the number of bits of the partial stream.
- FIG. 35 is a block diagram showing the configuration of the stream combining unit 330. As shown in FIG. 35, the stream combining unit 330 includes a processing management unit 330m, a selector Sct1, a start code detection unit 331, an EPB removal unit 332a, an EPB insertion unit 332b, a header insertion unit 333, and slice data processing units 334a and 334b.
- the process management unit 330m acquires mode information and distribution control information, and controls other components included in the stream combining unit 330 according to the information.
- the processing management unit 330m holds, for example, the number of the slice to be processed (SN1, SN2, or the like) so that the stream combining processing shown in FIGS. 32A to 34 is performed, and controls the selector Sct1 based on that number.
- further, the processing management unit 330m outputs the SPS, the PPS, or a new slice (a partial stream including the new slice) reconstructed from the slice to be processed, or stops that output.
- the start code detection unit 331 recognizes a slice by reading one of the N divided streams from the divided stream buffer 362 and detecting the start code.
- the EPB removing unit 332a removes the EPB (emulation prevention byte) from the divided stream, and outputs the divided stream from which the EPB has been removed to the slice data processing units 334a and 334b.
- the EPB insertion unit 332b inserts the EPB removed by the EPB removal unit 332a into the partial stream generated by combining the divided streams.
- the header insertion unit 333 generates header information such as SPS, PPS, and slice header, and outputs the header information to the EPB insertion unit 332b.
- the slice data processing units 334a and 334b reconstruct slice data by combining the N divided streams from which EPB has been removed, and output the reconstructed slice data.
- the slice data processing unit 334a performs processing corresponding to CAVLD (Context-Adaptive Variable Length Decoding), and generates a partial stream by combining N divided streams generated by CAVLC (Context-Adaptive Variable Length Coding).
- the slice data processing unit 334b performs processing corresponding to CABAD (Context-Adaptive Binary Arithmetic Decoding), and generates a partial stream by combining N divided streams generated by CABAC (Context-Adaptive Binary Arithmetic Coding).
- the slice data processing unit 334a includes a slice data layer analysis unit 335a, a macroblock layer analysis unit 336a, a skip run correction unit 337a, a QP delta correction unit 338a, and a division point detection unit 339a.
- the slice data layer analysis unit 335a analyzes the encoded data of the slice data layer included in the divided stream and extracts information necessary for the stream combining process.
- the macroblock layer analysis unit 336a analyzes the encoded data of the macroblock layer included in the divided stream, and extracts information necessary for stream combination processing.
- the skip run correction unit 337a corrects the MB skip run information "mb_skip_run" extracted by the slice data layer analysis unit 335a, re-encodes the corrected MB skip run information, and outputs the encoded MB skip run information. That is, when the MB skip run information indicates a number of blocks that run continuously across at least two consecutive slice portions in the divided stream, the skip run correction unit 337a divides that number and sets, in the divided stream to which those slice portions are assigned, pieces of MB skip run information modified so as to indicate the number of such blocks in each slice portion.
- furthermore, when a plurality of blocks corresponding to a plurality of pieces of set MB skip run information become continuous in the partial stream generated by combining the divided streams, the skip run correction unit 337a converts those pieces of MB skip run information into one piece of MB skip run information indicating the total of the numbers indicated by each of them.
- the MB skip run information is an example of a first codeword indicating the number of consecutive blocks when a specific type of block is continuous in a slice included in the encoded picture. Specifically, the MB skip run information indicates the number of macro blocks that are skipped continuously.
- the MB skip run information extracted by the slice data layer analysis unit 335a indicates the number of macroblocks skipped in succession, that is, the size of the set of consecutive skip macroblocks.
- when the N divided streams are divided into MB lines and the MB lines included in these N divided streams are sequentially assigned to one partial stream, the number of continuously skipped macroblocks changes. That is, the dependency between MB lines established by the MB skip run information is broken.
- the skip run correction unit 337a therefore specifies, for each MB line that includes a part of the above-described set, the number of continuously skipped macroblocks forming the part included in that MB line. The skip run correction unit 337a then corrects the MB skip run information so that, for each such MB line, the number indicated by the MB skip run information becomes the number specified for that MB line.
- the QP delta correction unit 338a corrects the QP change amount "mb_qp_delta" of the macroblock extracted by the macroblock layer analysis unit 336a, re-encodes the corrected QP change amount, and outputs the encoded QP change amount.
- when the QP change amount indicates the change amount between blocks straddling two MB lines, the QP delta correction unit 338a calculates the change amount of the coding coefficient based on the new context of the blocks in the partial stream. The QP delta correction unit 338a then corrects the QP change amount to the calculated change amount.
- the QP change amount is an example of a second codeword indicating the change amount of the coding coefficient between consecutive blocks in the slice included in the coded picture.
- the QP change amount is included in a macroblock (target macroblock) and indicates the difference between the QP value of the target macroblock and the QP value of the macroblock located immediately before it.
- when the N divided streams are divided into MB lines and the MB lines included in the N divided streams are sequentially assigned to one partial stream, macroblocks that were continuous with each other across an MB line boundary are assigned to separated positions in the partial stream. Therefore, the QP delta correction unit 338a recalculates the QP change amount of the macroblock (target macroblock) based on the new macroblock order in the partial stream.
- the division point detection unit 339a divides the divided stream into MB lines and generates a partial stream by combining the MB lines. Specifically, the division point detection unit 339a detects MB line boundaries in the divided stream from the information extracted by the slice data layer analysis unit 335a and the macroblock layer analysis unit 336a, and at each MB line boundary the stream combining unit 330 switches the divided stream read from the divided stream buffer 362, thereby combining the N divided streams in units of MB lines. Further, the division point detection unit 339a includes, in the partial stream, the MB skip run information acquired from the skip run correction unit 337a and the QP change amount acquired from the QP delta correction unit 338a.
- the division point detection unit 339a also detects and removes the slice end information included in the input divided streams, and assigns appropriate slice end information to the partial stream containing the slice reconstructed by combining the divided streams.
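- the switching of divided streams at each MB line boundary amounts to re-interleaving the MB lines back into picture order. A rough sketch follows (an illustration only, not the patented hardware; the round-robin assignment of MB lines to the N streams is an assumption consistent with the MB-line allocation described in this embodiment):

```python
def combine_at_line_boundaries(divided_streams):
    """Rebuild picture order by switching to the next divided stream at
    every MB line boundary (round-robin over the N streams)."""
    n = len(divided_streams)
    total = sum(len(s) for s in divided_streams)
    # MB line k of the picture is assumed to live in stream k % n, slot k // n
    return [divided_streams[k % n][k // n] for k in range(total)]

# N = 3 divided streams, each holding every third MB line of the picture
streams = [["L1", "L4"], ["L2", "L5"], ["L3", "L6"]]
print(combine_at_line_boundaries(streams))  # -> ['L1', 'L2', 'L3', 'L4', 'L5', 'L6']
```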
- the slice data processing unit 334b includes a slice data layer analysis unit 335b, a macroblock layer analysis unit 336b, a QP delta correction unit 338b, and a division point detection unit 339b.
- the slice data layer analysis unit 335b analyzes the encoded data of the slice data layer included in the divided stream and extracts information necessary for the stream combining process.
- the macroblock layer analysis unit 336b analyzes the encoded data (binary data) of the macroblock layer included in the divided stream, and extracts information necessary for stream combination processing.
- similarly to the QP delta correction unit 338a described above, the QP delta correction unit 338b corrects the QP change amount "mb_qp_delta" of each macroblock extracted by the macroblock layer analysis unit 336b, re-encodes the corrected QP change amount, and outputs the encoded QP change amount.
- the division point detection unit 339b divides the division stream into MB lines and generates a partial stream by combining the MB lines.
- the division point detection unit 339b includes, in the partial stream, the QP change amount acquired from the QP delta correction unit 338b.
- the division point detection unit 339b also detects and removes the slice end information included in the input divided streams, and assigns appropriate slice end information to the partial stream containing the slice reconstructed by combining the divided streams. Further, the division point detection unit 339b performs arithmetic coding on the binary data included in the divided stream.
- the skip run correction unit 337a corrects “mb_skip_run” that is the MB skip run information as described above.
- the MB skip run information is a codeword included in an encoded stream when CAVLC is used as an encoding method, and indicates the number of skip macroblocks (hereinafter also referred to as “length”).
- the length of MB skip run information means the number of consecutive skip macroblocks indicated by the MB skip run information.
- FIG. 36 is an explanatory diagram for explaining the MB skip run information correction process.
- in the example of FIG. 36, the N encoding engines 320 encode the picture so that there are five skip macroblocks at the end of MB line L2, three at the beginning of MB line L3, two at the end of MB line L5, and four at the beginning of MB line L6.
- accordingly, in the first divided stream, MB skip run information of length 2 indicating the number of skip macroblocks continuous at the end of MB line L5 is encoded; in the second divided stream, MB skip run information of length 9 is encoded; in the third divided stream, MB skip run information of length 3 indicating the number of skip macroblocks continuous at the beginning of MB line L3 is encoded; and no MB skip run information is encoded in the fourth divided stream.
- thus, the MB skip run information originally included in the divided streams consists of MB skip run information of length 2 in the first divided stream, length 9 in the second divided stream, and length 3 in the third divided stream, whereas MB skip run information of lengths 8 and 6 must be output in the combined stream. That is, when MB skip run information indicating the number of skip macroblocks continuing across a plurality of MB lines in a divided stream makes those MB lines dependent on each other, the MB skip run information must be corrected so that the dependency becomes a new one that follows the order of the MB lines in the combined stream.
- to that end, when a set of skip macroblocks corresponding to the MB skip run information extracted by the slice data layer analysis unit 335a extends across a plurality of MB lines in one divided stream, the skip run correction unit 337a first divides that MB skip run information at the MB line boundaries.
- here, dividing MB skip run information at an MB line boundary means dividing the number of skip macroblocks that run continuously across a plurality of MB lines in one divided stream, and generating a plurality of pieces of MB skip run information, each indicating the number of skip macroblocks in one MB line.
- for example, the skip run correction unit 337a divides the MB skip run information corresponding to the set of nine skip macroblocks extending across MB lines L2 and L6 in the second divided stream into MB skip run information corresponding to the set of five skip macroblocks included in MB line L2 and MB skip run information corresponding to the set of four skip macroblocks included in MB line L6.
- next, the skip run correction unit 337a recombines, from among the divided pieces of MB skip run information, those corresponding to sets of skip macroblocks that are continuous in the combined stream.
- here, recombining a plurality of pieces of MB skip run information means converting them into one piece of MB skip run information indicating the sum of the numbers indicated by each of them.
- for example, the set of five skip macroblocks included in MB line L2 and the set of three skip macroblocks included in MB line L3 are continuous in the combined stream, so the skip run correction unit 337a combines the two pieces of MB skip run information corresponding to these two sets and converts them into MB skip run information of length 8. Likewise, the set of two skip macroblocks included in MB line L5 and the set of four skip macroblocks included in MB line L6 are continuous in the combined stream, so the skip run correction unit 337a combines the two pieces of MB skip run information corresponding to these two sets and converts them into MB skip run information of length 6.
- the skip run correction unit 337a encodes the MB skip run information obtained in this way again, and outputs the encoded MB skip run information.
- in this way, by dividing the input MB skip run information at MB line boundaries and then recombining it as necessary, the skip run correction unit 337a can output MB skip run information of the appropriate length for the combined stream.
- note that the skip run correction unit 337a recombines, as necessary, the MB skip run information that is continuous in the combined stream rather than leaving it divided, because the H.264/AVC standard does not allow a plurality of pieces of MB skip run information to exist consecutively. That is, since the H.264/AVC standard does not allow the number of consecutive skip macroblocks to be expressed with a plurality of pieces of MB skip run information, the skip run correction unit 337a combines them into one. By thus modifying the MB skip run information into a form compliant with the H.264/AVC standard, the skip run correction unit 337a ensures that the combined stream is generated in a format compliant with the H.264/AVC standard.
- the above is an example in which the skip run correction unit 337a divides and recombines MB skip run information that extends across a plurality of MB lines in a divided stream. Even when MB skip run information does not extend across a plurality of MB lines in any divided stream, if skip macroblocks become continuous across a plurality of MB lines in the combined stream, the skip run correction unit 337a performs only the recombination of MB skip run information, without dividing it.
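- the two-step division and recombination of "mb_skip_run" can be modeled as follows (an illustrative sketch, not the patented implementation: MB lines are represented as boolean skip flags, and runs as (line, offset, length) triples; the 8-MB line width and the round-robin line assignment mirror the FIG. 36 example):

```python
def per_line_skip_runs(divided_stream):
    """Step 1 (division): express skipped MBs as runs that never cross an
    MB-line boundary.  divided_stream: list of (picture_line_no, flags),
    where flags[i] is True if macroblock i of that MB line is skipped."""
    runs = []
    for line_no, flags in divided_stream:
        start = None
        for i, skipped in enumerate(list(flags) + [False]):  # sentinel flushes a trailing run
            if skipped and start is None:
                start = i
            elif not skipped and start is not None:
                runs.append((line_no, start, i - start))
                start = None
    return runs

def recombine(runs, line_len):
    """Step 2 (recombination): merge runs that become adjacent once the MB
    lines are placed back in picture order, since H.264/AVC does not allow
    two consecutive mb_skip_run codewords."""
    merged = []
    for line_no, start, n in sorted(runs):
        g = line_no * line_len + start              # global macroblock index
        if merged and merged[-1][0] + merged[-1][1] == g:
            merged[-1][1] += n                      # runs touch: combine lengths
        else:
            merged.append([g, n])
    return [tuple(r) for r in merged]

# FIG. 36-style example, 8 MBs per line (lines numbered from 0): 5 skips at
# the end of line 1, 3 at the start of line 2, 2 at the end of line 4, and
# 4 at the start of line 5; four engines take the lines round-robin.
S, M = True, False
streams = [
    [(0, [M] * 8),         (4, [M] * 6 + [S] * 2)],   # engine 1: lines 0, 4
    [(1, [M] * 3 + [S] * 5), (5, [S] * 4 + [M] * 4)],  # engine 2: lines 1, 5 (run of 9)
    [(2, [S] * 3 + [M] * 5), (6, [M] * 8)],            # engine 3: lines 2, 6
    [(3, [M] * 8),         (7, [M] * 8)],              # engine 4: lines 3, 7
]
runs = [r for s in streams for r in per_line_skip_runs(s)]
print(recombine(runs, 8))   # -> [(11, 8), (38, 6)]: lengths 8 and 6, as in the text
```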
- the operation of the QP delta correction units 338a and 338b will be described in detail.
- the functions and processing operations common to the QP delta correction units 338a and 338b are described, they are collectively referred to as the QP delta correction unit 338 without distinction.
- the QP delta correction unit 338 corrects the QP variation “mb_qp_delta” that exists in principle for each macroblock.
- FIG. 37A and FIG. 37B are explanatory diagrams for explaining the correction process of the QP change amount.
- in the second divided stream, the macroblock processed immediately before macroblock C is macroblock A. Therefore, for macroblock C, the difference between the QP value of macroblock A and the QP value of macroblock C is encoded as the QP change amount.
- after stream combination, however, the macroblock positioned immediately before macroblock C is macroblock B. Therefore, when a decoder decodes an encoded stream combined in this way without correction, the QP change amount, which is the difference between the QP values of macroblocks A and C, is applied to the QP value of macroblock B, and the QP value of macroblock C cannot be decoded correctly. That is, the dependency between MB lines created by a QP change amount indicating the change between macroblocks straddling two MB lines is broken.
- therefore, the QP delta correction unit 338 corrects the QP change amount so as to absorb the change in macroblock order caused by combining the streams. That is, the QP change amount is corrected so that the dependency becomes a new one that follows the order of the MB lines in the combined stream.
- as a correction method, one could decode the QP values and recalculate the QP change amount based on the new macroblock order after stream combination. However, this method requires two processes, QP value decoding and QP change amount calculation, which increases the processing amount in the QP delta correction unit 338.
- therefore, for each divided stream, the QP delta correction unit 338 accumulates the QP change amounts of the macroblocks that were not allocated to the target divided stream, and derives the corrected QP change amount directly, without decoding the QP values, by subtracting the accumulated QP change amount.
- the accumulation of the QP change amount is performed according to the equation (3).
- in the combined stream, macroblock C needs to include, as its QP change amount, the difference between the QP value of macroblock B and the QP value of macroblock C.
- the QP delta correction unit 338 accumulates the QP change amounts of all the macroblocks included in the MB lines L3 to L5. In this way, by accumulating the QP variation of all the macroblocks between the macroblock A and the macroblock C, the QP variation that is the difference value between the QP value of the macroblock B and the QP value of the macroblock C. A correction value for deriving can be obtained.
- the QP delta correction unit 338 then subtracts the obtained accumulated QP change amount from the QP change amount of macroblock C according to the following equation (5), thereby deriving the QP change amount that is the difference between the QP value of macroblock B and the QP value of macroblock C.
- mb_qp_delta = (mb_qp_delta - acc_mb_qp_delta + 52) % 52 ... (5)
- mb_qp_delta indicates the QP change amount of the macroblock C
- acc_mb_qp_delta indicates the cumulative value of the QP change amounts of all the macroblocks included in the MB lines L3 to L5.
- furthermore, the QP delta correction unit 338 modifies the QP change amount "mb_qp_delta" by the following equation (6) so that it falls within the range of -26 to +25. In this way, the QP change amount of macroblock C is corrected.
- the same processing is performed on the first macroblock of every MB line. For example, the QP change amounts of all the macroblocks in MB lines L4 to L6 are accumulated, and the corrected QP change amount is derived by subtracting that accumulated value from the QP change amount of the corresponding macroblock.
- each of the N encoding engines 320 encodes a macroblock at the head of each MB line so that the macroblock at the head becomes a macroblock including a QP change amount.
- in the H.264/AVC standard, the macroblocks that do not include a QP change amount are (1) skip macroblocks, (2) uncompressed macroblocks (I_PCM), and (3) macroblocks whose intra prediction mode is not "Intra16x16" and whose "coded_block_pattern" is 0 (that is, containing no non-zero coefficient).
- the QP change amount can be corrected correctly for the first macroblock of each MB line.
- the QP change amount is continuously accumulated until a macroblock to be corrected for the QP change amount appears.
- the QP delta correction unit 338 re-encodes the corrected QP variation obtained in this way, and outputs the encoded modified QP variation to the division point detection units 339a and 339b.
- the QP delta correction unit 338a performs encoding using the CAVLC method
- the QP delta correction unit 338b performs encoding using the CABAC method.
- in this way, the QP delta correction unit 338 corrects the input QP change amount so as to match the macroblock order in the combined stream, and can thereby set an appropriate QP change amount for the combined stream.
- FIG. 37B shows an example in which a slice is divided at the boundary between MB lines L4 and L5.
- in this case, MB lines L1 to L4 are included in slice A, and MB lines L5 to L8 are included in slice B.
- in the combined stream, macroblock A and macroblock C are continuous as in FIG. 37A, but they belong to different slices, so there is no dependency between macroblock A and macroblock C.
- that is, the QP change amount of macroblock C indicates the difference between the QP value of macroblock C and the slice QP value of slice B.
- also in this case, as in FIG. 37A, the QP delta correction unit 338 accumulates the QP change amounts of the macroblocks included between macroblock A and macroblock C, and the QP difference between macroblock B and macroblock C can be obtained by subtracting the accumulated QP change amount.
- the QP delta correction unit 338 accumulates the QP change amounts of all the macroblocks included between macroblock A and macroblock C; however, at the start of processing of the macroblock at the head of slice B, it resets the accumulated QP change amount "acc_mb_qp_delta" to zero. In this way, the QP change amount is accumulated only for the macroblocks included in slice B, and the corrected QP change amount of macroblock C can be obtained correctly.
- the modified QP variation obtained in this way is encoded again, and the encoded modified QP variation is output to the dividing point detection units 339a and 339b.
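- the correction of equations (5) and (6) can be sketched as follows (a minimal illustration, not the patented circuit; the exact form of equation (6) is not reproduced in this excerpt, so the wrap into the legal -26..+25 range below is an assumed standard modular adjustment):

```python
def correct_qp_delta(mb_qp_delta, acc_mb_qp_delta):
    """Derive the corrected QP change amount without decoding QP values.

    mb_qp_delta: original delta of the target macroblock (e.g. macroblock C,
    coded relative to macroblock A).
    acc_mb_qp_delta: accumulated deltas of the macroblocks that now sit
    between the old and new predecessors (reset to 0 at a slice start,
    as in the FIG. 37B case).
    """
    d = (mb_qp_delta - acc_mb_qp_delta + 52) % 52   # equation (5)
    return d - 52 if d > 25 else d                  # assumed equation (6): wrap to -26..+25

# Macroblock C was coded with mb_qp_delta = +4 relative to macroblock A.
# The macroblocks of MB lines L3-L5 (the last of which is B) carry deltas
# summing to +7, so relative to B the corrected delta must be 4 - 7 = -3.
acc = sum([2, -1, 3, 3])
print(correct_qp_delta(4, acc))   # -> -3
```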
- as described above, in the third embodiment, a picture is divided into a plurality of MB lines (structural units), and the MB lines are assigned to the N encoding engines 320 and encoded. Therefore, the burden of the encoding process on the N encoding engines 320 can be equalized, and the parallel encoding process can be executed appropriately. For example, even when one picture consists of one slice in the H.264/AVC format, since the picture is divided into a plurality of MB lines, the encoding of that slice is not borne by one encoding engine 320 alone but is shared equally by the N encoding engines 320.
- in this case, a slice straddling a plurality of MB lines in one picture is divided into a plurality of slice portions, and these slice portions may be assigned to different divided streams. That is, one divided stream does not include an entire slice of the encoded picture but includes a slice portion group made up of one or more slice portions that are fragments of slices.
- a plurality of MB lines may have a dependency relationship.
- in H.264/AVC, a plurality of MB lines may be made dependent on each other by the MB skip run information "mb_skip_run" and the QP change amount "mb_qp_delta". If such an encoded stream is divided into a plurality of MB lines that are assigned to different divided streams, the dependency between MB lines cannot be maintained correctly.
- however, in the third embodiment, the stream combining unit 330 reconstructs new slices by combining the N divided streams. Therefore, the image encoding device 300 can generate an encoded stream conforming to the H.264/AVC format, and the overall configuration of the image encoding device 300 can be simplified.
- further, the above-described stream combining process is performed by the M stream combining units 330, and the stream combination control unit 340 distributes the stream combining process to the M stream combining units 330 in units of slices so that the processing amounts of the M stream combining units 330 become equal. With such a configuration, the stream combining process can be shared equally by the M stream combining units 330, and parallel encoding processing is realized in the system as a whole.
- the encoder 310 includes the M stream combining units 330. However, the encoder 310 may include only one stream combining unit 330.
- FIG. 38A is a block diagram illustrating a configuration of an image encoding device including only one stream combining unit.
- the image encoding device 300a includes an encoder 310a and a memory 360a. Similar to the memory 360 described above, the memory 360a includes the frame memory 361 and the divided stream buffer 362, but does not include the partial stream buffers 363.
- the encoder 310a includes N encoding engines 320 as in the encoder 310 described above, but includes one stream combining unit 330b instead of the M stream combining units 330.
- the stream combining unit 330b serially performs a plurality of stream combining processes performed in parallel by the M stream combining units 330. That is, if each of the N divided streams output from the N encoding engines 320 includes a plurality of slices, the stream combining unit 330b sequentially performs the stream combining process according to the arrangement of the slices. . For example, if slice 0, slice 1, slice 2,... Slice n are included in each of N divided streams, the stream combining unit 330b combines a plurality of slices 0 included in the N divided streams. Next, a stream combining process for combining a plurality of slices 1 is performed, and then a stream combining process for combining a plurality of slices 2 is performed. Finally, the stream combining unit 330b performs a stream combining process for combining a plurality of slices n.
- the stream combining process performed by the stream combining unit 330b is the same as the process performed by the stream combining unit 330 described above.
- the stream combining unit 330b sequentially outputs new slices (reconstructed slices) generated by the stream combining process performed sequentially as described above. As a result, the encoded stream is output from the stream combining unit 330b.
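- the slice-by-slice serialization performed by the single stream combining unit 330b can be sketched as follows (a toy model: `combine_one_slice` stands in for the full stream combining process described above, and string concatenation is used purely for illustration):

```python
def combine_serially(divided_streams, combine_one_slice):
    """One stream combining unit processing slice 0, slice 1, ..., slice n
    in order, instead of M units working in parallel."""
    num_slices = len(divided_streams[0])
    encoded_stream = []
    for s in range(num_slices):
        # gather slice s from each of the N divided streams and combine them
        parts = [stream[s] for stream in divided_streams]
        encoded_stream.append(combine_one_slice(parts))
    return encoded_stream

# N = 3 divided streams, each carrying its fragment of slices 0 and 1
streams = [["a0", "a1"], ["b0", "b1"], ["c0", "c1"]]
print(combine_serially(streams, "".join))  # -> ['a0b0c0', 'a1b1c1']
```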
- as described above, the image encoding device 300a is an image encoding device that generates an encoded stream by encoding image data, and includes N (N is an integer of 2 or more) encoding units (encoding engines 320) that generate N divided streams by encoding in parallel, for each picture included in the image data, a plurality of structural units included in the picture, and a stream combining unit 330b that performs, for each processing target region constituting the encoded stream, a combining process of generating a combined coding region, which is that processing target region, by combining the partial regions corresponding to the processing target region included in each of the N divided streams.
- when performing the combining process, if a partial region is composed of a plurality of encoded structural units, the stream combining unit 330b generates the combined coding region by dividing the partial region into the plurality of encoded structural units and rearranging them. When performing the rearrangement, if a slice included in the image data has been divided into a plurality of slice portions, encoded, and assigned to the N divided streams, the stream combining unit 330b reconstructs the slice portion group consisting of the plurality of encoded slice portions as a new slice in the combined coding region.
- the structural unit is, for example, a macroblock line
- the processing target area is a slice of an encoded stream
- the partial area is a slice of a divided stream.
- FIG. 38B is a flowchart showing the operation of the image encoding device 300a.
- the N encoding engines 320 of the image encoding device 300a generate N (N is an integer of 2 or more) divided streams by encoding in parallel, for each picture included in the image data, a plurality of structural units included in the picture (step S30).
- next, for each processing target region constituting the encoded stream, the stream combining unit 330b executes a combining process of generating a combined coding region, which is that processing target region, by combining the partial regions corresponding to the processing target region included in each of the N divided streams (step S31).
- when performing the above combining process, if a partial region is composed of a plurality of encoded structural units, the stream combining unit 330b generates the combined coding region by dividing the partial region into the plurality of encoded structural units and rearranging them. Further, when performing the rearrangement, if a slice included in the image data has been divided into a plurality of slice portions, encoded, and assigned to the N divided streams, the stream combining unit 330b reconstructs the slice portion group consisting of the plurality of encoded slice portions as a new slice.
- when the speed of the stream combining process by the stream combining unit 330b is equal to or higher than the speed of the parallel encoding process by the N encoding engines 320, the high encoding performance of the parallel encoding process can be fully exploited while the configuration of the image encoding device is simplified. Furthermore, since the stream combination control unit 340, the multiplexing unit 350, the M partial stream buffers 363, and the like are unnecessary, the overall configuration and processing of the image encoding device can be simplified.
- FIG. 39 is a diagram illustrating an application example of the image decoding devices according to the first and second embodiments and the image encoding device according to the third embodiment.
- the above-described image decoding apparatus and image encoding apparatus are provided in a reproduction / recording apparatus 101 that receives a broadcast wave and reproduces / records an encoded stream included in the broadcast wave.
- the playback / recording apparatus 101 includes an antenna 101a that receives a broadcast wave of BS digital broadcasting and an apparatus main body 101b.
- the apparatus main body 101b includes the above-described image decoding device and image encoding device.
- the image decoding apparatus provided in the apparatus main body 101b extracts, for example, a 4k2k encoded stream from the broadcast wave received by the antenna 101a. Then, as described above, the image decoding apparatus divides the extracted encoded stream to generate N divided streams, and decodes the N divided streams in parallel.
- the image encoding device provided in the apparatus main body 101b re-encodes the 4k2k pictures decoded by the image decoding apparatus in parallel and records them in the storage medium provided in the apparatus main body 101b.
- the image decoding device and the image decoding method according to the present invention have been described above using the embodiments, but the present invention is not limited to these. Forms obtained by applying various modifications to the embodiments without departing from the spirit of the present invention are also included in the present invention.
- for example, the image decoding devices 100 and 200 include the divided stream buffer 152 and the like, but need not include them.
- FIG. 40 is a block diagram showing the minimum configuration of the image decoding apparatus according to the present invention.
- the image decoding apparatus 10 is an apparatus that decodes an encoded stream in which image data is encoded.
- the image decoding apparatus 10 has a minimum configuration for realizing the present invention.
- that is, the image decoding device 10 includes a first division control unit 11, M stream division units 12, a second division control unit 13, and N decoding units 14.
- the components including the first division control unit 11 and the second division control unit 13 correspond to the stream division control unit 140 of the first and second embodiments.
- the M stream dividing units 12 correspond to the M stream dividing units 130 according to the first embodiment or the M stream dividing units 230 according to the second embodiment.
- the N decoding units 14 correspond to the N decoding engines 120 according to the first embodiment or the N decoding engines 220 according to the second embodiment.
- the first division control unit 11 designates a processing target area (for example, a slice or a picture) included in the encoded stream. That is, the first division control unit 11 transmits distribution control information indicating the processing target area.
- Each time a processing target region is designated by the first division control unit 11, each of the M stream division units 12 executes a stream division process that generates at least a part of each of N (N is an integer of 2 or more) divided streams from the designated processing target region.
- The M stream division units 12 execute the above-described stream division process in parallel on the M (M is an integer of 2 or more) processing target regions designated by the first division control unit 11. As a result, M × N divided streams are generated.
- For each processing target region designated by the first division control unit 11, the second division control unit 13 selects a part of at least one divided stream from the M × N divided streams generated by the M stream division units 12, based on the arrangement of the processing target region in the encoded stream. Here, the arrangement is the order in which the processing target regions appear in the encoded stream. Selecting a part of at least one divided stream from the M × N divided streams in this way is equivalent to selecting one divided stream buffer 152 from the M divided stream buffers 152 of the first and second embodiments. The N decoding units 14 then decode in parallel the respective parts of the N divided streams that include the part of the at least one divided stream selected by the second division control unit 13.
- Each of the M stream division units 12 executes the above-described stream division process by dividing the processing target region into a plurality of structural units (for example, macroblock lines) and assigning each of the structural units to a part of one of the N divided streams to be generated.
- When dividing a processing target region into a plurality of structural units causes a slice included in that region to be divided into a plurality of slice portions assigned to a plurality of divided streams, each of the M stream division units 12 reconstructs, for each divided stream, the slice portion group consisting of the at least one slice portion assigned to that divided stream as a new slice.
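The division and regrouping described above can be sketched as follows. This is an illustrative model only: the names `divide_region` and `regroup_as_new_slice`, and the round-robin assignment policy, are assumptions for the sketch, not the patent's actual implementation, and no H.264 bitstream parsing is performed.

```python
def divide_region(mb_lines, n):
    """Assign each macroblock line of a processing target region to one of
    n divided streams in round-robin order (an assumed policy)."""
    streams = [[] for _ in range(n)]
    for i, line in enumerate(mb_lines):
        streams[i % n].append(line)
    return streams

def regroup_as_new_slice(stream_lines):
    """A slice split across macroblock lines leaves fragments (slice portions)
    in each divided stream; the fragments in one stream form a new slice."""
    return {"slice_header": "duplicated", "portions": stream_lines}

# A region of 6 macroblock lines divided for N = 2 decoding units:
streams = divide_region(["L0", "L1", "L2", "L3", "L4", "L5"], 2)
# streams[0] == ["L0", "L2", "L4"], streams[1] == ["L1", "L3", "L5"]
```

Each list in `streams` then receives a duplicated slice header so that its slice portion group decodes as an ordinary slice.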
- FIG. 41 is a flowchart showing an image decoding method by the image decoding apparatus 10.
- This image decoding method is a method by which the image decoding apparatus 10 decodes an encoded stream, and includes: a first division control step S50 of designating a processing target region included in the encoded stream; a stream division step S51 of generating M × N divided streams by executing, in parallel on the M (M is an integer of 2 or more) processing target regions designated in the first division control step S50, a stream division process that generates at least a part of each of N (N is an integer of 2 or more) divided streams from a processing target region each time the region is designated in step S50; a second division control step S52 of selecting, for each processing target region designated in the first division control step S50, a part of at least one divided stream from the M × N divided streams generated in the stream division step S51, based on the arrangement of the processing target region in the encoded stream; and a decoding step S53 of decoding in parallel, each time a part of at least one divided stream is selected in the second division control step S52, the respective parts of the N divided streams including the selected part.
- In the stream division step S51, the processing target region is divided into a plurality of structural units, and the stream division process is executed by assigning each of the structural units to a part of one of the N divided streams to be generated.
- When a slice included in the processing target region is thereby divided into a plurality of slice portions and assigned to a plurality of divided streams, a slice portion group consisting of the at least one slice portion assigned to each divided stream is reconstructed as a new slice for that divided stream.
- In this way, a region to be processed, such as an encoded picture or slice, is divided into structural units such as a plurality of macroblock lines, and each macroblock line is assigned to one of the N decoding units as a part of a divided stream and decoded. The burden of the decoding process can therefore be equalized among the N decoding units, and parallel decoding can be executed appropriately.
- Even when an H.264/AVC encoded picture consists of a single slice, the encoded picture is divided into a plurality of macroblock lines, so the decoding of that one slice is not imposed on a single decoding unit; the N decoding units can share the burden equally.
- a slice extending over the plurality of macroblock lines may be divided into a plurality of slice portions, and these slice portions may be assigned to different divided streams.
- one divided stream does not include the entire slice of the encoded picture, but includes a slice portion group configured by collecting one or more slice portions that are fragments of the slice.
- such a slice portion group may not include a header indicating its head or end information indicating its end.
- at least two consecutive slice portions in the encoded stream may have a dependency relationship depending on a predetermined codeword included in the encoded stream.
- However, since such a slice portion group is reconstructed as a new slice, the decoding unit 14 that decodes the divided stream containing the slice portion group can easily recognize the group as a new slice and decode it appropriately, without any special processing for recognizing and decoding the group. That is, in the image decoding apparatus 10 and its image decoding method, there is no need to provide a function or configuration for special processing in each of the N decoding units 14; a conventional decoding circuit can therefore be used as the decoding unit 14 that decodes a divided stream, and the overall configuration of the image decoding apparatus can be simplified.
- Since the stream division process shown here is executed in parallel by the M stream division units 12, for example in units of slices, the burden of the stream division process on each stream division unit 12 can be reduced. Furthermore, even though the processing amount of the stream division process varies from slice to slice, the first division control unit 11 controls the distribution so that the stream division process is borne equally by the M stream division units 12.
- Since the stream division process is executed in parallel on M processing target regions, when the data amount of the encoded stream is large, the processing speed can be raised not only by increasing the number of decoding units and the degree of decoding parallelism, but also by increasing the number of stream division units and the degree of division parallelism.
- Here, M processing target regions are designated for the M stream division units 12; that is, the stream division process, which divides a processing target region into a plurality of structural units (for example, macroblock lines), is distributed among the M stream division units.
- However, in such distributed processing, the order relationship of the plurality of processing target regions included in the encoded stream cannot be maintained in the M × N divided streams generated by the M stream division units 12, so the M × N divided streams cannot be decoded as they are. Therefore, in the image decoding apparatus 10 and its image decoding method, for each designated processing target region, a part of at least one divided stream is selected from the M × N divided streams generated by the M stream division units 12, based on the arrangement of the processing target regions, that is, the decoding order of the processing target regions in the encoded stream. The respective parts of the N divided streams that include the selected part are then decoded in parallel, so the M × N divided streams can be decoded in the correct order relationship. Furthermore, in the image decoding apparatus 10 and its image decoding method, the designation of processing target regions and the selection of partial streams are centralized in components different from the M stream division units 12 and the N decoding units 14. Accordingly, parallel decoding can be executed appropriately with a simple configuration, without requiring special processing or configuration in each component such as the M stream division units 12 and the N decoding units 14 in order to decode the M × N divided streams in the correct order.
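The order-restoring selection can be illustrated with a small scheduling model. Assume, purely for illustration, that region k of the encoded stream was dispatched to stream division unit k mod M (this dispatch policy and the function name are assumptions, not the patent's stated implementation); the second division control then visits regions in encoded-stream order and picks the corresponding divider's output:

```python
def select_in_stream_order(divided, m):
    """divided[j] is the list of per-region outputs produced by stream
    division unit j, in the order that unit received regions. If regions
    were dispatched round-robin, region k was handled by divider k % m and
    is the (k // m)-th region that divider received."""
    total = sum(len(outputs) for outputs in divided)
    order = []
    for k in range(total):
        j = k % m                       # divider that handled region k
        order.append(divided[j][k // m])
    return order

# M = 2 dividers, 4 regions dispatched alternately:
divided = [["R0", "R2"], ["R1", "R3"]]
# select_in_stream_order(divided, 2) == ["R0", "R1", "R2", "R3"]
```

The point of the model is that the selection logic lives outside the dividers and decoders; neither needs to know the global region order.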
- In addition, the image decoding apparatus 10 and its image decoding method do not require the divided stream buffer 152 and the like of the above-described embodiments; even without them, the above-described effects can be obtained and the above-described object can be achieved.
- FIG. 42 is a block diagram showing the minimum configuration of the image coding apparatus according to the present invention.
- the image encoding device 20 is a device that generates an encoded stream by encoding image data, and has a minimum configuration for realizing the present invention.
- The image encoding device 20 includes N encoding units 21, a first combination control unit 22, M stream combining units 23, a second combination control unit 24, and a multiplexing unit 25.
- the components including the first combination control unit 22 and the second combination control unit 24 correspond to the stream combination control unit 340 of the third embodiment.
- The N encoding units 21 correspond to the N encoding engines 320 of the third embodiment, and the M stream combining units 23 correspond to the M stream combining units 330 of the third embodiment.
- the multiplexing unit 25 corresponds to the multiplexing unit 350 of the third embodiment.
- For each picture included in the image data, the N encoding units 21 encode in parallel the N constituent units (for example, macroblock lines) included in the picture, thereby generating N (N is an integer of 2 or more) divided streams.
- the first combination control unit 22 designates a processing target area (for example, a slice or a picture) constituting the encoded stream. That is, the first combination control unit 22 transmits distribution control information indicating the processing target area.
- Each of the M stream combining units 23 executes a combining process that generates a combined coding region, which is a processing target region, by combining the partial regions corresponding to that processing target region included in each of the N divided streams. This combining process (stream combining process) is executed in parallel on the M processing target regions designated by the first combination control unit 22.
- the partial area is an area divided from the processing target area.
- Each partial region corresponding to the processing target region is included in each of the N divided streams by the processing by the N encoding units 21. These partial areas become one combined coding area by the above-described combining process.
- When the N encoding units 21 have divided a slice into a plurality of slice portions, a new slice is reconstructed in the combined coding region by the above-described combining process.
- the stream combining unit 23 sequentially generates the combined coding regions (slices), thereby generating and outputting a partial stream including these combined coding regions.
- Based on the arrangement, in the encoded stream, of the M processing target regions designated by the first combination control unit 22, the second combination control unit 24 sequentially selects the combined coding regions to be multiplexed from the M combined coding regions generated by the M stream combining units 23. The selection result is notified to the multiplexing unit as selection information, for example, as in the third embodiment.
- the multiplexing unit 25 generates an encoded stream by multiplexing the M combined encoded regions in the order selected by the second combined control unit 24.
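As a sketch (the function name, queue layout, and the alternating dispatch policy are assumptions for illustration, not the patent's implementation), multiplexing in the selected order amounts to interleaving the combiners' output queues back into encoded-stream order:

```python
from collections import deque

def multiplex(partial_streams):
    """partial_streams[j] is the queue of combined coding regions produced
    by stream combining unit j, in the order that unit received regions.
    If regions were dispatched to the combiners round-robin, popping the
    queues round-robin restores encoded-stream order."""
    queues = [deque(p) for p in partial_streams]
    out = []
    while any(queues):
        for q in queues:
            if q:
                out.append(q.popleft())
    return out

# M = 2 combiners that received regions alternately:
# multiplex([["S0", "S2"], ["S1", "S3"]]) == ["S0", "S1", "S2", "S3"]
```

Here again the ordering knowledge sits in the central selection logic, so the combiners themselves need no awareness of the global order.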
- When performing the combining process, if the partial region is composed of a plurality of encoded structural units (for example, macroblock lines), each of the M stream combining units 330 generates the above-described combined coding region by dividing the partial region into the plurality of encoded structural units and rearranging them.
- When performing this rearrangement, if a slice included in the image data has been divided into a plurality of slice portions, encoded, and assigned to the N divided streams, each of the M stream combining units 330 reconstructs, in the combined coding region, the slice portion group consisting of the plurality of encoded slice portions as a new slice.
- FIG. 43 is a flowchart showing an image encoding method performed by the image encoding device 20.
- This image encoding method is a method by which the image encoding device 20 generates an encoded stream by encoding image data, and includes: an encoding step S60 of generating N (N is an integer of 2 or more) divided streams by encoding in parallel, for each picture included in the image data, the plurality of constituent units included in the picture; a first combination control step S61 of designating the processing target regions constituting the encoded stream; and a stream combining step S62 of executing, in parallel on the M (M is an integer of 2 or more) processing target regions designated in the first combination control step S61, a combining process that generates a combined coding region, which is a processing target region, by combining the partial regions corresponding to that region included in each of the N divided streams.
- Then, based on the arrangement, in the encoded stream, of the M processing target regions designated in the first combination control step S61, the combined coding regions to be multiplexed are sequentially selected from the M combined coding regions generated in the stream combining step S62, and the encoded stream is generated by multiplexing them in the selected order.
- In the stream combining step S62, when the combining process is performed and the partial region is composed of a plurality of encoded structural units, the above-described combined coding region is generated by dividing the partial region into the plurality of encoded structural units and rearranging them.
- In the stream combining step S62, when the rearrangement is performed and a slice included in the image data has been divided into a plurality of slice portions, encoded, and assigned to the N divided streams, the slice portion group consisting of the plurality of encoded slice portions is reconstructed as a new slice.
- In the above description, the combining processes are performed in parallel, but they may instead be performed serially. In that case, the image encoding device 20 need include only one stream combining unit 23.
- In the image encoding device 20 and its image encoding method, a picture is divided into structural units such as a plurality of macroblock lines, and each macroblock line is assigned to one of the N encoding units 21 and encoded. The burden of the encoding process can therefore be equalized among the N encoding units 21, and parallel encoding can be executed appropriately. For example, even when an H.264/AVC coded picture is configured as a single slice, the picture is divided into a plurality of macroblock lines, so the encoding of that one slice is not imposed on a single encoding unit 21; the N encoding units 21 can share the burden equally.
- Since the combining process is distributed to the M stream combining units 23 in units of processing target regions according to the designation by the first combination control unit 22, the combining process can be executed in parallel by the M stream combining units 23.
- Here, by assigning each of the plurality of macroblock lines to one of the N encoding units 21, a slice straddling the plurality of macroblock lines is divided into a plurality of slice portions, and these slice portions are assigned to the divided streams in turn. That is, slice portions that are fragments of a slice are dispersed across the divided streams, and the dispersed slice portions no longer have the context they had in the image data. Therefore, when a plurality of consecutive macroblock lines have a dependency relationship based on a predetermined codeword, the dispersed slice portions cannot maintain that dependency, and an encoded stream conforming to the encoding scheme cannot be generated as it is.
- In the image encoding device 20 and its image encoding method, however, the slice portion group that is the set of the dispersed slice portions is reconstructed as a new slice, so the combined coding region including the slice portion group can take a format conforming to the encoding scheme.
- Moreover, since the combining process is distributed to the M stream combining units 23 in units of processing target regions and performed in parallel, the order relationship of the plurality of processing target regions included in the encoded stream cannot be maintained in the M combined coding regions (partial streams) generated by the parallel combining, and the M combined coding regions cannot be multiplexed in the correct order as they are.
- Therefore, in the image encoding device 20 and its image encoding method, for each designated processing target region, the combined coding regions to be multiplexed are sequentially selected from the M combined coding regions generated by the M stream combining units 23, based on the arrangement of the processing target regions, that is, the order of encoding of the processing target regions in the encoded stream. The M combined coding regions are then multiplexed in the selected order, so they can be multiplexed in the correct order relationship. Furthermore, in the image encoding device 20 and its image encoding method, the designation of processing target regions and the selection of the combined coding regions to be multiplexed are centralized in components different from the M stream combining units 23 and the N encoding units 21, so parallel encoding can be executed appropriately with a simple configuration, without requiring special processing or configuration in each of those components.
- In addition, the image encoding device 20 and its image encoding method do not require the partial stream buffer 363 and the like of the above-described embodiment; even without them, the above-described effects can be obtained and the above-described object can be achieved.
- In the above embodiments, one macroblock line (1 MB line) is treated as one structural unit and a picture is divided into a plurality of such structural units. However, the structural unit is not limited to a 1 MB line; it may be 2 MB lines or 3 MB lines, or a plurality of macroblocks arranged in a line in the vertical direction of the picture. It is also possible to treat 2 MB lines as one structural unit in some cases and 1 MB line in others.
- In the above embodiments, the stream division control unit distributes the processing to the M stream division units in units of slices, but the processing may be distributed in larger units, for example, in units of pictures or in units of picture groups each consisting of a plurality of pictures.
- Similarly, in the above embodiments the stream combination control unit distributes the processing to the M stream combining units in units of slices, but the processing may be distributed in larger units, for example, in units of pictures or in units of picture groups each consisting of a plurality of pictures.
- the stream dividing unit inserts the duplicate slice header into the divided stream, and the decoding engine reads and decodes the divided stream in which the duplicate slice header is inserted.
- the stream dividing unit may output the duplicate slice header directly to the decoding engine without inserting the duplicate slice header into the divided stream.
- In that case, the stream division unit determines whether a duplicate slice header should exist immediately before an MB line in the divided stream to be read into the decoding engine, and when determining that it should, outputs the duplicate slice header to the decoding engine before the MB line is read in.
- the stream dividing unit may output only a part of the information included in the duplicate slice header to the decode engine without outputting the duplicate slice header itself to the decode engine.
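The decision in this modification can be modelled as follows. This is a sketch under stated assumptions: the `slice_start_lines` bookkeeping, the header dictionary, and the particular fields passed to the decoding engine are illustrative, not the patent's data structures.

```python
def headers_to_emit(mb_line_index, slice_start_lines, slice_headers):
    """Return the duplicate slice header information that should precede
    this MB line in the divided stream, or None.

    A duplicate header is needed exactly when the MB line begins a slice
    portion whose original slice header went to another divided stream."""
    if mb_line_index in slice_start_lines:
        header = slice_headers[slice_start_lines[mb_line_index]]
        # Variation from the text: pass only part of the header's
        # information to the decoding engine, not the header itself.
        return {k: header[k] for k in ("slice_qp", "slice_type") if k in header}
    return None

starts = {3: "slice1"}
hdrs = {"slice1": {"slice_qp": 26, "slice_type": "P", "first_mb": 99}}
# headers_to_emit(3, starts, hdrs) == {"slice_qp": 26, "slice_type": "P"}
# headers_to_emit(2, starts, hdrs) is None
```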
- In the third embodiment, the stream combining unit generates and attaches header information such as the SPS, the PPS, and the slice headers, but the header information may instead be generated and attached by another processing unit. Specifically, each of the N encoding engines may generate its divided stream with slice headers already attached. In that case, when the divided streams are combined by the stream combining unit, slice headers for the same slice may be duplicated, and the stream combining unit removes the unnecessary slice headers.
- the SPS, PPS, and slice header generation and assignment may be performed by the multiplexing unit.
- In the above description, the stream division unit executes either the correction of the QP change amount or the insertion of the QP change amount, but both may be executed.
- In that case, for example, the stream division unit determines whether the top macroblock of an MB line includes a QP change amount; when it does, the stream division unit may replace the QP change amount of that macroblock (steps S318 to S322 in FIG. 21), and when it does not, the stream division unit may output the accumulated QP change amount or the like (steps S352 to S356 in FIG. 28).
- When the stream division unit performs both the correction of the QP change amount and the insertion of the QP change amount, it is preferable that the skip run correction unit determine, for example when MB skip run information is located at the head of an MB line, whether a QP change amount is inserted at the head of that MB line.
- In that case, the skip run correction unit may add the preceding MB skip run information to the MB skip run information (S210 in FIG. 18), or may output the preceding MB skip run information and the MB skip run information as MB skip run information.
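The two alternatives for the skip run handling can be sketched as follows (illustrative names; here `mb_skip_run` refers to the H.264/AVC syntax element counting consecutive skipped macroblocks, and the merge condition shown is an assumption for the sketch):

```python
def merge_skip_runs(preceding_run, run, qp_delta_inserted):
    """When two runs of skipped macroblocks become adjacent after MB lines
    are reassigned, either fold them into one mb_skip_run (no codeword may
    sit between them) or keep them separate, e.g. when a QP change amount
    is inserted at the head of the MB line between them."""
    if qp_delta_inserted:
        return [preceding_run, run]   # output both as separate skip run info
    return [preceding_run + run]      # add the preceding run (cf. S210, FIG. 18)

# merge_skip_runs(3, 4, qp_delta_inserted=False) == [7]
# merge_skip_runs(3, 4, qp_delta_inserted=True) == [3, 4]
```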
- In the above description, the first codeword is MB skip run information, but the first codeword does not necessarily need to be MB skip run information; it may be a codeword indicating that macroblocks of a type different from the skip macroblock are consecutive.
- Likewise, in the above description the second codeword is a QP change amount, but the second codeword is not necessarily a QP change amount; it may be a codeword, different from the QP change amount, indicating a change amount of a coding coefficient between macroblocks.
- Each functional block in the block diagrams is typically realized as an LSI (Large Scale Integration) integrated circuit. These blocks may be individually formed into single chips, or a single chip may include some or all of them. For example, the portion represented by the decoder 110 in FIG. 1 (including the M stream division units 130 and the first to Nth decoding engines 120) may be included in one chip.
- Although referred to here as an LSI, it may also be called an IC (Integrated Circuit), a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
- the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
- An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
- The image decoding device and the image encoding device according to the present invention have the effect that parallel decoding and encoding processes can be executed appropriately with a simple configuration, and are useful, for example, as a playback device that decodes a 4k2k encoded stream or as a recording device that encodes a 4k2k moving image.
Description
FIG. 1 is a block diagram showing the configuration of the image decoding apparatus according to Embodiment 1 of the present invention.
When acc_mb_qp_delta ≤ 25: mb_qp_delta = acc_mb_qp_delta … (4)
Next, Embodiment 2 of the present invention will be described in detail.
Next, Embodiment 3 of the present invention will be described in detail.
When mb_qp_delta ≤ 25: mb_qp_delta = mb_qp_delta … (6)
11 First division control unit
12, 130, 230 First to Mth stream division units (stream division units)
13 Second division control unit
14 Decoding unit
20, 300, 300a Image encoding device
21 Encoding unit
22 First combination control unit
23, 330, 330b First to Mth stream combining units (stream combining units)
24 Second combination control unit
25, 350 Multiplexing unit
110, 210 Decoder
120, 220 First to Nth decoding engines (decoding engines)
130m, 330m Processing management unit
131 Start code detection unit
132a EPB removal unit
132b EPB insertion unit
133 Slice header insertion unit
133a NAL type determination unit
133b Header insertion counter
133c Header address update unit
133d Header buffer
134, 134a, 134b Slice data processing unit
135a, 135b Slice data layer decoding unit
136a, 136b Macroblock layer decoding unit
137a, 237a Skip run correction unit
138, 138a, 138b QP delta correction unit
139, 139a, 139b Division point detection unit
140 Stream division control unit
150 Memory
151 Stream buffer
152 First to Mth divided stream buffers (divided stream buffers)
153 Frame memory
160 Skip run extraction unit
161 Skip run division unit
162 Skip run accumulation/holding unit
163 Addition unit
164 Skip run encoding unit
238, 238a, 238b QP delta insertion unit
310 Encoder
320 First to Nth encoding engines (encoding engines)
331 Start code detection unit
332a EPB removal unit
332b EPB insertion unit
333 Header insertion unit
334a, 334b Slice data processing unit
335a, 335b Slice data layer analysis unit
336a, 336b Macroblock layer analysis unit
337a Skip run correction unit
338, 138a, 138b QP delta correction unit
339a, 339b Division point detection unit
340 Stream combination control unit
360 Memory
361 Frame memory
362 Divided stream buffer
363 First to Mth partial stream buffers (partial stream buffers)
Claims (20)
- An image decoding apparatus that decodes an encoded stream in which image data is encoded, the image decoding apparatus comprising:
a first division control unit that designates a processing target region included in the encoded stream;
M stream division units that generate M × N divided streams by executing, in parallel on M (M is an integer of 2 or more) processing target regions designated by the first division control unit, a stream division process that generates at least a part of each of N (N is an integer of 2 or more) divided streams from the processing target region each time the processing target region is designated by the first division control unit;
a second division control unit that, for each processing target region designated by the first division control unit, selects a part of at least one divided stream from the M × N divided streams generated by the M stream division units, based on the arrangement of the processing target region in the encoded stream; and
N decoding units that, each time a part of at least one divided stream is selected by the second division control unit, decode in parallel respective parts of the N divided streams including the part of the at least one divided stream,
wherein each of the M stream division units
executes the stream division process by dividing the processing target region into a plurality of structural units and assigning each of the plurality of structural units to a part of one of the N divided streams to be generated, and
when a slice included in the processing target region is divided into a plurality of slice portions and assigned to a plurality of divided streams as a result of dividing the processing target region into the plurality of structural units, reconstructs, for each divided stream, a slice portion group consisting of at least one slice portion assigned to the divided stream as a new slice.
- Each of the M stream division units analyzes each piece of first header information included in the encoded stream regardless of the designated processing target region, and generates the N divided streams based on the analysis result.
The image decoding apparatus according to claim 1.
- Any one of the M stream division units generates N divided streams including second header information included in the encoded stream, and all the other stream division units among the M stream division units generate N divided streams not including the second header information.
The image decoding apparatus according to claim 1 or 2.
- The second division control unit further generates selection information indicating the selected part of the divided stream and outputs the selection information to each of the N decoding units, and the N decoding units decode in parallel the respective parts of the N divided streams indicated by the selection information output from the second division control unit.
The image decoding apparatus according to any one of claims 1 to 3.
- The second division control unit outputs, to each of the N decoding units, the selection information including the size of the data of the selected part of the divided stream, and the N decoding units identify the respective parts of the N divided streams based on the data size included in the selection information output from the second division control unit, and decode those parts in parallel.
The image decoding apparatus according to claim 4.
- The second division control unit outputs the selection information including, as the size, the number of data structural units or the bit amount constituting each of the N divided streams.
The image decoding apparatus according to claim 5.
- The first division control unit further determines, for each stream division unit, whether the stream division process on one processing target region executed by the stream division unit has been completed, and when determining that the process has been completed, preferentially designates a new processing target region to the stream division unit that has completed the stream division process.
The image decoding apparatus according to any one of claims 1 to 6.
- The N decoding units include first and second decoding units, and in a case where the first decoding unit decodes a first slice portion included in the divided stream assigned to the first decoding unit among the N divided streams, the second decoding unit decodes a second slice portion included in the divided stream assigned to the second decoding unit among the N divided streams, and the first and second slice portions are spatially adjacent:
the first decoding unit starts decoding the first slice portion before the second decoding unit starts decoding the second slice portion, and
the second decoding unit obtains, from the first decoding unit, adjacency information generated by the decoding of the first slice portion by the first decoding unit, and decodes the second slice portion using the adjacency information or decodes the second slice portion without using the adjacency information.
The image decoding apparatus according to any one of claims 1 to 7.
- The first division control unit designates, as the processing target region, a slice, a picture, or a picture group consisting of a plurality of pictures included in the encoded stream.
The image decoding apparatus according to any one of claims 1 to 8.
- An image decoding method for decoding an encoded stream in which image data is encoded, the image decoding method comprising:
a first division control step of designating a processing target region included in the encoded stream;
a stream division step of generating M × N divided streams by executing, in parallel on M (M is an integer of 2 or more) processing target regions designated in the first division control step, a stream division process that generates at least a part of each of N (N is an integer of 2 or more) divided streams from the processing target region each time the processing target region is designated in the first division control step;
a second division control step of selecting, for each processing target region designated in the first division control step, a part of at least one divided stream from the M × N divided streams generated in the stream division step, based on the arrangement of the processing target region in the encoded stream; and
a decoding step of decoding in parallel, each time a part of at least one divided stream is selected in the second division control step, respective parts of the N divided streams including the part of the at least one divided stream,
wherein in the stream division step,
the stream division process is executed by dividing the processing target region into a plurality of structural units and assigning each of the plurality of structural units to a part of one of the N divided streams to be generated, and
when a slice included in the processing target region is divided into a plurality of slice portions and assigned to a plurality of divided streams as a result of dividing the processing target region into the plurality of structural units, a slice portion group consisting of at least one slice portion assigned to each divided stream is reconstructed as a new slice for that divided stream.
- A program for causing a computer to function as the units included in the image decoding apparatus according to any one of claims 1 to 9.
- The image decoding apparatus according to any one of claims 1 to 9, wherein the image decoding apparatus is configured as an integrated circuit.
- An image encoding apparatus that generates an encoded stream by encoding image data, the image encoding apparatus comprising:
N encoding units that generate N (N is an integer of 2 or more) divided streams by encoding in parallel, for each picture included in the image data, a plurality of structural units included in the picture;
a first combination control unit that designates processing target regions constituting the encoded stream;
M stream combining units that execute, in parallel on M (M is an integer of 2 or more) processing target regions designated by the first combination control unit, a combining process that generates a combined coding region, which is a processing target region, by combining partial regions corresponding to the processing target region included in each of the N divided streams;
a second combination control unit that sequentially selects the combined coding regions to be multiplexed from the M combined coding regions generated by the M stream combining units, based on the arrangement, in the encoded stream, of the M processing target regions designated by the first combination control unit; and
a multiplexing unit that generates the encoded stream by multiplexing the M combined coding regions in the order selected by the second combination control unit,
wherein each of the M stream combining units,
when performing the combining process, if the partial region is composed of a plurality of encoded structural units, generates the combined coding region by dividing the partial region into the plurality of encoded structural units and rearranging them, and
when performing the rearrangement, if a slice included in the image data has been divided into a plurality of slice portions, encoded, and assigned to the N divided streams, reconstructs, in the combined coding region, a slice portion group consisting of the plurality of encoded slice portions as a new slice.
- The second combination control unit further generates, each time a combined coding region to be multiplexed is selected, selection information indicating the combined coding region and outputs the selection information to the multiplexing unit, and the multiplexing unit, each time it obtains the selection information from the second combination control unit, multiplexes the combined coding region indicated by the selection information into the encoded stream.
The image encoding apparatus according to claim 13.
- The second combination control unit outputs, to the multiplexing unit, the selection information including the size of the data of the selected combined coding region, and the multiplexing unit multiplexes the combined coding region of the size included in the selection information into the encoded stream.
The image encoding apparatus according to claim 14.
- The first combination control unit further determines, for each stream combining unit, whether the combining process executed by the stream combining unit has been completed, and when determining that the process has been completed, preferentially designates a new processing target region to the stream combining unit that has completed the combining process.
The image encoding apparatus according to any one of claims 13 to 15.
- The N encoding units include first and second encoding units, and in a case where the first encoding unit encodes a first structural unit assigned to the first encoding unit among the N structural units, the second encoding unit encodes a second structural unit assigned to the second encoding unit among the N structural units, and the first and second structural units are adjacent to each other in the picture:
the first encoding unit starts encoding the first structural unit before the second encoding unit starts encoding the second structural unit, and
the second encoding unit obtains, from the first encoding unit, adjacency information generated by the encoding of the first structural unit by the first encoding unit, and encodes the second structural unit using the adjacency information or encodes the second structural unit without using the adjacency information.
The image encoding apparatus according to any one of claims 13 to 16.
- An image encoding method for generating an encoded stream by encoding image data, the image encoding method comprising:
an encoding step of generating N (N is an integer of 2 or more) divided streams by encoding in parallel, for each picture included in the image data, a plurality of structural units included in the picture;
a first combination control step of designating processing target regions constituting the encoded stream;
a stream combining step of executing, in parallel on M (M is an integer of 2 or more) processing target regions designated in the first combination control step, a combining process that generates a combined coding region, which is a processing target region, by combining partial regions corresponding to the processing target region included in each of the N divided streams;
a second combination control step of sequentially selecting the combined coding regions to be multiplexed from the M combined coding regions generated in the stream combining step, based on the arrangement, in the encoded stream, of the M processing target regions designated in the first combination control step; and
a multiplexing step of generating the encoded stream by multiplexing the M combined coding regions in the order selected in the second combination control step,
wherein in the stream combining step,
when the combining process is performed, if the partial region is composed of a plurality of encoded structural units, the combined coding region is generated by dividing the partial region into the plurality of encoded structural units and rearranging them, and
when the rearrangement is performed, if a slice included in the image data has been divided into a plurality of slice portions, encoded, and assigned to the N divided streams, a slice portion group consisting of the plurality of encoded slice portions is reconstructed in the combined coding region as a new slice.
- A program for causing a computer to function as the units included in the image encoding apparatus according to any one of claims 13 to 17.
- The image encoding apparatus according to any one of claims 13 to 17, wherein the image encoding apparatus is configured as an integrated circuit.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP11812073.2A EP2600612A4 (en) | 2010-07-30 | 2011-07-27 | IMAGE DECODING DEVICE, IMAGE DECODING METHOD, IMAGE ENCODING DEVICE, AND IMAGE ENCODING METHOD |
JP2011550158A JP5656879B2 (ja) | 2010-07-30 | 2011-07-27 | Image decoding device, image decoding method, image coding device, and image coding method |
CN201180003988.1A CN102550029B (zh) | 2010-07-30 | 2011-07-27 | Image decoding device, image decoding method, image coding device, and image coding method |
US13/498,685 US9307260B2 (en) | 2010-07-30 | 2011-07-27 | Image decoding apparatus, image decoding method, image coding apparatus, and image coding method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010173069 | 2010-07-30 | ||
JP2010-173069 | 2010-07-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012014471A1 true WO2012014471A1 (ja) | 2012-02-02 |
Family
ID=45529702
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/004259 WO2012014471A1 (ja) | 2010-07-30 | 2011-07-27 | Image decoding device, image decoding method, image coding device, and image coding method |
Country Status (5)
Country | Link |
---|---|
US (1) | US9307260B2 (ja) |
EP (1) | EP2600612A4 (ja) |
JP (1) | JP5656879B2 (ja) |
CN (1) | CN102550029B (ja) |
WO (1) | WO2012014471A1 (ja) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016047375A1 (ja) * | 2014-09-24 | 2016-03-31 | 株式会社日立情報通信エンジニアリング | Moving image coding device, moving image decoding device, and moving image coding/decoding method |
JP2016063534A (ja) * | 2014-09-12 | 2016-04-25 | パナソニックIpマネジメント株式会社 | Transmission device, reception device, transmission method, and reception method |
WO2016129031A1 (ja) * | 2015-02-09 | 2016-08-18 | 株式会社日立情報通信エンジニアリング | Image compression/decompression device |
WO2018008076A1 (ja) * | 2016-07-05 | 2018-01-11 | さくら映機株式会社 | Real-time editing system |
WO2019239931A1 (ja) * | 2018-06-14 | 2019-12-19 | ソニー株式会社 | Image processing device and method |
JP2020072369A (ja) * | 2018-10-31 | 2020-05-07 | 日本電信電話株式会社 | Decoding device, coding device, decoding method, coding method, and program |
JP7471740B2 (ja) | 2021-07-07 | 2024-04-22 | テンセント・アメリカ・エルエルシー | Method and apparatus for segment-based split and merge functions for parallel processing of media |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9307258B2 (en) * | 2012-10-30 | 2016-04-05 | Broadcom Corporation | Parallel transcoding |
US9241163B2 (en) * | 2013-03-15 | 2016-01-19 | Intersil Americas LLC | VC-2 decoding using parallel decoding paths |
CN104053000B (zh) * | 2013-03-15 | 2018-12-25 | 英特希尔美国公司 | 使用平行译码路径的视频压缩(vc-2)译码 |
JP6361866B2 (ja) * | 2013-05-09 | 2018-07-25 | サン パテント トラスト | Image processing method and image processing device |
US20150023410A1 (en) * | 2013-07-16 | 2015-01-22 | Arcsoft Hangzhou Co., Ltd. | Method for simultaneously coding quantized transform coefficients of subgroups of frame |
WO2015040824A1 (ja) * | 2013-09-20 | 2015-03-26 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Transmission method, reception method, transmission device, and reception device |
JP6268066B2 (ja) | 2013-09-20 | 2018-01-24 | Panasonic Intellectual Property Corporation of America | Transmission method, reception method, transmission device, and reception device |
US9338464B2 (en) * | 2014-02-04 | 2016-05-10 | Cable Television Laboratories, Inc. | Adaptive field and frame identification |
US10080019B2 (en) * | 2014-09-19 | 2018-09-18 | Intel Corporation | Parallel encoding for wireless displays |
US9186909B1 (en) | 2014-09-26 | 2015-11-17 | Intel Corporation | Method and system of lens shading color correction using block matching |
US9854261B2 (en) * | 2015-01-06 | 2017-12-26 | Microsoft Technology Licensing, Llc. | Detecting markers in an encoded video signal |
CN106210729A (zh) * | 2015-05-06 | 2016-12-07 | 扬智科技股份有限公司 | Video stream decoding system and video stream decoding method |
US20170105010A1 (en) * | 2015-10-09 | 2017-04-13 | Microsoft Technology Licensing, Llc | Receiver-side modifications for reduced video latency |
CN105472371B (zh) * | 2016-01-13 | 2019-11-05 | 腾讯科技(深圳)有限公司 | Video bitstream processing method and device |
US10484701B1 (en) * | 2016-11-08 | 2019-11-19 | Amazon Technologies, Inc. | Rendition switch indicator |
US11245901B2 (en) * | 2017-10-04 | 2022-02-08 | Panasonic Intellectual Property Management Co., Ltd. | Video signal processing device, video display system, and video signal processing method |
EP3499870B1 (en) * | 2017-12-14 | 2023-08-23 | Axis AB | Efficient blending using encoder |
CN109412755B (zh) * | 2018-11-05 | 2021-11-23 | 东方网力科技股份有限公司 | Multimedia data processing method, device, and storage medium |
FR3101741A1 (fr) * | 2019-10-02 | 2021-04-09 | Orange | Determination of corrections to be applied to a multichannel audio signal, and associated coding and decoding |
WO2021237510A1 (zh) * | 2020-05-27 | 2021-12-02 | 深圳市大疆创新科技有限公司 | Data decompression method, system, processor, and computer storage medium |
CN115063326B (zh) * | 2022-08-18 | 2022-10-25 | 威海天拓合创电子工程有限公司 | Efficient communication method for infrared night-vision images based on image compression |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000295616A (ja) * | 1999-04-08 | 2000-10-20 | Matsushita Electric Ind Co Ltd | Image coding device, image decoding device, image coding method, image decoding method, and program recording medium |
JP2003032679A (ja) * | 2001-07-11 | 2003-01-31 | Lsi Systems:Kk | Decoding device, decoding method, and program for causing a computer to execute the method |
JP2006129284A (ja) | 2004-10-29 | 2006-05-18 | Sony Corp | Coding and decoding apparatus, and coding and decoding method |
WO2007136093A1 (ja) * | 2006-05-24 | 2007-11-29 | Panasonic Corporation | Image decoding device |
JP2008067026A (ja) | 2006-09-07 | 2008-03-21 | Fujitsu Ltd | MPEG decoder and MPEG encoder |
JP2008072647A (ja) * | 2006-09-15 | 2008-03-27 | Toshiba Corp | Information processing device, decoder, and operation control method for playback device |
WO2008139708A1 (ja) * | 2007-04-27 | 2008-11-20 | Panasonic Corporation | Image decoding device, image decoding system, image decoding method, and integrated circuit |
JP2008306450A (ja) * | 2007-06-07 | 2008-12-18 | Panasonic Corp | Image coding device and image decoding device |
WO2009119888A1 (en) * | 2008-03-28 | 2009-10-01 | Sharp Kabushiki Kaisha | Methods, devices and systems for parallel video encoding and decoding |
JP2009246539A (ja) * | 2008-03-28 | 2009-10-22 | Ibex Technology Co Ltd | Coding device, coding method, coding program, decoding device, decoding method, and decoding program |
JP2010041472A (ja) | 2008-08-06 | 2010-02-18 | Nikon Corp | Electronic camera, data distribution method, and server |
JP2010041352A (ja) * | 2008-08-05 | 2010-02-18 | Panasonic Corp | Image decoding device and image decoding method |
WO2010041472A1 (ja) * | 2008-10-10 | 2010-04-15 | パナソニック株式会社 | Image decoding device and image decoding method |
JP2010109572A (ja) * | 2008-10-29 | 2010-05-13 | Toshiba Corp | Image processing device and method |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5280003B2 (ja) * | 2003-09-07 | 2013-09-04 | マイクロソフト コーポレーション | Slice layer in video codec |
JP4789200B2 (ja) * | 2006-08-07 | 2011-10-12 | ルネサスエレクトロニクス株式会社 | Functional module that performs either video coding or video decoding, and semiconductor integrated circuit including it |
US8428125B2 (en) * | 2006-12-22 | 2013-04-23 | Qualcomm Incorporated | Techniques for content adaptive video frame slicing and non-uniform access unit coding |
WO2009150801A1 (ja) * | 2008-06-10 | 2009-12-17 | パナソニック株式会社 | Decoding device, decoding method, and receiving device |
US8548061B2 (en) | 2008-08-05 | 2013-10-01 | Panasonic Corporation | Image decoding apparatus and image decoding method |
US9602821B2 (en) * | 2008-10-01 | 2017-03-21 | Nvidia Corporation | Slice ordering for video encoding |
CN101939994B (zh) | 2008-12-08 | 2013-07-17 | 松下电器产业株式会社 | Image decoding device and image decoding method |
2011
- 2011-07-27 JP JP2011550158A patent/JP5656879B2/ja not_active Expired - Fee Related
- 2011-07-27 WO PCT/JP2011/004259 patent/WO2012014471A1/ja active Application Filing
- 2011-07-27 US US13/498,685 patent/US9307260B2/en not_active Expired - Fee Related
- 2011-07-27 EP EP11812073.2A patent/EP2600612A4/en not_active Withdrawn
- 2011-07-27 CN CN201180003988.1A patent/CN102550029B/zh not_active Expired - Fee Related
Non-Patent Citations (4)
Title |
---|
BONGSOO JUNG ET AL.: "Adaptive slice-level parallelism for H.264/AVC encoding using pre macroblock mode selection", JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, vol. 19, no. 8, October 2008 (2008-10-01), pages 558 - 572, XP008154714 * |
JIE ZHAO ET AL.: "New Results using Entropy Slices for Parallel Decoding", ITU - TELECOMMUNICATIONS STANDARDIZATION SECTOR STUDY GROUP 16 QUESTION 6 VIDEO CODING EXPERTS GROUP (VCEG) 35TH MEETING, July 2008 (2008-07-01), BERLIN, GERMANY, XP030003597 *
MICHAEL ROITZSCH: "Slice-Balancing H.264 Video Encoding for Improved Scalability of Multicore Decoding", EMSOFT '07 PROCEEDINGS OF THE 7TH ACM & IEEE INTERNATIONAL CONFERENCE ON EMBEDDED SOFTWARE, 30 September 2007 (2007-09-30), pages 269 - 278, XP008154718 * |
See also references of EP2600612A4 * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016063534A (ja) * | 2014-09-12 | 2016-04-25 | パナソニックIpマネジメント株式会社 | Transmission device, reception device, transmission method, and reception method |
US10116936B2 (en) | 2014-09-24 | 2018-10-30 | Hitachi Information & Telecommunication Engineering, Ltd. | Moving image coding device, moving image decoding device, moving image coding method, and moving image decoding method |
JP2016066850A (ja) * | 2014-09-24 | 2016-04-28 | 株式会社日立情報通信エンジニアリング | Moving image coding device, moving image decoding device, and moving image coding/decoding method |
WO2016047375A1 (ja) * | 2014-09-24 | 2016-03-31 | 株式会社日立情報通信エンジニアリング | Moving image coding device, moving image decoding device, and moving image coding/decoding method |
WO2016129031A1 (ja) * | 2015-02-09 | 2016-08-18 | 株式会社日立情報通信エンジニアリング | Image compression/decompression device |
JP6085065B2 (ja) * | 2015-02-09 | 2017-02-22 | 株式会社日立情報通信エンジニアリング | Image compression/decompression device |
JPWO2016129031A1 (ja) * | 2015-02-09 | 2017-04-27 | 株式会社日立情報通信エンジニアリング | Image compression/decompression device |
WO2018008076A1 (ja) * | 2016-07-05 | 2018-01-11 | さくら映機株式会社 | Real-time editing system |
JPWO2018008076A1 (ja) * | 2016-07-05 | 2019-04-18 | さくら映機株式会社 | Real-time editing system |
WO2019239931A1 (ja) * | 2018-06-14 | 2019-12-19 | ソニー株式会社 | Image processing device and method |
JP2020072369A (ja) * | 2018-10-31 | 2020-05-07 | 日本電信電話株式会社 | Decoding device, coding device, decoding method, coding method, and program |
WO2020090408A1 (ja) * | 2018-10-31 | 2020-05-07 | 日本電信電話株式会社 | Decoding device, coding device, decoding method, coding method, and program |
JP7471740B2 (ja) | 2021-07-07 | 2024-04-22 | テンセント・アメリカ・エルエルシー | Method and apparatus for segment-based split and merge functions for parallel processing of media |
Also Published As
Publication number | Publication date |
---|---|
CN102550029A (zh) | 2012-07-04 |
JPWO2012014471A1 (ja) | 2013-09-12 |
JP5656879B2 (ja) | 2015-01-21 |
CN102550029B (zh) | 2015-10-07 |
US20120183079A1 (en) | 2012-07-19 |
EP2600612A1 (en) | 2013-06-05 |
EP2600612A4 (en) | 2015-06-03 |
US9307260B2 (en) | 2016-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5656879B2 (ja) | Image decoding device, image decoding method, image coding device, and image coding method | |
JP5341104B2 (ja) | Image decoding device and image decoding method | |
JP5345149B2 (ja) | Image decoding device and image decoding method | |
US10750192B2 (en) | Image coding apparatus, image coding method, image decoding apparatus, image decoding method, and storage medium | |
KR100957754B1 (ko) | Image coding device, image decoding device, image coding method, image decoding method, computer-readable recording medium storing an image coding program, and computer-readable recording medium storing an image decoding program | |
KR101879890B1 (ko) | Image processing device, image processing method, and recording medium | |
CN107172441B (zh) | Image coding device and method, and image decoding device and method | |
KR101962591B1 (ko) | Image processing device, image processing method, and recording medium | |
JP4879269B2 (ja) | Decoding method and device | |
US20120121024A1 (en) | Method and apparatus for parallel entropy encoding and parallel entropy decoding based on decoding rate | |
CN114009036B (zh) | Image coding device and method, image decoding device and method, and storage medium | |
US10841585B2 (en) | Image processing apparatus and method | |
JP2008271068A (ja) | Moving picture coding method, coder for moving picture parallel coding, moving picture parallel coding method, moving picture parallel coding device, programs therefor, and computer-readable recording media storing the programs | |
AU2337099A (en) | Method and apparatus for advanced television signal encoding and decoding | |
JP6505142B2 (ja) | Image coding device, image coding method and program, and image decoding device, image decoding method and program | |
CN118354096A (zh) | Image coding device and method, and image decoding device and method | |
CN118354097A (zh) | Image coding device and method, and image decoding device and method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180003988.1 Country of ref document: CN |
WWE | Wipo information: entry into national phase |
Ref document number: 2011550158 Country of ref document: JP |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11812073 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 13498685 Country of ref document: US Ref document number: 2011812073 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |