CN115315958A - Image decoding device, image decoding method, and program - Google Patents


Info

Publication number
CN115315958A
Authority
CN
China
Prior art keywords
slice
sub
picture
unit
encoded data
Legal status
Pending
Application number
CN202180023940.0A
Other languages
Chinese (zh)
Inventor
河村圭
内藤整
Current Assignee
KDDI Corp
Original Assignee
KDDI Corp
Priority date
2020-03-30
Filing date
2021-02-19
Publication date
2022-11-08
Application filed by KDDI Corp
Publication of CN115315958A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/85: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention provides an image decoding device (200), comprising: a sub-picture layout derivation unit (211) configured to decode encoded data and derive layout information of a sub-picture; a filler slice identification unit (212) configured to decode the encoded data and identify whether or not each of the slices constituting the sub-picture is a filler slice; and a slice decoding unit (213) configured to decode the encoded data and reconstruct slice data on the basis of the layout information and the identification result by the filler slice identification unit (212).

Description

Image decoding device, image decoding method, and program
Technical Field
The present invention relates to an image decoding device, an image decoding method, and a program.
Background
A sub-picture in VVC (Versatile Video Coding), the next-generation video coding scheme described in non-patent document 2, is a rectangular region composed of one or more slices within a picture; for example, as shown in fig. 10, the picture is completely covered, without overlap, by sub-pictures that are a plurality of rectangular regions.
Further, a process for extracting a bitstream for each sub-picture is disclosed in non-patent document 1. With such a process, it is possible to extract the sub-picture bitstreams corresponding to a desired region from different bitstreams and to generate a new bitstream by combining a plurality of sub-pictures into one picture.
For example, for a 360° panoramic image of the cube map system, it is possible to generate, without re-encoding, a bitstream in which the sub-picture containing the field-of-view region and the other sub-pictures have different resolutions.
Documents of the prior art
Non-patent document 1: ITU-T H.265 High Efficiency Video Coding.
Non-patent document 2: versatile Video Coding (Draft 8).
Disclosure of Invention
However, because VVC, the next-generation video coding scheme, restricts the shapes of both pictures and sub-pictures to rectangles, it has the following problem: when bitstreams are to be extracted and combined, there are cases where the picture cannot be completely covered without overlap, regardless of how the sub-pictures are arranged.
The present invention has been made in view of the above problem, and an object thereof is to provide an image decoding device, an image decoding method, and a program that, by using a sub-picture function serving as a filler having no content, can decode a picture even when the combination of a plurality of bitstreams is not rectangular.
Means for solving the problems
A first aspect of the present invention relates to an image decoding device configured to decode encoded data, the image decoding device including: a sub-picture layout derivation unit configured to decode the encoded data and derive layout information of a sub-picture; a filler slice identification unit configured to decode the encoded data and identify whether or not each slice constituting the sub-picture is a filler slice; and a slice decoding unit configured to decode the encoded data and reconstruct slice data based on the layout information and the identification result by the filler slice identification unit.
A second aspect of the present invention relates to an image decoding method including: a step of decoding encoded data and deriving layout information of a sub-picture; a step of decoding the encoded data and identifying whether or not each slice constituting the sub-picture is a filler slice; and a step of decoding the encoded data and reconstructing slice data based on the layout information and the identification result.
A third aspect of the present invention relates to a program for causing a computer to function as an image decoding device that decodes encoded data, the image decoding device including: a sub-picture layout derivation unit configured to decode the encoded data and derive layout information of a sub-picture; a filler slice identification unit configured to decode the encoded data and identify whether or not each slice constituting the sub-picture is a filler slice; and a slice decoding unit configured to decode the encoded data and reconstruct slice data based on the layout information and the identification result by the filler slice identification unit.
Effects of the invention
According to the present invention, it is possible to provide an image decoding device, an image decoding method, and a program that, by using a sub-picture function serving as a filler having no content, can decode a picture even when the combination of a plurality of bitstreams is not rectangular.
Drawings
Fig. 1 is a diagram showing an example of the configuration of an image processing system 1 according to an embodiment.
Fig. 2 is a diagram showing an example of functional blocks of the image coding apparatus 100 according to the embodiment.
Fig. 3 is a diagram showing an example of functional blocks of the entropy encoding unit 104 of the image encoding device 100 according to one embodiment.
Fig. 4 is a diagram showing an example of syntax used in the image processing system 1 according to the embodiment.
Fig. 5 is a diagram showing an example of functional blocks of the image decoding apparatus 200 according to the embodiment.
Fig. 6 is a diagram showing an example of functional blocks of the entropy decoding unit 201 of the image decoding device 200 according to the embodiment.
Fig. 7 is a diagram showing an example of syntax used in the image processing system 1 according to the embodiment.
Fig. 8 is a diagram showing an example of the configuration of the encoded data conversion system 2 according to the embodiment.
Fig. 9 is a diagram for explaining an example of extracting and combining encoded data for a 360° panoramic image of a cube map system having six faces according to one embodiment.
Fig. 10 is a diagram for explaining the prior art.
Description of reference numerals:
1 … image processing system;
100 … image encoding device;
101, 203 … inter prediction unit;
102, 204 … intra prediction unit;
103 … transform/quantization unit;
104 … entropy encoding unit;
105, 202 … inverse transform/inverse quantization unit;
106 … subtraction unit;
107, 205 … addition unit;
108, 206 … loop filter unit;
109, 207 … frame buffer;
110 … block dividing unit;
111, 208 … block integration unit;
121 … sub-picture layout determination unit;
122 … filler slice determination unit;
123 … slice encoding unit;
200 … image decoding device;
201 … entropy decoding unit;
211 … sub-picture layout derivation unit;
212 … filler slice identification unit;
213 … slice decoding unit;
2 … encoded data conversion system;
21 … encoded data extraction device;
22 … encoded data combination device.
Detailed Description
Embodiments of the present invention will be described below with reference to the drawings. Further, the components in the following embodiments may be replaced with existing components and the like as appropriate, and various changes including combinations with other existing components may be made. Therefore, the contents of the invention described in the claims should not be limited to the description of the embodiments described below.
First embodiment
Fig. 1 is a diagram showing an example of functional blocks of an image processing system 1 according to a first embodiment of the present invention. The image processing system 1 includes an image encoding device 100 that encodes a moving image to generate encoded data, and an image decoding device 200 that decodes the encoded data generated by the image encoding device 100. Between the image encoding device 100 and the image decoding device 200, the encoded data is transmitted and received via a transmission path, for example.
Image encoding apparatus 100
Fig. 2 is a diagram showing an example of functional blocks of the image coding apparatus 100. As shown in fig. 2, the image encoding device 100 includes: an inter-frame prediction unit 101, an intra-frame prediction unit 102, a transformation/quantization unit 103, an entropy coding unit 104, an inverse transformation/inverse quantization unit 105, a subtraction unit 106, an addition unit 107, a loop filter unit 108, a frame buffer 109, a block division unit 110, and a block integration unit 111.
The block dividing unit 110 is configured to divide the entire input image into square blocks of equal size and to output images (divided images) obtained by recursively dividing these blocks using a quadtree or the like.
The inter prediction unit 101 is configured to perform inter prediction using the divided image input from the block dividing unit 110 and the filtered local decoded image input from the frame buffer 109, and generate and output an inter prediction image.
The intra prediction unit 102 is configured to perform intra prediction using the divided image input from the block dividing unit 110 and a local decoded image before filtering, which will be described later, and generate and output an intra prediction image.
The transform/quantization unit 103 is configured to perform an orthogonal transform process on the residual signal input from the subtraction unit 106, perform a quantization process on the transform coefficient obtained by the orthogonal transform process, and output a quantized level value obtained by the quantization process.
The entropy encoding unit 104 is configured to entropy-encode the quantized level values input from the transform/quantization unit 103, together with related information such as the transform unit size, and to output the result as encoded data.
The inverse transform/inverse quantization unit 105 performs inverse quantization processing on the quantized level value input from the transform/quantization unit 103, performs inverse orthogonal transform processing on the transform coefficient obtained by the inverse quantization processing, and outputs an inverse orthogonal transform residual signal obtained by the inverse orthogonal transform processing.
The subtraction unit 106 is configured to output a residual signal, which is a difference between the divided image input from the block dividing unit 110 and the intra-prediction image or the inter-prediction image.
The addition unit 107 is configured to output a divided image obtained by adding the intra-prediction image or the inter-prediction image to the residual signal subjected to the inverse orthogonal transform and input from the inverse transform/inverse quantization unit 105.
The block integration unit 111 is configured to output the local decoded image before filtering obtained by integrating the divided images input from the addition unit 107.
The loop filter unit 108 is configured to apply loop filter processing such as deblocking filter processing to the local decoded image before filtering input from the block integration unit 111, and generate and output a local decoded image after filtering. Here, the local decoded image before filtering is a signal obtained by adding the intra-prediction image or the inter-prediction image to the residual signal after the inverse orthogonal transform.
The frame buffer 109 stores the filtered local decoded image, and adaptively supplies the filtered local decoded image to the inter prediction unit 101.
The entropy encoding unit 104 of the image encoding device 100 according to the present embodiment will be described below with reference to fig. 3. Fig. 3 is a diagram showing an example of a part of functional blocks of the entropy encoding unit 104 of the image encoding device 100 according to the present embodiment.
The entropy encoding unit 104 is configured to derive a sub-picture composed of filler slices. Specifically, as shown in fig. 3, the entropy encoding unit 104 includes a sub-picture layout determination unit 121, a filler slice determination unit 122, and a slice encoding unit 123.
The sub-picture layout determination unit 121 is configured to determine the layout of the sub-pictures signaled at the sequence level, and to output layout information on the determined layout.
The filler slice determination unit 122 is configured to determine whether or not each slice constituting each sub-picture is a filler slice, and to output the determination result.
The slice encoding unit 123 is configured to perform encoding in units of slices and to output the result as slice data.
Here, based on the layout information determined by the sub-picture layout determination unit 121 and the determination result by the filler slice determination unit 122, the slice encoding unit 123 is configured to encode, when a slice is not a filler slice, the bitstream corresponding to that slice according to the slice coding process described in non-patent document 1 and to output it as slice data.
On the other hand, based on the layout information determined by the sub-picture layout determination unit 121 and the determination result by the filler slice determination unit 122, the slice encoding unit 123 is configured to encode and output, when a slice is a filler slice, slice data for a filler slice.
The slice data for a filler slice may be a bitstream indicating an intra slice in which the maximum CU size is used without further splitting, the intra prediction mode is INTRA_PLANAR, and no residual signal is present. Alternatively, the slice data for a filler slice may be a bitstream indicating an inter slice in which the maximum CU size is used without further splitting, the motion vector is given by merge index 0 of the merge mode, and no residual signal is present.
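As an illustration only, the following Python sketch shows the kind of choice the slice encoding unit 123 makes between a regular slice and the two content-free filler variants described above. The FillerSliceData structure, the function names, and the default maximum CU size of 128 are assumptions introduced here, not part of the VVC syntax; the sketch stops at a symbolic description rather than writing an actual slice-layer bitstream.

```python
from dataclasses import dataclass

@dataclass
class FillerSliceData:
    """Minimal, content-free slice description (hypothetical structure)."""
    slice_type: str        # "INTRA" or "INTER"
    cu_size: int           # maximum CU size, no further splitting
    prediction: str        # "INTRA_PLANAR" or "MERGE_IDX_0"
    has_residual: bool     # always False for a filler slice

def encode_slice(slice_index, is_filler, use_intra_filler=True, max_cu_size=128):
    """Sketch of slice encoding unit 123: choose filler vs. normal encoding."""
    if not is_filler:
        # Normal slice: hand off to the regular slice coding process
        # (placeholder for the actual encoder call).
        return {"slice": slice_index, "data": "regular_slice_bitstream"}
    if use_intra_filler:
        filler = FillerSliceData("INTRA", max_cu_size, "INTRA_PLANAR", False)
    else:
        filler = FillerSliceData("INTER", max_cu_size, "MERGE_IDX_0", False)
    return {"slice": slice_index, "data": filler}

print(encode_slice(0, is_filler=True))
```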
Fig. 4 shows an example of syntax including the layout information determined by the sub-picture layout determination unit 121 and the determination result by the filler slice determination unit 122. This syntax is encoded by the entropy encoding unit 104 (slice encoding unit 123).
In fig. 4, subpics_present_flag is a flag indicating whether a picture is composed of one or more sub-pictures.
In fig. 7, filler_slice_subpics_present_flag is a flag indicating whether or not a sub-picture composed of filler slices exists in the sequence. For example, if the value of filler_slice_subpics_present_flag is "1" (valid), it indicates that a sub-picture composed of filler slices exists in the sequence, and if the value of filler_slice_subpics_present_flag is "0" (invalid), it indicates that no sub-picture composed of filler slices exists in the sequence.
In fig. 4, subpic_ctu_top_left_x[i] is the horizontal coordinate, in units of CTUs, of the top-left corner of the sub-picture, and subpic_ctu_top_left_y[i] is the vertical coordinate, in units of CTUs, of the top-left corner of the sub-picture.
In fig. 4, subpic_width_minus1[i] is the number of CTUs in the horizontal direction constituting the sub-picture minus one, and subpic_height_minus1[i] is the number of CTUs in the vertical direction constituting the sub-picture minus one.
In fig. 4, subpic_treated_as_pic_flag[i] is a flag indicating whether the sub-picture is treated as a picture in the decoding process excluding the loop filter process, and loop_filter_across_subpic_enabled_flag[i] is a flag indicating whether the loop filter process may be applied across the boundaries of the sub-picture.
In fig. 4, filler_slice_subpic_flag[i] is a flag indicating whether each slice constituting each sub-picture is a filler slice. For example, when the value of filler_slice_subpic_flag[i] is "1", it indicates that the slice is a filler slice, and when the value of filler_slice_subpic_flag[i] is "0", it indicates that the slice is not a filler slice.
Here, filler_slice_subpic_flag[i] corresponds to the determination result by the filler slice determination unit 122.
In fig. 4, subpic_ctu_top_left_x[i], subpic_ctu_top_left_y[i], subpic_width_minus1[i], and subpic_height_minus1[i] correspond to the layout information described above.
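To make the structure of the fig. 4 syntax concrete, here is a minimal Python sketch of how a decoder might read the per-sub-picture elements listed above. The BitReaderStub, the exact reading order, and the dictionary keys are assumptions for illustration; the real coding of each element (fixed-length versus Exp-Golomb, and so on) is not modeled.

```python
class BitReaderStub:
    """Toy stand-in for a bitstream reader; returns canned values."""
    def __init__(self, values):
        self._values = iter(values)
    def read_uvlc(self):          # unsigned value (stubbed)
        return next(self._values)
    def read_flag(self):          # 1-bit flag (stubbed)
        return bool(next(self._values))

def parse_subpic_layout(reader, num_subpics):
    """Parse per-sub-picture layout and filler-slice flags (fig. 4 style)."""
    subpics = []
    for i in range(num_subpics):
        subpics.append({
            "ctu_top_left_x": reader.read_uvlc(),       # subpic_ctu_top_left_x[i]
            "ctu_top_left_y": reader.read_uvlc(),       # subpic_ctu_top_left_y[i]
            "width_in_ctus":  reader.read_uvlc() + 1,   # subpic_width_minus1[i] + 1
            "height_in_ctus": reader.read_uvlc() + 1,   # subpic_height_minus1[i] + 1
            "treated_as_pic": reader.read_flag(),       # subpic_treated_as_pic_flag[i]
            "loop_filter_across": reader.read_flag(),   # loop_filter_across_subpic_enabled_flag[i]
            "filler_slice": reader.read_flag(),         # filler_slice_subpic_flag[i]
        })
    return subpics

# One sub-picture at CTU (0, 0), 8x4 CTUs, flagged as a filler-slice sub-picture.
layout = parse_subpic_layout(BitReaderStub([0, 0, 7, 3, 1, 0, 1]), 1)
print(layout)
```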
Image decoding device 200
Fig. 5 is a block diagram of the image decoding apparatus 200 according to the present embodiment. As shown in fig. 5, the image decoding apparatus 200 of the present embodiment includes an entropy decoding unit 201, an inverse transform/inverse quantization unit 202, an inter prediction unit 203, an intra prediction unit 204, an addition unit 205, a loop filtering unit 206, a frame buffer 207, and a block integration unit 208.
The entropy decoding unit 201 is configured to entropy-decode the encoded data and output the quantized level values, the motion compensation method determined by the image encoding device 100, and other coding parameters.
The inverse transform/inverse quantization unit 202 is configured to perform inverse quantization processing on the quantized level value input from the entropy decoding unit 201, perform inverse orthogonal transform processing on the result obtained by the inverse quantization processing, and output the result as a residual signal.
The inter prediction unit 203 is configured to perform inter prediction using the filtered local decoded image input from the frame buffer 207, and generate and output an inter prediction image.
The intra prediction unit 204 is configured to perform intra prediction using the pre-filter local decoded image input from the addition unit 205, and generate and output an intra prediction image.
The adder 205 is configured to output a divided image obtained by adding the residual signal input from the inverse transform/inverse quantization unit 202 to a predicted image, which is an inter-predicted image input from the inter-prediction unit 203 or an intra-predicted image input from the intra-prediction unit 204.
Here, the predicted image is a predicted image calculated by a prediction method obtained by entropy decoding, out of the inter-predicted image input from the inter-prediction unit 203 and the intra-predicted image input from the intra-prediction unit 204.
The block integration unit 208 is configured to output the local decoded image before filtering obtained by integrating the divided images input from the addition unit 205.
The loop filter unit 206 is configured to apply loop filter processing such as deblocking filter processing to the local decoded image before filtering input from the block integration unit 208, and to generate and output a local decoded image after filtering.
The frame buffer 207 is configured to store the filtered local decoded image input from the loop filter 206, adaptively supply the filtered local decoded image to the inter prediction unit 203, and output the filtered local decoded image as a decoded image.
The entropy decoding unit 201 of the image decoding device 200 according to the present embodiment is described below with reference to fig. 6.
The entropy decoding unit 201 is configured to derive a sub-picture composed of filler slices. As shown in fig. 6, the entropy decoding unit 201 includes a sub-picture layout derivation unit 211, a filler slice identification unit 212, and a slice decoding unit 213.
The sub-picture layout derivation unit 211 is configured to decode the syntax signaled at the sequence level and derive the layout information of the sub-pictures based on the syntax.
The filler slice identification unit 212 decodes the syntax signaled at the sequence level, in the same manner as the sub-picture layout derivation unit 211, and identifies whether or not each slice constituting each sub-picture is a filler slice based on the syntax.
This syntax is included in the encoded data and is the same as the syntax shown in fig. 4.
The slice decoding unit 213 is configured to decode the encoded data and reconstruct slice data based on the layout information derived by the sub-picture layout derivation unit 211 and the identification result by the filler slice identification unit 212.
Specifically, when it is determined, based on the layout information derived by the sub-picture layout derivation unit 211 and the identification result by the filler slice identification unit 212, that a slice is not a filler slice, the slice decoding unit 213 decodes the slice according to the slice decoding process described in non-patent document 1 and reconstructs the slice data.
On the other hand, when it is determined, based on the layout information derived by the sub-picture layout derivation unit 211 and the identification result by the filler slice identification unit 212, that a slice is a filler slice, the slice decoding unit 213 decodes the slice as slice data for a filler slice and outputs it.
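The dispatch performed by the slice decoding unit 213 can be summarized by the hedged Python sketch below; decode_regular_slice and make_filler_region are hypothetical placeholders for the normative slice decoding process and for the filler reconstruction (concrete choices for the latter are given in the second and fourth embodiments).

```python
def decode_slice(slice_payload, slice_layout, is_filler_slice):
    """Sketch of slice decoding unit 213: branch on the filler-slice flag."""
    if is_filler_slice:
        # Filler slice: reconstruct content-free samples for the region.
        return make_filler_region(slice_layout)
    # Normal slice: hand off to the regular slice decoding process.
    return decode_regular_slice(slice_payload, slice_layout)

def make_filler_region(slice_layout, fill_value=0):
    width, height = slice_layout["width"], slice_layout["height"]
    return [[fill_value] * width for _ in range(height)]

def decode_regular_slice(slice_payload, slice_layout):
    # Placeholder for the standard slice decoding process.
    raise NotImplementedError("regular slice decoding not sketched here")

print(len(decode_slice(b"", {"width": 16, "height": 16}, True)))  # 16 rows
```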
According to the present embodiment, by introducing sub-pictures composed of filler slices, the image decoding device 200 can decode a rectangular picture even when the combination of a plurality of bitstreams would otherwise not be rectangular.
In addition, according to the present embodiment, the image decoding apparatus 200 capable of decoding in units of sub-pictures can reduce the amount of processing required for decoding.
In addition, according to the present embodiment, when extracting and combining bitstreams, filler slices can be removed and added at the multiplexing layer.
Second embodiment
Hereinafter, an image processing system 1 according to a second embodiment of the present invention will be described with a focus on differences from the image processing system 1 according to the first embodiment.
The slice decoding unit 213 according to the second embodiment of the present invention is configured to, when it determines, based on the layout information derived by the sub-picture layout derivation unit 211 and the identification result by the filler slice identification unit 212, that a slice is a filler slice, regard the slice as the encoded data described below, decode it according to the slice decoding process described in non-patent document 1, and reconstruct the slice data.
Specifically, the slice is regarded as encoded data in which the slice type is intra, the CU size is the maximum size without further splitting, the intra prediction mode is INTRA_PLANAR, and no residual signal is present.
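Because no residual is present, reconstructing such a filler slice reduces to the planar intra prediction itself. The following Python sketch gives a simplified planar prediction in the HEVC/VVC style (reference-sample handling and the exact normative details are omitted); with flat reference samples the reconstructed filler block is flat as well.

```python
def planar_predict(top, left, top_right, bottom_left, n):
    """Simplified planar intra prediction for an n x n block (HEVC/VVC style)."""
    shift = n.bit_length()              # log2(n) + 1 for power-of-two n
    pred = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            hor = (n - 1 - x) * left[y] + (x + 1) * top_right
            ver = (n - 1 - y) * top[x] + (y + 1) * bottom_left
            pred[y][x] = (hor + ver + n) >> shift
    return pred

# With flat reference samples (e.g. mid-gray 512 for 10-bit content), the
# "decoded" filler block is flat as well, since no residual is added.
n = 4
block = planar_predict([512] * n, [512] * n, 512, 512, n)
print(block)
```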
Third embodiment
An image processing system 1 according to a third embodiment of the present invention will be described below with reference to fig. 7, focusing on differences from the image processing system 1 according to the first embodiment.
In the image encoding device 100 according to the third embodiment of the present invention, the sub-picture layout determination unit 121 is configured to determine whether or not the filler slice function is used in the sequence, and to output the determination result.
When it is determined that the filler slice function is not used, the filler slice determination unit 122 does not perform the above determination.
Fig. 7 shows an example of syntax including the result of the determination made by the sub-picture layout determination unit 121.
In fig. 7, filler_slice_subpics_present_flag is a flag indicating whether the filler slice function is used in the sequence. filler_slice_subpics_present_flag corresponds to the determination result by the sub-picture layout determination unit 121.
The image decoding device 200 according to the third embodiment of the present invention is configured such that the sub-picture layout derivation unit 211 derives, based on the syntax, whether or not the filler slice function is used in the sequence.
When it is determined that the filler slice function is not used, the filler slice identification unit 212 does not perform the above identification.
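A small Python sketch of this gating, using the same hypothetical reader interface as the earlier parsing sketch, is shown below; when the sequence-level flag is off, no per-sub-picture filler flag is read and the identification step is skipped.

```python
def parse_filler_slice_flags(reader, num_subpics):
    """Sketch of the fig. 7 gating: when the sequence-level present flag is off,
    the per-sub-picture filler flags are neither signaled nor identified."""
    present = reader.read_flag()        # filler_slice_subpics_present_flag
    if not present:
        return [False] * num_subpics    # identification step is skipped entirely
    return [reader.read_flag()          # filler_slice_subpic_flag[i]
            for _ in range(num_subpics)]
```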
According to the present embodiment, the coding efficiency of signaling the presence or absence of filler slices is improved.
Fourth embodiment
An image processing system 1 according to a fourth embodiment of the present invention will be described below with a focus on differences from the image processing system 1 according to the first embodiment.
The image encoding device 100 according to the present embodiment is configured such that the slice encoding unit 123 encodes and outputs slice data filled with a specific value when a slice is a filler slice, based on the layout information determined by the sub-picture layout determination unit 121 and the determination result by the filler slice determination unit 122.
The image decoding device 200 according to the present embodiment is configured such that the slice decoding unit 213 decodes and outputs slice data filled with a specific value when it determines that a slice is a filler slice, based on the layout information derived by the sub-picture layout derivation unit 211 and the identification result by the filler slice identification unit 212.
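A minimal sketch of this lightweight reconstruction is shown below in Python; the choice of mid-gray as the specific value is an assumption made here for illustration, since the embodiment leaves the value open.

```python
def reconstruct_filler_slice(width, height, bit_depth=10):
    """Sketch of the fourth embodiment: reconstruct a filler slice by filling
    the region with a specific value. Mid-gray is an assumption made here;
    the embodiment only states that some specific value is used."""
    fill_value = 1 << (bit_depth - 1)   # 512 for 10-bit, 128 for 8-bit content
    return [[fill_value] * width for _ in range(height)]

print(reconstruct_filler_slice(4, 2))   # two rows of four 512 samples
```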
According to this embodiment, a lightweight decoding process reflecting the characteristics of the filler slice can be provided.
Fifth embodiment
Next, the encoded data conversion system 2 according to a fifth embodiment of the present invention will be described with reference to fig. 8.
The encoded data conversion system 2 includes an encoded data extraction device 21 and an encoded data combination device 22: the encoded data extraction device 21 receives encoded data of a plurality of moving images as input and outputs the encoded data corresponding to sub-pictures designated from outside the system, and the encoded data combination device 22 receives that output encoded data as input, combines the sub-pictures, and outputs the result as encoded data of a picture.
The encoded data combination device 22 is configured to output encoded data that satisfies the requirements that the picture be rectangular, that the picture be completely covered by sub-pictures, and that the sub-pictures do not overlap one another.
Here, the encoded data combination device 22 is configured to, when the above requirements cannot be satisfied by combining the input sub-pictures alone, add sub-pictures made up of an arbitrary number of filler slices and output encoded data satisfying the above requirements.
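As a hedged illustration of how such filler sub-pictures could be placed, the Python sketch below marks, on a CTU grid, the cells of the target rectangular picture that are left uncovered by the input sub-pictures; those cells (or larger rectangles merged from them) would then be filled with filler-slice sub-pictures. The data representation is an assumption introduced here.

```python
def filler_regions(picture_w_ctus, picture_h_ctus, subpics):
    """Sketch: find CTU cells of the target rectangular picture that are not
    covered by any input sub-picture; the combining device can place filler
    sub-pictures there (cells could be merged into larger rectangles)."""
    covered = [[False] * picture_w_ctus for _ in range(picture_h_ctus)]
    for x, y, w, h in subpics:          # each sub-picture as (x, y, w, h) in CTUs
        for yy in range(y, y + h):
            for xx in range(x, x + w):
                covered[yy][xx] = True
    return [(x, y) for y in range(picture_h_ctus)
                   for x in range(picture_w_ctus) if not covered[y][x]]

# Example: a 4x3 CTU picture with one 2x3 and one 2x2 sub-picture leaves two
# cells to be filled with filler slices.
print(filler_regions(4, 3, [(0, 0, 2, 3), (2, 0, 2, 2)]))
```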
Specifically, as shown in fig. 9, encoded data can be extracted and combined for a 360° panoramic image of a cube map system having six faces. The letters in fig. 9 indicate the position and orientation of each face of the cube map: L indicates the left face, R the right face, F the front face, Bot the bottom face, Bac the back face, and Top the top face. Further, Fil denotes a filler slice according to the present invention.
In this case, if three kinds of encoded data, i.e., high resolution, medium resolution, and low resolution, are generated in advance as shown in the left column of fig. 9, the encoded data conversion system 2 can dynamically extract and combine the encoded data according to the field of view region and the transmission band as shown in the right column of fig. 9.
For example, in fig. 9, the resolution of each face is 1024 × 1024 pixels at high resolution, 768 × 768 pixels at medium resolution, and 512 × 512 pixels at low resolution.
Here, as shown in the upper right column of fig. 9, the encoded data combination device 22 generates a picture in which the two faces corresponding to the field-of-view region are high-resolution sub-pictures and the other four faces are low-resolution sub-pictures, resulting in a picture of 2048 × 1536 pixels as a whole.
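A quick arithmetic check of this composition, under the assumed layout of the two high-resolution faces in one row and the four low-resolution faces in a second row, is shown below.

```python
# Two high-resolution faces side by side in one row, four low-resolution faces
# side by side in a second row (assumed layout).
row1_w, row1_h = 2 * 1024, 1024
row2_w, row2_h = 4 * 512, 512
assert row1_w == row2_w == 2048
print(row1_w, "x", row1_h + row2_h)                      # 2048 x 1536
assert 2 * 1024**2 + 4 * 512**2 == 2048 * 1536           # total area matches
```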
Alternatively, as shown in the lower right column of fig. 9, the encoded data combination device 22 generates a picture in which the two faces corresponding to the field-of-view region are medium-resolution sub-pictures and the other four faces are low-resolution sub-pictures, together with two filler slices of 256 × 512 pixels and two filler slices of 512 × 256 pixels, resulting in a picture of 2048 × 1280 pixels as a whole.
As described above, by introducing filler slices, the degree of freedom in selecting the resolution of sub-pictures when extracting and combining encoded data can be improved.
The image encoding device 100 and the image decoding device 200 may be realized by a program that causes a computer to execute each function (each step).
In the above embodiments, the present invention has been described as being applied to the image encoding device 100 and the image decoding device 200, but the present invention is not limited to this, and is also applicable to an image encoding system and an image decoding system having the functions of the image encoding device 100 and the image decoding device 200.

Claims (6)

1. An image decoding device configured to decode encoded data, characterized in that,
the image decoding apparatus includes:
a sub-picture layout derivation unit configured to decode the encoded data and derive layout information of a sub-picture;
a filler slice identification unit configured to decode the encoded data and identify whether or not each slice constituting the sub-picture is a filler slice; and
a slice decoding unit configured to decode the encoded data and reconstruct slice data based on the layout information and the identification result by the filler slice identification unit.
2. The image decoding apparatus according to claim 1,
the sub-picture layout derivation unit decodes a flag indicating whether or not a sub-picture composed of filler slices exists in the sequence, and
the filler slice identification unit performs the identification only when the flag is valid.
3. The image decoding apparatus according to claim 1 or 2,
the slice decoding unit is configured to decode and output slice data filled with a specific value when it is determined that the slice is a filler slice.
4. The image decoding apparatus according to claim 1 or 2,
the slice decoding unit is configured to decode the slice as specific encoded data and output slice data when it is determined that the slice is a filler slice.
5. An image decoding method, comprising:
decoding the encoded data and deriving layout information of the sub-picture;
decoding the encoded data to identify whether or not each slice constituting the sub-picture is a filler slice; and
decoding the encoded data and reconstructing slice data based on the layout information and the identification result.
6. A program for causing a computer to function as an image decoding device for decoding encoded data, the program being characterized in that,
the image decoding apparatus includes:
a sub-picture layout derivation unit configured to decode the encoded data and derive layout information of a sub-picture;
a filler slice identification unit configured to decode the encoded data and identify whether or not each slice constituting the sub-picture is a filler slice; and
a slice decoding unit configured to decode the encoded data and reconstruct slice data based on the layout information and the identification result by the filler slice identification unit.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2020061330A JP2021164005A (en) 2020-03-30 2020-03-30 Image decoding device, image decoding method, and program
JP2020-061330 2020-03-30
PCT/JP2021/006375 WO2021199783A1 (en) 2020-03-30 2021-02-19 Image decoding device, image decoding method, and program

Publications (1)

Publication Number Publication Date
CN115315958A (en) 2022-11-08

Family

ID=77927732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180023940.0A Pending CN115315958A (en) 2020-03-30 2021-02-19 Image decoding device, image decoding method, and program

Country Status (3)

Country Link
JP (1) JP2021164005A (en)
CN (1) CN115315958A (en)
WO (1) WO2021199783A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR19980022287A (en) * 1996-09-20 1998-07-06 김광호 Decryption device
US20060139379A1 (en) * 2003-07-11 2006-06-29 Tadamasa Toma Medium data display device, medium data display method, and medium data display program
US20080027888A1 (en) * 2006-07-31 2008-01-31 Microsoft Corporation Optimization of fact extraction using a multi-stage approach
US20090034615A1 (en) * 2007-07-31 2009-02-05 Kabushiki Kaisha Toshiba Decoding device and decoding method
WO2014049982A1 (en) * 2012-09-28 2014-04-03 三菱電機株式会社 Video encoding device, video decoding device, video encoding method and video decoding method
CN103971388A (en) * 2014-03-07 2014-08-06 天津大学 Method for reconstructing flame CT (computed tomography) images with adaptive section sizes
CN105981389A (en) * 2014-02-03 2016-09-28 三菱电机株式会社 Image encoding device, image decoding device, encoded stream conversion device, image encoding method, and image decoding method
WO2019159820A1 (en) * 2018-02-14 2019-08-22 シャープ株式会社 Moving image encoding device and moving image decoding device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009081579A (en) * 2007-09-25 2009-04-16 Toshiba Corp Motion picture decoding apparatus and motion picture decoding method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Johannes Sauer et al., "Coding of 360° video in non-compact cube layout using uncoded areas", JVET-P0316-v1, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 16th Meeting: Geneva, CH, 1-11 October 2019.
Johannes Sauer et al., "Geometry padding for cube based 360 degree video using uncoded areas", JVET-O0487-v1, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Gothenburg, SE, 3-12 July 2019.

Also Published As

Publication number Publication date
WO2021199783A1 (en) 2021-10-07
JP2021164005A (en) 2021-10-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination